
Journal articles on the topic 'Text summarization'


Consult the top 50 journal articles for your research on the topic 'Text summarization.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Sirohi, Neeraj Kumar, Dr Mamta Bansal, and Dr S. N. Rajan Rajan. "Text Summarization Approaches Using Machine Learning & LSTM." Revista Gestão Inovação e Tecnologias 11, no. 4 (September 1, 2021): 5010–26. http://dx.doi.org/10.47059/revistageintec.v11i4.2526.

Abstract:
Due to the massive amount of online textual data generated by a diversity of social media, web, and other information-centric applications, selecting the vital content from a large text requires studying the full article and producing a summary without losing the critical information of the document; this process is called summarization. Text summarization done by a human requires expertise in the area and is also very tedious and time consuming. The second type of summarization is done by a system, known as automatic text summarization, which generates the summary automatically. There are mainly two categories of automatic text summarization: abstractive and extractive. An extractive summary is produced by picking important, highly ranked sentences and words from the text document, whereas the sentences and words present in a summary generated by an abstractive method may not be present in the original text. This article mainly focuses on the different ATS (automatic text summarization) techniques that have been investigated to date. The paper begins with a concise introduction to automatic text summarization, then closely discusses the innovative developments in extractive and abstractive text summarization methods, moves on to a literature survey, and finally sums up with the proposed technique using an LSTM encoder-decoder for abstractive text summarization, along with some future work directions.
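The article proposes an LSTM encoder-decoder for abstractive summarization. The sketch below is only a generic illustration of that architecture family; the layer sizes, vocabulary size, and variable names are assumptions, not the authors' configuration.

```python
# Minimal sketch of an LSTM encoder-decoder for abstractive summarization
# (generic illustration only; sizes and names are illustrative assumptions).
from tensorflow.keras.layers import Input, Embedding, LSTM, Dense
from tensorflow.keras.models import Model

vocab_size, embed_dim, latent_dim = 20000, 128, 256  # assumed hyperparameters

# Encoder: reads the source article and produces a context (state) vector.
enc_inputs = Input(shape=(None,))
enc_emb = Embedding(vocab_size, embed_dim)(enc_inputs)
_, state_h, state_c = LSTM(latent_dim, return_state=True)(enc_emb)

# Decoder: generates the summary token by token, conditioned on the encoder state.
dec_inputs = Input(shape=(None,))
dec_emb = Embedding(vocab_size, embed_dim)(dec_inputs)
dec_outputs, _, _ = LSTM(latent_dim, return_sequences=True,
                         return_state=True)(dec_emb, initial_state=[state_h, state_c])
dec_preds = Dense(vocab_size, activation="softmax")(dec_outputs)

model = Model([enc_inputs, dec_inputs], dec_preds)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```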
2

D, Manju, Radhamani V, Dhanush Kannan A, Kavya B, Sangavi S, and Swetha Srinivasan. "TEXT SUMMARIZATION." YMER Digital 21, no. 07 (July 7, 2022): 173–82. http://dx.doi.org/10.37896/ymer21.07/13.

Abstract:
In the last few years, a huge amount of text data from different sources has been created every day. This enormous volume of data contains valuable detail that needs to be efficiently summarized so that it serves a purpose. It is very tedious to summarize and classify large numbers of documents manually, and it becomes cumbersome to develop a summary that takes every semantic nuance into consideration. Therefore, automatic text summarization acts as a solution. Text summarization can help in understanding a huge corpus by providing a gist of the corpus, enabling comprehension in a timely manner. This paper studies the development and deployment of a web application which summarizes the given input text using different models. Keywords: Text summarization, NLP, AWS, Text mining
3

Vikas, A., Pradyumna G.V.N, and Tahir Ahmed Shaik. "Text Summarization." International Journal of Engineering and Computer Science 9, no. 2 (February 3, 2020): 24940–45. http://dx.doi.org/10.18535/ijecs/v9i2.4437.

Abstract:
In this new era, where tremendous information is available on the internet, it is important to provide improved mechanisms to extract information quickly and efficiently. It is very difficult for human beings to manually extract the summary of large text documents. There is plenty of text material available on the internet, so there is a problem of searching for relevant documents among the documents available and absorbing relevant information from them. In order to solve these two problems, automatic text summarization is necessary. Text summarization is the process of identifying the most important, meaningful information in a document or set of related documents and compressing it into a shorter version while preserving its overall meaning.
4

Parimoo, Rohit, Rohit Sharma, Naleen Gaur, Nimish Jain, and Sweeta Bansal. "Applying Text Rank to Build an Automatic Text Summarization Web Application." International Journal for Research in Applied Science and Engineering Technology 10, no. 4 (April 30, 2022): 865–67. http://dx.doi.org/10.22214/ijraset.2022.40766.

Abstract:
Automatic Text Summarization is one of the most trending research areas in the field of Natural Language Processing. The main aim of text summarization is to reduce the size of a text without losing any important information. Various techniques can be used for automatic summarization of text. In this paper we focus on the automatic summarization of text using graph-based methods. In particular, we discuss the implementation of a general-purpose web application which performs automatic summarization of the entered text using the TextRank algorithm. Summarization of text using graph-based approaches involves pre-processing and cleansing of the text, tokenizing the sentences present in the text, representing the tokenized text in the form of numerical vectors, creating a similarity matrix which shows the semantic similarity between the different sentences present in the text, representing the similarity matrix as a graph, scoring and ranking the sentences, and extracting the summary. Keywords: Text Summarization, Unsupervised Learning, Text Rank, Page Rank, Web Application, Graph Based Summarization, Extractive Summarization
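The graph-based pipeline listed in this abstract (vectorize sentences, build a similarity matrix, run PageRank over it, pick the top-ranked sentences) can be illustrated with a short sketch. This is a generic TextRank-style example using common libraries, not the authors' web application.

```python
# Generic TextRank-style extractive summarizer (illustrative sketch, not the paper's code).
import networkx as nx
import nltk
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

nltk.download("punkt", quiet=True)

def textrank_summary(text: str, n_sentences: int = 3) -> str:
    sentences = nltk.sent_tokenize(text)
    if len(sentences) <= n_sentences:
        return text
    # Represent sentences as TF-IDF vectors and build a cosine-similarity matrix.
    tfidf = TfidfVectorizer().fit_transform(sentences)
    sim_matrix = cosine_similarity(tfidf)
    # Interpret the similarity matrix as a weighted graph and rank sentences with PageRank.
    graph = nx.from_numpy_array(sim_matrix)
    scores = nx.pagerank(graph)
    top = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)[:n_sentences]
    # Preserve the original sentence order in the extracted summary.
    return " ".join(sentences[i] for i in sorted(top))

print(textrank_summary("Your long input text goes here. " * 10))
```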
5

Jawale, Sakshi, Pranit Londhe, Prajwali Kadam, Sarika Jadhav, and Rushikesh Kolekar. "Automatic Text Summarization." International Journal for Research in Applied Science and Engineering Technology 11, no. 5 (May 31, 2023): 1842–46. http://dx.doi.org/10.22214/ijraset.2023.51815.

Abstract:
Text summarization is a Natural Language Processing (NLP) method that extracts and collects data from a source and summarizes it. Text summarization has become a requirement for many applications, since manually summarizing vast amounts of information is difficult, especially with the expanding magnitude of data. Financial research, search engine optimization, media monitoring, question-answering bots, and document analysis all benefit from text summarization. This paper extensively addresses several summarization strategies depending on intent, volume of data, and outcome. Our aim is to evaluate and convey an abstract viewpoint of present research work on text summarization.
6

Chettri, Roshna, and Udit Kr. "Automatic Text Summarization." International Journal of Computer Applications 161, no. 1 (March 15, 2017): 5–7. http://dx.doi.org/10.5120/ijca2017912326.

7

Patil, Aarti, Komal Pharande, Dipali Nale, and Roshani Agrawal. "Automatic Text Summarization." International Journal of Computer Applications 109, no. 17 (January 16, 2015): 18–19. http://dx.doi.org/10.5120/19418-0910.

8

Leiva, Luis A. "Responsive text summarization." Information Processing Letters 130 (February 2018): 52–57. http://dx.doi.org/10.1016/j.ipl.2017.10.007.

9

Bichi, Abdulkadir Abubakar, Ruhaidah Samsudin, Rohayanti Hassan, Layla Rasheed Abdallah Hasan, and Abubakar Ado Rogo. "Graph-based extractive text summarization method for Hausa text." PLOS ONE 18, no. 5 (May 9, 2023): e0285376. http://dx.doi.org/10.1371/journal.pone.0285376.

Abstract:
Automatic text summarization is one of the most promising solutions to the ever-growing challenges of textual data, as it produces a shorter version of the original document with fewer bytes but the same information as the original document. Despite the advancements in automatic text summarization research, research involving the development of automatic text summarization methods for documents written in Hausa, a Chadic language widely spoken in West Africa by approximately 150,000,000 people as either their first or second language, is still at an early stage of development. This study proposes a novel graph-based extractive single-document summarization method for Hausa text by modifying the existing PageRank algorithm using the normalized common bigram count between adjacent sentences as the initial vertex score. The proposed method is evaluated on a primarily collected Hausa summarization evaluation dataset comprising 113 Hausa news articles, using the ROUGE evaluation toolkit. The proposed approach outperformed the standard methods on the same dataset: it outperformed the TextRank method by 2.1%, LexRank by 12.3%, a centroid-based method by 19.5%, and the BM25 method by 17.4%.
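The key modification described here, seeding PageRank with a normalized count of bigrams shared between adjacent sentences, can be sketched roughly as follows. This is an interpretive illustration of the idea; the tokenization, normalization, and graph construction details are assumptions, not the authors' exact method.

```python
# Rough sketch: normalized shared-bigram counts between adjacent sentences
# used as initial PageRank vertex scores (interpretive illustration only).
import networkx as nx

def bigrams(sentence: str) -> set:
    tokens = sentence.lower().split()
    return set(zip(tokens, tokens[1:]))

def initial_vertex_scores(sentences):
    scores = {}
    for i, sent in enumerate(sentences):
        b = bigrams(sent)
        # Count bigrams shared with the neighbouring (adjacent) sentences.
        shared = 0
        for j in (i - 1, i + 1):
            if 0 <= j < len(sentences):
                shared += len(b & bigrams(sentences[j]))
        scores[i] = shared / (len(b) or 1)  # normalize by the sentence's own bigram count
    return scores

sentences = ["the cat sat on the mat", "the mat was on the floor", "dogs bark loudly"]
init = initial_vertex_scores(sentences)

# Build a sentence graph weighted by shared-bigram counts and run PageRank,
# using the scores above as the starting vertex values.
graph = nx.Graph()
graph.add_nodes_from(range(len(sentences)))
for i in range(len(sentences)):
    for j in range(i + 1, len(sentences)):
        w = len(bigrams(sentences[i]) & bigrams(sentences[j]))
        if w:
            graph.add_edge(i, j, weight=w)
ranks = nx.pagerank(graph, nstart=init)
print(ranks)
```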
10

Okumura, Manabu, Takahiro Fukusima, Hidetsugu Nanba, and Tsutomu Hirao. "Text Summarization Challenge 2 text summarization evaluation at NTCIR workshop 3." ACM SIGIR Forum 38, no. 1 (July 2004): 29–38. http://dx.doi.org/10.1145/986278.986284.

11

Teng, Zhaopu. "Abstractive summarization of COVID-19 with transfer text-to-text transformer." Applied and Computational Engineering 2, no. 1 (March 22, 2023): 232–38. http://dx.doi.org/10.54254/2755-2721/2/20220520.

Abstract:
As a classic problem of Natural Language Processing, summarization provides convenience for studies, research, and daily life. The performance of summary generation by Natural Language Processing techniques has attracted considerable attention. Meanwhile, COVID-19, a global event, has led to the emergence of a large number of articles and research papers. The wide variety of articles makes it a suitable subject for summary generation tasks. This paper designed and implemented experiments by fine-tuning the T5 model to obtain abstractive summaries of the COVID-19 literature. A comparison of performance is shown to demonstrate the reliability of the model.
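The paper fine-tunes T5. As a rough illustration of how a T5 checkpoint is typically applied to summarization with the Hugging Face transformers library (the checkpoint name and generation settings here are assumptions, not the paper's setup):

```python
# Illustrative use of a T5 checkpoint for summarization with Hugging Face transformers
# (checkpoint and generation parameters are assumptions, not the paper's configuration).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "t5-small"  # placeholder checkpoint; the paper fine-tunes its own T5 model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

article = "COVID-19 research article text goes here ..."
# T5 is a text-to-text model, so summarization is triggered with a task prefix.
inputs = tokenizer("summarize: " + article, return_tensors="pt",
                   truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=80, num_beams=4, length_penalty=2.0)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```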
12

Kuyate, Swapnil, Omdeep Jadhav, and Pratik Jadhav. "AI Text Summarization System." International Journal for Research in Applied Science and Engineering Technology 11, no. 5 (May 31, 2023): 916–19. http://dx.doi.org/10.22214/ijraset.2023.51481.

Abstract:
Our research paper presents an AI text summarization system that utilizes GPT, a powerful language model, to generate concise and meaningful summaries of lengthy text documents. The system consists of four modules: User, Android Application, GPT API, and GPT server. The User interacts with the system through the Android Application, which serves as the user interface. The GPT API acts as the intermediary between the Android Application and the GPT server, which hosts the GPT model and handles the text summarization process. The system employs state-of-the-art natural language processing techniques to generate summaries while preserving contextual coherence and salient information. The system's summarization capabilities are evaluated using metrics such as ROUGE and F1 scores, demonstrating its effectiveness in capturing key information from different types of text documents. The system's integration with Android platforms provides convenient access for mobile users, making it suitable for applications such as news summarization, document summarization, and content curation. The modular architecture of the system allows for scalability and flexibility, enabling future enhancements and extensions. Our AI text summarization system utilizing GPT presents a promising approach for automatically summarizing large volumes of text, providing users with time-saving and meaningful summaries. The system has potential applications in various domains and can serve as a foundation for further research in the field of text summarization and natural language processing.
13

Jain, Rekha. "Automatic Text Summarization of Hindi Text Using Extractive Approach." ECS Transactions 107, no. 1 (April 24, 2022): 4469–77. http://dx.doi.org/10.1149/10701.4469ecst.

Abstract:
Text summarization is one of the applications of NLP (Natural Language Processing) that has a large impact on users' lives. The resultant text must convey the important information to intended users. A lot of users face the problem of content repetition while reading from the internet: most of the time the same information is repeated in the majority of documents, which wastes users' valuable time. This problem raises the need for automatic text summarization. Automatic text summarization belongs to the NLP area of machine learning. This technique generates a shorter version of a text with the help of an automated tool; the shorter version conveys the relevant and important information to users. In the proposed work, the author introduces an extractive approach for summarization of text. This approach is based on a cosine similarity technique, which automatically generates the summarized text while keeping the relevant and meaningful information intact.
14

Stein, Bonnie L., and John R. Kirby. "The Effects of Text Absent and Text Present Conditions on Summarization and Recall of Text." Journal of Reading Behavior 24, no. 2 (June 1992): 217–32. http://dx.doi.org/10.1080/10862969209547773.

Abstract:
The effects of text absent and text present conditions during summary writing were investigated. It has been hypothesized that: (a) text absent summarization (i.e., instructing subjects to read a text and then summarize it without referring back to the text) increases the quality of processing during summarization, and (b) this higher quality processing enhances recall. Sixth-grade students summarized an expository text in either a text absent or text present condition, and subsequently were asked to do an oral free recall of the text. Regression analyses indicated that text absent summarization resulted in lower summary content in general, but in greater summary depth for more able readers. Further regression analyses indicated that summary depth resulted in increased recall in general, whereas summary content was only associated with recall for text absent summarizers; text absence alone did not result in greater recall. These findings suggest that text absent summarization may be beneficial, but only for subjects who are competent summary writers or able readers.
15

Bhatia, Neelima, and Arunima Jaiswal. "Literature Review on Automatic Text Summarization: Single and Multiple Summarizations." International Journal of Computer Applications 117, no. 6 (May 20, 2015): 25–29. http://dx.doi.org/10.5120/20560-2948.

16

Thakur, Amey. "Text Summarizer Using Julia." International Journal for Research in Applied Science and Engineering Technology 10, no. 1 (January 31, 2022): 1371–75. http://dx.doi.org/10.22214/ijraset.2022.40066.

Abstract:
The purpose of this paper is to introduce the Julia programming language with a concentration on text summarization. An extractive summarization algorithm is used for summarizing. Julia's evolution and features, as well as comparisons to other programming languages, are briefly discussed. The system's operation is depicted in a flow diagram, which illustrates the process of sentence selection. Keywords: Text Summarizer, Extractive Summarization, Sentence Score, Topic Representation, Julia Programming Language
17

Jha, Nitesh Kumar, and Arnab Mitra. "Introducing Word's Importance Level-Based Text Summarization Using Tree Structure." International Journal of Information Retrieval Research 10, no. 1 (January 2020): 13–33. http://dx.doi.org/10.4018/ijirr.2020010102.

Abstract:
Text summarization plays a significant role in quick knowledge acquisition from any text-based knowledge resource. To enhance the text summarization process, a new approach towards automatic text summarization is presented in this article that facilitates level (word importance factor)-based automated text summarization. An equivalent tree is produced from the directed graph during the processing of the input text with WordNet. Detailed investigations further ensure that the execution time of the proposed automatic text summarization strictly follows a linear relationship with respect to the varying volume of input. Further investigation of the performance of the proposed automatic text summarization approach ensures its superiority over several other existing text summarization approaches.
18

Narang, Anmol, Neelam R. Prakash, and Amit Arora. "Text Summarization using PSO." International Journal of Advanced Research in Computer Science and Software Engineering 7, no. 6 (June 30, 2017): 528–31. http://dx.doi.org/10.23956/ijarcsse/v7i6/0242.

19

Divekar, Akash. "Analysis on Text Summarization." International Journal for Research in Applied Science and Engineering Technology 10, no. 6 (June 30, 2022): 4222–29. http://dx.doi.org/10.22214/ijraset.2022.44848.

Abstract:
As we enter the 21st century, with the advent of mobile phones and access to information stores, we seem to be surrounded by more information, with less time or ability to process it. The creation of automated summaries was a clever human solution to this complex problem; however, the application of this solution has been very complicated. In fact, there are a number of problems that need to be addressed before the promise of automatic text summarization can be fully realized. Basically, it is necessary to understand how people summarize text and build a system based on that. However, people differ so much in their thinking and interpretation that it is difficult to produce a "gold standard" summary against which system-produced summaries can be tested. In this paper, we discuss the basic concepts of the field by providing the most appropriate definitions, a characterization, the types, and the two different methods of automatic text summarization: extraction and abstraction. Special attention is given to the extraction method, which consists of selecting sentences and paragraphs that are important in the original text and combining them into a shorter form. It is conceptually simple and easy to use.
20

Varagantham, Chetana, J. Srinija Reddy, Uday Yelleni, Madhumitha Kotha, and P. Venkateswara Rao. "TEXT SUMMARIZATION USING NLP." International Journal Of Trendy Research In Engineering And Technology 06, no. 04 (2022): 26–30. http://dx.doi.org/10.54473/ijtret.2022.6405.

Abstract:
This project represents work related to text summarization. In this paper, we present a framework for summarizing huge amounts of information. The proposed framework depends on feature extraction from web text, utilizing both morphological elements and semantic data. At present, when huge amounts of information are available on the internet, it is important to provide improved ways to extract information quickly and efficiently. It is very difficult for human beings to manually extract the summary of a large text document. There is plenty of text material available on the internet, so there is a problem of searching for related documents among the documents available and absorbing related information from them. In essence, to address these issues, automatic text summarization is necessary. Text summarization is the process of identifying the most important and meaningful information in an input document or set of related input documents and compressing the inputs into a shorter version while maintaining the overall objectives.
21

Nisa, Aesmitul, and Mr Ankur Gupta. "Hybrid Semantic Text Summarization." International Journal for Research in Applied Science and Engineering Technology 10, no. 7 (July 31, 2022): 4772–81. http://dx.doi.org/10.22214/ijraset.2022.46070.

Abstract:
Automatic summarization involves condensing written material using a computer algorithm to provide a summary that keeps the key ideas of the original text. Finding a representative subset of the data that includes the details of the complete set is the basic goal. There are two different sorts of summarization approaches: extractive and abstractive. Our system combines the two methods. To produce the extractive summary in our method, we have incorporated a variety of statistical and semantic features. Emotions are significant in life since they reflect our mental state; as a result, one of our semantic characteristics is emotion. To create summaries, our approach fundamentally integrates syntactic, psychological, and statistical techniques. We implement the extractive text summarization using word2vec (deep learning) as a semantic feature, the K-means clustering technique, and statistical parameters. A generation component, which combines WordNet, the Lesk algorithm, and POS tagging, receives the generated extractive summary and converts it into an abstractive summary to create a hybrid result. Using the DUC 2007 dataset to assess our summarizer, we produced effective results.
22

Khargharia, Debabrata. "APPLICATIONS OF TEXT SUMMARIZATION." International Journal of Advanced Research in Computer Science 9, no. 3 (June 20, 2018): 76–79. http://dx.doi.org/10.26483/ijarcs.v9i3.6037.

23

Bhargava, Rupal, and Yashvardhan Sharma. "Deep Extractive Text Summarization." Procedia Computer Science 167 (2020): 138–46. http://dx.doi.org/10.1016/j.procs.2020.03.191.

24

Sankarasubramaniam, Yogesh, Krishnan Ramanathan, and Subhankar Ghosh. "Text summarization using Wikipedia." Information Processing & Management 50, no. 3 (May 2014): 443–61. http://dx.doi.org/10.1016/j.ipm.2014.02.001.

25

., Sherry. "MULTILINGUAL TEXT SUMMARIZATION TECHNIQUES." International Journal of Research in Engineering and Technology 06, no. 07 (July 25, 2017): 28–31. http://dx.doi.org/10.15623/ijret.2017.0607005.

26

Chaves, Andrea, Cyrille Kesiku, and Begonya Garcia-Zapirain. "Automatic Text Summarization of Biomedical Text Data: A Systematic Review." Information 13, no. 8 (August 19, 2022): 393. http://dx.doi.org/10.3390/info13080393.

Abstract:
In recent years, the evolution of technology has led to an increase in text data obtained from many sources. In the biomedical domain, text information has also evidenced this accelerated growth, and automatic text summarization systems play an essential role in optimizing physicians' time resources and identifying relevant information. In this paper, we present a systematic review of recent research on text summarization for biomedical textual data, focusing mainly on the methods employed, the type of input text data, areas of application, and the evaluation metrics used to assess systems. The survey was limited to the period between 1st January 2014 and 15th March 2022. The data were collected from the WoS, IEEE, and ACM digital libraries, while the search strategies were developed with the help of experts in NLP techniques and previous systematic reviews. The four phases of a systematic review by the PRISMA methodology were conducted, and five summarization factors were determined to assess the studies included: Input, Purpose, Output, Method, and Evaluation metric. Results showed that 3.5% of 801 studies met the inclusion criteria. Moreover, single-document, biomedical literature, generic, and extractive summarization proved to be the most common approaches employed, while techniques based on machine learning were used in 16 studies and ROUGE (Recall-Oriented Understudy for Gisting Evaluation) was reported as the evaluation metric in 26 studies. This review found that in recent years more transformer-based methodologies for summarization purposes have been implemented compared to a previous survey. Additionally, there are still challenges in text summarization in different domains, especially in the biomedical field, indicating demand for further research.
27

Kakde, Mrs Kirti Pankaj, and Dr H. M. Padalikar. "Marathi Text Summarization using Extractive Technique." International Journal of Engineering and Advanced Technology 12, no. 5 (June 30, 2023): 99–105. http://dx.doi.org/10.35940/ijeat.e4200.0612523.

Abstract:
Multilingualism has played a key role in India, where people speak and understand more than one language. Marathi, one of the official languages in Maharashtra state, is often used in sources such as newspapers or blogs. However, manually summarizing bulky Marathi paragraphs or texts for easy comprehension can be challenging. To address this, text summarization becomes essential to make large documents easily readable and understandable. This research article focuses on single-document text summarization using the Natural Language Processing (NLP) approach, a subfield of Artificial Intelligence. Automatic text summarization is employed to extract relevant information in a concise manner. Information extraction is particularly useful when summarizing documents consisting of multiple sentences into three or four sentences. While extensive research has been conducted on English text summarization, the field of Marathi document summarization remains largely unexplored. This research paper explores extractive text summarization techniques specifically for Marathi documents, utilizing the LexRank algorithm along with Gensim, a graph-based technique, to generate informative summaries within word limit constraints. The experiment was conducted on the IndicNLP Marathi news article dataset, resulting in 78% precision, 72% recall, and 75% F-measure using the frequency-based method, and 78% precision, 78% recall, and 78% F-measure using the LexRank algorithm.
28

Yadav, Avaneesh Kumar, Ashish Kumar Maurya, Ranvijay, and Rama Shankar Yadav. "Extractive Text Summarization Using Recent Approaches: A Survey." Ingénierie des systèmes d information 26, no. 1 (February 28, 2021): 109–21. http://dx.doi.org/10.18280/isi.260112.

Abstract:
In this era of growing digital media, the volume of text data increases day by day from various sources and may contain entire documents, books, articles, etc. This amount of text is a source of information that may be insignificant, redundant, and sometimes may not carry any meaningful representation. Therefore, we require techniques and tools that can automatically summarize enormous amounts of text data and help us decide whether they are useful or not. Text summarization is a process that generates a brief version of a document in the form of a meaningful summary. It can be classified into abstractive text summarization and extractive text summarization. Abstractive text summarization generates an abstract type of summary from the given document. In extractive text summarization, a summary is created from the given document that contains the crucial sentences of the document. Many authors have proposed various techniques for both types of text summarization. This paper presents a survey of extractive text summarization based on graph-based techniques. Specifically, it focuses on unsupervised and supervised techniques. It shows recent works and advances in them and presents the strengths and weaknesses of previous works in tabular form. Finally, it concentrates on evaluation measures for summaries.
29

Timalsina, Bipin, Nawaraj Paudel, and Tej Bahadur Shahi. "Attention based Recurrent Neural Network for Nepali Text Summarization." Journal of Institute of Science and Technology 27, no. 1 (June 30, 2022): 141–48. http://dx.doi.org/10.3126/jist.v27i1.46709.

Abstract:
Automatic text summarization has been a challenging topic in natural language processing (NLP), as it demands preserving important information while summarizing a large text into a summary. Extractive and abstractive text summarization are widely investigated approaches to text summarization. In extractive summarization, the important sentences from the large text are extracted and combined to create a summary, whereas abstractive summarization creates a summary that is more focused on meaning rather than content. Therefore, abstractive summarization has gained more attention from researchers in the recent past. However, text summarization is still an untouched topic in the Nepali language. To this end, we propose an abstractive text summarization for Nepali text. Here, we first create a Nepali text dataset by scraping Nepali news from online news portals. Second, we design a deep learning-based text summarization model based on an encoder-decoder recurrent neural network with attention; more precisely, Long Short-Term Memory (LSTM) cells are used in the encoder and decoder layers. Third, we build nine different models by selecting various hyper-parameters such as the number of hidden layers and the number of nodes. Finally, we report the Recall-Oriented Understudy for Gisting Evaluation (ROUGE) score for each model to evaluate their performance. Among the nine models created by adjusting different numbers of layers and hidden states, the model with a single-layer encoder and 256 hidden states outperformed all other models, with F-score values of 15.74, 3.29, and 15.21 for ROUGE-1, ROUGE-2, and ROUGE-L, respectively.
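Model comparison here rests on ROUGE. A minimal sketch of computing ROUGE-1/2/L F-scores for a generated summary against a reference, using the rouge-score package, is shown below; the example strings are placeholders, and this is not the authors' evaluation pipeline.

```python
# Minimal ROUGE-1/2/L evaluation sketch using the rouge-score package
# (placeholder strings; not the authors' evaluation pipeline).
from rouge_score import rouge_scorer

reference = "the government announced a new policy on education today"
generated = "a new education policy was announced by the government"

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
scores = scorer.score(reference, generated)

for metric, result in scores.items():
    # Each result holds precision, recall and F-measure for that ROUGE variant.
    print(f"{metric}: P={result.precision:.3f} R={result.recall:.3f} F={result.fmeasure:.3f}")
```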
30

Tofighy, Seyyed Mohsen, Ram Gopal Raj, and Hamid Haj Seyyed Javad. "AHP Techniques for Persian Text Summarization." Malaysian Journal of Computer Science 26, no. 1 (March 1, 2013): 1–8. http://dx.doi.org/10.22452/mjcs.vol26no1.1.

Abstract:
In recent years, there has been an increasing amount of information on the web, and summarization technologies are among the essential resources for shortening text documents. In this paper, we present an AHP technique for Persian text summarization. The proposed model uses the analytic hierarchy process as the basis of an evaluation algorithm and improves the summarization quality of Persian-language text. The weighting and combination methods are the two main contributions of the proposed text evaluation algorithm.
31

Lucky, Henry, and Derwin Suhartono. "Investigation of Pre-Trained Bidirectional Encoder Representations from Transformers Checkpoints for Indonesian Abstractive Text Summarization." Journal of Information and Communication Technology 21, No.1 (November 11, 2021): 71–94. http://dx.doi.org/10.32890/jict2022.21.1.4.

Abstract:
Text summarization aims to reduce text by removing less useful information so as to obtain information quickly and precisely. In Indonesian abstractive text summarization, research mostly focuses on multi-document summarization, whose methods will not work optimally for single-document summarization. As the public summarization datasets and works in English focus on single-document summarization, this study emphasized Indonesian single-document summarization. Abstractive text summarization studies in English frequently use Bidirectional Encoder Representations from Transformers (BERT), and since an Indonesian BERT checkpoint is available, it was employed in this study. This study investigated the use of Indonesian BERT in abstractive text summarization on the IndoSum dataset using the BERTSum model. The investigation proceeded by using various combinations of model encoders, model embedding sizes, and model decoders. Evaluation results showed that models with a larger embedding size and a Generative Pre-Training (GPT)-like decoder could improve the Recall-Oriented Understudy for Gisting Evaluation (ROUGE) score and BERTScore of the model results.
32

Zamzam, Muhammad Adib. "SISTEM AUTOMATIC TEXT SUMMARIZATION MENGGUNAKAN ALGORITMA TEXTRANK." MATICS 12, no. 2 (September 25, 2020): 111–16. http://dx.doi.org/10.18860/mat.v12i2.8372.

Abstract:
Text summarization is an approach that can be used to condense a long article into a shorter, more concise text, so that the relatively shorter summary can represent the long text. Automatic text summarization is text summarization performed automatically by a computer. There are two kinds of automatic text summarization algorithms: extraction-based summarization and abstractive summarization. The TextRank algorithm is an extraction-based (extractive) algorithm, where extraction here means selecting text units (sentences, sentence segments, paragraphs, or passages) considered to contain the important information of the document and arranging those units (sentences) in the correct way. Results with 50 input articles and summaries of 12.5% of the original text length show that the system achieves a ROUGE recall of 41.659%. The highest ROUGE recall was recorded for article 48, with a value of 0.764. The lowest ROUGE recall was recorded for article 37, with a value of 0.167.
33

Karnik, Madhuri P., and D. V. Kodavade. "Abstractive Summarization with Efficient Transformer Based Approach." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 4 (May 4, 2023): 291–98. http://dx.doi.org/10.17762/ijritcc.v11i4.6454.

Abstract:
Because of the rapid proliferation of online data, one of the most significant research areas is how to make a document smaller while keeping its essential information. This information must be summarized in order to recover meaningful knowledge in an acceptable time; this is called text summarization. Extractive and abstractive text summarization are the two types of summarization. In recent years, the area of abstractive text summarization has become increasingly popular. Abstractive Text Summarization (ATS) aims to extract the most vital content from a text corpus and condense it into a shorter text while maintaining its meaning and semantic and grammatical accuracy. Deep learning architectures have opened a new phase in natural language processing (NLP). Many studies have demonstrated the competitive performance of innovative architectures including the recurrent neural network (RNN), the attention mechanism, and LSTM, among others. The Transformer, a recently presented model, relies on the attention mechanism. In this paper, abstractive text summarization is accomplished using a basic Transformer model, a Transformer with a pointer-generator network (PGN) and coverage mechanism, a Fastformer architecture, and a Fastformer with pointer-generator network (PGN) and coverage mechanism. We compare these architectures after careful and thorough hyperparameter adjustment. In the experiments, the standard CNN/DM dataset is used to test these architectures on the task of abstractive summarization.
34

Joshi, Manju Lata, Nisheeth Joshi, and Namita Mittal. "SGATS: Semantic Graph-based Automatic Text Summarization from Hindi Text Documents." ACM Transactions on Asian and Low-Resource Language Information Processing 20, no. 6 (November 30, 2021): 1–32. http://dx.doi.org/10.1145/3464381.

Abstract:
Creating a coherent summary of a text is a challenging task in the field of Natural Language Processing (NLP). Various automatic text summarization techniques have been developed for abstractive as well as extractive summarization. This study focuses on extractive summarization, a process of selecting representative paragraphs or sentences from the original text and combining them into a form smaller than the document(s) to generate a summary. The methods that have been used for extractive summarization are based on graph-theoretic approaches, machine learning, Latent Semantic Analysis (LSA), neural networks, clustering, and fuzzy logic. In this paper, a semantic graph-based approach SGATS (Semantic Graph-based approach for Automatic Text Summarization) is proposed to generate an extractive summary. The proposed approach constructs a semantic graph of the original Hindi text document by establishing semantic relationships between the sentences of the document, using Hindi WordNet ontology as a background knowledge source. Once the semantic graph is constructed, fourteen different graph-theoretical measures are applied to rank the document sentences according to their semantic scores. The proposed approach is applied to two data sets from the different domains of tourism and health. The performance of the proposed approach is compared with the state-of-the-art TextRank algorithm and a human-annotated summary, and is evaluated using widely accepted ROUGE measures. The outcomes show that our proposed system produces better results than TextRank for the health domain corpus and comparable results for the tourism corpus. Further, correlation coefficient methods are applied to find correlations between eight different graphical measures, and it is observed that most of the graphical measures are highly correlated.
35

Rautray, Rasmita, Lopamudra Swain, Rasmita Dash, and Rajashree Dash. "A brief review on text summarization methods." International Journal of Engineering & Technology 7, no. 4.5 (September 22, 2018): 728. http://dx.doi.org/10.14419/ijet.v7i4.5.25070.

Abstract:
At present, text summarization is a popular and active field of research in both the Information Retrieval (IR) and Natural Language Processing (NLP) communities. Summarization is important for IR since it is a means to identify useful information by condensing documents from a large corpus of data in an efficient way. In this study, different aspects of text summarization methods are presented, along with the strengths, limitations, and gaps within the methods.
36

White, Clinton T., Neil P. Molino, Julia S. Yang, and John M. Conroy. "occams: A Text Summarization Package." Analytics 2, no. 3 (June 30, 2023): 546–59. http://dx.doi.org/10.3390/analytics2030030.

Abstract:
Extractive text summarization selects a small subset of sentences from a document, which gives good "coverage" of the document. When given a set of term weights indicating the importance of the terms, the concept of coverage may be formalized into a combinatorial optimization problem known as the budgeted maximum coverage problem. Extractive methods in this class are known to be among the best of the classic extractive summarization systems. This paper gives a synopsis of the software package occams, which is a multilingual extractive single- and multi-document summarization package based on an algorithm giving an optimal approximation to the budgeted maximum coverage problem. The occams package is written in Python and provides an easy-to-use modular interface, allowing it to work in conjunction with popular Python NLP packages, such as nltk, stanza or spacy.
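The abstract names the budgeted maximum coverage formulation. The sketch below shows a standard greedy approximation for that problem on a toy sentence/term example; it is a conceptual illustration only and does not use the occams package API.

```python
# Greedy approximation to the budgeted maximum coverage problem (conceptual sketch,
# not the occams API): pick sentences maximizing newly covered term weight per unit cost.
def greedy_budgeted_coverage(sentences, term_weights, budget):
    """sentences: list of (cost, set_of_terms); budget: total allowed cost."""
    chosen, covered, spent = [], set(), 0
    remaining = list(range(len(sentences)))
    while remaining:
        def gain(i):
            cost, terms = sentences[i]
            new_terms = terms - covered
            return sum(term_weights.get(t, 0.0) for t in new_terms) / cost
        best = max(remaining, key=gain)
        cost, terms = sentences[best]
        if spent + cost > budget or gain(best) == 0:
            break
        chosen.append(best)
        covered |= terms
        spent += cost
        remaining.remove(best)
    return chosen

# Toy example: cost = sentence length in words, terms = content words.
sents = [(6, {"budget", "coverage", "problem"}),
         (4, {"extractive", "summarization"}),
         (5, {"coverage", "summarization", "terms"})]
weights = {"budget": 1.0, "coverage": 2.0, "problem": 1.0,
           "extractive": 1.5, "summarization": 2.0, "terms": 0.5}
print(greedy_budgeted_coverage(sents, weights, budget=10))
```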
37

Dehru, Virender, Pradeep Kumar Tiwari, Gaurav Aggarwal, Bhavya Joshi, and Pawan Kartik. "Text Summarization Techniques and Applications." IOP Conference Series: Materials Science and Engineering 1099, no. 1 (March 1, 2021): 012042. http://dx.doi.org/10.1088/1757-899x/1099/1/012042.

38

Lagrini, Samira, Mohammed Redjimi, and Nabiha Azizi. "Automatic Arabic Text Summarization Approaches." International Journal of Computer Applications 164, no. 5 (April 17, 2017): 31–37. http://dx.doi.org/10.5120/ijca2017913628.

39

B., Laxmi, and P. Venkata. "An Overview of Text Summarization." International Journal of Computer Applications 171, no. 10 (August 24, 2017): 1–17. http://dx.doi.org/10.5120/ijca2017915109.

40

Algaphari, Ghaleb, Fadl M. Ba-Alwi, and Aimen Moharram. "Text Summarization using Centrality Concept." International Journal of Computer Applications 79, no. 1 (October 18, 2013): 5–12. http://dx.doi.org/10.5120/13703-1450.

41

OKUMURA, MANABU, and HIDETSUGU NANBA. "Automated Text Summarization: A Survey." Journal of Natural Language Processing 6, no. 6 (1999): 1–26. http://dx.doi.org/10.5715/jnlp.6.6_1.

42

Lee, Kun-Hui, Seung-Hoon Na, Joon-Ho Lim, Tae-Hyeong Kim, and Du-Seong Chang. "PrefixLM for Korean Text Summarization." Journal of KIISE 49, no. 6 (June 30, 2022): 475–87. http://dx.doi.org/10.5626/jok.2022.49.6.475.

43

Deshmukh, Bharti. "TEXT SUMMARIZATION USING PYTHON NLTK." International Journal of Advanced Research 10, no. 06 (June 30, 2022): 202–9. http://dx.doi.org/10.21474/ijar01/14876.

Abstract:
Text summarization is basically summarizing a given text with the use of natural language processing and machine learning. There has been an explosion in the quantity of textual data from many sources. This quantity of text is a useful source of facts and knowledge which needs to be efficiently summarized to be useful. In this paper, the primary approaches to automatic text summarization are described, along with the effectiveness and shortcomings of the different methods. The system works by assigning scores to sentences within the document to be summarized and using the highest-scoring sentences in the summary. Score values are based on features extracted from the sentence, and a linear combination of feature scores is used. Almost all of the mappings from feature to score and the coefficient values in the linear combination were derived from a training corpus. Some anaphora resolution was performed. In addition to basic summarization, an attempt was made to address the issue of targeting the text at the user. The intended user was considered to have little background knowledge or reading ability. The system handled this by simplifying the individual words or phrases used in the summary and by drawing the prerequisite background facts from the web.
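The scoring scheme described (sentence scores built from extracted features, with the top-scoring sentences kept) can be illustrated with a simple word-frequency feature using NLTK. This is a generic sketch of frequency-based sentence scoring, not the paper's trained, multi-feature system.

```python
# Simple frequency-based sentence scoring with NLTK (generic sketch; the paper's
# system combines several trained features, not just word frequency).
from collections import Counter
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import sent_tokenize, word_tokenize

nltk.download("punkt", quiet=True)
nltk.download("stopwords", quiet=True)

def summarize(text: str, n_sentences: int = 2) -> str:
    stops = set(stopwords.words("english"))
    words = [w.lower() for w in word_tokenize(text) if w.isalpha() and w.lower() not in stops]
    freq = Counter(words)
    sentences = sent_tokenize(text)
    # Score each sentence by the summed frequency of its content words.
    scores = {i: sum(freq[w.lower()] for w in word_tokenize(s) if w.lower() in freq)
              for i, s in enumerate(sentences)}
    top = sorted(scores, key=scores.get, reverse=True)[:n_sentences]
    return " ".join(sentences[i] for i in sorted(top))

text = ("Automatic summarization reduces a document to its most important sentences. "
        "Frequency of content words is one simple feature for scoring sentences. "
        "Sentences containing many frequent content words are kept in the summary.")
print(summarize(text))
```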
44

Shree, A. N. Ramya, and Kiran P. "Privacy Preserving Text Document Summarization." Journal of Engineering Research and Sciences 1, no. 7 (July 2022): 7–14. http://dx.doi.org/10.55708/js0107002.

45

Tas, Oguzhan, and Farzad Kiyani. "A survey automatic text summarization." Pressacademia 5, no. 1 (June 30, 2017): 205–13. http://dx.doi.org/10.17261/pressacademia.2017.591.

46

Bao, Guangsheng, and Yue Zhang. "Contextualized Rewriting for Text Summarization." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 14 (May 18, 2021): 12544–53. http://dx.doi.org/10.1609/aaai.v35i14.17487.

Abstract:
Extractive summarization suffers from irrelevance, redundancy and incoherence. Existing work shows that abstractive rewriting of extractive summaries can improve conciseness and readability. These rewriting systems consider extracted summaries as the only input, which is relatively focused but can lose important background knowledge. In this paper, we investigate contextualized rewriting, which ingests the entire original document. We formalize contextualized rewriting as a seq2seq problem with group alignments, introducing group tags as a solution to model the alignments and identifying extracted summaries through content-based addressing. Results show that our approach significantly outperforms non-contextualized rewriting systems without requiring reinforcement learning, achieving strong improvements on ROUGE scores over multiple extractive summarizers.
47

Yang, Zijian Győző, Ádám Agócs, Gábor Kusper, and Tamás Váradi. "Abstractive text summarization for Hungarian." Annales Mathematicae et Informaticae 53 (2021): 299–316. http://dx.doi.org/10.33039/ami.2021.04.002.

48

C.Balabantaray, R., B. Sahoo, D. K. Sahoo, and M. Swain. "Odia Text Summarization Using Stemmer." International Journal of Applied Information Systems 1, no. 3 (February 18, 2012): 20–24. http://dx.doi.org/10.5120/ijais12-450135.

49

Alguliyev, Rasim M., Ramiz M. Aliguliyev, Nijat R. Isazade, Asad Abdi, and Norisma Idris. "A Model for Text Summarization." International Journal of Intelligent Information Technologies 13, no. 1 (January 2017): 67–85. http://dx.doi.org/10.4018/ijiit.2017010104.

Abstract:
Text summarization is a process for creating a concise version of document(s) that preserves the main content. In this paper, to cover all topics and reduce redundancy in summaries, a two-stage sentence selection method for text summarization is proposed. At the first stage, to discover all topics, the sentence set is clustered using the k-means method. At the second stage, an optimum selection of sentences is proposed: from each cluster, the salient sentences are selected according to their contribution to the topic (cluster) and their proximity to other sentences in the cluster, to avoid redundancy in summaries, until the appointed summary length is reached. Sentence selection is modeled as an optimization problem. In this study, to solve the optimization problem, an adaptive differential evolution with a novel mutation strategy is employed. In a test on the benchmark DUC2001 and DUC2002 data sets, the ROUGE values of summaries obtained by the proposed approach demonstrated its validity compared to traditional methods of sentence selection and the top three performing systems for DUC2001 and DUC2002.
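The first stage described here, clustering the sentence set with k-means so that each cluster represents a topic, can be sketched with scikit-learn. Picking the sentence nearest each centroid stands in for the paper's optimization-based second stage (which actually uses adaptive differential evolution), so the selection step below is a simplified placeholder.

```python
# Stage 1 sketch: cluster sentences with k-means over TF-IDF vectors; the selection
# of one sentence per cluster (closest to the centroid) is a simplified stand-in for
# the paper's differential-evolution-based sentence selection.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

sentences = [
    "The economy grew faster than expected this quarter.",
    "Growth in the economy surprised most analysts.",
    "A new species of frog was discovered in the rainforest.",
    "Researchers described the frog species in a journal article.",
]
k = 2  # assumed number of topics/clusters

X = TfidfVectorizer().fit_transform(sentences)
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)

summary = []
for c in range(k):
    idx = np.where(km.labels_ == c)[0]
    # Choose the sentence closest to the cluster centroid as that topic's representative.
    dists = np.linalg.norm(X[idx].toarray() - km.cluster_centers_[c], axis=1)
    summary.append(sentences[idx[np.argmin(dists)]])

print(" ".join(summary))
```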
50

Deshpande, Sumeet. "Text Summarization for GRE Exam." International Journal for Research in Applied Science and Engineering Technology 7, no. 4 (April 30, 2019): 2495–98. http://dx.doi.org/10.22214/ijraset.2019.4455.

