Academic literature on the topic 'Text summarisation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Text summarisation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Text summarisation"

1. Suleiman, Dima, and Arafat Awajan. "Deep Learning Based Abstractive Text Summarization: Approaches, Datasets, Evaluation Measures, and Challenges." Mathematical Problems in Engineering 2020 (August 24, 2020): 1–29. http://dx.doi.org/10.1155/2020/9365340.

Abstract:
In recent years, the volume of textual data has rapidly increased, which has generated a valuable resource for extracting and analysing information. To retrieve useful knowledge within a reasonable time period, this information must be summarised. This paper reviews recent approaches for abstractive text summarisation using deep learning models. In addition, existing datasets for training and validating these approaches are reviewed, and their features and limitations are presented. The Gigaword dataset is commonly employed for single-sentence summary approaches, while the Cable News Network (CNN)/Daily Mail dataset is commonly employed for multisentence summary approaches. Furthermore, the measures that are utilised to evaluate the quality of summarisation are investigated, and Recall-Oriented Understudy for Gisting Evaluation 1 (ROUGE1), ROUGE2, and ROUGE-L are determined to be the most commonly applied metrics. The challenges that are encountered during the summarisation process and the solutions proposed in each approach are analysed. The analysis of the several approaches shows that recurrent neural networks with an attention mechanism and long short-term memory (LSTM) are the most prevalent techniques for abstractive text summarisation. The experimental results show that text summarisation with a pretrained encoder model achieved the highest values for ROUGE1, ROUGE2, and ROUGE-L (43.85, 20.34, and 39.9, respectively). Furthermore, it was determined that most abstractive text summarisation models faced challenges such as the unavailability of a golden token at testing time, out-of-vocabulary (OOV) words, summary sentence repetition, inaccurate sentences, and fake facts.
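The ROUGE1, ROUGE2, and ROUGE-L scores reported in this abstract can be made concrete with a small sketch. The pure-Python ROUGE-N computation below is a simplification for illustration only, not the official ROUGE toolkit (which adds stemming, multi-reference handling, and the longest-common-subsequence ROUGE-L variant); the function names and example sentences are our own.

```python
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams from a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n(candidate, reference, n=1):
    """Simplified ROUGE-N: n-gram overlap recall, precision, and F1
    between a candidate summary and a single reference summary."""
    cand = ngrams(candidate.lower().split(), n)
    ref = ngrams(reference.lower().split(), n)
    overlap = sum((cand & ref).values())  # clipped n-gram matches
    recall = overlap / max(sum(ref.values()), 1)
    precision = overlap / max(sum(cand.values()), 1)
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return recall, precision, f1

# One substituted word leaves 5 of the 6 unigrams shared.
scores = rouge_n("the cat lay on the mat", "the cat sat on the mat", n=1)
```

Reported ROUGE values such as 43.85 for ROUGE1 are these fractions scaled to percentages and averaged over a test set.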
2. Reeve, Lawrence H., Hyoil Han, and Ari D. Brooks. "Biomedical text summarisation using concept chains." International Journal of Data Mining and Bioinformatics 1, no. 4 (2007): 389. http://dx.doi.org/10.1504/ijdmb.2007.012967.

3. Sartakhti, Moein Salimi, Ahmad Yoosofan, Ali Asghar Fatehi, and Ali Rahimi. "Single Document Summarization Based on Grey Wolf Optimization." Global Journal of Computer Sciences: Theory and Research 10, no. 2 (October 30, 2020): 48–56. http://dx.doi.org/10.18844/gjcs.v10i2.5807.

Abstract:
The rapid growth of online services has caused an information explosion. Text summarisation condenses a text into a short version while preserving its overall meaning; it is an important way to extract significant information from documents and offer it to the user in abbreviated form while preserving the major content. Summarising large documents is very difficult for human beings. To address this, this paper uses sentence features and word features that assign scores to all the sentences, and combines these features using the Grey Wolf Optimiser (GWO). Optimising the feature combination gives better results than using individual features. This is the first attempt to evaluate the performance of GWO for Persian text summarisation. The proposed method is compared with a genetic algorithm and an evolutionary strategy, and the results show that our model is useful in this research area. Keywords: Text summarisation, genetic algorithm, sentence, score function, evolutionary strategy.
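The feature-scoring idea described in this abstract can be sketched as follows. The three features and the weight vector below are hypothetical stand-ins, not the ones used in the paper; a metaheuristic such as GWO would search the weight space against a quality objective (e.g. ROUGE on gold summaries), whereas here the weights are simply fixed for illustration.

```python
import re

def sentence_features(sentence, doc_words):
    """Three toy features per sentence: relative length, term-frequency
    coverage, and presence of numerals. Illustrative stand-ins for the
    richer word and sentence features a real summariser would use."""
    words = sentence.lower().split()
    rel_len = len(words) / max(len(doc_words), 1)
    tf = sum(doc_words.count(w) for w in words) / max(len(doc_words), 1)
    has_digit = 1.0 if re.search(r"\d", sentence) else 0.0
    return [rel_len, tf, has_digit]

def score_sentences(text, weights):
    """Rank sentences by a weighted sum of their features, descending.
    An optimiser such as GWO would search for the weight vector that
    maximises summary quality; here the weights are fixed."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    doc_words = text.lower().split()
    scored = [(sum(w * f for w, f in zip(weights, sentence_features(s, doc_words))), s)
              for s in sentences]
    return sorted(scored, reverse=True)

text = ("Summarisation condenses text. It preserves the main content. "
        "In 2020 the field was surveyed.")
top = score_sentences(text, weights=[0.3, 0.5, 0.2])[:2]  # keep the 2 best sentences
```

An extractive summary is then just the top-ranked sentences, restored to document order.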
4. Lloret, Elena, and Manuel Palomar. "Text summarisation in progress: a literature review." Artificial Intelligence Review 37, no. 1 (April 30, 2011): 1–41. http://dx.doi.org/10.1007/s10462-011-9216-z.

5. Jayashree, R., K. Srikanta Murthy, Basavaraj S. Anami, and Alex Pappachen James. "The impact of feature selection on text summarisation." International Journal of Applied Pattern Recognition 1, no. 4 (2014): 377. http://dx.doi.org/10.1504/ijapr.2014.068344.

6. Zerva, Chrysoula, Minh-Quoc Nghiem, Nhung T. H. Nguyen, and Sophia Ananiadou. "Cited text span identification for scientific summarisation using pre-trained encoders." Scientometrics 125, no. 3 (May 7, 2020): 3109–37. http://dx.doi.org/10.1007/s11192-020-03455-z.

Abstract:
We present our approach for the identification of cited text spans in scientific literature, using pre-trained encoders (BERT) in combination with different neural networks. We further experiment to assess the impact of using these cited text spans as input in BERT-based extractive summarisation methods. Inspired and motivated by the CL-SciSumm shared tasks, we explore different methods to adapt pre-trained models that are tuned for the generic domain to scientific literature. For the identification of cited text spans, we assess the impact of different configurations in terms of learning from augmented data and using different features and network architectures (BERT, XLNET, CNN, and BiMPM) for training. We show that identifying and fine-tuning the language models on unlabelled or augmented domain-specific data can improve the performance of cited text span identification models. For the scientific summarisation, we implement an extractive summarisation model adapted from BERT. With respect to the input sentences taken from the cited paper, we explore two different scenarios: (1) consider all the sentences (full text) of the referenced article as input and (2) consider only the text spans that have been identified as cited by other publications. We observe that in certain experiments, by using only the cited text spans, we can achieve better performance while minimising the input size needed.
7. Lloret, Elena, and Manuel Palomar. "COMPENDIUM: a text summarisation tool for generating summaries of multiple purposes, domains, and genres." Natural Language Engineering 19, no. 2 (July 16, 2012): 147–86. http://dx.doi.org/10.1017/s1351324912000198.

Abstract:
In this paper, we present a Text Summarisation tool, compendium, capable of generating the most common types of summaries. Regarding the input, single- and multi-document summaries can be produced; as the output, the summaries can be extractive or abstractive-oriented; and finally, concerning their purpose, the summaries can be generic, query-focused, or sentiment-based. The proposed architecture for compendium is divided into various stages, making a distinction between core and additional stages. The former constitute the backbone of the tool and are common to the generation of any type of summary, whereas the latter are used for enhancing the capabilities of the tool. The main contributions of compendium with respect to state-of-the-art summarisation systems are that (i) it specifically deals with the problem of redundancy by means of textual entailment; (ii) it combines statistical and cognitive-based techniques for determining relevant content; and (iii) it proposes an abstractive-oriented approach for facing the challenge of abstractive summarisation. The evaluation performed in different domains and textual genres, comprising traditional texts as well as texts extracted from the Web 2.0, shows that compendium is very competitive and appropriate to be used as a tool for generating summaries.
8. Hariharan, Shanmugasundaram, and Rengaramanujam Srinivasan. "A Comparison of Similarity Measures for Text Documents." Journal of Information & Knowledge Management 07, no. 01 (March 2008): 1–8. http://dx.doi.org/10.1142/s0219649208001889.

Abstract:
Similarity is an important and widely used concept in many applications such as Document Summarisation, Question Answering, Information Retrieval, Document Clustering and Categorisation. This paper presents a comparison of various similarity measures for comparing the content of text documents. We have attempted to find the measure best suited to finding document similarity for newspaper reports.
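One of the standard measures compared in studies like this is cosine similarity over bag-of-words term-frequency vectors. A minimal sketch (our own illustrative example, not the paper's implementation):

```python
import math
from collections import Counter

def cosine_similarity(doc_a, doc_b):
    """Cosine similarity between bag-of-words term-frequency vectors:
    the dot product of the two vectors divided by the product of
    their Euclidean norms. Returns a value in [0, 1] for raw counts."""
    a = Counter(doc_a.lower().split())
    b = Counter(doc_b.lower().split())
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Three of the four terms in each document are shared, so similarity is 0.75.
sim = cosine_similarity("the markets rose today", "markets rose sharply today")
```

Real systems typically weight the counts (e.g. with TF-IDF) before computing the cosine, but the formula is the same.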
9. Orăsan, Constantin. "Automatic summarisation: 25 years On." Natural Language Engineering 25, no. 06 (September 19, 2019): 735–51. http://dx.doi.org/10.1017/s1351324919000524.

Abstract:
Automatic text summarisation is a topic that has been receiving attention from the research community since the early days of computational linguistics, but it really took off around 25 years ago. This article presents the main developments of the last 25 years. It starts by defining what a summary is and how its definition changed over time as a result of the interest in processing new types of documents. The article continues with a brief history of the field and highlights the main challenges posed by the evaluation of summaries. The article finishes with some thoughts about the future of the field.
10. Jayashree, R., K. Srikanta Murthy, and Basavaraj S. Anami. "Hybrid methodologies for summarisation of Kannada language text documents." International Journal of Knowledge Engineering and Data Mining 3, no. 1 (2014): 82. http://dx.doi.org/10.1504/ijkedm.2014.066238.


Dissertations / Theses on the topic "Text summarisation"

1. El-Haj, Mahmoud. "Multi-document Arabic text summarisation." Thesis, University of Essex, 2012. http://eprints.lancs.ac.uk/71279/.

Abstract:
Multi-document summarisation is the process of producing a single summary of a collection of related documents. Much of the current work on multi-document text summarisation is concerned with the English language; relevant resources are numerous and readily available. These resources include human generated (gold-standard) and automatic summaries. Arabic multi-document summarisation is still in its infancy. One of the obstacles to progress is the limited availability of Arabic resources to support this research. When we started our research, there were no publicly available Arabic multi-document gold-standard summaries, which are needed to automatically evaluate system generated summaries. The Document Understanding Conference (DUC) and Text Analysis Conference (TAC) at that time provided resources such as gold-standard extractive and abstractive summaries (both human and system generated) that were only available in English. Our aim was to push forward the state-of-the-art in Arabic multi-document summarisation. This required advancements in at least two areas. The first area was the creation of Arabic test collections. The second area was concerned with the actual summarisation process to find methods that improve the quality of Arabic summaries. To address both points we created single and multi-document Arabic test collections both automatically and manually using a commonly used English dataset and by having human participants. We developed extractive language dependent and language independent single and multi-document summarisers, both for Arabic and English. In our work we provided state-of-the-art approaches for Arabic multi-document summarisation. We succeeded in including Arabic in one of the leading summarisation conferences, the Text Analysis Conference (TAC). Researchers on Arabic multi-document summarisation now have resources and tools that can be used to advance the research in this field.
2. Benbrahim, Mohamed. "Automatic text summarisation through lexical cohesion analysis." Thesis, University of Surrey, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.309200.

3. Garcia, Constantino Matias. "On the use of text classification methods for text summarisation." Thesis, University of Liverpool, 2013. http://livrepository.liverpool.ac.uk/12957/.

Abstract:
This thesis describes research work undertaken in the fields of text and questionnaire mining. More specifically, the research work is directed at the use of text classification techniques for the purpose of summarising the free-text part of questionnaires. In this thesis, text summarisation is conceived of as a form of text classification, in that the classes assigned to text documents can be viewed as an indication (summarisation) of the main ideas of the original free text in a coherent and reduced form. The reason for considering this type of summary is that summarising unstructured free text, such as that found in questionnaires, is not deemed to be effective using conventional text summarisation techniques. Four approaches are described in the context of the classification summarisation of free text from different sources, focused on the free-text part of questionnaires. The first approach considers the use of standard classification techniques for text summarisation and was motivated by the desire to establish a benchmark with which the more specialised summarisation classification techniques presented later in this thesis could be compared. The second approach, called Classifier Generation Using Secondary Data (CGUSD), addresses the case when the available data is not considered sufficient for training purposes (or possibly because no data is available at all). The third approach, called the Semi-Automated Rule Summarisation Extraction Tool (SARSET), presents a semi-automated classification technique to support document summarisation classification in which there is more involvement by the domain experts in the classifier generation process; the idea was that this might serve to produce more effective summaries.
The fourth is a hierarchical summarisation classification approach which assumes that text summarisation can be achieved using a classification approach whereby several class labels can be associated with documents which then constitute the summarisation. For evaluation purposes three types of text were considered: (i) questionnaire free text, (ii) text from medical abstracts and (iii) text from news stories.
4. Mohamed, Muhidin Abdullahi. "Automatic text summarisation using linguistic knowledge-based semantics." Thesis, University of Birmingham, 2016. http://etheses.bham.ac.uk//id/eprint/6659/.

Abstract:
Text summarisation is the process of reducing a text document to a short substitute summary. Since the commencement of the field, almost all summarisation research to date has involved the identification and extraction of the most important document/cluster segments, an approach called extraction. This typically involves scoring each document sentence according to a composite scoring function consisting of surface-level and semantic features. Enabling machines to analyse text features and understand their meaning potentially requires both text semantic analysis and equipping computers with external semantic knowledge. This thesis addresses extractive text summarisation by proposing a number of semantic and knowledge-based approaches. The work combines the high-quality semantic information in WordNet, the crowdsourced encyclopaedic knowledge in Wikipedia, and the manually crafted categorial variation in CatVar to improve summary quality. Such improvements are accomplished through sentence-level morphological analysis and the incorporation of Wikipedia-based named-entity semantic relatedness while using heuristic algorithms. The study also investigates how sentence-level semantic analysis based on semantic role labelling (SRL), leveraged with background world knowledge, influences sentence textual similarity and text summarisation. The proposed sentence similarity and summarisation methods were evaluated on standard publicly available datasets such as the Microsoft Research Paraphrase Corpus (MSRPC), TREC-9 Question Variants, and the Document Understanding Conference 2002, 2005, 2006 (DUC 2002, DUC 2005, DUC 2006) corpora. The project also uses Recall-Oriented Understudy for Gisting Evaluation (ROUGE) for the quantitative assessment of the proposed summarisers’ performance. Results of our systems showed their effectiveness compared to related state-of-the-art summarisation methods and baselines.
Of the proposed summarisers, the SRL Wikipedia-based system demonstrated the best performance.
5. Xia, Menglin. "Text readability and summarisation for non-native reading comprehension." Thesis, University of Cambridge, 2019. https://www.repository.cam.ac.uk/handle/1810/288740.

Abstract:
This thesis focuses on two important aspects of non-native reading comprehension: text readability assessment, which estimates the reading difficulty of a given text for L2 learners, and learner summarisation assessment, which evaluates the quality of learner summaries to assess their reading comprehension. We approach both tasks as supervised machine learning problems and present automated assessment systems that achieve state-of-the-art performance. We first address the task of text readability assessment for L2 learners. One of the major challenges for a data-driven approach to text readability assessment is the lack of significantly-sized level-annotated data aimed at L2 learners. We present a dataset of CEFR-graded texts tailored for L2 learners and look into a range of linguistic features affecting text readability. We compare the text readability measures for native and L2 learners and explore methods that make use of the more plentiful data aimed at native readers to help improve L2 readability assessment. We then present a summarisation task for evaluating non-native reading comprehension and demonstrate an automated summarisation assessment system aimed at evaluating the quality of learner summaries. We propose three novel machine learning approaches to assessing learner summaries. In the first approach, we examine using several NLP techniques to extract features to measure the content similarity between the reading passage and the summary. In the second approach, we calculate a similarity matrix and apply a convolutional neural network (CNN) model to assess the summary quality using the similarity matrix. In the third approach, we build an end-to-end summarisation assessment model using recurrent neural networks (RNNs). Further, we combine the three approaches into a single system using a parallel ensemble modelling technique. 
We show that our models outperform traditional approaches that rely on exact word match on the task and that our best model produces quality assessments close to professional examiners.
6. Bowden, Paul Richard. "Automated knowledge extraction from text." Thesis, Nottingham Trent University, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.298900.

7. Lloret, Elena. "Text summarisation based on human language technologies and its applications." Doctoral thesis, Universidad de Alicante, 2011. http://hdl.handle.net/10045/23297.

8. Pfitzner, Darius Mark. "An Investigation into User Text Query and Text Descriptor Construction." Thesis, Flinders University, Computer Science, Engineering and Mathematics, 2009. http://catalogue.flinders.edu.au./local/adt/public/adt-SFU20090805.141402.

Abstract:
Cognitive limitations such as those described in Miller's (1956) work on channel capacity and Cowan's (2001) on short-term memory are factors in determining user cognitive load and in turn task performance. Inappropriate user cognitive load can reduce user efficiency in goal realization. For instance, if the user's attentional capacity is not appropriately applied to the task, distractor processing can tend to appropriate capacity from it. Conversely, if a task drives users beyond their short-term memory envelope, information loss may be realized in its translation to long-term memory and subsequent retrieval for task-based processing. To manage user cognitive capacity in the task of text search, the interface should allow users to draw on their powerful and innate pattern recognition abilities. This harmonizes with Johnson-Laird's (1983) proposal that propositional representation is tied to mental models. Combined with the theory that knowledge is highly organized when stored in memory, an appropriate approach for cognitive load optimization would be to graphically present single documents, or clusters thereof, with an appropriate number and type of descriptors. These descriptors are commonly words and/or phrases. Information theory research suggests that words have different levels of importance in document topic differentiation. Although key word identification is well researched, there is a lack of basic research into human preference regarding query formation and the heuristics users employ in search. This lack extends to features as elementary as the number of words preferred to describe and/or search for a document. Understanding these preferences will help balance processing overheads of tasks like clustering against user cognitive load to realize a more efficient document retrieval process. 
Common approaches such as search engine log analysis cannot provide this degree of understanding and do not allow clear identification of the intended set of target documents. This research endeavours to improve the manner in which text search returns are presented so that user performance under real world situations is enhanced. To this end we explore both how to appropriately present search information and results graphically to facilitate optimal cognitive and perceptual load/utilization, as well as how people use textual information in describing documents or constructing queries.
9. Meechan-Maddon, Ailsa. "The effect of noise in the training of convolutional neural networks for text summarisation." Thesis, Uppsala universitet, Institutionen för lingvistik och filologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-384607.

Abstract:
In this thesis, we work towards bridging the gap between two distinct areas: noisy text handling and text summarisation. The overall goal of the paper is to examine the effects of noise in the training of convolutional neural networks for text summarisation, with a view to understanding how to effectively create a noise-robust text-summarisation system. We look specifically at the problem of abstractive text summarisation of noisy data in the context of summarising error-containing documents from automatic speech recognition (ASR) output. We experiment with adding varying levels of noise (errors) to the 4 million-article Gigaword corpus and training an encoder-decoder CNN on it with the aim of producing a noise-robust text summarisation system. A total of six text summarisation models are trained, each with a different level of noise. We discover that the models with a high level of noise are indeed able to aptly summarise noisy data into clean summaries, despite a tendency for all models to overfit to the level of noise on which they were trained. Directions are given for future steps in order to create an even more noise-robust and flexible text summarisation system.
10. Fang, Yimai. "Proposition-based summarization with a coherence-driven incremental model." Thesis, University of Cambridge, 2019. https://www.repository.cam.ac.uk/handle/1810/287468.

Abstract:
Summarization models which operate on meaning representations of documents have been neglected in the past, although they are a very promising and interesting class of methods for summarization and text understanding. In this thesis, I present one such summarizer, which uses the proposition as its meaning representation. My summarizer is an implementation of Kintsch and van Dijk's model of comprehension, which uses a tree of propositions to represent the working memory. The input document is processed incrementally in iterations. In each iteration, new propositions are connected to the tree under the principle of local coherence, and then a forgetting mechanism is applied so that only a few important propositions are retained in the tree for the next iteration. A summary can be generated using the propositions which are frequently retained. Originally, this model was only played through by hand by its inventors using human-created propositions. In this work, I turned it into a fully automatic model using current NLP technologies. First, I create propositions by obtaining and then transforming a syntactic parse. Second, I have devised algorithms to numerically evaluate alternative ways of adding a new proposition, as well as to predict necessary changes in the tree. Third, I compared different methods of modelling local coherence, including coreference resolution, distributional similarity, and lexical chains. In the first group of experiments, my summarizer realizes summary propositions by sentence extraction. These experiments show that my summarizer outperforms several state-of-the-art summarizers. The second group of experiments concerns abstractive generation from propositions, which is a collaborative project. I have investigated the option of compressing extracted sentences, but generation from propositions has been shown to provide better information packaging.

Books on the topic "Text summarisation"

1. Benbrahim, Mohamed. Automatic text summarisation through lexical cohesion analysis. 1996.


Book chapters on the topic "Text summarisation"

1. Cristea, Dan, Oana Postolache, and Ionuţ Pistol. "Summarisation Through Discourse Structure." In Computational Linguistics and Intelligent Text Processing, 632–44. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/978-3-540-30586-6_70.

2. Christensen, Heidi, BalaKrishna Kolluru, Yoshihiko Gotoh, and Steve Renals. "From Text Summarisation to Style-Specific Summarisation for Broadcast News." In Lecture Notes in Computer Science, 223–37. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-24752-4_17.

3. Mehta, Parth, and Prasenjit Majumder. "Corpora and Evaluation for Text Summarisation." In From Extractive to Abstractive Summarization: A Journey, 25–34. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-8934-4_3.

4. Teufel, Simone. "Deeper Summarisation: The Second Time Around." In Computational Linguistics and Intelligent Text Processing, 581–98. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-75487-1_44.

5. Perera, Prasad, and Leila Kosseim. "Evaluating Syntactic Sentence Compression for Text Summarisation." In Natural Language Processing and Information Systems, 126–39. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-38824-8_11.

6. Garcia-Constantino, Matias, Frans Coenen, P.-J. Noble, and Alan Radford. "Questionnaire Free Text Summarisation Using Hierarchical Classification." In Research and Development in Intelligent Systems XXIX, 35–48. London: Springer London, 2012. http://dx.doi.org/10.1007/978-1-4471-4739-8_3.

7. El-Haj, Mahmoud, Udo Kruschwitz, and Chris Fox. "Experimenting with Automatic Text Summarisation for Arabic." In Human Language Technology. Challenges for Computer Science and Linguistics, 490–99. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-20095-3_45.

8. Lloret, Elena, and Manuel Palomar. "A Gradual Combination of Features for Building Automatic Summarisation Systems." In Text, Speech and Dialogue, 16–23. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04208-9_6.

9. Al Oudah, Abrar, Kholoud Al Bassam, Heba Kurdi, and Shiroq Al-Megren. "Wajeez: An Extractive Automatic Arabic Text Summarisation System." In Social Computing and Social Media. Design, Human Behavior and Analytics, 3–14. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-21902-4_1.

10. Joshi, Monika, Hui Wang, and Sally McClean. "Generating Object-Oriented Semantic Graph for Text Summarisation." In Mining Intelligence and Knowledge Exploration, 298–311. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-13817-6_29.


Conference papers on the topic "Text summarisation"

1. Hachey, Ben, and Claire Grover. "Automatic legal text summarisation." In the 10th international conference. New York, New York, USA: ACM Press, 2005. http://dx.doi.org/10.1145/1165485.1165498.

2. Vijay, Sakshee, Vartika Rai, Sorabh Gupta, Anshuman Vijayvargia, and Dipti Misra Sharma. "Extractive text summarisation in Hindi." In 2017 International Conference on Asian Language Processing (IALP). IEEE, 2017. http://dx.doi.org/10.1109/ialp.2017.8300607.

3. El-Haj, Mahmoud, Udo Kruschwitz, and Chris Fox. "Multi-document Arabic text summarisation." In 2011 3rd Computer Science and Electronic Engineering Conference (CEEC). IEEE, 2011. http://dx.doi.org/10.1109/ceec.2011.5995822.

4. Koulali, Rim, Mahmoud El-Haj, and Abdelouafi Meziane. "Arabic Topic Detection using automatic text summarisation." In 2013 ACS International Conference on Computer Systems and Applications (AICCSA). IEEE, 2013. http://dx.doi.org/10.1109/aiccsa.2013.6616460.

5. Suominen, Hanna, and Leif Hanlen. "Visual summarisation of text for surveillance and situational awareness in hospitals." In the 18th Australasian Document Computing Symposium. New York, New York, USA: ACM Press, 2013. http://dx.doi.org/10.1145/2537734.2537739.

6. Kaur, Mandeep, and Diego Mollá. "Supervised Machine Learning for Extractive Query Based Summarisation of Biomedical Data." In Proceedings of the Ninth International Workshop on Health Text Mining and Information Analysis. Stroudsburg, PA, USA: Association for Computational Linguistics, 2018. http://dx.doi.org/10.18653/v1/w18-5604.

7. Maynard, Diana, Kalina Bontcheva, Horacio Saggion, Hamish Cunningham, and Oana Hamza. "Using a text engineering framework to build an extendable and portable IE-based summarisation system." In the ACL-02 Workshop. Morristown, NJ, USA: Association for Computational Linguistics, 2002. http://dx.doi.org/10.3115/1118162.1118165.

8. Zhao, He, Dinh Phung, Viet Huynh, Yuan Jin, Lan Du, and Wray Buntine. "Topic Modelling Meets Deep Neural Networks: A Survey." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/638.

Abstract:
Topic modelling has been a successful technique for text analysis for almost twenty years. When topic modelling met deep neural networks, there emerged a new and increasingly popular research area, neural topic models, with nearly a hundred models developed and a wide range of applications in neural language understanding such as text generation, summarisation and language models. There is a need to summarise research developments and discuss open problems and future directions. In this paper, we provide a focused yet comprehensive overview of neural topic models for interested researchers in the AI community, so as to facilitate them to navigate and innovate in this fast-growing research area. To the best of our knowledge, ours is the first review on this specific topic.
9. Gao, Yang, Christian M. Meyer, Mohsen Mesgar, and Iryna Gurevych. "Reward Learning for Efficient Reinforcement Learning in Extractive Document Summarisation." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/326.

Abstract:
Document summarisation can be formulated as a sequential decision-making problem, which can be solved by Reinforcement Learning (RL) algorithms. The predominant RL paradigm for summarisation learns a cross-input policy, which requires considerable time, data and parameter tuning due to the huge search spaces and the delayed rewards. Learning input-specific RL policies is a more efficient alternative, but so far depends on handcrafted rewards, which are difficult to design and yield poor performance. We propose RELIS, a novel RL paradigm that learns a reward function with Learning-to-Rank (L2R) algorithms at training time and uses this reward function to train an input-specific RL policy at test time. We prove that RELIS guarantees to generate near-optimal summaries with appropriate L2R and RL algorithms. Empirically, we evaluate our approach on extractive multi-document summarisation. We show that RELIS reduces the training time by two orders of magnitude compared to the state-of-the-art models while performing on par with them.
