Journal articles on the topic 'Text Summarization, Latent Semantic Analysis'

Consult the top 50 journal articles for your research on the topic 'Text Summarization, Latent Semantic Analysis.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Ozsoy, Makbule Gulcin, Ferda Nur Alpaslan, and Ilyas Cicekli. "Text summarization using Latent Semantic Analysis." Journal of Information Science 37, no. 4 (2011): 405–17. http://dx.doi.org/10.1177/0165551511408848.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ba-Alwi, Fadl, Ghaleb Gaphari, and Fares Al-Duqaimi. "Arabic Text Summarization Using Latent Semantic Analysis." British Journal of Applied Science & Technology 10, no. 2 (2015): 1–14. http://dx.doi.org/10.9734/bjast/2015/17678.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Mashechkin, I. V., M. I. Petrovskiy, D. S. Popov, and D. V. Tsarev. "Automatic text summarization using latent semantic analysis." Programming and Computer Software 37, no. 6 (2011): 299–305. http://dx.doi.org/10.1134/s0361768811060041.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Omar, Abdulfattah. "Addressing the Problem of Coherence in Automatic Text Summarization: A Latent Semantic Analysis Approach." International Journal of English Linguistics 7, no. 4 (2017): 33. http://dx.doi.org/10.5539/ijel.v7n4p33.

Full text
Abstract:
This article addresses the problem of coherence in the automatic summarization of prose fiction texts. Despite increasing advances in summarization theory, applications, and industry, many problems remain unresolved in relation to applying summarization theory to literature. This can be attributed in part to the peculiar nature of literary texts, for which standard or typical summarization processes are not well suited. This study therefore seeks to bridge the gap between literature and summarization theory by proposing a summarization
APA, Harvard, Vancouver, ISO, and other styles
5

MohammedBadry, Rasha, Ahmed Sharaf Eldin, and Doaa Saad Elzanfally. "Text Summarization within the Latent Semantic Analysis Framework: Comparative Study." International Journal of Computer Applications 81, no. 11 (2013): 40–45. http://dx.doi.org/10.5120/14060-2366.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Yeh, Jen-Yuan, Hao-Ren Ke, Wei-Pang Yang, and I.-Heng Meng. "Text summarization using a trainable summarizer and latent semantic analysis." Information Processing & Management 41, no. 1 (2005): 75–95. http://dx.doi.org/10.1016/j.ipm.2004.04.003.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Joshi, Manju Lata, Nisheeth Joshi, and Namita Mittal. "SGATS: Semantic Graph-based Automatic Text Summarization from Hindi Text Documents." ACM Transactions on Asian and Low-Resource Language Information Processing 20, no. 6 (2021): 1–32. http://dx.doi.org/10.1145/3464381.

Full text
Abstract:
Creating a coherent summary of a text is a challenging task in the field of Natural Language Processing (NLP). Various Automatic Text Summarization techniques have been developed for abstractive as well as extractive summarization. This study focuses on extractive summarization, a process of selecting representative paragraphs or sentences from the original text and combining them into a form shorter than the source document(s) to generate a summary. The methods that have been used for extractive summarization are based on a graph-theoretic approach, machine learning, Latent Semantic An
APA, Harvard, Vancouver, ISO, and other styles
8

N, Mehala, and Tapas Guha. "Latent Semantic Analysis in Automatic Text Summarization: A state of the art analysis." International Journal of Intelligence and Sustainable Computing 1, no. 1 (2020): 1. http://dx.doi.org/10.1504/ijisc.2020.10029282.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Froud, Hanane, Abdelmonaime Lachkar, and Said Alaoui Ouatik. "Arabic Text Summarization Based on Latent Semantic Analysis to Enhance Arabic Documents Clustering." International Journal of Data Mining & Knowledge Management Process 3, no. 1 (2013): 79–95. http://dx.doi.org/10.5121/ijdkp.2013.3107.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Liang, Chao, Changsheng Xu, and Hanqing Lu. "Personalized Sports Video Customization Using Content and Context Analysis." International Journal of Digital Multimedia Broadcasting 2010 (2010): 1–20. http://dx.doi.org/10.1155/2010/836357.

Full text
Abstract:
We present an integrated framework on personalized sports video customization, which addresses three research issues: semantic video annotation, personalized video retrieval and summarization, and system adaptation. Sports video annotation serves as the foundation of the video customization system. To acquire detailed description of video content, external web text is adopted to align with the related sports video according to their semantic correspondence. Based on the derived semantic annotation, a user-participant multiconstraint 0/1 Knapsack model is designed to model the personalized vide
APA, Harvard, Vancouver, ISO, and other styles
11

Guadalupe Ramos, J., Isela Navarro-Alatorre, Georgina Flores Becerra, and Omar Flores-Sánchez. "A Formal Technique for Text Summarization from Web Pages by using Latent Semantic Analysis." Research in Computing Science 148, no. 3 (2019): 11–22. http://dx.doi.org/10.13053/rcs-148-3-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Rusbandi, Millenia, Imam Fahrur Rozi, and Kadek Suarjuna Batubulan. "Otomatisasi Peringkasan Teks Pada Dokumen Hukum Menggunakan Metode Latent Semantic Analysis." Jurnal Informatika Polinema 7, no. 3 (2021): 9–16. http://dx.doi.org/10.33795/jip.v7i3.515.

Full text
Abstract:
At present, the number of crimes in Indonesia is quite large, which in turn increases the number of legal documents that law enforcement officials must handle. To understand a legal document, officials such as lawyers, judges, and prosecutors must read the entire document, which takes a long time. A summary is therefore needed so that law enforcement officials can understand it more easily. One required solution is to produce summaries of legal documents, where the documents are in PDF form. In terms of summarizing the te
APA, Harvard, Vancouver, ISO, and other styles
13

Kitajima, Risa, and Ichiro Kobayashi. "Latent Topic Estimation Based on Events in a Document." Journal of Advanced Computational Intelligence and Intelligent Informatics 16, no. 5 (2012): 603–10. http://dx.doi.org/10.20965/jaciii.2012.p0603.

Full text
Abstract:
Several latent topic model-based methods such as Latent Semantic Indexing (LSI), Probabilistic LSI (pLSI), and Latent Dirichlet Allocation (LDA) have been widely used for text analysis. These methods basically assign topics to words, however, and the relationship between words in a document is therefore not considered. Considering this, we propose a latent topic extraction method that assigns topics to events that represent the relation between words in a document. There are several ways to express events, and the accuracy of estimating latent topics differs depending on the definition of an e
APA, Harvard, Vancouver, ISO, and other styles
14

Yadav, Chandra, and Aditi Sharan. "A New LSA and Entropy-Based Approach for Automatic Text Document Summarization." International Journal on Semantic Web and Information Systems 14, no. 4 (2018): 1–32. http://dx.doi.org/10.4018/ijswis.2018100101.

Full text
Abstract:
Automatic text document summarization is an active research area in the text mining field. In this article, the authors propose two new approaches (three models) for sentence selection, and a new entropy-based summary evaluation criterion. The first approach is based on the algebraic model Singular Value Decomposition (SVD), i.e. Latent Semantic Analysis (LSA), and the model is termed proposed_model-1; the second approach is based on entropy and is further divided into proposed_model-2 and proposed_model-3. In the first proposed model, the authors use the right singular matrix, and second & th
APA, Harvard, Vancouver, ISO, and other styles
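
The SVD-based sentence selection that this entry (and several others above) builds on can be illustrated with a short sketch. The Python fragment below is a minimal, hedged example of the general LSA summarization idea (build a term-by-sentence matrix, take a truncated SVD, and pick the strongest sentence per latent concept), not a reproduction of the authors' proposed_model-1/2/3; the function name, the toy sentences, and the number of concepts are illustrative assumptions.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

def lsa_summary(sentences, n_topics=2):
    # Term-by-sentence matrix (rows = sentences, columns = terms).
    X = TfidfVectorizer(stop_words="english").fit_transform(sentences)
    # Truncated SVD projects each sentence onto n_topics latent concepts.
    svd = TruncatedSVD(n_components=n_topics, random_state=0)
    S = svd.fit_transform(X)                       # sentences x concepts
    picked = []
    for topic in range(n_topics):
        idx = int(np.argmax(np.abs(S[:, topic])))  # strongest sentence for this concept
        if idx not in picked:
            picked.append(idx)
    return [sentences[i] for i in sorted(picked)]

sentences = [
    "Latent semantic analysis maps sentences into a low-dimensional concept space.",
    "Singular value decomposition factors the term-sentence matrix.",
    "Football is popular in many countries.",
    "The sentence with the largest weight in each concept is selected for the summary.",
]
print(lsa_summary(sentences))
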
15

Wawrzyński, Adam, and Julian Szymański. "Study of Statistical Text Representation Methods for Performance Improvement of a Hierarchical Attention Network." Applied Sciences 11, no. 13 (2021): 6113. http://dx.doi.org/10.3390/app11136113.

Full text
Abstract:
To effectively process textual data, many approaches have been proposed to create text representations. The transformation of a text into a form of numbers that can be computed using computers is crucial for further applications in downstream tasks such as document classification, document summarization, and so forth. In our work, we study the quality of text representations using statistical methods and compare them to approaches based on neural networks. We describe in detail nine different algorithms used for text representation and then we evaluate five diverse datasets: BBCSport, BBC, Ohs
APA, Harvard, Vancouver, ISO, and other styles
16

Ai, Dongmei, Yuchao Zheng, and Dezheng Zhang. "Automatic text summarization based on latent semantic indexing." Artificial Life and Robotics 15, no. 1 (2010): 25–29. http://dx.doi.org/10.1007/s10015-010-0759-x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Al-Sabahi, Kamal, Zuping Zhang, Jun Long, and Khaled Alwesabi. "An Enhanced Latent Semantic Analysis Approach for Arabic Document Summarization." Arabian Journal for Science and Engineering 43, no. 12 (2018): 8079–94. http://dx.doi.org/10.1007/s13369-018-3286-z.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Haggag, Mohamed H. "Semantic Text Summarization Based on Syntactic Patterns." International Journal of Information Retrieval Research 3, no. 4 (2013): 18–34. http://dx.doi.org/10.4018/ijirr.2013100102.

Full text
Abstract:
Text summarization is the machine-based generation of a shortened version of a text. The summary should be a non-redundant extract from the original text. Most research on text summarization uses sentence extraction instead of abstraction to produce a summary. Extraction depends mainly on sentences already contained in the original input, which makes it more accurate and more concise. When all input articles concern a particular event, extracting similar sentences would produce a highly repetitive summary. In this paper, a novel model for text summarization is propos
APA, Harvard, Vancouver, ISO, and other styles
19

Foltz, Peter W. "Latent semantic analysis for text-based research." Behavior Research Methods, Instruments, & Computers 28, no. 2 (1996): 197–202. http://dx.doi.org/10.3758/bf03204765.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Patil, Priyanka R., and Shital A. Patil. "Similarity Detection Using Latent Semantic Analysis Algorithm." International Journal of Emerging Research in Management and Technology 6, no. 8 (2018): 102. http://dx.doi.org/10.23956/ijermt.v6i8.124.

Full text
Abstract:
Similarity View is an application for visually comparing and exploring multiple models of text and collections of documents. Friendbook discovers users' lifestyles from user-centric sensor data, measures the similarity of lifestyles among users, and recommends friends to users if their lifestyles show high similarity. Motivated by modelling a user's daily life as life documents, lifestyles are extracted using the Latent Dirichlet Allocation algorithm. Manual techniques cannot be used for checking research papers, as the assigned reviewer may have
APA, Harvard, Vancouver, ISO, and other styles
21

Kim, Hyun-Hee. "Comparing the Use of Semantic Relations between Tags Versus Latent Semantic Analysis for Speech Summarization." Journal of the Korean Society for Library and Information Science 47, no. 3 (2013): 343–61. http://dx.doi.org/10.4275/kslis.2013.47.3.343.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Beom-mo Kang. "Text Context and Word Meaning: Latent Semantic Analysis." EONEOHAG ll, no. 68 (2014): 3–34. http://dx.doi.org/10.17290/jlsk.2014..68.3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Schwarz, Carlo. "lsemantica: A command for text similarity based on latent semantic analysis." Stata Journal: Promoting communications on statistics and Stata 19, no. 1 (2019): 129–42. http://dx.doi.org/10.1177/1536867x19830910.

Full text
Abstract:
In this article, I present the lsemantica command, which implements latent semantic analysis in Stata. Latent semantic analysis is a machine learning algorithm for word and text similarity comparison and uses truncated singular value decomposition to derive the hidden semantic relationships between words and texts. lsemantica provides a simple command for latent semantic analysis as well as complementary commands for text similarity comparison.
APA, Harvard, Vancouver, ISO, and other styles
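
lsemantica itself is a Stata command, but the pipeline described in the abstract (truncated singular value decomposition over a document-term matrix, followed by similarity comparison in the reduced space) can be sketched in a few lines of Python. This is an assumed, minimal analogue for illustration, not a port of the command; the example documents and the choice of two components are arbitrary.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "latent semantic analysis compares words and texts",
    "LSA measures text similarity with truncated SVD",
    "the weather today is sunny and warm",
]
X = CountVectorizer().fit_transform(docs)                           # document-term matrix
Z = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)   # reduced latent space
print(cosine_similarity(Z))                                         # pairwise document similarity
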
24

Alkhudari, Amal. "Developing a new approach to summarize Arabic text automatically using syntactic and semantic analysis." International Journal of Engineering & Technology 9, no. 2 (2020): 342. http://dx.doi.org/10.14419/ijet.v9i2.30324.

Full text
Abstract:
Due to the widespread availability of information and the diversity of its sources, there is a need to produce an accurate text summary with the least time and effort. This summary must preserve the key information content and overall meaning of the original text. Text summarization is one of the most important applications of Natural Language Processing (NLP). The goal of automatic text summarization is to create summaries that are similar to human-created ones. However, in many cases, the readability of created summaries is not satisfactory, because the summaries do not consider the meaning of the words and do
APA, Harvard, Vancouver, ISO, and other styles
25

C, Sunitha, A. Jaya, and Amal Ganesh. "Automatic summarization of Malayalam documents using clause identification method." International Journal of Electrical and Computer Engineering (IJECE) 9, no. 6 (2019): 4929. http://dx.doi.org/10.11591/ijece.v9i6.pp4929-4938.

Full text
Abstract:
Text summarization is an active research area in the field of natural language processing. The huge amount of information on the internet necessitates the development of automatic summarization systems. There are two types of summarization techniques: extractive and abstractive. Extractive summarization selects important sentences from the text and produces a summary of them as they appear in the original document. Abstractive summarization systems provide a summary of the input text as if it were generated by human beings. An abstractive summary requires semantic analysis of the text. Limited works hav
APA, Harvard, Vancouver, ISO, and other styles
26

Kim, Hyun Hee. "Comparing the use of semantic relations between tags versus latent semantic analysis for generic speech summarization." Proceedings of the American Society for Information Science and Technology 50, no. 1 (2013): 1–4. http://dx.doi.org/10.1002/meet.14505001136.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Ghanem, Khadoudja. "Local and Global Latent Semantic Analysis for Text Categorization." International Journal of Information Retrieval Research 4, no. 3 (2014): 1–13. http://dx.doi.org/10.4018/ijirr.2014070101.

Full text
Abstract:
In this paper the authors propose a semantic approach to document categorization. The idea is to create for each category a semantic index (a representative term vector) by performing a local Latent Semantic Analysis (LSA) followed by a clustering process. A second use of LSA (global LSA) is adopted on a term-class matrix in order to retrieve the class most similar to the query (the document to classify), in the same way that LSA is used to retrieve the documents most similar to a query in Information Retrieval. The proposed system is evaluated on a popular dataset which i
APA, Harvard, Vancouver, ISO, and other styles
28

Kou, Gang, and Yi Peng. "An Application of Latent Semantic Analysis for Text Categorization." International Journal of Computers Communications & Control 10, no. 3 (2015): 357. http://dx.doi.org/10.15837/ijccc.2015.3.1923.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Yu, Bo, Zong-ben Xu, and Cheng-hua Li. "Latent semantic analysis for text categorization using neural network." Knowledge-Based Systems 21, no. 8 (2008): 900–904. http://dx.doi.org/10.1016/j.knosys.2008.03.045.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Rao, M. Varaprasad, et al. "Automated Evaluation of Telugu Text Essays Using Latent Semantic Analysis." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 5 (2021): 1888–90. http://dx.doi.org/10.17762/turcomat.v12i5.2267.

Full text
Abstract:
The most productive strategy for improving students' ability to write is direct teacher input, and as much of it as possible. However, this greatly increases the teacher's workload. Automated systems are increasingly required to help students write essays. In the field of educational assessment technology, automated test evaluation is becoming more and more common. We present a framework that is modelled on the programme, following which the school-teachers in the BPDAV School and Govt. High School Hyderabad, Telangana, India present the automatic evaluator of student essays in the Telugu lan
APA, Harvard, Vancouver, ISO, and other styles
31

Han, Kai-Xu, Wei Chien, Chien-Ching Chiu, and Yu-Ting Cheng. "Application of Support Vector Machine (SVM) in the Sentiment Analysis of Twitter DataSet." Applied Sciences 10, no. 3 (2020): 1125. http://dx.doi.org/10.3390/app10031125.

Full text
Abstract:
At present, in the mainstream sentiment analysis methods represented by the Support Vector Machine, the vocabulary and the latent semantic information involved in the text are not well considered, and sentiment analysis of text relies overly on the statistics of sentiment words. Thus, a Fisher kernel function based on Probabilistic Latent Semantic Analysis is proposed in this paper for sentiment analysis with a Support Vector Machine. The Fisher kernel function is derived from the Probabilistic Latent Semantic Analysis model. By means of this method, latent semantic inform
APA, Harvard, Vancouver, ISO, and other styles
32

Lee, Chang H. "Association among Reading Summarization, Word Recognition, and Sentence Comprehension." Perceptual and Motor Skills 96, no. 3_suppl (2003): 1133–38. http://dx.doi.org/10.2466/pms.2003.96.3c.1133.

Full text
Abstract:
Word recognition and sentence comprehension are initial and necessary processes for summarizing a story. This study was conducted to investigate the relations among word recognition, sentence comprehension, and reading summarization. Analysis showed that performance for word naming, an index of on-line word recognition, was correlated with Latent Semantic Analysis scores, an index of reading summarization. These results indicate that the basic process of word recognition is a cornerstone of better reading.
APA, Harvard, Vancouver, ISO, and other styles
33

Yadav, Chandra Shekhar, and Aditi Sharan. "Hybrid Approach for Single Text Document Summarization Using Statistical and Sentiment Features." International Journal of Information Retrieval Research 5, no. 4 (2015): 46–70. http://dx.doi.org/10.4018/ijirr.2015100104.

Full text
Abstract:
Summarization is a way to represent the same information concisely while preserving its sense. It can be categorized into two types: abstractive and extractive. Our work focuses on extractive summarization. A generic approach to extractive summarization is to treat each sentence as an entity and score it on indicative features that reflect its suitability for inclusion in the summary; the sentences are then sorted by score and the top n are selected for the summary. Mostly statistical features have been used for scoring the sentences. A hybrid model for a single text documen
APA, Harvard, Vancouver, ISO, and other styles
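
The generic extractive pipeline outlined in the abstract above (treat each sentence as a unit, score it on indicative features, sort by score, and keep the top n) can be written compactly. The snippet below is a toy illustration with assumed features, namely term frequency and sentence position; it does not use the statistical and sentiment feature set of the paper's hybrid model.

import re
from collections import Counter

def summarize(text, n=2):
    # Split into sentences and compute corpus-level word frequencies.
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    freq = Counter(re.findall(r"\w+", text.lower()))

    def score(i, sent):
        tokens = re.findall(r"\w+", sent.lower())
        tf = sum(freq[t] for t in tokens) / max(len(tokens), 1)  # average term frequency
        position = 1.0 / (i + 1)                                 # earlier sentences score higher
        return tf + position

    ranked = sorted(range(len(sentences)),
                    key=lambda i: score(i, sentences[i]), reverse=True)
    keep = sorted(ranked[:n])                                    # keep original sentence order
    return " ".join(sentences[i] for i in keep)

text = ("Text summarization shortens a document. Extractive methods pick whole sentences. "
        "Each sentence is scored on indicative features. The best-scoring sentences form the summary.")
print(summarize(text))
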
34

Li, Ya Xiong, and Deng Pan. "Text Clustering Based on Domain Ontology and Latent Semantic Analysis." Applied Mechanics and Materials 556-562 (May 2014): 3536–40. http://dx.doi.org/10.4028/www.scientific.net/amm.556-562.3536.

Full text
Abstract:
One key step in text mining is the categorization of texts, i.e., putting texts with the same or similar contents into one group so as to distinguish texts with different contents. However, traditional word-frequency-based statistical approaches, such as the VSM model, fail to reflect the complicated meaning in texts. This paper introduces domain ontology and constructs a new conceptual vector space model in the pre-processing stage of text clustering, substituting the initial matrix (lexicon-text matrix) in latent semantic analysis with a concept-text matrix. In the clustering analysis stage, this mo
APA, Harvard, Vancouver, ISO, and other styles
35

Mohamed, Muhidin, and Mourad Oussalah. "SRL-ESA-TextSum: A text summarization approach based on semantic role labeling and explicit semantic analysis." Information Processing & Management 56, no. 4 (2019): 1356–72. http://dx.doi.org/10.1016/j.ipm.2019.04.003.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

ZHANG, Yu-fang, Jun ZHU, and Zhong-yang XIONG. "Improved text clustering algorithm of probabilistic latent with semantic analysis." Journal of Computer Applications 31, no. 3 (2011): 674–76. http://dx.doi.org/10.3724/sp.j.1087.2011.00674.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Saggion, Horacio, and Guy Lapalme. "Generating Indicative-Informative Summaries with SumUM." Computational Linguistics 28, no. 4 (2002): 497–526. http://dx.doi.org/10.1162/089120102762671963.

Full text
Abstract:
We present and evaluate SumUM, a text summarization system that takes a raw technical text as input and produces an indicative informative summary. The indicative part of the summary identifies the topics of the document, and the informative part elaborates on some of these topics according to the reader's interest. SumUM motivates the topics, describes entities, and defines concepts. It is a first step for exploring the issue of dynamic summarization. This is accomplished through a process of shallow syntactic and semantic analysis, concept identification, and text regeneration. Our method wa
APA, Harvard, Vancouver, ISO, and other styles
38

Wolfe, Michael B. W., M. E. Schreiner, Bob Rehder, et al. "Learning from text: Matching readers and texts by latent semantic analysis." Discourse Processes 25, no. 2-3 (1998): 309–36. http://dx.doi.org/10.1080/01638539809545030.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Zheng, Na, and Jie Yu Wu. "Cluster Analysis for Internet Public Sentiment in Universities by Combining Methods." International Journal of Recent Contributions from Engineering, Science & IT (iJES) 6, no. 3 (2018): 60. http://dx.doi.org/10.3991/ijes.v6i3.9670.

Full text
Abstract:
A clustering method that combines Latent Dirichlet Allocation and the VSM model to compute text similarity is presented. The Latent Dirichlet Allocation topic models and the VSM vector space model weighting strategy are each used to calculate text similarity, and a linear combination of the two results gives the final text similarity. The k-means clustering algorithm is then chosen for cluster analysis. This can not only address the loss of deep semantic information in traditional text clustering, but also address the problem that LDA could not distinguish the texts b
APA, Harvard, Vancouver, ISO, and other styles
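
The combination described in the abstract above, an LDA-based topic similarity and a VSM (TF-IDF) cosine similarity merged by a linear combination before clustering, can be sketched as follows. The mixing weight, the toy documents, and the two-topic LDA are illustrative assumptions; the paper's own weighting and k-means setup are not reproduced.

from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "latent topics describe campus opinion on the new policy",
    "student sentiment about university policy spreads on forums",
    "the football match ended in a late draw",
]

# VSM similarity: cosine over TF-IDF vectors.
vsm_sim = cosine_similarity(TfidfVectorizer().fit_transform(docs))

# LDA similarity: cosine over per-document topic distributions.
counts = CountVectorizer().fit_transform(docs)
topics = LatentDirichletAllocation(n_components=2, random_state=0).fit_transform(counts)
lda_sim = cosine_similarity(topics)

alpha = 0.5                                   # assumed mixing weight
combined = alpha * lda_sim + (1 - alpha) * vsm_sim
print(combined.round(2))                      # feed this into k-means or another clusterer
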
40

Tapas, Guha, and N. Mehala. "Latent semantic analysis in automatic text summarisation: a state-of-the-art analysis." International Journal of Intelligence and Sustainable Computing 1, no. 2 (2021): 128. http://dx.doi.org/10.1504/ijisc.2021.113294.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Ababneh, Ahmad Hussein, Joan Lu, and Qiang Xu. "An efficient framework of utilizing the latent semantic analysis in text extraction." International Journal of Speech Technology 22, no. 3 (2019): 785–815. http://dx.doi.org/10.1007/s10772-019-09623-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Rahman and Siddiqui. "An Optimized Abstractive Text Summarization Model Using Peephole Convolutional LSTM." Symmetry 11, no. 10 (2019): 1290. http://dx.doi.org/10.3390/sym11101290.

Full text
Abstract:
Abstractive text summarization, which generates a summary by paraphrasing a long text, remains a significant open problem for natural language processing. In this paper, we present an abstractive text summarization model, multi-layered attentional peephole convolutional LSTM (long short-term memory) (MAPCoL), that automatically generates a summary from a long text. We optimize the parameters of MAPCoL using central composite design (CCD) in combination with response surface methodology (RSM), which gives the highest accuracy in terms of summary generation. We record the accuracy of our model (MAP
APA, Harvard, Vancouver, ISO, and other styles
43

Zhou, Gang. "Research on text summarization classification based on crowdfunding projects." MATEC Web of Conferences 336 (2021): 06020. http://dx.doi.org/10.1051/matecconf/202133606020.

Full text
Abstract:
In recent years, artificial intelligence technologies represented by deep learning and natural language processing have made huge breakthroughs and have begun to emerge in the field of crowdfunding project analysis. Natural language processing technology enables machines to understand and analyze the text of crowdfunding projects and classify them based on the summary description of the project, which can help companies and individuals improve the project pass rate, so it has received widespread attention. However, most current research is applied to topic modeling of project
APA, Harvard, Vancouver, ISO, and other styles
44

Bestgen, Yves. "Improving Text Segmentation Using Latent Semantic Analysis: A Reanalysis of Choi, Wiemer-Hastings, and Moore (2001)." Computational Linguistics 32, no. 1 (2006): 5–12. http://dx.doi.org/10.1162/coli.2006.32.1.5.

Full text
Abstract:
Choi, Wiemer-Hastings, and Moore (2001) proposed to use Latent Semantic Analysis (LSA) to extract semantic knowledge from corpora in order to improve the accuracy of a text segmentation algorithm. By comparing the accuracy of the very same algorithm, depending on whether or not it takes into account complementary semantic knowledge, they were able to show the benefit derived from such knowledge. In their experiments, semantic knowledge was, however, acquired from a corpus containing the texts to be segmented in the test phase. If this hyper-specificity of the LSA corpus explains the largest pa
APA, Harvard, Vancouver, ISO, and other styles
45

LOUWERSE, MAX, ZHIQIANG CAI, XIANGEN HU, MATTHEW VENTURA, and PATRICK JEUNIAUX. "COGNITIVELY INSPIRED NLP-BASED KNOWLEDGE REPRESENTATIONS: FURTHER EXPLORATIONS OF LATENT SEMANTIC ANALYSIS." International Journal on Artificial Intelligence Tools 15, no. 06 (2006): 1021–39. http://dx.doi.org/10.1142/s0218213006003090.

Full text
Abstract:
Natural-language based knowledge representations borrow their expressiveness from the semantics of language. One such knowledge representation technique is Latent semantic analysis (LSA), a statistical, corpus-based method for representing knowledge. It has been successfully used in a variety of applications including intelligent tutoring systems, essay grading and coherence metrics. The advantage of LSA is that it is efficient in representing world knowledge without the need for manual coding of relations and that it has in fact been considered to simulate aspects of human knowledge represent
APA, Harvard, Vancouver, ISO, and other styles
46

Horasan, Fahrettin, Hasan Erbay, Fatih Varçın, and Emre Deniz. "Alternate Low-Rank Matrix Approximation in Latent Semantic Analysis." Scientific Programming 2019 (February 3, 2019): 1–12. http://dx.doi.org/10.1155/2019/1095643.

Full text
Abstract:
Latent semantic analysis (LSA) is a mathematical/statistical way of discovering hidden concepts between terms and documents or within a document collection (i.e., a large corpus of text). Each document of the corpus and each term are expressed as a vector whose elements correspond to these concepts, forming a term-document matrix. LSA then uses a low-rank approximation to the term-document matrix in order to remove irrelevant information, extract the more important relations, and reduce computational time. The irrelevant information is called "noise" and does not have a notewort
APA, Harvard, Vancouver, ISO, and other styles
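
The low-rank approximation at the heart of LSA, as summarized in the abstract above, is easy to demonstrate on a toy term-document matrix: factor it with SVD, keep the k largest singular values, and rebuild a denoised rank-k matrix. The matrix values and the choice of k below are arbitrary, and the alternate approximation algorithm the paper proposes is not reproduced here, only plain truncated SVD.

import numpy as np

# Toy term-document counts (rows = terms, columns = documents).
A = np.array([[2., 0., 1., 0.],
              [1., 1., 0., 0.],
              [0., 2., 0., 1.],
              [0., 0., 1., 2.]])

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # best rank-k approximation (Eckart-Young)

print(np.round(A_k, 2))                        # denoised matrix used by LSA
print("approximation error:", round(float(np.linalg.norm(A - A_k)), 3))
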
47

Kurby, Christopher A., Katja Wiemer-Hastings, Nagasai Ganduri, Joseph P. Magliano, Keith K. Millis, and Danielle S. McNamara. "Computerizing reading training: Evaluation of a latent semantic analysis space for science text." Behavior Research Methods, Instruments, & Computers 35, no. 2 (2003): 244–50. http://dx.doi.org/10.3758/bf03202547.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Landauer, Thomas K., and Joseph Psotka. "Simulating Text Understanding for Educational Applications with Latent Semantic Analysis: Introduction to LSA." Interactive Learning Environments 8, no. 2 (2000): 73–86. http://dx.doi.org/10.1076/1049-4820(200008)8:2;1-b;ft073.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Huang, Yuxia. "Conceptually categorizing geographic features from text based on latent semantic analysis and ontologies." Annals of GIS 22, no. 2 (2016): 113–27. http://dx.doi.org/10.1080/19475683.2016.1144648.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Al-Sabahi, Kamal, and Zhang Zuping. "Document Summarization Using Sentence-Level Semantic Based on Word Embeddings." International Journal of Software Engineering and Knowledge Engineering 29, no. 02 (2019): 177–96. http://dx.doi.org/10.1142/s0218194019500086.

Full text
Abstract:
In the era of information overload, text summarization has become a focus of attention in a number of diverse fields such as question answering systems, intelligence analysis, news recommendation systems, search results in web search engines, and so on. A good document representation is the key point in any successful summarizer. Learning this representation has become a very active research area in the natural language processing (NLP) field. Traditional approaches mostly fail to deliver a good representation. Word embedding has shown excellent performance in learning the representation. In this pap
APA, Harvard, Vancouver, ISO, and other styles