Academic literature on the topic 'Word Vector Models'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Word Vector Models.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Word Vector Models"

1

Budenkov, S. S. "Semantic Word Vector Models for Sentiment Analysis." Scientific and Technical Volga Region Bulletin 7, no. 2 (2017): 75–78. http://dx.doi.org/10.24153/2079-5920-2017-7-2-75-78.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Haroon, Muhammad, Junaid Baber, Ihsan Ullah, Sher Muhammad Daudpota, Maheen Bakhtyar, and Varsha Devi. "Video Scene Detection Using Compact Bag of Visual Word Models." Advances in Multimedia 2018 (November 8, 2018): 1–9. http://dx.doi.org/10.1155/2018/2564963.

Full text
Abstract:
Video segmentation into shots is the first step for video indexing and searching. Video shots are mostly very short in duration and do not give meaningful insight into the visual contents. However, grouping of shots based on similar visual contents gives a better understanding of the video scene; grouping of similar shots is known as scene boundary detection or video segmentation into scenes. In this paper, we propose a model for video segmentation into visual scenes using the bag of visual words (BoVW) model. Initially, the video is divided into shots, which are later represented by a set of key
APA, Harvard, Vancouver, ISO, and other styles
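The bag-of-visual-words representation described in this abstract can be illustrated with a toy NumPy sketch. The codebook would normally come from k-means over training descriptors; the random data and dimensions here are purely illustrative, not taken from the paper.

```python
import numpy as np

def bovw_histogram(descriptors, codebook):
    """Assign each local descriptor to its nearest visual word (codebook row)
    and return the normalised histogram of assignments."""
    # Pairwise squared distances between descriptors and codebook entries.
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(2)
codebook = rng.normal(size=(8, 16))   # 8 visual words, 16-dim descriptors
frame = rng.normal(size=(50, 16))     # 50 local descriptors from one shot
h = bovw_histogram(frame, codebook)
print(h.shape)  # (8,)
```

Shots whose histograms are close under some distance can then be grouped into scenes, which is the clustering step the paper builds on.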
3

Ma, Zhiyang, Wenfeng Zheng, Xiaobing Chen, and Lirong Yin. "Joint embedding VQA model based on dynamic word vector." PeerJ Computer Science 7 (March 3, 2021): e353. http://dx.doi.org/10.7717/peerj-cs.353.

Full text
Abstract:
The existing joint embedding Visual Question Answering models use different combinations of image characterization, text characterization and feature fusion method, but all the existing models use static word vectors for text characterization. However, in the real language environment, the same word may represent different meanings in different contexts, and may also be used as different grammatical components. These differences cannot be effectively expressed by static word vectors, so there may be semantic and grammatical deviations. In order to solve this problem, our article constructs a j
APA, Harvard, Vancouver, ISO, and other styles
4

Nishida, Satoshi, Antoine Blanc, Naoya Maeda, Masataka Kado, and Shinji Nishimoto. "Behavioral correlates of cortical semantic representations modeled by word vectors." PLOS Computational Biology 17, no. 6 (2021): e1009138. http://dx.doi.org/10.1371/journal.pcbi.1009138.

Full text
Abstract:
The quantitative modeling of semantic representations in the brain plays a key role in understanding the neural basis of semantic processing. Previous studies have demonstrated that word vectors, which were originally developed for use in the field of natural language processing, provide a powerful tool for such quantitative modeling. However, whether semantic representations in the brain revealed by the word vector-based models actually capture our perception of semantic information remains unclear, as there has been no study explicitly examining the behavioral correlates of the modeled brain
APA, Harvard, Vancouver, ISO, and other styles
5

Tissier, Julien, Christophe Gravier, and Amaury Habrard. "Near-Lossless Binarization of Word Embeddings." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 7104–11. http://dx.doi.org/10.1609/aaai.v33i01.33017104.

Full text
Abstract:
Word embeddings are commonly used as a starting point in many NLP models to achieve state-of-the-art performances. However, with a large vocabulary and many dimensions, these floating-point representations are expensive both in terms of memory and calculations which makes them unsuitable for use on low-resource devices. The method proposed in this paper transforms real-valued embeddings into binary embeddings while preserving semantic information, requiring only 128 or 256 bits for each vector. This leads to a small memory footprint and fast vector operations. The model is based on an autoenco
APA, Harvard, Vancouver, ISO, and other styles
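The payoff of binary embeddings described here (tiny codes, fast bitwise comparisons) can be sketched as follows. The cited paper learns the binary codes with an autoencoder; the naive sign threshold below is only an illustrative stand-in for that learned mapping.

```python
import numpy as np

def binarize(vectors):
    """Naive binarization: keep only the sign of each coordinate.
    (The paper learns codes with an autoencoder; this is a toy stand-in.)"""
    return (vectors > 0).astype(np.uint8)

def hamming_similarity(a, b):
    """Fraction of matching bits; cheap to compute on binary codes."""
    return float(np.mean(a == b))

rng = np.random.default_rng(1)
emb = rng.normal(size=(3, 256))   # three real-valued embeddings
codes = binarize(emb)             # three 256-bit codes
print(codes.shape)  # (3, 256)
```

A 256-bit code occupies 32 bytes versus roughly 1.2 KB for a 300-dimensional float32 vector, which is the memory saving the abstract refers to.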
6

Sassenhagen, Jona, and Christian J. Fiebach. "Traces of Meaning Itself: Encoding Distributional Word Vectors in Brain Activity." Neurobiology of Language 1, no. 1 (2020): 54–76. http://dx.doi.org/10.1162/nol_a_00003.

Full text
Abstract:
How is semantic information stored in the human mind and brain? Some philosophers and cognitive scientists argue for vectorial representations of concepts, where the meaning of a word is represented as its position in a high-dimensional neural state space. At the intersection of natural language processing and artificial intelligence, a class of very successful distributional word vector models has developed that can account for classic EEG findings of language, that is, the ease versus difficulty of integrating a word with its sentence context. However, models of semantics have to account not
APA, Harvard, Vancouver, ISO, and other styles
7

Bojanowski, Piotr, Edouard Grave, Armand Joulin, and Tomas Mikolov. "Enriching Word Vectors with Subword Information." Transactions of the Association for Computational Linguistics 5 (December 2017): 135–46. http://dx.doi.org/10.1162/tacl_a_00051.

Full text
Abstract:
Continuous word representations, trained on large unlabeled corpora are useful for many natural language processing tasks. Popular models that learn such representations ignore the morphology of words, by assigning a distinct vector to each word. This is a limitation, especially for languages with large vocabularies and many rare words. In this paper, we propose a new approach based on the skipgram model, where each word is represented as a bag of character n-grams. A vector representation is associated to each character n-gram; words being represented as the sum of these representations. Our
APA, Harvard, Vancouver, ISO, and other styles
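The character n-gram scheme in this abstract (a word represented as the sum of its n-gram vectors) can be sketched in a few lines. The toy vector table below is purely illustrative; the real fastText implementation hashes n-grams into a fixed-size table rather than storing them all.

```python
import numpy as np

def char_ngrams(word, n_min=3, n_max=6):
    """Character n-grams of a word with boundary markers, plus the whole
    word itself, as in the skipgram-with-subwords model."""
    w = f"<{word}>"
    grams = [w[i:i + n] for n in range(n_min, n_max + 1)
             for i in range(len(w) - n + 1)]
    return grams + [w]

def word_vector(word, gram_table, dim=4):
    """A word vector is the sum of its n-gram vectors; unknown grams are
    skipped, so out-of-vocabulary words still get a representation."""
    vec = np.zeros(dim)
    for g in char_ngrams(word):
        if g in gram_table:
            vec += gram_table[g]
    return vec

# Toy n-gram table with deterministic pseudo-random vectors.
rng = np.random.default_rng(0)
table = {g: rng.normal(size=4) for g in char_ngrams("where") + char_ngrams("here")}
v = word_vector("where", table)
print(v.shape)  # (4,)
```

Because "where" and "here" share n-grams such as "her" and "ere", their vectors share components, which is how morphological similarity enters the representation.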
8

Xu, Beibei, Zhiying Tan, Kenli Li, Taijiao Jiang, and Yousong Peng. "Predicting the host of influenza viruses based on the word vector." PeerJ 5 (July 18, 2017): e3579. http://dx.doi.org/10.7717/peerj.3579.

Full text
Abstract:
Newly emerging influenza viruses continue to threaten public health. A rapid determination of the host range of newly discovered influenza viruses would assist in early assessment of their risk. Here, we attempted to predict the host of influenza viruses using the Support Vector Machine (SVM) classifier based on the word vector, a new representation and feature extraction method for biological sequences. The results show that the length of the word within the word vector, the sequence type (DNA or protein) and the species from which the sequences were derived for generating the word vector all
APA, Harvard, Vancouver, ISO, and other styles
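The "words" that the word-vector representation in this study is built from are overlapping k-mers of the biological sequence. A minimal sketch of that tokenisation step (k = 3 is an arbitrary choice here, not a value from the paper):

```python
def kmer_words(seq, k=3):
    """Split a sequence into overlapping k-mer 'words', the tokens from
    which word vectors are then trained for the SVM classifier."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

print(kmer_words("ATGCGA", 3))  # ['ATG', 'TGC', 'GCG', 'CGA']
```

As the abstract notes, the choice of k, the sequence type (DNA or protein), and the source species of the training sequences all affect the resulting features.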
9

Nguyen, Dat Quoc, Richard Billingsley, Lan Du, and Mark Johnson. "Improving Topic Models with Latent Feature Word Representations." Transactions of the Association for Computational Linguistics 3 (December 2015): 299–313. http://dx.doi.org/10.1162/tacl_a_00140.

Full text
Abstract:
Probabilistic topic models are widely used to discover latent topics in document collections, while latent feature vector representations of words have been used to obtain high performance in many NLP tasks. In this paper, we extend two different Dirichlet multinomial topic models by incorporating latent feature vector representations of words trained on very large corpora to improve the word-topic mapping learnt on a smaller corpus. Experimental results show that by using information from the external corpora, our new models produce significant improvements on topic coherence, document cluste
APA, Harvard, Vancouver, ISO, and other styles
10

Li, Zhen, Dan Qu, Yanxia Li, Chaojie Xie, and Qi Chen. "A Position Weighted Information Based Word Embedding Model for Machine Translation." International Journal on Artificial Intelligence Tools 29, no. 07n08 (2020): 2040005. http://dx.doi.org/10.1142/s0218213020400059.

Full text
Abstract:
Deep learning technology promotes the development of neural network machine translation (NMT). End-to-End (E2E) has become the mainstream in NMT. It uses word vectors as the initial value of the input layer. The effect of word vector model directly affects the accuracy of E2E-NMT. Researchers have proposed many approaches to learn word representations and have achieved significant results. However, the drawbacks of these methods still limit the performance of E2E-NMT systems. This paper focuses on the word embedding technology and proposes the PW-CBOW word vector model which can present better
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Word Vector Models"

1

Prum, Sophea. "On the use of a discriminant approach for handwritten word recognition based on bi-character models." Thesis, La Rochelle, 2013. http://www.theses.fr/2013LAROS418/document.

Full text
Abstract:
With the advent of mobile devices such as smartphones and tablets, the automatic recognition of cursive handwriting from an online signal has, over recent decades, become a real need of everyday life in the digital age. In this thesis, we propose new strategies for an online handwritten word recognition system. This system is based on a collaborative segmentation/recognition method, using analyses at two levels: character and bi-character. More precisely, our system relies on
APA, Harvard, Vancouver, ISO, and other styles
2

Lipecki, Johan, and Viggo Lundén. "The Effect of Data Quantity on Dialog System Input Classification Models." Thesis, KTH, Hälsoinformatik och logistik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-237282.

Full text
Abstract:
This paper researches how different amounts of data affect different word vector models for classification of dialog system user input. A hypothesis is tested that there is a data threshold for dense vector models to reach the state-of-the-art performance that have been shown with recent research, and that character-level n-gram word-vector classifiers are especially suited for Swedish classifiers–because of compounding and the character-level n-gram model ability to vectorize out-of-vocabulary words. Also, a second hypothesis is put forward that models trained with single statements are more
APA, Harvard, Vancouver, ISO, and other styles
3

Esin, Yunus Emre. "Improvement Of Corpus-based Semantic Word Similarity Using Vector Space Model." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12610759/index.pdf.

Full text
Abstract:
This study presents a new approach for finding semantically similar words from corpora using window based context methods. Previous studies mainly concentrate on either finding new combination of distance-weight measurement methods or proposing new context methods. The main difference of this new approach is that this study reprocesses the outputs of the existing methods to update the representation of related word vectors used for measuring semantic distance between words, to improve the results further. Moreover, this novel technique provides a solution to the data sparseness of vectors whic
APA, Harvard, Vancouver, ISO, and other styles
4

Sahlgren, Magnus. "The Word-Space Model : Using distributional analysis to represent syntagmatic and paradigmatic relations between words in high-dimensional vector spaces." Doctoral thesis, Stockholm : Göteborg : Kista : Department of Linguistics, Stockholm University : National Graduate School of Language Technology, Gothenburg University ; Swedish Institute of Computer Science Useware Laboratory, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-1037.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Westin, Emil. "Authorship classification using the Vector Space Model and kernel methods." Thesis, Uppsala universitet, Statistiska institutionen, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-412897.

Full text
Abstract:
Authorship identification is the field of classifying a given text by its author based on the assumption that authors exhibit unique writing styles. This thesis investigates the semantic shortcomings of the vector space model by constructing a semantic kernel created from WordNet which is evaluated on the problem of authorship attribution. A multiclass SVM classifier is constructed using the one-versus-all strategy and evaluated in terms of precision, recall, accuracy and F1 scores. Results show that the use of the semantic scores from WordNet degrades the performance compared to using a linea
APA, Harvard, Vancouver, ISO, and other styles
6

Göteman, Malin. "The Complex World of Superstrings : On Semichiral Sigma Models and N=(4,4) Supersymmetry." Doctoral thesis, Uppsala universitet, Teoretisk fysik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-183407.

Full text
Abstract:
Non-linear sigma models with extended supersymmetry have constrained target space geometries, and can serve as effective tools for investigating and constructing new geometries. Analyzing the geometrical and topological properties of sigma models is necessary to understand the underlying structures of string theory. The most general two-dimensional sigma model with manifest N=(2,2) supersymmetry can be parametrized by chiral, twisted chiral and semichiral superfields. In the research presented in this thesis, N=(4,4) (twisted) supersymmetry is constructed for a semichiral sigma model. It is fo
APA, Harvard, Vancouver, ISO, and other styles
7

Pettersson, Tove. "Word2vec2syn : Synonymidentifiering med Word2vec." Thesis, Linköpings universitet, Statistik och maskininlärning, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-157638.

Full text
Abstract:
In NLP (natural language processing), synonym identification is one of the linguistic challenges that many take on. Fodina Language Technology AB is a company that has created a tool, Termograph, intended to collect terms within companies and keep internal language use consistent. Synonym identification is performed by a combination of language-technology strategies, and Fodina wants broader coverage and more dynamism in the extraction process. This work therefore aimed to develop a new method, in addition to the existing combination, specifically for synonym identification. A pre-trained
APA, Harvard, Vancouver, ISO, and other styles
8

Lundberg, Otto. "GDP forecasting and nowcasting : Utilizing a system for averaging models to improve GDP predictions for six countries around the world." Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-131718.

Full text
Abstract:
This study was issued by Swedbank because they wanted to improve their GDP growth forecasting capabilities. A program was developed and tested on six countries: USA, Sweden, Germany, UK, Brazil and Norway. In this paper I investigate whether I can reduce forecasting error for GDP growth by taking a smart average from a variety of models, compared to both the best individual models and a random walk. I combine the forecasts from four model groups: vector autoregression, principal component analysis, machine learning and random walk. The smart average is given by a system that gives more weight to the pr
APA, Harvard, Vancouver, ISO, and other styles
9

Wang, Zhenyu. "Modeling crash severity and speed profile at roadway work zones." [Tampa, Fla] : University of South Florida, 2008. http://purl.fcla.edu/usf/dc/et/SFE0002515.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Callin, Jimmy. "Word Representations and Machine Learning Models for Implicit Sense Classification in Shallow Discourse Parsing." Thesis, Uppsala universitet, Institutionen för lingvistik och filologi, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-325876.

Full text
Abstract:
CoNLL 2015 featured a shared task on shallow discourse parsing. In 2016, the efforts continued with an increasing focus on sense classification. In the case of implicit sense classification, there was an interesting mix of traditional and modern machine learning classifiers using word representation models. In this thesis, we explore the performance of a number of these models, and investigate how they perform using a variety of word representation models. We show that there are large performance differences between word representation models for certain machine learning classifiers, while oth
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Word Vector Models"

1

Razumova, Tat'yana, Natal'ya Spiridonova, Irina Durakova, et al. Personnel management in Russia: vector of humanization. Book 7. INFRA-M Academic Publishing LLC., 2020. http://dx.doi.org/10.12737/1060850.

Full text
Abstract:
The monograph contains the results of studies concerning: first, the evolution of ideas and practice of humanization in the personnel policy of the state; second, the implementation of the principles of humanization in work with the personnel of economic subjects: talent management, renewal of working capacity of older workers, building a dual career, building a strong corporate culture, the development of the additional professional education system; thirdly, problems related to industry characteristics personnel work, drawing on international experience of vocational rehabilitation and emplo
APA, Harvard, Vancouver, ISO, and other styles
2

Andreev, Anatoliy. Personocentrism in classical Russian literature of the XIX century. Dialectics of Artistic Consciousness. INFRA-M Academic Publishing LLC., 2021. http://dx.doi.org/10.12737/1095050.

Full text
Abstract:
The monograph is devoted to the study of the brightest phenomenon of the world art culture — Russian literature of the "golden age", which was formed as an aristocratic, personocentric literature. Russian Russian literature began to realize its "cultural code", its purpose, which was close to it in spirit; moreover, it unconsciously formed a program for its development, immediately finding its "gold mine": elitist personocentrism as a highly promising vector of culture, which became a decisive factor in the world recognition of Russian literature. The end-to-end plot of the book was the spirit
APA, Harvard, Vancouver, ISO, and other styles
3

Khambata, Adi J. Introduction to the Z80 microcomputer. 2nd ed. J. Wiley, 1987.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Pevehouse, Jon, and Jason D. Brozek. Time‐Series Analysis. Edited by Janet M. Box-Steffensmeier, Henry E. Brady, and David Collier. Oxford University Press, 2009. http://dx.doi.org/10.1093/oxfordhb/9780199286546.003.0019.

Full text
Abstract:
This article discusses time-series methods such as simple time-series regressions, ARIMA models, vector autoregression (VAR) models, and unit root and error correction models (ECM). It specifically presents a brief history of time-series analysis before moving to a review of the basic time-series model. It then describes the stationary models in univariate and multivariate analyses. The nonstationary models of each type are addressed. In addition, various issues regarding the analysis of time series including data aggregation and temporal stability are considered. Before concluding, the articl
APA, Harvard, Vancouver, ISO, and other styles
5

Crès, Hervé, and Mich Tvede. Democracy, the Market, and the Firm. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780192894731.001.0001.

Full text
Abstract:
This book is an attempt to resolve an enigma that has puzzled social scientists since Condorcet in the eighteenth century: Why are collective choices so stable and easy to make in practice, when in theory it should be totally otherwise? A striking illustration of this enigma is the almost unanimous support of shareholders in publicly traded companies for the motions tabled by directors. The first part of the book explores the interplay between the voting and trading mechanisms. Two main arguments are proposed: on the one hand, the better the market works, the easier it is for majority voting t
APA, Harvard, Vancouver, ISO, and other styles
6

Spitzer, Michael. Affective shapes and shapings of affect in Bach’s Sonata for Unaccompanied Violin No. 1 in G minor (BWV 1001). Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780199351411.003.0008.

Full text
Abstract:
This chapter analyses Bach’s Sonata for Unaccompanied Violin No. 1 in G minor in terms of recent theories of music and emotion. It considers how musical ‘shape’ relates to the structure of affect, conceived in the nuanced terms afforded by recent work in the psychology of discrete emotional categories. Part I is dedicated to a close reading of Bach’s opening Adagio. Analysing three levels of shape (acoustic cues, midlevel phrasing and large-scale form), the chapter compares Bach’s music both to the shape of particular emotional behaviours and to the expressive shapings of a formal model. This
APA, Harvard, Vancouver, ISO, and other styles
7

Jha, Vivekanand. Acute kidney injury in the tropics. Edited by Norbert Lameire. Oxford University Press, 2015. http://dx.doi.org/10.1093/med/9780199592548.003.0241.

Full text
Abstract:
The spectrum of acute kidney injury (AKI) encountered in the hospitals of the tropical zone countries is different from that seen in the non-tropical climate countries, most of which are high-income countries. The difference is explained in large part by the influence of environment on the epidemiology of human disease. The key features of geographic regions falling in the tropical zones are climatic, that is, high temperatures and absence of winter frost, and economic, that is, lower levels of income. The causes and presentation of tropical AKI reflect these prevailing cultural, socioeconomic
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Word Vector Models"

1

Banjade, Rajendra, Nabin Maharjan, Dipesh Gautam, Frank Adrasik, Arthur C. Graesser, and Vasile Rus. "Pooling Word Vector Representations Across Models." In Computational Linguistics and Intelligent Text Processing. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-77113-7_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Lebret, Rémi, and Ronan Collobert. "Rehabilitation of Count-Based Models for Word Vector Representations." In Computational Linguistics and Intelligent Text Processing. Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-18111-0_31.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Eren, Levent Tolga, and Senem Kumova Metin. "Vector Space Models in Detection of Semantically Non-compositional Word Combinations in Turkish." In Lecture Notes in Computer Science. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-11027-7_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Sboev, A., R. Rybka, and A. Gryaznov. "Deep Neural Networks Ensemble with Word Vector Representation Models to Resolve Coreference Resolution in Russian." In Advanced Technologies in Robotics and Intelligent Systems. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-33491-8_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Consoli, Sergio, Domenico Perrotta, and Marco Turchi. "Reduced Variable Neighbourhood Search for the Generation of Controlled Circular Data." In Variable Neighborhood Search. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-69625-2_7.

Full text
Abstract:
AbstractA number of artificial intelligence and machine learning problems need to be formulated within a directional space, where classical Euclidean geometry does not apply or needs to be readjusted into the circle. This is typical, for example, in computational linguistics and natural language processing, where language models based on Bag-of-Words, Vector Space, or Word Embedding, are largely used for tasks like document classification, information retrieval and recommendation systems, among others. In these contexts, for assessing document clustering and outliers detection applications, it is often necessary to generate data with directional properties and units that follow some model assumptions and possibly form close groups. In the following we propose a Reduced Variable Neighbourhood Search heuristic which is used to generate high-dimensional data controlled by the desired properties aimed at representing several real-world contexts. The whole problem is formulated as a non-linear continuous optimization problem, and it is shown that the proposed Reduced Variable Neighbourhood Search is able to generate high-dimensional solutions to the problem in short computational time. A comparison with the state-of-the-art local search routine used to address this problem shows the greater efficiency of the approach presented here.
APA, Harvard, Vancouver, ISO, and other styles
6

Kou, Wanqiu, Fang Li, and Timothy Baldwin. "Automatic Labelling of Topic Models Using Word Vectors and Letter Trigram Vectors." In Information Retrieval Technology. Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-28940-3_20.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Lee, Wing Shum, Hu Ng, Timothy Tzen Vun Yap, Chiung Ching Ho, Vik Tor Goh, and Hau Lee Tong. "Attention Models for Sentiment Analysis Using Objectivity and Subjectivity Word Vectors." In Lecture Notes in Electrical Engineering. Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-33-4069-5_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Espinosa-Anke, Luis, Sergio Oramas, Horacio Saggion, and Xavier Serra. "ELMDist: A Vector Space Model with Words and MusicBrainz Entities." In Lecture Notes in Computer Science. Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-70407-4_44.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

SanJuan, Eric, Fidelia Ibekwe-SanJuan, Juan-Manuel Torres-Moreno, and Patricia Velázquez-Morales. "Combining Vector Space Model and Multi Word Term Extraction for Semantic Query Expansion." In Natural Language Processing and Information Systems. Springer Berlin Heidelberg, 2007. http://dx.doi.org/10.1007/978-3-540-73351-5_22.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Mac Lellan, Anne. "Victim or Vector? Tubercular Irish Nurses in England, 1930–1960." In Migration, Health and Ethnicity in the Modern World. Palgrave Macmillan UK, 2013. http://dx.doi.org/10.1057/9781137303233_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Word Vector Models"

1

Hershcovich, Daniel, Assaf Toledo, Alon Halfon, and Noam Slonim. "Syntactic Interchangeability in Word Embedding Models." In Proceedings of the 3rd Workshop on Evaluating Vector Space Representations for NLP. Association for Computational Linguistics, 2019. http://dx.doi.org/10.18653/v1/w19-2009.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Esin, Yunus Emre, Ozgur Alan, and Ferda Nur Alpaslan. "Improvement on corpus-based word similarity using vector space models." In 2009 24th International Symposium on Computer and Information Sciences (ISCIS). IEEE, 2009. http://dx.doi.org/10.1109/iscis.2009.5291827.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Batchkarov, Miroslav, Thomas Kober, Jeremy Reffin, Julie Weeds, and David Weir. "A critique of word similarity as a method for evaluating distributional semantic models." In Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP. Association for Computational Linguistics, 2016. http://dx.doi.org/10.18653/v1/w16-2502.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Li, Wenye, Jiawei Zhang, Jianjun Zhou, and Laizhong Cui. "Learning Word Vectors with Linear Constraints: A Matrix Factorization Approach." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/582.

Full text
Abstract:
Learning vector space representation of words, or word embedding, has attracted much recent research attention. With the objective of better capturing the semantic and syntactic information inherent in words, we propose two new embedding models based on the singular value decomposition of lexical co-occurrences of words. Different from previous work, our proposed models allow for injecting linear constraints when performing the decomposition, with which the desired semantic and syntactic information will be maintained in word vectors. Conceptually the models are flexible and convenient to enco
APA, Harvard, Vancouver, ISO, and other styles
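The unconstrained core of such SVD-based embedding models can be sketched as follows: build a co-occurrence matrix from a corpus, then take the top singular directions as word vectors. The paper's linear constraints are omitted here; this shows only plain truncated SVD on a toy corpus.

```python
import numpy as np

corpus = ["the cat sat on the mat", "the dog sat on the rug"]
tokens = [s.split() for s in corpus]
vocab = sorted({w for sent in tokens for w in sent})
idx = {w: i for i, w in enumerate(vocab)}

# Symmetric co-occurrence counts within a +/-1 word window.
C = np.zeros((len(vocab), len(vocab)))
for sent in tokens:
    for i, w in enumerate(sent):
        for j in (i - 1, i + 1):
            if 0 <= j < len(sent):
                C[idx[w], idx[sent[j]]] += 1

# Word vectors: top-k left singular vectors scaled by singular values.
U, S, _ = np.linalg.svd(C)
k = 2
W = U[:, :k] * S[:k]
print(W.shape)  # (vocab size, k)
```

In practice the raw counts are usually reweighted (e.g. with PPMI) before the decomposition; the constrained factorization the paper proposes replaces this plain SVD step.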
5

Kothalkar, Prasanna, Johanna Rudolph, Christine Dollaghan, Jennifer McGlothlin, Thomas Campbell, and John H. L. Hansen. "Fusing Text-dependent Word-level i-Vector Models to Screen ‘at Risk’ Child Speech." In Interspeech 2018. ISCA, 2018. http://dx.doi.org/10.21437/interspeech.2018-1465.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Bawa, Gurpreet Singh, Sanjay Sharma, Kaustav Pakira, and Souvik Chakraborty. "Fine Tuning Consumer Feedback Based Recommender Systems Using Deep Learning Models on Word Vector Space Representations." In 2019 International Conference on Computational Science and Computational Intelligence (CSCI). IEEE, 2019. http://dx.doi.org/10.1109/csci49370.2019.00056.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Liang, Zhenwen, and Xiangliang Zhang. "Solving Math Word Problems with Teacher Supervision." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/485.

Full text
Abstract:
Math word problems (MWPs) have been recently addressed with Seq2Seq models by `translating' math problems described in natural language to a mathematical expression, following a typical encoder-decoder structure. Although effective in solving classical math problems, these models fail when a subtle variation is applied to the word expression of a math problem, and leads to a remarkably different answer. We find the failure is because MWPs with different answers but similar math formula expression are encoded closely in the latent space. We thus designed a teacher module to make the MWP encodin
APA, Harvard, Vancouver, ISO, and other styles
8

Wei, Liangchen, and Zhi-Hong Deng. "A Variational Autoencoding Approach for Inducing Cross-lingual Word Embeddings." In Twenty-Sixth International Joint Conference on Artificial Intelligence. International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/582.

Full text
Abstract:
Cross-language learning allows one to use training data from one language to build models for another language. Many traditional approaches require word-level alignment sentences from parallel corpora, in this paper we define a general bilingual training objective function requiring sentence level parallel corpus only. We propose a variational autoencoding approach for training bilingual word embeddings. The variational model introduces a continuous latent variable to explicitly model the underlying semantics of the parallel sentence pairs and to guide the generation of the sentence pairs. Our
APA, Harvard, Vancouver, ISO, and other styles
9

Cai, Yitao, and Xiaojun Wan. "Multi-Domain Sentiment Classification Based on Domain-Aware Embedding and Attention." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/681.

Full text
Abstract:
Sentiment classification is a fundamental task in NLP. However, as revealed by many researches, sentiment classification models are highly domain-dependent. It is worth investigating to leverage data from different domains to improve the classification performance in each domain. In this work, we propose a novel completely-shared multi-domain neural sentiment classification model to learn domain-aware word embeddings and make use of domain-aware attention mechanism. Our model first utilizes BiLSTM for domain classification and extracts domain-specific features for words, which are then combine
APA, Harvard, Vancouver, ISO, and other styles
10

Williams, Robert. "The Power of Normalised Word Vectors for Automatically Grading Essays." In InSITE 2006: Informing Science + IT Education Conference. Informing Science Institute, 2006. http://dx.doi.org/10.28945/2995.

Full text
Abstract:
Latent Semantic Analysis, when used for automated essay grading, makes use of document word count vectors for scoring the essays against domain knowledge. Words in the domain knowledge documents and essays are counted, and Singular Value Decomposition is undertaken to reduce the dimensions of the semantic space. Near neighbour vector cosines and other variables are used to calculate an essay score. This paper discusses a technique for computing word count vectors where the words are first normalised using thesaurus concept index numbers. This approach leads to a vector space of 812 dimensions,
APA, Harvard, Vancouver, ISO, and other styles
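The word-count-vector plus cosine scoring pipeline described in this abstract can be sketched minimally. The thesaurus normalisation and the SVD step of full LSA are omitted; the vocabulary and texts below are invented for illustration.

```python
import numpy as np

def count_vector(text, vocab):
    """Word-count vector over a fixed vocabulary (normalising words to
    thesaurus concept numbers, as in the paper, is omitted here)."""
    words = text.lower().split()
    return np.array([words.count(w) for w in vocab], dtype=float)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

vocab = ["word", "vector", "essay", "grade", "music"]
reference = count_vector("word vector essay grade word", vocab)
essay = count_vector("essay grade word vector", vocab)
off_topic = count_vector("music music music", vocab)

print(cosine(reference, essay) > cosine(reference, off_topic))  # True
```

An essay score is then derived from cosines to near-neighbour reference documents in the (dimension-reduced) semantic space, rather than from this raw comparison.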