Academic literature on the topic 'Word embeddings'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Word embeddings.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Word embeddings"

1

Ahn, Yoonjoo, Eugene Rhee, and Jihoon Lee. "Dual embedding with input embedding and output embedding for better word representation." Indonesian Journal of Electrical Engineering and Computer Science 27, no. 2 (2022): 1091–99. https://doi.org/10.11591/ijeecs.v27.i2.pp1091-1099.

Full text
Abstract:
Recent studies in distributed vector representations for words have a variety of ways to represent words. We propose various ways of using input embedding and output embedding to better represent words than a single model. We compared the performance in terms of word analogy and word similarity with each of the input and output embeddings and with various dual embeddings, which are combinations of those two embeddings. Performance evaluation results show that the proposed dual embeddings outperform each single embedding, especially with the way of simply adding input and output embeddings. We figured out two
APA, Harvard, Vancouver, ISO, and other styles
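The "dual embedding" idea summarized in the abstract above can be sketched in a few lines: word2vec-style training yields both an input (target-word) matrix and an output (context-word) matrix, and the paper reports that simply adding them gives better word vectors. The vocabulary, dimensions, and random matrices below are invented for illustration only, not taken from the paper:

```python
import numpy as np

# Toy stand-ins for the two matrices a word2vec-style model learns.
rng = np.random.default_rng(0)
vocab = ["king", "queen", "man", "woman"]
W = rng.normal(size=(4, 8))   # input (target-word) embeddings
C = rng.normal(size=(4, 8))   # output (context-word) embeddings

# The proposed dual embedding: element-wise sum of the two matrices.
dual = W + C

def cosine(u, v):
    """Cosine similarity, the usual word-similarity score."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Word similarity is then evaluated in the dual space.
sim = cosine(dual[vocab.index("king")], dual[vocab.index("queen")])
```

With real pre-trained matrices in place of the random ones, `sim` would be compared against human similarity judgments, as in the paper's evaluation.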
2

Ahn, Yoonjoo, Eugene Rhee, and Jihoon Lee. "Dual embedding with input embedding and output embedding for better word representation." Indonesian Journal of Electrical Engineering and Computer Science 27, no. 2 (2022): 1091. http://dx.doi.org/10.11591/ijeecs.v27.i2.pp1091-1099.

Full text
Abstract:
Recent studies in distributed vector representations for words have a variety of ways to represent words. We propose various ways of using input embedding and output embedding to better represent words than a single model. We compared the performance in terms of word analogy and word similarity with each of the input and output embeddings and with various dual embeddings, which are combinations of those two embeddings. Performance evaluation results show that the proposed dual embeddings outperform each single embedding, especially with the way of simply adding input and output embed
APA, Harvard, Vancouver, ISO, and other styles
3

Srinidhi, K., T. L.S Tejaswi, CH Rama Rupesh Kumar, and I. Sai Siva Charan. "An Advanced Sentiment Embeddings with Applications to Sentiment Based Result Analysis." International Journal of Engineering & Technology 7, no. 2.32 (2018): 393. http://dx.doi.org/10.14419/ijet.v7i2.32.15721.

Full text
Abstract:
We propose an advanced, well-trained sentiment analysis based on adaptive word-specific embeddings, dubbed "sentiment embeddings". Available word and phrase embedding learning and training algorithms mainly make use of the contexts of terms but ignore the sentiment of texts when classifying words and texts. Sentiment analysis can match unlike words conveying the same meaning to corresponding word vectors. This problem is bridged by combining the encoding of opinion-carrying text with sentiment word embeddings. But performing sentiment analysis on e-commerce, social n
APA, Harvard, Vancouver, ISO, and other styles
4

Zhu, Lixing, Yulan He, and Deyu Zhou. "A Neural Generative Model for Joint Learning Topics and Topic-Specific Word Embeddings." Transactions of the Association for Computational Linguistics 8 (August 2020): 471–85. http://dx.doi.org/10.1162/tacl_a_00326.

Full text
Abstract:
We propose a novel generative model to explore both local and global context for joint learning topics and topic-specific word embeddings. In particular, we assume that global latent topics are shared across documents, a word is generated by a hidden semantic vector encoding its contextual semantic meaning, and its context words are generated conditional on both the hidden semantic vector and global latent topics. Topics are trained jointly with the word embeddings. The trained model maps words to topic-dependent embeddings, which naturally addresses the issue of word polysemy. Experimental re
APA, Harvard, Vancouver, ISO, and other styles
5

Yadav, Aditya Kumar. "Refined Global Word Embeddings Based on Sentiment Concept for Sentiment Analysis." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 09, no. 05 (2025): 1–9. https://doi.org/10.55041/ijsrem49245.

Full text
Abstract:
Sentiment analysis is a significant area of study in natural language processing that finds extensive use in journalism, politics, and other domains. In sentiment analysis, word embeddings are important. The sentiment lexicons are directly incorporated into conventional word representations using current sentiment embedding techniques. This sentiment representation technique is unable to offer precise sentiment information for words in many situations, since it can only distinguish the sentiment information of distinct words, not the same word in several settings. To address the i
APA, Harvard, Vancouver, ISO, and other styles
6

Jang, Youngjin, and Harksoo Kim. "Reliable Classification of FAQs with Spelling Errors Using an Encoder-Decoder Neural Network in Korean." Applied Sciences 9, no. 22 (2019): 4758. http://dx.doi.org/10.3390/app9224758.

Full text
Abstract:
To resolve lexical disagreement problems between queries and frequently asked questions (FAQs), we propose a reliable sentence classification model based on an encoder-decoder neural network. The proposed model uses three types of word embeddings: fixed word embeddings for representing domain-independent meanings of words, fine-tuned word embeddings for representing domain-specific meanings of words, and character-level word embeddings for bridging lexical gaps caused by spelling errors. It also uses class embeddings to represent domain knowledge associated with each category. In the experime
APA, Harvard, Vancouver, ISO, and other styles
7

Chang, Haw-Shiuan, Amol Agrawal, and Andrew McCallum. "Extending Multi-Sense Word Embedding to Phrases and Sentences for Unsupervised Semantic Applications." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (2021): 6956–65. http://dx.doi.org/10.1609/aaai.v35i8.16857.

Full text
Abstract:
Most unsupervised NLP models represent each word with a single point or single region in semantic space, while the existing multi-sense word embeddings cannot represent longer word sequences like phrases or sentences. We propose a novel embedding method for a text sequence (a phrase or a sentence) where each sequence is represented by a distinct set of multi-mode codebook embeddings to capture different semantic facets of its meaning. The codebook embeddings can be viewed as the cluster centers which summarize the distribution of possibly co-occurring words in a pre-trained word embedding spac
APA, Harvard, Vancouver, ISO, and other styles
8

Ramos-Vargas, Rigo E., Israel Román-Godínez, and Sulema Torres-Ramos. "Comparing general and specialized word embeddings for biomedical named entity recognition." PeerJ Computer Science 7 (February 18, 2021): e384. http://dx.doi.org/10.7717/peerj-cs.384.

Full text
Abstract:
Increased interest in the use of word embeddings, such as word representation, for biomedical named entity recognition (BioNER) has highlighted the need for evaluations that aid in selecting the best word embedding to be used. One common criterion for selecting a word embedding is the type of source from which it is generated; that is, general (e.g., Wikipedia, Common Crawl), or specific (e.g., biomedical literature). Using specific word embeddings for the BioNER task has been strongly recommended, considering that they have provided better coverage and semantic relationships among medical ent
APA, Harvard, Vancouver, ISO, and other styles
9

Schick, Timo, and Hinrich Schütze. "Learning Semantic Representations for Novel Words: Leveraging Both Form and Context." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 6965–73. http://dx.doi.org/10.1609/aaai.v33i01.33016965.

Full text
Abstract:
Word embeddings are a key component of high-performing natural language processing (NLP) systems, but it remains a challenge to learn good representations for novel words on the fly, i.e., for words that did not occur in the training data. The general problem setting is that word embeddings are induced on an unlabeled training corpus and then a model is trained that embeds novel words into this induced embedding space. Currently, two approaches for learning embeddings of novel words exist: (i) learning an embedding from the novel word’s surface-form (e.g., subword n-grams) and (ii) learning an
APA, Harvard, Vancouver, ISO, and other styles
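The first approach mentioned in the abstract above, learning an embedding for a novel word from its surface form via subword n-grams, can be sketched as follows. The character-trigram decomposition and averaging mirror the well-known fastText idea; the tiny n-gram vector table is invented here for demonstration and is not the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(1)
DIM = 6
ngram_vecs = {}  # in a real system these vectors are learned on a large corpus

def ngrams(word, n=3):
    """Character n-grams of a word padded with boundary markers."""
    padded = f"<{word}>"
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

def embed_novel(word):
    """Embed an out-of-vocabulary word as the mean of its n-gram vectors."""
    vecs = []
    for g in ngrams(word):
        if g not in ngram_vecs:          # invent a vector lazily, for the demo
            ngram_vecs[g] = rng.normal(size=DIM)
        vecs.append(ngram_vecs[g])
    return np.mean(vecs, axis=0)

v = embed_novel("embeddings")            # works even for unseen words
```

The paper's contribution is to combine such form-based estimates with context-based ones; the sketch covers only the surface-form half.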
10

Shen, Feiyu, Chenpeng Du, and Kai Yu. "Acoustic Word Embeddings for End-to-End Speech Synthesis." Applied Sciences 11, no. 19 (2021): 9010. http://dx.doi.org/10.3390/app11199010.

Full text
Abstract:
The most recent end-to-end speech synthesis systems use phonemes as acoustic input tokens and ignore the information about which word the phonemes come from. However, many words have their specific prosody type, which may significantly affect the naturalness. Prior works have employed pre-trained linguistic word embeddings as TTS system input. However, since linguistic information is not directly relevant to how words are pronounced, TTS quality improvement of these systems is mild. In this paper, we propose a novel and effective way of jointly training acoustic phone and word embeddings for e
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Word embeddings"

1

Zhang, Zheng. "Explorations in Word Embeddings : graph-based word embedding learning and cross-lingual contextual word embedding learning." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS369/document.

Full text
Abstract:
Word embeddings are a standard component of modern natural language processing (NLP) architectures. Whenever an advance is made in learning word embeddings, the vast majority of natural language processing tasks, such as part-of-speech tagging, named-entity recognition, question answering, or textual inference, can benefit from it. This work explores the question of improving the quality of monolingual word embeddings learned by predictive models, and that of the
APA, Harvard, Vancouver, ISO, and other styles
2

Madhyastha, Pranava Swaroop. "Exploiting word embeddings for modeling bilexical relations." Doctoral thesis, Universitat Politècnica de Catalunya, 2017. http://hdl.handle.net/10803/457892.

Full text
Abstract:
There has been an exponential surge of text data in recent years. As a consequence, unsupervised methods that make use of this data have been steadily growing in the field of natural language processing (NLP). Word embeddings are low-dimensional vectors obtained using unsupervised techniques on large unlabelled corpora, where words from the vocabulary are mapped to vectors of real numbers. Word embeddings aim to capture syntactic and semantic properties of words. In NLP, many tasks involve computing the compatibility between lexical items under some linguistic relation. We call this t
APA, Harvard, Vancouver, ISO, and other styles
3

Tallo, Philip T. "Using Sentence Embeddings for Word Sense Induction." University of Cincinnati / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1613748873435158.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Shaik, Arshad. "Biomedical Semantic Embeddings: Using Hybrid Sentences to Construct Biomedical Word Embeddings and Their Applications." Thesis, University of North Texas, 2019. https://digital.library.unt.edu/ark:/67531/metadc1609064/.

Full text
Abstract:
Word embedding is a useful method that has shown enormous success in various NLP tasks, not only in the open domain but also in the biomedical domain. The biomedical domain provides various domain-specific resources and tools that can be exploited to improve the performance of these word embeddings. However, most of the research related to word embeddings in the biomedical domain focuses on analysis of model architecture, hyper-parameters and input text. In this paper, we use SemMedDB to design new sentences called `Semantic Sentences'. Then we use these sentences in addition to biomedical text as inputs to
APA, Harvard, Vancouver, ISO, and other styles
5

Shaik, Arshad. "Biomedical Semantic Embeddings: Using Hybrid Sentences to Construct Biomedical Word Embeddings and its Applications." Thesis, University of North Texas, 2019. https://digital.library.unt.edu/ark:/67531/metadc1609064/.

Full text
Abstract:
Word embedding is a useful method that has shown enormous success in various NLP tasks, not only in the open domain but also in the biomedical domain. The biomedical domain provides various domain-specific resources and tools that can be exploited to improve the performance of these word embeddings. However, most of the research related to word embeddings in the biomedical domain focuses on analysis of model architecture, hyper-parameters and input text. In this paper, we use SemMedDB to design new sentences called `Semantic Sentences'. Then we use these sentences in addition to biomedical text as inputs to
APA, Harvard, Vancouver, ISO, and other styles
6

Del, Coco Pierpaolo Elio Jr. "Temporal Text Mining: From Frequencies to Word Embeddings." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/15481/.

Full text
Abstract:
The last decade has witnessed a tremendous growth in the amount of textual data available from web pages and social media posts, as well as from digitized sources, such as newspapers and books. However, as new data is continuously created to record the events of the moment, old data is archived day by day, for months, years, and decades. From this point of view, web archives play an important role not only as sources of data, but also as testimonials of history. In this respect, state-of-art machine learning models for word representations, namely word embeddings, are not able to capture the d
APA, Harvard, Vancouver, ISO, and other styles
7

Aldarmaki, Hanan. "Cross-Lingual Alignment of Word & Sentence Embeddings." Thesis, The George Washington University, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=13812118.

Full text
Abstract:
One of the notable developments in current natural language processing is the practical efficacy of probabilistic word representations, where words are embedded in high-dimensional continuous vector spaces that are optimized to reflect their distributional relationships. For sequences of words, such as phrases and sentences, distributional representations can be estimated by combining word embeddings using arithmetic operations like vector averaging or by estimating composition parameters from data using various objective functions. The quality of these compositional representations is typ
APA, Harvard, Vancouver, ISO, and other styles
8

Sjökvist, Henrik. "Text feature mining using pre-trained word embeddings." Thesis, KTH, Matematisk statistik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-228536.

Full text
Abstract:
This thesis explores a machine learning task where the data contains not only numerical features but also free-text features. In order to employ a supervised classifier and make predictions, the free-text features must be converted into numerical features.  In this thesis, an algorithm is developed to perform that conversion. The algorithm uses a pre-trained word embedding model which maps each word to a vector. The vectors for multiple word embeddings belonging to the same sentence are then combined to form a single sentence embedding. The sentence embeddings for the whole dataset are cluster
APA, Harvard, Vancouver, ISO, and other styles
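The pipeline described in the thesis abstract above — map each word of a free-text feature to a pre-trained vector, average the vectors into one sentence embedding, then cluster the sentence embeddings — can be sketched with toy data. The word-vector table and the single k-means assignment step below are stand-ins for a real pre-trained model and a real clustering library, not the thesis's implementation:

```python
import numpy as np

rng = np.random.default_rng(2)
# Pretend these are pre-trained word embeddings (5-dimensional toy vectors).
word_vecs = {w: rng.normal(size=5) for w in
             ["good", "bad", "service", "price", "slow", "fast"]}

def sentence_embedding(sentence):
    """Average the word vectors of the in-vocabulary words of a sentence."""
    vecs = [word_vecs[w] for w in sentence.split() if w in word_vecs]
    return np.mean(vecs, axis=0)

sentences = ["good fast service", "bad slow service", "good price"]
X = np.stack([sentence_embedding(s) for s in sentences])

# One assignment step of k-means against two sampled centroids, to illustrate
# the clustering stage; a full implementation would iterate until convergence.
centroids = X[rng.choice(len(X), size=2, replace=False)]
labels = np.argmin(
    np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2), axis=1)
```

The resulting cluster labels would then be fed to a supervised classifier alongside the numerical features, as the abstract describes.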
9

Montariol, Syrielle. "Models of diachronic semantic change using word embeddings." Electronic Thesis or Diss., université Paris-Saclay, 2021. http://www.theses.fr/2021UPASG006.

Full text
Abstract:
In this thesis, we study lexico-semantic change: temporal variations in the use and meaning of words, also called diachrony. These changes reflect the evolution of various aspects of society, such as the technological and cultural environment. We explore and evaluate methods for building word embeddings that vary over time in order to analyse the evolution of language. In particular, we use contextualised embeddings from pre-trained language models such as BERT. We propose several approaches to extract
APA, Harvard, Vancouver, ISO, and other styles
10

Calarota, Gabriele. "Domain-specific word embeddings for ICD-9-CM classification." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/16714/.

Full text
Abstract:
In this work we evaluate domain-specific embedding models induced from textual resources in the medical domain. The International Classification of Diseases (ICD) is a standard, broadly used classification system, that codes a large number of specific diseases, symptoms, injuries and medical procedures into numerical classes. Assigning a code to a clinical case means classifying that case into one or more particular discrete class, hence allowing further statistics studies and automated calculations. The possibility to have a discrete code instead of a text in natural language is intuitively
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Word embeddings"

1

McGillivray, Barbara. How to Use Word Embeddings for Natural Language Processing. SAGE Publications, Ltd., 2022. http://dx.doi.org/10.4135/9781529609578.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Rothermel, Ann-Kathrin. Too Big but too Small - Challenges for Conducting Discourse Analysis With Word Embeddings in Medium-Sized Databases. SAGE Publications Ltd, 2024. http://dx.doi.org/10.4135/9781529684469.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Ethics, Institute of Business, ed. Making business ethics work: The foundations of effective embedding. Institute of Business Ethics, 2006.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Great Britain. Further Education Unit., ed. Supporting embedding projects: Funded by the Work-Related Further Education Development Fund. Further Education Unit, 1991.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Joined in Equity Diversity and Interdependence., ed. A framework for reflection in practice: Guidelines for embedding EDI principles in youth work practice. JEDI, 2003.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Corporation, Microsoft, ed. Object linking and embedding programmer's reference: Version 1, designed to work with Microsoft Windows versions 3.0 and 3.1. Microsoft Press, 1992.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Wijers, Jean Paul, ed. Managing Authentic Relationships. Amsterdam University Press, 2019. http://dx.doi.org/10.5117/9789462988613.

Full text
Abstract:
In an increasingly connected world, Strategic Relationship Management is a vital capability for successful organizations. The book Managing Authentic Relationships; Facing New Challenges in a Changing Context focuses on building and managing a strong network and reciprocal relationships for the entire organization by implementing a professional relationship management approach at strategic, tactical and operational level. Professional relationship management makes valuable and measurable contributions to the strategic goals of an organization by: Expanding the organization's strategy to a Rela
APA, Harvard, Vancouver, ISO, and other styles
8

Doellgast, Virginia, Shruti Appalla, Dina Ginzburg, Jeonghun Kim, and Wen Li Thian. Global case studies of social dialogue on AI and algorithmic management. ILO, 2025. https://doi.org/10.54394/voqe4924.

Full text
Abstract:
Employers are adopting and refining artificial intelligence (AI) and algorithm-based tools in the workplace, with wide-ranging implications for work and employment. This working paper examines case studies of social dialogue on AI at national, regional, sectoral, company, and workplace levels in Europe, North America, Asia, South America and the Caribbean, and Africa. Findings are organized around three distinct ‘action fields’ in which worker representatives have sought to influence strategies and outcomes associated with the growing use of AI and algorithms in the workplace. These include th
APA, Harvard, Vancouver, ISO, and other styles
9

Bielinskienė, Agnė, Loic Boizou, Ieva Bumbulienė, et al. Lithuanian Word embeddings. Baltic Institute of Advanced Technology, Vytautas Magnus University, 2019. http://dx.doi.org/10.7220/20.500.12259/240093.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Søgaard, Anders, Sebastian Ruder, Manaal Faruqui, Ivan Vulić, and Graeme Hirst. Cross-Lingual Word Embeddings. Morgan & Claypool Publishers, 2019.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Word embeddings"

1

Kumar, Sunil. "Word Embeddings." In Python for Accounting and Finance. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-54680-8_15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Žižka, Jan, František Dařena, and Arnošt Svoboda. "Word Embeddings." In Text Mining with Machine Learning. CRC Press, 2019. http://dx.doi.org/10.1201/9780429469275-13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Hvitfeldt, Emil, and Julia Silge. "Word Embeddings." In Supervised Machine Learning for Text Analysis in R. Chapman and Hall/CRC, 2021. http://dx.doi.org/10.1201/9781003093459-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Andrews, Martin. "Compressing Word Embeddings." In Neural Information Processing. Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-46681-1_50.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Erk, Katrin, and Gabriella Chronis. "Word Embeddings are Word Story Embeddings (and That's Fine)." In Algebraic Structures in Natural Language. CRC Press, 2022. http://dx.doi.org/10.1201/9781003205388-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Søgaard, Anders, Ivan Vulić, Sebastian Ruder, and Manaal Faruqui. "Monolingual Word Embedding Models." In Cross-Lingual Word Embeddings. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-031-02171-8_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Paulheim, Heiko, Petar Ristoski, and Jan Portisch. "From Word Embeddings to Knowledge Graph Embeddings." In Synthesis Lectures on Data, Semantics, and Knowledge. Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-30387-6_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Søgaard, Anders, Ivan Vulić, Sebastian Ruder, and Manaal Faruqui. "Cross-Lingual Word Embedding Models: Typology." In Cross-Lingual Word Embeddings. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-031-02171-8_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Mousset, Paul, Yoann Pitarch, and Lynda Tamine. "Towards Spatial Word Embeddings." In Lecture Notes in Computer Science. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-15719-7_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Sabharwal, Navin, and Amit Agrawal. "Introduction to Word Embeddings." In Hands-on Question Answering Systems with BERT. Apress, 2021. http://dx.doi.org/10.1007/978-1-4842-6664-9_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Word embeddings"

1

Shah, Sapan, Sreedhar Reddy, and Pushpak Bhattacharyya. "Affective Retrofitted Word Embeddings." In Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, 2022. http://dx.doi.org/10.18653/v1/2022.aacl-main.42.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Bollegala, Danushka. "Learning Meta Word Embeddings by Unsupervised Weighted Concatenation of Source Embeddings." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/563.

Full text
Abstract:
Given multiple source word embeddings learnt using diverse algorithms and lexical resources, meta word embedding learning methods attempt to learn more accurate and wide-coverage word embeddings. Prior work on meta-embedding has repeatedly found simple vector concatenation of the source embeddings to be a competitive baseline. However, it remains unclear as to why and when simple vector concatenation can produce accurate meta-embeddings. We show that weighted concatenation can be seen as a spectrum matching operation between each source embedding and the meta-embedding, minimising th
APA, Harvard, Vancouver, ISO, and other styles
3

Albujasim, Zainab, Diana Inkpen, and Yuhong Guo. "Word Embedding Interpretation using Co-Clustering." In International Conference on Signal Processing and Vision. Academy and Industry Research Collaboration Center (AIRCC), 2022. http://dx.doi.org/10.5121/csit.2022.122210.

Full text
Abstract:
Word embedding is the foundation of modern natural language processing (NLP). In the last few decades, word representation has evolved remarkably, resulting in impressive performance in NLP downstream applications. Yet, word embeddings' interpretability remains a challenge. In this paper, we propose a simple technique to interpret word embeddings. Our method is based on a post-processing technique to improve the quality of word embeddings and reveal the hidden structure in these embeddings. We deploy a co-clustering method to reveal the hidden structure of word embeddings and detect sub-matrices between wo
APA, Harvard, Vancouver, ISO, and other styles
4

Zeng, Ziqian, Yichun Yin, Yangqiu Song, and Ming Zhang. "Socialized Word Embeddings." In Twenty-Sixth International Joint Conference on Artificial Intelligence. International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/547.

Full text
Abstract:
Word embeddings have attracted a lot of attention. On social media, each user’s language use can be significantly affected by the user’s friends. In this paper, we propose a socialized word embedding algorithm which can consider both user’s personal characteristics of language use and the user’s social relationship on social media. To incorporate personal characteristics, we propose to use a user vector to represent each user. Then for each user, the word embeddings are trained based on each user’s corpus by combining the global word vectors and local user vector. To incorporate social relatio
APA, Harvard, Vancouver, ISO, and other styles
5

Xun, Guangxu, Yaliang Li, Wayne Xin Zhao, Jing Gao, and Aidong Zhang. "A Correlated Topic Model Using Word Embeddings." In Twenty-Sixth International Joint Conference on Artificial Intelligence. International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/588.

Full text
Abstract:
Conventional correlated topic models are able to capture correlation structure among latent topics by replacing the Dirichlet prior with the logistic normal distribution. Word embeddings have been proven to be able to capture semantic regularities in language. Therefore, the semantic relatedness and correlations between words can be directly calculated in the word embedding space, for example, via cosine values. In this paper, we propose a novel correlated topic model using word embeddings. The proposed model enables us to exploit the additional word-level correlation information in word embed
APA, Harvard, Vancouver, ISO, and other styles
6

Xu, Hu, Bing Liu, Lei Shu, and Philip S. Yu. "Lifelong Domain Word Embedding via Meta-Learning." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/627.

Full text
Abstract:
Learning high-quality domain word embeddings is important for achieving good performance in many NLP tasks. General-purpose embeddings trained on large-scale corpora are often sub-optimal for domain-specific applications. However, domain-specific tasks often do not have large in-domain corpora for training high-quality domain embeddings. In this paper, we propose a novel lifelong learning setting for domain embedding. That is, when performing the new domain embedding, the system has seen many past domains, and it tries to expand the new in-domain corpus by exploiting the corpora from the past
APA, Harvard, Vancouver, ISO, and other styles
7

Camacho-Collados, Jose, Luis Espinosa Anke, and Steven Schockaert. "Relational Word Embeddings." In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, 2019. http://dx.doi.org/10.18653/v1/p19-1318.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Mou, Lili, Ran Jia, Yan Xu, Ge Li, Lu Zhang, and Zhi Jin. "Distilling Word Embeddings." In CIKM'16: ACM Conference on Information and Knowledge Management. ACM, 2016. http://dx.doi.org/10.1145/2983323.2983888.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Dlamini, Sibonelo, Edgar Jembere, Anban Pillay, and Brett van Niekerk. "isiZulu Word Embeddings." In 2021 Conference on Information Communications Technology and Society (ICTAS). IEEE, 2021. http://dx.doi.org/10.1109/ictas50802.2021.9395011.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Cotterell, Ryan, and Hinrich Schütze. "Morphological Word-Embeddings." In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, 2015. http://dx.doi.org/10.3115/v1/n15-1140.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Word embeddings"

1

Borders, Tammie, and Svitlana Volkova. An Introduction to Word Embeddings and Language Models. Office of Scientific and Technical Information (OSTI), 2021. http://dx.doi.org/10.2172/1773690.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Reades, Jon, and Jennie Williams. Dataset to accompany Clustering and Visualising Documents using Word Embeddings. Programming Historian, 2023. http://dx.doi.org/10.46430/phen0112.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Danielson, Thomas, and Larry Deschaine. Understanding Event Trajectories Across Massive Temporal Datasets with Word Embeddings and Visualization. Office of Scientific and Technical Information (OSTI), 2024. http://dx.doi.org/10.2172/2396738.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Wolfenden, Luke, and Laura Wolfenden. Embedding smoking cessation support in community service organisations. The Sax Institute, 2021. http://dx.doi.org/10.57022/ihzq1178.

Full text
Abstract:
This Rapid Evidence Summary aimed to identify barriers and enablers to embedding smoking cessation support into the routine work of community service organisations (CSOs), a setting which provides access to priority and disadvantaged groups. The authors also looked more broadly at barriers and enablers to supporting provision of preventive care targeting key chronic diseases in order to draw relevant lessons from these. The findings indicate that many factors influence the integration of smoking cessation support into CSOs and that understanding these and developing multi-strategic approaches
APA, Harvard, Vancouver, ISO, and other styles
5

Schiavuzzi, Alexandra, Nick Petrunoff, Jacky Dawkins, Eileen Goldberg, and Belinda Goodenough. Evidence Snapshot: Factors associated with successfully embedding brief primary prevention interventions in cancer screening programs. The Sax Institute, 2024. http://dx.doi.org/10.57022/oqfn9773.

Full text
Abstract:
This rapid review aims to provide evidence about the factors that influence the successful implementation of brief primary prevention interventions delivered in a cancer screening setting. The findings of this Evidence Snapshot have been arranged into key themes using the RE-AIM framework to assist in the identification of potentially essential elements that may improve the adoption and sustainment of evidence-based interventions. We identified 11 studies that met the inclusion criteria. Delivering brief cancer prevention interventions within a screening setting is an emerging area of research
APA, Harvard, Vancouver, ISO, and other styles
6

Chaudhuri, Somsubhro, and Stijn Hertele. PR-716-204500-R01 Integration of 3D NDE Systems to FEA Evaluation of Flaws. Pipeline Research Council International, Inc. (PRCI), 2022. http://dx.doi.org/10.55274/r0012220.

Full text
Abstract:
This report summarizes the outcomes of the second year of PRCI ECA-1-1. Concretely, specific methodologies developed to improve the NDE-FEA framework have been discussed (as elaborated in T2.3: "Evaluation and improvement of 3D NDE - FEA performance"). A medium wide plate (MWP) testing program (T2.1: "Design and preparation of experimental test program") was designed to experimentally validate the results obtained from the NDE-FEA framework (T2.2: "MWP test execution and analysis"). The results show satisfactory agreement between the CDF response obtained from the NDE-FEA framework and experime
APA, Harvard, Vancouver, ISO, and other styles
7

Bastiani, Spencer, Lisa Dickmanns, Thomas Giebe, and Oliver Gürtler. Household specialization and competition for promotion. Institutionen för nationalekonomi och statistik, Linnéuniversitetet, 2024. http://dx.doi.org/10.15626/ns.wp.2024.05.

Full text
Abstract:
We study how the presence of promotion competition in the labor market affects household specialization patterns. By embedding a promotion tournament model in a household setting, we show that specialization can emerge as a consequence of competitive work incentives. This specialization outcome, in which only one spouse invests heavily in his or her career, can be welfare superior to a situation in which both spouses invest equally in their careers. The reason is that household specialization reduces the intensity of competition and provides households with consumption smoothing. The specializa
APA, Harvard, Vancouver, ISO, and other styles
8

Moreno Pérez, Carlos, and Marco Minozzo. “Making Text Talk”: The Minutes of the Central Bank of Brazil and the Real Economy. Banco de España, 2022. http://dx.doi.org/10.53479/23646.

Full text
Abstract:
This paper investigates the relationship between the views expressed in the minutes of the meetings of the Central Bank of Brazil’s Monetary Policy Committee (COPOM) and the real economy. It applies various computational linguistic machine learning algorithms to construct measures of the minutes of the COPOM. First, we create measures of the content of the paragraphs of the minutes using Latent Dirichlet Allocation (LDA). Second, we build an uncertainty index for the minutes using Word Embedding and K-Means. Then, we combine these indices to create two topic-uncertainty indices. The first one
APA, Harvard, Vancouver, ISO, and other styles
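The abstract above describes building an uncertainty index for central-bank minutes by combining word embeddings with K-Means clustering. As a rough illustration of that general idea (not the authors' actual pipeline), the sketch below clusters toy paragraph-level embedding vectors with a minimal K-Means and reports the share of paragraphs in one cluster as an index; the embedding values, dimensions, and two-cluster setup are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "paragraph embeddings": each row stands in for the averaged
# word-embedding vector of one paragraph of the minutes (dim 4 here).
paragraphs = np.vstack([
    rng.normal(loc=-1.0, scale=0.3, size=(6, 4)),  # one tone cluster
    rng.normal(loc=+1.0, scale=0.3, size=(4, 4)),  # another tone cluster
])

def kmeans(X, k, iters=20):
    """Minimal K-Means: assign each point to its nearest centroid,
    then move each centroid to the mean of its assigned points."""
    # Seed with one point from each end so the two blobs get one centroid each.
    centroids = X[np.array([0, len(X) - 1])].astype(float).copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(axis=0)
    return labels

labels = kmeans(paragraphs, k=2)

# A crude "uncertainty index": the share of paragraphs falling in the
# cluster one would identify (by inspection) as uncertainty-laden.
uncertain_cluster = labels[-1]
index = float((labels == uncertain_cluster).mean())
print(round(index, 2))
```

In practice one would obtain the paragraph vectors from trained word embeddings (e.g. averaged word vectors per paragraph) and label the clusters by examining representative paragraphs, as the paper's description suggests.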
9

Day, Rita. Integrating a Business Industry Advisory Board within the Triple Helix Framework. Vilnius Business College, 2024. https://doi.org/10.57005/ab.2024.4.2.

Full text
Abstract:
The Triple Helix Model of Innovation, which emphasises the dynamic interactions between academia, industry, and government, offers a powerful framework for fostering innovation and economic development. Embedding a Business Industry Advisory Board (BIAB) within this model can enhance collaboration, ensure alignment with market needs, and provide a strategic platform for knowledge transfer. A BIAB typically consists of industry leaders, business experts, and policymakers who advise academic institutions on curriculum development, research agendas, and engagement with industry trends. This advis
APA, Harvard, Vancouver, ISO, and other styles
10

Rasiah, Rajah. Fostering Clusters in the Malaysian Electronics Industry. Inter-American Development Bank, 2005. http://dx.doi.org/10.18235/0006838.

Full text
Abstract:
The meaning of clusters has evolved considerably over several decades. This presentation seeks to use a synthesis of the concept from the time of Mill and Marshall (industrial districts), and Smith and Young on differentiation and division of labour to encompass the work of Brusco, Becattini, Sabel, Sengenberger, Zeitlin, Pyke, Richardson, North, Lorenz, Wilkinson and Piore to extract the influence of socio-economic relationships (a blend of markets and trust-loyalty), and subsequently the contributions of Porter (traditional and high tech clusters) and Best (organizational change, techno-diver
APA, Harvard, Vancouver, ISO, and other styles