Academic literature on the topic 'Semantic embeddings'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Semantic embeddings.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "Semantic embeddings"
JP, Sanjanasri, Vijay Krishna Menon, Soman KP, Rajendran S, and Agnieszka Wolk. "Generation of Cross-Lingual Word Vectors for Low-Resourced Languages Using Deep Learning and Topological Metrics in a Data-Efficient Way." Electronics 10, no. 12 (June 8, 2021): 1372. http://dx.doi.org/10.3390/electronics10121372.
Merkx, Danny, and Stefan L. Frank. "Learning semantic sentence representations from visually grounded language without lexical knowledge." Natural Language Engineering 25, no. 4 (July 2019): 451–66. http://dx.doi.org/10.1017/s1351324919000196.
Özkaya Eren, Ayşegül, and Mustafa Sert. "Audio Captioning with Composition of Acoustic and Semantic Information." International Journal of Semantic Computing 15, no. 02 (June 2021): 143–60. http://dx.doi.org/10.1142/s1793351x21400018.
Mao, Yuqing, and Kin Wah Fung. "Use of word and graph embedding to measure semantic relatedness between Unified Medical Language System concepts." Journal of the American Medical Informatics Association 27, no. 10 (October 1, 2020): 1538–46. http://dx.doi.org/10.1093/jamia/ocaa136.
Ding, Juncheng, and Wei Jin. "COS: A new MeSH term embedding incorporating corpus, ontology, and semantic predications." PLOS ONE 16, no. 5 (May 4, 2021): e0251094. http://dx.doi.org/10.1371/journal.pone.0251094.
Hirota, Wataru, Yoshihiko Suhara, Behzad Golshan, and Wang-Chiew Tan. "Emu: Enhancing Multilingual Sentence Embeddings with Semantic Specialization." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 7935–43. http://dx.doi.org/10.1609/aaai.v34i05.6301.
Croce, Danilo, Daniele Rossini, and Roberto Basili. "Neural embeddings: accurate and readable inferences based on semantic kernels." Natural Language Engineering 25, no. 4 (July 2019): 519–41. http://dx.doi.org/10.1017/s1351324919000238.
Schick, Timo, and Hinrich Schütze. "Learning Semantic Representations for Novel Words: Leveraging Both Form and Context." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 6965–73. http://dx.doi.org/10.1609/aaai.v33i01.33016965.
Zhu, Lixing, Yulan He, and Deyu Zhou. "A Neural Generative Model for Joint Learning Topics and Topic-Specific Word Embeddings." Transactions of the Association for Computational Linguistics 8 (August 2020): 471–85. http://dx.doi.org/10.1162/tacl_a_00326.
Hashimoto, Tatsunori B., David Alvarez-Melis, and Tommi S. Jaakkola. "Word Embeddings as Metric Recovery in Semantic Spaces." Transactions of the Association for Computational Linguistics 4 (December 2016): 273–86. http://dx.doi.org/10.1162/tacl_a_00098.
Dissertations / Theses on the topic "Semantic embeddings"
Malmberg, Jacob. "Evaluating semantic similarity using sentence embeddings." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-291425.
Semantic similarity search is a type of search that aims to find documents or sentences that are semantically similar to a user-specified query. This type of search is performed frequently, for example when a user searches for information on the internet. To enable this, vector representations must be created of both the documents to be searched and the query. A common way to create these representations has been the term frequency - inverse document frequency (TF-IDF) algorithm. Modern methods use neural networks, which have become very popular in recent years. The BERT network, released in 2018, is a well-regarded network that can be used to create vector representations. Many variants of the BERT network have been created, for example Sentence-BERT, which is explicitly designed to produce vector representations of sentences. This thesis aims to evaluate semantic similarity search based on sentence vector representations produced by both traditional and modern approaches. Various experiments were carried out to contrast the different approaches. Since no datasets explicitly created for this type of experiment could be found, commonly used datasets were modified. The results showed that the TF-IDF algorithm outperformed the neural-network-based approaches in nearly all experiments. Of the neural networks evaluated, Sentence-BERT performed better than the BERT network. To produce more generalisable results, datasets explicitly designed for semantic similarity search are required.
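The comparison described in this abstract can be reproduced in outline with off-the-shelf libraries. The following is a minimal sketch, not the thesis code: the example sentences, the use of scikit-learn and sentence-transformers, and the "all-MiniLM-L6-v2" model name are all assumptions made here for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sentence_transformers import SentenceTransformer

documents = [
    "Word embeddings capture the meaning of individual words.",
    "Sentence embeddings represent whole sentences as vectors.",
    "The weather in Stockholm was cold last week.",
]
query = "How can a sentence be represented as a vector?"

# TF-IDF baseline: sparse bag-of-words vectors weighted by inverse document frequency.
vectorizer = TfidfVectorizer().fit(documents + [query])
tfidf_scores = cosine_similarity(vectorizer.transform([query]),
                                 vectorizer.transform(documents))[0]

# Sentence-BERT: dense sentence embeddings from a pre-trained model (model name is an example).
model = SentenceTransformer("all-MiniLM-L6-v2")
sbert_scores = cosine_similarity(model.encode([query]), model.encode(documents))[0]

# Rank documents by similarity to the query under each representation.
for name, scores in [("TF-IDF", tfidf_scores), ("Sentence-BERT", sbert_scores)]:
    best = max(zip(scores, documents))
    print(f"{name}: best match (score {best[0]:.2f}): {best[1]}")
```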
Yu, Lu. "Semantic representation: from color to deep embeddings." Doctoral thesis, Universitat Autònoma de Barcelona, 2019. http://hdl.handle.net/10803/669458.
One of the fundamental problems of computer vision is to represent images with compact, semantically relevant embeddings. These embeddings could then be used in a wide variety of applications, such as image retrieval, object detection, and video search. The main objective of this thesis is to study image embeddings from two aspects: color embeddings and deep embeddings. In the first part of the thesis we start from hand-crafted color embeddings. We propose a method to order additional color names according to their complementary nature with the eleven basic color names. This allows us to compute color name representations of arbitrary length with high discriminative power. Psychophysical experiments confirm that our proposed method outperforms baseline approaches. Secondly, we learn deep color embeddings from weakly labeled data by adding an attention strategy. The attention branch is able to correctly identify the relevant regions for each class. The advantage of our approach is that it can learn color names for specific domains for which no pixel-wise labels exist. In the second part of the thesis, we focus on deep embeddings. Firstly, we address the problem of compressing large embedding networks into small networks while maintaining similar performance. We propose to distill the metric from a teacher network to a student network. Two new losses are introduced to model the communication from a deep teacher network to a small student network: one based on an absolute teacher, where the student aims to produce the same embeddings as the teacher, and one based on a relative teacher, where the distances between pairs of data points are communicated from the teacher to the student. In addition, various aspects of distillation have been investigated for embeddings, including hint and attention layers, semi-supervised learning, and cross-quality distillation. Finally, another aspect of deep metric learning, namely lifelong learning, is studied. We observe that some semantic drift occurs while training on new tasks. We introduce a method to estimate this drift from the drift experienced by the data of the current task during its training. Given this estimate, previous tasks can be compensated for the drift, thereby improving their performance. Furthermore, we show that embedding networks suffer significantly less from catastrophic forgetting than classification networks when learning new tasks.
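The absolute-teacher and relative-teacher objectives mentioned in the abstract can be written down compactly. The sketch below is illustrative only, not the thesis implementation; the tensor sizes, the linear projection bridging the two embedding spaces, and the choice of mean-squared error are assumptions.

```python
import torch
import torch.nn.functional as F

def absolute_teacher_loss(student_emb, teacher_emb):
    # The student aims to produce the same embeddings as the teacher.
    return F.mse_loss(student_emb, teacher_emb)

def relative_teacher_loss(student_emb, teacher_emb):
    # Only the pairwise distances between data points are communicated,
    # so the student may use a different embedding dimensionality.
    return F.mse_loss(torch.cdist(student_emb, student_emb),
                      torch.cdist(teacher_emb, teacher_emb))

# Toy batch: a frozen 128-d teacher embedding and a smaller 64-d student (sizes are illustrative).
teacher_emb = torch.randn(8, 128)
student_emb = torch.randn(8, 64, requires_grad=True)
project = torch.nn.Linear(64, 128)  # bridges the dimensions for the absolute variant

loss = absolute_teacher_loss(project(student_emb), teacher_emb) \
       + relative_teacher_loss(student_emb, teacher_emb)
loss.backward()
```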
Moss, Adam. "Detecting Lexical Semantic Change Using Probabilistic Gaussian Word Embeddings." Thesis, Uppsala universitet, Institutionen för lingvistik och filologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-412539.
Montariol, Syrielle. "Models of diachronic semantic change using word embeddings." Electronic Thesis or Diss., université Paris-Saclay, 2021. http://www.theses.fr/2021UPASG006.
In this thesis, we study lexical semantic change: temporal variations in the use and meaning of words, also called diachrony. These changes are carried by the way people use words, and mirror the evolution of various aspects of society such as its technological and cultural environment. We explore, compare and evaluate methods to build time-varying embeddings from a corpus in order to analyse language evolution. We focus on contextualised word embeddings using pre-trained language models such as BERT. We propose several approaches to extract and aggregate the contextualised representations of words over time, and to quantify their level of semantic change. In particular, we address the practical aspects of these systems: the scalability of our approaches, with a view to applying them to large corpora or large vocabularies; their interpretability, by disambiguating the different uses of a word over time; and their applicability to concrete issues, for documents related to COVID-19. We evaluate the efficiency of these methods quantitatively using several annotated corpora, and qualitatively by linking the detected semantic variations with real-life events and numerical data. Finally, we extend the task of semantic change detection beyond the temporal dimension. We adapt it to a bilingual setting, to study the joint evolution of a word and its translation in two corpora of different languages; and to a synchronic frame, to detect semantic variations across different sources or communities on top of the temporal variation.
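A common way to operationalise this kind of analysis with contextualised embeddings is to average a word's BERT vectors within each time period and take the cosine distance between the period averages. The sketch below is an illustration under assumed choices (the "bert-base-uncased" model, single-wordpiece target words, and mean pooling), not the method evaluated in the thesis.

```python
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def contextual_vectors(sentences, word):
    """Collect the contextualised vectors of `word` (assumed to be a single wordpiece)."""
    target_id = tokenizer.convert_tokens_to_ids(word)
    vectors = []
    for sentence in sentences:
        encoded = tokenizer(sentence, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**encoded).last_hidden_state[0]  # (tokens, dim)
        for i, token_id in enumerate(encoded["input_ids"][0]):
            if token_id == target_id:
                vectors.append(hidden[i].numpy())
    return np.array(vectors)

def semantic_change(sentences_t1, sentences_t2, word):
    # Cosine distance between the word's average representation in the two periods.
    a = contextual_vectors(sentences_t1, word).mean(axis=0)
    b = contextual_vectors(sentences_t2, word).mean(axis=0)
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
```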
Shaik, Arshad. "Biomedical Semantic Embeddings: Using Hybrid Sentences to Construct Biomedical Word Embeddings and Their Applications." Thesis, University of North Texas, 2019. https://digital.library.unt.edu/ark:/67531/metadc1609064/.
Munbodh, Mrinal. "Deriving A Better Metric To Assess the Quality of Word Embeddings Trained On Limited Specialized Corpora." University of Cincinnati / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1601995854965902.
Full textBalzar, Ekenbäck Nils. "Evaluation of Sentence Representations in Semantic Text Similarity Tasks." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-291334.
This thesis investigates methods for representing sentences in vector form for semantic text similarity and compares them on sentence-based test sets. Two methods were used to evaluate the representations: the STS Benchmark, an established method for evaluating a language model's ability to assess semantic similarity, and the STS Benchmark converted into a binary similarity task. The results showed that preprocessing the text and the word vectors could yield a significant improvement on these tasks. The study also concluded that the dataset used may not be ideal for this type of evaluation, as the sentence pairs generally had a high lexical overlap. As a complement, the study proposes a paraphrase dataset, which would require further work.
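An STS Benchmark style evaluation of sentence representations typically reduces to correlating cosine similarities with human ratings. The following is a minimal sketch under assumed inputs (random vectors stand in for the output of a real sentence encoder), not the evaluation code used in the thesis.

```python
import numpy as np
from scipy.stats import spearmanr

def sts_evaluate(emb_a, emb_b, gold_scores):
    """Spearman correlation between cosine similarities of sentence pairs and gold ratings.

    emb_a, emb_b: arrays of shape (n_pairs, dim) holding the two sentences of each pair.
    gold_scores: human similarity ratings, e.g. 0-5 in the STS Benchmark.
    """
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    cosine = (a * b).sum(axis=1)
    return spearmanr(cosine, gold_scores).correlation

# Toy example with random embeddings.
rng = np.random.default_rng(0)
print(sts_evaluate(rng.normal(size=(10, 64)), rng.normal(size=(10, 64)), rng.uniform(0, 5, size=10)))
```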
Zhou, Hanqing. "DBpedia Type and Entity Detection Using Word Embeddings and N-gram Models." Thesis, Université d'Ottawa / University of Ottawa, 2018. http://hdl.handle.net/10393/37324.
Felt, Paul L. "Facilitating Corpus Annotation by Improving Annotation Aggregation." BYU ScholarsArchive, 2015. https://scholarsarchive.byu.edu/etd/5678.
Books on the topic "Semantic embeddings"
Bratko, Aleksandr. Artificial intelligence, legal system and state functions. INFRA-M Academic Publishing LLC, 2020. http://dx.doi.org/10.12737/1064996.
Moss, Sarah. Indicative conditionals. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198792154.003.0004.
Henning, Tim. Parentheticalism about “Believe”. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198797036.003.0002.
Camp, Elisabeth. A Dual Act Analysis of Slurs. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198758655.003.0003.
Penco, Carlo. Donnellan’s misdescriptions and loose talk. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198714217.003.0007.
Zimmermann, Thomas Ede. Fregean Compositionality. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198739548.003.0010.
Book chapters on the topic "Semantic embeddings"
Demir, Caglar, and Axel-Cyrille Ngonga Ngomo. "Convolutional Complex Knowledge Graph Embeddings." In The Semantic Web, 409–24. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-77385-4_24.
Mohamed, Sameh K., and Vít Nováček. "Link Prediction Using Multi Part Embeddings." In The Semantic Web, 240–54. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-21348-0_16.
Jain, Nitisha, Jan-Christoph Kalo, Wolf-Tilo Balke, and Ralf Krestel. "Do Embeddings Actually Capture Knowledge Graph Semantics?" In The Semantic Web, 143–59. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-77385-4_9.
Wu, Tianxing, Du Zhang, Lei Zhang, and Guilin Qi. "Cross-Lingual Taxonomy Alignment with Bilingual Knowledge Graph Embeddings." In Semantic Technology, 251–58. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-70682-5_16.
Moreno, Jose G., Romaric Besançon, Romain Beaumont, Eva D’hondt, Anne-Laure Ligozat, Sophie Rosset, Xavier Tannier, and Brigitte Grau. "Combining Word and Entity Embeddings for Entity Linking." In The Semantic Web, 337–52. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-58068-5_21.
Kolyvakis, Prodromos, Alexandros Kalousis, and Dimitris Kiritsis. "Hyperbolic Knowledge Graph Embeddings for Knowledge Base Completion." In The Semantic Web, 199–214. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-49461-2_12.
Gonçalves, Rafael S., Maulik R. Kamdar, and Mark A. Musen. "Aligning Biomedical Metadata with Ontologies Using Clustering and Embeddings." In The Semantic Web, 146–61. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-21348-0_10.
Atzeni, Mattia, and Diego Reforgiato Recupero. "Fine-Tuning of Word Embeddings for Semantic Sentiment Analysis." In Semantic Web Challenges, 140–50. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-00072-1_12.
Mountantonakis, Michalis, and Yannis Tzitzikas. "Knowledge Graph Embeddings over Hundreds of Linked Datasets." In Metadata and Semantic Research, 150–62. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-36599-8_13.
Chekol, Melisachew Wudage, and Giuseppe Pirrò. "Refining Node Embeddings via Semantic Proximity." In Lecture Notes in Computer Science, 74–91. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-62419-4_5.
Conference papers on the topic "Semantic embeddings"
Lécué, Freddy, Jiaoyan Chen, Jeff Z. Pan, and Huajun Chen. "Augmenting Transfer Learning with Semantic Reasoning." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/246.
Le, Tuan M. V., and Hady W. Lauw. "Semantic Visualization for Short Texts with Word Embeddings." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/288.
Wehrmann, Jônatas, and Rodrigo C. Barros. "Language-Agnostic Visual-Semantic Embeddings." In Concurso de Teses e Dissertações da SBC. Sociedade Brasileira de Computação, 2021. http://dx.doi.org/10.5753/ctd.2021.15751.
Bollegala, Danushka, Kohei Hayashi, and Ken-ichi Kawarabayashi. "Think Globally, Embed Locally --- Locally Linear Meta-embedding of Words." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/552.
Xun, Guangxu, Yaliang Li, Wayne Xin Zhao, Jing Gao, and Aidong Zhang. "A Correlated Topic Model Using Word Embeddings." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/588.
Kulmanov, Maxat, Wang Liu-Wei, Yuan Yan, and Robert Hoehndorf. "EL Embeddings: Geometric Construction of Models for the Description Logic EL++." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/845.
Qi, Zhiyuan, Ziheng Zhang, Jiaoyan Chen, Xi Chen, Yuejia Xiang, Ningyu Zhang, and Yefeng Zheng. "Unsupervised Knowledge Graph Alignment by Probabilistic Reasoning and Semantic Embedding." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/278.
Wehrmann, Jonatas, Mauricio Armani Lopes, Douglas Souza, and Rodrigo Barros. "Language-Agnostic Visual-Semantic Embeddings." In 2019 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2019. http://dx.doi.org/10.1109/iccv.2019.00590.
Tsai, Yao-Hung Hubert, Liang-Kang Huang, and Ruslan Salakhutdinov. "Learning Robust Visual-Semantic Embeddings." In 2017 IEEE International Conference on Computer Vision (ICCV). IEEE, 2017. http://dx.doi.org/10.1109/iccv.2017.386.
Weston, Jason, Sumit Chopra, and Keith Adams. "#TagSpace: Semantic Embeddings from Hashtags." In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Stroudsburg, PA, USA: Association for Computational Linguistics, 2014. http://dx.doi.org/10.3115/v1/d14-1194.