Selection of scientific literature on the topic "Sentence Embedding Spaces"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles


Consult the lists of current articles, books, theses, conference reports, and other scholarly sources on the topic "Sentence Embedding Spaces."

Next to every entry in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication in PDF format and read its online annotation, provided the relevant parameters are available in the metadata.

Journal articles on the topic "Sentence Embedding Spaces"

1

Nguyen, Huy Manh, Tomo Miyazaki, Yoshihiro Sugaya, and Shinichiro Omachi. "Multiple Visual-Semantic Embedding for Video Retrieval from Query Sentence." Applied Sciences 11, no. 7 (2021): 3214. http://dx.doi.org/10.3390/app11073214.

Annotation:
Visual-semantic embedding aims to learn a joint embedding space where related video and sentence instances are located close to each other. Most existing methods put instances in a single embedding space. However, they struggle to embed instances due to the difficulty of matching visual dynamics in videos to textual features in sentences. A single space is not enough to accommodate various videos and sentences. In this paper, we propose a novel framework that maps instances into multiple individual embedding spaces so that we can capture multiple relationships between instances, leading to com…
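The joint-embedding idea summarized above can be sketched in a few lines: two encoders map a video and a sentence into the same vector space, and retrieval reduces to ranking by cosine similarity. This is a minimal illustration under that standard formulation, not the paper's actual model; the toy vectors stand in for encoder outputs.

```python
import numpy as np

def cosine_similarity(a, b):
    # Similarity of two embeddings in a shared (joint) space.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical encoder outputs: one query sentence, three candidate videos.
sentence = np.array([0.25, 0.85, 0.05])
videos = [np.array([0.2, 0.9, 0.1]),    # related clip
          np.array([-0.7, 0.1, 0.6]),   # unrelated clip
          np.array([0.1, 0.4, 0.9])]    # partially related clip

scores = [cosine_similarity(sentence, v) for v in videos]
best = int(np.argmax(scores))  # index of the retrieved video
```

Training such a space typically pushes matching pairs' similarity above that of mismatched pairs; the retrieval step itself is just this nearest-neighbor search.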
2

Liu, Yi, Chengyu Yin, Jingwei Li, Fang Wang, and Senzhang Wang. "Predicting Dynamic User–Item Interaction with Meta-Path Guided Recursive RNN." Algorithms 15, no. 3 (2022): 80. http://dx.doi.org/10.3390/a15030080.

Annotation:
Accurately predicting user–item interactions is critically important in many real applications, including recommender systems and user behavior analysis in social networks. One major drawback of existing studies is that they generally analyze the sparse user–item interaction data directly, without considering its semantic correlations and the structural information hidden in the data. Another limitation is that existing approaches usually embed the users and items into different embedding spaces in a static way, ignoring the dynamic characteristics of both users and items. In this paper…
3

Qian, Chen, Fuli Feng, Lijie Wen, and Tat-Seng Chua. "Conceptualized and Contextualized Gaussian Embedding." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 15 (2021): 13683–91. http://dx.doi.org/10.1609/aaai.v35i15.17613.

Annotation:
Word embedding can represent a word as a point vector or a Gaussian distribution in a high-dimensional space. A Gaussian distribution is innately more expressive than a point vector owing to its ability to additionally capture the semantic uncertainty of words, and it can thus express asymmetric relations among words more naturally (e.g., animal entails cat but not the reverse). However, previous Gaussian embedders neglect inner-word conceptual knowledge and lack a tailored Gaussian contextualizer, leading to inferior performance on both intrinsic (context-agnostic) and extrinsic (context-sensitive) tasks.
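The asymmetry that makes Gaussian embeddings attractive for entailment can be illustrated with the KL divergence between two diagonal Gaussians: a broad distribution ("animal") covers a narrow one ("cat") much more cheaply than the reverse. The means and variances below are invented purely for illustration and are not taken from the paper.

```python
import numpy as np

def kl_diag_gaussians(mu0, var0, mu1, var1):
    # KL(N(mu0, var0) || N(mu1, var1)) for diagonal covariances.
    mu0, var0, mu1, var1 = map(np.asarray, (mu0, var0, mu1, var1))
    k = mu0.size
    return 0.5 * (np.sum(var0 / var1)
                  + np.sum((mu1 - mu0) ** 2 / var1)
                  - k
                  + np.sum(np.log(var1) - np.log(var0)))

# "animal" is broad; "cat" is a narrow distribution inside it.
animal_mu, animal_var = [0.0], [2.0]
cat_mu, cat_var = [0.5], [0.5]

forward = kl_diag_gaussians(cat_mu, cat_var, animal_mu, animal_var)  # cat -> animal: cheap
reverse = kl_diag_gaussians(animal_mu, animal_var, cat_mu, cat_var)  # animal -> cat: costly
```

Because KL is asymmetric, thresholding it gives a directional entailment score that point vectors with symmetric cosine similarity cannot provide.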
4

Cantini, Riccardo, Fabrizio Marozzo, Giovanni Bruno, and Paolo Trunfio. "Learning Sentence-to-Hashtags Semantic Mapping for Hashtag Recommendation on Microblogs." ACM Transactions on Knowledge Discovery from Data 16, no. 2 (2022): 1–26. http://dx.doi.org/10.1145/3466876.

Annotation:
The growing use of microblogging platforms is generating a huge amount of posts that need effective methods to be classified and searched. On Twitter and other social media platforms, hashtags are exploited by users to facilitate the search, categorization, and spread of posts. Choosing the appropriate hashtags for a post is not always easy for users, and therefore posts are often published without hashtags or with ill-defined hashtags. To deal with this issue, we propose a new model, called HASHET (HAshtag recommendation using Sentence-to-Hashtag Embedding Translation), aimed at sugges…
5

Zhang, Yachao, Runze Hu, Ronghui Li, Yanyun Qu, Yuan Xie, and Xiu Li. "Cross-Modal Match for Language Conditioned 3D Object Grounding." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 7 (2024): 7359–67. http://dx.doi.org/10.1609/aaai.v38i7.28566.

Annotation:
Language conditioned 3D object grounding aims to find the object within the 3D scene mentioned by natural language descriptions, which mainly depends on the matching between visual and natural language. Considerable improvement in grounding performance is achieved by improving the multimodal fusion mechanism or bridging the gap between detection and matching. However, several mismatches are ignored, i.e., mismatch in local visual representation and global sentence representation, and mismatch in visual space and corresponding label word space. In this paper, we propose crossmodal match for 3D…
6

Dancygier, Barbara. "Mental space embeddings, counterfactuality, and the use of unless." English Language and Linguistics 6, no. 2 (2002): 347–77. http://dx.doi.org/10.1017/s1360674302000278.

Annotation:
Unless-constructions have often been compared with conditionals. It was noted that unless can in most cases be paraphrased with if not, but that its meaning resembles that of except if (Geis, 1973; von Fintel, 1991). Initially, it was also assumed that, unlike if-conditionals, unless-sentences with counterfactual (or irrealis) meanings are not acceptable. In recent studies by Declerck and Reed (2000, 2001), however, the acceptability of such sentences was demonstrated and a new analysis was proposed. The present article argues for an account of irrealis unless-sentences in terms of epistemic di…
7

Amigo, Enrique, Alejandro Ariza-Casabona, Victor Fresno, and M. Antonia Marti. "Information Theory–based Compositional Distributional Semantics." Computational Linguistics 48, no. 4 (2022): 907–48. http://dx.doi.org/10.1162/_.

Annotation:
In the context of text representation, Compositional Distributional Semantics models aim to fuse the Distributional Hypothesis and the Principle of Compositionality. Text embedding is based on co-occurrence distributions, and the representations are in turn combined by compositional functions that take the text structure into account. However, the theoretical basis of compositional functions is still an open issue. In this article we define and study the notion of Information Theory–based Compositional Distributional Semantics (ICDS): (i) We first establish formal properties for embedding…
8

Faraz, Anum, Fardin Ahsan, Jinane Mounsef, Ioannis Karamitsos, and Andreas Kanavos. "Enhancing Child Safety in Online Gaming: The Development and Application of Protectbot, an AI-Powered Chatbot Framework." Information 15, no. 4 (2024): 233. http://dx.doi.org/10.3390/info15040233.

Annotation:
This study introduces Protectbot, an innovative chatbot framework designed to improve safety in children's online gaming environments. At its core, Protectbot incorporates DialoGPT, a conversational Artificial Intelligence (AI) model rooted in Generative Pre-trained Transformer 2 (GPT-2) technology, engineered to simulate human-like interactions within gaming chat rooms. The framework is distinguished by a robust text classification strategy, rigorously trained on the Publicly Available Natural 2012 (PAN12) dataset, aimed at identifying and mitigating potential sexual predatory behaviors throu…
9

Croce, Danilo, Giuseppe Castellucci, and Roberto Basili. "Adversarial training for few-shot text classification." Intelligenza Artificiale 14, no. 2 (2021): 201–14. http://dx.doi.org/10.3233/ia-200051.

Annotation:
In recent years, Deep Learning methods have become very popular in classification tasks for Natural Language Processing (NLP); this is mainly due to their ability to reach high performance while relying on very simple input representations, i.e., raw tokens. One of the drawbacks of deep architectures is the large amount of annotated data required for effective training. Usually, in Machine Learning this problem is mitigated by the use of semi-supervised methods or, more recently, by using Transfer Learning in the context of deep architectures. One recent promising method to enable semi-sup…
10

Hao, Sun, Xiaolin Qin, and Xiaojing Liu. "Learning hierarchical embedding space for image-text matching." Intelligent Data Analysis, September 14, 2023, 1–19. http://dx.doi.org/10.3233/ida-230214.

Annotation:
There are two mainstream strategies for image-text matching at present. The first, termed joint embedding learning, aims to model the semantic information of both image and sentence in a shared feature subspace, which facilitates the measurement of semantic similarity but focuses only on the global alignment relationship. To explore the local semantic relationship more fully, the second, termed metric learning, aims to learn a complex similarity function that directly outputs a score for each image-text pair. However, it suffers from a significantly higher computational burden at the retrieval stage. In th…

Dissertations on the topic "Sentence Embedding Spaces"

1

Duquenne, Paul-Ambroise. "Sentence Embeddings for Massively Multilingual Speech and Text Processing." Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS039.

Annotation:
Learning mathematical representations of sentences in their textual form has been widely studied in natural language processing (NLP). While much research has explored different pre-training objective functions for building contextual representations of the words in a sentence, other work has focused on learning sentence representations as single vectors, i.e., fixed-size representations (as opposed to a sequence of vectors whose length depends on the length of the sentence), for multiple languages. The aim being…

Book chapters on the topic "Sentence Embedding Spaces"

1

Alnajjar, Khalid. "When Word Embeddings Become Endangered." In Multilingual Facilitation. University of Helsinki, 2021. http://dx.doi.org/10.31885/9789515150257.24.

Annotation:
Big languages such as English and Finnish have many natural language processing (NLP) resources and models, but this is not the case for low-resource and endangered languages, for which such resources remain scarce despite the great advantages they would provide to the language communities. The most common types of resources available for low-resource and endangered languages are translation dictionaries and universal dependencies. In this paper, we present a method for constructing word embeddings for endangered languages using existing word embeddings of different resource-rich languages and the translation dictionaries of resource-poor languages. Thereafter, the embeddings are fine-tuned using the sentences in the universal dependencies and aligned to match the semantic spaces of the big languages, resulting in cross-lingual embeddings. The endangered languages we work with here are Erzya, Moksha, Komi-Zyrian and Skolt Sami. Furthermore, we build a universal sentiment analysis model for all the languages that are part of this study, whether endangered or not, by utilizing cross-lingual word embeddings. The evaluation conducted shows that our word embeddings for endangered languages are well aligned with the resource-rich languages, and they are suitable for training task-specific models, as demonstrated by our sentiment analysis models, which achieved high accuracies. All our cross-lingual word embeddings and sentiment analysis models will be released openly via an easy-to-use Python library.
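The alignment step this chapter describes, mapping embeddings of a low-resource language into the space of a resource-rich one via translation pairs, is commonly solved with an orthogonal Procrustes fit; the sketch below assumes that standard formulation rather than the authors' exact procedure, and uses a synthetic rotated space in place of real dictionary data.

```python
import numpy as np

def procrustes_align(src, tgt):
    # Orthogonal matrix W minimising ||src @ W - tgt||_F, where row i of
    # src and tgt are the embeddings of a translation pair.
    u, _, vt = np.linalg.svd(src.T @ tgt)
    return u @ vt

rng = np.random.default_rng(0)
tgt_space = rng.normal(size=(50, 8))             # "rich" language vectors
rotation, _ = np.linalg.qr(rng.normal(size=(8, 8)))
src_space = tgt_space @ rotation.T               # "poor" language: rotated copy

W = procrustes_align(src_space, tgt_space)
aligned = src_space @ W                          # now lives in the target space
```

Restricting W to be orthogonal preserves distances and angles within the source space, which is why dictionary-based alignment of this kind keeps monolingual structure intact.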
2

Xiao, Qingfa, Shuangyin Li, and Lei Chen. "Identical and Fraternal Twins: Fine-Grained Semantic Contrastive Learning of Sentence Representations." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2023. http://dx.doi.org/10.3233/faia230584.

Annotation:
The enhancement of unsupervised learning of sentence representations has been significantly advanced by the use of contrastive learning. This approach clusters the augmented positive instance with the anchor instance to create a desired embedding space. However, relying solely on the contrastive objective can result in sub-optimal outcomes due to its inability to differentiate subtle semantic variations between positive pairs. Specifically, common data augmentation techniques frequently introduce semantic distortion, leading to a semantic margin between the positive pair. The InfoNCE loss function overlooks this semantic margin and prioritizes similarity maximization between positive pairs during training, leading to insensitive semantic comprehension in the trained model. In this paper, we introduce a novel Identical and Fraternal Twins of Contrastive Learning (named IFTCL) framework, capable of simultaneously adapting to various positive pairs generated by different augmentation techniques. We propose a Twins Loss to preserve the innate margin during training and promote the potential of data enhancement in order to overcome the sub-optimal issue. We also present proof-of-concept experiments combined with the contrastive objective to prove the validity of the proposed Twins Loss. Furthermore, we propose a hippocampus queue mechanism to restore and reuse the negative instances without additional calculation, which further enhances the efficiency and performance of the IFCL. We verify the IFCL framework on nine semantic textual similarity tasks with both English and Chinese datasets, and the experimental results show that IFCL outperforms state-of-the-art methods.
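The InfoNCE objective the abstract critiques can be written down compactly; this sketch shows the standard temperature-scaled formulation over one anchor, one positive, and a set of negatives, not the paper's Twins Loss. The vectors are toy values.

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.05):
    # Standard InfoNCE: softmax over cosine similarities, positive at index 0.
    def cos(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    logits = np.array([cos(anchor, positive)]
                      + [cos(anchor, n) for n in negatives]) / tau
    logits -= logits.max()                       # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return float(-np.log(probs[0]))

anchor = np.array([1.0, 0.0])
close_positive = np.array([0.9, 0.1])
negatives = [np.array([-1.0, 0.2]), np.array([0.0, 1.0])]

loss = info_nce(anchor, close_positive, negatives)
```

As the abstract notes, this loss only pushes the positive's similarity up relative to the negatives; it carries no term that distinguishes a lightly augmented positive from a semantically distorted one.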

Conference papers on the topic "Sentence Embedding Spaces"

1

Zhang, Chengkun, and Junbin Gao. "Hype-HAN: Hyperbolic Hierarchical Attention Network for Semantic Embedding." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence (IJCAI-PRICAI-20). International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/552.

Annotation:
Hyperbolic space is a well-defined space with constant negative curvature. Recent research demonstrates its ability to capture complex hierarchical structures thanks to its exceptionally high capacity and continuous tree-like properties. This paper connects hyperbolic space's strengths to the power-law structure of documents by introducing a hyperbolic neural network architecture named Hyperbolic Hierarchical Attention Network (Hype-HAN). Hype-HAN defines three levels of embeddings (word/sentence/document) and two layers of hyperbolic attention mechanism (word-to-sentence/sentence-to-document) on Rie…
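Hyperbolic embeddings of the kind Hype-HAN builds on are often realised in the Poincaré ball, where geodesic distance grows rapidly toward the boundary; this is what gives the space its tree-like capacity. Below is the standard Poincaré distance formula as a minimal sketch, not the paper's code.

```python
import numpy as np

def poincare_distance(u, v):
    # Geodesic distance in the Poincaré ball (all points have norm < 1).
    sq_dist = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return float(np.arccosh(1.0 + 2.0 * sq_dist / denom))

near_origin = poincare_distance(np.array([0.1, 0.0]), np.array([0.2, 0.0]))
near_boundary = poincare_distance(np.array([0.8, 0.0]), np.array([0.9, 0.0]))
# The same Euclidean gap is far "longer" near the boundary, leaving
# exponentially more room to separate the leaves of a hierarchy.
```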
2

Wei, Liangchen, and Zhi-Hong Deng. "A Variational Autoencoding Approach for Inducing Cross-lingual Word Embeddings." In Twenty-Sixth International Joint Conference on Artificial Intelligence. International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/582.

Annotation:
Cross-language learning allows one to use training data from one language to build models for another language. Many traditional approaches require word-level alignments from parallel corpora; in this paper, we define a general bilingual training objective function that requires only a sentence-level parallel corpus. We propose a variational autoencoding approach for training bilingual word embeddings. The variational model introduces a continuous latent variable to explicitly model the underlying semantics of the parallel sentence pairs and to guide the generation of the sentence pairs. Our…
3

Xu, Linli, Wenjun Ouyang, Xiaoying Ren, Yang Wang, and Liang Jiang. "Enhancing Semantic Representations of Bilingual Word Embeddings with Syntactic Dependencies." In Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18). International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/628.

Annotation:
Cross-lingual representation is a technique that can both represent different languages in the same latent vector space and enable knowledge transfer across languages. To learn such representations, most existing works require parallel sentences with word-level alignments and assume that aligned words have similar Bag-of-Words (BoW) contexts. However, due to differences in grammar structures among different languages, the contexts of aligned words in different languages may appear at different positions in the sentence. To address this issue of different syntax across lang…
4

Baumel, Tal, Raphael Cohen, and Michael Elhadad. "Sentence Embedding Evaluation Using Pyramid Annotation." In Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP. Association for Computational Linguistics, 2016. http://dx.doi.org/10.18653/v1/w16-2526.

5

Yi, Xiaoyuan, Zhenghao Liu, Wenhao Li, and Maosong Sun. "Text Style Transfer via Learning Style Instance Supported Latent Space." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence (IJCAI-PRICAI-20). International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/526.

Annotation:
Text style transfer pursues altering the style of a sentence while keeping its main content unchanged. Due to the lack of parallel corpora, most recent work focuses on unsupervised methods and has achieved noticeable progress. Nonetheless, the intractability of completely disentangling content from style in text leads to a tension between content preservation and style transfer accuracy. To address this problem, we propose a style instance supported method, StyIns. Instead of representing styles with embeddings or latent variables learned from single sentences, our model leverages the gene…
6

An, Yuan, Alexander Kalinowski, and Jane Greenberg. "Clustering and Network Analysis for the Embedding Spaces of Sentences and Sub-Sentences." In 2021 Second International Conference on Intelligent Data Science Technologies and Applications (IDSTA). IEEE, 2021. http://dx.doi.org/10.1109/idsta53674.2021.9660801.

7

Sato, Motoki, Jun Suzuki, Hiroyuki Shindo, and Yuji Matsumoto. "Interpretable Adversarial Perturbation in Input Embedding Space for Text." In Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18). International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/601.

Annotation:
Following great success in the image processing field, the idea of adversarial training has been applied to tasks in the natural language processing (NLP) field. One promising approach directly applies adversarial training developed in the image processing field to the input word embedding space instead of the discrete input space of texts. However, this approach abandons such interpretability as generating adversarial texts to significantly improve the performance of NLP tasks. This paper restores interpretability to such methods by restricting the directions of perturbations toward the exist…
8

Hwang, Eugene. "Saving Endangered Languages with a Novel Three-Way Cycle Cross-Lingual Zero-Shot Sentence Alignment." In 10th International Conference on Artificial Intelligence & Applications. Academy & Industry Research Collaboration Center, 2023. http://dx.doi.org/10.5121/csit.2023.131926.

Annotation:
Sentence classification, including sentiment analysis, hate speech detection, tagging, and urgency detection, is one of the most promising and important subjects in the Natural Language Processing field. With the advent of artificial neural networks, researchers usually take advantage of models suited to processing natural languages, including RNNs, LSTMs and BERT. However, these models require a huge amount of language corpus data to attain satisfactory accuracy. Typically this is not a big deal for researchers who are using major languages such as English and Chinese, because there are a my…
9

Li, Wenye, Jiawei Zhang, Jianjun Zhou, and Laizhong Cui. "Learning Word Vectors with Linear Constraints: A Matrix Factorization Approach." In Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18). International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/582.

Annotation:
Learning vector space representation of words, or word embedding, has attracted much recent research attention. With the objective of better capturing the semantic and syntactic information inherent in words, we propose two new embedding models based on the singular value decomposition of lexical co-occurrences of words. Different from previous work, our proposed models allow for injecting linear constraints when performing the decomposition, with which the desired semantic and syntactic information will be maintained in word vectors. Conceptually the models are flexible and convenient to enco…
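The SVD-of-co-occurrence idea underlying such models can be shown in a few lines; the tiny count matrix below is fabricated for illustration, and none of the paper's linear constraints are applied here.

```python
import numpy as np

# Toy word-word co-occurrence counts for a 4-word vocabulary.
cooc = np.array([[4.0, 2.0, 1.0, 0.0],
                 [2.0, 3.0, 0.0, 1.0],
                 [1.0, 0.0, 2.0, 2.0],
                 [0.0, 1.0, 2.0, 3.0]])

u, s, vt = np.linalg.svd(cooc)
k = 2                               # keep the top-k singular directions
word_vectors = u[:, :k] * s[:k]     # one k-dimensional vector per word
```

The truncation to rank k is what compresses raw counts into dense vectors; constrained variants of the kind the paper proposes restrict the factors during this decomposition rather than after it.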
10

Dimovski, Mladen, Claudiu Musat, Vladimir Ilievski, Andreea Hossman, and Michael Baeriswyl. "Submodularity-Inspired Data Selection for Goal-Oriented Chatbot Training Based on Sentence Embeddings." In Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18). International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/559.

Annotation:
Spoken language understanding (SLU) systems, such as goal-oriented chatbots or personal assistants, rely on an initial natural language understanding (NLU) module to determine the intent and to extract the relevant information from the user queries they take as input. SLU systems usually help users to solve problems in relatively narrow domains and require a large amount of in-domain training data. This leads to significant data availability issues that inhibit the development of successful systems. To alleviate this problem, we propose a technique of data selection in the low-data regime that…
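Submodularity-inspired data selection of the flavour described above is often implemented as greedy maximisation of a coverage function over sentence embeddings; the facility-location objective below is a generic stand-in for the paper's actual criterion, with random unit vectors in place of real sentence embeddings.

```python
import numpy as np

def greedy_select(embeddings, k):
    # Greedy maximisation of a facility-location objective: the sum, over
    # all points, of the max similarity to any selected point.
    sims = embeddings @ embeddings.T            # rows assumed L2-normalised
    selected = []
    for _ in range(k):
        gains = []
        for i in range(len(embeddings)):
            if i in selected:
                gains.append(-np.inf)           # never re-pick a point
                continue
            cand = selected + [i]
            gains.append(sims[:, cand].max(axis=1).sum())
        selected.append(int(np.argmax(gains)))
    return selected

rng = np.random.default_rng(1)
embs = rng.normal(size=(20, 5))
embs /= np.linalg.norm(embs, axis=1, keepdims=True)
subset = greedy_select(embs, 3)                 # 3 diverse, representative items
```

Because the objective is monotone submodular, this greedy loop enjoys the classic (1 - 1/e) approximation guarantee, which is what makes such selection attractive in the low-data regime.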