Journal articles on the topic 'Embedding techniques'

Consult the top 50 journal articles for your research on the topic 'Embedding techniques.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Meyer, Francois, Brink van der Merwe, and Dirko Coetsee. "Learning Concept Embeddings from Temporal Data." JUCS - Journal of Universal Computer Science 24, no. 10 (2018): 1378–402. https://doi.org/10.3217/jucs-024-10-1378.

Full text
Abstract:
Word embedding techniques can be used to learn vector representations of concepts from temporal datasets. Previous attempts to do this amounted to applying word embedding techniques to event sequences. We propose a concept embedding model that extends existing word embedding techniques to take time into account by explicitly modelling the time between concept occurrences. The model is implemented and evaluated using medical temporal data. It is found that incorporating time into the learning algorithm can improve the quality of the resulting embeddings, as measured by an existing methodological
2

Duong, Chi Thang, Trung Dung Hoang, Hongzhi Yin, Matthias Weidlich, Quoc Viet Hung Nguyen, and Karl Aberer. "Scalable robust graph embedding with Spark." Proceedings of the VLDB Endowment 15, no. 4 (2021): 914–22. http://dx.doi.org/10.14778/3503585.3503599.

Full text
Abstract:
Graph embedding aims at learning a vector-based representation of vertices that incorporates the structure of the graph. This representation then enables inference of graph properties. Existing graph embedding techniques, however, do not scale well to large graphs. While several techniques to scale graph embedding using compute clusters have been proposed, they require continuous communication between the compute nodes and cannot handle node failure. We therefore propose a framework for scalable and robust graph embedding based on the MapReduce model, which can distribute any existing embeddin
3

Li, Pandeng, Yan Li, Hongtao Xie, and Lei Zhang. "Neighborhood-Adaptive Structure Augmented Metric Learning." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 2 (2022): 1367–75. http://dx.doi.org/10.1609/aaai.v36i2.20025.

Full text
Abstract:
Most metric learning techniques typically focus on sample embedding learning, while implicitly assuming a homogeneous local neighborhood around each sample, based on the metrics used in training (e.g., a hypersphere for Euclidean distance or a unit hyperspherical crown for cosine distance). As real-world data often lies on a low-dimensional manifold curved in a high-dimensional space, it is unlikely that every part of the manifold shares the same local structures in the input space. Besides, considering the non-linearity of neural networks, the local structure in the output embedding space may not
4

Jadon, Anil Kumar, and Suresh Kumar. "Enhancing emotion detection with synergistic combination of word embeddings and convolutional neural networks." Indonesian Journal of Electrical Engineering and Computer Science 35, no. 3 (2024): 1933. http://dx.doi.org/10.11591/ijeecs.v35.i3.pp1933-1941.

Full text
Abstract:
Recognizing emotions in textual data is crucial in a wide range of natural language processing (NLP) applications, from consumer sentiment research to mental health evaluation. The word embedding techniques play a pivotal role in text processing. In this paper, the performance of several well-known word embedding methods is evaluated in the context of emotion recognition. The classification of emotions is further enhanced using a convolutional neural network (CNN) model because of its propensity to capture local patterns and its recent triumphs in text-related tasks. The integration of CNN wit
5

Jadon, Anil Kumar, and Suresh Kumar. "Enhancing emotion detection with synergistic combination of word embeddings and convolutional neural networks." Indonesian Journal of Electrical Engineering and Computer Science 35, no. 3 (2024): 1933–41. https://doi.org/10.11591/ijeecs.v35.i3.pp1933-1941.

Full text
Abstract:
Recognizing emotions in textual data is crucial in a wide range of natural language processing (NLP) applications, from consumer sentiment research to mental health evaluation. The word embedding techniques play a pivotal role in text processing. In this paper, the performance of several well-known word embedding methods is evaluated in the context of emotion recognition. The classification of emotions is further enhanced using a convolutional neural network (CNN) model because of its propensity to capture local patterns and its recent triumphs in text-related tasks. The integration of CNN wit
6

Mao, Yuqing, and Kin Wah Fung. "Use of word and graph embedding to measure semantic relatedness between Unified Medical Language System concepts." Journal of the American Medical Informatics Association 27, no. 10 (2020): 1538–46. http://dx.doi.org/10.1093/jamia/ocaa136.

Full text
Abstract:
Objective: The study sought to explore the use of deep learning techniques to measure the semantic relatedness between Unified Medical Language System (UMLS) concepts. Materials and Methods: Concept sentence embeddings were generated for UMLS concepts by applying the word embedding models BioWordVec and various flavors of BERT to concept sentences formed by concatenating UMLS terms. Graph embeddings were generated by the graph convolutional networks and 4 knowledge graph embedding models, using graphs built from UMLS hierarchical relations. Semantic relatedness was measured by the cosin
7

Ma, Xingyu, and Bin Yao. "Embedding Numerical Features and Meta-Features in Tabular Deep Learning." Information Technology and Control 54, no. 2 (2025): 662–81. https://doi.org/10.5755/j01.itc.54.2.39134.

Full text
Abstract:
Tabular data is ubiquitous in real-world applications, and an increasing number of deep learning approaches have been developed for tabular data prediction. Among these approaches, embedding techniques serve as both a common and essential component. However, the design of tabular embedding paradigms remains relatively limited, and there is a lack of systematic evaluation regarding the performance of many existing methods in specific scenarios. In this paper, we focus on embedding numerical features and meta-features. To enrich the embedding methods for numerical features, we propose an orderin
8

Tan, Eugene, Shannon Algar, Débora Corrêa, Michael Small, Thomas Stemler, and David Walker. "Selecting embedding delays: An overview of embedding techniques and a new method using persistent homology." Chaos: An Interdisciplinary Journal of Nonlinear Science 33, no. 3 (2023): 032101. http://dx.doi.org/10.1063/5.0137223.

Full text
Abstract:
Delay embedding methods are a staple tool in the field of time series analysis and prediction. However, the selection of embedding parameters can have a big impact on the resulting analysis. This has led to the creation of a large number of methods to optimize the selection of parameters such as embedding lag. This paper aims to provide a comprehensive overview of the fundamentals of embedding theory for readers who are new to the subject. We outline a collection of existing methods for selecting embedding lag in both uniform and non-uniform delay embedding cases. Highlighting the poor dynamic
9

Samanta, Saurav. "Noncommutativity from Embedding Techniques." Modern Physics Letters A 21, no. 08 (2006): 675–89. http://dx.doi.org/10.1142/s0217732306019037.

Full text
Abstract:
We apply the embedding method of Batalin–Tyutin for revealing noncommutative structures in the generalized Landau problem. Different types of noncommutativity follow from different gauge choices. This establishes a duality among the distinct algebras. An alternative approach is discussed which yields equivalent results as the embedding method. We also discuss the consequences in the Landau problem for a non-constant magnetic field.
10

Alkaabi, Hussein, Ali Kadhim Jasim, and Ali Darroudi. "From Static to Contextual: A Survey of Embedding Advances in NLP." PERFECT: Journal of Smart Algorithms 2, no. 2 (2025): 57–66. https://doi.org/10.62671/perfect.v2i2.77.

Full text
Abstract:
Embedding techniques have been a cornerstone of Natural Language Processing (NLP), enabling machines to represent textual data in a form that captures semantic and syntactic relationships. Over the years, the field has witnessed a significant evolution—from static word embeddings, such as Word2Vec and GloVe, which represent words as fixed vectors, to dynamic, contextualized embeddings like BERT and GPT, which generate word representations based on their surrounding context. This survey provides a comprehensive overview of embedding techniques, tracing their development from early methods to st
11

Liang, Jiongqian, Saket Gurukar, and Srinivasan Parthasarathy. "MILE: A Multi-Level Framework for Scalable Graph Embedding." Proceedings of the International AAAI Conference on Web and Social Media 15 (May 22, 2021): 361–72. http://dx.doi.org/10.1609/icwsm.v15i1.18067.

Full text
Abstract:
Recently there has been a surge of interest in designing graph embedding methods. Few, if any, can scale to a large-sized graph with millions of nodes due to both computational complexity and memory requirements. In this paper, we relax this limitation by introducing the MultI-Level Embedding (MILE) framework – a generic methodology allowing contemporary graph embedding methods to scale to large graphs. MILE repeatedly coarsens the graph into smaller ones using a hybrid matching technique to maintain the backbone structure of the graph. It then applies existing embedding methods on the coarses
12

Moudhich, Ihab, and Abdelhadi Fennan. "Evaluating sentiment analysis and word embedding techniques on Brexit." IAES International Journal of Artificial Intelligence (IJ-AI) 13, no. 1 (2024): 695–702. https://doi.org/10.11591/ijai.v13.i1.pp695-702.

Full text
Abstract:
In this study, we investigate the effectiveness of pre-trained word embeddings for sentiment analysis on a real-world topic, namely Brexit. We compare the performance of several popular word embedding models, such as global vectors for word representation (GloVe), FastText, word2vec, and embeddings from language models (ELMo), on a dataset of tweets related to Brexit and evaluate their ability to classify the sentiment of the tweets as positive, negative, or neutral. We find that pre-trained word embeddings provide useful features for sentiment analysis and can significantly improve t
13

Niyonkuru, Enock, Mauricio Soto Gomez, Elena Casarighi, et al. "Replacing non-biomedical concepts improves embedding of biomedical concepts." PLOS One 20, no. 5 (2025): e0322498. https://doi.org/10.1371/journal.pone.0322498.

Full text
Abstract:
Embeddings are semantically meaningful representations of words in a vector space, commonly used to enhance downstream machine learning applications. Traditional biomedical embedding techniques often replace all synonymous words representing biological or medical concepts with a unique token, ensuring consistent representation and improving embedding quality. However, the potential impact of replacing non-biomedical concept synonyms has received less attention. Embedding approaches often employ concept replacement to replace concepts that span multiple words, such as non-small-cell lung carcin
14

Mehta, Sweta, Pankaj K. Goswami, and K. Sridhar Patnaik. "Network Embedding Techniques for Predicting Software Defects: A Review." International Journal of Scientific Research and Management (IJSRM) 13, no. 06 (2025): 2254–75. https://doi.org/10.18535/ijsrm/v13i06.ec05.

Full text
Abstract:
In the software development process, ensuring the quality of the software is essential. Software defect prediction (SDP) is of significant importance in identifying software modules with a high likelihood of defects. Several machine learning-based defect prediction models have been developed and implemented in recent years. Researchers have also utilized network embedding for SDP, showcasing the adaptability of Natural Language Processing techniques within the domain of defect prediction. This study aims to review, investigate, and discuss network embedding's use in SDP. We examined the previo
15

Moudhich, Ihab, and Abdelhadi Fennan. "Evaluating sentiment analysis and word embedding techniques on Brexit." IAES International Journal of Artificial Intelligence (IJ-AI) 13, no. 1 (2024): 695. http://dx.doi.org/10.11591/ijai.v13.i1.pp695-702.

Full text
Abstract:
In this study, we investigate the effectiveness of pre-trained word embeddings for sentiment analysis on a real-world topic, namely Brexit. We compare the performance of several popular word embedding models, such as global vectors for word representation (GloVe), FastText, word2vec, and embeddings from language models (ELMo), on a dataset of tweets related to Brexit and evaluate their ability to classify the sentiment of the tweets as positive, negative, or neutral. We find that pre-trained word embeddings provide useful features for sentiment analysis and can significantly
16

Zhou, Jingya, Ling Liu, Wenqi Wei, and Jianxi Fan. "Network Representation Learning: From Preprocessing, Feature Extraction to Node Embedding." ACM Computing Surveys 55, no. 2 (2023): 1–35. http://dx.doi.org/10.1145/3491206.

Full text
Abstract:
Network representation learning (NRL) advances the conventional graph mining of social networks, knowledge graphs, and complex biomedical and physics information networks. Dozens of NRL algorithms have been reported in the literature. Most of them focus on learning node embeddings for homogeneous networks, but they differ in the specific encoding schemes and specific types of node semantics captured and used for learning node embedding. This article reviews the design principles and the different node embedding techniques for NRL over homogeneous networks. To facilitate the comparison of diffe
17

Srinidhi, K., T. L. S. Tejaswi, CH Rama Rupesh Kumar, and I. Sai Siva Charan. "An Advanced Sentiment Embeddings with Applications to Sentiment Based Result Analysis." International Journal of Engineering & Technology 7, no. 2.32 (2018): 393. http://dx.doi.org/10.14419/ijet.v7i2.32.15721.

Full text
Abstract:
We propose an advanced, well-trained sentiment-analysis-based adaptive analysis using word-specific embeddings, dubbed "sentiment embeddings". Available word and phrase embedding learning and training algorithms mainly make use of the contexts of terms but ignore the sentiment of texts when analyzing and classifying words and texts. Sentiment analysis of unlike words conveying the same meaning is matched to the corresponding word vector. This problem is bridged by combining the encoding of opinion-carrying text with sentiment-embedding words. But performing sentiment analysis on e-commerce, social n
18

Sabbeh, Sahar F., and Heba A. Fasihuddin. "A Comparative Analysis of Word Embedding and Deep Learning for Arabic Sentiment Classification." Electronics 12, no. 6 (2023): 1425. http://dx.doi.org/10.3390/electronics12061425.

Full text
Abstract:
Sentiment analysis on social media platforms (i.e., Twitter or Facebook) has become an important tool to learn about users’ opinions and preferences. However, the accuracy of sentiment analysis is disrupted by the challenges of natural language processing (NLP). Recently, deep learning models have proved superior performance over statistical- and lexical-based approaches in NLP-related tasks. Word embedding is an important layer of deep learning models to generate input features. Many word embedding models have been presented for text representation of both classic and context-based word embed
19

Ravindran, Renjith P., and Kavi Narayana Murthy. "Syntactic Coherence in Word Embedding Spaces." International Journal of Semantic Computing 15, no. 02 (2021): 263–90. http://dx.doi.org/10.1142/s1793351x21500057.

Full text
Abstract:
Word embeddings have recently become a vital part of many Natural Language Processing (NLP) systems. Word embeddings are a suite of techniques that represent words in a language as vectors in an n-dimensional real space that has been shown to encode a significant amount of syntactic and semantic information. When used in NLP systems, these representations have resulted in improved performance across a wide range of NLP tasks. However, it is not clear how syntactic properties interact with the more widely studied semantic properties of words. Or what the main factors in the modeling formulation
20

Bhuyar, Pankaj M., and S. W. Mohod. "Study of Steganographic Techniques for Data Hiding." International Journal of Research in Computer & Information Technology (IJRCIT) 7, no. 4 (2022): 8–12. https://doi.org/10.5281/zenodo.7180942.

Full text
Abstract:
Nowadays, the volume of data shared over the Internet is growing. As a result, data security is referred to as a major issue while processing data communications through the Internet. During communication procedures, everyone requires their data to remain secure. Steganography is the science and art of embedding audio, message, video, or image into another audio, image, video, or message to conceal it. It is used to secure confidential information from harmful attacks. This research offers a classification of digital steganography based on cover object categories, as well as a classification o
21

Toshevska, Martina, Frosina Stojanovska, and Jovan Kalajdjieski. "The Ability of Word Embeddings to Capture Word Similarities." International Journal on Natural Language Computing (IJNLC) 9, no. 3 (2020): 18. https://doi.org/10.5281/zenodo.7827290.

Full text
Abstract:
Distributed language representation has become the most widely used technique for language representation in various natural language processing tasks. Most of the natural language processing models that are based on deep learning techniques use already pre-trained distributed word representations, commonly called word embeddings. Determining the most qualitative word embeddings is of crucial importance for such models. However, selecting the appropriate word embeddings is a perplexing task since the projected embedding space is not intuitive to humans. In this paper, we explore different appr
22

Goel, Mukta, and Rohit Goel. "Comparative Analysis of Hybrid Transform Domain Image Steganography Embedding Techniques." International Journal of Scientific Research 2, no. 2 (2012): 388–90. http://dx.doi.org/10.15373/22778179/feb2013/131.

Full text
23

Del Gaizo, John, Curry Sherard, Khaled Shorbaji, Brett Welch, Roshan Mathi, and Arman Kilic. "Prediction of coronary artery bypass graft outcomes using a single surgical note: An artificial intelligence-based prediction model study." PLOS ONE 19, no. 4 (2024): e0300796. http://dx.doi.org/10.1371/journal.pone.0300796.

Full text
Abstract:
Background: Healthcare providers currently calculate risk of the composite outcome of morbidity or mortality associated with a coronary artery bypass grafting (CABG) surgery through manual input of variables into a logistic regression-based risk calculator. This study indicates that automated artificial intelligence (AI)-based techniques can instead calculate risk. Specifically, we present novel numerical embedding techniques that enable NLP (natural language processing) models to achieve higher performance than the risk calculator using a single preoperative surgical note. Methods: The most rec
24

Gerritse, Emma, Faegheh Hasibi, and Arjen De Vries. "Graph Embeddings to Empower Entity Retrieval." Information Retrieval Research 1, no. 1 (2025): 137–65. https://doi.org/10.54195/irrj.19877.

Full text
Abstract:
In this research, we investigate methods for entity retrieval using graph embeddings. While various methods have been proposed over the years, most utilize a single graph embedding and entity linking approach. This hinders our understanding of how different graph embedding and entity linking methods impact entity retrieval. To address this gap, we investigate the effects of three different categories of graph embedding techniques and five different entity linking methods. We perform a reranking of entities using the distance between the embeddings of annotated entities and the entities we wish
25

Sun, Yaozhu, Utkarsh Dhandhania, and Bruno C. d. S. Oliveira. "Compositional embeddings of domain-specific languages." Proceedings of the ACM on Programming Languages 6, OOPSLA2 (2022): 175–203. http://dx.doi.org/10.1145/3563294.

Full text
Abstract:
A common approach to defining domain-specific languages (DSLs) is via a direct embedding into a host language. There are several well-known techniques to do such embeddings, including shallow and deep embeddings. However, such embeddings come with various trade-offs in existing programming languages. Owing to such trade-offs, many embedded DSLs end up using a mix of approaches in practice, requiring a substantial amount of code, as well as some advanced coding techniques. In this paper, we show that the recently proposed Compositional Programming paradigm and the CP language provide improved s
26

Susanty, Meredita, and Sahrul Sukardi. "Perbandingan Pre-trained Word Embedding dan Embedding Layer untuk Named-Entity Recognition Bahasa Indonesia." Petir 14, no. 2 (2021): 247–57. http://dx.doi.org/10.33322/petir.v14i2.1164.

Full text
Abstract:
Named-Entity Recognition (NER) is used to extract information from text by identifying entities such as the name of the person, organization, location, time, and other entities. Recently, machine learning approaches, particularly deep-learning, are widely used to recognize patterns of entities in sentences. Embedding, a process to convert text data into a number or vector of numbers, translates high dimensional vectors into relatively low-dimensional space. Embeddings make it easier to do machine learning on large inputs like sparse vectors representing words. The embedding process can be perf
27

Cheng, Weiyu, Yanyan Shen, Linpeng Huang, and Yanmin Zhu. "Dual-Embedding based Deep Latent Factor Models for Recommendation." ACM Transactions on Knowledge Discovery from Data 15, no. 5 (2021): 1–24. http://dx.doi.org/10.1145/3447395.

Full text
Abstract:
Among various recommendation methods, latent factor models are usually considered to be state-of-the-art techniques, which aim to learn user and item embeddings for predicting user-item preferences. When applying latent factor models to the recommendation with implicit feedback, the quality of embeddings always suffers from inadequate positive feedback and noisy negative feedback. Inspired by the idea of NSVD that represents users based on their interacted items, this article proposes a dual-embedding based deep latent factor method for recommendation with implicit feedback. In addition to lea
28

Barros, Claudio D. T., Matheus R. F. Mendonça, Alex B. Vieira, and Artur Ziviani. "A Survey on Embedding Dynamic Graphs." ACM Computing Surveys 55, no. 1 (2023): 1–37. http://dx.doi.org/10.1145/3483595.

Full text
Abstract:
Embedding static graphs in low-dimensional vector spaces plays a key role in network analytics and inference, supporting applications like node classification, link prediction, and graph visualization. However, many real-world networks present dynamic behavior, including topological evolution, feature evolution, and diffusion. Therefore, several methods for embedding dynamic graphs have been proposed to learn network representations over time, facing novel challenges, such as time-domain modeling, temporal features to be captured, and the temporal granularity to be embedded. In this survey, we
29

Thiruvenkatasamy, S., G. Devi, B. Gayathri, C. KaviyaSri, and E. Leela. "Data Loss Transmission in 5G Network by Enabling Green Blockchain Methodologies." South Asian Journal of Engineering and Technology 13, no. 1 (2023): 13–21. http://dx.doi.org/10.26524/sajet.2023.13.2.

Full text
Abstract:
Network embedding successfully maintains the network structure by assigning network nodes to low-dimensional representations. A considerable amount of progress has recently been achieved in the direction of this new paradigm for network research. In this study, we concentrate on classifying, analyzing, and pointing out the future direction for network embedding techniques to research. We begin by summarizing the purpose of network embedding. We talk about network embedding and how it relates to traditional graph embedding methods in a cognitive radio context. Following that, we give a thorough
30

David, Merlin Susan, and Shini Renjith. "Comparison of word embeddings in text classification based on RNN and CNN." IOP Conference Series: Materials Science and Engineering 1187, no. 1 (2021): 012029. http://dx.doi.org/10.1088/1757-899x/1187/1/012029.

Full text
Abstract:
This paper presents a comparison of word embeddings in text classification using RNN and CNN. In the field of image classification, deep learning methods such as RNN and CNN have proven to be popular. CNN is the most popular model among deep learning techniques in the field of NLP because of its simplicity and parallelism, even if the dataset is huge. The word embedding techniques employed are GloVe and fastText. The use of different word embeddings showed a major difference in the accuracy of the models. When it comes to the embedding of rare words, GloVe can sometimes perform poorly. In order to tackl
31

Vadalà, Valeria, Gustavo Avolio, Antonio Raffo, Dominique M. M. P. Schreurs, and Giorgio Vannini. "Nonlinear embedding and de-embedding techniques for large-signal fet measurements." Microwave and Optical Technology Letters 54, no. 12 (2012): 2835–38. http://dx.doi.org/10.1002/mop.27169.

Full text
32

Levy, Ronnie, and M. D. Rice. "Techniques and examples in U-embedding." Topology and its Applications 22, no. 2 (1986): 157–74. http://dx.doi.org/10.1016/0166-8641(86)90006-4.

Full text
33

Thodi, Diljith M., and Jeffrey J. Rodriguez. "Expansion Embedding Techniques for Reversible Watermarking." IEEE Transactions on Image Processing 16, no. 3 (2007): 721–30. http://dx.doi.org/10.1109/tip.2006.891046.

Full text
34

Song, J. M., F. Ling, W. Blood, et al. "De-embedding techniques for embedded microstrips." Microwave and Optical Technology Letters 42, no. 1 (2004): 50–54. http://dx.doi.org/10.1002/mop.20204.

Full text
35

Takehara, Daisuke, and Kei Kobayashi. "Representing Hierarchical Structured Data Using Cone Embedding." Mathematics 11, no. 10 (2023): 2294. http://dx.doi.org/10.3390/math11102294.

Full text
Abstract:
Extracting hierarchical structure in graph data is becoming an important problem in fields such as natural language processing and developmental biology. Hierarchical structures can be extracted by embedding methods in non-Euclidean spaces, such as Poincaré embedding and Lorentz embedding, and it is now possible to learn efficient embedding by taking advantage of the structure of these spaces. In this study, we propose embedding into another type of metric space called a metric cone by learning an only one-dimensional coordinate variable added to the original vector space or a pre-trained embe
36

Wagh, Kapil Adhar. "A Review: Word Embedding Models with Machine Learning Based Context Depend and Context Independent Techniques." Advances in Nonlinear Variational Inequalities 28, no. 3s (2024): 251–58. https://doi.org/10.52783/anvi.v28.2928.

Full text
Abstract:
Natural language processing (NLP) has been transformed by word embedding models, which convert text into meaningful numerical representations. These models fall into two general categories: context-dependent methods like ELMo, BERT, and GPT, and context-independent methods like Word2Vec, GloVe, and FastText. Although static word representations are provided by context-independent models, polysemy and contextual subtleties are difficult for them to capture. These issues are addressed by context-dependent approaches that make use of sophisticated deep learning architectures to produce dynamic em
37

Jin, Junchen, Mark Heimann, Di Jin, and Danai Koutra. "Toward Understanding and Evaluating Structural Node Embeddings." ACM Transactions on Knowledge Discovery from Data 16, no. 3 (2022): 1–32. http://dx.doi.org/10.1145/3481639.

Full text
Abstract:
While most network embedding techniques model the proximity between nodes in a network, recently there has been significant interest in structural embeddings that are based on node equivalences, a notion rooted in sociology: equivalences or positions are collections of nodes that have similar roles—i.e., similar functions, ties or interactions with nodes in other positions—irrespective of their distance or reachability in the network. Unlike the proximity-based methods that are rigorously evaluated in the literature, the evaluation of structural embeddings is less mature. It relies on small s
38

Yadav, Aditya Kumar. "Refined Global Word Embeddings Based on Sentiment Concept for Sentiment Analysis." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 09, no. 05 (2025): 1–9. https://doi.org/10.55041/ijsrem49245.

Full text
Abstract:
Sentiment analysis is a significant area of study in natural language processing that finds extensive use in journalism, politics, and other domains. In sentiment analysis, word embeddings are important. The sentiment lexicons are directly incorporated into conventional word representations by current sentiment embedding techniques. This sentiment representation technique is unable to offer precise sentiment information for words in many situations, since it can only distinguish the sentiment information of distinct words, not the same word in several settings. To address the i
APA, Harvard, Vancouver, ISO, and other styles
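The general idea of lexicon-guided refinement mentioned in the abstract can be sketched as follows. This is not the paper's specific model: the refinement rule, the "sentiment axis", and all vectors are assumptions invented for the example. A pre-trained vector is nudged along an assumed sentiment direction in proportion to its lexicon score:

```python
def refine(vec, sentiment_score, direction, lam=0.5):
    """Illustrative refinement rule: v' = v + lam * score * direction."""
    return [v + lam * sentiment_score * d for v, d in zip(vec, direction)]

positive_axis = [1.0, 0.0]                   # assumed sentiment direction
good = refine([0.25, 0.5], +1.0, positive_axis)  # lexicon score +1
bad = refine([0.25, 0.5], -1.0, positive_axis)   # lexicon score -1
print(good, bad)  # [0.75, 0.5] [-0.25, 0.5]: opposite shifts along the axis
```

Words with opposite lexicon polarity thus move apart even if their original context-based vectors were close.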
39

Prokhorov, Victor, Mohammad Taher Pilehvar, Dimitri Kartsaklis, Pietro Lio, and Nigel Collier. "Unseen Word Representation by Aligning Heterogeneous Lexical Semantic Spaces." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 6900–6907. http://dx.doi.org/10.1609/aaai.v33i01.33016900.

Full text
Abstract:
Word embedding techniques heavily rely on the abundance of training data for individual words. Given the Zipfian distribution of words in natural language texts, a large number of words do not usually appear frequently or at all in the training data. In this paper we put forward a technique that exploits the knowledge encoded in lexical resources, such as WordNet, to induce embeddings for unseen words. Our approach adapts graph embedding and cross-lingual vector space transformation techniques in order to merge lexical knowledge encoded in ontologies with that derived from corpus statistics. W
APA, Harvard, Vancouver, ISO, and other styles
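One simple baseline for the unseen-word problem discussed above is to induce a vector for an out-of-vocabulary word from its neighbours in a lexical resource. The sketch below is a stand-in, not the paper's alignment technique: the "ontology links" and all vectors are invented, and the aggregation is a plain average:

```python
# Assumed in-vocabulary embeddings (values chosen to be exact binary floats).
known = {"feline": [0.5, 0.25], "cat": [0.75, 0.25], "pet": [0.25, 0.5]}
# Assumed lexical-resource relations for an out-of-vocabulary word.
lexical_neighbors = {"ocelot": ["feline", "cat"]}

def induce(word):
    """Average the embeddings of the word's in-vocabulary lexical neighbours."""
    vecs = [known[n] for n in lexical_neighbors[word] if n in known]
    return [sum(dim) / len(vecs) for dim in zip(*vecs)]

print(induce("ocelot"))  # [0.625, 0.25]
```

The paper's contribution is a more careful merge of the ontology-derived and corpus-derived spaces, but the averaging baseline shows where the lexical knowledge enters.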
40

P. Bhopale, Bhopale, and Ashish Tiwari. "LEVERAGING NEURAL NETWORK PHRASE EMBEDDING MODEL FOR QUERY REFORMULATION IN AD-HOC BIOMEDICAL INFORMATION RETRIEVAL." Malaysian Journal of Computer Science 34, no. 2 (2021): 151–70. http://dx.doi.org/10.22452/mjcs.vol34no2.2.

Full text
Abstract:
This study presents a spark enhanced neural network phrase embedding model to leverage query representation for relevant biomedical literature retrieval. Information retrieval for clinical decision support demands high precision. In recent years, word embeddings have been evolved as a solution to such requirements. It represents vocabulary words in low-dimensional vectors in the context of their similar words; however, it is inadequate to deal with semantic phrases or multi-word units. Learning vector embeddings for phrases by maintaining word meanings is a challenging task. This study propose
APA, Harvard, Vancouver, ISO, and other styles
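The abstract's point that word-level vectors are inadequate for multi-word units can be made concrete with a toy comparison. All vectors here are invented; the "learned" phrase vector simply stands in for a vector trained on the phrase as a single token:

```python
# Invented word vectors: averaging them loses the idiomatic phrase meaning.
word_vecs = {"heart": [0.75, 0.25], "attack": [0.25, 0.75]}
phrase_vecs = {"heart_attack": [0.125, 0.125]}  # assumed learned phrase token

def average_embed(tokens):
    """Compose a phrase vector as the mean of its word vectors."""
    vecs = [word_vecs[t] for t in tokens]
    return [sum(dim) / len(vecs) for dim in zip(*vecs)]

avg = average_embed(["heart", "attack"])   # [0.5, 0.5]
learned = phrase_vecs["heart_attack"]
print(avg, learned)  # the compositional and learned representations differ
```

This gap between composed and learned phrase vectors is what phrase embedding models such as the one in this study try to close.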
41

Xie, Chengxin, Jingui Huang, Yongjiang Shi, Hui Pang, Liting Gao, and Xiumei Wen. "Ensemble graph auto-encoders for clustering and link prediction." PeerJ Computer Science 11 (January 22, 2025): e2648. https://doi.org/10.7717/peerj-cs.2648.

Full text
Abstract:
Graph auto-encoders are a crucial research area within graph neural networks, commonly employed for generating graph embeddings while minimizing errors in unsupervised learning. Traditional graph auto-encoders focus on reconstructing minimal graph data loss to encode neighborhood information for each node, yielding node embedding representations. However, existing graph auto-encoder models often overlook node representations and fail to capture contextual node information within the graph data, resulting in poor embedding effects. Accordingly, this study proposes the ensemble graph auto-encode
APA, Harvard, Vancouver, ISO, and other styles
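The decoding step that graph auto-encoders share can be sketched independently of this paper's ensemble design. In the standard formulation, edge probabilities are reconstructed from node embeddings as the sigmoid of their inner product; the embeddings below are invented rather than learned:

```python
import math

# Assumed learned 2-d node embeddings (a real encoder would produce these).
Z = {0: [1.0, 0.0], 1: [0.9, 0.1], 2: [-1.0, 0.2]}

def edge_prob(i, j):
    """Reconstructed edge probability: sigmoid of the embedding inner product."""
    dot = sum(a * b for a, b in zip(Z[i], Z[j]))
    return 1.0 / (1.0 + math.exp(-dot))

# Similar embeddings imply a high reconstructed edge probability:
print(edge_prob(0, 1) > 0.5, edge_prob(0, 2) < 0.5)  # True True
```

Training minimizes the reconstruction error between these probabilities and the observed adjacency, which is the "minimal graph data loss" the abstract refers to.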
42

Angerer, Philipp, David S. Fischer, Fabian J. Theis, Antonio Scialdone, and Carsten Marr. "Automatic identification of relevant genes from low-dimensional embeddings of single-cell RNA-seq data." Bioinformatics 36, no. 15 (2020): 4291–95. http://dx.doi.org/10.1093/bioinformatics/btaa198.

Full text
Abstract:
Motivation: Dimensionality reduction is a key step in the analysis of single-cell RNA-sequencing data. It produces a low-dimensional embedding for visualization and as a calculation base for downstream analysis. Nonlinear techniques are most suitable to handle the intrinsic complexity of large, heterogeneous single-cell data. However, with no linear relation between gene and embedding coordinate, there is no way to extract the identity of genes driving any cell’s position in the low-dimensional embedding, making it difficult to characterize the underlying biological processes. Results
APA, Harvard, Vancouver, ISO, and other styles
43

N, Nagendra, and Chandra J. "A Systematic Review on Features Extraction Techniques for Aspect Based Text Classification Using Artificial Intelligence." ECS Transactions 107, no. 1 (2022): 2503–14. http://dx.doi.org/10.1149/10701.2503ecst.

Full text
Abstract:
Aspect extraction is an important, challenging, and meaningful task in aspect-based text classification analysis. Variants of topic models have been applied to this task; while reasonably successful, these methods usually do not produce highly coherent aspects. This review presents a novel neural/cognitive approach to discovering coherent aspects, exploiting the distribution of word co-occurrences through neural/cognitive word embeddings. Unlike topic models, which typically assume independently generated words, word embedding models encourage words that appear in similar contexts to lie close to each other in th
APA, Harvard, Vancouver, ISO, and other styles
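The property the abstract relies on, that words of one aspect cluster together in embedding space, can be illustrated with invented vectors and cosine similarity. The vocabulary, the vectors, and the aspect labels are all assumptions for the sketch:

```python
import math

# Invented vectors: "pizza" and "pasta" share a food aspect, "cheap" does not.
vecs = {"pizza": [0.9, 0.1], "pasta": [0.8, 0.2], "cheap": [0.1, 0.9]}

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Same-aspect words are more similar than cross-aspect words:
print(cosine(vecs["pizza"], vecs["pasta"]) > cosine(vecs["pizza"], vecs["cheap"]))  # True
```

Aspect extraction methods exploit exactly this geometry: coherent aspects correspond to tight neighbourhoods in the embedding space.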
44

Ayo-Soyemi, Olusola. "Market Sentiment Analysis Using NLP: Understanding Trends and Buyer Preferences in Real Estate and Environmental Sectors." Technix International Journal for Engineering Research 12, no. 3 (2025): 974–88. https://doi.org/10.5281/zenodo.15120636.

Full text
Abstract:
The explosive growth of digital platforms has resulted in an explosion of unstructured textual data, such as customer reviews, social media posts, and feedback forms, which can provide significant insights into consumer preferences and industry trends. To improve market sentiment research in the real estate and environmental sectors, this study investigates the use of Natural Language Processing (NLP) approaches, including word embedding models like Word2Vec, FastText, GloVe, and custom-developed embeddings. The study's goal is to use these models to transform raw textual data into
APA, Harvard, Vancouver, ISO, and other styles
45

Moudhich, Ihab, and Abdelhadi Fennan. "Graph embedding approach to analyze sentiments on cryptocurrency." International Journal of Electrical and Computer Engineering (IJECE) 14, no. 1 (2024): 690. http://dx.doi.org/10.11591/ijece.v14i1.pp690-697.

Full text
Abstract:
This paper presents a comprehensive exploration of graph embedding techniques for sentiment analysis. The objective of this study is to enhance the accuracy of sentiment analysis models by leveraging the rich contextual relationships between words in text data. We investigate the application of graph embedding in the context of sentiment analysis, focusing on its effectiveness in capturing the semantic and syntactic information of text. By representing text as a graph and employing graph embedding techniques, we aim to extract meaningful insights and improve the performance of sentiment anal
APA, Harvard, Vancouver, ISO, and other styles
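The first step the abstract describes, representing text as a graph, can be sketched with a word co-occurrence graph. This is not the paper's pipeline: the sentences are invented, and each word's raw row of co-occurrence counts serves here as a crude embedding that graph embedding methods would then compress into a low-dimensional vector:

```python
from collections import defaultdict
from itertools import combinations

sentences = [["bitcoin", "price", "rises"], ["bitcoin", "price", "falls"]]

# Build an undirected co-occurrence graph over words within each sentence.
cooc = defaultdict(lambda: defaultdict(int))
for sent in sentences:
    for u, v in combinations(sent, 2):
        cooc[u][v] += 1
        cooc[v][u] += 1

vocab = sorted({w for s in sentences for w in s})
# Crude "embedding": the word's row of co-occurrence counts over the vocabulary.
embed = {w: [cooc[w][c] for c in vocab] for w in vocab}
print(embed["bitcoin"])  # [0, 1, 2, 1] against ['bitcoin', 'falls', 'price', 'rises']
```

Words that share many neighbours in this graph end up with similar rows, which is the contextual signal the sentiment model then exploits.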
47

Park, Jaehyuk. "Decoding Social Structure via Neural Embedding Techniques." Korean Journal of Sociology 58, no. 3 (2024): 241–66. http://dx.doi.org/10.21562/kjs.2024.08.58.3.241.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Newman, G. R., and J. A. Hobot. "Modern acrylics for post-embedding immunostaining techniques." Journal of Histochemistry & Cytochemistry 35, no. 9 (1987): 971–81. http://dx.doi.org/10.1177/35.9.3302021.

Full text
Abstract:
We describe two methods for rapid processing of biological tissues into LR White acrylic plastic. Both methods make use of LR White's compatibility with small amounts of water, enabling non-osmicated tissue to be only partially dehydrated before infiltration with the plastic, a procedure that improves the sensitivity of post-embedding immunocytochemistry. In addition, both methods are designed to reduce the time for which tissue is exposed to the damaging influence of the plastic monomer, which can cause extraction and sudden shrinkage. The tissue example used in the first method is immersion-
APA, Harvard, Vancouver, ISO, and other styles
49

Jackson, C. M. "Microwave de-embedding techniques applied to acoustics." IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control 52, no. 7 (2005): 1094–100. http://dx.doi.org/10.1109/tuffc.2005.1503995.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Lin, Ching-Chiuan, Shih-Chieh Chen, and Nien-Lin Hsueh. "Adaptive embedding techniques for VQ-compressed images." Information Sciences 179, no. 1-2 (2009): 140–49. http://dx.doi.org/10.1016/j.ins.2008.09.001.

Full text
APA, Harvard, Vancouver, ISO, and other styles