To view other types of publications on this topic, follow the link: Embedding de graph.

Journal articles on the topic "Embedding de graph"

Format your source in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 journal articles for your research on the topic "Embedding de graph".

Next to every entry in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style of your choice: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, whenever these details are available in the metadata.

Browse journal articles across many disciplines and compile your bibliography correctly.

1

Liang, Jiongqian, Saket Gurukar, and Srinivasan Parthasarathy. "MILE: A Multi-Level Framework for Scalable Graph Embedding." Proceedings of the International AAAI Conference on Web and Social Media 15 (May 22, 2021): 361–72. http://dx.doi.org/10.1609/icwsm.v15i1.18067.

Full text
Abstract:
Recently there has been a surge of interest in designing graph embedding methods. Few, if any, can scale to a large-sized graph with millions of nodes due to both computational complexity and memory requirements. In this paper, we relax this limitation by introducing the MultI-Level Embedding (MILE) framework – a generic methodology allowing contemporary graph embedding methods to scale to large graphs. MILE repeatedly coarsens the graph into smaller ones using a hybrid matching technique to maintain the backbone structure of the graph. It then applies existing embedding methods on the coarses
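The coarsening step this abstract describes can be sketched in a few lines. The following is an illustrative toy, not the authors' MILE implementation: one round of greedy edge matching merges matched node pairs into supernodes and projects the edge set onto the smaller graph.

```python
# Illustrative sketch of one round of graph coarsening via greedy edge
# matching (a simplified stand-in for the hybrid matching in MILE).

def coarsen(num_nodes, edges):
    """Greedily match nodes along edges and merge each matched pair."""
    match = {}  # node -> matched partner
    for u, v in edges:
        if u not in match and v not in match and u != v:
            match[u] = v
            match[v] = u
    # Assign each node (or matched pair) a supernode id.
    super_id, next_id = {}, 0
    for n in range(num_nodes):
        if n in super_id:
            continue
        super_id[n] = next_id
        if n in match:
            super_id[match[n]] = next_id
        next_id += 1
    # Project edges onto supernodes, dropping self-loops and duplicates.
    coarse_edges = {tuple(sorted((super_id[u], super_id[v])))
                    for u, v in edges if super_id[u] != super_id[v]}
    return next_id, sorted(coarse_edges)

# A 6-node path graph collapses to a 3-node path.
print(coarsen(6, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]))  # (3, [(0, 1), (1, 2)])
```

Repeating this step gives the hierarchy of progressively smaller graphs on which a base embedding method can then be run.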
2

Duong, Chi Thang, Trung Dung Hoang, Hongzhi Yin, Matthias Weidlich, Quoc Viet Hung Nguyen, and Karl Aberer. "Scalable robust graph embedding with Spark." Proceedings of the VLDB Endowment 15, no. 4 (2021): 914–22. http://dx.doi.org/10.14778/3503585.3503599.

Full text
Abstract:
Graph embedding aims at learning a vector-based representation of vertices that incorporates the structure of the graph. This representation then enables inference of graph properties. Existing graph embedding techniques, however, do not scale well to large graphs. While several techniques to scale graph embedding using compute clusters have been proposed, they require continuous communication between the compute nodes and cannot handle node failure. We therefore propose a framework for scalable and robust graph embedding based on the MapReduce model, which can distribute any existing embeddin
3

Zhou, Houquan, Shenghua Liu, Danai Koutra, Huawei Shen, and Xueqi Cheng. "A Provable Framework of Learning Graph Embeddings via Summarization." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 4 (2023): 4946–53. http://dx.doi.org/10.1609/aaai.v37i4.25621.

Full text
Abstract:
Given a large graph, can we learn its node embeddings from a smaller summary graph? What is the relationship between embeddings learned from original graphs and their summary graphs? Graph representation learning plays an important role in many graph mining applications, but learning embeddings of large-scale graphs remains a challenge. Recent works try to alleviate it via graph summarization, which typically includes the three steps: reducing the graph size by combining nodes and edges into supernodes and superedges, learning the supernode embedding on the summary graph and then restoring th
4

Fang, Peng, Arijit Khan, Siqiang Luo, et al. "Distributed Graph Embedding with Information-Oriented Random Walks." Proceedings of the VLDB Endowment 16, no. 7 (2023): 1643–56. http://dx.doi.org/10.14778/3587136.3587140.

Full text
Abstract:
Graph embedding maps graph nodes to low-dimensional vectors, and is widely adopted in machine learning tasks. The increasing availability of billion-edge graphs underscores the importance of learning efficient and effective embeddings on large graphs, such as link prediction on Twitter with over one billion edges. Most existing graph embedding methods fall short of reaching high data scalability. In this paper, we present a general-purpose, distributed, information-centric random walk-based graph embedding framework, DistGER, which can scale to embed billion-edge graphs. DistGER incrementally
5

Mao, Yuqing, and Kin Wah Fung. "Use of word and graph embedding to measure semantic relatedness between Unified Medical Language System concepts." Journal of the American Medical Informatics Association 27, no. 10 (2020): 1538–46. http://dx.doi.org/10.1093/jamia/ocaa136.

Full text
Abstract:
Objective: The study sought to explore the use of deep learning techniques to measure the semantic relatedness between Unified Medical Language System (UMLS) concepts. Materials and Methods: Concept sentence embeddings were generated for UMLS concepts by applying the word embedding models BioWordVec and various flavors of BERT to concept sentences formed by concatenating UMLS terms. Graph embeddings were generated by the graph convolutional networks and 4 knowledge graph embedding models, using graphs built from UMLS hierarchical relations. Semantic relatedness was measured by the cosin
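The relatedness measure this abstract ends on, cosine similarity between two embedding vectors, is simple to state in code. A minimal sketch follows; the toy vectors are made up for illustration, whereas real inputs would come from BioWordVec/BERT or a graph embedding model.

```python
# Cosine similarity between two embedding vectors (toy illustration).
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

print(round(cosine([1.0, 0.0, 1.0], [1.0, 1.0, 0.0]), 3))  # 0.5
```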
6

Friesen, Tyler, and Vassily Olegovich Manturov. "Embeddings of *-Graphs into 2-Surfaces." Journal of Knot Theory and Its Ramifications 22, no. 12 (2013): 1341005. http://dx.doi.org/10.1142/s0218216513410058.

Full text
Abstract:
This paper considers *-graphs in which all vertices have degree 4 or 6, and studies the question of calculating the genus of orientable 2-surfaces into which such graphs may be embedded. A *-graph is a graph endowed with a formal adjacency structure on the half-edges around each vertex, and an embedding of a *-graph is an embedding under which the formal adjacency relation on half-edges corresponds to the adjacency relation induced by the embedding. *-graphs are a natural generalization of four-valent framed graphs, which are four-valent graphs with an opposite half-edge structure. In [Embeddi
7

Makarov, Ilya, Dmitrii Kiselev, Nikita Nikitinsky, and Lovro Subelj. "Survey on graph embeddings and their applications to machine learning problems on graphs." PeerJ Computer Science 7 (February 4, 2021): e357. http://dx.doi.org/10.7717/peerj-cs.357.

Full text
Abstract:
Dealing with relational data always required significant computational resources, domain expertise and task-dependent feature engineering to incorporate structural information into a predictive model. Nowadays, a family of automated graph feature engineering techniques has been proposed in different streams of literature. So-called graph embeddings provide a powerful tool to construct vectorized feature spaces for graphs and their components, such as nodes, edges and subgraphs under preserving inner graph properties. Using the constructed feature spaces, many machine learning problems on graph
8

Mohar, Bojan. "Combinatorial Local Planarity and the Width of Graph Embeddings." Canadian Journal of Mathematics 44, no. 6 (1992): 1272–88. http://dx.doi.org/10.4153/cjm-1992-076-8.

Full text
Abstract:
Let G be a graph embedded in a closed surface. The embedding is “locally planar” if for each face, a “large” neighbourhood of this face is simply connected. This notion is formalized, following [RV], by introducing the width ρ(ψ) of the embedding ψ. It is shown that embeddings with ρ(ψ) ≥ 3 behave very much like the embeddings of planar graphs in the 2-sphere. Another notion, “combinatorial local planarity”, is introduced. The criterion is independent of embeddings of the graph, but it guarantees that a given cycle in a graph G must be contractible in any minimal genus embedding of G (
9

Chen, Mingyang, Wen Zhang, Zhen Yao, et al. "Entity-Agnostic Representation Learning for Parameter-Efficient Knowledge Graph Embedding." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 4 (2023): 4182–90. http://dx.doi.org/10.1609/aaai.v37i4.25535.

Full text
Abstract:
We propose an entity-agnostic representation learning method for handling the problem of inefficient parameter storage costs brought by embedding knowledge graphs. Conventional knowledge graph embedding methods map elements in a knowledge graph, including entities and relations, into continuous vector spaces by assigning them one or multiple specific embeddings (i.e., vector representations). Thus the number of embedding parameters increases linearly as the growth of knowledge graphs. In our proposed model, Entity-Agnostic Representation Learning (EARL), we only learn the embeddings for a smal
10

Xie, Anze, Anders Carlsson, Jason Mohoney, et al. "Demo of marius." Proceedings of the VLDB Endowment 14, no. 12 (2021): 2759–62. http://dx.doi.org/10.14778/3476311.3476338.

Full text
Abstract:
Graph embeddings have emerged as the de facto representation for modern machine learning over graph data structures. The goal of graph embedding models is to convert high-dimensional sparse graphs into low-dimensional, dense and continuous vector spaces that preserve the graph structure properties. However, learning a graph embedding model is a resource intensive process, and existing solutions rely on expensive distributed computation to scale training to instances that do not fit in GPU memory. This demonstration showcases Marius: a new open-source engine for learning graph embedding models
11

Wang, Bin, Yu Chen, Jinfang Sheng, and Zhengkun He. "Attributed Graph Embedding Based on Attention with Cluster." Mathematics 10, no. 23 (2022): 4563. http://dx.doi.org/10.3390/math10234563.

Full text
Abstract:
Graph embedding is of great significance for the research and analysis of graphs. Graph embedding aims to map nodes in the network to low-dimensional vectors while preserving information in the original graph of nodes. In recent years, the appearance of graph neural networks has significantly improved the accuracy of graph embedding. However, the influence of clusters was not considered in existing graph neural network (GNN)-based methods, so this paper proposes a new method to incorporate the influence of clusters into the generation of graph embedding. We use the attention mechanism to pass
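Attention-weighted neighbour aggregation, the mechanism this line of work builds on, can be shown in miniature. In the sketch below the attention scores are fixed by hand purely for illustration; in a GNN they are learned from node features.

```python
# Toy attention-weighted aggregation of neighbour features.
# Scores are hand-picked here; a real model computes them from the data.
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attend(node_feat, neigh_feats, scores):
    """Weighted sum of neighbour features using softmax-normalised scores."""
    weights = softmax(scores)
    dim = len(node_feat)
    return [sum(w * f[k] for w, f in zip(weights, neigh_feats))
            for k in range(dim)]

# Equal scores give the plain average of the two neighbours.
print(attend([1.0, 0.0], [[2.0, 0.0], [0.0, 2.0]], scores=[0.0, 0.0]))  # [1.0, 1.0]
```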
12

Bläsius, Thomas, Jean-Pierre Von der Heydt, Maximilian Katzmann, and Nikolai Maas. "Weighted Embeddings for Low-Dimensional Graph Representation." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 15 (2025): 15587–95. https://doi.org/10.1609/aaai.v39i15.33711.

Full text
Abstract:
Learning low-dimensional numerical representations from symbolic data, e.g., embedding the nodes of a graph into a geometric space, is an important concept in machine learning. While embedding into Euclidean space is common, recent observations indicate that hyperbolic geometry is better suited to represent hierarchical information and heterogeneous data (e.g., graphs with a scale-free degree distribution). Despite their potential for more accurate representations, hyperbolic embeddings also have downsides like being more difficult to compute and harder to use in downstream tasks. We propose e
13

Sheng, Jinfang, Zili Yang, Bin Wang, and Yu Chen. "Attribute Graph Embedding Based on Multi-Order Adjacency Views and Attention Mechanisms." Mathematics 12, no. 5 (2024): 697. http://dx.doi.org/10.3390/math12050697.

Full text
Abstract:
Graph embedding plays an important role in the analysis and study of typical non-Euclidean data, such as graphs. Graph embedding aims to transform complex graph structures into vector representations for further machine learning or data mining tasks. It helps capture relationships and similarities between nodes, providing better representations for various tasks on graphs. Different orders of neighbors have different impacts on the generation of node embedding vectors. Therefore, this paper proposes a multi-order adjacency view encoder to fuse the feature information of neighbors at different
14

Pietrasik, Marcin, and Marek Z. Reformat. "Probabilistic Coarsening for Knowledge Graph Embeddings." Axioms 12, no. 3 (2023): 275. http://dx.doi.org/10.3390/axioms12030275.

Full text
Abstract:
Knowledge graphs have risen in popularity in recent years, demonstrating their utility in applications across the spectrum of computer science. Finding their embedded representations is thus highly desirable as it makes them easily operated on and reasoned with by machines. With this in mind, we propose a simple meta-strategy for embedding knowledge graphs using probabilistic coarsening. In this approach, a knowledge graph is first coarsened before being embedded by an arbitrary embedding method. The resulting coarse embeddings are then extended down as those of the initial knowledge graph. Al
15

Trisedya, Bayu Distiawan, Jianzhong Qi, and Rui Zhang. "Entity Alignment between Knowledge Graphs Using Attribute Embeddings." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 297–304. http://dx.doi.org/10.1609/aaai.v33i01.3301297.

Full text
Abstract:
The task of entity alignment between knowledge graphs aims to find entities in two knowledge graphs that represent the same real-world entity. Recently, embedding-based models are proposed for this task. Such models are built on top of a knowledge graph embedding model that learns entity embeddings to capture the semantic similarity between entities in the same knowledge graph. We propose to learn embeddings that can capture the similarity between entities in different knowledge graphs. Our proposed model helps align entities from different knowledge graphs, and hence enables the integration o
16

Hu, Ganglin, and Jun Pang. "Relation-Aware Weighted Embedding for Heterogeneous Graphs." Information Technology and Control 52, no. 1 (2023): 199–214. http://dx.doi.org/10.5755/j01.itc.52.1.32390.

Full text
Abstract:
Heterogeneous graph embedding, aiming to learn the low-dimensional representations of nodes, is effective in many tasks, such as link prediction, node classification, and community detection. Most existing graph embedding methods conducted on heterogeneous graphs treat the heterogeneous neighbours equally. Although it is possible to get node weights through attention mechanisms mainly developed using expensive recursive message-passing, they are difficult to deal with large-scale networks. In this paper, we propose R-WHGE, a relation-aware weighted embedding model for heterogeneous graphs, to
17

Cape, Joshua, Minh Tang, and Carey E. Priebe. "On spectral embedding performance and elucidating network structure in stochastic blockmodel graphs." Network Science 7, no. 3 (2019): 269–91. http://dx.doi.org/10.1017/nws.2019.23.

Full text
Abstract:
Statistical inference on graphs often proceeds via spectral methods involving low-dimensional embeddings of matrix-valued graph representations such as the graph Laplacian or adjacency matrix. In this paper, we analyze the asymptotic information-theoretic relative performance of Laplacian spectral embedding and adjacency spectral embedding for block assignment recovery in stochastic blockmodel graphs by way of Chernoff information. We investigate the relationship between spectral embedding performance and underlying network structure (e.g., homogeneity, affinity, core-periphery, and (u
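Adjacency spectral embedding, one of the two methods compared in this abstract, can be sketched with plain numpy: embed each node using the top-d eigenpairs of the adjacency matrix, scaled by the square roots of the eigenvalue magnitudes. The toy graph below is an assumption for illustration, not an example from the paper.

```python
# Hedged sketch of adjacency spectral embedding (ASE) with numpy.
import numpy as np

def adjacency_spectral_embedding(A, d):
    vals, vecs = np.linalg.eigh(A)           # eigh: A is symmetric
    order = np.argsort(-np.abs(vals))[:d]    # top-d eigenpairs by magnitude
    return vecs[:, order] * np.sqrt(np.abs(vals[order]))

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [0, 0, 0, 0]], dtype=float)    # a triangle plus an isolated node
X = adjacency_spectral_embedding(A, 2)
print(X.shape)  # (4, 2)
```

The isolated node gets an all-zero embedding, as expected for a node with no structural signal in the adjacency matrix.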
18

Barros, Claudio D. T., Matheus R. F. Mendonça, Alex B. Vieira, and Artur Ziviani. "A Survey on Embedding Dynamic Graphs." ACM Computing Surveys 55, no. 1 (2023): 1–37. http://dx.doi.org/10.1145/3483595.

Full text
Abstract:
Embedding static graphs in low-dimensional vector spaces plays a key role in network analytics and inference, supporting applications like node classification, link prediction, and graph visualization. However, many real-world networks present dynamic behavior, including topological evolution, feature evolution, and diffusion. Therefore, several methods for embedding dynamic graphs have been proposed to learn network representations over time, facing novel challenges, such as time-domain modeling, temporal features to be captured, and the temporal granularity to be embedded. In this survey, we
19

Cappelletti, Luca, Tommaso Fontana, Elena Casiraghi, et al. "GRAPE for fast and scalable graph processing and random-walk-based embedding." Nature Computational Science 3, no. 6 (2023): 552–68. http://dx.doi.org/10.1038/s43588-023-00465-8.

Full text
Abstract:
Graph representation learning methods have opened new avenues for addressing complex, real-world problems represented by graphs. However, many graphs used in these applications comprise millions of nodes and billions of edges and are beyond the capabilities of current methods and software implementations. We present GRAPE (Graph Representation Learning, Prediction and Evaluation), a software resource for graph processing and embedding that is able to scale with big graphs by using specialized and smart data structures, algorithms, and a fast parallel implementation of random-walk-based meth
20

Song, Yifan, Xiaolong Chen, Wenqing Lin, et al. "Efficient Graph Embedding Generation and Update for Large-Scale Temporal Graph." Proceedings of the VLDB Endowment 18, no. 4 (2024): 929–42. https://doi.org/10.14778/3717755.3717756.

Full text
Abstract:
Graph embedding aims at mapping each node to a low-dimensional vector, beneficial for various applications like pattern matching, retrieval augmented generation and recommendation. In this paper, we study the large-scale temporal graph embedding problem. Different from simple graphs, each edge has a timestamp in temporal graphs, which requires the embeddings to encode the temporal biases. Factorizing similarity matrix is a common approach for generating simple graph embeddings where similarity can be well characterized by some conventional metrics like personalized PageRank. However, how to co
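The factorization approach this abstract refers to can be illustrated on a toy simple graph: build a personalized-PageRank-style similarity matrix by iteration, then take embeddings from its truncated SVD. This is a generic sketch of the idea, not the paper's method, and the 4-node graph is made up for illustration.

```python
# Generic sketch: PPR-style similarity matrix + truncated SVD embeddings.
import numpy as np

def ppr_matrix(A, alpha=0.15, iters=50):
    """Iterate S <- alpha*I + (1-alpha)*S@P toward the PPR similarity matrix."""
    P = A / A.sum(axis=1, keepdims=True)     # row-stochastic transition matrix
    S = np.eye(len(A))
    for _ in range(iters):
        S = alpha * np.eye(len(A)) + (1 - alpha) * S @ P
    return S

def factorize(S, d):
    """d-dimensional embeddings from the top-d singular pairs of S."""
    U, sig, _ = np.linalg.svd(S)
    return U[:, :d] * np.sqrt(sig[:d])

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
emb = factorize(ppr_matrix(A), d=2)
print(emb.shape)  # (4, 2)
```

Each row of the similarity matrix sums to 1, since every iterate is a convex combination of row-stochastic matrices.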
21

Zhang, H., J. J. Zhou, and R. Li. "Enhanced Unsupervised Graph Embedding via Hierarchical Graph Convolution Network." Mathematical Problems in Engineering 2020 (July 26, 2020): 1–9. http://dx.doi.org/10.1155/2020/5702519.

Full text
Abstract:
Graph embedding aims to learn the low-dimensional representation of nodes in the network, which has been paid more and more attention in many graph-based tasks recently. Graph Convolution Network (GCN) is a typical deep semisupervised graph embedding model, which can acquire node representation from the complex network. However, GCN usually needs to use a lot of labeled data and additional expressive features in the graph embedding learning process, so the model cannot be effectively applied to undirected graphs with only network structure information. In this paper, we propose a novel unsuper
22

Begga, Ahmed, Francisco Escolano Ruiz, and Miguel Ángel Lozano. "Edge-Centric Embeddings of Digraphs: Properties and Stability Under Sparsification." Entropy 27, no. 3 (2025): 304. https://doi.org/10.3390/e27030304.

Full text
Abstract:
In this paper, we define and characterize the embedding of edges and higher-order entities in directed graphs (digraphs) and relate these embeddings to those of nodes. Our edge-centric approach consists of the following: (a) Embedding line digraphs (or their iterated versions); (b) Exploiting the rank properties of these embeddings to show that edge/path similarity can be posed as a linear combination of node similarities; (c) Solving scalability issues through digraph sparsification; (d) Evaluating the performance of these embeddings for classification and clustering. We commence by identifyi
23

Suo, Xinhua, Bing Guo, Yan Shen, Wei Wang, Yaosen Chen, and Zhen Zhang. "Embodying the Number of an Entity’s Relations for Knowledge Representation Learning." International Journal of Software Engineering and Knowledge Engineering 31, no. 10 (2021): 1495–515. http://dx.doi.org/10.1142/s0218194021500509.

Full text
Abstract:
Knowledge representation learning (knowledge graph embedding) plays a critical role in the application of knowledge graph construction. The multi-source information knowledge representation learning, which is one class of the most promising knowledge representation learning at present, mainly focuses on learning a large number of useful additional information of entities and relations in the knowledge graph into their embeddings, such as the text description information, entity type information, visual information, graph structure information, etc. However, there is a kind of simple but very c
24

Tao, Tao, Qianqian Wang, Yue Ruan, Xue Li, and Xiujun Wang. "Graph Embedding with Similarity Metric Learning." Symmetry 15, no. 8 (2023): 1618. http://dx.doi.org/10.3390/sym15081618.

Full text
Abstract:
Graph embedding transforms high-dimensional graphs into a lower-dimensional vector space while preserving their structural information and properties. Context-sensitive graph embedding, in particular, performs well in tasks such as link prediction and ranking recommendations. However, existing context-sensitive graph embeddings have limitations: they require additional information, depend on community algorithms to capture multiple contexts, or fail to capture sufficient structural information. In this paper, we propose a novel Graph Embedding with Similarity Metric Learning (GESML). The core
25

Song, Zhiwei, Brittany Baur, and Sushmita Roy. "Benchmarking graph representation learning algorithms for detecting modules in molecular networks." F1000Research 12 (August 7, 2023): 941. http://dx.doi.org/10.12688/f1000research.134526.1.

Full text
Abstract:
Background: A common task in molecular network analysis is the detection of community structures or modules. Such modules are frequently associated with shared biological functions and are often disrupted in disease. Detection of community structure entails clustering nodes in the graph, and many algorithms apply a clustering algorithm on an input node embedding. Graph representation learning offers a powerful framework to learn node embeddings to perform various downstream tasks such as clustering. Deep embedding methods based on graph neural networks can have substantially better performance
26

Komlós, János. "The Blow-up Lemma." Combinatorics, Probability and Computing 8, no. 1-2 (1999): 161–76. http://dx.doi.org/10.1017/s0963548398003502.

Full text
Abstract:
Extremal graph theory has a great number of conjectures concerning the embedding of large sparse graphs into dense graphs. Szemerédi's Regularity Lemma is a valuable tool in finding embeddings of small graphs. The Blow-up Lemma, proved recently by Komlós, Sárközy and Szemerédi, can be applied to obtain approximate versions of many of the embedding conjectures. In this paper we review recent developments in the area.
27

Di Giacomo, Emilio, and Giuseppe Liotta. "Simultaneous Embedding of Outerplanar Graphs, Paths, and Cycles." International Journal of Computational Geometry &amp; Applications 17, no. 2 (2007): 139–60. http://dx.doi.org/10.1142/s0218195907002276.

Full text
Abstract:
Let G1 and G2 be two planar graphs having some vertices in common. A simultaneous embedding of G1 and G2 is a pair of crossing-free drawings of G1 and G2 such that each vertex in common is represented by the same point in both drawings. In this paper we show that an outerplanar graph and a simple path can be simultaneously embedded with fixed edges such that the edges in common are straight-line segments while the other edges of the outerplanar graph can have at most one bend per edge. We then exploit the technique for outerplanar graphs and paths to study simultaneous embeddings of other pair
28

Gerritse, Emma, Faegheh Hasibi, and Arjen De Vries. "Graph Embeddings to Empower Entity Retrieval." Information Retrieval Research 1, no. 1 (2025): 137–65. https://doi.org/10.54195/irrj.19877.

Full text
Abstract:
In this research, we investigate methods for entity retrieval using graph embeddings. While various methods have been proposed over the years, most utilize a single graph embedding and entity linking approach. This hinders our understanding of how different graph embedding and entity linking methods impact entity retrieval. To address this gap, we investigate the effects of three different categories of graph embedding techniques and five different entity linking methods. We perform a reranking of entities using the distance between the embeddings of annotated entities and the entities we wish
29

Shang, Chao, Yun Tang, Jing Huang, Jinbo Bi, Xiaodong He, and Bowen Zhou. "End-to-End Structure-Aware Convolutional Networks for Knowledge Base Completion." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3060–67. http://dx.doi.org/10.1609/aaai.v33i01.33013060.

Full text
Abstract:
Knowledge graph embedding has been an active research topic for knowledge base completion, with progressive improvement from the initial TransE, TransH, DistMult et al to the current state-of-the-art ConvE. ConvE uses 2D convolution over embeddings and multiple layers of nonlinear features to model knowledge graphs. The model can be efficiently trained and scalable to large knowledge graphs. However, there is no structure enforcement in the embedding space of ConvE. The recent graph convolutional network (GCN) provides another way of learning graph node embedding by successfully utilizing grap
30

Bai, Yunsheng, Hao Ding, Ken Gu, Yizhou Sun, and Wei Wang. "Learning-Based Efficient Graph Similarity Computation via Multi-Scale Convolutional Set Matching." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (2020): 3219–26. http://dx.doi.org/10.1609/aaai.v34i04.5720.

Full text
Abstract:
Graph similarity computation is one of the core operations in many graph-based applications, such as graph similarity search, graph database analysis, graph clustering, etc. Since computing the exact distance/similarity between two graphs is typically NP-hard, a series of approximate methods have been proposed with a trade-off between accuracy and speed. Recently, several data-driven approaches based on neural networks have been proposed, most of which model the graph-graph similarity as the inner product of their graph-level representations, with different techniques proposed for generating o
31

Mohamed, Sameh K., Emir Muñoz, and Vit Novacek. "On Training Knowledge Graph Embedding Models." Information 12, no. 4 (2021): 147. http://dx.doi.org/10.3390/info12040147.

Full text
Abstract:
Knowledge graph embedding (KGE) models have become popular means for making discoveries in knowledge graphs (e.g., RDF graphs) in an efficient and scalable manner. The key to success of these models is their ability to learn low-rank vector representations for knowledge graph entities and relations. Despite the rapid development of KGE models, state-of-the-art approaches have mostly focused on new ways to represent embeddings interaction functions (i.e., scoring functions). In this paper, we argue that the choice of other training components such as the loss function, hyperparameters and negat
32

Nikkuni, Ryo. "The Second Skew-Symmetric Cohomology Group and Spatial Embeddings of Graphs." Journal of Knot Theory and Its Ramifications 9, no. 3 (2000): 387–411. http://dx.doi.org/10.1142/s0218216500000189.

Full text
Abstract:
Let L(G) be the second skew-symmetric cohomology group of the residual space of a graph G. We determine L(G) in the case where G is a 3-connected simple graph, and give the structure of L(G) in the cases where G is a complete graph or a complete bipartite graph. By using these results, we determine the Wu invariants in L(G) of the spatial embeddings of the complete graph and those of the complete bipartite graph, respectively. Since the Wu invariant of a spatial embedding is a complete invariant up to homology which is an equivalence relation on spatial embeddings introduced in [12], we give a homology
33

Bozkurt, Ilker Nadi, Hai Huang, Bruce Maggs, Andréa Richa, and Maverick Woo. "Mutual Embeddings." Journal of Interconnection Networks 15, no. 01n02 (2015): 1550001. http://dx.doi.org/10.1142/s0219265915500012.

Full text
Abstract:
This paper introduces a type of graph embedding called a mutual embedding. A mutual embedding between two n-node graphs G1 = (V1, E1) and G2 = (V2, E2) is an identification of the vertices of V1 and V2, i.e., a bijection π: V1 → V2, together with an embedding of G1 into G2 and an embedding of G2 into G1 where in the embedding of G1 into G2, each node u of G1 is mapped to π(u) in G2 and in the embedding of G2 into G1 each node v of G2 is mapped to π⁻¹(v) in G1. The identification of vertices in G1 and G2 constrains the two embeddings so that it is not always po
34

Xie, Chengxin, Jingui Huang, Yongjiang Shi, Hui Pang, Liting Gao, and Xiumei Wen. "Ensemble graph auto-encoders for clustering and link prediction." PeerJ Computer Science 11 (January 22, 2025): e2648. https://doi.org/10.7717/peerj-cs.2648.

Full text
Abstract:
Graph auto-encoders are a crucial research area within graph neural networks, commonly employed for generating graph embeddings while minimizing errors in unsupervised learning. Traditional graph auto-encoders focus on reconstructing minimal graph data loss to encode neighborhood information for each node, yielding node embedding representations. However, existing graph auto-encoder models often overlook node representations and fail to capture contextual node information within the graph data, resulting in poor embedding effects. Accordingly, this study proposes the ensemble graph auto-encode
35

Park, Chanyoung, Donghyun Kim, Jiawei Han, and Hwanjo Yu. "Unsupervised Attributed Multiplex Network Embedding." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (2020): 5371–78. http://dx.doi.org/10.1609/aaai.v34i04.5985.

Full text
Abstract:
Nodes in a multiplex network are connected by multiple types of relations. However, most existing network embedding methods assume that only a single type of relation exists between nodes. Even for those that consider the multiplexity of a network, they overlook node attributes, resort to node labels for training, and fail to model the global properties of a graph. We present a simple yet effective unsupervised network embedding method for attributed multiplex network called DMGI, inspired by Deep Graph Infomax (DGI) that maximizes the mutual information between local patches of a graph, and t
36

Cheng, Pengyu, Yitong Li, Xinyuan Zhang, Liqun Chen, David Carlson, and Lawrence Carin. "Dynamic Embedding on Textual Networks via a Gaussian Process." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (2020): 7562–69. http://dx.doi.org/10.1609/aaai.v34i05.6255.

Full text source
Abstract:
Textual network embedding aims to learn low-dimensional representations of text-annotated nodes in a graph. Prior work in this area has typically focused on fixed graph structures; however, real-world networks are often dynamic. We address this challenge with a novel end-to-end node-embedding model, called Dynamic Embedding for Textual Networks with a Gaussian Process (DetGP). After training, DetGP can be applied efficiently to dynamic graphs without re-training or backpropagation. The learned representation of each node is a combination of textual and structural embeddings. Because the struct
37

Guo, Zihao, Qingyun Sun, Haonan Yuan, et al. "GraphMoRE: Mitigating Topological Heterogeneity via Mixture of Riemannian Experts." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 11 (2025): 11754–62. https://doi.org/10.1609/aaai.v39i11.33279.

Full text source
Abstract:
Real-world graphs have inherently complex and diverse topological patterns, known as topological heterogeneity. Most existing works learn graph representation in a single constant curvature space that is insufficient to match the complex geometric shapes, resulting in low-quality embeddings with high distortion. This also constitutes a critical challenge for graph foundation models, which are expected to uniformly handle a wide variety of diverse graph data. Recent studies have indicated that product manifold gains the possibility to address topological heterogeneity. However, the product mani
38

Hoang, Van Thuy, Hyeon-Ju Jeon, Eun-Soon You, Yoewon Yoon, Sungyeop Jung, and O.-Joun Lee. "Graph Representation Learning and Its Applications: A Survey." Sensors 23, no. 8 (2023): 4168. http://dx.doi.org/10.3390/s23084168.

Full text source
Abstract:
Graphs are data structures that effectively represent relational data in the real world. Graph representation learning is a significant task since it could facilitate various downstream tasks, such as node classification, link prediction, etc. Graph representation learning aims to map graph entities to low-dimensional vectors while preserving graph structure and entity relationships. Over the decades, many models have been proposed for graph representation learning. This paper aims to show a comprehensive picture of graph representation learning models, including traditional and state-of-the-a
39

Wang, Xiaojie, Haijun Zhao, and Huayue Chen. "Improved Skip-Gram Based on Graph Structure Information." Sensors 23, no. 14 (2023): 6527. http://dx.doi.org/10.3390/s23146527.

Full text source
Abstract:
Applying the Skip-gram to graph representation learning has become a widely researched topic in recent years. Prior works usually focus on the migration application of the Skip-gram model, while Skip-gram in graph representation learning, initially applied to word embedding, is left insufficiently explored. To compensate for the shortcoming, we analyze the difference between word embedding and graph embedding and reveal the principle of graph representation learning through a case study to explain the essential idea of graph embedding intuitively. Through the case study and in-depth understand
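The migration of Skip-gram from word embedding to graph embedding that this abstract analyzes is typically done DeepWalk-style: truncated random walks play the role of sentences, and (center, context) pairs are extracted exactly as for text. A minimal generic sketch with hypothetical helper names, not the paper's model:

```python
import random

def random_walks(adj, num_walks, walk_len, seed=0):
    # DeepWalk-style truncated random walks: each walk is a "sentence"
    # of node IDs that a Skip-gram model can consume like text.
    rng = random.Random(seed)
    walks = []
    for _ in range(num_walks):
        for start in adj:
            walk = [start]
            while len(walk) < walk_len:
                nbrs = adj[walk[-1]]
                if not nbrs:
                    break
                walk.append(rng.choice(nbrs))
            walks.append(walk)
    return walks

def skipgram_pairs(walks, window=2):
    # (center, context) training pairs, built exactly as word/context
    # pairs would be in word embedding.
    pairs = []
    for walk in walks:
        for i, center in enumerate(walk):
            for j in range(max(0, i - window), min(len(walk), i + window + 1)):
                if j != i:
                    pairs.append((center, walk[j]))
    return pairs
```

The resulting pairs can then be fed to any Skip-gram implementation in place of word/context pairs.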
40

Fionda, Valeria, and Giuseppe Pirrò. "Learning Triple Embeddings from Knowledge Graphs." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (2020): 3874–81. http://dx.doi.org/10.1609/aaai.v34i04.5800.

Full text source
Abstract:
Graph embedding techniques allow to learn high-quality feature vectors from graph structures and are useful in a variety of tasks, from node classification to clustering. Existing approaches have only focused on learning feature vectors for the nodes and predicates in a knowledge graph. To the best of our knowledge, none of them has tackled the problem of directly learning triple embeddings. The approaches that are closer to this task have focused on homogeneous graphs involving only one type of edge and obtain edge embeddings by applying some operation (e.g., average) on the embeddings of the
41

Zhang, Pengfei, Dong Chen, Yang Fang, Xiang Zhao, and Weidong Xiao. "CIST: Differentiating Concepts and Instances Based on Spatial Transformation for Knowledge Graph Embedding." Mathematics 10, no. 17 (2022): 3161. http://dx.doi.org/10.3390/math10173161.

Full text source
Abstract:
Knowledge representation learning is representing entities and relations in a knowledge graph as dense low-dimensional vectors in the continuous space, which explores the features and properties of the graph. Such a technique can facilitate the computation and reasoning on the knowledge graphs, which benefits many downstream tasks. In order to alleviate the problem of insufficient entity representation learning caused by sparse knowledge graphs, some researchers propose knowledge graph embedding models based on instances and concepts, which utilize the latent semantic connections between conce
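The "dense low-dimensional vectors in the continuous space" view of entities and relations can be illustrated with a translation-based scoring function in the style of TransE. This is a generic toy sketch with made-up data, not the CIST model itself (which differentiates concepts and instances via a spatial transformation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy embedding tables: entities and relations as dense low-dimensional
# vectors (names and dimensionality are invented for illustration).
dim = 8
entities = {name: rng.normal(size=dim)
            for name in ["paris", "france", "berlin", "germany"]}
# Plant the relation vector so that paris + capital_of == france exactly.
relations = {"capital_of": entities["france"] - entities["paris"]}

def transe_score(h, r, t):
    # TransE-style plausibility of the triple (h, r, t): the smaller
    # ||h + r - t||, the more likely the triple is to hold.
    return -np.linalg.norm(entities[h] + relations[r] - entities[t])
```

Here `transe_score("paris", "capital_of", "france")` is 0, the maximum possible, while mismatched tails score strictly lower.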
42

Kalogeropoulos, Nikitas-Rigas, Dimitris Ioannou, Dionysios Stathopoulos, and Christos Makris. "On Embedding Implementations in Text Ranking and Classification Employing Graphs." Electronics 13, no. 10 (2024): 1897. http://dx.doi.org/10.3390/electronics13101897.

Full text source
Abstract:
This paper aims to enhance the Graphical Set-based model (GSB) for ranking and classification tasks by incorporating node and word embeddings. The model integrates a textual graph representation with a set-based model for information retrieval. Initially, each document in a collection is transformed into a graph representation. The proposed enhancement involves augmenting the edges of these graphs with embeddings, which can be pretrained or generated using Word2Vec and GloVe models. Additionally, an alternative aspect of our proposed model consists of the Node2Vec embedding technique, which is
43

Cheng, Kewei, Xian Li, Yifan Ethan Xu, Xin Luna Dong, and Yizhou Sun. "PGE." Proceedings of the VLDB Endowment 15, no. 6 (2022): 1288–96. http://dx.doi.org/10.14778/3514061.3514074.

Full text source
Abstract:
Although product graphs (PGs) have gained increasing attentions in recent years for their successful applications in product search and recommendations, the extensive power of PGs can be limited by the inevitable involvement of various kinds of errors. Thus, it is critical to validate the correctness of triples in PGs to improve their reliability. Knowledge graph (KG) embedding methods have strong error detection abilities. Yet, existing KG embedding methods may not be directly applicable to a PG due to its distinct characteristics: (1) PG contains rich textual signals, which necessitates a jo
44

Liu, Xin, Chenyi Zhuang, Tsuyoshi Murata, Kyoung-Sook Kim, and Natthawut Kertkeidkachorn. "How much topological structure is preserved by graph embeddings?" Computer Science and Information Systems 16, no. 2 (2019): 597–614. http://dx.doi.org/10.2298/csis181001011l.

Full text source
Abstract:
Graph embedding aims at learning representations of nodes in a low dimensional vector space. Good embeddings should preserve the graph topological structure. To study how much such structure can be preserved, we propose evaluation methods from four aspects: 1) How well the graph can be reconstructed based on the embeddings, 2) The divergence of the original link distribution and the embedding-derived distribution, 3) The consistency of communities discovered from the graph and embeddings, and 4) To what extent we can employ embeddings to facilitate link prediction. We find that it is insuffici
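The first evaluation aspect in this abstract (how well the graph can be reconstructed from the embeddings) is commonly measured as precision at k over similarity-ranked node pairs. A hypothetical sketch of that idea, not the authors' exact protocol:

```python
import numpy as np

def reconstruction_precision(embeddings, edges, k):
    # Rank all node pairs by embedding similarity (dot product here)
    # and report what fraction of the top-k pairs are true edges --
    # one simple way to measure how much structure the embedding keeps.
    n = embeddings.shape[0]
    edge_set = {frozenset(e) for e in edges}
    scored = sorted(
        ((embeddings[i] @ embeddings[j], i, j)
         for i in range(n) for j in range(i + 1, n)),
        reverse=True,
    )
    hits = sum(1 for _, i, j in scored[:k] if frozenset((i, j)) in edge_set)
    return hits / k
```

With k set to the number of true edges, a score of 1.0 means the embedding ranks every edge above every non-edge.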
45

Wu, Xueyi, Yuanyuan Xu, Wenjie Zhang, and Ying Zhang. "Billion-Scale Bipartite Graph Embedding: A Global-Local Induced Approach." Proceedings of the VLDB Endowment 17, no. 2 (2023): 175–83. http://dx.doi.org/10.14778/3626292.3626300.

Full text source
Abstract:
Bipartite graph embedding (BGE), as the fundamental task in bipartite network analysis, is to map each node to compact low-dimensional vectors that preserve intrinsic properties. The existing solutions towards BGE fall into two groups: metric-based methods and graph neural network-based (GNN-based) methods. The latter typically generates higher-quality embeddings than the former due to the strong representation ability of deep learning. Nevertheless, none of the existing GNN-based methods can handle billion-scale bipartite graphs due to the expensive message passing or complex modelling choice
46

Li, Yu, Yuan Tian, Jiawei Zhang, and Yi Chang. "Learning Signed Network Embedding via Graph Attention." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (2020): 4772–79. http://dx.doi.org/10.1609/aaai.v34i04.5911.

Full text source
Abstract:
Learning the low-dimensional representations of graphs (i.e., network embedding) plays a critical role in network analysis and facilitates many downstream tasks. Recently graph convolutional networks (GCNs) have revolutionized the field of network embedding, and led to state-of-the-art performance in network analysis tasks such as link prediction and node classification. Nevertheless, most of the existing GCN-based network embedding methods are proposed for unsigned networks. However, in the real world, some of the networks are signed, where the links are annotated with different polarities, e
47

Peng, Yanhui, Jing Zhang, Cangqi Zhou, and Shunmei Meng. "Knowledge Graph Entity Alignment Using Relation Structural Similarity." Journal of Database Management 33, no. 1 (2022): 1–19. http://dx.doi.org/10.4018/jdm.305733.

Full text source
Abstract:
Embedding-based entity alignment, which represents knowledge graphs as low-dimensional embeddings and finds entities in different knowledge graphs that semantically represent the same real-world entity by measuring the similarities between entity embeddings, has achieved promising results. However, existing methods are still challenged by the error accumulation of embeddings along multi-step paths and the semantic information loss. This paper proposes a novel embedding-based entity alignment method that iteratively aligns both entities and relations with high similarities as training data. New
48

Baumslag, Marc, and Bojana Obrenić. "Index-Shuffle Graphs." International Journal of Foundations of Computer Science 08, no. 03 (1997): 289–304. http://dx.doi.org/10.1142/s0129054197000197.

Full text source
Abstract:
Index-shuffle graphs are introduced as candidate interconnection networks for parallel computers. The comparative advantages of index-shuffle graphs over the standard bounded-degree "approximations" of the hypercube, namely butterfly-like and shuffle-like graphs, are demonstrated in the theoretical framework of graph embedding and network emulations. An N-node index-shuffle graph emulates: • an N-node shuffle-exchange graph with no slowdown, whereas the currently best emulations of shuffle-like graphs by hypercubes and butterflies incur a slowdown of Ω(log N); • its like-sized butterfly graph w
49

Zhu, Shijie, Jianxin Li, Hao Peng, Senzhang Wang, and Lifang He. "Adversarial Directed Graph Embedding." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 5 (2021): 4741–48. http://dx.doi.org/10.1609/aaai.v35i5.16605.

Full text source
Abstract:
Node representation learning for directed graphs is critically important to facilitate many graph mining tasks. To capture the directed edges between nodes, existing methods mostly learn two embedding vectors for each node, source vector and target vector. However, these methods learn the source and target vectors separately. For the node with very low indegree or outdegree, the corresponding target vector or source vector cannot be effectively learned. In this paper, we propose a novel Directed Graph embedding framework based on Generative Adversarial Network, called DGGAN. The main idea is t
50

Shah, Haseeb, Johannes Villmow, Adrian Ulges, Ulrich Schwanecke, and Faisal Shafait. "An Open-World Extension to Knowledge Graph Completion Models." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3044–51. http://dx.doi.org/10.1609/aaai.v33i01.33013044.

Full text source
Abstract:
We present a novel extension to embedding-based knowledge graph completion models which enables them to perform open-world link prediction, i.e. to predict facts for entities unseen in training based on their textual description. Our model combines a regular link prediction model learned from a knowledge graph with word embeddings learned from a textual corpus. After training both independently, we learn a transformation to map the embeddings of an entity’s name and description to the graph-based embedding space.In experiments on several datasets including FB20k, DBPedia50k and our new dataset