
Journal articles on the topic 'Graph Neural Networks (GNNs)'

Listed below are the top 50 journal articles for research on the topic 'Graph Neural Networks (GNNs)'.


1

You, Jiaxuan, Jonathan M. Gomes-Selman, Rex Ying, and Jure Leskovec. "Identity-aware Graph Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (2021): 10737–45. http://dx.doi.org/10.1609/aaai.v35i12.17283.

Abstract:
Message passing Graph Neural Networks (GNNs) provide a powerful modeling framework for relational data. However, the expressive power of existing GNNs is upper-bounded by the 1-Weisfeiler-Lehman (1-WL) graph isomorphism test, which means GNNs are not able to predict node clustering coefficients and shortest path distances, and cannot differentiate between different d-regular graphs. Here we develop a class of message passing GNNs, named Identity-aware Graph Neural Networks (ID-GNNs), with greater expressive power than the 1-WL test. ID-GNN offers a minimal but powerful solution to limitat
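The 1-WL limitation referred to in this abstract is easy to reproduce. Below is a minimal, illustrative sketch of 1-WL colour refinement (my own example, not code from the cited paper) showing that two non-isomorphic 2-regular graphs, a 6-cycle versus two disjoint triangles, receive identical colour histograms:

```python
from collections import Counter

def wl_colors(adj, rounds=3):
    """1-WL colour refinement: repeatedly hash each node's colour
    together with the sorted multiset of its neighbours' colours."""
    colors = {v: 0 for v in adj}
    for _ in range(rounds):
        colors = {v: hash((colors[v], tuple(sorted(colors[u] for u in adj[v]))))
                  for v in adj}
    return Counter(colors.values())  # histogram of final colours

# Two non-isomorphic 2-regular graphs on 6 nodes:
c6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}  # one 6-cycle
two_triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
                 3: [4, 5], 4: [3, 5], 5: [3, 4]}       # two 3-cycles

print(wl_colors(c6) == wl_colors(two_triangles))  # True: 1-WL cannot tell them apart
```

Any GNN bounded by 1-WL will likewise produce identical representations for these two graphs, which is the gap ID-GNNs aim to close.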
2

Shen, Yanyan, Lei Chen, Jingzhi Fang, Xin Zhang, Shihong Gao, and Hongbo Yin. "Efficient Training of Graph Neural Networks on Large Graphs." Proceedings of the VLDB Endowment 17, no. 12 (2024): 4237–40. http://dx.doi.org/10.14778/3685800.3685844.

Abstract:
Graph Neural Networks (GNNs) have gained significant popularity for learning representations of graph-structured data. Mainstream GNNs employ the message passing scheme that iteratively propagates information between connected nodes through edges. However, this scheme incurs high training costs, hindering the applicability of GNNs on large graphs. Recently, the database community has extensively researched effective solutions to facilitate efficient GNN training on massive graphs. In this tutorial, we provide a comprehensive overview of the GNN training process based on the graph data lifecycl
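As a rough illustration of the message passing scheme these abstracts describe, here is a minimal mean-aggregation layer in NumPy (a generic sketch under my own conventions, not the method of any cited paper):

```python
import numpy as np

def mp_layer(A, X, W):
    """One message-passing layer: each node averages its neighbours'
    features (plus its own, via a self-loop), applies a linear map W,
    then a ReLU nonlinearity."""
    A_hat = A + np.eye(A.shape[0])             # add self-loops
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))   # row-normalise: mean aggregation
    return np.maximum(D_inv @ A_hat @ X @ W, 0.0)

A = np.array([[0.0, 1.0], [1.0, 0.0]])  # a single edge between two nodes
X = np.ones((2, 4))                     # node features
W = np.ones((4, 3))                     # layer weights
print(mp_layer(A, X, W).shape)          # (2, 3)
```

Stacking such layers propagates information along edges, which is exactly the step whose cost the tutorial above targets on large graphs.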
3

Feng, Aosong, Chenyu You, Shiqiang Wang, and Leandros Tassiulas. "KerGNNs: Interpretable Graph Neural Networks with Graph Kernels." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (2022): 6614–22. http://dx.doi.org/10.1609/aaai.v36i6.20615.

Abstract:
Graph kernels are historically the most widely-used technique for graph classification tasks. However, these methods suffer from limited performance because of the hand-crafted combinatorial features of graphs. In recent years, graph neural networks (GNNs) have become the state-of-the-art method in downstream graph-related tasks due to their superior performance. Most GNNs are based on Message Passing Neural Network (MPNN) frameworks. However, recent studies show that MPNNs cannot exceed the power of the Weisfeiler-Lehman (WL) algorithm in the graph isomorphism test. To address the limitations of
4

Mo, Shibing, Kai Wu, Qixuan Gao, Xiangyi Teng, and Jing Liu. "AutoSGNN: Automatic Propagation Mechanism Discovery for Spectral Graph Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 18 (2025): 19493–502. https://doi.org/10.1609/aaai.v39i18.34146.

Abstract:
In real-world applications, spectral Graph Neural Networks (GNNs) are powerful tools for processing diverse types of graphs. However, a single GNN often struggles to handle different graph types—such as homogeneous and heterogeneous graphs—simultaneously. This challenge has led to the manual design of GNNs tailored to specific graph types, but these approaches are limited by the high cost of labor and the constraints of expert knowledge, which cannot keep up with the rapid growth of graph data. To overcome these challenges, we introduce AutoSGNN, an automated framework for discovering propagat
5

Morris, Christopher, Martin Ritzert, Matthias Fey, et al. "Weisfeiler and Leman Go Neural: Higher-Order Graph Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4602–9. http://dx.doi.org/10.1609/aaai.v33i01.33014602.

Abstract:
In recent years, graph neural networks (GNNs) have emerged as a powerful neural architecture to learn vector representations of nodes and graphs in a supervised, end-to-end fashion. Up to now, GNNs have only been evaluated empirically—showing promising results. The following work investigates GNNs from a theoretical point of view and relates them to the 1-dimensional Weisfeiler-Leman graph isomorphism heuristic (1-WL). We show that GNNs have the same expressiveness as the 1-WL in terms of distinguishing non-isomorphic (sub-)graphs. Hence, both algorithms also have the same shortcomings. Based
6

Lu, Yuanfu, Xunqiang Jiang, Yuan Fang, and Chuan Shi. "Learning to Pre-train Graph Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 5 (2021): 4276–84. http://dx.doi.org/10.1609/aaai.v35i5.16552.

Abstract:
Graph neural networks (GNNs) have become the de facto standard for representation learning on graphs, which derive effective node representations by recursively aggregating information from graph neighborhoods. While GNNs can be trained from scratch, pre-training GNNs to learn transferable knowledge for downstream tasks has recently been demonstrated to improve the state of the art. However, conventional GNN pre-training methods follow a two-step paradigm: 1) pre-training on abundant unlabeled data and 2) fine-tuning on downstream labeled data, between which there exists a significant gap due t
7

Guo, Zhichun, Chunhui Zhang, Yujie Fan, Yijun Tian, Chuxu Zhang, and Nitesh V. Chawla. "Boosting Graph Neural Networks via Adaptive Knowledge Distillation." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (2023): 7793–801. http://dx.doi.org/10.1609/aaai.v37i6.25944.

Abstract:
Graph neural networks (GNNs) have shown remarkable performance on diverse graph mining tasks. While sharing the same message passing framework, our study shows that different GNNs learn distinct knowledge from the same graph. This implies potential performance improvement by distilling the complementary knowledge from multiple models. However, knowledge distillation (KD) transfers knowledge from high-capacity teachers to a lightweight student, which deviates from our scenario: GNNs are often shallow. To transfer knowledge effectively, we need to tackle two challenges: how to transfer knowledge
8

Yang, Han, Kaili Ma, and James Cheng. "Rethinking Graph Regularization for Graph Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 5 (2021): 4573–81. http://dx.doi.org/10.1609/aaai.v35i5.16586.

Abstract:
The graph Laplacian regularization term is usually used in semi-supervised representation learning to provide graph structure information for a model f(X). However, with the recent popularity of graph neural networks (GNNs), directly encoding graph structure A into a model, i.e., f(A, X), has become the more common approach. We show that graph Laplacian regularization brings little-to-no benefit to existing GNNs, and propose a simple but non-trivial variant of graph Laplacian regularization, called Propagation-regularization (P-reg), to boost the performance of existing GNN models. We pr
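For reference, the graph Laplacian regularization term this abstract discusses penalises embedding differences across edges. A small sketch of the classical term (my own illustration, not the paper's P-reg variant):

```python
import numpy as np

def laplacian_reg(A, Z):
    """Graph Laplacian regularization for node embeddings Z:
    the sum over edges (i, j) of ||z_i - z_j||^2, which equals
    tr(Z^T L Z) with the combinatorial Laplacian L = D - A."""
    L = np.diag(A.sum(axis=1)) - A
    return float(np.trace(Z.T @ L @ Z))

A = np.array([[0.0, 1.0], [1.0, 0.0]])   # one edge
Z = np.array([[0.0], [1.0]])             # 1-D node embeddings
print(laplacian_reg(A, Z))  # 1.0: the single edge contributes ||0 - 1||^2
```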
9

Guo, Kai, Kaixiong Zhou, Xia Hu, Yu Li, Yi Chang, and Xin Wang. "Orthogonal Graph Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 4 (2022): 3996–4004. http://dx.doi.org/10.1609/aaai.v36i4.20316.

Abstract:
Graph neural networks (GNNs) have received tremendous attention due to their superiority in learning node representations. These models rely on message passing and feature transformation functions to encode the structural and feature information from neighbors. However, stacking more convolutional layers significantly decreases the performance of GNNs. Most recent studies attribute this limitation to the over-smoothing issue, where node embeddings converge to indistinguishable vectors. Through a number of experimental observations, we argue that the main factor degrading the performance is the
10

Yang, Yachao, Yanfeng Sun, Shaofan Wang, et al. "Graph Neural Networks with Soft Association between Topology and Attribute." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 8 (2024): 9260–68. http://dx.doi.org/10.1609/aaai.v38i8.28778.

Abstract:
Graph Neural Networks (GNNs) have shown great performance in learning representations for graph-structured data. However, recent studies have found that the interference between topology and attribute can lead to distorted node representations. Most GNNs are designed based on homophily assumptions, thus they cannot be applied to graphs with heterophily. This research critically analyzes the propagation principles of various GNNs and the corresponding challenges from an optimization perspective. A novel GNN called Graph Neural Networks with Soft Association between Topology and Attribute (GNN-S
11

Hu, Shengxiang, Guobing Zou, Song Yang, et al. "Large Language Model Meets Graph Neural Network in Knowledge Distillation." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 16 (2025): 17295–304. https://doi.org/10.1609/aaai.v39i16.33901.

Abstract:
While Large Language Models (LLMs) show promise for Text-Attributed Graphs (TAGs) learning, their deployment is hindered by computational demands. Graph Neural Networks (GNNs) are efficient but struggle with TAGs' complex semantics. We propose LinguGKD, a novel LLM-to-GNN knowledge distillation framework that enables transferring both local semantic details and global structural information from LLMs to GNNs. First, it introduces TAG-oriented instruction tuning, enhancing LLMs with graph-specific knowledge through carefully designed prompts. Next, it develops a layer-adaptive multi-scale contr
12

Li, Qunwei, Shaofeng Zou, and Wenliang Zhong. "Learning Graph Neural Networks with Approximate Gradient Descent." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 10 (2021): 8438–46. http://dx.doi.org/10.1609/aaai.v35i10.17025.

Abstract:
The first provably efficient algorithm for learning graph neural networks (GNNs) with one hidden layer for node information convolution is provided in this paper. Two types of GNNs are investigated, depending on whether labels are attached to nodes or graphs. A comprehensive framework for designing and analyzing convergence of GNN training algorithms is developed. The algorithm proposed is applicable to a wide range of activation functions including ReLU, Leaky ReLU, Sigmoid, Softplus and Swish. It is shown that the proposed algorithm guarantees a linear convergence rate to the underlying true
13

Chen, Zhengyu, Teng Xiao, Kun Kuang, et al. "Learning to Reweight for Generalizable Graph Neural Network." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 8 (2024): 8320–28. http://dx.doi.org/10.1609/aaai.v38i8.28673.

Abstract:
Graph Neural Networks (GNNs) show promising results for graph tasks. However, existing GNNs' generalization ability will degrade when there exist distribution shifts between testing and training graph data. The fundamental reason for the severe degeneration is that most GNNs are designed based on the i.i.d. hypothesis. In such a setting, GNNs tend to exploit subtle statistical correlations existing in the training set for predictions, even though it is a spurious correlation. In this paper, we study the problem of the generalization ability of GNNs in Out-Of-Distribution (OOD) settings. To solv
14

Liu, Fangbing, and Qing Wang. "Asymmetric Learning for Spectral Graph Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 18 (2025): 18798–806. https://doi.org/10.1609/aaai.v39i18.34069.

Abstract:
Optimizing spectral graph neural networks (GNNs) remains a critical challenge in the field, yet the underlying processes are not well understood. In this paper, we investigate the inherent differences between graph convolution parameters and feature transformation parameters in spectral GNNs and their impact on the optimization landscape. Our analysis reveals that these differences contribute to a poorly conditioned problem, resulting in suboptimal performance. To address this issue, we introduce the concept of the block condition number of the Hessian matrix, which characterizes the difficult
15

Horčík, Rostislav, and Gustav Šír. "Expressiveness of Graph Neural Networks in Planning Domains." Proceedings of the International Conference on Automated Planning and Scheduling 34 (May 30, 2024): 281–89. http://dx.doi.org/10.1609/icaps.v34i1.31486.

Abstract:
Graph Neural Networks (GNNs) have become the standard method of choice for learning with structured data, demonstrating particular promise in classical planning. Their inherent invariance under symmetries of the input graphs endows them with superior generalization capabilities, compared to their symmetry-oblivious counterparts. However, this comes at the cost of limited expressive power. Particularly, GNNs cannot distinguish between graphs that satisfy identical sentences of C2 logic. To leverage GNNs for learning policies in PDDL domains, one needs to encode the contextual representation of
16

Kulkarni, Madhavi M. "Enhancing Social Network Analysis using Graph Neural Networks." Advances in Nonlinear Variational Inequalities 27, no. 4 (2024): 213–30. http://dx.doi.org/10.52783/anvi.v27.1502.

Abstract:
Social Network Analysis (SNA) is a key tool for understanding how individuals in social networks interact and relate to each other. Traditional SNA methods typically rely on graph theory, statistical models, and machine learning. However, these methods have difficulty finding complex patterns in large, changing, and diverse networks. Graph Neural Networks (GNNs) are a new and robust way to improve SNA. They learn models directly from graph-structured information, which makes them very good at tasks like finding communities, classifyin
17

Boronina, Anna, Vladimir Maksimenko, and Alexander E. Hramov. "Convolutional Neural Network Outperforms Graph Neural Network on the Spatially Variant Graph Data." Mathematics 11, no. 11 (2023): 2515. http://dx.doi.org/10.3390/math11112515.

Abstract:
Applying machine learning algorithms to graph-structured data has garnered significant attention in recent years due to the prevalence of inherent graph structures in real-life datasets. However, the direct application of traditional deep learning algorithms, such as Convolutional Neural Networks (CNNs), is limited as they are designed for regular Euclidean data like 2D grids and 1D sequences. In contrast, graph-structured data are in a non-Euclidean form. Graph Neural Networks (GNNs) are specifically designed to handle non-Euclidean data and make predictions based on connectivity rather than
18

Wei, Qiang, and Guangmin Hu. "Evaluating graph neural networks under graph sampling scenarios." PeerJ Computer Science 8 (March 4, 2022): e901. http://dx.doi.org/10.7717/peerj-cs.901.

Abstract:
Background: It is often the case that only a portion of the underlying network structure is observed in real-world settings. However, as most network analysis methods are built on a complete network structure, the natural questions to ask are: (a) how well these methods perform with incomplete network structure, (b) which structural observation and network analysis method to choose for a specific task, and (c) is it beneficial to complete the missing structure. Methods: In this paper, we consider the incomplete network structure as one random sampling instance from a complete graph, and we choos
19

Zeng, DingYi, Wanlong Liu, Wenyu Chen, Li Zhou, Malu Zhang, and Hong Qu. "Substructure Aware Graph Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (2023): 11129–37. http://dx.doi.org/10.1609/aaai.v37i9.26318.

Abstract:
Despite the great achievements of Graph Neural Networks (GNNs) in graph learning, conventional GNNs struggle to break through the upper limit of the expressiveness of the first-order Weisfeiler-Leman graph isomorphism test algorithm (1-WL) due to the consistency of the propagation paradigm of GNNs with the 1-WL. Based on the fact that it is easier to distinguish the original graph through subgraphs, we propose a novel neural network framework called Substructure Aware Graph Neural Networks (SAGNN) to address these issues. We first propose a Cut subgraph which can be obtained from the orig
20

Wang, Zhiyang, Juan Cerviño, and Alejandro Ribeiro. "Generalization of Graph Neural Networks Is Robust to Model Mismatch." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 20 (2025): 21402–10. https://doi.org/10.1609/aaai.v39i20.35441.

Abstract:
Graph neural networks (GNNs) have demonstrated their effectiveness in various tasks supported by their generalization capabilities. However, the current analysis of GNN generalization relies on the assumption that training and testing data are independent and identically distributed (i.i.d). This imposes limitations on the cases where a model mismatch exists when generating testing data. In this paper, we examine GNNs that operate on geometric graphs generated from manifold models, explicitly focusing on scenarios where there is a mismatch between manifold models generating training and testin
21

Wu, Yixin, Xinlei He, Pascal Berrang, et al. "Link Stealing Attacks Against Inductive Graph Neural Networks." Proceedings on Privacy Enhancing Technologies 2024, no. 4 (2024): 818–39. http://dx.doi.org/10.56553/popets-2024-0143.

Abstract:
A graph neural network (GNN) is a type of neural network that is specifically designed to process graph-structured data. Typically, GNNs can be implemented in two settings, including the transductive setting and the inductive setting. In the transductive setting, the trained model can only predict the labels of nodes that were observed at the training time. In the inductive setting, the trained model can be generalized to new nodes/graphs. Due to its flexibility, the inductive setting is the most popular GNN setting at the moment. Previous work has shown that transductive GNNs are vulnerable t
22

Eliasof, Moshe, Eldad Haber, and Eran Treister. "Feature Transportation Improves Graph Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 11 (2024): 11874–82. http://dx.doi.org/10.1609/aaai.v38i11.29073.

Abstract:
Graph neural networks (GNNs) have shown remarkable success in learning representations for graph-structured data. However, GNNs still face challenges in modeling complex phenomena that involve feature transportation. In this paper, we propose a novel GNN architecture inspired by Advection-Diffusion-Reaction systems, called ADR-GNN. Advection models feature transportation, while diffusion captures the local smoothing of features, and reaction represents the non-linear transformation between feature channels. We provide an analysis of the qualitative behavior of ADR-GNN, that shows the benefit o
23

Yu, Xingtong, Zemin Liu, Yuan Fang, and Xinming Zhang. "Learning to Count Isomorphisms with Graph Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 4 (2023): 4845–53. http://dx.doi.org/10.1609/aaai.v37i4.25610.

Abstract:
Subgraph isomorphism counting is an important problem on graphs, as many graph-based tasks exploit recurring subgraph patterns. Classical methods usually boil down to a backtracking framework that needs to navigate a huge search space with prohibitive computational cost. Some recent studies resort to graph neural networks (GNNs) to learn a low-dimensional representation for both the query and input graphs, in order to predict the number of subgraph isomorphisms on the input graph. However, typical GNNs employ a node-centric message passing scheme that receives and aggregates messages on nodes,
24

Zhu, Jiong, Ryan A. Rossi, Anup Rao, et al. "Graph Neural Networks with Heterophily." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (2021): 11168–76. http://dx.doi.org/10.1609/aaai.v35i12.17332.

Abstract:
Graph Neural Networks (GNNs) have proven to be useful for many different practical applications. However, many existing GNN models have implicitly assumed homophily among the nodes connected in the graph, and therefore have largely overlooked the important setting of heterophily, where most connected nodes are from different classes. In this work, we propose a novel framework called CPGNN that generalizes GNNs for graphs with either homophily or heterophily. The proposed framework incorporates an interpretable compatibility matrix for modeling the heterophily or homophily level in the graph, w
25

Liang, Fan, Cheng Qian, Wei Yu, David Griffith, and Nada Golmie. "Survey of Graph Neural Networks and Applications." Wireless Communications and Mobile Computing 2022 (July 28, 2022): 1–18. http://dx.doi.org/10.1155/2022/9261537.

Abstract:
The advance of deep learning has shown great potential in applications (speech, image, and video classification). In these applications, deep learning models are trained by datasets in Euclidean space with fixed dimensions and sequences. Nonetheless, the rapidly increasing demands on analyzing datasets in non-Euclidean space require additional research. Generally speaking, finding the relationships of elements in datasets and representing such relationships as weighted graphs consisting of vertices and edges is a viable way of analyzing datasets in non-Euclidean space. However, analyzing the w
26

Afifi, Salma, Febin Sunny, Amin Shafiee, Mahdi Nikdast, and Sudeep Pasricha. "GHOST: A Graph Neural Network Accelerator using Silicon Photonics." ACM Transactions on Embedded Computing Systems 22, no. 5s (2023): 1–25. http://dx.doi.org/10.1145/3609097.

Abstract:
Graph neural networks (GNNs) have emerged as a powerful approach for modelling and learning from graph-structured data. Multiple fields have since benefitted enormously from the capabilities of GNNs, such as recommendation systems, social network analysis, drug discovery, and robotics. However, accelerating and efficiently processing GNNs require a unique approach that goes beyond conventional artificial neural network accelerators, due to the substantial computational and memory requirements of GNNs. The slowdown of scaling in CMOS platforms also motivates a search for alternative implementat
27

Abadal, Sergi, Akshay Jain, Robert Guirado, Jorge López-Alonso, and Eduard Alarcón. "Computing Graph Neural Networks: A Survey from Algorithms to Accelerators." ACM Computing Surveys 54, no. 9 (2022): 1–38. http://dx.doi.org/10.1145/3477141.

Abstract:
Graph Neural Networks (GNNs) have exploded onto the machine learning scene in recent years owing to their capability to model and learn from graph-structured data. Such an ability has strong implications in a wide variety of fields whose data are inherently relational, for which conventional neural networks do not perform well. Indeed, as recent reviews can attest, research in the area of GNNs has grown rapidly and has led to the development of a variety of GNN algorithm variants as well as to the exploration of ground-breaking applications in chemistry, neurology, electronics, or communicati
28

Nguyen, Hoa Xuan, Shaoshu Zhu, and Mingming Liu. "A Survey on Graph Neural Networks for Microservice-Based Cloud Applications." Sensors 22, no. 23 (2022): 9492. http://dx.doi.org/10.3390/s22239492.

Abstract:
Graph neural networks (GNNs) have achieved great success in many research areas ranging from traffic to computer vision. With increased interest in cloud-native applications, GNNs are increasingly being investigated to address various challenges in microservice architecture from prototype design to large-scale service deployment. To appreciate the big picture of this emerging trend, we provide a comprehensive review of recent studies leveraging GNNs for microservice-based applications. To begin, we identify the key areas in which GNNs are applied, and then we review in detail how GNNs can be d
29

Zhang, Zaixi, Qi Liu, Hao Wang, Chengqiang Lu, and Cheekong Lee. "ProtGNN: Towards Self-Explaining Graph Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (2022): 9127–35. http://dx.doi.org/10.1609/aaai.v36i8.20898.

Abstract:
Despite the recent progress in Graph Neural Networks (GNNs), it remains challenging to explain the predictions made by GNNs. Existing explanation methods mainly focus on post-hoc explanations where another explanatory model is employed to provide explanations for a trained GNN. The fact that post-hoc methods fail to reveal the original reasoning process of GNNs raises the need of building GNNs with built-in interpretability. In this work, we propose Prototype Graph Neural Network (ProtGNN), which combines prototype learning with GNNs and provides a new perspective on the explanations of GNNs.
30

Gao, Hang, Chengyu Yao, Jiangmeng Li, et al. "Rethinking Causal Relationships Learning in Graph Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 11 (2024): 12145–54. http://dx.doi.org/10.1609/aaai.v38i11.29103.

Abstract:
Graph Neural Networks (GNNs) demonstrate their significance by effectively modeling complex interrelationships within graph-structured data. To enhance the credibility and robustness of GNNs, it becomes exceptionally crucial to bolster their ability to capture causal relationships. However, despite recent advancements that have indeed strengthened GNNs from a causal learning perspective, conducting an in-depth analysis specifically targeting the causal modeling prowess of GNNs remains an unresolved issue. In order to comprehensively analyze various GNN models from a causal learning perspective
31

Chen, Tingyang, Dazhuo Qiu, Yinghui Wu, Arijit Khan, Xiangyu Ke, and Yunjun Gao. "View-based Explanations for Graph Neural Networks." Proceedings of the ACM on Management of Data 2, no. 1 (2024): 1–27. http://dx.doi.org/10.1145/3639295.

Abstract:
Generating explanations for graph neural networks (GNNs) has been studied to understand their behaviors in analytical tasks such as graph classification. Existing approaches aim to understand the overall results of GNNs rather than providing explanations for specific class labels of interest, and may return explanation structures that are hard to access and not directly queryable. We propose GVEX, a novel paradigm that generates Graph Views for GNN EXplanation. (1) We design a two-tier explanation structure called explanation views. An explanation view consists of a set of graph patterns and a se
32

Sato, Ryoma, Makoto Yamada, and Hisashi Kashima. "Constant Time Graph Neural Networks." ACM Transactions on Knowledge Discovery from Data 16, no. 5 (2022): 1–31. http://dx.doi.org/10.1145/3502733.

Abstract:
The recent advancements in graph neural networks (GNNs) have led to state-of-the-art performances in various applications, including chemo-informatics, question-answering systems, and recommender systems. However, scaling up these methods to huge graphs, such as social networks and Web graphs, remains a challenge. In particular, the existing methods for accelerating GNNs either are not theoretically guaranteed in terms of the approximation error or incur at least a linear time computation cost. In this study, we reveal the query complexity of the uniform node sampling scheme for Message Pas
33

Zhao, Tong, Yozen Liu, Leonardo Neves, Oliver Woodford, Meng Jiang, and Neil Shah. "Data Augmentation for Graph Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (2021): 11015–23. http://dx.doi.org/10.1609/aaai.v35i12.17315.

Abstract:
Data augmentation has been widely used to improve generalizability of machine learning models. However, comparatively little work studies data augmentation for graphs. This is largely due to the complex, non-Euclidean structure of graphs, which limits possible manipulation operations. Augmentation operations commonly used in vision and language have no analogs for graphs. Our work studies graph data augmentation for graph neural networks (GNNs) in the context of improving semi-supervised node-classification. We discuss practical and theoretical motivations, considerations and strategies for gr
34

Wang, Renbiao, Fengtai Li, Shuwei Liu, et al. "Adaptive Multi-Channel Deep Graph Neural Networks." Symmetry 16, no. 4 (2024): 406. http://dx.doi.org/10.3390/sym16040406.

Abstract:
Graph neural networks (GNNs) have shown significant success in graph representation learning. However, the performance of existing GNNs degrades seriously when their layers deepen, due to the over-smoothing issue: the node embeddings converge to a certain value as GNNs repeatedly aggregate the representations of the receptive field. The main reason for over-smoothing is that the receptive field of each node tends to be similar as the layers increase, which leads to different nodes aggregating similar information. To solve this problem, we propose an adaptive multi-channel deep graph ne
35

Zhang, Yinan, and Wenyu Chen. "Incorporating Siamese Network Structure into Graph Neural Network." Journal of Physics: Conference Series 2171, no. 1 (2022): 012023. http://dx.doi.org/10.1088/1742-6596/2171/1/012023.

Full text
Abstract:
The Siamese network plays an important role in many artificial intelligence domains, but applying the Siamese structure to graph neural networks requires more exploration. This paper proposes a novel framework that incorporates the Siamese network structure into a Graph Neural Network (Siam-GNN). We use DropEdge as a graph augmentation technique to generate new graphs. Besides, the strategy of constructing the Siamese network’s paired inputs is also studied in our work. Notably, stopping gradient backpropagation on one side of Siam-GNN is an important factor affecting the performance of the model. We eq
APA, Harvard, Vancouver, ISO, and other styles
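The DropEdge augmentation mentioned in the abstract above can be sketched in a few lines (a minimal illustration; the function name `drop_edge` and the edge-list representation are assumptions for this example, not the original implementation):

```python
import random

def drop_edge(edges, p=0.2, seed=None):
    """DropEdge-style augmentation: randomly remove a fraction p of edges.

    `edges` is a list of (u, v) tuples; returns a new (smaller) edge list,
    leaving the input untouched so repeated calls yield different graphs.
    """
    rng = random.Random(seed)
    return [e for e in edges if rng.random() >= p]

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
aug = drop_edge(edges, p=0.4, seed=42)  # a random subset of the edges
```

Calling `drop_edge` with different seeds produces different sparsified views of the same graph, which is what gives a Siamese setup its paired inputs.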
36

Mandal, Debmalya, Sourav Medya, Brian Uzzi, and Charu Aggarwal. "MetaLearning with Graph Neural Networks." ACM SIGKDD Explorations Newsletter 23, no. 2 (2021): 13–22. http://dx.doi.org/10.1145/3510374.3510379.

Full text
Abstract:
Graph Neural Networks (GNNs), a generalization of deep neural networks to graph data, have been widely used in various domains, ranging from drug discovery to recommender systems. However, GNNs in such applications are limited when few samples are available. Meta-learning has been an important framework for addressing the lack of samples in machine learning, and in recent years, researchers have started to apply meta-learning to GNNs. In this work, we provide a comprehensive survey of different meta-learning approaches involving GNNs on various graph problems showing the power of using these t
APA, Harvard, Vancouver, ISO, and other styles
37

Lin, Mingkai, Xiaobin Hong, Wenzhong Li, and Sanglu Lu. "Unified Graph Neural Networks Pre-training for Multi-domain Graphs." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 11 (2025): 12165–73. https://doi.org/10.1609/aaai.v39i11.33325.

Full text
Abstract:
Graph Neural Networks (GNNs) have proven effective and typically benefit from pre-training on accessible graphs to enhance performance on tasks with limited labeled data. However, existing GNNs are constrained by the "one-domain-one-model" limitation, which restricts their effectiveness across diverse graph domains. In this paper, we tackle this problem by developing a method called Multi-Domain Pre-training for a Unified GNN Model (MDP-GNN). This method is based on the philosophical notion that everything is interconnected, suggesting that a latent meta-domain exists to encompass the divers
APA, Harvard, Vancouver, ISO, and other styles
38

Zhou, Fan, and Chengtai Cao. "Overcoming Catastrophic Forgetting in Graph Neural Networks with Experience Replay." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 5 (2021): 4714–22. http://dx.doi.org/10.1609/aaai.v35i5.16602.

Full text
Abstract:
Graph Neural Networks (GNNs) have recently received significant research attention due to their superior performance on a variety of graph-related learning tasks. Most of the current works focus on either static or dynamic graph settings, addressing a single particular task, e.g., node/graph classification, link prediction. In this work, we investigate the question: can GNNs be applied to continuously learning a sequence of tasks? Towards that, we explore the Continual Graph Learning (CGL) paradigm and present the Experience Replay based framework ER-GNN for CGL to alleviate the catastrophic f
APA, Harvard, Vancouver, ISO, and other styles
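The experience-replay idea behind continual graph learning, as described in the abstract above, can be sketched as follows (a toy illustration of the general replay mechanism, not ER-GNN's actual experience-selection strategies; all names here are hypothetical):

```python
import random

class ReplayBuffer:
    """Minimal experience-replay buffer for continual learning:
    keep a few labeled examples per past task and mix them into the
    current task's batch to mitigate catastrophic forgetting."""

    def __init__(self, per_task=2, seed=0):
        self.per_task = per_task
        self.rng = random.Random(seed)
        self.store = {}  # task_id -> list of (node_id, label)

    def add_task(self, task_id, labeled_nodes):
        # Random selection stands in for more sophisticated strategies
        # (e.g., picking the most representative or influential nodes).
        k = min(self.per_task, len(labeled_nodes))
        self.store[task_id] = self.rng.sample(labeled_nodes, k)

    def replay(self):
        # All stored examples from previous tasks, appended to each new batch.
        return [ex for examples in self.store.values() for ex in examples]

buf = ReplayBuffer(per_task=2)
buf.add_task(0, [(0, "A"), (1, "A"), (2, "B")])
buf.add_task(1, [(3, "C"), (4, "C")])
batch = [(5, "D")] + buf.replay()  # current-task node plus replayed ones
```

Training on `batch` rather than only the new task's nodes is what keeps the model from overwriting knowledge of earlier tasks.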
39

Kooverjee, Nishai, Steven James, and Terence van Zyl. "Investigating Transfer Learning in Graph Neural Networks." Electronics 11, no. 8 (2022): 1202. http://dx.doi.org/10.3390/electronics11081202.

Full text
Abstract:
Graph neural networks (GNNs) build on the success of deep learning models by extending them for use in graph spaces. Transfer learning has proven extremely successful for traditional deep learning problems, resulting in faster training and improved performance. Despite the increasing interest in GNNs and their use cases, there is little research on their transferability. This research demonstrates that transfer learning is effective with GNNs, and describes how source tasks and the choice of GNN impact the ability to learn generalisable knowledge. We perform experiments using real-world and sy
APA, Harvard, Vancouver, ISO, and other styles
40

Lachaud, Guillaume, Patricia Conde-Cespedes, and Maria Trocan. "Mathematical Expressiveness of Graph Neural Networks." Mathematics 10, no. 24 (2022): 4770. http://dx.doi.org/10.3390/math10244770.

Full text
Abstract:
Graph Neural Networks (GNNs) are neural networks designed for processing graph data. Much recent work on graph neural networks has focused on the theoretical properties of the models, in particular their mathematical expressiveness, that is, the ability to map different graphs or nodes to different outputs or, conversely, to map permutations of the same graph to the same output. In this paper, we review the mathematical expressiveness results of graph neural networks. We find that according to their mathematical properties, the GNNs that are more expressive
APA, Harvard, Vancouver, ISO, and other styles
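The 1-Weisfeiler-Lehman (1-WL) test that bounds the expressiveness of message-passing GNNs can be implemented in a few lines. The sketch below (an illustrative implementation; the adjacency-dict representation is an assumption) shows the classic failure case: a 6-cycle and two disjoint triangles are both 2-regular, so 1-WL — and hence any standard message-passing GNN — cannot tell them apart.

```python
from collections import Counter

def wl_colors(adj, rounds=3):
    """1-WL color refinement: each node's new color is determined by its
    own color plus the multiset of its neighbors' colors. Returns the
    final color histogram of the graph."""
    colors = {v: 0 for v in adj}  # uniform initial coloring
    for _ in range(rounds):
        signatures = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                      for v in adj}
        palette = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        colors = {v: palette[signatures[v]] for v in adj}
    return Counter(colors.values())

# Two non-isomorphic 2-regular graphs on 6 nodes:
cycle6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
two_triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
                 3: [4, 5], 4: [3, 5], 5: [3, 4]}

# 1-WL assigns identical color histograms, so it cannot distinguish them.
print(wl_colors(cycle6) == wl_colors(two_triangles))  # True
```

By contrast, any degree difference breaks the symmetry: a 6-node path gets a different histogram from the 6-cycle after one round.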
41

Do, P. H., T. D. Le, A. Berezkin, and R. Kirichek. "Graph Neural Networks for Traffic Classification in Satellite Communication Channels: A Comparative Analysis." Proceedings of Telecommunication Universities 9, no. 3 (2023): 14–27. http://dx.doi.org/10.31854/1813-324x-2023-9-3-14-27.

Full text
Abstract:
This paper presents a comprehensive comparison of graph neural networks, specifically Graph Convolutional Networks (GCN) and Graph Attention Networks (GAT), for traffic classification in satellite communication channels. The performance of these GNN-based methods is benchmarked against traditional Multi-Layer Perceptron (MLP) algorithms. The results indicate that GNNs demonstrate superior accuracy and efficiency compared to MLPs, emphasizing their potential for application in satellite communication systems. Moreover, the study investigates the impact of various factors on GNN algorithm perfor
APA, Harvard, Vancouver, ISO, and other styles
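The GCN propagation rule compared against MLPs in the study above can be sketched as a single layer in NumPy (a minimal, weight-random illustration of the standard rule H' = ReLU(D^{-1/2}(A+I)D^{-1/2} H W), not the paper's traffic-classification model):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step: symmetrically normalize the adjacency
    matrix with self-loops, aggregate neighbor features, apply weights
    and a ReLU nonlinearity."""
    A_hat = A + np.eye(A.shape[0])                    # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * np.outer(d_inv_sqrt, d_inv_sqrt)  # D^-1/2 (A+I) D^-1/2
    return np.maximum(A_norm @ H @ W, 0.0)            # ReLU

rng = np.random.default_rng(1)
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)  # toy 3-node star graph
H = rng.normal(size=(3, 4))             # node features
W = rng.normal(size=(4, 2))             # weights (random here, learned in practice)
H_out = gcn_layer(A, H, W)
```

An MLP baseline would compute `np.maximum(H @ W, 0.0)` per node in isolation; the `A_norm @` term is exactly the structural information that an MLP discards.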
42

Zafeiropoulos, Nikolaos, Pavlos Bitilis, George E. Tsekouras, and Konstantinos Kotis. "Graph Neural Networks for Parkinson’s Disease Monitoring and Alerting." Sensors 23, no. 21 (2023): 8936. http://dx.doi.org/10.3390/s23218936.

Full text
Abstract:
Graph neural networks (GNNs) have been increasingly employed in the field of Parkinson’s disease (PD) research. The use of GNNs provides a promising approach to address the complex relationship between various clinical and non-clinical factors that contribute to the progression of PD. This review paper aims to provide a comprehensive overview of the state-of-the-art research that is using GNNs for PD. It presents PD and the motivation behind using GNNs in this field. Background knowledge on the topic is also presented. Our research methodology is based on PRISMA, presenting a comprehensive ove
APA, Harvard, Vancouver, ISO, and other styles
43

Jia, Zhiyong, Chuang Wang, Yang Wang, et al. "Recent Research Progress of Graph Neural Networks in Computer Vision." Electronics 14, no. 9 (2025): 1742. https://doi.org/10.3390/electronics14091742.

Full text
Abstract:
Graph neural networks (GNNs) have demonstrated significant potential in the field of computer vision in recent years, particularly in handling non-Euclidean data and capturing complex spatial and semantic relationships. This paper provides a comprehensive review of the latest research on GNNs in computer vision, with a focus on their applications in image processing, video analysis, and multimodal data fusion. First, we briefly introduce common GNN models, such as graph convolutional networks (GCN) and graph attention networks (GAT), and analyze their advantages in image and video data process
APA, Harvard, Vancouver, ISO, and other styles
44

Ye, Zhonglin, Lin Zhou, Mingyuan Li, Wei Zhang, Zhen Liu, and Haixing Zhao. "Multichannel Adaptive Data Mixture Augmentation for Graph Neural Networks." International Journal of Data Warehousing and Mining 20, no. 1 (2024): 1–14. http://dx.doi.org/10.4018/ijdwm.349975.

Full text
Abstract:
Graph neural networks (GNNs) have demonstrated significant potential in analyzing complex graph-structured data. However, conventional GNNs encounter challenges in effectively incorporating global and local features. Therefore, this paper introduces a novel approach for GNN called multichannel adaptive data mixture augmentation (MAME-GNN). It enhances a GNN by adopting a multi-channel architecture and interactive learning to effectively capture and coordinate the interrelationships between local and global graph structures. Additionally, this paper introduces the polynomial–Gaussian mixture gr
APA, Harvard, Vancouver, ISO, and other styles
45

Gogoshin, Grigoriy, and Andrei S. Rodin. "Graph Neural Networks in Cancer and Oncology Research: Emerging and Future Trends." Cancers 15, no. 24 (2023): 5858. http://dx.doi.org/10.3390/cancers15245858.

Full text
Abstract:
Next-generation cancer and oncology research needs to take full advantage of the multimodal structured, or graph, information, with the graph data types ranging from molecular structures to spatially resolved imaging and digital pathology, biological networks, and knowledge graphs. Graph Neural Networks (GNNs) efficiently combine the graph structure representations with the high predictive performance of deep learning, especially on large multimodal datasets. In this review article, we survey the landscape of recent (2020–present) GNN applications in the context of cancer and oncology research
APA, Harvard, Vancouver, ISO, and other styles
46

You, Yuxin, Zhen Liu, Xiangchao Wen, Yongtao Zhang, and Wei Ai. "Large Language Models Meet Graph Neural Networks: A Perspective of Graph Mining." Mathematics 13, no. 7 (2025): 1147. https://doi.org/10.3390/math13071147.

Full text
Abstract:
Graph mining is an important area in data mining and machine learning that involves extracting valuable information from graph-structured data. In recent years, significant progress has been made in this field through the development of graph neural networks (GNNs). However, GNNs still fall short in generalizing to diverse graph data. To address this issue, large language models (LLMs), with their superior semantic understanding, could provide new solutions for graph mining tasks. In this review, we systematically review the combination and application techniques of LLMs and GNNs and present a
APA, Harvard, Vancouver, ISO, and other styles
47

Zhang, Zepeng, and Olga Fink. "Domain Adaptive Unfolded Graph Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 21 (2025): 22714–22. https://doi.org/10.1609/aaai.v39i21.34431.

Full text
Abstract:
Over the last decade, graph neural networks (GNNs) have made significant progress in numerous graph machine learning tasks. In real-world applications, where domain shifts occur and labels are often unavailable for a new target domain, graph domain adaptation (GDA) approaches have been proposed to facilitate knowledge transfer from the source domain to the target domain. Previous efforts in tackling distribution shifts across domains have mainly focused on aligning the node embedding distributions generated by the GNNs in the source and target domains. However, as the core part of GDA approach
APA, Harvard, Vancouver, ISO, and other styles
48

Ju, Mingxuan, Shifu Hou, Yujie Fan, Jianan Zhao, Yanfang Ye, and Liang Zhao. "Adaptive Kernel Graph Neural Network." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (2022): 7051–58. http://dx.doi.org/10.1609/aaai.v36i6.20664.

Full text
Abstract:
Graph neural networks (GNNs) have demonstrated great success in representation learning for graph-structured data. The layer-wise graph convolution in GNNs is shown to be powerful at capturing graph topology. During this process, GNNs are usually guided by pre-defined kernels such as the Laplacian matrix, the adjacency matrix, or their variants. However, adopting pre-defined kernels may limit generality across different graphs: a mismatch between graph and kernel would entail sub-optimal performance. For example, GNNs that focus on low-frequency information may not achieve satisfactory perf
APA, Harvard, Vancouver, ISO, and other styles
49

Yuan, Jinliang, Yirong Yao, Ming Xu, Hualei Yu, Junyuan Xie, and Chongjun Wang. "Graph structure learning based on feature and label consistency." Intelligent Data Analysis 26, no. 6 (2022): 1539–55. http://dx.doi.org/10.3233/ida-216253.

Full text
Abstract:
Graph Neural Networks (GNNs) have achieved remarkable success in graph-related tasks by elegantly combining node features and graph topology. Most GNNs assume that the networks are homophilous, which is not always true in the real world, e.g., under structural noise or in disassortative graphs. Only a few works focus on generalizing graph neural networks to heterophilous or low-homophily networks, where connected nodes may have different labels. In this paper, we design a simple and effective Graph Structure Learning strategy based on Feature and Label consistency (GSLFL) to increase the homophilous l
APA, Harvard, Vancouver, ISO, and other styles
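The homophily assumption discussed in the abstract above is usually quantified by the edge homophily ratio, which can be computed in a few lines (an illustrative sketch; the function name `edge_homophily` is an assumption, not from the paper):

```python
def edge_homophily(edges, labels):
    """Fraction of edges whose endpoints share a label. GNNs that assume
    homophily tend to degrade as this ratio drops toward zero."""
    same = sum(1 for u, v in edges if labels[u] == labels[v])
    return same / len(edges)

labels = {0: "A", 1: "A", 2: "B", 3: "B"}
homophilous = [(0, 1), (2, 3), (0, 1)]    # all edges link same-label nodes
heterophilous = [(0, 2), (1, 3), (0, 3)]  # all edges cross label boundaries

print(edge_homophily(homophilous, labels))    # 1.0
print(edge_homophily(heterophilous, labels))  # 0.0
```

Structure-learning methods like the one surveyed here can be viewed as rewiring the graph to push this ratio upward before (or while) the GNN trains.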
50

Bo, Deyu, Binbin Hu, Xiao Wang, Zhiqiang Zhang, Chuan Shi, and Jun Zhou. "Regularizing Graph Neural Networks via Consistency-Diversity Graph Augmentations." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 4 (2022): 3913–21. http://dx.doi.org/10.1609/aaai.v36i4.20307.

Full text
Abstract:
Despite the remarkable performance of graph neural networks (GNNs) in semi-supervised learning, they are criticized for not making full use of unlabeled data and for suffering from over-fitting. Recently, graph data augmentation, used to improve both the accuracy and generalization of GNNs, has received considerable attention. However, one fundamental question is how to evaluate the quality of graph augmentations in principle. In this paper, we propose two metrics, Consistency and Diversity, from the aspects of augmentation correctness and generalization. Moreover, we discover that existing augmentation
APA, Harvard, Vancouver, ISO, and other styles