Journal articles on the topic 'Gnn'

Consult the top 50 journal articles for your research on the topic 'Gnn.'

1

Yilmaz, Fatih, Aybüke Ertaş, and Seda Yamaç Akbiyik. "Determinants of circulant matrices with Gaussian nickel Fibonacci numbers." Filomat 37, no. 25 (2023): 8683–92. http://dx.doi.org/10.2298/fil2325683y.

Abstract:
In this study, we consider Kn := circ(GN1, GN2, ..., GNn), circulant matrices whose entries are the Gaussian Nickel Fibonacci numbers GN1, GN2, ..., GNn. Then, we compute determinants of Kn by exploiting Chebyshev polynomials of the second kind. Moreover, we obtain Cassini's identity and the D'Ocagne identity for the Gaussian Nickel Fibonacci numbers.
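
The determinant identity such papers exploit rests on a classical fact: the eigenvalues of a circulant matrix are the values of its symbol polynomial at the n-th roots of unity, so the determinant is their product. A minimal NumPy check of that fact, using ordinary Fibonacci numbers as a stand-in sequence (the Gaussian Nickel Fibonacci numbers GN1, ..., GNn themselves are not reproduced here):

```python
import numpy as np

def circ(c):
    """Circulant matrix whose first row is c; row i is c cyclically shifted right i places."""
    n = len(c)
    return np.array([[c[(j - i) % n] for j in range(n)] for i in range(n)], dtype=float)

def circ_det_via_roots(c):
    """det circ(c) = prod over k of f(w^k), where f(x) = c0 + c1*x + ... + c_{n-1}*x^{n-1}
    and w = exp(2*pi*i/n) is a primitive n-th root of unity."""
    n = len(c)
    w = np.exp(2j * np.pi * np.arange(n) / n)
    return np.prod([np.polyval(list(reversed(c)), wk) for wk in w])

fib = [1, 1, 2, 3, 5, 8]                 # stand-in sequence
print(np.linalg.det(circ(fib)))          # direct determinant
print(circ_det_via_roots(fib).real)      # eigenvalue product (imaginary part ~ 0)
```
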
2

Stanimirović, Predrag S., Nataša Tešić, Dimitrios Gerontitis, Gradimir V. Milovanović, Milena J. Petrović, Vladimir L. Kazakovtsev, and Vladislav Stasiuk. "Application of Gradient Optimization Methods in Defining Neural Dynamics." Axioms 13, no. 1 (January 14, 2024): 49. http://dx.doi.org/10.3390/axioms13010049.

Abstract:
Applications of the gradient method for nonlinear optimization in the development of the Gradient Neural Network (GNN) and the Zhang Neural Network (ZNN) are investigated. In particular, the solution of the time-varying matrix equation AXB=D is studied using a novel GNN model, termed GGNN(A,B,D). The GGNN model is developed by applying GNN dynamics to the gradient of the error matrix used in the development of the GNN model. The convergence analysis shows that the neural state matrix of the GGNN(A,B,D) design converges asymptotically to the solution of the matrix equation AXB=D for any initial state matrix. It is also shown that the convergence result is the least-squares solution, defined depending on the selected initial matrix. A hybridization of GGNN with the analogous modification GZNN of the ZNN dynamics is considered. The Simulink implementation of the presented GGNN models is carried out on a set of real matrices.
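
For orientation, a minimal numerical sketch of the classical GNN gradient flow for AXB = D on which the abstract builds (the paper's GGNN applies GNN dynamics to the gradient of this error once more; its convergence analysis is not reproduced here). The matrices, gamma, and step size are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.eye(4) + 0.1 * rng.standard_normal((4, 4))   # well-conditioned test data
B = np.eye(3) + 0.1 * rng.standard_normal((3, 3))
D = A @ rng.standard_normal((4, 3)) @ B             # built so an exact solution exists

# GNN dynamics for AXB = D: gradient flow on (1/2)*||A@X@B - D||_F^2,
#   dX/dt = -gamma * A.T @ (A@X@B - D) @ B.T,
# integrated here with forward Euler steps of size dt.
gamma, dt = 1.0, 0.05
X = np.zeros((4, 3))
for _ in range(2000):
    E = A @ X @ B - D
    X -= dt * gamma * (A.T @ E @ B.T)
print(np.linalg.norm(A @ X @ B - D))    # residual decays toward zero
```
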
3

Long, Juan. "Exploration of Cross-Border Language Planning Using the Graph Neural Network for Internet of Things-Native Data." Mobile Information Systems 2022 (September 23, 2022): 1–12. http://dx.doi.org/10.1155/2022/7807878.

Abstract:
This work studies the application of the graph neural network (GNN) to cross-border language planning (CBLP). Following a review of the connotation of GNNs, it puts forward a research method for CBLP based on Internet of Things (IoT)-native data and studies the classification of language texts using different types of GNNs. First, an isomorphic label-embedded graph convolution network (GCN) is proposed. Then, a scalability-enhanced heterogeneous GCN is proposed. Subsequently, the two GCN models are fused into the research model, heterogeneous InducGCN. Finally, the model performances are comparatively analyzed. The experimental findings suggest that the classification accuracy of the label-embedded GNN is higher than that of other methods, with the highest recognition accuracy of 97.37% on dataset R8. The proposed heterogeneous InducGCN fusion model improves on the label-embedded GNN by 0.09 percentage points, reaching 97.46% classification accuracy.
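
For context, GCN layers of the kind such text-graph models stack follow the standard propagation rule H' = ReLU(D^{-1/2}(A+I)D^{-1/2} H W). A toy sketch below; the graph, features, and weights are placeholders, not the paper's model:

```python
import numpy as np

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)           # toy word/document graph
A_hat = A + np.eye(3)                            # add self-loops
d = A_hat.sum(axis=1)
norm = np.diag(d ** -0.5) @ A_hat @ np.diag(d ** -0.5)

H = np.random.default_rng(0).normal(size=(3, 4)) # node features
W = np.random.default_rng(1).normal(size=(4, 2)) # weights (learned in practice)
print(np.maximum(norm @ H @ W, 0.0))             # one GCN layer with ReLU
```
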
4

Zhao, Qingchao, Long Li, Yan Chu, Zhen Yang, Zhengkui Wang, and Wen Shan. "Efficient Supervised Image Clustering Based on Density Division and Graph Neural Networks." Remote Sensing 14, no. 15 (August 5, 2022): 3768. http://dx.doi.org/10.3390/rs14153768.

Abstract:
In recent research, supervised image clustering based on Graph Neural Network (GNN) connectivity prediction has demonstrated considerable improvements over traditional clustering algorithms. However, existing supervised image clustering algorithms are usually time-consuming, which limits their applications. To infer the connectivity between image instances, they usually create a subgraph for each image instance. Because a large number of subgraphs must be created and processed as the input of the GNN, the computation overheads are enormous. To address the high computation overhead in GNN connectivity prediction, we present a time-efficient and effective GNN-based supervised clustering framework based on density division, named DDC-GNN. DDC-GNN divides all image instances into high-density and low-density parts, and only performs GNN subgraph connectivity prediction on the low-density parts, resulting in a significant reduction in redundant calculations. We test two typical models in the GNN connectivity prediction module of the DDC-GNN framework: a graph convolutional networks (GCN)-based model and a graph auto-encoder (GAE)-based model. Meanwhile, adaptive subgraphs are generated instead of fixed-size subgraphs to ensure sufficient contextual information extraction for the low-density parts. According to experiments on different datasets, DDC-GNN achieves higher accuracy and is almost five times faster than the same models without the density division strategy.
5

Shanthamallu, Uday Shankar, Jayaraman J. Thiagarajan, and Andreas Spanias. "Uncertainty-Matching Graph Neural Networks to Defend Against Poisoning Attacks." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 11 (May 18, 2021): 9524–32. http://dx.doi.org/10.1609/aaai.v35i11.17147.

Abstract:
Graph Neural Networks (GNNs), a generalization of neural networks to graph-structured data, are often implemented using message passes between entities of a graph. While GNNs are effective for node classification, link prediction and graph classification, they are vulnerable to adversarial attacks, i.e., a small perturbation to the structure can lead to a non-trivial performance degradation. In this work, we propose Uncertainty Matching GNN (UM-GNN), that is aimed at improving the robustness of GNN models, particularly against poisoning attacks to the graph structure, by leveraging epistemic uncertainties from the message passing framework. More specifically, we propose to build a surrogate predictor that does not directly access the graph structure, but systematically extracts reliable knowledge from a standard GNN through a novel uncertainty-matching strategy. Interestingly, this uncoupling makes UM-GNN immune to evasion attacks by design, and achieves significantly improved robustness against poisoning attacks. Using empirical studies with standard benchmarks and a suite of global and target attacks, we demonstrate the effectiveness of UM-GNN, when compared to existing baselines including the state-of-the-art robust GCN.
6

Ge, Kao, Jian-Qiang Zhao, and Yan-Yong Zhao. "GR-GNN: Gated Recursion-Based Graph Neural Network Algorithm." Mathematics 10, no. 7 (April 4, 2022): 1171. http://dx.doi.org/10.3390/math10071171.

Abstract:
Against an internet background involving artificial intelligence and big data, unstructured, materialized, graph-structured network data, such as social networks, knowledge graphs, and compound molecules, have gradually entered various specific business scenarios. One problem that urgently needs to be solved in industry is how to perform feature extraction, transformation, and operations on graph-structured data to solve downstream tasks, such as node classification and graph classification, in actual business scenarios. This paper therefore proposes a gated recursion-based graph neural network (GR-GNN) algorithm to solve tasks such as depth-dependent node feature extraction and node classification on graph-structured data. A GRU neural network unit is used to complete the node classification task and, thereby, construct the GR-GNN model. To verify the accuracy, effectiveness, and superiority of the algorithm on the open datasets Cora, CiteseerX, and PubMed, its results are compared with those of the classical graph neural network baseline algorithms GCN, GAT, and GraphSAGE. The experimental results show that, on the validation set, the accuracy and target loss of the GR-GNN algorithm are better than or equal to those of the other baseline algorithms; in terms of convergence speed, GR-GNN is comparable to GCN and faster than the other algorithms. The results show that the GR-GNN algorithm proposed in this paper has high accuracy and computational efficiency and broad applicability.
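
A sketch of the GRU recurrence that GR-GNN-style models apply per node, treating the aggregated neighbor message as the input and the node state as the hidden state; the weights below are random placeholders, and the paper's exact wiring may differ:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(m, h, W, U, b):
    """One GRU update: m is the aggregated neighbor message, h the node state."""
    z = sigmoid(W["z"] @ m + U["z"] @ h + b["z"])          # update gate
    r = sigmoid(W["r"] @ m + U["r"] @ h + b["r"])          # reset gate
    h_tilde = np.tanh(W["h"] @ m + U["h"] @ (r * h) + b["h"])
    return (1.0 - z) * h + z * h_tilde

dim = 4
rng = np.random.default_rng(0)
W = {k: rng.normal(size=(dim, dim)) for k in "zrh"}
U = {k: rng.normal(size=(dim, dim)) for k in "zrh"}
b = {k: np.zeros(dim) for k in "zrh"}
print(gru_cell(rng.normal(size=dim), np.zeros(dim), W, U, b))
```
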
7

Gholami, Fatemeh, Zahed Rahmati, Alireza Mofidi, and Mostafa Abbaszadeh. "On Enhancement of Text Classification and Analysis of Text Emotions Using Graph Machine Learning and Ensemble Learning Methods on Non-English Datasets." Algorithms 16, no. 10 (October 4, 2023): 470. http://dx.doi.org/10.3390/a16100470.

Abstract:
In recent years, machine learning approaches, in particular graph learning methods, have achieved great results in the field of natural language processing, in particular on text classification tasks. However, many of these models have shown limited generalization on datasets in different languages. In this research, we investigate and elaborate on graph machine learning methods on non-English datasets (such as the Persian Digikala dataset, which consists of users' opinions) for the task of text classification. More specifically, we investigate different combinations of (Pars)BERT with various graph neural network (GNN) architectures (such as GCN, GAT, and GIN), as well as ensemble learning methods, to tackle the text classification task on certain well-known non-English datasets. Our analysis and results demonstrate how applying GNN models helps in achieving good scores on the task of text classification by better capturing the topological information between textual data. Additionally, our experiments show how models employing language-specific pre-trained models (like ParsBERT instead of BERT) capture better information about the data, resulting in better accuracies.
8

Ennadir, Sofiane, Yassine Abbahaddou, Johannes F. Lutzeyer, Michalis Vazirgiannis, and Henrik Boström. "A Simple and Yet Fairly Effective Defense for Graph Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (March 24, 2024): 21063–71. http://dx.doi.org/10.1609/aaai.v38i19.30098.

Abstract:
Graph Neural Networks (GNNs) have emerged as the dominant approach for machine learning on graph-structured data. However, concerns have arisen regarding the vulnerability of GNNs to small adversarial perturbations. Existing defense methods against such perturbations suffer from high time complexity and can negatively impact the model's performance on clean graphs. To address these challenges, this paper introduces NoisyGNNs, a novel defense method that incorporates noise into the underlying model's architecture. We establish a theoretical connection between noise injection and the enhancement of GNN robustness, highlighting the effectiveness of our approach. We further conduct extensive empirical evaluations on the node classification task to validate our theoretical findings, focusing on two popular GNNs: the GCN and GIN. The results demonstrate that NoisyGNN achieves superior or comparable defense performance to existing methods while minimizing added time complexity. The NoisyGNN approach is model-agnostic, allowing it to be integrated with different GNN architectures. Successful combinations of our NoisyGNN approach with existing defense techniques demonstrate even further improved adversarial defense results. Our code is publicly available at: https://github.com/Sennadir/NoisyGNN.
9

Wu, Qingle, Benjamin K. Ng, Chan-Tong Lam, Xiangyu Cen, Yuanhui Liang, and Yan Ma. "Shared Graph Neural Network for Channel Decoding." Applied Sciences 13, no. 23 (November 24, 2023): 12657. http://dx.doi.org/10.3390/app132312657.

Abstract:
With the application of graph neural networks (GNN) in the communication physical layer, GNN-based channel decoding algorithms have become a research hotspot. Compared with traditional decoding algorithms, GNN-based channel decoding algorithms achieve better performance. GNN has good stability and can handle large-scale problems; it also generalizes well to different network settings. Compared with deep learning-based channel decoding algorithms, GNN-based channel decoding algorithms avoid a large number of multiplications between learning weights and messages. However, the aggregation edges and nodes of a GNN require many parameters, which demands a large amount of memory. In this work, we propose a GNN-based channel decoding algorithm with shared parameters, called the shared graph neural network (SGNN). For BCH and LDPC codes, the SGNN decoding algorithm needs only a quarter or half of the parameters, while achieving slightly degraded bit error ratio (BER) performance.
10

Kim, Cheolhyeong, Haeseong Moon, and Hyung Ju Hwang. "NEAR: Neighborhood Edge AggregatoR for Graph Classification." ACM Transactions on Intelligent Systems and Technology 13, no. 3 (June 30, 2022): 1–17. http://dx.doi.org/10.1145/3506714.

Abstract:
Learning graph-structured data with graph neural networks (GNNs) has been recently emerging as an important field because of its wide applicability in bioinformatics, chemoinformatics, social network analysis, and data mining. Recent GNN algorithms are based on neural message passing, which enables GNNs to integrate local structures and node features recursively. However, past GNN algorithms based on 1-hop neighborhood neural message passing are exposed to a risk of loss of information on local structures and relationships. In this article, we propose Neighborhood Edge AggregatoR (NEAR), a framework that aggregates relations between the nodes in the neighborhood via edges. NEAR, which can be orthogonally combined with Graph Isomorphism Network (GIN), gives integrated information that describes which nodes in the neighborhood are connected. Therefore, NEAR can reflect additional information of a local structure of each node beyond the nodes themselves in 1-hop neighborhood. Experimental results on multiple graph classification tasks show that our algorithm makes a good improvement over other existing 1-hop based GNN-based algorithms.
11

Mi, Wujuan, Minghua Zhang, Yuan Li, Xiaoxuan Jing, Wei Pan, Xin Xing, Chen Xiao, Qiusheng He, and Yonghong Bi. "Spatio-Temporal Pattern of Groundwater Nitrate-Nitrogen and Its Potential Human Health Risk in a Severe Water Shortage Region." Sustainability 15, no. 19 (September 27, 2023): 14284. http://dx.doi.org/10.3390/su151914284.

Abstract:
Groundwater nitrate-nitrogen (GNN) has become one of the most widespread pollutants. However, there is still a poor understanding of GNN pollution and its potential effects on human health. In this study, GNN in Taiyuan, a region of severe water scarcity in northern China, was tracked from 2016 to 2020; the spatio-temporal distribution characteristics of GNN were demonstrated and the potential human health risks to infants, children, and adults were assessed. The results showed that the concentration of GNN varied from 0.1 to 43.3 mg/L; the highest mean concentration was observed in 2016 and the lowest in 2020. GNN concentration declined over time, which was closely related to the proactive environmental policies of Taiyuan city. GNN levels were considerably greater in urban areas than in rural areas (p < 0.001), and forest land had a very low level of GNN, significantly different from grassland, farmland, and construction land (p < 0.001). According to the hazard quotient, the impacts of GNN on human health were age-specific, in the order infants > children > adults. It was concluded that the interception effect of forests could effectively alleviate groundwater pollution pressures, and more forest land is necessary for human health risk prevention in severe water shortage areas to alleviate GNN pollution.
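
For readers unfamiliar with the risk metric, a hedged illustration of the standard EPA-style hazard quotient (chronic daily intake divided by the reference dose) that produces the infant > child > adult ordering; the concentration, intake, and body-weight values below are generic textbook assumptions, not the paper's exposure parameters:

```python
# RfD = 1.6 mg/(kg*day) is the commonly used reference dose for nitrate-nitrogen.
def hazard_quotient(conc_mg_per_L, intake_L_per_day, body_weight_kg, rfd=1.6):
    cdi = conc_mg_per_L * intake_L_per_day / body_weight_kg   # chronic daily intake
    return cdi / rfd

# Generic (daily water intake, body weight) assumptions per age group:
for group, intake, bw in [("infant", 0.8, 10), ("child", 1.5, 30), ("adult", 2.5, 70)]:
    print(group, round(hazard_quotient(20.0, intake, bw), 2))  # HQ > 1 flags risk
```
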
12

Hong, Xiaobin, Wenzhong Li, Chaoqun Wang, Mingkai Lin, and Sanglu Lu. "Label Attentive Distillation for GNN-Based Graph Classification." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 8 (March 24, 2024): 8499–507. http://dx.doi.org/10.1609/aaai.v38i8.28693.

Abstract:
Graph Neural Networks (GNNs) have emerged as a powerful tool for modeling graph-structured data, exhibiting remarkable potential in applications such as social networks, recommendation systems, and molecular structures. However, conventional GNNs perform node-level feature aggregation from neighbors without considering graph-label information, which leads to a misaligned-embedding problem that may have a detrimental effect on graph-level tasks such as graph classification. In this paper, we propose a novel label-attentive distillation method called LAD-GNN for graph representation learning to solve this problem. It alternately trains a teacher model and a student GNN with a distillation-based approach. In the teacher model, a label-attentive encoder is proposed to encode the label information, fusing it with the node features to generate ideal embeddings. In the student model, the ideal embeddings are used as intermediate supervision to urge the student GNN to learn class-friendly node embeddings that facilitate graph-level tasks. Generally, LAD-GNN is an enhanced GNN training approach that can be incorporated with an arbitrary GNN backbone to improve performance without a significant increase in computational cost. Extensive experiments with 7 GNN backbones on 10 benchmark datasets show that LAD-GNN improves the SOTA GNNs in graph classification accuracy. The source code of LAD-GNN is publicly available at https://github.com/XiaobinHong/LAD-GNN.
13

Gao, Jun, Jiazun Chen, Zhao Li, and Ji Zhang. "ICS-GNN." Proceedings of the VLDB Endowment 14, no. 6 (February 2021): 1006–18. http://dx.doi.org/10.14778/3447689.3447704.

Abstract:
Searching a community containing a given query vertex in an online social network enjoys wide applications like recommendation, team organization, etc. When applied to real-life networks, the existing approaches face two major limitations. First, they usually take two steps, i.e., crawling a large part of the network first and then finding the community next, but the entire network is usually too big and most of the data are not interesting to end users. Second, the existing methods utilize hand-crafted rules to measure community membership, while it is very difficult to define effective rules as the communities are flexible for different query vertices. In this paper, we propose an Interactive Community Search method based on Graph Neural Network (shortened by ICS-GNN) to locate the target community over a subgraph collected on the fly from an online network. Specifically, we recast the community membership problem as a vertex classification problem using GNN, which captures similarities between the graph vertices and the query vertex by combining content and structural features seamlessly and flexibly under the guide of users' labeling. We then introduce a k-sized Maximum-GNN-scores (shortened by kMG) community to describe the target community. We next discover the target community iteratively and interactively. In each iteration, we build a candidate subgraph using the crawled pages with the guide of the query vertex and labeled vertices, infer the vertex scores with a GNN model trained on the subgraph, and discover the kMG community which will be evaluated by end users to acquire more feedback. Besides, two optimization strategies are proposed to combine ranking loss into the GNN model and search more space in the target community location. We conduct the experiments in both offline and online real-life data sets, and demonstrate that ICS-GNN can produce effective communities with low overhead in communication, computation, and user labeling.
14

Tan, Zhiguo. "Fixed-Time Convergent Gradient Neural Network for Solving Online Sylvester Equation." Mathematics 10, no. 17 (August 28, 2022): 3090. http://dx.doi.org/10.3390/math10173090.

Abstract:
This paper aims at finding a fixed-time solution to the Sylvester equation by using a gradient neural network (GNN). To reach this goal, a modified sign-bi-power (msbp) function is presented and applied on a linear GNN as an activation function. Accordingly, a fixed-time convergent GNN (FTC-GNN) model is developed for solving the Sylvester equation. The upper bound of the convergence time of such an FTC-GNN model can be predetermined if parameters are given regardless of the initial conditions. This point is corroborated by a detailed theoretical analysis. In addition, the convergence time is also estimated utilizing the Lyapunov stability theory. Two examples are then simulated to demonstrate the validation of the theoretical analysis, as well as the superior convergence performance of the presented FTC-GNN model as compared to the existing GNN models.
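
The sign-bi-power family that the msbp activation modifies has a standard closed form; a sketch below, with the caveat that the paper's exact msbp definition may differ (the added linear term k*x shown here is one common modification used to obtain fixed-time bounds):

```python
import numpy as np

def sbp(x, r=0.5):
    """Sign-bi-power function: 0.5*(|x|**r + |x|**(1/r))*sign(x), with 0 < r < 1."""
    ax = np.abs(x)
    return 0.5 * (ax**r + ax**(1.0 / r)) * np.sign(x)

def msbp(x, r=0.5, k=1.0):
    """Illustrative modified sign-bi-power: sbp plus a linear term k*x."""
    return sbp(x, r) + k * x

print(msbp(np.array([-2.0, -0.5, 0.0, 0.5, 2.0])))
```
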
15

Wang, Zhong Li, Cong Liang Cheng, Xian Hai Hu, and John Garcia. "Synthesis and Properties of Polyurethane - Dispersed Diazo Black GNN Polymer Dye." Key Engineering Materials 852 (July 2020): 1–10. http://dx.doi.org/10.4028/www.scientific.net/kem.852.1.

Abstract:
Polyurethane-Dispersed Diazo Black GNN polymer dye (PU-DDB GNN) was synthesized by incorporating DDB GNN into polyurethane chains. The expected structure of the polyurethane-dispersed diazo black GNN polymer dye was confirmed by FT-IR and UV-vis spectra. The photochromic behavior of the polyurethane-dispersed diazo black GNN dye was investigated. Thermogravimetric and differential calorimetric analyses confirmed that the dye has an amorphous structure with good thermal stability. Mechanical testing shows that its elongation at break is quite high.
16

Zheng, Chenguang, Hongzhi Chen, Yuxuan Cheng, Zhezheng Song, Yifan Wu, Changji Li, James Cheng, Hao Yang, and Shuai Zhang. "ByteGNN." Proceedings of the VLDB Endowment 15, no. 6 (February 2022): 1228–42. http://dx.doi.org/10.14778/3514061.3514069.

Abstract:
Graph neural networks (GNNs) have shown excellent performance in a wide range of applications such as recommendation, risk control, and drug discovery. With the increase in the volume of graph data, distributed GNN systems become essential to support efficient GNN training. However, existing distributed GNN training systems suffer from various performance issues including high network communication cost, low CPU utilization, and poor end-to-end performance. In this paper, we propose ByteGNN, which addresses the limitations in existing distributed GNN systems with three key designs: (1) an abstraction of mini-batch graph sampling to support high parallelism, (2) a two-level scheduling strategy to improve resource utilization and to reduce the end-to-end GNN training time, and (3) a graph partitioning algorithm tailored for GNN workloads. Our experiments show that ByteGNN outperforms the state-of-the-art distributed GNN systems with up to 3.5 to 23.8 times faster end-to-end execution, 2 to 6 times higher CPU utilization, and around half of the network communication cost.
17

Do, P. H., T. D. Le, A. Berezkin, and R. Kirichek. "Graph Neural Networks for Traffic Classification in Satellite Communication Channels: A Comparative Analysis." Proceedings of Telecommunication Universities 9, no. 3 (July 10, 2023): 14–27. http://dx.doi.org/10.31854/1813-324x-2023-9-3-14-27.

Abstract:
This paper presents a comprehensive comparison of graph neural networks, specifically Graph Convolutional Networks (GCN) and Graph Attention Networks (GAT), for traffic classification in satellite communication channels. The performance of these GNN-based methods is benchmarked against traditional Multi-Layer Perceptron (MLP) algorithms. The results indicate that GNNs demonstrate superior accuracy and efficiency compared to MLPs, emphasizing their potential for application in satellite communication systems. Moreover, the study investigates the impact of various factors on GNN algorithm performance, providing insights into the most effective strategies for implementing GNNs in traffic classification tasks. This research offers valuable knowledge on the benefits and prospective use cases of GNNs within satellite communication systems.
18

Chen, Yuhuan, Chenfu Yi, and Jian Zhong. "Linear Simultaneous Equations’ Neural Solution and Its Application to Convex Quadratic Programming with Equality-Constraint." Journal of Applied Mathematics 2013 (2013): 1–6. http://dx.doi.org/10.1155/2013/695647.

Abstract:
A gradient-based neural network (GNN) is improved and presented for solving linear algebraic equations. Such a GNN model is then used for the online solution of convex quadratic programming (QP) with equality constraints, using the Lagrangian function and the Karush-Kuhn-Tucker (KKT) conditions. From the electronic architecture of such a GNN, it is known that the performance of the presented GNN can be enhanced by adopting different activation function arrays and/or design parameters. Computer simulation results substantiate that such a GNN can obtain the accurate solution of the QP problem in an effective manner.
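
The reduction behind such models: with the Lagrangian L(x, lam) = 0.5*x'Qx + c'x + lam'(Ax - b), the KKT conditions of the equality-constrained QP form one linear system, which the GNN then solves dynamically. A small worked example under illustrative data, solved directly for reference:

```python
import numpy as np

# min 0.5*x'Qx + c'x  subject to  Ax = b
Q = np.array([[4.0, 1.0], [1.0, 3.0]])
c = np.array([-1.0, -2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

n, m = Q.shape[0], A.shape[0]
# KKT system: [[Q, A'], [A, 0]] @ [x; lam] = [-c; b]
KKT = np.block([[Q, A.T], [A, np.zeros((m, m))]])
rhs = np.concatenate([-c, b])
sol = np.linalg.solve(KKT, rhs)
print("x* =", sol[:n], " lambda* =", sol[n:])
```
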
19

Tang, Dahai, Jiali Wang, Rong Chen, Lei Wang, Wenyuan Yu, Jingren Zhou, and Kenli Li. "XGNN: Boosting Multi-GPU GNN Training via Global GNN Memory Store." Proceedings of the VLDB Endowment 17, no. 5 (January 2024): 1105–18. http://dx.doi.org/10.14778/3641204.3641219.

Abstract:
GPUs are commonly utilized to accelerate GNN training, particularly on a multi-GPU server with high-speed interconnects (e.g., NVLink and NVSwitch). However, the rapidly increasing scale of graphs poses a challenge to applying GNN to real-world applications, due to limited GPU memory. This paper presents XGNN, a multi-GPU GNN training system that fully utilizes system memory (e.g., GPU and host memory), as well as high-speed interconnects. The core design of XGNN is the Global GNN Memory Store (GGMS), which abstracts underlying resources to provide a unified memory store for GNN training. It partitions hybrid input data, including graph topological and feature data, across both GPU and host memory. GGMS also provides easy-to-use APIs for GNN applications to access data transparently, forwarding data access requests to the actual physical data partitions automatically. Evaluation on various multi-GPU platforms using three common GNN models with four large-scale datasets shows that XGNN outperforms DGL, Quiver and DGL+C by up to 7.9X (from 2.3X), 15.7X (from 3.3X) and 2.8X (from 1.3X), respectively.
20

Yang, Zhi, Yadong Yan, Haitao Gan, Jing Zhao, and Zhiwei Ye. "A safe semi-supervised graph convolution network." Mathematical Biosciences and Engineering 19, no. 12 (2022): 12677–92. http://dx.doi.org/10.3934/mbe.2022592.

Abstract:
In the semi-supervised learning field, Graph Convolution Network (GCN), as a variant model of GNN, has achieved promising results for non-Euclidean data by introducing convolution into GNN. However, GCN and its variant models fail to safely use the information of risk unlabeled data, which will degrade the performance of semi-supervised learning. Therefore, we propose a Safe GCN framework (Safe-GCN) to improve the learning performance. In the Safe-GCN, we design an iterative process to label the unlabeled data. In each iteration, a GCN and its supervised version (S-GCN) are learned to find the unlabeled data with high confidence. The high-confidence unlabeled data and their pseudo labels are then added to the label set. Finally, both added unlabeled data and labeled ones are used to train a S-GCN which can achieve the safe exploration of the risk unlabeled data and enable safe use of large numbers of unlabeled data. The performance of Safe-GCN is evaluated on three well-known citation network datasets and the obtained results demonstrate the effectiveness of the proposed framework over several graph-based semi-supervised learning methods.
21

Duan, Mingjiang, Tongya Zheng, Yang Gao, Gang Wang, Zunlei Feng, and Xinyu Wang. "DGA-GNN: Dynamic Grouping Aggregation GNN for Fraud Detection." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 10 (March 24, 2024): 11820–28. http://dx.doi.org/10.1609/aaai.v38i10.29067.

Abstract:
Fraud detection has increasingly become a prominent research field due to the dramatically increased incidents of fraud. The complex connections involving thousands, or even millions of nodes, present challenges for fraud detection tasks. Many researchers have developed various graph-based methods to detect fraud from these intricate graphs. However, those methods neglect two distinct characteristics of the fraud graph: the non-additivity of certain attributes and the distinguishability of grouped messages from neighbor nodes. This paper introduces the Dynamic Grouping Aggregation Graph Neural Network (DGA-GNN) for fraud detection, which addresses these two characteristics by dynamically grouping attribute value ranges and neighbor nodes. In DGA-GNN, we initially propose the decision tree binning encoding to transform non-additive node attributes into bin vectors. This approach aligns well with the GNN’s aggregation operation and avoids nonsensical feature generation. Furthermore, we devise a feedback dynamic grouping strategy to classify graph nodes into two distinct groups and then employ a hierarchical aggregation. This method extracts more discriminative features for fraud detection tasks. Extensive experiments on five datasets suggest that our proposed method achieves a 3% ~ 16% improvement over existing SOTA methods. Code is available at https://github.com/AtwoodDuan/DGA-GNN.
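
A toy illustration of the binning-encoding idea: one-hot bin vectors let sum-aggregation count bins instead of adding non-additive raw values. DGA-GNN derives the bin edges from decision-tree split points; fixed quantile edges are used below as a stand-in:

```python
import numpy as np

values = np.array([18.0, 25.0, 31.0, 47.0, 62.0])  # e.g. a non-additive attribute
edges = np.quantile(values, [0.25, 0.5, 0.75])     # stand-in for tree split points
bins = np.digitize(values, edges)                  # bin index per node
onehot = np.eye(len(edges) + 1)[bins]              # bin vector per node
print(onehot)                                      # summing rows now counts bins
```
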
22

Zhang, Yuhao, and Arun Kumar. "Lotan: Bridging the Gap between GNNs and Scalable Graph Analytics Engines." Proceedings of the VLDB Endowment 16, no. 11 (July 2023): 2728–41. http://dx.doi.org/10.14778/3611479.3611483.

Abstract:
Recent advances in Graph Neural Networks (GNNs) have changed the landscape of modern graph analytics. The complexity of GNN training and the scalability challenges have also sparked interest from the systems community, with efforts to build systems that provide higher efficiency and schemes to reduce costs. However, we observe that many such systems basically "reinvent the wheel" of much work done in the database world on scalable graph analytics engines. Further, they often tightly couple the scalability treatments of graph data processing with that of GNN training, resulting in entangled complex problems and systems that often do not scale well on one of those axes. In this paper, we ask a fundamental question: How far can we push existing systems for scalable graph analytics and deep learning (DL) instead of building custom GNN systems? Are compromises inevitable on scalability and/or runtimes? We propose Lotan, the first scalable and optimized data system for full-batch GNN training with decoupled scaling that bridges the hitherto siloed worlds of graph analytics systems and DL systems. Lotan offers a series of technical innovations, including re-imagining GNN training as query plan-like dataflows, execution plan rewriting, optimized data movement between systems, a GNN-centric graph partitioning scheme, and the first known GNN model batching scheme. We prototyped Lotan on top of GraphX and PyTorch. An empirical evaluation using several real-world benchmark GNN workloads reveals a promising nuanced picture: Lotan significantly surpasses the scalability of state-of-the-art custom GNN systems, while often matching or being only slightly behind on time-to-accuracy metrics in some cases. We also show the impact of our system optimizations. Overall, our work shows that the GNN world can indeed benefit from building on top of scalable graph analytics engines. Lotan's new level of scalability can also empower new ML-oriented research on ever-larger graphs and GNNs.
23

Sun, Yanqiu. "A Hybrid Approach by Integrating Brain Storm Optimization Algorithm with Grey Neural Network for Stock Index Forecasting." Abstract and Applied Analysis 2014 (2014): 1–10. http://dx.doi.org/10.1155/2014/759862.

Abstract:
Stock index forecasting is an important tool for both the investors and the government organizations. However, due to the inherent large volatility, high noise, and nonlinearity of the stock index, stock index forecasting has been a challenging task for a long time. This paper aims to develop a novel hybrid stock index forecasting model named BSO-GNN based on the brain storm optimization (BSO) approach and the grey neural network (GNN) model by taking full advantage of the grey model in dealing with data with small samples and the neural network in handling nonlinear fitting problems. Moreover, the new developed BSO-GNN, which initializes the parameters in grey neural network with the BSO algorithm, has great capability in overcoming the deficiencies of the traditional GNN model with randomly initialized parameters through solving the local optimum and low forecasting accuracy problems. The performance of the proposed BSO-GNN model is evaluated under the normalization and nonnormalization preprocessing situations. Experimental results from the Shanghai Stock Exchange (SSE) Composite Index, the Shenzhen Composite Index, and the HuShen 300 Index opening price forecasting show that the proposed BSO-GNN model is effective and robust in the stock index forecasting and superior to the individual GNN model.
24

Yuan, Hao, Yajiong Liu, Yanfeng Zhang, Xin Ai, Qiange Wang, Chaoyi Chen, Yu Gu, and Ge Yu. "Comprehensive Evaluation of GNN Training Systems: A Data Management Perspective." Proceedings of the VLDB Endowment 17, no. 6 (February 2024): 1241–54. http://dx.doi.org/10.14778/3648160.3648167.

Abstract:
Many Graph Neural Network (GNN) training systems have emerged recently to support efficient GNN training. Since GNNs embody complex data dependencies between training samples, the training of GNNs should address distinct challenges different from DNN training in data management, such as data partitioning, batch preparation for mini-batch training, and data transferring between CPUs and GPUs. These factors, which take up a large proportion of training time, make data management in GNN training more significant. This paper reviews GNN training from a data management perspective and provides a comprehensive analysis and evaluation of the representative approaches. We conduct extensive experiments on various benchmark datasets and show many interesting and valuable results. We also provide some practical tips learned from these experiments, which are helpful for designing GNN training systems in the future.
25

Zheng, Xiangyu, Wanwei Huang, Hui Li, and Guangyuan Li. "Research on Generalized Intelligent Routing Technology Based on Graph Neural Network." Electronics 11, no. 18 (September 17, 2022): 2952. http://dx.doi.org/10.3390/electronics11182952.

Abstract:
Aiming at the problems of poor load-balancing ability and weak generalization in existing routing algorithms, this paper proposes an intelligent routing algorithm, GNN-DRL, for the Software Defined Networking (SDN) environment. The GNN-DRL algorithm uses a graph neural network (GNN) to perceive the dynamically changing network topology, generalizes the state of nodes and edges, and combines the self-learning ability of Deep Reinforcement Learning (DRL) to find the optimal routing strategy, which enables GNN-DRL to minimize the maximum link utilization and reduce average end-to-end delay under high network load. In this paper, the GNN-DRL intelligent routing algorithm is compared with Open Shortest Path First (OSPF), Equal-Cost Multi-Path (ECMP), and the intelligence-driven experiential network architecture for automatic routing (EARS). The experimental results show that, under high traffic load, GNN-DRL reduces the maximum link utilization by 13.92% and end-to-end delay by 9.48% compared with the best-performing intelligent routing algorithm, EARS, and can be effectively extended to different network topologies, offering better load-balancing capability and generalizability.
26

van den Akker, Eric, Yolanda Vankan-Berkhoudt, Peter J. M. Valk, Bob Löwenberg, and Ruud Delwel. "The Common Viral Insertion Site Evi12 Is Located in the 5′-Noncoding Region of Gnn, a Novel Gene with Enhanced Expression in Two Subclasses of Human Acute Myeloid Leukemia." Journal of Virology 79, no. 9 (May 1, 2005): 5249–58. http://dx.doi.org/10.1128/jvi.79.9.5249-5258.2005.

Abstract:
The leukemia and lymphoma disease locus Evi12 was mapped to the noncoding region of a novel gene, Gnn (named for Grp94 neighboring nucleotidase), that is located immediately upstream of the Grp94/Tra1 gene on mouse chromosome 10. The Gnn gene is conserved in mice and humans. Expression of fusion constructs between GFP and Gnn cDNA isoforms in HEK-293 cells showed that Gnn proteins are located mainly in the cytoplasm. Immunoblotting experiments demonstrated the presence of multiple Gnn protein isoforms in most organs, with the lowest levels of expression of the protein detected in bone marrow and spleen. The Evi12-containing leukemia cell line NFS107 showed high levels of expression of a ∼150-kDa Gnn isoform (Gnn107) that was not observed in control cell lines. Overexpression may be due to the viral insertion in Evi12. The Gnn107 protein is probably encoded by a Gnn cDNA isoform that is expressed exclusively in NFS107 cells and that includes sequences of TU12B1-TY, a putative protein with homology to 5′-nucleotidase enzymes. Interestingly, using Affymetrix gene expression data of a cohort of 285 patients with acute myeloid leukemia (AML), we found that GNN/TU12B1-TY expression was specifically increased in two AML clusters. One cluster consisted of all AML patients with a t(8;21) translocation, and the second cluster consisted of AML patients with a normal karyotype carrying a FLT3 internal tandem duplication. These findings suggest that we identified a novel proto-oncogene that may be causally linked to certain types of human leukemia.
27

Sabir, Zulqurnain, Muhammad Asif Raja, Dumitru Baleanu, R. Sadat, and Mohamed Ali. "Investigations of nonlinear induction motor model using the Gudermannian neural networks." Thermal Science, no. 00 (2021): 261. http://dx.doi.org/10.2298/tsci210508261s.

Abstract:
This study aims to solve the nonlinear fifth-order induction motor model (FO-IMM) using Gudermannian neural networks (GNNs) together with a global-search optimization procedure, the genetic algorithm, and a quick local-search process, the active-set technique (GNN-GA-AST). GNNs are executed to discretize the nonlinear FO-IMM and to form the fitness function in the sense of the mean square error. The exactness of the GNN-GA-AST is observed by comparing the obtained results with the reference results. The numerical performance of the stochastic GNN-GA-AST is reported for three different variants of the nonlinear FO-IMM to authenticate the consistency, significance, and efficacy of the designed stochastic GNN-GA-AST. Additionally, statistical illustrations are provided to authenticate the precision, accuracy, and convergence of the designed stochastic GNN-GA-AST.
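
The Gudermannian function used as the networks' activation has the closed form gd(x) = 2*arctan(tanh(x/2)); a minimal sketch (the GA-AST optimization of the network weights is outside this snippet):

```python
import numpy as np

def gudermannian(x):
    """gd(x) = 2*arctan(tanh(x/2)); maps the real line onto (-pi/2, pi/2)."""
    return 2.0 * np.arctan(np.tanh(x / 2.0))

print(gudermannian(np.linspace(-4.0, 4.0, 9)))
```
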
28

Zhou, Yuchen, Yanmin Shang, Yanan Cao, Qian Li, Chuan Zhou, and Guandong Xu. "API-GNN: attribute preserving oriented interactive graph neural network." World Wide Web 25, no. 1 (January 2022): 239–58. http://dx.doi.org/10.1007/s11280-021-00987-z.

Abstract:
Attributed graph embedding aims to learn node representation based on the graph topology and node attributes. The current mainstream GNN-based methods learn the representation of the target node by aggregating the attributes of its neighbor nodes. These methods still face two challenges: (1) In the neighborhood aggregation procedure, the attributes of each node would be propagated to its neighborhoods, which may cause disturbance to the original attributes of the target node and cause over-smoothing in GNN iteration. (2) Because the representation of the target node is derived from the attributes and topology of its neighbors, the attributes and topological information of each neighbor have different effects on the representation of the target node. However, this different contribution has not been considered by the existing GNN-based methods. In this paper, we propose a novel GNN model named API-GNN (Attribute Preserving Oriented Interactive Graph Neural Network). API-GNN can not only reduce the disturbance of neighborhood aggregation to the original attribute of the target node, but also explicitly model the different impacts of attribute and topology on node representation. We conduct experiments on six public real-world datasets to validate API-GNN on node classification and link prediction. Experimental results show that our model outperforms several strong baselines over various graph datasets on multiple graph analysis tasks.
29

Xue, Guotong, Ming Zhong, Tieyun Qian, and Jianxin Li. "PSA-GNN: An augmented GNN framework with priori subgraph knowledge." Neural Networks 173 (May 2024): 106155. http://dx.doi.org/10.1016/j.neunet.2024.106155.

30

Tam, Prohim, Inseok Song, Seungwoo Kang, Seyha Ros, and Seokhoon Kim. "Graph Neural Networks for Intelligent Modelling in Network Management and Orchestration: A Survey on Communications." Electronics 11, no. 20 (October 19, 2022): 3371. http://dx.doi.org/10.3390/electronics11203371.

Abstract:
Applications based on machine learning and deep learning in communication networks have been increasing exponentially in system architectures enabled by software-defined networking, network functions virtualization, and other wired/wireless networks. With the data exposure capabilities of graph-structured network topologies and underlying data plane information, the state-of-the-art deep learning approach, graph neural networks (GNN), has been applied to understand multi-scale deep correlations, offer generalization capability, improve the accuracy metrics of prediction modelling, and empower state representation for deep reinforcement learning (DRL) agents in future intelligent network management and orchestration. This paper contributes a taxonomy of recent studies using GNN-based approaches to optimize control policies, including offloading strategies, routing optimization, virtual network function orchestration, and resource allocation. The algorithm designs of converged DRL and GNN are reviewed throughout the selected studies by presenting the state generalization, GNN-assisted action selection, and reward valuation cooperating with GNN outputs. We also survey GNN-empowered application deployment in the autonomous control of optical networks, the Internet of Healthcare Things, the Internet of Vehicles, the Industrial Internet of Things, and other smart city applications. Finally, we provide a discussion of research challenges and future directions.
31

Huang, Yunlong, and Yanqiu Wang. "The Application of Graph Neural Network Based on Edge Computing in English Teaching Mode Reform." Wireless Communications and Mobile Computing 2022 (March 12, 2022): 1–12. http://dx.doi.org/10.1155/2022/2611923.

Abstract:
The latest developments in edge computing have paved the way for more efficient data processing, especially for simple tasks and lightweight models, at the edge of the network, sinking network functions from the cloud to the network edge closer to users. For the reform of the English teaching mode, this is also an opportunity to integrate information technology, providing new ideas and new methods for the optimization of English teaching. It improves the efficiency of English reading teaching, stimulates interest in English learning, enhances students' autonomous learning ability, and creates favorable conditions for students' learning and development. This paper designs a MEC-based GNN (GCN-GAN) user preference prediction and recommendation model, which can recommend high-quality video or picture-text content to the local MEC server based on user browsing history and user preferences. In the experiments, the joint LFU-LRU cache placement strategy used in this article achieves a cache hit rate of up to 99%. Comparing the GCN-GAN model with other traditional graph neural network models, caching experiments are performed on the Douban English book and Douban video datasets. The GCN-GAN model scores higher on the cache task, and its highest F1 value reaches 86.7.
32

Zhou, Fan, Rongfan Li, Goce Trajcevski, and Kunpeng Zhang. "Land Deformation Prediction via Slope-Aware Graph Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 17 (May 18, 2021): 15033–40. http://dx.doi.org/10.1609/aaai.v35i17.17764.

Abstract:
We introduce a slope-aware graph neural network (SA-GNN) to leverage continuously monitored data and predict land displacement. Unlike general GNNs tackling tasks on plain graphs, our method is capable of generalizing 3D spatial knowledge from InSAR point clouds. Specifically, we model the structure of the land surface while preserving the spatial correlations among adjacent points. The point cloud can then be efficiently converted to a near-neighbor graph where general GNN methods can be applied to predict the displacement of the slope surface. We conducted experiments on real-world datasets and the results demonstrate that SA-GNN outperforms existing 3D CNN and point GNN methods.
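
A minimal sketch of the near-neighbor graph construction step described here, turning 3D points into edges on which a general GNN can run; k, the distance metric, and the random points are illustrative, and SA-GNN's actual construction may differ:

```python
import numpy as np

def knn_edges(points, k=3):
    """Directed edges from each point to its k nearest Euclidean neighbors."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                    # exclude self-loops
    nbrs = np.argsort(d, axis=1)[:, :k]
    return [(i, int(j)) for i in range(len(points)) for j in nbrs[i]]

pts = np.random.default_rng(1).uniform(size=(6, 3))  # x, y, elevation
print(knn_edges(pts))
```
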
33

Zhang, Chunhai, Jie Liu, Kai Dang, and Wenzheng Zhang. "Multi-Scale Distillation from Multiple Graph Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 4 (June 28, 2022): 4337–44. http://dx.doi.org/10.1609/aaai.v36i4.20354.

Abstract:
Knowledge Distillation (KD), which is an effective model compression and acceleration technique, has recently been successfully applied to graph neural networks (GNNs). Existing approaches utilize a single GNN model as the teacher to distill knowledge. However, we notice that GNN models with different numbers of layers demonstrate different classification abilities on nodes with different degrees. On the one hand, for nodes with high degrees, their local structures are dense and complex, hence more message passing is needed. Therefore, GNN models with more layers perform better. On the other hand, for nodes with low degrees, whose local structures are relatively sparse and simple, the repeated message passing can easily lead to over-smoothing. Thus, GNN models with fewer layers are more suitable. Hence, existing single-teacher GNN knowledge distillation approaches, which are based on a single GNN model, are sub-optimal. To this end, we propose a novel approach to distill multi-scale knowledge, which learns from multiple GNN teacher models with different numbers of layers to capture the topological semantics at different scales. Instead of learning from the teacher models equally, the proposed method automatically assigns proper weights for each teacher model via an attention mechanism which enables the student to select teachers for different local structures. Extensive experiments are conducted to evaluate the proposed method on four public datasets. The experimental results demonstrate the superiority of our proposed method over state-of-the-art methods. Our code is publicly available at https://github.com/NKU-IIPLab/MSKD.
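
Schematically, the attention-weighted multi-teacher target is a per-node convex combination of teacher soft labels; a toy NumPy sketch with random stand-in scores (the paper learns the attention end to end, which is not reproduced here):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
teacher_logits = rng.normal(size=(3, 5, 4))  # 3 teachers, 5 nodes, 4 classes
scores = rng.normal(size=(3, 5))             # stand-in attention scores
alpha = softmax(scores, axis=0)              # per-node weight for each teacher
target = np.einsum("tn,tnc->nc", alpha, softmax(teacher_logits))
print(target.shape)                          # (5, 4): one blended soft label per node
```
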
34

Chen, Chaoyi, Dechao Gao, Yanfeng Zhang, Qiange Wang, Zhenbo Fu, Xuecang Zhang, Junhua Zhu, Yu Gu, and Ge Yu. "NeutronStream: A Dynamic GNN Training Framework with Sliding Window for Graph Streams." Proceedings of the VLDB Endowment 17, no. 3 (November 2023): 455–68. http://dx.doi.org/10.14778/3632093.3632108.

Abstract:
Existing Graph Neural Network (GNN) training frameworks have been designed to help developers easily create performant GNN implementations. However, most existing GNN frameworks assume that the input graphs are static, but ignore that most real-world graphs are constantly evolving. Though many dynamic GNN models have emerged to learn from evolving graphs, the training process of these dynamic GNNs is dramatically different from traditional GNNs in that it captures both the spatial and temporal dependencies of graph updates. This poses new challenges for designing dynamic GNN training frameworks. First, the traditional batched training method fails to capture real-time structural evolution information. Second, the time-dependent nature makes parallel training hard to design. Third, it lacks system supports for users to efficiently implement dynamic GNNs. In this paper, we present NeutronStream, a framework for training dynamic GNN models. NeutronStream abstracts the input dynamic graph into a chronologically updated stream of events and processes the stream with an optimized sliding window to incrementally capture the spatial-temporal dependencies of events. Furthermore, NeutronStream provides a parallel execution engine to tackle the sequential event processing challenge to achieve high performance. NeutronStream also integrates a built-in graph storage structure that supports dynamic updates and provides a set of easy-to-use APIs that allow users to express their dynamic GNNs. Our experimental results demonstrate that, compared to state-of-the-art dynamic GNN implementations, NeutronStream achieves speedups ranging from 1.48X to 5.87X and an average accuracy improvement of 3.97%.
35

Bentsen, Lars Ødegaard, Narada Dilp Warakagoda, Roy Stenbro, and Paal Engelstad. "Probabilistic Wind Park Power Prediction using Bayesian Deep Learning and Generative Adversarial Networks." Journal of Physics: Conference Series 2362, no. 1 (November 1, 2022): 012005. http://dx.doi.org/10.1088/1742-6596/2362/1/012005.

Abstract:
The rapid depletion of fossil-based energy supplies, along with the growing reliance on renewable resources, has placed supreme importance on the predictability of renewables. Research focusing on wind park power modelling has mainly been concerned with point estimators, while most probabilistic studies have been reserved for forecasting. In this paper, a few different approaches to estimate probability distributions for individual turbine powers in a real off-shore wind farm were studied. Two variational Bayesian inference models were used, one employing a multilayered perceptron and another a graph neural network (GNN) architecture. Furthermore, generative adversarial networks (GAN) have recently been proposed as Bayesian models and was here investigated as a novel area of research. The results showed that the two Bayesian models outperformed the GAN model with regards to mean absolute errors (MAE), with the GNN architecture yielding the best results. The GAN on the other hand, seemed potentially better at generating diverse distributions. Standard deviations of the predicted distributions were found to have a positive correlation with MAEs, indicating that the models could correctly provide estimates on the confidence associated with particular predictions.
36

Zeng, Yufan, and Jiashan Tang. "RLC-GNN: An Improved Deep Architecture for Spatial-Based Graph Neural Network with Application to Fraud Detection." Applied Sciences 11, no. 12 (June 18, 2021): 5656. http://dx.doi.org/10.3390/app11125656.

Abstract:
Graph neural networks (GNNs) have been very successful at solving fraud detection tasks. GNN-based detection algorithms learn node embeddings by aggregating neighboring information. Recently, the CAmouflage-REsistant GNN (CARE-GNN) was proposed; this algorithm achieves state-of-the-art results on fraud detection tasks by dealing with relation camouflages and feature camouflages. However, stacking multiple layers in the traditional way defined by hops leads to a rapid performance drop. As the single-layer CARE-GNN cannot extract more information to fix potential mistakes, the performance relies heavily on that one layer. To avoid single-layer learning, in this paper we consider a multi-layer architecture which can form a complementary relationship with a residual structure. We propose an improved algorithm named Residual Layered CARE-GNN (RLC-GNN). The new algorithm learns layer by layer progressively and corrects mistakes continuously. We choose three metrics—recall, AUC, and F1-score—to evaluate the proposed algorithm. Numerical experiments are conducted. We obtain up to 5.66%, 7.72%, and 9.09% improvements in recall, AUC, and F1-score, respectively, on the Yelp dataset. Moreover, we also obtain up to 3.66%, 4.27%, and 3.25% improvements in the same three metrics on the Amazon dataset.
37

Zhang, Jinghui, Zhengjia Xu, Dingyang Lv, Zhan Shi, Dian Shen, Jiahui Jin, and Fang Dong. "DiG-In-GNN: Discriminative Feature Guided GNN-Based Fraud Detector against Inconsistencies in Multi-Relation Fraud Graph." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 8 (March 24, 2024): 9323–31. http://dx.doi.org/10.1609/aaai.v38i8.28785.

Abstract:
Fraud detection on multi-relation graphs aims to identify fraudsters in graphs. Graph Neural Network (GNN) models leverage graph structures to pass messages from neighbors to the target nodes, thereby enriching the representations of those target nodes. However, feature and structural inconsistency in the graph, owing to fraudsters' camouflage behaviors, diminish the suspiciousness of fraud nodes which hinders the effectiveness of GNN-based models. In this work, we propose DiG-In-GNN, Discriminative Feature Guided GNN against Inconsistency, to dig into graphs for fraudsters. Specifically, we use multi-scale contrastive learning from the perspective of the neighborhood subgraph where the target node is located to generate guidance nodes to cope with the feature inconsistency. Then, guided by the guidance nodes, we conduct fine-grained neighbor selection through reinforcement learning for each neighbor node to precisely filter nodes that can enhance the message passing and therefore alleviate structural inconsistency. Finally, the two modules are integrated together to obtain discriminable representations of the nodes. Experiments on three fraud detection datasets demonstrate the superiority of the proposed method DiG-In-GNN, which obtains up to 20.73% improvement over previous state-of-the-art methods. Our code can be found at https://github.com/GraphBerry/DiG-In-GNN.
APA, Harvard, Vancouver, ISO, and other styles
38

Lu, Yuanfu, Xunqiang Jiang, Yuan Fang, and Chuan Shi. "Learning to Pre-train Graph Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 5 (May 18, 2021): 4276–84. http://dx.doi.org/10.1609/aaai.v35i5.16552.

Full text
Abstract:
Graph neural networks (GNNs) have become the de facto standard for representation learning on graphs, deriving effective node representations by recursively aggregating information from graph neighborhoods. While GNNs can be trained from scratch, pre-training GNNs to learn transferable knowledge for downstream tasks has recently been demonstrated to improve the state of the art. However, conventional GNN pre-training methods follow a two-step paradigm: 1) pre-training on abundant unlabeled data and 2) fine-tuning on downstream labeled data, between which there exists a significant gap due to the divergence of optimization objectives in the two steps. In this paper, we conduct an analysis to show the divergence between pre-training and fine-tuning, and to alleviate such divergence, we propose L2P-GNN, a self-supervised pre-training strategy for GNNs. The key insight is that L2P-GNN attempts to learn how to fine-tune during the pre-training process, in the form of transferable prior knowledge. To encode both local and global information into the prior, L2P-GNN is further designed with a dual adaptation mechanism at both the node and graph levels. Finally, we conduct a systematic empirical study on the pre-training of various GNN models, using both a public collection of protein graphs and a new compilation of bibliographic graphs for pre-training. Experimental results show that L2P-GNN is capable of learning effective and transferable prior knowledge that yields powerful representations for downstream tasks. (Code and datasets are available at https://github.com/rootlu/L2P-GNN.)
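Since "learning how to fine-tune during pre-training" is MAML-flavored, a hedged single-task sketch looks as follows (a Linear stands in for the GNN encoder; the splits, step size, and loss are invented): an inner gradient step simulates fine-tuning, and the outer loss is taken after that step.

import torch
import torch.nn as nn

model = nn.Linear(16, 8)  # stand-in for a GNN encoder
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def task_loss(weight, bias, x, y):
    return nn.functional.mse_loss(x @ weight.T + bias, y)

x_sup, y_sup = torch.randn(32, 16), torch.randn(32, 8)  # support split of a task
x_qry, y_qry = torch.randn(32, 16), torch.randn(32, 8)  # query split of the same task
params = [model.weight, model.bias]
grads = torch.autograd.grad(task_loss(*params, x_sup, y_sup), params, create_graph=True)
adapted = [p - 0.01 * g for p, g in zip(params, grads)]  # one simulated fine-tuning step
outer_loss = task_loss(*adapted, x_qry, y_qry)           # evaluate after adaptation
opt.zero_grad()
outer_loss.backward()                                    # optimize the pre-trained prior
opt.step()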
APA, Harvard, Vancouver, ISO, and other styles
39

Mohammadi, Sina, and Mohamed Allali. "Advancing Brain Tumor Segmentation with Spectral–Spatial Graph Neural Networks." Applied Sciences 14, no. 8 (April 18, 2024): 3424. http://dx.doi.org/10.3390/app14083424.

Full text
Abstract:
In the field of brain tumor segmentation, accurately capturing the complexities of tumor sub-regions poses significant challenges, and traditional segmentation methods usually fail to segment tumor sub-regions accurately. This research introduces a novel solution employing Graph Neural Networks (GNNs), enriched with spectral and spatial insights. In the supervoxel creation phase, we explored methods such as VCCS, SLIC, Watershed, Meanshift, and Felzenszwalb–Huttenlocher, evaluating their performance based on homogeneity, moment of inertia, and uniformity in shape and size. After creating supervoxels, we represented 3D MRI images as a graph structure. In this study, we combined Spatial and Spectral GNNs to capture both local and global information. Our Spectral GNN implementation employs the Laplacian matrix to efficiently map tumor tissue connectivity by capturing the graph's global structure. Consequently, this enhances the model's precision in classifying brain tumors into distinct types: necrosis, edema, and enhancing tumor. The model underwent extensive hyperparameter tuning to ascertain the most effective configuration for optimal segmentation performance. Our Spectral–Spatial GNN model surpasses traditional segmentation methods in accuracy for both the whole tumor and its sub-regions, as validated by metrics such as the Dice coefficient and accuracy. For the necrotic core, the Spectral–Spatial GNN model showed a 10.6% improvement over the Spatial GNN and 8% over the Spectral GNN. Enhancing tumor gains were 9.5% and 6.4%, respectively. For edema, improvements were 12.8% over the Spatial GNN and 7.3% over the Spectral GNN, highlighting its segmentation accuracy for each tumor sub-region. This superiority underscores the model's potential for improving brain tumor segmentation accuracy, precision, and computational efficiency.
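A minimal sketch of the spectral ingredient (illustrative, not the paper's model; the filter coefficients would be learned, and the graph and features are synthetic): a polynomial in the normalized graph Laplacian propagates per-supervoxel features using the graph's global structure.

import numpy as np

def normalized_laplacian(adj):
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg)
    nz = deg > 0
    d_inv_sqrt[nz] = deg[nz] ** -0.5
    return np.eye(len(adj)) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]

rng = np.random.default_rng(0)
adj = (rng.random((50, 50)) > 0.8).astype(float)
adj = np.maximum(adj, adj.T)   # symmetrize the toy supervoxel graph
np.fill_diagonal(adj, 0)
L = normalized_laplacian(adj)
x = rng.random((50, 4))        # per-supervoxel MRI features (synthetic)
theta = [0.5, -0.3, 0.1]       # filter coefficients (learned in practice)
out = sum(t * np.linalg.matrix_power(L, k) @ x for k, t in enumerate(theta))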
APA, Harvard, Vancouver, ISO, and other styles
40

Eliasof, Moshe, Eldad Haber, and Eran Treister. "Feature Transportation Improves Graph Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 11 (March 24, 2024): 11874–82. http://dx.doi.org/10.1609/aaai.v38i11.29073.

Full text
Abstract:
Graph neural networks (GNNs) have shown remarkable success in learning representations for graph-structured data. However, GNNs still face challenges in modeling complex phenomena that involve feature transportation. In this paper, we propose a novel GNN architecture inspired by Advection-Diffusion-Reaction systems, called ADR-GNN. Advection models feature transportation, diffusion captures the local smoothing of features, and reaction represents the non-linear transformation between feature channels. We provide an analysis of the qualitative behavior of ADR-GNN that shows the benefit of combining advection, diffusion, and reaction. To demonstrate its efficacy, we evaluate ADR-GNN on real-world node classification and spatio-temporal datasets, and show that it improves on or offers competitive performance compared to state-of-the-art networks.
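One explicit-Euler ADR update, sketched under stated assumptions (weights are random rather than learned, and the discretization is schematic, not the authors' formulation):

import torch

torch.manual_seed(0)
n, c = 20, 4
h = torch.rand(n, c)                        # node features
A = (torch.rand(n, n) > 0.8).float()
A.fill_diagonal_(0)
W = torch.softmax(torch.randn(n, n).masked_fill(A == 0, -1e9), dim=1)  # advection weights
L = torch.diag(A.sum(1)) - A                # graph Laplacian

advection = W @ h - h                       # transport of features from neighbors
diffusion = -0.1 * (L @ h)                  # local smoothing of features
reaction = torch.tanh(h @ torch.randn(c, c))           # nonlinear channel mixing
h_next = h + 0.1 * (advection + diffusion + reaction)  # one explicit Euler step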
APA, Harvard, Vancouver, ISO, and other styles
41

Deng, Jiale, and Yanyan Shen. "Self-Interpretable Graph Learning with Sufficient and Necessary Explanations." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 10 (March 24, 2024): 11749–56. http://dx.doi.org/10.1609/aaai.v38i10.29059.

Full text
Abstract:
Self-interpretable graph learning methods help unveil the black-box nature of GNNs by providing predictions with built-in explanations. However, current works suffer from performance degradation compared to GNNs trained without built-in explanations. We argue the main reason is that they fail to generate explanations satisfying both sufficiency and necessity, and the biased explanations further hurt GNNs' performance. In this work, we propose a novel framework for generating SUfficient aNd NecessarY explanations (SUNNY-GNN for short) that benefit GNNs' predictions. The key idea is to conduct augmentations by structurally perturbing given explanations and to employ a contrastive loss that guides the learning of explanations toward the sufficiency and necessity directions. SUNNY-GNN introduces two coefficients to generate hard and reliable contrastive samples. We further extend SUNNY-GNN to heterogeneous graphs. Empirical results on various GNNs and real-world graphs show that SUNNY-GNN yields accurate predictions and faithful explanations, outperforming state-of-the-art methods by improving prediction accuracy by 3.5% and explainability fidelity by 13.1% on average. Our code and data are available at https://github.com/SJTU-Quant/SUNNY-GNN.
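A loose sketch of the objective's shape (the losses, perturbations, and tensors below are simplified stand-ins, not the paper's formulation): the explanation subgraph should preserve the full-graph prediction (sufficiency), while its perturbed complement should not (necessity).

import torch
import torch.nn.functional as F

logits_full = torch.randn(1, 3)                      # GNN prediction on the full graph
logits_expl = logits_full + 0.1 * torch.randn(1, 3)  # prediction on the explanation subgraph
logits_rest = torch.randn(1, 3)                      # prediction on the perturbed complement
target = F.softmax(logits_full, dim=-1)
sufficiency = F.kl_div(F.log_softmax(logits_expl, dim=-1), target, reduction="batchmean")
necessity = -F.kl_div(F.log_softmax(logits_rest, dim=-1), target, reduction="batchmean")
loss = sufficiency + necessity  # explanation stays faithful; its complement should not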
APA, Harvard, Vancouver, ISO, and other styles
42

Dandale, Milindkumar N., Amar P. Yadav, P. S. K. Reddy, Seema G. Kadu, Madhusudana T, and Manthan S. Manavadaria. "Deep learning enhanced drug discovery for novel biomaterials in regenerative medicine utilizing graph neural network approach for predicting cellular responses." Scientific Temper 15, no. 01 (March 15, 2024): 1588–94. http://dx.doi.org/10.58414/scientifictemper.2024.15.1.04.

Full text
Abstract:
This study introduces a novel approach to drug discovery in regenerative medicine through the utilization of a graph neural network (GNN). The research methodology integrates the development and training of the GNN with a subsequent evaluation of its performance metrics. The first phase involves the generation of synthetic data simulating a biological network, employing the NetworkX and NumPy libraries to construct a random graph with Erdos-Renyi topology. The data, representing cellular responses to biomaterials, are then converted into PyTorch tensors for compatibility with the GNN architecture. The GNN model, characterized by two fully connected layers with ReLU and log-softmax activations, captures intricate relationships within the graph-structured data. The second phase employs a stochastic gradient-based optimizer, specifically Adam, to train the GNN over 100 epochs using the cross-entropy loss for multi-class classification. The research methodology extends to the evaluation phase, producing three distinct output graphs for analysis: a visualization of the graph structure, a comparison between predicted and true labels, and a plot of training loss over epochs. Performance metrics, including accuracy, precision, recall, and F1-score, are computed to assess the model's predictive capabilities quantitatively. The study concludes with a discussion of the nuances revealed by each graph and their implications for refining GNN models in the context of drug discovery for regenerative medicine.
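Since the abstract spells out the pipeline, a compact re-creation is possible under stated assumptions (the graph size, feature width, and class count are guesses; aggregating with the adjacency matrix before each fully connected layer is one plausible reading of the described architecture):

import networkx as nx
import torch
import torch.nn as nn
import torch.nn.functional as F

g = nx.erdos_renyi_graph(n=100, p=0.05)  # synthetic biological network
adj = torch.tensor(nx.to_numpy_array(g), dtype=torch.float32)
x = torch.randn(100, 8)                  # synthetic cellular-response features
y = torch.randint(0, 3, (100,))          # synthetic response classes

class TwoLayerGNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1, self.fc2 = nn.Linear(8, 32), nn.Linear(32, 3)

    def forward(self, x, adj):
        h = F.relu(self.fc1(adj @ x))                   # aggregate neighbors, then ReLU
        return F.log_softmax(self.fc2(adj @ h), dim=1)  # class log-probabilities

model = TwoLayerGNN()
opt = torch.optim.Adam(model.parameters(), lr=0.01)
for epoch in range(100):                     # 100 epochs, as described
    opt.zero_grad()
    loss = F.nll_loss(model(x, adj), y)      # cross-entropy on log-probabilities
    loss.backward()
    opt.step()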
APA, Harvard, Vancouver, ISO, and other styles
43

Sun, Joyce, Pete Hwang, Eric D. Sakkas, Yancheng Zhou, Luis Perez, Ishani Dave, Jack B. Kwon, et al. "GNN Codon Adjacency Tunes Protein Translation." International Journal of Molecular Sciences 25, no. 11 (May 29, 2024): 5914. http://dx.doi.org/10.3390/ijms25115914.

Full text
Abstract:
The central dogma treats the ribosome as a molecular machine that reads one mRNA codon at a time as it adds each amino acid to its growing peptide chain. However, this and previous studies suggest that ribosomes actually perceive pairs of adjacent codons as they take three-nucleotide steps along the mRNA. We examined GNN codons, which we find are surprisingly overrepresented in eukaryote protein-coding open reading frames (ORFs), especially immediately after NNU codons. Ribosome profiling experiments in yeast revealed that ribosomes with NNU at their aminoacyl (A) site have particularly elevated densities when NNU is immediately followed (3′) by a GNN codon, indicating slower mRNA threading of the NNU codon from the ribosome’s A to peptidyl (P) sites. Moreover, if the assessment was limited to ribosomes that have only recently arrived at the next codon, by examining 21-nucleotide ribosome footprints (21-nt RFPs), elevated densities were observed for multiple codon classes when followed by GNN. This striking translation slowdown at adjacent 5′-NNN GNN codon pairs is likely mediated, in part, by the ribosome’s CAR surface, which acts as an extension of the A-site tRNA anticodon during ribosome translocation and interacts through hydrogen bonding and pi stacking with the GNN codon. The functional consequences of 5′-NNN GNN codon adjacency are expected to influence the evolution of protein coding sequences.
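As a toy illustration of the adjacency statistic examined (the sequence below is invented; T stands for U in DNA coordinates), a few lines of Python count how often a GNN codon immediately follows an NNU codon in an ORF:

def codon_pairs(orf):
    codons = [orf[i:i + 3] for i in range(0, len(orf) - len(orf) % 3, 3)]
    return zip(codons, codons[1:])

orf = "ATGGCTGGAGAAGTTGGCTAA"  # toy ORF
count = sum(1 for a, b in codon_pairs(orf)
            if a.endswith("T") and b.startswith("G"))  # NNU followed by a GNN codon
print(count)                   # -> 2 adjacent 5'-NNU GNN pairs in this toy ORF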
APA, Harvard, Vancouver, ISO, and other styles
44

Sun, Alexander Y., Peishi Jiang, Zong-Liang Yang, Yangxinyu Xie, and Xingyuan Chen. "A graph neural network (GNN) approach to basin-scale river network learning: the role of physics-based connectivity and data fusion." Hydrology and Earth System Sciences 26, no. 19 (October 14, 2022): 5163–84. http://dx.doi.org/10.5194/hess-26-5163-2022.

Full text
Abstract:
Rivers and river habitats around the world are under sustained pressure from human activities and the changing global environment. Our ability to quantify and manage river states in a timely manner is critical for protecting public safety and natural resources. In recent years, vector-based river network models have enabled modeling of large river basins at increasingly fine resolutions, but they are computationally demanding. This work presents a multistage, physics-guided, graph neural network (GNN) approach for basin-scale river network learning and streamflow forecasting. During training, we train a GNN model to approximate outputs of a high-resolution vector-based river network model; we then fine-tune the pretrained GNN model with streamflow observations. We further apply a graph-based data-fusion step to correct prediction biases. The GNN-based framework is first demonstrated over a snow-dominated watershed in the western United States. A series of experiments are performed to test different training and imputation strategies. Results show that the trained GNN model can effectively serve as a surrogate of the process-based model with high accuracy, with median Kling–Gupta efficiency (KGE) greater than 0.97. Application of the graph-based data fusion further reduces the mismatch between the GNN model and observations, with as much as 50% KGE improvement over some cross-validation gages. To improve scalability, a graph-coarsening procedure is introduced and demonstrated over a much larger basin. Results show that graph coarsening achieves comparable prediction skill at only a fraction of the training cost, thus providing important insights into the degree of physical realism needed for developing large-scale GNN-based river network models.
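A hedged sketch of the multistage strategy (an MLP stands in for the GNN surrogate; reach counts, feature widths, epochs, and learning rates are invented): the model is first fit to the process-based model's outputs, then fine-tuned on the sparser streamflow observations.

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(6, 32), nn.ReLU(), nn.Linear(32, 1))  # stand-in for the GNN
x = torch.randn(500, 6)            # per-reach forcing/attribute features
y_sim = torch.randn(500, 1)        # vector-based river model outputs
y_obs = torch.randn(40, 1)         # streamflow observations at gages
obs_idx = torch.randperm(500)[:40] # which reaches are gaged

for lr, targets, rows in [(1e-2, y_sim, slice(None)), (1e-3, y_obs, obs_idx)]:
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(200):           # pretrain on the surrogate target, then fine-tune
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x)[rows], targets)
        loss.backward()
        opt.step()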
APA, Harvard, Vancouver, ISO, and other styles
45

Jiang, Yuli, Yu Rong, Hong Cheng, Xin Huang, Kangfei Zhao, and Junzhou Huang. "Query driven-graph neural networks for community search." Proceedings of the VLDB Endowment 15, no. 6 (February 2022): 1243–55. http://dx.doi.org/10.14778/3514061.3514070.

Full text
Abstract:
Given one or more query vertices, Community Search (CS) aims to find densely intra-connected and loosely inter-connected structures containing the query vertices. Attributed Community Search (ACS), a related problem, is more challenging since it finds communities with both cohesive structures and homogeneous vertex attributes. However, most methods for the CS task rely on inflexible pre-defined structures, and studies of ACS treat each attribute independently. Moreover, the most popular ACS strategies decompose ACS into two separate sub-problems, i.e., the CS task and a subsequent attribute filtering task. However, in real-world graphs, the community structure and the vertex attributes are closely correlated, and this correlation is vital for the ACS problem. In this vein, we argue that the separation strategy cannot fully capture the correlation between structure and attributes, and that it compromises the final performance. In this paper, we propose Graph Neural Network (GNN) models for both the CS and ACS problems, i.e., Query Driven-GNN (QD-GNN) and Attributed Query Driven-GNN (AQD-GNN). In QD-GNN, we combine the local query-dependent structure and a global graph embedding. To extend QD-GNN to handle attributes, we model vertex attributes as a bipartite graph and capture the relation between attributes by constructing GNNs on this bipartite graph. With a Feature Fusion operator, AQD-GNN processes the structure and attributes simultaneously and predicts communities according to each attributed query. Experiments on real-world graphs with ground-truth communities demonstrate that the proposed models outperform existing CS and ACS algorithms in terms of both efficiency and effectiveness. More recently, an interactive setting for CS has been proposed that allows users to adjust the predicted communities. We further verify our approaches under this interactive setting and extend them to the attributed context. Our method achieves 2.37% and 6.29% improvements in F1-score over the state-of-the-art model without and with attributes, respectively.
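A rough sketch of fusing a query-dependent local encoding with a global graph encoding before scoring per-vertex community membership (all encoders are random stand-ins, and the paper's Feature Fusion operator is reduced to a single linear layer):

import torch
import torch.nn as nn

n, d = 200, 16
global_emb = torch.randn(n, d)  # from a graph-wide GNN (stand-in)
query_mask = torch.zeros(n, 1)
query_mask[[3, 17]] = 1.0       # the query vertices
local_emb = torch.randn(n, d) * query_mask  # query-dependent local encoding (stand-in)
fuse = nn.Linear(2 * d, 1)      # Feature Fusion reduced to a linear layer
membership = torch.sigmoid(fuse(torch.cat([global_emb, local_emb], dim=1)))  # per-vertex score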
APA, Harvard, Vancouver, ISO, and other styles
46

Park, Yeonhong, Sunhong Min, and Jae W. Lee. "Ginex." Proceedings of the VLDB Endowment 15, no. 11 (July 2022): 2626–39. http://dx.doi.org/10.14778/3551793.3551819.

Full text
Abstract:
Graph Neural Networks (GNNs) are in the spotlight as a powerful tool that can effectively serve various inference tasks on graph-structured data. As the size of real-world graphs continues to scale, the GNN training system faces a scalability challenge. Distributed training is a popular approach to addressing this challenge by scaling out CPU nodes. However, not much attention has been paid to disk-based GNN training, which can scale up the single-node system in a more cost-effective manner by leveraging high-performance storage devices like NVMe SSDs. We observe that the data movement between the main memory and the disk is the primary bottleneck in the SSD-based training system, and that the conventional GNN training pipeline is sub-optimal without taking this overhead into account. Thus, we propose Ginex, the first SSD-based GNN training system that can process billion-scale graph datasets on a single machine. Inspired by the inspector-executor execution model in compiler optimization, Ginex restructures the GNN training pipeline by separating the sample and gather stages. This separation enables Ginex to realize a provably optimal replacement algorithm, known as Belady's algorithm, for caching feature vectors in memory, which account for the dominant portion of I/O accesses. In our evaluation with four billion-scale graph datasets and two GNN models, Ginex achieves 2.11X higher training throughput on average (2.67X at maximum) than the SSD-extended PyTorch Geometric.
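The caching idea is classical and easy to state in code: because the inspector stage knows the full feature-access sequence in advance, the cache can always evict the entry whose next use lies farthest in the future. A miniature, illustrative version (the access trace is invented):

def belady_hits(accesses, capacity):
    cache, hits = set(), 0
    for i, item in enumerate(accesses):
        if item in cache:
            hits += 1
            continue
        if len(cache) == capacity:
            future = accesses[i + 1:]
            # evict the entry whose next reuse is farthest away (or never)
            victim = max(cache, key=lambda v: future.index(v) if v in future else float("inf"))
            cache.remove(victim)
        cache.add(item)
    return hits

print(belady_hits([1, 2, 3, 1, 2, 4, 1], capacity=2))  # -> 2 hits on this toy trace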
APA, Harvard, Vancouver, ISO, and other styles
47

You, Jiaxuan, Jonathan M. Gomes-Selman, Rex Ying, and Jure Leskovec. "Identity-aware Graph Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (May 18, 2021): 10737–45. http://dx.doi.org/10.1609/aaai.v35i12.17283.

Full text
Abstract:
Message passing Graph Neural Networks (GNNs) provide a powerful modeling framework for relational data. However, the expressive power of existing GNNs is upper-bounded by the 1-Weisfeiler-Lehman (1-WL) graph isomorphism test, which means GNNs are not able to predict node clustering coefficients or shortest path distances, and cannot differentiate between different d-regular graphs. Here we develop a class of message passing GNNs, named Identity-aware Graph Neural Networks (ID-GNNs), with greater expressive power than the 1-WL test. ID-GNN offers a minimal but powerful solution to the limitations of existing GNNs. ID-GNN extends existing GNN architectures by inductively considering nodes' identities during message passing. To embed a given node, ID-GNN first extracts the ego network centered at that node, then conducts rounds of heterogeneous message passing, where a different set of parameters is applied to the center node than to the other surrounding nodes in the ego network. We further propose a simplified but faster version of ID-GNN that injects node identity information as augmented node features. Altogether, both versions of ID-GNN represent general extensions of message passing GNNs, and experiments show that transforming existing GNNs to ID-GNNs yields on average a 40% accuracy improvement on challenging node, edge, and graph property prediction tasks; a 3% accuracy improvement on node and graph classification benchmarks; and a 15% ROC AUC improvement on real-world link prediction tasks. Additionally, ID-GNNs demonstrate improved or comparable performance over other task-specific graph networks.
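A hedged sketch of the identity-aware trick (schematic; ego-network extraction is omitted and a dense toy adjacency stands in): messages from the center node pass through one weight matrix and messages from all other nodes through another, breaking the symmetry that limits 1-WL-bounded GNNs.

import torch
import torch.nn as nn

class IDAwareLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.w_center = nn.Linear(dim, dim)  # parameters for the center node
        self.w_other = nn.Linear(dim, dim)   # parameters for all other nodes

    def forward(self, h, adj, center):
        mask = torch.zeros(h.size(0), 1)
        mask[center] = 1.0
        msgs = mask * self.w_center(h) + (1 - mask) * self.w_other(h)
        return torch.relu(adj @ msgs)        # aggregate identity-aware messages

h = torch.randn(30, 16)                      # node features in an ego network
adj = (torch.rand(30, 30) > 0.8).float()     # toy adjacency
out = IDAwareLayer(16)(h, adj, center=0)     # node 0 is the ego/center node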
APA, Harvard, Vancouver, ISO, and other styles
48

Guo, Jingjing, and Jiacong Sun. "Secure and Practical Group Nearest Neighbor Query for Location-Based Services in Cloud Computing." Security and Communication Networks 2021 (September 25, 2021): 1–17. http://dx.doi.org/10.1155/2021/5686506.

Full text
Abstract:
Group nearest neighbor (GNN) query enables a group of location-based service (LBS) users to retrieve the point from a set of points of interest (POIs) with the minimum aggregate distance to them. Owing to resource constraints and privacy concerns, the LBS provider outsources the encrypted POIs to a powerful cloud server. This encryption-and-outsourcing mechanism brings a challenge for data utilization. However, as previous work based on the k-anonymity technique leaks all contents of POIs and returns an answer set with redundant communication cost, the LBS system cannot work properly with those privacy-preserving schemes. In this paper, we illustrate a secure group nearest neighbor query scheme, referred to as SecGNN. It supports the GNN query with n (n ≥ 3) LBS users and assures both data privacy and query privacy. Since SecGNN only achieves linear search complexity, an efficiency-enhanced scheme (named SecGNN+) is introduced by taking advantage of the KD-tree data structure. Specifically, we convert the GNN problem to the nearest neighbor problem for the users' centroid, which can be computed by the anonymous veto network and Burmester–Desmedt conference key agreement protocols. Furthermore, the SecGNN+ scheme is built from the KD-tree data structure and a designed tool which supports the computation of inner products over ciphertexts. Finally, we run experiments on a real database and a random database to evaluate the performance of our SecGNN and SecGNN+ schemes. The experimental results show the high efficiency of our proposed schemes.
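The plaintext intuition behind the conversion is simple (the paper performs these steps over ciphertexts; the coordinates below are invented): compute the users' centroid and return the POI nearest to it.

import numpy as np

users = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 0.0]])  # n >= 3 user locations (invented)
pois = np.random.rand(100, 2) * 6                       # candidate POIs (invented)
centroid = users.mean(axis=0)  # computed securely via the AV-net/BD protocols in SecGNN
answer = pois[np.argmin(np.linalg.norm(pois - centroid, axis=1))]  # nearest POI to the centroid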
APA, Harvard, Vancouver, ISO, and other styles
49

Ye, Yutong, Xiang Lian, and Mingsong Chen. "Efficient Exact Subgraph Matching via GNN-Based Path Dominance Embedding." Proceedings of the VLDB Endowment 17, no. 7 (March 2024): 1628–41. http://dx.doi.org/10.14778/3654621.3654630.

Full text
Abstract:
The classic problem of exact subgraph matching returns those subgraphs in a large-scale data graph that are isomorphic to a given query graph, which has gained increasing importance in many real-world applications such as social network analysis, knowledge graph discovery in the Semantic Web, and bibliographical network mining. In this paper, we propose a novel and effective graph neural network (GNN)-based path embedding framework (GNN-PE), which allows efficient exact subgraph matching without introducing false dismissals. Unlike traditional GNN-based graph embeddings that only produce approximate subgraph matching results, in this paper we carefully devise GNN-based embeddings for paths, such that if two paths (and the 1-hop neighbors of vertices on them) have the subgraph relationship, their corresponding GNN-based embedding vectors will strictly follow the dominance relationship. With this newly designed property of path dominance embeddings, we are able to propose effective pruning strategies based on path label/dominance embeddings and guarantee no false dismissals for subgraph matching. We build multidimensional indexes over path embedding vectors, and develop an efficient subgraph matching algorithm by traversing indexes over graph partitions in parallel and applying our pruning methods. We also propose a cost-model-based query plan that obtains query paths from the query graph with low query cost. Through extensive experiments, we confirm the efficiency and effectiveness of our proposed GNN-PE approach for exact subgraph matching on both real and synthetic graph data.
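A hedged sketch of the dominance-based pruning (the embeddings are random stand-ins for the trained GNN's outputs, and the dominance direction is illustrative): only data paths whose embeddings are dominated coordinate-wise by the query path's embedding survive as candidates for exact verification.

import numpy as np

rng = np.random.default_rng(1)
query_emb = np.array([0.2, 0.1, 0.4])  # embedding of a query path (stand-in)
data_embs = rng.random((1000, 3))      # embeddings of indexed data paths (stand-ins)
survivors = data_embs[np.all(query_emb <= data_embs, axis=1)]  # dominance-based pruning
print(len(survivors), "candidate paths remain for exact verification")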
APA, Harvard, Vancouver, ISO, and other styles
50

Park, Seok-Woo, Kang-Hyun Moon, Kyung-Taek Chung, and In-Ho Ra. "Graph Neural Network and Reinforcement Learning based Optimal VNE Method in 5G and B5G Networks." Korean Institute of Smart Media 12, no. 11 (December 31, 2023): 113–24. http://dx.doi.org/10.30693/smj.2023.12.11.113.

Full text
Abstract:
With the advent of 5G and B5G (Beyond 5G) networks, network virtualization technology that can overcome the limitations of existing networks is attracting attention. The purpose of network virtualization is to provide solutions for efficient network resource utilization and various services. Existing heuristic-based VNE (Virtual Network Embedding) techniques have been studied, but their flexibility is limited. Therefore, in this paper, we propose a GNN-based network slicing classification scheme to meet various service requirements and an RL-based VNE scheme for optimal resource allocation. The proposed method performs optimal VNE using an Actor-Critic network. Finally, to evaluate the performance of the proposed technique, we compare it with the Node Rank, MCST-VNE, and GCN-VNE techniques. Performance analysis shows that the GNN- and RL-based VNE technique outperforms the existing techniques in terms of acceptance rate and resource efficiency.
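A generic actor-critic skeleton of the kind applied here (schematic; in the real scheme the state would encode substrate and virtual-network features via the GNN, and the action is the choice of hosting substrate node):

import torch
import torch.nn as nn

n_nodes, state_dim = 10, 16
actor = nn.Sequential(nn.Linear(state_dim, 32), nn.ReLU(), nn.Linear(32, n_nodes))
critic = nn.Sequential(nn.Linear(state_dim, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam([*actor.parameters(), *critic.parameters()], lr=1e-3)

state = torch.randn(1, state_dim)  # substrate + virtual-node features (stand-in)
dist = torch.distributions.Categorical(logits=actor(state))
action = dist.sample()             # which substrate node hosts the virtual node
reward = torch.tensor([1.0])       # e.g., acceptance / resource-efficiency signal
advantage = reward - critic(state).squeeze(1)
loss = -(dist.log_prob(action) * advantage.detach()).mean() + advantage.pow(2).mean()
opt.zero_grad()
loss.backward()
opt.step()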
APA, Harvard, Vancouver, ISO, and other styles