Academic literature on the topic 'Graph Neural Networks (GNNs)'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Graph Neural Networks (GNNs).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
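What such automatic citation formatting does can be sketched in a few lines. This is an illustrative example only, not the site's actual implementation; the `format_apa` and `format_mla` helpers and the reference dictionary layout are hypothetical, using the first journal article below as sample data:

```python
# Hypothetical sketch of citation-style formatting (not the site's real code).

def format_apa(ref):
    """Rough APA-style journal citation from a reference record."""
    authors = ", ".join(ref["authors"][:-1]) + ", & " + ref["authors"][-1]
    return (f"{authors} ({ref['year']}). {ref['title']}. "
            f"{ref['journal']}, {ref['volume']}({ref['issue']}), {ref['pages']}.")

def format_mla(ref):
    """Rough MLA-style journal citation (first author inverted, 'et al.' for 3+)."""
    first = ref["authors"][0]
    others = ", et al." if len(ref["authors"]) > 2 else ""
    return (f"{first}{others} \"{ref['title']}.\" {ref['journal']}, "
            f"vol. {ref['volume']}, no. {ref['issue']}, {ref['year']}, pp. {ref['pages']}.")

# Sample record taken from the first journal article in this list.
ref = {
    "authors": ["You, Jiaxuan", "Gomes-Selman, Jonathan M.",
                "Ying, Rex", "Leskovec, Jure"],
    "year": 2021,
    "title": "Identity-aware Graph Neural Networks",
    "journal": "Proceedings of the AAAI Conference on Artificial Intelligence",
    "volume": 35, "issue": 12, "pages": "10737-10745",
}

print(format_apa(ref))
print(format_mla(ref))
```

The same record renders differently per style; a real formatter would also handle author-initial abbreviation, DOIs, and edge cases such as single-author works.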

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Graph Neural Networks (GNNs)"

1

You, Jiaxuan, Jonathan M. Gomes-Selman, Rex Ying, and Jure Leskovec. "Identity-aware Graph Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (2021): 10737–45. http://dx.doi.org/10.1609/aaai.v35i12.17283.

Full text
Abstract:
Message passing Graph Neural Networks (GNNs) provide a powerful modeling framework for relational data. However, the expressive power of existing GNNs is upper-bounded by the 1-Weisfeiler-Lehman (1-WL) graph isomorphism test, which means GNNs are not able to predict node clustering coefficients and shortest path distances, and cannot differentiate between different d-regular graphs. Here we develop a class of message passing GNNs, named Identity-aware Graph Neural Networks (ID-GNNs), with greater expressive power than the 1-WL test. ID-GNN offers a minimal but powerful solution to limitat
APA, Harvard, Vancouver, ISO, and other styles
2

Shen, Yanyan, Lei Chen, Jingzhi Fang, Xin Zhang, Shihong Gao, and Hongbo Yin. "Efficient Training of Graph Neural Networks on Large Graphs." Proceedings of the VLDB Endowment 17, no. 12 (2024): 4237–40. http://dx.doi.org/10.14778/3685800.3685844.

Full text
Abstract:
Graph Neural Networks (GNNs) have gained significant popularity for learning representations of graph-structured data. Mainstream GNNs employ the message passing scheme that iteratively propagates information between connected nodes through edges. However, this scheme incurs high training costs, hindering the applicability of GNNs on large graphs. Recently, the database community has extensively researched effective solutions to facilitate efficient GNN training on massive graphs. In this tutorial, we provide a comprehensive overview of the GNN training process based on the graph data lifecycl
APA, Harvard, Vancouver, ISO, and other styles
3

Feng, Aosong, Chenyu You, Shiqiang Wang, and Leandros Tassiulas. "KerGNNs: Interpretable Graph Neural Networks with Graph Kernels." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (2022): 6614–22. http://dx.doi.org/10.1609/aaai.v36i6.20615.

Full text
Abstract:
Graph kernels are historically the most widely-used technique for graph classification tasks. However, these methods suffer from limited performance because of the hand-crafted combinatorial features of graphs. In recent years, graph neural networks (GNNs) have become the state-of-the-art method in downstream graph-related tasks due to their superior performance. Most GNNs are based on Message Passing Neural Network (MPNN) frameworks. However, recent studies show that MPNNs can not exceed the power of the Weisfeiler-Lehman (WL) algorithm in graph isomorphism test. To address the limitations of
APA, Harvard, Vancouver, ISO, and other styles
4

Mo, Shibing, Kai Wu, Qixuan Gao, Xiangyi Teng, and Jing Liu. "AutoSGNN: Automatic Propagation Mechanism Discovery for Spectral Graph Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 18 (2025): 19493–502. https://doi.org/10.1609/aaai.v39i18.34146.

Full text
Abstract:
In real-world applications, spectral Graph Neural Networks (GNNs) are powerful tools for processing diverse types of graphs. However, a single GNN often struggles to handle different graph types—such as homogeneous and heterogeneous graphs—simultaneously. This challenge has led to the manual design of GNNs tailored to specific graph types, but these approaches are limited by the high cost of labor and the constraints of expert knowledge, which cannot keep up with the rapid growth of graph data. To overcome these challenges, we introduce AutoSGNN, an automated framework for discovering propagat
APA, Harvard, Vancouver, ISO, and other styles
5

Morris, Christopher, Martin Ritzert, Matthias Fey, et al. "Weisfeiler and Leman Go Neural: Higher-Order Graph Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4602–9. http://dx.doi.org/10.1609/aaai.v33i01.33014602.

Full text
Abstract:
In recent years, graph neural networks (GNNs) have emerged as a powerful neural architecture to learn vector representations of nodes and graphs in a supervised, end-to-end fashion. Up to now, GNNs have only been evaluated empirically—showing promising results. The following work investigates GNNs from a theoretical point of view and relates them to the 1-dimensional Weisfeiler-Leman graph isomorphism heuristic (1-WL). We show that GNNs have the same expressiveness as the 1-WL in terms of distinguishing non-isomorphic (sub-)graphs. Hence, both algorithms also have the same shortcomings. Based
APA, Harvard, Vancouver, ISO, and other styles
6

Lu, Yuanfu, Xunqiang Jiang, Yuan Fang, and Chuan Shi. "Learning to Pre-train Graph Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 5 (2021): 4276–84. http://dx.doi.org/10.1609/aaai.v35i5.16552.

Full text
Abstract:
Graph neural networks (GNNs) have become the de facto standard for representation learning on graphs, which derive effective node representations by recursively aggregating information from graph neighborhoods. While GNNs can be trained from scratch, pre-training GNNs to learn transferable knowledge for downstream tasks has recently been demonstrated to improve the state of the art. However, conventional GNN pre-training methods follow a two-step paradigm: 1) pre-training on abundant unlabeled data and 2) fine-tuning on downstream labeled data, between which there exists a significant gap due t
APA, Harvard, Vancouver, ISO, and other styles
7

Guo, Zhichun, Chunhui Zhang, Yujie Fan, Yijun Tian, Chuxu Zhang, and Nitesh V. Chawla. "Boosting Graph Neural Networks via Adaptive Knowledge Distillation." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (2023): 7793–801. http://dx.doi.org/10.1609/aaai.v37i6.25944.

Full text
Abstract:
Graph neural networks (GNNs) have shown remarkable performance on diverse graph mining tasks. While sharing the same message passing framework, our study shows that different GNNs learn distinct knowledge from the same graph. This implies potential performance improvement by distilling the complementary knowledge from multiple models. However, knowledge distillation (KD) transfers knowledge from high-capacity teachers to a lightweight student, which deviates from our scenario: GNNs are often shallow. To transfer knowledge effectively, we need to tackle two challenges: how to transfer knowledge
APA, Harvard, Vancouver, ISO, and other styles
8

Yang, Han, Kaili Ma, and James Cheng. "Rethinking Graph Regularization for Graph Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 5 (2021): 4573–81. http://dx.doi.org/10.1609/aaai.v35i5.16586.

Full text
Abstract:
The graph Laplacian regularization term is usually used in semi-supervised representation learning to provide graph structure information for a model f(X). However, with the recent popularity of graph neural networks (GNNs), directly encoding graph structure A into a model, i.e., f(A, X), has become the more common approach. We show that graph Laplacian regularization brings little-to-no benefit to existing GNNs, and propose a simple but non-trivial variant of graph Laplacian regularization, called Propagation-regularization (P-reg), to boost the performance of existing GNN models. We pr
APA, Harvard, Vancouver, ISO, and other styles
9

Guo, Kai, Kaixiong Zhou, Xia Hu, Yu Li, Yi Chang, and Xin Wang. "Orthogonal Graph Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 4 (2022): 3996–4004. http://dx.doi.org/10.1609/aaai.v36i4.20316.

Full text
Abstract:
Graph neural networks (GNNs) have received tremendous attention due to their superiority in learning node representations. These models rely on message passing and feature transformation functions to encode the structural and feature information from neighbors. However, stacking more convolutional layers significantly decreases the performance of GNNs. Most recent studies attribute this limitation to the over-smoothing issue, where node embeddings converge to indistinguishable vectors. Through a number of experimental observations, we argue that the main factor degrading the performance is the
APA, Harvard, Vancouver, ISO, and other styles
10

Yang, Yachao, Yanfeng Sun, Shaofan Wang, et al. "Graph Neural Networks with Soft Association between Topology and Attribute." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 8 (2024): 9260–68. http://dx.doi.org/10.1609/aaai.v38i8.28778.

Full text
Abstract:
Graph Neural Networks (GNNs) have shown great performance in learning representations for graph-structured data. However, recent studies have found that the interference between topology and attribute can lead to distorted node representations. Most GNNs are designed based on homophily assumptions, thus they cannot be applied to graphs with heterophily. This research critically analyzes the propagation principles of various GNNs and the corresponding challenges from an optimization perspective. A novel GNN called Graph Neural Networks with Soft Association between Topology and Attribute (GNN-S
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Graph Neural Networks (GNNs)"

1

Pappone, Francesco. "Graph neural networks: theory and applications." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23893/.

Full text
Abstract:
Artificial neural networks have seen, in recent years, dramatic growth in both their applications and the architectures of the models employed. In this thesis we introduce neural networks on Euclidean domains, in particular showing the importance of translation equivariance in convolutional networks, and we introduce, by analogy, an extension of convolution to structured data such as graphs. We also present the architectures of the main Graph Neural Networks and discuss, for each of the three proposed architectures (Spectral graph Convolutional Network, Graph Co
APA, Harvard, Vancouver, ISO, and other styles
2

Andersson, Mikael. "Gamma-ray tracking using graph neural networks." Thesis, KTH, Fysik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-298610.

Full text
Abstract:
While there are existing methods of gamma-ray track reconstruction in specialized detectors such as AGATA, including backtracking and clustering, it is naturally of interest to diversify the portfolio of available tools to provide us with viable alternatives. In this study some possibilities found in the field of machine learning were investigated, more specifically within the field of graph neural networks. In this project there was an attempt to reconstruct gamma tracks in a germanium solid using data simulated in Geant4. The data consists of photon energies below the pair production limit and so we
APA, Harvard, Vancouver, ISO, and other styles
3

Gunnarsson, Robin, and Alexander Åkermark. "Approaching sustainable mobility utilizing graph neural networks." Thesis, Högskolan i Halmstad, Akademin för informationsteknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-45191.

Full text
Abstract:
This report is done in collaboration with WirelessCar for the master of science thesis at Halmstad University. Many different parameters influence fuel consumption. The objective of the report is to evaluate whether graph neural networks are a practical model for fuel consumption prediction over geographical areas. The model uses a partitioning of geographical locations of trip observations to capture their spatial information. The project also proposes a method to capture the non-stationary behavior of vehicles by defining a vehicle node as a separate entity. The model then captures their different featur
APA, Harvard, Vancouver, ISO, and other styles
4

Amanzadi, Amirhossein. "Predicting safe drug combinations with Graph Neural Networks (GNN)." Thesis, Uppsala universitet, Institutionen för farmaceutisk biovetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-446691.

Full text
Abstract:
Many people, especially the elderly, consume multiple drugs for the treatment of complex or co-existing diseases. Identifying side effects caused by polypharmacy is crucial for reducing the mortality and morbidity of patients, which will lead to improvements in their quality of life. Since the space of possible drug combinations is immense, it is infeasible to examine them entirely in the lab. In silico models can offer a convenient solution; however, due to the lack of a sufficient amount of homogeneous data it is difficult to develop models that are both reliable and scalable in their abilit
APA, Harvard, Vancouver, ISO, and other styles
5

Liberatore, Lorenzo. "Introduction to geometric deep learning and graph neural networks." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amslaurea.unibo.it/25339/.

Full text
Abstract:
This thesis proposes an introduction to the fundamental concepts of supervised deep learning. Starting from Rosenblatt's Perceptron, we discuss the architectures that, in recent years, have revolutionized the world of deep learning: graph neural networks, which led to the formulation of geometric deep learning. We then give a simple example of a graph neural network, discussing the code that composes it, and test our architecture on the MNISTSuperpixels dataset, which is a variation of the benchmark dataset MNIST.
APA, Harvard, Vancouver, ISO, and other styles
6

Nastorg, Matthieu. "Scalable GNN Solutions for CFD Simulations." Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASG020.

Full text
Abstract:
Computational Fluid Dynamics (CFD) plays an essential role in predicting various physical phenomena, such as climate, aerodynamics, or blood flow. At the heart of CFD lie the Navier-Stokes equations governing the motion of fluids. However, solving these equations at scale remains demanding, particularly in the case of the incompressible Navier-Stokes equations, which require the intensive solution of a pressure Poisson problem that enforces the incompressibility constraint. Nowadays, deep learning methods
APA, Harvard, Vancouver, ISO, and other styles
7

Zheng, Xuebin. "Wavelet-based Graph Neural Networks." Thesis, The University of Sydney, 2022. https://hdl.handle.net/2123/27989.

Full text
Abstract:
This thesis focuses on spectral-based graph neural networks (GNNs). In Chapter 2, we use multiresolution Haar-like wavelets to design a framework of GNNs which is equipped with graph convolution and pooling strategies. The resulting model is called MathNet, whose wavelet transform matrix is constructed with a coarse-grained chain. Our proposed MathNet thus not only enjoys the multiresolution analysis from the Haar-like wavelets but also leverages the clustering information of the graph data. Furthermore, we develop a novel multiscale representation system for graph data, called decimated framelets, w
APA, Harvard, Vancouver, ISO, and other styles
8

Olmucci Poddubnyy, Oleksandr. "Graph Neural Networks for Recommender Systems." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amslaurea.unibo.it/25033/.

Full text
Abstract:
In recent years, a new class of deep learning models, Graph Neural Networks (GNNs), has proven to be a powerful learning paradigm when applied to problems that can be described via graph data, due to their natural ability to integrate representations across nodes that are connected via some topological structure. One such domain is Recommendation Systems, the majority of whose data can be naturally represented via graphs. For example, typical item recommendation datasets can be represented via user-item bipartite graphs, social recommendation datasets by social networks, and so on. T
APA, Harvard, Vancouver, ISO, and other styles
9

Chen, Zhiqian. "Graph Neural Networks: Techniques and Applications." Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/99848.

Full text
Abstract:
Effective information analysis generally boils down to the geometry of the data represented by a graph. Typical applications include social networks, transportation networks, the spread of epidemic disease, the brain's neuronal networks, gene data on biological regulatory networks, telecommunication networks, and knowledge graphs, all of which lie on the non-Euclidean graph domain. To describe the geometric structures, graph matrices such as the adjacency matrix or graph Laplacian can be employed to reveal latent patterns. This thesis focuses on the theoretical analysis of graph neural networks and the deve
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Graph Neural Networks (GNNs)"

1

Liu, Zhiyuan, and Jie Zhou. Introduction to Graph Neural Networks. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-031-01587-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Shi, Chuan, Xiao Wang, and Cheng Yang. Advances in Graph Neural Networks. Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-16174-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Wu, Lingfei, Peng Cui, Jian Pei, and Liang Zhao, eds. Graph Neural Networks: Foundations, Frontiers, and Applications. Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-6054-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Lucas, Peter, José A. Gámez, and Antonio Salmerón, eds. Advances in Probabilistic Graphical Models. Springer, 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Hawash, Mohamed. Responsible Graph Neural Networks. CRC Press LLC, 2023.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Zhou, Jie, and Zhiyuan Liu. Introduction to Graph Neural Networks. Morgan & Claypool Publishers, 2020.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Graph Neural Networks in Action. Manning Publications Co. LLC, 2023.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Graph Neural Networks (GNNs)"

1

Sharma, Jayant, Manuel Lentzen, Sophia Krix, et al. "Graph Neural Networks for Predicting Side Effects and New Indications of Drugs Using Electronic Health Records." In Cognitive Technologies. Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-83097-6_9.

Full text
Abstract:
Drug development is a costly and time-intensive process. However, promising strategies such as drug repositioning and side effect prediction can help to overcome these challenges. Repurposing approved drugs can significantly reduce the time and resources required for preclinical and clinical trials. Furthermore, early detection of potential safety issues is crucial for both drug development programs and the wider healthcare system. For both goals, drug repositioning and side effect prediction, existing machine learning (ML) approaches mainly rely on data collected in preclinical phase
APA, Harvard, Vancouver, ISO, and other styles
2

Holzinger, Andreas, Anna Saranti, Anne-Christin Hauschild, et al. "Human-in-the-Loop Integration with Domain-Knowledge Graphs for Explainable Federated Deep Learning." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-40837-3_4.

Full text
Abstract:
We explore the integration of domain knowledge graphs into Deep Learning for improved interpretability and explainability using Graph Neural Networks (GNNs). Specifically, a protein-protein interaction (PPI) network is masked over a deep neural network for classification, with patient-specific multi-modal genomic features enriched into the PPI graph's nodes. Subnetworks that are relevant to the classification (referred to as "disease subnetworks") are detected using explainable AI. Federated learning is enabled by dividing the knowledge graph into relevant subnetworks, constructing an
APA, Harvard, Vancouver, ISO, and other styles
3

Li, Mingkai, Peter Kok-Yiu Wong, Cong Huang, and Jack C. P. Cheng. "Indoor Trajectory Reconstruction Using Building Information Modeling and Graph Neural Networks." In CONVR 2023 - Proceedings of the 23rd International Conference on Construction Applications of Virtual Reality. Firenze University Press, 2023. http://dx.doi.org/10.36253/979-12-215-0289-3.89.

Full text
Abstract:
Trajectory reconstruction of pedestrians is of paramount importance to understanding crowd dynamics and human movement patterns, which will provide insights to improve building design, facility management and route planning. Camera-based tracking methods have been widely explored with the rapid development of deep learning techniques. When moving to indoor environments, many challenges occur, including occlusions, complex environments and limited camera placement and coverage. Therefore, we propose a novel indoor trajectory reconstruction method using building information modeling (BIM) and graph ne
APA, Harvard, Vancouver, ISO, and other styles
4

Li, Xu, and Yongsheng Chen. "Multi-Augmentation Contrastive Learning as Multi-Objective Optimization for Graph Neural Networks." In Advances in Knowledge Discovery and Data Mining. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-33377-4_38.

Full text
Abstract:
Recently self-supervised learning is gaining popularity for Graph Neural Networks (GNN) by leveraging unlabeled data. Augmentation plays a key role in self-supervision. While there is a common set of image augmentation methods that preserve image labels in general, graph augmentation methods do not guarantee consistent graph semantics and are usually domain dependent. Existing self-supervised GNN models often handpick a small set of augmentation techniques that limit the performance of the model. In this paper, we propose a common set of graph augmentation methods to a wide range of GNN
APA, Harvard, Vancouver, ISO, and other styles
5

Su, Chang, Yu Hou, and Fei Wang. "GNN-based Biomedical Knowledge Graph Mining in Drug Development." In Graph Neural Networks: Foundations, Frontiers, and Applications. Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-6054-2_24.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Yin, Wanhao, Mingyuan Li, Haixing Zhao, et al. "IDLT-GNN: Graph Neural Networks Incorporating Deep Local Topology." In Lecture Notes in Electrical Engineering. Springer Nature Singapore, 2025. https://doi.org/10.1007/978-981-96-2432-4_31.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Yajima, Yuta, and Akihiro Inokuchi. "Why Deeper Graph Neural Network Performs Worse? Discussion and Improvement About Deep GNNs." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-15931-2_60.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Romanova, Alex. "GNN Graph Classification Method to Discover Climate Change Patterns." In Artificial Neural Networks and Machine Learning – ICANN 2023. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-44216-2_32.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Jayasinghe, Haritha, and Ioannis Brilakis. "Topological Relationship Modelling for Industrial Facility Digitisation Using Graph Neural Networks." In CONVR 2023 - Proceedings of the 23rd International Conference on Construction Applications of Virtual Reality. Firenze University Press, 2023. http://dx.doi.org/10.36253/979-12-215-0289-3.88.

Full text
Abstract:
There is rising demand for automated digital twin construction based on point cloud scans, especially in the domain of industrial facilities. Yet, current automation approaches focus almost exclusively on geometric modelling. The output of these methods is a disjoint cluster of individual elements, while element relationships are ignored. This research demonstrates the feasibility of adopting Graph Neural Networks (GNN) for automated detection of connectivity relationships between elements in industrial facility scans. We propose a novel method which represents elements and relationships as gr
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Graph Neural Networks (GNNs)"

1

Pluska, Alexander, Pascal Welke, Thomas Gärtner, and Sagar Malhotra. "Logical Distillation of Graph Neural Networks." In 21st International Conference on Principles of Knowledge Representation and Reasoning (KR 2024). International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/kr.2024/86.

Full text
Abstract:
We present a logic based interpretable model for learning on graphs and an algorithm to distill this model from a Graph Neural Network (GNN). Recent results have shown connections between the expressivity of GNNs and the two-variable fragment of first-order logic with counting quantifiers (C2). We introduce a decision-tree based model which leverages an extension of C2 to distill interpretable logical classifiers from GNNs. We test our approach on multiple GNN architectures. The distilled models are interpretable, succinct, and attain similar accuracy to the underlying GNN. Furthermore, when t
APA, Harvard, Vancouver, ISO, and other styles
2

Tena Cucala, David J., and Bernardo Cuenca Grau. "Bridging Max Graph Neural Networks and Datalog with Negation." In 21st International Conference on Principles of Knowledge Representation and Reasoning (KR 2024). International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/kr.2024/89.

Full text
Abstract:
We consider a general class of data transformations based on Graph Neural Networks (GNNs), which can be used for a wide variety of tasks. An important question in this setting is to characterise the expressive power of these transformations in terms of a suitable logic-based language. From a practical perspective, the correspondence of a GNN with a logical theory can be exploited for explaining the model's predictions symbolically. In this paper, we introduce a broad family of GNN-based transformations which can be characterised using Datalog programs with negation-as-failure, which can be com
APA, Harvard, Vancouver, ISO, and other styles
3

Khalid, Md Meraj, Luisa Peterson, Edgar Ivan Sanchez Medina, and Kai Sundmacher. "Physics-Informed Graph Neural Networks for Modeling Spatially Distributed Dynamically Operated Processes." In The 35th European Symposium on Computer Aided Process Engineering. PSE Press, 2025. https://doi.org/10.69997/sct.101576.

Full text
Abstract:
Modeling process systems by use of partial differential equations is often complex and computationally expensive, especially for inverse problems such as optimization, state identification, or parameter estimation. Data-driven methods typically provide efficient alternatives with lower computational cost. One such method is Graph Neural Networks (GNNs), which can be used to model dynamical systems as graphs. However, dynamic GNNs often face challenges with extrapolation and representability. Integrating mechanistic insights in surrogate models can improve both prediction accuracy and interpret
APA, Harvard, Vancouver, ISO, and other styles
4

Leenhouts, Roel, Sebastien Jankelevitch, Roel Raike, Simon Müller, and Florence Vermeire. "Thermodynamics-informed Graph Neural Networks for Phase Transition Enthalpies." In The 35th European Symposium on Computer Aided Process Engineering. PSE Press, 2025. https://doi.org/10.69997/sct.140638.

Full text
Abstract:
Phase transition enthalpies, such as those for fusion, vaporization, and sublimation, are vital for understanding thermodynamic properties and aiding early-stage process design. However, measuring these properties is often time-consuming and costly, leading to increased interest in computational methods for fast and accurate predictions. Graph neural networks (GNNs), known for their ability to learn complex molecular representations, have emerged as state-of-the-art tools for predicting various thermophysical properties. Despite their success, GNNs do not inherently obey thermodynamic laws. In
APA, Harvard, Vancouver, ISO, and other styles
5

Gao, Qinghe, Daniel C. Miedema, Yidong Zhao, Jana M. Weber, Qian Tao, and Artur M. Schweidtmann. "Bayesian uncertainty quantification of graph neural networks using stochastic gradient Hamiltonian Monte Carlo." In The 35th European Symposium on Computer Aided Process Engineering. PSE Press, 2025. https://doi.org/10.69997/sct.111298.

Full text
Abstract:
Graph neural networks (GNNs) have proven state-of-the-art performance in molecular property prediction tasks. However, a significant challenge with GNNs is the reliability of their predictions, particularly in critical domains where quantifying model confidence is essential. Therefore, assessing uncertainty in GNN predictions is crucial to improving their robustness. Existing uncertainty quantification methods, such as Deep ensembles and Monte Carlo Dropout, have been applied to GNNs with some success, but these methods are limited to approximate the full posterior distribution. In this work,
APA, Harvard, Vancouver, ISO, and other styles
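The Monte Carlo Dropout baseline mentioned in the abstract above can be sketched in a few lines: dropout is kept active at inference time, and repeated stochastic forward passes yield a predictive mean and a spread that serves as an uncertainty estimate. The toy two-layer network and weights below are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_with_dropout(x, W1, W2, p=0.5):
    """One stochastic forward pass: dropout stays ON at inference."""
    h = np.maximum(0.0, x @ W1)            # hidden layer with ReLU
    mask = rng.random(h.shape) > p         # Bernoulli dropout mask
    h = h * mask / (1.0 - p)               # inverted-dropout scaling
    return h @ W2

# Illustrative weights and input (assumed shapes, not from the paper)
W1 = rng.standard_normal((4, 16))
W2 = rng.standard_normal((16, 1))
x = rng.standard_normal((1, 4))

# T stochastic passes approximate the predictive distribution
samples = np.array([forward_with_dropout(x, W1, W2) for _ in range(200)])
mean, std = samples.mean(), samples.std()  # prediction and its uncertainty
```

SGHMC, the method the paper proposes, instead draws weight samples from the posterior directly; the sampling loop above would then iterate over sampled weight sets rather than dropout masks.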
6

Rangisetti, Lakshmi Sravanthi, Fathimabi Shaik, Aparna Bhagavatula, and Leela Satya Kommareddy. "Heart Disease Detection Using Graph Neural Networks (GNNs)." In 2024 5th International Conference on Electronics and Sustainable Communication Systems (ICESC). IEEE, 2024. http://dx.doi.org/10.1109/icesc60852.2024.10690138.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Morris, Matthew, David J. Tena Cucala, Bernardo Cuenca Grau, and Ian Horrocks. "Relational Graph Convolutional Networks Do Not Learn Sound Rules." In 21st International Conference on Principles of Knowledge Representation and Reasoning {KR-2023}. International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/kr.2024/84.

Full text
Abstract:
Graph neural networks (GNNs) are frequently used to predict missing facts in knowledge graphs (KGs). Motivated by the lack of explainability for the outputs of these models, recent work has aimed to explain their predictions using Datalog, a widely used logic-based formalism. However, such work has been restricted to certain subclasses of GNNs. In this paper, we consider one of the most popular GNN architectures for KGs, R-GCN, and we provide two methods to extract rules that explain its predictions and are sound, in the sense that each fact derived by the rules is also predicted by the GNN, f
APA, Harvard, Vancouver, ISO, and other styles
8

Li, Hao, Chen Li, Jianfei Zhang, Yuanxin Ouyang, and Wenge Rong. "Addressing Over-Squashing in GNNs with Graph Rewiring and Ordered Neurons." In 2024 International Joint Conference on Neural Networks (IJCNN). IEEE, 2024. http://dx.doi.org/10.1109/ijcnn60899.2024.10650713.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Rahi, Parvez, Deepraj Patel, Srijan Prabhakar, Raunak Srivastava, Bhargav, and Prakher Singh. "Predictive Analytics for Stock Markets Using Graph Neural Networks (GNNs)." In 2024 International Conference on Progressive Innovations in Intelligent Systems and Data Science (ICPIDS). IEEE, 2024. https://doi.org/10.1109/icpids65698.2024.00088.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Liu, Zemin, Yuan Fang, Chenghao Liu, and Steven C. H. Hoi. "Node-wise Localization of Graph Neural Networks." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/210.

Full text
Abstract:
Graph neural networks (GNNs) emerge as a powerful family of representation learning models on graphs. To derive node representations, they utilize a global model that recursively aggregates information from the neighboring nodes. However, different nodes reside at different parts of the graph in different local contexts, making their distributions vary across the graph. Ideally, how a node receives its neighborhood information should be a function of its local context, to diverge from the global GNN model shared by all nodes. To utilize node locality without overfitting, we propose a node-wise
APA, Harvard, Vancouver, ISO, and other styles
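The globally shared aggregation step that the abstract above contrasts with node-wise localization can be sketched as a single mean-aggregation message-passing layer: every node averages its neighbours' features and applies the same weight matrix, regardless of local context. The shapes and the toy path graph below are illustrative assumptions.

```python
import numpy as np

def message_passing_layer(A, H, W):
    """One GNN layer: each node takes the mean of its neighbours' features,
    then applies a single weight matrix W shared by all nodes ("global"
    update), followed by ReLU."""
    deg = A.sum(axis=1, keepdims=True)   # node degrees
    deg[deg == 0] = 1.0                  # guard isolated nodes
    H_agg = (A @ H) / deg                # mean over neighbours
    return np.maximum(0.0, H_agg @ W)    # shared update for every node

# Tiny 3-node path graph: 0 - 1 - 2
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
H = np.eye(3)                            # one-hot input features
W = np.ones((3, 2))                      # weight matrix shared by all nodes
out = message_passing_layer(A, H, W)     # shape (3, 2)
```

A node-wise localized variant, in the spirit of the paper, would make `W` (or the aggregation weights) a function of each node's local context instead of a single shared matrix.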

Reports on the topic "Graph Neural Networks (GNNs)"

1

Jha, Sonal, Ayan Biswas, and Terece Turton. Graph Neural Network (GNN) - assisted Sampling for Cosmological Simulations. Office of Scientific and Technical Information (OSTI), 2022. http://dx.doi.org/10.2172/1884741.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Fox, James Siyang, and Sivasankaran Rajamanickam. How Robust Are Graph Neural Networks to Structural Noise?. Office of Scientific and Technical Information (OSTI), 2020. http://dx.doi.org/10.2172/1592845.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Aktar, Shamminuj, Andreas Baertschi, Abdel-Hameed Badawy, Diane Oyen, and Stephan Eidenbenz. Graph Neural Networks for Parameterized Quantum Circuits Expressibility Estimation. Office of Scientific and Technical Information (OSTI), 2024. http://dx.doi.org/10.2172/2350603.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Pokhrel, Aashish. Predicting Cross Architecture Performance of Source Codes using Graph Neural Networks. Iowa State University, 2024. https://doi.org/10.31274/cc-20250502-58.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Lupo Pasini, Massimiliano, Jong Youl Choi, Pei Zhang, and Justin Baker. User Manual - HydraGNN: Distributed PyTorch Implementation of Multi-Headed Graph Convolutional Neural Networks. Office of Scientific and Technical Information (OSTI), 2023. http://dx.doi.org/10.2172/2224153.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Ramakrishnan, Aravind, Fangyu Liu, Angeli Jayme, and Imad Al-Qadi. Prediction of Pavement Damage under Truck Platoons Utilizing a Combined Finite Element and Artificial Intelligence Model. Illinois Center for Transportation, 2024. https://doi.org/10.36501/0197-9191/24-030.

Full text
Abstract:
For robust pavement design, accurate damage computation is essential, especially for loading scenarios such as truck platoons. Studies have developed a framework to compute pavement distresses as a function of the lateral position, spacing, and market-penetration level of truck platoons. The established framework uses a robust 3D pavement model, along with the AASHTOWare Mechanistic–Empirical Pavement Design Guidelines (MEPDG) transfer functions, to compute pavement distresses. However, transfer functions include high variability and lack physical significance. Therefore, as an improvement to effecti
APA, Harvard, Vancouver, ISO, and other styles
7

Harb, Ihab. An approach to pattern recognition of multifont printed alphabet using conceptual graph theory and neural networks. Portland State University Library, 2000. http://dx.doi.org/10.15760/etd.5807.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Garg, Raveesh, Eric Qin, Francisco Martinez, et al. Understanding the Design Space of Sparse/Dense Multiphase Dataflows for Mapping Graph Neural Networks on Spatial Accelerators. Office of Scientific and Technical Information (OSTI), 2021. http://dx.doi.org/10.2172/1821960.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

St. John, Peter, Dave Biagioni, Charles Tripp, et al. End-to-End Optimization for Battery Materials and Molecules by Combining Graph Neural Networks and Reinforcement Learning. Office of Scientific and Technical Information (OSTI), 2025. https://doi.org/10.2172/2565365.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Inversion Method of Uncertain Parameters for Truss Structures Based on Graph Neural Networks. The Hong Kong Institute of Steel Construction, 2023. http://dx.doi.org/10.18057/ijasc.2023.19.4.5.

Full text
Abstract:
Uncertainty exists widely in practical engineering and is an important challenge in engineering structural analysis. In truss structures, uncertainty in the axial stiffness of bolted joints significantly affects the mechanical behavior of the structure, as the axial load is dominated by the member internal forces. Structural response analysis based on determined structural parameters is a common forward problem that can be solved by modeling analysis methods. However, the uncertain parameters of the axial stiffness of bolted joints cannot be determined during the design and analysis of trus
APA, Harvard, Vancouver, ISO, and other styles