Academic literature on the topic 'Gnn'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Gnn.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Gnn"

1

Yilmaz, Fatih, Aybüke Ertaş, and Seda Yamaç Akbiyik. "Determinants of circulant matrices with Gaussian nickel Fibonacci numbers." Filomat 37, no. 25 (2023): 8683–92. http://dx.doi.org/10.2298/fil2325683y.

Full text
Abstract:
In this study, we consider Kn := circ(GN1, GN2, ..., GNn) circulant matrices whose entries are the Gaussian Nickel Fibonacci numbers GN1, GN2, ..., GNn. Then, we compute determinants of Kn by exploiting Chebyshev polynomials of the second kind. Moreover, we obtain Cassini's identity and the d'Ocagne identity for the Gaussian Nickel Fibonacci numbers.
APA, Harvard, Vancouver, ISO, and other styles
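Determinants of circulant matrices, as studied in this entry, obey a general identity: the eigenvalues of circ(c0, ..., c_{n-1}) are the discrete Fourier transform values of the defining row, so the determinant is the product of those values. A minimal numpy sketch of that identity, using arbitrary illustrative coefficients rather than the paper's Gaussian Nickel Fibonacci sequence or its Chebyshev-polynomial derivation:

```python
import numpy as np

def circulant(first_row):
    """Build circ(c0, ..., c_{n-1}): row i is the first row cyclically shifted by i."""
    c = np.asarray(first_row)
    return np.array([np.roll(c, i) for i in range(len(c))])

def det_via_dft(first_row):
    # The eigenvalues of a circulant matrix are the DFT of its defining row,
    # so the determinant is the product of the DFT values.
    return np.prod(np.fft.fft(np.asarray(first_row, dtype=complex)))

row = [1.0, 2.0, 3.0, 5.0, 8.0]   # illustrative entries only, not the GN sequence
K = circulant(row)
print(np.linalg.det(K))           # direct determinant
print(det_via_dft(row).real)      # same value via the eigenvalue product
```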
2

Stanimirović, Predrag S., Nataša Tešić, Dimitrios Gerontitis, Gradimir V. Milovanović, Milena J. Petrović, Vladimir L. Kazakovtsev, and Vladislav Stasiuk. "Application of Gradient Optimization Methods in Defining Neural Dynamics." Axioms 13, no. 1 (January 14, 2024): 49. http://dx.doi.org/10.3390/axioms13010049.

Full text
Abstract:
Applications of gradient method for nonlinear optimization in development of Gradient Neural Network (GNN) and Zhang Neural Network (ZNN) are investigated. Particularly, the solution of the matrix equation AXB=D which changes over time is studied using the novel GNN model, termed as GGNN(A,B,D). The GGNN model is developed applying GNN dynamics on the gradient of the error matrix used in the development of the GNN model. The convergence analysis shows that the neural state matrix of the GGNN(A,B,D) design converges asymptotically to the solution of the matrix equation AXB=D, for any initial state matrix. It is also shown that the convergence result is the least square solution which is defined depending on the selected initial matrix. A hybridization of GGNN with analogous modification GZNN of the ZNN dynamics is considered. The Simulink implementation of presented GGNN models is carried out on the set of real matrices.
APA, Harvard, Vancouver, ISO, and other styles
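For readers unfamiliar with this family of models, the classical GNN design for the matrix equation AXB = D is a gradient flow: with error E(t) = A X(t) B - D and energy ||E||_F^2 / 2, the state evolves as dX/dt = -γ A^T E B^T. The sketch below integrates these dynamics with explicit Euler steps on a small, well-conditioned example and compares the result with the least-squares solution pinv(A) D pinv(B); it illustrates only the standard GNN dynamics, not the paper's GGNN/GZNN hybrid or its Simulink implementation.

```python
import numpy as np

# Small illustrative system A X B = D (shapes and entries chosen only for the demo).
A = np.array([[1., 0.], [0., 1.], [1., 1.]])               # 3 x 2, full column rank
B = np.array([[1., 1., 0.], [0., 1., 1.]])                 # 2 x 3, full row rank
D = np.array([[1., 2., 0.], [0., 1., 1.], [1., 3., 1.]])   # 3 x 3 target

def gnn_solve(A, B, D, gamma=1.0, dt=0.05, steps=2000):
    """Explicit-Euler integration of dX/dt = -gamma * A.T @ (A X B - D) @ B.T."""
    X = np.zeros((A.shape[1], B.shape[0]))
    for _ in range(steps):
        E = A @ X @ B - D                      # error matrix E(t)
        X = X - dt * gamma * (A.T @ E @ B.T)   # negative gradient of ||E||_F^2 / 2
    return X

X = gnn_solve(A, B, D)
X_star = np.linalg.pinv(A) @ D @ np.linalg.pinv(B)   # least-squares reference solution
print(np.linalg.norm(X - X_star))                    # ~0: the dynamics settle on X_star
```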
3

Long, Juan. "Exploration of Cross-Border Language Planning Using the Graph Neural Network for Internet of Things-Native Data." Mobile Information Systems 2022 (September 23, 2022): 1–12. http://dx.doi.org/10.1155/2022/7807878.

Full text
Abstract:
This work aims to study applying the graph neural network (GNN) in cross-border language planning (CBLP). Consequently, following a review of the connotation of GNN, it puts forward the research method for CBLP based on the Internet of Things (IoT)-native data and studies the classification of language texts utilizing different types of GNNs. Firstly, the isomorphic label-embedded graph convolution network (GCN) is proposed. Then, it proposes a scalability-enhanced heterogeneous GCN. Subsequently, the two GCN models are fused, and the research model-heterogeneous InducGCN is proposed. Finally, the model performances are comparatively analyzed. The experimental findings suggest that the classification accuracy of label-embedded GNN is higher than that of other methods, with the highest recognition accuracy of 97.37% on dataset R8. The classification accuracy of the proposed heterogeneous InducGCN fusion model has been improved by 0.09% more than the label-embedded GNN, reaching 97.46%.
APA, Harvard, Vancouver, ISO, and other styles
4

Zhao, Qingchao, Long Li, Yan Chu, Zhen Yang, Zhengkui Wang, and Wen Shan. "Efficient Supervised Image Clustering Based on Density Division and Graph Neural Networks." Remote Sensing 14, no. 15 (August 5, 2022): 3768. http://dx.doi.org/10.3390/rs14153768.

Full text
Abstract:
In recent research, supervised image clustering based on Graph Neural Networks (GNN) connectivity prediction has demonstrated considerable improvements over traditional clustering algorithms. However, existing supervised image clustering algorithms are usually time-consuming and limit their applications. In order to infer the connectivity between image instances, they usually created a subgraph for each image instance. Due to the creation and process of a large number of subgraphs as the input of GNN, the computation overheads are enormous. To address the high computation overhead problem in the GNN connectivity prediction, we present a time-efficient and effective GNN-based supervised clustering framework based on density division namely DDC-GNN. DDC-GNN divides all image instances into high-density parts and low-density parts, and only performs GNN subgraph connectivity prediction on the low-density parts, resulting in a significant reduction in redundant calculations. We test two typical models in the GNN connectivity prediction module in the DDC-GNN framework, which are the graph convolutional networks (GCN)-based model and the graph auto-encoder (GAE)-based model. Meanwhile, adaptive subgraphs are generated to ensure sufficient contextual information extraction for low-density parts instead of the fixed-size subgraphs. According to the experiments on different datasets, DDC-GNN achieves higher accuracy and is almost five times quicker than those without the density division strategy.
APA, Harvard, Vancouver, ISO, and other styles
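The density-division idea can be pictured with a toy split: score each instance by its mean distance to its k nearest neighbours and treat the lower-scoring half as the high-density part. This is only a hypothetical stand-in for the paper's criterion; the actual DDC-GNN division rule, adaptive subgraph construction, and GNN connectivity prediction are defined in the paper.

```python
import numpy as np

def density_split(features, k=5, quantile=0.5):
    """Split instances into high/low-density parts; the mean distance to the
    k nearest neighbours serves as an inverse density score."""
    d = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                      # ignore self-distances
    score = np.sort(d, axis=1)[:, :k].mean(axis=1)
    threshold = np.quantile(score, quantile)
    high_density = score <= threshold
    return high_density, ~high_density

X = np.random.default_rng(0).normal(size=(200, 16))  # stand-in image embeddings
hi, lo = density_split(X)
# Only the low-density instances would be passed on to GNN connectivity prediction.
print(hi.sum(), lo.sum())
```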
5

Shanthamallu, Uday Shankar, Jayaraman J. Thiagarajan, and Andreas Spanias. "Uncertainty-Matching Graph Neural Networks to Defend Against Poisoning Attacks." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 11 (May 18, 2021): 9524–32. http://dx.doi.org/10.1609/aaai.v35i11.17147.

Full text
Abstract:
Graph Neural Networks (GNNs), a generalization of neural networks to graph-structured data, are often implemented using message passes between entities of a graph. While GNNs are effective for node classification, link prediction and graph classification, they are vulnerable to adversarial attacks, i.e., a small perturbation to the structure can lead to a non-trivial performance degradation. In this work, we propose Uncertainty Matching GNN (UM-GNN), that is aimed at improving the robustness of GNN models, particularly against poisoning attacks to the graph structure, by leveraging epistemic uncertainties from the message passing framework. More specifically, we propose to build a surrogate predictor that does not directly access the graph structure, but systematically extracts reliable knowledge from a standard GNN through a novel uncertainty-matching strategy. Interestingly, this uncoupling makes UM-GNN immune to evasion attacks by design, and achieves significantly improved robustness against poisoning attacks. Using empirical studies with standard benchmarks and a suite of global and target attacks, we demonstrate the effectiveness of UM-GNN, when compared to existing baselines including the state-of-the-art robust GCN.
APA, Harvard, Vancouver, ISO, and other styles
6

Ge, Kao, Jian-Qiang Zhao, and Yan-Yong Zhao. "GR-GNN: Gated Recursion-Based Graph Neural Network Algorithm." Mathematics 10, no. 7 (April 4, 2022): 1171. http://dx.doi.org/10.3390/math10071171.

Full text
Abstract:
Under an internet background involving artificial intelligence and big data—unstructured, materialized, network graph-structured data, such as social networks, knowledge graphs, and compound molecules, have gradually entered into various specific business scenarios. One problem that urgently needs to be solved in the industry involves how to perform feature extractions, transformations, and operations in graph-structured data to solve downstream tasks, such as node classifications and graph classifications in actual business scenarios. Therefore, this paper proposes a gated recursion-based graph neural network (GR-GNN) algorithm to solve tasks such as node depth-dependent feature extractions and node classifications for graph-structured data. The GRU neural network unit was used to complete the node classification task and, thereby, construct the GR-GNN model. In order to verify the accuracy, effectiveness, and superiority of the algorithm on the open datasets Cora, CiteseerX, and PubMed, the algorithm was used to compare the operation results with the classical graph neural network baseline algorithms GCN, GAT, and GraphSAGE, respectively. The experimental results show that, on the validation set, the accuracy and target loss of the GR-GNN algorithm are better than or equal to other baseline algorithms; in terms of algorithm convergence speed, the performance of the GR-GNN algorithm is comparable to that of the GCN algorithm, which is higher than other algorithms. The research results show that the GR-GNN algorithm proposed in this paper has high accuracy and computational efficiency, and very wide application significance.
APA, Harvard, Vancouver, ISO, and other styles
7

Gholami, Fatemeh, Zahed Rahmati, Alireza Mofidi, and Mostafa Abbaszadeh. "On Enhancement of Text Classification and Analysis of Text Emotions Using Graph Machine Learning and Ensemble Learning Methods on Non-English Datasets." Algorithms 16, no. 10 (October 4, 2023): 470. http://dx.doi.org/10.3390/a16100470.

Full text
Abstract:
In recent years, machine learning approaches, in particular graph learning methods, have achieved great results in the field of natural language processing, in particular text classification tasks. However, many of such models have shown limited generalization on datasets in different languages. In this research, we investigate and elaborate graph machine learning methods on non-English datasets (such as the Persian Digikala dataset), which consists of users’ opinions for the task of text classification. More specifically, we investigate different combinations of (Pars) BERT with various graph neural network (GNN) architectures (such as GCN, GAT, and GIN) as well as use ensemble learning methods in order to tackle the text classification task on certain well-known non-English datasets. Our analysis and results demonstrate how applying GNN models helps in achieving good scores on the task of text classification by better capturing the topological information between textual data. Additionally, our experiments show how models employing language-specific pre-trained models (like ParsBERT, instead of BERT) capture better information about the data, resulting in better accuracies.
APA, Harvard, Vancouver, ISO, and other styles
8

Ennadir, Sofiane, Yassine Abbahaddou, Johannes F. Lutzeyer, Michalis Vazirgiannis, and Henrik Boström. "A Simple and Yet Fairly Effective Defense for Graph Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (March 24, 2024): 21063–71. http://dx.doi.org/10.1609/aaai.v38i19.30098.

Full text
Abstract:
Graph Neural Networks (GNNs) have emerged as the dominant approach for machine learning on graph-structured data. However, concerns have arisen regarding the vulnerability of GNNs to small adversarial perturbations. Existing defense methods against such perturbations suffer from high time complexity and can negatively impact the model's performance on clean graphs. To address these challenges, this paper introduces NoisyGNNs, a novel defense method that incorporates noise into the underlying model's architecture. We establish a theoretical connection between noise injection and the enhancement of GNN robustness, highlighting the effectiveness of our approach. We further conduct extensive empirical evaluations on the node classification task to validate our theoretical findings, focusing on two popular GNNs: the GCN and GIN. The results demonstrate that NoisyGNN achieves superior or comparable defense performance to existing methods while minimizing added time complexity. The NoisyGNN approach is model-agnostic, allowing it to be integrated with different GNN architectures. Successful combinations of our NoisyGNN approach with existing defense techniques demonstrate even further improved adversarial defense results. Our code is publicly available at: https://github.com/Sennadir/NoisyGNN.
APA, Harvard, Vancouver, ISO, and other styles
9

Wu, Qingle, Benjamin K. Ng, Chan-Tong Lam, Xiangyu Cen, Yuanhui Liang, and Yan Ma. "Shared Graph Neural Network for Channel Decoding." Applied Sciences 13, no. 23 (November 24, 2023): 12657. http://dx.doi.org/10.3390/app132312657.

Full text
Abstract:
With the application of graph neural network (GNN) in the communication physical layer, GNN-based channel decoding algorithms have become a research hotspot. Compared with traditional decoding algorithms, GNN-based channel decoding algorithms have better performance. GNN has good stability and can handle large-scale problems; GNN has good inheritance and can generalize to different network settings. Compared with deep learning-based channel decoding algorithms, GNN-based channel decoding algorithms avoid a large number of multiplications between learning weights and messages. However, the aggregation edges and nodes for GNN require many parameters, which requires a large amount of memory storage resources. In this work, we propose GNN-based channel decoding algorithms with shared parameters, called shared graph neural network (SGNN). For BCH codes and LDPC codes, the SGNN decoding algorithm only needs a quarter or half of the parameters, while achieving a slightly degraded bit error ratio (BER) performance.
APA, Harvard, Vancouver, ISO, and other styles
10

Kim, Cheolhyeong, Haeseong Moon, and Hyung Ju Hwang. "NEAR: Neighborhood Edge AggregatoR for Graph Classification." ACM Transactions on Intelligent Systems and Technology 13, no. 3 (June 30, 2022): 1–17. http://dx.doi.org/10.1145/3506714.

Full text
Abstract:
Learning graph-structured data with graph neural networks (GNNs) has been recently emerging as an important field because of its wide applicability in bioinformatics, chemoinformatics, social network analysis, and data mining. Recent GNN algorithms are based on neural message passing, which enables GNNs to integrate local structures and node features recursively. However, past GNN algorithms based on 1-hop neighborhood neural message passing are exposed to a risk of loss of information on local structures and relationships. In this article, we propose Neighborhood Edge AggregatoR (NEAR), a framework that aggregates relations between the nodes in the neighborhood via edges. NEAR, which can be orthogonally combined with Graph Isomorphism Network (GIN), gives integrated information that describes which nodes in the neighborhood are connected. Therefore, NEAR can reflect additional information of a local structure of each node beyond the nodes themselves in 1-hop neighborhood. Experimental results on multiple graph classification tasks show that our algorithm makes a good improvement over other existing 1-hop based GNN-based algorithms.
APA, Harvard, Vancouver, ISO, and other styles
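The kind of local-structure signal NEAR adds on top of 1-hop message passing can be pictured with a tiny stand-in: for every node, look at the edges that connect its neighbours to each other. The sketch below only counts those edges; the actual framework aggregates edge-level information rather than counts and is combined with GIN as described in the abstract.

```python
import numpy as np

def neighborhood_edge_counts(adj):
    """For each node, count edges among its 1-hop neighbours -- structure that
    plain 1-hop message passing over node features does not see directly."""
    n = adj.shape[0]
    counts = np.zeros(n, dtype=int)
    for v in range(n):
        neigh = np.flatnonzero(adj[v])
        sub = adj[np.ix_(neigh, neigh)]
        counts[v] = int(sub.sum()) // 2     # each undirected edge appears twice
    return counts

# Toy graph: a triangle 0-1-2 plus a pendant node 3 attached to node 0.
adj = np.array([[0, 1, 1, 1],
                [1, 0, 1, 0],
                [1, 1, 0, 0],
                [1, 0, 0, 0]])
print(neighborhood_edge_counts(adj))   # [1 1 1 0]: node 3's neighbourhood has no internal edge
```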
More sources

Dissertations / Theses on the topic "Gnn"

1

Nastorg, Matthieu. "Scalable GNN Solutions for CFD Simulations." Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASG020.

Full text
Abstract:
Computational Fluid Dynamics (CFD) plays an essential role in predicting various physical phenomena, such as climate, aerodynamics, or blood flow. At the core of CFD lie the Navier-Stokes equations governing the motion of fluids. However, solving these equations at scale remains daunting, especially when dealing with Incompressible Navier-Stokes equations. Indeed, the well-known splitting schemes require the costly resolution of a Pressure Poisson problem that guarantees the incompressibility constraint. Nowadays, Deep Learning methods have opened new perspectives for enhancing numerical simulations. Among existing approaches, Graph Neural Networks (GNNs), designed to handle graph data like meshes, have proven to be promising. This thesis is dedicated to exploring the use of GNNs to enhance the resolution of the Pressure Poisson problem. One significant contribution involves introducing a novel physics-informed GNN-based model that inherently respects boundary conditions while leveraging the Implicit Layer theory to automatically adjust the number of GNN layers required for convergence. This results in a model with enhanced generalization capabilities, effectively handling Poisson problems of various sizes and shapes. Nevertheless, its current limitations restrict it to small-scale problems, insufficient for industrial applications that often require thousands of nodes. To scale up these models, this thesis further explores combining GNNs with Domain Decomposition methods, taking advantage of batch parallel computing on GPU to produce more efficient engineering solutions
APA, Harvard, Vancouver, ISO, and other styles
2

Amanzadi, Amirhossein. "Predicting safe drug combinations with Graph Neural Networks (GNN)." Thesis, Uppsala universitet, Institutionen för farmaceutisk biovetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-446691.

Full text
Abstract:
Many people - especially the elderly - consume multiple drugs for the treatment of complex or co-existing diseases. Identifying side effects caused by polypharmacy is crucial for reducing mortality and morbidity of the patients, which will lead to improvement in their quality of life. Since there is immense space for possible drug combinations, it is infeasible to examine them entirely in the lab. In silico models can offer a convenient solution; however, due to the lack of a sufficient amount of homogeneous data, it is difficult to develop reliable and scalable models that can accurately predict polypharmacy side effects. Recent advancement in the field of representational learning has utilized the power of graph networks to harmonize information from the heterogeneous biological databases and interactomes. This thesis takes advantage of those techniques and incorporates them with state-of-the-art Graph Neural Network algorithms to implement a deep learning pipeline capable of predicting the Adverse Drug Reaction of any given paired drug combination.
APA, Harvard, Vancouver, ISO, and other styles
3

Pappone, Francesco. "Graph neural networks: theory and applications." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23893/.

Full text
Abstract:
In recent years, artificial neural networks have seen explosive growth in their applications and in the architectures of the models employed. In this thesis we introduce neural networks on Euclidean domains, in particular showing the importance of translation equivariance in convolutional networks, and we introduce, by analogy, an extension of convolution to graph-structured data. We also present the main Graph Neural Network architectures and, for each of the three architectures considered (Spectral Graph Convolutional Network, Graph Convolutional Network, Graph Attention Network), we describe an application that shows both how it works and why it matters. We further discuss the implementation of a classification algorithm based on two variants of the Graph Convolutional Network architecture, trained and tested on the PROTEINS dataset, capable of classifying the proteins in the dataset into two categories: enzymes and non-enzymes.
APA, Harvard, Vancouver, ISO, and other styles
4

Andersson, Mikael. "Gamma-ray racking using graph neural networks." Thesis, KTH, Fysik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-298610.

Full text
Abstract:
While there are existing methods of gamma ray-track reconstruction in specialized detectors such as AGATA, including backtracking and clustering, it is naturally of interest to diversify the portfolio of available tools to provide us with viable alternatives. In this study some possibilities found in the field of machine learning were investigated, more specifically within the field of graph neural networks. In this project there was an attempt to reconstruct gamma tracks in a germanium solid using data simulated in Geant4. The data consists of photon energies below the pair production limit and so we are limited to the processes of photoelectric absorption and Compton scattering. The author turned to the field of graph networks to utilize its edge and node structure for data of such variable input size as found in this task. A graph neural network (GNN) was implemented and trained on a variety of gamma multiplicities and energies and was subsequently tested in terms of various accuracy parameters and generated energy spectra. In the end the best result involved an edge classifier trained on a large dataset containing 10^6 tracks bundled together into separate events to be resolved. The network was capable of recalling up to 95 percent of the connective edges for the selected threshold in the infinite resolution case, with a peak-to-total ratio of 68 percent for a set of packed data with a model trained on simulated data including realistic uncertainties in both position and energy.
APA, Harvard, Vancouver, ISO, and other styles
5

Gunnarsson, Robin, and Alexander Åkermark. "Approaching sustainable mobility utilizing graph neural networks." Thesis, Högskolan i Halmstad, Akademin för informationsteknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-45191.

Full text
Abstract:
This report is done in collaboration with WirelessCar for the master of science thesis at Halmstad University. Many different parameters influence fuel consumption. The objective of the report is to evaluate if Graph neural networks are a practical model to perform fuel consumption prediction on areas. The model uses a partitioning of geographical locations of trip observations to capture their spatial information. The project also proposes a method to capture the non-stationary behavior of vehicles by defining a vehicle node as a separate entity. The model then captures their different features in a dense layer neural network and utilizes message passing to capture context about neighboring nodes. The model is compared to a baseline neural network with a similar network architecture as the graph neural network. The data is partitioned to define an area with Kmeans and static gridnet partition with and without terrain details. This partition is used to structure a homogeneous graph that is underperforming. The practical drawbacks of the initial homogeneous graph are inspected and addressed to develop a heterogeneous graph that can outperform the neural network baseline.
APA, Harvard, Vancouver, ISO, and other styles
6

Boszorád, Matej. "Segmentace obrazových dat pomocí grafových neuronových sítí." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2020. http://www.nusl.cz/ntk/nusl-412987.

Full text
Abstract:
This diploma thesis describes and implements the design of a graph neural network used for 2D segmentation of neural structure. The first chapter of the thesis briefly introduces the problem of segmentation. In this chapter, segmentation techniques are divided according to the principles of the methods they use. Each type of technique contains the essence of this category as well as a description of one representative. The second chapter of the diploma thesis explains graph neural networks (GNN for short). Here, the thesis divides graph neural networks in general and describes recurrent graph neural networks (RGNN for short) and graph autoencoders, that can be used for image segmentation, in more detail. The specific image segmentation solution is based on the message passing method in RGNN, which can replace convolution masks in convolutional neural networks. RGNN also provides a simpler multilayer perceptron topology. The second type of graph neural networks characterised in the thesis are graph autoencoders, which use various methods for better encoding of graph vertices into Euclidean space. The last part of the diploma thesis deals with the analysis of the problem, the proposal of its specific solution and the evaluation of results. The purpose of the practical part of the work was the implementation of GNN for image data segmentation. The advantage of using neural networks is the ability to solve different types of segmentation by changing training data. RGNN with message passing and node2vec were used as the GNN implementations for the segmentation problem. RGNN training was performed on graphics cards provided by the school and Google Colaboratory. Learning RGNN using node2vec was very memory intensive and therefore it was necessary to train on a processor with an operating memory larger than 12GB. As part of the RGNN optimization, learning was tested using various loss functions, changing topology and learning parameters. A tree structure method was developed to use node2vec to improve segmentation, but the results did not confirm an improvement for a small number of iterations. The best outcomes of the practical implementation were evaluated by comparing the tested data with the convolutional neural network U-Net. It is possible to state comparable results to the U-Net network, but further testing is needed to compare these neural networks. The result of the thesis is the use of RGNN as a modern solution to the problem of image segmentation and providing a foundation for further research.
APA, Harvard, Vancouver, ISO, and other styles
7

Zhou, Rongyan. "Exploration of opportunities and challenges brought by Industry 4.0 to the global supply chains and the macroeconomy by integrating Artificial Intelligence and more traditional methods." Electronic Thesis or Diss., université Paris-Saclay, 2021. http://www.theses.fr/2021UPAST037.

Full text
Abstract:
Industry 4.0 is a significant shift and a tremendous challenge for every industrial segment, especially for the manufacturing industry that gave birth to the new industrial revolution. The research first uses literature analysis to sort out the literature, and focuses on the use of “core literature extension method” to enumerate the development direction and application status of different fields, which devotes to showing a leading role for theory and practice of industry 4.0. The research then explores the main trend of multi-tier supply in Industry 4.0 by combining machine learning and traditional methods. Next, the research investigates the relationship of industry 4.0 investment and employment to look into the inter-regional dependence of industry 4.0 so as to present a reasonable clustering based on different criteria and make suggestions and analysis of the global supply chain for enterprises and organizations. Furthermore, our analysis system takes a glance at the macroeconomy. The combination of natural language processing in machine learning to classify research topics and traditional literature review to investigate the multi-tier supply chain significantly improves the study's objectivity and lays a solid foundation for further research. Using complex networks and econometrics to analyze the global supply chain and macroeconomic issues enriches the research methodology at the macro and policy level. This research provides analysis and references to researchers, decision-makers, and companies for their strategic decision-making
APA, Harvard, Vancouver, ISO, and other styles
8

Liberatore, Lorenzo. "Introduction to geometric deep learning and graph neural networks." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amslaurea.unibo.it/25339/.

Full text
Abstract:
This thesis proposes an introduction to the fundamental concepts of supervised deep learning. Starting from Rosenblatt's Perceptron, we will discuss the architectures that, in recent years, have revolutionized the world of deep learning: graph neural networks, which led to the formulation of geometric deep learning. We will then give a simple example of a graph neural network, discussing the code that composes it, and then test our architecture on the MNISTSuperpixels dataset, which is a variation of the benchmark dataset MNIST.
APA, Harvard, Vancouver, ISO, and other styles
9

Francis, Smita. "Optimisation of doping profiles for mm-wave GaAs and GaN gunn diodes." Thesis, Cape Peninsula University of Technology, 2017. http://hdl.handle.net/20.500.11838/2568.

Full text
Abstract:
Thesis (DTech (Electrical Engineering))--Cape Peninsula University of Technology, 2017.
Gunn diodes play a prominent role in the development of low-cost and reliable solid-state oscillators for diverse applications, such as in the military, security, automotive and consumer electronics industries. The primary focus of the research presented here is the optimisation of GaAs and GaN Gunn diodes for mm-wave operations, through rigorous Monte Carlo particle simulations. A novel, empirical technique to determine the upper operational frequency limit of devices based on the transferred electron mechanism is presented. This method exploits the hysteresis of the dynamic velocity-field curves of semiconductors to establish the upper frequency limit of the transferred electron mechanism in bulk material that supports this mechanism. The method can be applied to any bulk material exhibiting negative differential resistance. The simulations show that the upper frequency limits of the fundamental mode of operation for GaAs Gunn diodes are between 80 GHz and 100 GHz, and for GaN Gunn diodes between 250 GHz and 300 GHz, depending on the operating conditions. These results, based on the simulated bulk material characteristics, are confirmed by the simulated mm-wave performance of the GaAs and GaN Gunn devices. GaAs diodes are shown to exhibit a fundamental frequency limit of 90 GHz, but with harmonic power available up to 186 GHz. Simulated GaN diodes are capable of generating appreciable output power at operational frequencies up to 250 GHz in the fundamental mode, with harmonic output power available up to 525 GHz. The research furthermore establishes optimised doping profiles for two-domain GaAs Gunn diodes and single- and two-domain GaN Gunn diodes. The relevant design parameters that have been optimised are the dimensions and doping profile of the transit regions, the width of the doping notches and buffer region (for two-domain devices), and the bias voltage. In the case of GaAs diodes, hot electron injection has also been implemented to improve the efficiency and output power of the devices. Multi-domain operation has been explored for both GaAs and GaN devices and found to be an effective way of increasing the output power. However, it is the opinion of the author that a maximum number of two domains is feasible for both GaAs and GaN diodes due to the significant increase in thermal heating associated with an increase in the number of transit regions. It has also been found that increasing the doping concentration of the transit region exponentially over the last 25% towards the anode by a factor of 1.5 above the nominal doping level enhances the output power of the diodes.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Gnn"

1

Global network navigator for dummies. Foster City, CA: IDG Books Worldwide, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Shui lai gen wo gan bei. Taibei Shi: Feng yun shi dai chu ban gu fen you xian gong si, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

shi, Zhong gong Jiujiang Shi wei Dang shi zi liao zheng ji ban gong. Gan bei Min Shan gen ju di. [Jiangxi]: Zhong gong Jiangxi Sheng wei dang shi zi liao zheng ji wei yuan hui, 1988.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Ai yao gen zhe gan jue zou. Taibei Xian Xindian Shi: Wan sheng chu ban you xian gong si, 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Shan Gan ge ming gen ju di shi. Beijing Shi: Ren min chu ban she, 2013.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Huo, Haidan. Shan Gan bian gen ju di yan jiu. Beijing: Zhong gong dang shi chu ban she, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Xiang E Gan ge ming gen ju di. Beijing: Zhong gong dang shi zi liao chu ban she, 1991.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

1960-, Huang Huiyun, and Ouyang Xiaohua, eds. Xiang Gan ge ming gen ju di quan shi. Nanchang Shi: Jiangxi ren min chu ban she, 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Zhu mu diao: Gen yu gan de yi shu. Shanghai: Shanghai shu dian chu ban she, 2003.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Min Zhe Gan gen ju di di jin rong. Shanghai: Shanghai she hui ke xue yuan chu ban she, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Gnn"

1

Zhang, Sen, and Baokui Li. "GNN-MRC: Machine Reading Comprehension Based on GNN Augmentation." In Artificial Neural Networks and Machine Learning – ICANN 2023, 398–409. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-44216-2_33.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Chen, Hongyu, Zhejiang Ran, Keshi Ge, Zhiquan Lai, Jingfei Jiang, and Dongsheng Li. "Auto-Divide GNN: Accelerating GNN Training with Subgraph Division." In Euro-Par 2023: Parallel Processing, 367–82. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-39698-4_25.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Din, Aafaq Mohi Ud, Shaima Qureshi, and Javaid Iqbal. "GNN Approach for Software Reliability." In System Reliability and Security, 1–13. New York: Auerbach Publications, 2023. http://dx.doi.org/10.1201/9781032624983-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Wang, Yu, An Liu, Junhua Fang, Jianfeng Qu, and Lei Zhao. "ADQ-GNN: Next POI Recommendation by Fusing GNN and Area Division with Quadtree." In Web Information Systems Engineering – WISE 2021, 177–92. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-91560-5_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Li, Mingkai, Peter Kok-Yiu Wong, Cong Huang, and Jack C. P. Cheng. "Indoor Trajectory Reconstruction Using Building Information Modeling and Graph Neural Networks." In CONVR 2023 - Proceedings of the 23rd International Conference on Construction Applications of Virtual Reality, 895–906. Florence: Firenze University Press, 2023. http://dx.doi.org/10.36253/979-12-215-0289-3.89.

Full text
Abstract:
Trajectory reconstruction of pedestrians is of paramount importance to understand crowd dynamics and human movement patterns, which will provide insights to improve building design, facility management and route planning. Camera-based tracking methods have been widely explored with the rapid development of deep learning techniques. When moving to indoor environment, many challenges occur, including occlusions, complex environments and limited camera placement and coverage. Therefore, we propose a novel indoor trajectory reconstruction method using building information modeling (BIM) and graph neural network (GNN). A spatial graph representation is proposed for indoor environment to capture the spatial relationships of indoor areas and monitoring points. Closed circuit television (CCTV) system is integrated with BIM model through camera registration. Pedestrian simulation is conducted based on the BIM model to simulate the pedestrian movement in the considered indoor environment. The simulation results are embedded into the spatial graph for training of GNN. The indoor trajectory reconstruction is implemented as GNN conducts edge classification on the spatial graph.
APA, Harvard, Vancouver, ISO, and other styles
6

Li, Xu, and Yongsheng Chen. "Multi-Augmentation Contrastive Learning as Multi-Objective Optimization for Graph Neural Networks." In Advances in Knowledge Discovery and Data Mining, 495–507. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-33377-4_38.

Full text
Abstract:
Recently, self-supervised learning is gaining popularity for Graph Neural Networks (GNN) by leveraging unlabeled data. Augmentation plays a key role in self-supervision. While there is a common set of image augmentation methods that preserve image labels in general, graph augmentation methods do not guarantee consistent graph semantics and are usually domain dependent. Existing self-supervised GNN models often handpick a small set of augmentation techniques that limit the performance of the model. In this paper, we propose a common set of graph augmentation methods to a wide range of GNN tasks, and rely on the Pareto optimality to select and balance among these possibly conflicting augmented versions, called Pareto Graph Contrastive Learning (PGCL) framework. We show that while random selection of the same set of augmentation leads to slow convergence or even divergence, PGCL converges much faster with lower error rate. Extensive experiments on multiple datasets of different domains and scales demonstrate superior or comparable performance of PGCL.
APA, Harvard, Vancouver, ISO, and other styles
7

Moriyama, Sota, Koji Watanabe, and Katsumi Inoue. "GNN Based Extraction of Minimal Unsatisfiable Subsets." In Inductive Logic Programming, 77–92. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-49299-0_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Begga, Ahmed, Miguel Ángel Lozano, and Francisco Escolano. "HEX-GNN: Hierarchical EXpanders for Node Classification." In Advances in Artificial Intelligence, 233–42. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-62799-6_24.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Zhang, Jun, Tong Zhang, and Ying Wang. "GNN-Based Structural Dynamics Simulation for Modular Buildings." In Pattern Recognition and Computer Vision, 245–58. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-18913-5_19.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Gnn"

1

Veyrin-Forrer, Luca, Ataollah Kamal, Stefan Duffner, Marc Plantevit, and Céline Robardet. "What Does My GNN Really Capture? On Exploring Internal GNN Representations." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/105.

Full text
Abstract:
Graph Neural Networks (GNNs) are very efficient at classifying graphs but their internal functioning is opaque, which limits their field of application. Existing methods to explain GNN focus on disclosing the relationships between input graphs and model decision. In this article, we propose a method that goes further and isolates the internal features, hidden in the network layers, that are automatically identified by the GNN and used in the decision process. We show that this method makes it possible to know the parts of the input graphs used by GNN with much less bias than SOTA methods and thus to bring confidence in the decision process.
APA, Harvard, Vancouver, ISO, and other styles
2

Chen, Yuzhao, Yatao Bian, Xi Xiao, Yu Rong, Tingyang Xu, and Junzhou Huang. "On Self-Distilling Graph Neural Network." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/314.

Full text
Abstract:
Recently, the teacher-student knowledge distillation framework has demonstrated its potential in training Graph Neural Networks (GNNs). However, due to the difficulty of training over-parameterized GNN models, one may not easily obtain a satisfactory teacher model for distillation. Furthermore, the inefficient training process of teacher-student knowledge distillation also impedes its applications in GNN models. In this paper, we propose the first teacher-free knowledge distillation method for GNNs, termed GNN Self-Distillation (GNN-SD), that serves as a drop-in replacement of the standard training process. The method is built upon the proposed neighborhood discrepancy rate (NDR), which quantifies the non-smoothness of the embedded graph in an efficient way. Based on this metric, we propose the adaptive discrepancy retaining (ADR) regularizer to empower the transferability of knowledge that maintains high neighborhood discrepancy across GNN layers. We also summarize a generic GNN-SD framework that could be exploited to induce other distillation strategies. Experiments further prove the effectiveness and generalization of our approach, as it brings: 1) state-of-the-art GNN distillation performance with less training cost, 2) consistent and considerable performance enhancement for various popular backbones.
APA, Harvard, Vancouver, ISO, and other styles
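The neighborhood discrepancy rate is defined precisely in the paper; as a rough, hypothetical stand-in for what a "non-smoothness of the embedded graph" measure can look like, one can average 1 minus the cosine similarity between each node's embedding and the mean embedding of its neighbours:

```python
import numpy as np

def neighborhood_discrepancy(H, adj, eps=1e-12):
    """Average (1 - cosine similarity) between each node embedding and the mean
    embedding of its neighbours -- a simple non-smoothness proxy, not the paper's NDR."""
    neigh_mean = adj @ H / np.maximum(adj.sum(axis=1, keepdims=True), 1)
    num = (H * neigh_mean).sum(axis=1)
    den = np.linalg.norm(H, axis=1) * np.linalg.norm(neigh_mean, axis=1) + eps
    return float(np.mean(1.0 - num / den))

# Toy 4-node path graph; H holds one embedding row per node.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
H_smooth = np.ones((4, 3))               # identical embeddings everywhere
H_rough = np.array([[1., 0., 0.],
                    [0., 1., 0.],
                    [1., 0., 0.],
                    [0., 0., 1.]])       # neighbours point in different directions
print(neighborhood_discrepancy(H_smooth, adj))   # ~0: perfectly smooth
print(neighborhood_discrepancy(H_rough, adj))    # clearly positive
```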
3

Rosenbluth, Eran, Jan Tönshoff, and Martin Grohe. "Some Might Say All You Need Is Sum." In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/464.

Full text
Abstract:
The expressivity of Graph Neural Networks (GNNs) is dependent on the aggregation functions they employ. Theoretical works have pointed towards Sum aggregation GNNs subsuming every other GNN, while certain practical works have observed a clear advantage to using Mean and Max. An examination of the theoretical guarantee identifies two caveats. First, it is size-restricted, that is, the power of every specific GNN is limited to graphs of a specific size. Successfully processing larger graphs may require another GNN, and so on. Second, it concerns the power to distinguish non-isomorphic graphs, not the power to approximate general functions on graphs, and the former does not necessarily imply the latter. It is desired that a GNN's usability will not be limited to graphs of any specific size. Therefore, we explore the realm of unrestricted-size expressivity. We prove that basic functions, which can be computed exactly by Mean or Max GNNs, are inapproximable by any Sum GNN. We prove that under certain restrictions, every Mean or Max GNN can be approximated by a Sum GNN, but even there, a combination of (Sum, [Mean/Max]) is more expressive than Sum alone. Lastly, we prove further expressivity limitations for GNNs with a broad class of aggregations.
APA, Harvard, Vancouver, ISO, and other styles
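The trade-off between aggregators that the paper formalizes can be previewed with toy neighbour multisets: each pair below is collapsed to the same value by one aggregator but separated by another. (The paper's actual results concern approximation of functions on graphs of unrestricted size, which is a stronger statement than distinguishing finitely many examples.)

```python
# Pairs of neighbour-feature multisets and how Sum / Mean / Max treat them.
cases = {
    "sum separates, mean and max do not": ([1.0], [1.0, 1.0]),
    "max separates, sum and mean do not": ([1.0, 3.0], [2.0, 2.0]),
    "mean separates, sum and max do not": ([2.0, 2.0], [1.0, 1.0, 2.0]),
}
for name, (a, b) in cases.items():
    print(name)
    print("  sum :", sum(a), "vs", sum(b))
    print("  mean:", sum(a) / len(a), "vs", sum(b) / len(b))
    print("  max :", max(a), "vs", max(b))
```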
4

Peng, Jingshu, Zhao Chen, Yingxia Shao, Yanyan Shen, Lei Chen, and Jiannong Cao. "Sancus: Staleness-Aware Communication-Avoiding Full-Graph Decentralized Training in Large-Scale Graph Neural Networks (Extended Abstract)." In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/724.

Full text
Abstract:
Graph neural networks (GNNs) have emerged due to their success at modeling graph data. Yet, it is challenging for GNNs to efficiently scale to large graphs. Thus, distributed GNNs come into play. To avoid communication caused by expensive data movement between workers, we propose SANCUS, a staleness-aware communication-avoiding decentralized GNN system. By introducing a set of novel bounded embedding staleness metrics and adaptively skipping broadcasts, SANCUS abstracts decentralized GNN processing as sequential matrix multiplication and uses historical embeddings via cache. Theoretically, we show bounded approximation errors of embeddings and gradients with convergence guarantee. Empirically, we evaluate SANCUS with common GNN models via different system setups on large-scale benchmark datasets. Compared to SOTA works, SANCUS can avoid up to 74% communication with at least 1.86× faster throughput on average without accuracy loss.
APA, Harvard, Vancouver, ISO, and other styles
5

Zhu, Hongmin, Fuli Feng, Xiangnan He, Xiang Wang, Yan Li, Kai Zheng, and Yongdong Zhang. "Bilinear Graph Neural Network with Neighbor Interactions." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/202.

Full text
Abstract:
Graph Neural Network (GNN) is a powerful model to learn representations and make predictions on graph data. Existing efforts on GNN have largely defined the graph convolution as a weighted sum of the features of the connected nodes to form the representation of the target node. Nevertheless, the operation of weighted sum assumes the neighbor nodes are independent of each other, and ignores the possible interactions between them. When such interactions exist, such as the co-occurrence of two neighbor nodes is a strong signal of the target node's characteristics, existing GNN models may fail to capture the signal. In this work, we argue the importance of modeling the interactions between neighbor nodes in GNN. We propose a new graph convolution operator, which augments the weighted sum with pairwise interactions of the representations of neighbor nodes. We term this framework as Bilinear Graph Neural Network (BGNN), which improves GNN representation ability with bilinear interactions between neighbor nodes. In particular, we specify two BGNN models named BGCN and BGAT, based on the well-known GCN and GAT, respectively. Empirical results on three public benchmarks of semi-supervised node classification verify the effectiveness of BGNN --- BGCN (BGAT) outperforms GCN (GAT) by 1.6% (1.5%) in classification accuracy. Codes are available at: https://github.com/zhuhm1996/bgnn.
APA, Harvard, Vancouver, ISO, and other styles
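The "pairwise interactions of the representations of neighbor nodes" can be sketched as an average of elementwise products over all neighbour pairs, placed next to the usual weighted-sum aggregation. This is a schematic of the idea only; the actual BGCN/BGAT operators apply learned weight matrices and normalization as defined in the paper.

```python
import numpy as np

def bilinear_neighbor_term(neigh_h):
    """Average elementwise product over all unordered pairs of neighbour vectors."""
    n = len(neigh_h)
    if n < 2:
        return np.zeros_like(neigh_h[0])
    acc = np.zeros_like(neigh_h[0])
    for i in range(n):
        for j in range(i + 1, n):
            acc += neigh_h[i] * neigh_h[j]   # interaction of one neighbour pair
    return acc / (n * (n - 1) / 2)

# Toy neighbourhood: three 4-dimensional neighbour representations.
neigh = [np.array([1.0, 0.0, 2.0, 1.0]),
         np.array([0.5, 1.0, 0.0, 1.0]),
         np.array([1.0, 1.0, 1.0, 1.0])]
linear_part = np.mean(neigh, axis=0)          # the usual weighted-sum style aggregation
bilinear_part = bilinear_neighbor_term(neigh)
print(linear_part, bilinear_part)             # a BGNN-style layer combines both signals
```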
6

Tena Cucala, David, Bernardo Cuenca Grau, Boris Motik, and Egor V. Kostylev. "On the Correspondence Between Monotonic Max-Sum GNNs and Datalog." In 20th International Conference on Principles of Knowledge Representation and Reasoning {KR-2023}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/kr.2023/64.

Full text
Abstract:
Although there has been significant interest in applying machine learning techniques to structured data, the expressivity (i.e., a description of what can be learned) of such techniques is still poorly understood. In this paper, we study data transformations based on graph neural networks (GNNs). First, we note that the choice of how a dataset is encoded into a numeric form processable by a GNN can obscure the characterisation of a model's expressivity, and we argue that a canonical encoding provides an appropriate basis. Second, we study the expressivity of monotonic max-sum GNNs, which cover a subclass of GNNs with max and sum aggregation functions. We show that, for each such GNN, one can compute a Datalog program such that applying the GNN to any dataset produces the same facts as a single round of application of the program's rules to the dataset. Monotonic max-sum GNNs can sum an unbounded number of feature vectors which can result in arbitrarily large feature values, whereas rule application requires only a bounded number of constants. Hence, our result shows that the unbounded summation of monotonic max-sum GNNs does not increase their expressive power. Third, we sharpen our result to the subclass of monotonic max GNNs, which use only the max aggregation function, and identify a corresponding class of Datalog programs.
APA, Harvard, Vancouver, ISO, and other styles
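A toy Boolean instance of the GNN-to-rule correspondence studied here: one round of max aggregation over 0/1 node features derives exactly the facts produced by the Datalog rule q(X) :- edge(X, Y), p(Y). The paper's construction covers monotonic max-sum GNNs with real-valued features and a canonical dataset encoding; this is only the smallest illustrative case.

```python
def max_gnn_round(adj, p):
    """One round of Boolean max aggregation: q(v) = max over neighbours u of p(u)."""
    return {v: max((p[u] for u in adj[v]), default=0) for v in adj}

# Edge relation as adjacency lists and the unary fact p, which holds only at node 1.
adj = {0: [1], 1: [0, 2], 2: [1], 3: []}
p = {0: 0, 1: 1, 2: 0, 3: 0}
# The rule q(X) :- edge(X, Y), p(Y) derives q(0) and q(2) -- and so does the GNN round.
print(max_gnn_round(adj, p))   # {0: 1, 1: 0, 2: 1, 3: 0}
```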
7

Hu, Ziniu, Yuxiao Dong, Kuansan Wang, Kai-Wei Chang, and Yizhou Sun. "GPT-GNN." In KDD '20: The 26th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3394486.3403237.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Wang, Shuo, Yanran Li, Jiang Zhang, Qingye Meng, Lingwei Meng, and Fei Gao. "PM2.5-GNN." In SIGSPATIAL '20: 28th International Conference on Advances in Geographic Information Systems. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3397536.3422208.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Li, Zekun, Zeyu Cui, Shu Wu, Xiaoyu Zhang, and Liang Wang. "Fi-GNN." In CIKM '19: The 28th ACM International Conference on Information and Knowledge Management. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3357384.3357951.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Zhou, Fan, Chengtai Cao, Kunpeng Zhang, Goce Trajcevski, Ting Zhong, and Ji Geng. "Meta-GNN." In CIKM '19: The 28th ACM International Conference on Information and Knowledge Management. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3357384.3358106.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Gnn"

1

Fox, James, Bo Zhao, Sivasankaran Rajamanickam, Rampi Ramprasad, and Le Song. Concentric Spherical GNN for 3D Representation Learning. Office of Scientific and Technical Information (OSTI), March 2021. http://dx.doi.org/10.2172/1772205.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Jha, Sonal, Ayan Biswas, and Terece Turton. Graph Neural Network (GNN) - assisted Sampling for Cosmological Simulations. Office of Scientific and Technical Information (OSTI), August 2022. http://dx.doi.org/10.2172/1884741.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Garg, Raveesh, Eric Qin, Francisco Martinez, Robert Guirado, Akshay Jain, Sergi Abadal, Jose Abellan, et al. A Taxonomy for Classification and Comparison of Dataflows for GNN Accelerators. Office of Scientific and Technical Information (OSTI), March 2021. http://dx.doi.org/10.2172/1817326.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

CFA Institute. Gen Z and Investing: Social Media, Crypto, FOMO, and Family. CFA Institute, May 2023. http://dx.doi.org/10.56227/23.1.15.

Full text
Abstract:
This brief examines Gen Zs’ attitudes and behaviors around investing. It is based on data from a November–December 2022 online survey of 2,872 Gen Zs aged 18–25, Millennials, and Gen Xers from the United States, Canada, the United Kingdom, and China.
APA, Harvard, Vancouver, ISO, and other styles
5

Pavlidis, Dimitris. GaN MISFETs. Fort Belvoir, VA: Defense Technical Information Center, June 2000. http://dx.doi.org/10.21236/ada391089.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Hirshfield, Jay L. Bimodal Electron Gun. Office of Scientific and Technical Information (OSTI), December 2016. http://dx.doi.org/10.2172/1374056.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

CBL CORP REDWOOD CITY CA. Engineered GaN Substrates. Fort Belvoir, VA: Defense Technical Information Center, September 1996. http://dx.doi.org/10.21236/ada324733.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Cook, Philip, Jens Ludwig, Sudhir Venkatesh, and Anthony Braga. Underground Gun Markets. Cambridge, MA: National Bureau of Economic Research, November 2005. http://dx.doi.org/10.3386/w11737.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Cushing, Peter W. AAAV Gun & AMMO Update NDIA Gun and AMMO Symposium. Fort Belvoir, VA: Defense Technical Information Center, April 2001. http://dx.doi.org/10.21236/ada386173.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Long, CL, A. Del Genio, M. Deng, X. Fu, W. Gustafson, R. Houze, C. Jakob, et al. ARM MJO Investigation Experiment on Gan Island (AMIE-Gan) Science Plan. Office of Scientific and Technical Information (OSTI), April 2011. http://dx.doi.org/10.2172/1010958.

Full text
APA, Harvard, Vancouver, ISO, and other styles