
Journal articles on the topic 'Message-passing neural network'


Consult the top 50 journal articles for your research on the topic 'Message-passing neural network.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

You, Jiaxuan, Jonathan M. Gomes-Selman, Rex Ying, and Jure Leskovec. "Identity-aware Graph Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (2021): 10737–45. http://dx.doi.org/10.1609/aaai.v35i12.17283.

Abstract:
Message passing Graph Neural Networks (GNNs) provide a powerful modeling framework for relational data. However, the expressive power of existing GNNs is upper-bounded by the 1-Weisfeiler-Lehman (1-WL) graph isomorphism test, which means that GNNs are not able to predict node clustering coefficients and shortest path distances, and cannot differentiate between different d-regular graphs. Here we develop a class of message passing GNNs, named Identity-aware Graph Neural Networks (ID-GNNs), with greater expressive power than the 1-WL test. ID-GNN offers a minimal but powerful solution to limitat
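For readers unfamiliar with the 1-WL bound mentioned in this abstract, the following minimal Python sketch (an illustration written for this listing, not code from the cited paper) runs 1-WL colour refinement and shows that two non-isomorphic 2-regular graphs end up with identical colour histograms, which is exactly the kind of case standard message-passing GNNs cannot separate:

    # Illustrative 1-WL colour refinement; not the ID-GNN implementation.
    from collections import Counter

    def wl_colour_histogram(adj, rounds=3):
        """adj maps each node to its list of neighbours; returns the final colour histogram."""
        colours = {v: 0 for v in adj}  # all nodes start with the same colour
        for _ in range(rounds):
            signatures = {v: (colours[v], tuple(sorted(colours[u] for u in adj[v]))) for v in adj}
            palette = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
            colours = {v: palette[signatures[v]] for v in adj}  # relabel nodes by signature
        return Counter(colours.values())

    # One 6-cycle versus two disjoint triangles: both 2-regular, but not isomorphic.
    cycle6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
    triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4, 5], 4: [3, 5], 5: [3, 4]}
    print(wl_colour_histogram(cycle6) == wl_colour_histogram(triangles))  # True: 1-WL cannot separate them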
2

Nikolentzos, Giannis, Antoine Tixier, and Michalis Vazirgiannis. "Message Passing Attention Networks for Document Understanding." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (2020): 8544–51. http://dx.doi.org/10.1609/aaai.v34i05.6376.

Abstract:
Graph neural networks have recently emerged as a very effective framework for processing graph-structured data. These models have achieved state-of-the-art performance in many tasks. Most graph neural networks can be described in terms of message passing, vertex update, and readout functions. In this paper, we represent documents as word co-occurrence networks and propose an application of the message passing framework to NLP, the Message Passing Attention network for Document understanding (MPAD). We also propose several hierarchical variants of MPAD. Experiments conducted on 10 standard text
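As a hedged illustration of the generic message passing, vertex update, and readout functions mentioned in this abstract (a NumPy sketch written for this listing, not the MPAD architecture), one round of neural message passing over a small graph can be written as:

    # Generic message passing / vertex update / readout sketch; not the MPAD model.
    import numpy as np

    def message_passing_step(h, edges, w_msg, w_upd):
        """h: (n, d) node states; edges: list of (src, dst) pairs; w_msg, w_upd: weight matrices."""
        msgs = np.zeros_like(h)
        for src, dst in edges:                    # message function applied to each sender
            msgs[dst] += np.tanh(h[src] @ w_msg)  # sum-aggregate incoming messages
        return np.tanh(np.concatenate([h, msgs], axis=1) @ w_upd)  # vertex update

    def readout(h):
        return h.mean(axis=0)                     # permutation-invariant graph representation

    rng = np.random.default_rng(0)
    n, d = 4, 8
    h = rng.normal(size=(n, d))
    edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
    w_msg, w_upd = rng.normal(size=(d, d)), rng.normal(size=(2 * d, d))
    for _ in range(2):                            # two message-passing rounds
        h = message_passing_step(h, edges, w_msg, w_upd)
    print(readout(h).shape)                       # (8,)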
3

Yoon, Kanghoon, Kibum Kim, Jinyoung Moon, and Chanyoung Park. "Unbiased Heterogeneous Scene Graph Generation with Relation-Aware Message Passing Neural Network." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 3 (2023): 3285–94. http://dx.doi.org/10.1609/aaai.v37i3.25435.

Abstract:
Recent scene graph generation (SGG) frameworks have focused on learning complex relationships among multiple objects in an image. Thanks to the nature of the message passing neural network (MPNN) that models high-order interactions between objects and their neighboring objects, they are dominant representation learning modules for SGG. However, existing MPNN-based frameworks assume the scene graph as a homogeneous graph, which restricts the context-awareness of visual relations between objects. That is, they overlook the fact that the relations tend to be highly dependent on the objects with w
4

Klipfel, Astrid, Zied Bouraoui, Olivier Peltre, Yaël Fregier, Najwa Harrati, and Adlane Sayede. "Equivariant Message Passing Neural Network for Crystal Material Discovery." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (2023): 14304–11. http://dx.doi.org/10.1609/aaai.v37i12.26673.

Abstract:
Automatic material discovery with desired properties is a fundamental challenge for material sciences. Considerable attention has recently been devoted to generating stable crystal structures. While existing work has shown impressive success on supervised tasks such as property prediction, the progress on unsupervised tasks such as material generation is still hampered by the limited extent to which the equivalent geometric representations of the same crystal are considered. To address this challenge, we propose EPGNN, a periodic equivariant message-passing neural network that learns crystal la
5

Girish, L., and M. L. Raviprakash. "Message Passing-Based Prediction of Unlabelled Node Embedding Using Graph Neural Network." International Journal of Innovative Science and Research Technology 8, no. 3 (2023): 136–44. https://doi.org/10.5281/zenodo.7735576.

Abstract:
Graph neural networks are a class of deep learning methods designed to perform inference on data described by graphs. A graph neural network is a neural network that can be applied directly to graphs. It provides a convenient way to carry out node-level, edge-level and graph-level prediction tasks. Moreover, most GNN models do not account for long-distance relationships in graphs and instead simply aggregate data from short distances (e.g., 1-hop neighbours) in each round. In this work, we carry out node classification on large graphs comprising labelled and u
6

Tan, Xiaosi, Weihong Xu, Kai Sun, et al. "Improving Massive MIMO Message Passing Detectors With Deep Neural Network." IEEE Transactions on Vehicular Technology 69, no. 2 (2020): 1267–80. http://dx.doi.org/10.1109/tvt.2019.2960763.

7

Chu, Lon-Chan, and Benjamin W. Wah. "Optimal mapping of neural-network learning on message-passing multicomputers." Journal of Parallel and Distributed Computing 14, no. 3 (1992): 319–39. http://dx.doi.org/10.1016/0743-7315(92)90071-t.

8

Chen, Junyan, Wei Xiao, Xinmei Li, et al. "A Routing Optimization Method for Software-Defined Optical Transport Networks Based on Ensembles and Reinforcement Learning." Sensors 22, no. 21 (2022): 8139. http://dx.doi.org/10.3390/s22218139.

Abstract:
Optical transport networks (OTNs) are widely used in backbone- and metro-area transmission networks to increase network transmission capacity. In the OTN, it is particularly crucial to rationally allocate routes and maximize network capacities. By employing deep reinforcement learning (DRL)- and software-defined networking (SDN)-based solutions, the capacity of optical networks can be effectively increased. However, because most DRL-based routing optimization methods have low sample usage and difficulty in coping with sudden network connectivity changes, converging in software-defined OTN scen
9

Kim, Cheolhyeong, Haeseong Moon, and Hyung Ju Hwang. "NEAR: Neighborhood Edge AggregatoR for Graph Classification." ACM Transactions on Intelligent Systems and Technology 13, no. 3 (2022): 1–17. http://dx.doi.org/10.1145/3506714.

Abstract:
Learning graph-structured data with graph neural networks (GNNs) has been recently emerging as an important field because of its wide applicability in bioinformatics, chemoinformatics, social network analysis, and data mining. Recent GNN algorithms are based on neural message passing, which enables GNNs to integrate local structures and node features recursively. However, past GNN algorithms based on 1-hop neighborhood neural message passing are exposed to a risk of loss of information on local structures and relationships. In this article, we propose Neighborhood Edge AggregatoR (NEAR), a fra
10

Xu, Feng, Wanyue Xiong, Zizhu Fan, and Licheng Sun. "Node Classification Method Based on Hierarchical Hypergraph Neural Network." Sensors 24, no. 23 (2024): 7655. https://doi.org/10.3390/s24237655.

Abstract:
Hypergraph neural networks have gained widespread attention due to their effectiveness in handling graph-structured data with complex relationships and multi-dimensional interactions. However, existing hypergraph neural network models mainly rely on planar message-passing mechanisms, which have limitations: (i) low efficiency in encoding long-distance information; (ii) underutilization of high-order neighborhood features, aggregating information only on the edges of the original graph. This paper proposes an innovative hierarchical hypergraph neural network (HCHG) to address these issues. The
11

Wohl, Peter. "Efficiency Through Reduced Communication in Message Passing Simulation of Neural Networks." International Journal on Artificial Intelligence Tools 2, no. 1 (1993): 133–62. http://dx.doi.org/10.1142/s0218213093000096.

Abstract:
Neural algorithms require massive computation and very high communication bandwidth and are naturally expressed at a level of granularity finer than parallel systems can exploit efficiently. Mapping Neural Networks onto parallel computers has traditionally implied a form of clustering neurons and weights to increase the granularity. SIMD simulations may exceed a million connections per second using thousands of processors, but are often tailored to particular networks and learning algorithms. MIMD simulations required an even larger granularity to run efficiently and often trade flexibility fo
12

Fredyan, Renaldy. "Antiviral Medication Prediction Using A Deep Learning Model of Drug-Target Interaction for The Coronavirus SARS-COV." Engineering, MAthematics and Computer Science Journal (EMACS) 6, no. 2 (2024): 101–6. http://dx.doi.org/10.21512/emacsjournal.v6i2.11290.

Abstract:
Graph convolutional neural networks (GCNs) have shown promising performance in modeling graph data, particularly for small-scale molecules. Message-passing neural networks (MPNNs) are an important form of GCN variant. They excel at gathering and integrating particular information about molecules via several repetitions of message transmission. This capability has resulted in major advances in molecular modeling and property prediction. By combining the self-attention mechanism with MPNNs, there is potential to improve molecular representation while using Transformers' proven efficacy in other
13

Li, Yu, Meng Qu, Jian Tang, and Yi Chang. "Signed Laplacian Graph Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 4 (2023): 4444–52. http://dx.doi.org/10.1609/aaai.v37i4.25565.

Abstract:
This paper studies learning meaningful node representations for signed graphs, where both positive and negative links exist. This problem has been widely studied by meticulously designing expressive signed graph neural networks, as well as capturing the structural information of the signed graph through traditional structure decomposition methods, e.g., spectral graph theory. In this paper, we propose a novel signed graph representation learning framework, called Signed Laplacian Graph Neural Network (SLGNN), which combines the advantages of both. Specifically, based on spectral graph theory a
14

Zhu, Xingyu. "A Directed Message Passing Neural Network Model for Predicting True Solubility in Drug Discovery." Applied and Computational Engineering 166, no. 1 (2025): 135–40. https://doi.org/10.54254/2755-2721/2025.tj24494.

Abstract:
This study presents a novel directed message-passing neural network (DMPNN) model for predicting true solubility (logS) in drug discovery. Traditional methods such as high-throughput screening and QSAR models, exemplified by the Rule of Five, have historically guided early discovery efforts but often fall short in handling modern high-dimensional, complex chemical datasets. Recent advances in integrating machine learning, including support vector machines, random forests, and deep learning, have improved prediction accuracy. However, high-dimensional data and accurate error estimation remain signifi
15

Xu, Lei, and Peter Jeavons. "Simple Algorithms for Distributed Leader Election in Anonymous Synchronous Rings and Complete Networks Inspired by Neural Development in Fruit Flies." International Journal of Neural Systems 25, no. 07 (2015): 1550025. http://dx.doi.org/10.1142/s0129065715500252.

Abstract:
Leader election in anonymous rings and complete networks is a very practical problem in distributed computing. Previous algorithms for this problem are generally designed for a classical message passing model where complex messages are exchanged. However, the need to send and receive complex messages makes such algorithms less practical for some real applications. We present some simple synchronous algorithms for distributed leader election in anonymous rings and complete networks that are inspired by the development of the neural system of the fruit fly. Our leader election algorithms all ass
16

Sun, Xiyang, and Fumiyasu Komaki. "BHGNN-RT: Capturing bidirectionality and network heterogeneity in graphs." PLOS One 20, no. 7 (2025): e0326756. https://doi.org/10.1371/journal.pone.0326756.

Abstract:
Graph neural networks (GNNs) have shown great promise for representation learning on complex graph-structured data, but existing models often fall short when applied to directed heterogeneous graphs. In this study, we proposed a novel embedding method, a bidirectional heterogeneous graph neural network with random teleport (BHGNN-RT) that leverages the bidirectional message-passing process and network heterogeneity, for directed heterogeneous graphs. Our method captures both incoming and outgoing message flows, integrates heterogeneous edge types through relation-specific transformations, and
17

Li, Mingxuan, and Michael L. Littman. "Towards Sample Efficient Agents through Algorithmic Alignment (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 18 (2021): 15827–28. http://dx.doi.org/10.1609/aaai.v35i18.17910.

Abstract:
In this work, we propose and explore Deep Graph Value Network (DeepGV) as a promising method to work around sample complexity in deep reinforcement-learning agents using a message-passing mechanism. The main idea is that the agent should be guided by structured non-neural-network algorithms like dynamic programming. According to recent advances in algorithmic alignment, neural networks with structured computation procedures can be trained efficiently. We demonstrate the potential of graph neural network in supporting sample efficient learning by showing that Deep Graph Value Network can outper
18

Xiang, Yan, Yu-Hang Tang, Guang Lin, and Huai Sun. "A Comparative Study of Marginalized Graph Kernel and Message-Passing Neural Network." Journal of Chemical Information and Modeling 61, no. 11 (2021): 5414–24. http://dx.doi.org/10.1021/acs.jcim.1c01118.

19

Jo, Jeonghee, Bumju Kwak, Byunghan Lee, and Sungroh Yoon. "Flexible Dual-Branched Message-Passing Neural Network for a Molecular Property Prediction." ACS Omega 7, no. 5 (2022): 4234–44. http://dx.doi.org/10.1021/acsomega.1c05877.

20

Huangfu, Hancong, Yongcai Wang, and Zhaoxiong Guan. "Non-intrusive power load decomposition method based on message passing graph neural network." Journal of Physics: Conference Series 2917, no. 1 (2024): 012034. https://doi.org/10.1088/1742-6596/2917/1/012034.

Abstract:
The non-invasive power load decomposition technology has a relatively small impact on the power system. Besides, it has strong adaptability to different networks, but its load decomposition shows poor accuracy. To raise the accuracy of non-intrusive power load decomposition methods, a household power load decomposition model with a factor hidden Markov model was established, which uses a message passing mechanism to improve the graph neural network and solve the power load decomposition model. The outcomes denote that the proposed method can achieve an F1 Score of 0.9326
21

Xu, Jiaxing, Aihu Zhang, Qingtian Bian, Vijay Prakash Dwivedi, and Yiping Ke. "Union Subgraph Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 14 (2024): 16173–83. http://dx.doi.org/10.1609/aaai.v38i14.29551.

Abstract:
Graph Neural Networks (GNNs) are widely used for graph representation learning in many application domains. The expressiveness of vanilla GNNs is upper-bounded by 1-dimensional Weisfeiler-Leman (1-WL) test as they operate on rooted subtrees through iterative message passing. In this paper, we empower GNNs by injecting neighbor-connectivity information extracted from a new type of substructure. We first investigate different kinds of connectivities existing in a local neighborhood and identify a substructure called union subgraph, which is able to capture the complete picture of the 1-hop neigh
22

Shen, Xiao, Shirui Pan, Kup-Sze Choi, and Xi Zhou. "Corrigendum to “Domain-adaptive Message Passing Graph Neural Network” [Neural Netw. 164 (2023) 439–454]." Neural Networks 168 (November 2023): 337–38. http://dx.doi.org/10.1016/j.neunet.2023.09.026.

23

Du, Wei, Shifei Ding, Lili Guo, Jian Zhang, and Ling Ding. "Expressive Multi-Agent Communication via Identity-Aware Learning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 16 (2024): 17354–61. http://dx.doi.org/10.1609/aaai.v38i16.29683.

Abstract:
Information sharing through communication is essential for tackling complex multi-agent reinforcement learning tasks. Many existing multi-agent communication protocols can be viewed as instances of message passing graph neural networks (GNNs). However, due to the significantly limited expressive ability of the standard GNN method, the agent feature representations remain similar and indistinguishable even though the agents have different neighborhood structures. This further results in the homogenization of agent behaviors and reduces the capability to solve tasks effectively. In this paper, w
24

Cheng, Liang. "Research on Neural Networks Used in Parallel Computing." Applied Mechanics and Materials 411-414 (September 2013): 1998–2001. http://dx.doi.org/10.4028/www.scientific.net/amm.411-414.1998.

Abstract:
Parallel programming approaches have been the focus of research efforts due to an expected increase in the efficiency of iterative processing in a parallel computational environment. To this end, the parallel evolutionary asymmetric subset-hood product fuzzy-neural inference system has been developed to take advantage of parallelization in message passing. This paper studies the structure of the neural network and time-series forecasting with neural networks; the results could help us to obtain optimal solutions to problems of higher complexity.
25

Modesto, Cláudio, Rebecca Aben-Athar, Andrey Silva, Silvia Lins, Glauco Gonçalves, and Aldebaro Klautau. "Delay estimation based on multiple stage message passing with attention mechanism using a real network communication dataset." ITU Journal on Future and Evolving Technologies 5, no. 4 (2024): 465–77. https://doi.org/10.52953/rbne4256.

Abstract:
Modeling network communication environments with Graph Neural Networks (GNNs) has gained notoriety in recent years due to the capability of GNNs to generalize well for data defined over graphs. Hence, GNN models have been used to abstract complex relationships from network environments, creating the so-called digital twins, with the objective of predicting important quality of service metrics, such as delay, jitter, link utilization, and so on. However, most previous work has used synthetic data obtained with simulations. The research question posed by the "ITU Graph Neural Networking Challeng
26

Chen, Di, Andreas Doering, Shanshan Zhang, Jian Yang, Juergen Gall, and Bernt Schiele. "Keypoint Message Passing for Video-Based Person Re-identification." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 1 (2022): 239–47. http://dx.doi.org/10.1609/aaai.v36i1.19899.

Abstract:
Video-based person re-identification (re-ID) is an important technique in visual surveillance systems which aims to match video snippets of people captured by different cameras. Existing methods are mostly based on convolutional neural networks (CNNs), whose building blocks either process local neighbor pixels at a time, or, when 3D convolutions are used to model temporal information, suffer from the misalignment problem caused by person movement. In this paper, we propose to overcome the limitations of normal convolutions with a human-oriented graph method. Specifically, features located at p
27

Song, G., D. Fu, and X. Wu. "A Message Passing Neural Network Framework with Learnable PageRank for Author Impact Assessment." Advances in Electrical and Computer Engineering 25, no. 1 (2025): 11–20. https://doi.org/10.4316/aece.2025.01002.

28

Mai, Sijie, Shuangjia Zheng, Yuedong Yang, and Haifeng Hu. "Communicative Message Passing for Inductive Relation Reasoning." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 5 (2021): 4294–302. http://dx.doi.org/10.1609/aaai.v35i5.16554.

Abstract:
Relation prediction for knowledge graphs aims at predicting missing relationships between entities. Despite the importance of inductive relation prediction, most previous works are limited to a transductive setting and cannot process previously unseen entities. The recently proposed subgraph-based relation reasoning models provide alternatives to predict links inductively from the subgraph structure surrounding a candidate triplet. However, we observe that these methods often neglect the directed nature of the extracted subgraph and weaken the role of relation information in the subgraph modeli
29

Pathak, Yashaswi, Siddhartha Laghuvarapu, Sarvesh Mehta, and U. Deva Priyakumar. "Chemically Interpretable Graph Interaction Network for Prediction of Pharmacokinetic Properties of Drug-Like Molecules." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 01 (2020): 873–80. http://dx.doi.org/10.1609/aaai.v34i01.5433.

Abstract:
Solubility of drug molecules is related to pharmacokinetic properties such as absorption and distribution, which affects the amount of drug that is available in the body for its action. Computational or experimental evaluation of solvation free energies of drug-like molecules/solute that quantify solubilities is an arduous task and hence development of reliable computationally tractable models is sought after in drug discovery tasks in pharmaceutical industry. Here, we report a novel method based on graph neural network to predict solvation free energies. Previous studies considered only the s
30

Coupvent des Graviers, Marc-Emmanuel, Kevin Osanlou, Christophe Guettier, and Tristan Cazenave. "Hybrid Search with Graph Neural Networks for Constraint-Based Navigation Planning [Extended Abstract]." Proceedings of the International Symposium on Combinatorial Search 16, no. 1 (2023): 171–72. http://dx.doi.org/10.1609/socs.v16i1.27299.

Abstract:
Route planning for autonomous vehicles is a challenging task, especially in dense road networks with multiple delivery points. Additional external constraints can quickly add overhead to this already-difficult problem that often requires prompt, on-the-fly decisions. This work introduces a hybrid method combining machine learning and Constraint Programming (CP) to improve search performance. A new message passing-based graph neural network tailored to constraint solving and global search is defined. Once trained, a single neural network inference is enough to guide CP search while ensuring sol
31

Feng, Aosong, Chenyu You, Shiqiang Wang, and Leandros Tassiulas. "KerGNNs: Interpretable Graph Neural Networks with Graph Kernels." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (2022): 6614–22. http://dx.doi.org/10.1609/aaai.v36i6.20615.

Abstract:
Graph kernels are historically the most widely-used technique for graph classification tasks. However, these methods suffer from limited performance because of the hand-crafted combinatorial features of graphs. In recent years, graph neural networks (GNNs) have become the state-of-the-art method in downstream graph-related tasks due to their superior performance. Most GNNs are based on Message Passing Neural Network (MPNN) frameworks. However, recent studies show that MPNNs can not exceed the power of the Weisfeiler-Lehman (WL) algorithm in graph isomorphism test. To address the limitations of
32

Zaikis, Dimitrios, Christina Karalka, and Ioannis Vlahavas. "A Message Passing Approach to Biomedical Relation Classification for Drug–Drug Interactions." Applied Sciences 12, no. 21 (2022): 10987. http://dx.doi.org/10.3390/app122110987.

Abstract:
The task of extracting drug entities and possible interactions between drug pairings is known as Drug–Drug Interaction (DDI) extraction. Computer-assisted DDI extraction with Machine Learning techniques can help streamline this expensive and time-consuming process during the drug development cycle. Over the years, a variety of both traditional and Neural Network-based techniques for the extraction of DDIs have been proposed. Despite the introduction of several successful strategies, obtaining high classification accuracy is still an area where further progress can be made. In this work, we pre
33

Lee, Seunghyun, and Byung Cheol Song. "Interpretable Embedding Procedure Knowledge Transfer via Stacked Principal Component Analysis and Graph Neural Network." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 9 (2021): 8297–305. http://dx.doi.org/10.1609/aaai.v35i9.17009.

Abstract:
Knowledge distillation (KD) is one of the most useful techniques for light-weight neural networks. Although neural networks have a clear purpose of embedding datasets into the low-dimensional space, the existing knowledge was quite far from this purpose and provided only limited information. We argue that good knowledge should be able to interpret the embedding procedure. This paper proposes a method of generating interpretable embedding procedure (IEP) knowledge based on principal component analysis, and distilling it based on a message passing neural network. Experimental results show that t
34

Busk, Jonas, Peter Bjørn Jørgensen, Arghya Bhowmik, Mikkel N. Schmidt, Ole Winther, and Tejs Vegge. "Calibrated uncertainty for molecular property prediction using ensembles of message passing neural networks." Machine Learning: Science and Technology 3, no. 1 (2021): 015012. http://dx.doi.org/10.1088/2632-2153/ac3eb3.

Abstract:
Data-driven methods based on machine learning have the potential to accelerate computational analysis of atomic structures. In this context, reliable uncertainty estimates are important for assessing confidence in predictions and enabling decision making. However, machine learning models can produce badly calibrated uncertainty estimates and it is therefore crucial to detect and handle uncertainty carefully. In this work we extend a message passing neural network designed specifically for predicting properties of molecules and materials with a calibrated probabilistic predictive distr
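The ensembling idea in this abstract can be sketched in a few lines; the snippet below (a toy example written for this listing, with plain linear models standing in for the authors' message passing neural networks, and without their recalibration step) combines an ensemble into a mean prediction and a variance-based uncertainty estimate:

    # Toy ensemble-uncertainty sketch; the cited work uses MPNN ensembles plus recalibration.
    import numpy as np

    def ensemble_predict(models, x):
        """Mean prediction and across-ensemble variance as a simple uncertainty proxy."""
        preds = np.stack([m(x) for m in models])
        return preds.mean(axis=0), preds.var(axis=0)

    rng = np.random.default_rng(1)
    # Stand-in "models": independently initialised linear predictors of a molecular property.
    models = [lambda x, w=rng.normal(size=3): x @ w for _ in range(5)]
    x = np.array([1.0, 2.0, 3.0])
    mean, var = ensemble_predict(models, x)
    print(f"prediction = {mean:.3f}, uncertainty = {var:.3f}")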
35

Guo, Jie, Bin Song, Yuhao Chi, et al. "Deep neural network-aided Gaussian message passing detection for ultra-reliable low-latency communications." Future Generation Computer Systems 95 (June 2019): 629–38. http://dx.doi.org/10.1016/j.future.2019.01.041.

36

Jun, Zhang, Pan Dai, Zong Yang Kong, Ao Yang, Weifeng Shen, and Qin Wang. "Message passing neural network-based contribution analysis towards CO2 solubility prediction in ionic liquids." Separation and Purification Technology 364 (August 2025): 132361. https://doi.org/10.1016/j.seppur.2025.132361.

37

Toraman, Suat, and Bihter Daş. "Interaction Prediction on BACE-1 Inhibitors Data for Alzheimer Disease using Message Passing Neural Network." Firat University Journal of Experimental and Computational Engineering 4, no. 1 (2025): 72–84. https://doi.org/10.62520/fujece.1466902.

Abstract:
The medical condition that develops as memory loss, dementia, and a general decrease in cognitive functions due to the death of brain cells over time is called Alzheimer's disease. This disease can lead to a gradual decline in cognitive functions and eventually severe memory losses that affect a person's daily life. Although the exact mechanism that causes Alzheimer's disease is not fully understood, it has been associated with certain structural changes in the brain, such as plaques and neurofibrillary bundles. This study investigates the use of geometric deep learning methods for the discove
38

Pandit, Parthe, Mojtaba Sahraee-Ardakan, Sundeep Rangan, Philip Schniter, and Alyson K. Fletcher. "Matrix inference and estimation in multi-layer models*." Journal of Statistical Mechanics: Theory and Experiment 2021, no. 12 (2021): 124004. http://dx.doi.org/10.1088/1742-5468/ac3a75.

Abstract:
We consider the problem of estimating the input and hidden variables of a stochastic multi-layer neural network (NN) from an observation of the output. The hidden variables in each layer are represented as matrices with statistical interactions along both rows as well as columns. This problem applies to matrix imputation, signal recovery via deep generative prior models, multi-task and mixed regression, and learning certain classes of two-layer NNs. We extend a recently developed algorithm, multi-layer vector approximate message passing, for this matrix-valued inference problem. It is
39

Wang, Shike, Fan Xu, Yunyang Li, et al. "KG4SL: knowledge graph neural network for synthetic lethality prediction in human cancers." Bioinformatics 37, Supplement 1 (2021): i418–i425. http://dx.doi.org/10.1093/bioinformatics/btab271.

Abstract:
Motivation: Synthetic lethality (SL) is a promising gold mine for the discovery of anti-cancer drug targets. Wet-lab screening of SL pairs is afflicted with high cost, batch-effect, and off-target problems. Current computational methods for SL prediction include gene knock-out simulation, knowledge-based data mining and machine learning methods. Most of the existing methods tend to assume that SL pairs are independent of each other, without taking into account the shared biological mechanisms underlying the SL pairs. Although several methods have incorporated genomic and proteomic data
40

Alvarez-Gonzalez, Nurudin, Andreas Kaltenbrunner, and Vicenç Gómez. "Beyond Weisfeiler–Lehman with Local Ego-Network Encodings." Machine Learning and Knowledge Extraction 5, no. 4 (2023): 1234–65. http://dx.doi.org/10.3390/make5040063.

Abstract:
Identifying similar network structures is key to capturing graph isomorphisms and learning representations that exploit structural information encoded in graph data. This work shows that ego networks can produce a structural encoding scheme for arbitrary graphs with greater expressivity than the Weisfeiler–Lehman (1-WL) test. We introduce IGEL, a preprocessing step to produce features that augment node representations by encoding ego networks into sparse vectors that enrich message passing (MP) graph neural networks (GNNs) beyond 1-WL expressivity. We formally describe the relation between IGE
41

Zhang, Yaolong, Junfan Xia, and Bin Jiang. "REANN: A PyTorch-based end-to-end multi-functional deep neural network package for molecular, reactive, and periodic systems." Journal of Chemical Physics 156, no. 11 (2022): 114801. http://dx.doi.org/10.1063/5.0080766.

Abstract:
In this work, we present a general purpose deep neural network package for representing energies, forces, dipole moments, and polarizabilities of atomistic systems. This so-called recursively embedded atom neural network model takes advantages of both the physically inspired atomic descriptor based neural networks and the message-passing based neural networks. Implemented in the PyTorch framework, the training process is parallelized on both the central processing unit and the graphics processing unit with high efficiency and low memory in which all hyperparameters can be optimized automatical
42

Пушкарев, К. В., and В. Д. Кошур. "A hybrid heuristic parallel method of global optimization." Numerical Methods and Programming (Vychislitel'nye Metody i Programmirovanie), no. 2 (June 30, 2015): 242–55. http://dx.doi.org/10.26089/nummet.v16r224.

Abstract:
The problem of finding the global minimum of a continuous objective function of many variables over a domain in the form of a multidimensional parallelepiped is considered. To solve complex global optimization problems, a hybrid heuristic parallel method of global optimization is proposed, based on combining and hybridizing various methods and on multi-agent system technology. The method includes both new components (for example, a method of neural-network approximation of inverse dependencies that uses generalized regression neural networks (GRNN) to map the values of the objective function into
43

Shang, Bin, Yinliang Zhao, Jun Liu, and Di Wang. "Mixed Geometry Message and Trainable Convolutional Attention Network for Knowledge Graph Completion." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 8 (2024): 8966–74. http://dx.doi.org/10.1609/aaai.v38i8.28745.

Abstract:
Knowledge graph completion (KGC) aims to study the embedding representation to solve the incompleteness of knowledge graphs (KGs). Recently, graph convolutional networks (GCNs) and graph attention networks (GATs) have been widely used in KGC tasks by capturing neighbor information of entities. However, both GCN- and GAT-based KGC models have their limitations, and the best method is to analyze the neighbors of each entity (pre-validating), while this process is prohibitively expensive. Furthermore, the representation quality of the embeddings can affect the aggregation of neighbor information
44

Wu, Xueyi, Yuanyuan Xu, Wenjie Zhang, and Ying Zhang. "Billion-Scale Bipartite Graph Embedding: A Global-Local Induced Approach." Proceedings of the VLDB Endowment 17, no. 2 (2023): 175–83. http://dx.doi.org/10.14778/3626292.3626300.

Abstract:
Bipartite graph embedding (BGE), as the fundamental task in bipartite network analysis, is to map each node to compact low-dimensional vectors that preserve intrinsic properties. The existing solutions towards BGE fall into two groups: metric-based methods and graph neural network-based (GNN-based) methods. The latter typically generates higher-quality embeddings than the former due to the strong representation ability of deep learning. Nevertheless, none of the existing GNN-based methods can handle billion-scale bipartite graphs due to the expensive message passing or complex modelling choice
45

Cheng, Yuxiao, Lianglong Li, Tingxiong Xiao, et al. "CUTS+: High-Dimensional Causal Discovery from Irregular Time-Series." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 10 (2024): 11525–33. http://dx.doi.org/10.1609/aaai.v38i10.29034.

Abstract:
Causal discovery in time-series is a fundamental problem in the machine learning community, enabling causal reasoning and decision-making in complex scenarios. Recently, researchers have successfully discovered causality by combining neural networks with Granger causality, but their performance degrades considerably when encountering high-dimensional data because of highly redundant network designs and huge causal graphs. Moreover, the missing entries in the observations further hamper the causal structural learning. To overcome these limitations, we propose CUTS+, which is built on the Granger-causali
46

Venkateswararao, Pinagadi, and S. Murugavalli. "Unconstrained handwriting recoganization based on neural network using connectionist temporal classification token passing algorithm." International Journal of Engineering & Technology 7, no. 1.9 (2018): 211. http://dx.doi.org/10.14419/ijet.v7i1.9.9825.

Abstract:
Recognition of human handwriting offers a new way to improve the computer interface with the human, and this process is very useful for documents. Keyword spotting refers to the spontaneous recognition of handwritten text, letters, and scripts from historical handwritten books, and the procedure of recovering all instances of a known keyword from an article. With a specific end goal to choose new components, this paper proposes "a repetitive neural system manually written acknowledgment framework" for keyword spotting. The keyword spotting is performed using an adaptation of the conne
47

Zhang, Chaoyi, Yang Song, Lina Yao, and Weidong Cai. "Shape-Oriented Convolution Neural Network for Point Cloud Analysis." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (2020): 12773–80. http://dx.doi.org/10.1609/aaai.v34i07.6972.

Abstract:
Point cloud is a principal data structure adopted for 3D geometric information encoding. Unlike other conventional visual data, such as images and videos, these irregular points describe the complex shape features of 3D objects, which makes shape feature learning an essential component of point cloud analysis. To this end, a shape-oriented message passing scheme dubbed ShapeConv is proposed to focus on the representation learning of the underlying shape formed by each local neighboring point. Despite this intra-shape relationship learning, ShapeConv is also designed to incorporate the contextu
48

Tedeschini, Bernardo Camajori, Mattia Brambilla, and Monica Nicoli. "Message Passing Neural Network Versus Message Passing Algorithm for Cooperative Positioning." IEEE Transactions on Cognitive Communications and Networking, 2023, 1. http://dx.doi.org/10.1109/tccn.2023.3307953.

49

Zhong, Zhiqiang, Cheng-Te Li, and Jun Pang. "Hierarchical message-passing graph neural networks." Data Mining and Knowledge Discovery, November 17, 2022. http://dx.doi.org/10.1007/s10618-022-00890-9.

Abstract:
Graph Neural Networks (GNNs) have become a prominent approach to machine learning with graphs and have been increasingly applied in a multitude of domains. Nevertheless, since most existing GNN models are based on flat message-passing mechanisms, two limitations need to be tackled: (i) they are costly in encoding long-range information spanning the graph structure; (ii) they fail to encode features in the high-order neighbourhood in the graphs as they only perform information aggregation across the observed edges in the original graph. To deal with these two issues, we propose a
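Limitation (i) in this abstract can be made concrete with a small sketch (written for this listing, not taken from the cited paper): under flat message passing, k rounds can only pull information from nodes at most k hops away, so long-range dependencies require many rounds.

    # Receptive field of a node after k rounds of flat message passing (illustrative only).
    from collections import deque

    def receptive_field(adj, node, rounds):
        """All nodes whose features can reach `node` within `rounds` hops."""
        seen, frontier = {node}, deque([(node, 0)])
        while frontier:
            v, depth = frontier.popleft()
            if depth == rounds:
                continue
            for u in adj[v]:
                if u not in seen:
                    seen.add(u)
                    frontier.append((u, depth + 1))
        return seen

    # Path graph 0-1-2-...-9: after 2 rounds, node 0 has only seen nodes 0, 1 and 2.
    path = {i: [j for j in (i - 1, i + 1) if 0 <= j <= 9] for i in range(10)}
    print(sorted(receptive_field(path, 0, 2)))  # [0, 1, 2]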
50

Xu, Lei, Zhen-Yu He, Kai Wang, Chang-Dong Wang, and Shu-Qiang Huang. "Explicit Message-Passing Heterogeneous Graph Neural Network." IEEE Transactions on Knowledge and Data Engineering, 2022, 1–13. http://dx.doi.org/10.1109/tkde.2022.3185128.
