Journal articles on the topic 'Graph transformer'

Consult the top 50 journal articles for your research on the topic 'Graph transformer.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Nguyen, Hoang D., Xuan-Son Vu, and Duc-Trong Le. "Modular Graph Transformer Networks for Multi-Label Image Classification." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 10 (2021): 9092–100. http://dx.doi.org/10.1609/aaai.v35i10.17098.

Abstract:
With the recent advances in graph neural networks, there is a rising number of studies on graph-based multi-label classification with the consideration of object dependencies within visual data. Nevertheless, graph representations can become indistinguishable due to the complex nature of label relationships. We propose a multi-label image classification framework based on graph transformer networks to fully exploit inter-label interactions. The paper presents a modular learning scheme to enhance the classification performance by segregating the computational graph into multiple sub-graphs base
2

Lou, Wei, Guanbin Li, Xiang Wan, and Haofeng Li. "Cell Graph Transformer for Nuclei Classification." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 4 (2024): 3873–81. http://dx.doi.org/10.1609/aaai.v38i4.28179.

Abstract:
Nuclei classification is a critical step in computer-aided diagnosis with histopathology images. In the past, various methods have employed graph neural networks (GNN) to analyze cell graphs that model inter-cell relationships by considering nuclei as vertices. However, they are limited by the GNN mechanism that only passes messages among local nodes via fixed edges. To address the issue, we develop a cell graph transformer (CGT) that treats nodes and edges as input tokens to enable learnable adjacency and information exchange among all nodes. Nevertheless, training the transformer with a cell
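
The token-based idea this abstract describes can be made concrete with a minimal sketch: project node and edge features into a shared space and feed them to a standard transformer encoder, so attention (rather than fixed edges) decides which cells exchange information. This is an illustrative PyTorch sketch with made-up shapes, not the authors' CGT implementation.

```python
# Minimal sketch of "nodes and edges as transformer tokens"
# (hypothetical shapes; not the CGT model from the paper).
import torch
import torch.nn as nn

d_model = 64
node_feats = torch.randn(100, 32)   # 100 nuclei with 32-dim features (made up)
edge_feats = torch.randn(250, 16)   # 250 candidate edges, 16-dim features (made up)

# Project both token types into one shared embedding space and concatenate.
tokens = torch.cat(
    [nn.Linear(32, d_model)(node_feats), nn.Linear(16, d_model)(edge_feats)], dim=0
)

encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2
)
out = encoder(tokens.unsqueeze(0))                  # (1, 350, 64); all-pairs attention
node_logits = nn.Linear(d_model, 5)(out[0, :100])   # classify the node tokens only
```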
3

Hourrane, Oumaima, and El Habib Benlahmar. "Graph transformer for cross-lingual plagiarism detection." IAES International Journal of Artificial Intelligence (IJ-AI) 11, no. 3 (2022): 905–15. https://doi.org/10.11591/ijai.v11.i3.pp905-915.

Abstract:
The existence of vast amounts of multilingual textual data on the internet leads to cross-lingual plagiarism which becomes a serious issue in different fields such as education, science, and literature. Current cross-lingual plagiarism detection approaches usually employ syntactic and lexical properties, external machine translation systems, or finding similarities within a multilingual set of text documents. However, most of these methods are conceived for literal plagiarism such as copy and paste, and their performance is diminished when handling complex cases of plagiarism including paraphr
4

Hou, Guoqiang, Qiwen Yu, Fan Chen, and Guang Chen. "Directed Knowledge Graph Embedding Using a Hybrid Architecture of Spatial and Spectral GNNs." Mathematics 12, no. 23 (2024): 3689. http://dx.doi.org/10.3390/math12233689.

Abstract:
Knowledge graph embedding has been identified as an effective method for node-level classification tasks in directed graphs, the objective of which is to ensure that nodes of different categories are embedded as far apart as possible in the feature space. The directed graph is a general representation of unstructured knowledge graphs. However, existing methods lack the ability to simultaneously approximate high-order filters and globally pay attention to the task-related connectivity between distant nodes for directed graphs. To address this limitation, a directed spectral graph transformer (D
5

AlBadani, Barakat, Ronghua Shi, Jian Dong, Raeed Al-Sabri, and Oloulade Babatounde Moctard. "Transformer-Based Graph Convolutional Network for Sentiment Analysis." Applied Sciences 12, no. 3 (2022): 1316. http://dx.doi.org/10.3390/app12031316.

Abstract:
Sentiment Analysis is an essential research topic in the field of natural language processing (NLP) and has attracted the attention of many researchers in the last few years. Recently, deep neural network (DNN) models have been used for sentiment analysis tasks, achieving promising results. Although these models can analyze sequences of arbitrary length, utilizing them in the feature extraction layer of a DNN increases the dimensionality of the feature space. More recently, graph neural networks (GNNs) have achieved a promising performance in different NLP tasks. However, previous models canno
6

Zhou, Zhe-Wei, Wen-Ren Jong, Yu-Hung Ting, Shia-Chung Chen, and Ming-Chien Chiu. "Retrieval of Injection Molding Industrial Knowledge Graph Based on Transformer and BERT." Applied Sciences 13, no. 11 (2023): 6687. http://dx.doi.org/10.3390/app13116687.

Abstract:
Knowledge graphs play an important role in the field of knowledge management by providing a simple and clear way of expressing complex data relationships. Injection molding is a highly knowledge-intensive technology, and in our previous research, we have used knowledge graphs to manage and express relevant knowledge, gradually establishing an injection molding industrial knowledge graph. However, the current way of retrieving knowledge graphs is still mainly through programming, which results in many difficulties for users without programming backgrounds when it comes to searching a graph. Thi
7

Hourrane, Oumaima, and El Habib Benlahmar. "Graph transformer for cross-lingual plagiarism detection." IAES International Journal of Artificial Intelligence (IJ-AI) 11, no. 3 (2022): 905. http://dx.doi.org/10.11591/ijai.v11.i3.pp905-915.

Abstract:
The existence of vast amounts of multilingual textual data on the internet leads to cross-lingual plagiarism which becomes a serious issue in different fields such as education, science, and literature. Current cross-lingual plagiarism detection approaches usually employ syntactic and lexical properties, external machine translation systems, or finding similarities within a multilingual set of text documents. However, most of these methods are conceived for literal plagiarism such as copy and paste, and their performance is diminished when handling complex cases of pla
8

Vaghani, Dev. "An Approch for Representation of Node Using Graph Transformer Networks." International Journal for Research in Applied Science and Engineering Technology 11, no. 1 (2023): 27–37. http://dx.doi.org/10.22214/ijraset.2023.48485.

Abstract:
In representation learning on graphs, graph neural networks (GNNs) have been widely employed and have attained cutting-edge performance in tasks like node categorization and link prediction. However, the majority of GNNs now in use are made to learn node representations on homogeneous and fixed graphs. The limits are particularly significant when learning representations on a network that has been incorrectly described or one that is heterogeneous, or made up of different kinds of nodes and edges. This study proposes Graph Transformer Networks (GTNs), which may generate new network st
9

Wang, Tianming, Xiaojun Wan, and Hanqi Jin. "AMR-To-Text Generation with Graph Transformer." Transactions of the Association for Computational Linguistics 8 (July 2020): 19–33. http://dx.doi.org/10.1162/tacl_a_00297.

Abstract:
Abstract meaning representation (AMR)-to-text generation is the challenging task of generating natural language texts from AMR graphs, where nodes represent concepts and edges denote relations. The current state-of-the-art methods use graph-to-sequence models; however, they still cannot significantly outperform the previous sequence-to-sequence models or statistical approaches. In this paper, we propose a novel graph-to-sequence model (Graph Transformer) to address this task. The model directly encodes the AMR graphs and learns the node representations. A pairwise interaction function is used
10

Liu, Xiangwen, Shengyu Mao, Xiaohan Wang, and Jiajun Bu. "Generative Transformer with Knowledge-Guided Decoding for Academic Knowledge Graph Completion." Mathematics 11, no. 5 (2023): 1073. http://dx.doi.org/10.3390/math11051073.

Abstract:
Academic knowledge graphs are essential resources and can be beneficial in widespread real-world applications. Most of the existing academic knowledge graphs are far from completion; thus, knowledge graph completion—the task of extending a knowledge graph with missing entities and relations—attracts many researchers. Most existing methods utilize low-dimensional embeddings to represent entities and relations and follow the discrimination paradigm for link prediction. However, discrimination approaches may suffer from the scaling issue during inference with large-scale academic knowledge graphs
11

Mohammadshahi, Alireza, and James Henderson. "Recursive Non-Autoregressive Graph-to-Graph Transformer for Dependency Parsing with Iterative Refinement." Transactions of the Association for Computational Linguistics 9 (February 2021): 120–38. http://dx.doi.org/10.1162/tacl_a_00358.

Abstract:
We propose the Recursive Non-autoregressive Graph-to-Graph Transformer architecture (RNGTr) for the iterative refinement of arbitrary graphs through the recursive application of a non-autoregressive Graph-to-Graph Transformer and apply it to syntactic dependency parsing. We demonstrate the power and effectiveness of RNGTr on several dependency corpora, using a refinement model pre-trained with BERT. We also introduce Syntactic Transformer (SynTr), a non-recursive parser similar to our refinement model. RNGTr can improve the accuracy of a variety of initial parsers on 13 languages from the Univ
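
In control-flow terms, the recursive refinement the abstract describes reduces to re-running a parser conditioned on its own previous output until the predicted graph stops changing. A toy sketch, with a hypothetical `parse_step` standing in for the non-autoregressive graph-to-graph model:

```python
# Sketch of iterative graph refinement: re-predict all arcs conditioned on the
# previous prediction until a fixed point or an iteration cap is reached.
# `parse_step` is a hypothetical stand-in, illustrating only the control flow.
def parse_step(tokens, prev_heads):
    # A real model would attend over the tokens *and* the previous graph;
    # this toy version just attaches every token to the preceding one.
    return [0] + list(range(len(tokens) - 1))

def refine(tokens, init_heads, max_iters=3):
    heads = init_heads
    for _ in range(max_iters):
        new_heads = parse_step(tokens, heads)
        if new_heads == heads:      # converged: the prediction is a fixed point
            break
        heads = new_heads
    return heads

print(refine(["The", "cat", "sat"], [0, 0, 0]))
```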
12

Wu, Jianshe, Yaolin Liu, Yuqian Wang, Lingjie Zhang, and Jingyi Ding. "HGphormer: Heterophilic Graph Transformer." Knowledge-Based Systems 326 (September 2025): 114031. https://doi.org/10.1016/j.knosys.2025.114031.

13

Cai, Deng, and Wai Lam. "Graph Transformer for Graph-to-Sequence Learning." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (2020): 7464–71. http://dx.doi.org/10.1609/aaai.v34i05.6243.

Abstract:
The dominant graph-to-sequence transduction models employ graph neural networks for graph representation learning, where the structural information is reflected by the receptive field of neurons. Unlike graph neural networks that restrict the information exchange between immediate neighborhood, we propose a new model, known as Graph Transformer, that uses explicit relation encoding and allows direct communication between two distant nodes. It provides a more efficient way for global graph structure modeling. Experiments on the applications of text generation from Abstract Meaning Representatio
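
The core mechanism here, pairwise attention with an explicit relation term so that any two nodes communicate directly, can be sketched in a few lines of NumPy. The bias matrix below is random and purely illustrative; in the paper it would come from learned relation encodings:

```python
# Sketch: attention between all node pairs with an additive relation bias,
# so distant nodes communicate directly. Shapes and values are hypothetical.
import numpy as np

n, d = 4, 8
rng = np.random.default_rng(0)
Q = rng.normal(size=(n, d))                 # node queries
K = rng.normal(size=(n, d))                 # node keys
rel_bias = rng.normal(size=(n, n))          # stand-in for learned pairwise
                                            # relation encodings

scores = Q @ K.T / np.sqrt(d) + rel_bias    # content term + relation term
attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
attn /= attn.sum(axis=-1, keepdims=True)    # row-wise softmax over all nodes
print(attn.shape)                           # (4, 4): every node attends to every node
```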
14

Liang, Jianqing, Xinkai Wei, Min Chen, Zhiqiang Wang, and Jiye Liang. "GNN-Transformer Cooperative Architecture for Trustworthy Graph Contrastive Learning." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 18 (2025): 18667–75. https://doi.org/10.1609/aaai.v39i18.34054.

Abstract:
Graph contrastive learning (GCL) has become a hot topic in the field of graph representation learning. In contrast to traditional supervised learning relying on a large number of labels, GCL exploits augmentation techniques to generate multiple views and positive/negative pairs, both of which greatly influence the performance. Unfortunately, commonly used random augmentations may disturb the underlying semantics of graphs. Moreover, traditional GNNs, a type of widely employed encoders in GCL, are inevitably confronted with over-smoothing and over-squashing problems. To address these issues, we
15

Jimale, Elias Lemuye, Wenyu Chen, Mugahed A. Al-antari, et al. "Graph-to-Text Generation with Bidirectional Dual Cross-Attention and Concatenation." Mathematics 13, no. 6 (2025): 935. https://doi.org/10.3390/math13060935.

Abstract:
Graph-to-text generation (G2T) involves converting structured graph data into natural language text, a task made challenging by the need for encoders to capture the entities and their relationships within the graph effectively. While transformer-based encoders have advanced natural language processing, their reliance on linearized data often obscures the complex interrelationships in graph structures, leading to structural loss. Conversely, graph attention networks excel at capturing graph structures but lack the pre-training advantages of transformers. To leverage the strengths of both modali
16

Zheng, Jianhan, Shengqing Gui, and Haomin Zhang. "Transformer Vibration Analysis Based on Double Branch Convolutional Neural Network." Journal of Physics: Conference Series 2503, no. 1 (2023): 012092. http://dx.doi.org/10.1088/1742-6596/2503/1/012092.

Abstract:
The power transformer is one of the important pieces of equipment in the power grid system, and its normal operation is related to the safety and reliability of the whole power system. There are many factors influencing transformer vibration in operation, and its characteristics are complex, so it is difficult to be directly used for transformer state analysis. This paper proposes a method for vibration signal analysis based on a continuous wavelet time-frequency graph. The segmented samples of transformer vibration signals are selected by the time-domain sample segmentation method, a
17

Paik, Incheon, and Jun-Wei Wang. "Improving Text-to-Code Generation with Features of Code Graph on GPT-2." Electronics 10, no. 21 (2021): 2706. http://dx.doi.org/10.3390/electronics10212706.

Abstract:
Code generation, as a very hot application area of deep learning models for text, consists of two different fields: code-to-code and text-to-code. A recent approach, GraphCodeBERT uses code graph, which is called data flow, and showed good performance improvement. The base model architecture of it is bidirectional encoder representations from transformers (BERT), which uses the encoder part of a transformer. On the other hand, generative pre-trained transformer (GPT)—another multiple transformer architecture—uses the decoder part and shows great performance in the multilayer perceptron model.
18

Luo, Renqiang, Huafei Huang, Ivan Lee, Chengpei Xu, Jianzhong Qi, and Feng Xia. "FairGP: A Scalable and Fair Graph Transformer Using Graph Partitioning." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 12 (2025): 12319–27. https://doi.org/10.1609/aaai.v39i12.33342.

Abstract:
Recent studies have highlighted significant fairness issues in Graph Transformer (GT) models, particularly against subgroups defined by sensitive features. Additionally, GTs are computationally intensive and memory-demanding, limiting their application to large-scale graphs. Our experiments demonstrate that graph partitioning can enhance the fairness of GT models while reducing computational complexity. To understand this improvement, we conducted a theoretical investigation into the root causes of fairness issues in GT models. We found that the sensitive features of higher-order nodes disprop
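
The complexity argument for partitioning is easy to see in a sketch: restricting attention to within partitions replaces one N×N score matrix with several small ones. Greedy modularity below is only a stand-in partitioner, not the method used by FairGP:

```python
# Sketch: partition a graph, then restrict quadratic attention to within each
# partition, shrinking cost from O(N^2) to a sum of per-partition squares.
import networkx as nx
from networkx.algorithms import community

G = nx.karate_club_graph()
parts = community.greedy_modularity_communities(G)  # stand-in partitioner

full_cost = G.number_of_nodes() ** 2
part_cost = sum(len(p) ** 2 for p in parts)
print(f"attention pairs: full={full_cost}, partitioned={part_cost}")
```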
19

Wang, Yuxuan, Zhouyuan Zhang, Shu Pi, Haishan Zhang, and Jiatian Pi. "Dual-Gated Graph Convolutional Recurrent Unit with Integrated Graph Learning (DG3L): A Novel Recurrent Network Architecture with Dynamic Graph Learning for Spatio-Temporal Predictions." Entropy 27, no. 2 (2025): 99. https://doi.org/10.3390/e27020099.

Abstract:
Spatio-temporal prediction is crucial in intelligent transportation systems (ITS) to enhance operational efficiency and safety. Although Transformer-based models have significantly advanced spatio-temporal prediction performance, recent research underscores the importance of learning dynamic spatio-temporal dependencies for these tasks. This paper introduces the Dual-Gated Graph Convolutional Recurrent Unit with Integrated Graph Learning (DG3L), a framework specifically designed to address the complex demands of spatio-temporal prediction. The DG3L model includes a memory-based graph learning
20

Chen, Ruoyu, Yan Li, Yuru Jiang, Bochen Sun, Jingqi Wang, and Zhen Li. "Fact-Aware Generative Text Summarization with Dependency Graphs." Electronics 13, no. 16 (2024): 3230. http://dx.doi.org/10.3390/electronics13163230.

Abstract:
Generative text summaries often suffer from factual inconsistencies, where the summary deviates from the original text. This significantly reduces their usefulness. To address this issue, we propose a novel method for improving the factual accuracy of Chinese summaries by leveraging dependency graphs. Our approach involves analyzing the input text to build a dependency graph. This graph, along with the original text, is then processed by separate models: a Relational Graph Attention Neural Network for the dependency graph and a Transformer model for the text itself. Finally, a Transformer deco
21

Lai, Xin, Yang Liu, Rui Qian, Yong Lin, and Qiwei Ye. "Deeper Exploiting Graph Structure Information by Discrete Ricci Curvature in a Graph Transformer." Entropy 25, no. 6 (2023): 885. http://dx.doi.org/10.3390/e25060885.

Abstract:
Graph-structured data, operating as an abstraction of data containing nodes and interactions between nodes, is pervasive in the real world. There are numerous ways dedicated to extract graph structure information explicitly or implicitly, but whether it has been adequately exploited remains an unanswered question. This work goes deeper by heuristically incorporating a geometric descriptor, the discrete Ricci curvature (DRC), in order to uncover more graph structure information. We present a curvature-based topology-aware graph transformer, termed Curvphormer. This work expands the expressivene
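
For intuition, the simplest combinatorial form of the Forman-Ricci curvature (one member of the discrete Ricci curvature family the abstract refers to) can be computed per edge directly from node degrees; strongly negative values flag bridge-like edges between hubs. A descriptor of this kind could then be attached to edges as a structural feature:

```python
# Sketch: simplest combinatorial Forman-Ricci curvature of an edge (u, v)
# in an unweighted graph, F(u, v) = 4 - deg(u) - deg(v).
import networkx as nx

G = nx.karate_club_graph()
forman = {(u, v): 4 - G.degree(u) - G.degree(v) for u, v in G.edges()}

# Strongly negative curvature flags "bridge-like" edges between hubs.
u, v = min(forman, key=forman.get)
print(f"most negatively curved edge: ({u}, {v}), F = {forman[(u, v)]}")
```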
22

Chen, Shuo, Ke Xu, Xinghao Jiang, and Tanfeng Sun. "Pyramid Spatial-Temporal Graph Transformer for Skeleton-Based Action Recognition." Applied Sciences 12, no. 18 (2022): 9229. http://dx.doi.org/10.3390/app12189229.

Abstract:
Although graph convolutional networks (GCNs) have shown their demonstrated ability in skeleton-based action recognition, both the spatial and the temporal connections rely too much on the predefined skeleton graph, which imposes a fixed prior knowledge for the aggregation of high-level semantic information via the graph-based convolution. Some previous GCN-based works introduced dynamic topology (vertex connection relationships) to capture flexible spatial correlations from different actions. Then, the local relationships from both the spatial and temporal domains can be captured by diverse GC
23

Yin, Shuo, and Guoqiang Zhong. "TextGT: A Double-View Graph Transformer on Text for Aspect-Based Sentiment Analysis." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 17 (2024): 19404–12. http://dx.doi.org/10.1609/aaai.v38i17.29911.

Abstract:
Aspect-based sentiment analysis (ABSA) is aimed at predicting the sentiment polarities of the aspects included in a sentence instead of the whole sentence itself, and is a fine-grained learning task compared to the conventional text classification. In recent years, on account of the ability to model the connectivity relationships between the words in one sentence, graph neural networks have been more and more popular to handle the natural language processing tasks, and meanwhile many works emerge for the ABSA task. However, most of the works utilizing graph convolution easily incur the over-sm
24

Pu, Shilin, Liang Chu, Jincheng Hu, Shibo Li, Jihao Li, and Wen Sun. "SGGformer: Shifted Graph Convolutional Graph-Transformer for Traffic Prediction." Sensors 22, no. 22 (2022): 9024. http://dx.doi.org/10.3390/s22229024.

Abstract:
Accurate traffic prediction is significant in intelligent cities’ safe and stable development. However, due to the complex spatiotemporal correlation of traffic flow data, establishing an accurate traffic prediction model is still challenging. Aiming to meet the challenge, this paper proposes SGGformer, an advanced traffic grade prediction model which combines a shifted window operation, a multi-channel graph convolution network, and a graph Transformer network. Firstly, the shifted window operation is used for coarsening the time series data, thus, the computational complexity can be reduced.
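
The shifted-window coarsening step is straightforward to sketch: average the series over fixed windows, then again over windows offset by half a window, so later layers see shorter, overlapping summaries. Window size and data below are hypothetical:

```python
# Sketch: coarsen a traffic time series with windowed means, once aligned and
# once shifted by half a window, cutting sequence length for later layers.
import numpy as np

x = np.arange(24, dtype=float)   # 24 time steps of toy traffic data
w = 4                            # window size (made up)

aligned = x.reshape(-1, w).mean(axis=1)                  # windows [0:4], [4:8], ...
shifted = x[w // 2:-w // 2].reshape(-1, w).mean(axis=1)  # windows [2:6], [6:10], ...
print(aligned.shape, shifted.shape)                      # (6,) (5,)
```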
25

Zhu, Wenhao, Yujun Xie, Qun Huang, et al. "Graph Transformer Collaborative Filtering Method for Multi-Behavior Recommendations." Mathematics 10, no. 16 (2022): 2956. http://dx.doi.org/10.3390/math10162956.

Abstract:
Graph convolutional networks are widely used in recommendation tasks owing to their ability to learn user and item embeddings using collaborative signals from high-order neighborhoods. Most of the graph convolutional recommendation tasks in existing studies have specialized in modeling a single type of user–item interaction preference. Meanwhile, graph-convolution-network-based recommendation models are prone to over-smoothing problems when stacking increased numbers of layers. Therefore, in this study we propose a multi-behavior recommendation method based on graph transformer collaborative f
26

Oh, Dongryul, Sujin Kang, Heejin Kim, and Dongsuk Oh. "Enhancing Small Language Models for Graph Tasks Through Graph Encoder Integration." Applied Sciences 15, no. 5 (2025): 2418. https://doi.org/10.3390/app15052418.

Abstract:
Small language models (SLMs) are increasingly utilized for on-device applications due to their ability to ensure user privacy, reduce inference latency, and operate independently of cloud infrastructure. However, their performance is often limited when processing complex data structures such as graphs, which are ubiquitous in real-world datasets like social networks and system interactions. Graphs inherently encode intricate structural dependencies, requiring models to effectively capture both local and global relationships. Traditional language models, designed primarily for text data, strugg
27

Lin, Hui, Zhiheng Ma, Xiaopeng Hong, Qinnan Shangguan, and Deyu Meng. "Gramformer: Learning Crowd Counting via Graph-Modulated Transformer." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 4 (2024): 3395–403. http://dx.doi.org/10.1609/aaai.v38i4.28126.

Abstract:
Transformer has been popular in recent crowd counting work since it breaks the limited receptive field of traditional CNNs. However, since crowd images always contain a large number of similar patches, the self-attention mechanism in Transformer tends to find a homogenized solution where the attention maps of almost all patches are identical. In this paper, we address this problem by proposing Gramformer: a graph-modulated transformer to enhance the network by adjusting the attention and input node features respectively on the basis of two different types of graphs. Firstly, an attention graph
28

Wang, Qianqian, Haiming Xu, Zihao Zhang, Wei Feng, and Quanxue Gao. "Deep Multi-modal Graph Clustering via Graph Transformer Network." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 8 (2025): 7835–43. https://doi.org/10.1609/aaai.v39i8.32844.

Abstract:
Current deep multi-modal graph clustering methods primarily rely on Graph Neural Network (GNN) to fully exploit attribute features and graph structures, including message propagation and low-dimensional feature embedding. However, these methods lack further exploration of graph structural information, such as the relationship between nodes and shortest paths. Additionally, they may not sufficiently mine complementary information among multi-modal graph data. To address these issues, we propose a novel Deep Multi-modal Graph Clustering via Graph Transformer Network method, called DMGC-GTN. This
29

Qiu, Xing, Guang Cheng, Weizhou Zhu, Dandan Niu, and Nan Fu. "Dual-Channel Interactive Graph Transformer for Traffic Classification with Message-Aware Flow Representation." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 1 (2025): 685–93. https://doi.org/10.1609/aaai.v39i1.32050.

Abstract:
Traffic classification is crucial for network management and security. Recently, deep learning-based methods have demonstrated good performance in traffic classification. However, they primarily capture features from raw packet bytes, overlooking the significance of inter-packet correlations within flows from a global perspective. Additionally, effectively handling both packet-length and temporal information, while extracting the structural relationships from a graph into the model, remains a challenge for enhancing the performance of traffic prediction. In this paper, we propose DigTraffic, a
30

Xu, Xiaolong, Yibo Zhou, Haolong Xiang, et al. "NLGT: Neighborhood-based and Label-enhanced Graph Transformer Framework for Node Classification." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 12 (2025): 12954–62. https://doi.org/10.1609/aaai.v39i12.33413.

Abstract:
Graph Neural Networks (GNNs) are widely applied on graph-level tasks, such as node classification, link prediction and graph generation. Existing GNNs mostly adopt a message-passing mechanism to aggregate node information with their neighbors, which often makes node information similar after rounds of aggregations and leads to oversmoothing. Although recent works have made improvements by combining different message aggregation methods or introducing semantic encodings as priors, these message-passing based GNNs still fail to combat oversmoothing after multiple iterations of node aggregation.
31

Su, Xiaorui, Pengwei Hu, Zhu-Hong You, Philip S. Yu, and Lun Hu. "Dual-Channel Learning Framework for Drug-Drug Interaction Prediction via Relation-Aware Heterogeneous Graph Transformer." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 1 (2024): 249–56. http://dx.doi.org/10.1609/aaai.v38i1.27777.

Abstract:
Identifying novel drug-drug interactions (DDIs) is a crucial task in pharmacology, as the interference between pharmacological substances can pose serious medical risks. In recent years, several network-based techniques have emerged for predicting DDIs. However, they primarily focus on local structures within DDI-related networks, often overlooking the significance of indirect connections between pairwise drug nodes from a global perspective. Additionally, effectively handling heterogeneous information present in both biomedical knowledge graphs and drug molecular graphs remains a challenge fo
32

Wang, Yanying. "Enhancing Robot Learning with Transformer-based Morphology Modeling." Journal of Physics: Conference Series 2816, no. 1 (2024): 012100. http://dx.doi.org/10.1088/1742-6596/2816/1/012100.

Abstract:
The transformer model has made significant progress in various areas through large-scale training. In contrast, the traditional robot performs a single task, and there is an issue with migrating the strategic model. In this study, a Robot Morphology Learning (RML) method is proposed to enhance efficiency and generalization performance by learning multiple tasks in a transformer model. RML constructs the robot’s morphology as a graph and utilizes a graph neural network to handle graphs of arbitrary connections and sizes, addressing the disparity in state and action space dimensions. RM
33

Li, Lu, Jiale Liu, Xingyu Ji, Maojun Wang, and Zeyu Zhang. "Self-Explainable Graph Transformer for Link Sign Prediction." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 11 (2025): 12084–92. https://doi.org/10.1609/aaai.v39i11.33316.

Abstract:
Signed Graph Neural Networks (SGNNs) have been shown to be effective in analyzing complex patterns in real-world situations where positive and negative links coexist. However, SGNN models suffer from poor explainability, which limit their adoptions in critical scenarios that require understanding the rationale behind predictions. To the best of our knowledge, there is currently no research work on the explainability of the SGNN models. Our goal is to address the explainability of decision-making for the downstream task of link sign prediction specific to signed graph neural networks. Since pos
34

Park, Jinyoung, Hyeong Kyu Choi, Juyeon Ko, et al. "Relation-Aware Language-Graph Transformer for Question Answering." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 11 (2023): 13457–64. http://dx.doi.org/10.1609/aaai.v37i11.26578.

Abstract:
Question Answering (QA) is a task that entails reasoning over natural language contexts, and many relevant works augment language models (LMs) with graph neural networks (GNNs) to encode the Knowledge Graph (KG) information. However, most existing GNN-based modules for QA do not take advantage of rich relational information of KGs and depend on limited information interaction between the LM and the KG. To address these issues, we propose Question Answering Transformer (QAT), which is designed to jointly reason over language and graphs with respect to entity relations in a unified manner. Speci
35

Chen, Yutong, Xia Li, Yang Liu, and Tiangui Hu. "Integrating Transformer Architecture and Householder Transformations for Enhanced Temporal Knowledge Graph Embedding in DuaTHP." Symmetry 17, no. 2 (2025): 173. https://doi.org/10.3390/sym17020173.

Abstract:
The rapid advancement of knowledge graph (KG) technology has led to the emergence of temporal knowledge graphs (TKGs), which represent dynamic relationships over time. Temporal knowledge graph embedding (TKGE) techniques are commonly employed for link prediction and knowledge graph completion, among other tasks. However, existing TKGE models mainly rely on basic arithmetic operations, such as addition, subtraction, and multiplication, which limits their capacity to capture complex, non-linear relationships between entities. Moreover, many neural network-based TKGE models focus on static entiti
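
A Householder transformation, which the title refers to, is the reflection H = I − 2vvᵀ/(vᵀv); it is orthogonal, so applying it to an entity embedding changes direction but preserves norm. A NumPy sketch with hypothetical vectors, not the paper's model:

```python
# Sketch: a Householder reflection H = I - 2 v v^T / (v^T v) is orthogonal,
# so it transforms an entity embedding without changing its norm.
import numpy as np

rng = np.random.default_rng(0)
v = rng.normal(size=8)                      # relation-specific reflection vector
H = np.eye(8) - 2.0 * np.outer(v, v) / (v @ v)

e = rng.normal(size=8)                      # entity embedding (made up)
e_rel = H @ e                               # relation-transformed embedding
print(np.allclose(np.linalg.norm(e), np.linalg.norm(e_rel)))  # True
```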
36

Mei, Xin, Xiaoyan Cai, Libin Yang, and Nanxin Wang. "Graph transformer networks based text representation." Neurocomputing 463 (November 2021): 91–100. http://dx.doi.org/10.1016/j.neucom.2021.08.032.

37

Sun, Zixuan. "Sequential recommendation based on graph transformer." Applied and Computational Engineering 28, no. 1 (2023): 132–40. http://dx.doi.org/10.54254/2755-2721/28/20230190.

Abstract:
Sequential Recommendation (SR) is an important scenario in recommendation tasks. Sequential recommendations model the sequential pattern between item-item or user-item based on a user's recent activity in a time series to predict their next preference. However, existing methods are based only on the conventional Graph Neural Networks (GNN) as a model architecture for adaptive fine-tuning of specific SR tasks. To get better recommendation results, more advanced GNNs can be used as the network architecture of the SR method. This paper introduces graph transformer, a combination of GNN and a good
38

Hu, Xiao, Zezhen Zhang, Zhiyu Fan, et al. "GCN-Transformer-Based Spatio-Temporal Load Forecasting for EV Battery Swapping Stations under Differential Couplings." Electronics 13, no. 17 (2024): 3401. http://dx.doi.org/10.3390/electronics13173401.

Abstract:
To address the challenge of power absorption in grids with high renewable energy integration, electric vehicle battery swapping stations (EVBSSs) serve as critically important flexible resources. Current research on load forecasting for EVBSSs primarily employs Transformer models, which have increasingly shown a lack of adaptability to the rapid growth in scale and complexity. This paper proposes a novel data-driven forecasting model that combines the geographical feature extraction capability of graph convolutional networks (GCNs) with the multitask learning capability of Transformers. The GC
39

Hoang, Van Thuy, Hyeon-Ju Jeon, Eun-Soon You, Yoewon Yoon, Sungyeop Jung, and O.-Joun Lee. "Graph Representation Learning and Its Applications: A Survey." Sensors 23, no. 8 (2023): 4168. http://dx.doi.org/10.3390/s23084168.

Abstract:
Graphs are data structures that effectively represent relational data in the real world. Graph representation learning is a significant task since it could facilitate various downstream tasks, such as node classification, link prediction, etc. Graph representation learning aims to map graph entities to low-dimensional vectors while preserving graph structure and entity relationships. Over the decades, many models have been proposed for graph representation learning. This paper aims to show a comprehensive picture of graph representation learning models, including traditional and state-of-the-a
40

An, Kyung-Hwan, and Eun-Sol Kim. "A New Graph Transformer Algorithm for Leveraging External Knowledge Graph." KIISE Transactions on Computing Practices 30, no. 11 (2024): 588–93. https://doi.org/10.5626/ktcp.2024.30.11.588.

41

Wei, Siwei, Yang Yang, Donghua Liu, Ke Deng, and Chunzhi Wang. "Transformer-Based Spatiotemporal Graph Diffusion Convolution Network for Traffic Flow Forecasting." Electronics 13, no. 16 (2024): 3151. http://dx.doi.org/10.3390/electronics13163151.

Abstract:
Accurate traffic flow forecasting is a crucial component of intelligent transportation systems, playing a pivotal role in enhancing transportation intelligence. The integration of Graph Neural Networks (GNNs) and Transformers in traffic flow forecasting has gained significant adoption for enhancing prediction accuracy. Yet, the complex spatial and temporal dependencies present in traffic data continue to pose substantial challenges: (1) Most GNN-based methods assume that the graph structure reflects the actual dependencies between nodes, overlooking the complex dependencies present in the real
42

Zhou, Zijie, Zhaoqi Lu, Xuekai Wei, et al. "Tokenphormer: Structure-aware Multi-token Graph Transformer for Node Classification." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 12 (2025): 13428–36. https://doi.org/10.1609/aaai.v39i12.33466.

Abstract:
Graph Neural Networks (GNNs) are widely used in graph data mining tasks. Traditional GNNs follow a message passing scheme that can effectively utilize local and structural information. However, the phenomena of over-smoothing and over-squashing limit the receptive field in message passing processes. Graph Transformers were introduced to address these issues, achieving a global receptive field but suffering from the noise of irrelevant nodes and loss of structural information. Therefore, drawing inspiration from fine-grained token-based representation learning in Natural Language Processing (NL
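
One common way to materialize multiple tokens per node, not necessarily Tokenphormer's exact scheme, is to sample several short random walks from the node, each becoming one token sequence for the transformer:

```python
# Sketch: build multiple random-walk "tokens" for a target node; each walk is
# a short node sequence a transformer could embed as one token.
# Walk length and count are hypothetical.
import random
import networkx as nx

def walk_tokens(G, start, n_walks=4, length=5, seed=0):
    rng = random.Random(seed)
    tokens = []
    for _ in range(n_walks):
        walk, cur = [start], start
        for _ in range(length - 1):
            cur = rng.choice(list(G.neighbors(cur)))
            walk.append(cur)
        tokens.append(walk)
    return tokens

G = nx.karate_club_graph()
print(walk_tokens(G, start=0))
```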
43

Kim, Juyoung, Jong-Hyuk Lee, and Heesung Lee. "Fall Down Detection Using Vision Transformer and Graph Convolutional Network." Journal of the Korean Society for Railway 26, no. 4 (2023): 251–59. http://dx.doi.org/10.7782/jksr.2023.26.4.251.

44

Cleary-Balderas, Arthur, Gilberto Gonzalez-Avalos, Gerardo Ayala-Jaimes, and Aaron Padilla Garcia. "Modeling and Simulation of Internal Incipient Faults in Electrical Transformers Using a Bond Graph Approach." Energies 18, no. 13 (2025): 3307. https://doi.org/10.3390/en18133307.

Abstract:
Power transformers are a key piece of equipment located between the points of supply and consumption of electrical energy. Due to their continuous exposure to the environment, they may be subject to failure. Thus, the modeling of transformers subject to incipient faults using a bond graph approach is presented in this study. In particular, incipient faults in the primary and secondary windings with respect to ground and a turn-to-turn fault in the primary winding are modeled. In order to develop a mathematical model capturing the incipient faults in transformers including magnetic saturation e
45

Vafaei, Elnaz, and Mohammad Hosseini. "Transformers in EEG Analysis: A Review of Architectures and Applications in Motor Imagery, Seizure, and Emotion Classification." Sensors 25, no. 5 (2025): 1293. https://doi.org/10.3390/s25051293.

Abstract:
Transformers have rapidly influenced research across various domains. With their superior capability to encode long sequences, they have demonstrated exceptional performance, outperforming existing machine learning methods. There has been a rapid increase in the development of transformer-based models for EEG analysis. The high volumes of recently published papers highlight the need for further studies exploring transformer architectures, key components, and models employed particularly in EEG studies. This paper aims to explore four major transformer architectures: Time Series Transformer, Vi
46

Shao, Bo, Yeyun Gong, Weizhen Qi, Guihong Cao, Jianshu Ji, and Xiaola Lin. "Graph-Based Transformer with Cross-Candidate Verification for Semantic Parsing." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (2020): 8807–14. http://dx.doi.org/10.1609/aaai.v34i05.6408.

Abstract:
In this paper, we present a graph-based Transformer for semantic parsing. We separate the semantic parsing task into two steps: 1) Use a sequence-to-sequence model to generate the logical form candidates. 2) Design a graph-based Transformer to rerank the candidates. To handle the structure of logical forms, we incorporate graph information to Transformer, and design a cross-candidate verification mechanism to consider all the candidates in the ranking process. Furthermore, we integrate BERT into our model and jointly train the graph-based Transformer and BERT. We conduct experiments on 3 seman
47

Cui, Xueshen, Jikai Zhang, Yihao He, Zhixing Wang, and Wentao Zhao. "GCN-Former: A Method for Action Recognition Using Graph Convolutional Networks and Transformer." Applied Sciences 15, no. 8 (2025): 4511. https://doi.org/10.3390/app15084511.

Abstract:
Skeleton-based action recognition, which aims to classify human actions through the coordinates of body joints and their connectivity, is a significant research area in computer vision with broad application potential. Although Graph Convolutional Networks (GCNs) have made significant progress in processing skeleton data represented as graphs, their performance is constrained by local receptive fields and fixed joint connection patterns. Recently, researchers have introduced Transformer-based methods to overcome these limitations and better capture long-range dependencies. However, these metho
48

Hroncová, Darina, Alexander Gmiterko, Peter Frankovský, and Eva Dzurišová. "Building Elements of Bond Graphs." Applied Mechanics and Materials 816 (November 2015): 339–48. http://dx.doi.org/10.4028/www.scientific.net/amm.816.339.

Abstract:
The aim of this work is to describe the building elements of the Bond Graph methodology for modeling dynamic systems. The technique of the Bond Graph methodology is demonstrated, and its place in the process of modeling mechanical and electrical systems and their behavior is discussed. The building elements of bond graphs, such as sources of effort and flow, the capacitor, resistor, inductor, gyrator, and transformer, are described.
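
For reference, the two two-port elements named at the end of the abstract have standard constitutive relations (with effort e and flow f on each bond); both are power-conserving, i.e. e₁f₁ = e₂f₂:

```latex
% Standard bond-graph two-port relations (effort e, flow f).
\begin{aligned}
\text{TF (transformer, modulus } m\text{):}\quad & e_1 = m\,e_2, \qquad f_2 = m\,f_1,\\
\text{GY (gyrator, ratio } r\text{):}\quad      & e_1 = r\,f_2, \qquad e_2 = r\,f_1.
\end{aligned}
```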
49

Xu, Kang, Miqi Chen, Yifan Feng, and Zhenjiang Dong. "Advancing rule learning in knowledge graphs with structure-aware graph transformer." Information Processing & Management 62, no. 2 (2025): 103976. http://dx.doi.org/10.1016/j.ipm.2024.103976.

50

Wang, Dongsheng, Kangjie Tang, Jun Zeng, et al. "MM-Transformer: A Transformer-Based Knowledge Graph Link Prediction Model That Fuses Multimodal Features." Symmetry 16, no. 8 (2024): 961. http://dx.doi.org/10.3390/sym16080961.

Abstract:
Multimodal knowledge graph completion necessitates the integration of information from multiple modalities (such as images and text) into the structural representation of entities to improve link prediction. However, most existing studies have overlooked the interaction between different modalities and the symmetry in the modal fusion process. To address this issue, this paper proposed a Transformer-based knowledge graph link prediction model (MM-Transformer) that fuses multimodal features. Different modal encoders are employed to extract structural, visual, and textual features, and symmetric