Academic literature on the topic 'Graph and Multi-view Memory Attention'


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Graph and Multi-view Memory Attention.'

Next to every source in the list of references is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever one is available in the metadata.

Journal articles on the topic "Graph and Multi-view Memory Attention"

1. Ai, Bing, Yibing Wang, Liang Ji, et al. "A graph neural network fused with multi-head attention for text classification." Journal of Physics: Conference Series 2132, no. 1 (2021): 012032. http://dx.doi.org/10.1088/1742-6596/2132/1/012032.

Abstract:
Graph neural networks (GNNs) handle intricate structure and fuse global information well, so research has explored GNN technology for text classification. However, models that fixed the entire corpus as a single graph faced problems such as high memory consumption and the inability to modify the graph's construction. We propose an improved GNN-based model to solve these problems. Instead of fixing the entire corpus as one graph, the model constructs a separate graph for each text. This method reduces memory consumption but still retains global …
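The per-text graph construction this abstract describes can be sketched minimally as follows. This is an illustration only, not the authors' implementation; the sliding-window co-occurrence rule, the `window` parameter, and all names are assumptions:

```python
import numpy as np

def build_doc_graph(tokens, window=2):
    """Build a per-document co-occurrence adjacency matrix.

    Each unique token becomes a node; tokens appearing within
    `window` positions of each other are connected. Building one
    small graph per document (rather than one corpus-level graph)
    keeps memory proportional to document length.
    """
    vocab = {t: i for i, t in enumerate(dict.fromkeys(tokens))}
    adj = np.eye(len(vocab))  # self-loops, common in GCN-style models
    for i, t in enumerate(tokens):
        for j in range(i + 1, min(i + window + 1, len(tokens))):
            a, b = vocab[t], vocab[tokens[j]]
            adj[a, b] = adj[b, a] = 1.0
    return vocab, adj

vocab, adj = build_doc_graph("graph models help graph learning".split())
```

Because each document yields its own small adjacency matrix, memory scales with document length rather than corpus size, which is the advantage the abstract highlights.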
2. Liu, Di, Hui Xu, Jianzhong Wang, Yinghua Lu, Jun Kong, and Miao Qi. "Adaptive Attention Memory Graph Convolutional Networks for Skeleton-Based Action Recognition." Sensors 21, no. 20 (2021): 6761. http://dx.doi.org/10.3390/s21206761.

Abstract:
Graph Convolutional Networks (GCNs) have attracted a lot of attention and shown remarkable performance for action recognition in recent years. To improve recognition accuracy, how to build the graph structure adaptively, select key frames, and extract discriminative features are the key problems of this kind of method. In this work, we propose novel Adaptive Attention Memory Graph Convolutional Networks (AAM-GCN) for human action recognition using skeleton data. We adopt GCN to adaptively model the spatial configuration of skeletons and employ a Gated Recurrent Unit (GRU) to construct an …
3. Feng, Aosong, Irene Li, Yuang Jiang, and Rex Ying. "Diffuser: Efficient Transformers with Multi-Hop Attention Diffusion for Long Sequences." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 11 (2023): 12772–80. http://dx.doi.org/10.1609/aaai.v37i11.26502.

Abstract:
Efficient Transformers have been developed for long-sequence modeling due to their subquadratic memory and time complexity. The Sparse Transformer is a popular approach to improving the efficiency of Transformers by restricting self-attention to locations specified by predefined sparse patterns. However, leveraging sparsity may sacrifice expressiveness compared to full attention when important token correlations are multiple hops away. To combine the efficiency of sparse Transformers with the expressiveness of full-attention Transformers, we propose Diffuser, a new state-of-the-art …
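The idea of combining a sparse attention pattern with multi-hop diffusion can be illustrated with a dense toy computation. This is a sketch under assumptions, not the Diffuser implementation; the banded mask, the `hops` and `alpha` parameters, and all names are hypothetical choices:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def diffused_attention(q, k, v, mask, hops=3, alpha=0.5):
    """Attention restricted to a sparse pattern, then diffused.

    Scores outside `mask` are dropped before the softmax, so each
    token attends only to its allowed neighbors; repeatedly
    multiplying by the resulting attention matrix lets weight reach
    tokens several hops away, which is the intuition behind
    attention diffusion (this dense toy ignores the efficiency
    aspect entirely)."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores = np.where(mask > 0, scores, -1e9)   # enforce sparsity
    A = softmax(scores)                         # one-hop sparse attention
    D = A.copy()
    for _ in range(hops - 1):
        D = alpha * A @ D + (1 - alpha) * A     # mix longer hops back in
    return D @ v

# toy demo: 6 tokens, banded (local-only) sparsity pattern
rng = np.random.default_rng(0)
q = rng.normal(size=(6, 4))
k = rng.normal(size=(6, 4))
v = rng.normal(size=(6, 4))
mask = np.eye(6) + np.eye(6, k=1) + np.eye(6, k=-1)
out = diffused_attention(q, k, v, mask)
```

Each row of the one-hop matrix `A` sums to 1, and the diffusion update preserves that, so the multi-hop weights remain a valid attention distribution.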
4. Li, Mingxiao, and Marie-Francine Moens. "Dynamic Key-Value Memory Enhanced Multi-Step Graph Reasoning for Knowledge-Based Visual Question Answering." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (2022): 10983–92. http://dx.doi.org/10.1609/aaai.v36i10.21346.

Abstract:
Knowledge-based visual question answering (VQA) is a vision-language task that requires an agent to correctly answer image-related questions using knowledge that is not presented in the given image. It is not only a more challenging task than regular VQA but also a vital step towards building a general VQA system. Most existing knowledge-based VQA systems process knowledge and image information similarly and ignore the fact that the knowledge base (KB) contains complete information about a triplet, while the extracted image information might be incomplete, as the relations between two objects are …
5. Jung, Tae-Won, Chi-Seo Jeong, In-Seon Kim, Min-Su Yu, Soon-Chul Kwon, and Kye-Dong Jung. "Graph Convolutional Network for 3D Object Pose Estimation in a Point Cloud." Sensors 22, no. 21 (2022): 8166. http://dx.doi.org/10.3390/s22218166.

Abstract:
Graph Neural Networks (GNNs) are neural networks that learn the representation of nodes and the associated edges that connect them to every other node while maintaining the graph representation. Graph Convolutional Neural Networks (GCNs), a representative method among GNNs, utilize conventional Convolutional Neural Networks (CNNs) in the context of computer vision to process data supported by graphs. This paper proposes a one-stage GCN approach for 3D object detection and pose estimation by structuring non-linearly distributed points of a graph. Our network provides the required details to analyze …
6. Cui, Wei, Fei Wang, Xin He, et al. "Multi-Scale Semantic Segmentation and Spatial Relationship Recognition of Remote Sensing Images Based on an Attention Model." Remote Sensing 11, no. 9 (2019): 1044. http://dx.doi.org/10.3390/rs11091044.

Abstract:
A comprehensive interpretation of remote sensing images involves not only remote sensing object recognition but also the recognition of spatial relations between objects. Especially in the case of different objects with the same spectrum, the spatial relationship can help interpret remote sensing objects more accurately. Compared with traditional remote sensing object recognition methods, deep learning has the advantages of high accuracy and strong generalizability regarding scene classification and semantic segmentation. However, it is difficult to simultaneously recognize remote sensing objects …
7. Hou, Miaomiao, Xiaofeng Hu, Jitao Cai, Xinge Han, and Shuaiqi Yuan. "An Integrated Graph Model for Spatial–Temporal Urban Crime Prediction Based on Attention Mechanism." ISPRS International Journal of Geo-Information 11, no. 5 (2022): 294. http://dx.doi.org/10.3390/ijgi11050294.

Abstract:
Crime issues have been attracting widespread attention from citizens and city managers due to their unexpected and massive consequences. As an effective technique to prevent and control urban crime, data-driven spatial–temporal crime prediction can provide reasonable estimations of crime hotspots. It thus contributes to the decision making of relevant departments under limited resources, and promotes civilized urban development. However, the deficient performance of daily spatial–temporal crime prediction at the urban-district scale needs to be …
8. Mi, Chunlei, Shifen Cheng, and Feng Lu. "Predicting Taxi-Calling Demands Using Multi-Feature and Residual Attention Graph Convolutional Long Short-Term Memory Networks." ISPRS International Journal of Geo-Information 11, no. 3 (2022): 185. http://dx.doi.org/10.3390/ijgi11030185.

Abstract:
Predicting taxi-calling demands at the urban area level is vital to coordinating the supply–demand balance of the urban taxi system. Differing travel patterns, the impact of external data, and the expression of dynamic spatiotemporal demand dependence pose challenges to predicting demand. Here, a framework using residual attention graph convolutional long short-term memory networks (RAGCN-LSTMs) is proposed to predict taxi-calling demands. It consists of a spatial dependence (SD) extractor, which extracts SD features, and an external dependence extractor, which extracts traffic-environment-related features …
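The spatial-then-temporal pipeline this abstract outlines (graph convolution over zones, recurrence over time) can be sketched as follows. This is a toy stand-in, not the RAGCN-LSTMs model: a plain tanh recurrence replaces the LSTM, and all shapes and names are assumed:

```python
import numpy as np

def gcn_step(adj, x, w):
    """One graph convolution: degree-normalized neighborhood
    averaging followed by a linear map and a nonlinearity."""
    deg = adj.sum(axis=1, keepdims=True)
    return np.tanh((adj / deg) @ x @ w)

def run_gcn_recurrent(adj, series, w, wh):
    """Per timestep, a GCN captures spatial dependence between
    zones; a simple tanh recurrence (standing in for the LSTM)
    accumulates temporal dependence across timesteps."""
    h = np.zeros((adj.shape[0], w.shape[1]))
    for x_t in series:                 # series: (T, zones, features)
        s = gcn_step(adj, x_t, w)      # spatial features per zone
        h = np.tanh(s + h @ wh)        # temporal recurrence
    return h

# toy demo: 5 zones on a ring, 12 timesteps, 3 input features
rng = np.random.default_rng(0)
adj = np.eye(5)
for i in range(5):
    adj[i, (i + 1) % 5] = adj[(i + 1) % 5, i] = 1.0
series = rng.normal(size=(12, 5, 3))
w = 0.1 * rng.normal(size=(3, 8))
wh = 0.1 * rng.normal(size=(8, 8))
h = run_gcn_recurrent(adj, series, w, wh)  # final hidden state per zone
```

A real demand-forecasting model would add a readout head mapping `h` to predicted demands, plus the attention and external-feature branches the abstract mentions.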
9. Karimanzira, Divas, Linda Ritzau, and Katharina Emde. "Catchment Area Multi-Streamflow Multiple Hours Ahead Forecast Based on Deep Learning." Transactions on Machine Learning and Artificial Intelligence 10, no. 5 (2022): 15–29. http://dx.doi.org/10.14738/tmlai.105.13049.

Abstract:
Modeling rainfall-runoff is critical for flood prediction studies and decision making in disaster management. Deep learning methods have proven very useful in hydrological prediction. To increase their acceptance in the hydrological community, they must be physics-informed and show some interpretability. There are several ways this can be achieved, e.g., by learning from a fully trained hydrological model (which assumes the availability of such a model) or by using physics-informed data. In this work we developed a Graph Attention Network (GAT) with a learnable Adjacency Matrix …
10. Wang, Changhai, Jiaxi Ren, and Hui Liang. "MSGraph: Modeling multi-scale K-line sequences with graph attention network for profitable indices recommendation." Electronic Research Archive 31, no. 5 (2023): 2626–50. http://dx.doi.org/10.3934/era.2023133.

Abstract:
Indices recommendation is a long-standing topic in stock market investment. Predicting the future trends of indices and ranking them based on the prediction results is the main scheme for indices recommendation. How to improve the forecasting performance is the central issue of this study. Inspired by the trend-following investing strategy widely used in financial investment, the indices' future trends are related not only to nearby transaction data but also to long-term historical data. This article proposes MSGraph, which tries to improve the index ranking performance …