
Journal articles on the topic 'Attention level'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Attention level.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of each publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Au, Ricky K. C., and Ching-Nam Cheung. "The role of attention level in the attentional boost effect." Journal of Cognitive Psychology 32, no. 3 (2020): 255–77. http://dx.doi.org/10.1080/20445911.2020.1736086.

2

Flevaris, A., S. Bentin, and L. Robertson. "Attention to hierarchical level influences attentional selection of spatial scale." Journal of Vision 9, no. 8 (2010): 224. http://dx.doi.org/10.1167/9.8.224.

3

Flevaris, Anastasia V., Shlomo Bentin, and Lynn C. Robertson. "Attention to hierarchical level influences attentional selection of spatial scale." Journal of Experimental Psychology: Human Perception and Performance 37, no. 1 (2011): 12–22. http://dx.doi.org/10.1037/a0019251.

4

Yang, Yadong, Xiaofeng Wang, Quan Zhao, and Tingting Sui. "Two-Level Attentions and Grouping Attention Convolutional Network for Fine-Grained Image Classification." Applied Sciences 9, no. 9 (2019): 1939. http://dx.doi.org/10.3390/app9091939.

Abstract:
The focus of fine-grained image classification tasks is to ignore interference information and grasp local features. This challenge is what the visual attention mechanism excels at. Firstly, we have constructed a two-level attention convolutional network, which characterizes the object-level attention and the pixel-level attention. Then, we combine the two kinds of attention through a second-order response transform algorithm. Furthermore, we propose a clustering-based grouping attention model, which implies the part-level attention. The grouping attention method is to stretch all the semantic…
5

Pomeshchikova, Irina, O. Kudimova, and S. Loman. "The level of attention selectivity of basketball players of student's teams." Спортивні ігри (SPORT GAMES), no. 1 (11) (March 7, 2019): 50–57. https://doi.org/10.5281/zenodo.2543577.

Abstract:
The question of psychological preparation is one of the main points of training of sportsmen-basketball players (team). Psychological features of competitions, regularities, reasons and dynamics of competitive states define high requirements to mentality of the sportsman. Distribution of attention allows the basketball player, without being distracted by foreign irritants (shouts of fans, actions of the arbitrator), to watch the rival and partners, to trace movement of a ball, and to distinguish false actions (feints) during the game. The purpose of the research is to es…
6

Wei, Haiyang, Zhixin Li, Canlong Zhang, and Huifang Ma. "The synergy of double attention: Combine sentence-level and word-level attention for image captioning." Computer Vision and Image Understanding 201 (December 2020): 103068. http://dx.doi.org/10.1016/j.cviu.2020.103068.

7

Toa, Chean Khim, Kok Swee Sim, and Shing Chiang Tan. "Electroencephalogram-Based Attention Level Classification Using Convolution Attention Memory Neural Network." IEEE Access 9 (2021): 58870–81. http://dx.doi.org/10.1109/access.2021.3072731.

8

Xu, Lu-qiang, Jing-xia Liu, Guang-can Xiao, and Wei-dong Jin. "Characterization and classification of EEG attention level." Journal of Computer Applications 32, no. 11 (2013): 3268–70. http://dx.doi.org/10.3724/sp.j.1087.2012.03268.

9

Opt, Susan K. "Sea-Level Rise Policy as Attention Intervention." Western Journal of Communication 79, no. 1 (2014): 73–91. http://dx.doi.org/10.1080/10570314.2014.943419.

10

Lufi, Dubi, Shahar Segev, Adi Blum, Tal Rosen, and Iris Haimov. "The Effect of Age on Attention Level." International Journal of Aging and Human Development 81, no. 3 (2015): 176–88. http://dx.doi.org/10.1177/0091415015614953.

11

Kim, Hyunji, S. Kim, E. Lee, K. Won, S. C. Jun, and Minkyu Ahn. "Improving attention level through interactive neurofeedback game." IBRO Reports 6 (September 2019): S439. http://dx.doi.org/10.1016/j.ibror.2019.07.1391.

12

Sun, Yanfeng, Yunru Zhang, Huajie Jiang, Yongli Hu, and Baocai Yin. "Multi-level attention for referring expression comprehension." Pattern Recognition Letters 172 (August 2023): 252–58. http://dx.doi.org/10.1016/j.patrec.2023.07.005.

13

Liu, Yaju, Jianwei Fei, Peipeng Yu, Chengsheng Yuan, and Haopeng Liang. "Face forgery detection with cross-level attention." International Journal of Autonomous and Adaptive Communications Systems 17, no. 3 (2024): 233–46. http://dx.doi.org/10.1504/ijaacs.2024.138148.

14

Yu, Chengyi, and Xinrui Huang. "Attention level monitoring based on EEG measurement." Applied and Computational Engineering 38, no. 1 (2024): 151–55. http://dx.doi.org/10.54254/2755-2721/38/20230546.

Abstract:
This paper reports on research into the daily use of EEG measurement, based on brain-computer interface (BCI). Its purpose is to present a series of visualized data, clearly showing the attention level of the user's brainwaves. This method of recognizing the attention level can be used in a variety of practical ways, such as assessing students' cognitive functions, detecting drivers' fatigue levels, and clinical application in supervising the moods of patients, especially those with mental diseases. By attaining different dimensions of brain waves, which are sorted into alpha waves, beta waves and…
15

Tang, Xuejiao, and Wenbin Zhang. "Attention Mechanism-Based Cognition-Level Scene Understanding." Information 16, no. 3 (2025): 203. https://doi.org/10.3390/info16030203.

Abstract:
Given a question–image input, a visual commonsense reasoning (VCR) model predicts an answer with a corresponding rationale, which requires inference abilities based on real-world knowledge. The VCR task, which calls for exploiting multi-source information as well as learning different levels of understanding and extensive commonsense knowledge, is a cognition-level scene understanding challenge. The VCR task has aroused researchers' interests due to its wide range of applications, including visual question answering, automated vehicle systems, and clinical decision support. Previous approaches…
16

Wang, Yu, Shu Xu, Zenghui Ding, Cong Liu, and Xianjun Yang. "Link Predictions with Bi-Level Routing Attention." AI 6, no. 7 (2025): 156. https://doi.org/10.3390/ai6070156.

Abstract:
Background/Objectives: Knowledge Graphs (KGs) are often incomplete, which can significantly impact the performance of downstream applications. Manual completion of KGs is time-consuming and costly, emphasizing the importance of developing automated methods for knowledge graph completion (KGC). Link prediction serves as a fundamental task in this domain. The semantic correlation among entity features plays a crucial role in determining the effectiveness of link-prediction models. Notably, the human brain can often infer information using a limited set of salient features. Methods: Inspired by this cognitive principle, this…
17

Beilock, Sian L., and Rob Gray. "From attentional control to attentional spillover: A skill-level investigation of attention, movement, and performance outcomes." Human Movement Science 31, no. 6 (2012): 1473–99. http://dx.doi.org/10.1016/j.humov.2012.02.014.

18

Deng, Lihong, Fei Deng, Kepeng Zhou, Peifan Jiang, Gexiang Zhang, and Qiang Yang. "Multi-level attention network: Mixed time–frequency channel attention and multi-scale self-attentive standard deviation pooling for speaker recognition." Engineering Applications of Artificial Intelligence 128 (February 2024): 107439. http://dx.doi.org/10.1016/j.engappai.2023.107439.

19

Theofilidis, Antonis. "Cognitive Functions—Attention." Psychiatry and Psychological Disorders 1, no. 2 (2023): 1–5. http://dx.doi.org/10.58489/2836-3558/007.

Abstract:
The concept of attention held a special place during the historical development of psychology (Cohen, Sparkling-Cohen & O'Donell, 1993). Although hundreds of articles dealing with the concept of attention are published each year (Whyte, 1992a), due to the lack of coherence at a conceptual, methodological and theoretical level, there continues to be disagreement among scientists (Anderson, Craik & Naveh-Benjamin, 1998; Van Zomeran & Brower, 1994) on the nature of attention. Aim: The main purpose of the article is the definition of attention. Method: a review of the literature was…
20

Yin, Wenpeng, and Hinrich Schütze. "Attentive Convolution: Equipping CNNs with RNN-style Attention Mechanisms." Transactions of the Association for Computational Linguistics 6 (December 2018): 687–702. http://dx.doi.org/10.1162/tacl_a_00249.

Abstract:
In NLP, convolutional neural networks (CNNs) have benefited less than recurrent neural networks (RNNs) from attention mechanisms. We hypothesize that this is because the attention in CNNs has been mainly implemented as attentive pooling (i.e., it is applied to pooling) rather than as attentive convolution (i.e., it is integrated into convolution). Convolution is the differentiator of CNNs in that it can powerfully model the higher-level representation of a word by taking into account its local fixed-size context in the input text t_x. In this work, we propose an attentive convolution network…
21

Akter Lata, Munira, Md Mahmudur Rahman, Mohammad Abu Yousuf, Md Ibrahim Al Ananya, Devzani Roy, and Alvi Mahadi. "Development of an Empirical Model to Assess Attention Level and Control Driver's Attention." International Journal of Computer Science, Engineering and Information Technology 8, no. 1 (2018): 13–26. http://dx.doi.org/10.5121/ijcseit.2018.8102.

22

Wang, Zhibo, Jinxin Ma, Yongquan Zhang, Qian Wang, Ju Ren, and Peng Sun. "Attention-over-Attention Field-Aware Factorization Machine." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (2020): 6323–30. http://dx.doi.org/10.1609/aaai.v34i04.6101.

Abstract:
Factorization Machine (FM) has been a popular approach in supervised predictive tasks, such as click-through rate prediction and recommender systems, due to its great performance and efficiency. Recently, several variants of FM have been proposed to improve its performance. However, most of the state-of-the-art prediction algorithms neglected the field information of features, and they also failed to discriminate the importance of feature interactions due to the problem of redundant features. In this paper, we present a novel algorithm called Attention-over-Attention Field-aware Factorization Machine…
23

Lv, Shangwen, Wanhui Qian, Longtao Huang, Jizhong Han, and Songlin Hu. "SAM-Net: Integrating Event-Level and Chain-Level Attentions to Predict What Happens Next." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 6802–9. http://dx.doi.org/10.1609/aaai.v33i01.33016802.

Abstract:
Scripts represent knowledge of event sequences that can help text understanding. Script event prediction requires measuring the relation between an existing chain and the subsequent event. The dominant approaches either focus on the effects of individual events or on the influence of the chain sequence. However, only considering individual events will lose many semantic relations within the event chain, and only considering the sequence of the chain will introduce much noise. With our observations, both the individual events and the event segments within the chain can facilitate the prediction…
24

Yuan, Changsen, Heyan Huang, Chong Feng, Ge Shi, and Xiaochi Wei. "Document-level relation extraction with Entity-Selection Attention." Information Sciences 568 (August 2021): 163–74. http://dx.doi.org/10.1016/j.ins.2021.04.007.

25

Al-Rfou, Rami, Dokook Choe, Noah Constant, Mandy Guo, and Llion Jones. "Character-Level Language Modeling with Deeper Self-Attention." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3159–66. http://dx.doi.org/10.1609/aaai.v33i01.33013159.

Abstract:
LSTMs and other RNN variants have shown strong performance on character-level language modeling. These models are typically trained using truncated backpropagation through time, and it is common to assume that their success stems from their ability to remember long-term contexts. In this paper, we show that a deep (64-layer) transformer model (Vaswani et al. 2017) with fixed context outperforms RNN variants by a large margin, achieving state of the art on two popular benchmarks: 1.13 bits per character on text8 and 1.06 on enwik8. To get good results at this depth, we show that it is important…
26

Shang, Chao, Hongliang Li, Fanman Meng, et al. "Instance-level Context Attention Network for instance segmentation." Neurocomputing 472 (February 2022): 124–37. http://dx.doi.org/10.1016/j.neucom.2021.11.104.

27

Mao, Wei, Miaomiao Liu, Mathieu Salzmann, and Hongdong Li. "Multi-level Motion Attention for Human Motion Prediction." International Journal of Computer Vision 129, no. 9 (2021): 2513–35. http://dx.doi.org/10.1007/s11263-021-01483-7.

28

Rosbergen, Edward, Rik Pieters, and Michel Wedel. "Visual Attention to Advertising: A Segment‐Level Analysis." Journal of Consumer Research 24, no. 3 (1997): 305–14. http://dx.doi.org/10.1086/209512.

29

Hibi, Yuko, Yuji Takeda, and Akihiro Yagi. "Attention level and negative priming in hierarchical patterns." Japanese Psychological Research 44, no. 4 (2002): 241–46. http://dx.doi.org/10.1111/1468-5884.t01-1-00026.

30

Jiang, Changhong, Xiaoqiao Mu, Bingbing Zhang, Chao Liang, and Mujun Xie. "Category-Level Object Pose Estimation with Statistic Attention." Sensors 24, no. 16 (2024): 5347. http://dx.doi.org/10.3390/s24165347.

Abstract:
Six-dimensional object pose estimation is a fundamental problem in the field of computer vision. Recently, category-level object pose estimation methods based on 3D-GC have made significant breakthroughs due to advancements in 3D-GC. However, current methods often fail to capture long-range dependencies, which are crucial for modeling complex and occluded object shapes. Additionally, discerning detailed differences between different objects is essential. Some existing methods utilize self-attention mechanisms or Transformer encoder–decoder structures to address the lack of long-range dependencies…
31

Plummer, John, and Rui Ni. "Cost of Dividing Attention Moderated by Contrast Level." Journal of Vision 15, no. 12 (2015): 879. http://dx.doi.org/10.1167/15.12.879.

32

Nakamura, Mitsuru, Eisuke Matsushima, Katsuya Ohta, Katsumi Ando, and Takuya Kojima. "Relationship between attention and arousal level in schizophrenia." Psychiatry and Clinical Neurosciences 57, no. 5 (2003): 472–77. http://dx.doi.org/10.1046/j.1440-1819.2003.01150.x.

33

Yang, Xiaoyilei, Shuaijing Xu, Jian Wang, Hao Wu, and Rongfang Bie. "Attention Mechanism in Radiologist-Level Thorax Diseases Detection." Procedia Computer Science 174 (2020): 524–29. http://dx.doi.org/10.1016/j.procs.2020.06.120.

34

Cohen, E., and F. Tong. "Multi-level neural mechanisms of object-based attention." Journal of Vision 10, no. 7 (2010): 176. http://dx.doi.org/10.1167/10.7.176.

35

Flevaris, A., S. Bentin, and L. Robertson. "Attention to hierarchical level influences spatial frequency processing." Journal of Vision 8, no. 6 (2010): 143. http://dx.doi.org/10.1167/8.6.143.

36

Yan, Yichao, Bingbing Ni, Jinxian Liu, and Xiaokang Yang. "Multi-level attention model for person re-identification." Pattern Recognition Letters 127 (November 2019): 156–64. http://dx.doi.org/10.1016/j.patrec.2018.08.024.

37

Sussman, Elyse S., and Mitchell Steinschneider. "Attention modifies sound level detection in young children." Developmental Cognitive Neuroscience 1, no. 3 (2011): 351–60. http://dx.doi.org/10.1016/j.dcn.2011.01.003.

38

Chen, Wen-Xuan, Yong Hu, Bei-Yi Tian, Wen Luo, and Lin-Wang Yuan. "Multi-Level Cross-Attention Point Cloud Completion Network." Computers & Graphics 130 (August 2025): 104253. https://doi.org/10.1016/j.cag.2025.104253.

39

Mo, Xiuxian, Qixian Qin, Fengji Wu, et al. "Effects of Breathing Meditation Training on Sustained Attention Level, Mindfulness Attention Awareness Level, and Mental State of Operating Room Nurses." American Journal of Health Behavior 45, no. 6 (2021): 993–1001. http://dx.doi.org/10.5993/ajhb.45.6.4.

Abstract:
Objectives: In this paper, we explore the effects of breathing meditation training on the sustained attention level, mindfulness attention awareness level, and mental state of nurses in the operating room. Methods: We enrolled 40 nurses from September 2019 to December 2019, and divided them into a control group (N=20) and an observation group (N=20) using a random number table. The control group received routine training, based on which the observation group received breathing meditation training. We compared their sustained attention index, fatigue score, mindfulness attention awareness score…
40

Zirnsak, Marc, Frederik Beuth, and Fred H. Hamker. "Split of spatial attention as predicted by a systems-level model of visual attention." European Journal of Neuroscience 33, no. 11 (2011): 2035–45. http://dx.doi.org/10.1111/j.1460-9568.2011.07718.x.

41

Jaworski, Jessica L. Bean, and Inge-Marie Eigsti. "Low-level visual attention and its relation to joint attention in autism spectrum disorder." Child Neuropsychology 23, no. 3 (2015): 316–31. http://dx.doi.org/10.1080/09297049.2015.1104293.

42

Meng, Fanying, Chun Xie, Fanghui Qiu, Jiaxian Geng, and Fengrong Li. "Effects of Physical Activity Level on Attentional Networks in Young Adults." International Journal of Environmental Research and Public Health 19, no. 9 (2022): 5374. http://dx.doi.org/10.3390/ijerph19095374.

Abstract:
Although physical activity is associated with better attentional functioning in elderly populations or in specific clinical populations, the association between physical activity level and attention has been less studied in young adult populations. Thus, the purpose of this study was to investigate whether the positive effects of physical activity on attentional networks extend to young adults. In total, 57 college students were recruited and assigned to one of three groups of physical activity levels (high, moderate, and low) based on their self-reported exercise. Each participant completed t…
43

Seino, Tatsuki, Naoki Saito, Takahiro Ogawa, Satoshi Asamizu, and Miki Haseyama. "Expert–Novice Level Classification Using Graph Convolutional Network Introducing Confidence-Aware Node-Level Attention Mechanism." Sensors 24, no. 10 (2024): 3033. http://dx.doi.org/10.3390/s24103033.

Abstract:
In this study, we propose a classification method of expert–novice levels using a graph convolutional network (GCN) with a confidence-aware node-level attention mechanism. In classification using an attention mechanism, highlighted features may not be significant for accurate classification, thereby degrading classification performance. To address this issue, the proposed method introduces a confidence-aware node-level attention mechanism into a spatiotemporal attention GCN (STA-GCN) for the classification of expert–novice levels. Consequently, our method can contrast the attention value of each…
44

Han, Shihui, and Yi Jiang. "Neural correlates of within-level and across-level attention to multiple compound stimuli." Brain Research 1076, no. 1 (2006): 193–97. http://dx.doi.org/10.1016/j.brainres.2006.01.028.

45

Van den Driessche, Charlotte, Mikaël Bastian, Hugo Peyre, et al. "Attentional Lapses in Attention-Deficit/Hyperactivity Disorder: Blank Rather Than Wandering Thoughts." Psychological Science 28, no. 10 (2017): 1375–86. http://dx.doi.org/10.1177/0956797617708234.

Abstract:
People with attention-deficit/hyperactivity disorder (ADHD) have difficulties sustaining their attention on external tasks. Such attentional lapses have often been characterized as the simple opposite of external sustained attention, but the different types of attentional lapses, and the subjective experiences to which they correspond, remain unspecified. In this study, we showed that unmedicated children (ages 6–12) with ADHD, when probed during a standard go/no-go task, reported more mind blanking (a mental state characterized by the absence of reportable content) than did control participants…
46

Hübner, Ronald. "Attention shifting between global and local target levels: The persistence of level-repetition effects." Visual Cognition 7, no. 4 (2000): 465–84. http://dx.doi.org/10.1080/135062800394612.

47

Mundy, Peter, Marian Sigman, and Connie Kasari. "Joint attention, developmental level, and symptom presentation in autism." Development and Psychopathology 6, no. 3 (1994): 389–401. http://dx.doi.org/10.1017/s0954579400006003.

Abstract:
Recent data suggest that a disturbance in the development of joint attention skills is a specific characteristic of young autistic children. This observation may have both theoretical and clinical significance. However, many pertinent issues remain to be addressed with regard to the parameters of joint attention disturbance in children with autism. This study attempted to address several of these issues. The study examines the effects of mental age and IQ on the joint attention skills of children with autism, mental retardation, and normal development. The study also examined the relat…
48

Zhang, Qiuyue, and Ran Lu. "A Multi-Attention Network for Aspect-Level Sentiment Analysis." Future Internet 11, no. 7 (2019): 157. http://dx.doi.org/10.3390/fi11070157.

Abstract:
Aspect-level sentiment analysis (ASA) aims at determining the sentiment polarity of a specific aspect term within a given sentence. Recent advances in attention mechanisms suggest that attention models are useful in ASA tasks and can help identify focus words, and combining attention mechanisms with neural networks is also a common approach. However, according to the latest research, these methods often fail to extract text representations efficiently and to achieve interaction between aspect terms and contexts. In order to solve the complete task of ASA, this paper proposes a Multi-Attention Network (MAN) model…
49

Miao, Yuqing, Ronghai Luo, Lin Zhu, et al. "Contextual Graph Attention Network for Aspect-Level Sentiment Classification." Mathematics 10, no. 14 (2022): 2473. http://dx.doi.org/10.3390/math10142473.

Abstract:
Aspect-level sentiment classification aims to predict the sentiment polarities towards the target aspects given in sentences. To address the issues of insufficient semantic information extraction and high computational complexity of attention mechanisms in existing aspect-level sentiment classification models based on deep learning, a contextual graph attention network (CGAT) is proposed. The proposed model adopts two graph attention networks to aggregate syntactic structure information into target aspects and employs a contextual attention network to extract semantic information in sentence-a…
50

Fu, Heidi H., Kristin A. White, and Raymond D. Collings. "The Effects of Conversation Arousal Level on Attention Processes." Psi Chi Journal of Psychological Research 25, no. 1 (2020): 30–41. http://dx.doi.org/10.24839/2325-7342.jn25.1.30.
