Academic literature on the topic 'Spatiotemporal feature extraction'

Create an accurate reference in APA, MLA, Chicago, Harvard, and other citation styles.

Consult the lists of relevant articles, books, theses, conference papers, and other scholarly sources on the topic 'Spatiotemporal feature extraction.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Spatiotemporal feature extraction"

1

Sun, Weitong, Xingya Yan, Yuping Su, Gaihua Wang, and Yumei Zhang. "MSDSANet: Multimodal Emotion Recognition Based on Multi-Stream Network and Dual-Scale Attention Network Feature Representation." Sensors 25, no. 7 (2025): 2029. https://doi.org/10.3390/s25072029.

Abstract:
Aiming at the shortcomings of EEG emotion recognition models in feature representation granularity and spatiotemporal dependence modeling, a multimodal emotion recognition model integrating multi-scale feature representation and an attention mechanism is proposed. The model consists of a feature extraction module, feature fusion module, and classification module. The feature extraction module includes a multi-stream network module for extracting shallow EEG features and a dual-scale attention module for extracting shallow EOG features. The multi-scale and multi-granularity feature fusion improves …
2

Hoffmann, Susanne, Alexander Warmbold, Lutz Wiegrebe, and Uwe Firzlaff. "Spatiotemporal contrast enhancement and feature extraction in the bat auditory midbrain and cortex." Journal of Neurophysiology 110, no. 6 (2013): 1257–68. http://dx.doi.org/10.1152/jn.00226.2013.

Abstract:
Navigating on the wing in complete darkness is a challenging task for echolocating bats. It requires the detailed analysis of spatial and temporal information gained through echolocation. Thus neural encoding of spatiotemporal echo information is a major function in the bat auditory system. In this study we presented echoes in virtual acoustic space and used a reverse-correlation technique to investigate the spatiotemporal response characteristics of units in the inferior colliculus (IC) and the auditory cortex (AC) of the bat Phyllostomus discolor. Spatiotemporal response maps (STRMs) of IC …
3

Mehrez, Ahmed, Ahmed A. Morgan, and Elsayed E. Hemayed. "Speeding up spatiotemporal feature extraction using GPU." Journal of Real-Time Image Processing 16, no. 6 (2018): 2379–407. http://dx.doi.org/10.1007/s11554-018-0755-2.

4

Kamarol, Siti Khairuni Amalina, Jussi Parkkinen, Mohamed Hisham Jaward, and Rajendran Parthiban. "Spatiotemporal feature extraction for facial expression recognition." IET Image Processing 10, no. 7 (2016): 534–41. http://dx.doi.org/10.1049/iet-ipr.2015.0519.

5

Al-Shakarchy, Noor D., and Israa Hadi Ali. "Drowsy Detection based on Spatiotemporal Feature Extraction of Video Using 3D-CNN." Journal of Advanced Research in Dynamical and Control Systems 11, no. 10-SPECIAL ISSUE (2019): 742–51. http://dx.doi.org/10.5373/jardcs/v11sp10/20192865.

6

Al-Shakarchy, Noor D. "Drowsy Detection based on Spatiotemporal Feature Extraction of Video Using 3D-CNN." Journal of Advanced Research in Dynamical and Control Systems 11, no. 10-SPECIAL ISSUE (2019): 742–51. http://dx.doi.org/10.5373/jardcs/v11sp10/201928650.

7

Wang, Rui. "Human Activity Recognition Algorithm Based on Bidirectional Multi-Channel Feature Fusion." Applied and Computational Engineering 146, no. 1 (2025): 9–14. https://doi.org/10.54254/2755-2721/2025.21590.

Abstract:
This paper designs a bidirectional spatiotemporal feature fusion algorithm for human activity recognition based on frequency modulated continuous wave radar. The algorithm takes the three-dimensional point cloud data of human activity collected by the radar as input, and adopts a dual channel feature extraction method in spatial feature extraction. The voxelated point cloud data is put into a convolutional neural network for extracting coarse-grained spatial information. At the same time, a multi-layer perceptron is used to extract fine-grained spatial information from individual points in the …
8

Zhang, Bowen, and Tianqi Wang. "Visual Image Recognition of Basketball Turning and Dribbling Based on Feature Extraction." Traitement du Signal 39, no. 6 (2022): 2115–21. http://dx.doi.org/10.18280/ts.390624.

Abstract:
The processing of basketball videos with complex contents faces several challenges in terms of global motion features, group motion features, and individual pose features. The current research cannot solve problems, such as the diverse spatiotemporal features of actions, the utilization of correspondence between spatiotemporal features, the increase of data volume, and the complexity of the network. To solve these problems, this paper studies the visual image recognition of basketball turning and dribbling based on feature extraction. Specifically, the optical flow image was introduced to …
9

Young, S. R., A. Davis, A. Mishtal, and I. Arel. "Hierarchical spatiotemporal feature extraction using recurrent online clustering." Pattern Recognition Letters 37 (February 2014): 115–23. http://dx.doi.org/10.1016/j.patrec.2013.07.013.

10

Lu, Shuang, Qian Zhang, Yi Liu, Lei Liu, Qing Zhu, and Ke Jing. "Retrieval of Multiple Spatiotemporally Correlated Images on Tourist Attractions Based on Image Processing." Traitement du Signal 37, no. 5 (2020): 847–54. http://dx.doi.org/10.18280/ts.370518.

Abstract:
The thriving of information technology (IT) has elevated the demand for intelligent query and retrieval of information about the tourist attractions of interest, which are the bases for preparing convenient and personalized itineraries. To realize accurate and rapid query of tourist attraction information (not limited to text information), this paper proposes a spatiotemporal feature extraction method and a ranking and retrieval method for multiple spatiotemporally correlated images (MSCIs) on tourist attractions based on deeply recursive convolutional network (DRCN). Firstly, the authors …

Dissertations / Theses on the topic "Spatiotemporal feature extraction"

1

Chen, Yun. "Mining Dynamic Recurrences in Nonlinear and Nonstationary Systems for Feature Extraction, Process Monitoring and Fault Diagnosis." Scholar Commons, 2016. http://scholarcommons.usf.edu/etd/6072.

Abstract:
Real-time sensing brings the proliferation of big data that contains rich information of complex systems. It is well known that real-world systems show high levels of nonlinear and nonstationary behaviors in the presence of extraneous noise. This brings significant challenges for human experts to visually inspect the integrity and performance of complex systems from the collected data. My research goal is to develop innovative methodologies for modeling and optimizing complex systems, and create enabling technologies for real-world applications. Specifically, my research focuses on Mining Dynamic Recurrences …
2

Chen, Chen (Tina). "Modeling Spatiotemporal Pedestrian-Environment Interactions for Predicting Pedestrian Crossing Intention from the Ego-View." Thesis, 2021. http://hdl.handle.net/1805/26393.

Abstract:
Indiana University-Purdue University Indianapolis (IUPUI). For pedestrians and autonomous vehicles (AVs) to co-exist harmoniously and safely in the real world, AVs will need to not only react to pedestrian actions, but also anticipate their intentions. In this thesis, we propose to use rich visual and pedestrian-environment interaction features to improve pedestrian crossing intention prediction from the ego-view. We do so by combining visual feature extraction, graph modeling of scene objects and their relationships, and feature encoding as comprehensive inputs for an LSTM …

Book chapters on the topic "Spatiotemporal feature extraction"

1

Li, Ye, Guangqiang Yin, Shaoqi Hou, Jianhai Cui, and Zicheng Huang. "Spatiotemporal Feature Extraction for Pedestrian Re-identification." In Wireless Algorithms, Systems, and Applications. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-23597-0_15.

2

Sudharshan, G., V. Khoshall, and M. Saravanan. "Brain-Inspired Spatiotemporal Feature Extraction Using Convolutional Legendre Memory Unit." In Third International Conference on Image Processing and Capsule Networks. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-12413-6_1.

3

Guan, Zhenghua, Peng Yang, Haijun Lei, et al. "Spatiotemporal Feature Extraction and Fusion for Longitudinal Alzheimer’s Disease Diagnosis." In Lecture Notes in Computer Science. Springer Nature Singapore, 2025. https://doi.org/10.1007/978-981-96-3297-8_2.

4

Tang, Ye, Junqiang Sun, Guangjin Wang, Wenjin Hong, and Li Li. "Analysis of Campus Crowd Behavior Based on Location Data and Physical Environment Data: A Case Study of Southeast University Wuxi Campus." In Computational Design and Robotic Fabrication. Springer Nature Singapore, 2025. https://doi.org/10.1007/978-981-96-3433-0_36.

Abstract:
The study on the behavior of on-campus individuals provides valuable insights for campus management, resource allocation, and planning layout. The application of multi-source data offers more objective and in-depth opportunities for exploring behavioral phenomena. Focusing on the Wuxi campus of Southeast University, this research utilized Wi-Fi probe positioning technology combined with a physical environment sensor system to comprehensively collect 28.87 million positioning data points and 340,000 environmental data points over a period of 14 days. After cleaning redundant, missing, abnormal, drifting, and ping-pong data, both types of data underwent visual analysis, and their correlations were studied. Additionally, trajectory feature extraction was conducted using a convolutional autoencoder neural network. The study revealed the temporal distribution of pedestrian flow, the spatial distribution of stopover behavior, and the spatiotemporal characteristics of pedestrian trajectories. This provides a reliable basis for guiding crowd behavior by improving specific campus areas and the physical environment.
5

Pan, Xuefeng, Jintao Li, Shan Ba, Yongdong Zhang, and Sheng Tang. "Visual Features Extraction Through Spatiotemporal Slice Analysis." In Lecture Notes in Computer Science. Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/978-3-540-69429-8_31.

6

Naveen, M., A. V. Senthil Kumar, Ismail Bin Musirin, et al. "Dynamic SceneNet Spatio-Temporal Interactive Network for Dynamic Video Foreground Detection." In Advances in Information Security, Privacy, and Ethics. IGI Global, 2024. https://doi.org/10.4018/979-8-3693-3840-7.ch006.

Abstract:
This study introduces Dynamic SceneNet, a new Spatio-Temporal Interactive Network developed to tackle the issues of dynamic video foreground detection. The proposed model uses sophisticated deep learning techniques to learn interactive spatiotemporal characteristics, resulting in a novel solution for accurate and adaptable foreground recognition in dynamic scenarios. Video foreground detection (VFD) is a crucial pre-processing step for accurate target tracking and recognition. Creating a reliable detection network is difficult because of interference from shadows, changing backgrounds, and camera jitter. Convolutional neural networks have proven reliable in several sectors due to their strong feature extraction capabilities. This research proposes an interactive spatiotemporal feature learning network (ISFLN) for VFD. Experiments on the CDnet2014, INO, and AICD datasets corroborate that the introduced ISFLN model surpasses state-of-the-art methodologies in video foreground detection, thanks to its prowess in feature enhancement, interactive multi-scale feature exploration, and deep learning techniques.
7

Sha, Dexuan, Anusha Srirenganathan Malarvizhi, Hai Lan, et al. "ArcCI: A high-resolution aerial image management and processing platform for sea ice." In Recent Advancement in Geoinformatics and Data Science. Geological Society of America, 2023. http://dx.doi.org/10.1130/2022.2558(06).

Abstract:
The Arctic sea-ice region has become an increasingly important study area since it is not only a key driver of the Earth’s climate but also a sensitive indicator of climate change. Therefore, it is crucial to extract high-resolution geophysical features of sea ice from remote sensing data to model and validate sea-ice changes. With large volumes of high spatial resolution data and intensive feature extraction, classification, and analysis processes, cloud infrastructure solutions can support Earth science. One example is the Arctic CyberInfrastructure (ArcCI), which was built to address image management and processing for sea-ice studies. The ArcCI system employs an efficient geophysical feature extraction workflow that is based on the object-based image analysis (OBIA) method alongside an on-demand web service for Arctic cyberinfrastructure. By integrating machine learning classification approaches, the on-demand sea-ice high spatial resolution (HSR) imagery management and processing service and framework allows for the efficient and accurate extraction of geophysical features and the spatiotemporal analysis of sea-ice leads.
8

Chen, Luoyang, Junxian Li, Ye Tao, et al. "Advancing Three-Dimensional Human Pose Estimation Through Spatiotemporal Feature Fusion in Graph Convolutional Networks." In Advances in Transdisciplinary Engineering. IOS Press, 2025. https://doi.org/10.3233/atde241358.

Abstract:
Accurate three-dimensional (3D) human body pose estimation from video imagery is critical for a variety of applications, including action recognition, body language interpretation, motor skill acquisition, and motion capture. Despite considerable progress in this area, current methodologies often struggle to effectively integrate spatiotemporal features, resulting in limitations in both pose estimation accuracy and computational efficiency. Addressing this gap, we propose a novel graph convolutional network that synergistically fuses spatiotemporal properties to enhance 3D human pose estimation. By utilizing a sequence of human 2D poses as input, we extract spatial features through a semantic map convolutional network and temporal features using a time domain convolutional network. Additionally, we introduce a grouping Top K pooling method to optimize the extraction of multi-scale structural features, significantly reducing model parameters while enhancing pose estimation accuracy. Experimental evaluations on publicly available datasets demonstrate that our approach achieves highly accurate 3D pose estimation with real-time processing capabilities. This research not only provides a robust solution for 3D pose estimation but also advances the field by improving the integration of spatiotemporal features, thus enhancing applicability in real-world scenarios.
9

Tian, YingLi, Liangliang Cao, Zicheng Liu, and Zhengyou Zhang. "Action Detection by Fusing Hierarchically Filtered Motion with Spatiotemporal Interest Point Features." In Human Behavior Recognition Technologies. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-3682-8.ch012.

Abstract:
This chapter addresses the problem of action detection from cluttered videos. In recent years, many feature extraction schemes have been designed to describe various aspects of actions. However, due to the difficulty of action detection, e.g., the cluttered background and potential occlusions, a single type of feature cannot effectively solve the action detection problems in cluttered videos. In this chapter, the authors propose a new type of feature, Hierarchically Filtered Motion (HFM), and further investigate the fusion of HFM with Spatiotemporal Interest Point (STIP) features for action detection from cluttered videos. In order to effectively and efficiently detect actions, they propose a new approach that combines Gaussian Mixture Models (GMMs) with Branch-and-Bound search to locate interested actions in cluttered videos. The proposed new HFM features and action detection method have been evaluated on the classical KTH dataset and the challenging MSR Action Dataset II, which consists of crowded videos with moving people or vehicles in the background. Experiment results demonstrate that the proposed method significantly outperforms existing techniques, especially for action detection in crowded videos.
10

Cao, Yu, and Kai Sun. "Spatiotemporal Features Recognition and Prediction of Urban Hot Spots Based on Trajectory Data." In Advances in Transdisciplinary Engineering. IOS Press, 2024. http://dx.doi.org/10.3233/atde240130.

Abstract:
With the continuous progress of urbanization, the construction of cities is gradually accelerated, and the urban population density is constantly increasing. Hot spots represent regions with more trips, greater population flow and higher travel demand. With the increasing complexity of traffic data, traditional methods for predicting hot spots can no longer meet the current situation. This also provides opportunities for the development of deep learning techniques. Given the continuous expansion in scale and intricacy of trajectory data, its relevance has become increasingly significant in domains such as urban planning, traffic management, and business decision-making. Aiming at the hot spot identification problem of trajectory data, this paper, inspired by the traffic prediction model, applied the STGCN model to the urban hot spot prediction problem and improved it. The model consists of four parts: cluster processing, spatial feature extraction, time feature extraction and hot spot region prediction. The experimental outcomes demonstrate that the model exhibits superior predictive performance across diverse datasets compared to conventional methods, showcasing robust generalization capabilities and practical applicability.

Conference papers on the topic "Spatiotemporal feature extraction"

1

Ding, Haohui, Jiaqiang Jiang, and Rui Yan. "A Time-Surface Enhancement Model for Event-based Spatiotemporal Feature Extraction." In 2024 International Joint Conference on Neural Networks (IJCNN). IEEE, 2024. http://dx.doi.org/10.1109/ijcnn60899.2024.10650047.

2

Ma, Liang, Tristan Sands, Dion Khodagholy, and Jennifer Gelinas. "Time Domain-Based Oscillatory Feature Extraction for High Spatiotemporal Resolution Neurophysiologic Data." In 2024 IEEE International Electron Devices Meeting (IEDM). IEEE, 2024. https://doi.org/10.1109/iedm50854.2024.10873387.

3

Han, Jiangyu, Qiangjian Zhong, Ruiting Lin, Pengcheng Zhu, Nan Qiu, and Peiyang Li. "Spatiotemporal Feature Extraction of Dynamic Brain Networks and its Application in EEG-Based Emotion Recognition." In 2024 IEEE/WIC International Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT). IEEE, 2024. https://doi.org/10.1109/wi-iat62293.2024.00092.

4

Yadav, Mukesh, and Peter Hawrylak. "Extracting Spatiotemporal Features For Detecting the Beginning of a Network Layer Attack with a Graph Neural Autoencoder and Deep Metric Learning." In 2024 Cyber Awareness and Research Symposium (CARS). IEEE, 2024. https://doi.org/10.1109/cars61786.2024.10778706.

5

Yang, Wenlu, and Liqing Zhang. "Spatiotemporal feature extraction based on invariance representation." In 2008 IEEE International Joint Conference on Neural Networks (IJCNN 2008 - Hong Kong). IEEE, 2008. http://dx.doi.org/10.1109/ijcnn.2008.4633910.

6

Zhang, Dan-Dan, Jian-Qing Zheng, Jahanshah Fathi, Miao Sun, Fani Deligianni, and Guang-Zhong Yang. "Motor Imagery Classification based on RNNs with Spatiotemporal-Energy Feature Extraction." In UK-RAS Conference: Robots Working For and Among Us. EPSRC UK-RAS Network, 2018. http://dx.doi.org/10.31256/ukras17.55.

7

Wei, Yi, and Liujin Tang. "Ecotourism Data Mining and Spatiotemporal Differentiation Feature Extraction under the Background of Big Data." In ICISCAE 2021: 2021 IEEE 4th International Conference on Information Systems and Computer Aided Education. ACM, 2021. http://dx.doi.org/10.1145/3482632.3484132.

8

Wang, Zhaocong, Hang Du, Haiyu Zhu, and Xiaoyan Sun. "LM-Net-Dual: A Two Branch Spatiotemporal Feature Extraction Model for Dynamic Gestures Recognition." In 2023 3rd International Conference on Computer Science, Electronic Information Engineering and Intelligent Control Technology (CEI). IEEE, 2023. http://dx.doi.org/10.1109/cei60616.2023.10527792.

9

Zhao, Yaya, Kaiqi Zhao, Zhiqian Chen, Yuanyuan Zhang, Yalei Du, and Xiaoling Lu. "A Graph-based Representation Framework for Trajectory Recovery via Spatiotemporal Interval-Informed Seq2Seq." In Thirty-Third International Joint Conference on Artificial Intelligence {IJCAI-24}. International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/286.

Abstract:
The prevalent issue in urban trajectory data usage, notably in low-sample rate datasets, revolves around the accuracy of travel time estimations, traffic flow predictions, and trajectory similarity measurements. Conventional methods, often relying on simplistic mixes of static road networks and raw GPS data, fail to adequately integrate both network and trajectory dimensions. Addressing this, the innovative GRFTrajRec framework offers a graph-based solution for trajectory recovery. Its key feature is a trajectory-aware graph representation, enhancing the understanding of trajectory-road network …
10

Tan, Junming, Wenjun Xiong, and Zhongwen Tu. "The Air Quality Prediction on Deep Spatiotemporal Feature Extraction with a Transductive Kernel Extreme Learning Machine." In 2022 IEEE 31st International Symposium on Industrial Electronics (ISIE). IEEE, 2022. http://dx.doi.org/10.1109/isie51582.2022.9831755.
