Journal articles on the topic 'Video based modality'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 journal articles for your research on the topic 'Video based modality.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.
Oh, Changhyeon, and Yuseok Ban. "Cross-Modality Interaction-Based Traffic Accident Classification." Applied Sciences 14, no. 5 (2024): 1958. http://dx.doi.org/10.3390/app14051958.
Wang, Xingrun, Xiushan Nie, Xingbo Liu, Binze Wang, and Yilong Yin. "Modality correlation-based video summarization." Multimedia Tools and Applications 79, no. 45-46 (2020): 33875–90. http://dx.doi.org/10.1007/s11042-020-08690-3.
Jang, Jaeyoung, Yuseok Ban, and Kyungjae Lee. "Dual-Modality Cross-Interaction-Based Hybrid Full-Frame Video Stabilization." Applied Sciences 14, no. 10 (2024): 4290. http://dx.doi.org/10.3390/app14104290.
Zhang, Beibei, Tongwei Ren, and Gangshan Wu. "Text-Guided Nonverbal Enhancement Based on Modality-Invariant and -Specific Representations for Video Speaking Style Recognition." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 21 (2025): 22354–62. https://doi.org/10.1609/aaai.v39i21.34391.
Zong, Linlin, Wenmin Lin, Jiahui Zhou, et al. "Text-Guided Fine-grained Counterfactual Inference for Short Video Fake News Detection." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 1 (2025): 1237–45. https://doi.org/10.1609/aaai.v39i1.32112.
Li, Yun, Su Wang, Jiawei Mo, and Xin Wei. "An Underwater Multi-Label Classification Algorithm Based on a Bilayer Graph Convolution Learning Network with Constrained Codec." Electronics 13, no. 16 (2024): 3134. http://dx.doi.org/10.3390/electronics13163134.
Rahmad, Nur Azmina, Muhammad Amir As'ari, Nurul Fathiah Ghazali, Norazman Shahar, and Nur Anis Jasmin Sufri. "A Survey of Video Based Action Recognition in Sports." Indonesian Journal of Electrical Engineering and Computer Science 11, no. 3 (2018): 987–93. http://dx.doi.org/10.11591/ijeecs.v11.i3.pp987-993.
Zawali, Bako, Richard A. Ikuesan, Victor R. Kebande, Steven Furnell, and Arafat A-Dhaqm. "Realising a Push Button Modality for Video-Based Forensics." Infrastructures 6, no. 4 (2021): 54. http://dx.doi.org/10.3390/infrastructures6040054.
Waykar, Sanjay B., and C. R. Bharathi. "Multimodal Features and Probability Extended Nearest Neighbor Classification for Content-Based Lecture Video Retrieval." Journal of Intelligent Systems 26, no. 3 (2017): 585–99. http://dx.doi.org/10.1515/jisys-2016-0041.
Zhang, Bo, Xiya Yang, Ge Wang, Ying Wang, and Rui Sun. "M2ER: Multimodal Emotion Recognition Based on Multi-Party Dialogue Scenarios." Applied Sciences 13, no. 20 (2023): 11340. http://dx.doi.org/10.3390/app132011340.
Mao, Jianguo, Wenbin Jiang, Hong Liu, Xiangdong Wang, and Yajuan Lyu. "Inferential Knowledge-Enhanced Integrated Reasoning for Video Question Answering." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 11 (2023): 13380–88. http://dx.doi.org/10.1609/aaai.v37i11.26570.
N, Pavithra, and H. Sharath Kumar Y. "A Computational Meta-Learning Inspired Model for Sketch-based Video Retrieval." Indian Journal of Science and Technology 16, no. 7 (2023): 476–84. https://doi.org/10.17485/IJST/v16i7.2121.
Zhu, Mengxiao, Liu He, Han Zhao, Ruoxiao Su, Licheng Zhang, and Bo Hu. "Same Vaccine, Different Voices: A Cross-Modality Analysis of HPV Vaccine Discourse on Social Media." Proceedings of the International AAAI Conference on Web and Social Media 19 (June 7, 2025): 2317–33. https://doi.org/10.1609/icwsm.v19i1.35936.
Pang, Nuo, Songlin Guo, Ming Yan, and Chien Aun Chan. "A Short Video Classification Framework Based on Cross-Modal Fusion." Sensors 23, no. 20 (2023): 8425. http://dx.doi.org/10.3390/s23208425.
Oliveira, Eva, Teresa Chambel, and Nuno Magalhães Ribeiro. "Sharing Video Emotional Information in the Web." International Journal of Web Portals 5, no. 3 (2013): 19–39. http://dx.doi.org/10.4018/ijwp.2013070102.
Xiang, Yun Zhu. "Multi-Modality Video Scene Segmentation Algorithm with Shot Force Competition." Applied Mechanics and Materials 513-517 (February 2014): 514–17. http://dx.doi.org/10.4028/www.scientific.net/amm.513-517.514.
Chen, Chun-Ying. "The Influence of Representational Formats and Learner Modality Preferences on Instructional Efficiency Using Interactive Video Tutorials." Journal of Education and Training 7, no. 2 (2020): 77. http://dx.doi.org/10.5296/jet.v7i2.17415.
Gerabon, Mariel, Asirah Amil, Ricky Dag-Uman, Rhea Dulla, Sandy Aldanese, and Sendy Suico. "Modality-Based Assessment Practices and Strategies during Pandemic." Journal of Education and Academic Settings 1, no. 1 (2024): 1–18. http://dx.doi.org/10.62596/ja5vhj56.
Yuan, Haiyue, Janko Ćalić, and Ahmet Kondoz. "Analysis of User Requirements in Interactive 3D Video Systems." Advances in Human-Computer Interaction 2012 (2012): 1–11. http://dx.doi.org/10.1155/2012/343197.
Griffiths, Noola K., and Jonathon L. Reay. "The Relative Importance of Aural and Visual Information in the Evaluation of Western Canon Music Performance by Musicians and Nonmusicians." Music Perception 35, no. 3 (2018): 364–75. http://dx.doi.org/10.1525/mp.2018.35.3.364.
Zhuo, Junbao, Shuhui Wang, Zhenghan Chen, Li Shen, Qingming Huang, and Huimin Ma. "Image-to-video Adaptation with Outlier Modeling and Robust Self-learning." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 21 (2025): 23072–80. https://doi.org/10.1609/aaai.v39i21.34471.
Zhang, Zhenduo. "Cross-Category Highlight Detection via Feature Decomposition and Modality Alignment." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 3 (2023): 3525–33. http://dx.doi.org/10.1609/aaai.v37i3.25462.
Li, Mingchao, Xiaoming Shi, Haitao Leng, Wei Zhou, Hai-Tao Zheng, and Kuncai Zhang. "Learning Semantic Alignment with Global Modality Reconstruction for Video-Language Pre-training towards Retrieval." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 1 (2023): 1377–85. http://dx.doi.org/10.1609/aaai.v37i1.25222.
Ha, Jongwoo, Joonhyuck Ryu, and Joonghoon Ko. "Multi-Modality Tensor Fusion Based Human Fatigue Detection." Electronics 12, no. 15 (2023): 3344. http://dx.doi.org/10.3390/electronics12153344.
Radfar, Edalat, Won Hyuk Jang, Leila Freidoony, Jihoon Park, Kichul Kwon, and Byungjo Jung. "Single-channel stereoscopic video imaging modality based on transparent rotating deflector." Optics Express 23, no. 21 (2015): 27661. http://dx.doi.org/10.1364/oe.23.027661.
He, Ping, Huaying Qi, Shiyi Wang, and Jiayue Cang. "Cross-Modal Sentiment Analysis of Text and Video Based on Bi-GRU Cyclic Network and Correlation Enhancement." Applied Sciences 13, no. 13 (2023): 7489. http://dx.doi.org/10.3390/app13137489.
Wei, Haoran, Roozbeh Jafari, and Nasser Kehtarnavaz. "Fusion of Video and Inertial Sensing for Deep Learning–Based Human Action Recognition." Sensors 19, no. 17 (2019): 3680. http://dx.doi.org/10.3390/s19173680.
Beemer, Lexie R., Wendy Tackett, Anna Schwartz, et al. "Use of a Novel Theory-Based Pragmatic Tool to Evaluate the Quality of Instructor-Led Exercise Videos to Promote Youth Physical Activity at Home: Preliminary Findings." International Journal of Environmental Research and Public Health 20, no. 16 (2023): 6561. http://dx.doi.org/10.3390/ijerph20166561.
Citak, Erol, and Mine Elif Karsligil. "Multi-Modal Low-Data-Based Learning for Video Classification." Applied Sciences 14, no. 10 (2024): 4272. http://dx.doi.org/10.3390/app14104272.
Lee, Yong-Hyeok, Dong-Won Jang, Jae-Bin Kim, Rae-Hong Park, and Hyung-Min Park. "Audio–Visual Speech Recognition Based on Dual Cross-Modality Attentions with the Transformer Model." Applied Sciences 10, no. 20 (2020): 7263. http://dx.doi.org/10.3390/app10207263.
Yang, Saelyne, Sunghyun Park, Yunseok Jang, and Moontae Lee. "YTCommentQA: Video Question Answerability in Instructional Videos." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 17 (2024): 19359–67. http://dx.doi.org/10.1609/aaai.v38i17.29906.
Guo, Jialong, Ke Liu, Jiangchao Yao, Zhihua Wang, Jiajun Bu, and Haishuai Wang. "MetaNeRV: Meta Neural Representations for Videos with Spatial-Temporal Guidance." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 3 (2025): 3257–65. https://doi.org/10.1609/aaai.v39i3.32336.
Nagel, Merav. "Exercise Intervention based on the Chinese Modality." Asian Journal of Physical Education & Recreation 13, no. 2 (2007): 13–20. http://dx.doi.org/10.24112/ajper.131829.
Zhang, Qiaoyun, Hsiang-Chuan Chang, Chia-Ling Ho, Huan-Chao Keh, and Diptendu Sinha Roy. "AI-Based Multimodal Anomaly Detection for Industrial Machine Operations." Journal of Internet Technology 26, no. 2 (2025): 255–64. https://doi.org/10.70003/160792642025032602010.
Kim, Eun Hee, and Ju Hyun Shin. "Multi-Modal Emotion Recognition in Videos Based on Pre-Trained Models." Korean Institute of Smart Media 13, no. 10 (2024): 19–27. http://dx.doi.org/10.30693/smj.2024.13.10.19.
Zhu, Xiaoguang, Ye Zhu, Haoyu Wang, Honglin Wen, Yan Yan, and Peilin Liu. "Skeleton Sequence and RGB Frame Based Multi-Modality Feature Fusion Network for Action Recognition." ACM Transactions on Multimedia Computing, Communications, and Applications 18, no. 3 (2022): 1–24. http://dx.doi.org/10.1145/3491228.
Jiang, Pin, and Yahong Han. "Reasoning with Heterogeneous Graph Alignment for Video Question Answering." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (2020): 11109–16. http://dx.doi.org/10.1609/aaai.v34i07.6767.
Mademlis, Ioannis, Alexandros Iosifidis, Anastasios Tefas, Nikos Nikolaidis, and Ioannis Pitas. "Exploiting stereoscopic disparity for augmenting human activity recognition performance." Multimedia Tools and Applications 75 (October 4, 2016): 11641–60. https://doi.org/10.1007/s11042-015-2719-x.
Hua, Hang, Yunlong Tang, Chenliang Xu, and Jiebo Luo. "V2Xum-LLM: Cross-Modal Video Summarization with Temporal Prompt Instruction Tuning." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 4 (2025): 3599–607. https://doi.org/10.1609/aaai.v39i4.32374.
Zhang, Manlin, Jinpeng Wang, and Andy J. Ma. "Suppressing Static Visual Cues via Normalizing Flows for Self-Supervised Video Representation Learning." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (2022): 3300–3308. http://dx.doi.org/10.1609/aaai.v36i3.20239.
Leng, Zikang, Amitrajit Bhattacharjee, Hrudhai Rajasekhar, et al. "IMUGPT 2.0: Language-Based Cross Modality Transfer for Sensor-Based Human Activity Recognition." Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 8, no. 3 (2024): 1–32. http://dx.doi.org/10.1145/3678545.
Jiang, Meng, Liming Zhang, Xiaohua Wang, Shuang Li, and Yijie Jiao. "6D Object Pose Estimation Based on Cross-Modality Feature Fusion." Sensors 23, no. 19 (2023): 8088. http://dx.doi.org/10.3390/s23198088.
McLaren, Sean W., Dorota T. Kopycka-Kedzierawski, and Jed Nordfelt. "Accuracy of teledentistry examinations at predicting actual treatment modality in a pediatric dentistry clinic." Journal of Telemedicine and Telecare 23, no. 8 (2016): 710–15. http://dx.doi.org/10.1177/1357633x16661428.
Cheng, Yongjian, Dongmei Zhou, Siqi Wang, and Luhan Wen. "Emotion-Recognition Algorithm Based on Weight-Adaptive Thought of Audio and Video." Electronics 12, no. 11 (2023): 2548. http://dx.doi.org/10.3390/electronics12112548.
Wang, Meng, Xian-Sheng Hua, Tao Mei, et al. "Interactive Video Annotation by Multi-Concept Multi-Modality Active Learning." International Journal of Semantic Computing 01, no. 04 (2007): 459–77. http://dx.doi.org/10.1142/s1793351x0700024x.
Dwivedi, Shivangi, John Hayes, Isabella Pedron, et al. "Comparing the efficacy of AR-based training with video-based training." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 66, no. 1 (2022): 1862–66. http://dx.doi.org/10.1177/1071181322661289.
Chen, Yingju, and Jeongkyu Lee. "A Review of Machine-Vision-Based Analysis of Wireless Capsule Endoscopy Video." Diagnostic and Therapeutic Endoscopy 2012 (November 13, 2012): 1–9. http://dx.doi.org/10.1155/2012/418037.
Kim, Eun-Hee, Myung-Jin Lim, and Ju-Hyun Shin. "MMER-LMF: Multi-Modal Emotion Recognition in Lightweight Modality Fusion." Electronics 14, no. 11 (2025): 2139. https://doi.org/10.3390/electronics14112139.
Lie, Wen-Nung, Dao-Quang Le, Chun-Yu Lai, and Yu-Shin Fang. "Heart Rate Estimation from Facial Image Sequences of a Dual-Modality RGB-NIR Camera." Sensors 23, no. 13 (2023): 6079. http://dx.doi.org/10.3390/s23136079.