Journal articles on the topic 'Multi-modal dataset'
Listed below are the top 50 journal articles for research on the topic 'Multi-modal dataset.' Each entry can be cited in APA, MLA, Harvard, Chicago, Vancouver, and other styles; where the metadata provides them, the full text (PDF) and abstract are available online.
Jeong, Changhoon, Sung-Eun Jang, Sanghyuck Na, and Juntae Kim. "Korean Tourist Spot Multi-Modal Dataset for Deep Learning Applications." Data 4, no. 4 (2019): 139. http://dx.doi.org/10.3390/data4040139.
Wang, Fang, Shenglin Yin, Xiaoying Bai, Minghao Hu, Tianwei Yan, and Yi Liang. "M^3EL: A Multi-task Multi-topic Dataset for Multi-modal Entity Linking." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 12 (2025): 12712–20. https://doi.org/10.1609/aaai.v39i12.33386.
Ma’sum, Muhammad Anwar. "Intelligent Clustering and Dynamic Incremental Learning to Generate Multi-Codebook Fuzzy Neural Network for Multi-Modal Data Classification." Symmetry 12, no. 4 (2020): 679. http://dx.doi.org/10.3390/sym12040679.
Chen, Delong, Jianfeng Liu, Wenliang Dai, and Baoyuan Wang. "Visual Instruction Tuning with Polite Flamingo." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 16 (2024): 17745–53. http://dx.doi.org/10.1609/aaai.v38i16.29727.
Dai, Yin, Yumeng Song, Weibin Liu, et al. "Multi-Focus Image Fusion Based on Convolution Neural Network for Parkinson’s Disease Image Classification." Diagnostics 11, no. 12 (2021): 2379. http://dx.doi.org/10.3390/diagnostics11122379.
Ma’sum, Muhammad Anwar, Hadaiq Rolis Sanabila, Petrus Mursanto, and Wisnu Jatmiko. "Clustering versus Incremental Learning Multi-Codebook Fuzzy Neural Network for Multi-Modal Data Classification." Computation 8, no. 1 (2020): 6. http://dx.doi.org/10.3390/computation8010006.
Suryani, Dewi, Valentino Ekaputra, and Andry Chowanda. "Multi-modal Asian Conversation Mobile Video Dataset for Recognition Task." International Journal of Electrical and Computer Engineering (IJECE) 8, no. 5 (2018): 4042–46. http://dx.doi.org/10.11591/ijece.v8i5.pp4042-4046.
Guan, Wenhao, Yishuang Li, Tao Li, et al. "MM-TTS: Multi-Modal Prompt Based Style Transfer for Expressive Text-to-Speech Synthesis." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 16 (2024): 18117–25. http://dx.doi.org/10.1609/aaai.v38i16.29769.
Wang, Bingbing, Yiming Du, Bin Liang, et al. "A New Formula for Sticker Retrieval: Reply with Stickers in Multi-Modal and Multi-Session Conversation." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 24 (2025): 25327–35. https://doi.org/10.1609/aaai.v39i24.34720.
Wang, Zi, Chenglong Li, Aihua Zheng, Ran He, and Jin Tang. "Interact, Embed, and EnlargE: Boosting Modality-Specific Representations for Multi-Modal Person Re-identification." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (2022): 2633–41. http://dx.doi.org/10.1609/aaai.v36i3.20165.
Hegh, Abya Newton, Adekunle Adedotun Adeyelu, Aamo Iorliam, and Samera U. Otor. "Multi-Modal Emotion Recognition Model Using Generative Adversarial Networks (GANs) for Augmenting Facial Expressions and Physiological Signals." FUDMA Journal of Sciences 9, no. 5 (2025): 277–90. https://doi.org/10.33003/fjs-2025-0905-3412.
Zuo, Jialong, Ying Nie, Tianyu Guo, et al. "L-Man: A Large Multi-modal Model Unifying Human-centric Tasks." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 10 (2025): 11095–103. https://doi.org/10.1609/aaai.v39i10.33206.
Wang, Yueqian, Xiaojun Meng, Yuxuan Wang, Jianxin Liang, Qun Liu, and Dongyan Zhao. "Friends-MMC: A Dataset for Multi-modal Multi-party Conversation Understanding." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 24 (2025): 25425–33. https://doi.org/10.1609/aaai.v39i24.34731.
Islam, Kh Tohidul, Sudanthi Wijewickrema, and Stephen O’Leary. "A Deep Learning Framework for Segmenting Brain Tumors Using MRI and Synthetically Generated CT Images." Sensors 22, no. 2 (2022): 523. http://dx.doi.org/10.3390/s22020523.
Tabassum, Israt, and Vimala Nunavath. "A Hybrid Deep Learning Approach for Multi-Class Cyberbullying Classification Using Multi-Modal Social Media Data." Applied Sciences 14, no. 24 (2024): 12007. https://doi.org/10.3390/app142412007.
Li, Yangning, Tingwei Lu, Hai-Tao Zheng, et al. "MESED: A Multi-Modal Entity Set Expansion Dataset with Fine-Grained Semantic Classes and Hard Negative Entities." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 8 (2024): 8697–706. http://dx.doi.org/10.1609/aaai.v38i8.28715.
Park, Jiho, Kwangryeol Park, and Dongho Kim. "DGU-HAU: A Dataset for 3D Human Action Analysis on Utterances." Electronics 12, no. 23 (2023): 4793. http://dx.doi.org/10.3390/electronics12234793.
Chang, Xin, and Władysław Skarbek. "Multi-Modal Residual Perceptron Network for Audio–Video Emotion Recognition." Sensors 21, no. 16 (2021): 5452. http://dx.doi.org/10.3390/s21165452.
Das, Mithun, Rohit Raj, Punyajoy Saha, Binny Mathew, Manish Gupta, and Animesh Mukherjee. "HateMM: A Multi-Modal Dataset for Hate Video Classification." Proceedings of the International AAAI Conference on Web and Social Media 17 (June 2, 2023): 1014–23. http://dx.doi.org/10.1609/icwsm.v17i1.22209.
Li, Chenrui, Kun Gao, Zibo Hu, et al. "CSMR: A Multi-Modal Registered Dataset for Complex Scenarios." Remote Sensing 17, no. 5 (2025): 844. https://doi.org/10.3390/rs17050844.
Citak, Erol, and Mine Elif Karsligil. "Multi-Modal Low-Data-Based Learning for Video Classification." Applied Sciences 14, no. 10 (2024): 4272. http://dx.doi.org/10.3390/app14104272.
He, Yanzhong, Yanjiao Zhang, and Lin Zhu. "Improving Chinese cross-modal retrieval with multi-modal transportation data." Journal of Physics: Conference Series 2813, no. 1 (2024): 012014. http://dx.doi.org/10.1088/1742-6596/2813/1/012014.
Qin, Jinghui, Changsong Liu, Tianchi Tang, et al. "Mental-Perceiver: Audio-Textual Multi-Modal Learning for Estimating Mental Disorders." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 23 (2025): 25029–37. https://doi.org/10.1609/aaai.v39i23.34687.
Ni, Peizhou, Xu Li, Wang Xu, Xiaojing Zhou, Tao Jiang, and Weiming Hu. "Robust 3D Semantic Segmentation Method Based on Multi-Modal Collaborative Learning." Remote Sensing 16, no. 3 (2024): 453. http://dx.doi.org/10.3390/rs16030453.
Doyle, Daniel, and Ovidiu Şerban. "Interruption Audio & Transcript: Derived from Group Affect and Performance Dataset." Data 9, no. 9 (2024): 104. http://dx.doi.org/10.3390/data9090104.
Pan, Xuran, Kexing Xu, Shuhao Yang, Yukun Liu, Rui Zhang, and Ping He. "SDA-Net: A Spatially Optimized Dual-Stream Network with Adaptive Global Attention for Building Extraction in Multi-Modal Remote Sensing Images." Sensors 25, no. 7 (2025): 2112. https://doi.org/10.3390/s25072112.
Jiang, Jiali. "Multimodal Emotion Recognition Based on Deep Learning." International Journal of Computer Science and Information Technology 5, no. 2 (2025): 71–80. https://doi.org/10.62051/ijcsit.v5n2.10.
Wei, Haoran, Pranav Chopada, and Nasser Kehtarnavaz. "C-MHAD: Continuous Multimodal Human Action Dataset of Simultaneous Video and Inertial Sensing." Sensors 20, no. 10 (2020): 2905. http://dx.doi.org/10.3390/s20102905.
Ruiz de Oña, Esteban, Inés Barbero-García, Diego González-Aguilera, Fabio Remondino, Pablo Rodríguez-Gonzálvez, and David Hernández-López. "PhotoMatch: An Open-Source Tool for Multi-View and Multi-Modal Feature-Based Image Matching." Applied Sciences 13, no. 9 (2023): 5467. http://dx.doi.org/10.3390/app13095467.
Tao, Rui, Meng Zhu, Haiyan Cao, and Honge Ren. "Fine-Grained Cross-Modal Semantic Consistency in Natural Conservation Image Data from a Multi-Task Perspective." Sensors 24, no. 10 (2024): 3130. http://dx.doi.org/10.3390/s24103130.
Chen, Yatong, Chenzhi Hu, Tomoyoshi Kimura, et al. "SemiCMT: Contrastive Cross-Modal Knowledge Transfer for IoT Sensing with Semi-Paired Multi-Modal Signals." Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 8, no. 4 (2024): 1–30. http://dx.doi.org/10.1145/3699779.
Zhu, Wenming, Jia Zhou, Zizhe Wang, et al. "Three-Dimensional Object Detection Network Based on Multi-Layer and Multi-Modal Fusion." Electronics 13, no. 17 (2024): 3512. http://dx.doi.org/10.3390/electronics13173512.
Xu, Yangshuyi, Lin Zhang, and Xiang Shen. "Multi-modal adaptive gated mechanism for visual question answering." PLOS ONE 18, no. 6 (2023): e0287557. http://dx.doi.org/10.1371/journal.pone.0287557.
Flores Fernández, Alberto, Jonas Wurst, Eduardo Sánchez Morales, Michael Botsch, Christian Facchi, and Andrés García Higuera. "Probabilistic Traffic Motion Labeling for Multi-Modal Vehicle Route Prediction." Sensors 22, no. 12 (2022): 4498. http://dx.doi.org/10.3390/s22124498.
Liu, K., A. Wu, X. Wan, and S. Li. "MRSSC: A Benchmark Dataset for Multimodal Remote Sensing Scene Classification." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B2-2021 (June 28, 2021): 785–92. http://dx.doi.org/10.5194/isprs-archives-xliii-b2-2021-785-2021.
Guo, Zihan, Xiang Shen, and Chongqing Chen. "TBKIN: Threshold-based explicit selection for enhanced cross-modal semantic alignments." PLOS ONE 20, no. 6 (2025): e0325543. https://doi.org/10.1371/journal.pone.0325543.
Wang, Jian, Haisen Li, Guanying Huo, Chao Li, and Yuhang Wei. "Multi-Modal Multi-Stage Underwater Side-Scan Sonar Target Recognition Based on Synthetic Images." Remote Sensing 15, no. 5 (2023): 1303. http://dx.doi.org/10.3390/rs15051303.
Zhang, Pengyu, Dong Wang, and Huchuan Lu. "Multi-modal visual tracking: Review and experimental comparison." Computational Visual Media 10, no. 2 (2024): 193–214. http://dx.doi.org/10.1007/s41095-023-0345-5.
Stephan, Benedict, Mona Köhler, Steffen Müller, Yan Zhang, Horst-Michael Gross, and Gunther Notni. "OHO: A Multi-Modal, Multi-Purpose Dataset for Human-Robot Object Hand-Over." Sensors 23, no. 18 (2023): 7807. http://dx.doi.org/10.3390/s23187807.
Barbato, Mirko Paolo, Flavio Piccoli, and Paolo Napoletano. "Ticino: A multi-modal remote sensing dataset for semantic segmentation." Expert Systems with Applications 249 (September 2024): 123600. http://dx.doi.org/10.1016/j.eswa.2024.123600.
Xiao, Yun, Dan Cao, Chenglong Li, Bo Jiang, and Jin Tang. "A benchmark dataset for high-altitude UAV multi-modal tracking." Journal of Image and Graphics 30, no. 2 (2025): 361–74. https://doi.org/10.11834/jig.240040.
Singh, Apoorva, Soumyodeep Dey, Anamitra Singha, and Sriparna Saha. "Sentiment and Emotion-Aware Multi-Modal Complaint Identification." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (2022): 12163–71. http://dx.doi.org/10.1609/aaai.v36i11.21476.
Yang, Junhuan, Yuzhou Zhang, Yi Sheng, Youzuo Lin, and Lei Yang. "A Novel Diffusion Model for Pairwise Geoscience Data Generation with Unbalanced Training Dataset." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 20 (2025): 21965–73. https://doi.org/10.1609/aaai.v39i20.35504.
Yang, Fan, Xiaozhi Men, Yangsheng Liu, et al. "Estimation of Landslide and Mudslide Susceptibility with Multi-Modal Remote Sensing Data and Semantics: The Case of Yunnan Mountain Area." Land 12, no. 10 (2023): 1949. http://dx.doi.org/10.3390/land12101949.
Doan, Huong-Giang, and Ngoc-Trung Nguyen. "End-to-end multiple modals deep learning system for hand posture recognition." Indonesian Journal of Electrical Engineering and Computer Science 27, no. 1 (2022): 214–21. https://doi.org/10.11591/ijeecs.v27.i1.pp214-221.
Gao, Jingsheng, Jiacheng Ruan, Suncheng Xiang, et al. "LAMM: Label Alignment for Multi-Modal Prompt Learning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (2024): 1815–23. http://dx.doi.org/10.1609/aaai.v38i3.27950.
Qu, Fang, Youqiang Sun, Man Zhou, et al. "Vegetation Land Segmentation with Multi-Modal and Multi-Temporal Remote Sensing Images: A Temporal Learning Approach and a New Dataset." Remote Sensing 16, no. 1 (2023): 3. http://dx.doi.org/10.3390/rs16010003.
Ortega, Juan Diego, Paola Natalia Cañas, Marcos Nieto, Oihana Otaegui, and Luis Salgado. "Challenges of Large-Scale Multi-Camera Datasets for Driver Monitoring Systems." Sensors 22, no. 7 (2022): 2554. http://dx.doi.org/10.3390/s22072554.
Li, Boao. "Multi-modal sentiment analysis based on graph neural network." Applied and Computational Engineering 6, no. 1 (2023): 792–98. http://dx.doi.org/10.54254/2755-2721/6/20230918.