Journal articles on the topic 'Multi-Modal representations'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 journal articles for your research on the topic 'Multi-Modal representations.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.
Wu, Lianlong, Seewon Choi, Daniel Raggi, et al. "Generation of Visual Representations for Multi-Modal Mathematical Knowledge." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 21 (2024): 23850–52. http://dx.doi.org/10.1609/aaai.v38i21.30586.
Zhang, Yi, Mingyuan Chen, Jundong Shen, and Chongjun Wang. "Tailor Versatile Multi-Modal Learning for Multi-Label Emotion Recognition." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (2022): 9100–9108. http://dx.doi.org/10.1609/aaai.v36i8.20895.
Zhang, Dong, Suzhong Wei, Shoushan Li, Hanqian Wu, Qiaoming Zhu, and Guodong Zhou. "Multi-modal Graph Fusion for Named Entity Recognition with Targeted Visual Guidance." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 16 (2021): 14347–55. http://dx.doi.org/10.1609/aaai.v35i16.17687.
Liu, Hao, Jindong Han, Yanjie Fu, Jingbo Zhou, Xinjiang Lu, and Hui Xiong. "Multi-modal transportation recommendation with unified route representation learning." Proceedings of the VLDB Endowment 14, no. 3 (2020): 342–50. http://dx.doi.org/10.14778/3430915.3430924.
Wang, Huansha, Qinrang Liu, Ruiyang Huang, and Jianpeng Zhang. "Multi-Modal Entity Alignment Method Based on Feature Enhancement." Applied Sciences 13, no. 11 (2023): 6747. http://dx.doi.org/10.3390/app13116747.
Wu, Tianxing, Chaoyu Gao, Lin Li, and Yuxiang Wang. "Leveraging Multi-Modal Information for Cross-Lingual Entity Matching across Knowledge Graphs." Applied Sciences 12, no. 19 (2022): 10107. http://dx.doi.org/10.3390/app121910107.
Han, Ning, Jingjing Chen, Hao Zhang, Huanwen Wang, and Hao Chen. "Adversarial Multi-Grained Embedding Network for Cross-Modal Text-Video Retrieval." ACM Transactions on Multimedia Computing, Communications, and Applications 18, no. 2 (2022): 1–23. http://dx.doi.org/10.1145/3483381.
Ying, Qichao, Xiaoxiao Hu, Yangming Zhou, Zhenxing Qian, Dan Zeng, and Shiming Ge. "Bootstrapping Multi-View Representations for Fake News Detection." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 4 (2023): 5384–92. http://dx.doi.org/10.1609/aaai.v37i4.25670.
Huang, Yufeng, Jiji Tang, Zhuo Chen, et al. "Structure-CLIP: Towards Scene Graph Knowledge to Enhance Multi-Modal Structured Representations." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (2024): 2417–25. http://dx.doi.org/10.1609/aaai.v38i3.28017.
van Tulder, Gijs, and Marleen de Bruijne. "Learning Cross-Modality Representations From Multi-Modal Images." IEEE Transactions on Medical Imaging 38, no. 2 (2019): 638–48. http://dx.doi.org/10.1109/tmi.2018.2868977.
Kiela, Douwe, and Stephen Clark. "Learning Neural Audio Embeddings for Grounding Semantics in Auditory Perception." Journal of Artificial Intelligence Research 60 (December 26, 2017): 1003–30. http://dx.doi.org/10.1613/jair.5665.
Cui, Xiaohui, Xiaolong Qu, Dongmei Li, Yu Yang, Yuxun Li, and Xiaoping Zhang. "MKGCN: Multi-Modal Knowledge Graph Convolutional Network for Music Recommender Systems." Electronics 12, no. 12 (2023): 2688. http://dx.doi.org/10.3390/electronics12122688.
Dong, Bin, Songlei Jian, and Kai Lu. "Learning Multimodal Representations by Symmetrically Transferring Local Structures." Symmetry 12, no. 9 (2020): 1504. http://dx.doi.org/10.3390/sym12091504.
Li, Yehao, Jiahao Fan, Yingwei Pan, Ting Yao, Weiyao Lin, and Tao Mei. "Uni-EDEN: Universal Encoder-Decoder Network by Multi-Granular Vision-Language Pre-training." ACM Transactions on Multimedia Computing, Communications, and Applications 18, no. 2 (2022): 1–16. http://dx.doi.org/10.1145/3473140.
Gu, Zhihao, Jiangning Zhang, Liang Liu, et al. "Rethinking Reverse Distillation for Multi-Modal Anomaly Detection." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 8 (2024): 8445–53. http://dx.doi.org/10.1609/aaai.v38i8.28687.
Wang, Zi, Chenglong Li, Aihua Zheng, Ran He, and Jin Tang. "Interact, Embed, and EnlargE: Boosting Modality-Specific Representations for Multi-Modal Person Re-identification." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (2022): 2633–41. http://dx.doi.org/10.1609/aaai.v36i3.20165.
Wróblewska, Anna, Jacek Dąbrowski, Michał Pastuszak, et al. "Designing Multi-Modal Embedding Fusion-Based Recommender." Electronics 11, no. 9 (2022): 1391. http://dx.doi.org/10.3390/electronics11091391.
He, Qibin. "Prompting Multi-Modal Image Segmentation with Semantic Grouping." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (2024): 2094–102. http://dx.doi.org/10.1609/aaai.v38i3.27981.
Liang, Meiyu, Junping Du, Zhengyang Liang, Yongwang Xing, Wei Huang, and Zhe Xue. "Self-Supervised Multi-Modal Knowledge Graph Contrastive Hashing for Cross-Modal Search." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 12 (2024): 13744–53. http://dx.doi.org/10.1609/aaai.v38i12.29280.
Bodapati, Jyostna Devi, Veeranjaneyulu Naralasetti, Shaik Nagur Shareef, et al. "Blended Multi-Modal Deep ConvNet Features for Diabetic Retinopathy Severity Prediction." Electronics 9, no. 6 (2020): 914. http://dx.doi.org/10.3390/electronics9060914.
Liu, Hao, Ting Li, Renjun Hu, Yanjie Fu, Jingjing Gu, and Hui Xiong. "Joint Representation Learning for Multi-Modal Transportation Recommendation." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 1036–43. http://dx.doi.org/10.1609/aaai.v33i01.33011036.
Yang, Fan, Wei Li, Menglong Yang, Binbin Liang, and Jianwei Zhang. "Multi-Modal Disordered Representation Learning Network for Description-Based Person Search." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 15 (2024): 16316–24. http://dx.doi.org/10.1609/aaai.v38i15.29567.
Jüttner, Martin, and Ingo Rentschler. "Imagery in multi-modal object learning." Behavioral and Brain Sciences 25, no. 2 (2002): 197–98. http://dx.doi.org/10.1017/s0140525x0238004x.
Tao, Rui, Meng Zhu, Haiyan Cao, and Honge Ren. "Fine-Grained Cross-Modal Semantic Consistency in Natural Conservation Image Data from a Multi-Task Perspective." Sensors 24, no. 10 (2024): 3130. http://dx.doi.org/10.3390/s24103130.
Pugeault, Nicolas, Florentin Wörgötter, and Norbert Krüger. "Disambiguating Multi-Modal Scene Representations Using Perceptual Grouping Constraints." PLoS ONE 5, no. 6 (2010): e10663. http://dx.doi.org/10.1371/journal.pone.0010663.
Lara, Bruno, Juan Manuel Rendon-Mancha, and Marcos A. Capistran. "Prediction of Undesired Situations based on Multi-Modal Representations." IEEE Latin America Transactions 5, no. 2 (2007): 103–8. http://dx.doi.org/10.1109/tla.2007.4381351.
Geng, Shijie, Peng Gao, Moitreya Chatterjee, et al. "Dynamic Graph Representation Learning for Video Dialog via Multi-Modal Shuffled Transformers." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 2 (2021): 1415–23. http://dx.doi.org/10.1609/aaai.v35i2.16231.
Yan, Facheng, Mingshu Zhang, and Bin Wei. "Multimodal integration for fake news detection on social media platforms." MATEC Web of Conferences 395 (2024): 01013. http://dx.doi.org/10.1051/matecconf/202439501013.
Escobar-Grisales, Daniel, Cristian David Ríos-Urrego, and Juan Rafael Orozco-Arroyave. "Deep Learning and Artificial Intelligence Applied to Model Speech and Language in Parkinson's Disease." Diagnostics 13, no. 13 (2023): 2163. http://dx.doi.org/10.3390/diagnostics13132163.
Zhai, Hanming, Xiaojun Lv, Zhiwen Hou, Xin Tong, and Fanliang Bu. "MLSFF: Multi-level structural features fusion for multi-modal knowledge graph completion." Mathematical Biosciences and Engineering 20, no. 8 (2023): 14096–116. http://dx.doi.org/10.3934/mbe.2023630.
Hua, Yan, Yingyun Yang, and Jianhe Du. "Deep Multi-Modal Metric Learning with Multi-Scale Correlation for Image-Text Retrieval." Electronics 9, no. 3 (2020): 466. http://dx.doi.org/10.3390/electronics9030466.
Yang, Yiying, Fukun Yin, Wen Liu, et al. "PM-INR: Prior-Rich Multi-Modal Implicit Large-Scale Scene Neural Representation." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 7 (2024): 6594–602. http://dx.doi.org/10.1609/aaai.v38i7.28481.
Hu, Lianyu, Liqing Gao, Zekang Liu, Chi-Man Pun, and Wei Feng. "COMMA: Co-articulated Multi-Modal Learning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (2024): 2238–46. http://dx.doi.org/10.1609/aaai.v38i3.27997.
Hill, Felix, Roi Reichart, and Anna Korhonen. "Multi-Modal Models for Concrete and Abstract Concept Meaning." Transactions of the Association for Computational Linguistics 2 (December 2014): 285–96. http://dx.doi.org/10.1162/tacl_a_00183.
Liu, Xuanwu, Guoxian Yu, Carlotta Domeniconi, Jun Wang, Yazhou Ren, and Maozu Guo. "Ranking-Based Deep Cross-Modal Hashing." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4400–4407. http://dx.doi.org/10.1609/aaai.v33i01.33014400.
Sezerer, Erhan, and Selma Tekir. "Incorporating Concreteness in Multi-Modal Language Models with Curriculum Learning." Applied Sciences 11, no. 17 (2021): 8241. http://dx.doi.org/10.3390/app11178241.
Gou, Yingdong, Kexin Wang, Siwen Wei, and Changxin Shi. "GMDA: GCN-Based Multi-Modal Domain Adaptation for Real-Time Disaster Detection." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 31, no. 6 (2023): 957–73. http://dx.doi.org/10.1142/s0218488523500435.
Jang, Jiho, Chaerin Kong, DongHyeon Jeon, Seonhoon Kim, and Nojun Kwak. "Unifying Vision-Language Representation Space with Single-Tower Transformer." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 1 (2023): 980–88. http://dx.doi.org/10.1609/aaai.v37i1.25178.
Kabir, Anowarul, and Amarda Shehu. "GOProFormer: A Multi-Modal Transformer Method for Gene Ontology Protein Function Prediction." Biomolecules 12, no. 11 (2022): 1709. http://dx.doi.org/10.3390/biom12111709.
Alam, Mohammad Arif Ul. "College Student Retention Risk Analysis from Educational Database Using Multi-Task Multi-Modal Neural Fusion." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (2022): 12689–97. http://dx.doi.org/10.1609/aaai.v36i11.21545.
Qian, Shengsheng, Dizhan Xue, Huaiwen Zhang, Quan Fang, and Changsheng Xu. "Dual Adversarial Graph Neural Networks for Multi-label Cross-modal Retrieval." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 3 (2021): 2440–48. http://dx.doi.org/10.1609/aaai.v35i3.16345.
Lu, Lyujian, Saad Elbeleidy, Lauren Zoe Baker, and Hua Wang. "Learning Multi-Modal Biomarker Representations via Globally Aligned Longitudinal Enrichments." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 1 (2020): 817–24. http://dx.doi.org/10.1609/aaai.v34i01.5426.
Zhang, Heng, Vishal M. Patel, and Rama Chellappa. "Low-Rank and Joint Sparse Representations for Multi-Modal Recognition." IEEE Transactions on Image Processing 26, no. 10 (2017): 4741–52. http://dx.doi.org/10.1109/tip.2017.2721838.
Blown, Eric, and Tom G. K. Bryce. "Conceptual Coherence Revealed in Multi-Modal Representations of Astronomy Knowledge." International Journal of Science Education 32, no. 1 (2009): 31–67. http://dx.doi.org/10.1080/09500690902974207.
Torasso, Pietro. "Multiple representations and multi-modal reasoning in medical diagnostic systems." Artificial Intelligence in Medicine 23, no. 1 (2001): 49–69. http://dx.doi.org/10.1016/s0933-3657(01)00075-6.
Wachinger, Christian, and Nassir Navab. "Entropy and Laplacian images: Structural representations for multi-modal registration." Medical Image Analysis 16, no. 1 (2012): 1–17. http://dx.doi.org/10.1016/j.media.2011.03.001.
Fang, Feiyi, Tao Zhou, Zhenbo Song, and Jianfeng Lu. "MMCAN: Multi-Modal Cross-Attention Network for Free-Space Detection with Uncalibrated Hyperspectral Sensors." Remote Sensing 15, no. 4 (2023): 1142. http://dx.doi.org/10.3390/rs15041142.
Gao, Jingsheng, Jiacheng Ruan, Suncheng Xiang, et al. "LAMM: Label Alignment for Multi-Modal Prompt Learning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (2024): 1815–23. http://dx.doi.org/10.1609/aaai.v38i3.27950.
Bao, Peijun, Wenhan Yang, Boon Poh Ng, Meng Hwa Er, and Alex C. Kot. "Cross-Modal Label Contrastive Learning for Unsupervised Audio-Visual Event Localization." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 1 (2023): 215–22. http://dx.doi.org/10.1609/aaai.v37i1.25093.
Liu, Shuang, Mei Li, Zhong Zhang, Baihua Xiao, and Tariq S. Durrani. "Multi-Evidence and Multi-Modal Fusion Network for Ground-Based Cloud Recognition." Remote Sensing 12, no. 3 (2020): 464. http://dx.doi.org/10.3390/rs12030464.