Journal articles on the topic "Multimodal Embeddings"
Format your source in APA, MLA, Chicago, Harvard, and other citation styles
Browse the top 50 journal articles for research on the topic "Multimodal Embeddings".
Next to each work in the reference list you will find an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic citation for the selected work in the citation style of your choice: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of a scholarly publication as a .pdf file and read its abstract online, when these are available in the metadata.
Browse journal articles across a wide range of disciplines and compile your bibliography correctly.
Tyshchuk, Kirill, Polina Karpikova, Andrew Spiridonov, Anastasiia Prutianova, Anton Razzhigaev, and Alexander Panchenko. "On Isotropy of Multimodal Embeddings." Information 14, no. 7 (2023): 392. http://dx.doi.org/10.3390/info14070392.
Guo, Zhiqiang, Jianjun Li, Guohui Li, Chaoyang Wang, Si Shi, and Bin Ruan. "LGMRec: Local and Global Graph Learning for Multimodal Recommendation." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 8 (2024): 8454–62. http://dx.doi.org/10.1609/aaai.v38i8.28688.
Shang, Bin, Yinliang Zhao, Jun Liu, and Di Wang. "LAFA: Multimodal Knowledge Graph Completion with Link Aware Fusion and Aggregation." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 8 (2024): 8957–65. http://dx.doi.org/10.1609/aaai.v38i8.28744.
Sun, Zhongkai, Prathusha Sarma, William Sethares, and Yingyu Liang. "Learning Relationships between Text, Audio, and Video via Deep Canonical Correlation for Multimodal Language Analysis." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (2020): 8992–99. http://dx.doi.org/10.1609/aaai.v34i05.6431.
Merkx, Danny, and Stefan L. Frank. "Learning semantic sentence representations from visually grounded language without lexical knowledge." Natural Language Engineering 25, no. 4 (2019): 451–66. http://dx.doi.org/10.1017/s1351324919000196.
Mateev, Mihail. "Comparative Analysis on Implementing Embeddings for Image Analysis." Journal of Information Systems Engineering and Management 10, no. 17s (2025): 89–102. https://doi.org/10.52783/jisem.v10i17s.2710.
Tang, Zhenchao, Jiehui Huang, Guanxing Chen, and Calvin Yu-Chian Chen. "Comprehensive View Embedding Learning for Single-Cell Multimodal Integration." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 14 (2024): 15292–300. http://dx.doi.org/10.1609/aaai.v38i14.29453.
Zhang, Linhai, Deyu Zhou, Yulan He, and Zeng Yang. "MERL: Multimodal Event Representation Learning in Heterogeneous Embedding Spaces." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 16 (2021): 14420–27. http://dx.doi.org/10.1609/aaai.v35i16.17695.
Sah, Shagan, Sabarish Gopalakishnan, and Raymond Ptucha. "Aligned attention for common multimodal embeddings." Journal of Electronic Imaging 29, no. 02 (2020): 1. http://dx.doi.org/10.1117/1.jei.29.2.023013.
Alkaabi, Hussein, Ali Kadhim Jasim, and Ali Darroudi. "From Static to Contextual: A Survey of Embedding Advances in NLP." PERFECT: Journal of Smart Algorithms 2, no. 2 (2025): 57–66. https://doi.org/10.62671/perfect.v2i2.77.
Zhang, Rongchao, Yiwei Lou, Dexuan Xu, Yongzhi Cao, Hanpin Wang, and Yu Huang. "A Learnable Discrete-Prior Fusion Autoencoder with Contrastive Learning for Tabular Data Synthesis." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 15 (2024): 16803–11. http://dx.doi.org/10.1609/aaai.v38i15.29621.
Lin, Kaiyi, Xing Xu, Lianli Gao, Zheng Wang, and Heng Tao Shen. "Learning Cross-Aligned Latent Embeddings for Zero-Shot Cross-Modal Retrieval." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (2020): 11515–22. http://dx.doi.org/10.1609/aaai.v34i07.6817.
Khalifa, Omar Yasser Ibrahim, and Muhammad Zafran Muhammad Zaly Shah. "MultiPhishNet: A Multimodal Approach of QR Code Phishing Detection using Multi-Head Attention and Multilingual Embeddings." International Journal of Innovative Computing 15, no. 1 (2025): 53–61. https://doi.org/10.11113/ijic.v15n1.512.
Waqas, Asim, Aakash Tripathi, Mia Naeini, Paul A. Stewart, Matthew B. Schabath, and Ghulam Rasool. "Abstract 991: PARADIGM: an embeddings-based multimodal learning framework with foundation models and graph neural networks." Cancer Research 85, no. 8_Supplement_1 (2025): 991. https://doi.org/10.1158/1538-7445.am2025-991.
Li, Xiaolong, Yang Dong, Yunfei Yi, Zhixun Liang, and Shuqi Yan. "Hypergraph Neural Network for Multimodal Depression Recognition." Electronics 13, no. 22 (2024): 4544. http://dx.doi.org/10.3390/electronics13224544.
Zhu, Chaoyu, Zhihao Yang, Xiaoqiong Xia, Nan Li, Fan Zhong, and Lei Liu. "Multimodal reasoning based on knowledge graph embedding for specific diseases." Bioinformatics 38, no. 8 (2022): 2235–45. http://dx.doi.org/10.1093/bioinformatics/btac085.
Tripathi, Aakash Gireesh, Asim Waqas, Yasin Yilmaz, Matthew B. Schabath, and Ghulam Rasool. "Abstract 3641: Predicting treatment outcomes using cross-modality correlations in multimodal oncology data." Cancer Research 85, no. 8_Supplement_1 (2025): 3641. https://doi.org/10.1158/1538-7445.am2025-3641.
Tripathi, Aakash, Asim Waqas, Yasin Yilmaz, and Ghulam Rasool. "Abstract 4905: Multimodal transformer model improves survival prediction in lung cancer compared to unimodal approaches." Cancer Research 84, no. 6_Supplement (2024): 4905. http://dx.doi.org/10.1158/1538-7445.am2024-4905.
Xu, Jinfeng, Zheyu Chen, Shuo Yang, Jinze Li, Hewei Wang, and Edith C. H. Ngai. "MENTOR: Multi-level Self-supervised Learning for Multimodal Recommendation." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 12 (2025): 12908–17. https://doi.org/10.1609/aaai.v39i12.33408.
Ota, Kosuke, Keiichiro Shirai, Hidetoshi Miyao, and Minoru Maruyama. "Multimodal Analogy-Based Image Retrieval by Improving Semantic Embeddings." Journal of Advanced Computational Intelligence and Intelligent Informatics 26, no. 6 (2022): 995–1003. http://dx.doi.org/10.20965/jaciii.2022.p0995.
Yi, Moung-Ho, Keun-Chang Kwak, and Ju-Hyun Shin. "KoHMT: A Multimodal Emotion Recognition Model Integrating KoELECTRA, HuBERT with Multimodal Transformer." Electronics 13, no. 23 (2024): 4674. http://dx.doi.org/10.3390/electronics13234674.
Mai, Sijie, Haifeng Hu, and Songlong Xing. "Modality to Modality Translation: An Adversarial Representation Learning and Graph Fusion Network for Multimodal Fusion." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 01 (2020): 164–72. http://dx.doi.org/10.1609/aaai.v34i01.5347.
Wagh, Kapil Adhar. "A Review: Word Embedding Models with Machine Learning Based Context Depend and Context Independent Techniques." Advances in Nonlinear Variational Inequalities 28, no. 3s (2024): 251–58. https://doi.org/10.52783/anvi.v28.2928.
Kim, Donghyun, Kuniaki Saito, Kate Saenko, Stan Sclaroff, and Bryan Plummer. "MULE: Multimodal Universal Language Embedding." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (2020): 11254–61. http://dx.doi.org/10.1609/aaai.v34i07.6785.
Singh, Vijay Vaibhav. "Vector Embeddings: The Mathematical Foundation of Modern AI Systems." International Journal of Scientific Research in Computer Science, Engineering and Information Technology 11, no. 1 (2025): 2408–17. https://doi.org/10.32628/cseit251112257.
Wehrmann, Jônatas, Anderson Mattjie, and Rodrigo C. Barros. "Order embeddings and character-level convolutions for multimodal alignment." Pattern Recognition Letters 102 (January 2018): 15–22. http://dx.doi.org/10.1016/j.patrec.2017.11.020.
Mithun, Niluthpol C., Juncheng Li, Florian Metze, and Amit K. Roy-Chowdhury. "Joint embeddings with multimodal cues for video-text retrieval." International Journal of Multimedia Information Retrieval 8, no. 1 (2019): 3–18. http://dx.doi.org/10.1007/s13735-018-00166-3.
Fodor, Ádám, András Lőrincz, and Rachid R. Saboundji. "Enhancing apparent personality trait analysis with cross-modal embeddings." Annales Universitatis Scientiarum Budapestinensis de Rolando Eötvös Nominatae. Sectio computatorica 57 (2024): 167–85. https://doi.org/10.71352/ac.57.167.
Nayak, Roshan, B. S. Ullas Kannantha, S. Kruthi, and C. Gururaj. "Multimodal Offensive Meme Classification Using Transformers and BiLSTM." International Journal of Engineering and Advanced Technology 11, no. 3 (2022): 96–102. https://doi.org/10.35940/ijeat.C3392.0211322.
Chen, Weijia, Zhijun Lu, Lijue You, Lingling Zhou, Jie Xu, and Ken Chen. "Artificial Intelligence–Based Multimodal Risk Assessment Model for Surgical Site Infection (AMRAMS): Development and Validation Study." JMIR Medical Informatics 8, no. 6 (2020): e18186. http://dx.doi.org/10.2196/18186.
Smelik, N. D. "Multimodal topic model for texts and images utilizing their embeddings." Machine Learning and Data Analysis 2, no. 4 (2016): 421–41. http://dx.doi.org/10.21469/22233792.2.4.05.
Abdou, Ahmed, Ekta Sood, Philipp Müller, and Andreas Bulling. "Gaze-enhanced Crossmodal Embeddings for Emotion Recognition." Proceedings of the ACM on Human-Computer Interaction 6, ETRA (2022): 1–18. http://dx.doi.org/10.1145/3530879.
Hu, Wenbo, Yifan Xu, Yi Li, Weiyue Li, Zeyuan Chen, and Zhuowen Tu. "BLIVA: A Simple Multimodal LLM for Better Handling of Text-Rich Visual Questions." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (2024): 2256–64. http://dx.doi.org/10.1609/aaai.v38i3.27999.
Chen, Qihua, Xuejin Chen, Chenxuan Wang, Yixiong Liu, Zhiwei Xiong, and Feng Wu. "Learning Multimodal Volumetric Features for Large-Scale Neuron Tracing." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 2 (2024): 1174–82. http://dx.doi.org/10.1609/aaai.v38i2.27879.
Shen, Aili, Bahar Salehi, Jianzhong Qi, and Timothy Baldwin. "A General Approach to Multimodal Document Quality Assessment." Journal of Artificial Intelligence Research 68 (July 22, 2020): 607–32. http://dx.doi.org/10.1613/jair.1.11647.
Sata, Ikumi, Motoki Amagasaki, and Masato Kiyama. "Multimodal Retrieval Method for Images and Diagnostic Reports Using Cross-Attention." AI 6, no. 2 (2025): 38. https://doi.org/10.3390/ai6020038.
Chitturi, Kiran. "Demystifying Multimodal AI: A Technical Deep Dive." International Journal of Scientific Research in Computer Science, Engineering and Information Technology 10, no. 6 (2024): 2011–17. https://doi.org/10.32628/cseit2410612394.
Tokar, Tomas, and Scott Sanner. "ICE-T: Interactions-aware Cross-column Contrastive Embedding for Heterogeneous Tabular Datasets." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 20 (2025): 20904–11. https://doi.org/10.1609/aaai.v39i20.35385.
Ma, Shukui, Pengyuan Ma, Shuaichao Feng, Fei Ma, and Guangping Zhuo. "Multimodal Data-Based Text Generation Depression Classification Model." International Journal of Computer Science and Information Technology 5, no. 1 (2025): 175–93. https://doi.org/10.62051/ijcsit.v5n1.16.
Zhang, Jianqiang, Renyao Chen, Shengwen Li, Tailong Li, and Hong Yao. "MGKGR: Multimodal Semantic Fusion for Geographic Knowledge Graph Representation." Algorithms 17, no. 12 (2024): 593. https://doi.org/10.3390/a17120593.
Arora, Jyoti, Priyal Khapekar, and Rakhi Pal. "Multimodal Sentiment Analysis using LSTM and RoBerta." Advanced Innovations in Computer Programming Languages 5, no. 2 (2023): 24–35. https://doi.org/10.5281/zenodo.8130701.
Tseng, Shao-Yen, Shrikanth Narayanan, and Panayiotis Georgiou. "Multimodal Embeddings From Language Models for Emotion Recognition in the Wild." IEEE Signal Processing Letters 28 (2021): 608–12. http://dx.doi.org/10.1109/lsp.2021.3065598.
Jing, Xuebin, Liang He, Zhida Song, and Shaolei Wang. "Audio–Visual Fusion Based on Interactive Attention for Person Verification." Sensors 23, no. 24 (2023): 9845. http://dx.doi.org/10.3390/s23249845.
Azeroual, Saadia, Zakaria Hamane, Rajaa Sebihi, and Fatima-Ezzahraa Ben-Bouazza. "Toward Improved Glioma Mortality Prediction: A Multimodal Framework Combining Radiomic and Clinical Features." International Journal of Online and Biomedical Engineering (iJOE) 21, no. 05 (2025): 31–46. https://doi.org/10.3991/ijoe.v21i05.52691.
Salin, Emmanuelle, Badreddine Farah, Stéphane Ayache, and Benoit Favre. "Are Vision-Language Transformers Learning Multimodal Representations? A Probing Perspective." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (2022): 11248–57. http://dx.doi.org/10.1609/aaai.v36i10.21375.
Peruka, Bikshapathy. "Sentemonet: A Comprehensive Framework for Multimodal Sentiment Analysis from Text and Emotions." Journal of Information Systems Engineering and Management 10, no. 34s (2025): 569–87. https://doi.org/10.52783/jisem.v10i34s.5852.
Skantze, Gabriel, and Bram Willemsen. "CoLLIE: Continual Learning of Language Grounding from Language-Image Embeddings." Journal of Artificial Intelligence Research 74 (July 9, 2022): 1201–23. http://dx.doi.org/10.1613/jair.1.13689.
Li, Wenxiang, Longyuan Ding, Yuliang Zhang, and Ziyuan Pu. "Understanding multimodal travel patterns based on semantic embeddings of human mobility trajectories." Journal of Transport Geography 124 (April 2025): 104169. https://doi.org/10.1016/j.jtrangeo.2025.104169.
Wang, Jenq-Haur, Mehdi Norouzi, and Shu Ming Tsai. "Augmenting Multimodal Content Representation with Transformers for Misinformation Detection." Big Data and Cognitive Computing 8, no. 10 (2024): 134. http://dx.doi.org/10.3390/bdcc8100134.