Scientific literature on the topic "Cross-modality Translation"
Create a correct reference in APA, MLA, Chicago, Harvard, and various other citation styles
Consult the thematic lists of journal articles, books, theses, conference reports, and other academic sources on the topic "Cross-modality Translation."
Next to each source in the list of references there is an "Add to bibliography" button. Click this button, and we will automatically generate the bibliographic reference for the selected source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever this information is included in the metadata.
Journal articles on the topic "Cross-modality Translation"
Holubenko, Nataliia. "Modality from the Cross-cultural Studies Perspective: a Practical Approach to Intersemiotic Translation." World Journal of English Language 13, no. 2 (January 27, 2023): 86. http://dx.doi.org/10.5430/wjel.v13n2p86.
Liu, Ajian, Zichang Tan, Jun Wan, Yanyan Liang, Zhen Lei, Guodong Guo, and Stan Z. Li. "Face Anti-Spoofing via Adversarial Cross-Modality Translation." IEEE Transactions on Information Forensics and Security 16 (2021): 2759–72. http://dx.doi.org/10.1109/tifs.2021.3065495.
Rabadán, Rosa. "Modality and modal verbs in contrast." Languages in Contrast 6, no. 2 (December 15, 2006): 261–306. http://dx.doi.org/10.1075/lic.6.2.04rab.
Wang, Yu, and Jianping Zhang. "CMMCSegNet: Cross-Modality Multicascade Indirect LGE Segmentation on Multimodal Cardiac MR." Computational and Mathematical Methods in Medicine 2021 (June 5, 2021): 1–14. http://dx.doi.org/10.1155/2021/9942149.
Danni, Yu. "A Genre Approach to the Translation of Political Speeches Based on a Chinese-Italian-English Trilingual Parallel Corpus." SAGE Open 10, no. 2 (April 2020): 215824402093360. http://dx.doi.org/10.1177/2158244020933607.
Wu, Kevin E., Kathryn E. Yost, Howard Y. Chang, and James Zou. "BABEL enables cross-modality translation between multiomic profiles at single-cell resolution." Proceedings of the National Academy of Sciences 118, no. 15 (April 7, 2021): e2023070118. http://dx.doi.org/10.1073/pnas.2023070118.
Sharma, Akanksha, and Neeru Jindal. "Cross-Modality Breast Image Translation with Improved Resolution Using Generative Adversarial Networks." Wireless Personal Communications 119, no. 4 (March 29, 2021): 2877–91. http://dx.doi.org/10.1007/s11277-021-08376-5.
Mai, Sijie, Haifeng Hu, and Songlong Xing. "Modality to Modality Translation: An Adversarial Representation Learning and Graph Fusion Network for Multimodal Fusion." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 01 (April 3, 2020): 164–72. http://dx.doi.org/10.1609/aaai.v34i01.5347.
Lee, Yong-Hyeok, Dong-Won Jang, Jae-Bin Kim, Rae-Hong Park, and Hyung-Min Park. "Audio–Visual Speech Recognition Based on Dual Cross-Modality Attentions with the Transformer Model." Applied Sciences 10, no. 20 (October 17, 2020): 7263. http://dx.doi.org/10.3390/app10207263.
Wang, Yabing, Fan Wang, Jianfeng Dong, and Hao Luo. "CL2CM: Improving Cross-Lingual Cross-Modal Retrieval via Cross-Lingual Knowledge Transfer." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 6 (March 24, 2024): 5651–59. http://dx.doi.org/10.1609/aaai.v38i6.28376.
Theses on the topic "Cross-modality Translation"
Longuefosse, Arthur. "Apprentissage profond pour la conversion d'IRM vers TDM en imagerie thoracique." Electronic Thesis or Diss., Bordeaux, 2024. http://www.theses.fr/2024BORD0489.
Thoracic imaging faces significant challenges, with each imaging modality presenting its own limitations. CT, the gold standard for lung imaging, delivers high spatial resolution but relies on ionizing radiation, posing risks for patients requiring frequent scans. Conversely, lung MRI offers a radiation-free alternative but is hindered by technical issues such as low contrast and artifacts, limiting its broader clinical use. Recently, UTE-MRI has shown promise in addressing some of these limitations, but it still lacks the high resolution and image quality of CT, particularly for detailed structural assessment. The primary objective of this thesis is to develop and validate deep learning-based models for synthesizing CT-like images from UTE-MRI. Specifically, we aim to assess the image quality, anatomical accuracy, and clinical applicability of these synthetic CT images in comparison to the original UTE-MRI and real CT scans in thoracic imaging. Initially, we explored the fundamentals of medical image synthesis, establishing the groundwork for MR to CT translation. We implemented a 2D GAN model based on the pix2pixHD framework, optimizing it using SPADE normalization and refining preprocessing techniques such as resampling and registration. Clinical evaluation with expert radiologists showed promising results in comparing synthetic images to real CT scans. Synthesis was further enhanced by introducing a perceptual loss, which improved structural details and visual quality, and by incorporating 2.5D strategies to balance between 2D and 3D synthesis. Additionally, we emphasized a rigorous validation process using task-specific metrics, challenging traditional intensity-based and global metrics by focusing on the accurate reconstruction of anatomical structures. In the final stage, we developed a robust and scalable 3D synthesis framework by adapting nnU-Net for CT generation, along with an anatomical feature-prioritized loss function, enabling superior reconstruction of critical structures such as airways and vessels. Our work highlights the potential of deep learning-based models for generating high-quality synthetic CT images from UTE-MRI, offering a significant improvement in non-invasive lung imaging. These advances could greatly enhance the clinical applicability of UTE-MRI, providing a safer alternative to CT for the follow-up of chronic lung diseases. Furthermore, a patent is currently in preparation for the adoption of our method, paving the way for potential clinical use.
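To make the anatomical feature-prioritized loss mentioned in the abstract above more concrete, here is a minimal, hypothetical PyTorch sketch. It assumes binary masks for structures such as airways and vessels are available and simply up-weights the reconstruction error inside them; the function and parameter names (afp_loss, structure_weights) are illustrative assumptions, not the thesis implementation.

```python
import torch
import torch.nn.functional as F

def afp_loss(pred_ct, real_ct, structure_masks, structure_weights):
    """Voxel-wise L1 loss plus extra penalties inside anatomical masks.

    pred_ct, real_ct  : (B, 1, D, H, W) synthetic and reference CT volumes
    structure_masks   : dict name -> (B, 1, D, H, W) binary mask tensor
    structure_weights : dict name -> float, relative importance of each structure
    """
    loss = F.l1_loss(pred_ct, real_ct)  # global term over the whole volume
    for name, mask in structure_masks.items():
        w = structure_weights.get(name, 1.0)
        # mean absolute error restricted to the structure's voxels
        masked_err = (torch.abs(pred_ct - real_ct) * mask).sum() / mask.sum().clamp(min=1.0)
        loss = loss + w * masked_err
    return loss

# Toy usage with random volumes and sparse random "airway"/"vessel" masks.
if __name__ == "__main__":
    b, d, h, w = 1, 8, 32, 32
    pred = torch.randn(b, 1, d, h, w)
    real = torch.randn(b, 1, d, h, w)
    masks = {"airways": (torch.rand(b, 1, d, h, w) > 0.95).float(),
             "vessels": (torch.rand(b, 1, d, h, w) > 0.90).float()}
    print(afp_loss(pred, real, masks, {"airways": 5.0, "vessels": 3.0}))
```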
"Cross-modality semantic integration and robust interpretation of multimodal user interactions." Thesis, 2010. http://library.cuhk.edu.hk/record=b6075023.
We have also performed latent semantic modeling (LSM) for interpreting multimodal user input consisting of speech and pen gestures. Each modality of a multimodal input carries semantics related to a domain-specific task goal (TG). Each input is annotated manually with a TG based on the semantics. Multimodal input usually has a simpler syntactic structure and a different order of semantic constituents than unimodal input. Therefore, we proposed to use LSM to derive the latent semantics from the multimodal inputs. In order to achieve this, we characterized the cross-modal integration pattern as 3-tuple multimodal terms taking into account the SLR, the pen gesture type, and their temporal relation. The correlation term matrix is then decomposed using singular value decomposition (SVD) to derive the latent semantics automatically. TG inference on a disjoint test set based on the latent semantics achieves accurate performance for 99% of the multimodal inquiries. (An illustrative sketch of this SVD-based approach is given after this record.)
Hui, Pui Yu.
Adviser: Helen Meng.
Source: Dissertation Abstracts International, Volume: 73-02, Section: B, page: .
Thesis (Ph.D.)--Chinese University of Hong Kong, 2010.
Includes bibliographical references (leaves 294-306).
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [201-] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Abstract also in Chinese.
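As a rough illustration of the latent semantic modeling pipeline described in the abstract above, the following is a minimal sketch, assuming a toy term-by-input count matrix of cross-modal 3-tuple terms. The term strings, task-goal labels, and the fold-in plus nearest-neighbour inference shown here are illustrative assumptions, not the thesis code.

```python
import numpy as np

# Rows: cross-modal 3-tuple terms (speech reference, pen-gesture type, temporal relation);
# columns: training inputs, each manually annotated with a task goal (TG).
terms = ["loc_ref+point+overlap", "loc_ref+circle+precede", "route_ref+line+overlap"]
goals = ["find_place", "find_place", "plan_route"]
X = np.array([[3., 1., 0.],
              [1., 2., 0.],
              [0., 0., 4.]])

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2                                        # retained latent dimensions
docs_latent = (np.diag(s[:k]) @ Vt[:k]).T    # training inputs in latent space

def infer_tg(term_counts):
    """Fold a new multimodal input into the latent space and return the TG
    of its most similar training input (cosine similarity)."""
    q = np.asarray(term_counts, dtype=float)
    q_latent = q @ U[:, :k]                  # project query onto latent dimensions
    sims = docs_latent @ q_latent / (
        np.linalg.norm(docs_latent, axis=1) * np.linalg.norm(q_latent) + 1e-12)
    return goals[int(np.argmax(sims))]

print(infer_tg([2., 1., 0.]))   # expected: "find_place"
```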
Book chapters on the topic "Cross-modality Translation"
Zhang, Ran, Laetitia Meng-Papaxanthos, Jean-Philippe Vert, and William Stafford Noble. "Semi-supervised Single-Cell Cross-modality Translation Using Polarbear." In Lecture Notes in Computer Science, 20–35. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04749-7_2.
Kang, Bogyeong, Hyeonyeong Nam, Ji-Wung Han, Keun-Soo Heo, and Tae-Eui Kam. "Multi-view Cross-Modality MR Image Translation for Vestibular Schwannoma and Cochlea Segmentation." In Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries, 100–108. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-44153-0_10.
Yang, Tao, and Lisheng Wang. "Koos Classification of Vestibular Schwannoma via Image Translation-Based Unsupervised Cross-Modality Domain Adaptation." In Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries, 59–67. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-44153-0_6.
Zhao, Ziyuan, Kaixin Xu, Huai Zhe Yeo, Xulei Yang, and Cuntai Guan. "MS-MT: Multi-scale Mean Teacher with Contrastive Unpaired Translation for Cross-Modality Vestibular Schwannoma and Cochlea Segmentation." In Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries, 68–78. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-44153-0_7.
Zhu, Lei, Ling Ling Chan, Teck Khim Ng, Meihui Zhang, and Beng Chin Ooi. "Deep Co-Training for Cross-Modality Medical Image Segmentation." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2023. http://dx.doi.org/10.3233/faia230633.
Texte intégralActes de conférences sur le sujet "Cross-modality Translation"
Li, Yingtai, Shuo Yang, Xiaoyan Wu, Shan He, and S. Kevin Zhou. "Taming Stable Diffusion for MRI Cross-Modality Translation." In 2024 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), 2134–41. IEEE, 2024. https://doi.org/10.1109/bibm62325.2024.10822349.
Hassanzadeh, Reihaneh, Anees Abrol, Hamid Reza Hassanzadeh, and Vince D. Calhoun. "Cross-Modality Translation with Generative Adversarial Networks to Unveil Alzheimer's Disease Biomarkers." In 2024 46th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 1–4. IEEE, 2024. https://doi.org/10.1109/embc53108.2024.10781737.
Xiang, Yixin, Xianhua Zeng, Dajiang Lei, and Tao Fu. "MOADM: Manifold Optimization Adversarial Diffusion Model for Cross-Modality Medical Image Translation." In 2024 IEEE International Conference on Medical Artificial Intelligence (MedAI), 380–85. IEEE, 2024. https://doi.org/10.1109/medai62885.2024.00057.
Zhao, Pu, Hong Pan, and Siyu Xia. "MRI-Trans-GAN: 3D MRI Cross-Modality Translation." In 2021 40th Chinese Control Conference (CCC). IEEE, 2021. http://dx.doi.org/10.23919/ccc52363.2021.9550256.
Qi, Jinwei, and Yuxin Peng. "Cross-modal Bidirectional Translation via Reinforcement Learning." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/365.
Tang, Shi, Xinchen Ye, Fei Xue, and Rui Xu. "Cross-Modality Depth Estimation via Unsupervised Stereo RGB-to-Infrared Translation." In ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2023. http://dx.doi.org/10.1109/icassp49357.2023.10095982.
Ye, Jinhui, Wenxiang Jiao, Xing Wang, Zhaopeng Tu, and Hui Xiong. "Cross-modality Data Augmentation for End-to-End Sign Language Translation." In Findings of the Association for Computational Linguistics: EMNLP 2023. Stroudsburg, PA, USA: Association for Computational Linguistics, 2023. http://dx.doi.org/10.18653/v1/2023.findings-emnlp.904.
Maji, Prasenjit, Kunal Dhibar, and Hemanta Kumar Mondal. "Revolutionizing and Enhancing Medical Diagnostics with Conditional GANs for Cross-Modality Image Translation." In 2024 11th International Conference on Computing for Sustainable Global Development (INDIACom). IEEE, 2024. http://dx.doi.org/10.23919/indiacom61295.2024.10498844.
Xu, Siwei, Junhao Liu, and Jing Zhang. "scACT: Accurate Cross-modality Translation via Cycle-consistent Training from Unpaired Single-cell Data." In CIKM '24: The 33rd ACM International Conference on Information and Knowledge Management, 2722–31. New York, NY, USA: ACM, 2024. http://dx.doi.org/10.1145/3627673.3679576.
Cheng, Xize, Tao Jin, Rongjie Huang, Linjun Li, Wang Lin, Zehan Wang, Ye Wang, Huadai Liu, Aoxiong Yin, and Zhou Zhao. "MixSpeech: Cross-Modality Self-Learning with Audio-Visual Stream Mixup for Visual Speech Translation and Recognition." In 2023 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2023. http://dx.doi.org/10.1109/iccv51070.2023.01442.