
Journal articles on the topic 'Multi-Modal Medical Imaging'

Consult the top 50 journal articles for your research on the topic 'Multi-Modal Medical Imaging.'


1

Dong, Di, Jie Tian, Yakang Dai, Guorui Yan, Fei Yang, and Ping Wu. "Unified reconstruction framework for multi-modal medical imaging." Journal of X-Ray Science and Technology 19, no. 1 (2011): 111–26. http://dx.doi.org/10.3233/xst-2010-0281.

Abstract:
Various types of advanced imaging technologies have significantly improved the quality of medical care available to patients. Corresponding medical image reconstruction algorithms, especially 3D reconstruction, play an important role in disease diagnosis and treatment assessment. However, these increasing reconstruction methods are not implemented in a unified software framework, which brings along lots of disadvantages such as breaking connection of different modalities, lack of module reuse and inconvenience to method comparison. This paper discusses reconstruction process from the viewpoint
3

Zhang, Yilin. "Multi-Modal Medical Image Matching Based on Multi-Task Learning and Semantic-Enhanced Cross-Modal Retrieval." Traitement du Signal 40, no. 5 (2023): 2041–49. http://dx.doi.org/10.18280/ts.400522.

Abstract:
With the continuous advancement of medical imaging technology, a vast amount of multi-modal medical image data has been extensively utilized for disease diagnosis, treatment, and research. Effective management and utilization of these data becomes a pivotal challenge, particularly when undertaking image matching and retrieval. Although numerous methods for medical image matching and retrieval exist, they primarily rely on traditional image processing techniques, often limited to manual feature extraction and singular modality handling. To address these limitations, this study introduces an alg
4

T., Kusuma. "Survey on Multi-Modal Medical Image Fusion." International Journal for Research in Applied Science and Engineering Technology 11, no. 11 (2023): 1126–31. http://dx.doi.org/10.22214/ijraset.2023.56694.

Abstract:
Multi-modality medical or clinical image fusion is a field of study aimed at enhancing diagnostic accuracy and aid in decisions to be taken by medical professional. Various fusion techniques such as pixel-based, region-based, and transform-based approaches are applied in image fusion to provide accurate fusion. Different devices which take scans of body such as MRI, CT, PET, SPECT, Ultrasound hold and carry different features, and different medical sensors obtain different information of the particular part of the body. Each of these imaging modalities offer only specific information
5

Khalil, Adil Ibrahim. "Multi-Modal Fusion Techniques for Improved Diagnosis in Medical Imaging." Journal of Information Systems Engineering and Management 10, no. 1s (2024): 47–56. https://doi.org/10.52783/jisem.v10i1s.100.

Abstract:
Identifying diverse disease states is crucial for prompt and efficient clinical management. Complementary data from many medical imaging modalities, including MRI, CT, and PET, can be integrated to improve diagnostic performance. This work aims to assess how well multi-modal fusion methods work to enhance medical picture diagnosis. A multicenter study was conducted with 150 patients with different clinical conditions (mean age 58.2 ± 12.4 years, 52% female). After gathering data from MRI, CT, and PET scans, structural, functional, and textural characteristics were removed from each modality. T
6

Dehghani, Farzaneh, Reihaneh Derafshi, Joanna Lin, Sayeh Bayat, and Mariana Bento. "Alzheimer Disease Detection Studies: Perspective on Multi-Modal Data." Yearbook of Medical Informatics 33, no. 01 (2024): 266–76. https://doi.org/10.1055/s-0044-1800756.

Abstract:
Summary Objectives: Alzheimer's Disease (AD) is one of the most common neurodegenerative diseases, resulting in progressive cognitive decline, and so accurate and timely AD diagnosis is of critical importance. To this end, various medical technologies and computer-aided diagnosis (CAD), ranging from biosensors and raw signals to medical imaging, have been used to provide information about the state of AD. In this survey, we aim to provide a review on CAD systems for automated AD detection, focusing on different data types: namely, signals and sensors, medical imaging, and electronic medical re
7

Pasupuleti, Murali Krishna. "AI-Driven Radiology: Multi-Modal Imaging Diagnosis Using Ensemble Models." International Journal of Academic and Industrial Research Innovations(IJAIRI) 05, no. 05 (2025): 620–30. https://doi.org/10.62311/nesx/rphcr22.

Abstract:
The integration of artificial intelligence (AI) in radiology has significantly improved diagnostic workflows, particularly with the advent of multi-modal imaging systems. This study proposes an ensemble deep learning framework for radiological diagnosis by combining complementary information from computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET). Modality-specific convolutional neural networks—ResNet50 for CT, DenseNet121 for MRI, and EfficientNet for PET—were independently trained and their outputs aggregated via a fusion layer for fi
8

V., Bhavana, and Krishnappa H. K. "Multi-modal image fusion using contourlet and wavelet transforms: a multi-resolution approach." Indonesian Journal of Electrical Engineering and Computer Science 28, no. 2 (2022): 762–68. http://dx.doi.org/10.11591/ijeecs.v28.i2.pp762-768.

Abstract:
In recent years, vast improvement and progress has been observed in the field of medical research, especially in digital medical imaging technology. Medical image fusion has been widely used in clinical diagnosis to get valuable information from different modalities of medical images to enhance its quality by fusing images like computed tomography (CT), and magnetic resonance imaging (MRI). MRI gives clear information on delicate tissue while CT gives details about denser tissues. A multi-resolution approach is proposed in this work for fusing medical images using non-sub-sampled contourlet tr
10

A, Sathya. "Multi-Modal Image Fusion for Early Disease Diagnosis: AI in Medical Imaging." Multidisciplinary Journal for Applied Research in Engineering and Technology 4, no. 2 (2024): 16–20. https://doi.org/10.54228/mjaret0624010.

Abstract:
This study focuses on a novel multi-modal image fusion algorithm with artificial intelligence in medical imaging for early-stage disease diagnosis. We introduced Deep Multi-Cascade Fusion (DMC-Fusion) algorithm, which fuses classifier-based features from MRI, CT and PET with self-supervised learning techniques. By leveraging a unique dataset of 10,000 multi-modal medical images of five different types of diseases, including brain tumors and lung cancer, the proposed method outperformed existing single-modality imaging systems by improving accuracy in the detection of early-stage diseases to 25
11

Dai, Yin, Yifan Gao, and Fayu Liu. "TransMed: Transformers Advance Multi-Modal Medical Image Classification." Diagnostics 11, no. 8 (2021): 1384. http://dx.doi.org/10.3390/diagnostics11081384.

Abstract:
Over the past decade, convolutional neural networks (CNN) have shown very competitive performance in medical image analysis tasks, such as disease classification, tumor segmentation, and lesion detection. CNN has great advantages in extracting local features of images. However, due to the locality of convolution operation, it cannot deal with long-range relationships well. Recently, transformers have been applied to computer vision and achieved remarkable success in large-scale datasets. Compared with natural images, multi-modal medical images have explicit and important long-range dependencie
12

Bashiri, Fereshteh, Ahmadreza Baghaie, Reihaneh Rostami, Zeyun Yu, and Roshan D’Souza. "Multi-Modal Medical Image Registration with Full or Partial Data: A Manifold Learning Approach." Journal of Imaging 5, no. 1 (2018): 5. http://dx.doi.org/10.3390/jimaging5010005.

Abstract:
Multi-modal image registration is the primary step in integrating information stored in two or more images, which are captured using multiple imaging modalities. In addition to intensity variations and structural differences between images, they may have partial or full overlap, which adds an extra hurdle to the success of registration process. In this contribution, we propose a multi-modal to mono-modal transformation method that facilitates direct application of well-founded mono-modal registration methods in order to obtain accurate alignment of multi-modal images in both cases, with comple
13

R., Kaviya Nachiyar. "AI-Powered Diagnostic Suite: Integrating Multi-Modal Medical Imaging and Predictive Analytics." International Research Journal of Education and Technology 6, no. 11 (2024): 1227–29. https://doi.org/10.70127/irjedt.vol.7.issue03.1229.

14

Carminati, Marco, and Carlo Fiorini. "Challenges for Microelectronics in Non-Invasive Medical Diagnostics." Sensors 20, no. 13 (2020): 3636. http://dx.doi.org/10.3390/s20133636.

Abstract:
Microelectronics is emerging, sometimes with changing fortunes, as a key enabling technology in diagnostics. This paper reviews some recent results and technical challenges which still need to be addressed in terms of the design of CMOS analog application specific integrated circuits (ASICs) and their integration in the surrounding systems, in order to consolidate this technological paradigm. Open issues are discussed from two, apparently distant but complementary, points of view: micro-analytical devices, combining microfluidics with affinity bio-sensing, and gamma cameras for simultaneous mu
15

Jain, Mohit, and Adit Shah. "A multi-modal CNN framework for integrating medical imaging for COVID-19 Diagnosis." World Journal of Advanced Research and Reviews 8, no. 3 (2020): 475–93. https://doi.org/10.30574/wjarr.2020.8.3.0418.

Abstract:
Due to COVID-19 spreading fast, traditional methods have revealed many inadequacies, showing there is a strong need for better and faster tests. In this article, we discuss the structure, arrangement and clinical use of a multi-modal CNN framework for including medical imaging in COVID-19 diagnosis. When data from X-rays, CT scans and ultrasound are combined, it helps doctors better understand how a disease is showing up in the body. CNN, using each imaging technique’s special features, fuses, extracts data and integrates with attention to help diseases be identified more accurately. The artic
16

Qu, Ruyi, and Zhifeng Xiao. "An Attentive Multi-Modal CNN for Brain Tumor Radiogenomic Classification." Information 13, no. 3 (2022): 124. http://dx.doi.org/10.3390/info13030124.

Abstract:
Medical images of brain tumors are critical for characterizing the pathology of tumors and early diagnosis. There are multiple modalities for medical images of brain tumors. Fusing the unique features of each modality of the magnetic resonance imaging (MRI) scans can accurately determine the nature of brain tumors. The current genetic analysis approach is time-consuming and requires surgical extraction of brain tissue samples. Accurate classification of multi-modal brain tumor images can speed up the detection process and alleviate patient suffering. Medical image fusion refers to effectively
17

Islam, Kh Tohidul, Sudanthi Wijewickrema, and Stephen O’Leary. "A Deep Learning Framework for Segmenting Brain Tumors Using MRI and Synthetically Generated CT Images." Sensors 22, no. 2 (2022): 523. http://dx.doi.org/10.3390/s22020523.

Abstract:
Multi-modal three-dimensional (3-D) image segmentation is used in many medical applications, such as disease diagnosis, treatment planning, and image-guided surgery. Although multi-modal images provide information that no single image modality alone can provide, integrating such information to be used in segmentation is a challenging task. Numerous methods have been introduced to solve the problem of multi-modal medical image segmentation in recent years. In this paper, we propose a solution for the task of brain tumor segmentation. To this end, we first introduce a method of enhancing an exis
18

Khan, Muhammad Waqar, Adnan Ahmed Siddiqui, and Syed Sajjad Hussain Rizvi. "A Systematic Study on Recent Evolutionary Algorithms for Multi-Modal Multi-Objective Optimization." Pakistan Journal of Engineering, Technology and Science 12, no. 2 (2024): 102–15. https://doi.org/10.22555/pjets.v12i2.1261.

Abstract:
The real-world optimization problems are inherently multi-modal and multi-objective (MMMO), such as bio-medical imaging, automotive engine design, plant identification in control systems, inference engine design, etc. This is mainly because of the acute diversity and diffusion of solutions in Pareto space and the multi-modality of the solution set. Therefore, finding the optimal solution for MMMO is a pressing need in the literature. It is evident from the literature that evolutionary Algorithms (EA) are the best candidates for solving MO problems. However, due to massive variants of single-ob
19

Banakar, Lohit. "Optimizing Disease Detection: A Multi-Modal Deep Learning Framework for Medical Imaging and Clinical Data Integration." Journal of Information Systems Engineering and Management 10, no. 8s (2025): 627–34. https://doi.org/10.52783/jisem.v10i8s.1118.

Abstract:
This study introduces a novel multi-modal deep learning framework that integrates medical imaging data with clinical records for enhanced disease detection. We propose a hybrid architecture combining convolutional neural networks (CNNs) for image analysis and transformer networks for processing clinical data. The framework was evaluated on a dataset of 10,000 patients over 12 months, focusing on detecting early signs of lung cancer and coronary artery disease. Results show our integrated approach achieves significantly higher accuracy compared to single-modality models, with an F1 score of 0.8
20

Jiang, Mingfeng, Peihang Jia, Xin Huang, et al. "Frequency-Aware Diffusion Model for Multi-Modal MRI Image Synthesis." Journal of Imaging 11, no. 5 (2025): 152. https://doi.org/10.3390/jimaging11050152.

Abstract:
Magnetic Resonance Imaging (MRI) is a widely used, non-invasive imaging technology that plays a critical role in clinical diagnostics. Multi-modal MRI, which combines images from different modalities, enhances diagnostic accuracy by offering comprehensive tissue characterization. Meanwhile, multi-modal MRI enhances downstream tasks, like brain tumor segmentation and image reconstruction, by providing richer features. While recent advances in diffusion models (DMs) show potential for high-quality image translation, existing methods still struggle to preserve fine structural details and ensure a
21

Mohankar, Arpit, Aishwarya Nagpure, Sania Shaikh, Khushi Singh, and Firdous Jahan Shaikh. "Medical Image Segmentation." International Research Journal on Advanced Engineering Hub (IRJAEH) 2, no. 11 (2024): 2569–74. http://dx.doi.org/10.47392/irjaeh.2024.0353.

Abstract:
Medical image segmentation is a critical component in the development of computer-aided diagnosis and treatment planning systems. This paper provides a comprehensive survey of recent advances in segmentation techniques applied to various imaging modalities, including Magnetic Resonance Imaging (MRI). Traditional methods such as thresholding, region-growing, and active contours are reviewed alongside contemporary machine learning-based approaches, particularly deep learning models. The survey emphasizes the growing dominance of convolutional neural networks (CNNs) and their variants, including
22

Zedadra, Amina, Mahmoud Yassine Salah-Salah, Ouarda Zedadra, and Antonio Guerrieri. "Multi-Modal AI for Multi-Label Retinal Disease Prediction Using OCT and Fundus Images: A Hybrid Approach." Sensors 25, no. 14 (2025): 4492. https://doi.org/10.3390/s25144492.

Abstract:
Ocular diseases can significantly affect vision and overall quality of life, with diagnosis often being time-consuming and dependent on expert interpretation. While previous computer-aided diagnostic systems have focused primarily on medical imaging, this paper proposes VisionTrack, a multi-modal AI system for predicting multiple retinal diseases, including Diabetic Retinopathy (DR), Age-related Macular Degeneration (AMD), Diabetic Macular Edema (DME), drusen, Central Serous Retinopathy (CSR), and Macular Hole (MH), as well as normal cases. The proposed framework integrates a Convolutional Neu
23

Nishchhal, N. "Symmetric Registration of Multi-Modal Medical Images Based on Anatomical Information." Vestnik komp'iuternykh i informatsionnykh tekhnologii, no. 250 (April 2025): 12–20. https://doi.org/10.14489/vkit.2025.04.pp.012-020.

Abstract:
This paper introduces a multi-modal symmetric image registration method leveraging the advanced SymReg-GAN architecture. The framework is specifically designed to address the challenges of aligning medical images from different modalities, such as CT and MRI, which often vary significantly in appearance while representing the same anatomical structures. The proposed method incorporates an Anatomical Attention Module (AAM), which focuses on preserving the anatomical coherence of key structures during the registration process. This ensures that critical regions are accurately aligned without los
24

Morris, Robert H., Christophe L. Trabi, Abi Spicer, et al. "A natural fibre reinforced composite material for multi-modal medical imaging and radiotherapy treatment." Materials Letters 252 (October 2019): 289–92. http://dx.doi.org/10.1016/j.matlet.2019.05.119.

25

Elaiyaraja, K., and M. Senthil Kumar. "A Novel Variable Weight Grey Wolf Optimization Algorithm in Medical Image Fusion." Journal of Medical Imaging and Health Informatics 11, no. 5 (2021): 1501–8. http://dx.doi.org/10.1166/jmihi.2021.3475.

Abstract:
Medical image fusion (MIF) is essential in clinical domain that integrates the multi-modal medical features to a unique frame known as fused image which finds utility in diagnosis process. Scaling based approaches are the commonly used multimodal MIF model where the generalized scaling has a stationary scale value selection that enhances the fusion quality. Discrete Wavelet Transform (db4)-based approaches give a maximum amount of approximation in multi-modal medical image fusion, while using less edge features. For generating efficient edge features, Laplacian filtering (LF) approach is employ
26

Mylona, E., D. Zaridis, G. Grigoriadis, N. Tachos, and D. I. Fotiadis. "PD-0314 An explainable deep learning pipeline for multi-modal multi-organ medical image segmentation." Radiotherapy and Oncology 170 (May 2022): S275–S276. http://dx.doi.org/10.1016/s0167-8140(22)02807-9.

27

Macfadyen, Craig, Ajay Duraiswamy, and David Harris-Birtill. "Classification of hyper-scale multimodal imaging datasets." PLOS Digital Health 2, no. 12 (2023): e0000191. http://dx.doi.org/10.1371/journal.pdig.0000191.

Abstract:
Algorithms that classify hyper-scale multi-modal datasets, comprising of millions of images, into constituent modality types can help researchers quickly retrieve and classify diagnostic imaging data, accelerating clinical outcomes. This research aims to demonstrate that a deep neural network that is trained on a hyper-scale dataset (4.5 million images) composed of heterogeneous multi-modal data can be used to obtain significant modality classification accuracy (96%). By combining 102 medical imaging datasets, a dataset of 4.5 million images was created. A ResNet-50, ResNet-18, and VGG16 were
28

Li, Qingyun, Zhibin Yu, Yubo Wang, and Haiyong Zheng. "TumorGAN: A Multi-Modal Data Augmentation Framework for Brain Tumor Segmentation." Sensors 20, no. 15 (2020): 4203. http://dx.doi.org/10.3390/s20154203.

Abstract:
The high human labor demand involved in collecting paired medical imaging data severely impedes the application of deep learning methods to medical image processing tasks such as tumor segmentation. The situation is further worsened when collecting multi-modal image pairs. However, this issue can be resolved through the help of generative adversarial networks, which can be used to generate realistic images. In this work, we propose a novel framework, named TumorGAN, to generate image segmentation pairs based on unpaired adversarial training. To improve the quality of the generated images, we i
29

Jin, Weina, Xiaoxiao Li, and Ghassan Hamarneh. "Evaluating Explainable AI on a Multi-Modal Medical Imaging Task: Can Existing Algorithms Fulfill Clinical Requirements?" Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (2022): 11945–53. http://dx.doi.org/10.1609/aaai.v36i11.21452.

Abstract:
Being able to explain the prediction to clinical end-users is a necessity to leverage the power of artificial intelligence (AI) models for clinical decision support. For medical images, a feature attribution map, or heatmap, is the most common form of explanation that highlights important features for AI models' prediction. However, it is unknown how well heatmaps perform on explaining decisions on multi-modal medical images, where each image modality or channel visualizes distinct clinical information of the same underlying biomedical phenomenon. Understanding such modality-dependent features
30

Gottipati, Srinivas Babu, and Gowri Thumbur. "Multi-modal fusion deep transfer learning for accurate brain tumor classification using magnetic resonance imaging images." Indonesian Journal of Electrical Engineering and Computer Science 34, no. 2 (2024): 825–34. http://dx.doi.org/10.11591/ijeecs.v34.i2.pp825-834.

Abstract:
Early identification and treatment of brain tumors depend critically on accurate classification. Accurate brain tumor classification in medical imaging is essential for clinical decisions and individualized treatment plans. This paper introduces a novel method for classifying brain tumors called multimodal fusion deep transfer learning (MMFDTL) using original, contoured, and annotated magnetic resonance imaging (MRI) images to showcase its capabilities. The MMFDTL can capture complex tumor features frequently missed in analyzing individual modalities. The MMFDTL model employs three deep learni
32

Keni, Shivank. "Evaluating artificial intelligence for medical imaging: a primer for clinicians." British Journal of Hospital Medicine 85, no. 7 (2024): 1–13. http://dx.doi.org/10.12968/hmed.2024.0312.

Abstract:
Artificial intelligence has the potential to transform medical imaging. The effective integration of artificial intelligence into clinical practice requires a robust understanding of its capabilities and limitations. This paper begins with an overview of key clinical use cases such as detection, classification, segmentation and radiomics. It highlights foundational concepts in machine learning such as learning types and strategies, as well as the training and evaluation process. We provide a broad theoretical framework for assessing the clinical effectiveness of medical imaging artificial inte
33

Chen, Shaobo, Ning Xiao, Xinlai Shi, et al. "ColorMedGAN: A Semantic Colorization Framework for Medical Images." Applied Sciences 13, no. 5 (2023): 3168. http://dx.doi.org/10.3390/app13053168.

Abstract:
Colorization for medical images helps make medical visualizations more engaging, provides better visualization in 3D reconstruction, acts as an image enhancement technique for tasks such as segmentation, and makes it easier for non-specialists to perceive tissue changes and texture details in medical images in diagnosis and teaching. However, colorization algorithms have been hindered by limited semantic understanding. In addition, current colorization methods still rely on paired data, which is often not available for specific fields such as medical imaging. To address the texture detail of m
34

Jyoti, Jain, Vashist Shrey, and Manjhi Diwash. "A Comprehensive Review of Medical Image Fusion Algorithms." Research and Applications: Emerging Technologies 7, no. 1 (2025): 28–45. https://doi.org/10.5281/zenodo.14642943.

Abstract:
The challenge of manual design can be overcome by deep learning models, which can automatically extract the most useful elements from data. Introducing a deep learning model to the picture fusion field is the aim of this paper. Using supervised deep learning, it aims to create a novel concept for picture fusion. Pattern recognition and image processing are two fields where deep learning technology has been thoroughly investigated. The characteristics of multi-modal medical images, medical diagnostic technology, and practical implementation will be taken into consideration when proposing a
35

Patel, Shrina, and Ashwin Makwana. "MixGANMed: A Novel Hybrid Generative Framework for Multi-Modal Medical Imaging Synthesis." International Journal of Basic and Applied Sciences 14, no. 3 (2025): 277–85. https://doi.org/10.14419/78n87617.

Abstract:
This research introduces MixGANMed, a unique hybrid generative adversarial network that synthesizes images of both grayscale and RGB types in medical images. Combining methods from DC-GAN, Conditional-GAN, and SR-GAN allows the architecture to improve areas of stability, guided by labels and quality for humans. Evaluations were carried out across several datasets, for example, Pneumonia X-ray, Diabetic Retinopathy, Brain Tumor MRI, Leukemia using WBC microscopy images, and Skin Cancer observed with Dermoscopy. While ordinary GAN models needed more epochs to show results and performed poorl
36

Zhou, Yong, Xi Zhang, Shuyi Liu, Zhouyi He, Weili Tian, and Shuping You. "Effectiveness of Multi-Modal Teaching Based on Online Case Libraries in the Education of Gene Methylation Combined with Spiral CT Screening for Pulmonary Ground-Glass Opacity Nodules." Proceedings of Anticancer Research 9, no. 1 (2025): 21–26. https://doi.org/10.26689/par.v9i1.9455.

Abstract:
Objective: To explore the effectiveness of multi-modal teaching based on an online case library in the education of gene methylation combined with spiral computed tomography (CT) screening for pulmonary ground-glass opacity (GGO) nodules. Methods: From October 2023 to April 2024, 66 medical imaging students were selected and randomly divided into a control group and an observation group, each with 33 students. The control group received traditional lecture-based teaching, while the observation group was taught using a multi-modal teaching approach based on an online case library. Performance o
37

Santosh Kumar. "Optimized Multi-Modal Healthcare Data Integration: Harnessing HPC and GPU-Accelerated CNNs for Enhanced CDSS." Journal of Information Systems Engineering and Management 10, no. 22s (2025): 766–81. https://doi.org/10.52783/jisem.v10i22s.3619.

Abstract:
The integration of multi-modal healthcare data is critical for enhancing clinical decision support systems (CDSS) by leveraging diverse data sources, including electronic health records (EHRs), medical imaging, and wearable sensor data. However, traditional machine learning models struggle to efficiently process and analyze such heterogeneous datasets due to their complexity, high dimensionality, and interoperability challenges. To address these limitations, we propose the Automated Multi-Modal Data Integration (AMMI-CDSS) framework, a High-performance c
APA, Harvard, Vancouver, ISO, and other styles
38

Wardle, Grant, and Teo Sušnjak. "Image First or Text First? Optimising the Sequencing of Modalities in Large Language Model Prompting and Reasoning Tasks." Big Data and Cognitive Computing 9, no. 6 (2025): 149. https://doi.org/10.3390/bdcc9060149.

Full text
Abstract:
Our study investigates how the sequencing of text and image inputs within multi-modal prompts affects the reasoning performance of Large Language Models (LLMs). Through empirical evaluations of three major commercial LLM vendors—OpenAI, Google, and Anthropic—alongside a user study on interaction strategies, we develop and validate practical heuristics for optimising multi-modal prompt design. Our findings reveal that modality sequencing is a critical factor influencing reasoning performance, particularly in tasks with varying cognitive load and structural complexity. For simpler tasks involvin
APA, Harvard, Vancouver, ISO, and other styles
39

Nair, Arjun. "Federated Learning for Multi-Modal Health Data Integration: Enhancing Diagnostic Accuracy and Ensuring Data Privacy." International Journal for Research in Applied Science and Engineering Technology 13, no. 2 (2025): 396–403. https://doi.org/10.22214/ijraset.2025.66865.

Full text
Abstract:
The healthcare sector is experiencing a surge in diverse health data, encompassing medical imaging, electronic health records and live sensor readings from wearable technology. Integrating these multi-modal datasets holds immense potential for improving medical care by facilitating better diagnostic accuracy, customized therapeutic approaches, and more comprehensive understanding of how diseases evolve. However, centralizing this sensitive patient data across various institutions raises significant privacy concerns and raises complex issues around data stewardship and administrative oversight.
APA, Harvard, Vancouver, ISO, and other styles
40

NIE, Xin. "Multi-Modal Image Fusion for Medical Diagnosis: Combining MRI And CT Using Deep Generative Models." Clinical Medicine And Health Research Journal 5, no. 03 (2025): 1313–27. https://doi.org/10.18535/cmhrj.v5i03.486.

Full text
Abstract:
Medical imaging plays a pivotal role in early disease detection, surgical planning, and post-treatment monitoring. Among various imaging modalities, Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) are widely used due to their complementary strengths: MRI offers superior soft tissue contrast, while CT excels in visualizing dense structures like bones. However, relying on a single modality often limits diagnostic precision, especially in complex clinical cases such as brain tumors, stroke evaluation, and orthopedic planning. Consequently, fusing information from both MRI and CT imag
APA, Harvard, Vancouver, ISO, and other styles
41

Imambi, S. Sagar, and Santosh Kumar. "Advanced Framework for Multi-Modal Healthcare Data Integration: Leveraging HPC with GPU Computing and CNN Architecture in CDSS." Journal of Electrical Systems 20, no. 1s (2024): 1061–74. http://dx.doi.org/10.52783/jes.874.

Full text
Abstract:
In this study, we examine the challenges involved in integrating multi-modal healthcare data into clinical decision support systems (CDSS). We propose the Automated Multi-Modal Data Integration (AMMI-CDSS) algorithm, which utilizes the latest high-performance computing (HPC) techniques, such as the Convolutional Neural Network (CNN) architecture and Graphics Processing Unit (GPU) computing, to provide precise and rapid analysis. Features are extracted, multi-modal data is merged, data is prepared, and algorithms are developed in a distributed computing enviro
APA, Harvard, Vancouver, ISO, and other styles
42

KUMAR, N. NAGARAJA, T. JAYACHANDRA PRASAD, and K. SATYA PRASAD. "OPTIMIZED DUAL-TREE COMPLEX WAVELET TRANSFORM AND FUZZY ENTROPY FOR MULTI-MODAL MEDICAL IMAGE FUSION: A HYBRID META-HEURISTIC CONCEPT." Journal of Mechanics in Medicine and Biology 21, no. 03 (2021): 2150024. http://dx.doi.org/10.1142/s021951942150024x.

Full text
Abstract:
In recent times, multi-modal medical image fusion has emerged as an important medical application tool. An important goal is to fuse the multi-modal medical images from diverse imaging modalities into a single fused image. The physicians broadly utilize this for precise identification and treatment of diseases. This medical image fusion approach will help the physician perform the combined diagnosis, interventional treatment, pre-operative planning, and intra-operative guidance in various medical applications by developing the corresponding information from clinical images through different mo
APA, Harvard, Vancouver, ISO, and other styles
43

Morris, Robert H., Nicasio R. Geraldi, Johanna L. Stafford, et al. "Woven Natural Fibre Reinforced Composite Materials for Medical Imaging." Materials 13, no. 7 (2020): 1684. http://dx.doi.org/10.3390/ma13071684.

Full text
Abstract:
Repeatable patient positioning is key to minimising the burden on planning radiotherapy treatment. There are very few materials commercially available which are suitable for use in all common imaging and treatment modalities such as magnetic resonance imaging (MRI), X-Ray computed tomography (CT) and radiotherapy. In this article, we present several such materials based on woven natural fibres embedded in a range of different resin materials which are suitable for such applications. By investigating a range of resins and natural fibre materials in combination and evaluating their performance i
APA, Harvard, Vancouver, ISO, and other styles
44

Maqsood, Sarmad, Robertas Damaševičius, and Rytis Maskeliūnas. "Multi-Modal Brain Tumor Detection Using Deep Neural Network and Multiclass SVM." Medicina 58, no. 8 (2022): 1090. http://dx.doi.org/10.3390/medicina58081090.

Full text
Abstract:
Background and Objectives: Clinical diagnosis has become very significant in today’s health system. The most serious disease and the leading cause of mortality globally is brain cancer which is a key research topic in the field of medical imaging. The examination and prognosis of brain tumors can be improved by an early and precise diagnosis based on magnetic resonance imaging. For computer-aided diagnosis methods to assist radiologists in the proper detection of brain tumors, medical imagery must be detected, segmented, and classified. Manual brain tumor detection is a monotonous and error-pr
APA, Harvard, Vancouver, ISO, and other styles
45

Althenayan, Albatoul S., Shada A. AlSalamah, Sherin Aly, et al. "COVID-19 Hierarchical Classification Using a Deep Learning Multi-Modal." Sensors 24, no. 8 (2024): 2641. http://dx.doi.org/10.3390/s24082641.

Full text
Abstract:
Coronavirus disease 2019 (COVID-19), originating in China, has rapidly spread worldwide. Physicians must examine infected patients and make timely decisions to isolate them. However, completing these processes is difficult due to limited time and availability of expert radiologists, as well as limitations of the reverse-transcription polymerase chain reaction (RT-PCR) method. Deep learning, a sophisticated machine learning technique, leverages radiological imaging modalities for disease diagnosis and image classification tasks. Previous research on COVID-19 classification has encountered sever
APA, Harvard, Vancouver, ISO, and other styles
46

Sultana, Nigar, Shariar Islam Saimon, Intiser Islam, et al. "Artificial Intelligence in Multi-Disease Medical Diagnostics: An Integrative Approach." Journal of Computer Science and Technology Studies 7, no. 1 (2025): 157–75. https://doi.org/10.32996/jcsts.2025.7.1.12.

Full text
Abstract:
With advanced algorithms, artificial intelligence (AI) has revolutionized the medical diagnostic field, where multiple diseases can be predicted simultaneously. The integrative nature of this approach is novel because it can better encompass the complexity of comorbid conditions that are so common in patients, addressing them in a more holistic diagnostic manner that is lacking in previous works. This study investigates the use of AI models for simultaneously diagnosing diseases such as diabetes, cardiovascular conditions, and neurological disorders. Therefore, based on AI techniq
APA, Harvard, Vancouver, ISO, and other styles
47

A.V. Krishnarao P. "Advancements in Disease Detection and Volume Reduction: A Review on Medical Imaging and Healthcare Innovations." Journal of Information Systems Engineering and Management 10, no. 19s (2025): 10–15. https://doi.org/10.52783/jisem.v10i19s.2969.

Full text
Abstract:
Medical imaging remains a cornerstone of modern healthcare, essential for accurate disease detection and optimized treatment planning. This review examines advanced imaging technologies such as X-ray, CT, MRI, and ultrasound, alongside emerging methodologies incorporating machine learning (ML) and artificial intelligence (AI). Techniques for disease detection focus on identifying abnormalities, lesions, or pathological transformations, while strategies for volumetric reduction address minimizing affected tissues or organs. The integration of these approaches facilitates timely interventions an
APA, Harvard, Vancouver, ISO, and other styles
48

Choudhary, Anirudh, Li Tong, Yuanda Zhu, and May D. Wang. "Advancing Medical Imaging Informatics by Deep Learning-Based Domain Adaptation." Yearbook of Medical Informatics 29, no. 01 (2020): 129–38. http://dx.doi.org/10.1055/s-0040-1702009.

Full text
Abstract:
Introduction: There has been a rapid development of deep learning (DL) models for medical imaging. However, DL requires a large labeled dataset for training the models. Getting large-scale labeled data remains a challenge, and multi-center datasets suffer from heterogeneity due to patient diversity and varying imaging protocols. Domain adaptation (DA) has been developed to transfer the knowledge from a labeled data domain to a related but unlabeled domain in either image space or feature space. DA is a type of transfer learning (TL) that can improve the performance of models when applied to mu
APA, Harvard, Vancouver, ISO, and other styles
49

Sharma, Manoj Kumar, M. Shamim Kaiser, and Kanad Ray. "Deep convolutional neural network framework with multi-modal fusion for Alzheimer’s detection." International Journal of Reconfigurable and Embedded Systems (IJRES) 13, no. 1 (2024): 179. http://dx.doi.org/10.11591/ijres.v13.i1.pp179-191.

Full text
Abstract:
The biomedical profession has gained importance due to the rapid and accurate diagnosis of clinical patients using computer-aided diagnosis (CAD) tools. The diagnosis and treatment of Alzheimer’s disease (AD) using complementary multimodalities can improve the quality of life and mental state of patients. In this study, we integrated a lightweight custom convolutional neural network (CNN) model and nature-inspired optimization techniques to enhance the performance, robustness, and stability of progress detection in AD. A multi-modal fusion database approach was implemented, including positron
APA, Harvard, Vancouver, ISO, and other styles
50

Kucharski, A., S. Ma, S. Rudra, et al. "Evaluation of a Multi-Modal Radiation Oncology Elective for First-Year Medical Students." International Journal of Radiation Oncology*Biology*Physics 105, no. 1 (2019): E149. http://dx.doi.org/10.1016/j.ijrobp.2019.06.2205.

Full text
APA, Harvard, Vancouver, ISO, and other styles