
Journal articles on the topic 'Image fusion methods'


Consult the top 50 journal articles for your research on the topic 'Image fusion methods.'


1

Patil, Varsha, Deepali Sale, and M. A. Joshi. "Image Fusion Methods and Quality Assessment Parameters." Asian Journal of Engineering and Applied Technology 2, no. 1 (2013): 40–45. http://dx.doi.org/10.51983/ajeat-2013.2.1.643.

Abstract:
Image processing techniques primarily focus on enhancing the quality of an image or a set of images and on deriving the maximum information from them. Image fusion is one such technique, producing a superior-quality image from a set of available images: the process of combining relevant information from two or more images into a single image, such that the resulting image is more informative and complete than any of the input images. A great deal of research is being done in this field, encompassing computer vision, automatic object detection, image processing, parallel and distributed …
2

Sun, Changqi, Cong Zhang, and Naixue Xiong. "Infrared and Visible Image Fusion Techniques Based on Deep Learning: A Review." Electronics 9, no. 12 (2020): 2162. http://dx.doi.org/10.3390/electronics9122162.

Abstract:
Infrared and visible image fusion technologies make full use of the different image features obtained by different sensors, retain complementary information from the source images during the fusion process, and use redundant information to improve the credibility of the fused image. In recent years, many researchers have used deep learning (DL) methods to explore the field of image fusion and found that applying DL improves both the runtime efficiency of the models and the fusion effect. However, DL includes many branches, and there is currently no detailed investigation of deep learning methods …
3

Zhou, Youyong, Lingjie Yu, Chao Zhi, et al. "A Survey of Multi-Focus Image Fusion Methods." Applied Sciences 12, no. 12 (2022): 6281. http://dx.doi.org/10.3390/app12126281.

Abstract:
As an important branch in the field of image fusion, the multi-focus image fusion technique can effectively solve the problem of optical lens depth of field, fusing two or more partially focused images into a fully focused image. In this paper, the methods based on boundary segmentation are put forward as a group of image fusion methods. Thus, a novel classification of image fusion algorithms is proposed: transform domain methods, boundary segmentation methods, deep learning methods, and combination fusion methods. In addition, the subjective and objective evaluation standards are listed …
4

Elaiyaraja, K., and M. Senthil Kumar. "Fusion Imaging in Pixel Level Image Processing Technique – A Literature Review." International Journal of Engineering & Technology 7, no. 3.12 (2018): 175. http://dx.doi.org/10.14419/ijet.v7i3.12.15913.

Abstract:
Image processing is an art of obtaining an enriched image or retrieving information from it. These image processing methods are also used in the medical field. Numerous modalities, such as Magnetic Resonance Imaging (MRI), Positron Emission Tomography (PET), and Computed Tomography (CT), are used to analyze and diagnose diseases. Pixel-level image fusion combines several images collected from various inputs and gives more information than any of the inputs alone; it plays a vital role in medical imaging. In this paper, pixel-level image fusion methods are surveyed …
5

Yahanda, Alexander T., Timothy J. Goble, Peter T. Sylvester, et al. "Impact of 3-Dimensional Versus 2-Dimensional Image Distortion Correction on Stereotactic Neurosurgical Navigation Image Fusion Reliability for Images Acquired With Intraoperative Magnetic Resonance Imaging." Operative Neurosurgery 19, no. 5 (2020): 599–607. http://dx.doi.org/10.1093/ons/opaa152.

Abstract:
BACKGROUND Fusion of preoperative and intraoperative magnetic resonance imaging (iMRI) studies during stereotactic navigation may be very useful for procedures such as tumor resections but can be subject to error because of image distortion. OBJECTIVE To assess the impact of 3-dimensional (3D) vs 2-dimensional (2D) image distortion correction on the accuracy of auto-merge image fusion for stereotactic neurosurgical images acquired with iMRI, using a head phantom in different surgical positions. METHODS T1-weighted intraoperative images of the head phantom were obtained using 1.5T iMRI. …
6

Al-Nima, Raid Rafi Omar, Moatasem Yaseen Al-Ridha, and Farqad Hamid Abdulraheem. "Regenerating face images from multi-spectral palm images using multiple fusion methods." TELKOMNIKA Telecommunication, Computing, Electronics and Control 17, no. 6 (2019): 3110–19. https://doi.org/10.12928/TELKOMNIKA.v17i6.12857.

Abstract:
This paper establishes a relationship between multi-spectral palm images and a face image based on multiple fusion methods. The first fusion method considered is feature extraction across different multi-spectral palm images, where the multi-spectral CASIA database was used. The second fusion method considered is score fusion between two parts of an output face image. Our method suggests that both right and left hands are used, and that each hand produces a significant part of a face image by using a Multi-Layer Perceptron (MLP) network. This leads to the second fusion …
7

Luo, Yongyu, and Zhongqiang Luo. "Infrared and Visible Image Fusion: Methods, Datasets, Applications, and Prospects." Applied Sciences 13, no. 19 (2023): 10891. http://dx.doi.org/10.3390/app131910891.

Abstract:
Infrared and visible light image fusion combines infrared and visible light images by extracting the main information from each and fusing it into a single, more comprehensive image with features from both. Infrared and visible image fusion has gained popularity in recent years and is increasingly employed in sectors such as target recognition and tracking, night vision, and scene segmentation. To provide a concise overview of infrared and visible image fusion, this paper first explores its historical context before outlining current domestic …
8

Masood, Saleha, Muhammad Sharif, Mussarat Yasmin, Muhammad Alyas Shahid, and Amjad Rehman. "Image Fusion Methods: A Survey." Journal of Engineering Science and Technology Review 10, no. 6 (2017): 186–95. http://dx.doi.org/10.25103/jestr.106.24.

9

Ben-Shoshan, Yotam, and Yitzhak Yitzhaky. "Improvements of image fusion methods." Journal of Electronic Imaging 23, no. 2 (2014): 023021. http://dx.doi.org/10.1117/1.jei.23.2.023021.

10

Dhakad, Basant, and Vivek Shrivastava. "Performance Improvement of Multi Image Fusion in Wavelet Domain for Medical Images." COMPUSOFT: An International Journal of Advanced Computer Technology 02, no. 04 (2013): 103–7. https://doi.org/10.5281/zenodo.14594715.

Abstract:
Today, image fusion, as a kind of information-integration technology, plays an important role in many fields. Most previous image fusion methods aim at obtaining as much information as possible from the different images. In this paper, however, the fusion criterion is to minimize the error between the fused image and the input images. This paper presents the use of image fusion for medical images. Multi-sensor image fusion is the process of combining information from two or more images into a single image. The resulting image contains all the information of the input images and contains more …
11

Giansiracusa, Mike, Larry Pearlstein, Tyler Daws, Soundararajan Ezekiel, and Abdullah Ali Alshehri. "A Comparative Study of Multi-Focus, Multi-Resolution Image Fusion Transforms and Methods." COMPUSOFT: An International Journal of Advanced Computer Technology 08, no. 09 (2019): 3374–87. https://doi.org/10.5281/zenodo.14911931.

Abstract:
Multi-resolution image decomposition transforms are a popular approach to current image processing problems such as image fusion, noise reduction, and deblurring. Over the past few decades, new algorithms have been developed based on the wavelet transform to remedy its directional and shift-invariance shortcomings (the undecimated discrete wavelet transform is shift invariant). This study provides a comprehensive analysis of multi-focus image fusion techniques using six different multi-resolution decomposition transforms to determine the optimal transform for an image fusion application. The transforms …
12

Kulkarni, Jyoti S. "Genetic Algorithm Approach for Image Fusion: A Simple Method and Block Method." International Journal of Innovative Technology and Exploring Engineering (IJITEE) 11, no. 6 (2022): 16–21. https://doi.org/10.35940/ijitee.F9895.0511622.

Abstract:
The sensors available nowadays do not generate images of all objects in a scene with the same clarity at various distances. Progress in sensor technology has improved the quality of images over recent years; however, the data generated by a single image is limited. Image fusion is used to merge information from multiple input images. Image fusion methods are classified by the image acquisition technique as well as by the level of processing, and many techniques are available under each. Several input image acquisition techniques exist, such as multisensor, …
13

Cha, Dong Ik, Min Woo Lee, Ah Yeong Kim, et al. "Automatic image fusion of real-time ultrasound with computed tomography images: a prospective comparison between two auto-registration methods." Acta Radiologica 58, no. 11 (2017): 1349–57. http://dx.doi.org/10.1177/0284185117693459.

Abstract:
Background A major drawback of conventional manual image fusion is that the process may be complex, especially for less-experienced operators. Recently, two automatic image fusion techniques, called Positioning and Sweeping auto-registration, have been developed. Purpose To compare the accuracy and required time for image fusion of real-time ultrasonography (US) and computed tomography (CT) images between Positioning and Sweeping auto-registration. Material and Methods Eighteen consecutive patients referred for planning US for radiofrequency ablation or biopsy of focal hepatic lesions were enrolled …
14

Li, Kedong, Bo Cheng, Xiaoming Li, et al. "An Image Fusion Algorithm for Sustainable Development Goals Satellite-1 Night-Time Light Images Based on Optimized Image Stretching and Dual-Domain Fusion." Remote Sensing 16, no. 22 (2024): 4298. http://dx.doi.org/10.3390/rs16224298.

Abstract:
The Glimmer Imager of Urbanization (GIU) on SDGSAT-1 provides high-resolution, global-coverage images of night-time lights (NLs) with 10 m panchromatic (PAN) and 40 m multispectral (MS) imagery. High-resolution 10 m MS NL images obtained by ideal fusion can be used to better study subtle manifestations of human activities. Most existing remote sensing image-fusion methods are designed for daytime optical remote sensing images and do not apply to the lossless compressed images of the GIU. To address this limitation, we propose a novel approach for 10 m NL data fusion, namely, a GIU NL image …
15

Monsalve-Tellez, Jose Manuel, Yeison Alberto Garcés-Gómez, and Jorge Luis Torres-León. "Evaluation of optical and synthetic aperture radar image fusion methods: a case study applied to Sentinel imagery." International Journal of Electrical and Computer Engineering (IJECE) 13, no. 3 (2023): 2778–87. https://doi.org/10.11591/ijece.v13i3.pp2778-2787.

Abstract:
This paper evaluates different optical and synthetic aperture radar (SAR) image fusion methods applied to open-access Sentinel images with global coverage. The objective of this research was to evaluate the potential of image fusion methods to obtain a greater visual difference in land cover, especially in oil palm crops within natural forest areas that are difficult to differentiate visually. The image fusion methods Brovey (BR), high-frequency modulation (HFM), Gram-Schmidt (GS), and principal components (PC) were evaluated on Sentinel-2 optical and Sentinel-1 SAR images using …
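Of the four methods this abstract names, the Brovey transform has the simplest closed form: each multispectral band is modulated by the ratio of the panchromatic band to the mean of the MS bands. A minimal NumPy sketch of the general technique (illustrative only, not the paper's implementation; the function name and epsilon guard are my own):

```python
import numpy as np

def brovey_fusion(ms, pan, eps=1e-6):
    """Brovey-transform fusion sketch.

    ms  : (H, W, B) multispectral image, float
    pan : (H, W) panchromatic band, already resampled to (H, W)
    """
    intensity = ms.mean(axis=2) + eps   # per-pixel MS intensity (eps avoids /0)
    gain = pan / intensity              # injection ratio from the PAN band
    return ms * gain[..., None]         # modulate every band by the ratio

# Toy example: when the MS intensity already equals PAN, fusion is a no-op.
ms = np.full((4, 4, 3), 2.0)
pan = np.full((4, 4), 2.0)
fused = brovey_fusion(ms, pan)
```

Because the gain is a per-pixel scalar, the fused bands keep the spectral ratios of the original MS image, which is the defining property of the Brovey method.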
16

Kumaraswamy, Suda, Dammavalam Srinivasa Rao, and Nuthanapati Naveen Kumar. "Satellite image fusion using fuzzy logic." Acta Universitatis Sapientiae, Informatica 8, no. 2 (2016): 241–53. http://dx.doi.org/10.1515/ausi-2016-0011.

Abstract:
Image fusion is a method of combining multispectral (MS) and panchromatic (PAN) images into one image that contains more information than any of the inputs. The aim of image fusion is to reduce uncertainty and redundant data in the fused output image while enhancing the necessary information. Fused images are helpful in various applications such as remote sensing, computer vision, biometrics, change detection, image analysis, and image classification. Conventional fusion methods have some side effects, such as distortion of spatial information, and uncertain color information is usually …
17

Li, Jinjin, Jiacheng Zhang, Chao Yang, Huiyu Liu, Yangang Zhao, and Yuanxin Ye. "Comparative Analysis of Pixel-Level Fusion Algorithms and a New High-Resolution Dataset for SAR and Optical Image Fusion." Remote Sensing 15, no. 23 (2023): 5514. http://dx.doi.org/10.3390/rs15235514.

Abstract:
Synthetic aperture radar (SAR) and optical images often present different geometric structures and texture features for the same ground object. Fusion of SAR and optical images can effectively integrate their complementary information, better meeting the requirements of remote sensing applications such as target recognition, classification, and change detection, and realizing the collaborative utilization of multi-modal images. In order to select appropriate methods to achieve high-quality fusion of SAR and optical images, this paper conducts a systematic review of current …
18

Li, Ming Jing, Yu Bing Dong, and Xiao Li Wang. "Research and Development of Non Multi-Scale to Pixel-Level Image Fusion." Applied Mechanics and Materials 448-453 (October 2013): 3621–24. http://dx.doi.org/10.4028/www.scientific.net/amm.448-453.3621.

Abstract:
Non-multi-scale image fusion methods take the original images as the object of study, applying various fusion rules directly without decomposing or transforming the source images; they can therefore also be called simple multi-sensor image fusion methods. Their advantages are low computational complexity and a simple principle, and they are currently the most widely used image fusion methods. The basic principle is to select the larger gray value, the smaller gray value, or a weighted average among corresponding pixels of the source images, fusing them into a new …
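The three pixel-level rules this abstract describes — select the larger gray value, select the smaller gray value, or take a weighted average — map directly onto element-wise NumPy operations. A minimal sketch (illustrative only; the function names are my own, not the paper's):

```python
import numpy as np

def fuse_max(a, b):
    """Select the larger gray value at each pixel."""
    return np.maximum(a, b)

def fuse_min(a, b):
    """Select the smaller gray value at each pixel."""
    return np.minimum(a, b)

def fuse_weighted(a, b, w=0.5):
    """Per-pixel weighted average of the two source images."""
    return w * a + (1.0 - w) * b

# Two toy 2x2 "source images" with differing gray values.
a = np.array([[10.0, 200.0], [50.0, 120.0]])
b = np.array([[30.0, 100.0], [40.0, 180.0]])
```

As the abstract notes, these rules touch each pixel once with no decomposition step, which is why their computational complexity is so low compared with transform-domain methods.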
19

Yang, Zhiguang, and Shan Zeng. "TPFusion: Texture Preserving Fusion of Infrared and Visible Images via Dense Networks." Entropy 24, no. 2 (2022): 294. http://dx.doi.org/10.3390/e24020294.

Abstract:
In this paper, we design an infrared (IR) and visible (VIS) image fusion method via unsupervised dense networks, termed TPFusion. Activity level measurements and fusion rules are indispensable parts of conventional image fusion methods, but designing an appropriate fusion process is time-consuming and complicated. In recent years, deep learning-based methods have been proposed to handle this problem. However, for multi-modality image fusion, the same network cannot extract effective feature maps from source images obtained by different image sensors. In TPFusion, we can avoid this issue …
20

Yadav, Satya Prakash, and Sachin Yadav. "Image fusion using hybrid methods in multimodality medical images." Medical & Biological Engineering & Computing 58, no. 4 (2020): 669–87. http://dx.doi.org/10.1007/s11517-020-02136-6.

21

Kim, Hong-Gi, Jung-Min Seo, and Soo Mee Kim. "Comparison of GAN Deep Learning Methods for Underwater Optical Image Enhancement." Journal of Ocean Engineering and Technology 36, no. 1 (2022): 32–40. http://dx.doi.org/10.26748/ksoe.2021.095.

Abstract:
Underwater optical images face various limitations that degrade image quality compared with optical images taken in the atmosphere. Attenuation according to the wavelength of light and reflection by very small floating objects cause low contrast, blurry clarity, and color degradation in underwater images. We constructed an image dataset of the Korean sea and enhanced it by learning the characteristics of underwater images using the deep learning techniques CycleGAN (cycle-consistent adversarial network), UGAN (underwater GAN), and FUnIE-GAN (fast underwater image enhancement GAN). In addition, …
22

Huang, Bing, Feng Yang, Mengxiao Yin, Xiaoying Mo, and Cheng Zhong. "A Review of Multimodal Medical Image Fusion Techniques." Computational and Mathematical Methods in Medicine 2020 (April 23, 2020): 1–16. http://dx.doi.org/10.1155/2020/8279342.

Abstract:
Medical image fusion is the process of coalescing multiple images from multiple imaging modalities to obtain a fused image with a large amount of information, increasing the clinical applicability of medical images. In this paper, we give an overview of multimodal medical image fusion methods, putting emphasis on the most recent advances in the domain based on (1) current fusion methods, including those based on deep learning, (2) imaging modalities of medical image fusion, and (3) performance analysis of medical image fusion on the main data sets. Finally, the conclusion of this …
23

Liang, Lei, and Zhisheng Gao. "SharDif: Sharing and Differential Learning for Image Fusion." Entropy 26, no. 1 (2024): 57. http://dx.doi.org/10.3390/e26010057.

Abstract:
Image fusion is the generation of an informative image that contains complementary information from the original sensor images, such as texture details and attentional targets. Existing methods have designed a variety of feature extraction algorithms and fusion strategies to achieve image fusion; however, they ignore the extraction of common features in the original multi-source images. The point of view proposed in this paper is that image fusion should retain, as much as possible, both the useful shared features and the complementary differential features of the original multi-source images. …
24

Garcia, J. A., Rosa Rodriguez-Sánchez, J. Fdez-Valdivia, and Alexander Toet. "Visual efficiency of image fusion methods." International Journal of Image and Data Fusion 3, no. 1 (2012): 39–69. http://dx.doi.org/10.1080/19479832.2011.592859.

25

Zhang, Yongxin, Deguang Li, and WenPeng Zhu. "Infrared and Visible Image Fusion with Hybrid Image Filtering." Mathematical Problems in Engineering 2020 (July 29, 2020): 1–17. http://dx.doi.org/10.1155/2020/1757214.

Abstract:
Image fusion is an important technique aiming to generate a composite image from multiple images of the same scene. Infrared and visible images provide information about the same scene from different aspects, which is useful for target recognition, but existing fusion methods cannot preserve thermal radiation and appearance information simultaneously well. Thus, we propose an infrared and visible image fusion method based on hybrid image filtering, representing the fusion problem with a divide-and-conquer strategy. A Gaussian filter is used to decompose the source images into base layers and detail layers …
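The base/detail decomposition this abstract describes can be sketched in a few lines: blur each source to get its base layer, subtract to get its detail layer, then fuse the two layer types with different rules. This is a generic sketch of the technique, not the paper's method: a box blur stands in for its Gaussian filter, and the averaging and max-absolute rules are common defaults I have assumed.

```python
import numpy as np

def box_blur(img, k=3):
    """Simple box blur standing in for a Gaussian low-pass filter."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def fuse_base_detail(ir, vis):
    """Decompose each source into base + detail, then fuse layer-wise."""
    base_ir, base_vis = box_blur(ir), box_blur(vis)
    detail_ir, detail_vis = ir - base_ir, vis - base_vis
    fused_base = 0.5 * (base_ir + base_vis)            # average the base layers
    fused_detail = np.where(np.abs(detail_ir) >= np.abs(detail_vis),
                            detail_ir, detail_vis)     # keep the stronger detail
    return fused_base + fused_detail

ir = np.full((8, 8), 1.0)    # flat stand-in for a thermal image
vis = np.full((8, 8), 3.0)   # flat stand-in for a visible image
fused = fuse_base_detail(ir, vis)
```

The split lets the slowly varying intensity (thermal radiation) and the high-frequency texture (appearance) be weighted independently, which is the motivation the abstract gives for filtering-based fusion.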
26

Saleh, Mohammed Ali, AbdElmgeid A. Ali, Kareem Ahmed, and Abeer M. Sarhan. "A Brief Analysis of Multimodal Medical Image Fusion Techniques." Electronics 12, no. 1 (2022): 97. http://dx.doi.org/10.3390/electronics12010097.

Abstract:
Recently, image fusion has become one of the most promising fields in image processing, since it plays an essential role in different applications such as medical diagnosis and clarification of medical images. Multimodal Medical Image Fusion (MMIF) enhances the quality of medical images by combining two or more medical images from different modalities to obtain an improved fused image that is clearer than the originals. Choosing the best MMIF technique, which produces the best quality, is one of the important problems in the assessment of image fusion techniques. In this paper, a complete survey …
27

Gopatoti, Anandbabu. "Approximation Layer based Weighted Average Image Fusion using Guided Filter for Medical Images." International Journal on Emerging Technologies 11, no. 3 (2020): 122–26. https://doi.org/10.5281/zenodo.11030289.

Abstract:
In this paper, we develop approximation-layer-based weighted average image fusion using a guided filter for medical images. The proposed algorithm is efficient and requires little computational time. Medical image fusion is a technique for clinical imaging analysis that is rapidly emerging as a research area, helping to identify abnormalities. Medical imaging provides visual images of a targeted organ or tissue inside the body, from which we collect all the necessary or complementary information based on the application. All the required information …
28

Sabre, Rachid, and Ias Wahyuni. "Laplacian Pyramid and Dempster-Shafer with Alpha Stable Distance in Multi-Focus Image Fusion." Signal & Image Processing : An International Journal 16, no. 1 (2025): 1–15. https://doi.org/10.5121/sipij.2025.16101.

Abstract:
Multi-focus image fusion occupies an important place in image processing research. From several images of the same scene with different blurred regions, it produces a fused image without blur. This allows fusing photos taken by drones at different heights, each zooming in on a different object. Several methods have been developed in the literature, but independently of the nature of the images. The aim of our work is to propose a method adapted to images with significant fluctuations (very large variance), considered as an alpha-stable signal. For these images, we propose …
29

Monsalve-Tellez, Jose Manuel, Yeison Alberto Garcés-Gómez, and Jorge Luís Torres-León. "Evaluation of optical and synthetic aperture radar image fusion methods: a case study applied to Sentinel imagery." International Journal of Electrical and Computer Engineering (IJECE) 13, no. 3 (2023): 2778. http://dx.doi.org/10.11591/ijece.v13i3.pp2778-2787.

Abstract:
This paper evaluates different optical and synthetic aperture radar (SAR) image fusion methods applied to open-access Sentinel images with global coverage. The objective of this research was to evaluate the potential of image fusion methods to obtain a greater visual difference in land cover, especially in oil palm crops within natural forest areas that are difficult to differentiate visually. The image fusion methods Brovey (BR), high-frequency modulation (HFM), Gram-Schmidt (GS), and principal components (PC) were evaluated on Sentinel-2 optical and Sentinel-1 SAR images …
30

Prathipa, R., and R. Ramadevi. "Medical Diagnosis with Multimodal Image Fusion Techniques." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 8s (2023): 108–21. http://dx.doi.org/10.17762/ijritcc.v11i8s.7180.

Abstract:
Image fusion is an effective approach used to draw out all the significant information from the source images, supporting experts in evaluation and quick decision making. Multimodal medical image fusion produces a composite fused image from various sources to improve quality and extract complementary information. It is extremely challenging to gather every piece of information needed using just one imaging method; therefore, images obtained from different modalities are fused. Additional clinical information can be gleaned through the fusion of several types of medical image pairings …
31

Qi, Guanqiu, Gang Hu, Neal Mazur, Huahua Liang, and Matthew Haner. "A Novel Multi-Modality Image Simultaneous Denoising and Fusion Method Based on Sparse Representation." Computers 10, no. 10 (2021): 129. http://dx.doi.org/10.3390/computers10100129.

Abstract:
Multi-modality image fusion, applied to improve image quality, has drawn great attention from researchers in recent years. However, noise is generated in images captured by different types of imaging sensors, which can seriously affect the performance of multi-modality image fusion. In the standard approach to noisy image fusion, source images are denoised first, and then the denoised images are fused. However, image denoising can decrease the sharpness of the source images and affect fusion performance. Additionally, denoising and fusion are processed in separate modes, which …
32

Phade, Gayatri, and Priyanka Bhatambarekar. "Exploring the Terrain: An Investigation into Deep Learning-Based Fusion Strategies for Integrating Infrared and Visible Imagery." Journal of Electrical Systems 20, no. 2 (2024): 2316–27. http://dx.doi.org/10.52783/jes.1998.

Abstract:
Infrared and visible image fusion technologies leverage distinct image features acquired from distinct sensors, preserving complementary information from the input images throughout the fusion process and utilizing redundant data to enhance the quality of the resulting fused image. Recently, deep learning (DL) methods have been employed by numerous researchers to investigate image fusion, revealing that the application of DL significantly enhances the efficiency of the model and the quality of fusion outcomes. Nevertheless, it is important to note that DL can be implemented in various branches …
33

Diwakar, Manoj, Prabhishek Singh, Vinayakumar Ravi, and Ankur Maurya. "A Non-Conventional Review on Multi-Modality-Based Medical Image Fusion." Diagnostics 13, no. 5 (2023): 820. http://dx.doi.org/10.3390/diagnostics13050820.

Abstract:
Today, medical images play a crucial role in obtaining relevant medical information for clinical purposes. However, the quality of medical images must be analyzed and improved: various factors affect quality at the time of medical image reconstruction. To obtain the most clinically relevant information, multi-modality-based image fusion is beneficial. Numerous multi-modality-based image fusion techniques are present in the literature, and each method has its assumptions, merits, and barriers. This paper critically analyses some sizable non-conventional work with …
34

Jin, Qi, Sanqing Tan, Gui Zhang, et al. "Visible and Infrared Image Fusion of Forest Fire Scenes Based on Generative Adversarial Networks with Multi-Classification and Multi-Level Constraints." Forests 14, no. 10 (2023): 1952. http://dx.doi.org/10.3390/f14101952.

Abstract:
Aiming to address deficiencies in existing image fusion methods, this paper proposes a multi-level and multi-classification generative adversarial network (GAN)-based method (MMGAN) for fusing visible and infrared images of forest fire scenes (the surroundings of firefighters), solving the problem that GANs tend to ignore visible contrast-ratio information and detailed infrared texture information. The study was based on real-time visible and infrared image data acquired by visible and infrared binocular cameras on forest firefighters' helmets. We improved the GAN by, on the one hand, …
35

Chinnala Balakrishna and Shepuri Srinivasulu. "Astronomical bodies detection with stacking of CoAtNets by fusion of RGB and depth Images." International Journal of Science and Research Archive 12, no. 2 (2024): 423–27. http://dx.doi.org/10.30574/ijsra.2024.12.2.1234.

Abstract:
A space situational awareness (SSA) system requires detection of space objects that vary in size, shape, and type. Space images are difficult to process because of factors such as illumination and noise, which make the recognition task complex. Image fusion is an important area in image processing for a variety of applications, including RGB-D sensor fusion, remote sensing, medical diagnostics, and infrared and visible image fusion. In recent times, various image fusion algorithms have been developed, showing superior performance in exploring information that is not …
36

Kulkarni, Jyoti S. "Genetic Algorithm Approach for Image Fusion: A Simple Method and Block Method." International Journal of Innovative Technology and Exploring Engineering 11, no. 6 (2022): 16–21. http://dx.doi.org/10.35940/ijitee.f9895.0511622.

Abstract:
The sensors available nowadays do not generate images of all objects in a scene with the same clarity at various distances. Progress in sensor technology has improved the quality of images over recent years; however, the data generated by a single image is limited. Image fusion is used to merge information from multiple input images. Image fusion methods are classified by the image acquisition technique as well as by the level of processing, and many techniques are available under each. Several input image acquisition techniques exist, such as multisensor, multifocus, and multi…
37

Peng, Yan-Tsung, He-Hao Liao, and Ching-Fu Chen. "Two-Exposure Image Fusion Based on Optimized Adaptive Gamma Correction." Sensors 22, no. 1 (2021): 24. http://dx.doi.org/10.3390/s22010024.

Abstract:
In contrast to conventional digital images, high-dynamic-range (HDR) images have a broader intensity range between the darkest and brightest regions to capture more details in a scene. Such images are produced by fusing images with different exposure values (EVs) of the same scene. Most existing multi-scale exposure fusion (MEF) algorithms assume that the input images are multi-exposed with small EV intervals. However, thanks to emerging spatially multiplexed exposure technology that can capture a pair of short- and long-exposure images simultaneously, it is essential to deal with two-exposure …
38

Li, Feiyan. "Assessment of Multisource Remote Sensing Image Fusion by several dissimilarity Methods." Journal of Physics: Conference Series 2031, no. 1 (2021): 012016. http://dx.doi.org/10.1088/1742-6596/2031/1/012016.

Full text
Abstract:
Recently, advancements in remote sensing technology have made it easier to obtain satellite data of various temporal and spatial resolutions. Remote sensing techniques can be a useful tool for detecting vegetation and soil conditions, monitoring crop diseases, supporting natural disaster prevention, etc. Although scenes taken by different sensors depict the same ground objects, the information they offer is redundant, complementary, and collaborative because the spatial, spectral, and temporal resolutions differ. The method of image fusion can integrate an image with rich details an
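Two of the simplest (dis)similarity measures commonly used to assess a fused image against a reference — root-mean-square error and the Pearson correlation coefficient — can be sketched as follows. These are generic illustrations; the specific metric set evaluated in the paper may differ.

```python
import numpy as np

def rmse(ref, fused):
    """Root-mean-square error between a reference and a fused image;
    0 means the two images are identical."""
    return np.sqrt(np.mean((ref.astype(float) - fused.astype(float)) ** 2))

def correlation(ref, fused):
    """Pearson correlation coefficient, measuring how well the fused
    image tracks the reference (1.0 for a perfect linear relationship)."""
    r = ref.ravel().astype(float)
    f = fused.ravel().astype(float)
    return np.corrcoef(r, f)[0, 1]
```

Note that correlation is invariant to linear rescaling of intensities, while RMSE is not — which is why fusion assessments usually report several complementary measures.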
APA, Harvard, Vancouver, ISO, and other styles
39

Raj, Sumit, B. K. Singh, and Amitava Choudhury. "An Unsupervised Image Fusion Approach Using Convolution Neural Network for Feature Enhancement of Medical Brain Images." Engineering World 6 (October 30, 2024): 154–58. http://dx.doi.org/10.37394/232025.2024.6.16.

Full text
Abstract:
Image fusion is the process of amalgamating two or more images into a single composite image enriched with diverse details from the originals. In the medical field, image fusion serves as an indispensable tool for elevating the precision of medical imagery and facilitating diagnostic processes. With the advent of deep learning, there has been a significant increase in the accuracy and effectiveness of image fusion techniques. This paper presents a deep-learning-based approach for medical image fusion that combines the advantages of deep-learning techniques with traditiona
APA, Harvard, Vancouver, ISO, and other styles
40

Ahmed, Ali, and Sara Mohamed. "Implementation of early and late fusion methods for content-based image retrieval." International Journal of ADVANCED AND APPLIED SCIENCES 8, no. 7 (2021): 97–105. http://dx.doi.org/10.21833/ijaas.2021.07.012.

Full text
Abstract:
Content-Based Image Retrieval (CBIR) systems retrieve images from an image repository or database that are visually similar to the query image. CBIR plays an important role in various fields such as medical diagnosis, crime prevention, web-based searching, and architecture. CBIR consists mainly of two stages: the first is feature extraction and the second is similarity matching. There are several ways to improve the efficiency and performance of CBIR, such as segmentation, relevance feedback, query expansion, and fusion-based methods. The literature has suggeste
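The early/late distinction can be sketched directly: early fusion concatenates feature vectors before matching, while late fusion combines the similarity scores of separate matchers. A minimal illustration under assumed L2-normalized features and a fixed mixing weight (both are assumptions, not the paper's exact setup):

```python
import numpy as np

def early_fusion(feat_a, feat_b):
    """Early fusion: L2-normalize each feature vector, then concatenate
    them into one joint descriptor used for matching."""
    a = feat_a / (np.linalg.norm(feat_a) + 1e-12)
    b = feat_b / (np.linalg.norm(feat_b) + 1e-12)
    return np.concatenate([a, b])

def late_fusion(sim_a, sim_b, w=0.5):
    """Late fusion: linearly combine per-image similarity scores produced
    by two independent matchers."""
    return w * sim_a + (1 - w) * sim_b

def rank_images(fused_scores):
    """Return database indices sorted by descending fused similarity."""
    return np.argsort(-fused_scores)
```

Early fusion lets the matcher see cross-feature interactions; late fusion keeps the matchers independent and is easier to extend with new feature types.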
APA, Harvard, Vancouver, ISO, and other styles
41

Tian, Yingzhong, Jie Luo, Wenjun Zhang, Tinggang Jia, Aiguo Wang, and Long Li. "Multifocus Image Fusion in Q-Shift DTCWT Domain Using Various Fusion Rules." Mathematical Problems in Engineering 2016 (2016): 1–12. http://dx.doi.org/10.1155/2016/5637306.

Full text
Abstract:
Multifocus image fusion integrates a partially focused image sequence into a single fused image that is focused everywhere; multiple methods have been proposed over the past decades. The Dual Tree Complex Wavelet Transform (DTCWT) is one of the most precise, eliminating two main defects of the Discrete Wavelet Transform (DWT). Q-shift DTCWT was proposed afterwards to simplify the construction of filters in DTCWT, producing better fusion effects. A different image fusion strategy based on Q-shift DTCWT is presented in this work. According to the strategy, firstly, each image is de
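The underlying idea of multifocus fusion — per region, keep the source that is better focused — can be shown with a spatial-domain sketch using local variance as the focus measure. This is a crude stand-in for the Q-shift DTCWT pipeline in the paper; the window size and the variance criterion are assumptions.

```python
import numpy as np

def local_focus_measure(img, k=3):
    """Local variance over a k-by-k window as a crude focus measure:
    sharp (in-focus) regions vary more than blurred ones."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode='reflect')
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].var()
    return out

def multifocus_fuse(img_a, img_b):
    """Pick, per pixel, the source whose neighborhood is better focused."""
    fa = local_focus_measure(img_a)
    fb = local_focus_measure(img_b)
    return np.where(fa >= fb, img_a, img_b)
```

Transform-domain methods such as DTCWT apply the same selection logic to subband coefficients instead of raw pixels, which avoids the blocking artifacts a spatial rule can introduce.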
APA, Harvard, Vancouver, ISO, and other styles
42

Lokanath Reddy, C., and K. Sripal Reddy. "Improvement of Image Fusion by Integrating Wavelet Transform with Principal Component Analysis." International Journal of Engineering & Technology 7, no. 4.6 (2018): 220. http://dx.doi.org/10.14419/ijet.v7i4.6.20479.

Full text
Abstract:
The process that combines two or more related source images into a single output image is known as image fusion. Image fusion is mainly used to analyze image areas where the pixel values, i.e., the information, are of low intensity. Fusion of images has been used in different applications. The correlation property is important in image fusion analysis; correlation can be controlled by distributing the energy across different spectral bands. Broadly, image fusion methods can be categorized into three groups: spatial, transform, and statistical methods. The image fusion process should pres
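The PCA half of such a hybrid scheme can be sketched on its own: fusion weights are taken from the dominant eigenvector of the 2×2 covariance matrix of the two sources. This is a generic PCA-fusion illustration, not the paper's exact wavelet-plus-PCA integration.

```python
import numpy as np

def pca_fusion_weights(img_a, img_b):
    """Derive fusion weights from the principal eigenvector of the 2x2
    covariance matrix formed by treating each source image (flattened)
    as one variable."""
    data = np.stack([img_a.ravel(), img_b.ravel()])
    cov = np.cov(data)
    eigvals, eigvecs = np.linalg.eigh(cov)
    v = np.abs(eigvecs[:, np.argmax(eigvals)])  # dominant component
    return v / v.sum()                          # normalize to sum to 1

def pca_fuse(img_a, img_b):
    """Weighted average of the sources using PCA-derived weights."""
    wa, wb = pca_fusion_weights(img_a, img_b)
    return wa * img_a + wb * img_b
```

In the hybrid scheme described by the abstract, a rule like this would typically be applied to wavelet subbands rather than to the raw images.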
APA, Harvard, Vancouver, ISO, and other styles
43

Zhao, Liquan, Chen Ke, Yanfei Jia, Cong Xu, and Zhijun Teng. "Infrared and Visible Image Fusion via Residual Interactive Transformer and Cross-Attention Fusion." Sensors 25, no. 14 (2025): 4307. https://doi.org/10.3390/s25144307.

Full text
Abstract:
Infrared and visible image fusion combines infrared and visible images of the same scene to produce a more informative and comprehensive fused image. Existing deep learning-based fusion methods fail to establish dependencies between global and local information during feature extraction. This results in unclear scene texture details and low contrast of the infrared thermal targets in the fused image. This paper proposes an infrared and visible image fusion network to address this issue via the use of a residual interactive transformer and cross-attention fusion. The network first introduces a
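Cross-attention fusion, at its core, lets features from one modality query the features of the other. A minimal single-head sketch in NumPy (the learned projection matrices, multi-head structure, and residual paths of the actual network are omitted; feature shapes are assumptions):

```python
import numpy as np

def cross_attention(q_feats, kv_feats):
    """Single-head cross-attention: rows of q_feats (one modality) attend
    over rows of kv_feats (the other modality) and return a weighted
    mixture of the key/value features."""
    d = q_feats.shape[-1]
    scores = q_feats @ kv_feats.T / np.sqrt(d)       # scaled dot products
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)         # softmax over keys
    return attn @ kv_feats
```

When the queries carry no preference (all zeros), the softmax is uniform and each output is simply the mean of the other modality's features — a useful sanity check.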
APA, Harvard, Vancouver, ISO, and other styles
44

Dong, Linlu, Jun Wang, Liangjun Zhao, Yun Zhang, and Jie Yang. "ICIF: Image fusion via information clustering and image features." PLOS ONE 18, no. 8 (2023): e0286024. http://dx.doi.org/10.1371/journal.pone.0286024.

Full text
Abstract:
Image fusion technology integrates images collected by different types of sensors into a single image to generate high-definition images and extract more comprehensive information. However, all available techniques derive image features from each sensor separately, resulting in poorly correlated features when different types of sensors are used during the fusion process. Relying on the fusion strategy alone to compensate for the differences between features is an important reason for the poor clarity of fusion results. Therefore, this paper proposes a fu
APA, Harvard, Vancouver, ISO, and other styles
45

Wang, Hongfeng, Jianzhong Wang, Haonan Xu, Yong Sun, and Zibo Yu. "DRSNFuse: Deep Residual Shrinkage Network for Infrared and Visible Image Fusion." Sensors 22, no. 14 (2022): 5149. http://dx.doi.org/10.3390/s22145149.

Full text
Abstract:
Infrared images are robust against illumination variation and disguises, containing the sharp edge contours of objects. Visible images are enriched with texture details. Infrared and visible image fusion seeks to obtain high-quality images, keeping the advantages of source images. This paper proposes an object-aware image fusion method based on a deep residual shrinkage network, termed as DRSNFuse. DRSNFuse exploits residual shrinkage blocks for image fusion and introduces a deeper network in infrared and visible image fusion tasks than existing methods based on fully convolutional networks. T
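The shrinkage at the heart of a residual shrinkage block is soft thresholding: small, noise-like responses are zeroed and larger ones are shrunk toward zero. A minimal sketch (in the real network the threshold tau is learned per channel, not fixed):

```python
import numpy as np

def soft_threshold(x, tau):
    """Soft thresholding: zero out responses with magnitude below tau and
    shrink the rest toward zero by tau, preserving sign."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)
```

Inside a residual shrinkage block this operation is applied to feature maps before they are added back to the identity path, which is what gives the block its denoising behavior.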
APA, Harvard, Vancouver, ISO, and other styles
46

Yang, Yong. "Wavelet Transform with a Novel Integration Technique for Image Fusion." Advanced Materials Research 204-210 (February 2011): 1419–22. http://dx.doi.org/10.4028/www.scientific.net/amr.204-210.1419.

Full text
Abstract:
Image fusion combines several different source images to form a new image. Recent studies show that, among a variety of image fusion algorithms, the wavelet-based method is more effective. In the wavelet-based method, the key technique is the fusion scheme, which decides the final fused result. This paper presents a novel fusion scheme that integrates the wavelet decomposed coefficients in quite separate ways when fusing images. The method is formed by considering the different physical meanings of the coefficients in both the low frequency and high frequency
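Treating the two coefficient types separately is the classical wavelet fusion rule: average the low-frequency (approximation) subbands, which carry coarse brightness structure, and keep the stronger response in the high-frequency (detail) subbands, which carry edges. A sketch of that baseline rule (the paper's novel scheme refines it, so this is not the authors' method):

```python
import numpy as np

def fuse_lowpass(ca_a, ca_b):
    """Low-frequency subbands carry coarse brightness structure, so a
    simple average preserves the overall appearance of both sources."""
    return 0.5 * (ca_a + ca_b)

def fuse_highpass(ch_a, ch_b):
    """High-frequency subbands carry edges and detail, so keep, per
    coefficient, whichever source responds more strongly."""
    return np.where(np.abs(ch_a) >= np.abs(ch_b), ch_a, ch_b)
```

In a full pipeline, these rules are applied subband by subband between the forward and inverse wavelet transforms (e.g. `pywt.dwt2` / `pywt.idwt2` with PyWavelets).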
APA, Harvard, Vancouver, ISO, and other styles
47

Allapakam, Venu, and Yepuganti Karuna. "An ensemble deep learning model for medical image fusion with Siamese neural networks and VGG-19." PLOS ONE 19, no. 10 (2024): e0309651. http://dx.doi.org/10.1371/journal.pone.0309651.

Full text
Abstract:
Multimodal medical image fusion methods, which combine complementary information from multiple multi-modality medical images, are among the most important and practical approaches in numerous clinical applications. Various conventional image fusion techniques have been developed for multimodality image fusion. Complex procedures for weight map computation, fixed fusion strategies, and a lack of contextual understanding remain difficulties in conventional and machine learning approaches, usually resulting in artefacts that degrade image quality. This work proposes an efficient hybrid learning model for m
APA, Harvard, Vancouver, ISO, and other styles
48

Wang, Qiang, Xuezhi Yan, Wenjie Xie, and Yong Wang. "Image Fusion Method Based on Snake Visual Imaging Mechanism and PCNN." Sensors 24, no. 10 (2024): 3077. http://dx.doi.org/10.3390/s24103077.

Full text
Abstract:
Image fusion enriches an image and improves its quality, facilitating subsequent image processing and analysis. With the increasing importance of image fusion technology, the fusion of infrared and visible images has received extensive attention. In today’s deep learning environment, deep learning is widely used in the field of image fusion. However, in some applications, it is not possible to obtain a large amount of training data. Because some special organs of snakes can receive and process infrared information and visible information, t
APA, Harvard, Vancouver, ISO, and other styles
49

Li, Lei. "An Image Fusion Algorithm Based on Pseudo-Color." Advanced Materials Research 433-440 (January 2012): 5436–42. http://dx.doi.org/10.4028/www.scientific.net/amr.433-440.5436.

Full text
Abstract:
Pseudo-color processing is very meaningful for target identification and tracking, and experimental results show that pseudo-color image fusion is a very effective method. This paper presents a new false-color image fusion method. The grayscale images are fused using the wavelet transform; the gray fused image and its differences from the original images are then used, respectively, as the l, α, and β components of a color fusion image, and the final false-color fused image is obtained after a color transformation. The results show that the colors of the fused image are more vivid and more in line wi
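The channel assignment described here — gray fused image as lightness, differences against each source as the two opponent color channels — can be sketched as follows. This is a toy mapping loosely in the spirit of an l-α-β assignment; the wavelet fusion step and the final color-space transformation from the paper are omitted.

```python
import numpy as np

def pseudo_color_fuse(gray_fused, img_a, img_b):
    """Toy pseudo-color mapping: the fused grayscale drives the lightness
    channel, and its differences against each source drive two opponent
    color channels. Returns an H x W x 3 array (l, alpha, beta)."""
    l = gray_fused
    alpha = gray_fused - img_a   # opponent channel 1
    beta = gray_fused - img_b    # opponent channel 2
    return np.stack([l, alpha, beta], axis=-1)
```

Because the color channels encode where the fused image departs from each source, regions contributed mainly by one sensor stand out in a distinct hue — the effect that makes pseudo-color fusion useful for target identification.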
APA, Harvard, Vancouver, ISO, and other styles
50

Modak, Sourav, Jonathan Heil, and Anthony Stein. "Pansharpening Low-Altitude Multispectral Images of Potato Plants Using a Generative Adversarial Network." Remote Sensing 16, no. 5 (2024): 874. http://dx.doi.org/10.3390/rs16050874.

Full text
Abstract:
Image preprocessing and fusion are commonly used for enhancing remote-sensing images, but the resulting images often lack useful spatial features. As the majority of research on image fusion has concentrated on the satellite domain, the image-fusion task for Unmanned Aerial Vehicle (UAV) images has received minimal attention. This study investigated an image-improvement strategy by integrating image preprocessing and fusion tasks for UAV images. The goal is to improve spatial details and avoid color distortion in fused images. Techniques such as image denoising, sharpening, and Contrast Limite
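A classical baseline for the pansharpening task this paper addresses with a GAN is the Brovey transform, which rescales each multispectral band by the ratio of the high-resolution panchromatic intensity to the mean multispectral intensity. A sketch of that baseline (not the paper's method):

```python
import numpy as np

def brovey_pansharpen(ms, pan, eps=1e-12):
    """Brovey transform: scale each multispectral band (ms, shape H x W x B)
    by the ratio of the panchromatic image (pan, shape H x W) to the mean
    multispectral intensity, injecting high-resolution spatial detail."""
    intensity = ms.mean(axis=-1, keepdims=True)
    return ms * (pan[..., None] / (intensity + eps))
```

Brovey sharpening is known to distort colors when the pan band is not spectrally consistent with the MS bands — exactly the failure mode that motivates learned approaches such as the GAN evaluated in the paper.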
APA, Harvard, Vancouver, ISO, and other styles