Journal articles on the topic "Image visible"


Consult the 50 best journal articles for your research on the topic "Image visible".


1

Uddin, Mohammad Shahab, Chiman Kwan, and Jiang Li. "MWIRGAN: Unsupervised Visible-to-MWIR Image Translation with Generative Adversarial Network." Electronics 12, no. 4 (2023): 1039. http://dx.doi.org/10.3390/electronics12041039.

Abstract:
Unsupervised image-to-image translation techniques have been used in many applications, including visible-to-Long-Wave Infrared (visible-to-LWIR) image translation, but very few papers have explored visible-to-Mid-Wave Infrared (visible-to-MWIR) image translation. In this paper, we investigated unsupervised visible-to-MWIR image translation using generative adversarial networks (GANs). We proposed a new model named MWIRGAN for visible-to-MWIR image translation in a fully unsupervised manner. We utilized a perceptual loss to leverage shape identification and location changes of the objects in t
2

Dong, Yumin, Zhengquan Chen, Ziyi Li, and Feng Gao. "A Multi-Branch Multi-Scale Deep Learning Image Fusion Algorithm Based on DenseNet." Applied Sciences 12, no. 21 (2022): 10989. http://dx.doi.org/10.3390/app122110989.

Abstract:
Infrared images have good resistance to environmental interference and can capture hot-target information well, but they lack rich, detailed texture information and have poor contrast. Visible images have clear, detailed texture information, but their imaging process depends more on the environment, and the quality of the environment determines the quality of the visible image. This paper presents an infrared and visible image fusion algorithm based on deep learning. Two identical feature extractors are used to extract the features of visible and infrared images at different scales, f
3

Zhang, Yongxin, Deguang Li, and WenPeng Zhu. "Infrared and Visible Image Fusion with Hybrid Image Filtering." Mathematical Problems in Engineering 2020 (July 29, 2020): 1–17. http://dx.doi.org/10.1155/2020/1757214.

Abstract:
Image fusion is an important technique aiming to generate a composite image from multiple images of the same scene. Infrared and visible images can provide the same scene information from different aspects, which is useful for target recognition. But the existing fusion methods cannot well preserve the thermal radiation and appearance information simultaneously. Thus, we propose an infrared and visible image fusion method by hybrid image filtering. We represent the fusion problem with a divide and conquer strategy. A Gaussian filter is used to decompose the source images into base layers and d
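The decomposition this abstract describes — a Gaussian filter splitting each source image into a base layer plus a residual detail layer — can be sketched in a few lines of numpy. This is a generic illustration, not the paper's hybrid-filtering method: the fusion rules here (average the base layers, keep the larger-magnitude detail coefficient) are simple stand-ins, and `fuse_base_detail` and the sigma value are illustrative names and parameters.

```python
import numpy as np

def gaussian_kernel(sigma: float) -> np.ndarray:
    """1-D normalized Gaussian kernel with radius ~3*sigma."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur(img: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """Separable Gaussian blur with edge padding (the 'base layer')."""
    k = gaussian_kernel(sigma)
    pad = len(k) // 2
    conv = lambda v: np.convolve(np.pad(v, pad, mode="edge"), k, mode="valid")
    tmp = np.apply_along_axis(conv, 1, img)   # filter rows
    return np.apply_along_axis(conv, 0, tmp)  # filter columns

def fuse_base_detail(ir: np.ndarray, vis: np.ndarray) -> np.ndarray:
    """Fuse two grayscale float images via base/detail decomposition."""
    b_ir, b_vis = blur(ir), blur(vis)
    d_ir, d_vis = ir - b_ir, vis - b_vis      # detail = source - base
    base = (b_ir + b_vis) / 2                 # average the base layers
    detail = np.where(np.abs(d_ir) >= np.abs(d_vis), d_ir, d_vis)  # max-abs rule
    return base + detail
```

Fusing an image with itself returns the image unchanged, which is a quick sanity check for any decomposition-based fusion scheme.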
4

Son, Dong-Min, Hyuk-Ju Kwon, and Sung-Hak Lee. "Visible and Near Infrared Image Fusion Using Base Tone Compression and Detail Transform Fusion." Chemosensors 10, no. 4 (2022): 124. http://dx.doi.org/10.3390/chemosensors10040124.

Abstract:
This study aims to develop a spatial dual-sensor module for acquiring visible and near-infrared images in the same space without time shifting and to synthesize the captured images. The proposed method synthesizes visible and near-infrared images using contourlet transform, principal component analysis, and iCAM06, while the blending method uses color information in a visible image and detailed information in an infrared image. The contourlet transform obtains detailed information and can decompose an image into directional images, making it better in obtaining detailed information than decomp
5

Liu, Zheng, Su Mei Cui, He Yin, and Yu Chi Lin. "Comparative Analysis of Image Measurement Accuracy in High Temperature Based on Visible and Infrared Vision." Applied Mechanics and Materials 300-301 (February 2013): 1681–86. http://dx.doi.org/10.4028/www.scientific.net/amm.300-301.1681.

Abstract:
Image measurement is a common, non-contact dimensional measurement method. However, because of light deflection, visible-light imaging is strongly affected, which greatly reduces measurement accuracy. Various factors of visual measurement at high temperature are analyzed by applying Planck theory. Thereafter, by means of light dispersion theory, the image measurement errors of visible and infrared images at high temperature caused by light deviation are comparatively analyzed. Imaging errors of visible and infrared images are quantified with experime
6

Huang, Hui, Linlu Dong, Zhishuang Xue, Xiaofang Liu, and Caijian Hua. "Fusion algorithm of visible and infrared image based on anisotropic diffusion and image enhancement (capitalize only the first word in a title (or heading), the first word in a subtitle (or subheading), and any proper nouns)." PLOS ONE 16, no. 2 (2021): e0245563. http://dx.doi.org/10.1371/journal.pone.0245563.

Abstract:
Existing visible and infrared image fusion algorithms focus only on highlighting infrared targets, neglect the rendering of image details, and cannot take into account the characteristics of both infrared and visible images. This paper therefore proposes an image enhancement fusion algorithm combining the Karhunen-Loeve transform and Laplacian pyramid fusion. The detail layer of the source image is obtained by anisotropic diffusion to extract more abundant texture information. The infrared images adopt an adaptive histogram partition and brightness correction enhancement algorithm
7

Xu, Dongdong, Yongcheng Wang, Shuyan Xu, Kaiguang Zhu, Ning Zhang, and Xin Zhang. "Infrared and Visible Image Fusion with a Generative Adversarial Network and a Residual Network." Applied Sciences 10, no. 2 (2020): 554. http://dx.doi.org/10.3390/app10020554.

Abstract:
Infrared and visible image fusion can obtain combined images with salient hidden objectives and abundant visible details simultaneously. In this paper, we propose a novel method for infrared and visible image fusion with a deep learning framework based on a generative adversarial network (GAN) and a residual network (ResNet). The fusion is accomplished with an adversarial game and directed by the unique loss functions. The generator with residual blocks and skip connections can extract deep features of source image pairs and generate an elementary fused image with infrared thermal radiation in
8

Niu, Yifeng, Shengtao Xu, Lizhen Wu, and Weidong Hu. "Airborne Infrared and Visible Image Fusion for Target Perception Based on Target Region Segmentation and Discrete Wavelet Transform." Mathematical Problems in Engineering 2012 (2012): 1–10. http://dx.doi.org/10.1155/2012/275138.

Abstract:
Infrared and visible image fusion is an important precondition for realizing target perception for unmanned aerial vehicles (UAVs), enabling a UAV to perform various missions. Texture and color information in visible images is abundant, while target information is more salient in infrared images. The conventional fusion methods are mostly based on region segmentation; as a result, a fused image suitable for target recognition could not actually be acquired. In this paper, a novel fusion method for airborne infrared and visible images based on target region segmentation and the discrete wavelet tran
9

Yang, Shihao, Min Sun, Xiayin Lou, Hanjun Yang, and Dong Liu. "Nighttime Thermal Infrared Image Translation Integrating Visible Images." Remote Sensing 16, no. 4 (2024): 666. http://dx.doi.org/10.3390/rs16040666.

Abstract:
Nighttime Thermal InfraRed (NTIR) image colorization, also known as the translation of NTIR images into Daytime Color Visible (DCV) images, can facilitate human and intelligent system perception of nighttime scenes under weak lighting conditions. End-to-end neural networks have been used to learn the mapping relationship between temperature and color domains, and translate NTIR images with one channel into DCV images with three channels. However, this mapping relationship is an ill-posed problem with multiple solutions without constraints, resulting in blurred edges, color disorder, and semant
10

Batchuluun, Ganbayar, Se Hyun Nam, and Kang Ryoung Park. "Deep Learning-Based Plant Classification Using Nonaligned Thermal and Visible Light Images." Mathematics 10, no. 21 (2022): 4053. http://dx.doi.org/10.3390/math10214053.

Abstract:
There have been various studies conducted on plant images. Machine learning algorithms are usually used in visible light image-based studies, whereas, in thermal image-based studies, acquired thermal images tend to be analyzed with a naked eye visual examination. However, visible light cameras are sensitive to light, and cannot be used in environments with low illumination. Although thermal cameras are not susceptible to these drawbacks, they are sensitive to atmospheric temperature and humidity. Moreover, in previous thermal camera-based studies, time-consuming manual analyses were performed.
11

Zhang, Yugui, Bo Zhai, Gang Wang, and Jianchu Lin. "Pedestrian Detection Method Based on Two-Stage Fusion of Visible Light Image and Thermal Infrared Image." Electronics 12, no. 14 (2023): 3171. http://dx.doi.org/10.3390/electronics12143171.

Abstract:
Pedestrian detection has important research value and practical significance. It has been used in intelligent monitoring, intelligent transportation, intelligent therapy, and automatic driving. However, in the pixel-level and feature-level fusion of visible light images and thermal infrared images under daytime shadows or under low illumination at night in actual surveillance, missed and false pedestrian detections always occur. To solve this problem, an algorithm for pedestrian detection based on the two-stage fusion of visible light images and thermal infrared image
12

Zhang, Zili, Yan Tian, Jianxiang Li, and Yiping Xu. "Unsupervised Remote Sensing Image Super-Resolution Guided by Visible Images." Remote Sensing 14, no. 6 (2022): 1513. http://dx.doi.org/10.3390/rs14061513.

Abstract:
Remote sensing images are widely used in many applications. However, due to being limited by the sensors, it is difficult to obtain high-resolution (HR) images from remote sensing images. In this paper, we propose a novel unsupervised cross-domain super-resolution method devoted to reconstructing a low-resolution (LR) remote sensing image guided by an unpaired HR visible natural image. Therefore, an unsupervised visible image-guided remote sensing image super-resolution network (UVRSR) is built. The network is divided into two learnable branches: a visible image-guided branch (VIG) and a remot
13

Lee, Min-Han, Young-Ho Go, Seung-Hwan Lee, and Sung-Hak Lee. "Low-Light Image Enhancement Using CycleGAN-Based Near-Infrared Image Generation and Fusion." Mathematics 12, no. 24 (2024): 4028. https://doi.org/10.3390/math12244028.

Abstract:
Image visibility is often degraded under challenging conditions such as low light, backlighting, and inadequate contrast. To mitigate these issues, techniques like histogram equalization, high dynamic range (HDR) tone mapping and near-infrared (NIR)–visible image fusion are widely employed. However, these methods have inherent drawbacks: histogram equalization frequently causes oversaturation and detail loss, while visible–NIR fusion requires complex and error-prone images. The proposed algorithm of a complementary cycle-consistent generative adversarial network (CycleGAN)-based training with
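Histogram equalization, cited above as a baseline visibility-enhancement technique (and the source of the oversaturation problem the paper targets), has a standard CDF-based implementation. This sketch assumes an 8-bit grayscale image and is a textbook version, not anything specific to the cited paper:

```python
import numpy as np

def hist_equalize(img: np.ndarray) -> np.ndarray:
    """Global histogram equalization for a uint8 grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)   # per-level pixel counts
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                        # first nonzero CDF value
    denom = max(img.size - cdf_min, 1)               # guard constant images
    # Map each gray level through the normalized CDF onto [0, 255].
    lut = np.round((cdf - cdf_min) / denom * 255).astype(np.uint8)
    return lut[img]
```

An image whose gray levels are already uniformly distributed passes through unchanged; on skewed histograms the remapping is aggressive, which is exactly the oversaturation and detail-loss drawback the abstract cites.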
14

Lee, Ji-Min, Young-Eun An, EunSang Bak, and Sungbum Pan. "Improvement of Negative Emotion Recognition in Visible Images Enhanced by Thermal Imaging." Sustainability 14, no. 22 (2022): 15200. http://dx.doi.org/10.3390/su142215200.

Abstract:
Facial expressions help in understanding the intentions of others as they are an essential means of communication, revealing human emotions. Recently, thermal imaging has been playing a complementary role in emotion recognition and is considered an alternative to overcome the drawbacks of visible imaging. Notably, a relatively severe recognition error of fear among negative emotions frequently occurs in visible imaging. This study aims to improve the recognition performance of fear by using the visible and thermal images acquired simultaneously. When fear was not recognized in a visible image,
15

Liu, Xiaomin, Jun-Bao Li, and Jeng-Shyang Pan. "Feature Point Matching Based on Distinct Wavelength Phase Congruency and Log-Gabor Filters in Infrared and Visible Images." Sensors 19, no. 19 (2019): 4244. http://dx.doi.org/10.3390/s19194244.

Abstract:
Infrared and visible image matching methods have been rising in popularity with the emergence of more kinds of sensors, which provide more applications in visual navigation, precision guidance, image fusion, and medical image analysis. In such applications, image matching is utilized for location, fusion, image analysis, and so on. In this paper, an infrared and visible image matching approach, based on distinct wavelength phase congruency (DWPC) and log-Gabor filters, is proposed. Furthermore, this method is modified for non-linear image matching with different physical wavelengths. Phase con
16

Wang, Qi, Xiang Gao, Fan Wang, Zhihang Ji, and Xiaopeng Hu. "Feature Point Matching Method Based on Consistent Edge Structures for Infrared and Visible Images." Applied Sciences 10, no. 7 (2020): 2302. http://dx.doi.org/10.3390/app10072302.

Abstract:
Infrared and visible image match is an important research topic in the field of multi-modality image processing. Due to the difference of image contents like pixel intensities and gradients caused by disparate spectrums, it is a great challenge for infrared and visible image match in terms of the detection repeatability and the matching accuracy. To improve the matching performance, a feature detection and description method based on consistent edge structures of images (DDCE) is proposed in this paper. First, consistent edge structures are detected to obtain similar contents of infrared and v
17

Wang, Zhi, and Ao Dong. "Dual Generative Adversarial Network for Infrared and Visible Image Fusion." Journal of Computing and Electronic Information Management 16, no. 1 (2025): 55–59. https://doi.org/10.54097/jwm4va34.

Abstract:
The objective of infrared and visible image fusion is to integrate the prominent targets from the infrared image with the background information from the visible image into a single image. Many deep learning-based approaches have been employed in the field of image fusion. However, most methods have not been able to sufficiently extract the distinct features of images from different modalities, resulting in fusion outcomes that lean towards one modality while losing information from the other. To address this, we have developed a novel method based on a generative adversarial network for infrare
18

Wu, Yan Hai, Hao Zhang, Fang Ni Zhang, and Yue Hua Han. "Fusion of Visible and Infrared Images Based on Non-Sampling Contourlet and Wavelet Transform." Applied Mechanics and Materials 599-601 (August 2014): 1523–26. http://dx.doi.org/10.4028/www.scientific.net/amm.599-601.1523.

Abstract:
This paper presents a method for fusing visible and infrared images that combines the non-subsampled contourlet transform (NSCT) and the wavelet transform. The method first applies contrast enhancement to the infrared image. Next, it performs NSCT decomposition on the visible image and the enhanced infrared image, and then decomposes the resulting low-frequency component using the wavelet transform. Thirdly, different fusion rules are used for the high-frequency subbands of the NSCT decomposition and for the high- and low-frequency subbands of the wavelet transform. Finally, the fused image is obtained by reconstruction of the wavelet and NSCT. Experiments show that the method not only reta
19

HAO, Shuai, Xizi SUN, Xu MA, et al. "Infrared and visible image fusion method based on target enhancement and rat swarm optimization." Xibei Gongye Daxue Xuebao/Journal of Northwestern Polytechnical University 42, no. 4 (2024): 735–43. http://dx.doi.org/10.1051/jnwpu/20244240735.

Abstract:
To address target ambiguity and information loss in the fusion results of traditional infrared and visible images, a fusion method based on target enhancement and rat swarm optimization, abbreviated TERSFuse, is proposed. Firstly, to reduce the loss of original image details in the fusion results, an infrared contrast enhancement module and a brightness-perception-based visible image enhancement module are constructed. Secondly, the enhanced infrared and visible images are decomposed using the Laplace pyrami
20

Du, Qinglei, Han Xu, Yong Ma, Jun Huang, and Fan Fan. "Fusing Infrared and Visible Images of Different Resolutions via Total Variation Model." Sensors 18, no. 11 (2018): 3827. http://dx.doi.org/10.3390/s18113827.

Abstract:
In infrared and visible image fusion, existing methods typically have a prerequisite that the source images share the same resolution. However, due to limitations of hardware devices and application environments, infrared images constantly suffer from markedly lower resolution compared with the corresponding visible images. In this case, current fusion methods inevitably cause texture information loss in visible images or blur thermal radiation information in infrared images. Moreover, the principle of existing fusion rules typically focuses on preserving texture details in source images, whic
21

Gao, Peng, Tian Tian, Tianming Zhao, Linfeng Li, Nan Zhang, and Jinwen Tian. "GF-Detection: Fusion with GAN of Infrared and Visible Images for Vehicle Detection at Nighttime." Remote Sensing 14, no. 12 (2022): 2771. http://dx.doi.org/10.3390/rs14122771.

Abstract:
Vehicles are important targets in the remote sensing applications and nighttime vehicle detection has been a hot study topic in recent years. Vehicles in the visible images at nighttime have inadequate features for object detection. Infrared images retain the contours of vehicles while they lose the color information. Thus, it is valuable to fuse infrared and visible images to improve the vehicle detection performance at nighttime. However, it is still a challenge to design effective fusion models due to the complexity of visible and infrared images. In order to improve vehicle detection perfo
22

Chen, Xianglong, Haipeng Wang, Yaohui Liang, Ying Meng, and Shifeng Wang. "A Novel Infrared and Visible Image Fusion Approach Based on Adversarial Neural Network." Sensors 22, no. 1 (2021): 304. http://dx.doi.org/10.3390/s22010304.

Abstract:
The presence of fake pictures affects the reliability of visible face images under specific circumstances. This paper presents a novel adversarial neural network, named FTSGAN, for infrared and visible image fusion; the FTSGAN model is used to fuse the facial features of infrared and visible images to improve face recognition. In the FTSGAN model design, the Frobenius norm (F), total variation norm (TV), and structural similarity index measure (SSIM) are employed. The F and TV are used to limit the gray level and the gradient of the image, while the SSIM is used to
23

Jia, Weibin, Zhihuan Song, and Zhengguo Li. "Multi-scale Fusion of Stretched Infrared and Visible Images." Sensors 22, no. 17 (2022): 6660. http://dx.doi.org/10.3390/s22176660.

Abstract:
Infrared (IR) band sensors can capture digital images under challenging conditions, such as haze, smoke, and fog, while visible (VIS) band sensors seize abundant texture information. It is desired to fuse IR and VIS images to generate a more informative image. In this paper, a novel multi-scale IR and VIS images fusion algorithm is proposed to integrate information from both the images into the fused image and preserve the color of the VIS image. A content-adaptive gamma correction is first introduced to stretch the IR images by using one of the simplest edge-preserving filters, which alleviat
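The "content-adaptive gamma correction" this abstract applies to stretch IR images can be illustrated with a common mean-brightness heuristic: choose gamma so that the image mean maps toward mid-gray. This is a generic stand-in under an assumed [0, 1] float input, not the paper's actual edge-preserving correction, and `adaptive_gamma` is an illustrative name:

```python
import numpy as np

def adaptive_gamma(img: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Gamma-stretch a [0, 1] float image so its mean moves toward 0.5."""
    m = float(np.clip(img.mean(), eps, 1.0 - eps))  # guard log(0) and log(1)
    gamma = np.log(0.5) / np.log(m)                 # m < 0.5 -> gamma < 1 (brighten)
    return np.clip(img, 0.0, 1.0) ** gamma
```

A mid-gray image is left untouched (gamma = 1), while a dark IR frame with mean 0.25 gets gamma = 0.5 and is brightened.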
24

Shen, Sen, Di Li, Liye Mei, et al. "DFA-Net: Multi-Scale Dense Feature-Aware Network via Integrated Attention for Unmanned Aerial Vehicle Infrared and Visible Image Fusion." Drones 7, no. 8 (2023): 517. http://dx.doi.org/10.3390/drones7080517.

Abstract:
Fusing infrared and visible images taken by an unmanned aerial vehicle (UAV) is a challenging task, since infrared images distinguish the target from the background by the difference in infrared radiation, while the low resolution also produces a less pronounced effect. Conversely, the visible light spectrum has a high spatial resolution and rich texture; however, it is easily affected by harsh weather conditions like low light. Therefore, the fusion of infrared and visible light has the potential to provide complementary advantages. In this paper, we propose a multi-scale dense feature-aware
25

Li, Shengshi, Yonghua Zou, Guanjun Wang, and Cong Lin. "Infrared and Visible Image Fusion Method Based on a Principal Component Analysis Network and Image Pyramid." Remote Sensing 15, no. 3 (2023): 685. http://dx.doi.org/10.3390/rs15030685.

Abstract:
The aim of infrared (IR) and visible image fusion is to generate a more informative image for human observation or some other computer vision tasks. The activity-level measurement and weight assignment are two key parts in image fusion. In this paper, we propose a novel IR and visible fusion method based on the principal component analysis network (PCANet) and an image pyramid. Firstly, we use the lightweight deep learning network, a PCANet, to obtain the activity-level measurement and weight assignment of IR and visible images. The activity-level measurement obtained by the PCANet has a stron
26

Zhang, Xiaoyu, Laixian Zhang, Huichao Guo, et al. "DCLTV: An Improved Dual-Condition Diffusion Model for Laser-Visible Image Translation." Sensors 25, no. 3 (2025): 697. https://doi.org/10.3390/s25030697.

Abstract:
Laser active imaging systems can remedy the shortcomings of visible light imaging systems in difficult imaging circumstances, thereby attaining clear images. However, laser images exhibit significant modal discrepancy in contrast to the visible image, impeding human perception and computer processing. Consequently, it is necessary to translate laser images to visible images across modalities. Existing cross-modal image translation algorithms are plagued with issues, including difficult training and color bleeding. In recent studies, diffusion models have demonstrated superior image generation
27

Li, Liangliang, Ming Lv, Zhenhong Jia, et al. "An Effective Infrared and Visible Image Fusion Approach via Rolling Guidance Filtering and Gradient Saliency Map." Remote Sensing 15, no. 10 (2023): 2486. http://dx.doi.org/10.3390/rs15102486.

Abstract:
To solve problems of brightness and detail information loss in infrared and visible image fusion, an effective infrared and visible image fusion method using rolling guidance filtering and gradient saliency map is proposed in this paper. The rolling guidance filtering is used to decompose the input images into approximate layers and residual layers; the energy attribute fusion model is used to fuse the approximate layers; the gradient saliency map is introduced and the corresponding weight matrices are constructed to perform on residual layers. The fusion image is generated by reconstructing t
28

Zhang, Hui, Xu Ma, and Yanshan Tian. "An Image Fusion Method Based on Curvelet Transform and Guided Filter Enhancement." Mathematical Problems in Engineering 2020 (June 27, 2020): 1–8. http://dx.doi.org/10.1155/2020/9821715.

Abstract:
In order to improve the clarity of the fused image and to address the fact that the fusion effect is affected by the illumination and weather of visible light, a fusion method of infrared and visible images for night-vision context enhancement is proposed. First, a guided filter is used to enhance the details of the visible image. Then, the enhanced visible and infrared images are decomposed by the curvelet transform. An improved sparse representation is used to fuse the low-frequency part, while the high-frequency part is fused with a parameter-adaptive pulse-coupled neural network. F
29

Akbulut, Harun. "Visible Digital Image Watermarking Using Single Candidate Optimizer." Düzce Üniversitesi Bilim ve Teknoloji Dergisi 13, no. 1 (2025): 506–21. https://doi.org/10.29130/dubited.1532300.

Abstract:
With the advent of internet technologies, accessing information has become remarkably facile, while concurrently precipitating copyright conundrums. This predicament can be ameliorated by embedding copyright information within digital images, a methodology termed digital image watermarking. Artificial intelligence optimization algorithms are extensively employed in myriad problem-solving scenarios, yielding efficacious outcomes. This study proposes a visible digital image watermarking method utilizing the Single Candidate Optimizer (SCO). Contrary to many prevalent metaheuristic optimization a
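A visible watermark of the kind this entry studies is, at its core, an alpha blend of a mark into a host region; the Single Candidate Optimizer would then search over embedding parameters such as the blend strength and position. The function below is a hypothetical minimal sketch (the optimizer itself is out of scope), assuming float images in [0, 1] and top-left embedding:

```python
import numpy as np

def embed_visible_watermark(host: np.ndarray, mark: np.ndarray,
                            alpha: float = 0.3) -> np.ndarray:
    """Alpha-blend `mark` into the top-left corner of `host` (both in [0, 1])."""
    out = host.copy()
    h, w = mark.shape
    # Visible overlay: weighted mix of host and watermark in the embed region.
    out[:h, :w] = (1.0 - alpha) * host[:h, :w] + alpha * mark
    return out
```

With `alpha = 0` the host is returned unchanged; larger `alpha` makes the mark more visible at the cost of host fidelity, which is the trade-off an optimizer would tune.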
30

Li, Xiang, Yue Shun He, Xuan Zhan, and Feng Yu Liu. "A Rapid Fusion Algorithm of Infrared and the Visible Images Based on Directionlet Transform." Applied Mechanics and Materials 20-23 (January 2010): 45–51. http://dx.doi.org/10.4028/www.scientific.net/amm.20-23.45.

Abstract:
Keywords: Directionlet transform; image fusion; infrared images; fusion rule; anisotropy. Based on analysing the features of infrared and visible images, this paper proposes an improved fusion algorithm using the Directionlet transform. It works as follows: first, the colour visible images are separated to obtain the component images; then anisotropic decomposition is applied to the component images and the infrared images; after analysing these images, they are processed according to regional-energy rules; finally, the intense colour is incorporated to obtain the fused image. Simulation results show that this algorithm can effectivel
31

Ma, Weihong, Kun Wang, Jiawei Li, et al. "Infrared and Visible Image Fusion Technology and Application: A Review." Sensors 23, no. 2 (2023): 599. http://dx.doi.org/10.3390/s23020599.

Abstract:
The images acquired by a single visible light sensor are very susceptible to light conditions, weather changes, and other factors, while the images acquired by a single infrared light sensor generally have poor resolution, low contrast, low signal-to-noise ratio, and blurred visual effects. The fusion of visible and infrared light can avoid the disadvantages of two single sensors and, in fusing the advantages of both sensors, significantly improve the quality of the images. The fusion of infrared and visible images is widely used in agriculture, industry, medicine, and other fields. In this st
32

Luo, Yongyu, and Zhongqiang Luo. "Infrared and Visible Image Fusion: Methods, Datasets, Applications, and Prospects." Applied Sciences 13, no. 19 (2023): 10891. http://dx.doi.org/10.3390/app131910891.

Abstract:
Infrared and visible light image fusion combines infrared and visible light images by extracting the main information from each image and fusing it together to provide a more comprehensive image with more features from the two photos. Infrared and visible image fusion has gained popularity in recent years and is increasingly being employed in sectors such as target recognition and tracking, night vision, scene segmentation, and others. In order to provide a concise overview of infrared and visible picture fusion, this paper first explores its historical context before outlining current domesti
33

Wang, Jingjing, Jinwen Ren, Hongzhen Li, et al. "DDGANSE: Dual-Discriminator GAN with a Squeeze-and-Excitation Module for Infrared and Visible Image Fusion." Photonics 9, no. 3 (2022): 150. http://dx.doi.org/10.3390/photonics9030150.

Abstract:
Infrared images can provide clear contrast information to distinguish between the target and the background under any lighting conditions. In contrast, visible images can provide rich texture details and are compatible with the human visual system. The fusion of a visible image and infrared image will thus contain both comprehensive contrast information and texture details. In this study, a novel approach for the fusion of infrared and visible images is proposed based on a dual-discriminator generative adversarial network with a squeeze-and-excitation module (DDGANSE). Our approach establishes
34

Niu, Yi Feng, Sheng Tao Xu, and Wei Dong Hu. "Fusion of Infrared and Visible Image Based on Target Regions for Environment Perception." Applied Mechanics and Materials 128-129 (October 2011): 589–93. http://dx.doi.org/10.4028/www.scientific.net/amm.128-129.589.

Abstract:
Infrared and visible image fusion is an important precondition for realizing target perception for unmanned aerial vehicles (UAVs), on the basis of which a UAV can perform various missions. Details in visible images are abundant, while target information is more salient in infrared images. However, the conventional fusion methods are mostly based on region segmentation, and a fused image usable for target recognition cannot actually be acquired. In this paper, a novel fusion method of infrared and visible images based on target regions in the discrete wavelet transform (DWT) domain is proposed, which
35

Zhou, Ze Hua, and Min Tan. "Infrared Image and Visible Image Fusion Based on Wavelet Transform." Advanced Materials Research 756-759 (September 2013): 2850–56. http://dx.doi.org/10.4028/www.scientific.net/amr.756-759.2850.

Full text
Abstract:
For the same scene, fusing the infrared image and the visible image can concurrently take advantage of the information in both original images, overcoming the limitations and differences of a single-sensor image in terms of geometric, spectral, and spatial resolution and improving image quality, which helps to locate, identify, and explain physical phenomena and events. An image fusion method based on the wavelet transform is put forward, and for the frequency bands of the wavelet decomposition, the principles for selecting the high-frequency and low-frequency coefficients are discussed
APA, Harvard, Vancouver, ISO, etc. styles
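The wavelet-domain fusion rule alluded to in the abstract above is commonly implemented as: average the low-frequency (approximation) coefficients and keep the larger-magnitude high-frequency (detail) coefficients. The sketch below illustrates this on a one-level 1D Haar transform of two toy scanlines; it is a minimal illustration of the general technique, not the paper's implementation, which works on 2D images.

```python
# One-level 1D Haar transform and a standard DWT fusion rule:
# average the approximation band, take max-absolute detail coefficients.

def haar_1d(signal):
    """One-level Haar transform: (approximation, detail) coefficient lists."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def inverse_haar_1d(approx, detail):
    """Exact inverse of haar_1d: x0 = a + d, x1 = a - d."""
    out = []
    for a, d in zip(approx, detail):
        out.extend([a + d, a - d])
    return out

def fuse(sig_a, sig_b):
    a_lo, a_hi = haar_1d(sig_a)
    b_lo, b_hi = haar_1d(sig_b)
    fused_lo = [(x + y) / 2 for x, y in zip(a_lo, b_lo)]                   # average low band
    fused_hi = [x if abs(x) >= abs(y) else y for x, y in zip(a_hi, b_hi)]  # max-abs detail
    return inverse_haar_1d(fused_lo, fused_hi)

ir_row = [10, 10, 80, 80]   # toy infrared scanline: strong, flat target
vis_row = [30, 50, 40, 20]  # toy visible scanline: strong texture
print(fuse(ir_row, vis_row))  # -> [15.0, 35.0, 65.0, 45.0]
```

The fused row inherits the infrared row's coarse brightness profile while keeping the visible row's local variation, which is exactly the trade-off such fusion rules aim for.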
36

Li, Shaopeng, Decao Ma, Yao Ding, Yong Xian, and Tao Zhang. "DBSF-Net: Infrared Image Colorization Based on the Generative Adversarial Model with Dual-Branch Feature Extraction and Spatial-Frequency-Domain Discrimination." Remote Sensing 16, no. 20 (2024): 3766. http://dx.doi.org/10.3390/rs16203766.

Full text
Abstract:
Thermal infrared cameras can image stably in complex scenes such as night, rain, snow, and dense fog. Still, humans are more sensitive to visual colors, so there is an urgent need to convert infrared images into color images in areas such as assisted driving. This paper studies a colorization method for infrared images based on a generative adversarial model. The proposed dual-branch feature extraction network ensures the stability of the content and structure of the generated visible light image; the proposed discrimination strategy combining spatial and frequency domain hybrid constraints ef
APA, Harvard, Vancouver, ISO, etc. styles
37

Liu, Yaochen, Lili Dong, Yuanyuan Ji, and Wenhai Xu. "Infrared and Visible Image Fusion through Details Preservation." Sensors 19, no. 20 (2019): 4556. http://dx.doi.org/10.3390/s19204556.

Full text
Abstract:
In many practical applications, it is essential that the fused image contain high-quality details to achieve a comprehensive representation of the real scene. However, existing image fusion methods suffer from a loss of details because of error accumulation across sequential tasks. This paper proposes a novel fusion method that preserves the details of infrared and visible images by combining a new decomposition, feature extraction, and fusion scheme. For decomposition, unlike most guided-filter-based decomposition methods, the guidance image contains only the strong edges of the source image but no other
APA, Harvard, Vancouver, ISO, etc. styles
38

Jang, Hyoseon, Sangkyun Kim, Suhong Yoo, Soohee Han, and Hong-Gyoo Sohn. "Feature Matching Combining Radiometric and Geometric Characteristics of Images, Applied to Oblique- and Nadir-Looking Visible and TIR Sensors of UAV Imagery." Sensors 21, no. 13 (2021): 4587. http://dx.doi.org/10.3390/s21134587.

Full text
Abstract:
A large amount of information needs to be identified and produced during the process of promoting projects of interest. Thermal infrared (TIR) images are extensively used because they can provide information that cannot be extracted from visible images. In particular, TIR oblique images facilitate the acquisition of information of a building’s facade that is challenging to obtain from a nadir image. When a TIR oblique image and the 3D information acquired from conventional visible nadir imagery are combined, a great synergy for identifying surface information can be created. However, it is an
APA, Harvard, Vancouver, ISO, etc. styles
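A classic radiometric similarity measure behind visible/TIR patch matching of the kind discussed above is zero-mean normalized cross-correlation (NCC), which is invariant to the gain and offset differences typical between modalities. The sketch below is illustrative only; the paper's matcher combines radiometric and geometric cues and is considerably more elaborate.

```python
# Zero-mean normalized cross-correlation (NCC) of two equal-size patches.
import math

def ncc(patch_a, patch_b):
    """Returns a value in [-1, 1]; 1 means identical structure up to gain/offset."""
    ma = sum(patch_a) / len(patch_a)
    mb = sum(patch_b) / len(patch_b)
    da = [x - ma for x in patch_a]
    db = [x - mb for x in patch_b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    return num / den if den else 0.0

# Gain/offset invariance, the property that helps across sensor modalities:
a = [10, 20, 30, 40]
b = [5 + 2 * x for x in a]  # same structure, different radiometry
print(ncc(a, b))            # -> 1.0 (within floating-point error)
```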
39

Zhao, Liangjun, Yun Zhang, Linlu Dong, and Fengling Zheng. "Infrared and visible image fusion algorithm based on spatial domain and image features." PLOS ONE 17, no. 12 (2022): e0278055. http://dx.doi.org/10.1371/journal.pone.0278055.

Full text
Abstract:
Multi-scale image decomposition is crucial for image fusion, extracting prominent feature textures from infrared and visible light images to obtain clear fused images with more textures. This paper proposes a fusion method for infrared and visible light images based on the spatial domain and image features to obtain high-resolution and texture-rich images. First, an efficient hierarchical image clustering algorithm based on superpixel fast pixel clustering directly performs multi-scale decomposition of each source image in the spatial domain and obtains high-frequency, medium-frequency, and low-frequency
APA, Harvard, Vancouver, ISO, etc. styles
40

Yin, Ruyi, Bin Yang, Zuyan Huang, and Xiaozhi Zhang. "DSA-Net: Infrared and Visible Image Fusion via Dual-Stream Asymmetric Network." Sensors 23, no. 16 (2023): 7097. http://dx.doi.org/10.3390/s23167097.

Full text
Abstract:
Infrared and visible image fusion technologies are used to characterize the same scene using diverse modalities. However, most existing deep learning-based fusion methods are designed as symmetric networks, which ignore the differences between modal images and lead to source image information loss during feature extraction. In this paper, we propose a new fusion framework tailored to the different characteristics of infrared and visible images. Specifically, we design a dual-stream asymmetric network with two different feature extraction networks to extract infrared and visible feature maps, respectively
APA, Harvard, Vancouver, ISO, etc. styles
41

Huo, Xing, Yinping Deng, and Kun Shao. "Infrared and Visible Image Fusion with Significant Target Enhancement." Entropy 24, no. 11 (2022): 1633. http://dx.doi.org/10.3390/e24111633.

Full text
Abstract:
Existing fusion rules focus on retaining detailed information in the source image, but as the thermal radiation information in infrared images is mainly characterized by pixel intensity, these fusion rules are likely to result in reduced saliency of the target in the fused image. To address this problem, we propose an infrared and visible image fusion model based on significant target enhancement, aiming to inject thermal targets from infrared images into visible images to enhance target saliency while retaining important details in visible images. First, the source image is decomposed with mu
APA, Harvard, Vancouver, ISO, etc. styles
42

Yongshi, Ye, Ma Haoyu, Nima Tashi, Liu Xinting, Yuan Yuchen, and Shang Zihang. "Object Detection Based on Fusion of Visible and Infrared Images." Journal of Physics: Conference Series 2560, no. 1 (2023): 012021. http://dx.doi.org/10.1088/1742-6596/2560/1/012021.

Full text
Abstract:
In consideration of the complementary characteristics of visible light and infrared images, this paper proposes a novel method for object detection based on the fusion of these two types of images, thereby enhancing detection accuracy even under harsh environmental conditions. Specifically, we employ an improved AE network, which encodes and decodes the visible light and infrared images into a dual-scale image decomposition. By reconstructing the original images with the decoder, we highlight the details of the fused image. A Yolov5 network is then constructed based on this fused image
APA, Harvard, Vancouver, ISO, etc. styles
43

Li, Liangliang, Yan Shi, Ming Lv, et al. "Infrared and Visible Image Fusion via Sparse Representation and Guided Filtering in Laplacian Pyramid Domain." Remote Sensing 16, no. 20 (2024): 3804. http://dx.doi.org/10.3390/rs16203804.

Full text
Abstract:
Fusing infrared and visible images can fully leverage the respective advantages of each, providing a more comprehensive and richer set of information. This is applicable in various fields such as military surveillance, night navigation, and environmental monitoring. In this paper, a novel infrared and visible image fusion method based on sparse representation and guided filtering in the Laplacian pyramid (LP) domain is introduced. The source images are decomposed into low- and high-frequency bands by the LP. Sparse representation has achieved significant effectiveness
APA, Harvard, Vancouver, ISO, etc. styles
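The Laplacian pyramid (LP) named in the abstract above splits an image into a coarse low band plus a high-frequency residual, and is exactly invertible. The sketch below shows one pyramid level in 1D, using pair averaging for downsampling and sample duplication for upsampling as crude stand-ins for the usual Gaussian filtering; the paper's sparse-representation fusion of the low band is omitted here.

```python
# One-level Laplacian pyramid in 1D: low band + residual, with exact reconstruction.

def lp_decompose(signal):
    low = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    up = [v for v in low for _ in range(2)]    # nearest-neighbour upsample
    lap = [s - u for s, u in zip(signal, up)]  # high-frequency residual (Laplacian)
    return low, lap

def lp_reconstruct(low, lap):
    up = [v for v in low for _ in range(2)]
    return [u + l for u, l in zip(up, lap)]

sig = [3, 7, 2, 8]
low, lap = lp_decompose(sig)
assert lp_reconstruct(low, lap) == sig  # the pyramid is perfectly invertible
```

In an LP-domain fusion method, the two source images' `low` bands and `lap` residuals would each be fused with their own rule before a single `lp_reconstruct` produces the fused image.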
44

Jin, Qi, Sanqing Tan, Gui Zhang, et al. "Visible and Infrared Image Fusion of Forest Fire Scenes Based on Generative Adversarial Networks with Multi-Classification and Multi-Level Constraints." Forests 14, no. 10 (2023): 1952. http://dx.doi.org/10.3390/f14101952.

Full text
Abstract:
Aimed at addressing deficiencies in existing image fusion methods, this paper proposed a multi-level and multi-classification generative adversarial network (GAN)-based method (MMGAN) for fusing visible and infrared images of forest fire scenes (the surroundings of firefighters), which solves the problem that GANs tend to ignore visible contrast ratio information and detailed infrared texture information. The study was based on real-time visible and infrared image data acquired by visible and infrared binocular cameras on forest firefighters’ helmets. We improved the GAN by, on the one hand, s
APA, Harvard, Vancouver, ISO, etc. styles
45

Zhao, Yuqing, Guangyuan Fu, Hongqiao Wang, and Shaolei Zhang. "The Fusion of Unmatched Infrared and Visible Images Based on Generative Adversarial Networks." Mathematical Problems in Engineering 2020 (March 20, 2020): 1–12. http://dx.doi.org/10.1155/2020/3739040.

Full text
Abstract:
Visible images contain clear texture information and high spatial resolution but are unreliable under nighttime or ambient occlusion conditions. Infrared images can display target thermal radiation information under day, night, alternative weather, and ambient occlusion conditions. However, infrared images often lack good contour and texture information. Therefore, an increasing number of researchers are fusing visible and infrared images to obtain more information from them, which requires two completely matched images. However, it is difficult to obtain perfectly matched visible and infrared
APA, Harvard, Vancouver, ISO, etc. styles
46

Li, Xilai, Xiaosong Li, and Wuyang Liu. "CBFM: Contrast Balance Infrared and Visible Image Fusion Based on Contrast-Preserving Guided Filter." Remote Sensing 15, no. 12 (2023): 2969. http://dx.doi.org/10.3390/rs15122969.

Full text
Abstract:
Infrared (IR) and visible image fusion is an important data fusion and image processing technique that can accurately and comprehensively integrate the thermal radiation and texture details of source images. However, existing methods neglect the high-contrast fusion problem, leading to suboptimal fusion performance when thermal radiation target information in IR images is replaced by high-contrast information in visible images. To address this limitation, we propose a contrast-balanced framework for IR and visible image fusion. Specifically, a novel contrast balance strategy is proposed to pro
APA, Harvard, Vancouver, ISO, etc. styles
47

Santoyo-Garcia, Hector, Eduardo Fragoso-Navarro, Rogelio Reyes-Reyes, Clara Cruz-Ramos, and Mariko Nakano-Miyatake. "Visible Watermarking Technique Based on Human Visual System for Single Sensor Digital Cameras." Security and Communication Networks 2017 (2017): 1–18. http://dx.doi.org/10.1155/2017/7903198.

Full text
Abstract:
In this paper we propose a visible watermarking algorithm in which a visible watermark is embedded into the Bayer Colour Filter Array (CFA) domain. The Bayer CFA is the most common raw image representation for images captured by the single-sensor digital cameras found in almost all mobile devices. In the proposed scheme, the captured image is watermarked before it is compressed and stored in the storage system. This method thus enforces the rightful ownership of the watermarked image, since no version of the image exists other than the watermarked one. We also take into consideration the
APA, Harvard, Vancouver, ISO, etc. styles
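The basic operation underlying visible watermarking schemes like the one above is alpha blending of the watermark into the host pixels. This toy sketch applies a fixed blending strength to plain grayscale scanlines purely for illustration; the paper works in the Bayer CFA domain with a watermark strength adapted to the human visual system, which this sketch does not model.

```python
# Visible watermark embedding by alpha blending on a grayscale scanline.

def embed_visible_watermark(image_row, mark_row, alpha=0.25):
    """Blend a watermark into a scanline: out = (1 - alpha) * pixel + alpha * mark."""
    return [round((1 - alpha) * p + alpha * m) for p, m in zip(image_row, mark_row)]

row = embed_visible_watermark([200, 200, 200], [0, 255, 0])
print(row)  # -> [150, 214, 150]; the watermark pixel stays visibly distinct
```

HVS-adaptive schemes make `alpha` vary per region (e.g., weaker in smooth areas, stronger in textured ones) so the mark remains visible without destroying image content.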
48

Chen, Xiaoyu, Zhijie Teng, Yingqi Liu, Jun Lu, Lianfa Bai, and Jing Han. "Infrared-Visible Image Fusion Based on Semantic Guidance and Visual Perception." Entropy 24, no. 10 (2022): 1327. http://dx.doi.org/10.3390/e24101327.

Full text
Abstract:
Infrared-visible fusion has great potential in night-vision enhancement for intelligent vehicles. The fusion performance depends on fusion rules that balance target saliency and visual perception. However, most existing methods do not have explicit and effective rules, which leads to the poor contrast and saliency of the target. In this paper, we propose the SGVPGAN, an adversarial framework for high-quality infrared-visible image fusion, which consists of an infrared-visible image fusion network based on Adversarial Semantic Guidance (ASG) and Adversarial Visual Perception (AVP) modules. Spec
APA, Harvard, Vancouver, ISO, etc. styles
49

Chen, Jinfen, Bo Cheng, Xiaoping Zhang, et al. "A TIR-Visible Automatic Registration and Geometric Correction Method for SDGSAT-1 Thermal Infrared Image Based on Modified RIFT." Remote Sensing 14, no. 6 (2022): 1393. http://dx.doi.org/10.3390/rs14061393.

Full text
Abstract:
High-resolution thermal infrared (TIR) remote sensing images can more accurately retrieve land surface temperature and describe the spatial pattern of the urban thermal environment. The Thermal Infrared Spectrometer (TIS), which has high spatial resolution among current spaceborne thermal infrared sensors and global data acquisition capability, is one of the sensors carried on SDGSAT-1. It is an important complement to the existing international mainstream satellites. In order to produce standard data products rapidly and accurately, the automatic registration and geometric correction method
APA, Harvard, Vancouver, ISO, etc. styles
50

Pavez, Vicente, Gabriel Hermosilla, Francisco Pizarro, Sebastián Fingerhuth, and Daniel Yunge. "Thermal Image Generation for Robust Face Recognition." Applied Sciences 12, no. 1 (2022): 497. http://dx.doi.org/10.3390/app12010497.

Full text
Abstract:
This article shows how to create a robust thermal face recognition system based on the FaceNet architecture. We propose a method for generating thermal images to create a thermal face database with six different attributes (frown, glasses, rotation, normal, vocal, and smile) based on various deep learning models. First, we use StyleCLIP, which manipulates the latent space of the input visible image to add the desired attributes to the visible face. Second, we use the GANs N’ Roses (GNR) model, a multimodal image-to-image framework. It uses maps of style and content to generate thermal
APA, Harvard, Vancouver, ISO, etc. styles