
Journal articles on the topic 'Adversarial Information Fusion'


Consult the top 50 journal articles for your research on the topic 'Adversarial Information Fusion.'


1

Kott, Alexander, Rajdeep Singh, William M. McEneaney, and Wes Milks. "Hypothesis-driven information fusion in adversarial, deceptive environments." Information Fusion 12, no. 2 (2011): 131–44. http://dx.doi.org/10.1016/j.inffus.2010.09.001.

2

Wu, Zhaoli, Xuehan Wu, Yuancai Zhu, et al. "Research on Multimodal Image Fusion Target Detection Algorithm Based on Generative Adversarial Network." Wireless Communications and Mobile Computing 2022 (January 24, 2022): 1–10. http://dx.doi.org/10.1155/2022/1740909.

Abstract:
In this paper, we propose a target detection algorithm based on adversarial discriminative domain adaptation for infrared and visible image fusion using unsupervised learning methods to reduce the differences between multimodal image information. Firstly, this paper improves the fusion model based on generative adversarial network and uses the fusion algorithm based on the dual discriminator generative adversarial network to generate high-quality IR-visible fused images and then blends the IR and visible images into a ternary dataset and combines the triple angular loss function to do migratio
3

Yuan, C., C. Q. Sun, X. Y. Tang, and R. F. Liu. "FLGC-Fusion GAN: An Enhanced Fusion GAN Model by Importing Fully Learnable Group Convolution." Mathematical Problems in Engineering 2020 (October 22, 2020): 1–13. http://dx.doi.org/10.1155/2020/6384831.

Abstract:
The purpose of image fusion is to combine the source images of the same scene into a single composite image with more useful information and better visual effects. Fusion GAN has made a breakthrough in this field by proposing to use the generative adversarial network to fuse images. In some cases, considering retain infrared radiation information and gradient information at the same time, the existing fusion methods ignore the image contrast and other elements. To this end, we propose a new end-to-end network structure based on generative adversarial networks (GANs), termed as FLGC-Fusion GAN.
4

Chen, Xiaoyu, Zhijie Teng, Yingqi Liu, Jun Lu, Lianfa Bai, and Jing Han. "Infrared-Visible Image Fusion Based on Semantic Guidance and Visual Perception." Entropy 24, no. 10 (2022): 1327. http://dx.doi.org/10.3390/e24101327.

Abstract:
Infrared-visible fusion has great potential in night-vision enhancement for intelligent vehicles. The fusion performance depends on fusion rules that balance target saliency and visual perception. However, most existing methods do not have explicit and effective rules, which leads to the poor contrast and saliency of the target. In this paper, we propose the SGVPGAN, an adversarial framework for high-quality infrared-visible image fusion, which consists of an infrared-visible image fusion network based on Adversarial Semantic Guidance (ASG) and Adversarial Visual Perception (AVP) modules. Spec
5

Jia, Ruiming, Tong Li, Shengjie Liu, Jiali Cui, and Fei Yuan. "Infrared Simulation Based on Cascade Multi-Scale Information Fusion Adversarial Network." Acta Optica Sinica 40, no. 18 (2020): 1810001. http://dx.doi.org/10.3788/aos202040.1810001.

6

Song, Xuhui, Hongtao Yu, Shaomei Li, and Huansha Wang. "Robust Chinese Named Entity Recognition Based on Fusion Graph Embedding." Electronics 12, no. 3 (2023): 569. http://dx.doi.org/10.3390/electronics12030569.

Abstract:
Named entity recognition is an important basic task in the field of natural language processing. The current mainstream named entity recognition methods are mainly based on the deep neural network model. The vulnerability of the deep neural network itself leads to a significant decline in the accuracy of named entity recognition when there is adversarial text in the text. In order to improve the robustness of named entity recognition under adversarial conditions, this paper proposes a Chinese named entity recognition model based on fusion graph embedding. Firstly, the model encodes and represe
7

Xu, Dongdong, Yongcheng Wang, Shuyan Xu, Kaiguang Zhu, Ning Zhang, and Xin Zhang. "Infrared and Visible Image Fusion with a Generative Adversarial Network and a Residual Network." Applied Sciences 10, no. 2 (2020): 554. http://dx.doi.org/10.3390/app10020554.

Abstract:
Infrared and visible image fusion can obtain combined images with salient hidden objectives and abundant visible details simultaneously. In this paper, we propose a novel method for infrared and visible image fusion with a deep learning framework based on a generative adversarial network (GAN) and a residual network (ResNet). The fusion is accomplished with an adversarial game and directed by the unique loss functions. The generator with residual blocks and skip connections can extract deep features of source image pairs and generate an elementary fused image with infrared thermal radiation in
8

Tang, Wei, Yu Liu, Chao Zhang, Juan Cheng, Hu Peng, and Xun Chen. "Green Fluorescent Protein and Phase-Contrast Image Fusion via Generative Adversarial Networks." Computational and Mathematical Methods in Medicine 2019 (December 4, 2019): 1–11. http://dx.doi.org/10.1155/2019/5450373.

Abstract:
In the field of cell and molecular biology, green fluorescent protein (GFP) images provide functional information embodying the molecular distribution of biological cells while phase-contrast images maintain structural information with high resolution. Fusion of GFP and phase-contrast images is of high significance to the study of subcellular localization, protein functional analysis, and genetic expression. This paper proposes a novel algorithm to fuse these two types of biological images via generative adversarial networks (GANs) by carefully taking their own characteristics into account. Th
9

He, Gang, Jiaping Zhong, Jie Lei, Yunsong Li, and Weiying Xie. "Hyperspectral Pansharpening Based on Spectral Constrained Adversarial Autoencoder." Remote Sensing 11, no. 22 (2019): 2691. http://dx.doi.org/10.3390/rs11222691.

Abstract:
Hyperspectral (HS) imaging is conducive to better describing and understanding the subtle differences in spectral characteristics of different materials due to sufficient spectral information compared with traditional imaging systems. However, it is still challenging to obtain high resolution (HR) HS images in both the spectral and spatial domains. Different from previous methods, we first propose spectral constrained adversarial autoencoder (SCAAE) to extract deep features of HS images and combine with the panchromatic (PAN) image to competently represent the spatial information of HR HS imag
10

Jin, Zhou-xiang, and Hao Qin. "Generative Adversarial Network Based on Multi-feature Fusion Strategy for Motion Image Deblurring." 電腦學刊 (Journal of Computers) 33, no. 1 (2022): 31–41. http://dx.doi.org/10.53106/199115992022023301004.

Abstract:
<p>Deblurring of motion images is a part of the field of image restoration. The deblurring of motion images is not only difficult to estimate the motion parameters, but also contains complex factors such as noise, which makes the deblurring algorithm more difficult. Image deblurring can be divided into two categories: one is the non-blind image deblurring with known fuzzy kernel, and the other is the blind image deblurring with unknown fuzzy kernel. The traditional motion image deblurring networks ignore the non-uniformity of motion blurred images and cannot effectively recover the high
11

Ma, Xiaole, Zhihai Wang, Shaohai Hu, and Shichao Kan. "Multi-Focus Image Fusion Based on Multi-Scale Generative Adversarial Network." Entropy 24, no. 5 (2022): 582. http://dx.doi.org/10.3390/e24050582.

Abstract:
The methods based on the convolutional neural network have demonstrated its powerful information integration ability in image fusion. However, most of the existing methods based on neural networks are only applied to a part of the fusion process. In this paper, an end-to-end multi-focus image fusion method based on a multi-scale generative adversarial network (MsGAN) is proposed that makes full use of image features by a combination of multi-scale decomposition with a convolutional neural network. Extensive qualitative and quantitative experiments on the synthetic and Lytro datasets demonstrat
12

Wang, Min, Congyan Lang, Liqian Liang, Songhe Feng, Tao Wang, and Yutong Gao. "Fine-Grained Semantic Image Synthesis with Object-Attention Generative Adversarial Network." ACM Transactions on Intelligent Systems and Technology 12, no. 5 (2021): 1–18. http://dx.doi.org/10.1145/3470008.

Abstract:
Semantic image synthesis is a new rising and challenging vision problem accompanied by the recent promising advances in generative adversarial networks. The existing semantic image synthesis methods only consider the global information provided by the semantic segmentation mask, such as class label, global layout, and location, so the generative models cannot capture the rich local fine-grained information of the images (e.g., object structure, contour, and texture). To address this issue, we adopt a multi-scale feature fusion algorithm to refine the generated images by learning the fine-grain
13

Wang, Jingjing, Jinwen Ren, Hongzhen Li, et al. "DDGANSE: Dual-Discriminator GAN with a Squeeze-and-Excitation Module for Infrared and Visible Image Fusion." Photonics 9, no. 3 (2022): 150. http://dx.doi.org/10.3390/photonics9030150.

Abstract:
Infrared images can provide clear contrast information to distinguish between the target and the background under any lighting conditions. In contrast, visible images can provide rich texture details and are compatible with the human visual system. The fusion of a visible image and infrared image will thus contain both comprehensive contrast information and texture details. In this study, a novel approach for the fusion of infrared and visible images is proposed based on a dual-discriminator generative adversarial network with a squeeze-and-excitation module (DDGANSE). Our approach establishes
14

Minahil, Syeda, Jun-Hyung Kim, and Youngbae Hwang. "Patch-Wise Infrared and Visible Image Fusion Using Spatial Adaptive Weights." Applied Sciences 11, no. 19 (2021): 9255. http://dx.doi.org/10.3390/app11199255.

Abstract:
In infrared (IR) and visible image fusion, the significant information is extracted from each source image and integrated into a single image with comprehensive data. We observe that the salient regions in the infrared image contain targets of interests. Therefore, we enforce spatial adaptive weights derived from the infrared images. In this paper, a Generative Adversarial Network (GAN)-based fusion method is proposed for infrared and visible image fusion. Based on the end-to-end network structure with dual discriminators, a patch-wise discrimination is applied to reduce blurry artifact from t
15

Abdalla, Younis, M. Tariq Iqbal, and Mohamed Shehata. "Copy-Move Forgery Detection and Localization Using a Generative Adversarial Network and Convolutional Neural-Network." Information 10, no. 9 (2019): 286. http://dx.doi.org/10.3390/info10090286.

Abstract:
The problem of forged images has become a global phenomenon that is spreading mainly through social media. New technologies have provided both the means and the support for this phenomenon, but they are also enabling a targeted response to overcome it. Deep convolution learning algorithms are one such solution. These have been shown to be highly effective in dealing with image forgery derived from generative adversarial networks (GANs). In this type of algorithm, the image is altered such that it appears identical to the original image and is nearly undetectable to the unaided human eye as a f
16

Huang, Min, and Jinghan Yin. "Research on Adversarial Domain Adaptation Method and Its Application in Power Load Forecasting." Mathematics 10, no. 18 (2022): 3223. http://dx.doi.org/10.3390/math10183223.

Abstract:
Domain adaptation has been used to transfer the knowledge from the source domain to the target domain where training data is insufficient in the target domain; thus, it can overcome the data shortage problem of power load forecasting effectively. Inspired by Generative Adversarial Networks (GANs), adversarial domain adaptation transfers knowledge in adversarial learning. Existing adversarial domain adaptation faces the problems of adversarial disequilibrium and a lack of transferability quantification, which will eventually decrease the prediction accuracy. To address this issue, a novel adver
17

Fu, Yu, Xiao-Jun Wu, and Tariq Durrani. "Image fusion based on generative adversarial network consistent with perception." Information Fusion 72 (August 2021): 110–25. http://dx.doi.org/10.1016/j.inffus.2021.02.019.

18

Ma, Jiayi, Pengwei Liang, Wei Yu, et al. "Infrared and visible image fusion via detail preserving adversarial learning." Information Fusion 54 (February 2020): 85–98. http://dx.doi.org/10.1016/j.inffus.2019.07.005.

19

Nandhini Abirami, R., P. M. Durai Raj Vincent, Kathiravan Srinivasan, K. Suresh Manic, and Chuan-Yu Chang. "Multimodal Medical Image Fusion of Positron Emission Tomography and Magnetic Resonance Imaging Using Generative Adversarial Networks." Behavioural Neurology 2022 (April 14, 2022): 1–12. http://dx.doi.org/10.1155/2022/6878783.

Abstract:
Multimodal medical image fusion is a current technique applied in the applications related to medical field to combine images from the same modality or different modalities to improve the visual content of the image to perform further operations like image segmentation. Biomedical research and medical image analysis highly demand medical image fusion to perform higher level of medical analysis. Multimodal medical fusion assists medical practitioners to visualize the internal organs and tissues. Multimodal medical fusion of brain image helps to medical practitioners to simultaneously visualize
20

Zhao, Yuqing, Guangyuan Fu, Hongqiao Wang, and Shaolei Zhang. "The Fusion of Unmatched Infrared and Visible Images Based on Generative Adversarial Networks." Mathematical Problems in Engineering 2020 (March 20, 2020): 1–12. http://dx.doi.org/10.1155/2020/3739040.

Abstract:
Visible images contain clear texture information and high spatial resolution but are unreliable under nighttime or ambient occlusion conditions. Infrared images can display target thermal radiation information under day, night, alternative weather, and ambient occlusion conditions. However, infrared images often lack good contour and texture information. Therefore, an increasing number of researchers are fusing visible and infrared images to obtain more information from them, which requires two completely matched images. However, it is difficult to obtain perfectly matched visible and infrared
21

Zhao, Rui, Hengyu Li, Jingyi Liu, Huayan Pu, Shaorong Xie, and Jun Luo. "A video inpainting method for unmanned vehicle based on fusion of time series optical flow information and spatial information." International Journal of Advanced Robotic Systems 18, no. 5 (2021): 172988142110531. http://dx.doi.org/10.1177/17298814211053103.

Abstract:
In this article, the problem of video inpainting combines multiview spatial information and interframe information between video sequences. A vision system is an important way for autonomous vehicles to obtain information about the external environment. Loss or distortion of visual images caused by camera damage or pollution seriously makes an impact on the vision system ability to correctly perceive and understand the external environment. In this article, we solve the problem of image restoration by combining the optical flow information between frames in the video with the spatial informati
22

Pu, Can, Runzi Song, Radim Tylecek, Nanbo Li, and Robert Fisher. "SDF-MAN: Semi-Supervised Disparity Fusion with Multi-Scale Adversarial Networks." Remote Sensing 11, no. 5 (2019): 487. http://dx.doi.org/10.3390/rs11050487.

Abstract:
Refining raw disparity maps from different algorithms to exploit their complementary advantages is still challenging. Uncertainty estimation and complex disparity relationships among pixels limit the accuracy and robustness of existing methods and there is no standard method for fusion of different kinds of depth data. In this paper, we introduce a new method to fuse disparity maps from different sources, while incorporating supplementary information (intensity, gradient, etc.) into a refiner network to better refine raw disparity inputs. A discriminator network classifies disparities at diffe
23

Lu, Ting, Kexin Ding, Wei Fu, Shutao Li, and Anjing Guo. "Coupled adversarial learning for fusion classification of hyperspectral and LiDAR data." Information Fusion 93 (May 2023): 118–31. http://dx.doi.org/10.1016/j.inffus.2022.12.020.

24

Fu, Weiyu, and Lixia Wang. "Component-Based Software Testing Method Based on Deep Adversarial Network." Security and Communication Networks 2022 (October 12, 2022): 1–11. http://dx.doi.org/10.1155/2022/4231083.

Abstract:
With the continuous updating and application of software, the current problems in software are becoming more and more serious. Aiming at this phenomenon, the application and testing methods of componentized software based on deep adversarial networks are discussed. The experiments show that: (1) some of the software has a high fusion rate, reaching an astonishing 95% adaptability. The instability and greater potential of component-based software are solved through GAN and gray evaluation. With the evaluation system, users' distrust is dispelled. (2) According to the data in the graph and
25

Zhang, Jinsong, Haiyan Chen, and Zhiliang Wang. "Droplet Image Reconstruction Based on Generative Adversarial Network." Journal of Physics: Conference Series 2216, no. 1 (2022): 012096. http://dx.doi.org/10.1088/1742-6596/2216/1/012096.

Abstract:
In the digital microfluidic experiments, the improper adjustments of the camera focus and background illumination lead to the phenomena of low illumination and blurred edges in the droplet image, which seriously interferes with information acquisition. Removing these blurred factors is an essential pretreatment step before information extraction. In this paper, a generative adversarial network model combining multi-scale convolution and attention mechanism is proposed to reconstruct the droplet image. The feature reconstruction module in generator can reconstruct the image feature map
26

Yang, Yuanbo, Qunbo Lv, Baoyu Zhu, Xuefu Sui, Yu Zhang, and Zheng Tan. "One-Sided Unsupervised Image Dehazing Network Based on Feature Fusion and Multi-Scale Skip Connection." Applied Sciences 12, no. 23 (2022): 12366. http://dx.doi.org/10.3390/app122312366.

Abstract:
Haze and mist caused by air quality, weather, and other factors can reduce the clarity and contrast of images captured by cameras, which limits the applications of automatic driving, satellite remote sensing, traffic monitoring, etc. Therefore, the study of image dehazing is of great significance. Most existing unsupervised image-dehazing algorithms rely on a priori knowledge and simplified atmospheric scattering models, but the physical causes of haze in the real world are complex, resulting in inaccurate atmospheric scattering models that affect the dehazing effect. Unsupervised generative a
27

Zhang, Jiahuan, Keisuke Maeda, Takahiro Ogawa, and Miki Haseyama. "Regularization Meets Enhanced Multi-Stage Fusion Features: Making CNN More Robust against White-Box Adversarial Attacks." Sensors 22, no. 14 (2022): 5431. http://dx.doi.org/10.3390/s22145431.

Abstract:
Regularization has become an important method in adversarial defense. However, the existing regularization-based defense methods do not discuss which features in convolutional neural networks (CNN) are more suitable for regularization. Thus, in this paper, we propose a multi-stage feature fusion network with a feature regularization operation, which is called Enhanced Multi-Stage Feature Fusion Network (EMSF2Net). EMSF2Net mainly combines three parts: multi-stage feature enhancement (MSFE), multi-stage feature fusion (MSF2), and regularization. Specifically, MSFE aims to obtain enhanced and ex
28

Fang, Jing, Xiaole Ma, Jingjing Wang, Kai Qin, Shaohai Hu, and Yuefeng Zhao. "A Noisy SAR Image Fusion Method Based on NLM and GAN." Entropy 23, no. 4 (2021): 410. http://dx.doi.org/10.3390/e23040410.

Abstract:
The unavoidable noise often present in synthetic aperture radar (SAR) images, such as speckle noise, negatively impacts the subsequent processing of SAR images. Further, it is not easy to find an appropriate application for SAR images, given that the human visual system is sensitive to color and SAR images are gray. As a result, a noisy SAR image fusion method based on nonlocal matching and generative adversarial networks is presented in this paper. A nonlocal matching method is applied to processing source images into similar block groups in the pre-processing step. Then, adversarial networks
29

Zhang, Liping, Weisheng Li, Hefeng Huang, and Dajiang Lei. "A Pansharpening Generative Adversarial Network with Multilevel Structure Enhancement and a Multistream Fusion Architecture." Remote Sensing 13, no. 12 (2021): 2423. http://dx.doi.org/10.3390/rs13122423.

Abstract:
Deep learning has been widely used in various computer vision tasks. As a result, researchers have begun to explore the application of deep learning for pansharpening and have achieved remarkable results. However, most current pansharpening methods focus only on the mapping relationship between images and the lack overall structure enhancement, and do not fully and completely research optimization goals and fusion rules. Therefore, for these problems, we propose a pansharpening generative adversarial network with multilevel structure enhancement and a multistream fusion architecture. This meth
30

Zhu, Baoyu, Qunbo Lv, and Zheng Tan. "Adaptive Multi-Scale Fusion Blind Deblurred Generative Adversarial Network Method for Sharpening Image Data." Drones 7, no. 2 (2023): 96. http://dx.doi.org/10.3390/drones7020096.

Abstract:
Drone and aerial remote sensing images are widely used, but their imaging environment is complex and prone to image blurring. Existing CNN deblurring algorithms usually use multi-scale fusion to extract features in order to make full use of aerial remote sensing blurred image information, but images with different degrees of blurring use the same weights, leading to increasing errors in the feature fusion process layer by layer. Based on the physical properties of image blurring, this paper proposes an adaptive multi-scale fusion blind deblurred generative adversarial network (AMD-GAN), which
31

Ma, Shuai, Jianfeng Cui, Weidong Xiao, and Lijuan Liu. "Deep Learning-Based Data Augmentation and Model Fusion for Automatic Arrhythmia Identification and Classification Algorithms." Computational Intelligence and Neuroscience 2022 (August 11, 2022): 1–17. http://dx.doi.org/10.1155/2022/1577778.

Abstract:
Automated ECG-based arrhythmia detection is critical for early cardiac disease prevention and diagnosis. Recently, deep learning algorithms have been widely applied for arrhythmia detection with great success. However, the lack of labeled ECG data and low classification accuracy can have a significant impact on the overall effectiveness of a classification algorithm. In order to better apply deep learning methods to arrhythmia classification, in this study, feature extraction and classification strategy based on generative adversarial network data augmentation and model fusion are proposed to
32

Liu, Yu, Yu Shi, Fuhao Mu, Juan Cheng, and Xun Chen. "Glioma Segmentation-Oriented Multi-Modal MR Image Fusion With Adversarial Learning." IEEE/CAA Journal of Automatica Sinica 9, no. 8 (2022): 1528–31. http://dx.doi.org/10.1109/jas.2022.105770.

33

Gu, Yansong, Xinya Wang, Can Zhang, and Baiyang Li. "Advanced Driving Assistance Based on the Fusion of Infrared and Visible Images." Entropy 23, no. 2 (2021): 239. http://dx.doi.org/10.3390/e23020239.

Abstract:
Obtaining key and rich visual information under sophisticated road conditions is one of the key requirements for advanced driving assistance. In this paper, a newfangled end-to-end model is proposed for advanced driving assistance based on the fusion of infrared and visible images, termed as FusionADA. In our model, we are committed to extracting and fusing the optimal texture details and salient thermal targets from the source images. To achieve this goal, our model constitutes an adversarial framework between the generator and the discriminator. Specifically, the generator aims to generate a
34

Chen, Lei, Jun Han, and Feng Tian. "Infrared and visible image fusion using two-layer generative adversarial network." Journal of Intelligent & Fuzzy Systems 40, no. 6 (2021): 11897–913. http://dx.doi.org/10.3233/jifs-210041.

Abstract:
Infrared (IR) images can distinguish targets from their backgrounds based on difference in thermal radiation, whereas visible images can provide texture details with high spatial resolution. The fusion of the IR and visible images has many advantages and can be applied to applications such as target detection and recognition. This paper proposes a two-layer generative adversarial network (GAN) to fuse these two types of images. In the first layer, the network generate fused images using two GANs: one uses the IR image as input and the visible image as ground truth, and the other with the visib
35

Huang, Ningbo, Gang Zhou, Mengli Zhang, Meng Zhang, and Ze Yu. "Modelling the Latent Semantics of Diffusion Sources in Information Cascade Prediction." Computational Intelligence and Neuroscience 2021 (September 29, 2021): 1–12. http://dx.doi.org/10.1155/2021/7880215.

Abstract:
Predicting the information spread tendency can help products recommendation and public opinion management. The existing information cascade prediction models are devoted to extract the chronological features from diffusion sequences but treat the diffusion sources as ordinary users. Diffusion source, the first user in the information cascade, can indicate the latent topic and diffusion pattern of an information item to mine user potential common interests, which facilitates information cascade prediction. In this paper, for modelling the abundant implicit semantics of diffusion sources in info
36

Yang, Xiuzhu, Xinyue Zhang, Yi Ding, and Lin Zhang. "Indoor Activity and Vital Sign Monitoring for Moving People with Multiple Radar Data Fusion." Remote Sensing 13, no. 18 (2021): 3791. http://dx.doi.org/10.3390/rs13183791.

Abstract:
The monitoring of human activity and vital signs plays a significant role in remote health-care. Radar provides a non-contact monitoring approach without privacy and illumination concerns. However, multiple people in a narrow indoor environment bring dense multipaths for activity monitoring, and the received vital sign signals are heavily distorted with body movements. This paper proposes a framework based on Frequency Modulated Continuous Wave (FMCW) and Impulse Radio Ultra-Wideband (IR-UWB) radars to address these challenges, designing intelligent spatial-temporal information fusion for acti
37

Chen, Xianglong, Haipeng Wang, Yaohui Liang, Ying Meng, and Shifeng Wang. "A Novel Infrared and Visible Image Fusion Approach Based on Adversarial Neural Network." Sensors 22, no. 1 (2021): 304. http://dx.doi.org/10.3390/s22010304.

Abstract:
The presence of fake pictures affects the reliability of visible face images under specific circumstances. This paper presents a novel adversarial neural network designed named as the FTSGAN for infrared and visible image fusion and we utilize FTSGAN model to fuse the face image features of infrared and visible image to improve the effect of face recognition. In FTSGAN model design, the Frobenius norm (F), total variation norm (TV), and structural similarity index measure (SSIM) are employed. The F and TV are used to limit the gray level and the gradient of the image, while the SSIM is used to
38

Yin, Jian, Zhibo Zhou, Shaohua Xu, Ruiping Yang, and Kun Liu. "A Generative Adversarial Network Fused with Dual-Attention Mechanism and Its Application in Multitarget Image Fine Segmentation." Computational Intelligence and Neuroscience 2021 (December 18, 2021): 1–16. http://dx.doi.org/10.1155/2021/2464648.

Abstract:
Aiming at the problem of insignificant target morphological features, inaccurate detection and unclear boundary of small-target regions, and multitarget boundary overlap in multitarget complex image segmentation, combining the image segmentation mechanism of generative adversarial network with the feature enhancement method of nonlocal attention, a generative adversarial network fused with attention mechanism (AM-GAN) is proposed. The generative network in the model is composed of residual network and nonlocal attention module, which use the feature extraction and multiscale fusion mechanism o
APA, Harvard, Vancouver, ISO, and other styles
39

Liu, Shangwang, and Lihan Yang. "BPDGAN: A GAN-Based Unsupervised Back Project Dense Network for Multi-Modal Medical Image Fusion." Entropy 24, no. 12 (2022): 1823. http://dx.doi.org/10.3390/e24121823.

Full text
Abstract:
Single-modality medical images often cannot contain sufficient valid information to meet the information requirements of clinical diagnosis. The diagnostic efficiency is always limited by observing multiple images at the same time. Image fusion is a technique that combines functional modalities such as positron emission computed tomography (PET) and single-photon emission computed tomography (SPECT) with anatomical modalities such as computed tomography (CT) and magnetic resonance imaging (MRI) to supplement the complementary information. Meanwhile, fusing two anatomical images (like CT-MRI) i
APA, Harvard, Vancouver, ISO, and other styles
40

Dong, Yu, Yihao Liu, He Zhang, Shifeng Chen, and Yu Qiao. "FD-GAN: Generative Adversarial Networks with Fusion-Discriminator for Single Image Dehazing." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (2020): 10729–36. http://dx.doi.org/10.1609/aaai.v34i07.6701.

Full text
Abstract:
Recently, convolutional neural networks (CNNs) have achieved great improvements in single image dehazing and attained much attention in research. Most existing learning-based dehazing methods are not fully end-to-end, which still follow the traditional dehazing procedure: first estimate the medium transmission and the atmospheric light, then recover the haze-free image based on the atmospheric scattering model. However, in practice, due to lack of priors and constraints, it is hard to precisely estimate these intermediate parameters. Inaccurate estimation further degrades the performance of de
APA, Harvard, Vancouver, ISO, and other styles
41

Dutta, Anjan, and Zeynep Akata. "Semantically Tied Paired Cycle Consistency for Any-Shot Sketch-Based Image Retrieval." International Journal of Computer Vision 128, no. 10-11 (2020): 2684–703. http://dx.doi.org/10.1007/s11263-020-01350-x.

Full text
Abstract:
Low-shot sketch-based image retrieval is an emerging task in computer vision, allowing the retrieval of natural images relevant to hand-drawn sketch queries that are rarely seen during the training phase. Related prior works either require aligned sketch-image pairs that are costly to obtain or an inefficient memory fusion layer for mapping the visual information to a semantic space. In this paper, we address any-shot, i.e. zero-shot and few-shot, sketch-based image retrieval (SBIR) tasks, where we introduce the few-shot setting for SBIR. For solving these tasks, we propose a semantically ali
APA, Harvard, Vancouver, ISO, and other styles
42

Vizil’ter, Yu V., O. V. Vygolov, D. V. Komarov, and M. A. Lebedev. "Fusion of Images of Different Spectra Based on Generative Adversarial Networks." Journal of Computer and Systems Sciences International 58, no. 3 (2019): 441–53. http://dx.doi.org/10.1134/s1064230719030201.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Gao, Jianhao, Qiangqiang Yuan, Jie Li, Hai Zhang, and Xin Su. "Cloud Removal with Fusion of High Resolution Optical and SAR Images Using Generative Adversarial Networks." Remote Sensing 12, no. 1 (2020): 191. http://dx.doi.org/10.3390/rs12010191.

Full text
Abstract:
The existence of clouds is one of the main factors contributing to missing information in optical remote sensing images, restricting their further applications for Earth observation, so how to reconstruct the missing information caused by clouds is of great concern. Inspired by image-to-image translation work based on convolutional neural networks and the idea of heterogeneous information fusion, we propose a novel cloud removal method in this paper. The approach can be roughly divided into two steps: in the first step, a specially designed convolutional neural network (CNN) trans
APA, Harvard, Vancouver, ISO, and other styles
44

Chen, Zhuo, Ming Fang, Xu Chai, Feiran Fu, and Lihong Yuan. "U-GAN Model for Infrared and Visible Images Fusion." Xibei Gongye Daxue Xuebao/Journal of Northwestern Polytechnical University 38, no. 4 (2020): 904–12. http://dx.doi.org/10.1051/jnwpu/20203840904.

Full text
Abstract:
Infrared and visible image fusion is an effective way to compensate for the limitations of single-sensor imaging. The aim is to produce fused images that are suitable for human perception and conducive to subsequent application and processing. To address incomplete feature extraction, loss of detail, and the scarcity of training samples in common data sets, an end-to-end network architecture for image fusion is proposed. U-net is introduced into image fusion, and the final fusion result is obtained using a generative adversarial network. Through its special convolution struc
APA, Harvard, Vancouver, ISO, and other styles
45

Zhou, Tao, Qi Li, Huiling Lu, Xiangxiang Zhang, and Qianru Cheng. "Hybrid Multimodal Medical Image Fusion Method Based on LatLRR and ED-D2GAN." Applied Sciences 12, no. 24 (2022): 12758. http://dx.doi.org/10.3390/app122412758.

Full text
Abstract:
In order to better preserve the anatomical structure information of Computed Tomography (CT) source images and highlight the metabolic information of lesion regions in Positron Emission Tomography (PET) source images, a hybrid multimodal medical image fusion method (LatLRR-GAN) based on Latent low-rank representation (LatLRR) and the dual discriminators Generative Adversarial Network (ED-D2GAN) is proposed. Firstly, considering the denoising capability of LatLRR, source images were decomposed by LatLRR. Secondly, the ED-D2GAN model was put forward as the low-rank region fusion method, which ca
APA, Harvard, Vancouver, ISO, and other styles
46

Yin, Xiao-Xia, Lihua Yin, and Sillas Hadjiloucas. "Pattern Classification Approaches for Breast Cancer Identification via MRI: State-Of-The-Art and Vision for the Future." Applied Sciences 10, no. 20 (2020): 7201. http://dx.doi.org/10.3390/app10207201.

Full text
Abstract:
Mining algorithms for Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI) of breast tissue are discussed. The algorithms are based on recent advances in multi-dimensional signal processing and aim to advance current state-of-the-art computer-aided detection and analysis of breast tumours when these are observed at various states of development. The topics discussed include image feature extraction, information fusion using radiomics, multi-parametric computer-aided classification and diagnosis using information fusion of tensorial datasets as well as Clifford algebra based classific
APA, Harvard, Vancouver, ISO, and other styles
47

Hou, Jilei, Dazhi Zhang, Wei Wu, Jiayi Ma, and Huabing Zhou. "A Generative Adversarial Network for Infrared and Visible Image Fusion Based on Semantic Segmentation." Entropy 23, no. 3 (2021): 376. http://dx.doi.org/10.3390/e23030376.

Full text
Abstract:
This paper proposes a new generative adversarial network for infrared and visible image fusion based on semantic segmentation (SSGAN), which can consider not only the low-level features of infrared and visible images, but also the high-level semantic information. Source images can be divided into foregrounds and backgrounds by semantic masks. The generator with a dual-encoder-single-decoder framework is used to extract the feature of foregrounds and backgrounds by different encoder paths. Moreover, the discriminator’s input image is designed based on semantic segmentation, which is obtained by
APA, Harvard, Vancouver, ISO, and other styles
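The mask-based decomposition that the SSGAN abstract relies on (splitting sources into foregrounds and backgrounds via semantic masks) reduces to per-pixel blending. The sketch below, including the choice of pairing the IR foreground with the visible background for the discriminator's reference input, is an illustrative assumption rather than the paper's code:

```python
import numpy as np

def split_by_mask(img, mask):
    # Semantic mask (1 = foreground, 0 = background) partitions a source image
    fg = img * mask
    bg = img * (1.0 - mask)
    return fg, bg

def compose_discriminator_input(fused, ir, vis, mask):
    # Assumed design: the "real" reference takes its foreground from the IR
    # image and its background from the visible image, so the discriminator
    # judges each semantic region against the modality that dominates it.
    real = ir * mask + vis * (1.0 - mask)
    fake = fused
    return real, fake
```

Under this design, the generator is pushed to keep thermal-salient foreground content while preserving visible-spectrum background texture.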
48

de Villiers, James G., and Rensu P. Theart. "Predicting mitochondrial fission, fusion and depolarisation event locations from a single z-stack." PLOS ONE 18, no. 3 (2023): e0271151. http://dx.doi.org/10.1371/journal.pone.0271151.

Full text
Abstract:
This paper documents the development of a novel method to predict the occurrence and exact locations of mitochondrial fission, fusion and depolarisation events in three dimensions. This novel implementation of neural networks to predict these events using information encoded only in the morphology of the mitochondria eliminates the need for time-lapse sequences of cells. The ability to predict these morphological mitochondrial events from a single image can not only democratise research but also revolutionise drug trials. The occurrence and location of these events were successfully predicted
APA, Harvard, Vancouver, ISO, and other styles
49

Wang, Yang. "Survey on Deep Multi-modal Data Analytics: Collaboration, Rivalry, and Fusion." ACM Transactions on Multimedia Computing, Communications, and Applications 17, no. 1s (2021): 1–25. http://dx.doi.org/10.1145/3408317.

Full text
Abstract:
With the development of web technology, multi-modal or multi-view data has surged as a major stream of big data, where each modality/view encodes an individual property of data objects. Often, different modalities are complementary to each other. This fact has motivated much research attention on fusing multi-modal feature spaces to comprehensively characterize the data objects. Most existing state-of-the-art methods focus on how to fuse the energy or information from multi-modal spaces to deliver performance superior to their single-modal counterparts. Recently, deep neural networks
APA, Harvard, Vancouver, ISO, and other styles
50

Zhang, Junjie, Zhouyin Cai, Fansheng Chen, and Dan Zeng. "Hyperspectral Image Denoising via Adversarial Learning." Remote Sensing 14, no. 8 (2022): 1790. http://dx.doi.org/10.3390/rs14081790.

Full text
Abstract:
Due to sensor instability and atmospheric interference, hyperspectral images (HSIs) often suffer from different kinds of noise which degrade the performance of downstream tasks. Therefore, HSI denoising has become an essential part of HSI preprocessing. Traditional methods tend to tackle one specific type of noise and remove it iteratively, resulting in drawbacks including inefficiency when dealing with mixed noise. Most recently, deep neural network-based models, especially generative adversarial networks, have demonstrated promising performance in generic image denoising. However, in contras
APA, Harvard, Vancouver, ISO, and other styles