Journal articles on the topic 'Image-to-image translation'

Consult the top 50 journal articles for your research on the topic 'Image-to-image translation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1. Xu, Shuzhen, Qing Zhu, and Jin Wang. "Generative image completion with image-to-image translation." Neural Computing and Applications 32, no. 11 (2019): 7333–45. http://dx.doi.org/10.1007/s00521-019-04253-2.

2. Xu, Shuzhen, Qing Zhu, and Jin Wang. "Correction to: Generative image completion with image-to-image translation." Neural Computing and Applications 32, no. 23 (2020): 17809. http://dx.doi.org/10.1007/s00521-020-05213-x.

3. Cho, Younggun, Hyesu Jang, Ramavtar Malav, Gaurav Pandey, and Ayoung Kim. "Underwater Image Dehazing via Unpaired Image-to-image Translation." International Journal of Control, Automation and Systems 18, no. 3 (2020): 605–14. http://dx.doi.org/10.1007/s12555-019-0689-x.

4. Yin, Xu, Yan Li, and Byeong-Seok Shin. "DAGAN: A Domain-Aware Method for Image-to-Image Translations." Complexity 2020 (March 28, 2020): 1–15. http://dx.doi.org/10.1155/2020/9341907.

Abstract: The image-to-image translation method aims to learn inter-domain mappings from paired/unpaired data. Although this technique has been widely used for visual prediction tasks, such as classification and image segmentation, and has achieved great results, we still fail to perform flexible translations when attempting to learn different mappings, especially for images containing multiple instances. To tackle this problem, we propose a generative framework, DAGAN (Domain-Aware Generative Adversarial Network), that enables domains to learn diverse mapping relationships. We assumed that an image is composed…

5. Kinakh, Vitaliy, Yury Belousov, Guillaume Quétant, et al. "Hubble Meets Webb: Image-to-Image Translation in Astronomy." Sensors 24, no. 4 (2024): 1151. http://dx.doi.org/10.3390/s24041151.

Abstract: This work explores the generation of James Webb Space Telescope (JWST) imagery via image-to-image translation from the available Hubble Space Telescope (HST) data. Comparative analysis encompasses the Pix2Pix, CycleGAN, TURBO, and DDPM-based Palette methodologies, assessing the criticality of image registration in astronomy. While the focus of this study is not on the scientific evaluation of model fairness, we note that the techniques employed may bear some limitations, and the translated images could include elements that are not present in actual astronomical phenomena. To mitigate this, unc…

6. Lin, Che-Tsung, Yen-Yi Wu, Po-Hao Hsu, and Shang-Hong Lai. "Multimodal Structure-Consistent Image-to-Image Translation." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (2020): 11490–98. http://dx.doi.org/10.1609/aaai.v34i07.6814.

Abstract: Unpaired image-to-image translation has proven quite effective in boosting a CNN-based object detector for a different domain by means of data augmentation that can well preserve the image-objects in the translated images. Recently, multimodal GAN (generative adversarial network) models have been proposed and were expected to further boost detector accuracy by generating a diverse collection of images in the target domain, given only a single/labelled image in the source domain. However, images generated by multimodal GANs would achieve even worse detection accuracy than the ones by a unimodal…

7. Chen, Lei, Le Wu, Zhenzhen Hu, and Meng Wang. "Quality-Aware Unpaired Image-to-Image Translation." IEEE Transactions on Multimedia 21, no. 10 (2019): 2664–74. http://dx.doi.org/10.1109/tmm.2019.2907052.

8. Hoyez, Henri, Cédric Schockaert, Jason Rambach, Bruno Mirbach, and Didier Stricker. "Unsupervised Image-to-Image Translation: A Review." Sensors 22, no. 21 (2022): 8540. http://dx.doi.org/10.3390/s22218540.

Abstract: Supervised image-to-image translation has been proven to generate realistic images with sharp details and to have good quantitative performance. Such methods are trained on a paired dataset, where an image from the source domain already has a corresponding translated image in the target domain. However, this paired-dataset requirement imposes a huge practical constraint: it requires domain knowledge and is even impossible to satisfy in certain cases. Due to these problems, unsupervised image-to-image translation has been proposed, which does not require domain expertise and can take advantage of a…

9. Yin, Wenbin, Jun Yu, and Zhiyi Hu. "Generating Sea Surface Object Image Using Image-to-Image Translation." International Journal of Advanced Network, Monitoring and Controls 6, no. 2 (2021): 48–55. http://dx.doi.org/10.21307/ijanmc-2021-016.

10. Zhao, Xi, Haizheng Yu, and Hong Bian. "Image to Image Translation Based on Differential Image Pix2Pix Model." Computers, Materials & Continua 77, no. 1 (2023): 181–98. http://dx.doi.org/10.32604/cmc.2023.041479.

11. Zhang, Tongtao, Aritra Chowdhury, Nimit Dhulekar, et al. "From Image to Translation." ACM Transactions on Asian and Low-Resource Language Information Processing 15, no. 4 (2016): 1–16. http://dx.doi.org/10.1145/2857052.

12. Fu, Yuanbin, Jiayi Ma, and Xiaojie Guo. "Unsupervised Exemplar-Domain Aware Image-to-Image Translation." Entropy 23, no. 5 (2021): 565. http://dx.doi.org/10.3390/e23050565.

Abstract: Image-to-image translation is used to convert an image of a certain style to another of the target style with the original content preserved. A desired translator should be capable of generating diverse results in a controllable many-to-many fashion. To this end, we design a novel deep translator, namely the exemplar-domain aware image-to-image translator (EDIT for short). From a logical perspective, the translator needs to perform two main functions, i.e., feature extraction and style transfer. With consideration of logical network partition, the generator of our EDIT comprises a part of block…

13. Colleoni, Emanuele, and Danail Stoyanov. "Robotic Instrument Segmentation With Image-to-Image Translation." IEEE Robotics and Automation Letters 6, no. 2 (2021): 935–42. http://dx.doi.org/10.1109/lra.2021.3056354.

14. Ramya, S., S. Anchana, A. M. Bavidhraa, and R. Devanand. "Image to Image Translation using Deep Learning Techniques." International Journal of Computer Applications 175, no. 22 (2020): 40–42. http://dx.doi.org/10.5120/ijca2020920745.

15. Wang, Hsiang-Ying, Hsin-Chun Lin, Chih-Hsien Hsia, Natnuntnita Siriphockpirom, Hsien-I. Lin, and Yung-Yao Chen. "Image-to-image Translation via Contour-consistency Networks." Sensors and Materials 34, no. 2 (2022): 515. http://dx.doi.org/10.18494/sam3493.

16. Shao, Mingwen, Youcai Zhang, Huan Liu, Chao Wang, Le Li, and Xun Shao. "DMDIT: Diverse multi-domain image-to-image translation." Knowledge-Based Systems 229 (October 2021): 107311. http://dx.doi.org/10.1016/j.knosys.2021.107311.

17. Li, Yu, Sheng Tang, Rui Zhang, Yongdong Zhang, Jintao Li, and Shuicheng Yan. "Asymmetric GAN for Unpaired Image-to-Image Translation." IEEE Transactions on Image Processing 28, no. 12 (2019): 5881–96. http://dx.doi.org/10.1109/tip.2019.2922854.

18. Zareapoor, Masoumeh, and Jie Yang. "Equivariant Adversarial Network for Image-to-image Translation." ACM Transactions on Multimedia Computing, Communications, and Applications 17, no. 2s (2021): 1–14. http://dx.doi.org/10.1145/3458280.

Abstract: Image-to-image translation aims to learn a mapping from a source domain to a target domain. However, three main challenges are associated with this problem and need to be dealt with: a lack of paired datasets, multimodality, and diversity. Convolutional neural networks (CNNs), despite their great performance in many computer vision tasks, fail to detect the hierarchy of spatial relationships between different parts of an object and thus do not form the ideal representative model we look for. This article presents a new variation of generative models that aims to…

19. Xiong, Feng, Qianqian Wang, and Quanxue Gao. "Consistent Embedded GAN for Image-to-Image Translation." IEEE Access 7 (2019): 126651–61. http://dx.doi.org/10.1109/access.2019.2939654.

20. Varghese, Subin, and Vedhus Hoskere. "Unpaired image-to-image translation of structural damage." Advanced Engineering Informatics 56 (April 2023): 101940. http://dx.doi.org/10.1016/j.aei.2023.101940.

21. Lee, Hanbit, Jinseok Seol, Sang-goo Lee, Jaehui Park, and Junho Shim. "Contrastive learning for unsupervised image-to-image translation." Applied Soft Computing 151 (January 2024): 111170. http://dx.doi.org/10.1016/j.asoc.2023.111170.

22. Alotaibi, Aziz. "Deep Generative Adversarial Networks for Image-to-Image Translation: A Review." Symmetry 12, no. 10 (2020): 1705. http://dx.doi.org/10.3390/sym12101705.

Abstract: Many image processing, computer graphics, and computer vision problems can be treated as image-to-image translation tasks. Such translation entails learning to map one visual representation of a given input to another representation. Image-to-image translation with generative adversarial networks (GANs) has been intensively studied and applied to various tasks, such as multimodal image-to-image translation, super-resolution translation, object transfiguration-related translation, etc. However, image-to-image translation techniques suffer from some problems, such as mode collapse, instability…

23. Cho, Younggun, Ramavtar Malav, Gaurav Pandey, and Ayoung Kim. "DehazeGAN: Underwater Haze Image Restoration using Unpaired Image-to-image Translation." IFAC-PapersOnLine 52, no. 21 (2019): 82–85. http://dx.doi.org/10.1016/j.ifacol.2019.12.287.

24. Sun, Boyang, Yupeng Mei, Ni Yan, and Yingyi Chen. "UMGAN: Underwater Image Enhancement Network for Unpaired Image-to-Image Translation." Journal of Marine Science and Engineering 11, no. 2 (2023): 447. http://dx.doi.org/10.3390/jmse11020447.

Abstract: Due to light absorption and scattering, underwater images suffer from low contrast, color distortion, blurred details, and uneven illumination, which affect underwater vision tasks and research. Therefore, underwater image enhancement is of great significance in vision applications. In contrast to existing methods designed for specific underwater environments or reliant on paired datasets, this study proposes an underwater multiscene generative adversarial network (UMGAN) to enhance underwater images. The network implements unpaired image-to-image translation between the underwater turbid domain and the…

25. Lu, Jiahao, Johan Öfverstedt, Joakim Lindblad, and Nataša Sladoje. "Is image-to-image translation the panacea for multimodal image registration? A comparative study." PLOS ONE 17, no. 11 (2022): e0276196. http://dx.doi.org/10.1371/journal.pone.0276196.

Abstract: Despite current advancements in the field of biomedical image processing, propelled by the deep learning revolution, multimodal image registration, due to its several challenges, is still often performed manually by specialists. The recent success of image-to-image (I2I) translation in computer vision applications and its growing use in biomedical areas provide a tempting possibility of transforming the multimodal registration problem into a potentially easier monomodal one. We conduct an empirical study of the applicability of modern I2I translation methods for the task of rigid registration…

26. Islam, Naeem Ul, Sungmin Lee, and Jaebyung Park. "Accurate and Consistent Image-to-Image Conditional Adversarial Network." Electronics 9, no. 3 (2020): 395. http://dx.doi.org/10.3390/electronics9030395.

Abstract: Image-to-image translation based on deep learning has attracted interest in the robotics and vision community because of its potential impact on terrain analysis and image representation, interpretation, modification, and enhancement. Currently, the most successful approach for generating a translated image is a conditional generative adversarial network (cGAN) for training an autoencoder with skip connections. Despite its impressive performance, it has low accuracy and a lack of consistency; further, its training is imbalanced. This paper proposes a balanced training strategy for image-to-image…

27. Huang, Ben, Fei Kang, Xinyu Li, and Sisi Zhu. "Underwater dam crack image generation based on unsupervised image-to-image translation." Automation in Construction 163 (July 2024): 105430. http://dx.doi.org/10.1016/j.autcon.2024.105430.

28. Lin, Eason. "Comparative Analysis of Pix2Pix and CycleGAN for Image-to-Image Translation." Highlights in Science, Engineering and Technology 39 (April 1, 2023): 915–25. http://dx.doi.org/10.54097/hset.v39i.6676.

Abstract: Image-to-image translation technology is nowadays a prevailing research direction in the computer vision domain; it aims to translate the styles and features of images from one image domain to another. With the rapid development of convolutional neural networks, especially generative adversarial network technology, breakthroughs have been made in the performance of image translation, which has been widely used in many fields, such as synthesizing photos from labels, reconstructing objects from line drawings, and coloring pictures. The image-to-image translation problem is essentially…

29. Wadsworth, Emma, Advait Mahajan, Raksha Prasad, and Rajesh Menon. "Deep learning for thermal-RGB image-to-image translation." Infrared Physics & Technology 141 (September 2024): 105442. http://dx.doi.org/10.1016/j.infrared.2024.105442.

30. Tu, Hangyao, Zheng Wang, and Yanwei Zhao. "Unpaired Image-to-Image Translation with Diffusion Adversarial Network." Mathematics 12, no. 20 (2024): 3178. http://dx.doi.org/10.3390/math12203178.

Abstract: Unpaired image translation with feature-level constraints presents significant challenges, including unstable network training and low diversity in generated results. This limitation is typically attributed to the following situations: 1. the generated images are overly simplistic, which fails to stimulate the network's capacity for generating diverse and imaginative outputs; 2. the images produced are distorted, a direct consequence of unstable training conditions. To address this limitation, the unpaired image-to-image translation with diffusion adversarial network (UNDAN) is proposed. Specifically,…

31. Ginger, Yiftach, Dov Danon, Hadar Averbuch-Elor, and Daniel Cohen-Or. "Implicit pairs for boosting unpaired image-to-image translation." Visual Informatics 4, no. 4 (2020): 50–58. http://dx.doi.org/10.1016/j.visinf.2020.10.001.

32. Lee, Hsin-Ying, Hung-Yu Tseng, Qi Mao, et al. "DRIT++: Diverse Image-to-Image Translation via Disentangled Representations." International Journal of Computer Vision 128, no. 10-11 (2020): 2402–17. http://dx.doi.org/10.1007/s11263-019-01284-z.

33. Sung, Thai Leang, and Hyo Jong Lee. "Image-to-Image Translation Using Identical-Pair Adversarial Networks." Applied Sciences 9, no. 13 (2019): 2668. http://dx.doi.org/10.3390/app9132668.

Abstract: We propose Identical-pair Adversarial Networks (iPANs) to solve image-to-image translation problems, such as aerial-to-map, edge-to-photo, de-raining, and night-to-daytime. Our iPANs rely mainly on the effectiveness of the adversarial loss function and its network architectures. Our iPANs consist of two main networks: an image transformation network T and a discriminative network D. We use U-Net for the transformation network T and, for network D, a perceptual similarity network with two streams of VGG16 that share the same weights. Our proposed adversarial losses play a minimax game against each…

34. Song, Seokbeom, Suhyeon Lee, Hongje Seong, Kyoungwon Min, and Euntai Kim. "SHUNIT: Style Harmonization for Unpaired Image-to-Image Translation." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 2 (2023): 2292–302. http://dx.doi.org/10.1609/aaai.v37i2.25324.

Abstract: We propose a novel solution for unpaired image-to-image (I2I) translation. To translate complex images with a wide range of objects to a different domain, recent approaches often use object annotations to perform per-class source-to-target style mapping. However, there remains a point for us to exploit in I2I: an object in each class consists of multiple components, and all the sub-object components have different characteristics. For example, a car in the CAR class consists of a car body, tires, windows, head and tail lamps, etc., and they should be handled separately for realistic I2I…

35. Qiao, Shishi, Ruiping Wang, Shiguang Shan, and Xilin Chen. "Hierarchical image-to-image translation with nested distributions modeling." Pattern Recognition 146 (February 2024): 110058. http://dx.doi.org/10.1016/j.patcog.2023.110058.

36. Wang, Jianbo, Haozhi Huang, Li Shen, Xuan Wang, and Toshihiko Yamasaki. "Hierarchical Detailed Intermediate Supervision for Image-to-Image Translation." IEICE Transactions on Information and Systems E106.D, no. 12 (2023): 2085–96. http://dx.doi.org/10.1587/transinf.2023edp7025.

37. Li, Jiguo, Xinfeng Zhang, Chuanmin Jia, et al. "Direct Speech-to-Image Translation." IEEE Journal of Selected Topics in Signal Processing 14, no. 3 (2020): 517–29. http://dx.doi.org/10.1109/jstsp.2020.2987417.

38. Liu, Yahui, Yajing Chen, Linchao Bao, Nicu Sebe, Bruno Lepri, and Marco De Nadai. "ISF-GAN: An Implicit Style Function for High Resolution Image-to-Image Translation." IEEE Transactions on Multimedia 25 (September 1, 2023): 3343–53. https://doi.org/10.1109/TMM.2022.3159115.

Abstract: Recently, there has been an increasing interest in image editing methods that employ pre-trained unconditional image generators (e.g., StyleGAN). However, applying these methods to translate images to multiple visual domains remains challenging. Existing works often do not preserve the domain-invariant part of the image (e.g., the identity in human face translations), or they do not usually handle multiple domains or allow for multi-modal translations. This work proposes an implicit style function (ISF) to straightforwardly achieve multi-modal and multi-domain image-to-image translation from pre-trained…

39. Zhang, Zhuo, Guangyuan Fu, Fuqiang Di, Changlong Li, and Jia Liu. "Generative Reversible Data Hiding by Image-to-Image Translation via GANs." Security and Communication Networks 2019 (September 11, 2019): 1–10. http://dx.doi.org/10.1155/2019/4932782.

Abstract: The traditional reversible data hiding technique is based on cover image modification, which inevitably leaves some traces of rewriting that can be more easily analyzed and attacked by the warder. Inspired by cover-synthesis steganography based on generative adversarial networks, this paper proposes a novel generative reversible data hiding (GRDH) scheme based on image translation. First, an image generator is used to obtain a realistic image, which is used as an input to the image-to-image translation model with CycleGAN. After image translation, a stego image with different semantic information…

40. Lee, Hong-Yu, Yung-Hui Li, Ting-Hsuan Lee, and Muhammad Saqlain Aslam. "Progressively Unsupervised Generative Attentional Networks with Adaptive Layer-Instance Normalization for Image-to-Image Translation." Sensors 23, no. 15 (2023): 6858. http://dx.doi.org/10.3390/s23156858.

Abstract: Unsupervised image-to-image translation has received considerable attention due to the recent remarkable advancements in generative adversarial networks (GANs). In image-to-image translation, state-of-the-art methods use unpaired image data to learn mappings between the source and target domains. However, despite their promising results, existing approaches often fail in challenging conditions, particularly when images have various target instances and a translation task involves significant transitions in shape and visual artifacts when translating low-level information rather than high-level…

41. Hang, Yifei. "Image-to-pixel-art Translation Based on CycleGAN." Theoretical and Natural Science 83, no. 1 (2025): 110–17. https://doi.org/10.54254/2753-8818/2025.20028.

Abstract: Image style transfer has gained significant attention in the computer vision field in recent years, especially with the emergence of generative models. While numerous style transfer tasks have been handled by various models, image-to-pixel-art translation has not been extensively explored; it is seemingly trivial yet requires delicacy in practice. To this end, this paper introduces the Pixel-Landscape-CycleGAN (PL-CycleGAN), a CycleGAN model that addresses the translation from, but not limited to, real-world landscape images to pixel art. The model is quantitatively evaluated using F…

42. Singh, Arvinder, Ninad Bhase, Manav Jain, and Tushar Ghorpade. "Machine Translation Systems for English Captions to Hindi Language Using Deep Learning." ITM Web of Conferences 44 (2022): 03004. http://dx.doi.org/10.1051/itmconf/20224403004.

Abstract: Machine translation is the process of translating text from one language to another, which helps to reduce the communication gap among people from different cultural backgrounds. The task performed by a machine translation system is to automatically translate between pairs of different natural languages, where the neural machine translation system stands out because it provides fluent translation along with reasonable translation accuracy. The convolutional neural network encoder is used to find patterns in the images and encode them into a vector that is passed to the Long Short-Term Memory…

43. Botti, Filippo, Tomaso Fontanini, Massimo Bertozzi, and Andrea Prati. "Masked Style Transfer for Source-Coherent Image-to-Image Translation." Applied Sciences 14, no. 17 (2024): 7876. http://dx.doi.org/10.3390/app14177876.

Abstract: The goal of image-to-image translation (I2I) is to translate images from one domain to another while maintaining the content representations. A popular method for I2I translation involves the use of a reference image to guide the transformation process. However, most architectures fail to maintain the input's main characteristics and produce images that are too similar to the reference during style transfer. In order to avoid this problem, we propose a novel architecture that is able to perform source-coherent translation between multiple domains. Our goal is to preserve the input details during…

44. Mueller, M. S., T. Sattler, M. Pollefeys, and B. Jutzi. "Image-to-Image Translation for Enhanced Feature Matching, Image Retrieval and Visual Localization." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences IV-2/W7 (September 16, 2019): 111–19. http://dx.doi.org/10.5194/isprs-annals-iv-2-w7-111-2019.

Abstract: The performance of machine learning and deep learning algorithms for image analysis depends significantly on the quantity and quality of the training data. The generation of annotated training data is often costly, time-consuming and laborious. Data augmentation is a powerful option to overcome these drawbacks. Therefore, we augment training data by rendering images with arbitrary poses from 3D models to increase the quantity of training images. These training images usually show artifacts and are of limited use for advanced image analysis. There…

45. Yoo, Jaechang, Heesong Eom, and Yong Suk Choi. "Image-To-Image Translation Using a Cross-Domain Auto-Encoder and Decoder." Applied Sciences 9, no. 22 (2019): 4780. http://dx.doi.org/10.3390/app9224780.

Abstract: Recently, several studies have focused on image-to-image translation. However, the quality of the translation results is lacking in certain respects. We propose a new image-to-image translation method to minimize such shortcomings using an auto-encoder and an auto-decoder. This method includes pre-training two auto-encoder and decoder pairs for each source and target image domain, cross-connecting the two pairs, and adding a feature mapping layer. Our method is quite simple and straightforward to adopt but very effective in practice, and we experimentally demonstrated that our method can significantly…

46. Lin, Ye, Keren Fu, Shenggui Ling, and Peng Cheng. "Unsupervised many-to-many image-to-image translation across multiple domains." IET Image Processing 15, no. 11 (2021): 2412–23. http://dx.doi.org/10.1049/ipr2.12227.

47. Gao, Fei, Xingxin Xu, Jun Yu, Meimei Shang, Xiang Li, and Dacheng Tao. "Complementary, Heterogeneous and Adversarial Networks for Image-to-Image Translation." IEEE Transactions on Image Processing 30 (2021): 3487–98. http://dx.doi.org/10.1109/tip.2021.3061286.

48. Saxena, Divya, Tarun Kulshrestha, Jiannong Cao, and Shing-Chi Cheung. "Multi-Constraint Adversarial Networks for Unsupervised Image-to-Image Translation." IEEE Transactions on Image Processing 31 (2022): 1601–12. http://dx.doi.org/10.1109/tip.2022.3144886.

49. Tang, Hao, Hong Liu, and Nicu Sebe. "Unified Generative Adversarial Networks for Controllable Image-to-Image Translation." IEEE Transactions on Image Processing 29 (2020): 8916–29. http://dx.doi.org/10.1109/tip.2020.3021789.

50. Sub-r-pa, Chayanon, and Rung-Ching Chen. "Knowledge Distillation Generative Adversarial Network for Image-to-Image Translation." Journal of Advances in Information Technology 15, no. 8 (2024): 896–902. http://dx.doi.org/10.12720/jait.15.8.896-902.
