
Journal articles on the topic 'Variational Autoencoder'

Consult the top 50 journal articles for your research on the topic 'Variational Autoencoder.'

1

Sreeteish, M. "Image De-Noising Using Convolutional Variational Autoencoders." International Journal for Research in Applied Science and Engineering Technology 10, no. 6 (2022): 4002–9. http://dx.doi.org/10.22214/ijraset.2022.44826.

Abstract:
Typically, image noise is random colour information in picture pixels that serves as an unwanted by-product of the image, obscuring the intended information. In most cases, noise is introduced into photographs during transmission or reception of the image, or while capturing a fast-moving object. To improve predictions from noisy pictures, autoencoders that denoise the input images are employed. Autoencoders are a type of unsupervised machine learning model that compresses the input and reconstructs an output very similar to the original input. The autoen
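The core mechanism this abstract describes, corrupting an input and training an autoencoder to reconstruct the clean version, can be sketched with a linear toy model. This is an illustrative NumPy sketch under assumed toy data, not the paper's convolutional architecture; all shapes, learning rates, and values are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples that lie near a 2-D subspace of R^8 (illustrative).
latent = rng.normal(size=(200, 2))
basis = rng.normal(size=(2, 8))
X = latent @ basis

# Denoising setup: corrupt the input, train the model to recover the clean signal.
X_noisy = X + 0.1 * rng.normal(size=X.shape)

# Minimal linear autoencoder: encoder W_e compresses, decoder W_d reconstructs.
W_e = 0.1 * rng.normal(size=(8, 2))
W_d = 0.1 * rng.normal(size=(2, 8))
lr = 0.01
for _ in range(2000):
    Z = X_noisy @ W_e            # encode (compress) the noisy input
    X_hat = Z @ W_d              # decode (reconstruct)
    err = (X_hat - X) / len(X)   # compare against the *clean* target
    W_d -= lr * (Z.T @ err)                  # gradient step for the decoder
    W_e -= lr * (X_noisy.T @ (err @ W_d.T))  # gradient step for the encoder

# Reconstruction error after training; should be far below the raw data variance.
mse = float(np.mean((X_noisy @ W_e @ W_d - X) ** 2))
```

The same encode-corrupt-reconstruct loop is what a convolutional denoising autoencoder performs, with the linear maps replaced by convolution layers.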
2

Liu, Junhong. "Review of variational autoencoders model." Applied and Computational Engineering 4, no. 1 (2023): 588–96. http://dx.doi.org/10.54254/2755-2721/4/2023328.

Abstract:
The variational autoencoder is a deep latent-space generative model that has become increasingly popular for image generation and anomaly detection in recent years. In this paper, we first review the development and research status of traditional variational autoencoders and their variants, summarize and compare the performance of these variational autoencoders, and then suggest a possible development direction for VAEs.
3

Abdullayeva, Fargana J. "Cloud Computing Virtual Machine Workload Prediction Method Based on Variational Autoencoder." International Journal of Systems and Software Security and Protection 12, no. 2 (2021): 33–45. http://dx.doi.org/10.4018/ijsssp.2021070103.

Abstract:
The paper proposes a method for predicting the workload of virtual machines in the cloud infrastructure. Reconstruction probabilities of variational autoencoders were used to provide the prediction. Reconstruction probability is a probability criterion that considers the variability in the distribution of variables. In the proposed approach, the values of the reconstruction probabilities of the variational autoencoder show the workload level of the virtual machines. The results of the experiments showed that variational autoencoders gave better results in predicting the workload of virtual mac
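The reconstruction probability mentioned here is, in its general form, a Monte Carlo average of the decoder log-likelihood over samples drawn from the encoder. A minimal sketch, assuming a toy pre-trained linear encoder/decoder pair; the matrices, variances, and sample values below are illustrative, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative stand-ins for a trained VAE: the encoder keeps the first two
# coordinates, the decoder embeds them back, so data on that plane reconstructs
# well and off-plane points (high "workload"/anomaly) do not.
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])   # encoder mean: z = A x
B = A.T                           # decoder mean: x_hat = B z

def log_gauss(x, mean, var):
    # Log density of N(mean, var * I) evaluated at x.
    return -0.5 * np.sum((x - mean) ** 2 / var + np.log(2 * np.pi * var))

def reconstruction_probability(x, n_samples=100, enc_std=0.1, dec_var=0.25):
    """Monte Carlo estimate of E_{z ~ q(z|x)} [ log p(x|z) ]."""
    mu_z = A @ x
    total = 0.0
    for _ in range(n_samples):
        z = mu_z + enc_std * rng.normal(size=2)  # sample from the encoder
        total += log_gauss(x, B @ z, dec_var)    # score under the decoder
    return total / n_samples

normal_x = np.array([1.0, -0.5, 0.0])     # lies on the learned plane
anomalous_x = np.array([1.0, -0.5, 3.0])  # large off-plane component
```

Because the criterion averages over encoder samples, it accounts for the variability of the latent distribution rather than scoring a single deterministic reconstruction.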
4

Sun, Ying, Lang Li, Yang Ding, Jiabao Bai, and Xiangning Xin. "Image Compression Algorithm Based On Variational Autoencoder." Journal of Physics: Conference Series 2066, no. 1 (2021): 012008. http://dx.doi.org/10.1088/1742-6596/2066/1/012008.

Abstract:
Variational Autoencoder (VAE), a kind of deep latent-space generative model, has achieved great success in recent years, especially in image generation. This paper aims to study image compression algorithms based on variational autoencoders. The experiment uses an image quality evaluation model, because interpolation-based image super-resolution is the most direct and simple way to change image resolution. In the experiment, the whole picture is first transformed by the variational autoencoder, and then the actual cod
5

Shin, Seung Yeop, and Han-joon Kim. "Extended Autoencoder for Novelty Detection with Reconstruction along Projection Pathway." Applied Sciences 10, no. 13 (2020): 4497. http://dx.doi.org/10.3390/app10134497.

Abstract:
Recently, novelty detection with reconstruction along projection pathway (RaPP) has made progress toward leveraging hidden activation values. RaPP compares the input and its autoencoder reconstruction in hidden spaces to detect novelty samples. Nevertheless, traditional autoencoders have not yet begun to fully exploit this method. In this paper, we propose a new model, the Extended Autoencoder Model, that adds an adversarial component to the autoencoder to take full advantage of RaPP. The adversarial component matches the latent variables of the reconstructed input to the latent variables of t
6

Shevchenko, Dmytro, Mykhaylo Ugryumov, and Sergii Artiukh. "MONITORING DATA AGGREGATION OF DYNAMIC SYSTEMS USING INFORMATION TECHNOLOGIES." Innovative Technologies and Scientific Solutions for Industries, no. 1 (23) (April 20, 2023): 123–31. http://dx.doi.org/10.30837/itssi.2023.23.123.

Abstract:
The subject matter of the article is models, methods and information technologies of monitoring data aggregation. The goal of the article is to determine the best deep learning model for reducing the dimensionality of dynamic systems monitoring data. The following tasks were solved: analysis of existing dimensionality reduction approaches, description of the general architecture of vanilla and variational autoencoders, development of their architecture, development of software for training and testing of autoencoders, conducting research on the performance quality of autoencoders for the probl
7

Haga, Takeshi, Hiroshi Kera, and Kazuhiko Kawamoto. "Sequential Variational Autoencoder with Adversarial Classifier for Video Disentanglement." Sensors 23, no. 5 (2023): 2515. http://dx.doi.org/10.3390/s23052515.

Abstract:
In this paper, we propose a sequential variational autoencoder for video disentanglement, which is a representation learning method that can be used to separately extract static and dynamic features from videos. Building sequential variational autoencoders with a two-stream architecture induces inductive bias for video disentanglement. However, our preliminary experiment demonstrated that the two-stream architecture is insufficient for video disentanglement because static features frequently contain dynamic features. Additionally, we found that dynamic features are not discriminative in the la
8

Linhardt, Timothy, and Ananya Sen Gupta. "Empirical analysis of latent space encodings for submerged small target acoustic backscattering data." Journal of the Acoustical Society of America 151, no. 4 (2022): A102. http://dx.doi.org/10.1121/10.0010792.

Abstract:
With an eye toward future specialized classification methods, we work to generalize the acoustic backscattering data from sonar measurements of small targets submerged in water by learning a non-invertible mapping (encoding) to a low-dimensional vector space (ℝⁿ). Finding the optimal dimensionality of this latent space is an important task. The encoding is accomplished by utilizing modality-agnostic convolutional machine learning methods that have seen success in other signal and image processing domains. We have explored the autoencoder and its variants, the sparse autoencoder, and the variationa
9

Khoshaman, Amir, Walter Vinci, Brandon Denis, Evgeny Andriyash, Hossein Sadeghi, and Mohammad H. Amin. "Quantum variational autoencoder." Quantum Science and Technology 4, no. 1 (2018): 014001. http://dx.doi.org/10.1088/2058-9565/aada1f.

10

Joo, Weonyoung, Wonsung Lee, Sungrae Park, and Il-Chul Moon. "Dirichlet Variational Autoencoder." Pattern Recognition 107 (November 2020): 107514. http://dx.doi.org/10.1016/j.patcog.2020.107514.

11

Alves de Oliveira, Vinicius, Marie Chabert, Thomas Oberlin, et al. "Reduced-Complexity End-to-End Variational Autoencoder for on Board Satellite Image Compression." Remote Sensing 13, no. 3 (2021): 447. http://dx.doi.org/10.3390/rs13030447.

Abstract:
Recently, convolutional neural networks have been successfully applied to lossy image compression. End-to-end optimized autoencoders, possibly variational, are able to dramatically outperform traditional transform coding schemes in terms of rate-distortion trade-off; however, this is at the cost of a higher computational complexity. An intensive training step on huge databases allows autoencoders to learn jointly the image representation and its probability distribution, possibly using a non-parametric density model or a hyperprior auxiliary autoencoder to eliminate the need for prior knowledg
12

Akkari, Nissrine, Fabien Casenave, Elie Hachem, and David Ryckelynck. "A Bayesian Nonlinear Reduced Order Modeling Using Variational AutoEncoders." Fluids 7, no. 10 (2022): 334. http://dx.doi.org/10.3390/fluids7100334.

Abstract:
This paper presents a new nonlinear projection based model reduction using convolutional Variational AutoEncoders (VAEs). This framework is applied on transient incompressible flows. The accuracy is obtained thanks to the expression of the velocity and pressure fields in a nonlinear manifold maximising the likelihood on pre-computed data in the offline stage. A confidence interval is obtained for each time instant thanks to the definition of the reduced dynamic coefficients as independent random variables for which the posterior probability given the offline data is known. The parameters of th
13

Kamata, Hiromichi, Yusuke Mukuta, and Tatsuya Harada. "Fully Spiking Variational Autoencoder." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (2022): 7059–67. http://dx.doi.org/10.1609/aaai.v36i6.20665.

Abstract:
Spiking neural networks (SNNs) can run on neuromorphic devices with ultra-high speed and ultra-low energy consumption because of their binary and event-driven nature. Therefore, SNNs are expected to have various applications, including as generative models running on edge devices to create high-quality images. In this study, we build a variational autoencoder (VAE) with an SNN to enable image generation. VAE is known for its stability among generative models, and its output quality has recently advanced. In vanilla VAE, the latent space is represented as a normal distribution, and floating-point calcu
14

Mujkic, Esma, Mark P. Philipsen, Thomas B. Moeslund, Martin P. Christiansen, and Ole Ravn. "Anomaly Detection for Agricultural Vehicles Using Autoencoders." Sensors 22, no. 10 (2022): 3608. http://dx.doi.org/10.3390/s22103608.

Abstract:
The safe in-field operation of autonomous agricultural vehicles requires detecting all objects that pose a risk of collision. Current vision-based algorithms for object detection and classification are unable to detect unknown classes of objects. In this paper, the problem is posed as anomaly detection instead, where convolutional autoencoders are applied to identify any objects deviating from the normal pattern. Training an autoencoder network to reconstruct normal patterns in agricultural fields makes it possible to detect unknown objects by high reconstruction error. Basic autoencoder (AE),
15

Hou, Yingzhen, Junhai Zhai, and Jiankai Chen. "Coupled adversarial variational autoencoder." Signal Processing: Image Communication 98 (October 2021): 116396. http://dx.doi.org/10.1016/j.image.2021.116396.

16

Nugroho, Herminarto, Meredita Susanty, Ade Irawan, Muhamad Koyimatu, and Ariana Yunita. "Fully Convolutional Variational Autoencoder For Feature Extraction Of Fire Detection System." Jurnal Ilmu Komputer dan Informasi 13, no. 1 (2020): 9. http://dx.doi.org/10.21609/jiki.v13i1.761.

Abstract:
This paper proposes a fully convolutional variational autoencoder (VAE) for feature extraction from a large-scale dataset of fire images. The dataset will be used to train the deep learning algorithm to detect fire and smoke. Feature extraction is used to tackle the curse of dimensionality, a common issue when training deep learning models on huge datasets. Feature extraction aims to reduce the dimension of the dataset significantly without losing too much essential information. Variational autoencoders (VAEs) are powerful generative models, which can be used for dimension reduction
17

Wu, Hanwei, and Markus Flierl. "Vector Quantization-Based Regularization for Autoencoders." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (2020): 6380–87. http://dx.doi.org/10.1609/aaai.v34i04.6108.

Abstract:
Autoencoders and their variations provide unsupervised models for learning low-dimensional representations for downstream tasks. Without proper regularization, autoencoder models are susceptible to the overfitting problem and the so-called posterior collapse phenomenon. In this paper, we introduce a quantization-based regularizer in the bottleneck stage of autoencoder models to learn meaningful latent representations. We combine both perspectives of Vector Quantized-Variational AutoEncoders (VQ-VAE) and classical denoising regularization methods of neural networks. We interpret quantizers as r
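The vector-quantization bottleneck that VQ-VAE-style regularizers build on maps each encoder output to its nearest codebook embedding. A minimal sketch with an invented three-entry codebook (illustrative values, not the authors' configuration):

```python
import numpy as np

# Toy codebook of three 2-D embedding vectors (made-up values).
codebook = np.array([[0.0, 0.0],
                     [1.0, 1.0],
                     [-1.0, 2.0]])

def quantize(z):
    """Map an encoder output z to its nearest codebook entry (the VQ bottleneck)."""
    distances = np.sum((codebook - z) ** 2, axis=1)  # squared distance to each entry
    return codebook[np.argmin(distances)]
```

In training, the non-differentiable `argmin` is typically handled with a straight-through gradient estimator plus codebook and commitment losses; the snippet shows only the forward quantization step that acts as the regularizing bottleneck.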
18

Milano, Nicola, Monica Casella, Raffaella Esposito, and Davide Marocco. "Exploring the Potential of Variational Autoencoders for Modeling Nonlinear Relationships in Psychological Data." Behavioral Sciences 14, no. 7 (2024): 527. http://dx.doi.org/10.3390/bs14070527.

Abstract:
Latent variable analysis is an important part of psychometric research. In this context, factor analysis and other related techniques have been widely applied to investigate the internal structure of psychometric tests. However, these methods perform a linear dimensionality reduction under a series of assumptions that cannot always be verified in psychological data. Predictive techniques, such as artificial neural networks, could complement and improve the exploration of latent space, overcoming the limits of traditional methods. In this study, we explore the latent space generat
19

Sheng, Xin, Linli Xu, Junliang Guo, Jingchang Liu, Ruoyu Zhao, and Yinlong Xu. "IntroVNMT: An Introspective Model for Variational Neural Machine Translation." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (2020): 8830–37. http://dx.doi.org/10.1609/aaai.v34i05.6411.

Abstract:
We propose a novel introspective model for variational neural machine translation (IntroVNMT) in this paper, inspired by the recent successful application of introspective variational autoencoder (IntroVAE) in high quality image synthesis. Different from the vanilla variational NMT model, IntroVNMT is capable of improving itself introspectively by evaluating the quality of the generated target sentences according to the high-level latent variables of the real and generated target sentences. As a consequence of introspective training, the proposed model is able to discriminate between the gener
20

Mulyani, Sri Hasta. "Enhancing Mental Health Disorders Classification using Convolutional Variational Autoencoder." International Journal of Informatics and Computation 6, no. 1 (2024): 1. http://dx.doi.org/10.35842/ijicom.v6i1.65.

Abstract:
This research investigates the application of Convolutional Variational Autoencoder (CVAE) for multi-class classification of mental health disorders. The study utilizes a diverse dataset comprising five classes: Normal, Anxiety, Depression, Loneliness, and Stress. The CVAE model effectively captures spatial dependencies and learns latent representations from the mental health disorder data. The classification results demonstrate high precision, recall, and F1 scores for all classes, indicating the model's robustness in distinguishing between different disorders accurately. The research contrib
21

Choong, Jun Jin, Xin Liu, and Tsuyoshi Murata. "Optimizing Variational Graph Autoencoder for Community Detection with Dual Optimization." Entropy 22, no. 2 (2020): 197. http://dx.doi.org/10.3390/e22020197.

Abstract:
Variational Graph Autoencoder (VGAE) has recently gained traction for learning representations on graphs. Its inception has allowed models to achieve state-of-the-art performance for challenging tasks such as link prediction, rating prediction, and node clustering. However, a fundamental flaw exists in Variational Autoencoder (VAE)-based approaches. Specifically, merely minimizing the loss of VAE increases the deviation from its primary objective. Focusing on Variational Graph Autoencoder for Community Detection (VGAECD) we found that optimizing the loss using the stochastic gradient descent o
22

Zhang, Guangzi, Xiaolin Hong, Yan Liu, Yulin Qian, and Xingquan Cai. "Video Colorization Based on Variational Autoencoder." Electronics 13, no. 12 (2024): 2412. http://dx.doi.org/10.3390/electronics13122412.

Abstract:
This paper introduces a variational autoencoder network designed for video colorization using reference images, addressing the challenge of colorizing black-and-white videos. Although recent techniques perform well in some scenarios, they often struggle with color inconsistencies and artifacts in videos that feature complex scenes and long durations. To tackle this, we propose a variational autoencoder framework that incorporates spatio-temporal information for efficient video colorization. To improve temporal consistency, we unify semantic correspondence with color propagation, allowing for s
23

Arifeen, Murshedul, Andrei Petrovski, Md Junayed Hasan, Khandaker Noman, Wasib Ul Navid, and Auwal Haruna. "Graph-Variational Convolutional Autoencoder-Based Fault Detection and Diagnosis for Photovoltaic Arrays." Machines 12, no. 12 (2024): 894. https://doi.org/10.3390/machines12120894.

Abstract:
Solar energy is a critical renewable energy source, with solar arrays or photovoltaic systems widely used to convert solar energy into electrical energy. However, solar array systems can develop faults and may exhibit poor performance. Diagnosing and resolving faults within these systems promptly is crucial to ensure reliability and efficiency in energy generation. Autoencoders and their variants have gained popularity in recent studies for detecting and diagnosing faults in solar arrays. However, traditional autoencoder models often struggle to capture the spatial and temporal relationships p
24

Zemouri, Ryad. "Semi-Supervised Adversarial Variational Autoencoder." Machine Learning and Knowledge Extraction 2, no. 3 (2020): 361–78. http://dx.doi.org/10.3390/make2030020.

Abstract:
We present a method to improve the reconstruction and generation performance of a variational autoencoder (VAE) by injecting adversarial learning. Instead of comparing the reconstructed data with the original data to calculate the reconstruction loss, we use a consistency principle for deep features. The main contributions are threefold. Firstly, our approach perfectly combines the two models, i.e., GAN and VAE, and thus improves the generation and reconstruction performance of the VAE. Secondly, the VAE training is done in two steps, which allows dissociating the constraints used for the const
25

Lim, Kart-Leong, Xudong Jiang, and Chenyu Yi. "Deep Clustering With Variational Autoencoder." IEEE Signal Processing Letters 27 (2020): 231–35. http://dx.doi.org/10.1109/lsp.2020.2965328.

26

Arsini, Lorenzo, Barbara Caccia, Andrea Ciardiello, Stefano Giagu, and Carlo Mancini Terracciano. "Nearest Neighbours Graph Variational AutoEncoder." Algorithms 16, no. 3 (2023): 143. http://dx.doi.org/10.3390/a16030143.

Abstract:
Graphs are versatile structures for the representation of many real-world data. Deep Learning on graphs is currently able to solve a wide range of problems with excellent results. However, both the generation of graphs and the handling of large graphs still remain open challenges. This work aims to introduce techniques for generating large graphs and test the approach on a complex problem such as the calculation of dose distribution in oncological radiotherapy applications. To this end, we introduced a pooling technique (ReNN-Pool) capable of sampling nodes that are spatially uniform without c
27

Kristian, Yosi, Natanael Simogiarto, Mahendra Tri Arif Sampurna, and Elizeus Hanindito. "Ensemble of multimodal deep learning autoencoder for infant cry and pain detection." F1000Research 11 (March 28, 2022): 359. http://dx.doi.org/10.12688/f1000research.73108.1.

Abstract:
Background: Babies cannot communicate their pain properly. Several pain scores have been developed, but they are subjective and show high inter-observer variability. The aim of this study was to construct models that use both facial expression and infant voice in classifying pain levels and detecting cries. Methods: The study included a total of 23 infants below 12 months of age who were treated at Dr Soetomo General Hospital. The Face, Leg, Activity, Cry and Consolability (FLACC) pain scale and recordings of the babies' cries were taken in video format. A machine-learning-based system was crea
28

Kristian, Yosi, Natanael Simogiarto, Mahendra Tri Arif Sampurna, Elizeus Hanindito, and Visuddho Visuddho. "Ensemble of multimodal deep learning autoencoder for infant cry and pain detection." F1000Research 11 (January 30, 2023): 359. http://dx.doi.org/10.12688/f1000research.73108.2.

Abstract:
Background: Babies cannot communicate their pain properly. Several pain scores have been developed, but they are subjective and show high inter-observer variability. The aim of this study was to construct models that use both facial expression and infant voice in classifying pain levels and detecting cries. Methods: The study included a total of 23 infants below 12 months of age who were treated at Dr Soetomo General Hospital. The Face, Leg, Activity, Cry and Consolability (FLACC) pain scale and recordings of the babies' cries were taken in video format. A machine-learning-based system was crea
29

Sun, Lili, Xueyan Liu, Min Zhao, and Bo Yang. "Interpretable Variational Graph Autoencoder with Noninformative Prior." Future Internet 13, no. 2 (2021): 51. http://dx.doi.org/10.3390/fi13020051.

Abstract:
Variational graph autoencoders, which can encode structural information and attribute information in a graph into low-dimensional representations, have become a powerful method for studying graph-structured data. However, most existing methods based on variational (graph) autoencoders assume that the prior of the latent variables obeys the standard normal distribution, which encourages all nodes to gather around 0. This leads to an inability to fully utilize the latent space. It therefore becomes a challenge to choose a suitable prior without incorporating additional expert knowledge. Given
30

Sengodan, Boopathi Chettiagounder, Prince Mary Stanislaus, Sivakumar Sabapathy Arumugam, et al. "Variational Autoencoders for Network Lifetime Enhancement in Wireless Sensors." Sensors 24, no. 17 (2024): 5630. http://dx.doi.org/10.3390/s24175630.

Abstract:
Wireless sensor networks (WSNs) are structured for monitoring an area with distributed sensors and built-in batteries. However, most of their battery energy is consumed during the data transmission process. In recent years, several methodologies, like routing optimization, topology control, and sleep scheduling algorithms, have been introduced to improve the energy efficiency of WSNs. This study introduces a novel method based on a deep learning approach that utilizes variational autoencoders (VAEs) to improve the energy efficiency of WSNs by compressing transmission data. The VAE approach is
31

Nedashkovskaya, Nadezhda, and Dmytro Androsov. "Generative time series model based on encoder-decoder architecture." System research and information technologies, no. 1 (April 25, 2022): 97–109. http://dx.doi.org/10.20535/srit.2308-8893.2022.1.08.

Abstract:
Encoder-decoder neural network models have found widespread use in recent years for solving various machine learning problems. In this paper, we investigate the variety of such models, including the sparse, denoising and variational autoencoders. To predict non-stationary time series, a generative model is presented and tested, which is based on a variational autoencoder, GRU recurrent networks, and uses elements of neural ordinary differential equations. Based on the constructed model, the system is implemented in the Python3 environment, the TensorFlow2 framework and the Keras library. The d
32

Kuang, Shenfen, Jie Song, Shangjiu Wang, and Huafeng Zhu. "Variational Autoencoding with Conditional Iterative Sampling for Missing Data Imputation." Mathematics 12, no. 20 (2024): 3288. http://dx.doi.org/10.3390/math12203288.

Abstract:
Variational autoencoders (VAEs) are popular for their robust nonlinear representation capabilities and have recently achieved notable advancements in the problem of missing data imputation. However, existing imputation methods often exhibit instability due to the inherent randomness in the sampling process, leading to either underestimation or overfitting, particularly when handling complex missing data types such as images. To address this challenge, we introduce a conditional iterative sampling imputation method. Initially, we employ an importance-weighted beta variational autoencoder to lea
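Iterative imputation of the kind this abstract builds on can be reduced to a simple loop: reconstruct the sample, copy the reconstruction back into only the missing entries, and repeat. A deterministic toy sketch, with a rank-one projection standing in for the trained autoencoder (all values and the direction `v` are illustrative, not the paper's importance-weighted beta-VAE):

```python
import numpy as np

# Hypothetical "learned" structure: data lies on the line spanned by v.
# Projecting onto v plays the role of the autoencoder's encode/decode pass.
v = np.array([1.0, 2.0, -1.0])
v = v / np.linalg.norm(v)

def reconstruct(x):
    return v * (v @ x)  # project x onto the learned direction

# Observed sample with the third feature missing; initialise the gap with 0.
x = np.array([1.0, 2.0, 0.0])
missing = np.array([False, False, True])

# Conditional iterative imputation: only the missing entries are updated,
# so the observed values are never overwritten.
for _ in range(30):
    x[missing] = reconstruct(x)[missing]
```

Under this toy structure the loop converges geometrically to the value consistent with the observed coordinates (here x[2] → −1), which is the behaviour the iterative conditional scheme relies on.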
33

Rákos, Olivér, Szilárd Aradi, Tamás Bécsi, and Zsolt Szalay. "Compression of Vehicle Trajectories with a Variational Autoencoder." Applied Sciences 10, no. 19 (2020): 6739. http://dx.doi.org/10.3390/app10196739.

Abstract:
The perception and prediction of surrounding vehicles' trajectories play a significant role in designing safe and optimal control strategies for connected and automated vehicles. The compression of trajectory data and the classification of drivers' strategic behavior are essential for communication in vehicular ad-hoc networks (VANETs). This paper presents a Variational Autoencoder (VAE) solution to the compression problem which, as an added benefit, also provides classification information. The input is the time series of vehicle positions along actual real-world trajectories obtained f
34

Heinze-Deml, Christina, Sebastian Sippel, Angeline G. Pendergrass, Flavio Lehner, and Nicolai Meinshausen. "Latent Linear Adjustment Autoencoder v1.0: a novel method for estimating and emulating dynamic precipitation at high resolution." Geoscientific Model Development 14, no. 8 (2021): 4977–99. http://dx.doi.org/10.5194/gmd-14-4977-2021.

Abstract:
A key challenge in climate science is to quantify the forced response in impact-relevant variables such as precipitation against the background of internal variability, both in models and observations. Dynamical adjustment techniques aim to remove unforced variability from a target variable by identifying patterns associated with circulation, thus effectively acting as a filter for dynamically induced variability. The forced contributions are interpreted as the variation that is unexplained by circulation. However, dynamical adjustment of precipitation at local scales remains challen
35

Takahashi, Hiroshi, Tomoharu Iwata, Yuki Yamanaka, Masanori Yamada, and Satoshi Yagi. "Variational Autoencoder with Implicit Optimal Priors." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5066–73. http://dx.doi.org/10.1609/aaai.v33i01.33015066.

Abstract:
The variational autoencoder (VAE) is a powerful generative model that can estimate the probability of a data point by using latent variables. In the VAE, the posterior of the latent variable given the data point is regularized by the prior of the latent variable using Kullback–Leibler (KL) divergence. Although the standard Gaussian distribution is usually used for the prior, this simple prior incurs over-regularization. As a sophisticated prior, the aggregated posterior has been introduced, which is the expectation of the posterior over the data distribution. This prior is optimal for the VAE
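The KL regularizer this abstract refers to has a closed form when both the posterior and the prior are diagonal Gaussians: KL(N(μ, diag(σ²)) ‖ N(0, I)) = ½ Σᵢ (μᵢ² + σᵢ² − log σᵢ² − 1). A small NumPy sketch of that standard formula:

```python
import numpy as np

def kl_to_standard_normal(mu, sigma):
    """Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over dimensions."""
    return 0.5 * np.sum(mu ** 2 + sigma ** 2 - np.log(sigma ** 2) - 1.0)

# A posterior matching the prior has zero KL; moving the mean away increases it.
kl_to_standard_normal(np.zeros(2), np.ones(2))              # → 0.0
kl_to_standard_normal(np.array([1.0, 0.0]), np.ones(2))     # → 0.5
```

The "over-regularization" the abstract mentions arises because this term pulls every posterior toward the same standard normal; replacing the prior (e.g. with the aggregated posterior) changes only this KL term of the objective.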
36

Ahn, Jewon, and Taesoo Kwon. "Motion Style Transfer using Variational Autoencoder." Journal of the Korea Computer Graphics Society 27, no. 5 (2021): 33–43. http://dx.doi.org/10.15701/kcgs.2021.27.5.33.

37

Zhang, Hangbin, Raymond K. Wong, and Victor W. Chu. "Hybrid Variational Autoencoder for Recommender Systems." ACM Transactions on Knowledge Discovery from Data 16, no. 2 (2022): 1–37. http://dx.doi.org/10.1145/3470659.

Abstract:
E-commerce platforms heavily rely on automatic personalized recommender systems, e.g., collaborative filtering models, to improve customer experience. Some hybrid models have been proposed recently to address the deficiency of existing models. However, their performances drop significantly when the dataset is sparse. Most of the recent works failed to fully address this shortcoming. At most, some of them only tried to alleviate the problem by considering either user side or item side content information. In this article, we propose a novel recommender model called Hybrid Variational Autoencode
38

Emm, Toby A., and Yu Zhang. "Self-Adaptive Evolutionary Info Variational Autoencoder." Computers 13, no. 8 (2024): 214. http://dx.doi.org/10.3390/computers13080214.

Abstract:
With the advent of increasingly powerful machine learning algorithms and the ability to rapidly obtain accurate aerodynamic performance data, there has been a steady rise in the use of algorithms for automated aerodynamic design optimisation. However, long training times, high-dimensional design spaces and rapid geometry alteration pose barriers to this becoming an efficient and worthwhile process. The variational autoencoder (VAE) is a probabilistic generative model capable of learning a low-dimensional representation of high-dimensional input data. Despite their impressive power, VAEs suffer
APA, Harvard, Vancouver, ISO, and other styles
39

Islam, Zubayer, Mohamed Abdel-Aty, Qing Cai, and Jinghui Yuan. "Crash data augmentation using variational autoencoder." Accident Analysis & Prevention 151 (March 2021): 105950. http://dx.doi.org/10.1016/j.aap.2020.105950.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Xu, Weidi, and Ying Tan. "Semisupervised Text Classification by Variational Autoencoder." IEEE Transactions on Neural Networks and Learning Systems 31, no. 1 (2020): 295–308. http://dx.doi.org/10.1109/tnnls.2019.2900734.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Gordiychuk, Margarita, and Yaojun Zhang. "Variational autoencoder for predicting DNA flexibility." Biophysical Journal 123, no. 3 (2024): 497a. http://dx.doi.org/10.1016/j.bpj.2023.11.3008.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Talbi, Farid, Miloud Chikr Elmezouar, Elhocine Boutellaa, and Fatiha Alim. "Vector-Quantized Variational AutoEncoder for pansharpening." International Journal of Remote Sensing 44, no. 20 (2023): 6329–49. http://dx.doi.org/10.1080/01431161.2023.2265542.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Silva, Adson, and Ricardo Farias. "AD-VAE: Adversarial Disentangling Variational Autoencoder." Sensors 25, no. 5 (2025): 1574. https://doi.org/10.3390/s25051574.

Full text
Abstract:
Face recognition (FR) is a less intrusive biometrics technology with various applications, such as security, surveillance, and access control systems. FR remains challenging, especially when there is only a single image per person in the gallery dataset and when dealing with variations like pose, illumination, and occlusion. Deep learning techniques have shown promising results in recent years using VAEs and GANs, with approaches such as patch-VAE, VAE-GAN for 3D Indoor Scene Synthesis, and hybrid VAE-GAN models. However, in Single Sample Per Person Face Recognition (SSPP FR), the challenge of lea
APA, Harvard, Vancouver, ISO, and other styles
44

Qiu, Peijie, Wenhui Zhu, Sayantan Kumar, et al. "Multimodal Variational Autoencoder: A Barycentric View." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 19 (2025): 20060–68. https://doi.org/10.1609/aaai.v39i19.34209.

Full text
Abstract:
Multiple signal modalities, such as vision and sounds, are naturally present in real-world phenomena. Recently, there has been growing interest in learning generative models, in particular the variational autoencoder (VAE), for multimodal representation learning, especially in the case of missing modalities. The primary goal of these models is to learn a modality-invariant and modality-specific representation that characterizes information across multiple modalities. Previous attempts at multimodal VAEs approach this mainly through the lens of experts, aggregating unimodal inference distribution
APA, Harvard, Vancouver, ISO, and other styles
45

Chen, Chenyu, and Jinhui Cai. "A Hybrid Cluster Variational Autoencoder Model for Monitoring the Multimode Blast Furnace System." Processes 11, no. 9 (2023): 2580. http://dx.doi.org/10.3390/pr11092580.

Full text
Abstract:
Efficient monitoring of the blast furnace system is crucial for maintaining high production efficiency and ensuring product quality. This article introduces a hybrid cluster variational autoencoder model for monitoring the blast furnace ironmaking process, which exhibits multimode behaviors. In contrast to traditional approaches, this method utilizes neural networks to learn data features and effectively handles the diverse feature types observed in different production modes. By employing a clustering process within the hidden layer of the variational autoencoder, the proposed te
APA, Harvard, Vancouver, ISO, and other styles
46

Casado-Pérez, Alejandro, Samuel Yanes, Sergio L. Toral, Manuel Perales-Esteve, and Daniel Gutiérrez-Reina. "Variational Autoencoder for the Prediction of Oil Contamination Temporal Evolution in Water Environments." Sensors 25, no. 6 (2025): 1654. https://doi.org/10.3390/s25061654.

Full text
Abstract:
The water quality monitoring of large water masses using robotic vehicles is a complex task that has developed rapidly in recent years. The main approaches rely on adaptive informative path planning for fleets of autonomous surface vehicles and machine learning methods. However, water monitoring is characterized by a highly dynamic and unknown environment, so characterizing the water quality state of a water mass becomes a challenge. This paper proposes a variational autoencoder structure, trained in a model-free manner, that aims to provide a dynamic contamination model from partial obser
APA, Harvard, Vancouver, ISO, and other styles
47

Goh, Bonggyun, and Jun-Geol Baek. "Anomaly Detection with Variational Autoencoder to Prevent System Malfunctions." Journal of the Korean Institute of Industrial Engineers 45, no. 2 (2019): 138–45. http://dx.doi.org/10.7232/jkiie.2019.45.2.138.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Wang, Shuliang, Dapeng Li, Jing Geng, Longxing Yang, and Hongyong Leng. "Learning to balance the coherence and diversity of response generation in generation-based chatbots." International Journal of Advanced Robotic Systems 17, no. 4 (2020): 172988142095300. http://dx.doi.org/10.1177/1729881420953006.

Full text
Abstract:
Generating responses with both coherence and diversity is a challenging task in generation-based chatbots, and improving both at the same time in a response generation model is more difficult still. In this article, we propose an improved method that improves the coherence and diversity of dialog generation by changing the model to use gamma sampling and adding an attention mechanism to the knowledge-guided conditional variational autoencoder. The experimental results demonstrate that our proposed method can significantly improve the coherence and diversity
APA, Harvard, Vancouver, ISO, and other styles
49

Walczyna, Tomasz, Damian Jankowski, and Zbigniew Piotrowski. "Enhancing Anomaly Detection Through Latent Space Manipulation in Autoencoders: A Comparative Analysis." Applied Sciences 15, no. 1 (2024): 286. https://doi.org/10.3390/app15010286.

Full text
Abstract:
This article explores the practical implementation of autoencoders for anomaly detection, emphasizing their latent space manipulation and applicability across various domains. This study highlights the impact of optimizing parameter configurations, lightweight architectures, and training methodologies to enhance anomaly detection performance. A comparative analysis of autoencoders, Variational Autoencoders, and their modified counterparts was conducted within a tailored experimental environment designed to simulate real-world scenarios. The results demonstrate that these models, when fine-tune
APA, Harvard, Vancouver, ISO, and other styles
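Several of the anomaly detection entries above (reconstruction-based autoencoders and their variational counterparts) share the same final decision step: score each sample by its reconstruction error and flag those above a threshold fitted on normal data. A hedged sketch of just that thresholding step, with a hypothetical `flag_anomalies` helper and made-up error values; the encoder/decoder that would produce these errors is omitted:

```python
import numpy as np

def flag_anomalies(errors, quantile=0.95):
    """Threshold step of reconstruction-based anomaly detection.
    `errors` is assumed to hold per-sample reconstruction errors,
    e.g. ||x - decoder(encoder(x))||^2; samples above the chosen
    quantile of the error distribution are flagged as anomalies."""
    threshold = np.quantile(errors, quantile)
    return errors > threshold, threshold

# Five normal-looking samples and one outlier (illustrative values).
errors = np.array([0.10, 0.12, 0.09, 0.11, 0.10, 2.50])
flags, threshold = flag_anomalies(errors)
print(flags)  # only the last sample exceeds the 95th-percentile threshold
```

In practice the threshold is fitted on held-out normal data rather than on the scored batch itself; this sketch conflates the two for brevity.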
50

Ramanaik, Chethan Krishnamurthy, Anna Willmann, Juan-Esteban Suarez Cardona, Pia Hanfeld, Nico Hoffmann, and Michael Hecht. "Ensuring Topological Data-Structure Preservation under Autoencoder Compression Due to Latent Space Regularization in Gauss–Legendre Nodes." Axioms 13, no. 8 (2024): 535. http://dx.doi.org/10.3390/axioms13080535.

Full text
Abstract:
We formulate a data-independent latent space regularization constraint for general unsupervised autoencoders. The regularization relies on sampling the autoencoder Jacobian at Legendre nodes, which are the centers of the Gauss–Legendre quadrature. Revisiting this classic allows us to prove that regularized autoencoders ensure a one-to-one re-embedding of the initial data manifold into its latent representation. Demonstrations show that previously proposed regularization strategies, such as contractive autoencoding, cause topological defects even in simple examples, as do convolutional-based (v
APA, Harvard, Vancouver, ISO, and other styles
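The Gauss–Legendre nodes referenced in the last abstract are standard quadrature points and are available directly in NumPy. The sketch below only shows how to obtain the nodes and weights, not the paper's Jacobian-sampling regularizer itself:

```python
import numpy as np

def legendre_nodes(n):
    """Gauss-Legendre quadrature nodes and weights on [-1, 1] -- the
    points at which the cited regularizer samples the autoencoder
    Jacobian. For n nodes, the rule integrates polynomials up to
    degree 2n - 1 exactly."""
    nodes, weights = np.polynomial.legendre.leggauss(n)
    return nodes, weights

nodes, weights = legendre_nodes(3)
print(nodes)    # roots of the degree-3 Legendre polynomial: -sqrt(3/5), 0, sqrt(3/5)
print(weights)  # 5/9, 8/9, 5/9; the weights sum to 2, the length of [-1, 1]
```

The appeal of these nodes for the regularization scheme is precisely this exactness property: sampling at the quadrature centers controls the Jacobian over the whole interval rather than at arbitrary points.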