Academic literature on the topic 'Autoencoders'

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Autoencoders.'


Journal articles on the topic "Autoencoders"

1. Alfayez, Sarah, Ouiem Bchir, and Mohamed Maher Ben Ismail. "Dynamic Depth Learning in Stacked AutoEncoders." Applied Sciences 13, no. 19 (2023): 10994. http://dx.doi.org/10.3390/app131910994.

Abstract:
The effectiveness of deep learning models depends on their architecture and topology, so it is essential to determine the optimal depth of the network. In this paper, we propose a novel approach, called Dynamic Depth for Stacked AutoEncoders (DDSAE), that learns the optimal depth of a stacked AutoEncoder in an unsupervised manner while training the network model. Specifically, we propose a novel objective function, alongside the AutoEncoder's loss function, to optimize the network depth: optimizing this objective determines the layers' relevance weights. Additionally, we propose an algorithm that iteratively prunes the irrelevant layers based on the learned relevance weights. The performance of DDSAE was assessed using benchmark and real datasets.
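A toy illustration of the pruning step the abstract describes: given relevance weights over the stacked layers (mocked here as fixed numbers rather than learned jointly with the reconstruction loss, as DDSAE actually does), layers whose weight falls below a threshold are dropped and the remaining weights renormalised.

```python
import numpy as np

# Mock relevance weights for five stacked encoder layers (illustrative only;
# the real method learns these while training the autoencoder).
layers = ["enc1", "enc2", "enc3", "enc4", "enc5"]
relevance = np.array([0.30, 0.25, 0.02, 0.28, 0.15])   # sums to 1

threshold = 0.05
keep = relevance >= threshold
layers = [name for name, k in zip(layers, keep) if k]  # drop irrelevant layers
relevance = relevance[keep] / relevance[keep].sum()    # renormalise the rest

print(layers)
print(relevance.round(3))
```

In the paper this prune-and-renormalise step is applied iteratively during training; here a single pass removes the one layer (enc3) whose weight is below the threshold.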
2. Sreeteish, M. "Image De-Noising Using Convolutional Variational Autoencoders." International Journal for Research in Applied Science and Engineering Technology 10, no. 6 (2022): 4002–9. http://dx.doi.org/10.22214/ijraset.2022.44826.

Abstract:
Typically, image noise is random colour information in picture pixels that arises as an unwanted by-product of the image, obscuring the intended information. In most cases, noise is injected into photographs during transfer or reception, or while capturing a fast-moving object. Autoencoders that denoise the input images are employed to improve predictions on noisy pictures. Autoencoders are a type of unsupervised machine learning model that compresses the input and reconstructs an output very similar to the original. The autoencoder tries to learn non-linear correlations between data points. An autoencoder consists of an encoder, a latent space, and a decoder: the encoder reduces an original picture to its latent space representation, which the decoder then uses to reconstruct the image at its original dimensions. Three approaches were employed to denoise the pictures: a basic autoencoder, a variational autoencoder, and a convolutional autoencoder. The basic and convolutional autoencoders have a single loss term, whereas the variational autoencoder has two: a generative loss and a latent loss. The autoencoders in this project are implemented in Keras with TensorFlow as the backend. The noisy pictures are run through each autoencoder technique to produce a decent prediction on the noisy test data.
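The encoder–latent–decoder pipeline described above can be sketched as a minimal linear denoising autoencoder in plain NumPy. This is only an illustration of the principle on random synthetic data, with hand-written gradient descent, not the paper's convolutional Keras/TensorFlow models; the key idea shown is that the training target is the clean signal, not the noisy input.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 16-channel signals living on 4 latent factors, plus noise.
basis = 0.25 * rng.normal(size=(4, 16))
clean = rng.normal(size=(200, 4)) @ basis
noisy = clean + 0.3 * rng.normal(size=clean.shape)

d, h = 16, 8                            # input and bottleneck dimensions
W1 = 0.3 * rng.normal(size=(d, h))      # encoder weights
W2 = 0.3 * rng.normal(size=(h, d))      # decoder weights
lr = 0.05

for _ in range(3000):
    z = noisy @ W1                      # encode to the latent space
    recon = z @ W2                      # decode back to input space
    err = recon - clean                 # denoising target is the CLEAN data
    gW2 = z.T @ err / len(noisy)        # gradient of 0.5*MSE wrt decoder
    gW1 = noisy.T @ (err @ W2.T) / len(noisy)
    W1 -= lr * gW1
    W2 -= lr * gW2

denoised = (noisy @ W1) @ W2
print("noisy MSE:   ", np.mean((noisy - clean) ** 2))
print("denoised MSE:", np.mean((denoised - clean) ** 2))
```

After training, the reconstruction is closer to the clean signal than the noisy input was, which is the whole point of a denoising autoencoder; convolutional and variational variants change the encoder/decoder and loss, not this basic contract.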
3. Jin, Weihua, Bo Sun, Zhidong Li, Shijie Zhang, and Zhonggui Chen. "Detecting Anomalies of Satellite Power Subsystem via Stage-Training Denoising Autoencoders." Sensors 19, no. 14 (2019): 3216. http://dx.doi.org/10.3390/s19143216.

Abstract:
Satellite telemetry data contains satellite status information, and ground-monitoring personnel need to promptly detect satellite anomalies from these data. This paper takes the satellite power subsystem as an example and presents a reliable anomaly detection method. Because abnormal data are scarce, the autoencoder is a powerful method for unsupervised anomaly detection. This study proposes a novel stage-training denoising autoencoder (ST-DAE) that trains the features in stages. The novel method has better reconstruction capabilities than common autoencoders, sparse autoencoders, and denoising autoencoders. Meanwhile, a cluster-based anomaly threshold determination method is proposed. Specific methods were designed to evaluate the autoencoder's performance from three perspectives. Experiments were carried out on real satellite telemetry data, and the results showed that the proposed ST-DAE generally outperformed the compared autoencoders.
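The cluster-based threshold idea can be illustrated with a simple two-means clustering over reconstruction errors: split the errors into a low (normal) and a high (anomalous) group and place the threshold between the two cluster centres. The error values here are synthetic stand-ins for autoencoder outputs, and the authors' actual clustering procedure may differ.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in reconstruction errors: 95 low values (normal telemetry) and
# 5 high values (anomalies). In practice these come from the autoencoder.
errors = np.concatenate([rng.normal(0.05, 0.01, 95),
                         rng.normal(0.60, 0.05, 5)])

# 1-D k-means with k=2, initialised at the extremes.
centres = np.array([errors.min(), errors.max()])
for _ in range(20):
    assign = np.abs(errors[:, None] - centres[None, :]).argmin(axis=1)
    centres = np.array([errors[assign == k].mean() for k in (0, 1)])

threshold = centres.mean()              # midpoint between the two centres
flagged = int(np.sum(errors > threshold))
print(f"threshold={threshold:.3f}, flagged {flagged} of {len(errors)}")
```

With well-separated error populations, the threshold lands between the clusters and flags exactly the high-error samples, avoiding a hand-tuned cutoff.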
4. Shevchenko, Dmytro, Mykhaylo Ugryumov, and Sergii Artiukh. "Monitoring Data Aggregation of Dynamic Systems Using Information Technologies." Innovative Technologies and Scientific Solutions for Industries, no. 1 (23) (April 20, 2023): 123–31. http://dx.doi.org/10.30837/itssi.2023.23.123.

Abstract:
The subject matter of the article is models, methods, and information technologies for monitoring data aggregation. The goal of the article is to determine the best deep learning model for reducing the dimensionality of dynamic systems monitoring data. The following tasks were solved: analysis of existing dimensionality reduction approaches, description of the general architecture of vanilla and variational autoencoders, development of their architectures, development of software for training and testing autoencoders, and evaluation of autoencoder performance on the dimensionality reduction problem. The following models and methods were used: data processing and preparation, and data dimensionality reduction. The software was developed in Python, with Scikit-learn, Pandas, PyTorch, NumPy, argparse, and other auxiliary libraries. Obtained results: the work presents a classification of models and methods for dimensionality reduction and general reviews of vanilla and variational autoencoders, including descriptions of the models, their properties, their loss functions, and their application to dimensionality reduction. Custom autoencoder architectures were also created, with visual representations of each architecture and descriptions of each component. Software for training and testing the autoencoders was developed, and the dynamic system monitoring dataset and its preparation steps were described. The metric for evaluating model quality is also described, and the configuration and training of the autoencoders are considered. Conclusions: the vanilla autoencoder recovers the data much better than the variational one. Since the two architectures are identical apart from the variational component, the vanilla autoencoder evidently compresses the data better, retaining more of the useful variables for later recovery from the bottleneck. Additionally, by training with different bottleneck sizes, one can determine the size at which the data is recovered best, i.e., at which the most important variables are preserved. Overall, the autoencoders are effective for the dimensionality reduction task, and the recovery quality metric shows that they reconstruct the data well, with an error in the third to fourth decimal place. In conclusion, the vanilla autoencoder is the best deep learning model for aggregating monitoring data of dynamic systems.
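The bottleneck-size sweep described in the conclusions can be mimicked in closed form: a linear autoencoder's optimum is the top-k principal subspace, so SVD reconstructions at several ranks stand in for training separate models. This is a deliberate simplification of the paper's PyTorch setup, on synthetic monitoring-style data, just to show how reconstruction error reveals the right bottleneck size.

```python
import numpy as np

rng = np.random.default_rng(0)

# Monitoring-style data: 5 underlying factors embedded in 20 channels.
latent_true = rng.normal(size=(300, 5))
X = latent_true @ rng.normal(size=(5, 20))
X += 0.01 * rng.normal(size=X.shape)        # small measurement noise

# Rank-k SVD reconstruction = optimal linear autoencoder with bottleneck k.
U, s, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
mse = {}
for k in (2, 5, 10):
    Xk = (U[:, :k] * s[:k]) @ Vt[:k] + X.mean(axis=0)
    mse[k] = float(np.mean((X - Xk) ** 2))
    print(f"bottleneck {k:2d}: reconstruction MSE {mse[k]:.5f}")
```

The error drops sharply once the bottleneck reaches the true factor count (5) and only marginally beyond it, which is exactly the signal the sweep in the paper looks for.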
5. Shin, Seung Yeop, and Han-joon Kim. "Extended Autoencoder for Novelty Detection with Reconstruction along Projection Pathway." Applied Sciences 10, no. 13 (2020): 4497. http://dx.doi.org/10.3390/app10134497.

Abstract:
Recently, novelty detection with reconstruction along projection pathway (RaPP) has made progress toward leveraging hidden activation values. RaPP compares the input and its autoencoder reconstruction in hidden spaces to detect novelty samples. Nevertheless, traditional autoencoders have not yet begun to fully exploit this method. In this paper, we propose a new model, the Extended Autoencoder Model, that adds an adversarial component to the autoencoder to take full advantage of RaPP. The adversarial component matches the latent variables of the reconstructed input to the latent variables of the original input to detect novelty samples with high hidden reconstruction errors. The proposed model can be combined with variants of the autoencoder, such as a variational autoencoder or adversarial autoencoder. The effectiveness of the proposed model was evaluated across various novelty detection datasets. Our results demonstrated that extended autoencoders are capable of outperforming conventional autoencoders in detecting novelties using the RaPP method.
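A rough sketch of the RaPP scoring idea the extended autoencoder builds on: feed both the input and its reconstruction back through the encoder and accumulate the differences of the hidden activations. The encoder weights here are mock (random rather than trained) and the "reconstructions" are synthetic perturbations; RaPP's actual formulation also includes normalised score variants.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in encoder: two dense tanh layers with fixed (pretend-trained) weights.
weights = [0.3 * rng.normal(size=(10, 8)), 0.3 * rng.normal(size=(8, 4))]

def hidden_activations(x):
    """Activations of every encoder layer along the projection pathway."""
    acts = []
    for W in weights:
        x = np.tanh(x @ W)
        acts.append(x)
    return acts

def rapp_score(x, x_hat):
    """Novelty score: reconstruction error accumulated in the hidden spaces,
    not just in the input space."""
    return sum(float(np.sum((a - b) ** 2))
               for a, b in zip(hidden_activations(x), hidden_activations(x_hat)))

x = rng.normal(size=10)
faithful = x + 0.01 * rng.normal(size=10)   # good reconstruction (normal sample)
poor = x + 0.8 * rng.normal(size=10)        # bad reconstruction (novelty-like)
print("faithful:", rapp_score(x, faithful))
print("poor:    ", rapp_score(x, poor))
```

A novelty sample, whose reconstruction drifts far from the input, accumulates a much larger hidden-space score; the paper's adversarial component is designed to sharpen exactly this gap.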
6. Song, Youngrok, Sangwon Hyun, and Yun-Gyung Cheong. "Analysis of Autoencoders for Network Intrusion Detection." Sensors 21, no. 13 (2021): 4294. http://dx.doi.org/10.3390/s21134294.

Abstract:
As network attacks are constantly and dramatically evolving, demonstrating new patterns, intelligent Network Intrusion Detection Systems (NIDS), using deep-learning techniques, have been actively studied to tackle these problems. Recently, various autoencoders have been used for NIDS in order to accurately and promptly detect unknown types of attacks (i.e., zero-day attacks) and also alleviate the burden of the laborious labeling task. Although the autoencoders are effective in detecting unknown types of attacks, it takes tremendous time and effort to find the optimal model architecture and hyperparameter settings of the autoencoders that result in the best detection performance. This can be an obstacle that hinders practical applications of autoencoder-based NIDS. To address this challenge, we rigorously study autoencoders using the benchmark datasets, NSL-KDD, IoTID20, and N-BaIoT. We evaluate multiple combinations of different model structures and latent sizes, using a simple autoencoder model. The results indicate that the latent size of an autoencoder model can have a significant impact on the IDS performance.
7. Ghafar, Abdul, and Usman Sattar. "Convolutional Autoencoder for Image Denoising." UMT Artificial Intelligence Review 1, no. 2 (2021): 1–11. http://dx.doi.org/10.32350/air.0102.01.

Abstract:
Image denoising is a process used to remove noise from an image to produce a sharp and clear result. It is mainly used in medical imaging, where malfunctioning machines or the precautions taken to protect patients from radiation introduce substantial noise into the final image. Several techniques can be applied to remove such distortions before the image is finalised. Autoencoders are the most notable approach for denoising images, but conventional autoencoders are limited, so the resulting image quality is often poor. In this paper, we introduce a modified autoencoder built on a deep convolutional neural network, which produces better-quality images than traditional autoencoders. After training, with progress monitored in TensorBoard, the modified autoencoder was tested on a different dataset containing various shapes. The results were satisfactory, though not ideal, for several reasons; nevertheless, the proposed system still performed better than traditional autoencoders.
Keywords: image denoising, deep learning, convolutional neural network, image autoencoder, image convolutional autoencoder
8. Liu, Junhong. "Review of Variational Autoencoders Model." Applied and Computational Engineering 4, no. 1 (2023): 588–96. http://dx.doi.org/10.54254/2755-2721/4/2023328.

Abstract:
The variational autoencoder is a deep latent-space generative model that has become increasingly popular for image generation and anomaly detection in recent years. In this paper, we first review the development and research status of traditional variational autoencoders and their variants, then summarize and compare their performance, and finally suggest a possible direction for the future development of VAEs.
9. Lin, Yen-Kuang, Chen-Yin Lee, and Chen-Yueh Chen. "Robustness of Autoencoders for Establishing Psychometric Properties Based on Small Sample Sizes: Results from a Monte Carlo Simulation Study and a Sports Fan Curiosity Study." PeerJ Computer Science 8 (February 9, 2022): e782. http://dx.doi.org/10.7717/peerj-cs.782.

Abstract:
Background: Principal component analysis (PCA) is a multivariate statistical model for reducing dimensions into a representation of principal components, and is therefore commonly adopted for establishing psychometric properties, i.e., construct validity. The autoencoder is a neural network model that has also been shown to perform well in dimensionality reduction. Although there are several ways PCA and autoencoders could be compared, most of the recent literature has focused on differences in image reconstruction, where training data are often plentiful. In the current study, we examined each autoencoder classifier in detail and how it may provide the neural-network superiority needed to better generalize non-normally distributed small datasets.
Methodology: A Monte Carlo simulation was conducted, varying the levels of non-normality, sample sizes, and levels of communality. The performance of autoencoders and PCA was compared using the mean square error, mean absolute value, and Euclidean distance. The feasibility of autoencoders with small sample sizes was examined.
Conclusions: With extreme flexibility in decoding representations using linear and non-linear mappings, this study demonstrated that the autoencoder can robustly reduce dimensions, and hence was effective in establishing construct validity with a sample size as small as 100. The autoencoders obtained a smaller mean square error and a smaller Euclidean distance between the original dataset and the predictions for a small non-normal dataset. Hence, when behavioral scientists attempt to explore the construct validity of a newly designed questionnaire, an autoencoder could be considered an alternative to PCA.
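The three comparison criteria named in the methodology can be computed as below for a PCA baseline on a small, non-normal synthetic sample (n = 100, skewed exponential factors). The data and dimensions are stand-ins; the study's autoencoder condition would replace the SVD step with a trained encoder/decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Small non-normal sample: skewed exponential factors mixed linearly,
# mimicking non-normally distributed questionnaire scores.
n, d, k = 100, 12, 3
X = rng.exponential(size=(n, k)) @ rng.normal(size=(k, d))
X += 0.05 * rng.normal(size=(n, d))         # measurement noise

def reconstruction_metrics(X, X_hat):
    """The three comparison criteria used in the study."""
    mse = float(np.mean((X - X_hat) ** 2))
    mae = float(np.mean(np.abs(X - X_hat)))
    euclid = float(np.mean(np.linalg.norm(X - X_hat, axis=1)))
    return mse, mae, euclid

# PCA baseline: reconstruct from the top-k principal components.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
X_pca = (U[:, :k] * s[:k]) @ Vt[:k] + X.mean(axis=0)

mse, mae, euclid = reconstruction_metrics(X, X_pca)
print(f"PCA: MSE={mse:.4f}  MAE={mae:.4f}  Euclidean={euclid:.4f}")
```

Running the same metrics on an autoencoder's reconstructions of the same sample gives the head-to-head comparison the simulation study performs across non-normality levels and sample sizes.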
10. Alam, Fardina Fathmiul, Taseef Rahman, and Amarda Shehu. "Evaluating Autoencoder-Based Featurization and Supervised Learning for Protein Decoy Selection." Molecules 25, no. 5 (2020): 1146. http://dx.doi.org/10.3390/molecules25051146.

Abstract:
Rapid growth in molecular structure data is renewing interest in featurizing structure. Featurizations that retain information on biological activity are particularly sought for protein molecules, where decades of research have shown that indeed structure encodes function. Research on featurization of protein structure is active, but here we assess the promise of autoencoders. Motivated by rapid progress in neural network research, we investigate and evaluate autoencoders on yielding linear and nonlinear featurizations of protein tertiary structures. An additional reason we focus on autoencoders as the engine to obtain featurizations is the versatility of their architectures and the ease with which changes to architecture yield linear versus nonlinear features. While open-source neural network libraries, such as Keras, which we employ here, greatly facilitate constructing, training, and evaluating autoencoder architectures and conducting model search, autoencoders have not yet gained popularity in the structural biology community. Here we demonstrate their utility in a practical context. Employing autoencoder-based featurizations, we address the classic problem of decoy selection in protein structure prediction. Utilizing off-the-shelf supervised learning methods, we demonstrate that the featurizations are indeed meaningful and allow detecting active tertiary structures, thus opening the way for further avenues of research.
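A schematic of the featurize-then-classify pipeline on mock data (not protein structures): an SVD-based linear featurization stands in for the autoencoder's latent code, and a nearest-centroid rule stands in for the off-the-shelf supervised learner used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Mock decoy set: 30-dim structure descriptors from two classes
# (near-native vs. non-native), separable in a low-dimensional subspace.
n = 200
labels = rng.integers(0, 2, size=n)
centers = rng.normal(size=(2, 30))
X = centers[labels] + 0.5 * rng.normal(size=(n, 30))

# Linear autoencoder-style featurization via SVD: keep a 5-dim latent code.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
codes = Xc @ Vt[:5].T

# Supervised step: nearest-centroid classification in the feature space.
train, test = np.arange(150), np.arange(150, n)
cents = np.stack([codes[train][labels[train] == c].mean(axis=0) for c in (0, 1)])
dists = np.linalg.norm(codes[test][:, None, :] - cents[None, :, :], axis=2)
pred = dists.argmin(axis=1)
accuracy = float(np.mean(pred == labels[test]))
print(f"decoy-selection accuracy on held-out set: {accuracy:.2f}")
```

If the featurization preserves the class-relevant geometry, even a trivial classifier separates the decoys well on held-out data; that is the property the paper evaluates with real decoy datasets and stronger learners.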