
Journal articles on the topic 'Auto encoder'


Consult the top 50 journal articles for your research on the topic 'Auto encoder.'


1

Xie, Chengxin, Jingui Huang, Yongjiang Shi, Hui Pang, Liting Gao, and Xiumei Wen. "Ensemble graph auto-encoders for clustering and link prediction." PeerJ Computer Science 11 (January 22, 2025): e2648. https://doi.org/10.7717/peerj-cs.2648.

Abstract:
Graph auto-encoders are a crucial research area within graph neural networks, commonly employed for generating graph embeddings while minimizing errors in unsupervised learning. Traditional graph auto-encoders focus on reconstructing minimal graph data loss to encode neighborhood information for each node, yielding node embedding representations. However, existing graph auto-encoder models often overlook node representations and fail to capture contextual node information within the graph data, resulting in poor embedding effects. Accordingly, this study proposes the ensemble graph auto-encode
2

Chen, Shuangshuang, and Wei Guo. "Auto-Encoders in Deep Learning—A Review with New Perspectives." Mathematics 11, no. 8 (2023): 1777. http://dx.doi.org/10.3390/math11081777.

Abstract:
Deep learning, which is a subfield of machine learning, has opened a new era for the development of neural networks. The auto-encoder is a key component of deep structure, which can be used to realize transfer learning and plays an important role in both unsupervised learning and non-linear feature extraction. By highlighting the contributions and challenges of recent research papers, this work aims to review state-of-the-art auto-encoder algorithms. Firstly, we introduce the basic auto-encoder as well as its basic concept and structure. Secondly, we present a comprehensive summarization of di
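The basic auto-encoder structure that the review above starts from can be illustrated in a few lines: an encoder maps the input to a low-dimensional code, a decoder maps the code back, and training minimizes the reconstruction error. Below is a minimal, generic PyTorch sketch; the 784-dimensional input and the layer sizes are illustrative assumptions, not taken from the paper.

```python
import torch
from torch import nn

class AutoEncoder(nn.Module):
    """Minimal fully connected auto-encoder: encode the input to a
    low-dimensional code, then decode back to the input space."""
    def __init__(self, in_dim=784, code_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, code_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 128), nn.ReLU(),
            nn.Linear(128, in_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

model = AutoEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(64, 784)                    # stand-in batch of inputs in [0, 1]
recon, code = model(x)
loss = nn.functional.mse_loss(recon, x)    # reconstruction error drives training
loss.backward()
opt.step()
```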
3

Theunissen, Carl Daniel, Steven Martin Bradshaw, Lidia Auret, and Tobias Muller Louw. "One-Dimensional Convolutional Auto-Encoder for Predicting Furnace Blowback Events from Multivariate Time Series Process Data—A Case Study." Minerals 11, no. 10 (2021): 1106. http://dx.doi.org/10.3390/min11101106.

Abstract:
Modern industrial mining and mineral processing applications are characterized by large volumes of historical process data. Hazardous events occurring in these processes compromise process safety and therefore overall viability. These events are recorded in historical data and are often preceded by characteristic patterns. Reconstruction-based data-driven models are trained to reconstruct the characteristic patterns of hazardous event-preceding process data with minimal residuals, facilitating effective event prediction based on reconstruction residuals. This investigation evaluated one-dimens
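The reconstruction-residual idea summarized above is the core of such event-prediction models: train an auto-encoder on event-free process windows and raise an alert when the reconstruction error of a new window is unusually large. The sketch below illustrates that workflow with a toy 1D convolutional auto-encoder; the architecture, window size, and 3-sigma threshold are illustrative assumptions, not the configuration evaluated in the paper.

```python
import torch
from torch import nn

class Conv1dAE(nn.Module):
    """Toy 1D convolutional auto-encoder over multivariate time-series windows
    (channels x window length)."""
    def __init__(self, n_channels=8):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv1d(16, 8, kernel_size=5, stride=2, padding=2), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose1d(8, 16, kernel_size=5, stride=2, padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose1d(16, n_channels, kernel_size=5, stride=2, padding=2, output_padding=1),
        )

    def forward(self, x):
        return self.dec(self.enc(x))

model = Conv1dAE()
normal = torch.randn(256, 8, 64)            # stand-in for windows of event-free process data
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5):                          # short demo training loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(normal), normal)
    loss.backward()
    opt.step()

# Residual-based alerting: a large reconstruction error suggests a possible precursor.
with torch.no_grad():
    residual = ((model(normal) - normal) ** 2).mean(dim=(1, 2))
threshold = residual.mean() + 3 * residual.std()   # e.g. a 3-sigma rule
new_window = torch.randn(1, 8, 64)
with torch.no_grad():
    score = ((model(new_window) - new_window) ** 2).mean()
print("alert" if score > threshold else "normal")
```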
4

Bous, Frederik, and Axel Roebel. "A Bottleneck Auto-Encoder for F0 Transformations on Speech and Singing Voice." Information 13, no. 3 (2022): 102. http://dx.doi.org/10.3390/info13030102.

Abstract:
In this publication, we present a deep learning-based method to transform the f0 in speech and singing voice recordings. f0 transformation is performed by training an auto-encoder on the voice signal’s mel-spectrogram and conditioning the auto-encoder on the f0. Inspired by AutoVC/F0, we apply an information bottleneck to it to disentangle the f0 from its latent code. The resulting model successfully applies the desired f0 to the input mel-spectrograms and adapts the speaker identity when necessary, e.g., if the requested f0 falls out of the range of the source speaker/singer. Using the mean f
5

Augustine, Jeena. "Emotion Recognition in Speech Using with SVM, DSVM and Auto-Encoder." International Journal for Research in Applied Science and Engineering Technology 9, no. 8 (2021): 1021–26. http://dx.doi.org/10.22214/ijraset.2021.37545.

Abstract:
Emotion recognition from speech is one of the most important subdomains in the field of signal processing. In this work, our system is a two-stage approach, namely feature extraction and a classification engine. Firstly, two sets of features are investigated: thirty-nine Mel-frequency cepstral coefficients (MFCC) and sixty-five MFCC features extracted based on the work of [20]. Secondly, we use the Support Vector Machine (SVM) as the main classifier engine, since it is the most common technique in the field of speech
6

Kollias, Georgios, Vasileios Kalantzis, Tsuyoshi Ide, Aurélie Lozano, and Naoki Abe. "Directed Graph Auto-Encoders." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (2022): 7211–19. http://dx.doi.org/10.1609/aaai.v36i7.20682.

Abstract:
We introduce a new class of auto-encoders for directed graphs, motivated by a direct extension of the Weisfeiler-Leman algorithm to pairs of node labels. The proposed model learns pairs of interpretable latent representations for the nodes of directed graphs, and uses parameterized graph convolutional network (GCN) layers for its encoder and an asymmetric inner product decoder. Parameters in the encoder control the weighting of representations exchanged between neighboring nodes. We demonstrate the ability of the proposed model to learn meaningful latent embeddings and achieve superior perform
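The distinguishing ingredients described above are the pair of latent representations per node and the asymmetric inner-product decoder, which lets the reconstructed adjacency be non-symmetric. The sketch below illustrates only that decoder idea; the simple adjacency-based propagation used here in place of the paper's parameterized GCN layers is an assumption made for brevity.

```python
import torch
from torch import nn

n, f, d = 6, 4, 3                       # toy sizes: nodes, features, latent dim
A = (torch.rand(n, n) < 0.3).float()    # random directed adjacency matrix
A.fill_diagonal_(0)
X = torch.randn(n, f)                   # node features

class DirectedGAE(nn.Module):
    """Each node gets a 'source' and a 'target' embedding; the decoder is the
    asymmetric inner product sigmoid(S @ T^T), so A_hat[i, j] can differ from A_hat[j, i]."""
    def __init__(self, f, d):
        super().__init__()
        self.src = nn.Linear(f, d)
        self.tgt = nn.Linear(f, d)

    def forward(self, A, X):
        h_out = A @ X                   # aggregate along outgoing edges
        h_in = A.t() @ X                # aggregate along incoming edges
        S, T = self.src(h_out), self.tgt(h_in)
        return torch.sigmoid(S @ T.t()), S, T

model = DirectedGAE(f, d)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    A_hat, S, T = model(A, X)
    loss = nn.functional.binary_cross_entropy(A_hat, A)   # reconstruct directed edges
    loss.backward()
    opt.step()
```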
7

Karim, Ahmad M., Hilal Kaya, Mehmet Serdar Güzel, Mehmet R. Tolun, Fatih V. Çelebi, and Alok Mishra. "A Novel Framework Using Deep Auto-Encoders Based Linear Model for Data Classification." Sensors 20, no. 21 (2020): 6378. http://dx.doi.org/10.3390/s20216378.

Abstract:
This paper proposes a novel data classification framework, combining sparse auto-encoders (SAEs) and a post-processing system consisting of a linear system model relying on Particle Swarm Optimization (PSO) algorithm. All the sensitive and high-level features are extracted by using the first auto-encoder which is wired to the second auto-encoder, followed by a Softmax function layer to classify the extracted features obtained from the second layer. The two auto-encoders and the Softmax classifier are stacked in order to be trained in a supervised approach using the well-known backpropagation a
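Setting aside the PSO-based post-processing, the stacked structure described above (a first auto-encoder feeding a second, topped by a Softmax layer and trained with backpropagation) can be sketched as follows. The layer sizes, the L1 sparsity penalty, and the joint training loop are illustrative assumptions, not the paper's exact procedure.

```python
import torch
from torch import nn

class StackedSparseAEClassifier(nn.Module):
    """Two stacked encoders with per-layer decoders for reconstruction, plus a
    Softmax classification head on top of the second code layer."""
    def __init__(self, in_dim=64, h1=32, h2=16, n_classes=3):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Linear(in_dim, h1), nn.ReLU())
        self.dec1 = nn.Linear(h1, in_dim)     # reconstructs the input from code 1
        self.enc2 = nn.Sequential(nn.Linear(h1, h2), nn.ReLU())
        self.dec2 = nn.Linear(h2, h1)         # reconstructs code 1 from code 2
        self.head = nn.Linear(h2, n_classes)  # Softmax applied via cross-entropy

    def forward(self, x):
        h1 = self.enc1(x)
        h2 = self.enc2(h1)
        return self.head(h2), h1, h2

model = StackedSparseAEClassifier()
x = torch.randn(128, 64)                  # stand-in feature vectors
y = torch.randint(0, 3, (128,))           # stand-in class labels
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(10):
    opt.zero_grad()
    logits, h1, h2 = model(x)
    recon = (nn.functional.mse_loss(model.dec1(h1), x)
             + nn.functional.mse_loss(model.dec2(h2), h1.detach()))
    sparsity = 1e-3 * (h1.abs().mean() + h2.abs().mean())   # simple L1 sparsity penalty
    classify = nn.functional.cross_entropy(logits, y)
    (recon + sparsity + classify).backward()
    opt.step()
```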
8

SSSN Usha Devi N, Pokkuluri Kiran Sree, Prasun Chakrabarti, and Martin Margala. "Auto encoders with Cellular Automata for Anomaly Detection." Journal of Electrical Systems 20, no. 1s (2024): 728–33. http://dx.doi.org/10.52783/jes.815.

Abstract:
This work combines auto encoders with cellular automata (CA) to present a novel hybrid strategy for anomaly identification. For feature learning, auto encoders are used to identify spatial patterns in the input data. Simultaneously, temporal and geographical dependencies are captured by CA, which improves the model's capacity to identify complicated anomalies. Training on spatially altered data, the auto encoder-CA hybrid model makes use of CA's temporal evolution to reveal dynamic patterns. Reconstruction errors between the input data and its decoded representation are computed to identify an
9

Prasun Chakrabarti, Martin Margala, SSSN Usha Devi N., and Pokkuluri Kiran Sree. "Auto encoders with Cellular Automata for Anomaly Detection." Journal of Electrical Systems 20, no. 2s (2024): 227–32. http://dx.doi.org/10.52783/jes.1131.

Abstract:
This work combines auto encoders with cellular automata (CA) to present a novel hybrid strategy for anomaly identification. For feature learning, auto encoders are used to identify spatial patterns in the input data. Simultaneously, temporal and geographical dependencies are captured by CA, which improves the model's capacity to identify complicated anomalies. Training on spatially altered data, the auto encoder-CA hybrid model makes use of CA's temporal evolution to reveal dynamic patterns. Reconstruction errors between the input data and its decoded representation are computed to identify an
10

K.N., Sunilkumar. "Security Framework for Physiological Signals Using Auto Encoder." Journal of Advanced Research in Dynamical and Control Systems 12, no. 01-Special Issue (2020): 583–92. http://dx.doi.org/10.5373/jardcs/v12sp1/20201107.

11

Song, Chunfeng, Yongzhen Huang, Feng Liu, Zhenyu Wang, and Liang Wang. "Deep auto-encoder based clustering." Intelligent Data Analysis 18, no. 6S (2014): S65–S76. http://dx.doi.org/10.3233/ida-140709.

12

Zhai, Zhonghua. "Auto-encoder generative adversarial networks." Journal of Intelligent & Fuzzy Systems 35, no. 3 (2018): 3043–49. http://dx.doi.org/10.3233/jifs-169659.

13

Wang, Yasi, Hongxun Yao, and Sicheng Zhao. "Auto-encoder based dimensionality reduction." Neurocomputing 184 (April 2016): 232–42. http://dx.doi.org/10.1016/j.neucom.2015.08.104.

14

Park, Ji Hun, Seung Whoun Kim, Hak Tae Lee, Young Ho Ko, and Heon Jin Park. "Anomaly Trajectory Detection Model Using LSTM Auto Encoder." Korean Data Analysis Society 26, no. 1 (2024): 35–47. http://dx.doi.org/10.37727/jkdas.2024.26.1.35.

Abstract:
With the rise in aviation demand and the emergence of Urban Air Mobility, developing a safe aviation system in urban areas is becoming increasingly important. This study addresses the challenge of detecting anomalous flight trajectories, which can be influenced by environmental factors. We propose a novel Long Short-Term Memory-Auto Encoder (LSTM-AE) model that processes both environmental and trajectory data but only reconstructs trajectory data in its output. This approach was validated by assessing the average reconstruction error for specific trajectories. Additionally, the model's ability
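The distinctive detail above is that the model ingests both environmental and trajectory channels but reconstructs only the trajectory channels, so the anomaly score is a trajectory-only reconstruction error. A minimal LSTM encoder-decoder sketch of that idea follows; the channel names, dimensions, and single-layer architecture are illustrative assumptions.

```python
import torch
from torch import nn

class LSTMAE(nn.Module):
    """Encode [environment + trajectory] sequences; decode trajectory channels only."""
    def __init__(self, env_dim=4, traj_dim=3, hidden=32):
        super().__init__()
        self.encoder = nn.LSTM(env_dim + traj_dim, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, traj_dim)

    def forward(self, x):
        _, (h, c) = self.encoder(x)                   # summarize the whole sequence
        T = x.size(1)
        dec_in = h[-1].unsqueeze(1).repeat(1, T, 1)   # feed the summary at every step
        dec_out, _ = self.decoder(dec_in)
        return self.out(dec_out)                      # reconstructed trajectory only

model = LSTMAE()
env = torch.randn(16, 50, 4)     # stand-in environmental channels (e.g. wind, visibility)
traj = torch.randn(16, 50, 3)    # stand-in trajectory channels (e.g. lat, lon, altitude)
recon = model(torch.cat([env, traj], dim=-1))
error = nn.functional.mse_loss(recon, traj)   # reconstruction error on the trajectory alone
```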
15

Song, Xiaona, Haichao Liu, Lijun Wang, et al. "A Semantic Segmentation Method for Road Environment Images Based on Hybrid Convolutional Auto-Encoder." Traitement du Signal 39, no. 4 (2022): 1235–45. http://dx.doi.org/10.18280/ts.390416.

Abstract:
Deep convolutional neural networks (CNNs) have presented amazing performance in the task of semantic segmentation. However, the network model is complex, the training time is prolonged, the semantic segmentation accuracy is not high and the real-time performance is not good, so it is difficult to be directly used in the semantic segmentation of road environment images of autonomous vehicles. As one of the three models of deep learning, the auto-encoder (AE) has powerful data learning and feature extracting capabilities from the raw data itself. In this study, the network architecture of auto-e
16

Bai, Wenjun, Changqin Quan, and Zhi-Wei Luo. "Learning Flexible Latent Representations via Encapsulated Variational Encoders." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 9913–14. http://dx.doi.org/10.1609/aaai.v33i01.33019913.

Abstract:
Learning flexible latent representation of observed data is an important precursor for most downstream AI applications. To this end, we propose a novel form of variational encoder, i.e., encapsulated variational encoders (EVE) to exert direct control over encoded latent representations along with its learning algorithm, i.e., the EVE compatible automatic variational differentiation inference algorithm. Armed with this property, our derived EVE is capable of learning converged and diverged latent representations. Using CIFAR-10 as an example, we show that the learning of converged latent repres
17

Shang, Zengqiang, Peiyang Shi, Pengyuan Zhang, Li Wang, and Guangying Zhao. "HierTTS: Expressive End-to-End Text-to-Waveform Using a Multi-Scale Hierarchical Variational Auto-Encoder." Applied Sciences 13, no. 2 (2023): 868. http://dx.doi.org/10.3390/app13020868.

Abstract:
End-to-end text-to-speech (TTS) models that directly generate waveforms from text are gaining popularity. However, existing end-to-end models are still not natural enough in their prosodic expressiveness. Additionally, previous studies on improving the expressiveness of TTS have mainly focused on acoustic models. There is a lack of research on enhancing expressiveness in an end-to-end framework. Therefore, we propose HierTTS, a highly expressive end-to-end text-to-waveform generation model. It deeply couples the hierarchical properties of speech with hierarchical variational auto-encoders and
18

Lu, Yingjing. "The Level Weighted Structural Similarity Loss: A Step Away from MSE." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 9989–90. http://dx.doi.org/10.1609/aaai.v33i01.33019989.

Abstract:
The Mean Square Error (MSE) has shown its strength when applied in deep generative models such as Auto-Encoders to model reconstruction loss. However, in image domain especially, the limitation of MSE is obvious: it assumes pixel independence and ignores spatial relationships of samples. This contradicts most architectures of Auto-Encoders which use convolutional layers to extract spatial dependent features. We base on the structural similarity metric (SSIM) and propose a novel level weighted structural similarity (LWSSIM) loss for convolutional Auto-Encoders. Experiments on common datasets on
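The abstract contrasts pixel-wise MSE with a structural-similarity-based loss evaluated at several levels. The sketch below shows one plausible reading of a level-weighted SSIM loss: SSIM computed on progressively average-pooled images and combined with per-level weights. The global-statistics SSIM and the weights used here are simplifying assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def ssim_global(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified SSIM using global per-image statistics instead of local windows."""
    mu_x, mu_y = x.mean(dim=(1, 2, 3)), y.mean(dim=(1, 2, 3))
    var_x = x.var(dim=(1, 2, 3), unbiased=False)
    var_y = y.var(dim=(1, 2, 3), unbiased=False)
    cov = ((x - mu_x[:, None, None, None]) * (y - mu_y[:, None, None, None])).mean(dim=(1, 2, 3))
    num = (2 * mu_x * mu_y + c1) * (2 * cov + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return num / den

def level_weighted_ssim_loss(x, y, weights=(0.5, 0.3, 0.2)):
    """Average-pool to coarser levels and combine (1 - SSIM) with per-level weights."""
    loss = 0.0
    for w in weights:
        loss = loss + w * (1.0 - ssim_global(x, y)).mean()
        x, y = F.avg_pool2d(x, 2), F.avg_pool2d(y, 2)
    return loss

x = torch.rand(8, 1, 32, 32)                       # target images in [0, 1]
y = (x + 0.05 * torch.randn_like(x)).clamp(0, 1)   # stand-in auto-encoder reconstructions
print(level_weighted_ssim_loss(y, x))
```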
19

Li, Yang, Guangcan Liu, Yubao Sun, Qingshan Liu, and Shengyong Chen. "3D Tensor Auto-encoder with Application to Video Compression." ACM Transactions on Multimedia Computing, Communications, and Applications 17, no. 2 (2021): 1–18. http://dx.doi.org/10.1145/3431768.

Abstract:
Auto-encoder has been widely used to compress high-dimensional data such as images and videos. However, the traditional auto-encoder network needs to store a large number of parameters. Namely, when the input data is of dimension n, the number of parameters in an auto-encoder is in general O(n). In this article, we introduce a network structure called 3D Tensor Auto-Encoder (3DTAE). Unlike the traditional auto-encoder, in which a video is represented as a vector, our 3DTAE considers videos as 3D tensors to directly pass tensor objects through the network. The weights of each layer are
20

Sun, Tongfeng, Shifei Ding, and Xinzheng Xu. "An iterative stacked weighted auto-encoder." Soft Computing 25, no. 6 (2021): 4833–43. http://dx.doi.org/10.1007/s00500-020-05490-7.

21

Zhu, Qiuyu, Hu Wang, and Ruixin Zhang. "Wavelet Loss Function for Auto-Encoder." IEEE Access 9 (2021): 27101–8. http://dx.doi.org/10.1109/access.2021.3058604.

22

Xu, Long, Ying Wei, Chenhe Dong, Chuaqiao Xu, and Zhaofu Diao. "Wasserstein Distance-Based Auto-Encoder Tracking." Neural Processing Letters 53, no. 3 (2021): 2305–29. http://dx.doi.org/10.1007/s11063-021-10507-9.

23

Du, Yingcai, Xijun Wang, Shujie Wang, Xinran Lu, and Lihui Liang. "Auto-detection system of incremental encoder." Journal of Electronic Measurement and Instrument 26, no. 11 (2013): 993–98. http://dx.doi.org/10.3724/sp.j.1187.2012.00993.

24

Yu, Hualong, Dan Sun, Xiaoyan Xi, Xibei Yang, Shang Zheng, and Qi Wang. "Fuzzy One-Class Extreme Auto-encoder." Neural Processing Letters 50, no. 1 (2018): 701–27. http://dx.doi.org/10.1007/s11063-018-9952-z.

25

N., Sunilkumar K., Shivashankar, and Keshavamurthy. "Bio-signals compression using auto-encoder." International Journal of Electrical and Computer Engineering (IJECE) 11, no. 1 (2021): 424. http://dx.doi.org/10.11591/ijece.v11i1.pp424-433.

Abstract:
Latest developments in wearable devices permit an un-damageable and cheap way of gathering medical data such as bio-signals like ECG, respiration, blood pressure, etc. Gathering and analysis of various biomarkers are considered to provide anticipatory healthcare through customized applications for medical purposes. Wearable devices rely on size, resources, and battery capacity; we need a novel algorithm to robustly control the memory and energy of the device. The rapid growth of the technology has led to numerous auto encoders that guarantee the results by extracting feature selection f
26

Sunilkumar, K. N., Shivashankar, and Keshavamurthy. "Bio-signals compression using auto-encoder." International Journal of Electrical and Computer Engineering (IJECE) 11, no. 1 (2021): 424–33. https://doi.org/10.11591/ijece.v11i1.pp424-433.

Abstract:
Latest developments in wearable devices permit an un-damageable and cheap way of gathering medical data such as bio-signals like ECG, respiration, blood pressure, etc. Gathering and analysis of various biomarkers are considered to provide anticipatory healthcare through customized applications for medical purposes. Wearable devices rely on size, resources, and battery capacity; we need a novel algorithm to robustly control the memory and energy of the device. The rapid growth of the technology has led to numerous auto encoders that guarantee the results by extracting feature selection f
27

Li, Songyuan, Yuyan Man, Chi Zhang, Qiong Fang, Suya Li, and Min Deng. "PRPD data analysis with Auto-Encoder Network." E3S Web of Conferences 81 (2019): 01019. http://dx.doi.org/10.1051/e3sconf/20198101019.

Abstract:
Gas Insulated Switchgear (GIS) is related to the stable operation of power equipment. The traditional partial discharge pattern recognition method relies on expert experience to hand-design artificial features through feature engineering, which is highly subjective and largely blind. To address the problem, we introduce an encoding-decoding network to reconstruct the input data and then treat the encoded network output as a partial discharge signal feature. The adaptive feature mining ability of the Auto-Encoder Network is effectively utilized, and the traditional classifier is connected to
28

Zhu, Yi, Lei Li, and Xindong Wu. "Stacked Convolutional Sparse Auto-Encoders for Representation Learning." ACM Transactions on Knowledge Discovery from Data 15, no. 2 (2021): 1–21. http://dx.doi.org/10.1145/3434767.

Abstract:
Deep learning seeks to achieve excellent performance for representation learning in image datasets. However, supervised deep learning models such as convolutional neural networks require a large number of labeled image data, which is intractable in applications, while unsupervised deep learning models like stacked denoising auto-encoder cannot employ label information. Meanwhile, the redundancy of image data incurs performance degradation on representation learning for aforementioned models. To address these problems, we propose a semi-supervised deep learning framework called stacked convolut
29

Alshayeji, Mohammad H., Mousa AlSulaimi, Sa'ed Abed, and Reem Jaffal. "Network Intrusion Detection With Auto-Encoder and One-Class Support Vector Machine." International Journal of Information Security and Privacy 16, no. 1 (2022): 1–18. http://dx.doi.org/10.4018/ijisp.291703.

Abstract:
Recent advances in machine learning have shown promising results for detecting network intrusion through supervised machine learning. However, such techniques are ineffective for new types of attacks. In the preferred unsupervised and semi-supervised cases, these newer techniques suffer from lower accuracy and higher rates of false alarms. This work proposes a machine learning model that combines auto-encoder with one-class support vectors machine. In this model, the auto-encoders learn the representation of the input data in a latent space and reduces the dimensionality of the input data. The
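The pipeline above has two stages: an auto-encoder compresses traffic features into a latent space, and a one-class SVM is fitted on the latent vectors of benign traffic so that deviating samples fall outside its decision boundary. A minimal sketch follows; the feature dimension, latent size, and scikit-learn OneClassSVM settings are illustrative assumptions.

```python
import torch
from torch import nn
from sklearn.svm import OneClassSVM

# Stage 1: auto-encoder learns a low-dimensional representation of benign traffic.
enc = nn.Sequential(nn.Linear(40, 16), nn.ReLU(), nn.Linear(16, 8))    # encoder
dec = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 40))    # decoder
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

benign = torch.randn(1000, 40)            # stand-in for normalized flow features
for _ in range(20):
    opt.zero_grad()
    z = enc(benign)
    loss = nn.functional.mse_loss(dec(z), benign)
    loss.backward()
    opt.step()

# Stage 2: one-class SVM fitted on the latent vectors of benign traffic.
with torch.no_grad():
    latent = enc(benign).numpy()
ocsvm = OneClassSVM(kernel="rbf", nu=0.05).fit(latent)

# Scoring new traffic: -1 means "outside the benign region", i.e. a possible intrusion.
with torch.no_grad():
    new_latent = enc(torch.randn(5, 40)).numpy()
print(ocsvm.predict(new_latent))
```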
30

Yixuan, Liang. "Cost-sensitive multi-kernel ELM based on reduced expectation kernel auto-encoder." PLOS ONE 20, no. 2 (2025): e0314851. https://doi.org/10.1371/journal.pone.0314851.

Abstract:
ELM (Extreme learning machine) has drawn great attention due to its high training speed and outstanding generalization performance. To address the long training time of the kernel ELM auto-encoder and the difficulty of setting kernel-function weights in existing multi-kernel models, a multi-kernel cost-sensitive ELM method based on an expectation kernel auto-encoder is proposed. Firstly, from the view of similarity, the reduced kernel auto-encoder is defined by randomly selecting the reference points from the input data; then, the reduced expectation kernel auto-encoder is design
31

Zhou, Shenghan, Tianhuai Wang, Linchao Yang, Zhao He, and Siting Cao. "A Self-Supervised Fault Detection for UAV Based on Unbalanced Flight Data Representation Learning and Wavelet Analysis." Aerospace 10, no. 3 (2023): 250. http://dx.doi.org/10.3390/aerospace10030250.

Abstract:
This paper aims to build a Self-supervised Fault Detection Model for UAVs combined with an Auto-Encoder. With the development of data science, it is imperative to detect UAV faults and improve their safety. Many factors affect the fault of a UAV, such as the voltage of the generator, angle of attack, and position of the rudder surface. A UAV is a typical complex system, and its flight data are typical high-dimensional large sample data sets. In practical applications such as UAV fault detection, the fault data only appear in a small part of the data sets. In this study, representation learning
32

Zhang, Si-si, Jian-wei Liu, Xin Zuo, Run-kun Lu, and Si-ming Lian. "Online deep learning based on auto-encoder." Applied Intelligence 51, no. 8 (2021): 5420–39. http://dx.doi.org/10.1007/s10489-020-02058-8.

33

Zhang, Yunhe, Zhoumin Lu, and Shiping Wang. "Unsupervised feature selection via transformed auto-encoder." Knowledge-Based Systems 215 (March 2021): 106748. http://dx.doi.org/10.1016/j.knosys.2021.106748.

34

OMATA, Noriyasu. "Low-dimensionalization via Auto-encoder and Visualization." Journal of the Visualization Society of Japan 38, no. 151 (2018): 9–13. http://dx.doi.org/10.3154/jvs.38.151_9.

35

Yasukawa, Shinsuke, Sreeraman Raghura, Yuya Nishida, and Kazuo Ishii. "Underwater image reconstruction using convolutional auto-encoder." Proceedings of International Conference on Artificial Life and Robotics 26 (January 21, 2021): 262–65. http://dx.doi.org/10.5954/icarob.2021.os23-4.

36

Lee, Joonnyong, Sukkyu Sun, Seung Man Yang, et al. "Bidirectional Recurrent Auto-Encoder for Photoplethysmogram Denoising." IEEE Journal of Biomedical and Health Informatics 23, no. 6 (2019): 2375–85. http://dx.doi.org/10.1109/jbhi.2018.2885139.

37

Yang, Shuyuan, and Junxiao Wang. "Sparse Tensor Auto-Encoder for Saliency Detection." IEEE Access 8 (2020): 2924–30. http://dx.doi.org/10.1109/access.2019.2958058.

38

Kiasari, Mohammad Ahangar, Dennis Singh Moirangthem, and Minho Lee. "Coupled generative adversarial stacked Auto-encoder: CoGASA." Neural Networks 100 (April 2018): 1–9. http://dx.doi.org/10.1016/j.neunet.2018.01.002.

39

Du, Fang, Jiangshe Zhang, Nannan Ji, Junying Hu, and Chunxia Zhang. "Discriminative Representation Learning with Supervised Auto-encoder." Neural Processing Letters 49, no. 2 (2018): 507–20. http://dx.doi.org/10.1007/s11063-018-9828-2.

40

Hua, Qin, Han-Wen Hu, Shi-You Qian, Ding-Yu Yang, and Jian Cao. "Bi-GAE: A Bidirectional Generative Auto-Encoder." Journal of Computer Science and Technology 38, no. 3 (2023): 626–43. http://dx.doi.org/10.1007/s11390-023-1902-1.

41

Madiwa, Shweta M., and Vishwanath Burkpalli. "Sine Cosine Based Harris Hawks Optimizer: A Hybrid Optimization Algorithm for Skin Cancer Detection Using Deep Stack Auto Encoder." Revue d'Intelligence Artificielle 36, no. 5 (2022): 697–708. http://dx.doi.org/10.18280/ria.360506.

Abstract:
Skin cancer is becoming a major problem due to its tremendous growth. Skin cancer is a malignant skin lesion, which may cause damage to humans. Hence, early detection and precise medical diagnosis of the skin lesion are essential. In medical practice, detection of malignant lesions needs pathological examination and biopsy, which is expensive. The existing techniques need a brief physical inspection, which is imprecise and time-consuming. This paper presents a computer-assisted skin cancer detection strategy for detecting the skin lesion in skin images using a deep stacked auto encoder. Sine Cosine
42

Nguyen, V. D., D. D. Tran, M. M. Tran, N. M. Nguyen, and V. C. Nguyen. "Robust Vehicle Detection Under Adverse Weather Conditions Using Auto-encoder Feature." International Journal of Machine Learning and Computing 10, no. 4 (2020): 549–55. http://dx.doi.org/10.18178/ijmlc.2020.10.4.971.

43

Hahn, Sangchul, Seokjin Hong, and Heeyoul Choi. "Application of Improved Variational Recurrent Auto-Encoder for Korean Sentence Generation." Journal of KIISE 45, no. 2 (2018): 157–64. http://dx.doi.org/10.5626/jok.2018.45.2.157.

44

Yoo, Jaechang, Heesong Eom, and Yong Suk Choi. "Image-To-Image Translation Using a Cross-Domain Auto-Encoder and Decoder." Applied Sciences 9, no. 22 (2019): 4780. http://dx.doi.org/10.3390/app9224780.

Abstract:
Recently, several studies have focused on image-to-image translation. However, the quality of the translation results is lacking in certain respects. We propose a new image-to-image translation method to minimize such shortcomings using an auto-encoder and an auto-decoder. This method includes pre-training two auto-encoders and decoder pairs for each source and target image domain, cross-connecting two pairs and adding a feature mapping layer. Our method is quite simple and straightforward to adopt but very effective in practice, and we experimentally demonstrated that our method can significa
45

de los Rios, Martín. "Cosmic-kite: auto-encoding the cosmic microwave background." Monthly Notices of the Royal Astronomical Society 511, no. 4 (2022): 5525–35. http://dx.doi.org/10.1093/mnras/stac393.

Abstract:
In this work, we present the results of the study of the cosmic microwave background temperature–temperature power spectrum through auto-encoders in which the latent variables are the cosmological parameters. This method was trained and calibrated using a data set composed of 80 000 power spectra from random cosmologies computed numerically with the CAMB code. Due to the specific architecture of the auto-encoder, the encoder part is a model that estimates the maximum-likelihood parameters from a given power spectrum. On the other hand, the decoder part is a model that computes the pow
46

Yan, Xiaoan, Yadong Xu, Daoming She, and Wan Zhang. "Reliable Fault Diagnosis of Bearings Using an Optimized Stacked Variational Denoising Auto-Encoder." Entropy 24, no. 1 (2021): 36. http://dx.doi.org/10.3390/e24010036.

Abstract:
Variational auto-encoders (VAE) have recently been successfully applied in the intelligent fault diagnosis of rolling bearings due to its self-learning ability and robustness. However, the hyper-parameters of VAEs depend, to a significant extent, on artificial settings, which is regarded as a common and key problem in existing deep learning models. Additionally, its anti-noise capability may face a decline when VAE is used to analyze bearing vibration data under loud environmental noise. Therefore, in order to improve the anti-noise performance of the VAE model and adaptively select its parame
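For context, the variational auto-encoder at the core of such models encodes each input to a mean and a log-variance, samples a latent code with the reparameterization trick, and is trained with a reconstruction term plus a KL-divergence term. The generic minimal VAE below illustrates only that core; the paper's stacking, denoising, and wavelet-based adaptive hyper-parameter selection are not reproduced here, and all sizes are illustrative assumptions.

```python
import torch
from torch import nn

class VAE(nn.Module):
    def __init__(self, in_dim=1024, hidden=128, z_dim=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, z_dim)
        self.logvar = nn.Linear(hidden, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(), nn.Linear(hidden, in_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)   # reparameterization trick
        return self.dec(z), mu, logvar

model = VAE()
x = torch.randn(32, 1024)            # stand-in for vibration-signal segments
recon, mu, logvar = model(x)
recon_loss = nn.functional.mse_loss(recon, x)
kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())     # KL(q(z|x) || N(0, I))
loss = recon_loss + kl
loss.backward()
```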
47

Armand, Atiampo Kodjo, Gokou Hervé Fabrice Diédié, and N’Takpé Tchimou Euloge. "Super-tokens Auto-encoders for image compression and reconstruction in IoT applications." International Journal of Advances in Scientific Research and Engineering 10, no. 01 (2024): 29–46. http://dx.doi.org/10.31695/ijasre.2024.1.4.

Abstract:
New telecommunications networks are enabling powerful AI applications for smart cities and transport. These applications require real-time processing of large amounts of media data. Sending data to the cloud for processing is very difficult due to latency and energy constraints. Lossy compression can help, but traditional codecs may not provide enough quality or be efficient enough for resource-constrained devices. This paper proposes a new image compression and processing approach based on variational auto-encoders (VAEs). This VAE-based method aims to efficiently compress images while still
48

Wu, Xing, Guangyuan Ma, Meng Lin, Zijia Lin, Zhongyuan Wang, and Songlin Hu. "ConTextual Masked Auto-Encoder for Dense Passage Retrieval." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 4 (2023): 4738–46. http://dx.doi.org/10.1609/aaai.v37i4.25598.

Abstract:
Dense passage retrieval aims to retrieve the relevant passages of a query from a large corpus based on dense representations (i.e., vectors) of the query and the passages. Recent studies have explored improving pre-trained language models to boost dense retrieval performance. This paper proposes CoT-MAE (ConTextual Masked Auto-Encoder), a simple yet effective generative pre-training method for dense passage retrieval. CoT-MAE employs an asymmetric encoder-decoder architecture that learns to compress the sentence semantics into a dense vector through self-supervised and context-supervised maske
49

Bai, Haoyue, Haofeng Zhang, and Qiong Wang. "Dual discriminative auto-encoder network for zero shot image recognition." Journal of Intelligent & Fuzzy Systems 40, no. 3 (2021): 5159–70. http://dx.doi.org/10.3233/jifs-201920.

Abstract:
Zero Shot learning (ZSL) aims to use the information of seen classes to recognize unseen classes, which is achieved by transferring knowledge of the seen classes from the semantic embeddings. Since the domains of the seen and unseen classes do not overlap, most ZSL algorithms often suffer from domain shift problem. In this paper, we propose a Dual Discriminative Auto-encoder Network (DDANet), in which visual features and semantic attributes are self-encoded by using the high dimensional latent space instead of the feature space or the low dimensional semantic space. In the embedded latent spac
50

Tong, Jinyu, Jin Luo, Haiyang Pan, Jinde Zheng, and Qing Zhang. "A Novel Cuckoo Search Optimized Deep Auto-Encoder Network-Based Fault Diagnosis Method for Rolling Bearing." Shock and Vibration 2020 (September 24, 2020): 1–12. http://dx.doi.org/10.1155/2020/8891905.

Abstract:
To enhance the performance of deep auto-encoder (AE) under complex working conditions, a novel deep auto-encoder network method for rolling bearing fault diagnosis is proposed in this paper. First, multiscale analysis is adopted to extract the multiscale features from the raw vibration signals of rolling bearing. Second, the sparse penalty term and contractive penalty term are used simultaneously to regularize the loss function of auto-encoder to enhance the feature learning ability of networks. Finally, the cuckoo search algorithm (CS) is used to find the optimal hyperparameters automatically