Academic literature on the topic 'Auto encoder'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Auto encoder.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Auto encoder"

1

Xie, Chengxin, Jingui Huang, Yongjiang Shi, Hui Pang, Liting Gao, and Xiumei Wen. "Ensemble graph auto-encoders for clustering and link prediction." PeerJ Computer Science 11 (January 22, 2025): e2648. https://doi.org/10.7717/peerj-cs.2648.

Full text
Abstract:
Graph auto-encoders are a crucial research area within graph neural networks, commonly employed for generating graph embeddings while minimizing errors in unsupervised learning. Traditional graph auto-encoders focus on reconstructing minimal graph data loss to encode neighborhood information for each node, yielding node embedding representations. However, existing graph auto-encoder models often overlook node representations and fail to capture contextual node information within the graph data, resulting in poor embedding effects. Accordingly, this study proposes the ensemble graph auto-encoder …
APA, Harvard, Vancouver, ISO, and other styles
2

Chen, Shuangshuang, and Wei Guo. "Auto-Encoders in Deep Learning—A Review with New Perspectives." Mathematics 11, no. 8 (2023): 1777. http://dx.doi.org/10.3390/math11081777.

Full text
Abstract:
Deep learning, which is a subfield of machine learning, has opened a new era for the development of neural networks. The auto-encoder is a key component of deep structure, which can be used to realize transfer learning and plays an important role in both unsupervised learning and non-linear feature extraction. By highlighting the contributions and challenges of recent research papers, this work aims to review state-of-the-art auto-encoder algorithms. Firstly, we introduce the basic auto-encoder as well as its basic concept and structure. Secondly, we present a comprehensive summarization of … (An illustrative sketch of the basic auto-encoder follows this entry.)
APA, Harvard, Vancouver, ISO, and other styles
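As a companion to this review entry, the sketch below illustrates the basic auto-encoder structure it surveys: an encoder that compresses the input into a low-dimensional code and a decoder trained to reconstruct the input from that code, with no labels involved. This is a generic illustration rather than code from the cited paper; the use of PyTorch and all layer sizes are assumptions.

```python
# Minimal auto-encoder sketch (PyTorch assumed); layer sizes are illustrative only.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, in_dim=784, code_dim=32):
        super().__init__()
        # Encoder: compress the input to a low-dimensional code.
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                     nn.Linear(128, code_dim))
        # Decoder: reconstruct the input from the code.
        self.decoder = nn.Sequential(nn.Linear(code_dim, 128), nn.ReLU(),
                                     nn.Linear(128, in_dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = AutoEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(64, 784)              # stand-in batch of inputs
for _ in range(100):                 # unsupervised training: minimise reconstruction error
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), x)
    loss.backward()
    opt.step()
```

The same skeleton underlies the sparse, denoising, and variational variants discussed in such reviews; they differ mainly in the penalty added to the reconstruction loss or in how the code is produced.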
3

Theunissen, Carl Daniel, Steven Martin Bradshaw, Lidia Auret, and Tobias Muller Louw. "One-Dimensional Convolutional Auto-Encoder for Predicting Furnace Blowback Events from Multivariate Time Series Process Data—A Case Study." Minerals 11, no. 10 (2021): 1106. http://dx.doi.org/10.3390/min11101106.

Full text
Abstract:
Modern industrial mining and mineral processing applications are characterized by large volumes of historical process data. Hazardous events occurring in these processes compromise process safety and therefore overall viability. These events are recorded in historical data and are often preceded by characteristic patterns. Reconstruction-based data-driven models are trained to reconstruct the characteristic patterns of hazardous event-preceding process data with minimal residuals, facilitating effective event prediction based on reconstruction residuals. This investigation evaluated one-dimensional …
APA, Harvard, Vancouver, ISO, and other styles
4

Bous, Frederik, and Axel Roebel. "A Bottleneck Auto-Encoder for F0 Transformations on Speech and Singing Voice." Information 13, no. 3 (2022): 102. http://dx.doi.org/10.3390/info13030102.

Full text
Abstract:
In this publication, we present a deep learning-based method to transform the f0 in speech and singing voice recordings. f0 transformation is performed by training an auto-encoder on the voice signal’s mel-spectrogram and conditioning the auto-encoder on the f0. Inspired by AutoVC/F0, we apply an information bottleneck to it to disentangle the f0 from its latent code. The resulting model successfully applies the desired f0 to the input mel-spectrograms and adapts the speaker identity when necessary, e.g., if the requested f0 falls out of the range of the source speaker/singer. Using the mean f0 … (A schematic sketch of f0 conditioning follows this entry.)
APA, Harvard, Vancouver, ISO, and other styles
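The conditioning idea described in the abstract above can be shown schematically: a narrow bottleneck code is concatenated with an externally supplied f0 value for each frame before decoding, so the code is encouraged to drop pitch information, and a different f0 contour can be injected at inference time. This is only a hedged sketch of that general mechanism; the frame dimension, bottleneck width, and plain feed-forward layers are invented for illustration and do not reproduce the cited architecture.

```python
# Schematic f0-conditioned bottleneck auto-encoder (illustrative shapes; PyTorch assumed).
import torch
import torch.nn as nn

class F0ConditionedAE(nn.Module):
    def __init__(self, n_mels=80, bottleneck=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_mels, 64), nn.ReLU(),
                                     nn.Linear(64, bottleneck))       # narrow bottleneck
        # The decoder sees the code plus one f0 value per frame.
        self.decoder = nn.Sequential(nn.Linear(bottleneck + 1, 64), nn.ReLU(),
                                     nn.Linear(64, n_mels))

    def forward(self, mel, f0):
        code = self.encoder(mel)                        # (frames, bottleneck)
        return self.decoder(torch.cat([code, f0], dim=-1))

model = F0ConditionedAE()
mel = torch.rand(200, 80)          # 200 mel-spectrogram frames (stand-in data)
f0 = torch.rand(200, 1)            # matching, normalised f0 contour
recon = model(mel, f0)             # training would minimise mse_loss(recon, mel);
                                   # at inference a *different* f0 curve is supplied to transform pitch
```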
5

Augustine, Jeena. "Emotion Recognition in Speech Using with SVM, DSVM and Auto-Encoder." International Journal for Research in Applied Science and Engineering Technology 9, no. 8 (2021): 1021–26. http://dx.doi.org/10.22214/ijraset.2021.37545.

Full text
Abstract:
Emotion recognition from speech is one of the most important subdomains in the field of signal processing. In this work, our system is a two-stage approach, namely feature extraction and a classification engine. Firstly, two sets of features are investigated: thirty-nine Mel-frequency cepstral coefficients (MFCC) and sixty-five MFCC features extracted based on the work of [20]. Secondly, we use the Support Vector Machine (SVM) as the main classifier engine, since it is the most common technique in the field of speech …
APA, Harvard, Vancouver, ISO, and other styles
6

Kollias, Georgios, Vasileios Kalantzis, Tsuyoshi Ide, Aurélie Lozano, and Naoki Abe. "Directed Graph Auto-Encoders." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (2022): 7211–19. http://dx.doi.org/10.1609/aaai.v36i7.20682.

Full text
Abstract:
We introduce a new class of auto-encoders for directed graphs, motivated by a direct extension of the Weisfeiler-Leman algorithm to pairs of node labels. The proposed model learns pairs of interpretable latent representations for the nodes of directed graphs, and uses parameterized graph convolutional network (GCN) layers for its encoder and an asymmetric inner product decoder. Parameters in the encoder control the weighting of representations exchanged between neighboring nodes. We demonstrate the ability of the proposed model to learn meaningful latent embeddings and achieve superior performance … (A sketch of the asymmetric decoder follows this entry.)
APA, Harvard, Vancouver, ISO, and other styles
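A hedged sketch of the decoding step mentioned above: each node carries a pair of embeddings (a "source" and a "target" representation), and a directed edge i→j is scored with an asymmetric inner product, so the score for i→j need not equal the score for j→i. The embedding dimension and random matrices are placeholders; the parameterized GCN encoder that would produce S and T is omitted.

```python
# Asymmetric inner-product decoder for directed edges (illustrative; NumPy only).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
n_nodes, dim = 5, 16
S = rng.normal(size=(n_nodes, dim))   # "source" embeddings (would come from the GCN encoder)
T = rng.normal(size=(n_nodes, dim))   # "target" embeddings (second member of each pair)

A_hat = sigmoid(S @ T.T)              # A_hat[i, j] = probability of a directed edge i -> j
print(np.allclose(A_hat, A_hat.T))    # False in general: the reconstruction is directed
```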
7

Karim, Ahmad M., Hilal Kaya, Mehmet Serdar Güzel, Mehmet R. Tolun, Fatih V. Çelebi, and Alok Mishra. "A Novel Framework Using Deep Auto-Encoders Based Linear Model for Data Classification." Sensors 20, no. 21 (2020): 6378. http://dx.doi.org/10.3390/s20216378.

Full text
Abstract:
This paper proposes a novel data classification framework, combining sparse auto-encoders (SAEs) and a post-processing system consisting of a linear system model relying on Particle Swarm Optimization (PSO) algorithm. All the sensitive and high-level features are extracted by using the first auto-encoder which is wired to the second auto-encoder, followed by a Softmax function layer to classify the extracted features obtained from the second layer. The two auto-encoders and the Softmax classifier are stacked in order to be trained in a supervised approach using the well-known backpropagation algorithm … (A minimal sketch of this stacking pattern follows this entry.)
APA, Harvard, Vancouver, ISO, and other styles
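A minimal sketch of the stacking pattern described above: two encoders wired in sequence, with a softmax classification layer on top of the second one, trained end to end with backpropagation. The PSO-based linear post-processing of the cited framework is not shown, and all layer sizes are illustrative assumptions.

```python
# Two stacked encoders followed by a softmax classifier (illustrative; PyTorch assumed).
import torch
import torch.nn as nn

encoder1 = nn.Sequential(nn.Linear(100, 64), nn.ReLU())   # first auto-encoder's encoder half
encoder2 = nn.Sequential(nn.Linear(64, 32), nn.ReLU())    # second encoder, fed by the first
classifier = nn.Linear(32, 10)                             # softmax is applied inside the loss

model = nn.Sequential(encoder1, encoder2, classifier)
x, y = torch.rand(16, 100), torch.randint(0, 10, (16,))
loss = nn.functional.cross_entropy(model(x), y)            # supervised fine-tuning objective
loss.backward()
```

In practice each encoder would first be pre-trained as part of its own auto-encoder before the stack is fine-tuned with labels, which is the usual reading of "stacked" in this setting.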
8

SSSN Usha Devi N, Pokkuluri Kiran Sree, Prasun Chakrabarti, and Martin Margala. "Auto encoders with Cellular Automata for Anomaly Detection." Journal of Electrical Systems 20, no. 1s (2024): 728–33. http://dx.doi.org/10.52783/jes.815.

Full text
Abstract:
This work combines auto encoders with cellular automata (CA) to present a novel hybrid strategy for anomaly identification. For feature learning, auto encoders are used to identify spatial patterns in the input data. Simultaneously, temporal and geographical dependencies are captured by CA, which improves the model's capacity to identify complicated anomalies. Training on spatially altered data, the auto encoder-CA hybrid model makes use of CA's temporal evolution to reveal dynamic patterns. Reconstruction errors between the input data and its decoded representation are computed to identify anomalies … (A sketch of reconstruction-error thresholding follows this entry.)
APA, Harvard, Vancouver, ISO, and other styles
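The anomaly-scoring step shared by this entry and the duplicate record that follows can be sketched as follows: train an auto-encoder on normal data only, compute the reconstruction error of new samples, and flag those whose error exceeds a threshold taken from the training errors. The cellular-automata component of the hybrid model is not reproduced; the data, network sizes, and 99th-percentile threshold are assumptions.

```python
# Reconstruction-error thresholding with a small auto-encoder (illustrative data; PyTorch assumed).
import torch
import torch.nn as nn

torch.manual_seed(0)
train = torch.randn(500, 8)                                        # "normal" data only
test = torch.cat([torch.randn(95, 8), torch.randn(5, 8) + 5.0])    # last 5 rows are anomalies

ae = nn.Sequential(nn.Linear(8, 3), nn.ReLU(), nn.Linear(3, 8))    # tiny auto-encoder
opt = torch.optim.Adam(ae.parameters(), lr=1e-2)
for _ in range(300):                                               # fit on normal data only
    opt.zero_grad()
    nn.functional.mse_loss(ae(train), train).backward()
    opt.step()

with torch.no_grad():
    err = lambda x: ((ae(x) - x) ** 2).mean(dim=1)                 # per-sample reconstruction error
    threshold = torch.quantile(err(train), 0.99)                   # tolerate ~1% false alarms
    flagged = torch.nonzero(err(test) > threshold).flatten()
print(flagged)                                                     # typically the indices 95..99
```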
9

Prasun Chakrabarti, Martin Margala, SSSN Usha Devi N., and Pokkuluri Kiran Sree. "Auto encoders with Cellular Automata for Anomaly Detection." Journal of Electrical Systems 20, no. 2s (2024): 227–32. http://dx.doi.org/10.52783/jes.1131.

Full text
Abstract:
This work combines auto encoders with cellular automata (CA) to present a novel hybrid strategy for anomaly identification. For feature learning, auto encoders are used to identify spatial patterns in the input data. Simultaneously, temporal and geographical dependencies are captured by CA, which improves the model's capacity to identify complicated anomalies. Training on spatially altered data, the auto encoder-CA hybrid model makes use of CA's temporal evolution to reveal dynamic patterns. Reconstruction errors between the input data and its decoded representation are computed to identify anomalies …
APA, Harvard, Vancouver, ISO, and other styles
10

K.N., Sunilkumar. "Security Framework for Physiological Signals Using Auto Encoder." Journal of Advanced Research in Dynamical and Control Systems 12, no. 01-Special Issue (2020): 583–92. http://dx.doi.org/10.5373/jardcs/v12sp1/20201107.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Auto encoder"

1

Zhou, Chong. "Robust Auto-encoders." Digital WPI, 2016. https://digitalcommons.wpi.edu/etd-theses/393.

Full text
Abstract:
In this thesis, our aim is to improve deep auto-encoders, an important topic in the deep learning area, which has shown connections to latent feature discovery models in the literature. Our model is inspired by robust principal component analysis, and we build an outlier filter on the top of basic deep auto-encoders. By adding this filter, we can split the input data X into two parts X=L+S, where the L could be better reconstructed by a deep auto-encoder and the S contains the anomalous parts of the original data X. Filtering out the anomalies increases the robustness of the standard auto-encoder … (A schematic of the X = L + S split follows this entry.)
APA, Harvard, Vancouver, ISO, and other styles
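The X = L + S split described in the abstract can be sketched as an alternating scheme: with the auto-encoder fixed, large residuals are moved into the sparse part S by soft-thresholding, and the auto-encoder is then retrained on the cleaned part L = X − S. This is a schematic of the general idea under assumed shapes and an assumed penalty weight, not the thesis's exact algorithm.

```python
# Alternating X = L + S split around an auto-encoder (schematic; PyTorch assumed).
import torch
import torch.nn as nn

X = torch.rand(256, 20)
X[:10] += 5.0                                    # a few grossly corrupted rows
ae = nn.Sequential(nn.Linear(20, 4), nn.ReLU(), nn.Linear(4, 20))
opt = torch.optim.Adam(ae.parameters(), lr=1e-2)
S = torch.zeros_like(X)
lam = 1.0                                        # sparsity level of S (assumed)

for outer in range(10):
    L = X - S                                    # the part the auto-encoder should explain
    for _ in range(50):                          # refit the auto-encoder on L
        opt.zero_grad()
        nn.functional.mse_loss(ae(L), L).backward()
        opt.step()
    residual = X - ae(L).detach()
    S = torch.sign(residual) * torch.clamp(residual.abs() - lam, min=0)   # soft-threshold into S
```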
2

Hudgins, Hayden. "Human Path Prediction using Auto Encoder LSTMs and Single Temporal Encoders." DigitalCommons@CalPoly, 2020. https://digitalcommons.calpoly.edu/theses/2119.

Full text
Abstract:
Due to automation, the world is changing at a rapid pace. Autonomous agents have become more common over the last several years and, as a result, have created a need for improved software to back them up. The most important aspect of this greater software is path prediction, as robots need to be able to decide where to move in the future. In order to accomplish this, a robot must know how to avoid humans, putting frame prediction at the core of many modern day solutions. A popular way to solve this complex problem of frame prediction is Auto Encoder LSTMs. Though there are many implementations …
APA, Harvard, Vancouver, ISO, and other styles
3

VEGA, PEDRO JUAN SOTO. "SINGLE SAMPLE FACE RECOGNITION FROM VIDEO VIA STACKED SUPERVISED AUTO-ENCODER." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2016. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=28102@1.

Full text
Abstract:
This dissertation proposes and evaluates strategies based on Stacked Supervised Auto-encoders (SSAE) for representing facial images in video-surveillance applications. The study focuses on face identification from a single sample per person (SSPP) in the gallery. Variations in pose, facial expression, illumination, and occlusion are addressed in two ways. First, the SSAE extracts attributes from the face images that …
APA, Harvard, Vancouver, ISO, and other styles
4

Guo, Xinyu. "Improved Feature-Selection for Classification Problems using Multiple Auto-Encoders." University of Cincinnati / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1522420335154157.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Carriço, Nuno Filipe Marques. "Transformer approaches on hyper-parameter optimization and anomaly detection with applications in stream tuning." Master's thesis, Universidade de Évora, 2022. http://hdl.handle.net/10174/31068.

Full text
Abstract:
Hyper-parameter Optimisation consists of finding the parameters that maximise a model’s performance. However, this mainly concerns processes in which the model shouldn’t change over time. Hence, how should an online model be optimised? For this, we pose the following research question: How and when should the model be optimised? For the optimisation part, we explore the transformer architecture as a function mapping data statistics into model parameters, by means of graph attention layers, together with reinforcement learning approaches, achieving state of the art results. On the other hand, …
APA, Harvard, Vancouver, ISO, and other styles
6

Stark, Love. "Outlier detection with ensembled LSTM auto-encoders on PCA transformed financial data." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-296161.

Full text
Abstract:
Financial institutions today generate a large amount of data, data that can contain interesting information to investigate to further the economic growth of said institution. There exists an interest in analyzing these points of information, especially if they are anomalous from the normal day-to-day work. However, to find these outliers is not an easy task and not possible to do manually due to the massive amounts of data being generated daily. Previous work to solve this has explored the usage of machine learning to find outliers in these financial datasets. Previous studies have shown that … (A sketch of a PCA + LSTM auto-encoder outlier scorer follows this entry.)
APA, Harvard, Vancouver, ISO, and other styles
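One member of such an ensemble could look roughly like the sketch below: the features are first projected onto a few principal components, an LSTM auto-encoder then reconstructs short windows of the reduced series, and the per-window reconstruction error serves as the outlier score. The window length, component count, network sizes, and single-model setup are assumptions for illustration, not the thesis's configuration.

```python
# One PCA + LSTM auto-encoder (the thesis ensembles several); illustrative sizes; PyTorch assumed.
import torch
import torch.nn as nn

class LSTMAutoEncoder(nn.Module):
    def __init__(self, n_features, hidden=16):
        super().__init__()
        self.enc = nn.LSTM(n_features, hidden, batch_first=True)
        self.dec = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, x):                        # x: (batch, time, features)
        _, (h, _) = self.enc(x)                  # summarise the window in the last hidden state
        z = h[-1].unsqueeze(1).repeat(1, x.size(1), 1)   # repeat the code at every time step
        y, _ = self.dec(z)
        return self.out(y)

# PCA step (top-k components), done with plain SVD for self-containment.
raw = torch.rand(1000, 30)                       # stand-in for the financial features
centred = raw - raw.mean(0)
comps = torch.linalg.svd(centred, full_matrices=False).Vh[:5]    # keep 5 components (assumed)
reduced = centred @ comps.T

windows = reduced.unfold(0, 20, 1).permute(0, 2, 1)     # sliding windows of 20 time steps
model = LSTMAutoEncoder(n_features=5)
score = (model(windows) - windows).pow(2).mean(dim=(1, 2))   # per-window outlier score
# Training (minimising score.mean() on windows known to be normal) is omitted here.
```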
7

Di, Ielsi Luca. "Analisi di serie temporali riguardanti dati energetici mediante architetture neurali profonde." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amslaurea.unibo.it/10504/.

Full text
Abstract:
This thesis concerns the study and use of deep neural architectures (specifically, stacked denoising auto-encoders) to define a forecasting model for time series. The implemented model was applied to industrial data from a real photovoltaic plant in order to predict electrical energy production from the time series that characterizes it. The results show that the deep neural structure helps to improve the forecasting performance of classical statistical tools such as …
APA, Harvard, Vancouver, ISO, and other styles
8

San, Martín Silva Gabriel Antonio. "Modelo para identificación de modos de falla de máquinas en base a variational Auto-Encoders." Tesis, Universidad de Chile, 2018. http://repositorio.uchile.cl/handle/2250/151666.

Full text
Abstract:
Within the field of mechanical engineering, one of the areas that has grown the most in recent years is physical asset management and reliability. Together with the ability to build more complex machines and systems, the problem of early fault detection in mechanical components becomes extremely important. At the same time, the increase in the availability of sensing technology has given engineers the ability to measure a large number of operational variables, such as pressure, temperature, or acoustic emissions, …
APA, Harvard, Vancouver, ISO, and other styles
9

Azad, Abul K. "Robust Speech Filter And Voice Encoder Parameter Estimation using the Phase-Phase Correlator." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/97221.

Full text
Abstract:
In recent years, linear prediction voice encoders have become very efficient in terms of computing execution time and channel bandwidth usage while providing, in the absence of impulsive noise, natural sounding synthetic speech signals. This good performance has been achieved via the use of a maximum likelihood parameter estimation of an auto-regressive model of order ten that best fits the speech signal under the assumption that the signal and the noise are Gaussian stochastic processes. However, this method breaks down in the presence of impulse noise, which is common in practice, resulting … (A minimal Yule-Walker AR(10) sketch follows this entry.)
APA, Harvard, Vancouver, ISO, and other styles
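The order-ten auto-regressive fit mentioned in the abstract is classically obtained from the Yule-Walker equations under the Gaussian assumption; a minimal sketch follows. This is the generic textbook estimator, not the dissertation's robust phase-phase-correlator method, and the synthetic AR(2) signal is only a stand-in for real speech.

```python
# Yule-Walker estimate of an AR(10) model for a speech-like signal (illustrative; NumPy/SciPy).
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(0)
x = rng.normal(size=8000)
for i in range(2, len(x)):                         # synthesise a crude AR(2) "signal"
    x[i] += 1.3 * x[i - 1] - 0.4 * x[i - 2]

p = 10                                             # model order used by such voice encoders
r = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(p + 1)]) / len(x)   # autocorrelation
a = np.linalg.solve(toeplitz(r[:p]), r[1:p + 1])   # Yule-Walker: R a = r  ->  AR coefficients
print(np.round(a, 3))                              # a[0], a[1] should sit near 1.3 and -0.4
```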
10

Damir, Krklješ. "Projektovanje kapacitivnog senzora ugla i ugaone brzine inkrementalnog tipa na fleksibilnim supstratima." Phd thesis, Univerzitet u Novom Sadu, Fakultet tehničkih nauka u Novom Sadu, 2016. https://www.cris.uns.ac.rs/record.jsf?recordId=101224&source=NDLTD&language=en.

Full text
Abstract:
This dissertation investigates the application of flexible electronics to capacitive angle and angular-velocity sensors of the absolute and incremental encoder type with a cylindrical structure. Two structures are considered, an absolute and an incremental encoder. The influence of mechanical imperfections on the capacitance function is analysed. Two prototypes of capacitive sensors were developed for static and dynamic testing of the sensor characteristics. Processing electronics with sensor auto-calibration were developed for the incremental-type sensor.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Auto encoder"

1

Chu, Yan, Haozhuang Li, Hui Ning, and Qingchao Zhao. "Wasserstein Graph Auto-Encoder." In Algorithms and Architectures for Parallel Processing. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-95384-3_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Choi, Myungwon, Pilsub Lee, Daegyeom Kim, et al. "Brain Network Decomposition by Auto Encoder (AE) and Graph Auto Encoder (GAE)." In Neural Information Processing. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-36708-4_47.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Rifai, Salah, Grégoire Mesnil, Pascal Vincent, et al. "Higher Order Contractive Auto-Encoder." In Machine Learning and Knowledge Discovery in Databases. Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-23783-6_41.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Chen, Fu-qiang, Yan Wu, Guo-dong Zhao, Jun-ming Zhang, Ming Zhu, and Jing Bai. "Contractive De-noising Auto-Encoder." In Intelligent Computing Theory. Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-09333-8_84.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Song, Chunfeng, Feng Liu, Yongzhen Huang, Liang Wang, and Tieniu Tan. "Auto-encoder Based Data Clustering." In Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications. Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-41822-8_15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Chen, Wei, Ruimin Hu, Xiaochen Wang, and Dengshi Li. "HRTF Representation with Convolutional Auto-encoder." In MultiMedia Modeling. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-37731-1_49.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Zu, Shuaishuai, Chuyu Wang, Yafei Liu, Jun Shen, and Li Li. "Contrastive Learning Augmented Graph Auto-Encoder." In Communications in Computer and Information Science. Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-8145-8_22.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Boyadzhiev, Teodor, Stela Dimitrova, and Simeon Tsvetanov. "Comparison of Auto-Encoder Training Algorithms." In Human Interaction, Emerging Technologies and Future Systems V. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-85540-6_88.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Du, Guowang, Lihua Zhou, Yudi Yang, Kevin Lü, and Lizhen Wang. "Multi-view Clustering via Multiple Auto-Encoder." In Web and Big Data. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-60259-8_45.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Ahmad, Mubashir, Jian Yang, Danni Ai, Syed Furqan Qadri, and Yongtian Wang. "Deep-Stacked Auto Encoder for Liver Segmentation." In Communications in Computer and Information Science. Springer Singapore, 2017. http://dx.doi.org/10.1007/978-981-10-7389-2_24.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Auto encoder"

1

Zhao, Long, Zonglong Yuan, and Yuhao Lou. "Cross Auto-Encoder for Inscription Character Inpainting." In 2024 International Joint Conference on Neural Networks (IJCNN). IEEE, 2024. http://dx.doi.org/10.1109/ijcnn60899.2024.10649951.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Kamal, Hesham, and Maggie Mashaly. "Improving Anomaly Detection in IDS with Hybrid Auto Encoder-SVM and Auto Encoder-LSTM Models Using Resampling Methods." In 2024 6th Novel Intelligent and Leading Emerging Sciences Conference (NILES). IEEE, 2024. http://dx.doi.org/10.1109/niles63360.2024.10753149.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Perera, Yuvin, Gustavo Batista, Wen Hu, Salil Kanhere, and Sanjay Jha. "SAfER: Simplified Auto-encoder for (Anomalous) Event Recognition." In 2024 20th International Conference on Distributed Computing in Smart Systems and the Internet of Things (DCOSS-IoT). IEEE, 2024. http://dx.doi.org/10.1109/dcoss-iot61029.2024.00041.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Katyal, Rohit, and Neeshu Sharma. "Speech Emotion Recognition System based on Auto Encoder." In 2024 IEEE 5th India Council International Subsections Conference (INDISCON). IEEE, 2024. http://dx.doi.org/10.1109/indiscon62179.2024.10744282.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Wazir, Jaipreet Kour, Pawan Kumar, Javaid Ahmad Sheikh, and Karan Nathwani. "Speech Separation in Time-Domain Using Auto-Encoder." In 2024 First International Conference for Women in Computing (InCoWoCo). IEEE, 2024. https://doi.org/10.1109/incowoco64194.2024.10863419.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Bora, Maheswar, Saurabh Atreya, Aritra Mukherjee, and Abhijit Das. "KDC-MAE: Knowledge Distilled Contrastive Mask Auto-Encoder." In 2025 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV). IEEE, 2025. https://doi.org/10.1109/wacv61041.2025.00731.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Zaibi, Dorra, Alya Alkameli, and Riadh Ksantini. "Plant Disease Clustering System using Dynamic Auto-Encoder." In 2025 International Wireless Communications and Mobile Computing (IWCMC). IEEE, 2025. https://doi.org/10.1109/iwcmc65282.2025.11059664.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Liu, Xinhai, Zhizhong Han, Xin Wen, Yu-Shen Liu, and Matthias Zwicker. "L2G Auto-encoder." In MM '19: The 27th ACM International Conference on Multimedia. ACM, 2019. http://dx.doi.org/10.1145/3343031.3350960.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Oliveira, Emerson V., David H. do Santos, and Luiz M. G. Goncalves. "Auto-regressive Multi-variable Auto-encoder." In Anais Estendidos da Conference on Graphics, Patterns and Images. Sociedade Brasileira de Computação - SBC, 2022. http://dx.doi.org/10.5753/sibgrapi.est.2022.23279.

Full text
Abstract:
Due to the global pandemic disclaimer caused by the SARS-COV-2 virus propagation, also called COVID-19, governments, institutions, and researchers have mobilized intending to try to mitigate the effects caused by the virus on society. Some approaches were proposed and applied to try to make predictions of the behavior of possible pandemics indicators. Among those methodologies, some models are data orientated, also known as data-driven, which had considerable prominence over the others. Artificial Neural Networks are a widely used model among data-driven models. In this work, we propose a novel …
APA, Harvard, Vancouver, ISO, and other styles
10

Choi, Youngwon, and Joong-Ho Won. "Ornstein Auto-Encoders." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/301.

Full text
Abstract:
We propose the Ornstein auto-encoder (OAE), a representation learning model for correlated data. In many interesting applications, data have nested structures. Examples include the VGGFace and MNIST datasets. We view such data as consisting of i.i.d. copies of a stationary random process, and seek a latent space representation of the observed sequences. This viewpoint necessitates a distance measure between two random processes. We propose to use Ornstein's d-bar distance, a process extension of Wasserstein's distance. We first show that the theorem by Bousquet et al. (2017) for Wasserstein auto-encoders …
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Auto encoder"

1

Green, Andre. LUNA Condition-Based Monitoring Update: Mahalanobis, SVD, and Auto-Encoder Comparison. Office of Scientific and Technical Information (OSTI), 2021. http://dx.doi.org/10.2172/1782605.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Green, Andre. LUNA Condition-Based Monitoring Update: Auto-encoders on Actuator Data. Office of Scientific and Technical Information (OSTI), 2021. http://dx.doi.org/10.2172/1781344.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Strauss, Charles, and Garrett Kenyon. Transfer Learning using Denoising Auto-Encoders for Cellular-Level Annotation of Tumor in Pathology Slides. Office of Scientific and Technical Information (OSTI), 2021. http://dx.doi.org/10.2172/1768438.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Liu, Tairan. Addressing Urban Traffic Congestion: A Deep Reinforcement Learning-Based Approach. Mineta Transportation Institute, 2025. https://doi.org/10.31979/mti.2025.2322.

Full text
Abstract:
In an innovative venture, the research team embarked on a mission to redefine urban traffic flow by introducing an automated way to manage traffic light timings. This project integrates two critical technologies, Deep Q-Networks (DQN) and Auto-encoders, into reinforcement learning, with the goal of making traffic smoother and reducing the all-too-common road congestion in simulated city environments. Deep Q-Networks (DQN) are a form of reinforcement learning algorithms that learns the best actions to take in various situations through trial and error. Auto-encoders, on the other hand, are tools … (A schematic of the auto-encoder + DQN pairing follows this report.)
APA, Harvard, Vancouver, ISO, and other styles
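The pairing described in the abstract can be sketched at the interface level: an auto-encoder compresses the raw traffic-state vector into a small latent code, and a Deep Q-Network maps that code to a value per signal phase, the greedy phase being selected at control time. The state dimension, network shapes, and random stand-in state are assumptions; the replay buffer, target network, and simulator coupling are omitted.

```python
# Auto-encoder state compression feeding a DQN for signal control (schematic; PyTorch assumed).
import torch
import torch.nn as nn

STATE_DIM, LATENT_DIM, N_PHASES = 120, 16, 4     # illustrative sizes

encoder = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, LATENT_DIM))
decoder = nn.Sequential(nn.Linear(LATENT_DIM, 64), nn.ReLU(), nn.Linear(64, STATE_DIM))
q_net = nn.Sequential(nn.Linear(LATENT_DIM, 64), nn.ReLU(), nn.Linear(64, N_PHASES))

state = torch.rand(1, STATE_DIM)                 # stand-in for lane queue/occupancy measurements
recon_loss = nn.functional.mse_loss(decoder(encoder(state)), state)   # auto-encoder objective

with torch.no_grad():
    q_values = q_net(encoder(state))             # Q-value per candidate signal phase
    action = int(q_values.argmax())              # greedy phase; epsilon-greedy during training
```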