Academic literature on the topic 'Encoder-Decoder Models'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the list of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Encoder-Decoder Models.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
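The citation styles listed above differ mainly in how the same metadata fields (authors, year, title, journal, volume, issue, pages, DOI) are ordered and punctuated. As a rough illustration only, here is a hypothetical Python helper (not this site's actual generator, and simplified to handle just the first author) that renders one journal-article record in APA and MLA style:

```python
def format_reference(meta, style="APA"):
    """Render journal-article metadata as a citation string in the given style.

    Simplified sketch: only the first author is formatted; real APA/MLA
    rules for multiple authors ("&", "et al.") are omitted for brevity.
    """
    first = meta["authors"][0]  # "Family, Given"
    if style == "APA":
        # APA 7: Family, G. (Year). Title. Journal, Volume(Issue), Pages. DOI
        family, given = first.split(", ")
        initials = " ".join(part[0] + "." for part in given.split())
        return (f"{family}, {initials} ({meta['year']}). {meta['title']}. "
                f"{meta['journal']}, {meta['volume']}({meta['issue']}), "
                f"{meta['pages']}. {meta['doi']}")
    if style == "MLA":
        # MLA 9: Family, Given. "Title." Journal, vol. V, no. N, Year, pp. P.
        return (f"{first}. \"{meta['title']}.\" {meta['journal']}, "
                f"vol. {meta['volume']}, no. {meta['issue']}, {meta['year']}, "
                f"pp. {meta['pages']}.")
    raise ValueError(f"unsupported style: {style}")

# Metadata taken from one of the entries below (Meng and Xu, Energies, 2019).
entry = {
    "authors": ["Meng, Zhaorui", "Xu, Xianze"],
    "year": 2019,
    "title": ("A Hybrid Short-Term Load Forecasting Framework with an "
              "Attention-Based Encoder-Decoder Network Based on Seasonal "
              "and Trend Adjustment"),
    "journal": "Energies",
    "volume": 12,
    "issue": 24,
    "pages": "4612",
    "doi": "http://dx.doi.org/10.3390/en12244612",
}

print(format_reference(entry, "APA"))
print(format_reference(entry, "MLA"))
```

The same record yields "Meng, Z. (2019). … Energies, 12(24), 4612. …" in APA style and "Meng, Zhaorui. \"…\" Energies, vol. 12, no. 24, 2019, pp. 4612." in MLA style, which is the kind of restyling the 'Add to bibliography' button performs.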
Journal articles on the topic "Encoder-Decoder Models"
Zhang, Wenbo, Xiao Li, Yating Yang, Rui Dong, and Gongxu Luo. "Keeping Models Consistent between Pretraining and Translation for Low-Resource Neural Machine Translation." Future Internet 12, no. 12 (2020): 215. http://dx.doi.org/10.3390/fi12120215.
Lamar, Annie K. "Generating Metrically Accurate Homeric Poetry with Recurrent Neural Networks." International Journal of Transdisciplinary Artificial Intelligence 2, no. 1 (2020): 1–25. http://dx.doi.org/10.35708/tai1869-126247.
Markovnikov, Nikita, and Irina Kipyatkova. "Encoder-decoder models for recognition of Russian speech." Information and Control Systems, no. 4 (October 4, 2019): 45–53. http://dx.doi.org/10.31799/1684-8853-2019-4-45-53.
Meng, Zhaorui, and Xianze Xu. "A Hybrid Short-Term Load Forecasting Framework with an Attention-Based Encoder–Decoder Network Based on Seasonal and Trend Adjustment." Energies 12, no. 24 (2019): 4612. http://dx.doi.org/10.3390/en12244612.
Dabre, Raj, and Atsushi Fujita. "Recurrent Stacking of Layers for Compact Neural Machine Translation Models." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 6292–99. http://dx.doi.org/10.1609/aaai.v33i01.33016292.
Oh, Jiun, and Yong-Suk Choi. "Reusing Monolingual Pre-Trained Models by Cross-Connecting Seq2seq Models for Machine Translation." Applied Sciences 11, no. 18 (2021): 8737. http://dx.doi.org/10.3390/app11188737.
Khanh, Trinh Le Ba, Duy-Phuong Dao, Ngoc-Huynh Ho, et al. "Enhancing U-Net with Spatial-Channel Attention Gate for Abnormal Tissue Segmentation in Medical Imaging." Applied Sciences 10, no. 17 (2020): 5729. http://dx.doi.org/10.3390/app10175729.
Zheng, Chuanpan, Xiaoliang Fan, Cheng Wang, and Jianzhong Qi. "GMAN: A Graph Multi-Attention Network for Traffic Prediction." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 01 (2020): 1234–41. http://dx.doi.org/10.1609/aaai.v34i01.5477.
Monteiro, João, Bruno Martins, Miguel Costa, and João M. Pires. "Geospatial Data Disaggregation through Self-Trained Encoder–Decoder Convolutional Models." ISPRS International Journal of Geo-Information 10, no. 9 (2021): 619. http://dx.doi.org/10.3390/ijgi10090619.
Özkaya Eren, Ayşegül, and Mustafa Sert. "Audio Captioning with Composition of Acoustic and Semantic Information." International Journal of Semantic Computing 15, no. 02 (2021): 143–60. http://dx.doi.org/10.1142/s1793351x21400018.