Academic literature on the topic 'Non-autoregressive Machine Translation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference papers, and other scholarly sources on the topic 'Non-autoregressive Machine Translation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Non-autoregressive Machine Translation"

1

Wang, Yiren, Fei Tian, Di He, Tao Qin, ChengXiang Zhai, and Tie-Yan Liu. "Non-Autoregressive Machine Translation with Auxiliary Regularization." Proceedings of the AAAI Conference on Artificial Intelligence 33, no. 01 (2019): 5377–84. http://dx.doi.org/10.1609/aaai.v33i01.33015377.

Abstract:
As a new neural machine translation approach, Non-Autoregressive machine Translation (NAT) has attracted attention recently due to its high efficiency in inference. However, the high efficiency has come at the cost of not capturing the sequential dependency on the target side of translation, which causes NAT to suffer from two kinds of translation errors: 1) repeated translations (due to indistinguishable adjacent decoder hidden states), and 2) incomplete translations (due to incomplete transfer of source side information via the decoder hidden states). In this paper, we propose to address thes
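The abstract above traces repeated translations to adjacent decoder hidden states that are too similar. As a rough illustration only (the PyTorch-style tensor layout, cosine measure, and loss weighting are assumptions, not the authors' regularizer), such a penalty on neighbouring NAT decoder states could be sketched as:

    import torch
    import torch.nn.functional as F

    def adjacent_similarity_penalty(hidden):
        """hidden: (batch, tgt_len, dim) decoder states of a hypothetical NAT model."""
        left, right = hidden[:, :-1, :], hidden[:, 1:, :]
        # Penalize high cosine similarity between neighbouring positions, which the
        # abstract associates with repeated translations; dissimilar states cost nothing.
        cos = F.cosine_similarity(left, right, dim=-1)
        return cos.clamp(min=0).mean()

    # Hypothetical usage: loss = cross_entropy + lambda_sim * adjacent_similarity_penalty(states)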
2

Wang, Shuheng, Shumin Shi, Heyan Huang, and Wei Zhang. "Improving Non-Autoregressive Machine Translation via Autoregressive Training." Journal of Physics: Conference Series 2031, no. 1 (2021): 012045. http://dx.doi.org/10.1088/1742-6596/2031/1/012045.

Abstract:
In recent years, non-autoregressive machine translation has attracted many researchers’ attention. Non-autoregressive translation (NAT) achieves faster decoding speed at the cost of translation accuracy compared with autoregressive translation (AT). Since NAT and AT models have similar architectures, a natural idea is to use the AT task to assist the NAT task. Previous works use curriculum learning or distillation to improve the performance of the NAT model. However, they are complex to follow and difficult to integrate into some new works. So in this paper, to make it easy, we introduce a mu
3

Ran, Qiu, Yankai Lin, Peng Li, and Jie Zhou. "Guiding Non-Autoregressive Neural Machine Translation Decoding with Reordering Information." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 15 (2021): 13727–35. http://dx.doi.org/10.1609/aaai.v35i15.17618.

Abstract:
Non-autoregressive neural machine translation (NAT) generates each target word in parallel and has achieved promising inference acceleration. However, existing NAT models still have a big gap in translation quality compared to autoregressive neural machine translation models due to the multimodality problem: the target words may come from multiple feasible translations. To address this problem, we propose a novel NAT framework ReorderNAT which explicitly models the reordering information to guide the decoding of NAT. Specifically, ReorderNAT utilizes deterministic and non-deterministic decoding s
4

Shao, Chenze, Jinchao Zhang, Jie Zhou, and Yang Feng. "Rephrasing the Reference for Non-autoregressive Machine Translation." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 11 (2023): 13538–46. http://dx.doi.org/10.1609/aaai.v37i11.26587.

Abstract:
Non-autoregressive neural machine translation (NAT) models suffer from the multi-modality problem that there may exist multiple possible translations of a source sentence, so the reference sentence may be inappropriate for the training when the NAT output is closer to other translations. In response to this problem, we introduce a rephraser to provide a better training target for NAT by rephrasing the reference sentence according to the NAT output. As we train NAT based on the rephraser output rather than the reference sentence, the rephraser output should fit well with the NAT output and not
5

Wang, Shuheng, Shumin Shi, and Heyan Huang. "Enhanced encoder for non-autoregressive machine translation." Machine Translation 35, no. 4 (2021): 595–609. http://dx.doi.org/10.1007/s10590-021-09285-x.

6

Shao, Chenze, Jinchao Zhang, Yang Feng, Fandong Meng, and Jie Zhou. "Minimizing the Bag-of-Ngrams Difference for Non-Autoregressive Neural Machine Translation." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 01 (2020): 198–205. http://dx.doi.org/10.1609/aaai.v34i01.5351.

Abstract:
Non-Autoregressive Neural Machine Translation (NAT) achieves significant decoding speedup through generating target words independently and simultaneously. However, in the context of non-autoregressive translation, the word-level cross-entropy loss cannot model the target-side sequential dependency properly, leading to its weak correlation with the translation quality. As a result, NAT tends to generate disfluent translations with over-translation and under-translation errors. In this paper, we propose to train NAT to minimize the Bag-of-Ngrams (BoN) difference between the model output and the
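To make the bag-of-n-grams idea concrete, here is a simplified sketch restricted to bigrams; treating output positions as independent and matching only the reference's bigrams are simplifying assumptions, not the paper's exact algorithm:

    from collections import Counter
    import torch

    def bon_bigram_l1(probs, reference):
        """probs: (tgt_len, vocab) position-wise NAT output distributions.
        reference: list of target token ids."""
        ref_bigrams = Counter(zip(reference, reference[1:]))
        matched = torch.tensor(0.0)
        for (a, b), ref_count in ref_bigrams.items():
            # Expected count of bigram (a, b), summed over all adjacent position pairs.
            expected = (probs[:-1, a] * probs[1:, b]).sum()
            matched = matched + torch.minimum(expected, torch.tensor(float(ref_count)))
        # L1 distance between the two bags: |BoN_model| + |BoN_ref| - 2 * matched mass.
        return (probs.size(0) - 1) + (len(reference) - 1) - 2 * matched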
7

Li, Feng, Jingxian Chen, and Xuejun Zhang. "A Survey of Non-Autoregressive Neural Machine Translation." Electronics 12, no. 13 (2023): 2980. http://dx.doi.org/10.3390/electronics12132980.

Abstract:
Non-autoregressive neural machine translation (NAMT) has received increasing attention recently in virtue of its promising acceleration paradigm for fast decoding. However, these splendid speedup gains are at the cost of accuracy, in comparison to its autoregressive counterpart. To close this performance gap, many studies have been conducted for achieving a better quality and speed trade-off. In this paper, we survey the NAMT domain from two new perspectives, i.e., target dependency management and training strategies arrangement. Proposed approaches are elaborated at length, involving five mod
8

Liu, Min, Yu Bao, Chengqi Zhao, and Shujian Huang. "Selective Knowledge Distillation for Non-Autoregressive Neural Machine Translation." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 11 (2023): 13246–54. http://dx.doi.org/10.1609/aaai.v37i11.26555.

Abstract:
Benefiting from the sequence-level knowledge distillation, the Non-Autoregressive Transformer (NAT) achieves great success in neural machine translation tasks. However, existing knowledge distillation has side effects, such as propagating errors from the teacher to NAT students, which may limit further improvements of NAT models and are rarely discussed in existing research. In this paper, we introduce selective knowledge distillation by introducing an NAT evaluator to select NAT-friendly targets that are of high quality and easy to learn. In addition, we introduce a simple yet effective progr
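As a loose illustration of the selection idea sketched above (the evaluator interface and the fixed threshold are hypothetical choices, not the paper's criterion), distilled training pairs might be filtered like this:

    def select_distillation_targets(pairs, evaluator, threshold=0.5):
        """pairs: iterable of (source, teacher_translation) from sequence-level distillation.
        evaluator: hypothetical scorer of how NAT-friendly a target is (higher is better)."""
        return [(src, tgt) for src, tgt in pairs if evaluator(src, tgt) >= threshold]

    # Hypothetical usage: data = select_distillation_targets(distilled_corpus, my_scorer)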
9

Du, Quan, Kai Feng, Chen Xu, Tong Xiao, and Jingbo Zhu. "Non-autoregressive neural machine translation with auxiliary representation fusion." Journal of Intelligent & Fuzzy Systems 41, no. 6 (2021): 7229–39. http://dx.doi.org/10.3233/jifs-211105.

Abstract:
Recently, many efforts have been devoted to speeding up neural machine translation models. Among them, the non-autoregressive translation (NAT) model is promising because it removes the sequential dependence on the previously generated tokens and parallelizes the generation process of the entire sequence. On the other hand, the autoregressive translation (AT) model in general achieves a higher translation accuracy than the NAT counterpart. Therefore, a natural idea is to fuse the AT and NAT models to seek a trade-off between inference speed and translation quality. This paper proposes an ARF-N
10

Zhang, Xinlu, Hongguan Wu, Beijiao Ma, and Zhengang Zhai. "Research on Low Resource Neural Machine Translation Based on Non-autoregressive Model." Journal of Physics: Conference Series 2171, no. 1 (2022): 012045. http://dx.doi.org/10.1088/1742-6596/2171/1/012045.

Abstract:
The autoregressive model can’t make full use of context information because of its single direction of generation, and the autoregressive method can’t perform parallel computation in decoding, which affects the efficiency of translation generation. Therefore, we explore a non-autoregressive translation generation method based on insertion and deletion in low-resource languages, which decomposes translation generation into three steps: deletion-insertion-generation. In this way, the dynamic editing of the translation can be realized in the iterative updating process. At the same time, ea
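The deletion-insertion-generation loop described above resembles edit-based decoders such as the Levenshtein Transformer. The following schematic is a sketch under stated assumptions (all three model heads are hypothetical callables, not the paper's code) of how the iterative refinement could be wired together:

    PLACEHOLDER = "<plh>"

    def iterative_edit_decode(source, delete_fn, insert_fn, fill_fn, rounds=3):
        """delete_fn -> keep/delete flags over the hypothesis; insert_fn -> number of
        placeholders for each of the len(hypothesis)+1 gaps; fill_fn -> tokens for the
        placeholders, in order. All three are assumed model components."""
        hypothesis = []                                  # start from an empty translation
        for _ in range(rounds):
            keep = delete_fn(source, hypothesis)         # step 1: deletion
            hypothesis = [tok for tok, k in zip(hypothesis, keep) if k]
            counts = insert_fn(source, hypothesis)       # step 2: placeholder insertion
            expanded = []
            for i in range(len(hypothesis) + 1):
                expanded.extend([PLACEHOLDER] * counts[i])
                if i < len(hypothesis):
                    expanded.append(hypothesis[i])
            filled = list(fill_fn(source, expanded))     # step 3: parallel token generation
            hypothesis = [filled.pop(0) if t == PLACEHOLDER else t for t in expanded]
        return hypothesis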

Dissertations / Theses on the topic "Non-autoregressive Machine Translation"

1

Xu, Jitao. "Writing in Two Languages: Neural Machine Translation as an Assistive Bilingual Writing Tool." PhD diss., Université Paris-Saclay, 2022. http://www.theses.fr/2022UPASG078.

Abstract:
In an increasingly globalized world, it is more and more common to have to express oneself in a foreign language or in several languages. However, for many people, speaking or writing in a foreign language is not an easy task. Machine translation tools can help generate texts in several languages. Thanks to recent advances in neural machine translation (NMT), translation technologies now provide usable translations in a growing number of contexts. Even so, it is not yet realistic to expect

Book chapters on the topic "Non-autoregressive Machine Translation"

1

Zhou, Long, Jiajun Zhang, Yang Zhao, and Chengqing Zong. "Non-autoregressive Neural Machine Translation with Distortion Model." In Natural Language Processing and Chinese Computing. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-60450-9_32.

2

Wang, Shuheng, Shumin Shi, and Heyan Huang. "Improving Non-autoregressive Machine Translation with Soft-Masking." In Natural Language Processing and Chinese Computing. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-88480-2_12.

3

Guo, Ziyue, Hongxu Hou, Nier Wu, and Shuo Sun. "Word-Level Error Correction in Non-autoregressive Neural Machine Translation." In Communications in Computer and Information Science. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-63820-7_83.

4

Wang, Yisong, Hongxu Hou, Shuo Sun, et al. "Dynamic Mask Curriculum Learning for Non-Autoregressive Neural Machine Translation." In Communications in Computer and Information Science. Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-7960-6_8.

5

Chen, Xinran, Sufeng Duan, and Gongshen Liu. "Improving Non-autoregressive Machine Translation with Error Exposure and Consistency Regularization." In Lecture Notes in Computer Science. Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-97-9437-9_19.

6

Liu, Yan, Longyue Wang, Zhaopeng Tu, and Deyi Xiong. "Reassessing Non-Autoregressive Neural Machine Translation with a Fine-Grained Error Taxonomy." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2024. http://dx.doi.org/10.3233/faia240946.

Abstract:
Non-autoregressive neural machine translation (NAT) has made remarkable progress since it is proposed. The performance of NAT in terms of BLEU has approached or even matched that of autoregressive neural machine translation (AT). However, other evaluation metrics show that NAT still lags behind. Unfortunately, these metrics only provide a numerical difference, and it is unclear how the translations produced by NAT differ from those produced by AT. In addition, the multimodality problem is always a significant issue in NAT. To assess whether NAT models are fully capable of solving the multimoda

Conference papers on the topic "Non-autoregressive Machine Translation"

1

Liu, Guojing, Xiangqian Ding, Huili Gong, Xiangyu Qu, Zhenyu Yang, and Kai Yan. "Non-Autoregressive Multimodal Machine Translation." In ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2025. https://doi.org/10.1109/icassp49660.2025.10889370.

2

Li, Haoran, Zhanming Jie, and Wei Lu. "Non-Autoregressive Machine Translation as Constrained HMM." In Findings of the Association for Computational Linguistics ACL 2024. Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.findings-acl.735.

3

You, WangJie, Pei Guo, Juntao Li, Kehai Chen, and Min Zhang. "Efficient Domain Adaptation for Non-Autoregressive Machine Translation." In Findings of the Association for Computational Linguistics ACL 2024. Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.findings-acl.810.

4

Zheng, Xiaoli, Yonghong Tian, Chang Ma, and Kangkang Sun. "A Study on Non-Autoregressive Mongolian-Chinese Neural Machine Translation for Multilingual Pre-Training." In 2024 7th International Conference on Machine Learning and Natural Language Processing (MLNLP). IEEE, 2024. https://doi.org/10.1109/mlnlp63328.2024.10800156.

5

Bao, Guangsheng, Zhiyang Teng, Hao Zhou, Jianhao Yan, and Yue Zhang. "Non-Autoregressive Document-Level Machine Translation." In Findings of the Association for Computational Linguistics: EMNLP 2023. Association for Computational Linguistics, 2023. http://dx.doi.org/10.18653/v1/2023.findings-emnlp.986.

6

Xu, Jitao, Josep Crego, and François Yvon. "Integrating Translation Memories into Non-Autoregressive Machine Translation." In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, 2023. http://dx.doi.org/10.18653/v1/2023.eacl-main.96.

7

Saharia, Chitwan, William Chan, Saurabh Saxena, and Mohammad Norouzi. "Non-Autoregressive Machine Translation with Latent Alignments." In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.emnlp-main.83.

8

Wei, Bingzhen, Mingxuan Wang, Hao Zhou, Junyang Lin, and Xu Sun. "Imitation Learning for Non-Autoregressive Neural Machine Translation." In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, 2019. http://dx.doi.org/10.18653/v1/p19-1125.

9

Qian, Lihua, Hao Zhou, Yu Bao, et al. "Glancing Transformer for Non-Autoregressive Neural Machine Translation." In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, 2021. http://dx.doi.org/10.18653/v1/2021.acl-long.155.

10

Shan, Yong, Yang Feng, and Chenze Shao. "Modeling Coverage for Non-Autoregressive Neural Machine Translation." In 2021 International Joint Conference on Neural Networks (IJCNN). IEEE, 2021. http://dx.doi.org/10.1109/ijcnn52387.2021.9533529.
