Academic literature on the topic 'Seq2seq model'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Seq2seq model.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Seq2seq model"

1

Aydın, Özlem, and Hüsein Kantarcı. "Türkçe Anahtar Sözcük Çıkarımında LSTM ve BERT Tabanlı Modellerin Karşılaştırılması." Bilgisayar Bilimleri ve Mühendisliği Dergisi 17, no. 1 (2024): 9–18. http://dx.doi.org/10.54525/bbmd.1454220.

Abstract:
Today, text-based data on the internet is increasing very rapidly, and being able to reach the right content containing the desired information within this big data is an important need. Knowing the keywords of a piece of content can have a positive effect in meeting this need. In this study, the aim was to identify keywords representing Turkish texts using natural language processing and deep learning models. The Turkish Labeled Text Corpus and the Text Summarization-Keyword Extraction Dataset were used together as the dataset. Two different deep learning models were presented in the study. First, a Sequence-to-Sequence (Seq2Seq) model with Long Short-Term Memory (LSTM) layers was designed. The other model is a Seq2Seq model built with BERT (Bidirectional Encoder Representations from Transformers). In the performance evaluation, the LSTM-layered Seq2Seq model reached an F-1 score of 0.38 on the ROUGE-1 metric, while the BERT-based Seq2Seq model achieved an F-1 score of 0.399 on ROUGE-1. As a result, the BERT-based Seq2Seq model, which is based on the transformer architecture, was observed to be relatively more successful than the LSTM-based Seq2Seq model.
2

Sak, Semih, and Mustafa Alper Akkaş. "6G'de Nesnelerin İnterneti Teknolojisinin Medikal Alandaki Gelişmeleri." Bilgisayar Bilimleri ve Mühendisliği Dergisi 17, no. 1 (2024): 1–8. http://dx.doi.org/10.54525/bbmd.1454186.

3

Palasundram, Kulothunkan, Nurfadhlina Mohd Sharef, Nurul Amelina Nasharuddin, Khairul Azhar Kasmiran, and Azreen Azman. "Sequence to Sequence Model Performance for Education Chatbot." International Journal of Emerging Technologies in Learning (iJET) 14, no. 24 (2019): 56. http://dx.doi.org/10.3991/ijet.v14i24.12187.

Abstract:
Chatbots for education have great potential to complement human educators and education administrators. For example, a chatbot can be an around-the-clock tutor that answers and clarifies any questions from students who may have missed class. A chatbot can be implemented either as rule-based or artificial-intelligence-based. However, unlike rule-based chatbots, artificial-intelligence-based chatbots can learn and become smarter over time, are more scalable, and have recently become the popular choice for chatbot researchers. The Recurrent Neural Network based Sequence-to-Sequence (Seq2Seq) model is one of the most commonly researched models for implementing artificial-intelligence chatbots and has shown great progress since its introduction in 2014. However, it is still in its infancy and has not been applied widely in educational chatbot development. Introduced originally for neural machine translation, the Seq2Seq model has been adapted for conversation modelling, including question-answering chatbots. However, in-depth research and analysis of the optimal settings of the various components of the Seq2Seq model for the natural answer generation problem is very limited. Additionally, no experiments and analysis have been conducted to understand how the Seq2Seq model handles variations in the questions posed to it when generating correct answers. Our experiments add to the empirical evaluations in the Seq2Seq literature and provide insights into these questions. Additionally, we provide insights on how a curated dataset can be developed and questions designed to train and test the performance of a Seq2Seq based question-answer model.
4

Palasundram, Kulothunkan, Nurfadhlina Mohd Sharef, Khairul Azhar Kasmiran, and Azreen Azman. "SEQ2SEQ++: A Multitasking-Based Seq2seq Model to Generate Meaningful and Relevant Answers." IEEE Access 9 (2021): 164949–75. http://dx.doi.org/10.1109/access.2021.3133495.

5

Jin, Weihua, Shijie Zhang, Bo Sun, Pengli Jin, and Zhidong Li. "An Analytical Investigation of Anomaly Detection Methods Based on Sequence to Sequence Model in Satellite Power Subsystem." Sensors 22, no. 5 (2022): 1819. http://dx.doi.org/10.3390/s22051819.

Abstract:
The satellite power subsystem is responsible for all power supply in a satellite, and is an important component of it. The system’s performance has a direct impact on the operations of other systems as well as the satellite’s lifespan. Sequence to sequence (seq2seq) learning has recently advanced, gaining even more power in evaluating complicated and large-scale data. The potential of the seq2seq model in detecting anomalies in the satellite power subsystem is investigated in this work. A seq2seq-based scheme is given, with a thorough comparison of different neural-network cell types and levels of data smoothness. Three specific approaches were created to evaluate the seq2seq model performance, taking into account the unsupervised learning mechanism. The findings reveal that a CNN-based seq2seq with attention model under suitable data-smoothing conditions has a better ability to detect anomalies in the satellite power subsystem.
6

Zhou, Lijian, Lijun Wang, Zhiang Zhao, Yuwei Liu, and Xiwu Liu. "A Seq2Seq Model Improved by Transcendental Learning and Imaged Sequence Samples for Porosity Prediction." Mathematics 11, no. 1 (2022): 39. http://dx.doi.org/10.3390/math11010039.

Abstract:
Since the accurate prediction of porosity is one of the critical factors for estimating oil and gas reservoirs, a novel porosity prediction method based on Imaged Sequence Samples (ISS) and a Sequence to Sequence (Seq2Seq) model fused by Transcendental Learning (TL) is proposed using well-logging data. Firstly, to investigate the correlation between logging features and porosity, the original logging features are normalized and selected by computing their correlation with porosity to obtain the point samples. Secondly, to better represent the depositional relations with depths, an ISS set is established by slidingly grouping sample points across depth, with the selected logging features in a row. Therefore, spatial relations among the features are established along the vertical and horizontal directions. Thirdly, since the Seq2Seq model can better extract the spatio-temporal information of the input data than the Bidirectional Gate Recurrent Unit (BGRU), the Seq2Seq model is introduced for the first time to address the logging data and predict porosity. The experimental results show that it can achieve prediction results superior to the state of the art. However, cumulative bias is likely to appear when using the Seq2Seq model. Motivated by teacher forcing, the idea of TL is proposed to be incorporated into the decoding process of Seq2Seq, named the TL-Seq2Seq model. The self-well and inter-well experimental results show that the proposed approach can significantly improve the accuracy of porosity prediction.
7

S., Keerthana, and Venkatesan R. "Abstractive Text Summarization using Seq2seq Model." International Journal of Computer Applications 176, no. 33 (2020): 24–26. http://dx.doi.org/10.5120/ijca2020920401.

8

Byambadorj, Zolzaya, Ryota Nishimura, Altangerel Ayush, and Norihide Kitaoka. "Normalization of Transliterated Mongolian Words Using Seq2Seq Model with Limited Data." ACM Transactions on Asian and Low-Resource Language Information Processing 20, no. 6 (2021): 1–19. http://dx.doi.org/10.1145/3464361.

Abstract:
The huge increase in social media use in recent years has resulted in new forms of social interaction, changing our daily lives. Due to increasing contact between people from different cultures as a result of globalization, there has also been an increase in the use of the Latin alphabet, and as a result a large amount of transliterated text is being used on social media. In this study, we propose a variety of character level sequence-to-sequence (seq2seq) models for normalizing noisy, transliterated text written in Latin script into Mongolian Cyrillic script, for scenarios in which there is a limited amount of training data available. We applied performance enhancement methods, which included various beam search strategies, N-gram-based context adoption, edit distance-based correction and dictionary-based checking, in novel ways to two basic seq2seq models. We experimentally evaluated these two basic models as well as fourteen enhanced seq2seq models, and compared their noisy text normalization performance with that of a transliteration model and a conventional statistical machine translation (SMT) model. The proposed seq2seq models improved the robustness of the basic seq2seq models for normalizing out-of-vocabulary (OOV) words, and most of our models achieved higher normalization performance than the conventional method. When using test data during our text normalization experiment, our proposed method which included checking each hypothesis during the inference period achieved the lowest word error rate (WER = 13.41%), which was 4.51% fewer errors than when using the conventional SMT method.
9

Gong, Gangjun, Xiaonan An, Nawaraj Kumar Mahato, Shuyan Sun, Si Chen, and Yafeng Wen. "Research on Short-Term Load Prediction Based on Seq2seq Model." Energies 12, no. 16 (2019): 3199. http://dx.doi.org/10.3390/en12163199.

Abstract:
Electricity load prediction is the primary basis on which power-related departments make logical and effective generation plans and scientific scheduling plans for the most effective power utilization. The perpetual evolution of deep learning has brought advanced and innovative concepts to short-term load prediction. Taking into consideration the temporal and nonlinear characteristics of power system load data, and further considering the impact of historical and future information on the current state, this paper proposes a Seq2seq short-term load prediction model based on a long short-term memory network (LSTM). Firstly, the periodic fluctuation characteristics of users' load data are analyzed, establishing a correlation of the load data so as to determine the model's order in the time series. Secondly, the specifications of the Seq2seq model are selected, and a combination of the Residual mechanism (Residual) and two Attention mechanisms (Attention) is developed. Then, comparing the predictive performance of the model under different types of Attention mechanism, this paper finally adopts the Seq2seq short-term load prediction model with Residual LSTM and the Bahdanau Attention mechanism. Eventually, the prediction model obtains better results when applied to the actual power system load data of a certain place. In order to validate the developed model, the Seq2seq model was compared with recurrent neural network (RNN), LSTM, and gated recurrent unit (GRU) algorithms. Last but not least, the performance indices were calculated: when training and testing the model with power system load data, it was noted that the root mean square error (RMSE) of the Seq2seq model was decreased by 6.61%, 16.95%, and 7.80% compared with RNN, LSTM, and GRU, respectively. In addition, a supplementary case study was carried out using data for a small power system, considering different weather conditions and user behaviors, in order to confirm the applicability and stability of the proposed model. The Seq2seq model for short-term load prediction demonstrates superiority in all areas, exhibiting better prediction and stable performance.
10

Geng, Xiaoran, Yue Ma, Wennian Cai, et al. "Evaluation of models for multi-step forecasting of hand, foot and mouth disease using multi-input multi-output: A case study of Chengdu, China." PLOS Neglected Tropical Diseases 17, no. 9 (2023): e0011587. http://dx.doi.org/10.1371/journal.pntd.0011587.

Abstract:
Background: Hand, foot and mouth disease (HFMD) is a public health concern that threatens the health of children. Accurate forecasting of HFMD cases multiple days ahead, and early detection of peaks in the number of cases followed by a timely response, are essential for HFMD prevention and control. However, many studies mainly predict future one-day incidence, which reduces the flexibility of prevention and control. Methods: We collected the daily number of HFMD cases among children aged 0–14 years in Chengdu from 2011 to 2017, as well as meteorological and air pollutant data for the same period. The LSTM, Seq2Seq, Seq2Seq-Luong and Seq2Seq-Shih models were used to perform multi-step prediction of HFMD through multi-input multi-output. We evaluated the models in terms of overall prediction performance and the time delay and intensity of peak detection. Results: From 2011 to 2017, HFMD in Chengdu showed seasonal trends that were consistent with temperature, air pressure, rainfall, relative humidity, and PM10. The Seq2Seq-Shih model achieved the best performance, with RMSE, sMAPE and PCC values of 13.943~22.192, 17.880~27.937, and 0.887~0.705 for the 2-day to 15-day predictions, respectively. Meanwhile, the Seq2Seq-Shih model is able to detect peaks in the next 15 days with a smaller time delay. Conclusions: The deep learning Seq2Seq-Shih model achieves the best performance in overall and peak prediction, and is applicable to HFMD multi-step prediction based on environmental factors.
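Most of the journal articles above build on the same underlying architecture: an encoder network that compresses the input sequence into hidden states and a decoder network, usually trained with teacher forcing, that generates the output sequence from them. As a quick orientation for readers new to the topic, the following is a minimal sketch of that pattern in Keras; it is a generic illustration, not the architecture of any specific paper listed here, and the vocabulary sizes, embedding size, and hidden size are placeholder values.

    # Minimal LSTM encoder-decoder (Seq2Seq) trained with teacher forcing:
    # the decoder receives the target sequence shifted right and learns to
    # predict the next token. All sizes below are illustrative placeholders.
    from tensorflow.keras import layers, Model

    SRC_VOCAB, TGT_VOCAB = 8000, 8000   # placeholder vocabulary sizes
    EMB_DIM, HIDDEN = 128, 256          # placeholder embedding / hidden sizes

    # Encoder: embed the source tokens and keep only the final LSTM states.
    enc_tokens = layers.Input(shape=(None,), dtype="int32", name="encoder_tokens")
    enc_emb = layers.Embedding(SRC_VOCAB, EMB_DIM, mask_zero=True)(enc_tokens)
    _, state_h, state_c = layers.LSTM(HIDDEN, return_state=True)(enc_emb)

    # Decoder: embed the shifted target tokens and run an LSTM initialized
    # with the encoder's final states.
    dec_tokens = layers.Input(shape=(None,), dtype="int32", name="decoder_tokens")
    dec_emb = layers.Embedding(TGT_VOCAB, EMB_DIM, mask_zero=True)(dec_tokens)
    dec_seq, _, _ = layers.LSTM(HIDDEN, return_sequences=True,
                                return_state=True)(dec_emb,
                                                   initial_state=[state_h, state_c])

    # Project each decoder step onto the target vocabulary.
    next_token_probs = layers.Dense(TGT_VOCAB, activation="softmax")(dec_seq)

    model = Model([enc_tokens, dec_tokens], next_token_probs)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    model.summary()

At inference time the decoder is run one step at a time, feeding its own prediction back in instead of the ground-truth token; the attention-based, BERT-based, and CNN-based variants cited above replace or extend the encoder and decoder but keep this encoder-decoder interface.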

Dissertations / Theses on the topic "Seq2seq model"

1

Song, Shiping. "Study of Semi-supervised Deep Learning Methods on Human Activity Recognition Tasks." Thesis, KTH, Robotik, perception och lärande, RPL, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-241366.

Abstract:
This project focuses on semi-supervised human activity recognition (HAR) tasks, in which the inputs are partly labeled time series data acquired from sensors such as accelerometer data, and the outputs are predefined human activities. Most state-of-the-art existing work in the HAR area is supervised, relying on fully labeled datasets. Since the cost of labeling the collected instances increases quickly with the growing scale of the data, semi-supervised methods are now widely required. This report proposes two semi-supervised methods and investigates how well they perform on a partly labeled dataset compared to the state-of-the-art supervised method. One of these methods is designed based on the state-of-the-art supervised method, DeepConvLSTM, together with the semi-supervised learning concept of self-training. The other is modified from a semi-supervised deep learning method, an LSTM initialized by a seq2seq autoencoder, which was first introduced for natural language processing. According to the experiments on a published dataset (Opportunity Activity Recognition dataset), both of these semi-supervised methods have better performance than the state-of-the-art supervised methods.
2

Granstedt, Jason Louis. "Data Augmentation with Seq2Seq Models." Thesis, Virginia Tech, 2017. http://hdl.handle.net/10919/78315.

Abstract:
Paraphrase sparsity is an issue that complicates the training process of question answering systems: syntactically diverse but semantically equivalent sentences can have significant disparities in predicted output probabilities. We propose a method for generating an augmented paraphrase corpus for the visual question answering system to make it more robust to paraphrases. This corpus is generated by concatenating two sequence to sequence models. In order to generate diverse paraphrases, we sample the neural network using diverse beam search. We evaluate the results on the standard VQA validation set. Our approach results in a significantly expanded training dataset and vocabulary size, but has slightly worse performance when tested on the validation split. Although not as fruitful as we had hoped, our work highlights additional avenues for investigation into selecting more optimal model parameters and the development of a more sophisticated paraphrase filtering algorithm. The primary contribution of this work is the demonstration that decent paraphrases can be generated from sequence to sequence models and the development of a pipeline for developing an augmented dataset.
3

Holcner, Jonáš. "Strojový překlad pomocí umělých neuronových sítí." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2018. http://www.nusl.cz/ntk/nusl-386020.

Abstract:
The goal of this thesis is to describe and build a system for neural machine translation. The system is built with recurrent neural networks, specifically an encoder-decoder architecture. The result is an nmt library used to conduct experiments with different model parameters. The results of the experiments are compared with a system built with the statistical tool Moses.
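Two of the theses above depend on how a trained seq2seq model is decoded at inference time (greedy decoding, beam search, or diverse beam search). The sketch below shows plain beam search in pure Python over an abstract step function that returns next-token log-probabilities for a given prefix; step_fn, the token ids, and the beam width are illustrative placeholders, and this is standard beam search, not the diverse variant used in the Granstedt thesis.

    # Standard beam search over a seq2seq decoder. `step_fn(prefix)` is assumed
    # to return a dict mapping each candidate next token to its log-probability
    # given the prefix; BOS_ID and EOS_ID are placeholder special-token ids.
    import math
    from typing import Callable, Dict, List, Tuple

    BOS_ID, EOS_ID = 1, 2  # placeholder special tokens

    def beam_search(step_fn: Callable[[List[int]], Dict[int, float]],
                    beam_width: int = 4,
                    max_len: int = 50) -> List[int]:
        # Each hypothesis is (token sequence, cumulative log-probability).
        beams: List[Tuple[List[int], float]] = [([BOS_ID], 0.0)]
        finished: List[Tuple[List[int], float]] = []

        for _ in range(max_len):
            candidates: List[Tuple[List[int], float]] = []
            for seq, score in beams:
                for token, logp in step_fn(seq).items():
                    new_seq, new_score = seq + [token], score + logp
                    if token == EOS_ID:
                        finished.append((new_seq, new_score))
                    else:
                        candidates.append((new_seq, new_score))
            if not candidates:
                break
            # Keep only the `beam_width` best unfinished hypotheses.
            candidates.sort(key=lambda pair: pair[1], reverse=True)
            beams = candidates[:beam_width]

        # Fall back to the best unfinished beam if nothing reached EOS.
        best = max(finished or beams, key=lambda pair: pair[1])
        return best[0]

    # Toy usage: a fake model that prefers token 7 and eventually emits EOS.
    if __name__ == "__main__":
        def toy_step(prefix: List[int]) -> Dict[int, float]:
            if len(prefix) >= 5:
                return {EOS_ID: math.log(0.9), 7: math.log(0.1)}
            return {7: math.log(0.6), 8: math.log(0.3), EOS_ID: math.log(0.1)}

        print(beam_search(toy_step))  # e.g. [1, 7, 7, 7, 7, 2]

Diverse beam search additionally splits the beam into groups and penalizes tokens already chosen by earlier groups at the same step, which is what produces the paraphrase diversity described in the second thesis above.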

Books on the topic "Seq2seq model"

1

Bansal, Ashish. Advanced Natural Language Processing with TensorFlow 2: Build effective real-world NLP applications using NER, RNNs, seq2seq models, Transformers, and more. Packt Publishing, 2021.


Book chapters on the topic "Seq2seq model"

1

Ji, Zhi. "A Multi-modal Seq2seq Chatbot Framework." In Proceeding of 2021 International Conference on Wireless Communications, Networking and Applications. Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-2456-9_24.

Abstract:
The pandemic has forced young people to stay away from school and friends, complete online learning at home, and live at home. Therefore, various mental illnesses such as anxiety and depression occur more frequently. A chatbot is a communication method that is more acceptable to young people. This paper proposes a multi-modal seq2seq chatbot framework, which divides the mental state of young people into different types through multi-modal information such as text and images entered by users in the chatbot. The model combines image description and text summarization modules with the attention mechanism in a multi-modal model to control related content in different modalities. Experiments on multi-modal data sets show that this method has 70% average accuracy, and real users of the system also believe that it has good judgment ability.
2

Liu, Lincong, Shijun Liu, and Li Pan. "Reservoir Flood Prediction Service Based on Seq2seq Model." In Communications in Computer and Information Science. Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-97-5760-2_11.

3

Ouahrani, Leila, and Djamal Bennouar. "Attentional Seq2Seq Model for Arabic Opinion Question Generation." In Lecture Notes in Networks and Systems. Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-82112-7_9.

4

Ning, Xingxing, Yupeng Zhao, and Jie Liu. "Learning Seq2Seq Model with Dynamic Schema Linking for NL2SQL." In Communications in Computer and Information Science. Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-8300-9_16.

5

Fan, Haoshen, Jie Wang, Bojin Zhuang, Shaojun Wang, and Jing Xiao. "A Hierarchical Attention Based Seq2Seq Model for Chinese Lyrics Generation." In PRICAI 2019: Trends in Artificial Intelligence. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-29894-4_23.

6

Adil, Tanmaya Garg, and Rajesh Kumar. "Development of a Chatbot Using LSTM Architecture and Seq2Seq Model." In Intelligent Systems and Smart Infrastructure. CRC Press, 2023. http://dx.doi.org/10.1201/9781003357346-75.

7

Pandya, Kshitiz, and Priyanka Patel. "PolyNMT: Neural Machine Translation Model with Seq2Seq Encoder-Decoder System." In Lecture Notes in Networks and Systems. Springer Nature Singapore, 2024. https://doi.org/10.1007/978-981-97-8090-7_22.

8

Deepa, R., T. Sree Sharmila, and R. Niruban. "An Efficient Deep Learning Based Seq2Seq Model for Abstractive Text Summarization." In Communications in Computer and Information Science. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-75164-6_1.

9

Xiang, Weidong, Vivekanandh Elangovan, and Sridhar Lakshmanan. "A Real-Time Seq2Seq Beamforming Prediction Model for C-V2X Links." In AI-enabled Technologies for Autonomous and Connected Vehicles. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-06780-8_18.

10

Lin, Yulong, Haipeng Chen, Xuebin Zhuang, and Kun Zeng. "Hypersonic Vehicle Maneuver Trajectory Multi-label Classification Based on Seq2Seq Model." In Lecture Notes in Electrical Engineering. Springer Nature Singapore, 2025. https://doi.org/10.1007/978-981-96-2232-0_6.

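Several of the chapters above (and the load-forecasting article in the journal list) attach an attention mechanism to the seq2seq decoder. For reference, the additive (Bahdanau-style) attention those titles refer to can be written as follows, where h_i are the encoder hidden states, s_{t-1} is the previous decoder state, and W_a, U_a, v_a are learned parameters; this is the textbook formulation, not the exact variant used in any particular chapter.

    % Additive (Bahdanau-style) attention for decoder step t over encoder states h_1 .. h_{T_x}
    e_{t,i} = v_a^{\top} \tanh\left( W_a s_{t-1} + U_a h_i \right)          % alignment score
    \alpha_{t,i} = \frac{\exp(e_{t,i})}{\sum_{j=1}^{T_x} \exp(e_{t,j})}     % normalized attention weight
    c_t = \sum_{i=1}^{T_x} \alpha_{t,i} \, h_i                              % context vector combined with the decoder state to predict the next token

Luong-style (multiplicative) attention, referenced in the HFMD forecasting article above, replaces the additive score with e_{t,i} = s_t^{\top} W_a h_i (or a plain dot product) while keeping the softmax weighting and the context vector computation unchanged.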

Conference papers on the topic "Seq2seq model"

1

V, Rahul Chiranjeevi, Senthil Pandi S, Keerthana H, Abhignya P, and Rajendiran M. "Chatbot for Government Schemes Using SEQ2SEQ Model." In 2024 Second International Conference on Advances in Information Technology (ICAIT). IEEE, 2024. http://dx.doi.org/10.1109/icait61638.2024.10690438.

2

Wu, Peng, Zhifu Liu, and Changzhe Wu. "Seq2Seq Model-Based Augmentation of Atmospheric Microwave Remote Sensing Data." In 2024 International Conference on Microwave and Millimeter Wave Technology (ICMMT). IEEE, 2024. http://dx.doi.org/10.1109/icmmt61774.2024.10672036.

3

Nerella, Pujitha, DivyaSri Pittu, Sandhya Undrakonda, Sasidhar Chennamsetty, Venkatrama Phani Kumar S, and Venkata Krishna Kishore K. "An Efficient Seq2Seq Model to Predict Question and Answer Response System." In 2024 Second International Conference on Advances in Information Technology (ICAIT). IEEE, 2024. http://dx.doi.org/10.1109/icait61638.2024.10690343.

4

Lindemann, Matthias, Alexander Koller, and Ivan Titov. "SIP: Injecting a Structural Inductive Bias into a Seq2Seq Model by Simulation." In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.acl-long.355.

5

Lu, Zhenhai, Kun Xu, Hansheng Wang, and Ke Du. "A Seq2Seq-LSTM-Attention Model for Ionospheric foF2 Prediction in the Middle Latitude Region." In 2024 9th International Conference on Signal and Image Processing (ICSIP). IEEE, 2024. http://dx.doi.org/10.1109/icsip61881.2024.10671459.

6

Wang, Keyue. "Emotion measurement method based on improved temporal global principal component analysis and Seq2Seq model." In 2024 4th International Signal Processing, Communications and Engineering Management Conference (ISPCEM). IEEE, 2024. https://doi.org/10.1109/ispcem64498.2024.00073.

7

Chun, Sun Sim, Jung Ho Lee, Ju-Il Jeon, Jin Ah Kang, and Young-Su Cho. "Extended LTE Based Fingerprinting Positioning for Emergency Applications by Utilizing Seq2seq Model with Beam-Search Inference." In 37th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+ 2024). Institute of Navigation, 2024. http://dx.doi.org/10.33012/2024.19930.

8

Zhang, Mengyuan, Yingjie Tian, Yun Su, et al. "Short-term Forecasting for Charging Pile and Photovoltaic Loads Based on LSTM-Seq2seq Model in Extreme Weather." In 2024 8th International Conference on Smart Grid and Smart Cities (ICSGSC). IEEE, 2024. https://doi.org/10.1109/icsgsc62639.2024.10813768.

9

Setiawan, Bambang Abdi, Ema Utami, and Anggit Dwi Hartanto. "Banjarese Chatbot Using Seq2Seq Model." In 2021 4th International Conference on Information and Communications Technology (ICOIACT). IEEE, 2021. http://dx.doi.org/10.1109/icoiact53268.2021.9563915.

10

"Keyphrase Generation with a Seq2seq Model." In 2019 the 9th International Workshop on Computer Science and Engineering. WCSE, 2019. http://dx.doi.org/10.18178/wcse.2019.06.107.

