
Journal articles on the topic 'Text-to-text transfer transformer'

Consult the top 50 journal articles for your research on the topic 'Text-to-text transfer transformer.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Ay, Betul, Fatih Ertam, Guven Fidan, and Galip Aydin. "Turkish abstractive text document summarization using text to text transfer transformer." Alexandria Engineering Journal 68 (April 2023): 1–13. http://dx.doi.org/10.1016/j.aej.2023.01.008.

2

Alipour, Neda, and Serdar Aydın. "Abstractive summarization using multilingual text-to-text transfer transformer for the Turkish text." IAES International Journal of Artificial Intelligence (IJ-AI) 14, no. 2 (2025): 1587–96. https://doi.org/10.11591/ijai.v14.i2.pp1587-1596.

Abstract:
Today, with the increase in text data, the application of automatic techniques such as automatic text summarization, which is one of the most critical natural language processing (NLP) tasks, has attracted even more attention and led to more research in this area. Nowadays, with the developments in deep learning, pre-trained sequence-to-sequence (text-to-text transfer transformer (T5) and bidirectional encoder representations from transformers (BERT) algorithm) encoder-decoder models are used to obtain the most advanced results. However, most of the studies were done in the English language. Wit
4

Itsnaini, Qurrota A’yuna, Mardhiya Hayaty, Andriyan Dwi Putra, and Nidal A. M. Jabari. "Abstractive Text Summarization using Pre-Trained Language Model "Text-to-Text Transfer Transformer (T5)"." ILKOM Jurnal Ilmiah 15, no. 1 (2023): 124–31. http://dx.doi.org/10.33096/ilkom.v15i1.1532.124-131.

Abstract:
Automatic Text Summarization (ATS) is one of the utilizations of technological sophistication in terms of text processing assisting humans in producing a summary or key points of a document in large quantities. We use Indonesian language as objects because there are few resources in NLP research using Indonesian language. This paper utilized PLTMs (Pre-Trained Language Models) from the transformer architecture, namely T5 (Text-to-Text Transfer Transformer) which has been completed previously with a larger dataset. Evaluation in this study was measured through comparison of the ROUGE (Recall-Or
5

Teng, Zhaopu. "Abstractive summarization of COVID-19 with transfer text-to-text transformer." Applied and Computational Engineering 2, no. 1 (2023): 232–38. http://dx.doi.org/10.54254/2755-2721/2/20220520.

Abstract:
As a classic problem of Natural Language Processing, summarization provides convenience for studies, research, and daily life. The performance of generation summarization by Natural Language Processing techniques has attracted considerable attention. Meanwhile, COVID-19, a global explosion event, has led to the emergence of a large number of articles and research. The wide variety of articles makes it a perfect realization object for summarization generation tasks. This paper designed and implemented experiments by fine tuning T5 model to get an abstract summarization of COVID-19 literatures.
6

Chomphooyod, Peerawat, Atiwong Suchato, Nuengwong Tuaycharoen, and Proadpran Punyabukkana. "English grammar multiple-choice question generation using Text-to-Text Transfer Transformer." Computers and Education: Artificial Intelligence 5 (2023): 100158. http://dx.doi.org/10.1016/j.caeai.2023.100158.

7

Chaudhari, Nikita, Deepali Vora, Payal Kadam, Vaishali Khairnar, Shruti Patil, and Ketan Kotecha. "Towards efficient knowledge extraction: Natural language processing-based summarization of research paper introductions." IAES International Journal of Artificial Intelligence (IJ-AI) 14, no. 1 (2025): 680–91. https://doi.org/10.11591/ijai.v14.i1.pp680-691.

Abstract:
Academic and research papers serve as valuable platforms for disseminating expertise and discoveries to diverse audiences. The growing volume of academic papers, with nearly 7 million new publications annually, presents a formidable challenge for students and researchers alike. Consequently, the development of research paper summarization tools has become crucial to distilling crucial insights efficiently. This study examines the effectiveness of pre-trained models like text-to-text transfer transformer (T5), bidirectional encoder representations from transformers (BERT), bidirectional and aut
8

Phakmongkol, Puri, and Peerapon Vateekul. "Enhance Text-to-Text Transfer Transformer with Generated Questions for Thai Question Answering." Applied Sciences 11, no. 21 (2021): 10267. http://dx.doi.org/10.3390/app112110267.

Abstract:
Question Answering (QA) is a natural language processing task that enables the machine to understand a given context and answer a given question. There are several QA research trials containing high resources of the English language. However, Thai is one of the languages that have low availability of labeled corpora in QA studies. According to previous studies, while the English QA models could achieve more than 90% of F1 scores, Thai QA models could obtain only 70% in our baseline. In this study, we aim to improve the performance of Thai QA models by generating more question-answer pairs with
9

Yani, Mohammad, Nur Siti Khodijah, Rendi Rendi, and Muhamad Mustamiin. "Aplikasi Peringkas Teks Bahasa Indonesia Menggunakan Model Text-to-Text Transfer Transformer (T5)." IKRA-ITH Informatika : Jurnal Komputer dan Informatika 9, no. 2 (2024): 78–86. http://dx.doi.org/10.37817/ikraith-informatika.v9i2.4392.

Abstract:
In the digital era, information can be easily accessed through various available media such as search engines, academic repositories, social media, news portals, and websites. However, the information available is often presented in lengthy textual form. For instance, information about 'Joko Widodo' on the Wikipedia page contains at least 8,716 characters of text. Such a long text can be challenging to extract its main points efficiently. Several studies have been conducted on the development of text summarization applications. Nevertheless, a summarization application that accepts input from variou
10

Zhao, Qingqing, Yuhan Xia, Yunfei Long, Ge Xu, and Jia Wang. "Leveraging sensory knowledge into Text-to-Text Transfer Transformer for enhanced emotion analysis." Information Processing & Management 62, no. 1 (2025): 103876. http://dx.doi.org/10.1016/j.ipm.2024.103876.

11

Winda Puspitasari, Dani Ramdani, and Aditya Muhamad Maulana. "IndoT5 (Text-to-Text Transfer Transformer) Algorithm for Paraphrasing Indonesian Language Islamic Sermon Manuscripts." Khazanah Journal of Religion and Technology 2, no. 2 (2025): 63–73. https://doi.org/10.15575/kjrt.v2i2.1093.

Abstract:
The development of automatic paraphrase systems for Indonesian is increasingly relevant as the need for natural language processing applications grows. This study focuses on applying the IndoT5 (Text-to-Text Transfer Transformer) algorithm to build an automatic paraphrase system for Indonesian. The system is evaluated using the BLEU, ROUGE, and METEOR metrics to measure the similarity between the paraphrases generated by the model and the desired target texts. The evaluation results show a BLEU score of 0.28, a ROUGE-1 of 0.59, a ROUGE-2 of 0.40, a ROUGE-L of 0.55, and a M
12

Suhendar, Hilman, Cepy Slamet, and Undang Syaripudin. "Analisis Sentimen Hasil Transkripsi Audio Berbahasa Indonesia Menggunakan T5 (Text-to-Text Transfer Transformer)." SMATIKA JURNAL 15, no. 1 (2025): 115–25. https://doi.org/10.32664/smatika.v15i01.1521.

Abstract:
In the digital era, sentiment analysis has become an important tool for understanding public opinion, especially for data originating from digital media such as video. However, speech-based sentiment analysis in Indonesian is still rarely performed. This study aims to develop a T5 model for sentiment analysis of Indonesian text produced from speech conversion using speech-to-text technology. The main advantage of the T5 model lies in its ability to handle long texts, understand natural-language context, and adapt its training to specific tasks such as sentiment analys
13

Hwang, Myeong-Ha, Jikang Shin, Hojin Seo, Jeong-Seon Im, Hee Cho, and Chun-Kwon Lee. "Ensemble-NQG-T5: Ensemble Neural Question Generation Model Based on Text-to-Text Transfer Transformer." Applied Sciences 13, no. 2 (2023): 903. http://dx.doi.org/10.3390/app13020903.

Abstract:
Deep learning chatbot research and development is exploding recently to offer customers in numerous industries personalized services. However, human resources are used to create a learning dataset for a deep learning chatbot. In order to augment this, the idea of neural question generation (NQG) has evolved, although it has restrictions on how questions can be expressed in different ways and has a finite capacity for question generation. In this paper, we propose an ensemble-type NQG model based on the text-to-text transfer transformer (T5). Through the proposed model, the number of generated
14

Alikhashashneh, Enas, Hedaia Alsawan, Khalid M. O. Nahar, et al. "Unified Transformer Framework for Automated Cyberbullying Detection." International Journal of Cloud Applications and Computing 15, no. 1 (2025): 1–29. https://doi.org/10.4018/ijcac.386166.

Abstract:
Cyberbullying is a fast-growing public-health hazard, demanding reliable, real-time detection of abusive language online. This study presents a unified transformer framework that compares bidirectional encoder representations from transformers, generative pre-trained transformer-2 and text-to-text transfer transformer (T5) on the 90,356-message Mendeley Cyber-Bullying corpus. A shared pipeline normalises text, removes stop-words, and using T5, augments minority classes to curb imbalance. Models are fine-tuned under identical splits (70% train/15% val/15% test, 15 epochs) and scored with accura
15

S, Tarun. "Bridging Languages through Images: A Multilingual Text-to-Image Synthesis Approach." International Journal of Scientific Research in Engineering and Management 8, no. 5 (2024): 1–5. http://dx.doi.org/10.55041/ijsrem33773.

Abstract:
This research investigates the challenges posed by the predominant focus on English language text-to-image generation (TTI) because of the lack of annotated image caption data in other languages. The resulting inequitable access to TTI technology in non-English-speaking regions motivates the research of multilingual TTI (mTTI) and the potential of neural machine translation (NMT) to facilitate its development. The study presents two main contributions. Firstly, a systematic empirical study employing a multilingual multi-modal encoder evaluates standard cross-lingual NLP methods applied to mTTI
16

Nandana, Suresh S. S., S. A. Parvathy, Dev Sumitha, R. Vinita, and E. S. Smitha. "APPLICATION OF T5 FOR QUESTION ANSWER GENERATION AND QUIZ SYSTEM." Research and Applications of Web Development and Design 7, no. 3 (2024): 12–22. https://doi.org/10.5281/zenodo.12747841.

Abstract:
The Transformer in NLP is a novel architecture that aims to solve sequence-to-sequence tasks while handling long-range dependencies with ease. Transformer-based approaches, such as the Text-To-Text Transfer Transformer (T5), have significantly advanced question answering (QA) systems. Google Research introduced T5, which provides a common framework for several activities related to natural language processing, such as QA creation. This article examines in detail the state of QA generation systems that use T5, examining approaches, developments, obstacles, and potenti
17

Ma, Da, Xingyu Chen, Ruisheng Cao, Zhi Chen, Lu Chen, and Kai Yu. "Relation-Aware Graph Transformer for SQL-to-Text Generation." Applied Sciences 12, no. 1 (2021): 369. http://dx.doi.org/10.3390/app12010369.

Abstract:
Generating natural language descriptions for structured representation (e.g., a graph) is an important yet challenging task. In this work, we focus on SQL-to-text, a task that maps a SQL query into the corresponding natural language question. Previous work represents SQL as a sparse graph and utilizes a graph-to-sequence model to generate questions, where each node can only communicate with k-hop nodes. Such a model will degenerate when adapted to more complex SQL queries due to the inability to capture long-term and the lack of SQL-specific relations. To tackle this problem, we propose a rela
18

Alruily, Meshrif, Abdul Manaf Fazal, Ayman Mohamed Mostafa, and Mohamed Ezz. "Automated Arabic Long-Tweet Classification Using Transfer Learning with BERT." Applied Sciences 13, no. 6 (2023): 3482. http://dx.doi.org/10.3390/app13063482.

Abstract:
Social media platforms like Twitter are commonly used by people interested in various activities, interests, and subjects that may cover their everyday activities and plans, as well as their thoughts on religion, technology, or the products they use. In this paper, we present bidirectional encoder representations from transformers (BERT)-based text classification model, ARABERT4TWC, for classifying the Arabic tweets of users into different categories. This work aims to provide an enhanced deep-learning model that can automatically classify the robust Arabic tweets of different users. In our pr
19

Honda, Kosuke, Masaki Kurematsu, Hamido Fujita, and Ali Selamat. "Multi-Task Learning for Scene Text Image Super-Resolution with Multiple Transformers." Electronics 11, no. 22 (2022): 3813. http://dx.doi.org/10.3390/electronics11223813.

Abstract:
Scene text image super-resolution aims to improve readability by recovering text shapes from low-resolution degraded text images. Although recent developments in deep learning have greatly improved super-resolution (SR) techniques, recovering text images with irregular shapes, heavy noise, and blurriness is still challenging. This is because networks with Convolutional Neural Network (CNN)-based backbones cannot sufficiently capture the global long-range correlations of text images or detailed sequential information about the text structure. In order to address this issue, this paper proposes
20

Zayyanu, Zaki Muhammad. "Revolutionising Translation Technology: A Comparative Study of Variant Transformer Models - BERT, GPT, and T5." Computer Science & Engineering: An International Journal 14, no. 3 (2024): 15–27. http://dx.doi.org/10.5121/cseij.2024.14302.

Abstract:
Recently, transformer-based models have reshaped the landscape of Natural Language Processing (NLP), particularly in the domain of Machine Translation (MT). This study explores three revolutionary transformer models: Bidirectional Encoder Representations from Transformers (BERT), Generative Pretrained Transformer (GPT), and Text-to-Text Transfer Transformer (T5). The study delves into their architecture, capabilities, and applications in the context of translation technology. The study begins by discussing the evolution of machine translation from rule-based to statistical machine translation
21

Wu, Chunhua, Xiaolong Chen, and Xingbiao Li. "Mask Transformer: Unpaired Text Style Transfer Based on Masked Language." Applied Sciences 10, no. 18 (2020): 6196. http://dx.doi.org/10.3390/app10186196.

Abstract:
Currently, most text style transfer methods encode the text into a style-independent latent representation and decode it into new sentences with the target style. Due to the limitation of the latent representation, previous works can hardly get satisfactory target style sentence especially in terms of semantic remaining of the original sentence. We propose a “Mask and Generation” structure, which can obtain an explicit representation of the content of original sentence and generate the target sentence with a transformer. This explicit representation is a masked text that masks the words with t
22

Gupta, Manish, and Puneet Agrawal. "Compression of Deep Learning Models for Text: A Survey." ACM Transactions on Knowledge Discovery from Data 16, no. 4 (2022): 1–55. http://dx.doi.org/10.1145/3487045.

Abstract:
In recent years, the fields of natural language processing (NLP) and information retrieval (IR) have made tremendous progress thanks to deep learning models like Recurrent Neural Networks (RNNs), Gated Recurrent Units (GRUs) and Long Short-Term Memory (LSTMs) networks, and Transformer [121] based models like Bidirectional Encoder Representations from Transformers (BERT) [24], Generative Pre-training Transformer (GPT-2) [95], Multi-task Deep Neural Network (MT-DNN) [74], Extra-Long Network (XLNet) [135], Text-to-text transfer transformer (T5) [96], T-NLG [99], and GShard [64]. B
23

Baniata, Laith H., and Sangwoo Kang. "Transformer Text Classification Model for Arabic Dialects That Utilizes Inductive Transfer." Mathematics 11, no. 24 (2023): 4960. http://dx.doi.org/10.3390/math11244960.

Abstract:
In the realm of the five-category classification endeavor, there has been limited exploration of applied techniques for classifying Arabic text. These methods have primarily leaned on single-task learning, incorporating manually crafted features that lack robust sentence representations. Recently, the Transformer paradigm has emerged as a highly promising alternative. However, when these models are trained using single-task learning, they often face challenges in achieving outstanding performance and generating robust latent feature representations, especially when dealing with small datasets.
24

Gao, Liuxin. "MFE-transformer: Adaptive English text named entity recognition method based on multi-feature extraction and transformer." Computer Science and Information Systems, no. 00 (2024): 61. http://dx.doi.org/10.2298/csis240418061g.

Abstract:
English text named entity recognition aims to alleviate the problem of insufficient labeling data in the target domain. Existing methods usually use feature representation or model parameter sharing to realize cross-domain transfer of entity recognition capability, but there is still a lack of full utilization of structured knowledge in text sequences. Therefore, this paper proposes an adaptive English named text entity recognition method based on multi-feature extraction and transformer. Firstly, a bidirectional long term memory conditional random field entity recognition model based on BERT
25

George, Shini, and Srividhya V. "AUTOMATED SUMMARIZATION OF RESTAURANT REVIEWS USING HYBRID APPROACHES." ICTACT Journal on Soft Computing 12, no. 4 (2022): 2690–96. http://dx.doi.org/10.21917/ijsc.2022.0384.

Abstract:
The arena of automatic text summarization incorporates the paramount and relevant information from a large document. This research paper attempts at representing two hybrid models for automatic text summarization. Extractive summarization followed by an abstractive summarization, is the strategy which is adopted in this paper to produce an informative and concise summary. The LexRank algorithm is used for extractive summarization, while BART (Bidirectional and Auto Regressive Transformers) and T5 (Text-To-Text Transfer Transformer) are used for abstractive summarization. BART and T5 are advance
26

Shi, Yukai, Sen Zhang, Chenxing Zhou, Xiaodan Liang, Xiaojun Yang, and Liang Lin. "GTAE: Graph Transformer–Based Auto-Encoders for Linguistic-Constrained Text Style Transfer." ACM Transactions on Intelligent Systems and Technology 12, no. 3 (2021): 1–16. http://dx.doi.org/10.1145/3448733.

Abstract:
Non-parallel text style transfer has attracted increasing research interests in recent years. Despite successes in transferring the style based on the encoder-decoder framework, current approaches still lack the ability to preserve the content and even logic of original sentences, mainly due to the large unconstrained model space or too simplified assumptions on latent embedding space. Since language itself is an intelligent product of humans with certain grammars and has a limited rule-based model space by its nature, relieving this problem requires reconciling the model capacity of deep neur
27

Kumar, V., L. Rasouliyan, A. G. Althoff, S. Long, C. Zema, and M. B. Rao. "MSR43 Extracting Severity Markers from Unstructured Clinical Data of Congestive Heart Failure Patients Using a Pretrained Text-to-Text Transfer Transformer Model." Value in Health 25, no. 7 (2022): S526. http://dx.doi.org/10.1016/j.jval.2022.04.1250.

29

Lin, Chen, Steven Bethard, Dmitriy Dligach, Farig Sadeque, Guergana Savova, and Timothy A. Miller. "Does BERT need domain adaptation for clinical negation detection?" Journal of the American Medical Informatics Association 27, no. 4 (2020): 584–91. http://dx.doi.org/10.1093/jamia/ocaa001.

Abstract:
Introduction: Classifying whether concepts in an unstructured clinical text are negated is an important unsolved task. New domain adaptation and transfer learning methods can potentially address this issue. Objective: We examine neural unsupervised domain adaptation methods, introducing a novel combination of domain adaptation with transformer-based transfer learning methods to improve negation detection. We also want to better understand the interaction between the widely used bidirectional encoder representations from transformers (BERT) system and domain adaptation methods. Materials
30

Tan, Yee Fan, Tee Connie, Michael Kah Ong Goh, and Andrew Beng Jin Teoh. "A Pipeline Approach to Context-Aware Handwritten Text Recognition." Applied Sciences 12, no. 4 (2022): 1870. http://dx.doi.org/10.3390/app12041870.

Abstract:
Despite concerted efforts towards handwritten text recognition, the automatic location and transcription of handwritten text remain a challenging task. Text detection and segmentation methods are often prone to errors, affecting the accuracy of the subsequent recognition procedure. In this paper, a pipeline that locates texts on a page and recognizes the text types, as well as the context of the texts within the detected region, is proposed. Clinical receipts are used as the subject of study. The proposed model is comprised of an object detection neural network that extracts text sequences pre
31

Wang, Kaijie, Tiejun Wang, Xiaoran Guo, Kui Xu, and Jiao Wu. "Thangka Image—Text Matching Based on Adaptive Pooling Layer and Improved Transformer." Applied Sciences 14, no. 2 (2024): 807. http://dx.doi.org/10.3390/app14020807.

Abstract:
Image–text matching is a research hotspot in the multimodal task of integrating image and text processing. In order to solve the difficult problem of associating image and text data in the multimodal knowledge graph of Thangka, we propose an image and text matching method based on the Visual Semantic Embedding (VSE) model. The method introduces an adaptive pooling layer to improve the feature extraction capability of semantic associations between Thangka images and texts. We also improved the traditional Transformer architecture by combining bidirectional residual concatenation and mask attent
32

Balabin, Helena, Charles Tapley Hoyt, Colin Birkenbihl, et al. "STonKGs: a sophisticated transformer trained on biomedical text and knowledge graphs." Bioinformatics 38, no. 6 (2022): 1648–56. http://dx.doi.org/10.1093/bioinformatics/btac001.

Abstract:
Motivation: The majority of biomedical knowledge is stored in structured databases or as unstructured text in scientific publications. This vast amount of information has led to numerous machine learning-based biological applications using either text through natural language processing (NLP) or structured data through knowledge graph embedding models. However, representations based on a single modality are inherently limited. Results: To generate better representations of biological knowledge, we propose STonKGs, a Sophisticated Transformer trained on biomedical text and Knowledge Grap
33

Kim, Seonho, Juntae Yoon, and Ohyoung Kwon. "Biomedical Relation Extraction Using Dependency Graph and Decoder-Enhanced Transformer Model." Bioengineering 10, no. 5 (2023): 586. http://dx.doi.org/10.3390/bioengineering10050586.

Abstract:
The identification of drug–drug and chemical–protein interactions is essential for understanding unpredictable changes in the pharmacological effects of drugs and mechanisms of diseases and developing therapeutic drugs. In this study, we extract drug-related interactions from the DDI (Drug–Drug Interaction) Extraction-2013 Shared Task dataset and the BioCreative ChemProt (Chemical–Protein) dataset using various transfer transformers. We propose BERTGAT that uses a graph attention network (GAT) to take into account the local structure of sentences and embedding features of nodes under the self-
34

Huang, Junyang, and Xiaoxiao Lin. "Self-supervised text de-stylization based on BERT." Applied and Computational Engineering 50, no. 1 (2024): 234–44. http://dx.doi.org/10.54254/2755-2721/50/20241579.

Abstract:
Recent advancements in Natural Language Processing (NLP) have ushered in a new era of textual style transfer (TST), a domain aimed at altering textual attributes such as tone and sentiment while preserving the content's essence. This study introduces a creative framework that employs a dual-component architecture consisting of a classifier and a generator to achieve text de-stylization, particularly sentiment neutralization. The classifier, built upon the Bidirectional Encoder Representations from Transformers (BERT) model, serves as a dynamic loss function guiding the generator, constructed o
35

Zaman, Farooq, Munaza Afzal, Pin Shen Teh, et al. "Intelligent Abstractive Summarization of Scholarly Publications with Transfer Learning." Journal of Informatics and Web Engineering 3, no. 3 (2024): 256–70. http://dx.doi.org/10.33093/jiwe.2024.3.3.16.

Abstract:
Intelligent abstractive text summarization of scholarly publications refers to machine-generated summaries that capture the essential ideas of an article while maintaining semantic coherence and grammatical accuracy. As information continues to grow at an overwhelming rate, text summarization has emerged as a critical area of research. In the past, summarization of scientific publications predominantly relied on extractive methods. These approaches involve selecting key sentences or phrases directly from the original document to create a summary or generate a suitable title. Although extractiv
36

El-Deeb, Reham Hesham, Walid Abdelmoez, and Nashwa El-Bendary. "Enhancing E-Recruitment Recommendations Through Text Summarization Techniques." Information 16, no. 4 (2025): 333. https://doi.org/10.3390/info16040333.

Abstract:
This research aims to enhance e-recruitment systems using text summarization techniques and pretrained large language models (LLMs). A job recommender system is built with integrated text summarization. The text summarization techniques that are selected are BART, T5 (Text-to-Text Transfer Transformer), BERT, and Pegasus. Content-based recommendation is the model chosen to be implemented. The LinkedIn Job Postings dataset is used. The evaluation of the text summarization techniques is performed using ROUGE-1, ROUGE-2, and ROUGE-L. The results of this approach deduce that the recommendation doe
37

Rahhal, Mohamad M. Al, Mohamed Abdelkader Bencherif, Yakoub Bazi, Abdullah Alharbi, and Mohamed Lamine Mekhalfi. "Contrasting Dual Transformer Architectures for Multi-Modal Remote Sensing Image Retrieval." Applied Sciences 13, no. 1 (2022): 282. http://dx.doi.org/10.3390/app13010282.

Full text
Abstract:
Remote sensing technology has advanced rapidly in recent years. Because of the deployment of quantitative and qualitative sensors, as well as the evolution of powerful hardware and software platforms, it powers a wide range of civilian and military applications. This in turn leads to the availability of large data volumes suitable for a broad range of applications such as monitoring climate change. Yet, processing, retrieving, and mining large data are challenging. Usually, content-based remote sensing (RS) image retrieval approaches rely on a query image to retrieve relevant images from the d
APA, Harvard, Vancouver, ISO, and other styles
38

Chouikhi, Hasna, and Mohammed Alsuhaibani. "Deep Transformer Language Models for Arabic Text Summarization: A Comparison Study." Applied Sciences 12, no. 23 (2022): 11944. http://dx.doi.org/10.3390/app122311944.

Full text
Abstract:
Large text documents are sometimes challenging to understand and time-consuming to extract vital information from. These issues are addressed by automatic text summarizing techniques, which condense lengthy texts while preserving their key information. Thus, the development of automatic summarization systems capable of fulfilling the ever-increasing demands of textual data becomes of utmost importance. It is even more vital with complex natural languages. This study explores five State-Of-The-Art (SOTA) Arabic deep Transformer-based Language Models (TLMs) in the task of text summarization by a
APA, Harvard, Vancouver, ISO, and other styles
39

Xu, Miaomiao, Jiang Zhang, Lianghui Xu, Wushour Silamu, and Yanbing Li. "Collaborative Encoding Method for Scene Text Recognition in Low Linguistic Resources: The Uyghur Language Case Study." Applied Sciences 14, no. 5 (2024): 1707. http://dx.doi.org/10.3390/app14051707.

Full text
Abstract:
Current research on scene text recognition primarily focuses on languages with abundant linguistic resources, such as English and Chinese. In contrast, there is relatively limited research dedicated to low-resource languages. Advanced methods for scene text recognition often employ Transformer-based architectures. However, the performance of Transformer architectures is suboptimal when dealing with low-resource datasets. This paper proposes a Collaborative Encoding Method for Scene Text Recognition in the low-resource Uyghur language. The encoding framework comprises three main modules: the Fi
APA, Harvard, Vancouver, ISO, and other styles
40

Fiok, Krzysztof, Waldemar Karwowski, Edgar Gutierrez, Mohammad Reza Davahli, Maciej Wilamowski, and Tareq Ahram. "Revisiting Text Guide, a Truncation Method for Long Text Classification." Applied Sciences 11, no. 18 (2021): 8554. http://dx.doi.org/10.3390/app11188554.

Full text
Abstract:
The quality of text classification has greatly improved with the introduction of deep learning, and more recently, models using attention mechanism. However, to address the problem of classifying text instances that are longer than the length limit adopted by most of the best performing transformer models, the most common method is to naively truncate the text so that it meets the model limit. Researchers have proposed other approaches, but they do not appear to be popular, because of their high computational cost and implementation complexity. Recently, another method called Text Guide has be
APA, Harvard, Vancouver, ISO, and other styles
41

Pandraju, Saichandra, and Sakthi Ganesh Mahalingam. "Answer-Aware Question Generation from Tabular and Textual Data using T5." International Journal of Emerging Technologies in Learning (iJET) 16, no. 18 (2021): 256. http://dx.doi.org/10.3991/ijet.v16i18.25121.

Full text
Abstract:
Automatic Question Generation (AQG) systems are applied in a myriad of domains to generate questions from sources such as documents, images, and knowledge graphs, to name a few. With the rising interest in such AQG systems, it is equally important to recognize structured data like tables while generating questions from documents. In this paper, we propose a single model architecture for question generation from tables along with text using the “Text-to-Text Transfer Transformer” (T5), a fully end-to-end model which does not rely on any intermediate planning steps, delexicalization, or copy mechanisms.
APA, Harvard, Vancouver, ISO, and other styles
42

Zeng, Chengbin, Yi Liu, and Chunli Song. "Rwin-FPN++: Rwin Transformer with Feature Pyramid Network for Dense Scene Text Spotting." Applied Sciences 12, no. 17 (2022): 8488. http://dx.doi.org/10.3390/app12178488.

Full text
Abstract:
Scene text spotting has made tremendous progress with the in-depth research on deep convolutional neural networks (DCNN). Previous approaches mainly focus on the spotting of arbitrary-shaped scene text, on which it is difficult to achieve satisfactory results on dense scene text containing various instances of bending, occlusion, and lighting. To address this problem, we propose an approach called Rwin-FPN++, which incorporates the long-range dependency merit of the Rwin Transformer into the feature pyramid network (FPN) to effectively enhance the functionality and generalization of FPN. Speci
APA, Harvard, Vancouver, ISO, and other styles
43

Hamza, Ouabiba, and Sniba Farah. "Disease Prediction Using NLP Techniques." ITM Web of Conferences 69 (2024): 03001. https://doi.org/10.1051/itmconf/20246903001.

Full text
Abstract:
This paper explores the application of the T5 (Text-To-Text Transfer Transformer) model, originating from the groundbreaking “Attention Is All You Need” concept and fine-tuned on a medical dataset, to predict diseases and symptoms from unstructured medical reports. By leveraging Natural Language Processing (NLP), the system offers automated analysis, enabling quicker and more accurate diagnoses based on symptoms provided by users. The fine-tuning process involved training the T5 model to adapt to the specific language and context of medical texts. The model’s performance is evaluated based on its
APA, Harvard, Vancouver, ISO, and other styles
44

Lee, Eunchan, Changhyeon Lee, and Sangtae Ahn. "Comparative Study of Multiclass Text Classification in Research Proposals Using Pretrained Language Models." Applied Sciences 12, no. 9 (2022): 4522. http://dx.doi.org/10.3390/app12094522.

Full text
Abstract:
Recently, transformer-based pretrained language models have demonstrated stellar performance in natural language understanding (NLU) tasks. For example, bidirectional encoder representations from transformers (BERT) have achieved outstanding performance through masked self-supervised pretraining and transformer-based modeling. However, the original BERT may only be effective for English-based NLU tasks, whereas its effectiveness for other languages such as Korean is limited. Thus, the applicability of BERT-based language models pretrained in languages other than English to NLU tasks based on t
APA, Harvard, Vancouver, ISO, and other styles
45

Salih, Mohammed I., Salim M. Mohammed, Asaad Kh Ibrahim, Omar M. Ahmed, and Lailan M. Haji. "Fine-Tuning BERT for Automated News Classification." Engineering, Technology & Applied Science Research 15, no. 3 (2025): 22953–59. https://doi.org/10.48084/etasr.10625.

Full text
Abstract:
Text classification is a fundamental task in Natural Language Processing (NLP) with a wide range of applications such as sentiment analysis, document classification and content recommendation. Traditional approaches like Naive Bayes (NB), Support Vector Machine (SVM) and Random Forest (RF) relied on feature engineering but lacked contextual understanding. Deep learning came into the picture for text classification with transformer models such as Bidirectional Encoder Representations from Transformers (BERT), which could understand contextual words bidirectionally. In this article, we utilize a
APA, Harvard, Vancouver, ISO, and other styles
46

Alshanqiti, Abdullah M., Sami Albouq, Ahmad B. Alkhodre, Abdallah Namoun, and Emad Nabil. "Employing a Multilingual Transformer Model for Segmenting Unpunctuated Arabic Text." Applied Sciences 12, no. 20 (2022): 10559. http://dx.doi.org/10.3390/app122010559.

Full text
Abstract:
Long unpunctuated texts containing complex linguistic sentences are a stumbling block to processing any low-resource languages. Thus, approaches that attempt to segment lengthy texts with no proper punctuation into simple candidate sentences are a vitally important preprocessing task in many hard-to-solve NLP applications. To this end, we propose a preprocessing solution for segmenting unpunctuated Arabic texts into potentially independent clauses. This solution consists of: (1) a punctuation detection model built on top of a multilingual BERT-based model, and (2) some generic linguistic rules
APA, Harvard, Vancouver, ISO, and other styles
47

Saeed, Muhammad, Naeem Ahmed, Danish Ali, et al. "In-depth Urdu Sentiment Analysis Through Multilingual BERT and Supervised Learning Approaches." IECE Transactions on Intelligent Systematics 1, no. 3 (2024): 161–75. http://dx.doi.org/10.62762/tis.2024.585616.

Full text
Abstract:
Sentiment analysis is the process of identifying and categorizing opinions expressed in a piece of text. It has been extensively studied for languages like English and Chinese but still needs to be explored for languages such as Urdu and Hindi. This paper presents an in-depth analysis of Urdu text using state-of-the-art supervised learning techniques and a transformer-based technique. We manually annotated and preprocessed the dataset from various Urdu blog websites to categorize the sentiments into positive, neutral, and negative classes. We utilize five machine learning classifiers: Support
APA, Harvard, Vancouver, ISO, and other styles
48

Lytvyn, Vasyl, Petro Pukach, Victoria Vysotska, Myroslava Vovk, and Nataliia Kholodna. "Identification and Correction of Grammatical Errors in Ukrainian Texts Based on Machine Learning Technology." Mathematics 11, no. 4 (2023): 904. http://dx.doi.org/10.3390/math11040904.

Full text
Abstract:
A machine learning model for correcting errors in Ukrainian texts has been developed. It was established that the neural network has the ability to correct simple sentences written in Ukrainian; however, the development of a full-fledged system requires the use of spell-checking using dictionaries and the checking of rules, both simple and those based on the result of parsing dependencies or other features. In order to save computing resources, a pre-trained BERT (Bidirectional Encoder Representations from Transformers) type neural network was used. Such neural networks have half as many parame
APA, Harvard, Vancouver, ISO, and other styles
49

Xu, Wenda, Michael Saxon, Misha Sra, and William Yang Wang. "Self-Supervised Knowledge Assimilation for Expert-Layman Text Style Transfer." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (2022): 11566–74. http://dx.doi.org/10.1609/aaai.v36i10.21410.

Full text
Abstract:
Expert-layman text style transfer technologies have the potential to improve communication between members of scientific communities and the general public. High-quality information produced by experts is often filled with difficult jargon laypeople struggle to understand. This is a particularly notable issue in the medical domain, where laymen are often confused by medical text online. At present, two bottlenecks interfere with the goal of building high-quality medical expert-layman style transfer systems: a dearth of pretrained medical-domain language models spanning both expert and layman t
APA, Harvard, Vancouver, ISO, and other styles
50

AlZahrani, Fetoun Mansour, and Maha Al-Yahya. "A Transformer-Based Approach to Authorship Attribution in Classical Arabic Texts." Applied Sciences 13, no. 12 (2023): 7255. http://dx.doi.org/10.3390/app13127255.

Full text
Abstract:
Authorship attribution (AA) is a field of natural language processing that aims to attribute text to its author. Although the literature includes several studies on Arabic AA in general, applying AA to classical Arabic texts has not gained similar attention. This study focuses on investigating recent Arabic pretrained transformer-based models in a rarely studied domain with limited research contributions: the domain of Islamic law. We adopt an experimental approach to investigate AA. Because no dataset has been designed specifically for this task, we design and build our own dataset using Isla
APA, Harvard, Vancouver, ISO, and other styles