
Journal articles on the topic 'BERT'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'BERT.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Bicca, Aline Brugalli, and Lezilda Carvalho Torgan. "Novos registros de Eunotia Ehrenberg (Eunotiaceae-Bacillariophyta) para o Estado do Rio Grande do Sul e Brasil." Acta Botanica Brasilica 23, no. 2 (2009): 427–35. http://dx.doi.org/10.1590/s0102-33062009000200014.

Abstract:
This work aims to present the morphological and/or structural and metric characteristics of 12 species of Eunotia (E. batavica A. Berg, E. deficiens Metz., Lange-Bert. & García-Rodr., E. genuflexa Nörpel-Sch., E. hepaticola Lange-Bert. & Wydrz., E. herzogii Krasske, E. mucophila (Lange-Bert., Nörpel-Sch. & Alles) Lange-Bert., E. pileus Ehr., E. pirla Carter & Flower, E. schwabei Krasske, E. subarcuatoides Alles, Nörpel-Sch. & Lange-Bert., E. transfuga Metz. & Lange-Bert. and E. yanomami Metz. & Lange-Bert.) found in the areas of the Lagoa do Casamento and
2

Smith, Rod. "Bert." Baffler 6 (November 1994): 90. http://dx.doi.org/10.1162/bflr.1994.6.90.

3

Nicolae, Dragoş Constantin, Rohan Kumar Yadav, and Dan Tufiş. "A Lite Romanian BERT: ALR-BERT." Computers 11, no. 4 (2022): 57. http://dx.doi.org/10.3390/computers11040057.

Abstract:
Large-scale pre-trained language representation and its promising performance in various downstream applications have become an area of interest in the field of natural language processing (NLP). There has been huge interest in further increasing the model’s size in order to outperform the best previously obtained performances. However, at some point, increasing the model’s parameters may lead to reaching its saturation point due to the limited capacity of GPU/TPU. In addition to this, such models are mostly available in English or a shared multilingual structure. Hence, in this paper, we prop
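The "lite" in ALR-BERT follows ALBERT, whose main parameter-saving devices are cross-layer weight sharing and a factorized embedding table. As a rough illustration of the second device, here is a minimal PyTorch sketch; the vocabulary and dimension sizes are assumptions chosen for the arithmetic, not values from the paper.

```python
import torch
import torch.nn as nn

class FactorizedEmbedding(nn.Module):
    """ALBERT-style factorized embedding: a V x E table followed by an E x H
    projection costs V*E + E*H parameters instead of V*H when E << H.
    Sizes below are illustrative assumptions."""
    def __init__(self, vocab_size=50000, embed_dim=128, hidden_dim=768):
        super().__init__()
        self.word_embed = nn.Embedding(vocab_size, embed_dim)  # V x E
        self.project = nn.Linear(embed_dim, hidden_dim)        # E x H
    def forward(self, token_ids):
        return self.project(self.word_embed(token_ids))

emb = FactorizedEmbedding()
full = 50000 * 768                  # parameters in an unfactorized V x H table
factored = 50000 * 128 + 128 * 768  # parameters in the factorized version
print(f"{full:,} vs {factored:,}")  # ~38.4M vs ~6.5M
```

With E = 128 the embedding shrinks from roughly 38.4M to 6.5M parameters, the kind of saving that lets a "lite" model train on limited GPU/TPU capacity.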
4

Yu, Daegon, Yongyeon Kim, Sangwoo Han, and Byung-Won On. "CLES-BERT: Contrastive Learning-based BERT Model for Automated Essay Scoring." Journal of Korean Institute of Information Technology 21, no. 4 (2023): 31–43. http://dx.doi.org/10.14801/jkiit.2023.21.4.31.

5

Angger Saputra, Revelin, and Yuliant Sibaroni. "Multilabel Hate Speech Classification in Indonesian Political Discourse on X using Combined Deep Learning Models with Considering Sentence Length." Jurnal Ilmu Komputer dan Informasi 18, no. 1 (2025): 113–25. https://doi.org/10.21609/jiki.v18i1.1440.

Abstract:
Hate speech, the public expression of hatred or offensive discourse targeting race, religion, gender, or sexual orientation, is widespread on social media. This study assesses BERT-based models for multi-label hate speech detection, emphasizing how text length affects model performance. The models tested include BERT, BERT-CNN, BERT-LSTM, BERT-BiLSTM, and BERT with two LSTM layers. Overall, BERT-BiLSTM achieved the highest score (82.00%) and the best performance on longer texts (83.20%), highlighting its ability to capture nuanced context. BERT-CNN excelled on shorter texts, achieving the hig
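The combined architectures the study compares all share one shape: a BERT encoder producing token vectors, a recurrent or convolutional head on top, and independent per-label outputs for multi-label prediction. A minimal PyTorch sketch of the BERT-BiLSTM variant follows; the IndoBERT checkpoint, hidden sizes, and mean pooling are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class BertBiLSTM(nn.Module):
    """Sketch of a BERT-BiLSTM multi-label classifier: BERT token embeddings
    feed a BiLSTM, whose pooled output drives per-label sigmoid scores.
    Checkpoint and dimensions are assumptions, not the paper's setup."""
    def __init__(self, num_labels=5, lstm_hidden=256):
        super().__init__()
        self.bert = AutoModel.from_pretrained("indobenchmark/indobert-base-p1")
        self.lstm = nn.LSTM(self.bert.config.hidden_size, lstm_hidden,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * lstm_hidden, num_labels)

    def forward(self, input_ids, attention_mask):
        tokens = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        seq_out, _ = self.lstm(tokens)
        pooled = seq_out.mean(dim=1)              # average over the sequence
        return torch.sigmoid(self.head(pooled))   # independent per-label scores
```

Training would pair the sigmoid outputs with a binary cross-entropy loss, since multi-label hate speech categories are not mutually exclusive.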
6

Xu, Huatao, Pengfei Zhou, Rui Tan, Mo Li, and Guobin Shen. "LIMU-BERT." GetMobile: Mobile Computing and Communications 26, no. 3 (2022): 39–42. http://dx.doi.org/10.1145/3568113.3568124.

Abstract:
Deep learning greatly empowers Inertial Measurement Unit (IMU) sensors for a wide range of sensing applications. Most existing works require substantial amounts of well-curated labeled data to train IMU-based sensing models, which incurs high annotation and training costs. Compared with labeled data, unlabeled IMU data are abundant and easily accessible. This article presents a novel representation learning model that can make use of unlabeled IMU data and extract generalized rather than task-specific features. With the representations learned via our model, task-specific models trained with li
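The self-supervised idea, learning from unlabeled IMU streams by reconstructing hidden portions of the signal, can be sketched compactly. The PyTorch fragment below is a generic masked-reconstruction setup, not LIMU-BERT's actual architecture; window length, model width, and the 15% masking rate are all assumptions.

```python
import torch
import torch.nn as nn

# A minimal sketch of masked-reconstruction pretraining on unlabeled IMU
# windows (the general idea behind LIMU-BERT, not its exact architecture).
class IMUEncoder(nn.Module):
    def __init__(self, channels=6, d_model=64, layers=2):
        super().__init__()
        self.proj = nn.Linear(channels, d_model)
        enc = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, num_layers=layers)
        self.recon = nn.Linear(d_model, channels)
    def forward(self, x):
        return self.recon(self.encoder(self.proj(x)))

model = IMUEncoder()
x = torch.randn(8, 120, 6)            # batch of 120-sample, 6-axis windows
mask = torch.rand(8, 120, 1) < 0.15   # hide ~15% of timesteps (assumed rate)
corrupted = x.masked_fill(mask, 0.0)
loss = ((model(corrupted) - x) ** 2 * mask).sum() / mask.sum()  # error on masked steps only
loss.backward()
```

After pretraining, the encoder's outputs serve as generalized features, and small task-specific heads can be trained on limited labeled data.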
7

Duffy, Dennis. "Knighting Bert." Ontario History 104, no. 2 (2012): 28. http://dx.doi.org/10.7202/1065436ar.

8

Fournier, Véronique. "Anne Bert." Cerveau & Psycho N° 93, no. 10 (2017): 62–64. http://dx.doi.org/10.3917/cerpsy.093.0062.

9

Cazalaà, Jean-Bernard. "Paul Bert." Anesthesiology 117, no. 6 (2012): 1244. http://dx.doi.org/10.1097/aln.0b013e31827ce191.

10

Skitol, Robert. "Bert Foer." Antitrust Bulletin 60, no. 2 (2015): 88–90. http://dx.doi.org/10.1177/0003603x15584607.

11

Mees, Barend M. E. "Bert Eikelboom." European Journal of Vascular and Endovascular Surgery 57, no. 3 (2019): 464. http://dx.doi.org/10.1016/j.ejvs.2018.12.001.

12

Erbrink, Jacobien. "Bert Keizer." Tijdschrift voor Ouderengeneeskunde 35, no. 3 (2010): 100–101. http://dx.doi.org/10.1007/bf03089855.

13

Torri Saldanha Coelho, Renata. "BERT HELLINGER." Alamedas 10, no. 2 (2023): 110–20. http://dx.doi.org/10.48075/ra.v10i2.30319.

Abstract:
This study sought to present the life history of Bert Hellinger. Bert Hellinger was born in 1925 and died in 2019, and is recognized worldwide as the creator of family constellations. Today the study of family constellations is entering the academic sphere, since it is already an integrative practice recognized by the Sistema Único de Saúde and also a form of conflict resolution within the judiciary. However, as the systemic paradigm itself proposes, it is impossible to approach a topic in isolation, without understanding the context in which it is embedded. Thus, the present work
14

Mahule, Shubhangi. "Advanced Product Helpfulness Detection Using BERT and LSTM." International Journal of Scientific Research in Engineering and Management 09, no. 06 (2025): 1–9. https://doi.org/10.55041/ijsrem50318.

Abstract:
In today’s global marketplace, consumers are exposed to a vast volume of product reviews across numerous platforms, making it increasingly difficult to identify genuinely helpful feedback. Reliable review insights are crucial not only for consumers making purchase decisions but also for businesses aiming to improve product quality, service, and overall customer satisfaction. This research introduces a machine learning framework designed to evaluate the helpfulness of product reviews by leveraging advanced natural language processing techniques. Utilizing the Amazon Fine Food Reviews
15

Kim, Kyungmo, Seongkeun Park, Jeongwon Min, et al. "Multifaceted Natural Language Processing Task–Based Evaluation of Bidirectional Encoder Representations From Transformers Models for Bilingual (Korean and English) Clinical Notes: Algorithm Development and Validation." JMIR Medical Informatics 12 (October 30, 2024): e52897-e52897. http://dx.doi.org/10.2196/52897.

Abstract:
Background: The bidirectional encoder representations from transformers (BERT) model has attracted considerable attention in clinical applications, such as patient classification and disease prediction. However, current studies have typically progressed to application development without a thorough assessment of the model’s comprehension of clinical context. Furthermore, limited comparative studies have been conducted on BERT models using medical documents from non–English-speaking countries. Therefore, the applicability of BERT models trained on English clinical notes to non-English c
16

Su, Jing, Qingyun Dai, Frank Guerin, and Mian Zhou. "BERT-hLSTMs: BERT and hierarchical LSTMs for visual storytelling." Computer Speech & Language 67 (May 2021): 101169. http://dx.doi.org/10.1016/j.csl.2020.101169.

17

Kaur, Kamaljit, and Parminder Kaur. "BERT-CNN: Improving BERT for Requirements Classification using CNN." Procedia Computer Science 218 (2023): 2604–11. http://dx.doi.org/10.1016/j.procs.2023.01.234.

18

Prakash, PKS, Srinivas Chilukuri, Nikhil Ranade, and Shankar Viswanathan. "RareBERT: Transformer Architecture for Rare Disease Patient Identification using Administrative Claims." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 1 (2021): 453–60. http://dx.doi.org/10.1609/aaai.v35i1.16122.

Abstract:
A rare disease is any disease that affects a very small percentage (1 in 1,500) of the population. It is estimated that there are nearly 7,000 rare diseases affecting 30 million patients in the U.S. alone. Most patients suffering from rare diseases experience multiple misdiagnoses and may never be diagnosed correctly. This is largely driven by the low prevalence of the disease, which results in a lack of awareness among healthcare providers. There have been efforts from machine learning researchers to develop predictive models to help diagnose patients using healthcare datasets such as electr
19

Liu, Weijie, Peng Zhou, Zhe Zhao, et al. "K-BERT: Enabling Language Representation with Knowledge Graph." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 03 (2020): 2901–8. http://dx.doi.org/10.1609/aaai.v34i03.5681.

Abstract:
Pre-trained language representation models, such as BERT, capture a general language representation from large-scale corpora, but lack domain-specific knowledge. When reading a domain text, experts make inferences with relevant knowledge. For machines to achieve this capability, we propose a knowledge-enabled language representation model (K-BERT) with knowledge graphs (KGs), in which triples are injected into the sentences as domain knowledge. However, too much knowledge incorporation may divert the sentence from its correct meaning, which is called knowledge noise (KN) issue. To overcome KN,
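The injection step is easy to picture with a toy example: triples from a knowledge graph are spliced into the token sequence next to the entities they describe. The dictionary and sentence below are invented; the real model additionally assigns soft position indices and a visibility matrix so injected tokens attend only to their anchor entity, which is its answer to the knowledge-noise problem.

```python
# Toy sketch of K-BERT-style triple injection (positions and visibility
# masking omitted; entities, relations, and the sentence are invented).
kg = {"Apple": [("Apple", "is_a", "company")],
      "Cook": [("Cook", "CEO_of", "Apple")]}

def inject(tokens):
    out = []
    for tok in tokens:
        out.append(tok)
        for subj, rel, obj in kg.get(tok, []):
            out += [rel, obj]           # knowledge branch hanging off the entity
    return out

print(inject("Cook visited Apple".split()))
# ['Cook', 'CEO_of', 'Apple', 'visited', 'Apple', 'is_a', 'company']
```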
20

Yu, Geyang. "An analysis of BERT-based model for Berkshire stock performance prediction using Warren Buffet's letters." Applied and Computational Engineering 52, no. 1 (2024): 55–61. http://dx.doi.org/10.54254/2755-2721/52/20241232.

Abstract:
The objective of this study is to discover and validate effective Bidirectional Encoder Representations from Transformers (BERT)-based models for stock market prediction of Berkshire Hathaway. The stock market is full of uncertainty and dynamism, and its prediction has always been a critical challenge in the financial domain. Therefore, accurate predictions of market trends are important for making investment decisions and risk management. The primary approach involves sentiment analysis of reviews on market performance. This work selects Warren Buffett's annual letters to investors and the year-by-ye
21

Mangal, Dharmendra, and Hemant Makwana. "Performance analysis of different BERT implementation for event burst detection from social media text." Indonesian Journal of Electrical Engineering and Computer Science 38, no. 1 (2025): 439–46. https://doi.org/10.11591/ijeecs.v38.i1.pp439-446.

Abstract:
Language models play a very important role in natural language processing (NLP) tasks. To understand natural languages, learning models must be trained on large corpora, which requires a lot of time and computing resources. Detecting information such as events and locations in text is an important NLP task. As event detection must be done in real time so that immediate actions can be taken, efficient decision-making models are needed. Pretrained models like bi-directional encoder representations from transformers (BERT) are gaining popularity for solving NLP problems. As
22

Said, Fadillah, and Lindung Parningotan Manik. "Aspect-Based Sentiment Analysis on Indonesian Presidential Election Using Deep Learning." Paradigma - Jurnal Komputer dan Informatika 24, no. 2 (2022): 160–67. http://dx.doi.org/10.31294/paradigma.v24i2.1415.

Abstract:
The 2019 presidential election was a hot topic of conversation for quite some time; people had been discussing it on the internet since 2018. To predict the winner of the presidential election, previous research studied an aspect-based sentiment analysis (ABSA) dataset for the 2019 presidential election using machine learning algorithms such as Support Vector Machine (SVM), Naive Bayes (NB), and K-Nearest Neighbors (KNN), achieving reasonably good accuracy. This study proposes a deep learning method using the
23

Balderas, Luis, Miguel Lastra, and José M. Benítez. "A Green AI Methodology Based on Persistent Homology for Compressing BERT." Applied Sciences 15, no. 1 (2025): 390. https://doi.org/10.3390/app15010390.

Abstract:
Large Language Models (LLMs) like BERT have gained significant prominence due to their remarkable performance in various natural language processing tasks. However, they come with substantial computational and memory costs. Additionally, they are essentially black-box models, being challenging to explain and interpret. In this article, Persistent BERT Compression and Explainability (PBCE) is proposed, a Green AI methodology to prune BERT models using persistent homology, aiming to measure the importance of each neuron by studying the topological characteristics of their outputs. As a result, P
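Mechanically, pruning of this kind scores each hidden neuron, keeps the top fraction, and rebuilds the surrounding layers. The sketch below substitutes a simple output-variance score for the paper's persistent-homology importance measure, so it illustrates only the pruning mechanics, not PBCE itself; the 70% keep ratio and layer sizes are assumptions.

```python
import torch
import torch.nn as nn

# Structured pruning sketch: score neurons on sample activations (here by
# output variance, a stand-in for PBCE's topological importance), keep the
# top fraction, and rebuild the feed-forward pair around the survivors.
def prune_ffn(linear_in, linear_out, sample_x, keep_ratio=0.7):
    with torch.no_grad():
        acts = torch.relu(linear_in(sample_x))       # (N, hidden) activations
        score = acts.var(dim=0)                      # importance per neuron
        keep = score.argsort(descending=True)[: int(keep_ratio * score.numel())]
        new_in = nn.Linear(linear_in.in_features, keep.numel())
        new_out = nn.Linear(keep.numel(), linear_out.out_features)
        new_in.weight.copy_(linear_in.weight[keep])
        new_in.bias.copy_(linear_in.bias[keep])
        new_out.weight.copy_(linear_out.weight[:, keep])
        new_out.bias.copy_(linear_out.bias)
    return new_in, new_out

fin, fout = nn.Linear(768, 3072), nn.Linear(3072, 768)   # BERT-base FFN shape
fin2, fout2 = prune_ffn(fin, fout, torch.randn(256, 768))
print(fin2)  # Linear(in_features=768, out_features=2150, bias=True)
```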
24

Arefeva, Veronika, and Roman Egger. "When BERT Started Traveling: TourBERT—A Natural Language Processing Model for the Travel Industry." Digital 2, no. 4 (2022): 546–59. http://dx.doi.org/10.3390/digital2040030.

Abstract:
In recent years, Natural Language Processing (NLP) has become increasingly important for extracting new insights from unstructured text data, and pre-trained language models now have the ability to perform state-of-the-art tasks like topic modeling, text classification, or sentiment analysis. Currently, BERT is the most widespread and widely used model, but it has been shown that a potential to optimize BERT can be applied to domain-specific contexts. While a number of BERT models that improve downstream tasks’ performance for other domains already exist, an optimized BERT model for tourism ha
25

Li, Yanjie, and He Mao. "Evaluation and Construction of College Students’ Growth and Development Index System Based on Data Association Mining and Deep Learning Model." Security and Communication Networks 2021 (December 31, 2021): 1–8. http://dx.doi.org/10.1155/2021/7415129.

Abstract:
The rise of big data in the field of education provides an opportunity to solve college students’ growth and development. The establishment of a personalized student management mode based on big data in universities will promote the change of personalized student management from the empirical mode to the scientific mode, from passive response to active warning, from reliance on point data to holistic data, and thus improve the efficiency and quality of personalized student management. In this paper, using the latest ideas and techniques in deep learning such as self-supervised learning and mul
26

Sanchan, Nattapong. "Intent Mining of Thai Phone Call Text Using a Stacking Ensemble Classifier with GPT-3 Embeddings." ECTI Transactions on Computer and Information Technology (ECTI-CIT) 19, no. 1 (2025): 135–45. https://doi.org/10.37936/ecti-cit.2025191.258239.

Abstract:
Intent mining has recently attracted Natural Language Processing (NLP) research communities. Despite extensive research on English and other widely spoken languages, intent mining in Thai remains unexplored. This paper proposes an extended framework for mining intentions in Thai phone call text. It utilizes a stacking ensemble method with GPT-3 embeddings, constructed by systematically determining base and meta-classifiers using Q-statistic and F1 scores. Overall, the base classifiers, consisting of Support Vector Classifier (SVC), k-Nearest Neighbors (KNN), and Random Forest (RF), were deri
27

Mangal, Dharmendra, and Hemant Makwana. "Performance analysis of different BERT implementation for event burst detection from social media text." Indonesian Journal of Electrical Engineering and Computer Science 38, no. 1 (2025): 439. https://doi.org/10.11591/ijeecs.v38.i1.pp439-446.

Abstract:
The language models play a very important role in natural language processing (NLP) tasks. To understand natural languages, the learning models are required to be trained on a large corpus, which requires a lot of time and computing resources. The detection of information like events and locations from text is an important NLP task. As event detection is to be done in real time so that immediate actions can be taken, we need efficient decision-making models. Pretrained models like bi-directional encoder representations from transformers (BERT) are gaining popularity for solving NLP pro
28

Hoffmann, Birgitt. "Bert G. Fragner." Iranian Studies 55, no. 2 (2022): 599–601. http://dx.doi.org/10.1017/irn.2022.9.

29

Brown, Karl. "The BERT robot." ACM SIGFORTH Newsletter 3, no. 3 (1991): 15–18. http://dx.doi.org/10.1145/126517.126519.

30

Feshbach, Herman, and Kosta Tsipis. "Jerome Bert Wiesner." Physics Today 48, no. 4 (1995): 104–6. http://dx.doi.org/10.1063/1.2807995.

31

Heislbetz, Hans Peter. "�bert-uniforme Gruppen." Archiv der Mathematik 61, no. 4 (1993): 329–39. http://dx.doi.org/10.1007/bf01201448.

32

Collins, Harry. "Remembering Bert Dreyfus." AI & SOCIETY 34, no. 2 (2018): 373–76. http://dx.doi.org/10.1007/s00146-018-0796-x.

33

Ettinger, M. G. "Abraham Bert Baker." Neurology 38, no. 4 (1988): 513. http://dx.doi.org/10.1212/wnl.38.4.513.

34

Grimm, Erk. "Bert Papenfuß: Tiské." GDR Bulletin 25, no. 1 (1998): 83–85. http://dx.doi.org/10.4148/gdrb.v25i0.1265.

35

Shen, Sheng, Zhen Dong, Jiayu Ye, et al. "Q-BERT: Hessian Based Ultra Low Precision Quantization of BERT." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (2020): 8815–21. http://dx.doi.org/10.1609/aaai.v34i05.6409.

Abstract:
Transformer based architectures have become de-facto models used for a range of Natural Language Processing tasks. In particular, the BERT based models achieved significant accuracy gain for GLUE tasks, CoNLL-03 and SQuAD. However, BERT based models have a prohibitive memory footprint and latency. As a result, deploying BERT based models in resource constrained environments has become a challenging task. In this work, we perform an extensive analysis of fine-tuned BERT models using second order Hessian information, and we use our results to propose a novel method for quantizing BERT models to
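Q-BERT's novelty lies in using second-order (Hessian) information to pick per-layer bit widths; the primitive underneath any such scheme is uniform quantization of each weight tensor. A minimal sketch of that primitive, assuming a symmetric signed range:

```python
import torch

# Uniform symmetric "fake" quantization of a weight tensor: the building
# block mixed-precision schemes like Q-BERT allocate bits over. The Hessian-
# based bit assignment itself is omitted here.
def quantize(w: torch.Tensor, bits: int) -> torch.Tensor:
    qmax = 2 ** (bits - 1) - 1                    # e.g. 127 for 8 bits
    scale = w.abs().max() / qmax
    q = torch.clamp(torch.round(w / scale), -qmax - 1, qmax)
    return q * scale                              # dequantized weights

w = torch.randn(768, 768)                         # a BERT-base-sized matrix
for bits in (8, 4, 2):
    err = (quantize(w, bits) - w).abs().mean()
    print(f"{bits}-bit mean abs error: {err:.4f}")
```

Running this shows the reconstruction error growing as bits are removed, which is why spending bits where the loss surface is sharp (as the Hessian analysis identifies) pays off at ultra-low precision.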
36

Jiang, Liangzhen, Jici Jiang, Xiao Wang, et al. "IUP-BERT: Identification of Umami Peptides Based on BERT Features." Foods 11, no. 22 (2022): 3742. http://dx.doi.org/10.3390/foods11223742.

Abstract:
Umami is an important widely-used taste component of food seasoning. Umami peptides are specific structural peptides endowing foods with a favorable umami taste. Laboratory approaches used to identify umami peptides are time-consuming and labor-intensive, which are not feasible for rapid screening. Here, we developed a novel peptide sequence-based umami peptide predictor, namely iUP-BERT, which was based on the deep learning pretrained neural network feature extraction method. After optimization, a single deep representation learning feature encoding method (BERT: bidirectional encoder represe
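The extraction pattern the abstract names, a pretrained BERT used purely as a sequence encoder whose pooled embeddings feed a classical downstream classifier, looks roughly as follows. The ProtBert checkpoint, mean pooling, SVM, and toy peptides are illustrative assumptions, not the paper's exact pipeline.

```python
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.svm import SVC

# Sketch of BERT-as-feature-extractor for peptide classification; the
# checkpoint and classifier are assumed stand-ins, not iUP-BERT's own.
tok = AutoTokenizer.from_pretrained("Rostlab/prot_bert")
bert = AutoModel.from_pretrained("Rostlab/prot_bert")

def embed(seq: str) -> torch.Tensor:
    inputs = tok(" ".join(seq), return_tensors="pt")  # ProtBert expects spaced residues
    with torch.no_grad():
        return bert(**inputs).last_hidden_state.mean(dim=1).squeeze(0)

peptides, labels = ["DIRK", "GLEE", "KKWW"], [1, 0, 1]  # invented toy data
X = torch.stack([embed(p) for p in peptides]).numpy()
clf = SVC().fit(X, labels)
print(clf.predict(X))
```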
37

Li, Fenfang, Zhengzhang Zhao, Li Wang, and Han Deng. "Tibetan Sentence Boundaries Automatic Disambiguation Based on Bidirectional Encoder Representations from Transformers on Byte Pair Encoding Word Cutting Method." Applied Sciences 14, no. 7 (2024): 2989. http://dx.doi.org/10.3390/app14072989.

Abstract:
Sentence Boundary Disambiguation (SBD) is crucial for building datasets for tasks such as machine translation, syntactic analysis, and semantic analysis. Currently, most automatic sentence segmentation in Tibetan adopts the methods of rule-based and statistical learning, as well as the combination of the two, which have high requirements on the corpus and the linguistic foundation of the researchers and are more costly to annotate manually. In this study, we explore Tibetan SBD using deep learning technology. Initially, we analyze Tibetan characteristics and various subword techniques, selecti
38

Минаев, Владимир Александрович, and Александр Валерьевич Симонов. "COMPARISON OF BERT TRANSFORMER MODELS IN IDENTIFYING DESTRUCTIVE CONTENT IN SOCIAL MEDIA." ИНФОРМАЦИЯ И БЕЗОПАСНОСТЬ, no. 3 (October 24, 2022): 341–48. http://dx.doi.org/10.36622/vstu.2022.25.3.003.

Abstract:
The aim of the article is to identify the most effective model from the BERT family for detecting destructive content in social media. Five of the best-known BERT models were compared on this task. To this end, a text corpus was assembled from social media materials, supplemented with Nazi-oriented content from the Federal List of Extremist Materials that is banned from distribution in the Russian Federation. The structure of a text-data classifier based on the BERT deep artificial neural network is presented
39

Gupta, Rajesh. "Bidirectional encoders to state-of-the-art: a review of BERT and its transformative impact on natural language processing." Информатика. Экономика. Управление - Informatics. Economics. Management 3, no. 1 (2024): 0311–20. http://dx.doi.org/10.47813/2782-5280-2024-3-1-0311-0320.

Abstract:
First developed in 2018 by Google researchers, Bidirectional Encoder Representations from Transformers (BERT) represents a breakthrough in natural language processing (NLP). BERT achieved state-of-the-art results across a range of NLP tasks while using a single transformer-based neural network architecture. This work reviews BERT's technical approach, performance when published, and significant research impact since release. We provide background on BERT's foundations like transformer encoders and transfer learning from universal language models. Core technical innovations include deeply bidir
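The transfer-learning recipe at the heart of the review, pretrain once on raw text and then fine-tune a small classification head end to end, reduces to a few lines with the Hugging Face transformers library. The checkpoint, label count, and learning rate below are conventional defaults rather than values from the article.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Canonical BERT fine-tuning step: pretrained encoder plus a fresh
# classification head, trained end to end on labeled pairs.
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)
optim = torch.optim.AdamW(model.parameters(), lr=2e-5)  # conventional default

batch = tok(["great movie", "terrible plot"], padding=True, return_tensors="pt")
labels = torch.tensor([1, 0])
out = model(**batch, labels=labels)  # library supplies head and cross-entropy
out.loss.backward()
optim.step()
```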
40

Qiu, Zhaopeng, Xian Wu, Jingyue Gao, and Wei Fan. "U-BERT: Pre-training User Representations for Improved Recommendation." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 5 (2021): 4320–27. http://dx.doi.org/10.1609/aaai.v35i5.16557.

Abstract:
Learning user representation is a critical task for recommendation systems as it can encode user preference for personalized services. User representation is generally learned from behavior data, such as clicking interactions and review comments. However, for less popular domains, the behavior data is insufficient to learn precise user representations. To deal with this problem, a natural thought is to leverage content-rich domains to complement user representations. Inspired by the recent success of BERT in NLP, we propose a novel pre-training and fine-tuning based approach U-BERT. Different
41

Eshwarappa, Sunil Mugalihalli, and Vinay Shivasubramanyan. "Enhancing sentiment analysis in Kannada texts by feature selection." International Journal of Electrical and Computer Engineering (IJECE) 14, no. 6 (2024): 6572. http://dx.doi.org/10.11591/ijece.v14i6.pp6572-6582.

Abstract:
In recent years, there has been a noticeable surge in research activities focused on sentiment analysis within the Kannada language domain. The existing research highlights a lack of labelled datasets and limited exploration in feature selection for Kannada sentiment analysis, hindering accurate sentiment classification. To address this gap, the study aims to introduce a novel Kannada dataset and develop an effective classifier for improved sentiment analysis in Kannada texts. The study presents a new Kannada dataset from SemEval 2014 Task4 using Google Translate. It then introduces a modified
42

Wen, Yu, Yezhang Liang, and Xinhua Zhu. "Sentiment analysis of hotel online reviews using the BERT model and ERNIE model—Data from China." PLOS ONE 18, no. 3 (2023): e0275382. http://dx.doi.org/10.1371/journal.pone.0275382.

Abstract:
The emotion analysis of hotel online reviews is discussed by using the neural network model BERT, which proves that this method can not only help hotel network platforms fully understand customer needs but also help customers find suitable hotels according to their needs and affordability and help hotel recommendations be more intelligent. Therefore, using the pretraining BERT model, a number of emotion analytical experiments were carried out through fine-tuning, and a model with high classification accuracy was obtained by frequently adjusting the parameters during the experiment. The BERT la
43

Fu, Guanping, and Jianwei Sun. "Chinese text multi-classification based on Sentences Order Prediction improved Bert model." Journal of Physics: Conference Series 2031, no. 1 (2021): 012054. http://dx.doi.org/10.1088/1742-6596/2031/1/012054.

Abstract:
To address the strong noise interference that the NSP (Next Sentence Prediction) mechanism in BERT introduces into the model, and to improve the classification performance of the BERT model in text classification, an SOP (Sentence Order Prediction) mechanism is used to replace the NSP mechanism for multi-classification of Chinese news texts. First, randomly ordered adjacent sentence pairs are used for segment embedding. Then the Transformer structure of the BERT model encodes the Chinese text, and the final CLS vector is obtained as the semantic vector of the text. Fin
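Building SOP training data is simple enough to show directly: adjacent sentence pairs are either kept in order (positive) or swapped (negative), so the model must learn inter-sentence coherence rather than the topical cues that make NSP nearly trivial. A minimal sketch over an invented three-sentence document:

```python
import random

# Sentence Order Prediction pairs: keep adjacent sentences in order
# (label 1) or swap them (label 0). Sentences below are invented.
def sop_pairs(sentences, swap_prob=0.5):
    pairs = []
    for a, b in zip(sentences, sentences[1:]):
        if random.random() < swap_prob:
            pairs.append(((b, a), 0))   # swapped -> negative example
        else:
            pairs.append(((a, b), 1))   # original order -> positive example
    return pairs

doc = ["BERT is pretrained on large corpora.",
       "It is then fine-tuned on downstream tasks.",
       "Classification uses the final CLS vector."]
for (s1, s2), label in sop_pairs(doc):
    print(label, "|", s1, "||", s2)
```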
44

Iswarya, M., P. Sai Krishna, K. Naveen, M. Ganesh, and M. Yasin. "Video Transcript Summarization Using Bert." International Journal of Research Publication and Reviews 4, no. 3 (2023): 1837–41. http://dx.doi.org/10.55248/gengpi.2023.4.32991.

45

Zhang, Min, and Juanle Wang. "Automatic Extraction of Flooding Control Knowledge from Rich Literature Texts Using Deep Learning." Applied Sciences 13, no. 4 (2023): 2115. http://dx.doi.org/10.3390/app13042115.

Abstract:
Flood control is a global problem; an increasing number of flooding disasters occur annually, induced by global climate change and extreme weather events. Flood studies are important knowledge sources for flood risk reduction and have been recorded in the academic literature. The main objective of this paper was to acquire flood control knowledge from long-tail literature data using deep learning techniques. Screening was conducted to obtain 4742 flood-related academic documents from the past two decades. Machine learning was conducted to parse the documents, and 347 sample data points from
46

Okolo, Omachi, B. Y. Baha, and M. D. Philemon. "Using Causal Graph Model variable selection for BERT models Prediction of Patient Survival in a Clinical Text Discharge Dataset." Journal of Future Artificial Intelligence and Technologies 1, no. 4 (2025): 455–73. https://doi.org/10.62411/faith.3048-3719-61.

Abstract:
Feature selection in most black-box machine learning algorithms, such as BERT, is based on the correlations between features and the target variable rather than causal relationships in the dataset. This makes their predictive power and decisions questionable because of their potential bias. This paper presents novel BERT models that learn from causal variables in a clinical discharge dataset. Causal directed acyclic graphs (DAGs) identify input variables for patients' survival rate prediction and decisions. The core idea behind our model lies in the ability of the BERT-based model to learn
47

Zulkalnain, Mohd Asyraf, A. R. Syafeeza, Wira Hidayat Mohd Saad, and Shahid Rahaman. "Evaluation of Transformer-Based Models for Sentiment Analysis in Bahasa Malaysia." Journal of Telecommunication, Electronic and Computer Engineering (JTEC) 17, no. 1 (2025): 29–33. https://doi.org/10.54554/jtec.2025.17.01.004.

Abstract:
This study investigates the application of advanced Transformer-based models, namely BERT, DistilBERT, BERT-multilingual, ALBERT, and BERT-CNN, for sentiment analysis in Bahasa Malaysia, addressing unique challenges such as mixed-language usage and abbreviated expressions in social media text. Using the Malaya dataset to ensure linguistic diversity and domain coverage, the research incorporates robust preprocessing techniques, including synonym mapping and sentiment-aware tokenization, to enhance feature extraction. Through rigorous evaluation, BERT-CNN exhibits the best accuracy (96.3%), foll
48

Maringka, Raissa Camilla, and Reynard Justino Nehemia Makarawung. "OPTIMALISASI ANALISIS UJARAN KEBENCIAN ULASAN E-COMMERCE BERBASIS BERT DAN FAISS." Journal of Information System Management (JOISM) 7, no. 1 (2025): 127–34. https://doi.org/10.24076/joism.2025v7i1.2132.

Abstract:
Hate speech in e-commerce application reviews can damage a platform's reputation and reduce user trust. The main problem is that much hate speech is expressed implicitly and in informal language, making it difficult to detect with keyword-based approaches or traditional classification methods. This study aims to optimize the detection of hate speech in e-commerce application reviews by combining BERT with FAISS-based vector search. A labeled review dataset was taken from Kaggle, so it does not require
49

Albashayreh, Alaa, Nahid Zeinali, and Stephanie White. "INNOVATING THE DETECTION OF CARE PRIORITIES IN HEART FAILURE USING LARGE LANGUAGE MODELS." Innovation in Aging 8, Supplement_1 (2024): 1339. https://doi.org/10.1093/geroni/igae098.4272.

Abstract:
Engaging older adults with advanced chronic conditions, such as heart failure, in discussions about their care priorities is crucial for ensuring treatments align with their preferences, especially at the end of life. Despite the abundance of data in electronic health records (EHRs), documentation of care priorities is often inconsistent and underutilized. This study utilizes natural language processing (NLP) to detect and characterize care priorities in the EHRs of older adults with heart failure, aiming to enhance patient-centered care. We retrained Bio-Clinical-BERT, a Bidirectiona
50

Kannan, Eswariah, and Lakshmi Anusha Kothamasu. "Fine-Tuning BERT Based Approach for Multi-Class Sentiment Analysis on Twitter Emotion Data." Ingénierie des systèmes d information 27, no. 1 (2022): 93–100. http://dx.doi.org/10.18280/isi.270111.

Abstract:
Tweets are difficult to classify due to their simplicity and frequent use of non-standard orthography and slang. Although several studies have achieved highly accurate sentiment classification, most have not been tested on Twitter data. Previous research on sentiment interpretation focused on binary or ternary sentiments in monolingual texts. However, emotions also emerge in bilingual and multilingual texts. The emotions expressed in today's social media, including microblogs, are diverse. We use a dataset that combines everyday dialogue, easy and emotional stimulation to carry out the