Academic literature on the topic 'BERT model'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'BERT model.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "BERT model"

1

Yu, Daegon, Yongyeon Kim, Sangwoo Han, and Byung-Won On. "CLES-BERT: Contrastive Learning-based BERT Model for Automated Essay Scoring." Journal of Korean Institute of Information Technology 21, no. 4 (2023): 31–43. http://dx.doi.org/10.14801/jkiit.2023.21.4.31.

2

Cui, Hongyang, Chentao Wang, and Yibo Yu. "News Short Text Classification Based on Bert Model and Fusion Model." Highlights in Science, Engineering and Technology 34 (February 28, 2023): 262–68. http://dx.doi.org/10.54097/hset.v34i.5482.

Abstract:
Text classification is one of the most fundamental tasks in NLP, and the classification of short news texts can serve as the basis for many other tasks. In this paper, we applied a fusion model combining BERT and TextRNN, with some modified details, to achieve higher text classification accuracy. We used the THUCNews dataset, which consists of two columns, one for the news text and the other for numbers. The original dataset was separated into three parts: training set, validation set, and test set. We used the BERT model, which contains two pre-training tasks, and the TextRNN model, which refers to the use of an RNN to solve text classification problems. We trained these two models in parallel, and the optimal BERT and TextRNN models obtained through training and parameter tuning were then combined through a fully connected layer that weights the outputs of BERT and TextRNN to produce the final results. The fusion model solves the problem of over-fitting and under-fitting of a single model and helps to obtain a model with better generalization performance. The experimental results show the sharp change in loss and accuracy during training as well as the final accuracy of the BERT model. Precision, recall, and F1-score are also evaluated in this paper. The accuracy of the fusion model of BERT and TextRNN is better than that of the single BERT model, with a gap of 1.76%.
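To make the weighting step concrete, here is a minimal PyTorch sketch of the late-fusion idea described in this abstract: class logits from a BERT head and a TextRNN head are concatenated and passed through a fully connected layer. The class names, layer sizes, and the 'bert-base-chinese' checkpoint are illustrative assumptions, not the authors' code, and in the paper the two branches are first trained separately.

```python
import torch
import torch.nn as nn
from transformers import BertModel

class TextRNN(nn.Module):
    """Small BiLSTM classifier over token ids (illustrative; reuses BERT's vocabulary ids)."""
    def __init__(self, vocab_size, embed_dim, hidden_dim, num_classes):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, input_ids):
        emb = self.embedding(input_ids)
        _, (h, _) = self.rnn(emb)
        h = torch.cat([h[-2], h[-1]], dim=-1)   # final forward/backward hidden states
        return self.fc(h)

class BertTextRnnFusion(nn.Module):
    """Late fusion: BERT logits and TextRNN logits are concatenated and re-weighted."""
    def __init__(self, num_classes, vocab_size=21128, embed_dim=128, hidden_dim=128):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-chinese")
        self.bert_head = nn.Linear(self.bert.config.hidden_size, num_classes)
        self.textrnn = TextRNN(vocab_size, embed_dim, hidden_dim, num_classes)
        self.fusion = nn.Linear(2 * num_classes, num_classes)

    def forward(self, input_ids, attention_mask):
        cls = self.bert(input_ids=input_ids,
                        attention_mask=attention_mask).last_hidden_state[:, 0]
        bert_logits = self.bert_head(cls)
        rnn_logits = self.textrnn(input_ids)
        return self.fusion(torch.cat([bert_logits, rnn_logits], dim=-1))
```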
3

Wen, Yu, Yezhang Liang, and Xinhua Zhu. "Sentiment analysis of hotel online reviews using the BERT model and ERNIE model—Data from China." PLOS ONE 18, no. 3 (2023): e0275382. http://dx.doi.org/10.1371/journal.pone.0275382.

Abstract:
Sentiment analysis of hotel online reviews is carried out using the neural network model BERT, showing that this method can not only help hotel network platforms fully understand customer needs but also help customers find suitable hotels according to their needs and affordability, and make hotel recommendations more intelligent. Using the pretrained BERT model, a number of sentiment analysis experiments were carried out through fine-tuning, and a model with high classification accuracy was obtained by repeatedly adjusting the parameters during the experiments. The BERT layer was taken as a word vector layer, and the input text sequence was used as the input to the BERT layer for vector transformation. The output vectors of BERT passed through the corresponding neural network and were then classified by the softmax activation function. ERNIE is an enhancement of the BERT layer. Both models can lead to good classification results, but the latter performs better. ERNIE exhibits stronger classification performance and stability than BERT, which provides a promising research direction for the fields of tourism and hotels.
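The pipeline this abstract describes, BERT as a word-vector layer whose output is classified with a softmax activation, roughly corresponds to the following sketch. The checkpoint name, the three-class label set, and the untrained linear head are assumptions made only for illustration.

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
bert = BertModel.from_pretrained("bert-base-chinese")    # BERT as the word-vector layer
classifier = nn.Linear(bert.config.hidden_size, 3)       # e.g. negative / neutral / positive

def predict_sentiment(review: str) -> torch.Tensor:
    """Encode one review with BERT and return softmax class probabilities."""
    inputs = tokenizer(review, return_tensors="pt", truncation=True, max_length=128)
    with torch.no_grad():
        hidden = bert(**inputs).last_hidden_state        # (1, seq_len, hidden)
    cls_vector = hidden[:, 0]                             # [CLS] token as the review vector
    logits = classifier(cls_vector)
    return torch.softmax(logits, dim=-1)

# The linear head is untrained here, so the probabilities are illustrative only.
print(predict_sentiment("房间干净，服务很好"))
```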
4

S, Sushma, Sasmita Kumari Nayak, and M. Vamsi Krishna. "Enhanced toxic comment detection model through Deep Learning models using Word embeddings and transformer architectures." Future Technology 4, no. 3 (2025): 76–84. https://doi.org/10.55670/fpll.futech.4.3.8.

Abstract:
The proliferation of harmful and toxic comments on social media platforms necessitates the development of robust methods for automatically detecting and classifying such content. This paper investigates the application of natural language processing (NLP) and ML techniques for toxic comment classification using the Jigsaw Toxic Comment Dataset. Several deep learning models, including recurrent neural networks (RNN, LSTM, and GRU), are evaluated in combination with feature extraction methods such as TF-IDF, Word2Vec, and BERT embeddings. The text data is pre-processed using both Word2Vec and TF-IDF techniques for feature extraction. Rather than implementing a combined ensemble output, the study conducts a comparative evaluation of model-embedding combinations to determine the most effective pairings. Results indicate that integrating BERT with traditional models (RNN+BERT, LSTM+BERT, GRU+BERT) leads to significant improvements in classification accuracy, precision, recall, and F1-score, demonstrating the effectiveness of BERT embeddings in capturing nuanced text features. Among all configurations, LSTM combined with Word2Vec and LSTM with BERT yielded the highest performance. This comparative approach highlights the potential of combining classical recurrent models with transformer-based embeddings as a promising direction for detecting toxic comments. The findings of this work provide valuable insights into leveraging deep learning techniques for toxic comment detection, suggesting future directions for refining such models in real-world applications.
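A minimal sketch of one of the evaluated combinations, an LSTM classifier running over frozen BERT token embeddings, is shown below; the layer sizes and the six-label multi-label head for the Jigsaw data are assumptions rather than the authors' configuration.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class LstmOverBert(nn.Module):
    """BiLSTM classifier over contextual BERT token embeddings (sketch)."""
    def __init__(self, num_labels=6, hidden_dim=128):
        super().__init__()
        self.bert = AutoModel.from_pretrained("bert-base-uncased")
        for p in self.bert.parameters():        # use BERT purely as a feature extractor
            p.requires_grad = False
        self.lstm = nn.LSTM(self.bert.config.hidden_size, hidden_dim,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden_dim, num_labels)

    def forward(self, input_ids, attention_mask):
        with torch.no_grad():
            emb = self.bert(input_ids=input_ids,
                            attention_mask=attention_mask).last_hidden_state
        _, (h, _) = self.lstm(emb)
        features = torch.cat([h[-2], h[-1]], dim=-1)
        return self.head(features)   # pair with sigmoid + BCEWithLogitsLoss for multi-label training

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tokenizer(["you are awful", "have a nice day"], padding=True, return_tensors="pt")
print(LstmOverBert()(batch["input_ids"], batch["attention_mask"]).shape)  # torch.Size([2, 6])
```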
5

Okolo, Omachi, B. Y. Baha, and M. D. Philemon. "Using Causal Graph Model variable selection for BERT models Prediction of Patient Survival in a Clinical Text Discharge Dataset." Journal of Future Artificial Intelligence and Technologies 1, no. 4 (2025): 455–73. https://doi.org/10.62411/faith.3048-3719-61.

Abstract:
Feature selection in most black-box machine learning algorithms, such as BERT, is based on the correlations between features and the target variable rather than causal relationships in the dataset. This makes their predictive power and decisions questionable because of their potential bias. This paper presents novel BERT models that learn from causal variables in a clinical discharge dataset. The causal directed acyclic graphs (DAG) identify input variables for patients' survival rate prediction and decisions. The core idea behind our model lies in the ability of the BERT-based model to learn from the causal DAG semi-synthetic dataset, enabling it to model the underlying causal structure accurately instead of the generic spurious correlations devoid of causation. The results from Causal DAG Conditional Independence Test (CIT) validation metrics showed that the conceptual assumptions of the causal DAG were supported, the Pearson correlation coefficient ranges between -1 and 1, the p-value was (>0.05), and the confidence intervals of 95% and 25% were satisfied. We further mapped the semi-synthetic dataset that evolved from the Causal DAG to three BERT models. Two metrics, prediction accuracy and AUC score, were used to compare the performance of the BERT models. The accuracy of the BERT models showed that the regular BERT has a performance of 96%, while ClinicalBERT performance was 90%, and ClinicalBERT-Discharge-summary was 92%. On the other hand, the AUC score for BERT was 79%, ClinicalBERT was 77%, while ClinicalBERT-discharge summary was 84%. Our experiments on the synthetic dataset for the patient's survival rate from the causal DAG datasets demonstrate high predictive performance and explainable input variables for human understanding to justify predictions.
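The conditional-independence validation step mentioned here, checking that variable pairs the assumed DAG separates show no significant Pearson correlation (p > 0.05), can be sketched as follows. The variables are synthetic placeholders, and a full conditional independence test would additionally condition on the separating set rather than test a marginal correlation.

```python
import numpy as np
from scipy.stats import pearsonr

def independence_check(x: np.ndarray, y: np.ndarray, alpha: float = 0.05) -> bool:
    """Retain the DAG's independence claim when the correlation is not significant."""
    r, p_value = pearsonr(x, y)
    print(f"Pearson r = {r:.3f}, p = {p_value:.3f}")
    return p_value > alpha

rng = np.random.default_rng(0)
age = rng.normal(60, 10, 500)                 # hypothetical patient variable
unrelated_lab_value = rng.normal(0, 1, 500)   # variable the assumed DAG treats as independent of age
print("independence retained:", independence_check(age, unrelated_lab_value))
```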
6

Zhao, Lanxin, Wanrong Gao, and Jianbin Fang. "Optimizing Large Language Models on Multi-Core CPUs: A Case Study of the BERT Model." Applied Sciences 14, no. 6 (2024): 2364. http://dx.doi.org/10.3390/app14062364.

Abstract:
The BERT model is regarded as the cornerstone of various pre-trained large language models that have achieved promising results in recent years. This article investigates how to optimize the BERT model in terms of fine-tuning speed and prediction accuracy, aiming to accelerate the execution of the BERT model on a multi-core processor and improve its prediction accuracy in typical downstream natural language processing tasks. Our contributions are two-fold. First, we port and parallelize the fine-tuning training of the BERT model on a multi-core shared-memory processor. We port the BERT model onto a multi-core processor platform to accelerate the fine-tuning training process of the model for downstream tasks. Second, we improve the prediction performance of typical downstream natural language processing tasks through fine-tuning the model parameters. We select five typical downstream natural language processing tasks (CoLA, SST-2, MRPC, RTE, and WNLI) and perform optimization on the multi-core platform, taking the hyperparameters of batch size, learning rate, and training epochs into account. Our experimental results show that, by increasing the number of CPUs and the number of threads, the model training time can be significantly reduced. We observe that the reduced time is primarily concentrated in the self-attention mechanism. Our further experimental results show that setting reasonable hyperparameters can improve the accuracy of the BERT model when applied to downstream tasks and that appropriately increasing the batch size under conditions of sufficient computing resources can significantly reduce training time.
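On the software side, pinning the fine-tuning run to a chosen number of CPU threads and sweeping batch size, learning rate, and training epochs can be expressed with a few standard PyTorch and transformers calls; the concrete values below are placeholders, not the settings used in the article.

```python
import torch
from torch.optim import AdamW
from transformers import BertForSequenceClassification

# Limit intra-op parallelism (the threads used inside matmul / self-attention kernels).
# The thread count is a placeholder; OMP_NUM_THREADS can be used equivalently.
torch.set_num_threads(16)

model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Hyperparameters of the kind swept in the article (placeholder values, e.g. for SST-2).
batch_size, learning_rate, num_epochs = 32, 2e-5, 3
optimizer = AdamW(model.parameters(), lr=learning_rate)
# A standard fine-tuning loop would then iterate num_epochs times over batches of
# size batch_size, calling loss.backward() and optimizer.step() on the CPU.
```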
7

Manuel-Ilie, Dorca, Pitic Antoniu Gabriel, and Crețulescu Radu George. "Sentiment Analysis Using Bert Model." International Journal of Advanced Statistics and IT&C for Economics and Life Sciences 13, no. 1 (2023): 59–66. http://dx.doi.org/10.2478/ijasitels-2023-0007.

Abstract:
The topic of this presentation entails a comprehensive investigation of our sentiment analysis algorithm. The document provides a thorough examination of its theoretical underpinnings, meticulous assessment criteria, consequential findings, and an enlightening comparative analysis. Our system makes a substantial contribution to the field of sentiment analysis by using advanced techniques based on deep learning and state-of-the-art architectures.
8

Said, Fadillah, and Lindung Parningotan Manik. "Aspect-Based Sentiment Analysis on Indonesian Presidential Election Using Deep Learning." Paradigma - Jurnal Komputer dan Informatika 24, no. 2 (2022): 160–67. http://dx.doi.org/10.31294/paradigma.v24i2.1415.

Abstract:
The 2019 presidential election was a presidential election that was a hot topic of conversation for quite some time; people had been discussing it on the internet since 2018. To predict the winner of the presidential election, previous work studied an aspect-based sentiment analysis (ABSA) dataset of the 2019 presidential election using machine learning algorithms such as Support Vector Machine (SVM), Naive Bayes (NB), and K-Nearest Neighbors (KNN), and achieved fairly good accuracy. This study proposes a deep learning method using the BERT (Bidirectional Encoder Representations from Transformers) and RoBERTa (A Robustly Optimized BERT Pretraining Approach) models. The results show that the indobenchmark BERT and RoBERTa base-indonesian single-label classification models on the target feature with preprocessing achieved the best accuracy of 98.02%. The indolem and indobenchmark BERT single-label classification models on the target feature without preprocessing achieved the best accuracy of 98.02%. The indobenchmark BERT single-label classification model on the aspect feature with preprocessing achieved the best accuracy of 74.26%. The indolem BERT single-label classification model on the aspect feature without preprocessing achieved the best accuracy of 74.26%. The indolem BERT single-label classification model on the sentiment feature with preprocessing achieved the best accuracy of 93.07%. The indolem BERT single-label classification model on the sentiment feature without preprocessing achieved the best accuracy of 94.06%. The indobenchmark BERT multi-label classification model with preprocessing achieved the best accuracy of 98.66%. The indobenchmark BERT multi-label classification model without preprocessing achieved the best accuracy of 98.66%.
9

Fu, Guanping, and Jianwei Sun. "Chinese text multi-classification based on Sentences Order Prediction improved Bert model." Journal of Physics: Conference Series 2031, no. 1 (2021): 012054. http://dx.doi.org/10.1088/1742-6596/2031/1/012054.

Abstract:
To reduce the strong noise interference that the NSP (Next Sentence Prediction) mechanism in BERT introduces into the model, and in order to improve the classification performance of the BERT model when it is used for text classification, an SOP (Sentence Order Prediction) mechanism is used to replace the NSP mechanism in the BERT model for multi-class classification of Chinese news texts. First, randomly ordered adjacent sentence pairs are used for segment embedding. Then the Transformer structure of the BERT model is used to encode the Chinese text, and the final CLS vector is obtained as the semantic vector of the text. Finally, the different semantic vectors are connected to the multi-class classification layer. In ablation experiments, the improved SOP-BERT model obtained the highest F1 value of 96.69. The results show that this model is more effective than the original BERT model on multi-class text classification problems.
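The SOP objective replaces NSP's sampling of sentences from other documents with swapped adjacent sentences. A minimal sketch of how such training pairs can be built from a document is shown below; the function name and labeling convention are illustrative assumptions.

```python
import random

def make_sop_pairs(sentences, seed=0):
    """Build Sentence Order Prediction pairs from consecutive sentences.

    Label 1: the two segments appear in their original order.
    Label 0: the same two segments with their order swapped.
    """
    rng = random.Random(seed)
    pairs = []
    for first, second in zip(sentences, sentences[1:]):
        if rng.random() < 0.5:
            pairs.append((first, second, 1))   # original order
        else:
            pairs.append((second, first, 0))   # swapped order
    return pairs

doc = ["北京今日发布新政策。", "该政策涉及交通管理。", "市民反应积极。"]
for a, b, label in make_sop_pairs(doc):
    print(label, a, "||", b)
# Each (a, b) pair is then fed to BERT as "[CLS] a [SEP] b [SEP]" with segment embeddings,
# and the [CLS] vector is trained to predict the order label.
```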
10

Mannix, Ilma Alpha, and Evi Yulianti. "Academic expert finding using BERT pre-trained language model." International Journal of Advances in Intelligent Informatics 10, no. 2 (2024): 280. http://dx.doi.org/10.26555/ijain.v10i2.1497.

Abstract:
Academic expert finding has numerous advantages, such as finding paper reviewers, supporting research collaboration, and enhancing knowledge transfer. For research collaboration in particular, researchers tend to seek collaborators who share similar backgrounds or the same native language. Despite its importance, academic expert finding remains relatively unexplored in the context of the Indonesian language. Recent studies have primarily relied on static word embedding techniques such as Word2Vec to match documents with relevant expertise areas. However, Word2Vec is unable to capture the varying meanings of words in different contexts. To address this research gap, this study employs Bidirectional Encoder Representations from Transformers (BERT), a state-of-the-art contextual embedding model. This paper aims to examine the effectiveness of BERT on the task of academic expert finding. The proposed model in this research consists of three variations of BERT, namely IndoBERT (Indonesian BERT), mBERT (Multilingual BERT), and SciBERT (Scientific BERT), which are compared to a static embedding model using Word2Vec. Two approaches were employed to rank experts using the BERT variations: feature-based and fine-tuning. We found that the IndoBERT model outperforms the baseline by 6–9% when utilizing the feature-based approach and shows an improvement of 10–18% with the fine-tuning approach. Our results proved that the fine-tuning approach performs better than the feature-based approach, with an improvement of 1–5%. In conclusion, by using IndoBERT, this research has shown improved effectiveness in academic expert finding in the context of the Indonesian language.
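A minimal sketch of the feature-based side of this comparison, scoring expert documents against a query by cosine similarity of frozen [CLS] vectors, is given below. The 'indobenchmark/indobert-base-p1' checkpoint and the scoring function are assumptions for illustration; in the fine-tuning approach the encoder would instead be updated on relevance labels.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("indobenchmark/indobert-base-p1")
model = AutoModel.from_pretrained("indobenchmark/indobert-base-p1")
model.eval()

def embed(text: str) -> torch.Tensor:
    """Feature-based approach: frozen BERT, [CLS] vector as the text representation."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=256)
    with torch.no_grad():
        return model(**inputs).last_hidden_state[:, 0].squeeze(0)

def rank_experts(query: str, expert_docs: dict) -> list:
    """Rank expert ids by cosine similarity between the query and their document embeddings."""
    q = embed(query)
    scores = {eid: torch.cosine_similarity(q, embed(doc), dim=0).item()
              for eid, doc in expert_docs.items()}
    return sorted(scores, key=scores.get, reverse=True)
```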

Dissertations / Theses on the topic "BERT model"

1

Zouari, Hend. "French AXA Insurance Word Embeddings : Effects of Fine-tuning BERT and Camembert on AXA France’s data." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-284108.

Abstract:
We explore in this study the different state-of-the-art Natural Language Processing technologies that allow transforming textual data into numerical representations. We go through the theory of the existing traditional methods as well as the most recent ones. This thesis focuses on the recent advances in Natural Language Processing developed upon the Transformer model. One of the most relevant innovations was the release of a deep bidirectional encoder called BERT that broke several state-of-the-art results. BERT utilises transfer learning to improve the modelling of language dependencies in text. BERT is used for several different languages, and other specialized models have been released, such as the French BERT: CamemBERT. This thesis compares the language models of these different pre-trained models and compares their capability to ensure domain adaptation. Using the multilingual and the French pre-trained versions of BERT and a dataset from AXA France's emails, clients' messages, legal documents, and insurance documents containing over 60 million words, we fine-tuned the language models in order to adapt them to the AXA France insurance context and create a French AXAInsurance BERT model. We evaluate the performance of this model on the capability of the language model to predict a masked token based on the context. BERT proves to perform better, modelling the French AXA insurance text without fine-tuning better than CamemBERT. However, with this small amount of data, CamemBERT is more capable of adapting to this specific insurance domain.
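The evaluation described here, asking the language model to predict a masked token from its context, can be reproduced in a few lines with the masked-language-modeling head of the transformers library; the public checkpoint names and the example sentence are placeholders, not AXA's models or data.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

def top_predictions(model_name: str, masked_sentence: str, k: int = 5):
    """Return the k most likely fillers for the masked token."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForMaskedLM.from_pretrained(model_name)
    text = masked_sentence.replace("[MASK]", tokenizer.mask_token)
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero()[0, 1]
    top = torch.topk(logits[0, mask_pos], k).indices
    return tokenizer.convert_ids_to_tokens(top.tolist())

# Compare multilingual BERT with CamemBERT on a French insurance-style sentence (illustrative).
sentence = "Le contrat d'assurance couvre les dommages causés par un [MASK]."
print(top_predictions("bert-base-multilingual-cased", sentence))
print(top_predictions("camembert-base", sentence))
```

Continued pre-training on in-house text would reuse the same masked-LM model with a standard language-modeling data collator before running this check again.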
2

Holm, Henrik. "Bidirectional Encoder Representations from Transformers (BERT) for Question Answering in the Telecom Domain. : Adapting a BERT-like language model to the telecom domain using the ELECTRA pre-training approach." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-301313.

Abstract:
The Natural Language Processing (NLP) research area has seen notable advancements in recent years, one being the ELECTRA model, which improves the sample efficiency of BERT pre-training by introducing a discriminative pre-training approach. Most publicly available language models are trained on general-domain datasets. Thus, research is lacking for niche domains with domain-specific vocabulary. In this paper, the process of adapting a BERT-like model to the telecom domain is investigated. For efficiency in training the model, the ELECTRA approach is selected. For measuring target-domain performance, the Question Answering (QA) downstream task within the telecom domain is used. Three domain adaption approaches are considered: (1) continued pre-training on telecom-domain text starting from a general-domain checkpoint, (2) pre-training on telecom-domain text from scratch, and (3) pre-training from scratch on a combination of general-domain and telecom-domain text. Findings indicate that approach 1 is both inexpensive and effective, as target-domain performance increases are seen already after small amounts of training, while generalizability is retained. Approach 2 shows the highest performance on the target-domain QA task by a wide margin, albeit at the expense of generalizability. Approach 3 combines the benefits of the former two by achieving good performance on QA both in the general domain and the telecom domain. At the same time, it allows for a tokenization vocabulary well-suited for both domains. In conclusion, the suitability of a given domain adaption approach is shown to depend on the available data and computational budget. Results highlight the clear benefits of domain adaption, even when the QA task is learned through behavioral fine-tuning on a general-domain QA dataset due to insufficient amounts of labeled target-domain data being available.
3

Marzo, i. Grimalt Núria. "Natural Language Processing Model for Log Analysis to Retrieve Solutions For Troubleshooting Processes." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-300042.

Abstract:
In the telecommunications industry, one of the most time-consuming tasks is troubleshooting and the resolution of Trouble Report (TR) tickets. This task involves the understanding of textual data, which can be challenging due to its domain- and company-specific features. The text contains many abbreviations, typos, and tables, as well as numerical information. This work tries to solve the issue of retrieving solutions for new troubleshooting reports in an automated way by using a Natural Language Processing (NLP) model, in particular Bidirectional Encoder Representations from Transformers (BERT)-based approaches. It proposes a text ranking model that, given a description of a fault, can rank the best possible solutions to that problem using answers from past TRs. The model tackles the trade-off between accuracy and latency by implementing a multi-stage BERT-based architecture with an initial retrieval stage and a re-ranker stage. Having a model that achieves the desired accuracy under a latency constraint makes it suitable for industry applications. The experiments to evaluate the latency and the accuracy of the model have been performed on Ericsson's troubleshooting dataset. The evaluation of the proposed model suggests that it is able to retrieve and re-rank solutions for TRs with a significant improvement compared to a non-BERT model.
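A two-stage design of this kind is commonly implemented as a cheap first-stage retriever followed by a BERT cross-encoder that re-scores only the top candidates. The sketch below uses a BertForSequenceClassification head as the re-ranker; the checkpoint, candidate count, and scoring layout are assumptions rather than the thesis' implementation.

```python
import torch
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
reranker = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=1)
reranker.eval()

def rerank(fault_description: str, candidate_solutions: list, top_k: int = 10) -> list:
    """Re-score first-stage candidates with a BERT cross-encoder and return them best-first."""
    candidates = candidate_solutions[:top_k]             # latency: only re-rank a short list
    inputs = tokenizer([fault_description] * len(candidates), candidates,
                       padding=True, truncation=True, max_length=256, return_tensors="pt")
    with torch.no_grad():
        scores = reranker(**inputs).logits.squeeze(-1)   # one relevance score per pair
    order = torch.argsort(scores, descending=True)
    return [candidates[i] for i in order.tolist()]

# The first stage (not shown) would be BM25 or a bi-encoder that narrows the archive of
# past trouble reports down to the handful of candidates passed to `rerank`.
```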
4

Lindblad, Maria. "A Comparative Study of the Quality between Formality Style Transfer of Sentences in Swedish and English, leveraging the BERT model." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-299932.

Abstract:
Formality Style Transfer (FST) is the task of automatically transforming a piece of text from one level of formality to another. Previous research has investigated different methods of performing FST on text in English, but at the time of this project there were, to the author's knowledge, no previous studies analysing the quality of FST on text in Swedish. The purpose of this thesis was to investigate how a model trained for FST in Swedish performs. This was done by comparing the quality of a model trained on text in Swedish for FST to an equivalent model trained on text in English for FST. Both models were implemented as encoder-decoder architectures, warm-started using two pre-existing Bidirectional Encoder Representations from Transformers (BERT) models, pre-trained on Swedish and English text respectively. The two FST models were fine-tuned for both the informal-to-formal task and the formal-to-informal task, using Grammarly's Yahoo Answers Formality Corpus (GYAFC). The Swedish version of GYAFC was created through automatic machine translation of the original English version. The Swedish corpus was then evaluated on the three criteria of meaning preservation, formality preservation and fluency preservation. The results of the study indicated that the Swedish model had the capacity to match the quality of the English model but was held back by the inferior quality of the Swedish corpus. The study also highlighted the need for task-specific corpora in Swedish.
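Warm-starting a sequence-to-sequence model from two pre-trained BERT checkpoints, as done here for each language, is supported directly by the transformers EncoderDecoderModel class. The Swedish checkpoint name below is an assumption for illustration, and the special-token settings are included because generation does not run without them.

```python
from transformers import BertTokenizer, EncoderDecoderModel

checkpoint = "KB/bert-base-swedish-cased"   # assumed Swedish BERT; an English BERT is used analogously
tokenizer = BertTokenizer.from_pretrained(checkpoint)

# Encoder and decoder are both initialized from the same BERT weights; the decoder's
# cross-attention layers are newly initialized and learned during fine-tuning on GYAFC-style pairs.
model = EncoderDecoderModel.from_encoder_decoder_pretrained(checkpoint, checkpoint)
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id

inputs = tokenizer("hej! de där skorna är grymt snygga", return_tensors="pt")
outputs = model.generate(inputs.input_ids, max_length=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # untrained cross-attention: illustrative only
```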
5

Appelstål, Michael. "Multimodal Model for Construction Site Aversion Classification." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-421011.

Abstract:
Aversion on construction sites can be everything from missing material, fire hazards, or insufficient cleaning. These aversions appear very often on construction sites and the construction company needs to report and take care of them in order for the site to run correctly. The reports consist of an image of the aversion and a text describing the aversion. Report categorization is currently done manually which is both time and cost-ineffective. The task for this thesis was to implement and evaluate an automatic multimodal machine learning classifier for the reported aversions that utilized both the image and text data from the reports. The model presented is a late-fusion model consisting of a Swedish BERT text classifier and a VGG16 for image classification. The results showed that an automated classifier is feasible for this task and could be used in real life to make the classification task more time and cost-efficient. The model scored a 66.2% accuracy and 89.7% top-5 accuracy on the task, and the experiments revealed some areas of improvement on the data and model that could be further explored to potentially improve the performance.
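A late-fusion design of the kind described, one branch for the report text and one for the image, with their features concatenated for the final class prediction, can be sketched as follows; the Swedish BERT checkpoint and the fusion layer are assumptions, not the thesis' exact heads or training setup.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16
from transformers import AutoModel

class AversionClassifier(nn.Module):
    """Late fusion of a Swedish BERT text branch and a VGG16 image branch (sketch)."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.text_encoder = AutoModel.from_pretrained("KB/bert-base-swedish-cased")
        self.image_encoder = vgg16(weights=None)           # pass pretrained weights in practice
        self.image_encoder.classifier[-1] = nn.Identity()  # expose the 4096-d VGG16 features
        fused_dim = self.text_encoder.config.hidden_size + 4096
        self.head = nn.Linear(fused_dim, num_classes)

    def forward(self, input_ids, attention_mask, image):
        text_feat = self.text_encoder(input_ids=input_ids,
                                      attention_mask=attention_mask).last_hidden_state[:, 0]
        image_feat = self.image_encoder(image)             # (batch, 4096) after the Identity head
        return self.head(torch.cat([text_feat, image_feat], dim=-1))
```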
6

Monsen, Julius. "Building high-quality datasets for abstractive text summarization : A filtering‐based method applied on Swedish news articles." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-176352.

Abstract:
With an increasing amount of information on the internet, automatic text summarization could potentially make content more readily available for a larger variety of people. Training and evaluating text summarization models require datasets of sufficient size and quality. Today, most such datasets are in English, and for minor languages such as Swedish, it is not easy to obtain corresponding datasets with handwritten summaries. This thesis proposes methods for compiling high-quality datasets suitable for abstractive summarization from a large amount of noisy data through characterization and filtering. The data used consists of Swedish news articles and their preambles which are here used as summaries. Different filtering techniques are applied, yielding five different datasets. Furthermore, summarization models are implemented by warm-starting an encoder-decoder model with BERT checkpoints and fine-tuning it on the different datasets. The fine-tuned models are evaluated with ROUGE metrics and BERTScore. All models achieve significantly better results when evaluated on filtered test data than when evaluated on unfiltered test data. Moreover, models trained on the most filtered dataset with the smallest size achieves the best results on the filtered test data. The trade-off between dataset size and quality and other methodological implications of the data characterization, the filtering and the model implementation are discussed, leading to suggestions for future research.
7

Holmström, Oskar. "Exploring Transformer-Based Contextual Knowledge Graph Embeddings : How the Design of the Attention Mask and the Input Structure Affect Learning in Transformer Models." Thesis, Linköpings universitet, Artificiell intelligens och integrerade datorsystem, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-175400.

Abstract:
The availability and use of knowledge graphs have become commonplace as a compact storage of information and for lookup of facts. However, the discrete representation makes the knowledge graph unavailable for tasks that need a continuous representation, such as predicting relationships between entities, where the most probable relationship needs to be found. The need for a continuous representation has spurred the development of knowledge graph embeddings. The idea is to position the entities of the graph relative to each other in a continuous low-dimensional vector space, so that their relationships are preserved, ideally leading to clusters of entities with similar characteristics. Several methods to produce knowledge graph embeddings have been created, from simple models that minimize the distance between related entities to complex neural models. Almost all of these embedding methods attempt to create an accurate static representation of each entity and relation. However, as with words in natural language, both entities and relations in a knowledge graph hold different meanings in different local contexts. With the recent development of Transformer models, and their success in creating contextual representations of natural language, work has been done to apply them to graphs. Initial results show great promise, but there are significant differences in architecture design across papers. There is no clear direction on how Transformer models can best be applied to create contextual knowledge graph embeddings. Two of the main differences in previous work are how the attention mask is applied in the model and what input graph structures the model is trained on. This report explores how different attention masking methods and graph inputs affect a Transformer model (in this report, BERT) on a link prediction task for triples. Models are trained with five different attention masking methods, which to varying degrees restrict attention, and on three different input graph structures (triples, paths, and interconnected triples). The results indicate that a Transformer model trained with a masked language model objective has the strongest performance on the link prediction task when there are no restrictions on how attention is directed, and when it is trained on graph structures that are sequential. This is similar to how models like BERT learn sentence structure after being exposed to a large number of training samples. For more complex graph structures, it is beneficial to encode information about the graph structure through how the attention mask is applied. There also seem to be some indications that the input graph structure affects the model's capabilities to learn underlying characteristics of the knowledge graph it is trained upon.
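As a toy illustration of link prediction over serialized triples with a masked-language-model objective, the sketch below masks the tail entity of a triple and scores candidate entities with BERT's MLM head. It assumes entities and relations map to single vocabulary tokens, which a real setup would replace with a task-specific vocabulary and training; the attention-mask variations studied in the thesis are not shown.

```python
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def score_tail_candidates(head: str, relation: str, candidates: list) -> dict:
    """Serialize a triple with a masked tail and score candidate entities with the MLM head."""
    text = f"{head} {relation} {tokenizer.mask_token}."
    inputs = tokenizer(text, return_tensors="pt")
    mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    ids = tokenizer.convert_tokens_to_ids(candidates)
    return dict(zip(candidates, logits[ids].tolist()))

# Toy link prediction query (entities assumed to be single tokens for illustration):
print(score_tail_candidates("paris", "is the capital of", ["france", "germany", "spain"]))
```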
8

Vache, Marian [Verfasser], Andreas [Akademischer Betreuer] Janshoff, Andreas [Gutachter] Janshoff, et al. "Structure and Mechanics of Neuronal Model Systems : Insights from Atomic Force Microscopy and Micropipette Aspiration / Marian Vache ; Gutachter: Andreas Janshoff, Franziska Thomas, Florian Rehfeldt, Silvio O. Rizzoli, Carolin Wichmann, Bert De Groot ; Betreuer: Andreas Janshoff." Göttingen : Niedersächsische Staats- und Universitätsbibliothek Göttingen, 2019. http://d-nb.info/1200209206/34.

9

Gräb, Oliver [Verfasser], Claudia [Akademischer Betreuer] [Gutachter] Steinem, Daniel John [Gutachter] Jackson, et al. "Solid Supported Model Membranes Containing Plant Glycolipids: A Tool to Study Interactions between Diatom Biomolecules and the Silicalemma in vitro / Oliver Gräb ; Gutachter: Claudia Steinem, Daniel John Jackson, Burkhard Geil, Bert de Groot, Jochen Hub, Sebastian Kruss ; Betreuer: Claudia Steinem." Göttingen : Niedersächsische Staats- und Universitätsbibliothek Göttingen, 2017. http://d-nb.info/113577840X/34.

10

Zhang, Tao. "Discrepancy-based algorithms for best-subset model selection." Diss., University of Iowa, 2013. https://ir.uiowa.edu/etd/4800.

Abstract:
The selection of a best-subset regression model from a candidate family is a common problem that arises in many analyses. In best-subset model selection, we consider all possible subsets of regressor variables; thus, numerous candidate models may need to be fit and compared. One of the main challenges of best-subset selection arises from the size of the candidate model family: specifically, the probability of selecting an inappropriate model generally increases as the size of the family increases. For this reason, it is usually difficult to select an optimal model when best-subset selection is attempted based on a moderate to large number of regressor variables. Model selection criteria are often constructed to estimate discrepancy measures used to assess the disparity between each fitted candidate model and the generating model. The Akaike information criterion (AIC) and the corrected AIC (AICc) are designed to estimate the expected Kullback-Leibler (K-L) discrepancy. For best-subset selection, both AIC and AICc are negatively biased, and the use of either criterion will lead to overfitted models. To correct for this bias, we introduce a criterion AICi, which has a penalty term evaluated from Monte Carlo simulation. A multistage model selection procedure AICaps, which utilizes AICi, is proposed for best-subset selection. In the framework of linear regression models, the Gauss discrepancy is another frequently applied measure of proximity between a fitted candidate model and the generating model. Mallows' conceptual predictive statistic (Cp) and the modified Cp (MCp) are designed to estimate the expected Gauss discrepancy. For best-subset selection, Cp and MCp exhibit negative estimation bias. To correct for this bias, we propose a criterion CPSi that again employs a penalty term evaluated from Monte Carlo simulation. We further devise a multistage procedure, CPSaps, which selectively utilizes CPSi. In this thesis, we consider best-subset selection in two different modeling frameworks: linear models and generalized linear models. Extensive simulation studies are compiled to compare the selection behavior of our methods and other traditional model selection criteria. We also apply our methods to a model selection problem in a study of bipolar disorder.
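For reference, the two Kullback-Leibler-based criteria discussed in this abstract have the standard textbook forms below (the thesis' AICi variant instead evaluates its penalty term by Monte Carlo simulation):

```latex
\mathrm{AIC} = -2\ln \hat{L} + 2k, \qquad
\mathrm{AICc} = \mathrm{AIC} + \frac{2k(k+1)}{n-k-1},
```

where \(\hat{L}\) is the maximized likelihood of the candidate model, \(k\) its number of estimated parameters, and \(n\) the sample size; the negative bias described above arises when these estimates are minimized over a large best-subset candidate family.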

Books on the topic "BERT model"

1

Zhang, Xiaoqun, and Neil Song. Learn About Topic Analysis Using BERT Model With Fortune 500 Company Data (2022). SAGE Publications Ltd, 2023. http://dx.doi.org/10.4135/9781529666557.

2

W.A.Ve. (Workshop) (17th : 2018 : Venice, Italy), ed. Una macchina Gold(berg): A Gold(berg) machine : Larino e Casacalenda/Molise. Anteferma, 2019.

3

Rocşoreanu, C. The FitzHugh-Nagumo Model: Bifurcation and Dynamics. Springer Netherlands, 2000.

4

Hardebusch, Christof, and Monika Schröder. Das SelbstBau-Modell: Eine Mietergenossenschaft in Prenzlauer Berg. Edited by Energiekontor GmbH, Kny & Weber, and Gesellschaft der Behutsamen Stadterneuerung Berlin. Ch. Links, 1998.

5

Su, Zhenpeng. A Global Kinetic Model for Electron Radiation Belt Formation and Evolution. Springer Berlin Heidelberg, 2015. http://dx.doi.org/10.1007/978-3-662-46651-3.

6

Canada. Natural Resources. Canadian Forest Service. Lake Abitibi model forest: Compendium of phase one projects. Natural Resources, 1998.

7

Sha, Nanshi. On testing the change-point in the longitudinal bent line quantile regression model. [publisher not identified], 2011.

8

Karolewski, Bogusław. Modelowanie zjawisk dynamicznych w przenośnikach taśmowych. Wydawn. Politechniki Wrocławskiej, 1985.

9

Kitney, R. I., and O. Rompelman, eds. The Beat-by-beat investigation of cardiovascular function: Measurement, analysis, and applications. Clarendon Press, 1987.

10

Ward, Donald L. Laboratory study of a dynamic berm revetment. U.S. Army Engineer Waterways Experiment Station, 1992.


Book chapters on the topic "BERT model"

1

Sabharwal, Navin, and Amit Agrawal. "BERT Model Applications: Other Tasks." In Hands-on Question Answering Systems with BERT. Apress, 2021. http://dx.doi.org/10.1007/978-1-4842-6664-9_6.

2

Tseng, Hou-Chiang, Hsueh-Chih Chen, Kuo-En Chang, Yao-Ting Sung, and Berlin Chen. "An Innovative BERT-Based Readability Model." In Lecture Notes in Computer Science. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-35343-8_32.

3

Chouikhi, Hasna, Hamza Chniter, and Fethi Jarray. "Arabic Sentiment Analysis Using BERT Model." In Advances in Computational Collective Intelligence. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-88113-9_50.

4

Zhang, Yice, Yihui Li, Peng Xu, et al. "LightBERT: A Distilled Chinese BERT Model." In Lecture Notes in Computer Science. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-96033-9_5.

5

Peška, Ladislav, Gregor Kovalčík, Tomáš Souček, Vít Škrhák, and Jakub Lokoč. "W2VV++ BERT Model at VBS 2021." In MultiMedia Modeling. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-67835-7_46.

6

Sabharwal, Navin, and Amit Agrawal. "BERT Model Applications: Question Answering System." In Hands-on Question Answering Systems with BERT. Apress, 2021. http://dx.doi.org/10.1007/978-1-4842-6664-9_5.

7

Guo, Ping, Yue Hu, and Yunpeng Li. "MG-BERT: A Multi-glosses BERT Model for Word Sense Disambiguation." In Knowledge Science, Engineering and Management. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-55393-7_24.

8

Bai, Huan, Daling Wang, Shi Feng, and Yifei Zhang. "EK-BERT: An Enhanced K-BERT Model for Chinese Sentiment Analysis." In Communications in Computer and Information Science. Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-7596-7_11.

9

Dang, Weixia, Biyu Zhou, Lingwei Wei, Weigang Zhang, Ziang Yang, and Songlin Hu. "TS-Bert: Time Series Anomaly Detection via Pre-training Model Bert." In Computational Science – ICCS 2021. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-77964-1_17.

10

Kumar, Pankaj, and Sanjib Kumar Sahu. "SIM-BERT: Speech intelligence model using NLP-BERT with improved accuracy." In Artificial Intelligence and Speech Technology. CRC Press, 2021. http://dx.doi.org/10.1201/9781003150664-48.


Conference papers on the topic "BERT model"

1

Ba Alawi, Abdulfattah E., Ferhat Bozkurt, and Mete Yağanoğlu. "BERT-AraPeotry: BERT-based Arabic Poems Classification Model." In 2024 4th International Conference on Emerging Smart Technologies and Applications (eSmarTA). IEEE, 2024. http://dx.doi.org/10.1109/esmarta62850.2024.10638967.

2

Davis, John, Reshma K R, Saktheeswaran D, Sreyas A S, and Reni Jose. "Fake News Detection using BERT Model." In 2025 2nd International Conference on Trends in Engineering Systems and Technologies (ICTEST). IEEE, 2025. https://doi.org/10.1109/ictest64710.2025.11042689.

3

Raj Kumar, V. S., T. Kumaresan, V. Mythily, and N. Raghavendran. "Product Quality Enhancement in Social Media using Emotion-Enhanced Bert (EE-BERT) Model." In 2024 2nd International Conference on Self Sustainable Artificial Intelligence Systems (ICSSAS). IEEE, 2024. https://doi.org/10.1109/icssas64001.2024.10760611.

4

Vakili, Yazdan Zandiye, Avisa Fallah, and Hedieh Sajedi. "Distilled BERT Model in Natural Language Processing." In 2024 14th International Conference on Computer and Knowledge Engineering (ICCKE). IEEE, 2024. https://doi.org/10.1109/iccke65377.2024.10874673.

5

Cheng, Yong, and Mofan Duan. "Chinese Grammatical Error Detection Based on BERT Model." In Proceedings of the 6th Workshop on Natural Language Processing Techniques for Educational Applications. Association for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.nlptea-1.15.

6

Javaji, Prashanth, Pulaparthi Satya Sreeya, and Sudha Rajesh. "Detection of AI Generated Text With BERT Model." In 2024 2nd World Conference on Communication & Computing (WCONF). IEEE, 2024. http://dx.doi.org/10.1109/wconf61366.2024.10692072.

7

Widjaja, Tiffany, Jocelin Wilson, Benedictus Filbert Federico, Daniel Eric Phangandy, and Zhandos Yessenbayev. "Analyzing Genshin Impact Review Scores Using BERT Model." In 2024 International Conference on ICT for Smart Society (ICISS). IEEE, 2024. http://dx.doi.org/10.1109/iciss62896.2024.10751401.

8

Vardhan, Puvvadi Harsha, Hari Prasad, Bandi Vishnu Swaroop, Busa Thanuj Sathwik Reddy, and Manju Venugopalan. "Bert-Based Transformer Model for Hate Speech Detection." In 2024 5th IEEE Global Conference for Advancement in Technology (GCAT). IEEE, 2024. https://doi.org/10.1109/gcat62922.2024.10923943.

9

Shaik, Mohammad Khaja, Yaisna Rajkumari, J. Naga Madhuri, V. Arunadevi, C. Raghavendra Reddy, and Prema S. "Automated English Language Learning Using BERT-LSTM Model." In 2024 International Conference on Artificial Intelligence and Quantum Computation-Based Sensor Application (ICAIQSA). IEEE, 2024. https://doi.org/10.1109/icaiqsa64000.2024.10882368.

10

Zhao, Yunchong, Chiawei Chu, and Chung-Lun Wei. "Neural Machine Translation Based on the BERT Model." In 2025 5th International Symposium on Computer Technology and Information Science (ISCTIS). IEEE, 2025. https://doi.org/10.1109/isctis65944.2025.11065648.


Reports on the topic "BERT model"

1

Freeman, John W. Radiation Belt Test Model. Defense Technical Information Center, 2000. http://dx.doi.org/10.21236/ada399732.

2

Alexander, John. Piracy: The Best Business Model Available. Defense Technical Information Center, 2013. http://dx.doi.org/10.21236/ada591812.

3

Musa, Padde, Zita Ekeocha, Stephen Robert Byrn, and Kari L. Clase. Knowledge Sharing in Organisations: Finding a Best-fit Model for a Regulatory Authority in East Africa. Purdue University, 2021. http://dx.doi.org/10.5703/1288284317432.

Abstract:
Knowledge is an essential organisational asset that contributes to organisational effectiveness when carefully managed. Knowledge sharing (KS) is a vital component of knowledge management that allows individuals to engage in new knowledge creation. Until it’s shared, knowledge is considered useless since it resides within the human brain. Public organisations specifically, are more involved in providing and developing knowledge and hence can be classified as knowledge-intensive organisations. Scholarly research conducted on KS has proposed a number of models to help understand the KS process between individuals but none of these models is specifically for a public organisation. Moreover, to really reap the benefits that KS brings to an organization, it’s imperative to apply a model that is attributable to the unique characteristics of that organisation. This study reviews literature from electronic databases that discuss models of KS between individuals. Factors that influence KS under each model were isolated and the extent of each of their influence on KS in a public organization context, were critically analysed. The result of this analysis gave rise to factors that were thought to be most critical in understanding KS process in a public sector setting. These factors were then used to develop a KS model by categorizing them into themes including organisational culture, motivation to share and opportunity to share. From these themes, a KS model was developed and proposed for KS in a medicines regulatory authority in East Africa. The project recommends that an empirical study be conducted to validate the applicability of the proposed KS model at a medicines regulatory authority in East Africa.
4

Neary, Vincent Sinclair, Zhaoqing Yang, Taiping Wang, Budi Gunawan, and Ann Renee Dallman. Model Test Bed for Evaluating Wave Models and Best Practices for Resource Assessment and Characterization. Office of Scientific and Technical Information (OSTI), 2016. http://dx.doi.org/10.2172/1431460.

5

Williams, Kristine. Multimodal Transportation Best Practices and Model Element. University of South Florida, 2014. http://dx.doi.org/10.5038/cutr-nctr-rr-2013-12.

6

Ogunbire, Abimbola, Panick Kalambay, Hardik Gajera, and Srinivas Pulugurtha. Deep Learning, Machine Learning, or Statistical Models for Weather-related Crash Severity Prediction. Mineta Transportation Institute, 2023. http://dx.doi.org/10.31979/mti.2023.2320.

Abstract:
Nearly 5,000 people are killed and more than 418,000 are injured in weather-related traffic incidents each year. Assessments of the effectiveness of statistical models applied to crash severity prediction compared to machine learning (ML) and deep learning techniques (DL) help researchers and practitioners know what models are most effective under specific conditions. Given the class imbalance in crash data, the synthetic minority over-sampling technique for nominal (SMOTE-N) data was employed to generate synthetic samples for the minority class. The ordered logit model (OLM) and the ordered probit model (OPM) were evaluated as statistical models, while random forest (RF) and XGBoost were evaluated as ML models. For DL, multi-layer perceptron (MLP) and TabNet were evaluated. The performance of these models varied across severity levels, with property damage only (PDO) predictions performing the best and severe injury predictions performing the worst. The TabNet model performed best in predicting severe injury and PDO crashes, while RF was the most effective in predicting moderate injury crashes. However, all models struggled with severe injury classification, indicating the potential need for model refinement and exploration of other techniques. Hence, the choice of model depends on the specific application and the relative costs of false negatives and false positives. This conclusion underscores the need for further research in this area to improve the prediction accuracy of severe and moderate injury incidents, ultimately improving available data that can be used to increase road safety.
7

Keskinen, Michael J. Preliminary Global Radiation Belt Formation and Prediction Model. Defense Technical Information Center, 2008. http://dx.doi.org/10.21236/ada483182.

8

Speer, Josh, David Hinkler, and Gareth Alford. Biomanufacturing digitalization readiness model: Best practice and guidance. BioPhorum, 2023. http://dx.doi.org/10.46220/2023ts003.

9

Filipiak, Katarzyna, Dietrich von Rosen, Martin Singull, and Wojciech Rejchel. Estimation under inequality constraints in univariate and multivariate linear models. Linköping University Electronic Press, 2024. http://dx.doi.org/10.3384/lith-mat-r-2024-01.

Abstract:
In this paper least squares and maximum likelihood estimates under univariate and multivariate linear models with a priori information related to maximum effects in the models are determined. Both loss functions (the least squares and negative log-likelihood) and the constraints are convex, so convex optimization theory can be utilized to obtain estimates, which in this paper are called Safety belt estimates. In particular, the complementary slackness condition, common in convex optimization, implies two alternative types of solutions, strongly dependent on the data and the restriction. It is experimentally shown that, despite the similarity to ridge regression estimation under the univariate linear model, the Safety belt estimates usually behave better than estimates obtained via ridge regression. Moreover, concerning the multivariate model, the proposed technique represents a completely novel approach.
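Schematically, the 'Safety belt' least-squares estimate described above solves a convex program of the following form, where the inequality encodes the a priori maximum effect (the notation is illustrative, not the report's):

```latex
\hat{\beta} = \arg\min_{\beta} \; \lVert y - X\beta \rVert_2^2
\quad \text{subject to} \quad a^{\top}\beta \le c,
```

and the complementary slackness condition of the associated Karush-Kuhn-Tucker system yields the two solution types mentioned in the abstract: the ordinary least-squares estimate when the constraint is inactive, and a boundary solution when it binds.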
10

Schwarz, Kurt. Automated Best Value Model Decision Support System. Functional Description. Defense Technical Information Center, 1993. http://dx.doi.org/10.21236/ada277875.
