Dissertations / Theses on the topic 'BERT model'

Consult the top 50 dissertations / theses for your research on the topic 'BERT model.'

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Zouari, Hend. "French AXA Insurance Word Embeddings : Effects of Fine-tuning BERT and Camembert on AXA France’s data." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-284108.

Full text
Abstract:
In this study we explore the state-of-the-art Natural Language Processing technologies that transform textual data into numerical representations, covering the theory of both the traditional methods and the most recent ones. The thesis focuses on recent advances built on the Transformer model. One of the most relevant innovations was the release of BERT, a deep bidirectional encoder that broke several state-of-the-art results. BERT uses transfer learning to improve the modelling of language dependencies in text. BERT is available for several languages, and specialized models such as the French CamemBERT have also been released. This thesis compares these pre-trained language models and their capability to support domain adaptation. Using the multilingual and French pre-trained versions of BERT and a dataset from AXA France (emails, client messages, legal documents and insurance documents containing over 60 million words), we fine-tuned the language models to adapt them to AXA France's insurance context and create a French AXA Insurance BERT model. We evaluate this model on the language model's ability to predict a masked token from its context. Without fine-tuning, BERT models the French AXA insurance text better than CamemBERT; however, with this small amount of data, CamemBERT adapts better to the specific insurance domain.
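The fine-tuning step described above can be illustrated with a short sketch. This is not the thesis's actual pipeline: it assumes the Hugging Face transformers and datasets libraries, uses the public camembert-base checkpoint, and the corpus file name is a placeholder standing in for the proprietary AXA data.

```python
# Hypothetical sketch: continued masked-language-model training of CamemBERT
# on domain text, the kind of adaptation the abstract describes.
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from datasets import load_dataset

checkpoint = "camembert-base"            # or "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForMaskedLM.from_pretrained(checkpoint)

# One domain sentence per line; the path is a placeholder, not the AXA corpus.
raw = load_dataset("text", data_files={"train": "insurance_corpus_fr.txt"})
tokenized = raw.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"])

collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)
trainer = Trainer(
    model=model,
    args=TrainingArguments("camembert-insurance", num_train_epochs=1,
                           per_device_train_batch_size=16),
    train_dataset=tokenized["train"],
    data_collator=collator)
trainer.train()  # adapts the masked language model to the in-domain text
```

Masked-token prediction, the evaluation the abstract mentions, can then be probed with a fill-mask pipeline on the adapted model.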
APA, Harvard, Vancouver, ISO, and other styles
2

Holm, Henrik. "Bidirectional Encoder Representations from Transformers (BERT) for Question Answering in the Telecom Domain. : Adapting a BERT-like language model to the telecom domain using the ELECTRA pre-training approach." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-301313.

Full text
Abstract:
The Natural Language Processing (NLP) research area has seen notable advancements in recent years, one being the ELECTRA model, which improves the sample efficiency of BERT pre-training by introducing a discriminative pre-training approach. Most publicly available language models are trained on general-domain datasets, so research is lacking for niche domains with domain-specific vocabulary. In this thesis, the process of adapting a BERT-like model to the telecom domain is investigated. For efficiency in training the model, the ELECTRA approach is selected. Target-domain performance is measured on the Question Answering (QA) downstream task within the telecom domain. Three domain adaptation approaches are considered: (1) continued pre-training on telecom-domain text starting from a general-domain checkpoint, (2) pre-training on telecom-domain text from scratch, and (3) pre-training from scratch on a combination of general-domain and telecom-domain text. Findings indicate that approach 1 is both inexpensive and effective, as target-domain performance increases are seen already after small amounts of training, while generalizability is retained. Approach 2 shows the highest performance on the target-domain QA task by a wide margin, albeit at the expense of generalizability. Approach 3 combines the benefits of the former two by achieving good performance on QA in both the general domain and the telecom domain, while also allowing a tokenization vocabulary well suited for both domains. In conclusion, the suitability of a given domain adaptation approach is shown to depend on the available data and computational budget. Results highlight the clear benefits of domain adaptation, even when the QA task is learned through behavioral fine-tuning on a general-domain QA dataset due to insufficient amounts of labeled target-domain data.
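As a rough illustration of the downstream task, the snippet below runs extractive question answering with a public SQuAD-tuned checkpoint standing in for the telecom-adapted ELECTRA model; the context and question are invented examples, not thesis data.

```python
# Minimal extractive QA sketch with an off-the-shelf fine-tuned checkpoint.
from transformers import pipeline

qa = pipeline("question-answering",
              model="distilbert-base-cased-distilled-squad")

context = ("The eNodeB terminates the LTE radio interface and connects to the "
           "core network over the S1 interface.")
result = qa(question="Which interface connects the eNodeB to the core network?",
            context=context)
print(result)  # dict with 'answer', 'score', 'start', 'end'
```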
APA, Harvard, Vancouver, ISO, and other styles
3

Marzo, i. Grimalt Núria. "Natural Language Processing Model for Log Analysis to Retrieve Solutions For Troubleshooting Processes." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-300042.

Full text
Abstract:
In the telecommunications industry, one of the most time-consuming tasks is troubleshooting and the resolution of Trouble Report (TR) tickets. This task involves understanding textual data, which can be challenging due to its domain- and company-specific features: the text contains many abbreviations, typos and tables as well as numerical information. This work addresses the problem of retrieving solutions for new trouble reports in an automated way using a Natural Language Processing (NLP) model, in particular Bidirectional Encoder Representations from Transformers (BERT)-based approaches. It proposes a text ranking model that, given a description of a fault, can rank the best possible solutions to that problem using answers from past TRs. The model tackles the trade-off between accuracy and latency by implementing a multi-stage BERT-based architecture with an initial retrieval stage and a re-ranker stage. A model that achieves the desired accuracy under a latency constraint is suitable for industry applications. The experiments to evaluate the latency and accuracy of the model were performed on Ericsson's troubleshooting dataset. The evaluation suggests that the proposed model is able to retrieve and re-rank solutions for TRs with a significant improvement compared to a non-BERT model.
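The retrieve-then-re-rank idea can be sketched as below. Public MS MARCO checkpoints from the sentence-transformers library stand in for the models trained on Ericsson data, and the trouble-report texts are invented.

```python
# Two-stage ranking sketch: a fast bi-encoder retrieves candidates,
# a slower BERT cross-encoder re-ranks them.
from sentence_transformers import SentenceTransformer, CrossEncoder, util

past_solutions = [
    "Restart the baseband unit and re-seat the optical cable.",
    "Update the radio unit firmware to the latest maintenance release.",
    "Increase the license capacity for the affected cell.",
]
query = "Fault: optical link flapping on the radio unit"

# Stage 1: cheap retrieval over all past answers
retriever = SentenceTransformer("all-MiniLM-L6-v2")
scores = util.cos_sim(retriever.encode(query),
                      retriever.encode(past_solutions))[0]
top = scores.argsort(descending=True)[:2].tolist()

# Stage 2: accurate re-ranking of the surviving candidates
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
rerank_scores = reranker.predict([(query, past_solutions[i]) for i in top])
ranked = sorted(zip(rerank_scores, [past_solutions[i] for i in top]),
                reverse=True)
print(ranked)
```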
APA, Harvard, Vancouver, ISO, and other styles
4

Lindblad, Maria. "A Comparative Study of the Quality between Formality Style Transfer of Sentences in Swedish and English, leveraging the BERT model." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-299932.

Full text
Abstract:
Formality Style Transfer (FST) is the task of automatically transforming a piece of text from one level of formality to another. Previous research has investigated different methods of performing FST on text in English, but at the time of this project there were, to the author's knowledge, no previous studies analysing the quality of FST on text in Swedish. The purpose of this thesis was to investigate how a model trained for FST in Swedish performs. This was done by comparing the quality of a model trained for FST on Swedish text to an equivalent model trained on English text. Both models were implemented as encoder-decoder architectures, warm-started using two pre-existing Bidirectional Encoder Representations from Transformers (BERT) models, pre-trained on Swedish and English text respectively. The two FST models were fine-tuned for both the informal-to-formal and the formal-to-informal task, using Grammarly's Yahoo Answers Formality Corpus (GYAFC). The Swedish version of GYAFC was created through automatic machine translation of the original English version. The Swedish corpus was then evaluated on the three criteria meaning preservation, formality preservation and fluency preservation. The results indicated that the Swedish model had the capacity to match the quality of the English model but was held back by the inferior quality of the Swedish corpus. The study also highlighted the need for task-specific corpora in Swedish.
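The warm-starting step can be sketched roughly as follows, assuming the Hugging Face transformers library; the Swedish checkpoint name (KB/bert-base-swedish-cased) is an assumed stand-in for the model used, and GYAFC itself is licensed, so no real training pairs are shown.

```python
# Sketch: tie two BERT checkpoints together as an encoder-decoder and generate.
from transformers import EncoderDecoderModel, AutoTokenizer

checkpoint = "KB/bert-base-swedish-cased"   # assumed Swedish BERT checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = EncoderDecoderModel.from_encoder_decoder_pretrained(checkpoint,
                                                            checkpoint)

# Config glue required before seq2seq generation with a BERT decoder
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id

informal = tokenizer("hej! det där var typ jättebra gjort",
                     return_tensors="pt")
# After fine-tuning on informal->formal pairs, generation would look like this:
out = model.generate(input_ids=informal["input_ids"],
                     attention_mask=informal["attention_mask"], max_length=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```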
APA, Harvard, Vancouver, ISO, and other styles
5

Appelstål, Michael. "Multimodal Model for Construction Site Aversion Classification." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-421011.

Full text
Abstract:
Aversions on construction sites can be anything from missing material to fire hazards or insufficient cleaning. These aversions appear very often on construction sites, and the construction company needs to report and take care of them in order for the site to run correctly. The reports consist of an image of the aversion and a text describing it. Report categorization is currently done manually, which is both time- and cost-ineffective. The task for this thesis was to implement and evaluate an automatic multimodal machine learning classifier for the reported aversions that utilizes both the image and text data from the reports. The model presented is a late-fusion model consisting of a Swedish BERT text classifier and a VGG16 image classifier. The results showed that an automated classifier is feasible for this task and could be used in real life to make the classification task more time- and cost-efficient. The model scored 66.2% accuracy and 89.7% top-5 accuracy on the task, and the experiments revealed some areas of improvement in the data and model that could be further explored to potentially improve performance.
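A toy version of the late-fusion architecture is sketched below: BERT's pooled text vector and VGG16's penultimate image features are concatenated and fed to a linear classifier. The Swedish checkpoint name and the number of aversion classes are assumptions, and the image is random noise.

```python
# Late-fusion text+image classifier sketch (not the thesis implementation).
import torch
import torch.nn as nn
from torchvision.models import vgg16
from transformers import AutoModel, AutoTokenizer

TEXT_CKPT = "KB/bert-base-swedish-cased"   # assumed Swedish BERT checkpoint

class LateFusionClassifier(nn.Module):
    def __init__(self, n_classes=20):      # n_classes is a guess
        super().__init__()
        self.bert = AutoModel.from_pretrained(TEXT_CKPT)
        self.cnn = vgg16(weights="IMAGENET1K_V1")
        self.cnn.classifier[-1] = nn.Identity()     # keep 4096-d features
        self.head = nn.Linear(self.bert.config.hidden_size + 4096, n_classes)

    def forward(self, input_ids, attention_mask, image):
        text_vec = self.bert(input_ids=input_ids,
                             attention_mask=attention_mask).pooler_output
        image_vec = self.cnn(image)
        return self.head(torch.cat([text_vec, image_vec], dim=-1))

tok = AutoTokenizer.from_pretrained(TEXT_CKPT)
batch = tok(["Brandfarligt material saknar skydd"], return_tensors="pt")
logits = LateFusionClassifier()(batch["input_ids"], batch["attention_mask"],
                                torch.randn(1, 3, 224, 224))
print(logits.shape)   # (1, n_classes)
```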
APA, Harvard, Vancouver, ISO, and other styles
6

Monsen, Julius. "Building high-quality datasets for abstractive text summarization : A filtering‐based method applied on Swedish news articles." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-176352.

Full text
Abstract:
With an increasing amount of information on the internet, automatic text summarization could potentially make content more readily available to a larger variety of people. Training and evaluating text summarization models require datasets of sufficient size and quality. Today, most such datasets are in English, and for smaller languages such as Swedish it is not easy to obtain corresponding datasets with handwritten summaries. This thesis proposes methods for compiling high-quality datasets suitable for abstractive summarization from a large amount of noisy data through characterization and filtering. The data consists of Swedish news articles and their preambles, which are used here as summaries. Different filtering techniques are applied, yielding five different datasets. Furthermore, summarization models are implemented by warm-starting an encoder-decoder model with BERT checkpoints and fine-tuning it on the different datasets. The fine-tuned models are evaluated with ROUGE metrics and BERTScore. All models achieve significantly better results when evaluated on filtered test data than on unfiltered test data. Moreover, the models trained on the most filtered, smallest dataset achieve the best results on the filtered test data. The trade-off between dataset size and quality and other methodological implications of the data characterization, filtering and model implementation are discussed, leading to suggestions for future research.
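The evaluation step mentioned above can be sketched with the rouge-score and bert-score packages; the reference and generated summaries below are invented examples, not thesis data.

```python
# Scoring a generated summary against a reference with ROUGE and BERTScore.
from rouge_score import rouge_scorer
from bert_score import score as bert_score

reference = "Regeringen presenterade i dag en ny budget med satsningar på skolan."
generated = "Regeringen lade fram en ny budget med stora satsningar på skolan."

rouge = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"],
                                 use_stemmer=False)
print(rouge.score(reference, generated))

P, R, F1 = bert_score([generated], [reference], lang="sv")
print(float(F1.mean()))
```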
APA, Harvard, Vancouver, ISO, and other styles
7

Holmström, Oskar. "Exploring Transformer-Based Contextual Knowledge Graph Embeddings : How the Design of the Attention Mask and the Input Structure Affect Learning in Transformer Models." Thesis, Linköpings universitet, Artificiell intelligens och integrerade datorsystem, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-175400.

Full text
Abstract:
The availability and use of knowledge graphs have become commonplace as a compact storage of information and for lookup of facts. However, the discrete representation makes the knowledge graph unavailable for tasks that need a continuous representation, such as predicting relationships between entities, where the most probable relationship needs to be found. The need for a continuous representation has spurred the development of knowledge graph embeddings. The idea is to position the entities of the graph relative to each other in a continuous low-dimensional vector space, so that their relationships are preserved, ideally leading to clusters of entities with similar characteristics. Several methods to produce knowledge graph embeddings have been created, from simple models that minimize the distance between related entities to complex neural models. Almost all of these embedding methods attempt to create an accurate static representation of each entity and relation. However, as with words in natural language, both entities and relations in a knowledge graph hold different meanings in different local contexts. With the recent development of Transformer models, and their success in creating contextual representations of natural language, work has been done to apply them to graphs. Initial results show great promise, but there are significant differences in architecture design across papers, and no clear direction on how Transformer models can best be applied to create contextual knowledge graph embeddings. Two of the main differences in previous work are how the attention mask is applied in the model and which input graph structures the model is trained on. This report explores how different attention masking methods and graph inputs affect a Transformer model (in this report, BERT) on a link prediction task for triples. Models are trained with five different attention masking methods, which to varying degrees restrict attention, and on three different input graph structures (triples, paths, and interconnected triples). The results indicate that a Transformer model trained with a masked language model objective performs strongest on the link prediction task when there are no restrictions on how attention is directed and when it is trained on sequential graph structures. This is similar to how models like BERT learn sentence structure after being exposed to a large number of training samples. For more complex graph structures it is beneficial to encode information about the graph structure through how the attention mask is applied. There are also some indications that the input graph structure affects the model's ability to learn underlying characteristics of the knowledge graph being trained on.
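The masking mechanism under study can be illustrated with a small sketch: a triple is fed to BERT as a pair of segments, once with unrestricted attention and once with a custom 3-D mask. The mask below is an arbitrary example of a restriction, not one of the five masking methods evaluated in the thesis.

```python
# Sketch: feeding a (head, relation/tail) pair to BERT with different
# attention masks. bert-base-uncased is used only for illustration.
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

enc = tok("barack obama", "president of united states", return_tensors="pt")
seq_len = enc["input_ids"].shape[1]

full_mask = torch.ones(1, seq_len, seq_len)          # no restriction
restricted = torch.zeros(1, seq_len, seq_len)
restricted[:, :, enc["token_type_ids"][0] == 0] = 1  # attend to segment 0 only

out_full = model(input_ids=enc["input_ids"], attention_mask=full_mask)
out_restricted = model(input_ids=enc["input_ids"], attention_mask=restricted)
print(out_full.last_hidden_state.shape, out_restricted.last_hidden_state.shape)
```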
APA, Harvard, Vancouver, ISO, and other styles
8

Vache, Marian [Verfasser], Andreas [Akademischer Betreuer] Janshoff, Andreas [Gutachter] Janshoff, et al. "Structure and Mechanics of Neuronal Model Systems : Insights from Atomic Force Microscopy and Micropipette Aspiration / Marian Vache ; Gutachter: Andreas Janshoff, Franziska Thomas, Florian Rehfeldt, Silvio O. Rizzoli, Carolin Wichmann, Bert De Groot ; Betreuer: Andreas Janshoff." Göttingen : Niedersächsische Staats- und Universitätsbibliothek Göttingen, 2019. http://d-nb.info/1200209206/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Gräb, Oliver [Verfasser], Claudia [Akademischer Betreuer] [Gutachter] Steinem, Daniel John [Gutachter] Jackson, et al. "Solid Supported Model Membranes Containing Plant Glycolipids: A Tool to Study Interactions between Diatom Biomolecules and the Silicalemma in vitro / Oliver Gräb ; Gutachter: Claudia Steinem, Daniel John Jackson, Burkhard Geil, Bert de Groot, Jochen Hub, Sebastian Kruss ; Betreuer: Claudia Steinem." Göttingen : Niedersächsische Staats- und Universitätsbibliothek Göttingen, 2017. http://d-nb.info/113577840X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Zhang, Tao. "Discrepancy-based algorithms for best-subset model selection." Diss., University of Iowa, 2013. https://ir.uiowa.edu/etd/4800.

Full text
Abstract:
The selection of a best-subset regression model from a candidate family is a common problem that arises in many analyses. In best-subset model selection, we consider all possible subsets of regressor variables; thus, numerous candidate models may need to be fit and compared. One of the main challenges of best-subset selection arises from the size of the candidate model family: specifically, the probability of selecting an inappropriate model generally increases as the size of the family increases. For this reason, it is usually difficult to select an optimal model when best-subset selection is attempted based on a moderate to large number of regressor variables. Model selection criteria are often constructed to estimate discrepancy measures used to assess the disparity between each fitted candidate model and the generating model. The Akaike information criterion (AIC) and the corrected AIC (AICc) are designed to estimate the expected Kullback-Leibler (K-L) discrepancy. For best-subset selection, both AIC and AICc are negatively biased, and the use of either criterion will lead to overfitted models. To correct for this bias, we introduce a criterion AICi, which has a penalty term evaluated from Monte Carlo simulation. A multistage model selection procedure AICaps, which utilizes AICi, is proposed for best-subset selection. In the framework of linear regression models, the Gauss discrepancy is another frequently applied measure of proximity between a fitted candidate model and the generating model. Mallows' conceptual predictive statistic (Cp) and the modified Cp (MCp) are designed to estimate the expected Gauss discrepancy. For best-subset selection, Cp and MCp exhibit negative estimation bias. To correct for this bias, we propose a criterion CPSi that again employs a penalty term evaluated from Monte Carlo simulation. We further devise a multistage procedure, CPSaps, which selectively utilizes CPSi. In this thesis, we consider best-subset selection in two different modeling frameworks: linear models and generalized linear models. Extensive simulation studies are compiled to compare the selection behavior of our methods and other traditional model selection criteria. We also apply our methods to a model selection problem in a study of bipolar disorder.
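For reference, the baseline that the proposed criteria modify can be sketched as a brute-force best-subset search scored by AIC; the data below are synthetic and the AICi/CPSi penalty terms from the thesis are not reproduced.

```python
# Exhaustive best-subset search over regressors, scored by AIC (statsmodels).
from itertools import combinations
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, p = 100, 6
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(size=n)   # only x0, x2 matter

best = None
for k in range(1, p + 1):
    for subset in combinations(range(p), k):
        fit = sm.OLS(y, sm.add_constant(X[:, list(subset)])).fit()
        if best is None or fit.aic < best[0]:
            best = (fit.aic, subset)

print("AIC-selected subset:", best[1])   # AIC tends to overfit, as noted above
```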
APA, Harvard, Vancouver, ISO, and other styles
11

Borggren, Lukas. "Automatic Categorization of News Articles With Contextualized Language Models." Thesis, Linköpings universitet, Artificiell intelligens och integrerade datorsystem, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-177004.

Full text
Abstract:
This thesis investigates how pre-trained contextualized language models can be adapted for multi-label text classification of Swedish news articles. Various classifiers are built on pre-trained BERT and ELECTRA models, exploring global and local classifier approaches. Furthermore, the effects of domain specialization, using additional metadata features and model compression are investigated. Several hundred thousand news articles are gathered to create unlabeled and labeled datasets for pre-training and fine-tuning, respectively. The findings show that a local classifier approach is superior to a global classifier approach and that BERT outperforms ELECTRA significantly. Notably, a baseline classifier built on SVMs yields competitive performance. The effect of further in-domain pre-training varies; ELECTRA’s performance improves while BERT’s is largely unaffected. It is found that utilizing metadata features in combination with text representations improves performance. Both BERT and ELECTRA exhibit robustness to quantization and pruning, allowing model sizes to be cut in half without any performance loss.
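The compression result mentioned above (halving model size without performance loss) can be sketched with PyTorch dynamic quantization; the checkpoint and label count below are generic placeholders rather than the thesis's news-domain model.

```python
# Dynamic int8 quantization of a BERT classifier's linear layers.
import os
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=10)     # placeholder model and label count

quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8)

def size_mb(m, path="tmp.pt"):
    torch.save(m.state_dict(), path)
    return os.path.getsize(path) / 1e6

print(f"{size_mb(model):.0f} MB -> {size_mb(quantized):.0f} MB")  # roughly halved
```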
APA, Harvard, Vancouver, ISO, and other styles
12

Chirravuri, Varun R. "Identifying a low-order beat-to-beat model of arterial baroreflex action." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/61152.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. Includes bibliographical references (p. 127-133).
The arterial baroreflex is a fast-acting control mechanism that the body relies on to regulate blood pressure. Previous efforts to quantitatively model the baroreflex have relied primarily on non-parametric characterization of the transfer function from blood pressure to heart rate (Berger et al., 1989; Akselrod et al., 1981, 1985). Of the parametric models proposed, most focus on matching empirical transfer functions with continuous-time models (Berger et al., 1991). Use of these models is often restricted to simulation, and consequently not focused on prediction. We develop a beat-to-beat, one-pole model for the baroreflex that can parsimoniously capture both the empirical frequency-domain and time-domain characteristics of the baroreflex. Further, we develop a robust identification method for on-line estimation of our model parameters from clinical data. We conclude by presenting preliminary results of our model and estimation method applied to patients undergoing drug-induced autonomic blockade.
APA, Harvard, Vancouver, ISO, and other styles
13

Sagen, Markus. "Large-Context Question Answering with Cross-Lingual Transfer." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-440704.

Full text
Abstract:
Models based on the transformer architecture have become one of the most prominent approaches for solving a multitude of natural language processing (NLP) tasks since its introduction in 2017. However, much research related to the transformer model has focused primarily on achieving high performance, and many problems remain unsolved. Two of the most prominent are the lack of high-performing non-English pre-trained models and the limited number of words most trained models can incorporate as context. Solving these problems would make NLP models more suitable for real-world applications, improving information retrieval, reading comprehension, and more. Previous research has focused on incorporating long context for English language models only. This thesis investigates the cross-lingual transferability between languages when training for long context only in English. Training long-context models only in English could make long context more accessible in low-resource languages, such as Swedish, since such data is hard to find in most languages and costly to train for each language. This could become an efficient method for creating long-context models in other languages without the need for such data in all languages or pre-training from scratch. We extend the models' context using the training scheme of the Longformer architecture and fine-tune on a question-answering task in several languages. Our evaluation could not satisfactorily confirm or deny whether transferring long-term context is possible for low-resource languages. We believe that using datasets that require long-context reasoning, such as a multilingual TriviaQA dataset, could demonstrate the validity of our hypothesis.
APA, Harvard, Vancouver, ISO, and other styles
14

Perilla, Rozo Carlos Andres. "Noise model for a dual frequency comb beat." Doctoral thesis, Université Laval, 2019. http://hdl.handle.net/20.500.11794/34414.

Full text
Abstract:
This thesis proposes a noise model refinement for spectroscopic measurements using dual optical frequency combs. Until now, most studies have centered their efforts on noise characterization using chirp-free combs, based on an unproven hypothesis: measurements would get worse with chirped combs, since multiplicative noises would be present over a longer duration on the interference pattern and thus have a greater impact. However, at least one experimental result hinted to the contrary: differential chirp would actually improve the signal-to-noise ratio. This thesis therefore aims at increasing the understanding of noise when a differential chirp is present in a dual comb measurement. The specific goal is to provide new insights into the usefulness of chirp in this kind of measurement. With this in mind, we conducted a literature review of noise models in optical frequency combs. We subsequently analyzed the chirp's effect in the presence of both additive and multiplicative noise. The thesis also proposes a phenomenological model to describe the amplified spontaneous emission (ASE) in short-pulse lasers mode-locked using nonlinear polarization rotation. Finally, the comb spectra and their beat notes are characterized, paying special attention to their relation with the ASE components.
In conclusion, we can report that noise power spectral density levels do not change with a differential chirp. Chirping allows sending a greater optical power through the sample, such that the measurement signal-to-noise ratio can be improved. On the other hand, the ASE characterization established its non-stationary nature and explained characteristic features routinely observed in dual comb beat notes that were not fully understood. Finally, assuming the ASE experiences a sub-threshold linear cavity allows using these features to estimate the non-linear phase shift experienced by the mode-locked pulse train in the laser cavity.
APA, Harvard, Vancouver, ISO, and other styles
15

Wenestam, Arvid. "Labelling factual information in legal cases using fine-tuned BERT models." Thesis, Uppsala universitet, Statistiska institutionen, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-447230.

Full text
Abstract:
Labelling factual information at the token level in legal cases requires legal expertise and is time-consuming. This thesis proposes a transfer-learning and fine-tuning implementation of pre-trained state-of-the-art BERT models to perform this labelling task. Investigations are made into whether models pre-trained solely on legal corpora outperform a BERT trained on a generic corpus, and into the models' behaviour as the number of cases in the training sample varies. This work showed that the models' metric scores are stable and on par when using 40-60 professionally annotated cases as opposed to the full sample of 100 cases. Also, the generic-trained BERT model is a strong baseline, and a BERT pre-trained solely on legal corpora is not crucial for this task.
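The labelling setup can be sketched as a standard BERT token-classification model; the tag set, checkpoint and example sentence below are invented, since the annotated legal cases are not public.

```python
# Token-level labelling sketch with a BERT token-classification head.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

labels = ["O", "B-FACT", "I-FACT"]                  # assumed tag set
tok = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-cased", num_labels=len(labels))

enc = tok("The defendant signed the contract on 3 May 2018.",
          return_tensors="pt")
with torch.no_grad():
    logits = model(**enc).logits                    # (1, seq_len, num_labels)
pred = logits.argmax(-1)[0].tolist()
tokens = tok.convert_ids_to_tokens(enc["input_ids"][0])
print([(t, labels[i]) for t, i in zip(tokens, pred)])  # untrained = random tags
```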
APA, Harvard, Vancouver, ISO, and other styles
16

Wang, Jun. "Selecting the Best Linear Mixed Model Using Predictive Approaches." Diss., CLICK HERE for online access, 2007. http://contentdm.lib.byu.edu/ETD/image/etd1697.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Nnamani, Amuluche Gregory. "The African Synod and the Model of Church-as-Family." Bulletin of Ecumenical Theology, 1994. http://digital.library.duq.edu/u?/bet,438.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Buck, Mitchell Arthur. "Experiments and numerical model for berm and dune erosion." Access to citation, abstract and download form provided by ProQuest Information and Learning Company; downloadable PDF file, 190 p, 2008. http://proquest.umi.com/pqdweb?did=1456291111&sid=6&Fmt=2&clientId=8331&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Berggren, Erik, and Fredrik Folkelid. "Which GARCH model is best for Value-at-Risk?" Thesis, Uppsala universitet, Nationalekonomiska institutionen, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-244448.

Full text
Abstract:
The purpose of this thesis is to identify the best volatility model for Value-at-Risk (VaR) estimations. We estimate 1% and 5% VaR figures for Nordic indices and stocks by using two symmetrical and two asymmetrical GARCH models under different error distributions. Out-of-sample volatility forecasts are produced using a 500-day rolling window estimation on data covering January 2007 to December 2014. The VaR estimates are thereafter evaluated through Kupiec's test and Christoffersen's test in order to find the best model. The results suggest that asymmetrical models perform better than symmetrical models, albeit the simple ARCH is often good enough for 1% VaR estimates.
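One step of the rolling estimation can be sketched with the arch package; the returns below are simulated stand-ins for the Nordic index data, and only the normal-distribution GARCH(1,1) case is shown.

```python
# One rolling-window GARCH(1,1) forecast and the implied 1% / 5% VaR.
import numpy as np
from scipy.stats import norm
from arch import arch_model

rng = np.random.default_rng(1)
returns = rng.standard_t(df=6, size=1500)          # toy daily % returns

window = returns[-500:]                            # 500-day rolling window
res = arch_model(window, vol="GARCH", p=1, q=1, dist="normal").fit(disp="off")
sigma = np.sqrt(res.forecast(horizon=1).variance.values[-1, 0])

var_1 = norm.ppf(0.01) * sigma
var_5 = norm.ppf(0.05) * sigma
print(f"1% VaR: {var_1:.2f}%, 5% VaR: {var_5:.2f}%")
```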
APA, Harvard, Vancouver, ISO, and other styles
20

Abeln, Brittany, and Brittany Abeln. "Best Practice Model for Nurses Experiencing Work-Related Bereavement." Thesis, The University of Arizona, 2014. http://hdl.handle.net/10150/555521.

Full text
Abstract:
The purpose of this thesis is to present a best practice model to support nurses experiencing work-related bereavement to prevent complicated grief following the death of a patient. The death of a patient is a universal experience for nurses. Nurses are at risk for experiencing complicated grief due to limited time to process patient deaths. Review of the literature in PubMed was conducted using the keywords: nurse, death, patient death, attitudes, nurses' response and dying. The project produced a best practice model for nurses experiencing work-related bereavement. The Helping Overcome Patient Expiration (HOPE) Model gives nurses experiencing work-related bereavement options for support. It incorporates a monthly nurse's grief support meeting, Zen room, handout and workplace memorial. The project culminated in a hypothetical implementation and evaluation plan of the HOPE Model. The proposed best practice model for nurses experiencing work-related bereavement would reduce possible stressors, prevent maladaptive coping, and promote nurse retention in hospitals.
APA, Harvard, Vancouver, ISO, and other styles
21

Tarsitano, Davide. "Is there a best model? : a radioecological case study." Thesis, University of Nottingham, 2005. http://eprints.nottingham.ac.uk/10202/.

Full text
Abstract:
Mathematical models are extensively used to support decision-making in many disciplines. Nevertheless, there are no clear standard guidelines for assessing model performance. This significantly affects model selection processes, which aim to determine the "best model" among several possible candidates. Model performance is often measured by the accuracy with which model predictions fit independent observations. However, this test assesses only a single aspect of a model. A model selection process should establish the similarities between the constructed and the conceptual model. Therefore it should be based on a comprehensive assessment of the model's capabilities, which is the objective of the multi-aspect comparison approach proposed in this work. The innovative aspect of this approach is to create a relationship among four conventional tests, i.e. uncertainty and sensitivity analysis, goodness-of-fit of predictions to observations, model complexity and level of detail, in order to provide a reliable estimation of the differences between the constructed and conceptual models. Although model complexity is quantified using a standard approach, a novel methodology is proposed in this thesis, intended to be an intuitive and illustrative way of creating a linkage between model complexity and level of detail. Five radioecological models have been considered: the SAVE rural model, the TEMAS rural model, the SAVE semi-natural model, FORM and RIFE1. The results show that there is a limited resemblance between these models and the respective conceptual models. This is due to low prediction accuracy (RIFE1 and FORM); high levels of uncertainty (SAVE rural); and sensitivity to parameters which is not consistent with the current understanding of radiocaesium behaviour in the environment (TEMAS and SAVE rural). The SAVE rural model has been revisited in order to increase the similarity between the constructed and conceptual model. The resulting model prediction shows a lower degree of uncertainty, and there is significant agreement between the model sensitivity results and the general understanding of the processes affecting Cs soil-to-plant transfer. Nonetheless, the revised model does not show higher prediction accuracy than the original model. It is concluded that a reliable methodology for model selection should be based on a comprehensive investigation of each considered model aspect and that there is no single best approach. The methodology proposed in this work has been successful in the case of the five radioecological models studied.
APA, Harvard, Vancouver, ISO, and other styles
22

Roos, Daniel. "Evaluation of BERT-like models for small scale ad-hoc information retrieval." Thesis, Linköpings universitet, Artificiell intelligens och integrerade datorsystem, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-177675.

Full text
Abstract:
Measuring semantic similarity between two sentences is an ongoing research field with big leaps being taken every year. This thesis looks at using modern methods of semantic similarity measurement for an ad-hoc information retrieval (IR) system. The main challenge tackled was answering the question "What happens when you don’t have situation-specific data?". Using encoder-based transformer architectures pioneered by Devlin et al., which excel at fine-tuning to situationally specific domains, this thesis shows just how well the presented methodology can work and makes recommendations for future attempts at similar domain-specific tasks. It also shows an example of how a web application can be created to make use of these fast-learning architectures.
APA, Harvard, Vancouver, ISO, and other styles
23

Njoku, Uzochukwu J. "AFRICAN COMMUNALISM: FROM A CULTURAL MODEL TO A CULTURE IN CRISIS." Bulletin of Ecumenical Theology, 2006. http://digital.library.duq.edu/u?/bet,2900.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Riat, Amerdeep Singh. "'Best practice' lean production in small to medium sized manufacturing enterprises, and its assessment." Thesis, University of Sunderland, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.337041.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Varenius, Malin. "Using Hidden Markov Models to Beat OMXS30." Thesis, Uppsala universitet, Tillämpad matematik och statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-409780.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Kim, Dooroo. "Dynamic modeling of belt drives using the elastic/perfectly-plastic friction law." Thesis, Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/29637.

Full text
Abstract:
Thesis (M. S.)--Mechanical Engineering, Georgia Institute of Technology, 2010. Committee Chair: Leamy, Michael; Committee Member: Costello, Mark; Committee Member: Ferri, Aldo. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other styles
27

Poon, Joanna L. K. "Development of a process model for the design stage of building projects." Thesis, University of Wolverhampton, 2001. http://hdl.handle.net/2436/111548.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Schreiter, Robert J. "Identity and Communication between Locality/Contextuality and Globality/Universality - A Semiotic-Linguistic Model." Bulletin of Ecumenical Theology, 1997. http://digital.library.duq.edu/u?/bet,610.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Indrawati, Triyanti. "Sequential method for selecting the best probability model for equities." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape2/PQDD_0025/MQ51723.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Al-Dubaibi, Aishah Saad. "The best model for pairwise comparisons and a case study." Thesis, Brunel University, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.387876.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Athey, Glenn C. "Evaluating a best practice model for an economic development agency." Thesis, University of Glasgow, 1998. http://theses.gla.ac.uk/3129/.

Full text
Abstract:
This thesis is concerned with evaluating effectiveness and performance in economic development agencies. Development agencies are typically quasi-public bodies that operate at metropolitan, sub-regional and local scales with the purpose of promoting and realising economic development in their areas. The aim of this thesis is to develop a best practice model for such agencies. The institutions that were studied as part of this project included a wide range of different economic development organisations located in Belfast, Berlin, Glasgow and London. Initially, the thesis discusses the history of economic development activity at sub-national scales in the UK and internationally, and explores the role that such agencies play. Aspects of organisational performance and effectiveness in the context of economic development agencies are further discussed. The research proceeds according to a framework of organisational analysis, describing and analysing the environment that agencies operate in, the most influential characteristics and factors for agency performance, and features of operational design and implementation. The basis for the original research in this thesis is data from a substantial number of qualitative interviews with individuals from development agencies and other interest groups. The thesis argues that there are a wide range of characteristics and factors that contribute to agency effectiveness and performance, and that these have been insufficiently explored in past research. Economic development agencies are also significantly influenced by the environment which they operate in. Overall, it is argued that in order to be successful at their task, economic development agencies need to be truly excellent organisations. This includes developing effective mechanisms for corporate management, staff development, and a market-led rationale for organisational philosophy and action. The concluding chapter of this thesis develops a framework for creating and sustaining excellence in economic development organisations.
APA, Harvard, Vancouver, ISO, and other styles
32

Haeggström, Andreas, and Jennie Sund. "Prognosmodell för svenska läns bruttoregionalprodukt (BRP) : En komparativ analys av bayesian model averaging, best subset selection och en longitudinell modell." Thesis, Umeå universitet, Statistik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-105339.

Full text
Abstract:
The primary aim of this thesis is to create a forecasting model for the gross regional product (BRP) of Sweden's 21 counties. The need for a forecasting model is motivated by the fact that Statistics Sweden (SCB) currently publishes the definitive BRP figures with a two-year delay. Regional decision-makers may therefore be interested in an estimate of how BRP has developed over the last two years. The method used is Bayesian model averaging (BMA), which is evaluated and compared with two other methods: a multiple linear model estimated with ordinary least squares, where variable selection is performed with best subset selection (BSS), and a time-series model referred to here as a longitudinal model (LM). The results show, among other things, that the models suffer from multicollinearity. How well the three methods predict BRP is evaluated with the validation set approach and reported with several precision measures. One of these measures, the mean absolute percentage error (MAPE), was 6.67% for BMA, 6.61% for BSS and 4.08% for LM.
APA, Harvard, Vancouver, ISO, and other styles
33

Sun, Di, and 孙镝. "Optimization of Berth allocations in container terminals." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2012. http://hub.hku.hk/bib/B48199564.

Full text
Abstract:
Efficient and effective berth allocation is essential to guarantee high container throughput in a container terminal. Modern mega-terminals are usually comprised of multiple disjointed berths. However, this type of Berth Allocation Problem (BAP) has not attracted a lot of attention from the academic world due to its great complexity. This research develops new methodologies for solving complex BAPs, in particular, BAPs involving quay crane scheduling in a multiple-berth environment. This research develops a mathematical model and a new Branch and Price algorithm (B&P) which hybridizes the column generation approach and the Branch and Bound method (B&B) to generate optimal multiple-berth plans (MBAP) within acceptable time limits. A new exact algorithm based on the label-correcting concept is designed to obtain all potential columns by defining a new label structure and dominance rules. To accelerate the generation of columns, two heuristics are proposed to distribute vessels among berths and to establish the handling sequence of the vessels allocated to each berth. An early termination condition is also developed to avoid the "tailing off effect" phenomenon during the column generation process. The effectiveness and robustness of the proposed methodology are demonstrated by solving a set of randomly generated test problems. Since the Berth Allocation Problem (BAP) and the Quay Crane Scheduling Problem (QCSP) strongly interact, this research also studies the Simultaneous Berth Allocation and Quay Crane Scheduling Problem (BAQCSP). An advanced mathematical model and a new hybrid meta-heuristic GA-TS algorithm which is based on the concept of Genetic Algorithm (GA) are developed to solve the proposed BAQCSP effectively and efficiently. A new crossover operation inspired by the memory-based strategy of Tabu Search (TS) and the mutation operation are implemented to avoid premature convergence of the optimization process. The local search ability of TS is incorporated into the mutation operation to improve the exploitation of the solution space. Comparative experiments are also conducted to show the superiority of the performance of the proposed GA-TS Algorithm over the B&B and the canonical GA. Furthermore, this research extends the scope of BAQCSP to consider the Simultaneous Multiple-berth Allocation and Quay Crane Scheduling Problem (MBAQCSP). An MBAQCSP model is developed consisting of various operational constraints arising from a wide range of practical applications. Since MBAQCSP combines the structures of both MBAP and BAQCSP, the exact B&P proposed for solving MBAP can be modified to optimally solve MBAQCSP. However, the calculation time of B&P increases significantly as the V/B ratio (i.e., vessel number to berth number) grows. In order to eliminate this shortcoming, this research develops a GA-TS Aided Column Generation Algorithm which hybridizes the GA-TS Algorithm proposed for solving BAQCSP with the Column Generation Algorithm to locate the optimal or near optimal solutions of MBAQCSP. The computational results show that the proposed hybrid algorithm locates excellent near optimal solutions to all test problems within acceptable time limits, even problems with high V/B ratios. Finally, this research also shows that the proposed GA-TS Aided Column Generation Algorithm can be easily modified to solve MBAP efficiently.
APA, Harvard, Vancouver, ISO, and other styles
34

Kudlička, Miroslav. "Model terminálu VSAT." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2010. http://www.nusl.cz/ntk/nusl-218635.

Full text
Abstract:
This work deals with the description of communication using the VSAT satellite network. The network topology used, the frequency bands, the satellite orbits and the access technology are defined. The next part focuses on the VSAT terminal, for which a block diagram is shown. A model of the indoor unit (IDU) is designed in the Ansoft Designer system environment. The individual parts of the system model are analyzed in terms of input variables, and the results of the simulation are shown. The curves of BER before and after Viterbi decoding are also shown.
APA, Harvard, Vancouver, ISO, and other styles
35

Brooks, Roger John. "A framework for choosing the best model in mathematical modelling and simulation." Thesis, University of Birmingham, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.364524.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Grek, Åsa. "Forecasting accuracy for ARCH models and GARCH (1,1) family : Which model does best capture the volatility of the Swedish stock market?" Thesis, Örebro universitet, Handelshögskolan vid Örebro Universitet, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-37495.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Parthasarathy, Prithwick. "Model for energy consumption of 2D Belt Robot : Master’s thesis work." Thesis, Högskolan Väst, Avdelningen för produktionssystem (PS), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:hv:diva-9871.

Full text
Abstract:
A production industry with many robots working 24 hours a day, 7 days a week consumes a lot of energy. Industries aim to reduce the energy consumed per machine so as to support their financial budgets and also to be a more sustainable, energy efficient entity. Energy models can be used to predict the energy consumed by robot(s) for optimising the input parameters which determine robot motion and task execution. This work presents an energy model to predict the energy consumption of 2D belt robots used for press line tending. Based on the components' specifications and the trajectory, an estimation of the energy consumption is computed. As part of this work, the proposed energy model is formulated, implemented in MATLAB and experimentally validated. The energy model is further used to investigate the effect of tool weight on energy consumption which includes predicting potential energy reductions achieved by reducing the weight of the gripper tools. Further, investigation of potential energy savings which can be achieved when mechanical brakes are used when the robot is idle is also presented. This illustrates the purpose and usefulness of the proposed energy model.
APA, Harvard, Vancouver, ISO, and other styles
38

Carter, Knute Derek. "Best-subset model selection based on multitudinal assessments of likelihood improvements." Diss., University of Iowa, 2013. https://ir.uiowa.edu/etd/5726.

Full text
Abstract:
Given a set of potential explanatory variables, one model selection approach is to select the best model, according to some criterion, from among the collection of models defined by all possible subsets of the explanatory variables. A popular procedure that has been used in this setting is to select the model that results in the smallest value of the Akaike information criterion (AIC). One drawback in using the AIC is that it can lead to the frequent selection of overspecified models. This can be problematic if the researcher wishes to assert, with some level of certainty, the necessity of any given variable that has been selected. This thesis develops a model selection procedure that allows the researcher to nominate, a priori, the probability at which overspecified models will be selected from among all possible subsets. The procedure seeks to determine if the inclusion of each candidate variable results in a sufficiently improved fitting term, and hence is referred to as the SIFT procedure. In order to determine whether there is sufficient evidence to retain a candidate variable or not, a set of threshold values are computed. Two procedures are proposed: a naive method based on a set of restrictive assumptions; and an empirical permutation-based method. Graphical tools have also been developed to be used in conjunction with the SIFT procedure. The graphical representation of the SIFT procedure clarifies the process being undertaken. Using these tools can also assist researchers in developing a deeper understanding of the data they are analyzing. The naive and empirical SIFT methods are investigated by way of simulation under a range of conditions within the standard linear model framework. The performance of the SIFT methodology is compared with model selection by minimum AIC; minimum Bayesian Information Criterion (BIC); and backward elimination based on p-values. The SIFT procedure is found to behave as designed—asymptotically selecting those variables that characterize the underlying data generating mechanism, while limiting the selection of false or spurious variables to the desired level. The SIFT methodology offers researchers a promising new approach to model selection, whereby they are now able to control the probability of selecting an overspecified model to a level that best suits their needs.
APA, Harvard, Vancouver, ISO, and other styles
39

Bergkvist, Alexander, Nils Hedberg, Sebastian Rollino, and Markus Sagen. "Surmize: An Online NLP System for Close-Domain Question-Answering and Summarization." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-412247.

Full text
Abstract:
The amount of data available and consumed by people globally is growing. To reduce mental fatigue and increase the general ability to gain insight into complex texts or documents, we have developed an application to aid in this task. The application allows users to upload documents and ask domain-specific questions about them through our web interface. A summarized version of each document is presented to the user, which could further facilitate their understanding of the document and guide them towards the types of questions that could be relevant to ask. Our application gives users flexibility in the types of documents that can be processed, is publicly available, stores no user data, and uses state-of-the-art models for its summaries and answers. The result is an application that yields near human-level intuition for answering questions in certain isolated cases, such as Wikipedia and news articles, as well as some scientific texts. The application shows a decrease in the reliability of its predictions as the complexity of the subject, the number of words in the document, and the grammatical inconsistency of the questions increase. These are all aspects that can be improved further if the system is used in production.
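For readers wanting a concrete picture of such a pipeline, the following minimal sketch combines a summarization and a question-answering pipeline from Hugging Face; the checkpoints named here are common public models assumed for illustration, not necessarily the ones used by Surmize.

from transformers import pipeline

# Assumed public checkpoints for illustration.
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")
qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

document = (
    "The uploaded document text would go here. In the web application the user uploads a file, "
    "reads an automatically generated summary, and then asks domain-specific questions about it."
)

summary = summarizer(document, max_length=40, min_length=10, do_sample=False)[0]["summary_text"]
answer = qa(question="What does the user upload?", context=document)

print(summary)
print(answer["answer"], round(answer["score"], 3))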
APA, Harvard, Vancouver, ISO, and other styles
40

Luo, Ziyang. "Analyzing the Anisotropy Phenomenon in Transformer-based Masked Language Models." Thesis, Uppsala universitet, Institutionen för lingvistik och filologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-445537.

Full text
Abstract:
In this thesis, we examine the anisotropy phenomenon in popular masked language models, BERT and RoBERTa, in detail. We propose a possible explanation for this unreasonable phenomenon. First, we demonstrate that the contextualized word vectors derived from pretrained masked language model-based encoders share a common, perhaps undesirable pattern across layers. Namely, we find cases of persistent outlier neurons within BERT and RoBERTa's hidden state vectors that consistently bear the smallest or largest values in said vectors. In an attempt to investigate the source of this information, we introduce a neuron-level analysis method, which reveals that the outliers are closely related to information captured by positional embeddings. Second, we find that a simple normalization method, whitening, can make the vector space isotropic. Lastly, we demonstrate that "clipping" the outliers or whitening the vectors can more accurately distinguish word senses, as well as lead to better sentence embeddings when mean pooling.
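The two remedies mentioned, whitening and clipping outlier dimensions, can be sketched in a few lines of Python; this follows the standard PCA-whitening recipe on a toy anisotropic point cloud and is not the exact procedure from the thesis.

import numpy as np

def whiten(vectors):
    """PCA-whiten row vectors: zero mean and (approximately) identity covariance."""
    centered = vectors - vectors.mean(axis=0, keepdims=True)
    cov = centered.T @ centered / len(centered)
    eigvals, eigvecs = np.linalg.eigh(cov)
    return centered @ eigvecs @ np.diag(1.0 / np.sqrt(eigvals + 1e-8))

def clip_outliers(vectors, n_dims=2):
    """Zero the dimensions with the largest mean absolute activation (the 'outlier neurons')."""
    out = vectors.copy()
    out[:, np.argsort(np.abs(out).mean(axis=0))[-n_dims:]] = 0.0
    return out

def mean_cosine(vectors):
    unit = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    sims = unit @ unit.T
    return float(sims[np.triu_indices(len(unit), k=1)].mean())

# Toy stand-in for contextualized hidden states: a common offset makes the space anisotropic.
emb = np.random.default_rng(0).normal(size=(500, 64)) + 4.0
print("mean cosine raw     :", round(mean_cosine(emb), 3))            # close to 1 (anisotropic)
print("mean cosine whitened:", round(mean_cosine(whiten(emb)), 3))    # near 0 (isotropic)
print("mean cosine clipped :", round(mean_cosine(clip_outliers(emb)), 3))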
APA, Harvard, Vancouver, ISO, and other styles
41

Njoku, Francis O. C. "Some Indigenous Models In African Theology And An Ethic Of Inculturation." Bulletin of Ecumenical Theology, 1996. http://digital.library.duq.edu/u?/bet,568.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Burkes, Darryl A. "X-33 TELEMETRY BEST SOURCE SELECTION, PROCESSING, DISPLAY, AND SIMULATION MODEL COMPARISON." International Foundation for Telemetering, 1998. http://hdl.handle.net/10150/609673.

Full text
Abstract:
International Telemetering Conference Proceedings / October 26-29, 1998 / Town & Country Resort Hotel and Convention Center, San Diego, California
The X-33 program requires the use of multiple telemetry ground stations to provide continuous coverage of the launch, ascent, re-entry and approach phases for flights from Edwards AFB, California, to landings at Dugway Proving Grounds, Utah, and Malmstrom AFB, Montana. This paper will discuss the X-33 telemetry requirements and design, including information on the fixed and mobile telemetry systems, automated best source selection system, processing/display support for range safety officers (RSO) and range engineers, and comparison of real-time data with simulated data using the Dynamic Ground Station Analysis model. Due to the use of multiple ground stations and short duration flights, the goal throughout the X-33 missions is to automatically provide the best telemetry source for critical vehicle performance monitoring. The X-33 program was initiated by National Aeronautics and Space Administration (NASA) Cooperative Agreement No. NCC8-115 with Lockheed Martin Skunk Works (LMSW).
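As a purely illustrative sketch of automated best-source selection, the Python snippet below picks, for each time interval, the ground-station stream with the highest quality score; the frame-sync lock ratios and station names are assumptions drawn from the abstract, not the actual X-33 selection logic.

def best_source_per_interval(quality):
    """For each time interval, pick the station whose stream has the highest quality score."""
    stations = list(quality)
    n_intervals = len(next(iter(quality.values())))
    return [max(stations, key=lambda s: quality[s][i]) for i in range(n_intervals)]

# Assumed frame-sync lock ratios (0..1) for three stations over five intervals of a flight.
scores = {
    "Edwards": [0.99, 0.97, 0.40, 0.10, 0.05],
    "Dugway": [0.10, 0.35, 0.90, 0.98, 0.60],
    "Malmstrom": [0.02, 0.05, 0.20, 0.70, 0.95],
}
print(best_source_per_interval(scores))  # hands off from Edwards to Dugway to Malmstrom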
APA, Harvard, Vancouver, ISO, and other styles
43

Thatcher, Gregory W. "A model of best practice: Leadership development programs in the nuclear industry." Thesis, University of North Texas, 2006. https://digital.library.unt.edu/ark:/67531/metadc5307/.

Full text
Abstract:
This study looked at leadership development at top-performing nuclear plants in the United States. The examination of leadership development as actually practiced in the nuclear energy industry led to the development of a best practice model. The nuclear industry is self-regulated through the Institute for Nuclear Power Operations (INPO). INPO has been evaluating nuclear plants over the past 15 years. Recently, it has identified supervisor performance as a key factor in poor plant performance. INPO created a model for leadership development called Growing Industry Leaders. The nuclear industry has identified its aging workforce and subsequent loss of leadership as an emerging issue facing it in the next five to ten years. This initiative was aimed at both the supervisor shortfalls identified through plant evaluations and the state of the workforce within the nuclear industry. This research evaluated the elements of this model and compared them to a model of best practice. This research answered the following questions: What elements of leadership development should be included in leadership development programs? What would a model of best practice in leadership development look like? Data was collected from nine out of 103 top-performing plants. Development activities were categorized by a seven-member panel of experts. These categories were then validated using three rounds of a Delphi process to reach consensus. This became the basis for the best practice model for leadership development.
APA, Harvard, Vancouver, ISO, and other styles
44

Viana, Luiz Fernando Gonçalves. "Proposal for a model of charging of the bulk water in state of Ceará: a review of current model." Universidade Federal do Ceará, 2011. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=6593.

Full text
Abstract:
The aim of this study was to propose a new model of charging for water use which considers granted volumes, withdrawn volumes, and the discharge of domestic sewage. For this purpose, the main charging models applied in federal river basins and implemented by the National Water Agency (Agência Nacional de Águas, ANA) were analyzed: the Paraíba do Sul river; the Piracicaba, Capivari and Jundiaí rivers; and the São Francisco river. After defining the model that best fits the reality of Ceará State, considering the simplicity and applicability of each model, the valuation of water as an economic good was performed in the Salgado river basin, in the Cariri region. The optimal prices for public water supply and domestic sewage discharge were calculated based on the general equilibrium economic theory known as second best. The results showed that the price elasticity of demand, for each purpose of use, is inelastic, reinforcing the results of other studies on water charging. The calculated optimal prices were R$ 0.0148/m³ for public water supply and R$ 0.1914/kg BOD for the discharge of domestic sewage.
APA, Harvard, Vancouver, ISO, and other styles
45

Burdon, Wendy. "Models for compliance in the financial service industry : theory versus practice : is a best practice model feasible in an environment of regulatory flux?" Thesis, Northumbria University, 2016. http://nrl.northumbria.ac.uk/30225/.

Full text
Abstract:
The overall purpose of this thesis is to examine the models for effective compliance, and those currently adopted in practice within the financial service sector. The need for financial service organisations to maintain a robust compliance function has developed due to ever-increasing regulatory demands following the most recent global financial crisis, alongside concerns over compliance culture within financial service organisations. An overarching research question is why the compliance function is often viewed as business-inhibiting in practice. This research engaged with practitioners with experience of working in financial service organisations and regulatory bodies. Repertory grid interviews (a technique stemming from Personal Construct Theory) explored practitioners’ personal worldviews of what comprises effective compliance via consideration of experiences ranging from ‘worst’ to ‘aspirational’ compliance. When comparing worst and aspirational compliance experiences, practitioners do not align their perceptions of the benefits and costs of compliance in a linear fashion, which challenges the traditional models presented in the academic literature. Barriers to regulatory compliance were highlighted when exploring personal constructs, with recurring themes of culture (management buy-in) and judgement (spirit, as opposed to letter, of the law). Compliance officers are highly aware of the importance of relationships with the regulator, and remain proactive in prioritising workload around the regulatory approach. An alternative model for compliance is presented in the form of the ‘Compliance Trust’. The model results in a compliance community which would operate independently from the financial service firms that it serves, and differs from traditional commercial consultancy or outsourcing in its emphasis on societal contribution and integrity rather than economic motivations. The compliance trust would benefit organisations via rotation of experience and knowledge sharing. This research provokes reflection on current practice in comparison to existing academic theories, and seeks to identify whether alternative models are viable for the future of compliance approaches within financial service practice.
APA, Harvard, Vancouver, ISO, and other styles
46

Tsinias, Vasileios. "A hybrid approach to tyre modelling based on modal testing and non-linear tyre-wheel motion." Thesis, Loughborough University, 2014. https://dspace.lboro.ac.uk/2134/17852.

Full text
Abstract:
The current state-of-the-art tyre models tend to be demanding both in terms of parameterisation, typically requiring extensive and expensive testing, and in terms of computational power. Consequently, an alternative parameterisation approach, which also allows for the separation of model fidelity from computational demand, is essential. Based on the above, a tyre model is introduced in this work. Tyre motion is separated into two components, the first being the non-linear global motion of the tyre as a rigid body and the second being the linear local deformation of each node. The resulting system of differential equations of motion consists of a reduced number of equations, depending on the number of rigid and elastic modes considered rather than the number of degrees of freedom. These equations are populated by the eigenvectors and eigenvalues of the elastic tyre modes, the eigenvectors corresponding to the rigid tyre modes, and the inertia properties of the tyre. The contact sub-model consists of bristles attached to each belt node. Shear forces generated in the contact area are calculated by a distributed LuGre friction model, while vertical tread dynamics are obtained from the vertical motion of the contact nodes and the corresponding bristle stiffness and damping characteristics. To populate the abovementioned system of differential equations, the modal properties of the rigid and elastic belt modes are required. In the context of the present work, rigid belt modes are calculated analytically, while in-plane and out-of-plane elastic belt modes are identified experimentally by performing modal testing on the physical tyre. To this end, the eigenvalue of any particular mode is obtained by fitting a rational fraction polynomial expression to frequency response data surrounding that mode. The eigenvector calculation requires a different approach, as modes located in the vicinity of the examined mode typically have an effect on the apparent residue. Consequently, an alternative method has been developed which takes the out-of-band modes into account, leading to identified residues that represent only the modes of interest. The validation of the proposed modelling approach is performed by comparing simulation results to experimental data and trends found in the literature. In terms of vertical stiffness, correlation with experimental data is achieved for a limited vertical load range, due to the nature of the identified modal properties. Moreover, the tyre model response to transient lateral slip is investigated for a range of longitudinal speeds and vertical loads, and the resulting relaxation length trends are compared with the relevant literature.
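The reduced equations of motion described here are, in essence, decoupled modal equations driven by projected contact forces; the Python sketch below integrates a handful of such modal coordinates under a nodal excitation, with made-up frequencies, damping ratios and mode shapes standing in for the experimentally identified tyre modes.

import numpy as np
from scipy.integrate import solve_ivp

# Assumed identified modal data: natural frequencies (Hz), damping ratios and mode shapes.
freqs_hz = np.array([80.0, 120.0, 180.0])
zeta = np.array([0.03, 0.04, 0.05])
omega = 2.0 * np.pi * freqs_hz
n_nodes = 50
phi = np.random.default_rng(1).normal(size=(n_nodes, 3))   # mass-normalised mode shapes (placeholder)

def nodal_force(t):
    f = np.zeros(n_nodes)
    f[0] = 100.0 * np.sin(2.0 * np.pi * 10.0 * t)           # assumed contact-patch excitation
    return f

def rhs(t, y):
    q, qdot = y[:3], y[3:]
    qddot = phi.T @ nodal_force(t) - 2.0 * zeta * omega * qdot - omega**2 * q
    return np.concatenate([qdot, qddot])

sol = solve_ivp(rhs, (0.0, 0.5), np.zeros(6), max_step=1e-4)
deformation = phi @ sol.y[:3]        # local deformation recovered at every belt node over time
print(deformation.shape)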
APA, Harvard, Vancouver, ISO, and other styles
47

Brügger, Robert. "Die phänologische Entwicklung von Buche und Fichte : Beobachtung, Variabilität, Darstellung und deren Nachvollzug in einem Modell /." Bern : Verl. des Geographischen Institutes der Universität Bern, 1998. http://www.ub.unibe.ch/content/bibliotheken_sammlungen/sondersammlungen/dissen_bestellformular/index_ger.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Webb, Evan. "Towards a General Logic Model for Recreational Youth Development Programs." Thesis, Université d'Ottawa / University of Ottawa, 2017. http://hdl.handle.net/10393/36872.

Full text
Abstract:
Recreational organizations that deliver activities to youth within their communities can provide an effective setting for positive youth development (PYD) endeavours because they are fun, engaging, and environments where skill-building is inherent. However, not all recreational organizations offering PYD-aimed programs are successful, and many programs are cancelled after a short time. A framework or guide for (1) promoting PYD through community recreation and (2) evaluating and identifying PYD outcomes does not yet exist. This research seeks to develop a model to inform recreational program design and bring about positive developmental outcomes in youth participants, using empirical data collected from three successful organizations. Both one-on-one interviews and a focus group with youth participants and adult staff were utilized, following a qualitative multiple case study approach. The data collected concerned the positive developmental outcomes experienced by youth participants in the organizations and the mechanisms used to realize these outcomes. The key themes, derived through inductive and deductive analyses, are presented as a five-step logic model. These themes help identify the intended results of programs along with the resources and processes needed to achieve these results, thus making this study’s findings easy to integrate into recreational programming. The model’s process factors included a series of inputs (i.e., contextual factors and external assets) and activities (i.e., direct and indirect strategies). Findings identified as intended PYD outcomes included outputs (i.e., objective measurable indicators), short-term outcomes (i.e., life skills), and long-term impacts (i.e., the four Cs, including life skill transfer and contribution). This study elaborates on concepts identified in previous research that are conducive to PYD while bringing them together into a framework for designing recreational programs with the goal of promoting positive developmental outcomes in youth. However, further testing through quantitative, longitudinal, and intervention research may be needed.
APA, Harvard, Vancouver, ISO, and other styles
49

Vermelho, Alexandre Filipe Correia Cajana. "Calculating best estimates in a GLM framework. Frequency/severity models vs total loss models." Master's thesis, Instituto Superior de Economia e Gestão, 2014. http://hdl.handle.net/10400.5/7040.

Full text
Abstract:
Master's in Actuarial Science
When using generalized linear models to predict future claim payments, should actuaries use separate frequency/severity models or a single loss cost model? This is the question this paper addresses, covering some theoretical background, testing both alternatives on real data from the Industrial Multiple Risks (IMR) sub-branch, and analysing the results. Data was provided by 7 companies operating in Portugal in the years 2010 and 2011, which together hold a 70% share of the Portuguese IMR market, and was collected by the Associação Portuguesa de Seguradores (APS).
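The comparison being tested can be sketched with statsmodels as below: a Poisson frequency GLM with a log-exposure offset combined with a Gamma severity GLM, versus a single Tweedie loss-cost GLM. The toy portfolio, rating factors and distribution choices are textbook assumptions, not the paper's fitted specification.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Toy portfolio standing in for the Industrial Multiple Risks data (assumed structure).
rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "sum_insured": rng.lognormal(13.0, 1.0, n),
    "sprinklers": rng.integers(0, 2, n),
    "exposure": rng.uniform(0.5, 1.0, n),
})
df["claim_count"] = rng.poisson(0.05 * df.exposure * np.exp(0.3 * df.sprinklers))
df["claim_amount"] = np.where(df.claim_count > 0, rng.gamma(2.0, 5000.0, n) * df.claim_count, 0.0)

# (1) Frequency/severity: Poisson counts with a log-exposure offset, Gamma severities on claims only.
freq = smf.glm("claim_count ~ sprinklers + np.log(sum_insured)", data=df,
               family=sm.families.Poisson(), offset=np.log(df.exposure)).fit()
claims = df[df.claim_count > 0].copy()
claims["severity"] = claims.claim_amount / claims.claim_count
sev = smf.glm("severity ~ sprinklers + np.log(sum_insured)", data=claims,
              family=sm.families.Gamma(sm.families.links.Log())).fit()

# (2) Single loss-cost model: Tweedie GLM on the total claim amount per policy.
total = smf.glm("claim_amount ~ sprinklers + np.log(sum_insured)", data=df,
                family=sm.families.Tweedie(var_power=1.5, link=sm.families.links.Log())).fit()

pure_premium_fs = freq.predict(df, offset=np.log(df.exposure)) * sev.predict(df)
pure_premium_tw = total.predict(df)
print(round(pure_premium_fs.mean(), 2), round(pure_premium_tw.mean(), 2))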
APA, Harvard, Vancouver, ISO, and other styles
50

Tai, Yeung Kam Lan (Daisy). "Statistical model development to identify the best data pooling for early stage construction price forecasts." Thesis, Queensland University of Technology, 2009. https://eprints.qut.edu.au/20300/1/Kam_Lan_Tai_Yeung_Thesis.pdf.

Full text
Abstract:
In the early feasibility study stage, the information concerning the target project is very limited. It is very common in practice for a Quantity Surveyor (Q.S.) to use the mean value of historical building price data (with similar characteristics to the target project) to forecast the early construction cost for a target project. Most clients rely heavily on this early cost forecast, provided by the Q.S., and use it to make their investment decisions and advance financial arrangements. The primary aim of this research is to develop a statistical model and demonstrate through this model how to measure the accuracy of the mean value forecast. A secondary aim is to review the homogeneity of construction project costs. The third aim is to identify the best data pooling for mean value cost forecasting in the early construction stages by making the best use of the data available. Three types of mean value forecasts are considered: (1) the use of the target base group (relating to a source with similar characteristics to the target project), (2) the use of a non-target base group (relating to sources with less similar or dissimilar characteristics to the target project) and (3) the use of a combined target and non-target base group. A formulation of mean square error is derived for each to measure the forecasting accuracy. To accomplish the above research aims, this research uses cost data from 450 completed Hong Kong projects. The collected data is clustered into two levels: (1) Level one - by project nature (i.e. Residential, Commercial centre, Car parking, Social community centre, School, Office, Hotel, Industrial, University and Hospital), and (2) Level two - by project specification and construction floor area. In this research, the accuracy of the mean value forecast (i.e. mean square error) is measured for a total of 10,539 combined data groups. From their performance, it may reasonably be concluded that (1) the use of a non-target base group (relating to sources with less similar or dissimilar characteristics to the target project) never improves the forecasting performance, (2) the use of a target base group (relating to a source with similar characteristics to the target project) cannot always provide the best forecasting performance, (3) the use of a combined target and non-target base group can in some cases furnish a better forecasting performance, and (4) clustering the cost data groups at a more detailed level can improve the forecasting performance.
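The central calculation, the mean square error of a mean-value forecast under different poolings of historical cost data, can be illustrated with the short Python sketch below; the cost figures are invented, and only the comparison of target-only, non-target-only and combined pooling is of interest.

import numpy as np

def mse_of_mean_forecast(history, actuals):
    """Forecast every target project with the mean of the pooled history and report the MSE."""
    forecast = history.mean()
    return float(np.mean((actuals - forecast) ** 2))

rng = np.random.default_rng(0)
target_history = rng.normal(2000.0, 150.0, 30)        # cost/m2 of similar past projects (assumed)
non_target_history = rng.normal(2600.0, 300.0, 120)   # dissimilar past projects (assumed)
new_projects = rng.normal(2000.0, 150.0, 10)          # the target projects being forecast

for label, pool in [
    ("target only", target_history),
    ("non-target only", non_target_history),
    ("combined", np.concatenate([target_history, non_target_history])),
]:
    print(f"{label:16s} MSE = {mse_of_mean_forecast(pool, new_projects):,.0f}")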
APA, Harvard, Vancouver, ISO, and other styles