Dissertations / Theses on the topic 'DeepL Translator'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 39 dissertations / theses for your research on the topic 'DeepL Translator.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Marcassoli, Giulia. "Gli output dei sistemi di traduzione automatica neurale: valutazione della qualità di Google Translate e DeepL Translator nella combinazione tedesco-italiano." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/19536/.
Luccioli, Alessandra. "Stereotipi di genere e traduzione automatica dall'inglese all’italiano: uno studio di caso sul femminile nelle professioni." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20408/.
Cozza, Antonella. "Google Translate e DeepL: la traduzione automatica in ambito turistico." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2019.
Di Gangi, Mattia Antonino. "Neural Speech Translation: From Neural Machine Translation to Direct Speech Translation." Doctoral thesis, Università degli studi di Trento, 2020. http://hdl.handle.net/11572/259137.
Olmucci Poddubnyy, Oleksandr. "Investigating Single Translation Function CycleGANs." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/16126/.
Chatterjee, Rajen. "Automatic Post-Editing for Machine Translation." Doctoral thesis, Università degli studi di Trento, 2019. http://hdl.handle.net/11572/242495.
Caglayan, Ozan. "Multimodal Machine Translation." Thesis, Le Mans, 2019. http://www.theses.fr/2019LEMA1016/document.
Machine translation aims at automatically translating documents from one language to another without human intervention. With the advent of deep neural networks (DNN), neural approaches to machine translation started to dominate the field, reaching state-of-the-art performance in many languages. Neural machine translation (NMT) also revived the interest in interlingual machine translation due to how naturally it fits the task into an encoder-decoder framework which produces a translation by decoding a latent source representation. Combined with the architectural flexibility of DNNs, this framework paved the way for further research in multimodality with the objective of augmenting the latent representations with other modalities such as vision or speech. This thesis focuses on a multimodal machine translation (MMT) framework that integrates a secondary visual modality to achieve better and visually grounded language understanding. I specifically worked with a dataset containing images and their translated descriptions, where visual context can be useful for word sense disambiguation, missing word imputation, or gender marking when translating from a language with gender-neutral nouns to one with a grammatical gender system, as is the case with English to French. I propose two main approaches to integrate the visual modality: (i) a multimodal attention mechanism that learns to take into account both sentence and convolutional visual representations, and (ii) a method that uses global visual feature vectors to prime the sentence encoders and the decoders. Through automatic and human evaluation conducted on multiple language pairs, the proposed approaches were demonstrated to be beneficial.
Finally, I further show that by systematically removing certain linguistic information from the input sentences, the true strength of both methods emerges: they successfully impute missing nouns and colors, and can even translate when parts of the source sentences are completely removed.
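The multimodal attention mechanism described in the abstract above (approach (i)) can be sketched in a few lines of numpy. This is a toy illustration, not code from the thesis: the dot-product scoring and the additive fusion of the two contexts are simplifying assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def multimodal_attention(dec_state, txt_feats, img_feats):
    """Attend over textual encoder states and convolutional image
    regions with the same decoder query, then fuse both contexts.

    dec_state: (d,) decoder state; txt_feats: (T, d); img_feats: (R, d)
    """
    a_txt = softmax(txt_feats @ dec_state)  # attention over words
    a_img = softmax(img_feats @ dec_state)  # attention over regions
    ctx_txt = a_txt @ txt_feats             # textual context (d,)
    ctx_img = a_img @ img_feats             # visual context (d,)
    return ctx_txt + ctx_img                # fused multimodal context
```

With a zero query both attention distributions are uniform, so the fused context is simply the mean of the text rows plus the mean of the image rows.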
Sandström, Emil. "Molecular Optimization Using Graph-to-Graph Translation." Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-172584.
García Martínez, Mercedes. "Factored neural machine translation." Thesis, Le Mans, 2018. http://www.theses.fr/2018LEMA1002/document.
Communication between humans across the lands is difficult due to the diversity of languages. Machine translation is a quick and cheap way to make translation accessible to everyone. Recently, Neural Machine Translation (NMT) has achieved impressive results. This thesis focuses on the Factored Neural Machine Translation (FNMT) approach, which is founded on the idea of using the morphological and grammatical decomposition of the words (lemmas and linguistic factors) in the target language. This architecture addresses two well-known challenges occurring in NMT. Firstly, the limitation on the target vocabulary size, which is a consequence of the computationally expensive softmax function at the output layer of the network, leading to a high rate of unknown words. Secondly, data sparsity, which arises when we face a specific domain or a morphologically rich language. With FNMT, all the inflections of the words are supported and a larger vocabulary is modelled with similar computational cost. Moreover, new words not included in the training dataset can be generated. In this work, I developed different FNMT architectures using various dependencies between lemmas and factors. In addition, I enhanced the source language side with factors as well. The FNMT model is evaluated on various languages, including morphologically rich ones. State-of-the-art models, some using Byte Pair Encoding (BPE), are compared to the FNMT model using small and big training datasets. We found out that factored models are more robust in low-resource conditions. FNMT has been combined with BPE units, performing better than the pure FNMT model when trained with big data. We experimented with different domains, obtaining improvements with the FNMT models. Furthermore, the morphology of the translations is measured using a special test suite, showing the importance of explicitly modeling the target morphology. Our work shows the benefits of applying linguistic factors in NMT.
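The core idea of the factored output layer can be illustrated with a toy numpy sketch (not from the thesis; the two-word inventories, the weight matrices and the surface-form lookup are invented for illustration): one decoder state feeds two softmax layers, one over lemmas and one over factors, and the surface word is recovered by composing the two predictions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical toy inventories; a real system derives these from data.
LEMMAS = ["aller", "maison"]
FACTORS = ["sg", "pl"]
SURFACE = {("aller", "sg"): "va", ("aller", "pl"): "vont",
           ("maison", "sg"): "maison", ("maison", "pl"): "maisons"}

def factored_output(state, W_lemma, W_factor):
    """Two output layers share one decoder state; composing the lemma
    and factor predictions yields the inflected surface word, so the
    effective vocabulary grows without enlarging either softmax."""
    lemma = LEMMAS[int(np.argmax(softmax(W_lemma @ state)))]
    factor = FACTORS[int(np.argmax(softmax(W_factor @ state)))]
    return SURFACE[(lemma, factor)]
```

This is why FNMT can cover all inflections with a similar computational cost: the output cost scales with |lemmas| + |factors| rather than with the full inflected vocabulary.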
Bujwid, Sebastian. "GANtruth – a regularization method for unsupervised image-to-image translation." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-233849.
In this work we propose a new and effective method for constraining the output space of the ill-posed problem of unsupervised image-to-image translation. We assume that the semantics of the source domain are known, and we propose to explicitly enforce preservation of the ground-truth labels on images translated from the source to the target domain. We perform empirical experiments in which information such as semantic segmentation and disparity is preserved, and show evidence that our method achieves improved performance over the baseline method UNIT on translating images from SYNTHIA to Cityscapes. The generated images are perceived as more realistic in human surveys and yield reduced error when used as adapted images in a domain adaptation scenario. Moreover, the underlying ground-truth-preservation assumption is complementary to alternative approaches, and combining it with the UNIT framework improves the results further.
Belinkov, Yonatan. "On internal language representations in deep learning : an analysis of machine translation and speech recognition." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/118079.
Language technology has become pervasive in everyday life. Neural networks are a key component in this technology thanks to their ability to model large amounts of data. Contrary to traditional systems, models based on deep neural networks (a.k.a. deep learning) can be trained in an end-to-end fashion on input-output pairs, such as a sentence in one language and its translation in another language, or a speech utterance and its transcription. The end-to-end training paradigm simplifies the engineering process while giving the model flexibility to optimize for the desired task. This, however, often comes at the expense of model interpretability: understanding the role of different parts of the deep neural network is difficult, and such models are sometimes perceived as "black-box", hindering research efforts and limiting their utility to society. This thesis investigates what kind of linguistic information is represented in deep learning models for written and spoken language. In order to study this question, I develop a unified methodology for evaluating internal representations in neural networks, consisting of three steps: training a model on a complex end-to-end task; generating feature representations from different parts of the trained model; and training classifiers on simple supervised learning tasks using the representations. I demonstrate the approach on two core tasks in human language technology: machine translation and speech recognition. I perform a battery of experiments comparing different layers, modules, and architectures in end-to-end models that are trained on these tasks, and evaluate their quality at different linguistic levels. First, I study how neural machine translation models learn morphological information. Second, I compare lexical semantic and part-of-speech information in neural machine translation. Third, I investigate where syntactic and semantic structures are captured in these models. 
Finally, I explore how end-to-end automatic speech recognition models encode phonetic information. The analyses illuminate the inner workings of end-to-end machine translation and speech recognition systems, explain how they capture different language properties, and suggest potential directions for improving them. I also point to open questions concerning the representation of other linguistic properties, the investigation of different models, and the use of other analysis methods. Taken together, this thesis provides a comprehensive analysis of internal language representations in deep learning models.
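The third step of the methodology above, training simple classifiers on frozen representations, is often called probing. A minimal numpy sketch of such a probe (a linear softmax classifier fit by gradient descent; the hyperparameters are arbitrary, and the feature and label arrays stand in for real model activations):

```python
import numpy as np

def train_probe(feats, labels, n_classes, lr=0.5, steps=300):
    """Fit a linear softmax classifier on frozen feature vectors; its
    accuracy indicates how much of a property (e.g. part of speech)
    the layer that produced the features encodes."""
    W = np.zeros((feats.shape[1], n_classes))
    onehot = np.eye(n_classes)[labels]
    for _ in range(steps):
        logits = feats @ W
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        W -= lr * feats.T @ (p - onehot) / len(feats)  # cross-entropy gradient
    return W

def probe_accuracy(W, feats, labels):
    return float((np.argmax(feats @ W, axis=1) == labels).mean())
```

Comparing `probe_accuracy` across layers of the end-to-end model is what lets the analysis say where, e.g., morphological or syntactic information is captured.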
Braghittoni, Laura. "La localizzazione software: proposta di traduzione della documentazione di memoQ." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20421/.
Sepasdar, Reza. "A Deep Learning Approach to Predict Full-Field Stress Distribution in Composite Materials." Thesis, Virginia Tech, 2021. http://hdl.handle.net/10919/103427.
Fiber-reinforced composites are material types with excellent mechanical performance. They form the major material in the construction of space shuttles, aircraft, fancy cars, etc., structures that are designed to be lightweight and at the same time extremely stiff and strong. Due to their broad application, especially in sensitive industries, fiber-reinforced composites have always been a subject of meticulous research studies. Research studies to better understand the mechanical behavior of these composites have to be conducted on the micro-scale. Since experimental studies on the micro-scale are expensive and extremely limited, numerical simulations are normally adopted. Numerical simulations, however, are complex, time-consuming, and highly computationally expensive even when run on powerful supercomputers. Hence, this research aims to leverage artificial intelligence to reduce the complexity and computational cost associated with the existing high-fidelity simulation techniques. We propose a robust deep learning framework that can be used as a replacement for the conventional numerical simulations to predict important mechanical attributes of fiber-reinforced composite materials on the micro-scale. The proposed framework is shown to have high accuracy in predicting complex phenomena including stress distributions at various stages of mechanical loading.
Kalchbrenner, Nal. "Encoder-decoder neural networks." Thesis, University of Oxford, 2017. http://ora.ox.ac.uk/objects/uuid:d56e48db-008b-4814-bd82-a5d612000de9.
Ackerman, Wesley. "Semantic-Driven Unsupervised Image-to-Image Translation for Distinct Image Domains." BYU ScholarsArchive, 2020. https://scholarsarchive.byu.edu/etd/8684.
Wang, Xun. "Entity-Centric Discourse Analysis and Its Applications." Kyoto University, 2017. http://hdl.handle.net/2433/228251.
Hamrell, Hanna. "Image-to-Image Translation for Improvement of Synthetic Thermal Infrared Training Data Using Generative Adversarial Networks." Thesis, Linköpings universitet, Datorseende, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-174928.
Karlsson, Simon, and Per Welander. "Generative Adversarial Networks for Image-to-Image Translation on Street View and MR Images." Thesis, Linköpings universitet, Institutionen för medicinsk teknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-148475.
Senko, Jozef. "Hluboký syntaxí řízený překlad." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2015. http://www.nusl.cz/ntk/nusl-234933.
Peris Abril, Álvaro. "Interactivity, Adaptation and Multimodality in Neural Sequence-to-sequence Learning." Doctoral thesis, Universitat Politècnica de València, 2020. http://hdl.handle.net/10251/134058.
The sequence-to-sequence problem consists in transforming an input sequence into an output sequence. A variety of problems can be posed in these terms, including machine translation, speech recognition or multimedia captioning. In the last years, the application of deep neural networks has revolutionized these fields, achieving impressive advances. However, and despite the improvements, the output of the automatic systems is still far from perfect. For achieving high-quality predictions, fully-automatic systems need to be supervised by a human agent, who corrects the errors. This is a common procedure in the translation industry. This thesis is mainly framed within the machine translation problem, tackled using fully neural systems. Our main objective is to develop more efficient neural machine translation systems, that allow for a more productive usage and deployment of the technology. To this end, we base our contributions on two main cornerstones: how to make better use of the system and how to better leverage the data generated along its usage. First, we apply the so-called interactive-predictive framework to neural machine translation. This embeds the human agent and the system into a cooperative correction process that seeks to reduce the human effort spent for obtaining high-quality translations. We develop different interactive protocols for the neural machine translation technology, namely a prefix-based and a segment-based protocol. They are implemented by modifying the search space of the model. Moreover, we introduce mechanisms for achieving a fine-grained interaction while maintaining the decoding speed of the system. We carried out a wide experimentation that shows the potential of our contributions. The previous state of the art is overcome by a large margin and the current systems are able to react better to the human interactions. Next, we study how to improve a neural system using the data generated as a byproduct of this correction process.
To this end, we rely on two main learning paradigms: online and active learning. Under the first one, the system is updated on the fly, as soon as a sentence is corrected. Hence, the system continuously learns from the corrections, avoiding previous errors and specializing towards a given user or domain. Extensive experimentation stressed the adaptive systems under different conditions and domains, demonstrating their capabilities. Moreover, we also carried out a human evaluation of the system, involving professional users. They were very pleased with the adaptive system, and worked more efficiently using it. The second paradigm, active learning, is devised for the translation of huge amounts of data, which are infeasible to supervise completely. In this scenario, the system selects the samples that are worth supervising and leaves the rest automatically translated. Applying this framework, we obtained reductions of approximately a quarter of the effort required for reaching a desired translation quality. The neural approach also obtained large improvements compared with previous translation technologies. Finally, we address another challenging problem: visual captioning. It consists in generating a description in natural language from a visual object, namely an image or a video. We follow the sequence-to-sequence framework under a multimodal perspective. We start by tackling the task of generating captions of videos from a general domain. Next, we move on to a more specific case: describing events from egocentric images, acquired along the day. Since these events are consecutive, we aim to extract inter-event relationships for generating more informed captions. The context-aware model improved the generation quality with respect to a regular one. As a final point, we apply the interactive-predictive protocol to these multimodal captioning systems, reducing the effort required for correcting the outputs.
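The prefix-based interactive protocol described in this abstract can be sketched generically: tokens the user has validated are forced into the output, and the model only proposes the continuation. A minimal sketch (not the thesis's implementation; `step_fn` is a stand-in for the real NMT decoder step):

```python
def prefix_constrained_decode(step_fn, prefix, max_len, eos="</s>"):
    """Interactive-predictive search: the user-validated prefix is
    forced into the output, then the model decodes the suffix freely.

    step_fn(tokens) -> next token proposed by the model.
    """
    out = list(prefix)           # forced decoding over the prefix
    while len(out) < max_len:    # free decoding of the suffix
        tok = step_fn(out)
        out.append(tok)
        if tok == eos:
            break
    return out
```

With a toy `step_fn` that always proposes the next token of a fixed translation, a validated prefix such as `["hello"]` is kept verbatim and only the remainder is generated, which is exactly the cooperative correction loop the protocol aims at.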
Section 5.4 describes a user evaluation of an adaptive translation system. This was done in collaboration with Miguel Domingo and the company Pangeanic, with funding from the Spanish Center for Technological and Industrial Development (Centro para el Desarrollo Tecnológico Industrial). [...] Most of Chapter 6 is the result of a collaboration with Marc Bolaños, supervised by Prof. Petia Radeva, from Universitat de Barcelona/CVC. This collaboration was supported by the R-MIPRCV network, under grant TIN2014-54728-REDC.
Peris Abril, Á. (2019). Interactivity, Adaptation and Multimodality in Neural Sequence-to-sequence Learning [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/134058
Solár, Peter. "Syntaxí řízený překlad založený na hlubokých zásobníkových automatech." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2009. http://www.nusl.cz/ntk/nusl-236779.
Full textFrini, Marouane. "Diagnostic des engrenages à base des indicateurs géométriques des signaux électriques triphasés." Thesis, Lyon, 2018. http://www.theses.fr/2018LYSES052.
Although they are widely used, classical vibration measurements have several limitations. Vibration analysis can only identify about 60% of the defects that may occur in mechanical systems. The main drawbacks of vibration measurements are the difficult access to the transmission system in order to place the sensor, as well as the consequent cost of implementation. This results in sensitivity problems relative to the position of the installation and in difficulty distinguishing the source of vibration because of the diversity of mechanical excitations that exist in the industrial environment. Hence, Motor Current Signature Analysis (M.C.S.A.) represents a promising alternative to vibration analysis and has therefore been the subject of increasing attention in recent years. Indeed, the analysis of electrical signatures has the advantage of being a technically accessible method, as well as inexpensive and non-intrusive to the system. Techniques based on currents and voltages only require the motor's electrical measurements, which are often already supervised for the purposes of the control and protection of electrical machines. This process was mainly used for the detection of motor faults such as rotor bar breakage and eccentricity faults, as well as bearing defects. On the other hand, very little research has focused on gear fault detection using current analysis. In addition, three-phase electrical signals are characterized by specific geometric representations related to their waveforms, and they can serve as different indicators providing additional information. Among these geometric indicators, the Park and Concordia transforms model the electrical components in a two-dimensional coordinate system, and any deviation from the original representation indicates the apparition of a malfunction.
Moreover, the differential equations of Frenet-Serret represent the trajectory of the signal in a three-dimensional Euclidean space and thus indicate any changes in the state of the system. Although they have previously been used for bearing defects, these indicators have not been applied to the detection of gear defects using the analysis of electrical current signatures. Hence, the innovative idea of combining these indicators with signal processing techniques, as well as classification techniques, for gear diagnosis using the three-phase motor's electrical current signatures is established. In this work, a new approach is proposed for gear fault diagnosis using motor current analysis, based on a set of geometric indicators (the Park and Concordia transforms as well as the properties of the Frenet-Serret frame). These indicators are part of a specifically built fault signature library, which also includes the classical indicators used for a wide range of faults. A proposed estimation algorithm combines experimental measurements of electrical signals with advanced signal processing methods (Empirical Mode Decomposition, ...). Next, it selects the most relevant indicators within the library based on feature selection algorithms (Sequential Backward Selection and Principal Component Analysis). Finally, this selection is combined with unsupervised classification (K-means) to distinguish between the healthy state and faulty states. The approach was finally validated with an additional experimental configuration in different cases with gear faults, bearing faults and combined faults at various load levels.
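Of the geometric indicators mentioned above, the Concordia (Clarke) transform has a simple closed form: it projects the three phase currents onto a two-axis frame, where a healthy balanced machine traces a circle. A numpy sketch of the standard power-invariant form (the fault-detection logic the thesis builds on top of it is not reproduced):

```python
import numpy as np

def concordia(ia, ib, ic):
    """Project three-phase currents onto the (alpha, beta) plane.
    For balanced sinusoidal currents the trajectory is a circle;
    deviations from that shape serve as a fault indicator."""
    alpha = np.sqrt(2.0 / 3.0) * (ia - 0.5 * ib - 0.5 * ic)
    beta = np.sqrt(2.0 / 3.0) * (np.sqrt(3.0) / 2.0) * (ib - ic)
    return alpha, beta
```

For a balanced unit-amplitude three-phase set, the (alpha, beta) trajectory has constant radius sqrt(3/2); distortions of that circle are what the geometric indicators pick up.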
Limoncelli, Kelly A. "Identification of Factors Involved in 18S Nonfunctional Ribosomal RNA Decay and a Method for Detecting 8-oxoguanosine by RNA-Seq." eScholarship@UMMS, 2017. https://escholarship.umassmed.edu/gsbs_diss/945.
Full textElbayad, Maha. "Une alternative aux modèles neuronaux séquence-à-séquence pour la traduction automatique." Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALM012.
In recent years, deep learning has enabled impressive achievements in Machine Translation. Neural Machine Translation (NMT) relies on training deep neural networks with a large number of parameters on vast amounts of parallel data to learn how to translate from one language to another. One crucial factor in the success of NMT is the design of new powerful and efficient architectures. State-of-the-art systems are encoder-decoder models that first encode a source sequence into a set of feature vectors and then decode the target sequence conditioning on the source features. In this thesis we question the encoder-decoder paradigm and advocate for an intertwined encoding of the source and target so that the two sequences interact at increasing levels of abstraction. For this purpose, we introduce Pervasive Attention, a model based on two-dimensional convolutions that jointly encode the source and target sequences with interactions that are pervasive throughout the network. To improve the efficiency of NMT systems, we explore online machine translation, where the source is read incrementally and the decoder is fed partial contexts so that the model can alternate between reading and writing. We investigate deterministic agents that guide the read/write alternation through a rigid decoding path, and introduce new dynamic agents to estimate a decoding path for each sample. We also address the resource-efficiency of encoder-decoder models and posit that going deeper in a neural network is not required for all instances. We design depth-adaptive Transformer decoders that allow for anytime prediction and sample-adaptive halting mechanisms to favor low-cost predictions for low-complexity instances and save deeper predictions for complex scenarios.
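The deterministic read/write agents mentioned in this abstract are exemplified by the well-known wait-k policy: read k source tokens, then alternate one write per read. A small sketch of the schedule alone (decoding itself is omitted; the wait-k rule is a standard policy in simultaneous translation, not specific code from the thesis):

```python
def wait_k_schedule(k, src_len, tgt_len):
    """Deterministic wait-k agent: READ k source tokens first, then
    alternate one WRITE per READ until the source is exhausted,
    finishing with the remaining WRITEs."""
    actions, read, written = [], 0, 0
    while written < tgt_len:
        if read < min(k + written, src_len):
            actions.append("READ")
            read += 1
        else:
            actions.append("WRITE")
            written += 1
    return actions
```

The dynamic agents studied in the thesis replace this rigid schedule with a learned, per-sample decision of when to read and when to write.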
Otero-Garcia, Sílvio César [UNESP]. "Integrale, Longueur, Aire de Henri Lebesgue." Universidade Estadual Paulista (UNESP), 2015. http://hdl.handle.net/11449/133947.
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
This thesis deals with the translation and analysis of Henri Lebesgue's doctoral thesis, Intégrale, Longueur, Aire. Published in 1902, this thesis presents the Lebesgue measure theory and integration named after him. The translation, intended to be as faithful to the original as possible, is based on Vinay and Darbelnet's (1977) methodology. The analysis is developed from the theoretical framework of depth hermeneutics, more specifically Thompson (2011). Our intention is to make an original source available in Portuguese to help with Brazilian studies in the History of Mathematics and also to ease its use as an educational tool for math teaching.
FAPESP: 2010/18737-1
Šmihula, Michal. "Kulturně společenské centrum u brněnské přehrady - architektonická studie objektů pro kulturně společenské i sportovní akce." Master's thesis, Vysoké učení technické v Brně. Fakulta architektury, 2010. http://www.nusl.cz/ntk/nusl-215678.
Full textLiu, Chin-Heng, and 劉景恆. "A Study on Taiwanese Indigenous Languages Machine Translation by Deep Learning." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/gbft82.
Full text國立東華大學
資訊工程學系
106
Language is an indispensable medium for the development of world civilization, but nowadays many minority languages have gradually disappeared and become endangered. According to a survey published in 2016 by the Council of Indigenous Peoples of the Republic of China, the lower the average age of Taiwanese indigenous peoples, the lower the vitality of the indigenous languages, which points to a potential crisis of indigenous language loss. In this study, we choose the Taiwanese indigenous ethnic groups with larger populations, namely the Amis, the Atayal, and the Bunun, and build a "Taiwanese Indigenous Languages Translation System." Using deep learning technology, the system reads text in these indigenous languages and automatically converts it into Mandarin through machine translation. We hope to promote the learning of Taiwanese indigenous languages and enhance the inheritance of these endangered languages.
"Automatic Programming Code Explanation Generation with Structured Translation Models." Doctoral diss., 2020. http://hdl.handle.net/2286/R.I.56975.
Full text
Dissertation/Thesis
Doctoral Dissertation Engineering 2020
CHIEN, YU-CHUN, and 簡侑俊. "Deep license plate recognition in ill-conditioned environments with training data expansion by image translation." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/v3a35d.
Full text
Chung Hua University
Department of Computer Science and Information Engineering
106
Recently, deep learning has brought significant improvements to conventional vision-based surveillance technologies in terms of feature discrimination and recognition accuracy, e.g., vision-based license plate recognition (LPR). However, conventional LPR systems still face serious challenges in ill-conditioned outdoor environments. In this work, we used WebGL to augment the license plate training database required for all-weather adverse environments. In general, conventional LPR systems consist of the following modules: feature extraction, license plate locating, character segmentation, and character recognition. However, the performance of these modules is strongly correlated with low-level image features, e.g., edges, colors, and textures. These low-level features can be influenced significantly by variations in illumination and in the viewing angle of license plates, degrading recognition accuracy. This work therefore makes the following contributions. First, we apply WebGL to construct a training database for ill-conditioned outdoor environments. Second, we use the YOLOv2 DNN architecture to develop a deep license plate recognition system for ill-conditioned environments, achieving a recognition accuracy of 94%.
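The thesis renders its augmented plates with WebGL; as a language-agnostic sketch of the same idea, synthesizing ill-conditioned lighting from clean samples, here is a toy gamma-correction augmenter (the function name and thresholds are illustrative, not from the thesis):

```python
def adjust_illumination(pixels, gamma):
    """Gamma-correct 8-bit grayscale pixels to simulate exposure changes.

    gamma > 1 darkens the image (simulated under-exposure), gamma < 1
    brightens it (over-exposure); applying a range of gammas to each
    clean plate image expands the training set toward ill-conditioned
    lighting without new photography.
    """
    return [round(255 * (p / 255) ** gamma) for p in pixels]

# One clean row of plate pixels rendered under two lighting conditions.
row = [0, 64, 128, 255]
dark = adjust_illumination(row, 2.0)    # simulated under-exposure
bright = adjust_illumination(row, 0.5)  # simulated over-exposure
```

Each augmented copy keeps its original plate label, so the detector sees the same characters under many lighting conditions.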
Lavoie-Marchildon, Samuel. "Representation learning in unsupervised domain translation." Thesis, 2019. http://hdl.handle.net/1866/24324.
Full text
This thesis is concerned with the problem of unsupervised domain translation: the task of transferring data from a source domain to a target domain. We first study this problem using the formalism of optimal transport. Next, we study the problem of high-level semantic image-to-image translation using advances in representation learning and transfer learning. The first chapter is devoted to reviewing the background concepts used in this work. We first describe representation learning, including a description of neural networks and supervised and unsupervised representation learning. We then introduce generative models and optimal transport. We finish with the relevant notions of transfer learning that are used in chapter 3. The second chapter presents Neural Wasserstein Flow. In this work, we build on the theory of optimal transport and show that deep neural networks can be used to learn a Wasserstein barycenter of distributions. We further show how a neural network can amortize any barycenter, yielding a continuous interpolation. We also show how this idea can be used in the generative-model framework. Finally, we show results on shape interpolation and colour interpolation. In the third chapter, we tackle the task of high-level semantic image-to-image translation. We show that it can be achieved by simply learning a conditional GAN with the representation learned from a neural network, and that this process can be made unsupervised if the representation learning is a clustering. Finally, we show that our approach works on the task of MNIST to SVHN.
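The barycenter idea in the abstract's second chapter can be illustrated in one dimension, where the Wasserstein-2 barycenter of empirical distributions reduces to averaging sorted samples (quantiles); this minimal sketch is illustrative only and is not code from the thesis:

```python
def barycenter_1d(xs, ys, t):
    """Wasserstein-2 barycenter of two equal-size 1-D empirical samples.

    In 1-D the optimal transport map is monotone, so the barycenter with
    weights (1 - t, t) interpolates matching order statistics. Sweeping t
    from 0 to 1 yields the continuous interpolation that the thesis
    amortizes with a neural network.
    """
    assert len(xs) == len(ys)
    return [(1 - t) * x + t * y
            for x, y in zip(sorted(xs), sorted(ys))]
```

For example, the midpoint (t = 0.5) barycenter of samples concentrated at 0 and at 2 sits at 1, rather than mixing the two modes as a pointwise average of densities would.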
Jean, Sébastien. "From Word Embeddings to Large Vocabulary Neural Machine Translation." Thèse, 2015. http://hdl.handle.net/1866/13421.
Full text
In this thesis, we examine some properties of word embeddings and propose a technique to handle large vocabularies in neural machine translation. We first look at a well-known analogy task and examine the effect of position-dependent weights, the choice of combination function, and the impact of supervised learning. We then show that simple embeddings learnt with translational contexts can match or surpass the state of the art on the TOEFL synonym detection task and on the recently introduced SimLex-999 word similarity gold standard. Finally, motivated by impressive results obtained by small-vocabulary (30,000 words) neural machine translation embeddings on some word similarity tasks, we present a GPU-friendly approach to increase the vocabulary size by more than an order of magnitude. Despite originally being developed for obtaining the embeddings only, we show that this technique actually works quite well on actual translation tasks, especially for English to French (WMT'14).
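The large-vocabulary approach summarized above trains the softmax over a small per-batch subset of the output vocabulary instead of all words; a schematic sketch of that shortlist construction (names and the exact selection rule here are illustrative, not the thesis's implementation):

```python
def batch_shortlist(batch_targets, vocab_by_freq, top_k):
    """Build a restricted output vocabulary for one training batch.

    The softmax is computed only over the top_k most frequent words plus
    every target word actually occurring in the batch, so each correct
    word remains scorable while the normalization cost stays small even
    when the full vocabulary has hundreds of thousands of entries.
    """
    shortlist = set(vocab_by_freq[:top_k])   # global frequent words
    for sentence in batch_targets:
        shortlist.update(sentence)           # batch-specific words
    return shortlist

vocab = ["the", "cat", "sat", "mat", "aardvark"]  # sorted by frequency
batch = [["the", "aardvark", "sat"]]
subset = batch_shortlist(batch, vocab, top_k=2)
```

At test time a candidate list per source sentence plays the analogous role, keeping decoding tractable.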
Dwiastuti, Meisyarah. "Indonésko-anglický neuronový strojový překlad." Master's thesis, 2019. http://www.nusl.cz/ntk/nusl-405089.
Full text
Popel, Martin. "Strojový překlad s využitím syntaktické analýzy." Doctoral thesis, 2018. http://www.nusl.cz/ntk/nusl-391349.
Full text
Chung, Junyoung. "On Deep Multiscale Recurrent Neural Networks." Thèse, 2018. http://hdl.handle.net/1866/21588.
Full text
Bhardwaj, Shivendra. "Open source quality control tool for translation memory using artificial intelligence." Thesis, 2020. http://hdl.handle.net/1866/24307.
Full text
Translation Memory (TM) plays a decisive role during translation and is the go-to database for most language professionals. However, TMs are highly prone to noise, and there is no single authoritative source. There have been many significant efforts in cleaning TMs, especially for training better Machine Translation systems. In this thesis, we also clean the TM, but with the broader goal of maintaining its overall quality and making it robust for internal use in institutions. We propose a two-step process: first cleaning an almost-clean TM, i.e. noise removal, and then detecting texts translated by neural machine translation systems. For the noise-removal task, we propose an architecture involving five approaches based on heuristics, feature engineering, and deep learning, and evaluate this task by both manual annotation and Machine Translation (MT). We report a notable gain of +1.08 BLEU over a state-of-the-art, off-the-shelf TM cleaning system. We also propose a web-based tool, "OSTI: An Open-Source Translation-memory Instrument", that automatically annotates incorrect translations (including misaligned ones) so that institutions can maintain an error-free TM. Deep neural models have tremendously improved MT systems, and these systems translate an immense amount of text every day. The automatically translated text finds its way into TMs, and storing these translation units in a TM is not ideal. We propose a detection module under two settings: a monolingual task, in which the classifier only looks at the translation, and a bilingual task, in which the source text is also taken into consideration. Using deep-learning classifiers, we report a mean accuracy of around 85% in-domain and 75% out-of-domain for the bilingual task, and 81% in-domain and 63% out-of-domain for the monolingual task.
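As a flavour of the heuristic tier of such a TM-cleaning pipeline (the thesis combines heuristics with feature engineering and deep learning; this toy filter and its threshold are illustrative only, not the thesis's rules):

```python
def is_suspicious_pair(src, tgt, max_len_ratio=2.0):
    """Flag translation-memory segment pairs a heuristic filter would reject.

    Catches three classic noise types found in translation memories:
    empty sides, untranslated copies, and implausible length mismatches.
    """
    src, tgt = src.strip(), tgt.strip()
    if not src or not tgt:
        return True          # one side is empty
    if src.lower() == tgt.lower():
        return True          # target is an untranslated copy of the source
    ratio = max(len(src), len(tgt)) / min(len(src), len(tgt))
    return ratio > max_len_ratio   # extreme length mismatch
```

Pairs that survive such cheap filters would then move on to the learned (feature-based and neural) detectors.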
Libovický, Jindřich. "Multimodalita ve strojovém překladu." Doctoral thesis, 2019. http://www.nusl.cz/ntk/nusl-408143.
Full text
van Merriënboer, Bart. "Sequence-to-sequence learning for machine translation and automatic differentiation for machine learning software tools." Thèse, 2018. http://hdl.handle.net/1866/21743.
Full text
Grégoire, Francis. "Extraction de phrases parallèles à partir d’un corpus comparable avec des réseaux de neurones récurrents bidirectionnels." Thèse, 2017. http://hdl.handle.net/1866/20191.
Full text
Gulcehre, Caglar. "Learning and time : on using memory and curricula for language understanding." Thèse, 2018. http://hdl.handle.net/1866/21739.
Full text