Dissertations / Theses on the topic 'DeepL'

Consult the top 50 dissertations / theses for your research on the topic 'DeepL.'

1

Novazio, Giulio. "DEEPL und die Übersetzung von Kinderliteratur – eine Fallstudie." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/18908/.

Full text
Abstract:
Machine translation (MT) is an extremely exciting field currently undergoing rapid development. The technology has already proven highly effective for translating informative texts. The idea that it could be used to translate literary texts, however, meets with great scepticism. A successful literary translation requires, on the one hand, a thorough command of both source and target language, making it possible to recognise semantic nuances and to reproduce the rhythm and expressiveness of the original text; on the other hand, it demands deep knowledge of the particularities of both the source and the target culture. Can a machine translation program meet these high demands? In this study I first give a theoretical overview of the capabilities and development potential of machine translation. I then examine the performance of MT in translating a children's novel by means of a contrastive analysis of my own translation and the output proposed by DEEPL. From my analysis I was able to conclude that DEEPL can be a powerful aid for translators even in a literary context, in both qualitative and quantitative terms, provided that the translators themselves possess strong linguistic and cultural competence.
2

Cozza, Antonella. "Google Translate e DeepL: la traduzione automatica in ambito turistico." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2019.

Find full text
Abstract:
This thesis analyses the behaviour of the two most widely used machine translators at present, Google Translate and DeepL, on texts from the tourism domain, specifically reviews of hotel facilities. After a brief theoretical overview of machine translation and of the main features of the two MT systems mentioned above, a Spanish-to-Italian translation experiment involving them directly is carried out on three different types of review. In the second chapter, the translations proposed by Google Translate and DeepL are evaluated with respect to the linguistic, textual, extralinguistic, intentionality-related and pragmatic problems that emerged during the experiment. Following this, the third chapter and the final conclusions focus on the limitations that most compromise the performance of MT systems, taking as a reference the language and characteristics typical of tourist texts.
3

Pagin, Elia. "La traduzione a servizio dell’internazionalizzazione d’impresa: l’output del programma di traduzione automatica DeepL a confronto con i risultati della traduzione assistita." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/15269/.

Full text
Abstract:
The translator plays a key role in the internationalisation process of companies, not only by conveying corporate content in other languages but also by making companies aware of the value of high-quality language services. These considerations form the basis of the Language Toolkit project, within which this thesis was written. The work consists of the Italian-to-German translation of a technical manual for the company ATI of Cesena, first carried out with the aid of a computer-assisted translation tool and then compared with the output generated by the recently introduced machine translation system DeepL. The aim of the thesis is to assess the quality of DeepL's raw output and its applicability to the translation of specialised texts, with a view to saving time and resources. The analysis was conducted using two types of evaluation: an automatic one based on the BLEU score algorithm, and a human one carried out by the translator on the basis of the post-editing interventions required for the translation to be publishable.
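As a rough illustration of the automatic evaluation mentioned above, here is a minimal BLEU sketch using NLTK's sentence-level implementation; the German segments are invented for illustration, and the thesis does not specify which implementation it used.

```python
# Minimal BLEU sketch: score raw MT output against a human reference.
# Requires NLTK; the example segments are invented.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "Das Ventil vor der Montage sorgfältig reinigen .".split()
mt_output = "Reinigen Sie das Ventil vor der Montage sorgfältig .".split()

# Smoothing avoids zero scores on short segments with missing n-grams.
smooth = SmoothingFunction().method1
score = sentence_bleu([reference], mt_output, smoothing_function=smooth)
print(f"BLEU: {score:.3f}")
```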
4

Marcassoli, Giulia. "Gli output dei sistemi di traduzione automatica neurale: valutazione della qualità di Google Translate e DeepL Translator nella combinazione tedesco-italiano." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/19536/.

Full text
Abstract:
MT is becoming a powerful tool for professional translators, language service providers and common users. The present work focuses on its quality, evaluating the translations produced by two neural MT systems – i.e. Google Translate and DeepL Translator – through manual error annotation. The data set used for this task is composed of semi-specialized German texts translated into Italian. The aim of the present work is to assess the quality of MT outputs for the data set considered and to obtain a detailed overview of the types of errors made by the two neural MT systems examined. The first part of this work provides a theoretical background for MT and its evaluation. Chapter 1 deals with the definition of MT and summarizes its history. Moreover, a detailed analysis of the different MT architectures is provided, as well as an overview of the possible application scenarios and the different categories of users. Chapter 2 introduces the notion of quality in the translation field and the main automatic and manual methods applied to MT quality assessment tasks. A comprehensive analysis of some of the most significant studies on neural and phrase-based MT system output quality is then provided. The second part of this work presents a quality assessment of the output produced by the two neural MT systems, Google Translate and DeepL Translator. The evaluation was performed through manual error annotation based on a fine-grained error taxonomy. Chapter 3 outlines the methodology followed during the evaluation, with a description of the data set, the neural MT systems chosen for the study, the annotation tool and the taxonomy used during the annotation task. Chapter 4 provides the results of the evaluation and a comment thereon, offering examples extracted from the annotated data set. The final part of this work summarizes the major findings of the present contribution. Results are then discussed, with a focus on their implications for future work.
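Once exported, a manual annotation task of this kind reduces to a tally per system and per category; a minimal sketch with invented records and labels (the thesis uses its own fine-grained taxonomy):

```python
# Tally manually annotated MT errors per system and per category.
# The records and taxonomy labels below are invented for illustration.
from collections import Counter

annotations = [
    {"system": "Google Translate", "category": "mistranslation"},
    {"system": "Google Translate", "category": "word order"},
    {"system": "DeepL Translator", "category": "omission"},
    {"system": "DeepL Translator", "category": "mistranslation"},
]

counts = Counter((a["system"], a["category"]) for a in annotations)
for (system, category), n in sorted(counts.items()):
    print(f"{system:20s} {category:15s} {n}")
```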
5

Giavelli, Francesca. "DeepL: la nuova frontiera della traduzione automatica neurale a confronto con il linguaggio enologico. Uno studio basato sulla traduzione del sito della Cantina di Cesena." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/15177/.

Full text
Abstract:
When I began to think about my thesis idea in November 2017, I decided to complete my translation of the website of the Cantina di Cesena. My supervisor, Prof. Heiss, then advised me to work on and compare my translations, old and new, with the new machine translator DeepL: it seemed interesting to see what results it would produce when confronted with different types of text and with a highly specific language such as that of oenology. The resulting work is broad and varied, and this thesis reports only part of it, namely the analysis of DeepL as applied to this type of translation; the rest can be found in the Appendix and, partially, online at www.cantinacesena.it. The translation of the wine data sheets led to the creation of a short bilingual glossary in two parts collecting the specific terms used: one part covers oenological terminology, the other the gastronomic terminology used in the wine sheets under the heading "Abbinamenti:" ("Pairings:"), where typical dishes to accompany each wine are suggested. At the end of the work, all the texts were uploaded to the website, thanks to the permission granted to me by the management of the Cantina di Cesena; the online upload was subject to limitations due to the credentials provided to me, which only allow the content, not the structure, to be modified: on some pages it is possible to find errors and untranslated parts. A brief illustration of these errors is given in the Appendix, which includes some pages (screenshots) saved from the website.
6

Brizzi, Mattia. "La traduzione automatica dall’italiano in francese e il linguaggio dell’arte: post-editing dell’output di DeepL Traduttore a partire dal sito di Piero della Francesca." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20403/.

Full text
Abstract:
This thesis presents the complete post-editing of the output of DeepL Traduttore for the sections "Biografia" and "Le Opere" of the website http://www.pierodellafrancesca.it/. The first chapter outlines the history of machine translation and the role it plays today; in this connection, a definition of post-editing is given and the modes and scenarios in which it can be carried out are illustrated. The second chapter describes the methodology adopted for revising DeepL's output, with a focus on the translation of proper names. It then describes the main tools for evaluating the quality of MT systems and presents the taxonomy used to classify the revisions, which draws extensively on those developed in Temnikova (2010) and Koponen et al. (2012). The third chapter is devoted to the analysis of the source text, which proceeds in parallel with the editing work and is organised on three levels: lexical, morpho-syntactic and stylistic. The fourth chapter presents the revisions proposed to bring the quality of the raw output up to that of a publishable text. Finally, the fifth chapter uses the adopted taxonomy to examine the distribution of the corrections made. Observing the types of editing interventions carried out, which are intrinsically linked to the errors made by the machine translator, serves to form an overall judgement of the quality of the output provided by DeepL Traduttore.
7

Luccioli, Alessandra. "Stereotipi di genere e traduzione automatica dall'inglese all’italiano: uno studio di caso sul femminile nelle professioni." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20408/.

Full text
Abstract:
This thesis investigates the relationship between gender stereotypes and machine translation through a case study of sentences containing occupational nouns translated automatically from English into Italian. Chapter I offers a theoretical overview of machine translation and then examines the issue of gender stereotypes in machine translation. Chapter II focuses on the relationship between gender and language, starting from the definition of stereotypes and prejudices and moving on to the question of feminine job titles. The linguistic aspects are outlined, followed by a brief overview of the initiatives proposed in the Italian and international context to promote the use of the feminine gender in language and to avoid sexist, non-inclusive usage. Chapter III, after presenting in detail the methodology employed in previous studies, introduces the sentence structure designed for the case study. Starting from statistical data on the number of women in each occupation, the professions to be used in the study are selected. The tools used in the corpus-based analysis are also illustrated. Chapter IV presents the analysis of the outputs provided by two machine translation systems, DeepL and Google Translate, in the English-Italian language combination. The detailed analysis of all aspects of the sentence structure is accompanied by explanatory tables and graphs, together with closer examinations of the translations of some particularly relevant professions. Finally, the results of the analysis are discussed in depth, and future perspectives for studies in this field are outlined.
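A case study of this shape can be scripted end to end; a minimal sketch of the probe construction, where the sentence frame and occupation list are invented and each probe would then be sent to DeepL or Google Translate so the Italian output can be inspected for masculine versus feminine job titles:

```python
# Build English probe sentences for gender-bias testing of MT systems.
# The frame and occupations are invented for illustration.
occupations = ["nurse", "engineer", "teacher", "mechanic"]
frame = "The {occ} finished {poss} shift and went home."

probes = [
    frame.format(occ=occ, poss=poss)
    for occ in occupations
    for poss in ("his", "her", "their")
]

for p in probes:
    print(p)  # an MT call, e.g. translate(p, "en", "it"), would go here
```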
8

Zaccagnini, Rebecca. "La traduzione automatica e i composti occasionali in tedesco: un esperimento pilota." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/16004/.

Full text
Abstract:
The core of this thesis is a study of German occasional compounds (nonce formations) in the political domain, in particular those used by politicians in speeches to the Bundestag, and of the translations of these neologisms proposed by machine translation systems such as Google Translate and DeepL. The aim of the study is to investigate how machine translation systems behave when faced with such nonce formations, assessing the acceptability of their translation proposals and identifying the main difficulties and phenomena that emerged. The thesis first outlines word-formation processes, especially compounding, and occasional compounds from a theoretical point of view, before turning to the study itself: the second and third chapters are entirely devoted to describing the tools employed (the AntConc software, a linguistic corpus and the machine translation programs mentioned above) and the procedure followed. Finally, the fourth and last chapter discusses the results obtained from the study.
9

Braghittoni, Laura. "La localizzazione software: proposta di traduzione della documentazione di memoQ." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20421/.

Full text
Abstract:
ABSTRACT The modern world is becoming every day more and more tied to technology. The technological development of the last thirty years has led us to live in a digital world dominated by the Internet, which today reaches more than 4 billion users worldwide. Thus computers, smartphones and tablets are now part of everyday life for more and more people. Behind the worldwide spread of modern technologies we find localization, which plays a key role in our society, despite being barely known by non-experts. Every day localization enables users all around the world to access digital products and content in their own language, thus becoming essential in today’s digital world. Given the author’s interest in translation technologies, the subject of this thesis is the localization of the guide memoQ 8.7 Getting Started from English into Italian. The guide is part of the user documentation of memoQ, a software for Computer-Assisted Translation (CAT) used by many professional translators. Chapter 1 starts by introducing the historical background of localization, from its origin to the consolidation of the localization industry, and then provides an overview of previous literature, focusing mainly on the process of software localization. Chapter 2 presents the localization project and all the activities carried out prior to the translation, including the analysis of the source text. The localization of the memoQ guide is the focus of Chapter 3, in which some of the most interesting aspects that emerged in the translation process are examined. Finally, Chapter 4 addresses the increasingly popular topic of Machine Translation (MT): after providing a general overview, the human translation of the memoQ guide is compared to the translation performed by the MT system DeepL.
10

Santi, Greta. "La nuova frontiera della traduzione: la localizzazione di un sito web." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020.

Find full text
Abstract:
The aim of this thesis is to help the agricultural company Rio del Sol expand its market into German-speaking countries. Specifically, the company website is translated into German; the site is extremely important for presenting the products internationally and for developing e-commerce, a fundamental resource, especially in this period marked by the Coronavirus emergency, when now more than ever many people prefer to shop comfortably from home. Website localization, destined to shape our future, thus represents the new frontier of translation. The localization of the Rio del Sol website is carried out with the help of DeepL, a machine translation tool whose effectiveness is assessed. Moreover, the work involves multiple skills and is not limited to the mere translation of a few texts: a website such as Rio del Sol's is often organised into numerous menu items containing extra-textual links to newspaper articles or audiovisual texts. The thesis therefore also addresses the localization of an article published in Corriere Romagna and of an interview with the company's owner published on YouTube by Italia nel Bicchiere. The first chapter thus deals with the characteristics of the Rio del Sol website and the particularities of localizing the site itself and the newspaper article. It also describes the use of OmegaT and SDL Trados Studio, the programs needed to create the German-language documents. The second chapter focuses on the subtitling of the YouTube video, describing the relevant conventions and their implementation in Subtitle Edit. Finally, the third chapter analyses the work done by DeepL, assessing whether it is a basic machine translation tool or an efficient aid for post-editing. Academic publications and the experience gained during the project are used to draw sound conclusions.
11

Blanc, Beyne Thibault. "Estimation de posture 3D à partir de données imprécises et incomplètes : application à l'analyse d'activité d'opérateurs humains dans un centre de tri." Thesis, Toulouse, INPT, 2020. http://www.theses.fr/2020INPT0106.

Full text
Abstract:
In a context of study of stress and ergonomics at work for the prevention of musculoskeletal disorders, the company Ebhys wants to develop a tool for analyzing the activity of human operators in a waste sorting center, by measuring ergonomic indicators. To cope with the uncontrolled environment of the sorting center, these indicators are measured from depth images. An ergonomic study allows us to define the indicators to be measured. These indicators are zones of movement of the operator's hands and zones of angulations of certain joints of the upper body. They are therefore indicators that can be obtained from an analysis of the operator's 3D pose. The software for calculating the indicators is thus composed of three steps: a first part segments the operator from the rest of the scene to ease the 3D pose estimation, a second part estimates the operator's 3D pose, and the third part uses the operator's 3D pose to compute the ergonomic indicators. First of all, we propose an algorithm that extracts the operator from the rest of the depth image. To do this, we use a first automatic segmentation based on static background removal and selection of a moving element given its position and size. This first segmentation allows us to train a neural network that improves the results. This neural network is trained using the segmentations obtained from the first automatic segmentation, from which the best-quality samples are automatically selected during training. Next, we build a neural network model to estimate the operator's 3D pose. We propose a study that allows us to find a light and optimal model for 3D pose estimation on synthetic depth images, which we generate numerically. However, while this network gives outstanding performance on synthetic depth images, it is not directly applicable to the real depth images that we acquired in an industrial context. To overcome this issue, we finally build a module that transforms the synthetic depth images into more realistic depth images. This image-to-image translation model modifies the style of the depth image without changing its content, keeping the 3D pose of the operator from the synthetic source image unchanged in the translated realistic depth frames. These more realistic depth images are then used to re-train the 3D pose estimation neural network, to finally obtain a convincing 3D pose estimation on the depth images acquired in real conditions, from which the ergonomic indicators can be computed.
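One of the ergonomic indicators named above, the angulation of an upper-body joint, reduces to simple vector geometry once the 3D pose is available; a minimal sketch with invented joint coordinates:

```python
# Toy ergonomic indicator: the angle at a joint (e.g. the elbow) formed
# by three 3D keypoints from a pose estimator. Coordinates are invented.
import numpy as np

def joint_angle(a, b, c):
    """Angle at b (degrees) between segments b->a and b->c."""
    u, v = a - b, c - b
    cosine = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosine, -1.0, 1.0)))

shoulder = np.array([0.0, 1.4, 0.30])
elbow = np.array([0.2, 1.1, 0.35])
wrist = np.array([0.4, 1.2, 0.60])
print(f"elbow angle: {joint_angle(shoulder, elbow, wrist):.1f} degrees")
```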
12

Mazzuca, Nicholas John. "The dreamer deepe." Connect to this title online, 2009. http://etd.lib.clemson.edu/documents/1247508478/.

Full text
13

Straube, Nicolas. "Deep divergence." Diss., Ludwig-Maximilians-Universität München, 2011. http://nbn-resolving.de/urn:nbn:de:bvb:19-138186.

Full text
14

Joseph, Caberbe. "DEEP WITHIN." Master's thesis, University of Central Florida, 2009. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2794.

Full text
Abstract:
As a contemporary photographer, I focus most on light and color to bring out the uniqueness of my images. Photography is about lighting and I manipulate lights to raise questions in my viewers. Manipulating light is my way of being curious about how it may change mood physically and emotionally. Inspired by classical paintings, I have developed a body of photographs that can be admired by anyone. Although the main focus of my work is light and color, this body of work is also intended to empower those with little confidence in themselves and those who have been rejected, abused, or mistrusted.
M.F.A.
Department of Art
Arts and Humanities
Studio Art and the Computer MFA
15

Peterson, Grant. "Deep time /." abstract, 2008. http://0-gateway.proquest.com.innopac.library.unr.edu/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:1455664.

Full text
Abstract:
Thesis (M.A.)--University of Nevada, Reno, 2008.
"May, 2008." Library also has microfilm. Ann Arbor, Mich. : ProQuest Information and Learning Company, [2009]. 1 microfilm reel ; 35 mm. Online version available on the World Wide Web.
16

Traxl, Dominik. "Deep graphs." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät, 2017. http://dx.doi.org/10.18452/17785.

Full text
Abstract:
Network theory has proven to be a powerful instrument in the representation of complex systems. Yet, even in its latest and most general form (i.e., multilayer networks), it is still lacking essential qualities to serve as a general data analysis framework. These include, most importantly, an explicit association of information with the nodes and edges of a network, and a conclusive representation of groups of nodes and their respective interrelations on different scales. The implementation of these qualities into a generalized framework is the primary contribution of this dissertation. By doing so, I show how my framework, deep graphs, is capable of acting as a go-between, joining a unified and generalized network representation of systems with the tools and methods developed in statistics and machine learning. A software package accompanies this dissertation, see https://github.com/deepgraph/deepgraph. A number of applications of my framework are demonstrated. I construct a rainfall deep graph and conduct an analysis of spatio-temporal extreme rainfall clusters. Based on the constructed deep graph, I provide statistical evidence that the size distribution of these clusters is best approximated by an exponentially truncated power law. By means of a generative storm-track model, I argue that the exponential truncation of the observed distribution could be caused by the presence of land masses. Then, I combine two high-resolution satellite products to identify spatio-temporal clusters of fire-affected areas in the Brazilian Amazon and characterize their land-use-specific burning conditions. Finally, I investigate the effects of white noise and global coupling strength on the maximum degree of synchronization for a variety of oscillator models coupled according to a broad spectrum of network topologies. I find a general sigmoidal scaling and validate it with a suitable regression model.
17

Jönsson, Jennifer Annie Patricia. "Deep Impression." Thesis, Högskolan i Borås, Akademin för textil, teknik och ekonomi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-22025.

Full text
Abstract:
The scope of this thesis is to reveal the hidden dimensions of fashion, with the aim of stressing the worth of participation and the individual experience of fashion. This work questions what we see, and then what is actually there. Through a thorough investigation of the knit technique, the relationship of loop and thread (pause and activity) is the focus of this paper. Enhancing the significant qualities of the knitted technique, where material and shape are born simultaneously, the work presented holds a variety of results. With the aim of discussing multiple dimensions, this knit investigation is presented in a fashion context. Styled with technical sportswear, this work challenges knitwear as well as sportswear. By clashing sports-connotated materials with the knitted wool, both fields are expanded and new options and expressions are presented. The motive of this investigation is to further assert the worth of fashion: to create a space for the experience of fashion, presenting results that do not depend on presentation on the body. This work questions the pre-set truths and conventions of what fashion could be, and our ability to judge what is presented to us.
18

Wood, Rebecca. "Deep Surface." University of Cincinnati / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1427899904.

Full text
19

Peralta, Yaddyra. "Deep Waters." FIU Digital Commons, 2012. http://digitalcommons.fiu.edu/etd/622.

Full text
Abstract:
The purpose of this creative thesis was to explore the state of exile via the use of the contemporary lyric poem. Written primarily in free verse, with some poems written in the traditional forms of the sonnet, haiku and senryu, the thesis explored exile and its variant themes of colonization, assimilation, familial history, cultural and personal myth. The result was the discovery that the lyric poem is an ideal, productive and fluid medium through which a poet can consider and encounter the liminality of exile identity.
20

Backstad, Sebastian. "Federated Averaging Deep Q-Network: A Distributed Deep Reinforcement Learning Algorithm." Thesis, Umeå universitet, Institutionen för datavetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-149637.

Full text
Abstract:
In the telecom sector, a huge amount of rich data is generated every day. This trend will increase with the launch of 5G networks. Telco companies are interested in analyzing their data to shape and improve their core businesses. However, a number of limiting factors can prevent them from logging data to central data centers for analysis. Some examples include data privacy, data transfer, network latency etc. In this work, we present a distributed Deep Reinforcement Learning (DRL) method called Federated Averaging Deep Q-Network (FADQN), which employs a distributed hierarchical reinforcement learning architecture. It utilizes gradient averaging to decrease communication cost. Privacy concerns are also satisfied by training the agent locally and only sending aggregated information to the centralized server. We introduce two versions of FADQN: synchronous and asynchronous. Results on the cart-pole environment show an 80-fold reduction in communication without any significant loss in performance. Additionally, in the case of the asynchronous approach, we see a great improvement in convergence.
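The aggregation step at the heart of FADQN can be illustrated compactly; a minimal NumPy sketch of the synchronous variant, with invented layer shapes and agent count (the actual system averages across distributed workers):

```python
# Synchronous federated averaging sketch: each agent trains a local
# Q-network; the server then averages the parameters element-wise.
import numpy as np

def average_weights(local_models):
    """Average a list of parameter dicts layer by layer."""
    keys = local_models[0].keys()
    return {k: np.mean([m[k] for m in local_models], axis=0) for k in keys}

# Three agents with identically shaped (invented) two-layer networks.
agents = [
    {"w1": np.random.randn(4, 16), "w2": np.random.randn(16, 2)}
    for _ in range(3)
]
global_model = average_weights(agents)
print({k: v.shape for k, v in global_model.items()})
```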
21

Dunlop, J. S., R. J. McLure, A. D. Biggs, J. E. Geach, M. J. Michałowski, R. J. Ivison, W. Rujopakarn, et al. "A deep ALMA image of the Hubble Ultra Deep Field." OXFORD UNIV PRESS, 2017. http://hdl.handle.net/10150/623849.

Full text
Abstract:
We present the results of the first deep Atacama Large Millimeter Array (ALMA) imaging covering the full ≃4.5 arcmin^2 of the Hubble Ultra Deep Field (HUDF) imaged with Wide Field Camera 3/IR on HST. Using a 45-pointing mosaic, we have obtained a homogeneous 1.3-mm image reaching σ_1.3 ≃ 35 μJy, at a resolution of ≃0.7 arcsec. From an initial list of ≃50 peaks at >3.5σ, a rigorous analysis confirms 16 sources with S_1.3 > 120 μJy. All of these have secure galaxy counterparts with robust redshifts (⟨z⟩ = 2.15). Due to the unparalleled supporting data, the physical properties of the ALMA sources are well constrained, including their stellar masses (M*) and UV+FIR star formation rates (SFR). Our results show that stellar mass is the best predictor of SFR in the high-redshift Universe; indeed at z ≃ 2 our ALMA sample contains seven of the nine galaxies in the HUDF with M* ≥ 2 × 10^10 M⊙, and we detect only one galaxy at z > 3.5, reflecting the rapid drop-off of high-mass galaxies with increasing redshift. The detections, coupled with stacking, allow us to probe the redshift/mass distribution of the 1.3-mm background down to S_1.3 ≃ 10 μJy. We find strong evidence for a steep star-forming 'main sequence' at z ≃ 2, with SFR ∝ M* and a mean specific SFR ≃ 2.2 Gyr^-1. Moreover, we find that ≃85 per cent of total star formation at z ≃ 2 is enshrouded in dust, with ≃65 per cent of all star formation at this epoch occurring in high-mass galaxies (M* > 2 × 10^10 M⊙), for which the average obscured:unobscured SF ratio is ≃200. Finally, we revisit the cosmic evolution of SFR density; we find this peaks at z ≃ 2.5, and that the star-forming Universe transits from primarily unobscured to primarily obscured at z ≃ 4.
22

Manna, Amin(Amin A. ). "Deep linguistic lensing." Thesis, Massachusetts Institute of Technology, 2018. https://hdl.handle.net/1721.1/121630.

Full text
Abstract:
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 81-84).
Language models and semantic word embeddings have become ubiquitous as sources for machine learning features in a wide range of predictive tasks and real-world applications. We argue that language models trained on a corpus of text can learn the linguistic biases implicit in that corpus. We discuss linguistic biases, or differences in identity and perspective that account for the variation in language use from one speaker to another. We then describe methods to intentionally capture "linguistic lenses": computational representations of these perspectives. We show how the captured lenses can be used to guide machine learning models during training. We define a number of lenses for author-to-author similarity and word-to-word interchangeability. We demonstrate how lenses can be used during training time to imbue language models with perspectives about writing style, or to create lensed language models that learn less linguistic gender bias than their un-lensed counterparts.
by Amin Manna.
M. Eng.
M.Eng. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
23

Carvalho, Micael. "Deep representation spaces." Electronic Thesis or Diss., Sorbonne université, 2018. http://www.theses.fr/2018SORUS292.

Full text
Abstract:
In recent years, Deep Learning techniques have swept the state of the art of many applications of Machine Learning, becoming the new standard approach for them. The architectures issued from these techniques have been used for transfer learning, which extended the power of deep models to tasks that did not have enough data to fully train them from scratch. This thesis studies the representation spaces created by deep architectures. First, we study properties inherent to them, with particular interest in dimensionality redundancy and the precision of their features. Our findings reveal a strong degree of robustness, pointing the path to simple and powerful compression schemes. Then, we focus on refining these representations. We choose to adopt a cross-modal multi-task problem, and design a loss function capable of taking advantage of data coming from multiple modalities, while also taking into account the different tasks associated with the same dataset. In order to correctly balance these losses, we also develop a new sampling scheme that only takes into account examples contributing to the learning phase, i.e. those having a positive loss. Finally, we test our approach on a large-scale dataset of cooking recipes and associated pictures. Our method achieves a 5-fold improvement over the state of the art, and we show that the multi-task aspect of our approach promotes a semantically meaningful organization of the representation space, allowing it to perform subtasks never seen during training, such as ingredient exclusion and selection. The results we present in this thesis open many possibilities, including feature compression for remote applications, robust multi-modal and multi-task learning, and feature space refinement. For the cooking application in particular, many of our findings are directly applicable in a real-world context, especially for the detection of allergens, finding alternative recipes due to dietary restrictions, and menu planning.
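The positive-loss sampling scheme mentioned above fits in a few lines; a minimal, framework-agnostic sketch with invented per-example losses (in practice the mask would be applied to the multi-task batch loss before backpropagation):

```python
# Positive-loss sampling sketch: only examples with a non-zero loss
# contribute to the batch loss. The losses below are invented.
import numpy as np

per_example_loss = np.array([0.0, 0.7, 0.0, 1.2, 0.3])

mask = per_example_loss > 0          # keep only contributing examples
batch_loss = per_example_loss[mask].mean() if mask.any() else 0.0

print(f"kept {mask.sum()} of {mask.size} examples, loss = {batch_loss:.3f}")
```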
24

Dufourq, Emmanuel. "Evolutionary deep learning." Doctoral thesis, Faculty of Science, 2019. http://hdl.handle.net/11427/30357.

Full text
Abstract:
The primary objective of this thesis is to investigate whether evolutionary concepts can improve the performance, speed and convenience of algorithms in various active areas of machine learning research. Deep neural networks are exhibiting an explosion in the number of parameters that need to be trained, as well as the number of permutations of possible network architectures and hyper-parameters. There is little guidance on how to choose these and brute-force experimentation is prohibitively time consuming. We show that evolutionary algorithms can help tame this explosion of freedom, by developing an algorithm that robustly evolves near optimal deep neural network architectures and hyper-parameters across a wide range of image and sentiment classification problems. We further develop an algorithm that automatically determines whether a given data science problem is of classification or regression type, successfully choosing the correct problem type with more than 95% accuracy. Together these algorithms show that a great deal of the current "art" in the design of deep learning networks - and in the job of the data scientist - can be automated. Having discussed the general problem of optimising deep learning networks the thesis moves on to a specific application: the automated extraction of human sentiment from text and images of human faces. Our results reveal that our approach is able to outperform several public and/or commercial text sentiment analysis algorithms using an evolutionary algorithm that learned to encode and extend sentiment lexicons. A second analysis looked at using evolutionary algorithms to estimate text sentiment while simultaneously compressing text data. An extensive analysis of twelve sentiment datasets reveal that accurate compression is possible with 3.3% loss in classification accuracy even with 75% compression of text size, which is useful in environments where data volumes are a problem. Finally, the thesis presents improvements to automated sentiment analysis of human faces to identify emotion, an area where there has been a tremendous amount of progress using convolutional neural networks. We provide a comprehensive critique of past work, highlight recommendations and list some open, unanswered questions in facial expression recognition using convolutional neural networks. One serious challenge when implementing such networks for facial expression recognition is the large number of trainable parameters which results in long training times. We propose a novel method based on evolutionary algorithms, to reduce the number of trainable parameters whilst simultaneously retaining classification performance, and in some cases achieving superior performance. We are robustly able to reduce the number of parameters on average by 95% with no loss in classification accuracy. Overall our analyses show that evolutionary algorithms are a valuable addition to machine learning in the deep learning era: automating, compressing and/or improving results significantly, depending on the desired goal.
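The mutation-and-selection loop underlying such searches is easy to sketch; a toy example over two hyper-parameters, where the search space and the fitness proxy are invented (the thesis evolves full network architectures and hyper-parameters, with fitness measured by trained-network performance):

```python
# Toy evolutionary hyper-parameter search: evaluate, select, mutate.
import random

def fitness(hp):
    # Stand-in for validation accuracy after training a network.
    return -((hp["lr"] - 0.01) ** 2) - 0.001 * abs(hp["units"] - 64)

def mutate(hp):
    child = dict(hp)
    child["lr"] = max(1e-5, hp["lr"] * random.uniform(0.5, 2.0))
    child["units"] = max(8, hp["units"] + random.choice([-16, 0, 16]))
    return child

population = [{"lr": random.uniform(1e-4, 0.1), "units": 32} for _ in range(8)]
for generation in range(20):
    population.sort(key=fitness, reverse=True)
    parents = population[:4]                     # keep the fittest half
    population = parents + [mutate(random.choice(parents)) for _ in range(4)]

print(max(population, key=fitness))
```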
25

Lifshitz, Michael. "Suggestion modulates deeply ingrained processes." Thesis, McGill University, 2014. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=123096.

Full text
Abstract:
Behavioural scientists typically classify cognitive processes as either controlled or automatic. Whereas controlled processes are slow and effortful, automatic processes are fast and involuntary. Cognitive researchers have recently begun investigating how top-down influence in the form of suggestion can allow individuals to modulate the automaticity of deeply ingrained processes. The present thesis surveys a background of converging findings that collectively indicate that certain individuals can derail involuntary processes, such as reading. We extend previous Stroop findings to several other well-established automatic paradigms, including the McGurk effect. We thus demonstrate how, in the case of highly suggestible individuals, suggestion seems to wield control over a process that is likely even more automatic than the Stroop effect. Furthermore, we present findings from two novel experimental paradigms exploring the potential of shifting automaticity in the opposite direction – i.e., transforming, without practice, a controlled task into one that is automatic. In addition, we present findings from an experiment leveraging de-automatization to illuminate a longstanding debate on the nature of hypnotic suggestibility: whether it reflects a stable trait determined by cognitive aptitude or a flexible skill amenable to attitudinal factors such as beliefs and expectations. We surreptitiously controlled light and sound stimuli to convince participants that they were responding strongly to hypnotic suggestions for visual and auditory hallucinations. Extending our previous findings, we indexed hypnotic suggestibility by de-automatizing an involuntary audiovisual phenomenon—the McGurk effect. Our findings intimate that, at least in the present experimental context, expectation hardly correlates with—and is unlikely to be a primary determinant of—high hypnotic suggestibility. Finally, the thesis concludes by addressing related evidence from the neuroscience of contemplative practices and discussing how these findings pave the road to a more scientific understanding of voluntary control and automaticity.
26

Janardhanan, Deepa [Verfasser]. "Wideband Speech Enhancement / Deepa Janardhanan." Aachen : Shaker, 2008. http://d-nb.info/1162792663/34.

Full text
27

Marchesini, Gregorio. "Caratterizzazione della Sardinia Deep Space Antenna in supporto di missioni deep space." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20809/.

Full text
Abstract:
This thesis analyses the main characteristics of the Sardinia Deep Space Antenna, the Italian radio telescope co-funded by INAF and ASI to support both astronomical research and current and future planetary missions. Specifically, the capabilities of the SDSA for deep space missions are analysed by comparison with the 35-m and 34-m Deep Space Antennas currently operated by ESA and NASA, respectively. Particular attention is paid to the design solutions that the three DSAs share and to those that set them apart, in order to assess the innovative contribution the SDSA could make to deep space missions.
28

Mansour, Tarek M. Eng Massachusetts Institute of Technology. "Deep neural networks are lazy : on the inductive bias of deep learning." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/121680.

Full text
Abstract:
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 75-78).
Deep learning models exhibit superior generalization performance despite being heavily overparametrized. Although widely observed in practice, there is currently very little theoretical backing for such a phenomenon. In this thesis, we propose a step towards understanding generalization in deep learning. We present evidence that deep neural networks have an inherent inductive bias that makes them inclined to learn generalizable hypotheses and to avoid memorization. In this respect, we propose results suggesting that the inductive bias stems from neural networks being lazy: they tend to learn simpler rules first. We also propose a definition of simplicity in deep learning based on the implicit priors ingrained in deep neural networks.
by Tarek Mansour.
M. Eng.
M.Eng. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
29

Perjeru, Florentine. "Deep Defects in Wide Bandgap Materials Investigated Using Deep Level Transient Spectroscopy." Ohio University / OhioLINK, 2001. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou997365452.

Full text
30

Burchfield, Monica R. "Fish from Deep Water." Digital Archive @ GSU, 2010. http://digitalarchive.gsu.edu/english_theses/100.

Full text
Abstract:
These poems are lyrical narratives dealing primarily with the joys and sufferings of familial relationships in present and past generations, and how one is influenced and haunted by these interactions. There is a particular emphasis placed on the relationship between parent and child. Other poems deal with passion, both in the tangible and spiritual realms. The poems aim to use vivid figurative language to explore complex and sometimes distressing situations and emotions.
31

Daniels, Kelly L. "Deep water, open water." Master's thesis, Mississippi State : Mississippi State University, 2009. http://library.msstate.edu/etd/show.asp?etd=etd-04022009-163550.

Full text
32

King, John Douglas. "Deep Web Collection Selection." Queensland University of Technology, 2004. http://eprints.qut.edu.au/15992/.

Full text
Abstract:
The deep web contains a massive number of collections that are mostly invisible to search engines. These collections often contain high-quality, structured information that cannot be crawled using traditional methods. An important problem is selecting which of these collections to search. Automatic collection selection methods try to solve this problem by suggesting the best subset of deep web collections to search based on a query. A few methods for deep web collection selection have been proposed, such as the Collection Retrieval Inference Network system and the Glossary of Servers Server system. The drawback of these methods is that they require communication between the search broker and the collections, and need metadata about each collection. This thesis compares three different sampling methods that require neither communication between the broker and the collections nor metadata about each collection. It also adapts some traditional information-retrieval techniques to this area. In addition, the thesis tests these techniques using the INEX collection, comprising 18 collections (12232 XML documents in total) and 36 queries. The experiment shows that the performance of the sample-based techniques is satisfactory on average.
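Sample-based selection of this kind can be illustrated with toy term statistics; a minimal sketch that ranks collections by query-term frequency in their sampled documents (samples, query and scoring are invented; the thesis compares three sampling methods on INEX):

```python
# Toy collection selection: rank collections by how often the query
# terms occur in a small sample of documents from each collection.
from collections import Counter

samples = {
    "collection_A": ["deep web search engines", "hidden databases online"],
    "collection_B": ["medieval poetry archive", "renaissance manuscripts"],
}
query = "deep web databases".split()

def score(docs, query_terms):
    tf = Counter(word for doc in docs for word in doc.split())
    return sum(tf[t] for t in query_terms)

ranking = sorted(samples, key=lambda c: score(samples[c], query), reverse=True)
print(ranking)  # collections to search first
```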
33

Wülfing, Jan [Verfasser], and Martin [Akademischer Betreuer] Riedmiller. "Stable deep reinforcement learning." Freiburg : Universität, 2019. http://d-nb.info/1204826188/34.

Full text
34

Stone, Rebecca E. "Deep mixed layer entrainment." Monterey, California. Naval Postgraduate School, 1997. http://hdl.handle.net/10945/8198.

Full text
Abstract:
Approved for public release; distribution is unlimited.
A bulk turbulence-closure mixed layer model is generalized to allow prediction of very deep polar sea mixing. The model includes unsteady three-component turbulent kinetic energy budgets. In addition to terms for shear production, pressure redistribution, and dissipation, special attention is devoted to realistic treatment of thermobaric enhancement of buoyancy flux and to Coriolis effect on turbulence. The model is initialized and verified with CTD data taken by R/V Valdivia in the Greenland Sea during winter 1993-1994. Model simulations show (1) mixed layer deepening is significantly enhanced when the thermal expansion coefficient's increase with pressure is included; (2) entrainment rate is sensitive to the direction of wind stress because of Coriolis; and (3) the predicted mixed layer depth evolution agrees qualitatively with the observations. Results demonstrate the importance of water column initial conditions, accurate representation of strong surface cooling events, and inclusion of the thermobaric effect on buoyancy, to determine the depth of mixing and ultimately the heat and salt flux into the deep ocean. Since coupling of the ocean to the atmosphere through deep mixed layers in polar regions is fundamental to our climate system, it is important that regional and global models be developed that incorporate realistic representation of this coupling.
APA, Harvard, Vancouver, ISO, and other styles
35

Beyer, Franziska C. "Deep levels in SiC." Doctoral thesis, Linköpings universitet, Halvledarmaterial, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-70356.

Full text
Abstract:
Silicon carbide (SiC) has been discussed as a promising material for high power bipolar devices for almost twenty years. Advances in SiC crystal growth, especially the development of chemical vapor deposition (CVD), have enabled the fabrication of high quality material. Much progress has further been achieved in identifying minority charge carrier lifetime limiting defects, which may be attributed to structural defects, surface recombination or point defects located in the band gap of SiC. Deep levels can act as recombination centers by interacting with both the valence and conduction band. As such, the defect levels reduce the minority charge carrier lifetime, which is of great importance in bipolar devices. Impurities in semiconductors play an important role in adjusting their semiconducting properties. Intentional doping can introduce shallow defect levels to increase the conductivity or deep levels for achieving semi-insulating (SI) SiC. Impurities, especially transition metals, generate defect levels deep in the band gap of SiC, which trap charge carriers and thus reduce the charge carrier lifetime. Transition metals, such as vanadium, are used in SiC to compensate the residual nitrogen doping. It has previously been reported that the valence band edges of the different SiC polytypes are pinned to the same level and that deep levels related to transition metals can serve as a common reference level; this is known as the LANGER-HEINRICH (LH) rule. Electron irradiation introduces or enhances the concentration of existing point defects, such as the carbon vacancy (VC) and the carbon interstitial (Ci). By limiting the irradiation energy, Eirr, below the displacement energy of silicon in the SiC lattice (Eirr < 220 keV), the generated defects can be attributed to carbon-related defects, which are already created at lower Eirr. Ci are mobile at low temperatures, and using low-temperature heat treatments the annealing behavior of the introduced Ci and their complexes can be studied. Deep levels that appear and disappear depending on the electrical, thermal and optical conditions prior to the measurements are associated with metastable defects. These defects can exist in more than one configuration, which itself can have different charge states. Capacitance transient investigations, where the defect's occupation is studied by varying the depletion region in a diode, can be used to observe such occupational changes. Such unstable behavior may influence device performance, since defects may be electrically active in one configuration and inactive after transformation to another configuration. This thesis is focused on electrical characterization of deep levels in SiC using deep level transient spectroscopy (DLTS). The first part, papers 1-4, is dedicated to defect studies of both impurities and intrinsic defects in as-grown material. The second part, consisting of papers 5-7, deals with the defect content after electron irradiation and the annealing behavior of the introduced deep levels. In the first part, transition metal incorporation of iron (Fe) and tungsten (W) is discussed in papers 1 and 2, respectively. Fe and W are possible candidates to compensate the residual nitrogen doping in SiC. The doping with Fe resulted in one level in n-type material and two levels in p-type 4H-SiC. The capture process is strongly coupled to the lattice. Secondary ion mass spectrometry measurements detected the presence of B and Fe. The defects are suggested to be related to Fe and/or Fe-B-pairs.
Previous reports on tungsten doping showed that W gives rise to two levels (one shallow and one deep) in 4H- and only one deep level in 6H-SiC. In 3C-SiC, we detected two levels, one likely related to W and one intrinsic defect, labeled E1. The W related energy level aligns well with the deeper levels observed in 4H- and 6H-SiC in agreement with the LH rule. The LH rule is observed from experiments to be also valid for intrinsic levels. The level related to the DLTS peak EH6/7 in 4H-SiC aligns with the level related to E7 in 6H-SiC as well as with the level related to E1 in 3C-SiC. The alignment suggests that these levels may originate from the same defect, probably the VC, which has been proposed previously for 4H- and 6H-SiC. In paper 3, electrical characterization of 3C-layers grown heteroepitaxially on different SiC substrates is discussed. The material was of high quality with a low background doping concentration, and SCHOTTKY diodes were fabricated. It was observed that nickel as rectifying contact material exhibits a barrier height similar to that of the previously suggested gold. A leakage current in the low nA range at a reverse bias of -2 V was achieved, which allowed capacitance transient measurements. One defect related to DLTS peak E1, previously presented in paper 2, was detected and suggested to be related to an intrinsic defect. Paper 4 gives evidence that chloride-based CVD-grown material yields the same kind of defects as reported for standard CVD growth processes. However, for very high growth rates, exceeding 100 µm/h, an additional defect is observed as well as an increase of the Ti-concentration. Based on the knowledge from paper 2, the origin of the additional peak and the assumed increase of Ti-concentration can instead both be attributed to the deeper and the shallower level of tungsten in 4H-SiC, respectively. In the second part of the thesis, studies of low-energy (200 keV) electron irradiated as-grown 4H-SiC were performed. In paper 5, bistable defects, labeled EB-centers, evolved in the DLTS spectrum after the annihilation of the irradiation induced defect levels related to DLTS peaks EH1, EH3 and the bistable M-center. In a detailed annealing study presented in paper 6, the partial transformation of M-centers into the EB-centers is discussed. The transition between the two defects (M-centers → EB-centers) takes place at rather low temperatures (T ≈ 400 °C), which suggests a mobile defect as origin. The M-center and the EB-centers are suggested to be related to Ci and/or Ci complex defects. The EB-centers anneal out at about 700 °C. In paper 7, the DLTS peak EH5, which is observed after low- and high-energy electron irradiation, is presented. The peak is associated with a bistable defect, labeled F-center. Configuration A exists unoccupied and occupied by an electron, whereas configuration B is only stable when filled by an electron. Reconfiguration temperatures for both configurations were determined and the reconfiguration energies were calculated from the transition kinetics. The reconfiguration B→A can also be achieved by minority charge carrier injection. The F-center is likely a carbon related defect, since it is already present after low-energy irradiation.
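For readers unfamiliar with DLTS, the activation energies quoted for levels such as EH6/7 come from a standard Arrhenius analysis of thermal emission rates. The sketch below illustrates that textbook relation with hypothetical numbers (nothing here is data from the thesis): e_n(T) = σ_n·γ·T²·exp(−E_a/(k_B·T)), so a linear fit of ln(e_n/T²) against 1/T yields the activation energy from the slope.

```python
# Hedged sketch of the standard DLTS Arrhenius analysis (textbook relation,
# not code or data from the thesis). The constant gamma is absorbed into
# the fit intercept; only the slope is needed for the activation energy.
import numpy as np

k_B = 8.617e-5                                   # Boltzmann constant, eV/K
T = np.array([250.0, 260.0, 270.0, 280.0, 290.0])  # hypothetical peak temperatures (K)
e_n = np.array([4.1, 11.8, 31.0, 76.5, 178.0])     # hypothetical emission rates (1/s)

slope, intercept = np.polyfit(1.0 / T, np.log(e_n / T**2), 1)
E_a = -slope * k_B                               # apparent activation energy, eV
print(f"apparent activation energy: {E_a:.2f} eV")
```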
APA, Harvard, Vancouver, ISO, and other styles
36

Simonetto, Andrea. "Indagini in Deep Inference." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2010. http://amslaurea.unibo.it/1455/.

Full text
Abstract:
This thesis is a study of several aspects of the new "deep inference" methodology, combined with a review of the classical concepts of proof theory and with some original results aimed at a better understanding of the subject as well as at practical applications. The first chapter introduces, following a formalist approach (with some personal insights), the basic concepts of structural proof theory, that is, the branch that uses combinatorial (or "finitistic") tools to study the properties of proofs. The second chapter focuses on classical propositional logic, first introducing the sequent calculus and proving Gentzen's Hauptsatz, and then moving on to the calculus of structures (system SKS), for which a cut-elimination theorem, specially adapted by the author, is also proved. Finally, the locality property of system SKS is discussed and proved. The third and final chapter follows an analogous path for linear logic. The linear sequent calculus is defined and motivated, and its counterpart in the calculus of structures is discussed. Here the attention is devoted mainly to the problem of defining non-commutative operators, which place these systems in close relation with process algebras.
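To give a flavour of what "deep inference" means here: rules in the calculus of structures apply at arbitrary depth inside a formula context S{ }, not only at the root of a sequent. The characteristic switch rule of system SKS has, in one common presentation (the notation may differ from the thesis'), the following form:

```latex
% Switch rule of the calculus of structures (one common presentation);
% S{ } is an arbitrary formula context, so the rule may be applied
% deep inside a formula rather than only at its top level.
\[
  \mathsf{s}\;\frac{S\{(A \vee B) \wedge C\}}{S\{(A \wedge C) \vee B\}}
\]
```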
APA, Harvard, Vancouver, ISO, and other styles
37

Wolfe, Traci. "Digging deep for meaning." Online version, 2008. http://www.uwstout.edu/lib/thesis/2008/2008wolfet.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

White, Martin. "Deep Learning Software Repositories." W&M ScholarWorks, 2017. https://scholarworks.wm.edu/etd/1516639667.

Full text
Abstract:
Bridging the abstraction gap between artifacts and concepts is the essence of software engineering (SE) research problems. SE researchers regularly use machine learning to bridge this gap, but there are three fundamental issues with traditional applications of machine learning in SE research. Traditional applications are too reliant on labeled data. They are too reliant on human intuition, and they are not capable of learning expressive yet efficient internal representations. Ultimately, SE research needs approaches that can automatically learn representations of massive, heterogeneous, datasets in situ, apply the learned features to a particular task and possibly transfer knowledge from task to task. Improvements in both computational power and the amount of memory in modern computer architectures have enabled new approaches to canonical machine learning tasks. Specifically, these architectural advances have enabled machines that are capable of learning deep, compositional representations of massive data depots. The rise of deep learning has ushered in tremendous advances in several fields. Given the complexity of software repositories, we presume deep learning has the potential to usher in new analytical frameworks and methodologies for SE research and the practical applications it reaches. This dissertation examines and enables deep learning algorithms in different SE contexts. We demonstrate that deep learners significantly outperform state-of-the-practice software language models at code suggestion on a Java corpus. Further, these deep learners for code suggestion automatically learn how to represent lexical elements. We use these representations to transmute source code into structures for detecting similar code fragments at different levels of granularity—without declaring features for how the source code is to be represented. Then we use our learning-based framework for encoding fragments to intelligently select and adapt statements in a codebase for automated program repair. In our work on code suggestion, code clone detection, and automated program repair, everything for representing lexical elements and code fragments is mined from the source code repository. Indeed, our work aims to move SE research from the art of feature engineering to the science of automated discovery.
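The learned-representation idea in this abstract can be conveyed with a toy sketch (assumed names and a stub vocabulary, not the dissertation's models): code tokens are embedded, fragment vectors are obtained by pooling, and candidate clones are flagged by cosine similarity, with no hand-declared features over the source code. In practice the embeddings would come from a trained language model rather than random initialization.

```python
# Hedged sketch of learned code representations for clone detection
# (illustrative only): embed tokens, pool into fragment vectors, and
# compare fragments by cosine similarity.
import torch
import torch.nn as nn

vocab = {"for": 0, "(": 1, "int": 2, "i": 3, "=": 4, "0": 5, ";": 6, ")": 7, "while": 8}
embed = nn.Embedding(len(vocab), 16)   # would be trained by a language model

def fragment_vector(tokens):
    ids = torch.tensor([vocab[t] for t in tokens])
    return embed(ids).mean(dim=0)      # simple mean-pooled fragment representation

a = fragment_vector(["for", "(", "int", "i", "=", "0", ";"])
b = fragment_vector(["for", "(", "int", "i", "=", "0", ")"])
print(torch.cosine_similarity(a, b, dim=0).item())  # near 1 -> likely clone
```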
APA, Harvard, Vancouver, ISO, and other styles
39

Liu, Qian. "Deep spiking neural networks." Thesis, University of Manchester, 2018. https://www.research.manchester.ac.uk/portal/en/theses/deep-spiking-neural-networks(336e6a37-2a0b-41ff-9ffb-cca897220d6c).html.

Full text
Abstract:
Neuromorphic Engineering (NE) has led to the development of biologically-inspired computer architectures whose long-term goal is to approach the performance of the human brain in terms of energy efficiency and cognitive capabilities. Although there are a number of neuromorphic platforms available for large-scale Spiking Neural Network (SNN) simulations, the problem of programming these brain-like machines to be competent in cognitive applications still remains unsolved. On the other hand, Deep Learning has emerged in Artificial Neural Network (ANN) research to dominate state-of-the-art solutions for cognitive tasks. Thus the main research problem emerges of understanding how to operate and train biologically-plausible SNNs to close the gap in cognitive capabilities between SNNs and ANNs. SNNs can be trained by first training an equivalent ANN and then transferring the tuned weights to the SNN. This method is called ‘off-line’ training, since it does not take place on an SNN directly, but rather on an ANN instead. However, previous work on such off-line training methods has struggled in terms of poor modelling accuracy of the spiking neurons and high computational complexity. In this thesis we propose a simple and novel activation function, Noisy Softplus (NSP), to closely model the response firing activity of biologically-plausible spiking neurons, and introduce a generalised off-line training method using the Parametric Activation Function (PAF) to map the abstract numerical values of the ANN to concrete physical units, such as current and firing rate in the SNN. Based on this generalised training method and its fine tuning, we achieve the state-of-the-art accuracy on the MNIST classification task using spiking neurons, 99.07%, on a deep spiking convolutional neural network (ConvNet). We then take a step forward to ‘on-line’ training methods, where Deep Learning modules are trained purely on SNNs in an event-driven manner. Existing work has failed to provide SNNs with recognition accuracy equivalent to ANNs due to the lack of mathematical analysis. Thus we propose a formalised Spike-based Rate Multiplication (SRM) method which transforms the product of firing rates to the number of coincident spikes of a pair of rate-coded spike trains. Moreover, these coincident spikes can be captured by the Spike-Time-Dependent Plasticity (STDP) rule to update the weights between the neurons in an on-line, event-based, and biologically-plausible manner. Furthermore, we put forward solutions to reduce correlations between spike trains; thereby addressing the result of performance drop in on-line SNN training. The promising results of spiking Autoencoders (AEs) and Restricted Boltzmann Machines (SRBMs) exhibit equivalent, sometimes even superior, classification and reconstruction capabilities compared to their non-spiking counterparts. To provide meaningful comparisons between these proposed SNN models and other existing methods within this rapidly advancing field of NE, we propose a large dataset of spike-based visual stimuli and a corresponding evaluation methodology to estimate the overall performance of SNN models and their hardware implementations.
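As a concrete anchor for the Noisy Softplus idea: one published parameterisation scales an ordinary softplus by the membrane noise level σ and a constant k, f(x) = kσ·log(1 + exp(x/(kσ))), so that the curve mimics the noisy firing-rate response of a spiking neuron. The sketch below uses illustrative constants, which are assumptions rather than values from the thesis.

```python
# Hedged sketch of a Noisy Softplus activation (one published
# parameterisation; constants here are illustrative, not from the thesis).
import numpy as np

def noisy_softplus(x, sigma=1.0, k=0.2):
    s = k * sigma                      # noise level sets the curve's sharpness
    return s * np.log1p(np.exp(x / s))

x = np.linspace(-2, 2, 5)
print(noisy_softplus(x))               # smooth, firing-rate-like response curve
```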
APA, Harvard, Vancouver, ISO, and other styles
40

Sheiretov, Yanko Konstantinov. "Deep penetration magnetoquasistatic sensors." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/16772.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2001.
Includes bibliographical references (p. 193-198).
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
This research effort extends the capabilities of existing model-based spatially periodic quasistatic-field sensors. The research developed three significant improvements in the field of nondestructive evaluation. The impact of each is detailed below: 1. The design of a distributed current drive magnetoresistive magnetometer that matches the model response sufficiently to perform air calibration and absolute property measurement. Replacing the secondary winding with a magnetoresistive sensor allows the magnetometer to be operated at frequencies much lower than ordinarily possible, including static (DC) operation, which enables deep penetration defect imaging. Low frequencies are needed for deep probing of metals, where the depth of penetration is otherwise limited by the skin depth due to the shielding effect of induced eddy currents. The capability to perform such imaging without dependence on calibration standards has substantial cost, ease-of-use, and technological benefits. The absolute property measurement capability is important because it provides a robust comparison for manufacturing quality control and monitoring of aging processes. Air calibration also alleviates the dependence on calibration standards that can be difficult to maintain. 2. The development and validation of cylindrical geometry models for inductive and capacitive sensors. The development of cylindrical geometry models enables the design of families of circularly symmetric magnetometers and dielectrometers with the "model-based" methodology, which requires close agreement between actual sensor response and simulated response. These kinds of sensors are needed in applications where the components being tested have circular symmetry, e.g. cracks near fasteners, or if it is important to measure the spatial average of an anisotropic property. 3. The development of accurate and efficient two-dimensional inverse interpolation and grid look-up techniques to determine electromagnetic and geometric properties. The ability to perform accurate and efficient grid interpolation is important for all sensors that follow the model-based principle, but it is particularly important for the complex-shaped grids used with the magnetometers and dielectrometers in this thesis. A prototype sensor that incorporates all new features, i.e. a circularly symmetric magnetometer with a distributed current drive that uses a magnetoresistive secondary element, was designed, built, and tested. The primary winding is designed to have no net dipole moment, which improves repeatability by reducing the influence of distant objects. It can also support operation at two distinct effective spatial wavelengths. A circuit is designed that places the magnetoresistive sensor in a feedback configuration with a secondary winding to provide the necessary biasing and to ensure a linear transfer characteristic. Efficient FFT-based methods are developed to model magnetometers with a distributed current drive for both Cartesian and cylindrical geometry sensors. Results from measurements with a prototype circular dielectrometer that agree with the model-based analysis are also presented. In addition to the main contributions described so far, this work also includes other related enhancements to the time and space periodic-field sensor models, such as incorporating motion in the models to account for moving media effects. This development is important in low frequency scanning applications.
Some improvements of the existing semi-analytical collocation point models for the standard Cartesian magnetometers and dielectrometers are also presented.
by Yanko Sheiretov.
Ph.D.
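The grid look-up contribution (item 3 of the abstract above) can be illustrated generically: sensor responses simulated on a grid of property pairs are inverted by interpolating from measurement space back to property space. The sketch below uses a toy forward model and hypothetical values throughout; nothing here reproduces the thesis' algorithms.

```python
# Hedged sketch of two-dimensional inverse interpolation: a toy forward
# model sampled on a (conductivity, lift-off) grid is inverted by
# interpolating "backwards" from a measured response to the properties.
import numpy as np
from scipy.interpolate import griddata

sigma, lift = np.meshgrid(np.linspace(1, 10, 25), np.linspace(0.1, 1.0, 25))
resp1 = np.log(sigma) * lift          # toy stand-in for one response channel
resp2 = sigma / (1 + lift)            # toy stand-in for a second channel

pts = np.column_stack([resp1.ravel(), resp2.ravel()])   # grid in measurement space
measured = np.array([[1.2, 4.0]])                       # hypothetical measurement

sigma_est = griddata(pts, sigma.ravel(), measured, method="linear")
lift_est = griddata(pts, lift.ravel(), measured, method="linear")
print(sigma_est, lift_est)            # -> estimated (conductivity, lift-off)
```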
APA, Harvard, Vancouver, ISO, and other styles
41

Halle, Alex, and Alexander Hasse. "Topologieoptimierung mittels Deep Learning." Technische Universität Chemnitz, 2019. https://monarch.qucosa.de/id/qucosa%3A34343.

Full text
Abstract:
Topology optimization is the search for an optimal component geometry as a function of the application. For complex problems, topology optimization can require considerable time and computing capacity due to its high level of detail. These drawbacks are to be reduced by means of deep learning, so that topology optimization can serve the design engineer as an aid that responds within seconds. Deep learning is the extension of artificial neural networks, with which patterns or rules of behavior can be learned. The aim is thus to solve topology optimization, which until now has been computed numerically, with a deep learning approach. To this end, approaches, computation schemes, and first conclusions are presented and discussed.
APA, Harvard, Vancouver, ISO, and other styles
42

Goh, Hanlin. "Learning deep visual representations." Paris 6, 2013. http://www.theses.fr/2013PA066356.

Full text
Abstract:
Recent advancements in the areas of deep learning and visual information processing have presented an opportunity to unite both fields. These complementary fields combine to tackle the problem of classifying images into their semantic categories. Deep learning brings learning and representational capabilities to a visual processing model that is adapted for image classification. This thesis addresses problems that lead to the proposal of learning deep visual representations for image classification. The problem of deep learning is tackled on two fronts. The first aspect is the problem of unsupervised learning of latent representations from input data. The main focus is the integration of prior knowledge into the learning of restricted Boltzmann machines (RBM) through regularization. Regularizers are proposed to induce sparsity, selectivity and topographic organization in the coding to improve discrimination and invariance. The second direction introduces the notion of gradually transitioning from unsupervised layer-wise learning to supervised deep learning. This is done through the integration of bottom-up information with top-down signals. Two novel implementations supporting this notion are explored. The first method uses top-down regularization to train a deep network of RBMs. The second method combines predictive and reconstructive loss functions to optimize a stack of encoder-decoder networks. The proposed deep learning techniques are applied to tackle the image classification problem. The bag-of-words model is adopted due to its strengths in image modeling through the use of local image descriptors and spatial pooling schemes. Deep learning with spatial aggregation is used to learn a hierarchical visual dictionary for encoding the image descriptors into mid-level representations. This method achieves leading image classification performances for object and scene images. The learned dictionaries are diverse and non-redundant. The speed of inference is also high. From this, a further optimization is performed for the subsequent pooling step. This is done by introducing a differentiable pooling parameterization and applying the error backpropagation algorithm. This thesis represents one of the first attempts to synthesize deep learning and the bag-of-words model. This union results in many challenging research problems, leaving much room for further study in this area.
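The differentiable pooling step mentioned at the end of the abstract can be sketched with a generic learnable generalized-mean pooling (an assumption for illustration; the thesis' exact parameterisation may differ). Because the pooling exponent is an ordinary parameter, gradients from the classification loss reach both the encoded descriptors and the pooling operation itself.

```python
# Hedged sketch of a differentiable pooling parameterisation (generic
# generalized-mean pooling, not necessarily the thesis' formulation).
import torch
import torch.nn as nn

class LearnablePooling(nn.Module):
    def __init__(self):
        super().__init__()
        self.p = nn.Parameter(torch.tensor(2.0))   # p=1 -> mean, p->inf -> max

    def forward(self, codes):                      # codes: (n_descriptors, dim), >= 0
        return codes.clamp(min=1e-6).pow(self.p).mean(dim=0).pow(1.0 / self.p)

pool = LearnablePooling()
codes = torch.rand(50, 128, requires_grad=True)    # encoded local descriptors
pooled = pool(codes)                               # (128,) image-level feature
pooled.sum().backward()                            # gradients flow to codes and p
print(pool.p.grad is not None)                     # True -> pooling is trainable
```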
APA, Harvard, Vancouver, ISO, and other styles
43

Brown, Kevin. "A Deep Diver's Becoming." Thesis, Université d'Ottawa / University of Ottawa, 2020. http://hdl.handle.net/10393/40424.

Full text
Abstract:
When scuba diving under a physical overhead such as a cave, a mine, or a shipwreck, or under a virtual overhead imposed by decompression requirements, it is impossible to safely access the surface in the event of an emergency. Diving with an overhead is therefore often described as technical diving. In this research, I address how technical divers in Outaouais, Quebec, practice this risky sport with unforgiving consequences. Based on fieldwork in Outaouais, I focus on divers, including myself, who perform trimix dives deeper than 200 feet. I argue that the process of becoming a deep diver is a lifelong journey in which a diver learns to adapt to a milieu hostile to human life. The basic skills are acquired during classes to ensure that a novice diver will survive in this limit-environment. As divers bend the rules and take more risks to go deeper for longer lengths of time, they go through a series of limit-experiences and near misses that are essential to their development and found to be regenerative. In turn, those limit-experiences and near-miss events shared with teammates create mutual trust. It is this trust that becomes the foundation of the team and allows the team to improve upon existing techniques and increase the depth and difficulty of their dives.
APA, Harvard, Vancouver, ISO, and other styles
44

Geirsson, Gunnlaugur. "Deep learning exotic derivatives." Thesis, Uppsala universitet, Avdelningen för systemteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-430410.

Full text
Abstract:
Monte Carlo methods in derivative pricing are computationally expensive, in particular for evaluating a model's partial derivatives with respect to its inputs. This research proposes the use of deep learning to approximate such valuation models for highly exotic derivatives, using automatic differentiation to evaluate input sensitivities. Deep learning models are trained to approximate Phoenix Autocall valuation using a proprietary model used by Svenska Handelsbanken AB. Models are trained on large datasets of low-accuracy (10^4 simulations) Monte Carlo data, successfully learning the true model with an average error of 0.1% on validation data generated by 10^8 simulations. A specific model parametrisation is proposed for 2-day valuation only, to be recalibrated interday using transfer learning. Automatic differentiation approximates sensitivity to (normalised) underlying asset prices with a mean relative error generally below 1.6%. Overall error when predicting sensitivity to implied volatility is found to lie within 10%-40%. Nearly identical results are found by finite difference and automatic differentiation in both cases. Automatic differentiation is not successful at capturing sensitivity to interday contract change in value, though errors of 8%-25% are achieved by finite difference. Model recalibration by transfer learning proves to converge over 15 times faster and with up to 14% lower relative error than training using random initialisation. The results show that deep learning models can efficiently learn Monte Carlo valuation, and that these models can be quickly recalibrated by transfer learning. The deep learning model gradient computed by automatic differentiation proves a good approximation of the true model sensitivities. Future research proposals include studying optimised recalibration schedules, using training data generated by single Monte Carlo price paths, and studying additional parameters and contracts.
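The pattern described in this abstract, fitting a network to Monte Carlo prices and then reading sensitivities off by automatic differentiation, can be sketched in a few lines. The network, inputs, and values below are placeholders, not Handelsbanken's model or the thesis' architecture:

```python
# Hedged sketch: approximate a pricing function with a small MLP, then
# obtain input sensitivities via automatic differentiation.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 1))
# ... in practice: fit `net` on (market inputs, Monte Carlo price) pairs ...

x = torch.tensor([[1.02, 0.25, 1.5]], requires_grad=True)  # hypothetical spot, vol, maturity
price = net(x)[0, 0]                         # scalar model price
(grad,) = torch.autograd.grad(price, x)      # d(price)/d(inputs) in one autodiff call
print(price.item(), grad[0].tolist())        # gradient entries play the role of Greeks
```

Once the network is trained, each sensitivity evaluation costs one backward pass instead of a fresh, bumped Monte Carlo run, which is the efficiency gain the thesis exploits.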
APA, Harvard, Vancouver, ISO, and other styles
45

Debain, Yann. "Deep Convolutional Nonnegative Autoencoders." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-287352.

Full text
Abstract:
In this thesis, nonnegative matrix factorization (NMF) is viewed as a feed-backward neural network and generalized to a deep convolutional architecture with forward-propagation under β-divergence. NMF and feedforward neural networks are put in relation and a new class of autoencoders is proposed, namely the nonnegative autoencoders. It is shown that NMF is essentially the decoder part of an autoencoder with nonnegative weights and input. The shallow autoencoder with fully connected neurons is extended to a deep convolutional autoencoder with the same properties. Multiplicative factor updates are used to ensure nonnegativity of the weights in the network. As a result, a shallow nonnegative autoencoder (NAE), a shallow convolutional nonnegative autoencoder (CNAE) and a deep convolutional nonnegative autoencoder (DCNAE) are developed. Finally, all three variants of the nonnegative autoencoder are tested on different tasks, such as signal reconstruction and signal enhancement.
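For context, the multiplicative updates mentioned in the abstract are the classic Lee-Seung rules, shown below for the Euclidean (Frobenius) case; the β-divergence and convolutional variants generalize them. This is a minimal sketch, not code from the thesis. Nonnegativity is preserved because each update multiplies by a ratio of nonnegative terms.

```python
# Hedged sketch of Lee-Seung multiplicative NMF updates (Frobenius case):
# V ~ W @ H with all factors elementwise nonnegative.
import numpy as np

rng = np.random.default_rng(0)
V = rng.random((64, 100))            # nonnegative data matrix
r = 8                                # factorization rank
W = rng.random((64, r)) + 1e-3
H = rng.random((r, 100)) + 1e-3
eps = 1e-9                           # avoids division by zero

for _ in range(200):
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))  # relative error decreases
```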
APA, Harvard, Vancouver, ISO, and other styles
46

Patil, Raj. "Deep UV Raman Spectroscopy." Thesis, The University of Arizona, 2016. http://hdl.handle.net/10150/613378.

Full text
Abstract:
This thesis examines the performance of a custom-built deep UV laser (257.5 nm) for Raman spectroscopy and the advantages of Raman spectroscopy with a laser in the deep UV over a laser in the visible range (532 nm). It describes the theory of resonance Raman scattering, the experimental setup for Raman spectroscopy, and a few Raman spectroscopy measurements. The measurements were performed on biological samples, an oak tree leaf and Lactobacillus acidophilus and Bifidobacteria from probiotic medicinal capsules. Fluorescence-free Raman spectra were acquired for the two samples with the 257.5 nm laser, whereas the Raman spectra for the two samples with the 532 nm laser were masked by fluorescence. Raman measurements for an inorganic salt, sodium nitrate, showed a resonance Raman effect with the 257.5 nm laser, which led to an enhancement in Raman intensity compared to that with the 532 nm laser. We were therefore able to demonstrate two advantages of deep UV Raman spectroscopy. The first is the possibility of acquiring fluorescence-free spectra for biological samples. The second is the possibility of gaining enhancement in Raman intensity due to the resonance Raman effect. It was observed that the 257.5 nm laser requires optimization to reduce its bandwidth and achieve better resolution. The laser also needs to be optimized for higher power to obtain a better signal-to-noise ratio. The experimental setup can also be further improved to obtain better resolution. If these improvements are implemented, the deep UV Raman setup will become an important tool for spectroscopy.
APA, Harvard, Vancouver, ISO, and other styles
47

Arnold, Ludovic. "Learning Deep Representations : Toward a better new understanding of the deep learning paradigm." Phd thesis, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00842447.

Full text
Abstract:
Since 2006, deep learning algorithms which rely on deep architectures with several layers of increasingly complex representations have been able to outperform state-of-the-art methods in several settings. Deep architectures can be very efficient in terms of the number of parameters required to represent complex operations which makes them very appealing to achieve good generalization with small amounts of data. Although training deep architectures has traditionally been considered a difficult problem, a successful approach has been to employ an unsupervised layer-wise pre-training step to initialize deep supervised models. First, unsupervised learning has many benefits w.r.t. generalization because it only relies on unlabeled data which is easily found. Second, the possibility to learn representations layer by layer instead of all layers at once improves generalization further and reduces computational time. However, deep learning is a very recent approach and still poses a lot of theoretical and practical questions concerning the consistency of layer-wise learning with many layers and difficulties such as evaluating performance, performing model selection and optimizing layers. In this thesis we first discuss the limitations of the current variational justification for layer-wise learning which does not generalize well to many layers. We ask if a layer-wise method can ever be truly consistent, i.e. capable of finding an optimal deep model by training one layer at a time without knowledge of the upper layers. We find that layer-wise learning can in fact be consistent and can lead to optimal deep generative models. To do this, we introduce the Best Latent Marginal (BLM) upper bound, a new criterion which represents the maximum log-likelihood of a deep generative model where the upper layers are unspecified. We prove that maximizing this criterion for each layer leads to an optimal deep architecture, provided the rest of the training goes well. Although this criterion cannot be computed exactly, we show that it can be maximized effectively by auto-encoders when the encoder part of the model is allowed to be as rich as possible. This gives a new justification for stacking models trained to reproduce their input and yields better results than the state-of-the-art variational approach. Additionally, we give a tractable approximation of the BLM upper-bound and show that it can accurately estimate the final log-likelihood of models. Taking advantage of these theoretical advances, we propose a new method for performing layer-wise model selection in deep architectures, and a new criterion to assess whether adding more layers is warranted. As for the difficulty of training layers, we also study the impact of metrics and parametrization on the commonly used gradient descent procedure for log-likelihood maximization. We show that gradient descent is implicitly linked with the metric of the underlying space and that the Euclidean metric may often be an unsuitable choice as it introduces a dependence on parametrization and can lead to a breach of symmetry. To mitigate this problem, we study the benefits of the natural gradient and show that it can restore symmetry, regrettably at a high computational cost. We thus propose that a centered parametrization may alleviate the problem with almost no computational overhead.
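The layer-wise procedure analysed in this abstract can be made concrete with a generic greedy stacking sketch (the BLM criterion itself is not implemented here; names and sizes are illustrative): each auto-encoder layer is trained to reconstruct the codes of the layer below, then frozen, and its encoder is stacked to initialize a deep model.

```python
# Hedged sketch of greedy layer-wise pre-training with auto-encoders
# (the generic procedure the thesis analyses, not its BLM criterion).
import torch
import torch.nn as nn

def pretrain_layer(data, n_hidden, epochs=100, lr=1e-2):
    enc = nn.Linear(data.shape[1], n_hidden)
    dec = nn.Linear(n_hidden, data.shape[1])
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        recon = dec(torch.sigmoid(enc(data)))      # reconstruct this layer's input
        loss = ((recon - data) ** 2).mean()
        loss.backward()
        opt.step()
    return enc

X = torch.rand(512, 32)               # toy unlabeled data
codes, stack = X, []
for width in (24, 16, 8):             # train one layer at a time, bottom up
    enc = pretrain_layer(codes, width)
    stack.append(enc)
    with torch.no_grad():
        codes = torch.sigmoid(enc(codes))
# `stack` now initializes a deep model for supervised fine-tuning.
```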
APA, Harvard, Vancouver, ISO, and other styles
48

Ohta, Atsuyuki, Koh Naito, Yoshihisa Okuda, and Iwao Kawabe. "Geochemical characteristics of Antarctic deep-sea ferromanganese nodules from highly oxic deep-sea water." Dept. of Earth and Planetary Sciences, Nagoya University, 1999. http://hdl.handle.net/2237/2843.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Grant, Hazel Christine. "The role of Weddell Sea deep and bottom waters in ventilating the deep ocean." Thesis, University of East Anglia, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.492970.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Chavva, Venkataramana Reddy. "Development of a deep level transient spectrometer and some deep level studies of Gallium Arsenide." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1993. http://hub.hku.hk/bib/B31211252.

Full text
APA, Harvard, Vancouver, ISO, and other styles