
Dissertations / Theses on the topic 'Deep'



Consult the top 50 dissertations / theses for your research on the topic 'Deep.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Peralta, Yaddyra. "Deep Waters." FIU Digital Commons, 2012. http://digitalcommons.fiu.edu/etd/622.

Full text
Abstract:
The purpose of this creative thesis was to explore the state of exile via the use of the contemporary lyric poem. Written primarily in free verse, with some poems written in the traditional forms of the sonnet, haiku and senryu, the thesis explored exile and its variant themes of colonization, assimilation, familial history, cultural and personal myth. The result was the discovery that the lyric poem is an ideal, productive and fluid medium through which a poet can consider and encounter the liminality of exile identity.
2

Straube, Nicolas. "Deep divergence." Diss., Ludwig-Maximilians-Universität München, 2011. http://nbn-resolving.de/urn:nbn:de:bvb:19-138186.

Full text
3

Joseph, Caberbe. "DEEP WITHIN." Master's thesis, University of Central Florida, 2009. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2794.

Full text
Abstract:
As a contemporary photographer, I focus most on light and color to bring out the uniqueness of my images. Photography is about lighting, and I manipulate light to raise questions in my viewers. Manipulating light is my way of exploring how it may change mood, both physically and emotionally. Inspired by classical paintings, I have developed a body of photographs that can be admired by anyone. Although the main focus of my work is light and color, this body of work is also intended to empower those with little confidence in themselves and those who have been rejected, abused, or mistrusted.
M.F.A.
Department of Art
Arts and Humanities
Studio Art and the Computer MFA
4

Krotevych, K. "Deep web." Thesis, Sumy State University, 2015. http://essuir.sumdu.edu.ua/handle/123456789/40487.

Full text
Abstract:
We have grown accustomed to the idea that all information on the Internet can be found instantly by search engines, which seem to know everything about everyone. But is that really so? It turns out there are areas of the World Wide Web to which neither Google nor Yandex has access. Moreover, according to most experts, their size is hundreds of times greater than that of the rest of the Internet. This hidden web is called the deep web.
5

Wood, Rebecca. "Deep Surface." University of Cincinnati / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1427899904.

Full text
6

Peterson, Grant. "Deep time." abstract, 2008. http://0-gateway.proquest.com.innopac.library.unr.edu/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:1455664.

Full text
Abstract:
Thesis (M.A.)--University of Nevada, Reno, 2008.
"May, 2008." Library also has microfilm. Ann Arbor, Mich. : ProQuest Information and Learning Company, [2009]. 1 microfilm reel ; 35 mm. Online version available on the World Wide Web.
7

Traxl, Dominik. "Deep graphs." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät, 2017. http://dx.doi.org/10.18452/17785.

Full text
Abstract:
Network theory has proven to be a powerful instrument in the representation of complex systems. Yet, even in its latest and most general form (i.e., multilayer networks), it is still lacking essential qualities to serve as a general data analysis framework. These include, most importantly, an explicit association of information with the nodes and edges of a network, and a conclusive representation of groups of nodes and their respective interrelations on different scales. The implementation of these qualities into a generalized framework is the primary contribution of this dissertation. By doing so, I show how my framework - deep graphs - is capable of acting as a go-between, joining a unified and generalized network representation of systems with the tools and methods developed in statistics and machine learning. A software package accompanies this dissertation, see https://github.com/deepgraph/deepgraph. A number of applications of my framework are demonstrated. I construct a rainfall deep graph and conduct an analysis of spatio-temporal extreme rainfall clusters. Based on the constructed deep graph, I provide statistical evidence that the size distribution of these clusters is best approximated by an exponentially truncated power law. By means of a generative storm-track model, I argue that the exponential truncation of the observed distribution could be caused by the presence of land masses. Then, I combine two high-resolution satellite products to identify spatio-temporal clusters of fire-affected areas in the Brazilian Amazon and characterize their land use specific burning conditions. Finally, I investigate the effects of white noise and global coupling strength on the maximum degree of synchronization for a variety of oscillator models coupled according to a broad spectrum of network topologies. I find a general sigmoidal scaling and validate it with a suitable regression model.
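The software package named in the abstract (https://github.com/deepgraph/deepgraph) implements this node-and-edge-table design on top of pandas. As a rough usage sketch (based on the package's documented connector interface, with a toy node table of our own rather than the thesis data):

    import pandas as pd
    import deepgraph as dg

    # Toy node table: in this framework every node carries explicit features.
    v = pd.DataFrame({'time': [0.0, 1.0, 2.0, 5.0],
                      'x':    [0.0, 0.5, 1.5, 4.0]})
    g = dg.DeepGraph(v)

    # A connector maps source/target node columns (the '_s'/'_t' suffix
    # convention) to an edge feature, attaching information to edges.
    def dt(time_s, time_t):
        return time_t - time_s

    g.create_edges(connectors=dt)  # builds the edge table g.e
    print(g.e)                     # pairwise edge features as a DataFrame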
8

Jönsson, Jennifer Annie Patricia. "Deep Impression." Thesis, Högskolan i Borås, Akademin för textil, teknik och ekonomi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-22025.

Full text
Abstract:
The scope of this thesis is to reveal the hidden dimensions of fashion, with the aim of stressing the worth of participation and the individual experience of fashion. This work questions what we see, and then what is actually there. Through a thorough investigation of the knit technique, the relationship of loop and thread (pause and activity) is the focus of this paper. Enhancing the significant qualities of the knitted technique, where material and shape are born simultaneously, the work presented holds a variety of results. With the aim of discussing multiple dimensions, this knit investigation is presented in a fashion context. Styled with technical sportswear, this work challenges knitwear as well as sportswear. By clashing sports-connoted materials with the knitted wool, both fields are expanded and new options and expressions are presented. The motive of this investigation is to further state the worth of fashion: to create a space for the experience of fashion, stating the varied results that do not depend on presentation on a body. This work questions the pre-set truths and conventions of what fashion could be, and our ability to judge what is presented to us.
9

Lynch, Cassie A. "Korangan: Deep Time and Deep Transformation in Noongar Country." Thesis, Curtin University, 2020. http://hdl.handle.net/20.500.11937/81989.

Full text
Abstract:
Recent research suggests that Indigenous stories that feature 'cold times' and rising seas are in fact eyewitness accounts of the last ice age and the rise in sea-level that followed it. Building on this notion, this research explores whether writing fiction in the scale of deep time can be employed to explore colonial pasts, the contested present and radical futures.
10

Backstad, Sebastian. "Federated Averaging Deep Q-Network: A Distributed Deep Reinforcement Learning Algorithm." Thesis, Umeå universitet, Institutionen för datavetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-149637.

Full text
Abstract:
In the telecom sector, a huge amount of rich data is generated every day. This trend will increase with the launch of 5G networks. Telco companies are interested in analyzing their data to shape and improve their core businesses. However, a number of limiting factors can prevent them from logging data to central data centers for analysis; some examples include data privacy, data transfer, and network latency. In this work, we present a distributed Deep Reinforcement Learning (DRL) method called Federated Averaging Deep Q-Network (FADQN), which employs a distributed hierarchical reinforcement learning architecture. It uses gradient averaging to decrease communication cost. Privacy concerns are also satisfied by training the agent locally and only sending aggregated information to the centralized server. We introduce two versions of FADQN: synchronous and asynchronous. Results on the cart-pole environment show an 80-fold reduction in communication without any significant loss in performance. Additionally, in the case of the asynchronous approach, we see a great improvement in convergence.
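To make the gradient-averaging idea concrete, here is a minimal synchronous sketch in Python/NumPy; the linear Q-function stand-in and all names are our own illustration, not the thesis implementation:

    import numpy as np

    def local_gradient(weights, batch):
        # Stand-in for a worker's DQN gradient on its private batch
        # (a real FADQN worker would compute a TD-error gradient here).
        states, targets = batch
        preds = states @ weights                  # linear Q-function stand-in
        return states.T @ (preds - targets) / len(states)

    def federated_averaging_round(weights, worker_batches, lr=0.01):
        # Each worker computes a gradient locally; only gradients travel
        # to the server, never the raw (private) experience data.
        grads = [local_gradient(weights, b) for b in worker_batches]
        return weights - lr * np.mean(grads, axis=0)

    # Toy usage: 3 workers, 4-dimensional states, scalar Q-value targets.
    rng = np.random.default_rng(0)
    w = rng.normal(size=(4, 1))
    batches = [(rng.normal(size=(8, 4)), rng.normal(size=(8, 1)))
               for _ in range(3)]
    for _ in range(100):
        w = federated_averaging_round(w, batches)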
11

Dunlop, J. S., R. J. McLure, A. D. Biggs, J. E. Geach, M. J. Michałowski, R. J. Ivison, W. Rujopakarn, et al. "A deep ALMA image of the Hubble Ultra Deep Field." OXFORD UNIV PRESS, 2017. http://hdl.handle.net/10150/623849.

Full text
Abstract:
We present the results of the first deep Atacama Large Millimeter Array (ALMA) imaging covering the full ≃4.5 arcmin² of the Hubble Ultra Deep Field (HUDF) imaged with Wide Field Camera 3/IR on HST. Using a 45-pointing mosaic, we have obtained a homogeneous 1.3-mm image reaching σ_1.3 ≃ 35 μJy, at a resolution of ≃0.7 arcsec. From an initial list of ≃50 >3.5σ peaks, a rigorous analysis confirms 16 sources with S_1.3 > 120 μJy. All of these have secure galaxy counterparts with robust redshifts (⟨z⟩ = 2.15). Due to the unparalleled supporting data, the physical properties of the ALMA sources are well constrained, including their stellar masses (M*) and UV+FIR star formation rates (SFR). Our results show that stellar mass is the best predictor of SFR in the high-redshift Universe; indeed at z ≃ 2 our ALMA sample contains seven of the nine galaxies in the HUDF with M* ≥ 2 × 10^10 M⊙, and we detect only one galaxy at z > 3.5, reflecting the rapid drop-off of high-mass galaxies with increasing redshift. The detections, coupled with stacking, allow us to probe the redshift/mass distribution of the 1.3-mm background down to S_1.3 ≃ 10 μJy. We find strong evidence for a steep star-forming 'main sequence' at z ≃ 2, with SFR ∝ M* and a mean specific SFR ≃ 2.2 Gyr⁻¹. Moreover, we find that ≃85 per cent of total star formation at z ≃ 2 is enshrouded in dust, with ≃65 per cent of all star formation at this epoch occurring in high-mass galaxies (M* > 2 × 10^10 M⊙), for which the average obscured:unobscured SF ratio is ≃200. Finally, we revisit the cosmic evolution of SFR density; we find this peaks at z ≃ 2.5, and that the star-forming Universe transits from primarily unobscured to primarily obscured at z ≃ 4.
12

Dufourq, Emmanuel. "Evolutionary deep learning." Doctoral thesis, Faculty of Science, 2019. http://hdl.handle.net/11427/30357.

Full text
Abstract:
The primary objective of this thesis is to investigate whether evolutionary concepts can improve the performance, speed and convenience of algorithms in various active areas of machine learning research. Deep neural networks are exhibiting an explosion in the number of parameters that need to be trained, as well as in the number of permutations of possible network architectures and hyper-parameters. There is little guidance on how to choose these, and brute-force experimentation is prohibitively time-consuming. We show that evolutionary algorithms can help tame this explosion of freedom, by developing an algorithm that robustly evolves near-optimal deep neural network architectures and hyper-parameters across a wide range of image and sentiment classification problems. We further develop an algorithm that automatically determines whether a given data science problem is of classification or regression type, successfully choosing the correct problem type with more than 95% accuracy. Together these algorithms show that a great deal of the current "art" in the design of deep learning networks - and in the job of the data scientist - can be automated. Having discussed the general problem of optimising deep learning networks, the thesis moves on to a specific application: the automated extraction of human sentiment from text and images of human faces. Our results reveal that our approach is able to outperform several public and/or commercial text sentiment analysis algorithms using an evolutionary algorithm that learned to encode and extend sentiment lexicons. A second analysis looked at using evolutionary algorithms to estimate text sentiment while simultaneously compressing text data. An extensive analysis of twelve sentiment datasets reveals that accurate compression is possible with 3.3% loss in classification accuracy even with 75% compression of text size, which is useful in environments where data volumes are a problem. Finally, the thesis presents improvements to automated sentiment analysis of human faces to identify emotion, an area where there has been a tremendous amount of progress using convolutional neural networks. We provide a comprehensive critique of past work, highlight recommendations and list some open, unanswered questions in facial expression recognition using convolutional neural networks. One serious challenge when implementing such networks for facial expression recognition is the large number of trainable parameters, which results in long training times. We propose a novel method, based on evolutionary algorithms, to reduce the number of trainable parameters whilst simultaneously retaining classification performance, and in some cases achieving superior performance. We are robustly able to reduce the number of parameters on average by 95% with no loss in classification accuracy. Overall our analyses show that evolutionary algorithms are a valuable addition to machine learning in the deep learning era: automating, compressing and/or improving results significantly, depending on the desired goal.
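The architecture and hyper-parameter evolution described above boils down to mutate-evaluate-select iterations. A toy sketch in Python, where the fitness function is a stand-in for the validation accuracy a real run would obtain by training a network (the names and the simple keep-the-best scheme are our illustration, not the thesis's algorithm):

    import random

    def fitness(hp):
        # Stand-in for validation accuracy of a network trained with
        # hyper-parameters hp (a real run would train and evaluate here).
        return -(hp['lr'] - 0.01) ** 2 - 0.001 * abs(hp['layers'] - 4)

    def mutate(hp):
        child = dict(hp)
        child['lr'] = max(1e-5, hp['lr'] * random.choice([0.5, 1.0, 2.0]))
        child['layers'] = max(1, hp['layers'] + random.choice([-1, 0, 1]))
        return child

    # Keep the best individuals, refill the population with their mutants.
    random.seed(0)
    pop = [{'lr': 10 ** random.uniform(-4, -1), 'layers': random.randint(1, 8)}
           for _ in range(8)]
    for generation in range(20):
        pop.sort(key=fitness, reverse=True)
        pop = pop[:4] + [mutate(random.choice(pop[:4])) for _ in range(4)]
    best = max(pop, key=fitness)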
13

He, Fengxiang. "Theoretical Deep Learning." Thesis, The University of Sydney, 2021. https://hdl.handle.net/2123/25674.

Full text
Abstract:
Deep learning has long been criticised as a black-box model for lacking sound theoretical explanation. During the PhD course, I explore and establish theoretical foundations for deep learning. In this thesis, I present my contributions positioned upon existing literature: (1) analysing the generalizability of the neural networks with residual connections via complexity and capacity-based hypothesis complexity measures; (2) modeling stochastic gradient descent (SGD) by stochastic differential equations (SDEs) and their dynamics, and further characterizing the generalizability of deep learning; (3) understanding the geometrical structures of the loss landscape that drives the trajectories of the dynamic systems, which sheds light in reconciling the over-representation and excellent generalizability of deep learning; and (4) discovering the interplay between generalization, privacy preservation, and adversarial robustness, which have seen rising concerns in deep learning deployment.
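Contribution (2) refers to a correspondence that this literature usually writes as follows (a standard formulation, not necessarily the thesis's exact notation). One SGD step

    \theta_{t+1} = \theta_t - \eta \nabla L(\theta_t) + \eta \xi_t

is approximated, for small learning rate \eta, by the stochastic differential equation

    d\theta_\tau = -\nabla L(\theta_\tau) d\tau + \sqrt{\eta} \Sigma(\theta_\tau)^{1/2} dW_\tau

where \xi_t is the minibatch gradient noise with covariance \Sigma and W_\tau is a Wiener process; generalization properties are then read off the dynamics of this diffusion.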
14

Manna, Amin (Amin A.). "Deep linguistic lensing." Thesis, Massachusetts Institute of Technology, 2018. https://hdl.handle.net/1721.1/121630.

Full text
Abstract:
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 81-84).
Language models and semantic word embeddings have become ubiquitous as sources for machine learning features in a wide range of predictive tasks and real-world applications. We argue that language models trained on a corpus of text can learn the linguistic biases implicit in that corpus. We discuss linguistic biases, or differences in identity and perspective that account for the variation in language use from one speaker to another. We then describe methods to intentionally capture "linguistic lenses": computational representations of these perspectives. We show how the captured lenses can be used to guide machine learning models during training. We define a number of lenses for author-to-author similarity and word-to-word interchangeability. We demonstrate how lenses can be used during training time to imbue language models with perspectives about writing style, or to create lensed language models that learn less linguistic gender bias than their un-lensed counterparts.
by Amin Manna.
M.Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
15

FRACCAROLI, MICHELE. "Explainable Deep Learning." Doctoral thesis, Università degli studi di Ferrara, 2023. https://hdl.handle.net/11392/2503729.

Full text
Abstract:
The great success that Machine and Deep Learning have achieved in areas that are strategic for our society, such as industry, defence, medicine, etc., has led more and more organizations to invest in and explore the use of this technology. Machine Learning and Deep Learning algorithms and learned models can now be found in almost every area of our lives, from phones to smart home appliances to the cars we drive. It can therefore be said that this pervasive technology is now in touch with our lives, and we have to deal with it. This is why eXplainable Artificial Intelligence, or XAI, was born, one of the research trends currently in vogue in the fields of Deep Learning and Artificial Intelligence. The idea behind this line of research is to make and/or design new Deep Learning algorithms so that they are interpretable and comprehensible to humans. This necessity is due precisely to the fact that neural networks, the mathematical model underlying Deep Learning, act like a black box, making the internal reasoning they carry out to reach a decision incomprehensible and untrustworthy to humans. As we are delegating more and more important decisions to these mathematical models, it is very important to be able to understand the motivations that lead these models to make certain decisions. This is because we have integrated them into the most delicate processes of our society, such as medical diagnosis, autonomous driving or legal processes. The work presented in this thesis consists in studying and testing Deep Learning algorithms integrated with symbolic Artificial Intelligence techniques. This integration has a twofold purpose: to make the models more powerful, enabling them to carry out reasoning or constraining their behaviour in complex situations, and to make them interpretable. The thesis focuses on two macro topics: the explanations obtained through neuro-symbolic integration, and the exploitation of explanations to make the Deep Learning algorithms more capable or intelligent. Neuro-symbolic integration was addressed twice, by experimenting with the integration of symbolic algorithms with neural networks. A first approach was to create a system to guide the training of the networks themselves, in order to find the best combination of hyper-parameters and thus automate the design of these networks. This is done by integrating neural networks with Probabilistic Logic Programming (PLP). This integration makes it possible to exploit probabilistic rules tuned by the behaviour of the networks during the training phase, or inherited from the experience of experts in the field. These rules are triggered when a problem occurs during network training. This generates an explanation of what was done to improve the training once a particular issue was identified. A second approach was to make probabilistic logic systems cooperate with neural networks for medical diagnosis on heterogeneous data sources. The second topic addressed in this thesis concerns the exploitation of explanations. In particular, the explanations one can obtain from neural networks are used in order to create attention modules that help in constraining and improving the performance of neural networks. All works developed during the PhD and described in this thesis have led to the publications listed in Chapter 14.2.
16

Carvalho, Micael. "Deep representation spaces." Electronic Thesis or Diss., Sorbonne université, 2018. http://www.theses.fr/2018SORUS292.

Full text
Abstract:
In recent years, Deep Learning techniques have swept the state of the art of many applications of Machine Learning, becoming the new standard approach for them. The architectures issued from these techniques have been used for transfer learning, which extended the power of deep models to tasks that did not have enough data to fully train them from scratch. This thesis' subject of study is the representation spaces created by deep architectures. First, we study properties inherent to them, with particular interest in dimensionality redundancy and precision of their features. Our findings reveal a strong degree of robustness, pointing the path to simple and powerful compression schemes. Then, we focus on refining these representations. We choose to adopt a cross-modal multi-task problem, and design a loss function capable of taking advantage of data coming from multiple modalities, while also taking into account different tasks associated to the same dataset. In order to correctly balance these losses, we also develop a new sampling scheme that only takes into account examples contributing to the learning phase, i.e. those having a positive loss. Finally, we test our approach on a large-scale dataset of cooking recipes and associated pictures. Our method achieves a 5-fold improvement over the state of the art, and we show that the multi-task aspect of our approach promotes a semantically meaningful organization of the representation space, allowing it to perform subtasks never seen during training, like ingredient exclusion and selection. The results we present in this thesis open many possibilities, including feature compression for remote applications, robust multi-modal and multi-task learning, and feature space refinement. For the cooking application, in particular, many of our findings are directly applicable in a real-world context, especially for the detection of allergens, finding alternative recipes due to dietary restrictions, and menu planning.
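The positive-loss sampling scheme mentioned above can be pictured in a few lines (a hinge-style loss is assumed for the toy; the names are ours, not the thesis code):

    import numpy as np

    def positive_loss_batch(losses, examples):
        # Keep only examples that still contribute to learning, i.e.
        # those with strictly positive loss (hypothetical helper that
        # mirrors the sampling idea described in the abstract).
        mask = losses > 0
        return examples[mask], losses[mask]

    # Toy usage: saturated pairs (loss == 0) are dropped, so gradients
    # come only from informative examples.
    rng = np.random.default_rng(1)
    examples = rng.normal(size=(6, 3))
    margins = rng.normal(size=6)
    losses = np.maximum(0.0, 1.0 - margins)   # hinge loss with margin 1
    kept, kept_losses = positive_loss_batch(losses, examples)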
17

Marchesini, Gregorio. "Caratterizzazione della Sardinia Deep Space Antenna in supporto di missioni deep space." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20809/.

Full text
Abstract:
This work analyses the main characteristics of the Sardinia Deep Space Antenna, the Italian radio telescope co-funded by INAF and ASI to support both astronomical research and current and future planetary missions. Specifically, the capabilities of the SDSA for deep-space missions are analysed, starting from a comparison with the 35-m and 34-m Deep Space Antennas currently operated by ESA and NASA, respectively. Particular attention is given to the design solutions that the three DSAs have in common and those that set them apart, in order to assess what innovative contribution the SDSA could make to deep-space missions.
18

Perjeru, Florentine. "Deep Defects in Wide Bandgap Materials Investigated Using Deep Level Transient Spectroscopy." Ohio University / OhioLINK, 2001. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou997365452.

Full text
19

Mansour, Tarek. "Deep neural networks are lazy: on the inductive bias of deep learning." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/121680.

Full text
Abstract:
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 75-78).
Deep learning models exhibit superior generalization performance despite being heavily overparametrized. Although widely observed in practice, there is currently very little theoretical backing for such a phenomenon. In this thesis, we propose a step forward towards understanding generalization in deep learning. We present evidence that deep neural networks have an inherent inductive bias that makes them inclined to learn generalizable hypotheses and avoid memorization. In this respect, we propose results that suggest that the inductive bias stems from neural networks being lazy: they tend to learn simpler rules first. We also propose a definition of simplicity in deep learning based on the implicit priors ingrained in deep neural networks.
by Tarek Mansour.
M.Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
20

Daniels, Kelly L. "Deep water, open water." Master's thesis, Mississippi State : Mississippi State University, 2009. http://library.msstate.edu/etd/show.asp?etd=etd-04022009-163550.

Full text
21

Burchfield, Monica R. "Fish from Deep Water." Digital Archive @ GSU, 2010. http://digitalarchive.gsu.edu/english_theses/100.

Full text
Abstract:
These poems are lyrical narratives dealing primarily with the joys and sufferings of familial relationships in present and past generations, and how one is influenced and haunted by these interactions. There is a particular emphasis placed on the relationship between parent and child. Other poems deal with passion, both in the tangible and spiritual realms. The poems aim to use vivid figurative language to explore complex and sometimes distressing situations and emotions.
22

Stone, Rebecca E. "Deep mixed layer entrainment." Monterey, California. Naval Postgraduate School, 1997. http://hdl.handle.net/10945/8198.

Full text
Abstract:
Approved for public release; distribution is unlimited.
A bulk turbulence-closure mixed layer model is generalized to allow prediction of very deep polar sea mixing. The model includes unsteady three-component turbulent kinetic energy budgets. In addition to terms for shear production, pressure redistribution, and dissipation, special attention is devoted to realistic treatment of thermobaric enhancement of buoyancy flux and to the Coriolis effect on turbulence. The model is initialized and verified with CTD data taken by R/V Valdivia in the Greenland Sea during winter 1993-1994. Model simulations show (1) mixed layer deepening is significantly enhanced when the thermal expansion coefficient's increase with pressure is included; (2) entrainment rate is sensitive to the direction of wind stress because of Coriolis; and (3) the predicted mixed layer depth evolution agrees qualitatively with the observations. Results demonstrate the importance of water column initial conditions, accurate representation of strong surface cooling events, and inclusion of the thermobaric effect on buoyancy, to determine the depth of mixing and ultimately the heat and salt flux into the deep ocean. Since coupling of the ocean to the atmosphere through deep mixed layers in polar regions is fundamental to our climate system, it is important that regional and global models be developed that incorporate realistic representation of this coupling.
23

Beyer, Franziska C. "Deep levels in SiC." Doctoral thesis, Linköpings universitet, Halvledarmaterial, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-70356.

Full text
Abstract:
Silicon carbide (SiC) has been discussed as a promising material for high power bipolar devices for almost twenty years. Advances in SiC crystal growth, especially the development of chemical vapor deposition (CVD), have enabled the fabrication of high quality material. Much progress has further been achieved in identifying minority charge carrier lifetime limiting defects, which may be attributed to structural defects, surface recombination or point defects located in the band gap of SiC. Deep levels can act as recombination centers by interacting with both the valence and conduction band. As such, the defect levels reduce the minority charge carrier lifetime, which is of great importance in bipolar devices. Impurities in semiconductors play an important role in adjusting their semiconducting properties. Intentional doping can introduce shallow defect levels to increase the conductivity, or deep levels for achieving semi-insulating (SI) SiC. Impurities, especially transition metals, generate defect levels deep in the band gap of SiC, which trap charge carriers and thus reduce the charge carrier lifetime. Transition metals, such as vanadium, are used in SiC to compensate the residual nitrogen doping. It has previously been reported that the valence band edges of the different SiC polytypes are pinned to the same level and that deep levels related to transition metals can serve as a common reference level; this is known as the LANGER-HEINRICH (LH) rule. Electron irradiation introduces or enhances the concentration of existing point defects, such as the carbon vacancy (VC) and the carbon interstitial (Ci). By limiting the irradiation energy, Eirr, below the displacement energy of silicon in the SiC lattice (Eirr < 220 keV), the generated defects can be attributed to carbon-related defects, which are already created at lower Eirr. Ci are mobile at low temperatures, and using low-temperature heat treatments, the annealing behavior of the introduced Ci and their complexes can be studied. Deep levels which appear and disappear depending on the electrical, thermal and optical conditions prior to the measurements are associated with metastable defects. These defects can exist in more than one configuration, which itself can have different charge states. Capacitance transient investigations, where the defect's occupation is studied by varying the depletion region in a diode, can be used to observe such occupational changes. Such unstable behavior may influence device performance, since defects may be electrically active in one configuration and inactive after transformation to another configuration. This thesis is focused on electrical characterization of deep levels in SiC using deep level transient spectroscopy (DLTS). The first part, papers 1-4, is dedicated to defect studies of both impurities and intrinsic defects in as-grown material. The second part, consisting of papers 5-7, deals with the defect content after electron irradiation and the annealing behavior of the introduced deep levels. In the first part, transition metal incorporation of iron (Fe) and tungsten (W) is discussed in papers 1 and 2, respectively. Fe and W are possible candidates to compensate the residual nitrogen doping in SiC. The doping with Fe resulted in one level in n-type material and two levels in p-type 4H-SiC. The capture process is strongly coupled to the lattice. Secondary ion mass spectrometry measurements detected the presence of B and Fe. The defects are suggested to be related to Fe and/or Fe-B pairs.
Previous reports on tungsten doping showed that W gives rise to two levels (one shallow and one deep) in 4H-SiC and only one deep level in 6H-SiC. In 3C-SiC, we detected two levels, one likely related to W and one intrinsic defect, labeled E1. The W-related energy level aligns well with the deeper levels observed in 4H- and 6H-SiC, in agreement with the LH rule. Experiments show that the LH rule is also valid for intrinsic levels: the level related to the DLTS peak EH6/7 in 4H-SiC aligns with the level related to E7 in 6H-SiC as well as with the level related to E1 in 3C-SiC. The alignment suggests that these levels may originate from the same defect, probably the VC, which has been proposed previously for 4H- and 6H-SiC. In paper 3, electrical characterization of 3C-layers grown heteroepitaxially on different SiC substrates is discussed. The material was of high quality with a low background doping concentration, and SCHOTTKY diodes were fabricated. It was observed that nickel as a rectifying contact material exhibits a barrier height similar to that of the previously suggested gold. A leakage current in the low nA range at a reverse bias of -2 V was achieved, which allowed capacitance transient measurements. One defect related to DLTS peak E1, previously presented in paper 2, was detected and suggested to be related to an intrinsic defect. Paper 4 gives evidence that chloride-based CVD grown material yields the same kind of defects as reported for standard CVD growth processes. However, for very high growth rates, exceeding 100 µm/h, an additional defect is observed as well as an increase of the Ti concentration. Based on the knowledge from paper 2, the origin of the additional peak and the assumed increase of Ti concentration can instead both be attributed to the deeper and the shallower level of tungsten in 4H-SiC, respectively. In the second part of the thesis, studies of low-energy (200 keV) electron irradiated as-grown 4H-SiC were performed. In paper 5, bistable defects, labeled EB-centers, evolved in the DLTS spectrum after the annihilation of the irradiation-induced defect levels related to DLTS peaks EH1, EH3 and the bistable M-center. In a detailed annealing study presented in paper 6, the partial transformation of M-centers into the EB-centers is discussed. The transition between the two defects (M-centers → EB-centers) takes place at rather low temperatures (T ≈ 400 °C), which suggests a mobile defect as origin. The M-center and the EB-centers are suggested to be related to Ci and/or Ci complex defects. The EB-centers anneal out at about 700 °C. In paper 7, the DLTS peak EH5, which is observed after low- and high-energy electron irradiation, is presented. The peak is associated with a bistable defect, labeled F-center. Configuration A exists unoccupied and occupied by an electron, whereas configuration B is only stable when filled by an electron. Reconfiguration temperatures for both configurations were determined, and the reconfiguration energies were calculated from the transition kinetics. The reconfiguration B→A can also be achieved by minority charge carrier injection. The F-center is likely a carbon-related defect, since it is already present after low-energy irradiation.
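For background, the DLTS analysis used throughout this thesis extracts trap parameters from the thermally activated emission rate of a deep level; the standard relation (textbook DLTS, not specific to this work) is

    e_n(T) = \sigma_n \langle v_{th} \rangle N_c \exp(-\Delta E / k_B T),
    with \langle v_{th} \rangle \propto T^{1/2} and N_c \propto T^{3/2},

so an Arrhenius plot of \ln(e_n / T^2) versus 1/T yields a level's activation energy \Delta E and apparent capture cross-section \sigma_n.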
24

Liu, Qian. "Deep spiking neural networks." Thesis, University of Manchester, 2018. https://www.research.manchester.ac.uk/portal/en/theses/deep-spiking-neural-networks(336e6a37-2a0b-41ff-9ffb-cca897220d6c).html.

Full text
Abstract:
Neuromorphic Engineering (NE) has led to the development of biologically-inspired computer architectures whose long-term goal is to approach the performance of the human brain in terms of energy efficiency and cognitive capabilities. Although there are a number of neuromorphic platforms available for large-scale Spiking Neural Network (SNN) simulations, the problem of programming these brain-like machines to be competent in cognitive applications still remains unsolved. On the other hand, Deep Learning has emerged in Artificial Neural Network (ANN) research to dominate state-of-the-art solutions for cognitive tasks. Thus the main research problem that emerges is understanding how to operate and train biologically-plausible SNNs to close the gap in cognitive capabilities between SNNs and ANNs. SNNs can be trained by first training an equivalent ANN and then transferring the tuned weights to the SNN. This method is called ‘off-line’ training, since it does not take place on an SNN directly, but rather on an ANN instead. However, previous work on such off-line training methods has struggled with poor modelling accuracy of the spiking neurons and high computational complexity. In this thesis we propose a simple and novel activation function, Noisy Softplus (NSP), to closely model the response firing activity of biologically-plausible spiking neurons, and introduce a generalised off-line training method using the Parametric Activation Function (PAF) to map the abstract numerical values of the ANN to concrete physical units, such as current and firing rate in the SNN. Based on this generalised training method and its fine tuning, we achieve state-of-the-art accuracy on the MNIST classification task using spiking neurons, 99.07%, on a deep spiking convolutional neural network (ConvNet). We then take a step forward to ‘on-line’ training methods, where Deep Learning modules are trained purely on SNNs in an event-driven manner. Existing work has failed to provide SNNs with recognition accuracy equivalent to ANNs due to the lack of mathematical analysis. Thus we propose a formalised Spike-based Rate Multiplication (SRM) method which transforms the product of firing rates to the number of coincident spikes of a pair of rate-coded spike trains. Moreover, these coincident spikes can be captured by the Spike-Time-Dependent Plasticity (STDP) rule to update the weights between the neurons in an on-line, event-based, and biologically-plausible manner. Furthermore, we put forward solutions to reduce correlations between spike trains, thereby addressing the performance drop observed in on-line SNN training. The promising results of spiking Autoencoders (AEs) and Restricted Boltzmann Machines (SRBMs) exhibit equivalent, sometimes even superior, classification and reconstruction capabilities compared to their non-spiking counterparts. To provide meaningful comparisons between these proposed SNN models and other existing methods within this rapidly advancing field of NE, we propose a large dataset of spike-based visual stimuli and a corresponding evaluation methodology to estimate the overall performance of SNN models and their hardware implementations.
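The Spike-based Rate Multiplication idea, that coincident spikes of two independent rate-coded trains encode the product of their firing rates, can be checked numerically; the following is an illustration of the principle, not the thesis code:

    import numpy as np

    # For independent Poisson spike trains with rates r1 and r2 binned at
    # width dt, the expected coincidence count over time T is r1*r2*dt*T,
    # so coincidences / (dt*T) estimates the rate product.
    rng = np.random.default_rng(42)
    dt, T = 0.001, 100.0              # 1 ms bins, 100 s of simulated time
    r1, r2 = 20.0, 35.0               # firing rates in Hz
    n_bins = int(T / dt)

    train1 = rng.random(n_bins) < r1 * dt   # Boolean spike trains
    train2 = rng.random(n_bins) < r2 * dt

    coincidences = np.sum(train1 & train2)
    print(coincidences / (dt * T))    # ~700 = 20 * 35 (Hz^2)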
25

Sheiretov, Yanko Konstantinov. "Deep penetration magnetoquasistatic sensors." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/16772.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2001.
Includes bibliographical references (p. 193-198).
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
This research effort extends the capabilities of existing model-based spatially periodic quasistatic-field sensors. The research developed three significant improvements in the field of nondestructive evaluation. The impact of each is detailed below:

1. The design of a distributed current drive magnetoresistive magnetometer that matches the model response sufficiently to perform air calibration and absolute property measurement. Replacing the secondary winding with a magnetoresistive sensor allows the magnetometer to be operated at frequencies much lower than ordinarily possible, including static (DC) operation, which enables deep penetration defect imaging. Low frequencies are needed for deep probing of metals, where the depth of penetration is otherwise limited by the skin depth due to the shielding effect of induced eddy currents. The capability to perform such imaging without dependence on calibration standards has substantial cost, ease-of-use, and technological benefits. The absolute property measurement capability is important because it provides a robust comparison for manufacturing quality control and monitoring of aging processes. Air calibration also alleviates the dependence on calibration standards that can be difficult to maintain.

2. The development and validation of cylindrical geometry models for inductive and capacitive sensors. The development of cylindrical geometry models enables the design of families of circularly symmetric magnetometers and dielectrometers with the "model-based" methodology, which requires close agreement between actual sensor response and simulated response. These kinds of sensors are needed in applications where the components being tested have circular symmetry, e.g. cracks near fasteners, or if it is important to measure the spatial average of an anisotropic property.

3. The development of accurate and efficient two-dimensional inverse interpolation and grid look-up techniques to determine electromagnetic and geometric properties. The ability to perform accurate and efficient grid interpolation is important for all sensors that follow the model-based principle, but it is particularly important for the complex shaped grids used with the magnetometers and dielectrometers in this thesis.

A prototype sensor that incorporates all new features, i.e. a circularly symmetric magnetometer with a distributed current drive that uses a magnetoresistive secondary element, was designed, built, and tested. The primary winding is designed to have no net dipole moment, which improves repeatability by reducing the influence of distant objects. It can also support operation at two distinct effective spatial wavelengths. A circuit is designed that places the magnetoresistive sensor in a feedback configuration with a secondary winding to provide the necessary biasing and to ensure a linear transfer characteristic. Efficient FFT-based methods are developed to model magnetometers with a distributed current drive for both Cartesian and cylindrical geometry sensors. Results from measurements with a prototype circular dielectrometer that agree with the model-based analysis are also presented. In addition to the main contributions described so far, this work also includes other related enhancements to the time and space periodic-field sensor models, such as incorporating motion in the models to account for moving media effects. This development is important in low frequency scanning applications.
Some improvements of the existing semi-analytical collocation point models for the standard Cartesian magnetometers and dielectrometers are also presented.
by Yanko Sheiretov.
Ph.D.
26

Patil, Raj. "Deep UV Raman Spectroscopy." Thesis, The University of Arizona, 2016. http://hdl.handle.net/10150/613378.

Full text
Abstract:
This thesis examines the performance of a custom-built deep UV laser (257.5 nm) for Raman spectroscopy and the advantages of Raman spectroscopy with a laser in the deep UV over a laser in the visible range (532 nm). It describes the theory of resonance Raman scattering, the experimental setup for Raman spectroscopy, and a few Raman spectroscopy measurements. The measurements were performed on biological samples: an oak tree leaf, and Lactobacillus acidophilus and Bifidobacteria from probiotic medicinal capsules. Fluorescence-free Raman spectra were acquired for the two samples with the 257.5 nm laser, whereas the Raman spectra of the two samples with the 532 nm laser were masked by fluorescence. Raman measurements of the inorganic salt sodium nitrate showed a resonance Raman effect with the 257.5 nm laser, which led to an enhancement in Raman intensity compared to that with the 532 nm laser. We were therefore able to demonstrate two advantages of deep UV Raman spectroscopy: first, the possibility of acquiring fluorescence-free spectra for biological samples; second, the possibility of gaining enhancement in Raman intensity due to the resonance Raman effect. It was observed that the 257.5 nm laser requires optimization to reduce its bandwidth and obtain better resolution, and to obtain higher power for a better signal-to-noise ratio. The experimental setup can also be further improved to obtain better resolution. If these improvements are implemented, the deep UV Raman setup will become an important tool for spectroscopy.
27

Debain, Yann. "Deep Convolutional Nonnegative Autoencoders." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-287352.

Full text
Abstract:
In this thesis, nonnegative matrix factorization (NMF) is viewed as a feed-backward neural network and generalized to a deep convolutional architecture with forward propagation under the β-divergence. NMF and feedforward neural networks are put in relation, and a new class of autoencoders is proposed, namely the nonnegative autoencoders. It is shown that NMF is essentially the decoder part of an autoencoder with nonnegative weights and input. The shallow autoencoder with fully connected neurons is extended to a deep convolutional autoencoder with the same properties. Multiplicative factor updates are used to ensure nonnegativity of the weights in the network. As a result, a shallow nonnegative autoencoder (NAE), a shallow convolutional nonnegative autoencoder (CNAE) and a deep convolutional nonnegative autoencoder (DCNAE) are developed. Finally, all three variants of the nonnegative autoencoder are tested on different tasks, such as signal reconstruction and signal enhancement.
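For reference, the multiplicative factor updates mentioned in the abstract are, for the Euclidean (beta = 2) member of the beta-divergence family, the classic Lee-Seung NMF rules; a self-contained sketch, assuming nothing about the thesis's actual networks:

    import numpy as np

    def nmf_multiplicative(V, rank, n_iter=200, eps=1e-9):
        # Euclidean-distance NMF via Lee-Seung multiplicative updates.
        # The update rules keep W and H nonnegative throughout, the same
        # mechanism used for the nonnegative autoencoder weights.
        rng = np.random.default_rng(0)
        n, m = V.shape
        W = rng.random((n, rank))
        H = rng.random((rank, m))
        for _ in range(n_iter):
            H *= (W.T @ V) / (W.T @ W @ H + eps)
            W *= (V @ H.T) / (W @ H @ H.T + eps)
        return W, H

    # Toy usage: factor a random nonnegative matrix, V ~= W @ H, W, H >= 0.
    V = np.random.default_rng(1).random((8, 6))
    W, H = nmf_multiplicative(V, rank=3)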
28

Halle, Alex, and Alexander Hasse. "Topologieoptimierung mittels Deep Learning." Technische Universität Chemnitz, 2019. https://monarch.qucosa.de/id/qucosa%3A34343.

Full text
Abstract:
Topology optimization is the search for an optimal component geometry as a function of the use case. For complex problems, topology optimization can demand considerable time and computing capacity because of its high level of detail. These drawbacks are to be reduced by means of deep learning, so that topology optimization can serve the design engineer as an aid that delivers results within seconds. Deep learning is the extension of artificial neural networks, with which patterns or behavioral rules can be learned. The topology optimization that has so far been computed numerically is thus to be solved with a deep learning approach. To this end, approaches, computation schemes, and first conclusions are presented and discussed.
APA, Harvard, Vancouver, ISO, and other styles
29

Goh, Hanlin. "Learning deep visual representations." Paris 6, 2013. http://www.theses.fr/2013PA066356.

Full text
Abstract:
Recent advancements in the areas of deep learning and visual information processing have presented an opportunity to unite both fields. These complementary fields combine to tackle the problem of classifying images into their semantic categories. Deep learning brings learning and representational capabilities to a visual processing model that is adapted for image classification. This thesis addresses problems that lead to the proposal of learning deep visual representations for image classification. The problem of deep learning is tackled on two fronts. The first aspect is the problem of unsupervised learning of latent representations from input data. The main focus is the integration of prior knowledge into the learning of restricted Boltzmann machines (RBM) through regularization. Regularizers are proposed to induce sparsity, selectivity and topographic organization in the coding to improve discrimination and invariance. The second direction introduces the notion of gradually transiting from unsupervised layer-wise learning to supervised deep learning. This is done through the integration of bottom-up information with top-down signals. Two novel implementations supporting this notion are explored. The first method uses top-down regularization to train a deep network of RBMs. The second method combines predictive and reconstructive loss functions to optimize a stack of encoder-decoder networks. The proposed deep learning techniques are applied to tackle the image classification problem. The bag-of-words model is adopted due to its strengths in image modeling through the use of local image descriptors and spatial pooling schemes. Deep learning with spatial aggregation is used to learn a hierarchical visual dictionary for encoding the image descriptors into mid-level representations. This method achieves leading image classification performance for object and scene images. The learned dictionaries are diverse and non-redundant. The speed of inference is also high. From this, a further optimization is performed for the subsequent pooling step. This is done by introducing a differentiable pooling parameterization and applying the error backpropagation algorithm. This thesis represents one of the first attempts to synthesize deep learning and the bag-of-words model. This union results in many challenging research problems, leaving much room for further study in this area.
APA, Harvard, Vancouver, ISO, and other styles
30

Geirsson, Gunnlaugur. "Deep learning exotic derivatives." Thesis, Uppsala universitet, Avdelningen för systemteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-430410.

Full text
Abstract:
Monte Carlo methods in derivative pricing are computationally expensive, in particular for evaluating a model's partial derivatives with respect to its inputs. This research proposes the use of deep learning to approximate such valuation models for highly exotic derivatives, using automatic differentiation to evaluate input sensitivities. Deep learning models are trained to approximate Phoenix Autocall valuation using a proprietary model used by Svenska Handelsbanken AB. Models are trained on large datasets of low-accuracy (10^4 simulations) Monte Carlo data, successfully learning the true model with an average error of 0.1% on validation data generated by 10^8 simulations. A specific model parametrisation is proposed for 2-day valuation only, to be recalibrated interday using transfer learning. Automatic differentiation approximates sensitivity to (normalised) underlying asset prices with a mean relative error generally below 1.6%. Overall error when predicting sensitivity to implied volatility is found to lie within 10%-40%. Nearly identical results are found by finite differences and automatic differentiation in both cases. Automatic differentiation is not successful at capturing sensitivity to interday contract change in value, though errors of 8%-25% are achieved by finite differences. Model recalibration by transfer learning proves to converge over 15 times faster and with up to 14% lower relative error than training using random initialisation. The results show that deep learning models can efficiently learn Monte Carlo valuation, and that these can be quickly recalibrated by transfer learning. The deep learning model gradient computed by automatic differentiation proves a good approximation of the true model sensitivities. Future research proposals include studying optimised recalibration schedules, using training data generated by single Monte Carlo price paths, and studying additional parameters and contracts.
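To make the role of automatic differentiation concrete, here is a hypothetical sketch in which a small feedforward network stands in for the learned pricing function and reverse-mode autodiff supplies all input sensitivities in one backward pass. The architecture, input layout and variable names are illustrative assumptions, not the proprietary Handelsbanken model.

    import torch
    import torch.nn as nn

    # Hypothetical stand-in for a trained pricing network: inputs are
    # (normalised spot, implied vol, time to maturity), output is a price.
    pricer = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                           nn.Linear(64, 64), nn.ReLU(),
                           nn.Linear(64, 1))

    x = torch.tensor([[1.00, 0.25, 1.5]], requires_grad=True)
    price = pricer(x).squeeze()

    # Reverse-mode autodiff: one backward pass yields every input sensitivity.
    price.backward()
    d_spot, d_vol, d_maturity = x.grad[0]
    print(float(price), float(d_spot), float(d_vol), float(d_maturity))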
APA, Harvard, Vancouver, ISO, and other styles
31

Wolfe, Traci. "Digging deep for meaning." Online version, 2008. http://www.uwstout.edu/lib/thesis/2008/2008wolfet.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Simonetto, Andrea. "Indagini in Deep Inference." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2010. http://amslaurea.unibo.it/1455/.

Full text
Abstract:
The thesis is a study of some aspects of the new "deep inference" methodology, combined with a revisitation of the classical concepts of proof theory, with the addition of some original results aimed at a better understanding of the subject as well as at practical applications. The first chapter introduces, following a formalist approach (with some personal touches), the basic concepts of structural proof theory, i.e. the branch that uses combinatorial ("finitistic") tools to study the properties of proofs. The second chapter focuses on classical propositional logic, first introducing the sequent calculus and proving the Gentzen Hauptsatz, then moving on to the calculus of structures (system SKS), for which a cut-elimination theorem, specially adapted by the author, is also proved. Finally, the locality property of the SKS system is discussed and proved. An analogous path is traced by the third and final chapter for linear logic. The linear sequent calculus is defined and motivated, and its counterpart in the calculus of structures is discussed. Here the attention is directed mainly at the problem of defining non-commutative operators, which put these systems in close relation with process algebras.
APA, Harvard, Vancouver, ISO, and other styles
33

Brown, Kevin. "A Deep Diver's Becoming." Thesis, Université d'Ottawa / University of Ottawa, 2020. http://hdl.handle.net/10393/40424.

Full text
Abstract:
When scuba diving under a physical overhead such as a cave, a mine or a shipwreck, or under a virtual overhead due to decompression requirements, it is impossible to safely reach the surface in the event of an emergency. Diving with an overhead is therefore often described as technical diving. In this research, I address how technical divers in Outaouais, Quebec, practice this risky sport with unforgiving consequences. Based on fieldwork in Outaouais, I focus on divers, including myself, who perform trimix dives deeper than 200 feet. I argue that the process of becoming a deep diver is a lifelong journey in which a diver learns to adapt to a milieu hostile to human life. The basic skills are acquired during classes to ensure that a novice diver will survive in this limit-environment. As divers bend the rules and take more risks to go deeper for longer lengths of time, they go through a series of limit-experiences and near misses that are essential to their development and found to be regenerative. In turn, those limit-experiences and near-miss events shared with teammates create mutual trust. It is this trust that becomes the foundation of the team and allows the team to improve upon existing techniques and increase the depth and difficulty of their dives.
APA, Harvard, Vancouver, ISO, and other styles
34

Wülfing, Jan [Verfasser], and Martin [Akademischer Betreuer] Riedmiller. "Stable deep reinforcement learning." Freiburg : Universität, 2019. http://d-nb.info/1204826188/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

White, Martin. "Deep Learning Software Repositories." W&M ScholarWorks, 2017. https://scholarworks.wm.edu/etd/1516639667.

Full text
Abstract:
Bridging the abstraction gap between artifacts and concepts is the essence of software engineering (SE) research problems. SE researchers regularly use machine learning to bridge this gap, but there are three fundamental issues with traditional applications of machine learning in SE research. Traditional applications are too reliant on labeled data. They are too reliant on human intuition, and they are not capable of learning expressive yet efficient internal representations. Ultimately, SE research needs approaches that can automatically learn representations of massive, heterogeneous datasets in situ, apply the learned features to a particular task, and possibly transfer knowledge from task to task. Improvements in both computational power and the amount of memory in modern computer architectures have enabled new approaches to canonical machine learning tasks. Specifically, these architectural advances have enabled machines that are capable of learning deep, compositional representations of massive data depots. The rise of deep learning has ushered in tremendous advances in several fields. Given the complexity of software repositories, we presume deep learning has the potential to usher in new analytical frameworks and methodologies for SE research and the practical applications it reaches. This dissertation examines and enables deep learning algorithms in different SE contexts. We demonstrate that deep learners significantly outperform state-of-the-practice software language models at code suggestion on a Java corpus. Further, these deep learners for code suggestion automatically learn how to represent lexical elements. We use these representations to transmute source code into structures for detecting similar code fragments at different levels of granularity, without declaring features for how the source code is to be represented. Then we use our learning-based framework for encoding fragments to intelligently select and adapt statements in a codebase for automated program repair. In our work on code suggestion, code clone detection, and automated program repair, everything for representing lexical elements and code fragments is mined from the source code repository. Indeed, our work aims to move SE research from the art of feature engineering to the science of automated discovery.
APA, Harvard, Vancouver, ISO, and other styles
36

King, John Douglas. "Deep Web Collection Selection." Thesis, Queensland University of Technology, 2004. https://eprints.qut.edu.au/15992/3/John_King_Thesis.pdf.

Full text
Abstract:
The deep web contains a massive number of collections that are mostly invisible to search engines. These collections often contain high-quality, structured information that cannot be crawled using traditional methods. An important problem is selecting which of these collections to search. Automatic collection selection methods try to solve this problem by suggesting the best subset of deep web collections to search based on a query. A few methods for deep web collection selection have been proposed, such as the Collection Retrieval Inference Network system and the Glossary of Servers Server system. The drawback of these methods is that they require communication between the search broker and the collections, and need metadata about each collection. This thesis compares three different sampling methods that require neither communication between the broker and the collections nor metadata about each collection. It also adapts some traditional information retrieval techniques to this area. In addition, the thesis tests these techniques using the INEX collection, comprising 18 collections (12,232 XML documents in total) and 36 queries. The experiments show that the performance of the sample-based techniques is satisfactory on average.
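As a rough illustration of the sample-based idea (no broker-to-collection communication and no metadata), collections can be ranked by how well a query matches documents previously sampled from each collection. The scoring below is a simple smoothed term-frequency heuristic chosen for the sketch, not one of the exact techniques compared in the thesis.

    from collections import Counter
    import math

    def build_sample_index(samples):
        # samples: {collection_name: [sampled document strings]}
        return {c: Counter(w for doc in docs for w in doc.lower().split())
                for c, docs in samples.items()}

    def rank_collections(query, index):
        scores = {}
        for c, tf in index.items():
            total = sum(tf.values()) or 1
            # Sum of smoothed log term frequencies of the query terms.
            scores[c] = sum(math.log(1 + tf[w] / total)
                            for w in query.lower().split())
        return sorted(scores.items(), key=lambda kv: -kv[1])

    samples = {"movies": ["deep focus cinema shots", "film camera lens"],
               "oceanography": ["deep sea trench survey", "ocean water column"]}
    print(rank_collections("deep sea", build_sample_index(samples)))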
APA, Harvard, Vancouver, ISO, and other styles
37

King, John Douglas. "Deep Web Collection Selection." Queensland University of Technology, 2004. http://eprints.qut.edu.au/15992/.

Full text
Abstract:
The deep web contains a massive number of collections that are mostly invisible to search engines. These collections often contain high-quality, structured information that cannot be crawled using traditional methods. An important problem is selecting which of these collections to search. Automatic collection selection methods try to solve this problem by suggesting the best subset of deep web collections to search based on a query. A few methods for deep web collection selection have been proposed, such as the Collection Retrieval Inference Network system and the Glossary of Servers Server system. The drawback of these methods is that they require communication between the search broker and the collections, and need metadata about each collection. This thesis compares three different sampling methods that require neither communication between the broker and the collections nor metadata about each collection. It also adapts some traditional information retrieval techniques to this area. In addition, the thesis tests these techniques using the INEX collection, comprising 18 collections (12,232 XML documents in total) and 36 queries. The experiments show that the performance of the sample-based techniques is satisfactory on average.
APA, Harvard, Vancouver, ISO, and other styles
38

Sun, Haozhe. "Modularity in deep learning." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG090.

Full text
Abstract:
This Ph.D. thesis is dedicated to enhancing the efficiency of Deep Learning by leveraging the principle of modularity. It contains several main contributions: a literature survey on modularity in Deep Learning; the introduction of OmniPrint and Meta-Album, tools that facilitate the investigation of data modularity; case studies examining the effects of episodic few-shot learning, an instance of data modularity; a modular evaluation mechanism named LTU for assessing privacy risks; and the method RRR for reusing pre-trained modular models to create more compact versions. Modularity, which involves decomposing an entity into sub-entities, is a prevalent concept across various disciplines. This thesis examines modularity across three axes of Deep Learning: data, task, and model. OmniPrint and Meta-Album assist in benchmarking modular models and exploring data modularity's impacts. LTU ensures the reliability of the privacy assessment. RRR significantly enhances the utilization efficiency of pre-trained modular models. Collectively, this thesis bridges the modularity principle with Deep Learning and underscores its advantages in selected fields of Deep Learning, contributing to more resource-efficient Artificial Intelligence
APA, Harvard, Vancouver, ISO, and other styles
39

Arnold, Ludovic. "Learning Deep Representations : Toward a better new understanding of the deep learning paradigm." Phd thesis, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00842447.

Full text
Abstract:
Since 2006, deep learning algorithms which rely on deep architectures with several layers of increasingly complex representations have been able to outperform state-of-the-art methods in several settings. Deep architectures can be very efficient in terms of the number of parameters required to represent complex operations which makes them very appealing to achieve good generalization with small amounts of data. Although training deep architectures has traditionally been considered a difficult problem, a successful approach has been to employ an unsupervised layer-wise pre-training step to initialize deep supervised models. First, unsupervised learning has many benefits w.r.t. generalization because it only relies on unlabeled data which is easily found. Second, the possibility to learn representations layer by layer instead of all layers at once improves generalization further and reduces computational time. However, deep learning is a very recent approach and still poses a lot of theoretical and practical questions concerning the consistency of layer-wise learning with many layers and difficulties such as evaluating performance, performing model selection and optimizing layers. In this thesis we first discuss the limitations of the current variational justification for layer-wise learning which does not generalize well to many layers. We ask if a layer-wise method can ever be truly consistent, i.e. capable of finding an optimal deep model by training one layer at a time without knowledge of the upper layers. We find that layer-wise learning can in fact be consistent and can lead to optimal deep generative models. To do this, we introduce the Best Latent Marginal (BLM) upper bound, a new criterion which represents the maximum log-likelihood of a deep generative model where the upper layers are unspecified. We prove that maximizing this criterion for each layer leads to an optimal deep architecture, provided the rest of the training goes well. Although this criterion cannot be computed exactly, we show that it can be maximized effectively by auto-encoders when the encoder part of the model is allowed to be as rich as possible. This gives a new justification for stacking models trained to reproduce their input and yields better results than the state-of-the-art variational approach. Additionally, we give a tractable approximation of the BLM upper-bound and show that it can accurately estimate the final log-likelihood of models. Taking advantage of these theoretical advances, we propose a new method for performing layer-wise model selection in deep architectures, and a new criterion to assess whether adding more layers is warranted. As for the difficulty of training layers, we also study the impact of metrics and parametrization on the commonly used gradient descent procedure for log-likelihood maximization. We show that gradient descent is implicitly linked with the metric of the underlying space and that the Euclidean metric may often be an unsuitable choice as it introduces a dependence on parametrization and can lead to a breach of symmetry. To mitigate this problem, we study the benefits of the natural gradient and show that it can restore symmetry, regrettably at a high computational cost. We thus propose that a centered parametrization may alleviate the problem with almost no computational overhead.
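For readers who want the shape of the criterion, one plausible formalisation consistent with the abstract (the notation here is our assumption, not necessarily the thesis's) is

    U_BLM(θ) = max_Q E_{x ~ D} [ log Σ_h P_θ(x | h) Q(h) ],

where D is the data distribution, P_θ(x | h) is the generative mapping of the layer being trained, and the maximum runs over all distributions Q on the hidden representation h. This is the log-likelihood under the best possible latent marginal, hence an upper bound on what any choice of upper layers can achieve; the abstract's claim is that maximising it layer by layer can yield an optimal deep generative model.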
APA, Harvard, Vancouver, ISO, and other styles
40

Ohta, Atsuyuki, Koh Naito, Yoshihisa Okuda, and Iwao Kawabe. "Geochemical characteristics of Antarctic deep-sea ferromanganese nodules from highly oxic deep-sea water." Dept. of Earth and Planetary Sciences, Nagoya University, 1999. http://hdl.handle.net/2237/2843.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Grant, Hazel Christine. "The role of Weddell Sea deep and bottom waters in ventilating the deep ocean." Thesis, University of East Anglia, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.492970.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Chavva, Venkataramana Reddy. "Development of a deep level transient spectrometer and some deep level studies of Gallium Arsenide." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1993. http://hub.hku.hk/bib/B31211252.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Rodés-Guirao, Lucas. "Deep Learning for Digital Typhoon : Exploring a typhoon satellite image dataset using deep learning." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-249514.

Full text
Abstract:
Efficient early warning systems can help in the management of natural disaster events by allowing for adequate evacuations and resource administration. Several approaches have been used to implement proper early warning systems, such as simulations or statistical models, which rely on the collection of meteorological data. Data-driven techniques have proven effective for building statistical models that are able to generalise to unseen data. Motivated by this, in this work we explore deep learning techniques applied to the typhoon meteorological satellite image dataset "Digital Typhoon". We focus on intensity measurement and categorisation of different natural phenomena. Firstly, we build a classifier to differentiate natural tropical cyclones and extratropical cyclones and, secondly, we implement a regression model to estimate the centre pressure value of a typhoon. In addition, we explore cleaning methodologies to ensure that the data used are reliable. The results obtained show that deep learning techniques can be effective under certain circumstances, providing reliable classification and regression models and feature extractors. Further research to draw more conclusions and validate the obtained results is expected in the future.
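As an illustration of the regression task described (estimating a typhoon's centre pressure from a satellite image), a minimal convolutional regressor might look like the sketch below; the layer sizes, input resolution and target values are assumptions made for the example, not the architecture used in the thesis.

    import torch
    import torch.nn as nn

    class PressureRegressor(nn.Module):
        # Toy CNN mapping a single-channel satellite image to one pressure value.
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1))
            self.head = nn.Linear(32, 1)

        def forward(self, x):
            return self.head(self.features(x).flatten(1))

    model = PressureRegressor()
    img = torch.randn(4, 1, 128, 128)      # batch of fake satellite frames
    pred = model(img)                      # predicted centre pressure (e.g. in hPa)
    loss = nn.functional.mse_loss(pred, torch.full((4, 1), 950.0))
    loss.backward()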
APA, Harvard, Vancouver, ISO, and other styles
44

Kabir, Md Faisal. "Application of Deep Learning in Deep Space Wireless Signal Identification for Intelligent Channel Sensing." University of Toledo / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1588886429314726.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Gibert, Llauradó Daniel. "Going Deep into the Cat and the Mouse Game: Deep Learning for Malware Classification." Doctoral thesis, Universitat de Lleida, 2020. http://hdl.handle.net/10803/671776.

Full text
Abstract:
The fight against malware has never stopped since the dawn of computing. This fight has turned out to be a never-ending and cyclical arms race: as security analysts and researchers improve their defenses, malware developers continue to innovate, finding new infection vectors and enhancing their obfuscation techniques. Lately, due to the massive growth of malware streams, new methods have to be devised to complement traditional detection approaches and keep pace with new attacks and variants. The aim of this thesis is the design, implementation, and evaluation of machine learning approaches for the task of malware detection and classification, due to their ability to handle large volumes of data and to generalize to never-before-seen malware. This thesis is structured into four main parts. The first part provides a systematic and detailed overview of machine learning techniques to tackle the problem of malware detection and classification. The second part is devoted to automating the feature engineering process through deep learning. The third part of this thesis is devoted to investigating mechanisms to combine multiple modalities of information to increase the robustness of deep learning classifiers. The fourth part of this dissertation discusses the main issues and challenges faced by security researchers, such as the availability of public benchmarks for malware research, and the problems of class imbalance, concept drift and adversarial learning. To this end, it provides an extensive evaluation of deep learning approaches for malware classification against common metamorphic techniques, and it explores their usage to augment the training set and reduce class imbalance.
APA, Harvard, Vancouver, ISO, and other styles
46

Squadrani, Lorenzo. "Deep neural networks and thermodynamics." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020.

Find full text
Abstract:
Deep learning is the most effective and most widely used approach to artificial intelligence, and yet it is far from being properly understood. Understanding it is the way to further improve its effectiveness and, in the best case, to gain some understanding of "natural" intelligence. We attempt a step in this direction with the aid of physics. We describe a convolutional neural network for image classification (trained on CIFAR-10) within the descriptive framework of thermodynamics. In particular, we define and study the temperature of each component of the network. Our results provide a new point of view on deep learning models, which may be a starting point towards a better understanding of artificial intelligence.
APA, Harvard, Vancouver, ISO, and other styles
47

Franceschelli, Giorgio. "Generative Deep Learning and Creativity." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021.

Find full text
Abstract:
"It has no pretensions to originate anything; it can only do whatever we know how to order it to perform." Thus, over 150 years ago, Lady Lovelace commented on Babbage's Analytical Engine, the ancestor of our computers. After so many years, the remark sounds almost like a challenge: thanks to the spread of Generative Deep Learning techniques and to research in the field of Computational Creativity, more and more effort has been devoted to refuting the now famous Lovelace Objection. Starting precisely from this, four questions form the cornerstones of Computational Creativity: whether it is possible to exploit computational techniques to understand human creativity; and, above all, whether computers can do things that seem creative (if not things that actually are creative), and whether they can learn to recognize creativity. This thesis aims to place itself in this context, exploring the latter two questions with deep learning techniques. In particular, building on the definition of creativity proposed by Margaret Boden, a metric given by the weighted sum of three individual components (value, novelty and surprise) is presented for the recognition of creativity. In addition, exploiting this measure, UCAN (Unexpectedly Creative Adversarial Network) is presented: a creativity-oriented generative model that learns to produce creative works by maximizing the metric above. Both the generator and the metric were tested on nineteenth-century American poetry; the results obtained show that the metric is indeed able to capture the historical trajectory, and that it can represent an important step forward for the study of Computational Creativity; the generator, while not achieving equally excellent results, stands as a starting point for the future definition of a genuinely creative model.
APA, Harvard, Vancouver, ISO, and other styles
48

Dikdogmus, Halil. "RISER CONCEPTS FOR DEEP WATERS." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for marin teknikk, 2012. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-18528.

Full text
Abstract:
Oil and gas exploration and production activities in deep and ultra-deep waters in hostile environments necessitate the development of innovative riser systems capable of ensuring the transfer of fluids from the seabed to a floating vessel and vice versa, with little or no issue arising from environmental loads and vessel motions. The design of the riser system must focus on different types of loading and load effects than those for traditional water depths. A variety of riser concepts have been proposed, both with respect to geometric shape and selection of materials. In the last few years, steel catenary risers have been a preferred riser solution for deep-water field developments due to their simple engineering concept, cost effectiveness, and flexibility in the choice of host platform and in geographical and environmental conditions. In this report, a case study considering a steel catenary riser operating in 1000 m water depth was conducted. The riser was subjected to extreme environmental conditions, and static and dynamic response analyses were performed with the computer program RIFLEX. Last, a parametric study was carried out to investigate the effects of varying parameters such as the current profile, mesh density and wall thickness. These parameters have a significant effect on the structural response, especially in the touch-down region.
APA, Harvard, Vancouver, ISO, and other styles
49

Mancevo, del Castillo Ayala Diego. "Compressing Deep Convolutional Neural Networks." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-217316.

Full text
Abstract:
Deep convolutional neural networks, and "deep learning" in general, stand at the cutting edge of a range of applications, from image-based recognition and classification to natural language processing, speech and speaker recognition, and reinforcement learning. Very deep models, however, are often large, complex and computationally expensive to train and evaluate. Deep learning models are thus seldom deployed natively in environments where computational resources are scarce or expensive. To address this problem, we turn our attention towards a range of techniques that we collectively refer to as "model compression", where a lighter student model is trained to approximate the output produced by the model we wish to compress. To this end, the output from the original model is used to craft the training labels of the smaller student model. This work contains some experiments on CIFAR-10 and demonstrates how to use the aforementioned techniques to compress a people-counting model whose precision, recall and F1-score are improved by as much as 14% against our baseline.
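The compression recipe the abstract describes (train a lighter student to reproduce the teacher's outputs) can be sketched in a few lines. The models, temperature and loss choice here are illustrative assumptions, not the people-counting setup evaluated in the thesis.

    import torch
    import torch.nn as nn

    teacher = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 10))
    student = nn.Sequential(nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, 10))
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)
    T = 4.0  # temperature that softens the teacher's logits

    for _ in range(100):                  # toy loop on random stand-in inputs
        x = torch.randn(64, 32)
        with torch.no_grad():
            soft_labels = torch.softmax(teacher(x) / T, dim=1)
        log_probs = torch.log_softmax(student(x) / T, dim=1)
        # The student is trained to match the teacher's output distribution.
        loss = nn.functional.kl_div(log_probs, soft_labels, reduction="batchmean")
        opt.zero_grad()
        loss.backward()
        opt.step()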
APA, Harvard, Vancouver, ISO, and other styles
50

Bruno, Chelsea A. "Vocal Synthesis and Deep Listening." FIU Digital Commons, 2014. http://digitalcommons.fiu.edu/etd/1245.

Full text
Abstract:
My composition, Maitreya, combines vocal synthesis techniques with the theoretical concept of Deep Listening. This essay discusses developments in vocal synthesis and digital signal processing (DSP) software that can be performed in real time and that contributed to my composition. Deep Listening involves meditative practices that make one more aware of sounds that are both audible and inaudible. The composition utilizes recordings of male and female voices that recite poetry, chant, and are phase-vocoded. It also features various DSP techniques and a custom-built modular synthesizer. The composition has three sections that were compiled and edited in Ableton Live 8.2.2.
APA, Harvard, Vancouver, ISO, and other styles