Academic literature on the topic 'Deep learning'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, book chapters, conference papers, reports, and other scholarly sources on the topic 'Deep learning.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Deep learning"

1

Chagas, Edgar Thiago De Oliveira. "Deep Learning e suas aplicações na atualidade." Revista Científica Multidisciplinar Núcleo do Conhecimento 04, no. 05 (May 8, 2019): 05–26. http://dx.doi.org/10.32749/nucleodoconhecimento.com.br/administracao/deep-learning.

2

Chagas, Edgar Thiago De Oliveira. "Deep Learning and its applications today." Revista Científica Multidisciplinar Núcleo do Conhecimento 04, no. 05 (May 8, 2019): 05–26. http://dx.doi.org/10.32749/nucleodoconhecimento.com.br/business-administration/deep-learning-2.

3

Jaiswal, Tarun, and Sushma Jaiswal. "Deep Learning in Medicine." International Journal of Trend in Scientific Research and Development 3, no. 4 (June 30, 2019): 212–17. http://dx.doi.org/10.31142/ijtsrd23641.

4

Zitar, Raed Abu, Ammar EL-Hassan, and Oraib AL-Sahlee. "Deep Learning Recommendation System for Course Learning Outcomes Assessment." Journal of Advanced Research in Dynamical and Control Systems 11, no. 10-SPECIAL ISSUE (October 31, 2019): 1491–78. http://dx.doi.org/10.5373/jardcs/v11sp10/20192993.

5

Evseenko, Alla, and Dmitrii Romannikov. "Application of Deep Q-learning and double Deep Q-learning algorithms to the task of control an inverted pendulum." Transaction of Scientific Papers of the Novosibirsk State Technical University, no. 1-2 (August 26, 2020): 7–25. http://dx.doi.org/10.17212/2307-6879-2020-1-2-7-25.

Abstract:
Today, the branch of science known as artificial intelligence is booming worldwide. Systems built on artificial intelligence methods can perform functions that are traditionally considered the prerogative of humans. Artificial intelligence spans a wide range of research areas; one of them is machine learning. This article discusses algorithms from one machine learning approach, reinforcement learning (RL), on which a great deal of research and development has been carried out over the past seven years. Development and research on this approach is mainly carried out on Atari 2600 games or similar tasks. In this article, reinforcement learning is applied to a dynamic object, an inverted pendulum. As a model of this object, we consider the model of an inverted pendulum on a cart taken from the Gym library, which contains many models used to test and analyze reinforcement learning algorithms. The article describes the implementation and study of two algorithms of this approach, Deep Q-learning and Double Deep Q-learning. Training, testing, and training-time graphs are presented for each algorithm; on their basis it is concluded that the Double Deep Q-learning algorithm is preferable, since its training time is approximately 2 minutes and it provides the best control of the inverted-pendulum-on-a-cart model.
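As a minimal sketch of the mechanism this abstract compares, the snippet below computes the Double Deep Q-learning target for a batch of transitions, using PyTorch and the dimensions of the inverted-pendulum-on-a-cart model from the Gym library (4 state variables, 2 discrete actions); the network sizes, the random batch standing in for a replay buffer, and all hyper-parameters are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

state_dim, n_actions, gamma = 4, 2, 0.99   # cart-pole style dimensions

def make_q_net():
    return nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))

online_q, target_q = make_q_net(), make_q_net()
target_q.load_state_dict(online_q.state_dict())

# A random batch of (s, a, r, s', done) transitions stands in for a replay-buffer sample.
batch = 32
s = torch.randn(batch, state_dim)
a = torch.randint(0, n_actions, (batch,))
r = torch.randn(batch)
s2 = torch.randn(batch, state_dim)
done = torch.zeros(batch)

with torch.no_grad():
    # Double DQN: the online network selects the next action,
    # the target network evaluates it.
    next_a = online_q(s2).argmax(dim=1)
    next_q = target_q(s2).gather(1, next_a.unsqueeze(1)).squeeze(1)
    td_target = r + gamma * (1.0 - done) * next_q

q_sa = online_q(s).gather(1, a.unsqueeze(1)).squeeze(1)
loss = nn.functional.mse_loss(q_sa, td_target)
loss.backward()   # one optimiser step on the online network would follow

In plain Deep Q-learning the target network both selects and evaluates the next action; decoupling those two roles is the only change the double variant makes.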
6

Jaiswal, Tarun, and Sushma Jaiswal. "Deep Learning Based Pain Treatment." International Journal of Trend in Scientific Research and Development 3, no. 4 (June 30, 2019): 193–211. http://dx.doi.org/10.31142/ijtsrd23639.

7

Sha, Hao (沙浩), Liu Yangzhe (刘阳哲), and Zhang Yongbing (张永兵). "基于深度学习的傅里叶叠层成像技术" [Deep learning-based Fourier ptychographic imaging]. Laser & Optoelectronics Progress 58, no. 18 (2021): 1811020. http://dx.doi.org/10.3788/lop202158.1811020.

8

Park, Ingyu, and Unjoo Lee. "Automatic, Qualitative Scoring of the Clock Drawing Test (CDT) Based on U-Net, CNN and Mobile Sensor Data." Sensors 21, no. 15 (August 3, 2021): 5239. http://dx.doi.org/10.3390/s21155239.

Abstract:
The Clock Drawing Test (CDT) is a rapid, inexpensive, and popular screening tool for cognitive functions. In spite of its qualitative capabilities in diagnosis of neurological diseases, the assessment of the CDT has depended on quantitative methods as well as manual paper based methods. Furthermore, due to the impact of the advancement of mobile smart devices imbedding several sensors and deep learning algorithms, the necessity of a standardized, qualitative, and automatic scoring system for CDT has been increased. This study presents a mobile phone application, mCDT, for the CDT and suggests a novel, automatic and qualitative scoring method using mobile sensor data and deep learning algorithms: CNN, a convolutional network, U-Net, a convolutional network for biomedical image segmentation, and the MNIST (Modified National Institute of Standards and Technology) database. To obtain DeepC, a trained model for segmenting a contour image from a hand drawn clock image, U-Net was trained with 159 CDT hand-drawn images at 128 × 128 resolution, obtained via mCDT. To construct DeepH, a trained model for segmenting the hands in a clock image, U-Net was trained with the same 159 CDT 128 × 128 resolution images. For obtaining DeepN, a trained model for classifying the digit images from a hand drawn clock image, CNN was trained with the MNIST database. Using DeepC, DeepH and DeepN with the sensor data, parameters of contour (0–3 points), numbers (0–4 points), hands (0–5 points), and the center (0–1 points) were scored for a total of 13 points. From 219 subjects, performance testing was completed with images and sensor data obtained via mCDT. For an objective performance analysis, all the images were scored and crosschecked by two clinical experts in CDT scaling. Performance test analysis derived a sensitivity, specificity, accuracy and precision for the contour parameter of 89.33, 92.68, 89.95 and 98.15%, for the hands parameter of 80.21, 95.93, 89.04 and 93.90%, for the numbers parameter of 83.87, 95.31, 87.21 and 97.74%, and for the center parameter of 98.42, 86.21, 96.80 and 97.91%, respectively. From these results, the mCDT application and its scoring system provide utility in differentiating dementia disease subtypes, being valuable in clinical practice and for studies in the field.
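As a rough illustration of the segmentation models described above (and not the authors' mCDT code), the sketch below builds a deliberately small U-Net-style encoder-decoder in PyTorch for 128 × 128 grayscale clock drawings; the published DeepC and DeepH models are full U-Nets trained on the 159 collected images, while this toy only reproduces the input/output shapes and the characteristic skip connection.

import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    # One downsampling and one upsampling stage with a skip connection; a
    # shape-compatible toy, far smaller than a real U-Net.
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.mid = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 1, 1))   # single-channel mask logits

    def forward(self, x):
        e = self.enc(x)                       # (B, 16, 128, 128)
        m = self.mid(self.down(e))            # (B, 32, 64, 64)
        u = self.up(m)                        # (B, 16, 128, 128)
        return self.dec(torch.cat([u, e], dim=1))

x = torch.randn(2, 1, 128, 128)               # two grayscale clock drawings
mask_logits = TinyUNet()(x)                   # per-pixel contour (or hands) logits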
9

Tolentino, Lean Karlo S., Ronnie O. Serfa Juan, August C. Thio-ac, Maria Abigail B. Pamahoy, Joni Rose R. Forteza, and Xavier Jet O. Garcia. "Static Sign Language Recognition Using Deep Learning." International Journal of Machine Learning and Computing 9, no. 6 (December 2019): 821–27. http://dx.doi.org/10.18178/ijmlc.2019.9.6.879.

10

Nizami Huseyn, Elcin. "Application of Deep Learning in Medical Imaging." Nature and Science 03, no. 04 (October 27, 2020): 7–13. http://dx.doi.org/10.36719/2707-1146/04/7-13.

Abstract:
Medical imaging technology plays an important role in the detection, diagnosis and treatment of diseases. Due to the instability of human expert experience, machine learning technology is expected to assist researchers and physicians to improve the accuracy of imaging diagnosis and reduce the imbalance of medical resources. This article systematically summarizes some methods of deep learning technology, introduces the application research of deep learning technology in medical imaging, and discusses the limitations of deep learning technology in medical imaging. Key words: Artificial Intelligence, Deep Learning, Medical Imaging, big data

Dissertations / Theses on the topic "Deep learning"

1

Dufourq, Emmanuel. "Evolutionary deep learning." Doctoral thesis, Faculty of Science, 2019. http://hdl.handle.net/11427/30357.

Abstract:
The primary objective of this thesis is to investigate whether evolutionary concepts can improve the performance, speed and convenience of algorithms in various active areas of machine learning research. Deep neural networks are exhibiting an explosion in the number of parameters that need to be trained, as well as the number of permutations of possible network architectures and hyper-parameters. There is little guidance on how to choose these and brute-force experimentation is prohibitively time consuming. We show that evolutionary algorithms can help tame this explosion of freedom, by developing an algorithm that robustly evolves near optimal deep neural network architectures and hyper-parameters across a wide range of image and sentiment classification problems. We further develop an algorithm that automatically determines whether a given data science problem is of classification or regression type, successfully choosing the correct problem type with more than 95% accuracy. Together these algorithms show that a great deal of the current "art" in the design of deep learning networks - and in the job of the data scientist - can be automated. Having discussed the general problem of optimising deep learning networks the thesis moves on to a specific application: the automated extraction of human sentiment from text and images of human faces. Our results reveal that our approach is able to outperform several public and/or commercial text sentiment analysis algorithms using an evolutionary algorithm that learned to encode and extend sentiment lexicons. A second analysis looked at using evolutionary algorithms to estimate text sentiment while simultaneously compressing text data. An extensive analysis of twelve sentiment datasets reveal that accurate compression is possible with 3.3% loss in classification accuracy even with 75% compression of text size, which is useful in environments where data volumes are a problem. Finally, the thesis presents improvements to automated sentiment analysis of human faces to identify emotion, an area where there has been a tremendous amount of progress using convolutional neural networks. We provide a comprehensive critique of past work, highlight recommendations and list some open, unanswered questions in facial expression recognition using convolutional neural networks. One serious challenge when implementing such networks for facial expression recognition is the large number of trainable parameters which results in long training times. We propose a novel method based on evolutionary algorithms, to reduce the number of trainable parameters whilst simultaneously retaining classification performance, and in some cases achieving superior performance. We are robustly able to reduce the number of parameters on average by 95% with no loss in classification accuracy. Overall our analyses show that evolutionary algorithms are a valuable addition to machine learning in the deep learning era: automating, compressing and/or improving results significantly, depending on the desired goal.
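The architecture and hyper-parameter evolution this abstract describes follows the general pattern of an evolutionary search loop, sketched below in plain Python; the genome (a learning rate and a layer count), the mutation rule, and the synthetic fitness function are invented stand-ins, since in the thesis the fitness of a genome is the validation performance of an actually trained network.

import random

def fitness(genome):
    # Stand-in objective: distance of (learning rate, depth) from a fictitious optimum.
    lr, layers = genome
    return -((lr - 0.01) ** 2 * 1e4 + (layers - 4) ** 2)

def mutate(genome):
    lr, layers = genome
    return (max(1e-5, lr * random.uniform(0.5, 2.0)),
            max(1, layers + random.choice([-1, 0, 1])))

population = [(10 ** random.uniform(-5, -1), random.randint(1, 10)) for _ in range(20)]
for generation in range(30):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]                                  # truncation selection
    population = parents + [mutate(random.choice(parents)) for _ in range(15)]

print("best hyper-parameters found:", max(population, key=fitness))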
2

Hussein, Ahmed. "Deep learning based approaches for imitation learning." Thesis, Robert Gordon University, 2018. http://hdl.handle.net/10059/3117.

Abstract:
Imitation learning refers to an agent's ability to mimic a desired behaviour by learning from observations. The field is rapidly gaining attention due to recent advances in computational and communication capabilities as well as rising demand for intelligent applications. The goal of imitation learning is to describe the desired behaviour by providing demonstrations rather than instructions. This enables agents to learn complex behaviours with general learning methods that require minimal task specific information. However, imitation learning faces many challenges. The objective of this thesis is to advance the state of the art in imitation learning by adopting deep learning methods to address two major challenges of learning from demonstrations. Firstly, representing the demonstrations in a manner that is adequate for learning. We propose novel Convolutional Neural Networks (CNN) based methods to automatically extract feature representations from raw visual demonstrations and learn to replicate the demonstrated behaviour. This alleviates the need for task specific feature extraction and provides a general learning process that is adequate for multiple problems. The second challenge is generalizing a policy over unseen situations in the training demonstrations. This is a common problem because demonstrations typically show the best way to perform a task and don't offer any information about recovering from suboptimal actions. Several methods are investigated to improve the agent's generalization ability based on its initial performance. Our contributions in this area are three fold. Firstly, we propose an active data aggregation method that queries the demonstrator in situations of low confidence. Secondly, we investigate combining learning from demonstrations and reinforcement learning. A deep reward shaping method is proposed that learns a potential reward function from demonstrations. Finally, memory architectures in deep neural networks are investigated to provide context to the agent when taking actions. Using recurrent neural networks addresses the dependency between the state-action sequences taken by the agent. The experiments are conducted in simulated environments on 2D and 3D navigation tasks that are learned from raw visual data, as well as a 2D soccer simulator. The proposed methods are compared to state of the art deep reinforcement learning methods. The results show that deep learning architectures can learn suitable representations from raw visual data and effectively map them to atomic actions. The proposed methods for addressing generalization show improvements over using supervised learning and reinforcement learning alone. The results are thoroughly analysed to identify the benefits of each approach and situations in which it is most suitable.
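The "active data aggregation" idea mentioned above (query the demonstrator only where the learner is uncertain, then retrain on the aggregated data) can be shown with a deliberately tiny, self-contained toy; the one-dimensional task, the threshold learner, and the confidence rule below are invented for illustration and are not the thesis's deep-network implementation.

import random

def expert_action(state):          # the demonstrator: steer right iff the state is non-negative
    return 1 if state >= 0.0 else 0

def fit_threshold(data):           # the learner: the decision threshold with the fewest training errors
    return min((t for t, _ in data),
               key=lambda t: sum((1 if s >= t else 0) != a for s, a in data))

# 1. Learn from an initial batch of demonstrations.
data = [(s, expert_action(s)) for s in (random.uniform(-1, 1) for _ in range(20))]
threshold = fit_threshold(data)

# 2. Aggregation rounds: run the learner on fresh states, query the expert only in
#    low-confidence situations (states close to the decision boundary), and refit.
for _ in range(5):
    visited = [random.uniform(-1, 1) for _ in range(50)]
    uncertain = [s for s in visited if abs(s - threshold) < 0.1]
    data += [(s, expert_action(s)) for s in uncertain]
    threshold = fit_threshold(data)

print("learned decision threshold:", threshold)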
3

Zhang, Jingwei [author], and Wolfram Burgard [academic supervisor]. "Learning navigation policies with deep reinforcement learning." Freiburg: Universität, 2021. http://d-nb.info/1235325571/34.

4

Wülfing, Jan [author], and Martin Riedmiller [academic supervisor]. "Stable deep reinforcement learning." Freiburg: Universität, 2019. http://d-nb.info/1204826188/34.

5

White, Martin. "Deep Learning Software Repositories." W&M ScholarWorks, 2017. https://scholarworks.wm.edu/etd/1516639667.

Abstract:
Bridging the abstraction gap between artifacts and concepts is the essence of software engineering (SE) research problems. SE researchers regularly use machine learning to bridge this gap, but there are three fundamental issues with traditional applications of machine learning in SE research. Traditional applications are too reliant on labeled data. They are too reliant on human intuition, and they are not capable of learning expressive yet efficient internal representations. Ultimately, SE research needs approaches that can automatically learn representations of massive, heterogeneous, datasets in situ, apply the learned features to a particular task and possibly transfer knowledge from task to task. Improvements in both computational power and the amount of memory in modern computer architectures have enabled new approaches to canonical machine learning tasks. Specifically, these architectural advances have enabled machines that are capable of learning deep, compositional representations of massive data depots. The rise of deep learning has ushered in tremendous advances in several fields. Given the complexity of software repositories, we presume deep learning has the potential to usher in new analytical frameworks and methodologies for SE research and the practical applications it reaches. This dissertation examines and enables deep learning algorithms in different SE contexts. We demonstrate that deep learners significantly outperform state-of-the-practice software language models at code suggestion on a Java corpus. Further, these deep learners for code suggestion automatically learn how to represent lexical elements. We use these representations to transmute source code into structures for detecting similar code fragments at different levels of granularity—without declaring features for how the source code is to be represented. Then we use our learning-based framework for encoding fragments to intelligently select and adapt statements in a codebase for automated program repair. In our work on code suggestion, code clone detection, and automated program repair, everything for representing lexical elements and code fragments is mined from the source code repository. Indeed, our work aims to move SE research from the art of feature engineering to the science of automated discovery.
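A minimal sketch of the code-suggestion idea described above, learning representations of lexical elements and predicting the next token, is given below; the single hard-coded Java-like token stream, vocabulary, embedding size, and LSTM are illustrative assumptions, whereas the dissertation's models were trained on a large Java corpus.

import torch
import torch.nn as nn

tokens = "public static void main ( String [ ] args ) { System . out . println ; }".split()
vocab = {t: i for i, t in enumerate(sorted(set(tokens)))}
ids = torch.tensor([vocab[t] for t in tokens])

class CodeLM(nn.Module):
    def __init__(self, vocab_size, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)    # learned representations of lexical elements
        self.rnn = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, x):
        h, _ = self.rnn(self.embed(x))
        return self.out(h)

model = CodeLM(len(vocab))
logits = model(ids[:-1].unsqueeze(0))                  # predict each following token
loss = nn.functional.cross_entropy(logits.squeeze(0), ids[1:])
loss.backward()                                        # one training step would follow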
6

Halle, Alex, and Alexander Hasse. "Topologieoptimierung mittels Deep Learning." Technische Universität Chemnitz, 2019. https://monarch.qucosa.de/id/qucosa%3A34343.

Abstract:
Topology optimization is the search for an optimal component geometry for a given application. For complex problems, topology optimization can demand considerable time and computing capacity because of the high level of detail involved. These drawbacks are to be reduced by means of deep learning, so that topology optimization can serve the design engineer as an aid that delivers results within seconds. Deep learning is the extension of artificial neural networks, with which patterns or behavioral rules can be learned. The topology optimization that has so far been computed numerically is thus to be solved with a deep learning approach. Approaches, a computation scheme, and first conclusions are presented and discussed.
7

Goh, Hanlin. "Learning deep visual representations." Paris 6, 2013. http://www.theses.fr/2013PA066356.

Abstract:
Recent advancements in the areas of deep learning and visual information processing have presented an opportunity to unite both fields. These complementary fields combine to tackle the problem of classifying images into their semantic categories. Deep learning brings learning and representational capabilities to a visual processing model that is adapted for image classification. This thesis addresses problems that lead to the proposal of learning deep visual representations for image classification. The problem of deep learning is tackled on two fronts. The first aspect is the problem of unsupervised learning of latent representations from input data. The main focus is the integration of prior knowledge into the learning of restricted Boltzmann machines (RBM) through regularization. Regularizers are proposed to induce sparsity, selectivity and topographic organization in the coding to improve discrimination and invariance. The second direction introduces the notion of gradually transitioning from unsupervised layer-wise learning to supervised deep learning. This is done through the integration of bottom-up information with top-down signals. Two novel implementations supporting this notion are explored. The first method uses top-down regularization to train a deep network of RBMs. The second method combines predictive and reconstructive loss functions to optimize a stack of encoder-decoder networks. The proposed deep learning techniques are applied to tackle the image classification problem. The bag-of-words model is adopted due to its strengths in image modeling through the use of local image descriptors and spatial pooling schemes. Deep learning with spatial aggregation is used to learn a hierarchical visual dictionary for encoding the image descriptors into mid-level representations. This method achieves leading image classification performances for object and scene images. The learned dictionaries are diverse and non-redundant, and inference is fast. A further optimization is then performed for the subsequent pooling step, by introducing a differentiable pooling parameterization and applying the error backpropagation algorithm. This thesis represents one of the first attempts to synthesize deep learning and the bag-of-words model. This union results in many challenging research problems, leaving much room for further study in this area.
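A minimal sketch of the regularization idea in this abstract, penalising hidden-unit activity so that the learned code becomes sparse, is shown below; the thesis applies such regularizers to restricted Boltzmann machines inside a bag-of-words pipeline, while the sketch swaps in a one-layer autoencoder to stay short and runnable, and the target sparsity and penalty weight are arbitrary.

import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(784, 256), nn.Sigmoid())
decoder = nn.Linear(256, 784)
target_sparsity = 0.05

x = torch.rand(64, 784)                       # a batch of flattened image patches / descriptors
h = encoder(x)                                # hidden code in (0, 1)
recon_loss = nn.functional.mse_loss(decoder(h), x)
mean_activity = h.mean(dim=0)                 # average activation of each hidden unit
sparsity_penalty = ((mean_activity - target_sparsity) ** 2).mean()
loss = recon_loss + 0.1 * sparsity_penalty    # reconstruction + sparsity regularizer
loss.backward()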
8

Geirsson, Gunnlaugur. "Deep learning exotic derivatives." Thesis, Uppsala universitet, Avdelningen för systemteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-430410.

Abstract:
Monte Carlo methods in derivative pricing are computationally expensive, in particular for evaluating models' partial derivatives with respect to their inputs. This research proposes the use of deep learning to approximate such valuation models for highly exotic derivatives, using automatic differentiation to evaluate input sensitivities. Deep learning models are trained to approximate Phoenix Autocall valuation using a proprietary model used by Svenska Handelsbanken AB. Models are trained on large datasets of low-accuracy (10^4 simulations) Monte Carlo data, successfully learning the true model with an average error of 0.1% on validation data generated by 10^8 simulations. A specific model parametrisation is proposed for 2-day valuation only, to be recalibrated interday using transfer learning. Automatic differentiation approximates sensitivity to (normalised) underlying asset prices with a mean relative error generally below 1.6%. Overall error when predicting sensitivity to implied volatility is found to lie within 10%-40%. Near-identical results are found by finite difference and automatic differentiation in both cases. Automatic differentiation is not successful at capturing sensitivity to interday change in contract value, though errors of 8%-25% are achieved by finite difference. Model recalibration by transfer learning proves to converge over 15 times faster and with up to 14% lower relative error than training using random initialisation. The results show that deep learning models can efficiently learn Monte Carlo valuation, and that these can be quickly recalibrated by transfer learning. The deep learning model gradient computed by automatic differentiation proves a good approximation of the true model sensitivities. Future research proposals include studying optimised recalibration schedules, using training data generated by single Monte Carlo price paths, and studying additional parameters and contracts.
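The core mechanism of this abstract, differentiating a trained pricing network with respect to its inputs to obtain sensitivities, can be sketched in a few lines of PyTorch; the small untrained network, the five-dimensional input layout, and the values below are placeholders, not the Phoenix Autocall model or Handelsbanken data.

import torch
import torch.nn as nn

# Stand-in for a network trained to reproduce the Monte Carlo pricer.
pricer = nn.Sequential(nn.Linear(5, 64), nn.ReLU(),
                       nn.Linear(64, 64), nn.ReLU(),
                       nn.Linear(64, 1))

# Illustrative input layout: normalised underlying prices, implied volatility, rate.
inputs = torch.tensor([[1.00, 0.95, 1.10, 0.20, 0.01]], requires_grad=True)

price = pricer(inputs).sum()
price.backward()                                   # reverse-mode automatic differentiation
print("approximate sensitivities:", inputs.grad)   # one partial derivative per input

Once such a surrogate is accurate, these gradients replace the costly finite-difference re-pricings that the Monte Carlo model would otherwise require.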
9

Arnold, Ludovic. "Learning Deep Representations: Toward a better new understanding of the deep learning paradigm." PhD thesis, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00842447.

Abstract:
Since 2006, deep learning algorithms which rely on deep architectures with several layers of increasingly complex representations have been able to outperform state-of-the-art methods in several settings. Deep architectures can be very efficient in terms of the number of parameters required to represent complex operations which makes them very appealing to achieve good generalization with small amounts of data. Although training deep architectures has traditionally been considered a difficult problem, a successful approach has been to employ an unsupervised layer-wise pre-training step to initialize deep supervised models. First, unsupervised learning has many benefits w.r.t. generalization because it only relies on unlabeled data which is easily found. Second, the possibility to learn representations layer by layer instead of all layers at once improves generalization further and reduces computational time. However, deep learning is a very recent approach and still poses a lot of theoretical and practical questions concerning the consistency of layer-wise learning with many layers and difficulties such as evaluating performance, performing model selection and optimizing layers. In this thesis we first discuss the limitations of the current variational justification for layer-wise learning which does not generalize well to many layers. We ask if a layer-wise method can ever be truly consistent, i.e. capable of finding an optimal deep model by training one layer at a time without knowledge of the upper layers. We find that layer-wise learning can in fact be consistent and can lead to optimal deep generative models. To do this, we introduce the Best Latent Marginal (BLM) upper bound, a new criterion which represents the maximum log-likelihood of a deep generative model where the upper layers are unspecified. We prove that maximizing this criterion for each layer leads to an optimal deep architecture, provided the rest of the training goes well. Although this criterion cannot be computed exactly, we show that it can be maximized effectively by auto-encoders when the encoder part of the model is allowed to be as rich as possible. This gives a new justification for stacking models trained to reproduce their input and yields better results than the state-of-the-art variational approach. Additionally, we give a tractable approximation of the BLM upper-bound and show that it can accurately estimate the final log-likelihood of models. Taking advantage of these theoretical advances, we propose a new method for performing layer-wise model selection in deep architectures, and a new criterion to assess whether adding more layers is warranted. As for the difficulty of training layers, we also study the impact of metrics and parametrization on the commonly used gradient descent procedure for log-likelihood maximization. We show that gradient descent is implicitly linked with the metric of the underlying space and that the Euclidean metric may often be an unsuitable choice as it introduces a dependence on parametrization and can lead to a breach of symmetry. To mitigate this problem, we study the benefits of the natural gradient and show that it can restore symmetry, regrettably at a high computational cost. We thus propose that a centered parametrization may alleviate the problem with almost no computational overhead.
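For context on the "variational justification for layer-wise learning" whose limits the thesis examines, the standard stacking bound from the literature (stated here as background, not as the thesis's BLM criterion) reads, for a two-layer generative model $p(x) = \sum_h p(x \mid h)\, p(h)$ and any auxiliary posterior $q(h \mid x)$:

\log p(x) \;\ge\; \mathbb{E}_{q(h \mid x)}\big[\log p(x \mid h) + \log p(h)\big] + H\big(q(h \mid x)\big)

Greedy stacking trains the lower layer under this bound and then fits the upper layers to the aggregated posterior over $h$; the BLM upper bound described above instead evaluates a lower layer under the best possible, unspecified upper layers.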
10

Rodés-Guirao, Lucas. "Deep Learning for Digital Typhoon : Exploring a typhoon satellite image dataset using deep learning." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-249514.

Abstract:
Efficient early warning systems can help in the management of natural disaster events, by allowing for adequate evacuations and resources administration. Several approaches have been used to implement proper early warning systems, such as simulations or statistical models, which rely on the collection of meteorological data. Data-driven techniques have been proven to be effective to build statistical models, being able to generalise to unseen data. Motivated by this, in this work, we explore deep learning techniques applied to the typhoon meteorological satellite image dataset "Digital Typhoon".  We focus on intensity measurement and categorisation of different natural phenomena. Firstly, we build a classifier to differentiate natural tropical cyclones and extratropical cyclones and, secondly, we implement a regression model to estimate the centre pressure value of a typhoon. In addition, we also explore cleaning methodologies to ensure that the data used is reliable. The results obtained show that deep learning techniques can be effective under certain circumstances, providing reliable classification and regression models and feature extractors. More research to draw more conclusions and validate the obtained results is expected in the future.
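Both tasks in this abstract, cyclone-type classification and centre-pressure regression from satellite imagery, can share one convolutional feature extractor with two output heads; the PyTorch sketch below is a toy of that layout, and the input resolution, channel counts, and head sizes are assumptions rather than the architecture used in the thesis.

import torch
import torch.nn as nn

class TyphoonNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.classify = nn.Linear(32, 2)      # tropical vs. extratropical cyclone
        self.pressure = nn.Linear(32, 1)      # centre pressure (normalised)

    def forward(self, x):
        f = self.features(x)
        return self.classify(f), self.pressure(f)

images = torch.randn(4, 1, 128, 128)           # a mini-batch of satellite images
logits, pressure = TyphoonNet()(images)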

Books on the topic "Deep learning"

1

Saefken, Benjamin, Alexander Silbersdorff, and Christoph Weisser, eds. Learning deep. Göttingen: Göttingen University Press, 2020. http://dx.doi.org/10.17875/gup2020-1338.

2

Wani, M. Arif, Mehmed Kantardzic, and Moamar Sayed-Mouchaweh, eds. Deep Learning Applications. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-1816-4.

3

Dong, Hao, Zihan Ding, and Shanghang Zhang, eds. Deep Reinforcement Learning. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-4095-0.

4

Sewak, Mohit. Deep Reinforcement Learning. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-8285-7.

5

Kim, Phil. MATLAB Deep Learning. Berkeley, CA: Apress, 2017. http://dx.doi.org/10.1007/978-1-4842-2845-6.

6

Calin, Ovidiu. Deep Learning Architectures. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-36721-3.

7

Matsushita, Kayo, ed. Deep Active Learning. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-10-5660-4.

8

Michelucci, Umberto. Applied Deep Learning. Berkeley, CA: Apress, 2018. http://dx.doi.org/10.1007/978-1-4842-3790-8.

9

Moons, Bert, Daniel Bankman, and Marian Verhelst. Embedded Deep Learning. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-319-99223-5.

10

El-Amir, Hisham, and Mahmoud Hamdy. Deep Learning Pipeline. Berkeley, CA: Apress, 2020. http://dx.doi.org/10.1007/978-1-4842-5349-6.


Book chapters on the topic "Deep learning"

1

Schmidhuber, Jürgen. "Deep Learning." In Encyclopedia of Machine Learning and Data Mining, 1–11. Boston, MA: Springer US, 2016. http://dx.doi.org/10.1007/978-1-4899-7502-7_909-1.

2

Schmidhuber, Jürgen. "Deep Learning." In Encyclopedia of Machine Learning and Data Mining, 338–48. Boston, MA: Springer US, 2017. http://dx.doi.org/10.1007/978-1-4899-7687-1_909.

3

Du, Ke-Lin, and M. N. S. Swamy. "Deep Learning." In Neural Networks and Statistical Learning, 717–36. London: Springer London, 2019. http://dx.doi.org/10.1007/978-1-4471-7452-3_24.

4

Žižka, Jan, František Dařena, and Arnošt Svoboda. "Deep Learning." In Text Mining with Machine Learning, 1st ed., 223–34. Boca Raton: CRC Press, 2019. http://dx.doi.org/10.1201/9780429469275-11.

5

Rebala, Gopinath, Ajay Ravi, and Sanjay Churiwala. "Deep Learning." In An Introduction to Machine Learning, 127–40. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-15729-6_11.

6

Aggarwal, Manasvi, and M. N. Murty. "Deep Learning." In Machine Learning in Social Networks, 35–66. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-33-4022-0_3.

7

Watson, Samuel S. "Deep Learning." In Data Science for Mathematicians, 1st ed., 409–40. Boca Raton, FL: Chapman and Hall/CRC, 2020. http://dx.doi.org/10.1201/9780429398292-9.

8

Alshamrani, Rayan, and Xiaogang Ma. "Deep Learning." In Encyclopedia of Big Data, 1–5. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-319-32001-4_533-1.

9

Varga, Ervin. "Deep Learning." In Practical Data Science with Python 3, 427–50. Berkeley, CA: Apress, 2019. http://dx.doi.org/10.1007/978-1-4842-4859-1_12.

10

Akerkar, Rajendra. "Deep Learning." In Artificial Intelligence for Business, 33–40. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-97436-1_3.


Conference papers on the topic "Deep learning"

1

"[Title page i]." In 2019 International Conference on Deep Learning and Machine Learning in Emerging Applications (Deep-ML). IEEE, 2019. http://dx.doi.org/10.1109/deep-ml.2019.00001.

2

"[Title page iii]." In 2019 International Conference on Deep Learning and Machine Learning in Emerging Applications (Deep-ML). IEEE, 2019. http://dx.doi.org/10.1109/deep-ml.2019.00002.

3

"[Copyright notice]." In 2019 International Conference on Deep Learning and Machine Learning in Emerging Applications (Deep-ML). IEEE, 2019. http://dx.doi.org/10.1109/deep-ml.2019.00003.

4

"Table of contents." In 2019 International Conference on Deep Learning and Machine Learning in Emerging Applications (Deep-ML). IEEE, 2019. http://dx.doi.org/10.1109/deep-ml.2019.00004.

5

"Message from the DEEP-ML 2019 Chairs." In 2019 International Conference on Deep Learning and Machine Learning in Emerging Applications (Deep-ML). IEEE, 2019. http://dx.doi.org/10.1109/deep-ml.2019.00005.

6

"DEEP-ML 2019 Organizing Committee." In 2019 International Conference on Deep Learning and Machine Learning in Emerging Applications (Deep-ML). IEEE, 2019. http://dx.doi.org/10.1109/deep-ml.2019.00006.

7

"DEEP-ML 2019 Program Committee." In 2019 International Conference on Deep Learning and Machine Learning in Emerging Applications (Deep-ML). IEEE, 2019. http://dx.doi.org/10.1109/deep-ml.2019.00007.

8

"Keynote Abstracts." In 2019 International Conference on Deep Learning and Machine Learning in Emerging Applications (Deep-ML). IEEE, 2019. http://dx.doi.org/10.1109/deep-ml.2019.00008.

9

Kaskavalci, Halil Can, and Sezer Goren. "A Deep Learning Based Distributed Smart Surveillance Architecture using Edge and Cloud Computing." In 2019 International Conference on Deep Learning and Machine Learning in Emerging Applications (Deep-ML). IEEE, 2019. http://dx.doi.org/10.1109/deep-ml.2019.00009.

10

Lee, Kyu Beom, and Hyu Soung Shin. "An Application of a Deep Learning Algorithm for Automatic Detection of Unexpected Accidents Under Bad CCTV Monitoring Conditions in Tunnels." In 2019 International Conference on Deep Learning and Machine Learning in Emerging Applications (Deep-ML). IEEE, 2019. http://dx.doi.org/10.1109/deep-ml.2019.00010.


Reports on the topic "Deep learning"

1

Caldeira, Joao. Deeply Uncertain: Comparing Methods of Uncertainty Quantification in Deep Learning Algorithms. Office of Scientific and Technical Information (OSTI), April 2020. http://dx.doi.org/10.2172/1623354.

2

Catanach, Thomas, and Jed Duersch. Efficient Generalizable Deep Learning. Office of Scientific and Technical Information (OSTI), September 2018. http://dx.doi.org/10.2172/1760400.

3

Groh, Micah. NOvA Reconstruction using Deep Learning. Office of Scientific and Technical Information (OSTI), June 2018. http://dx.doi.org/10.2172/1462092.

4

Geiss, Andrew, Joseph Hardin, Sam Silva, William Jr., Adam Varble, and Jiwen Fan. Deep Learning for Ensemble Forecasting. Office of Scientific and Technical Information (OSTI), April 2021. http://dx.doi.org/10.2172/1769692.

5

Balaji, Praveen. Detecting Stellar Streams through Deep Learning. Office of Scientific and Technical Information (OSTI), August 2019. http://dx.doi.org/10.2172/1637622.

6

Li, Li. Deep Learning for Hydro-Biogeochemistry Processes. Office of Scientific and Technical Information (OSTI), March 2021. http://dx.doi.org/10.2172/1769693.

7

Draelos, Timothy John, Nadine E. Miner, Christopher C. Lamb, Craig Michael Vineyard, Kristofor David Carlson, Conrad D. James, and James Bradley Aimone. Neurogenesis Deep Learning: Extending deep networks to accommodate new classes. Office of Scientific and Technical Information (OSTI), December 2016. http://dx.doi.org/10.2172/1505351.

8

Jiang, M., and B. Matei. Mesh Failure Prediction Using Deep Learning Techniques. Office of Scientific and Technical Information (OSTI), February 2020. http://dx.doi.org/10.2172/1601556.

9

Albanesi, Stefania, and Domonkos Vamossy. Predicting Consumer Default: A Deep Learning Approach. Cambridge, MA: National Bureau of Economic Research, August 2019. http://dx.doi.org/10.3386/w26165.

10

Doria, David, Bryan Dawson, and Manuel Vindiola. Enhanced Experience Replay for Deep Reinforcement Learning. Fort Belvoir, VA: Defense Technical Information Center, November 2015. http://dx.doi.org/10.21236/ada624278.
