Academic literature on the topic 'Machine and deep learning'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Machine and deep learning.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Machine and deep learning"

1

Liu, Qingzhong, Zhaoxian Zhou, Sarbagya Ratna Shakya, Prathyusha Uduthalapally, Mengyu Qiao, and Andrew H. Sung. "Smartphone Sensor-Based Activity Recognition by Using Machine Learning and Deep Learning Algorithms." International Journal of Machine Learning and Computing 8, no. 2 (April 2018): 121–26. http://dx.doi.org/10.18178/ijmlc.2018.8.2.674.

2

Gadri, Said. "Efficient Arabic Handwritten Character Recognition based on Machine Learning and Deep Learning Approaches." Journal of Advanced Research in Dynamical and Control Systems 12, SP7 (July 25, 2020): 9–17. http://dx.doi.org/10.5373/jardcs/v12sp7/20202076.

3

Poomka, Pumrapee, Nittaya Kerdprasop, and Kittisak Kerdprasop. "Machine Learning Versus Deep Learning Performances on the Sentiment Analysis of Product Reviews." International Journal of Machine Learning and Computing 11, no. 2 (March 2021): 103–9. http://dx.doi.org/10.18178/ijmlc.2021.11.2.1021.

Abstract:
In the current digital era, business platforms have shifted drastically toward online stores on the internet. With an internet-based platform, customers can order goods easily using their smartphones and have them delivered without going to a shopping mall. The drawback of this business model, however, is that customers do not really know the quality of the products they order. Therefore, such platforms often provide a review section that lets previous customers leave a review of the received product. These reviews are a good source for analyzing customer satisfaction. Business owners can assess the review trend as either positive or negative based on the feedback scores that customers have given, but it takes too much time for humans to analyze this data. In this research, we develop computational models using machine learning techniques to classify product reviews as positive or negative based on sentiment analysis. In our experiments, we use book review data from amazon.com to develop the models. For the machine learning strategy, the data is transformed with the bag-of-words technique before models are developed using logistic regression, naïve Bayes, support vector machine, and neural network algorithms. For the deep learning strategy, word embedding is used to transform the data before applying long short-term memory and gated recurrent unit techniques. To compare the performance of machine learning against deep learning models, we compare results from the two methods on both the preprocessed and the non-preprocessed dataset. The result is that bag of words with a neural network outperforms the other techniques on both the preprocessed and non-preprocessed datasets.
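As a rough illustration of the classical pipeline described in this abstract, the sketch below builds a bag-of-words baseline with scikit-learn; the file name and column names are hypothetical placeholders, and the deep learning branch (word embeddings with LSTM/GRU) is not shown.

```python
# Minimal sketch of the bag-of-words baseline described in the abstract,
# assuming a CSV of Amazon book reviews with hypothetical columns
# "review_text" and "label" (1 = positive, 0 = negative).
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

df = pd.read_csv("book_reviews.csv")  # hypothetical file name
X_train, X_test, y_train, y_test = train_test_split(
    df["review_text"], df["label"], test_size=0.2, random_state=42)

# Bag-of-words transform: each review becomes a sparse term-count vector.
vectorizer = CountVectorizer(max_features=20000)
X_train_bow = vectorizer.fit_transform(X_train)
X_test_bow = vectorizer.transform(X_test)

# Logistic regression is one of the four classical learners compared in the paper;
# naïve Bayes, SVM, and a feed-forward neural network would be swapped in the same way.
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train_bow, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test_bow)))
```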
4

Fischer, Andreas M., Basel Yacoub, Rock H. Savage, John D. Martinez, Julian L. Wichmann, Pooyan Sahbaee, Sasa Grbic, Akos Varga-Szemes, and U. Joseph Schoepf. "Machine Learning/Deep Neuronal Network." Journal of Thoracic Imaging 35 (May 2020): S21–S27. http://dx.doi.org/10.1097/rti.0000000000000498.

5

Wang, Tianlei, Jiuwen Cao, Xiaoping Lai, and Badong Chen. "Deep Weighted Extreme Learning Machine." Cognitive Computation 10, no. 6 (October 1, 2018): 890–907. http://dx.doi.org/10.1007/s12559-018-9602-9.

6

Mishra, Chandrahas, and D. L. Gupta. "Deep Machine Learning and Neural Networks: An Overview." IAES International Journal of Artificial Intelligence (IJ-AI) 6, no. 2 (June 1, 2017): 66. http://dx.doi.org/10.11591/ijai.v6.i2.pp66-73.

Abstract:
Deep learning is a machine learning technique in the area of artificial intelligence. It is a refined machine learning approach that far surpasses many of its predecessors in its ability to recognize speech and images. Deep learning is currently a very active research area in the machine learning and pattern recognition community. It has achieved enormous success in a broad range of applications, such as speech recognition, computer vision, and natural language processing, as well as in many industry products. Neural networks are used to implement machine learning and to design intelligent machines. This paper gives a brief introduction to the machine learning paradigms and discusses the application areas of deep machine learning and the different types of neural networks together with their applications.
7

Rajendra Kumar, P., and E. B. K. Manash. "Deep learning: a branch of machine learning." Journal of Physics: Conference Series 1228 (May 2019): 012045. http://dx.doi.org/10.1088/1742-6596/1228/1/012045.

8

Kibria, Md Golam, and Mehmet Sevkli. "Application of Deep Learning for Credit Card Approval: A Comparison with Two Machine Learning Techniques." International Journal of Machine Learning and Computing 11, no. 4 (August 2021): 286–90. http://dx.doi.org/10.18178/ijmlc.2021.11.4.1049.

Abstract:
The increase in credit card defaulters has forced companies to think carefully before approving credit applications. Credit card companies usually use their judgment to determine whether a credit card should be issued to a customer satisfying certain criteria. Some machine learning algorithms have also been used to support the decision. The main objective of this paper is to build a deep learning model based on the UCI (University of California, Irvine) data sets that can support the credit card approval decision. Secondly, the performance of the built model is compared with two traditional machine learning algorithms: logistic regression (LR) and support vector machine (SVM). Our results show that the overall performance of our deep learning model is slightly better than that of the other two models.
9

Wiebe, Nathan, Ashish Kapoor, and Krysta M. Svore. "Quantum deep learning." Quantum Information and Computation 16, no. 7&8 (May 2016): 541–87. http://dx.doi.org/10.26421/qic16.7-8-1.

Abstract:
In recent years, deep learning has had a profound impact on machine learning and artificial intelligence. At the same time, algorithms for quantum computers have been shown to efficiently solve some problems that are intractable on conventional, classical computers. We show that quantum computing not only reduces the time required to train a deep restricted Boltzmann machine, but also provides a richer and more comprehensive framework for deep learning than classical computing and leads to significant improvements in the optimization of the underlying objective function. Our quantum methods also permit efficient training of multilayer and fully connected models.
10

Evseenko, Alla, and Dmitrii Romannikov. "Application of Deep Q-learning and double Deep Q-learning algorithms to the task of control an inverted pendulum." Transaction of Scientific Papers of the Novosibirsk State Technical University, no. 1-2 (August 26, 2020): 7–25. http://dx.doi.org/10.17212/2307-6879-2020-1-2-7-25.

Abstract:
Today, the branch of science known as 'artificial intelligence' is booming worldwide. Systems built on artificial intelligence methods are able to perform functions traditionally considered the prerogative of humans. Artificial intelligence spans a wide range of research areas; one such area is machine learning. This article discusses algorithms from one machine learning approach, reinforcement learning (RL), on which a great deal of research and development has been carried out over the past seven years. Research on this approach has mainly been conducted on Atari 2600 games and other similar problems. In this article, reinforcement learning is applied to a dynamic object: an inverted pendulum. As a model of this object, we consider an inverted pendulum on a cart taken from the Gym library, which contains many environments used to test and analyze reinforcement learning algorithms. The article describes the implementation and study of two algorithms from this approach, Deep Q-learning and Double Deep Q-learning. Training, testing, and training time graphs for each algorithm are presented, on the basis of which it is concluded that the Double Deep Q-learning algorithm is preferable, because its training time is approximately 2 minutes and it provides the best control of the inverted pendulum on a cart.
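To make the distinction between the two algorithms concrete, here is a minimal PyTorch sketch of how the Deep Q-learning and Double Deep Q-learning update targets differ; the network sizes and the batch of transitions are illustrative placeholders, not the authors' implementation.

```python
# Sketch of the difference between the DQN and Double DQN targets for a
# CartPole-style task (4-dimensional state, 2 actions). Illustrative only.
import torch
import torch.nn as nn

def make_q_net(state_dim=4, n_actions=2):
    return nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))

online_net, target_net = make_q_net(), make_q_net()
target_net.load_state_dict(online_net.state_dict())

def td_target(reward, next_state, done, gamma=0.99, double=False):
    with torch.no_grad():
        if double:
            # Double DQN: the online network selects the action,
            # the target network evaluates it.
            best_action = online_net(next_state).argmax(dim=1, keepdim=True)
            next_q = target_net(next_state).gather(1, best_action).squeeze(1)
        else:
            # Vanilla DQN: the target network both selects and evaluates.
            next_q = target_net(next_state).max(dim=1).values
        return reward + gamma * next_q * (1.0 - done)

# Example batch of transitions (random placeholders).
next_state = torch.randn(32, 4)
reward = torch.randn(32)
done = torch.zeros(32)
print(td_target(reward, next_state, done, double=True).shape)  # torch.Size([32])
```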

Dissertations / Theses on the topic "Machine and deep learning"

1

Fan, Shuangfei. "Deep Representation Learning on Labeled Graphs." Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/96596.

Abstract:
We introduce recurrent collective classification (RCC), a variant of ICA analogous to recurrent neural network prediction. RCC accommodates any differentiable local classifier and relational feature functions. We provide gradient-based strategies for optimizing over model parameters to more directly minimize the loss function. In our experiments, this direct loss minimization translates to improved accuracy and robustness on real network data. We demonstrate the robustness of RCC in settings where local classification is very noisy, settings that are particularly challenging for ICA. As a new way to train generative models, generative adversarial networks (GANs) have achieved considerable success in image generation, and this framework has also recently been applied to data with graph structures. We identify the drawbacks of existing deep frameworks for generating graphs, and we propose labeled-graph generative adversarial networks (LGGAN) to train deep generative models for graph-structured data with node labels. We test the approach on various types of graph datasets, such as collections of citation networks and protein graphs. Experiment results show that our model can generate diverse labeled graphs that match the structural characteristics of the training data and outperforms all baselines in terms of quality, generality, and scalability. To further evaluate the quality of the generated graphs, we apply it to a downstream task for graph classification, and the results show that LGGAN can better capture the important aspects of the graph structure.
Doctor of Philosophy
Graphs are one of the most important and powerful data structures for conveying the complex and correlated information among data points. In this research, we aim to provide more robust and accurate models for some graph specific tasks, such as collective classification and graph generation, by designing deep learning models to learn better task-specific representations for graphs. First, we studied the collective classification problem in graphs and proposed recurrent collective classification, a variant of the iterative classification algorithm that is more robust to situations where predictions are noisy or inaccurate. Then we studied the problem of graph generation using deep generative models. We first proposed a deep generative model using the GAN framework that generates labeled graphs. Then in order to support more applications and also get more control over the generated graphs, we extended the problem of graph generation to conditional graph generation which can then be applied to various applications for modeling graph evolution and transformation.
2

Zhuang, Zhongfang. "Deep Learning on Attributed Sequences." Digital WPI, 2019. https://digitalcommons.wpi.edu/etd-dissertations/507.

Abstract:
Recent research in feature learning has been extended to sequence data, where each instance consists of a sequence of heterogeneous items with a variable length. However, in many real-world applications, the data exists in the form of attributed sequences, which are composed of a set of fixed-size attributes and variable-length sequences with dependencies between them. In the attributed sequence context, feature learning remains challenging due to the dependencies between sequences and their associated attributes. In this dissertation, we focus on analyzing and building deep learning models for four new problems on attributed sequences. First, we propose a framework, called NAS, to produce feature representations of attributed sequences in an unsupervised fashion. NAS is capable of producing task-independent embeddings that can be used in various mining tasks on attributed sequences. Second, we study the problem of deep metric learning on attributed sequences. The goal is to learn a distance metric based on pairwise user feedback. For this task, we propose a framework, called MLAS, to learn a distance metric that measures the similarity and dissimilarity between attributed sequence feedback pairs. Third, we study the problem of one-shot learning on attributed sequences. This problem is important for a variety of real-world applications ranging from fraud prevention to network intrusion detection. We design a deep learning framework OLAS to tackle this problem. Once OLAS is trained, we can then use it to make predictions not only for new data but also for entire previously unseen classes. Lastly, we investigate the problem of attributed sequence classification with an attention model. This is challenging because we now need to assess the importance of each item in each sequence considering both the sequence itself and the associated attributes. In this work, we propose a framework, called AMAS, to classify attributed sequences using the information from the sequences, metadata, and the computed attention. Our extensive experiments on real-world datasets demonstrate that the proposed solutions significantly improve the performance of each task over state-of-the-art methods on attributed sequences.
3

Elmarakeby, Haitham Abdulrahman. "Deep Learning for Biological Problems." Diss., Virginia Tech, 2017. http://hdl.handle.net/10919/86264.

Abstract:
The last decade has witnessed a tremendous increase in the amount of available biological data. Different technologies for measuring the genome, epigenome, transcriptome, proteome, metabolome, and microbiome in different organisms are producing large amounts of high-dimensional data every day. High-dimensional data provides unprecedented challenges and opportunities to gain a better understanding of biological systems. Unlike other data types, biological data imposes more constraints on researchers. Biologists are not only interested in accurate predictive models that capture complex input-output relationships, but they also seek a deep understanding of these models. In the last few years, deep models have achieved better performance in computational prediction tasks compared to other approaches. Deep models have been extensively used in processing natural data, such as images, text, and recently sound. However, application of deep models in biology is limited. Here, I propose to use deep models for output prediction, dimension reduction, and feature selection of biological data to get better interpretation and understanding of biological systems. I demonstrate the applicability of deep models in a domain that has a high and direct impact on health care. In this research, novel deep learning models have been introduced to solve pressing biological problems. The research shows that deep models can be used to automatically extract features from raw inputs without the need to manually craft features. Deep models are used to reduce the dimensionality of the input space, which resulted in faster training. Deep models are shown to have better performance and less variant output when compared to other shallow models even when an ensemble of shallow models is used. Deep models are shown to be able to process non-classical inputs such as sequences. Deep models are shown to be able to naturally process input sequences to automatically extract useful features.
Ph. D.
4

Arnold, Ludovic. "Learning Deep Representations: Toward a better new understanding of the deep learning paradigm." PhD thesis, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00842447.

Abstract:
Since 2006, deep learning algorithms which rely on deep architectures with several layers of increasingly complex representations have been able to outperform state-of-the-art methods in several settings. Deep architectures can be very efficient in terms of the number of parameters required to represent complex operations which makes them very appealing to achieve good generalization with small amounts of data. Although training deep architectures has traditionally been considered a difficult problem, a successful approach has been to employ an unsupervised layer-wise pre-training step to initialize deep supervised models. First, unsupervised learning has many benefits w.r.t. generalization because it only relies on unlabeled data which is easily found. Second, the possibility to learn representations layer by layer instead of all layers at once improves generalization further and reduces computational time. However, deep learning is a very recent approach and still poses a lot of theoretical and practical questions concerning the consistency of layer-wise learning with many layers and difficulties such as evaluating performance, performing model selection and optimizing layers. In this thesis we first discuss the limitations of the current variational justification for layer-wise learning which does not generalize well to many layers. We ask if a layer-wise method can ever be truly consistent, i.e. capable of finding an optimal deep model by training one layer at a time without knowledge of the upper layers. We find that layer-wise learning can in fact be consistent and can lead to optimal deep generative models. To do this, we introduce the Best Latent Marginal (BLM) upper bound, a new criterion which represents the maximum log-likelihood of a deep generative model where the upper layers are unspecified. We prove that maximizing this criterion for each layer leads to an optimal deep architecture, provided the rest of the training goes well. Although this criterion cannot be computed exactly, we show that it can be maximized effectively by auto-encoders when the encoder part of the model is allowed to be as rich as possible. This gives a new justification for stacking models trained to reproduce their input and yields better results than the state-of-the-art variational approach. Additionally, we give a tractable approximation of the BLM upper-bound and show that it can accurately estimate the final log-likelihood of models. Taking advantage of these theoretical advances, we propose a new method for performing layer-wise model selection in deep architectures, and a new criterion to assess whether adding more layers is warranted. As for the difficulty of training layers, we also study the impact of metrics and parametrization on the commonly used gradient descent procedure for log-likelihood maximization. We show that gradient descent is implicitly linked with the metric of the underlying space and that the Euclidean metric may often be an unsuitable choice as it introduces a dependence on parametrization and can lead to a breach of symmetry. To mitigate this problem, we study the benefits of the natural gradient and show that it can restore symmetry, regrettably at a high computational cost. We thus propose that a centered parametrization may alleviate the problem with almost no computational overhead.
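As a schematic rendering of the criterion described in this abstract (notation ours, not necessarily the thesis's): for data x_1..x_N, a fixed lower layer P(x | h), and upper layers left unspecified, the Best Latent Marginal upper bound is the log-likelihood obtained when the marginal over the first hidden layer is chosen optimally.

```latex
% Schematic formulation of the BLM upper bound, assuming a discrete hidden
% layer h and the lower-layer conditional P(x | h) already trained.
\[
  \mathcal{U}_{\mathrm{BLM}}(x_{1:N}) \;=\;
  \max_{q \in \Delta(\mathcal{H})} \;
  \frac{1}{N} \sum_{i=1}^{N} \log \sum_{h \in \mathcal{H}} P(x_i \mid h)\, q(h),
\]
% Maximizing this bound layer by layer is the sense in which layer-wise
% training can, in principle, recover an optimal deep generative model,
% provided the upper layers are subsequently trained well.
```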
5

Tegendal, Lukas. "Watermarking in Audio using Deep Learning." Thesis, Linköpings universitet, Datorseende, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-159191.

Abstract:
Watermarking is a technique used to mark the ownership of media such as audio or images by embedding a watermark, e.g. copyright information, into the media. A good watermarking method should perform this embedding without affecting the quality of the media. Recent methods for watermarking in images use deep learning to embed and extract the watermark. In this thesis, we investigate watermarking in the audible frequencies of audio using deep learning. More specifically, we try to create a watermarking method for audio that is robust to noise in the carrier, and that allows the embedded watermark to be extracted from the audio after being played over-the-air. The proposed method consists of two deep convolutional neural networks trained end-to-end on music with simulated noise. Experiments show that the proposed method successfully creates watermarks robust to simulated noise with moderate quality reductions, but it is not robust to the real-world noise introduced after playing and recording the audio over-the-air.
6

Shi, Shaohuai. "Communication optimizations for distributed deep learning." HKBU Institutional Repository, 2020. https://repository.hkbu.edu.hk/etd_oa/813.

Abstract:
With the increasing amount of data and the growing computing power, deep learning techniques using deep neural networks (DNNs) have been successfully applied in many practical artificial intelligence applications. The mini-batch stochastic gradient descent (SGD) algorithm and its variants are the most widely used algorithms for training deep models. SGD is an iterative algorithm that updates the model parameters many times while traversing the training data, which is very time-consuming even on a single powerful GPU or TPU. Therefore, it has become common practice to exploit multiple processors (e.g., GPUs or TPUs) to accelerate training using distributed SGD. However, the iterative nature of distributed SGD requires multiple processors to communicate with each other repeatedly to collaboratively update the model parameters. The intensive communication cost easily becomes the system bottleneck and limits scalability. In this thesis, we study communication-efficient techniques for distributed SGD to improve system scalability and thus accelerate training. We identify the performance issues in distributed SGD through benchmarking and modeling and then propose several communication optimization algorithms to address them. First, we build a performance model with a directed acyclic graph (DAG) to model the training process of distributed SGD and verify the model with extensive benchmarks on existing state-of-the-art deep learning frameworks including Caffe, MXNet, TensorFlow, and CNTK. Our benchmarking and modeling show that existing optimizations for the communication problems are sub-optimal, which we address in this thesis. Second, to address the startup problem (due to the high latency of each communication) of layer-wise communications with wait-free backpropagation (WFBP), we propose an optimal gradient merging solution for WFBP, named MG-WFBP, that exploits the layer-wise property to overlap communication tasks with computing tasks and adapts to the training environment. Experiments are conducted on dense GPU clusters with Ethernet and InfiniBand, and the results show that MG-WFBP addresses the startup problem well in distributed training of layer-wise structured DNNs. Third, to make compute-intensive training tasks feasible in GPU clusters with low-bandwidth interconnects, we investigate gradient compression techniques in distributed training. Top-k sparsification can compress the communication traffic well with little impact on model convergence, but its communication complexity is linear in the number of workers, so it cannot scale well in large clusters. To address this problem, we propose a global top-k (gTop-k) sparsification algorithm that reduces the communication complexity to be logarithmic in the number of workers. We also provide a detailed theoretical analysis of the gTop-k SGD training algorithm, and the theoretical results show that gTop-k SGD has the same order of convergence rate as SGD. Experiments on a cluster of up to 64 GPUs verify that gTop-k SGD significantly improves system scalability with only a slight impact on model convergence.
Lastly, to enjoy the benefits of both the pipelining technique and the gradient sparsification algorithm, we propose a new distributed training algorithm, layer-wise adaptive gradient sparsification SGD (LAGS-SGD), which supports layer-wise sparsification and communication, and we prove theoretically and empirically that LAGS-SGD preserves the convergence properties. To further alleviate the impact of the startup problem of layer-wise communications in LAGS-SGD, we also propose an optimal gradient merging solution for LAGS-SGD, named OMGS-SGD, and theoretically prove its optimality. Experimental results on a 16-node GPU cluster connected by 1 Gbps Ethernet show that OMGS-SGD consistently improves system scalability while leaving the model convergence properties unaffected.
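For readers unfamiliar with the compression step underlying gTop-k and LAGS-SGD, the sketch below shows plain top-k gradient sparsification with a local residual; it illustrates the general technique only, not the thesis implementation.

```python
# Minimal sketch of top-k gradient sparsification: keep only the largest-magnitude
# entries of a gradient tensor and accumulate the rest as a local residual that a
# real system would add back before the next iteration. Illustrative only.
import torch

def topk_sparsify(grad: torch.Tensor, density: float = 0.01):
    """Return (sparsified gradient, residual) for a given target density."""
    flat = grad.flatten()
    k = max(1, int(density * flat.numel()))
    _, idx = torch.topk(flat.abs(), k)
    sparse = torch.zeros_like(flat)
    sparse[idx] = flat[idx]
    residual = flat - sparse
    return sparse.view_as(grad), residual.view_as(grad)

g = torch.randn(1024, 1024)
sparse_g, residual = topk_sparsify(g, density=0.001)
print(int((sparse_g != 0).sum()))  # 1048 non-zero entries would be communicated
```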
7

Manda, Kundan Reddy. "Sentiment Analysis of Twitter Data Using Machine Learning and Deep Learning Methods." Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-18447.

Abstract:
Background: Twitter, Facebook, WordPress, etc. act as the major sources of information exchange in today's world. The tweets on Twitter mainly reflect public opinion on a product, event, or topic and thus contain large volumes of unprocessed data. Synthesis and analysis of this data is important and difficult due to the size of the dataset. Sentiment analysis is chosen as the apt method to analyse this data, as it does not go through all the tweets individually but instead relates them to sentiments in terms of positive, negative, and neutral opinions. Sentiment analysis is normally performed in three ways: a machine learning-based approach, a sentiment lexicon-based approach, and a hybrid approach. The machine learning-based approach uses machine learning and deep learning algorithms to analyse the data, whereas the sentiment lexicon-based approach uses lexicons, which contain vocabularies of positive and negative words. The hybrid approach uses a combination of the machine learning and sentiment lexicon approaches for classification. Objectives: The primary objectives of this research are to identify the algorithms and metrics for evaluating the performance of machine learning classifiers, and to compare the metrics from the identified algorithms depending on the size of the dataset, which affects the performance of the best-suited algorithm for sentiment analysis. Method: The method chosen to address the research questions is experimentation, through which the identified algorithms are evaluated with the selected metrics. Results: The identified machine learning algorithms are Naïve Bayes, Random Forest, and XGBoost, and the deep learning algorithm is CNN-LSTM. The algorithms are evaluated and compared with respect to the metrics precision, accuracy, F1 score, and recall. The CNN-LSTM model is best suited for sentiment analysis on Twitter data with respect to the selected size of the dataset. Conclusion: Through the analysis of results, the aim of this research is achieved by identifying the best-suited algorithm for sentiment analysis on Twitter data with respect to the selected dataset. The CNN-LSTM model achieves the highest accuracy of 88% among the selected algorithms for the sentiment analysis of Twitter data with respect to the selected dataset.
8

Flowers, Bryse Austin. "Adversarial RFML: Evading Deep Learning Enabled Signal Classification." Thesis, Virginia Tech, 2019. http://hdl.handle.net/10919/91987.

Abstract:
Deep learning has become an ubiquitous part of research in all fields, including wireless communications. Researchers have shown the ability to leverage deep neural networks (DNNs) that operate on raw in-phase and quadrature samples, termed Radio Frequency Machine Learning (RFML), to synthesize new waveforms, control radio resources, as well as detect and classify signals. While there are numerous advantages to RFML, this thesis answers the question "is it secure?" DNNs have been shown, in other applications such as Computer Vision (CV), to be vulnerable to what are known as adversarial evasion attacks, which consist of corrupting an underlying example with a small, intelligently crafted, perturbation that causes a DNN to misclassify the example. This thesis develops the first threat model that encompasses the unique adversarial goals and capabilities that are present in RFML. Attacks that occur with direct digital access to the RFML classifier are differentiated from physical attacks that must propagate over-the-air (OTA) and are thus subject to impairments due to the wireless channel or inaccuracies in the signal detection stage. This thesis first finds that RFML systems are vulnerable to current adversarial evasion attacks using the well known Fast Gradient Sign Method originally developed for CV applications. However, these current adversarial evasion attacks do not account for the underlying communications and therefore the adversarial advantage is limited because the signal quickly becomes unintelligible. In order to envision new threats, this thesis goes on to develop a new adversarial evasion attack that takes into account the underlying communications and wireless channel models in order to create adversarial evasion attacks with more intelligible underlying communications that generalize to OTA attacks.
Master of Science
Deep learning is beginning to permeate many commercial products and is being included in prototypes for next generation wireless communications devices. This technology can provide huge breakthroughs in autonomy; however, it is not sufficient to study the effectiveness of deep learning in an idealized laboratory environment, the real world is often harsh and/or adversarial. Therefore, it is important to know how, and when, these deep learning enabled devices will fail in the presence of bad actors before they are deployed in high risk environments, such as battlefields or connected autonomous vehicle communications. This thesis studies a small subset of the security vulnerabilities of deep learning enabled wireless communications devices by attempting to evade deep learning enabled signal classification by an eavesdropper while maintaining effective wireless communications with a cooperative receiver. The primary goal of this thesis is to define the threats to, and identify the current vulnerabilities of, deep learning enabled signal classification systems, because a system can only be secured once its vulnerabilities are known.
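The baseline attack referenced in this abstract is the Fast Gradient Sign Method (FGSM). The sketch below shows the standard FGSM perturbation applied to a placeholder classifier over raw I/Q samples; the model, signal shapes, and epsilon are illustrative assumptions, not the thesis's setup.

```python
# Sketch of the FGSM evasion attack: perturb the input in the direction of the
# sign of the loss gradient, bounded by epsilon. Illustrative placeholders only.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(2 * 128, 64), nn.ReLU(), nn.Linear(64, 8))

def fgsm_perturb(x, true_label, epsilon=0.01):
    """Return an adversarial example x + epsilon * sign(grad_x loss)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), true_label)
    loss.backward()
    return (x + epsilon * x.grad.sign()).detach()

# One batch of 16 signals, 2 channels (I and Q) x 128 samples, 8 modulation classes.
signals = torch.randn(16, 2, 128)
labels = torch.randint(0, 8, (16,))
adversarial = fgsm_perturb(signals, labels)
print((adversarial - signals).abs().max())  # bounded by epsilon
```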
9

Franch, Gabriele. "Deep Learning for Spatiotemporal Nowcasting." Doctoral thesis, Università degli studi di Trento, 2021. http://hdl.handle.net/11572/295096.

Abstract:
Nowcasting – short-term forecasting using current observations – is a key challenge that human activities have to face on a daily basis. We heavily rely on short-term meteorological predictions in domains such as aviation, agriculture, mobility, and energy production. One of the most important and challenging tasks for meteorology is the nowcasting of extreme events, whose anticipation is highly needed to mitigate risk in terms of social or economic costs and human safety. The goal of this thesis is to contribute new machine learning methods to improve the spatio-temporal precision of nowcasting of extreme precipitation events. This work relies on recent advances in deep learning for nowcasting, adding methods targeted at improving nowcasting using ensembles and trained on novel original data resources. A new curated multi-year radar scan dataset (TAASRAD19) is introduced that contains more than 350,000 labelled precipitation records over 10 years, providing a baseline benchmark and fostering reproducibility of machine learning modeling. A TrajGRU model is applied to TAASRAD19, and implemented in an operational prototype. The thesis also introduces a novel method for fast analog search based on manifold learning: the tool leverages the entire dataset history in less than 5 seconds and demonstrates the feasibility of predictive ensembles. In the final part of the thesis, the new deep learning architecture ConvSG based on stacked generalization is presented, introducing novel concepts for deep learning in precipitation nowcasting: ConvSG is specifically designed to improve predictions of extreme precipitation regimes over published methods, and shows a 117% skill improvement on extreme rain regimes over a single member. Moreover, ConvSG shows superior or equal skills compared to Lagrangian Extrapolation models for all rain rates, achieving a 49% average improvement in predictive skill over extrapolation on the higher precipitation regimes.
10

Rigaki, Maria. "Adversarial Deep Learning Against Intrusion Detection Classifiers." Thesis, Luleå tekniska universitet, Datavetenskap, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-64577.

Abstract:
Traditional approaches in network intrusion detection follow a signature-based approach; however, the use of anomaly detection approaches based on machine learning techniques has been studied heavily for the past twenty years. The continuous change in the way attacks are appearing, the volume of attacks, as well as the improvements in the big data analytics space, make machine learning approaches more alluring than ever. The intention of this thesis is to show that using machine learning in the intrusion detection domain should be accompanied with an evaluation of its robustness against adversaries. Several adversarial techniques have emerged lately from the deep learning research, largely in the area of image classification. These techniques are based on the idea of introducing small changes in the original input data in order to make a machine learning model misclassify it. This thesis follows a big data analytics methodology and explores adversarial machine learning techniques that have emerged from the deep learning domain, against machine learning classifiers used for network intrusion detection. The study looks at several well known classifiers and studies their performance under attack over several metrics, such as accuracy, F1-score and receiver operating characteristic. The approach used assumes no knowledge of the original classifier and examines both general and targeted misclassification. The results show that using relatively simple methods for generating adversarial samples it is possible to lower the detection accuracy of intrusion detection classifiers from 5% to 28%. Performance degradation is achieved using a methodology that is simpler than previous approaches and it requires only 6.25% change between the original and the adversarial sample, making it a candidate for a practical adversarial approach.

Books on the topic "Machine and deep learning"

1

Kang, Mingu, Sujan Gonugondla, and Naresh R. Shanbhag. Deep In-memory Architectures for Machine Learning. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-35971-3.

2

Tetko, Igor V., Věra Kůrková, Pavel Karpov, and Fabian Theis, eds. Artificial Neural Networks and Machine Learning – ICANN 2019: Deep Learning. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-30484-3.

3

Gopi, E. S., ed. Machine Learning, Deep Learning and Computational Intelligence for Wireless Communication. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-0289-4.

4

Tache, Nicole, ed. Learning TensorFlow: A Guide to Building Deep Learning Systems. Beijing: O'Reilly Media, 2017.

5

Bisong, Ekaba. Building Machine Learning and Deep Learning Models on Google Cloud Platform. Berkeley, CA: Apress, 2019. http://dx.doi.org/10.1007/978-1-4842-4470-8.

6

Mangrulkar, Ramchandra S., Antonis Michalas, Narendra M. Shekokar, Meera Narvekar, and Pallavi V. Chavan. Design of Intelligent Applications Using Machine Learning and Deep Learning Techniques. Boca Raton: Chapman and Hall/CRC, 2021. http://dx.doi.org/10.1201/9781003133681.

7

Iba, Hitoshi. Evolutionary Approach to Machine Learning and Deep Neural Networks. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-0200-8.

8

Suganthi, K., R. Karthik, G. Rajesh, and Peter Ho Chiung Ching. Machine Learning and Deep Learning Techniques in Wireless and Mobile Networking Systems. Boca Raton: CRC Press, 2021. http://dx.doi.org/10.1201/9781003107477.

9

Cinelli, Lucas Pinheiro, Matheus Araújo Marins, Eduardo Antônio Barros da Silva, and Sérgio Lima Netto. Variational Methods for Machine Learning with Applications to Deep Networks. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-70679-1.

10

Devi, K. Gayathri. Artificial Intelligence Trends for Data Analytics Using Machine Learning and Deep Learning Approaches. Edited by Mamata Rath and Nguyen Thi Dieu Linh. Artificial Intelligence (AI): Elementary to Advanced Practices. Boca Raton, FL: CRC Press, 2020. http://dx.doi.org/10.1201/9780367854737.


Book chapters on the topic "Machine and deep learning"

1

Kim, Phil. "Machine Learning." In MATLAB Deep Learning, 1–18. Berkeley, CA: Apress, 2017. http://dx.doi.org/10.1007/978-1-4842-2845-6_1.

2

Vermeulen, Andreas François. "Unsupervised Learning: Deep Learning." In Industrial Machine Learning, 225–41. Berkeley, CA: Apress, 2019. http://dx.doi.org/10.1007/978-1-4842-5316-8_8.

3

Norris, Donald J. "Machine Learning: Deep Learning." In Beginning Artificial Intelligence with the Raspberry Pi, 211–47. Berkeley, CA: Apress, 2017. http://dx.doi.org/10.1007/978-1-4842-2743-5_8.

4

Nath, Vishnu, and Stephen E. Levinson. "Machine Learning." In Autonomous Robotics and Deep Learning, 39–45. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-05603-6_6.

5

Žižka, Jan, František Dařena, and Arnošt Svoboda. "Deep Learning." In Text Mining with Machine Learning, 1st ed., 223–34. Boca Raton: CRC Press, 2019. http://dx.doi.org/10.1201/9780429469275-11.

6

Rebala, Gopinath, Ajay Ravi, and Sanjay Churiwala. "Deep Learning." In An Introduction to Machine Learning, 127–40. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-15729-6_11.

7

Aggarwal, Manasvi, and M. N. Murty. "Deep Learning." In Machine Learning in Social Networks, 35–66. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-33-4022-0_3.

8

Joshi, Ameet V. "Deep Learning." In Machine Learning and Artificial Intelligence, 117–26. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-26622-6_12.

9

Kubat, Miroslav. "Deep Learning." In An Introduction to Machine Learning, 327–51. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-81935-4_16.

10

Sun, Shiliang, Liang Mao, Ziang Dong, and Lidan Wu. "Multiview Deep Learning." In Multiview Machine Learning, 105–38. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-3029-2_8.


Conference papers on the topic "Machine and deep learning"

1

DeGuchy, Omar, Alex Ho, and Roummel F. Marcia. "Image disambiguation with deep neural networks." In Applications of Machine Learning, edited by Michael E. Zelinski, Tarek M. Taha, Jonathan Howe, Abdul A. Awwal, and Khan M. Iftekharuddin. SPIE, 2019. http://dx.doi.org/10.1117/12.2530230.

2

Kee Wong, Yew. "Machine Learning and Deep Learning Technologies." In 2nd International Conference on Machine Learning, IOT and Blockchain (MLIOB 2021). Academy and Industry Research Collaboration Center (AIRCC), 2021. http://dx.doi.org/10.5121/csit.2021.111214.

Abstract:
In the information era, enormous amounts of data have become available to decision makers. Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle and extract value and knowledge from these datasets. Machine learning is a method of data analysis that automates analytical model building. It is a branch of artificial intelligence based on the idea that systems can learn from data, identify patterns and make decisions with minimal human intervention. Such minimal human intervention can be provided using machine learning, which is the application of advanced deep learning techniques on big data. This paper aims to analyse some of the different machine learning and deep learning algorithms and methods, as well as the opportunities provided by AI applications in various decision making domains.
3

"DEEP-ML 2019 Organizing Committee." In 2019 International Conference on Deep Learning and Machine Learning in Emerging Applications (Deep-ML). IEEE, 2019. http://dx.doi.org/10.1109/deep-ml.2019.00006.

4

"DEEP-ML 2019 Program Committee." In 2019 International Conference on Deep Learning and Machine Learning in Emerging Applications (Deep-ML). IEEE, 2019. http://dx.doi.org/10.1109/deep-ml.2019.00007.

5

Alom, Zahangir, Theus Aspiras, Tarek Taha, and Vijayan K. Asari. "Histopathological image classification with deep convolutional neural networks." In Applications of Machine Learning, edited by Michael E. Zelinski, Tarek M. Taha, Jonathan Howe, Abdul A. Awwal, and Khan M. Iftekharuddin. SPIE, 2019. http://dx.doi.org/10.1117/12.2530291.

6

Sengupta, Sourya, Amitojdeep Singh, John Zelek, and Vasudevan Lakshminarayanan. "Cross-domain diabetic retinopathy detection using deep learning." In Applications of Machine Learning, edited by Michael E. Zelinski, Tarek M. Taha, Jonathan Howe, Abdul A. Awwal, and Khan M. Iftekharuddin. SPIE, 2019. http://dx.doi.org/10.1117/12.2529450.

7

Mandal, Aditya Chandra, Abhijeet Phatak, Jayaram Jothi balaji, and Vasudevan Lakshminarayanan. "A deep-learning approach to pupillometry." In Applications of Machine Learning 2021, edited by Michael E. Zelinski, Tarek M. Taha, and Jonathan Howe. SPIE, 2021. http://dx.doi.org/10.1117/12.2594315.

8

Coates, Adam. "Deep Learning for Machine Vision." In British Machine Vision Conference 2013. British Machine Vision Association, 2013. http://dx.doi.org/10.5244/c.27.1.

9

"[Title page i]." In 2019 International Conference on Deep Learning and Machine Learning in Emerging Applications (Deep-ML). IEEE, 2019. http://dx.doi.org/10.1109/deep-ml.2019.00001.

10

"[Title page iii]." In 2019 International Conference on Deep Learning and Machine Learning in Emerging Applications (Deep-ML). IEEE, 2019. http://dx.doi.org/10.1109/deep-ml.2019.00002.


Reports on the topic "Machine and deep learning"

1

Bruckner, Daniel. ML-o-Scope: A Diagnostic Visualization System for Deep Machine Learning Pipelines. Fort Belvoir, VA: Defense Technical Information Center, May 2014. http://dx.doi.org/10.21236/ada605112.

2

Vesselinov, Velimir Valentinov. Machine Learning. Office of Scientific and Technical Information (OSTI), January 2019. http://dx.doi.org/10.2172/1492563.

3

Valiant, L. G. Machine Learning. Fort Belvoir, VA: Defense Technical Information Center, January 1993. http://dx.doi.org/10.21236/ada283386.

4

Chase, Melissa P. Machine Learning. Fort Belvoir, VA: Defense Technical Information Center, April 1990. http://dx.doi.org/10.21236/ada223732.

5

Kagie, Matthew J., and Park Hays. FORTE Machine Learning. Office of Scientific and Technical Information (OSTI), August 2016. http://dx.doi.org/10.2172/1561828.

6

Lin, Youzuo. Machine Learning in Subsurface. Office of Scientific and Technical Information (OSTI), August 2018. http://dx.doi.org/10.2172/1467315.

7

Skryzalin, Jacek, Kenneth Goss, and Benjamin Jackson. Securing machine learning models. Office of Scientific and Technical Information (OSTI), September 2020. http://dx.doi.org/10.2172/1661020.

8

Mohan, Arvind. Machine Learning for Turbulence. Office of Scientific and Technical Information (OSTI), May 2021. http://dx.doi.org/10.2172/1782626.

9

Catanach, Thomas, and Jed Duersch. Efficient Generalizable Deep Learning. Office of Scientific and Technical Information (OSTI), September 2018. http://dx.doi.org/10.2172/1760400.

10

Vesselinov, Velimir Valentinov. TensorDecompositions: Unsupervised machine learning methods. Office of Scientific and Technical Information (OSTI), February 2019. http://dx.doi.org/10.2172/1493534.

