Academic literature on the topic 'Convolutional Deep Belief Networks'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Convolutional Deep Belief Networks.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Convolutional Deep Belief Networks"

1

Chu, Joseph Lin, and Adam Krzyżak. "The Recognition of Partially Occluded Objects with Support Vector Machines, Convolutional Neural Networks and Deep Belief Networks." Journal of Artificial Intelligence and Soft Computing Research 4, no. 1 (January 1, 2014): 5–19. http://dx.doi.org/10.2478/jaiscr-2014-0021.

Full text
Abstract:
Biologically inspired artificial neural networks have been widely used for machine learning tasks such as object recognition. Deep architectures, such as the Convolutional Neural Network and the Deep Belief Network, have recently been implemented successfully for object recognition tasks. We conduct experiments to test the hypothesis that certain primarily generative models, such as the Deep Belief Network, should perform better on the occluded object recognition task than purely discriminative models such as Convolutional Neural Networks and Support Vector Machines. When the generative models are run in a partially discriminative manner, the data does not support the hypothesis. It is also found that the implementation of Gaussian visible units in a Deep Belief Network trained on occluded image data allows it to also learn to effectively classify non-occluded images.
APA, Harvard, Vancouver, ISO, and other styles
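The Gaussian visible units mentioned in the Chu and Krzyżak abstract replace binary visible units with real-valued ones so that a DBN can model image intensities directly. As a rough illustration only, here is a minimal NumPy sketch of the standard Gaussian-Bernoulli RBM conditionals with unit variance; the variable names and sizes are placeholders, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy Gaussian-Bernoulli RBM with unit-variance visible units (an assumption;
# the paper does not spell out its exact parameterisation).
n_visible, n_hidden = 64, 32
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
b_vis = np.zeros(n_visible)   # visible (Gaussian) biases
b_hid = np.zeros(n_hidden)    # hidden (binary) biases

def sample_hidden(v):
    """p(h_j = 1 | v) = sigmoid(b_j + v . W[:, j]) for unit-variance Gaussian visibles."""
    p = sigmoid(b_hid + v @ W)
    return p, (rng.random(n_hidden) < p).astype(float)

def sample_visible(h):
    """v_i | h ~ N(b_i + W[i, :] . h, 1): real-valued reconstruction of pixel intensities."""
    mean = b_vis + W @ h
    return mean, mean + rng.standard_normal(n_visible)

# One Gibbs step starting from a random real-valued "image patch".
v0 = rng.standard_normal(n_visible)
p_h, h0 = sample_hidden(v0)
v_mean, v1 = sample_visible(h0)
```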
2

Huo, Guang, Qi Zhang, Yangrui Zhang, Yuanning Liu, Huan Guo, and Wenyu Li. "Multi-Source Heterogeneous Iris Recognition Using Stacked Convolutional Deep Belief Networks-Deep Belief Network Model." Pattern Recognition and Image Analysis 31, no. 1 (January 2021): 81–90. http://dx.doi.org/10.1134/s1054661821010119.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Li, Yang, and Chu Li. "Singer Recognition Based on Convolutional Deep Belief Networks." IOP Conference Series: Materials Science and Engineering 435 (November 5, 2018): 012005. http://dx.doi.org/10.1088/1757-899x/435/1/012005.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Phan, NhatHai, Xintao Wu, and Dejing Dou. "Preserving differential privacy in convolutional deep belief networks." Machine Learning 106, no. 9-10 (July 13, 2017): 1681–704. http://dx.doi.org/10.1007/s10994-017-5656-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Tang, Binbin, Xiao Liu, Jie Lei, Mingli Song, Dapeng Tao, Shuifa Sun, and Fangmin Dong. "DeepChart: Combining deep convolutional networks and deep belief networks in chart classification." Signal Processing 124 (July 2016): 156–61. http://dx.doi.org/10.1016/j.sigpro.2015.09.027.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Rakhmanenko, I. A., A. A. Shelupanov, and E. Y. Kostyuchenko. "Automatic text-independent speaker verification using convolutional deep belief network." Computer Optics 44, no. 4 (August 2020): 596–605. http://dx.doi.org/10.18287/2412-6179-co-621.

Full text
Abstract:
This paper is devoted to the use of the convolutional deep belief network as a speech feature extractor for automatic text-independent speaker verification. The paper describes the scope and problems of automatic speaker verification systems. Types of modern speaker verification systems and types of speech features used in speaker verification systems are considered. The structure and learning algorithm of convolutional deep belief networks are described. The use of speech features extracted from three layers of a trained convolutional deep belief network is proposed. Experimental studies of the proposed features were performed on two speech corpora: the authors' own corpus, including audio recordings of 50 speakers, and the TIMIT corpus, including audio recordings of 630 speakers. The accuracy of the proposed features was assessed using different types of classifiers. Direct use of these features did not increase the accuracy compared to the use of traditional spectral speech features, such as mel-frequency cepstral coefficients. However, the use of these features in a classifier ensemble made it possible to reduce the equal error rate to 0.21% on the 50-speaker corpus and to 0.23% on the TIMIT corpus.
APA, Harvard, Vancouver, ISO, and other styles
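Rakhmanenko et al. report that the CDBN-derived features help only when combined in a classifier ensemble. A hedged scikit-learn sketch of that general idea follows, with random arrays standing in for activations taken from three CDBN layers and with classifier choices that are ours rather than the paper's.

```python
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Placeholder activations from three CDBN layers for 200 utterances (hypothetical shapes).
layer1 = rng.standard_normal((200, 128))
layer2 = rng.standard_normal((200, 64))
layer3 = rng.standard_normal((200, 32))
X = np.hstack([layer1, layer2, layer3])      # concatenated multi-layer features
y = rng.integers(0, 2, size=200)             # 1 = target speaker, 0 = impostor

# Soft-voting ensemble over two simple classifiers (the paper's exact ensemble differs).
ensemble = VotingClassifier(
    estimators=[
        ("svm", make_pipeline(StandardScaler(), SVC(probability=True))),
        ("logreg", make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))),
    ],
    voting="soft",
)
ensemble.fit(X[:150], y[:150])
scores = ensemble.predict_proba(X[150:])[:, 1]   # verification scores for held-out trials
```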
7

Wang, Haibo, and Xiaojun Bi. "Contractive Slab and Spike Convolutional Deep Belief Network." Neural Processing Letters 49, no. 3 (August 9, 2018): 1697–722. http://dx.doi.org/10.1007/s11063-018-9897-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Nizami Huseyn, Elcin. "Application of Deep Learning Technology in Disease Diagnosis." Nature and Science 04, no. 05 (December 28, 2020): 4–11. http://dx.doi.org/10.36719/2707-1146/05/4-11.

Full text
Abstract:
The rapid development of deep learning technology provides new methods and ideas for assisting physicians in high-precision disease diagnosis. This article reviews the principles and features of deep learning models commonly used in medical disease diagnosis, namely convolutional neural networks, deep belief networks, restricted Boltzmann machines, and recurrent neural network models. Based on several typical diseases, the application of deep learning technology in the field of disease diagnosis is introduced; finally, the future development direction is proposed based on the limitations of current deep learning technology in disease diagnosis. Keywords: Artificial Intelligence; Deep Learning; Disease Diagnosis; Neural Network
APA, Harvard, Vancouver, ISO, and other styles
9

Brosch, Tom, and Roger Tam. "Efficient Training of Convolutional Deep Belief Networks in the Frequency Domain for Application to High-Resolution 2D and 3D Images." Neural Computation 27, no. 1 (January 2015): 211–27. http://dx.doi.org/10.1162/neco_a_00682.

Full text
Abstract:
Deep learning has traditionally been computationally expensive, and advances in training methods have been the prerequisite for improving its efficiency in order to expand its application to a variety of image classification problems. In this letter, we address the problem of efficient training of convolutional deep belief networks by learning the weights in the frequency domain, which eliminates the time-consuming calculation of convolutions. An essential consideration in the design of the algorithm is to minimize the number of transformations to and from frequency space. We have evaluated the running time improvements using two standard benchmark data sets, showing a speed-up of up to 8 times on 2D images and up to 200 times on 3D volumes. Our training algorithm makes training of convolutional deep belief networks on 3D medical images with a resolution of up to 128 × 128 × 128 voxels practical, which opens new directions for using deep learning for medical image analysis.
APA, Harvard, Vancouver, ISO, and other styles
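Brosch and Tam speed up CDBN training by moving the convolutions into the frequency domain, where spatial convolution becomes element-wise multiplication of Fourier transforms. The toy NumPy/SciPy sketch below only demonstrates that equivalence for a single 2D filter; the paper's contribution of minimising the number of transforms during training is not reproduced here.

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
image = rng.standard_normal((16, 16))
kernel = rng.standard_normal((5, 5))

# Zero-pad both arrays to the full linear-convolution size, multiply the spectra,
# and invert: conv(image, kernel) == IFFT(FFT(image) * FFT(kernel)).
out_shape = (image.shape[0] + kernel.shape[0] - 1,
             image.shape[1] + kernel.shape[1] - 1)
freq_result = np.real(np.fft.ifft2(np.fft.fft2(image, out_shape) *
                                   np.fft.fft2(kernel, out_shape)))

direct_result = convolve2d(image, kernel, mode="full")
assert np.allclose(freq_result, direct_result)   # both paths give the same feature map
```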
10

Kumar, P. S. Jagadeesh, Yanmin Yuan, Yang Yung, Mingmin Pan, and Wenli Hu. "Robotic simulation of human brain using convolutional deep belief networks." International Journal of Intelligent Machines and Robotics 1, no. 2 (2018): 180. http://dx.doi.org/10.1504/ijimr.2018.094922.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Convolutional Deep Belief Networks"

1

Liu, Ye. "Application of Convolutional Deep Belief Networks to Domain Adaptation." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1397728737.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Nassar, Alaa S. N. "A Hybrid Multibiometric System for Personal Identification Based on Face and Iris Traits. The Development of an automated computer system for the identification of humans by integrating facial and iris features using Localization, Feature Extraction, Handcrafted and Deep learning Techniques." Thesis, University of Bradford, 2018. http://hdl.handle.net/10454/16917.

Full text
Abstract:
Multimodal biometric systems have been widely applied in many real-world applications due to their ability to deal with a number of significant limitations of unimodal biometric systems, including sensitivity to noise, population coverage, intra-class variability, non-universality, and vulnerability to spoofing. This PhD thesis is focused on the combination of the face and the left and right irises in a unified hybrid multimodal biometric identification system using different fusion approaches at the score and rank level. Firstly, the facial features are extracted using a novel multimodal local feature extraction approach, termed the Curvelet-Fractal approach, which is based on merging the advantages of the Curvelet transform with the Fractal dimension. Secondly, a novel framework based on merging the advantages of local handcrafted feature descriptors with deep learning approaches is proposed, the Multimodal Deep Face Recognition (MDFR) framework, to address the face recognition problem in unconstrained conditions. Thirdly, an efficient deep learning system is employed, termed IrisConvNet, whose architecture is based on a combination of a Convolutional Neural Network (CNN) and a Softmax classifier to extract discriminative features from an iris image. Finally, the performance of the unimodal and multimodal systems has been evaluated by conducting a number of extensive experiments on large-scale unimodal databases (FERET, CAS-PEAL-R1, LFW, CASIA-Iris-V1, CASIA-Iris-V3 Interval, MMU1, and IITD) and the SDUMLA-HMT multimodal dataset. The results obtained have demonstrated the superiority of the proposed systems over previous work, achieving new state-of-the-art recognition rates on all the employed datasets with less time required to recognize the person's identity.
Higher Committee for Education Development in Iraq
APA, Harvard, Vancouver, ISO, and other styles
3

Mancevo, del Castillo Ayala Diego. "Compressing Deep Convolutional Neural Networks." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-217316.

Full text
Abstract:
Deep Convolutional Neural Networks and "deep learning" in general stand at the cutting edge on a range of applications, from image based recognition and classification to natural language processing, speech and speaker recognition and reinforcement learning. Very deep models however are often large, complex and computationally expensive to train and evaluate. Deep learning models are thus seldom deployed natively in environments where computational resources are scarce or expensive. To address this problem we turn our attention towards a range of techniques that we collectively refer to as "model compression" where a lighter student model is trained to approximate the output produced by the model we wish to compress. To this end, the output from the original model is used to craft the training labels of the smaller student model. This work contains some experiments on CIFAR-10 and demonstrates how to use the aforementioned techniques to compress a people counting model whose precision, recall and F1-score are improved by as much as 14% against our baseline.
APA, Harvard, Vancouver, ISO, and other styles
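The model-compression scheme described in the thesis abstract above trains a lighter student model on labels crafted from the original model's outputs. The sketch below shows the common soft-target (distillation-style) variant of this idea in NumPy; the temperature, array shapes, and loss form are our assumptions, not the thesis's exact recipe.

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
teacher_logits = rng.standard_normal((8, 10))   # outputs of the large model on a batch
student_logits = rng.standard_normal((8, 10))   # outputs of the small student model
T = 4.0                                         # temperature (a common but arbitrary choice)

soft_targets = softmax(teacher_logits, T)       # softened labels crafted from the teacher
student_probs = softmax(student_logits, T)

# Cross-entropy of the student against the softened teacher distribution; minimising this
# (typically alongside the ordinary hard-label loss) is the distillation-style objective.
distill_loss = -np.mean(np.sum(soft_targets * np.log(student_probs + 1e-12), axis=1))
print(distill_loss)
```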
4

Faulkner, Ryan. "Dyna learning with deep belief networks." Thesis, McGill University, 2011. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=97177.

Full text
Abstract:
The objective of reinforcement learning is to find "good" actions in an environment where feedback is provided through a numerical reward, and the current state (i.e. sensory input) is assumed to be available at each time step. The notion of "good" is defined as maximizing the expected cumulative returns over time. Sometimes it is useful to construct models of the environment to aid in solving the problem. We investigate Dyna-style reinforcement learning, a powerful approach for problems where not much real data is available. The main idea is to supplement real trajectories with simulated ones sampled from a learned model of the environment. However, in large state spaces, the problem of learning a good generative model of the environment has been open so far. We propose to use deep belief networks to learn an environment model. Deep belief networks (Hinton, 2006) are generative models that have been effective in learning the time dependency relationships among complex data. It has been shown that such models can be learned in a reasonable amount of time when they are built using energy models. We present our algorithm for using deep belief networks as a generative model for simulating the environment within the Dyna architecture, along with very promising empirical results.
APA, Harvard, Vancouver, ISO, and other styles
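Dyna-style learning, as described above, supplements real transitions with simulated ones drawn from a learned model of the environment. Below is a minimal tabular Dyna-Q sketch in which a simple transition table stands in for the deep belief network model proposed in the thesis; the toy environment and hyperparameters are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 10, 2
Q = np.zeros((n_states, n_actions))
model = {}                      # (s, a) -> (reward, next_state); stand-in for the learned DBN model
alpha, gamma, n_planning = 0.1, 0.95, 20

def env_step(s, a):
    """Toy deterministic chain environment, used only for illustration."""
    s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    reward = 1.0 if s_next == n_states - 1 else 0.0
    return reward, s_next

s = 0
for _ in range(500):
    # Epsilon-greedy action from real experience.
    a = int(rng.integers(n_actions)) if rng.random() < 0.1 else int(np.argmax(Q[s]))
    r, s_next = env_step(s, a)
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    model[(s, a)] = (r, s_next)                             # update the learned model

    for _ in range(n_planning):                             # planning: replay simulated experience
        (ps, pa), (pr, ps_next) = list(model.items())[int(rng.integers(len(model)))]
        Q[ps, pa] += alpha * (pr + gamma * Q[ps_next].max() - Q[ps, pa])

    s = 0 if s_next == n_states - 1 else s_next             # reset at the goal state
```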
5

Avramova, Vanya. "Curriculum Learning with Deep Convolutional Neural Networks." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-178453.

Full text
Abstract:
Curriculum learning is a machine learning technique inspired by the way humans acquire knowledge and skills: by mastering simple concepts first, and progressing through information with increasing difficulty to grasp more complex topics. Curriculum Learning, and its derivatives Self Paced Learning (SPL) and Self Paced Learning with Diversity (SPLD), have been previously applied within various machine learning contexts: Support Vector Machines (SVMs), perceptrons, and multi-layer neural networks, where they have been shown to improve both training speed and model accuracy. This project ventured to apply the techniques within the previously unexplored context of deep learning, by investigating how they affect the performance of a deep convolutional neural network (ConvNet) trained on a large labeled image dataset. The curriculum was formed by presenting the training samples to the network in order of increasing difficulty, measured by the sample's loss value based on the network's objective function. The project evaluated SPL and SPLD, and proposed two new curriculum learning sub-variants, p-SPL and p-SPLD, which allow for a smooth progression of sample inclusion during training. The project also explored the "inversed" versions of the SPL, SPLD, p-SPL and p-SPLD techniques, where the samples were selected for the curriculum in order of decreasing difficulty. The experiments demonstrated that all learning variants perform fairly similarly, within ≈1% average test accuracy margin, based on five trained models per variant. Surprisingly, models trained with the inversed version of the algorithms performed slightly better than the standard curriculum training variants. The SPLD-Inversed, SPL-Inversed and SPLD networks also registered marginally higher accuracy results than the network trained with the usual random sample presentation. The results suggest that while sample ordering does affect the training process, the optimal order in which samples are presented may vary based on the data set and algorithm used. The project also investigated whether some samples were more beneficial for the training process than others. Based on sample difficulty, subsets of samples were removed from the training data set. The models trained on the remaining samples were compared to a default model trained on all samples. On the data set used, removing the "easiest" 10% of samples had no effect on the achieved test accuracy compared to the default model, and removing the "easiest" 40% of samples reduced model accuracy by only ≈1% (compared to ≈6% loss when 40% of the "most difficult" samples were removed, and ≈3% loss when 40% of samples were randomly removed). Taking away the "easiest" samples first (up to a certain percentage of the data set) affected the learning process less negatively than removing random samples, while removing the "most difficult" samples first had the most detrimental effect. The results suggest that the networks derived most learning value from the "difficult" samples, and that a large subset of the "easiest" samples can be excluded from training with minimal impact on the attained model accuracy. Moreover, it is possible to identify these samples early during training, which can greatly reduce the training time for these models.
APA, Harvard, Vancouver, ISO, and other styles
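Self-paced learning, one of the variants evaluated above, admits into the curriculum only those samples whose current loss falls below a threshold that is gradually relaxed, so easy samples are learned first. The schematic NumPy sketch below illustrates that selection rule on a toy regression problem; the loss, threshold schedule, and update step are illustrative assumptions rather than the thesis's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data and a one-parameter "model" purely for illustration.
x = rng.standard_normal(200)
y = 3.0 * x + 0.1 * rng.standard_normal(200)
w = 0.0

threshold, growth = 0.5, 1.5       # self-paced threshold and its growth factor (arbitrary)
for epoch in range(10):
    losses = (y - w * x) ** 2                  # per-sample loss under the current model
    selected = losses < threshold              # "easy" samples only: loss below the threshold
    if selected.any():
        # Gradient step on the selected subset (plain least squares here).
        grad = -2.0 * np.mean((y[selected] - w * x[selected]) * x[selected])
        w -= 0.1 * grad
    threshold *= growth                        # relax the threshold so harder samples join later
    print(epoch, selected.sum(), round(w, 3))
```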
6

Ayoub, Issa. "Multimodal Affective Computing Using Temporal Convolutional Neural Network and Deep Convolutional Neural Networks." Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/39337.

Full text
Abstract:
Affective computing has gained significant attention from researchers in the last decade due to the wide variety of applications that can benefit from this technology. Often, researchers describe affect using emotional dimensions such as arousal and valence. Valence refers to the spectrum of negative to positive emotions while arousal determines the level of excitement. Describing emotions through continuous dimensions (e.g. valence and arousal) allows us to encode subtle and complex affects as opposed to discrete emotions, such as the basic six emotions: happy, anger, fear, disgust, sad and neutral. Recognizing spontaneous and subtle emotions remains a challenging problem for computers. In our work, we employ two modalities of information: video and audio. Hence, we extract visual and audio features using deep neural network models. Given that emotions are time-dependent, we apply the Temporal Convolutional Neural Network (TCN) to model the variations in emotions. Additionally, we investigate an alternative model that combines a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN). Given our inability to fit the latter deep model into the main memory, we divide the RNN into smaller segments and propose a scheme to back-propagate gradients across all segments. We configure the hyperparameters of all models using Gaussian processes to obtain a fair comparison between the proposed models. Our results show that TCN outperforms RNN for the recognition of the arousal and valence emotional dimensions. Therefore, we propose the adoption of TCN for emotion detection problems as a baseline method for future work. Our experimental results show that TCN outperforms all RNN based models yielding a concordance correlation coefficient of 0.7895 (vs. 0.7544) on valence and 0.8207 (vs. 0.7357) on arousal on the validation dataset of SEWA dataset for emotion prediction.
APA, Harvard, Vancouver, ISO, and other styles
7

Härenstam-Nielsen, Linus. "Deep Convolutional Networks with Recurrence for Eye-Tracking." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-240608.

Full text
Abstract:
This thesis explores the use of temporally recurrent connections in convolutional neural networks for eye-tracking. We specifically investigate the impact of replacing the convolutional layers in a regular CNN with convolutional LSTMs and replacing the fully connected feature layers with regular RNNs and LSTMs. This requires us to transition from a static single-frame input model to a time-dependent multiple-frame input model. Doing so naturally introduces extra complexity to the eye-tracking pipeline, so we highlight the advantages and disadvantages. Our results show that adding LSTM-cells to the convolutional layers and RNN-cells to the feature layers can increase eye-tracking performance, but also that LSTM-recurrence in the feature layers can be detrimental to performance.
APA, Harvard, Vancouver, ISO, and other styles
8

Larsson, Susanna. "Monocular Depth Estimation Using Deep Convolutional Neural Networks." Thesis, Linköpings universitet, Datorseende, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-159981.

Full text
Abstract:
For a long time stereo-cameras have been deployed in visual Simultaneous Localization And Mapping (SLAM) systems to gain 3D information. Even though stereo-cameras show good performance, the main disadvantage is the complex and expensive hardware setup it requires, which limits the use of the system. A simpler and cheaper alternative are monocular cameras, however monocular images lack the important depth information. Recent works have shown that having access to depth maps in monocular SLAM system is beneficial since they can be used to improve the 3D reconstruction. This work proposes a deep neural network that predicts dense high-resolution depth maps from monocular RGB images by casting the problem as a supervised regression task. The network architecture follows an encoder-decoder structure in which multi-scale information is captured and skip-connections are used to recover details. The network is trained and evaluated on the KITTI dataset achieving results comparable to state-of-the-art methods. With further development, this network shows good potential to be incorporated in a monocular SLAM system to improve the 3D reconstruction.
APA, Harvard, Vancouver, ISO, and other styles
9

Imbulgoda, Liyangahawatte Gihan Janith Mendis. "Hardware Implementation and Applications of Deep Belief Networks." University of Akron / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=akron1476707730643462.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Caron, Mathilde. "Unsupervised Representation Learning with Clustering in Deep Convolutional Networks." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-227926.

Full text
Abstract:
This master thesis tackles the problem of unsupervised learning of visual representations with deep Convolutional Neural Networks (CNN). Closing the gap between unsupervised and supervised representation learning is one of the main current challenges in image recognition. We propose a novel and simple way of training CNN on fully unlabeled datasets. Our method jointly optimizes a grouping of the representations and trains a CNN using the groups as supervision. We evaluate the models trained with our method on standard transfer learning experiments from the literature. We find that our method outperforms all self-supervised and unsupervised state-of-the-art approaches. More importantly, our method outperforms those methods even when the unsupervised training set is not ImageNet but an arbitrary subset of images from Flickr.
APA, Harvard, Vancouver, ISO, and other styles
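The method evaluated in the thesis above alternates between clustering the network's representations and training the network with the cluster assignments as pseudo-labels. A hedged scikit-learn sketch of one such alternation follows, with random vectors standing in for CNN features and a linear classifier standing in for the network's classification head.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = rng.standard_normal((500, 64))    # placeholder for CNN features of unlabeled images

for it in range(3):
    # Step 1: group the current representations (the number of clusters is chosen arbitrarily).
    pseudo_labels = KMeans(n_clusters=10, n_init=10, random_state=it).fit_predict(features)

    # Step 2: train a classifier on the pseudo-labels; in the real method this step updates
    # the CNN itself, which in turn changes the features used in the next round.
    clf = LogisticRegression(max_iter=1000).fit(features, pseudo_labels)
    acc = clf.score(features, pseudo_labels)
    print(f"iteration {it}: fit accuracy on pseudo-labels = {acc:.2f}")
```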

Books on the topic "Convolutional Deep Belief Networks"

1

Lu, Le, Yefeng Zheng, Gustavo Carneiro, and Lin Yang, eds. Deep Learning and Convolutional Neural Networks for Medical Image Computing. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-42999-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Lu, Le, Xiaosong Wang, Gustavo Carneiro, and Lin Yang, eds. Deep Learning and Convolutional Neural Networks for Medical Imaging and Clinical Informatics. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-13969-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Michelucci, Umberto. Advanced Applied Deep Learning: Convolutional Neural Networks and Object Detection. Apress, 2019.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Masters, Timothy. Deep Belief Nets in C++ and CUDA C: Volume 3: Convolutional Nets. Apress, 2018.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Practical Convolutional Neural Networks: Implement advanced deep learning models using Python. Packt Publishing - ebooks Account, 2018.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Wang, Xiaosong, Lin Yang, Le Lu, and Gustavo Carneiro. Deep Learning and Convolutional Neural Networks for Medical Imaging and Clinical Informatics. Springer, 2019.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Deep belief nets in C++ and CUDA C. CreateSpace Independent Publishing Platform, 2015.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Yang, Lin, Le Lu, Yefeng Zheng, and Gustavo Carneiro. Deep Learning and Convolutional Neural Networks for Medical Image Computing: Precision Medicine, High Performance and Large-Scale Datasets. Springer, 2018.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Yang, Lin, Le Lu, Yefeng Zheng, and Gustavo Carneiro. Deep Learning and Convolutional Neural Networks for Medical Image Computing: Precision Medicine, High Performance and Large-Scale Datasets. Springer, 2017.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Masters, Timothy. Deep Belief Nets in C++ and CUDA C: Volume 1: Restricted Boltzmann Machines and Supervised Feedforward Networks. Apress, 2018.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Convolutional Deep Belief Networks"

1

Kaiser, Jacques, David Zimmerer, J. Camilo Vasquez Tieck, Stefan Ulbrich, Arne Roennau, and Rüdiger Dillmann. "Spiking Convolutional Deep Belief Networks." In Artificial Neural Networks and Machine Learning – ICANN 2017, 3–11. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-68612-7_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Hu, Dan, Xingshe Zhou, and Junjie Wu. "Visual Tracking Based on Convolutional Deep Belief Network." In Lecture Notes in Computer Science, 103–15. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-23216-4_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Wicht, Baptiste, Andreas Fischer, and Jean Hennebert. "Keyword Spotting with Convolutional Deep Belief Networks and Dynamic Time Warping." In Artificial Neural Networks and Machine Learning – ICANN 2016, 113–20. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-44781-0_14.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Guo, Lei, Shijie Li, Xin Niu, and Yong Dou. "A Study on Layer Connection Strategies in Stacked Convolutional Deep Belief Networks." In Communications in Computer and Information Science, 81–90. Berlin, Heidelberg: Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-662-45646-0_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Chu, Joseph Lin, and Adam Krzyżak. "Application of Support Vector Machines, Convolutional Neural Networks and Deep Belief Networks to Recognition of Partially Occluded Objects." In Artificial Intelligence and Soft Computing, 34–46. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-07173-2_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Calin, Ovidiu. "Convolutional Networks." In Deep Learning Architectures, 517–42. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-36721-3_16.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Webb, Geoffrey I., Johannes Fürnkranz, Geoffrey Hinton, Claude Sammut, Joerg Sander, et al. "Deep Belief Networks." In Encyclopedia of Machine Learning, 269. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-30164-8_209.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Banerjee, Tathagat, Dhruv Batta, Aditya Jain, S. Karthikeyan, Himanshu Mehndiratta, and K. Hari Kishan. "Deep Belief Convolutional Neural Network with Artificial Image Creation by GANs Based Diagnosis of Pneumonia in Radiological Samples of the Pectoralis Major." In Lecture Notes in Electrical Engineering, 979–1002. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-0749-3_75.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Ketkar, Nikhil. "Convolutional Neural Networks." In Deep Learning with Python, 63–78. Berkeley, CA: Apress, 2017. http://dx.doi.org/10.1007/978-1-4842-2766-4_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Salvaris, Mathew, Danielle Dean, and Wee Hyong Tok. "Convolutional Neural Networks." In Deep Learning with Azure, 131–60. Berkeley, CA: Apress, 2018. http://dx.doi.org/10.1007/978-1-4842-3679-6_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Convolutional Deep Belief Networks"

1

Liu, Xiao, Binbin Tang, Zhenyang Wang, Xianghua Xu, Shiliang Pu, Dapeng Tao, and Mingli Song. "Chart classification by combining deep convolutional networks and deep belief networks." In 2015 13th International Conference on Document Analysis and Recognition (ICDAR). IEEE, 2015. http://dx.doi.org/10.1109/icdar.2015.7333872.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ren, Yuanfang, and Yan Wu. "Convolutional deep belief networks for feature extraction of EEG signal." In 2014 International Joint Conference on Neural Networks (IJCNN). IEEE, 2014. http://dx.doi.org/10.1109/ijcnn.2014.6889383.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Zhao, Weichen, and Junshe An. "Wireless Signal Fingerprint Extraction Based on Convolutional Deep Belief Network." In 2021 13th International Conference on Communication Software and Networks (ICCSN). IEEE, 2021. http://dx.doi.org/10.1109/iccsn52437.2021.9463643.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Wu, Huiyi, John Soraghan, Anja Lowit, and Gaetano Di-Caterina. "A Deep Learning Method for Pathological Voice Detection Using Convolutional Deep Belief Networks." In Interspeech 2018. ISCA: ISCA, 2018. http://dx.doi.org/10.21437/interspeech.2018-1351.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Lee, Honglak, Roger Grosse, Rajesh Ranganath, and Andrew Y. Ng. "Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations." In Proceedings of the 26th Annual International Conference on Machine Learning (ICML '09). New York, New York, USA: ACM Press, 2009. http://dx.doi.org/10.1145/1553374.1553453.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Huang, G. B., Honglak Lee, and E. Learned-Miller. "Learning hierarchical representations for face verification with convolutional deep belief networks." In 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2012. http://dx.doi.org/10.1109/cvpr.2012.6247968.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Chaturvedi, Iti, Erik Cambria, Soujanya Poria, and Rajiv Bajpai. "Bayesian Deep Convolution Belief Networks for Subjectivity Detection." In 2016 IEEE 16th International Conference on Data Mining Workshops (ICDMW). IEEE, 2016. http://dx.doi.org/10.1109/icdmw.2016.0134.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Jin, Xinyu, Chunhui Ma, Yuchen Zhang, and Lanjuan Li. "Classification of Lung Nodules Based on Convolutional Deep Belief Network." In 2017 10th International Symposium on Computational Intelligence and Design (ISCID). IEEE, 2017. http://dx.doi.org/10.1109/iscid.2017.57.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Wang, Dandan, Ming Li, and Xiaoxu Li. "Face Detection Algorithm Based on Convolutional Pooling Deep Belief Network." In 2017 2nd International Conference on Electrical, Automation and Mechanical Engineering (EAME 2017). Paris, France: Atlantis Press, 2017. http://dx.doi.org/10.2991/eame-17.2017.64.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Tanase, Radu, Mihai Datcu, and Dan Raducanu. "A convolutional deep belief network for polarimetric SAR data feature extraction." In IGARSS 2016 - 2016 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2016. http://dx.doi.org/10.1109/igarss.2016.7730968.

Full text
APA, Harvard, Vancouver, ISO, and other styles