Academic literature on the topic 'Backpropagation and Boltzmann Machine algorithms'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Backpropagation and Boltzmann Machine algorithms.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Backpropagation and Boltzmann Machine algorithms"

1

Dharmajee Rao, D. T. V., and K. V. Ramana. "A Novel Approach for Efficient Training of Deep Neural Networks." Indonesian Journal of Electrical Engineering and Computer Science 11, no. 3 (2018): 954–61. https://doi.org/10.11591/ijeecs.v11.i3.pp954-961.

Full text
Abstract:
Deep Neural Network training algorithms consume a long training time, especially when the number of hidden layers and nodes is large. Matrix multiplication is the key operation carried out at every node of each layer, several hundreds of thousands of times, during the training of a Deep Neural Network. Blocking is a well-proven optimization technique for improving the performance of matrix multiplication. Blocked matrix multiplication algorithms can easily be parallelized to accelerate performance further. This paper proposes a novel approach of implementing Parallel Blocked Matrix multiplicat
APA, Harvard, Vancouver, ISO, and other styles
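The blocking technique this abstract describes is easy to sketch. Below is a minimal NumPy version of blocked (tiled) matrix multiplication, the serial kernel a parallel approach like the paper's would build on; the block size of 64 and the test shapes are illustrative assumptions, not values from the paper.

```python
import numpy as np

def blocked_matmul(A, B, block=64):
    """Multiply A (m x k) by B (k x n) one cache-friendly tile at a time."""
    m, k = A.shape
    k2, n = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((m, n), dtype=A.dtype)
    for i in range(0, m, block):
        for j in range(0, n, block):
            for p in range(0, k, block):
                # Each small tile pair stays cache-resident while it is reused.
                C[i:i+block, j:j+block] += (
                    A[i:i+block, p:p+block] @ B[p:p+block, j:j+block]
                )
    return C

A = np.random.default_rng(0).random((200, 300))
B = np.random.default_rng(1).random((300, 100))
assert np.allclose(blocked_matmul(A, B), A @ B)
```

Blocking gets its speedup from cache reuse; parallelizing the outer loops, as the paper proposes, distributes independent output tiles across cores.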
2

Dharmajee Rao, D. T. V., and K. V. Ramana. "A Novel Approach for Efficient Training of Deep Neural Networks." Indonesian Journal of Electrical Engineering and Computer Science 11, no. 3 (2018): 954. http://dx.doi.org/10.11591/ijeecs.v11.i3.pp954-961.

Full text
Abstract:
Deep Neural Network training algorithms consume a long training time, especially when the number of hidden layers and nodes is large. Matrix multiplication is the key operation carried out at every node of each layer, several hundreds of thousands of times, during the training of a Deep Neural Network. Blocking is a well-proven optimization technique for improving the performance of matrix multiplication. Block
APA, Harvard, Vancouver, ISO, and other styles
3

Pearlmutter, Barak A. "Fast Exact Multiplication by the Hessian." Neural Computation 6, no. 1 (1994): 147–60. http://dx.doi.org/10.1162/neco.1994.6.1.147.

Full text
Abstract:
Just storing the Hessian H (the matrix of second derivatives ∂²E/∂wᵢ∂wⱼ of the error E with respect to each pair of weights) of a large neural network is difficult. Since a common use of a large matrix like H is to compute its product with various vectors, we derive a technique that directly calculates Hv, where v is an arbitrary vector. To calculate Hv, we first define a differential operator R_v{f(w)} = (∂/∂r)f(w + rv)|_{r=0}, note that R_v{∇w} = Hv and R_v{w} = v, and then apply R_v{·} to the equations used to compute ∇w. The result is an exact and numerically stable procedure for computing Hv, wh
APA, Harvard, Vancouver, ISO, and other styles
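Pearlmutter's identity says the Hessian-vector product is the directional derivative of the gradient: Hv = (∂/∂r)∇E(w + rv)|_{r=0}. The sketch below realizes that identity with complex-step differentiation on a logistic-regression loss, which is exact to machine precision for analytic losses; it is a stand-in for, not a reproduction of, the paper's R{·} forward/backward pass. The loss, data shapes, and step size are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_logistic(w, X, y):
    """Gradient of E(w) = sum_i log(1 + exp(-y_i * x_i.w))."""
    z = y * (X @ w)
    return -(X.T @ (y * sigmoid(-z)))

def hessian_vector_product(w, v, X, y, h=1e-30):
    """Hv = Im(grad E(w + i*h*v)) / h, the complex-step realization of
    Pearlmutter's directional-derivative identity."""
    return grad_logistic(w + 1j * h * v, X, y).imag / h

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
y = rng.choice([-1.0, 1.0], size=50)
w, v = rng.normal(size=5), rng.normal(size=5)

# Check against the explicitly assembled Hessian for this loss:
# H = X.T @ diag(sigmoid(z) * sigmoid(-z)) @ X, with z_i = y_i * x_i.w
z = y * (X @ w)
H = X.T @ (X * (sigmoid(z) * sigmoid(-z))[:, None])
assert np.allclose(hessian_vector_product(w, v, X, y), H @ v)
```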
4

Xu, Lei, Stan Klasa, and Alan Yuille. "Recent Advances on Techniques of Static Feedforward Networks with Supervised Learning." International Journal of Neural Systems 3, no. 3 (1992): 253–90. http://dx.doi.org/10.1142/s0129065792000218.

Full text
Abstract:
The rediscovery and popularization of the backpropagation training technique for multilayer perceptrons, as well as the invention of the Boltzmann machine learning algorithm, have given a new boost to the study of supervised learning networks. In recent years, besides widely spread applications and various further improvements of the classical backpropagation technique, many new supervised learning models, techniques, and theories have also been proposed in a vast number of publications. This paper tries to give a rather systematic review of the recent advances on supervised learning techn
APA, Harvard, Vancouver, ISO, and other styles
5

O'Reilly, Randall C. "Biologically Plausible Error-Driven Learning Using Local Activation Differences: The Generalized Recirculation Algorithm." Neural Computation 8, no. 5 (1996): 895–938. http://dx.doi.org/10.1162/neco.1996.8.5.895.

Full text
Abstract:
The error backpropagation learning algorithm (BP) is generally considered biologically implausible because it does not use locally available, activation-based variables. A version of BP that can be computed locally using bidirectional activation recirculation (Hinton and McClelland 1988) instead of backpropagated error derivatives is more biologically plausible. This paper presents a generalized version of the recirculation algorithm (GeneRec), which overcomes several limitations of the earlier algorithm by using a generic recurrent network with sigmoidal units that can learn arbitrary input/o
APA, Harvard, Vancouver, ISO, and other styles
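The "local activation differences" in GeneRec can be made concrete. The sketch below shows the schematic two-phase update: settle with only the input clamped (minus phase), settle again with the target also clamped (plus phase), and change each weight in proportion to pre-synaptic activity times the post-synaptic phase difference. The network sizes, settling loop, and learning rate are illustrative assumptions, not O'Reilly's exact recipe.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

n_in, n_hid, n_out, lr = 4, 8, 2, 0.1
W = rng.normal(scale=0.1, size=(n_in, n_hid))   # input  -> hidden
U = rng.normal(scale=0.1, size=(n_hid, n_out))  # hidden -> output; feedback uses U.T

def settle(x, y_clamped=None, steps=20):
    """Iterate the bidirectional activations until approximately settled."""
    h = np.zeros(n_hid)
    y = np.zeros(n_out) if y_clamped is None else y_clamped
    for _ in range(steps):
        h = sigmoid(x @ W + y @ U.T)   # hidden gets bottom-up and top-down input
        if y_clamped is None:
            y = sigmoid(h @ U)         # output is free only in the minus phase
    return h, y

x, t = rng.random(n_in), np.array([1.0, 0.0])   # one input/target pair

h_minus, y_minus = settle(x)                    # minus phase: input clamped
h_plus, _ = settle(x, y_clamped=t)              # plus phase: input and target clamped

# GeneRec: pre-synaptic activity times the post-synaptic phase difference.
U += lr * np.outer(h_minus, t - y_minus)
W += lr * np.outer(x, h_plus - h_minus)
```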
6

Manoha, S. "A Deep Dive into Training Algorithms for Deep Belief Networks." Journal of Information Systems Engineering and Management 10, no. 13s (2025): 178–86. https://doi.org/10.52783/jisem.v10i13s.2021.

Full text
Abstract:
Deep Belief Networks (DBNs) have emerged as powerful tools for feature learning, representation, and generative modeling. This paper presents a comprehensive exploration of the various training algorithms employed in the training of DBNs. DBNs, composed of multiple layers of stochastic hidden units, have found applications in diverse domains such as computer vision, natural language processing, and bioinformatics. The paper begins by delving into the pre-training phase, where Restricted Boltzmann Machines (RBMs) play a central role. We review the Contrastive Divergence (CD) and Persistent Cont
APA, Harvard, Vancouver, ISO, and other styles
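Contrastive Divergence, the pre-training workhorse this abstract reviews, is compact enough to show in full. Below is a minimal CD-1 step for a Bernoulli-Bernoulli RBM: one Gibbs half-cycle from the data supplies the negative statistics that replace the intractable model expectation. Sizes and the learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

n_vis, n_hid, lr = 6, 4, 0.05
W = rng.normal(scale=0.01, size=(n_vis, n_hid))
b = np.zeros(n_vis)   # visible biases
c = np.zeros(n_hid)   # hidden biases

def cd1_gradients(v0, W, b, c):
    """One CD-1 estimate of the log-likelihood gradient from data vector v0."""
    ph0 = sigmoid(v0 @ W + c)                      # P(h=1 | v0): positive phase
    h0 = (rng.random(n_hid) < ph0).astype(float)   # sample a hidden state
    pv1 = sigmoid(h0 @ W.T + b)                    # one-step reconstruction
    v1 = (rng.random(n_vis) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + c)                      # negative-phase statistics
    return np.outer(v0, ph0) - np.outer(v1, ph1), v0 - v1, ph0 - ph1

v = (rng.random(n_vis) < 0.5).astype(float)        # a toy binary data vector
dW, db, dc = cd1_gradients(v, W, b, c)
W += lr * dW
b += lr * db
c += lr * dc
```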
7

Abdel-Jaber, Hussein, Disha Devassy, Azhar Al Salam, Lamya Hidaytallah, and Malak EL-Amir. "A Review of Deep Learning Algorithms and Their Applications in Healthcare." Algorithms 15, no. 2 (2022): 71. http://dx.doi.org/10.3390/a15020071.

Full text
Abstract:
Deep learning uses artificial neural networks to recognize patterns and learn from them to make decisions. Deep learning is a type of machine learning that uses artificial neural networks to mimic the human brain. It uses machine learning methods such as supervised, semi-supervised, or unsupervised learning strategies to learn automatically in deep architectures and has gained much popularity due to its superior ability to learn from huge amounts of data. It was found that deep learning approaches can be used for big data analysis successfully. Applications include virtual assistants such as A
APA, Harvard, Vancouver, ISO, and other styles
8

Gidd, Ajay A., and Ajinkya S. Shewale. "One Look at Deep Learning Algorithms." Recent Innovations in Wireless Network Security 2, no. 1 (2020): 1–5. https://doi.org/10.5281/zenodo.3819806.

Full text
Abstract:
Deep learning is mainly an area of machine learning. The intention of this paper is to give a quick overview of deep learning algorithms. The paper discusses the most popular deep learning algorithms, those used most frequently in the current scenario. Nowadays deep learning has caught special attention because it can solve very complex problems with less computation.
APA, Harvard, Vancouver, ISO, and other styles
9

Wu, Zhiyong, Xiangqian Ding, and Guangrui Zhang. "A Novel Method for Classification of ECG Arrhythmias Using Deep Belief Networks." International Journal of Computational Intelligence and Applications 15, no. 04 (2016): 1650021. http://dx.doi.org/10.1142/s1469026816500218.

Full text
Abstract:
In this paper, a novel approach based on deep belief networks (DBN) for electrocardiograph (ECG) arrhythmias classification is proposed. The construction process of ECG classification model consists of two steps: features learning for ECG signals and supervised fine-tuning. In order to deeply extract features from continuous ECG signals, two types of restricted Boltzmann machine (RBM) including Gaussian–Bernoulli and Bernoulli–Bernoulli are stacked to form DBN. The parameters of RBM can be learned by two training algorithms such as contrastive divergence and persistent contrastive divergence.
APA, Harvard, Vancouver, ISO, and other styles
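This abstract contrasts contrastive divergence with persistent contrastive divergence (PCD). The only change PCD makes is in the negative phase: instead of restarting the Gibbs chain from the data at each update, it keeps a persistent "fantasy" particle running across updates. A hedged Bernoulli-Bernoulli sketch (biases omitted for brevity; sizes and learning rate are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

n_vis, n_hid, lr = 6, 4, 0.05
W = rng.normal(scale=0.01, size=(n_vis, n_hid))
v_chain = (rng.random(n_vis) < 0.5).astype(float)  # persistent fantasy particle

def gibbs_step(v, W):
    """One alternating Gibbs sweep v -> h -> v'."""
    h = (rng.random(n_hid) < sigmoid(v @ W)).astype(float)
    return (rng.random(n_vis) < sigmoid(h @ W.T)).astype(float)

def pcd_update(v_data, W, v_chain):
    """Positive phase from the data; negative phase from the persistent
    chain, which is advanced one Gibbs step per parameter update."""
    v_chain = gibbs_step(v_chain, W)
    pos = np.outer(v_data, sigmoid(v_data @ W))
    neg = np.outer(v_chain, sigmoid(v_chain @ W))
    return W + lr * (pos - neg), v_chain

v_data = (rng.random(n_vis) < 0.5).astype(float)
for _ in range(10):                   # the chain persists across updates
    W, v_chain = pcd_update(v_data, W, v_chain)
```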
10

Reddy, G. Vinoda, Sreedevi Kadiyala, Chandra Srinivasan Potluri, et al. "An Intrusion Detection Using Machine Learning Algorithm Multi-Layer Perceptron (MLP): A Classification Enhancement in Wireless Sensor Network (WSN)." International Journal on Recent and Innovation Trends in Computing and Communication 10, no. 2s (2022): 139–45. http://dx.doi.org/10.17762/ijritcc.v10i2s.5920.

Full text
Abstract:
During several decades, there has been a meteoric rise in the development and use of cutting-edge technology. The Wireless Sensor Network (WSN) is a groundbreaking innovation that relies on a vast network of individual sensor nodes. The sensor nodes in the network are responsible for collecting data and uploading it to the cloud. When networks with little resources are deployed harshly and without regulation, security risks occur. Since the rate at which new information is being generated is increasing at an exponential rate, WSN communication has become the most challenging and complex aspect
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Backpropagation and Boltzmann Machine algorithms"

1

Cheng, Martin Chun-Sheng. "Dynamical Near Optimal Training for Interval Type-2 Fuzzy Neural Network (T2FNN) with Genetic Algorithm." Griffith University, School of Microelectronic Engineering, 2003. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20030722.172812.

Full text
Abstract:
Type-2 fuzzy logic system (FLS) cascaded with a neural network, called type-2 fuzzy neural network (T2FNN), is presented in this paper to handle uncertainty with dynamical optimal learning. A T2FNN consists of a type-2 fuzzy linguistic process as the antecedent part and a two-layer interval neural network as the consequent part. A general T2FNN is computationally intensive due to the complexity of type-2 to type-1 reduction. Therefore the interval T2FNN is adopted in this paper to simplify the computational process. The dynamical optimal training algorithm for the two-layer consequent part of inte
APA, Harvard, Vancouver, ISO, and other styles
2

Cheng, Martin Chun-Sheng. "Dynamical Near Optimal Training for Interval Type-2 Fuzzy Neural Network (T2FNN) with Genetic Algorithm." Thesis, Griffith University, 2003. http://hdl.handle.net/10072/366350.

Full text
Abstract:
Type-2 fuzzy logic system (FLS) cascaded with a neural network, called type-2 fuzzy neural network (T2FNN), is presented in this paper to handle uncertainty with dynamical optimal learning. A T2FNN consists of a type-2 fuzzy linguistic process as the antecedent part and a two-layer interval neural network as the consequent part. A general T2FNN is computationally intensive due to the complexity of type-2 to type-1 reduction. Therefore the interval T2FNN is adopted in this paper to simplify the computational process. The dynamical optimal training algorithm for the two-layer consequent part of inte
APA, Harvard, Vancouver, ISO, and other styles
3

Barnard, S. J. "Short term load forecasting by a modified backpropagation trained neural network." Thesis, 2012. http://hdl.handle.net/10210/5828.

Full text
Abstract:
M.Ing. This dissertation describes the development of a feedforward neural network, trained by means of an accelerated backpropagation algorithm, used for short-term load forecasting on real-world data. It is argued that the new learning algorithm, I-Prop, is a faster training algorithm because the learning rate is optimally predicted and changed according to a more efficient formula (without the need for extensive memory), which speeds up the training process. The neural network developed was tested for the month of December 1994, specifically to test the artificial n
APA, Harvard, Vancouver, ISO, and other styles
4

Upadhya, Vidyadhar. "Efficient Algorithms for Learning Restricted Boltzmann Machines." Thesis, 2020. https://etd.iisc.ac.in/handle/2005/4840.

Full text
Abstract:
The probabilistic generative models learn useful features from unlabeled data which can be used for subsequent problem-specific tasks, such as classification, regression or information retrieval. The RBM is one such important energy-based probabilistic generative model. RBMs are also the building blocks for several deep generative models. It is difficult to train and evaluate RBMs mainly because the normalizing constant (known as the partition function) for the distribution that they represent is computationally hard to evaluate. Therefore, various approximate methods (based on noisy gradient
APA, Harvard, Vancouver, ISO, and other styles
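The thesis's starting point, that the partition function is computationally hard, is easy to see concretely: exact evaluation sums over exponentially many states. The brute-force sketch below computes log Z for a toy RBM by enumerating all 2^8 visible configurations and summing the hidden units out analytically; the sizes are illustrative assumptions, and the method is hopeless beyond toy scale, which is exactly why approximate methods exist.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(4)
n_vis, n_hid = 8, 5
W = rng.normal(scale=0.1, size=(n_vis, n_hid))
b = rng.normal(scale=0.1, size=n_vis)   # visible biases
c = rng.normal(scale=0.1, size=n_hid)   # hidden biases

def exact_log_partition(W, b, c):
    """Enumerate all 2^n_vis visible states; hidden units are summed out
    analytically: Z = sum_v exp(b.v) * prod_j (1 + exp(c_j + v.W_j))."""
    total = 0.0
    for bits in product([0.0, 1.0], repeat=W.shape[0]):
        v = np.array(bits)
        total += np.exp(v @ b + np.sum(np.log1p(np.exp(c + v @ W))))
    return np.log(total)

print(exact_log_partition(W, b, c))   # O(2^n_vis): feasible only at toy sizes
```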
5

Dauphin, Yann. "Advances in scaling deep learning algorithms." Thesis, 2015. http://hdl.handle.net/1866/13710.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Backpropagation and Boltzmann Machine algorithms"

1

Deep Learning. MIT Press, 2019.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Backpropagation and Boltzmann Machine algorithms"

1

Cai, Xianggao, Zhanpeng Xu, Guoming Lai, Chengwei Wu, and Xiaola Lin. "GPU-Accelerated Restricted Boltzmann Machine for Collaborative Filtering." In Algorithms and Architectures for Parallel Processing. Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33078-0_22.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ashman, I., T. Vladimirova, C. Jesshope, and R. Peel. "Parallel Boltzmann Machine Topologies for Simulated Annealing Realisation of Combinatorial Problems." In Artificial Neural Nets and Genetic Algorithms. Springer Vienna, 1995. http://dx.doi.org/10.1007/978-3-7091-7535-4_78.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

van Tulder, Gijs, and Marleen de Bruijne. "Learning Features for Tissue Classification with the Classification Restricted Boltzmann Machine." In Medical Computer Vision: Algorithms for Big Data. Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-13972-2_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Kosmatopoulos, Elias B., and Manolis A. Christodoulou. "The Boltzmann ECE Neural Network: A Learning Machine for Estimating Unknown Probability Distributions." In Artificial Neural Nets and Genetic Algorithms. Springer Vienna, 1993. http://dx.doi.org/10.1007/978-3-7091-7533-0_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Lafargue, V., P. Garda, and E. Belhaire. "An Analog Implementation of the Boltzmann Machine with Programmable Learning Algorithms." In VLSI for Neural Networks and Artificial Intelligence. Springer US, 1994. http://dx.doi.org/10.1007/978-1-4899-1331-9_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Ji, Jinbao, Zongxiang Hu, Weiqi Zhang, and Sen Yang. "Development of Deep Learning Algorithms, Frameworks and Hardwares." In Proceeding of 2021 International Conference on Wireless Communications, Networking and Applications. Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-2456-9_71.

Full text
Abstract:
As the core algorithm of artificial intelligence, deep learning has brought new breakthroughs and opportunities to all walks of life. This paper summarizes the principles of deep learning algorithms such as the Autoencoder (AE), Boltzmann Machine (BM), Deep Belief Network (DBN), Convolutional Neural Network (CNN), Recurrent Neural Network (RNN) and Recursive Neural Network (RNN). The characteristics and differences of deep learning frameworks such as Tensorflow, Caffe, Theano and PyTorch are compared and analyzed. Finally, the application and performance of hardware platforms such as CPU and GPU in deep learning acceleration are introduced. The development and application of deep learning algorithms, frameworks and hardware technology presented in this paper can provide a reference and basis for the selection of deep learning technology.
APA, Harvard, Vancouver, ISO, and other styles
7

Jaworski, Maciej, Piotr Duda, Danuta Rutkowska, and Leszek Rutkowski. "On Handling Missing Values in Data Stream Mining Algorithms Based on the Restricted Boltzmann Machine." In Communications in Computer and Information Science. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-36802-9_37.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Rene, Eldon R., Shishir Kumar Behera, and Hung Suck Park. "Predicting Adsorption Behavior in Engineered Floodplain Filtration System Using Backpropagation Neural Networks." In Machine Learning Algorithms for Problem Solving in Computational Applications. IGI Global, 2012. http://dx.doi.org/10.4018/978-1-4666-1833-6.ch011.

Full text
Abstract:
Engineered floodplain filtration (EFF) system is an eco-friendly low-cost water treatment process wherein water contaminants can be removed by adsorption and/or degraded by microorganisms as the infiltrating water moves from the wastewater treatment plants to the rivers. An artificial neural network (ANN) based approach was used in this study to approximate and interpret the complex input/output relationships, essentially to understand the breakthrough times in EFF. The input parameters to the ANN model were the inlet concentration of a pharmaceutical, ibuprofen (ppm), and flow rate (m d⁻¹), and the output parameters were six concentration-time pairs (C, t). These C, t pairs were the times in the breakthrough profile when 1%, 5%, 25%, 50%, 75%, and 95% of the pollutant was present at the outlet of the system. The most dependable condition for the network was selected by a trial and error approach and by estimating the determination coefficient (R²) value (>0.99) achieved during prediction of the testing set. The proposed ANN model for EFF operation could be used as a potential alternative for knowledge-based models through proper training and testing of variables.
APA, Harvard, Vancouver, ISO, and other styles
9

King, R. D., R. Henery, C. Feng, and A. Sutherland. "A Comparative Study of Classification Algorithms: Statistical, Machine Learning and Neural Network." In Machine Intelligence 13. Oxford University PressOxford, 1994. http://dx.doi.org/10.1093/oso/9780198538509.003.0013.

Full text
Abstract:
The aim of the StatLog project is to compare the performance of statistical, machine learning, and neural network algorithms on large real-world problems. This paper describes the completed work on classification in the StatLog project. Classification is here defined to be the problem, given a set of multivariate data with assigned classes, of estimating the probability from a set of attributes describing a new example sampled from the same source that it has a pre-defined class. We gathered together a representative collection of algorithms from statistics (Naive Bayes, K-nearest Neighbour, Kernel density, Linear discriminant, Quadratic discriminant, Logistic regression, Projection pursuit, Bayesian networks), machine learning (CART, C4.5, NewID, AC2, CAL5, CN2, ITrule; only propositional symbolic algorithms were considered), and neural networks (Backpropagation, Radial basis functions, Kohonen). We then applied these algorithms to eight large real-world classification problems: four from image analysis, two from medicine, and one each from engineering and finance. Our results are still provisional, but we can draw a number of tentative conclusions about the applicability of particular algorithms to particular database types. For example, we found that K-nearest Neighbour can perform well on complex image analysis problems if the attributes are properly scaled, but it is very slow; machine learning algorithms are very fast and robust to non-Normal features of databases, but may be out-performed if particular distribution assumptions hold. We additionally found that many classification algorithms need to be extended to deal better with cost functions (problems where the classes have an ordered relationship are a special case of this).
APA, Harvard, Vancouver, ISO, and other styles
10

Kumar, Sumit, and Sanlap Acharya. "Application of Machine Learning Algorithms in Stock Market Prediction." In Handbook of Research on Smart Technology Models for Business and Industry. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-7998-3645-2.ch007.

Full text
Abstract:
The prediction of stock prices has always been a very challenging problem for investors. Using machine learning techniques to predict stock prices is also one of the favourite topics for academics working in this domain. This chapter discusses five supervised learning techniques and two unsupervised learning techniques to solve the problem of stock price prediction and has compared the performances of all the algorithms. Among the supervised learning techniques, Long Short-Term Memory (LSTM) algorithm performed better than the others whereas, among the unsupervised learning techniques, Restricted Boltzmann Machine (RBM) performed better. RBM is found to be performing even better than LSTM.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Backpropagation and Boltzmann Machine algorithms"

1

Miefthawati, Nanda Putri, Sukma Akbar, Zulfatri Aini, and Sutoyo. "Comparison Analysis of Forecasting Accuracy for Electricity Consumption Using Extreme Learning Machine and Backpropagation Algorithms." In 2024 FORTEI-International Conference on Electrical Engineering (FORTEI-ICEE). IEEE, 2024. https://doi.org/10.1109/fortei-icee64706.2024.10824578.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ma, Zhengchao, Maoya Hsu, Hao Hu, et al. "Hybrid Strategies for Interpretability of Rate of Penetration Prediction: Automated Machine Learning and SHAP Interpretation." In 58th U.S. Rock Mechanics/Geomechanics Symposium. ARMA, 2024. http://dx.doi.org/10.56952/arma-2024-0315.

Full text
Abstract:
Accurate prediction of rate of penetration (ROP) during petroleum drilling is crucial to optimize and guide field operations. However, due to the complex nonlinear relationship between drilling parameters and ROP, traditional empirical models often struggle to accurately predict ROP. This study introduces automated machine learning (AutoML) for ROP prediction and utilizes SHAP (SHapley Additive exPlanations) to interpret the prediction results. The workflow framework based on this collaborative prediction strategy enables automated processing of data and automatic stacking ensembl
APA, Harvard, Vancouver, ISO, and other styles
3

Kajino, Hiroshi. "A Functional Dynamic Boltzmann Machine." In Twenty-Sixth International Joint Conference on Artificial Intelligence. International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/276.

Full text
Abstract:
Dynamic Boltzmann machines (DyBMs) are recently developed generative models of a time series. They are designed to learn a time series by efficient online learning algorithms, whilst taking long-term dependencies into account with help of eligibility traces, recursively updatable memory units storing descriptive statistics of all the past data. The current DyBMs assume a finite-dimensional time series and cannot be applied to a functional time series, in which the dimension goes to infinity (e.g., spatiotemporal data on a continuous space). In this paper, we present a functional dynamic Boltzm
APA, Harvard, Vancouver, ISO, and other styles
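The eligibility traces this abstract mentions follow a simple recursion: a trace is an exponentially decaying sum of past inputs, updated in constant time and memory per step. The sketch below shows only this generic recursion, not the full DyBM learning rule; the decay rate and the toy stream are illustrative assumptions.

```python
import numpy as np

decay = 0.9                                    # illustrative decay rate (lambda)
trace = np.zeros(3)                            # one eligibility trace per input unit
stream = np.random.default_rng(5).random((100, 3))   # a toy 3-dimensional time series

for x_t in stream:                             # single online pass over the series
    trace = decay * trace + x_t                # alpha_t = lambda * alpha_{t-1} + x_t
# `trace` now summarizes the entire past in O(1) memory per unit,
# which is what lets the DyBM learn a time series online.
```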
4

Bellgard, M. I., and C. P. Tsang. "Some experiments on the use of genetic algorithms in a Boltzmann machine." In 1991 IEEE International Joint Conference on Neural Networks. IEEE, 1991. http://dx.doi.org/10.1109/ijcnn.1991.170327.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Bao, Lin, Xiaoyan Sun, Dunwei Gong, Yong Zhang, and Biao Xu. "Enhanced Interactive Estimation of Distribution Algorithms with Attention Mechanism and Restricted Boltzmann Machine." In 2020 IEEE Congress on Evolutionary Computation (CEC). IEEE, 2020. http://dx.doi.org/10.1109/cec48606.2020.9185740.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Passos, Leandro Aparecido, and João Paulo Papa. "On the Training Algorithms for Restricted Boltzmann Machines." In XXXII Conference on Graphics, Patterns and Images. Sociedade Brasileira de Computação - SBC, 2019. http://dx.doi.org/10.5753/sibgrapi.est.2019.8294.

Full text
Abstract:
Deep learning techniques have been studied extensively in the last years due to their good results related to essential tasks on a large range of applications, such as speech and face recognition, as well as object classification. Restricted Boltzmann Machines (RBMs) are among the most employed techniques, which are energy-based stochastic neural networks composed of two layers of neurons whose objective is to estimate the connection weights between them. Recently, the scientific community spent much effort on sampling methods since the effectiveness of RBMs is directly related to the success of
APA, Harvard, Vancouver, ISO, and other styles
7

Baldi, Pierre, Peter Sadowski, and Zhiqin Lu. "Learning in the Machine: Random Backpropagation and the Deep Learning Channel (Extended Abstract)." In Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19). International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/885.

Full text
Abstract:
Random backpropagation (RBP) is a variant of the backpropagation algorithm for training neural networks, where the transposes of the forward matrices are replaced by fixed random matrices in the calculation of the weight updates. It is remarkable both because of its effectiveness, in spite of using random matrices to communicate error information, and because it completely removes the requirement of maintaining symmetric weights in a physical neural system. To better understand RBP, we compare different algorithms in terms of the information available locally to each neuron. In the process, we
APA, Harvard, Vancouver, ISO, and other styles
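The RBP variant in this abstract is a one-line change to backpropagation: in the backward pass, the transpose of the forward weight matrix is replaced by a fixed random matrix B. A minimal sketch for a two-layer regression network; the sizes, learning rate, and synthetic task are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
n_in, n_hid, n_out, lr = 5, 16, 2, 0.05
W1 = rng.normal(scale=0.1, size=(n_in, n_hid))
W2 = rng.normal(scale=0.1, size=(n_hid, n_out))
B = rng.normal(scale=0.1, size=(n_out, n_hid))   # fixed random feedback matrix

X = rng.normal(size=(64, n_in))                  # synthetic regression task
Y = rng.normal(size=(64, n_out))

for _ in range(200):
    H = np.tanh(X @ W1)                          # forward pass
    Yhat = H @ W2
    e = Yhat - Y                                 # output error
    # Backward pass: standard BP would use e @ W2.T; RBP uses the fixed B.
    dH = (e @ B) * (1.0 - H**2)                  # tanh derivative
    W2 -= lr * H.T @ e / len(X)
    W1 -= lr * X.T @ dH / len(X)
```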
8

Li, Shuangyin, Rong Pan, and Jun Yan. "Self-paced Compensatory Deep Boltzmann Machine for Semi-Structured Document Embedding." In Twenty-Sixth International Joint Conference on Artificial Intelligence. International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/304.

Full text
Abstract:
In the last decade, there has been a huge amount of documents with different types of rich metadata information, which belongs to the Semi-Structured Documents (SSDs), appearing in many real applications. It is an interesting research work to model this type of text data following the way how humans understand text with informative metadata. In the paper, we introduce a Self-paced Compensatory Deep Boltzmann Machine (SCDBM) architecture that learns a deep neural network by using metadata information to learn deep structure layer-wisely for Semi-Structured Documents (SSDs) embedding in a self-p
APA, Harvard, Vancouver, ISO, and other styles
9

Won, Stephen, and S. Susan Young. "Assessing the accuracy of image tracking algorithms on visible and thermal imagery using a deep restricted Boltzmann machine." In SPIE Defense, Security, and Sensing, edited by Harold Szu and Liyi Dai. SPIE, 2012. http://dx.doi.org/10.1117/12.918342.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Shukla, Kumar A., Ayush Choudhary, Somay Vaidh, and Uma Devi K.S. "GBMLP-RBM: A Novel Stacking Ensemble Learning Framework Using Restricted Boltzmann Machine and Gradient Boosting Algorithms for Heart Disease Classification." In 2023 Innovations in Power and Advanced Computing Technologies (i-PACT). IEEE, 2023. http://dx.doi.org/10.1109/i-pact58649.2023.10434311.

Full text
APA, Harvard, Vancouver, ISO, and other styles