To see the other types of publications on this topic, follow the link: Hidden units.

Journal articles on the topic 'Hidden units'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Hidden units.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Tyrcha, Joanna, and John Hertz. "Network inference with hidden units." Mathematical Biosciences and Engineering 11, no. 1 (2014): 149–65. http://dx.doi.org/10.3934/mbe.2014.11.149.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Krotov, Dmitry, and John J. Hopfield. "Unsupervised learning by competing hidden units." Proceedings of the National Academy of Sciences 116, no. 16 (2019): 7723–31. http://dx.doi.org/10.1073/pnas.1820458116.

Full text
Abstract:
It is widely believed that end-to-end training with the backpropagation algorithm is essential for learning good feature detectors in early layers of artificial neural networks, so that these detectors are useful for the task performed by the higher layers of that neural network. At the same time, the traditional form of backpropagation is biologically implausible. In the present paper we propose an unusual learning rule, which has a degree of biological plausibility and which is motivated by Hebb’s idea that change of the synapse strength should be local—i.e., should depend only on the activi
APA, Harvard, Vancouver, ISO, and other styles
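The abstract above describes an unsupervised, local learning rule in which hidden units compete with one another rather than being trained by backpropagation. As a rough illustration of that idea (not the paper's exact update), here is a minimal winner-take-all Hebbian sketch in NumPy; letting only the top-ranked unit update, and the normalization step, are simplifying assumptions made for brevity.

```python
import numpy as np

def competitive_hebbian_step(W, x, lr=0.01):
    """One local, unsupervised update: the hidden unit whose weight vector
    best matches the input 'wins' and moves toward it (simplified sketch)."""
    currents = W @ x                          # input current to each hidden unit (local quantity)
    winner = np.argmax(currents)              # competition among hidden units
    W[winner] += lr * (x - W[winner])         # Hebbian-like pull toward the input
    W[winner] /= np.linalg.norm(W[winner])    # keep the winning weight vector bounded
    return W

# toy usage: 20 hidden units learning features of random 50-dimensional inputs
rng = np.random.default_rng(0)
W = rng.normal(size=(20, 50))
W /= np.linalg.norm(W, axis=1, keepdims=True)
for x in rng.normal(size=(1000, 50)):
    W = competitive_hebbian_step(W, x)
```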
3

Côté, Marc-Alexandre, and Hugo Larochelle. "An Infinite Restricted Boltzmann Machine." Neural Computation 28, no. 7 (2016): 1265–88. http://dx.doi.org/10.1162/neco_a_00848.

Full text
Abstract:
We present a mathematical construction for the restricted Boltzmann machine (RBM) that does not require specifying the number of hidden units. In fact, the hidden layer size is adaptive and can grow during training. This is obtained by first extending the RBM to be sensitive to the ordering of its hidden units. Then, with a carefully chosen definition of the energy function, we show that the limit of infinitely many hidden units is well defined. As with RBM, approximate maximum likelihood training can be performed, resulting in an algorithm that naturally and adaptively adds trained hidden uni
APA, Harvard, Vancouver, ISO, and other styles
4

Setiono, Rudy. "Feedforward Neural Network Construction Using Cross Validation." Neural Computation 13, no. 12 (2001): 2865–77. http://dx.doi.org/10.1162/089976601317098565.

Full text
Abstract:
This article presents an algorithm that constructs feedforward neural networks with a single hidden layer for pattern classification. The algorithm starts with a small number of hidden units in the network and adds more hidden units as needed to improve the network's predictive accuracy. To determine when to stop adding new hidden units, the algorithm makes use of a subset of the available training samples for cross validation. New hidden units are added to the network only if they improve the classification accuracy of the network on the training samples and on the cross-validation samples. E
APA, Harvard, Vancouver, ISO, and other styles
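The constructive procedure sketched in the abstract above (grow the hidden layer and keep a held-out subset of the training data to decide when to stop) can be mimicked with off-the-shelf tools. The sketch below uses scikit-learn's MLPClassifier and a validation split; the one-unit-at-a-time growth and the stopping criterion are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
# hold out part of the training data as the cross-validation set for the growing step
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

best_h, best_val = None, -np.inf
h = 1
while h <= 50:                        # grow the single hidden layer one unit at a time
    net = MLPClassifier(hidden_layer_sizes=(h,), max_iter=2000, random_state=0)
    net.fit(X_tr, y_tr)
    val = net.score(X_val, y_val)
    if val > best_val:                # keep growing only while held-out accuracy improves
        best_val, best_h = val, h
        h += 1
    else:
        break
print(f"selected {best_h} hidden units (validation accuracy {best_val:.3f})")
```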
5

Luo, Heng, Ruimin Shen, Changyong Niu, and Carsten Ullrich. "Sparse Group Restricted Boltzmann Machines." Proceedings of the AAAI Conference on Artificial Intelligence 25, no. 1 (2011): 429–34. http://dx.doi.org/10.1609/aaai.v25i1.7923.

Full text
Abstract:
Since learning in Boltzmann machines is typically quite slow, there is a need to restrict connections within hidden layers. However, the resulting states of hidden units exhibit statistical dependencies. Based on this observation, we propose using l1/l2 regularization upon the activation probabilities of hidden units in restricted Boltzmann machines to capture the local dependencies among hidden units. This regularization not only encourages hidden units of many groups to be inactive given observed data but also makes hidden units within a group compete with each other for modeling observed dat
APA, Harvard, Vancouver, ISO, and other styles
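The l1/l2 (group-sparsity) penalty on hidden activation probabilities mentioned above is easy to write down: partition the hidden units into groups and sum the l2 norms of each group's activation probabilities. A minimal NumPy sketch, assuming non-overlapping groups of equal size (the grouping scheme and the penalty weight used below are assumptions for illustration):

```python
import numpy as np

def group_sparsity_penalty(hidden_probs, group_size):
    """l1/l2 penalty on hidden activation probabilities: the l1 norm (sum)
    over groups of the l2 norm within each group."""
    n_hidden = hidden_probs.shape[-1]
    assert n_hidden % group_size == 0, "assumes equal-sized, non-overlapping groups"
    groups = hidden_probs.reshape(*hidden_probs.shape[:-1], -1, group_size)
    return np.sqrt((groups ** 2).sum(axis=-1)).sum(axis=-1)

# toy usage: activation probabilities of 12 hidden units (3 groups of 4) for one input
p = np.random.default_rng(1).uniform(size=12)
penalty = 0.1 * group_sparsity_penalty(p, group_size=4)   # 0.1 is an assumed weight
```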
6

Zemel, Richard S., and Geoffrey E. Hinton. "Learning Population Codes by Minimizing Description Length." Neural Computation 7, no. 3 (1995): 549–64. http://dx.doi.org/10.1162/neco.1995.7.3.549.

Full text
Abstract:
The minimum description length (MDL) principle can be used to train the hidden units of a neural network to extract a representation that is cheap to describe but nonetheless allows the input to be reconstructed accurately. We show how MDL can be used to develop highly redundant population codes. Each hidden unit has a location in a low-dimensional implicit space. If the hidden unit activities form a bump of a standard shape in this space, they can be cheaply encoded by the center of this bump. So the weights from the input units to the hidden units in an autoencoder are trained to make the ac
APA, Harvard, Vancouver, ISO, and other styles
7

BLATT, MARCELO, EYTAN DOMANY, and IDO KANTER. "ON THE EQUIVALENCE OF TWO-LAYERED PERCEPTRONS WITH BINARY NEURONS." International Journal of Neural Systems 06, no. 03 (1995): 225–31. http://dx.doi.org/10.1142/s0129065795000160.

Full text
Abstract:
We consider two-layered perceptrons consisting of N binary input units, K binary hidden units and one binary output unit, in the limit N≫K≥1. We prove that the weights of a regular irreducible network are uniquely determined by its input-output map up to some obvious global symmetries. A network is regular if its K weight vectors from the input layer to the K hidden units are linearly independent. A (single layered) perceptron is said to be irreducible if its output depends on every one of its input units; and a two-layered perceptron is irreducible if the K+1 perceptrons that constitute such
APA, Harvard, Vancouver, ISO, and other styles
8

Morral, J. E. "Chemical Diffusivities and Their Hidden Concentration Units." Journal of Phase Equilibria and Diffusion 35, no. 5 (2014): 581–86. http://dx.doi.org/10.1007/s11669-014-0308-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Williams, Christopher K. I. "Computation with Infinite Neural Networks." Neural Computation 10, no. 5 (1998): 1203–16. http://dx.doi.org/10.1162/089976698300017412.

Full text
Abstract:
For neural networks with a wide class of weight priors, it can be shown that in the limit of an infinite number of hidden units, the prior over functions tends to a gaussian process. In this article, analytic forms are derived for the covariance function of the gaussian processes corresponding to networks with sigmoidal and gaussian hidden units. This allows predictions to be made efficiently using networks with an infinite number of hidden units and shows, somewhat paradoxically, that it may be easier to carry out Bayesian prediction with infinite networks rather than finite ones.
APA, Harvard, Vancouver, ISO, and other styles
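The abstract above refers to analytic covariance functions for the Gaussian-process limit of networks with infinitely many hidden units. As an illustration, the sketch below implements the arcsine ("erf") kernel commonly associated with sigmoidal hidden units in that limit; the exact form used here, and the isotropic weight prior, are stated as assumptions rather than a restatement of the paper.

```python
import numpy as np

def arcsine_kernel(x, x_prime, sigma2=1.0):
    """GP covariance for an infinite layer of erf (sigmoidal) hidden units,
    assuming an isotropic Gaussian prior (variance sigma2) on input weights and bias."""
    xt = np.concatenate(([1.0], x))            # augment the input with a bias component
    xpt = np.concatenate(([1.0], x_prime))
    num = 2.0 * sigma2 * xt @ xpt
    den = np.sqrt((1.0 + 2.0 * sigma2 * xt @ xt) * (1.0 + 2.0 * sigma2 * xpt @ xpt))
    return (2.0 / np.pi) * np.arcsin(num / den)

# toy usage: covariance between two 3-dimensional inputs
k = arcsine_kernel(np.array([0.5, -1.0, 2.0]), np.array([0.1, 0.0, 1.5]))
```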
10

Prasetyo, Simeon Yuda. "Prediksi Gagal Jantung Menggunakan Artificial Neural Network." Jurnal SAINTEKOM 13, no. 1 (2023): 79–88. http://dx.doi.org/10.33020/saintekom.v13i1.379.

Full text
Abstract:
Cardiovascular disease or heart problems are the leading cause of death worldwide. According to WHO (World Health Organization) every year there are more than 17.9 million deaths worldwide. In previous studies, there have been many studies related to the application of machine learning to predict heart failure and obtained quite good results, ranging from 85 percent to 90 percent, with sophisticated models optimized using neural networks. In this research, experiments were carried out using similar architectures based on the state of the art from previous research, namely Artificial Neural Net
APA, Harvard, Vancouver, ISO, and other styles
11

Rawal, Amit. "The usefulness of ‘hidden' units of specific stress." Textile Research Journal 90, no. 19-20 (2020): 2350–51. http://dx.doi.org/10.1177/0040517520944539.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Chechetkin, V. R., and V. V. Lobzin. "Nucleosome Units and Hidden Periodicities in DNA Sequences." Journal of Biomolecular Structure and Dynamics 15, no. 5 (1998): 937–47. http://dx.doi.org/10.1080/07391102.1998.10508214.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Benaim, Michel. "On Functional Approximation with Normalized Gaussian Units." Neural Computation 6, no. 2 (1994): 319–33. http://dx.doi.org/10.1162/neco.1994.6.2.319.

Full text
Abstract:
Feedforward neural networks with a single hidden layer using normalized gaussian units are studied. It is proved that such neural networks are capable of universal approximation in a satisfactory sense. Then, a hybrid learning rule as per Moody and Darken that combines unsupervised learning of hidden units and supervised learning of output units is considered. By using the method of ordinary differential equations for adaptive algorithms (ODE method) it is shown that the asymptotic properties of the learning rule may be studied in terms of an autonomous cascade of dynamical systems. Some recen
APA, Harvard, Vancouver, ISO, and other styles
14

Ross, Matt, Nareg Berberian, Albino Nikolla, and Sylvain Chartier. "Dynamic multilayer growth: Parallel vs. sequential approaches." PLOS ONE 19, no. 5 (2024): e0301513. http://dx.doi.org/10.1371/journal.pone.0301513.

Full text
Abstract:
The decision of when to add a new hidden unit or layer is a fundamental challenge for constructive algorithms. It becomes even more complex in the context of multiple hidden layers. Growing both network width and depth offers a robust framework for leveraging the ability to capture more information from the data and model more complex representations. In the context of multiple hidden layers, should growing units occur sequentially with hidden units only being grown in one layer at a time or in parallel with hidden units growing across multiple layers simultaneously? The effects of growing seq
APA, Harvard, Vancouver, ISO, and other styles
15

Qosimov, Abdulaziz, and Avazjon Avlaqulov. "Implicit (hidden) and precise reader." Software Engineering and Applied Sciences 1, no. 3 (2025): 38–41. https://doi.org/10.5281/zenodo.15175005.

Full text
Abstract:
The article presents a detailed analysis of the relationship “writer-text-reader” - this is not a “question-answer” conversation, but introductory chain mechanisms that serve to systematize the entire process. It discusses the vitality of the work, which is relevant today, and what you need to pay attention to in order to achieve a specific target group. This process is a dialogue of meanings and contexts that demonstrates the versatility of a literary work. The reader moves elements of the text, thereby forming separate semantic links and
APA, Harvard, Vancouver, ISO, and other styles
16

Setiono, Rudy. "Extracting Rules from Neural Networks by Pruning and Hidden-Unit Splitting." Neural Computation 9, no. 1 (1997): 205–25. http://dx.doi.org/10.1162/neco.1997.9.1.205.

Full text
Abstract:
An algorithm for extracting rules from a standard three-layer feedforward neural network is proposed. The trained network is first pruned not only to remove redundant connections in the network but, more important, to detect the relevant inputs. The algorithm generates rules from the pruned network by considering only a small number of activation values at the hidden units. If the number of inputs connected to a hidden unit is sufficiently small, then rules that describe how each of its activation values is obtained can be readily generated. Otherwise the hidden unit will be split and treated
APA, Harvard, Vancouver, ISO, and other styles
17

Tesauro, Gerald, Yu He, and Subutai Ahmad. "Asymptotic Convergence of Backpropagation." Neural Computation 1, no. 3 (1989): 382–91. http://dx.doi.org/10.1162/neco.1989.1.3.382.

Full text
Abstract:
We calculate analytically the rate of convergence at long times in the backpropagation learning algorithm for networks with and without hidden units. For networks without hidden units using the standard quadratic error function and a sigmoidal transfer function, we find that the error decreases as 1/t for large t, and the output states approach their target values as 1/√t. It is possible to obtain a different convergence rate for certain error and transfer functions, but the convergence can never be faster than 1/t. These results are unaffected by a momentum term in the learning algorithm, but
APA, Harvard, Vancouver, ISO, and other styles
18

Lillah, M. Abu Jihad. "Analyze of Teachers’ Hidden Competencies in Muadalah Education Units." At-Ta'dib 16, no. 1 (2021): 88. http://dx.doi.org/10.21111/at-tadib.v16i1.6185.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

TSUZURUGI, Jun, Norikazu TAKAHASHI, and Shin ISHII. "Associative Memory with Hidden Units by a Hybrid Learning." Transactions of the Institute of Systems, Control and Information Engineers 15, no. 11 (2002): 600–606. http://dx.doi.org/10.5687/iscie.15.600.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Wang, Wei, Yuwen Deng, Jiyuan Wu, et al. "The hidden costs of concealing outdoor air conditioning units." Nature Cities 1, no. 11 (2024): 722–24. http://dx.doi.org/10.1038/s44284-024-00148-z.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Rynkiewicz, J. "Asymptotic statistics for multilayer perceptron with ReLU hidden units." Neurocomputing 342 (May 2019): 16–23. http://dx.doi.org/10.1016/j.neucom.2018.11.097.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Sejnowski, Terrence J., Paul K. Kienker, and Geoffrey E. Hinton. "Learning symmetry groups with hidden units: Beyond the perceptron." Physica D: Nonlinear Phenomena 22, no. 1-3 (1986): 260–75. http://dx.doi.org/10.1016/0167-2789(86)90245-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Kainen, Paul C., and Věra Kůrková. "An Integral Upper Bound for Neural Network Approximation." Neural Computation 21, no. 10 (2009): 2970–89. http://dx.doi.org/10.1162/neco.2009.04-08-745.

Full text
Abstract:
Complexity of one-hidden-layer networks is studied using tools from nonlinear approximation and integration theory. For functions with suitable integral representations in the form of networks with infinitely many hidden units, upper bounds are derived on the speed of decrease of approximation error as the number of network units increases. These bounds are obtained for various norms using the framework of Bochner integration. Results are applied to perceptron networks.
APA, Harvard, Vancouver, ISO, and other styles
24

Moreno, J. Manuel Torres, and Mirta B. Gordon. "Efficient Adaptive Learning for Classification Tasks with Binary Units." Neural Computation 10, no. 4 (1998): 1007–30. http://dx.doi.org/10.1162/089976698300017601.

Full text
Abstract:
This article presents a new incremental learning algorithm for classification tasks, called Net Lines, which is well adapted for both binary and real-valued input patterns. It generates small, compact feedforward neural networks with one hidden layer of binary units and binary output units. A convergence theorem ensures that solutions with a finite number of hidden units exist for both binary and real-valued input patterns. An implementation for problems with more than two classes, valid for any binary classifier, is proposed. The generalization error and the size of the resulting networks are
APA, Harvard, Vancouver, ISO, and other styles
25

Han, Zi Bo, and Jin Fang Yang. "Method and Application of RBF Network Structure Optimization." Advanced Materials Research 472-475 (February 2012): 1668–75. http://dx.doi.org/10.4028/www.scientific.net/amr.472-475.1668.

Full text
Abstract:
The hidden unit number of RBF neural networks directly influences the performances of the whole net. A new strategy to prune the hidden units based on the singular value decomposition (SVD) of matrixes is proposed in the paper. At the basis of a structure involving enough more hidden units, the paper analyzes the outputs corresponding to some training samples with the SVD method and finds out the internal relations of them, then removes redundant ones according to the contribution rate of every hidden unit to the whole network, simplifies the structure of RBF neural network at last. The optimi
APA, Harvard, Vancouver, ISO, and other styles
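One way to realize the SVD-based pruning idea described in the abstract above is to form the matrix of hidden-unit outputs over the training set, take its singular value decomposition, and drop the units that contribute least to the dominant directions. The NumPy sketch below does this with a simple column-energy score; the particular contribution measure and threshold are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def prune_rbf_units(Phi, keep_energy=0.99):
    """Phi: (n_samples, n_hidden) matrix of RBF hidden-unit outputs.
    Returns indices of hidden units to keep, ranked by their contribution
    to the dominant singular directions of Phi (a simple SVD-based score)."""
    U, s, Vt = np.linalg.svd(Phi, full_matrices=False)
    r = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), keep_energy) + 1  # effective rank
    # contribution of each hidden unit (column of Phi) to the top-r subspace
    scores = np.sqrt(((s[:r, None] * Vt[:r, :]) ** 2).sum(axis=0))
    order = np.argsort(scores)[::-1]
    return order[:r]                      # keep roughly one unit per retained direction

# toy usage: 200 samples, 30 candidate RBF units, many of them redundant
rng = np.random.default_rng(2)
Phi = rng.normal(size=(200, 10)) @ rng.normal(size=(10, 30))   # rank-10 activations
kept = prune_rbf_units(Phi)
```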
26

Xing, Jing, and Richard A. Andersen. "Models of the Posterior Parietal Cortex Which Perform Multimodal Integration and Represent Space in Several Coordinate Frames." Journal of Cognitive Neuroscience 12, no. 4 (2000): 601–14. http://dx.doi.org/10.1162/089892900562363.

Full text
Abstract:
Many neurons in the posterior-parietal cortex (PPC) have saccadic responses to visual and auditory targets. The responses are modulated by eye position and head position. These findings suggest that PPC integrates multisensory inputs and may provide information about saccadic targets represented in different coordinate frames. In addition to an eye-centered output representation, PPC may also project to brain areas which contain head-centered and body-centered representations of the space. In this report, possible coordinate transformations in PPC were examined by comparing several sets of mod
APA, Harvard, Vancouver, ISO, and other styles
27

Akhand, M. A. H., and Kazuyuki Murase. "A Minimal Neural Network Ensemble Construction Method: A Constructive Approach." Journal of Advanced Computational Intelligence and Intelligent Informatics 11, no. 6 (2007): 582–92. http://dx.doi.org/10.20965/jaciii.2007.p0582.

Full text
Abstract:
This paper presents a neural network ensemble (NNE) construction method for classification problems. The proposed method automatically determines a minimal NNE architecture and thus called the Minimal Neural Network Ensemble Construction (MNNEC) method. To determine minimal architecture, it starts with a single neural network (NN) with a minimal number of hidden units. During training process, it adds additional NN(s) with cumulative number(s) of hidden units. In conventional methods, in contrast, the number of NNs for NNE and the number of hidden nodes for each NN should be predetermined. At
APA, Harvard, Vancouver, ISO, and other styles
28

Lockery, Shawn R., Yan Fang, and Terrence J. Sejnowski. "A Dynamic Neural Network Model of Sensorimotor Transformations in the Leech." Neural Computation 2, no. 3 (1990): 274–82. http://dx.doi.org/10.1162/neco.1990.2.3.274.

Full text
Abstract:
Interneurons in leech ganglia receive multiple sensory inputs and make synaptic contacts with many motor neurons. These “hidden” units coordinate several different behaviors. We used physiological and anatomical constraints to construct a model of the local bending reflex. Dynamic networks were trained on experimentally derived input-output patterns using recurrent backpropagation. Units in the model were modified to include electrical synapses and multiple synaptic time constants. The properties of the hidden units that emerged in the simulations matched those in the leech. The model and data
APA, Harvard, Vancouver, ISO, and other styles
29

Onah, J. N., C. O. Omeje, D. U. Onyishi, and J. Oluwadurotimi. "Single topology neural network-based voltage collapse prediction of developing power systems." Nigerian Journal of Technology 43, no. 2 (2024): 309–16. http://dx.doi.org/10.4314/njt.v43i2.14.

Full text
Abstract:
Most modern power systems operate within the vicinity of saddle-node bifurcation points because the network operators are hard put to estimating the margin to voltage collapse before the blackout. As a result, voltage stability analysis and control are growing concerns amongst electric power utilities. The selection of the hidden layer units and the training function algorithms for back propagation artificial neural network training are major challenges. Hitherto, comparative analyses of the training functions were made. Thereafter, the complexity of the artificial neural network topology was
APA, Harvard, Vancouver, ISO, and other styles
30

Moorhead, Ian R., Nigel D. Haig, and Richard A. Clement. "An Investigation of Trained Neural Networks from a Neurophysiological Perspective." Perception 18, no. 6 (1989): 793–803. http://dx.doi.org/10.1068/p180793.

Full text
Abstract:
The application of theoretical neural networks to preprocessed images was investigated with the aim of developing a computational recognition system. The neural networks were trained by means of a back-propagation algorithm, to respond selectively to computer-generated bars and edges. The receptive fields of the trained networks were then mapped, in terms of both their synaptic weights and their responses to spot stimuli. There was a direct relationship between the pattern of weights on the inputs to the hidden units (the units in the intermediate layer between the input and the output units),
APA, Harvard, Vancouver, ISO, and other styles
31

Munro, Edwin E., Larry E. Shupe, and Eberhard E. Fetz. "Integration and Differentiation in Dynamic Recurrent Neural Networks." Neural Computation 6, no. 3 (1994): 405–19. http://dx.doi.org/10.1162/neco.1994.6.3.405.

Full text
Abstract:
Dynamic neural networks with recurrent connections were trained by backpropagation to generate the differential or the leaky integral of a nonrepeating frequency-modulated sinusoidal signal. The trained networks performed these operations on arbitrary input waveforms. Reducing the network size by deleting ineffective hidden units and combining redundant units, and then retraining the network produced a minimal network that computed the same function and revealed the underlying computational algorithm. Networks could also be trained to compute simultaneously the differential and integral of the
APA, Harvard, Vancouver, ISO, and other styles
32

BURGESS, NEIL. "A CONSTRUCTIVE ALGORITHM THAT CONVERGES FOR REAL-VALUED INPUT PATTERNS." International Journal of Neural Systems 05, no. 01 (1994): 59–66. http://dx.doi.org/10.1142/s0129065794000074.

Full text
Abstract:
A constructive algorithm is presented which combines the architecture of Cascade Correlation and the training of perceptron-like hidden units with the specific error-correcting roles of Upstart. Convergence to zero errors is proved for any consistent classification of real-valued pattern vectors. Addition of one extra element to each pattern allows hyper-spherical decision regions and enables convergence on real-valued inputs for existing constructive algorithms. Simulations demonstrate robust convergence and economical construction of hidden units in the benchmark “N-bit parity” and “twin spi
APA, Harvard, Vancouver, ISO, and other styles
33

Shanken, Andrew M. "Unit." Representations 143, no. 1 (2018): 91–117. http://dx.doi.org/10.1525/rep.2018.143.1.91.

Full text
Abstract:
This essay peers through the peephole of the word unit to reveal the word’s journey across multiple fields from the mid-nineteenth century through the present. A keyword hidden in plain sight, unit links science and the world of measurement to society (family units), politics (political units), architecture (housing units), cities (neighborhood units), and, more recently, big data, the carceral state (crime units), and managerial oversight.
APA, Harvard, Vancouver, ISO, and other styles
34

Chung, Stephen. "Learning by Competition of Self-Interested Reinforcement Learning Agents." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (2022): 6384–93. http://dx.doi.org/10.1609/aaai.v36i6.20589.

Full text
Abstract:
An artificial neural network can be trained by uniformly broadcasting a reward signal to units that implement a REINFORCE learning rule. Though this presents a biologically plausible alternative to backpropagation in training a network, the high variance associated with it renders it impractical to train deep networks. The high variance arises from the inefficient structural credit assignment since a single reward signal is used to evaluate the collective action of all units. To facilitate structural credit assignment, we propose replacing the reward signal to hidden units with the change in t
APA, Harvard, Vancouver, ISO, and other styles
35

Hartman, Eric J., James D. Keeler, and Jacek M. Kowalski. "Layered Neural Networks with Gaussian Hidden Units as Universal Approximations." Neural Computation 2, no. 2 (1990): 210–15. http://dx.doi.org/10.1162/neco.1990.2.2.210.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Devos, M., and G. A. Orban. "Modeling Orientation Discrimination at Multiple Reference Orientations with a Neural Network." Neural Computation 2, no. 2 (1990): 152–61. http://dx.doi.org/10.1162/neco.1990.2.2.152.

Full text
Abstract:
We trained a multilayer perceptron with backpropagation to perform stimulus orientation discrimination at multiple references using biologically plausible values as input and output. Hidden units are necessary for good performance only when the network must operate at multiple reference orientations. The orientation tuning curves of the hidden units change with reference. Our results suggest that at least for simple parameter discriminations such as orientation discrimination, one of the main functions of further processing in the visual system beyond striate cortex is to combine signals repre
APA, Harvard, Vancouver, ISO, and other styles
37

SANGER, TERENCE D. "OPTIMAL HIDDEN UNITS FOR TWO-LAYER NONLINEAR FEEDFORWARD NEURAL NETWORKS." International Journal of Pattern Recognition and Artificial Intelligence 05, no. 04 (1991): 545–61. http://dx.doi.org/10.1142/s0218001491000314.

Full text
Abstract:
The output layer of a feedforward neural network approximates nonlinear functions as a linear combination of a fixed set of basis functions, or "features". These features are learned by the hidden-layer units, often by a supervised algorithm such as a back-propagation algorithm. This paper investigates features which are optimal for computing desired output functions from a given distribution of input data, and which must therefore be learned using a mixed supervised and unsupervised algorithm. A definition is proposed for optimal nonlinear features, and a constructive method, which has an ite
APA, Harvard, Vancouver, ISO, and other styles
38

Orponen, Pekka. "The Computational Power of Discrete Hopfield Nets with Hidden Units." Neural Computation 8, no. 2 (1996): 403–15. http://dx.doi.org/10.1162/neco.1996.8.2.403.

Full text
Abstract:
We prove that polynomial size discrete Hopfield networks with hidden units compute exactly the class of Boolean functions PSPACE/poly, i.e., the same functions as are computed by polynomial space-bounded nonuniform Turing machines. As a corollary to the construction, we observe also that networks with polynomially bounded interconnection weights compute exactly the class of functions P/poly, i.e., the class computed by polynomial time-bounded nonuniform Turing machines.
APA, Harvard, Vancouver, ISO, and other styles
39

Moser, Eduard, and Tiko Kameda. "Bounds on the number of hidden units of Boltzmann machines." Neural Networks 5, no. 6 (1992): 911–21. http://dx.doi.org/10.1016/s0893-6080(05)80087-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Hirose, Yoshio, Koichi Yamashita, and Shimpei Hijiya. "Back-propagation algorithm which varies the number of hidden units." Neural Networks 4, no. 1 (1991): 61–66. http://dx.doi.org/10.1016/0893-6080(91)90032-z.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Trenn, S. "Multilayer Perceptrons: Approximation Order and Necessary Number of Hidden Units." IEEE Transactions on Neural Networks 19, no. 5 (2008): 836–44. http://dx.doi.org/10.1109/tnn.2007.912306.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Doukas, Haris, Apostolos Arsenopoulos, Miltiades Lazoglou, Alexandros Nikas, and Alexandros Flamos. "Wind repowering: Unveiling a hidden asset." Renewable & Sustainable Energy Reviews 162 (April 15, 2022): 112457. https://doi.org/10.1016/j.rser.2022.112457.

Full text
Abstract:
Given the abundant availability of resources, the market potential, and their cost competitiveness, onshore wind farms and photovoltaic units are expected to drive the overall growth of renewable energy sources in the next decade. However, Europe is a small and densely populated continent, which results in many countries experiencing a severe shortage of suitable land sites for installing new wind and photovoltaic facilities. This, combined with the fact that many existing wind turbines and photovoltaic units reach the end of their operational lifetime, has laid the groundwork for ‘repow
APA, Harvard, Vancouver, ISO, and other styles
43

HUYNH, HIEU TRUNG, YONGGWAN WON, and JUNG-JA KIM. "AN IMPROVEMENT OF EXTREME LEARNING MACHINE FOR COMPACT SINGLE-HIDDEN-LAYER FEEDFORWARD NEURAL NETWORKS." International Journal of Neural Systems 18, no. 05 (2008): 433–41. http://dx.doi.org/10.1142/s0129065708001695.

Full text
Abstract:
Recently, a novel learning algorithm called extreme learning machine (ELM) was proposed for efficiently training single-hidden-layer feedforward neural networks (SLFNs). It was much faster than the traditional gradient-descent-based learning algorithms due to the analytical determination of output weights with the random choice of input weights and hidden layer biases. However, this algorithm often requires a large number of hidden units and thus slowly responds to new observations. Evolutionary extreme learning machine (E-ELM) was proposed to overcome this problem; it used the differential ev
APA, Harvard, Vancouver, ISO, and other styles
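The baseline extreme learning machine referred to above is compact enough to state directly: input weights and hidden biases are drawn at random, and only the output weights are solved analytically by least squares (pseudoinverse). A minimal NumPy sketch for regression, assuming a sigmoid hidden layer:

```python
import numpy as np

def elm_train(X, T, n_hidden=100, seed=0):
    """Extreme learning machine: random input weights, analytic output weights."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))      # random input weights (never trained)
    b = rng.normal(size=n_hidden)                    # random hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))           # sigmoid hidden-layer outputs
    beta = np.linalg.pinv(H) @ T                     # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# toy usage: fit y = sin(x) from noisy samples
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(300, 1))
T = np.sin(X) + 0.05 * rng.normal(size=X.shape)
params = elm_train(X, T, n_hidden=50)
pred = elm_predict(X, *params)
```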
44

Miazhynskaia, Tatiana, Sylvia Frühwirth-Schnatter, and Georg Dorffner. "Neural Network Models for Conditional Distribution Under Bayesian Analysis." Neural Computation 20, no. 2 (2008): 504–22. http://dx.doi.org/10.1162/neco.2007.3182.

Full text
Abstract:
We use neural networks (NN) as a tool for a nonlinear autoregression to predict the second moment of the conditional density of return series. The NN models are compared to the popular econometric GARCH(1,1) model. We estimate the models in a Bayesian framework using Markov chain Monte Carlo posterior simulations. The interlinked aspects of the proposed Bayesian methodology are identification of NN hidden units and treatment of NN complexity based on model evidence. The empirical study includes the application of the designed strategy to market data, where we found a strong support for a nonli
APA, Harvard, Vancouver, ISO, and other styles
45

Fang, Hui, Victoria Wang, and Motonori Yamaguchi. "Dissecting Deep Learning Networks—Visualizing Mutual Information." Entropy 20, no. 11 (2018): 823. http://dx.doi.org/10.3390/e20110823.

Full text
Abstract:
Deep Learning (DL) networks are recent revolutionary developments in artificial intelligence research. Typical networks are stacked by groups of layers that are further composed of many convolutional kernels or neurons. In network design, many hyper-parameters need to be defined heuristically before training in order to achieve high cross-validation accuracies. However, accuracy evaluation from the output layer alone is not sufficient to specify the roles of the hidden units in associated networks. This results in a significant knowledge gap between DL’s wider applications and its limited theo
APA, Harvard, Vancouver, ISO, and other styles
46

Satoh, Seiya, Kenta Yamagishi, and Tatsuji Takahashi. "Comparing feedforward neural networks using independent component analysis on hidden units." PLOS ONE 18, no. 8 (2023): e0290435. http://dx.doi.org/10.1371/journal.pone.0290435.

Full text
Abstract:
Neural networks are widely used for classification and regression tasks, but they do not always perform well, nor explicitly inform us of the rationale for their predictions. In this study we propose a novel method of comparing a pair of different feedforward neural networks, which draws on independent components obtained by independent component analysis (ICA) on the hidden layers of these networks. It can compare different feedforward neural networks even when they have different structures, as well as feedforward neural networks that learned partially different datasets, yielding insights i
APA, Harvard, Vancouver, ISO, and other styles
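The comparison described in the abstract above relies on running independent component analysis on hidden-layer activations of two networks. The sketch below shows one way to set that up with scikit-learn; the ReLU activation extraction, the number of components, and the correlation-based matching are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import FastICA
from sklearn.neural_network import MLPClassifier

# Train two differently sized single-hidden-layer networks on the same data,
# then compare the independent components of their hidden activations.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
nets = [MLPClassifier(hidden_layer_sizes=(h,), max_iter=2000, random_state=0).fit(X, y)
        for h in (16, 32)]

def hidden_activations(net, X):
    """ReLU activations of the single hidden layer of a fitted MLPClassifier."""
    return np.maximum(0, X @ net.coefs_[0] + net.intercepts_[0])

comps = [FastICA(n_components=5, random_state=0).fit_transform(hidden_activations(n, X))
         for n in nets]
# similarity matrix: absolute correlations between the two sets of independent components
sim = np.abs(np.corrcoef(comps[0].T, comps[1].T)[:5, 5:])
```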
47

Baum, Eric B. "A Polynomial Time Algorithm That Learns Two Hidden Unit Nets." Neural Computation 2, no. 4 (1990): 510–22. http://dx.doi.org/10.1162/neco.1990.2.4.510.

Full text
Abstract:
Let N be the class of functions realizable by feedforward linear threshold nets with n input units, two hidden units each of zero threshold, and an output unit. This class is also essentially equivalent to the class of intersections of two open half spaces that are bounded by planes through the origin. We give an algorithm that probably approximately correctly (PAC) learns this class from examples and membership queries. The algorithm runs in time polynomial in n, ∊ (the accuracy parameter), and δ (the confidence parameter). If only examples are allowed, but not membership queries, we give an algorit
APA, Harvard, Vancouver, ISO, and other styles
48

Fung, Hon-Kwok, and Leong Kwan Li. "Minimal Feedforward Parity Networks Using Threshold Gates." Neural Computation 13, no. 2 (2001): 319–26. http://dx.doi.org/10.1162/089976601300014556.

Full text
Abstract:
This article presents preliminary research on the general problem of reducing the number of neurons needed in a neural network so that the network can perform a specific recognition task. We consider a single-hidden-layer feedforward network in which only McCulloch-Pitts units are employed in the hidden layer. We show that if only interconnections between adjacent layers are allowed, the minimum size of the hidden layer required to solve the n-bit parity problem is n when n ≤ 4.
APA, Harvard, Vancouver, ISO, and other styles
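The n-unit figure discussed above is achieved by the classical threshold-gate construction for n-bit parity: hidden unit i fires when at least i inputs are on, and the output unit combines those activations with alternating +1/-1 weights. A small Python sketch of that standard construction (the particular weights shown are the textbook ones, offered for illustration):

```python
import itertools
import numpy as np

def parity_net(x):
    """n-bit parity with a single hidden layer of n McCulloch-Pitts (threshold) units."""
    x = np.asarray(x)
    n = x.size
    s = x.sum()
    hidden = np.array([1 if s >= i else 0 for i in range(1, n + 1)])  # unit i fires iff >= i inputs on
    out_weights = np.array([(-1) ** i for i in range(n)])             # +1, -1, +1, ...
    return int(out_weights @ hidden >= 0.5)                           # output threshold gate

# check against true parity for all 4-bit inputs
assert all(parity_net(bits) == (sum(bits) % 2) for bits in itertools.product([0, 1], repeat=4))
```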
49

Suykens, Johan A. K. "Deep Restricted Kernel Machines Using Conjugate Feature Duality." Neural Computation 29, no. 8 (2017): 2123–63. http://dx.doi.org/10.1162/neco_a_00984.

Full text
Abstract:
The aim of this letter is to propose a theory of deep restricted kernel machines offering new foundations for deep learning with kernel machines. From the viewpoint of deep learning, it is partially related to restricted Boltzmann machines, which are characterized by visible and hidden units in a bipartite graph without hidden-to-hidden connections and deep learning extensions as deep belief networks and deep Boltzmann machines. From the viewpoint of kernel machines, it includes least squares support vector machines for classification and regression, kernel principal component analysis (PCA),
APA, Harvard, Vancouver, ISO, and other styles
50

GALIANO, I., E. SANCHIS, F. CASACUBERTA, and I. TORRES. "ACOUSTIC-PHONETIC DECODING OF SPANISH CONTINUOUS SPEECH." International Journal of Pattern Recognition and Artificial Intelligence 08, no. 01 (1994): 155–80. http://dx.doi.org/10.1142/s0218001494000073.

Full text
Abstract:
The design of current acoustic-phonetic decoders for a specific language involves the selection of an adequate set of sublexical units, and a choice of the mathematical framework for modelling the corresponding units. In this work, the baseline chosen for continuous Spanish speech consists of 23 sublexical units that roughly correspond to the 24 Spanish phonemes. The process of selection of such a baseline was based on language phonetic criteria and some experiments with an available speech corpora. On the other hand, two types of models were chosen for this work, conventional Hidden Markov Mo
APA, Harvard, Vancouver, ISO, and other styles