
Journal articles on the topic 'Supervised neural networks'


Consult the top 50 journal articles for your research on the topic 'Supervised neural networks.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Yeh, I.-Cheng, and Kuan-Cheng Lin. "Supervised Learning Probabilistic Neural Networks." Neural Processing Letters 34, no. 2 (July 22, 2011): 193–208. http://dx.doi.org/10.1007/s11063-011-9191-z.

2

Hush, D. R., and B. G. Horne. "Progress in supervised neural networks." IEEE Signal Processing Magazine 10, no. 1 (January 1993): 8–39. http://dx.doi.org/10.1109/79.180705.

3

Tomasov, Adrian, Martin Holik, Vaclav Oujezsky, Tomas Horvath, and Petr Munster. "GPON PLOAMd Message Analysis Using Supervised Neural Networks." Applied Sciences 10, no. 22 (November 18, 2020): 8139. http://dx.doi.org/10.3390/app10228139.

Abstract:
This paper discusses the possibility of analyzing the orchestration protocol used in gigabit-capable passive optical networks (GPONs). Because a GPON is defined by the International Telecommunication Union Telecommunication Standardization Sector (ITU-T) as a set of recommendations, implementations across device vendors may exhibit slight differences, which complicates the analysis of such protocols. Therefore, machine learning techniques (e.g., neural networks) are used to evaluate differences in GPONs among various device vendors. As a result, this paper compares three neural network models…
4

Hammer, Barbara. "Neural Smithing – Supervised Learning in Feedforward Artificial Neural Networks." Pattern Analysis & Applications 4, no. 1 (March 2001): 73–74. http://dx.doi.org/10.1007/s100440170029.

5

Sarukkai, Ramesh R. "Supervised Networks That Self-Organize Class Outputs." Neural Computation 9, no. 3 (March 1, 1997): 637–48. http://dx.doi.org/10.1162/neco.1997.9.3.637.

Abstract:
Supervised neural network learning algorithms have proved very successful at solving a variety of learning problems; however, they suffer from a common problem of requiring explicit output labels. In this article, it is shown that pattern classification can be achieved in a multilayered feedforward neural network without requiring explicit output labels, by a process of supervised self-organization. The class projection is achieved by optimizing appropriate within-class uniformity and between-class discernibility criteria. The mapping function and the class labels are developed together…
6

Doyle, J. R. "Supervised learning in N-tuple neural networks." International Journal of Man-Machine Studies 33, no. 1 (July 1990): 21–40. http://dx.doi.org/10.1016/s0020-7373(05)80113-0.

7

Secco, Jacopo, Mauro Poggio, and Fernando Corinto. "Supervised neural networks with memristor binary synapses." International Journal of Circuit Theory and Applications 46, no. 1 (January 2018): 221–33. http://dx.doi.org/10.1002/cta.2429.

8

Sporea, Ioana, and André Grüning. "Supervised Learning in Multilayer Spiking Neural Networks." Neural Computation 25, no. 2 (February 2013): 473–509. http://dx.doi.org/10.1162/neco_a_00396.

Abstract:
We introduce a supervised learning algorithm for multilayer spiking neural networks. The algorithm overcomes a limitation of existing learning algorithms: it can be applied to neurons firing multiple spikes in artificial neural networks with hidden layers. It can also, in principle, be used with any linearizable neuron model and allows different coding schemes of spike train patterns. The algorithm is applied successfully to classic linearly nonseparable benchmarks such as the XOR problem and the Iris data set, as well as to more complex classification and mapping problems. The algorithm has…
9

Wang, Juexin, Anjun Ma, Qin Ma, Dong Xu, and Trupti Joshi. "Inductive inference of gene regulatory network using supervised and semi-supervised graph neural networks." Computational and Structural Biotechnology Journal 18 (2020): 3335–43. http://dx.doi.org/10.1016/j.csbj.2020.10.022.

10

Xu, Jianqiao, Zhaolu Zuo, Danchao Wu, Bing Li, Xiaoni Li, and Deyi Kong. "Bearing Defect Detection with Unsupervised Neural Networks." Shock and Vibration 2021 (August 19, 2021): 1–11. http://dx.doi.org/10.1155/2021/9544809.

Abstract:
Bearings always suffer from surface defects, such as scratches, black spots, and pits. Those surface defects have great effects on the quality and service life of bearings. Therefore, defect detection has always been the focus of bearing quality control. Deep learning has been successfully applied to object detection due to its excellent performance. However, it is difficult to realize automatic detection of bearing surface defects with data-driven deep learning because few sample data of bearing defects are available from the actual production line. Sample preprocessing…
11

Zhao, Shijie, Yan Cui, Linwei Huang, Li Xie, Yaowu Chen, Junwei Han, Lei Guo, Shu Zhang, Tianming Liu, and Jinglei Lv. "Supervised Brain Network Learning Based on Deep Recurrent Neural Networks." IEEE Access 8 (2020): 69967–78. http://dx.doi.org/10.1109/access.2020.2984948.

12

Tan, Junyang, Dan Xia, Shiyun Dong, Honghao Zhu, and Binshi Xu. "Research On Pre-Training Method and Generalization Ability of Big Data Recognition Model of the Internet of Things." ACM Transactions on Asian and Low-Resource Language Information Processing 20, no. 5 (July 20, 2021): 1–15. http://dx.doi.org/10.1145/3433539.

Abstract:
The Internet of Things and big data are currently hot concepts and research fields. The mining, classification, and recognition of big data in Internet of Things systems are key issues of wide current concern. The artificial neural network is beneficial for multi-dimensional data classification and recognition because of its strong feature extraction and self-learning ability. Pre-training is an effective method to address the gradient diffusion problem in deep neural networks and could result in better generalization. This article focuses on the performance of supervised pre-training…
13

Nobukawa, Sou, Haruhiko Nishimura, and Teruya Yamanishi. "Pattern Classification by Spiking Neural Networks Combining Self-Organized and Reward-Related Spike-Timing-Dependent Plasticity." Journal of Artificial Intelligence and Soft Computing Research 9, no. 4 (October 1, 2019): 283–91. http://dx.doi.org/10.2478/jaiscr-2019-0009.

Abstract:
Many recent studies have applied spiking neural networks with spike-timing-dependent plasticity (STDP) to machine learning problems. The learning abilities of dopamine-modulated STDP (DA-STDP) for reward-related synaptic plasticity have also been gathering attention. Following these studies, we hypothesize that a network structure combining self-organized STDP and reward-related DA-STDP can solve the machine learning problem of pattern classification. Therefore, we studied the ability of a network in which recurrent spiking neural networks are combined with STDP for non-supervised learning…
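
A pair-based STDP update of the kind referenced in the abstract above can be stated in a few lines. The following is a minimal Python sketch under common textbook assumptions; the constants (a_plus, a_minus, tau_plus, tau_minus) are illustrative defaults rather than the paper's values, and the dopamine-modulated variant (DA-STDP) would additionally gate this update by a reward signal.

    import math

    def stdp_delta_w(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
        """Pair-based STDP weight change for a spike-time difference
        dt = t_post - t_pre (in ms): potentiation when the presynaptic
        spike precedes the postsynaptic one, depression otherwise."""
        if dt >= 0:
            return a_plus * math.exp(-dt / tau_plus)
        return -a_minus * math.exp(dt / tau_minus)
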
14

Belatreche, Ammar, Liam P. Maguire, Martin McGinnity, and Qing Xiang Wu. "Evolutionary Design of Spiking Neural Networks." New Mathematics and Natural Computation 2, no. 3 (November 2006): 237–53. http://dx.doi.org/10.1142/s179300570600049x.

Abstract:
Unlike traditional artificial neural networks (ANNs), which use a high abstraction of real neurons, spiking neural networks (SNNs) offer a biologically plausible model of realistic neurons. They differ from classical artificial neural networks in that SNNs handle and communicate information by means of the timing of individual pulses, an important feature of neuronal systems that is ignored by models based on a rate coding scheme. However, in order to make the most of these realistic neuronal models, good training algorithms are required. Most existing learning paradigms tune the synaptic weights in a…
15

Zenke, Friedemann, and Surya Ganguli. "SuperSpike: Supervised Learning in Multilayer Spiking Neural Networks." Neural Computation 30, no. 6 (June 2018): 1514–41. http://dx.doi.org/10.1162/neco_a_01086.

Abstract:
A vast majority of computation in the brain is performed by spiking neural networks. Despite the ubiquity of such spiking, we currently lack an understanding of how biological spiking neural circuits learn and compute in vivo, as well as how we can instantiate such capabilities in artificial spiking circuits in silico. Here we revisit the problem of supervised learning in temporally coding multilayer spiking neural networks. First, by using a surrogate gradient approach, we derive SuperSpike, a nonlinear voltage-based three-factor learning rule capable of training multilayer networks of deterministic…
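
The surrogate gradient approach mentioned in this abstract replaces the derivative of the neuron's hard spiking threshold, which is zero almost everywhere, with a smooth surrogate so that backpropagation can proceed. A minimal sketch of that one idea, assuming a fast-sigmoid surrogate with an illustrative steepness beta; it does not reproduce the paper's full three-factor rule.

    def surrogate_spike_grad(v, threshold=1.0, beta=10.0):
        """Fast-sigmoid surrogate for d(spike)/d(membrane potential).
        The true derivative of the step S(v) = 1 if v >= threshold else 0
        is zero almost everywhere, so a smooth stand-in is used instead."""
        x = beta * abs(v - threshold)
        return beta / (1.0 + x) ** 2
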
16

Wettayaprasit, W., C. Lursinsap, and C. H. Chu. "Extracting linguistic quantitative rules from supervised neural networks." International Journal of Knowledge-based and Intelligent Engineering Systems 8, no. 3 (January 10, 2005): 161–70. http://dx.doi.org/10.3233/kes-2004-8304.

17

Song, Xingguo, Haibo Gao, Liang Ding, Pol D. Spanos, Zongquan Deng, and Zhijun Li. "Locally supervised neural networks for approximating terramechanics models." Mechanical Systems and Signal Processing 75 (June 2016): 57–74. http://dx.doi.org/10.1016/j.ymssp.2015.12.028.

18

Alshehhi, Rasha, Chris S. Hanson, Laurent Gizon, and Shravan Hanasoge. "Supervised neural networks for helioseismic ring-diagram inversions." Astronomy & Astrophysics 622 (February 2019): A124. http://dx.doi.org/10.1051/0004-6361/201834237.

Abstract:
Context. The inversion of ring fit parameters to obtain subsurface flow maps in ring-diagram analysis for eight years of SDO observations is computationally expensive, requiring ∼3200 CPU hours. Aims. In this paper we apply machine-learning techniques to the inversion step of the ring-diagram pipeline in order to speed up the calculations. Specifically, we train a predictor for subsurface flows using the mode fit parameters and the previous inversion results to replace future inversion requirements. Methods. We utilize artificial neural networks (ANNs) as a supervised learning method for predicting…
19

Cheung, Man-Fung, Kevin M. Passino, and Stephen Yurkovich. "Supervised Training of Neural Networks via Ellipsoid Algorithms." Neural Computation 6, no. 4 (July 1994): 748–60. http://dx.doi.org/10.1162/neco.1994.6.4.748.

Abstract:
In this paper we show that two ellipsoid algorithms can be used to train single-layer neural networks with general staircase nonlinearities. The ellipsoid algorithms have several advantages over other conventional training approaches, including (1) explicit convergence results and automatic determination of linear separability, (2) an elimination of problems with picking initial values for the weights, (3) guarantees that the trained weights are in some “acceptable region,” (4) certain “robustness” characteristics, and (5) a training approach for neural networks with a wider variety of activation…
20

Sperduti, A., and A. Starita. "Supervised neural networks for the classification of structures." IEEE Transactions on Neural Networks 8, no. 3 (May 1997): 714–35. http://dx.doi.org/10.1109/72.572108.

21

Mazzatorta, Paolo, Marjan Vračko, Aneta Jezierska, and Emilio Benfenati. "Modeling Toxicity by Using Supervised Kohonen Neural Networks." Journal of Chemical Information and Computer Sciences 43, no. 2 (March 2003): 485–92. http://dx.doi.org/10.1021/ci0256182.

22

Gazula, S., and M. R. Kabuka. "Design of supervised classifiers using Boolean neural networks." IEEE Transactions on Pattern Analysis and Machine Intelligence 17, no. 12 (1995): 1239–46. http://dx.doi.org/10.1109/34.476519.

23

Smith, Alice E., and Cihan H. Dagli. "Controlling industrial processes through supervised, feedforward neural networks." Computers & Industrial Engineering 21, no. 1-4 (January 1991): 247–51. http://dx.doi.org/10.1016/0360-8352(91)90096-o.

24

Hu, Yaxian, Senlin Luo, Longfei Han, Limin Pan, and Tiemei Zhang. "Deep supervised learning with mixture of neural networks." Artificial Intelligence in Medicine 102 (January 2020): 101764. http://dx.doi.org/10.1016/j.artmed.2019.101764.

25

Qin, Shanshan, Nayantara Mudur, and Cengiz Pehlevan. "Contrastive Similarity Matching for Supervised Learning." Neural Computation 33, no. 5 (April 13, 2021): 1300–1328. http://dx.doi.org/10.1162/neco_a_01374.

Abstract:
We propose a novel biologically plausible solution to the credit assignment problem motivated by observations in the ventral visual pathway and trained deep neural networks. In both, representations of objects in the same category become progressively more similar, while objects belonging to different categories become less similar. We use this observation to motivate a layer-specific learning goal in a deep network: each layer aims to learn a representational similarity matrix that interpolates between previous and later layers. We formulate this idea using a contrastive similarity matching…
26

Wu, Wei, Guangmin Hu, and Fucai Yu. "Ricci Curvature-Based Semi-Supervised Learning on an Attributed Network." Entropy 23, no. 3 (February 27, 2021): 292. http://dx.doi.org/10.3390/e23030292.

Abstract:
In recent years, drawing lessons from traditional neural network models, people have paid more and more attention to the design of neural network architectures for processing graph-structured data, which are called graph neural networks (GNNs). Graph convolutional networks (GCNs) are one class of neural network model within GNNs. A GCN extends the convolution operation from traditional data (such as images) to graph data; it is essentially a feature extractor that aggregates the features of neighborhood nodes into those of target nodes. In the process of aggregating features, a GCN…
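
The aggregation step described in the abstract above is commonly written as H' = sigma(D^(-1/2) A D^(-1/2) H W), where A is the adjacency matrix with self-loops added and D its degree matrix. A minimal NumPy sketch of one such layer (the names and the ReLU choice are illustrative, not taken from the paper):

    import numpy as np

    def gcn_layer(adjacency, features, weights):
        """One graph-convolution layer: add self-loops, symmetrically
        normalize the adjacency matrix, aggregate neighbour features,
        then apply a linear map and a ReLU nonlinearity."""
        a_hat = adjacency + np.eye(adjacency.shape[0])
        d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
        a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt
        return np.maximum(0.0, a_norm @ features @ weights)
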
27

Kulathunga, Nalinda, Nishath Rajiv Ranasinghe, Daniel Vrinceanu, Zackary Kinsman, Lei Huang, and Yunjiao Wang. "Effects of Nonlinearity and Network Architecture on the Performance of Supervised Neural Networks." Algorithms 14, no. 2 (February 5, 2021): 51. http://dx.doi.org/10.3390/a14020051.

Abstract:
The nonlinearity of activation functions used in deep learning models is crucial for the success of predictive models. Several simple nonlinear functions, including the Rectified Linear Unit (ReLU) and Leaky-ReLU (L-ReLU), are commonly used in neural networks to impose the nonlinearity. In practice, these functions remarkably enhance model accuracy. However, there is limited insight into the effects of nonlinearity in neural networks on their performance. Here, we investigate the performance of neural network models as a function of nonlinearity using ReLU and L-ReLU activation functions in the…
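
For reference, the two activations compared in this study are simple piecewise-linear functions. A minimal sketch; the leak slope alpha = 0.01 is a common default and not necessarily the value used in the paper.

    import numpy as np

    def relu(x):
        """Rectified Linear Unit: max(0, x), elementwise."""
        return np.maximum(0.0, x)

    def leaky_relu(x, alpha=0.01):
        """Leaky ReLU: keeps a small slope alpha for negative inputs."""
        return np.where(x >= 0.0, x, alpha * x)
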
28

Magoulas, George D., and Michael N. Vrahatis. "Adaptive Algorithms for Neural Network Supervised Learning: A Deterministic Optimization Approach." International Journal of Bifurcation and Chaos 16, no. 7 (July 2006): 1929–50. http://dx.doi.org/10.1142/s0218127406015805.

Abstract:
Networks of neurons can perform computations that even modern computers find very difficult to simulate. Most of the existing artificial neurons and artificial neural networks are considered biologically unrealistic; nevertheless, the practical success of the backpropagation algorithm and the powerful capabilities of feedforward neural networks have made neural computing very popular in several application areas. A challenging issue in this context is learning internal representations by adjusting the weights of the network connections. To this end, several first-order and second-order algorithms…
29

Shin, Sungho, Jongwon Kim, Yeonguk Yu, Seongju Lee, and Kyoobin Lee. "Self-Supervised Transfer Learning from Natural Images for Sound Classification." Applied Sciences 11, no. 7 (March 29, 2021): 3043. http://dx.doi.org/10.3390/app11073043.

Abstract:
We propose the implementation of transfer learning from natural images to audio-based images using self-supervised learning schemes. Through self-supervised learning, convolutional neural networks (CNNs) can learn the general representation of natural images without labels. In this study, a convolutional neural network was pre-trained with natural images (ImageNet) via self-supervised learning; subsequently, it was fine-tuned on the target audio samples. Pre-training with the self-supervised learning scheme significantly improved the sound classification performance when validated on the following…
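
The pipeline in this abstract (pre-train a CNN on natural images, then fine-tune on audio-derived images) follows the standard transfer-learning pattern. A rough PyTorch sketch under stated assumptions: a ResNet-18 backbone stands in for the pre-trained encoder (here loaded with supervised ImageNet weights, whereas the paper uses a self-supervised scheme), and the 10-class output head is hypothetical.

    import torch.nn as nn
    from torchvision.models import resnet18

    # Pre-trained encoder (stand-in for the paper's self-supervised CNN).
    model = resnet18(weights="IMAGENET1K_V1")

    # Replace the classification head for a hypothetical 10-class sound task.
    model.fc = nn.Linear(model.fc.in_features, 10)

    # Optionally freeze the earliest layers and fine-tune the rest
    # on spectrogram images of the target audio samples.
    for name, param in model.named_parameters():
        param.requires_grad = not name.startswith(("conv1", "bn1", "layer1"))
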
30

Aragon-Calvo, M. A., and J. C. Carvajal. "Self-supervised learning with physics-aware neural networks – I. Galaxy model fitting." Monthly Notices of the Royal Astronomical Society 498, no. 3 (September 7, 2020): 3713–19. http://dx.doi.org/10.1093/mnras/staa2228.

Abstract:
Estimating the parameters of a model describing a set of observations using a neural network is, in general, solved in a supervised way. In cases when we do not have access to the model’s true parameters, this approach cannot be applied. Standard unsupervised learning techniques, on the other hand, do not produce meaningful or semantic representations that can be associated with the model’s parameters. Here we introduce a novel self-supervised hybrid network architecture that combines traditional neural network elements with analytic or numerical models, which represent a physical process…
31

Tang, Zheng, Xu Gang Wang, Hiroki Tamura, and Masahiro Ishii. "An Algorithm of Supervised Learning for Multilayer Neural Networks." Neural Computation 15, no. 5 (May 1, 2003): 1125–42. http://dx.doi.org/10.1162/089976603765202686.

Abstract:
A method of supervised learning for multilayer artificial neural networks to escape local minima is proposed. The learning model has two phases: a backpropagation phase and a gradient ascent phase. The backpropagation phase performs steepest descent on a surface in weight space whose height at any point is equal to an error measure, and it finds a set of weights minimizing this error measure. When backpropagation gets stuck in local minima, the gradient ascent phase attempts to fill up the valley by modifying gain parameters in a gradient ascent direction of the error measure…
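
The two-phase scheme described above alternates ordinary backpropagation with a gradient ascent step on the neuron gain parameters whenever the error stops improving. A schematic sketch of the control flow; the callables error, grad_w, and grad_gain are placeholders, and the paper's exact update rules are not reproduced here.

    def train_two_phase(weights, gains, error, grad_w, grad_gain,
                        lr=0.1, lr_gain=0.01, stall_tol=1e-6, steps=1000):
        """Backpropagation phase: steepest descent on the weights.
        Gradient ascent phase: when descent stalls in a local minimum,
        move the gain parameters uphill to 'fill up the valley'."""
        prev = float("inf")
        for _ in range(steps):
            cur = error(weights, gains)
            if prev - cur > stall_tol:
                # descent on the weights while the error keeps dropping
                weights = [w - lr * g for w, g in zip(weights, grad_w(weights, gains))]
            else:
                # stalled: ascend on the gains, then resume descent
                gains = [a + lr_gain * g for a, g in zip(gains, grad_gain(weights, gains))]
            prev = cur
        return weights, gains
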
32

Zhang, Pengfei, and Xiaoming Ju. "Adversarial Sample Detection with Gaussian Mixture Conditional Generative Adversarial Networks." Mathematical Problems in Engineering 2021 (September 13, 2021): 1–18. http://dx.doi.org/10.1155/2021/8268249.

Abstract:
It is important to detect adversarial samples in the physical world that are far away from the training data distribution. Some adversarial samples can make a machine learning model generate a highly overconfident distribution in the testing stage. Thus, we propose a mechanism for detecting adversarial samples based on semisupervised generative adversarial networks (GANs) with an encoder-decoder structure; this mechanism can be applied to any pretrained neural network without changing the network’s structure. The semisupervised GANs also give us insight into the behavior of adversarial samples…
33

Orukwo, Joy Oyinye, and Ledisi Giok Kabari. "Diagnosing Diabetes Using Artificial Neural Networks." European Journal of Engineering Research and Science 5, no. 2 (February 27, 2020): 221–24. http://dx.doi.org/10.24018/ejers.2020.5.2.1774.

Abstract:
Diabetes has always been a silent killer, and the number of people suffering from it has increased tremendously in the last few decades. More often than not, people continue with their normal lifestyle, unaware that their health is at severe risk and that with each passing day their diabetes goes undetected. Artificial neural networks have become extensively useful in medical diagnosis, as they provide a powerful tool to help analyze, model, and make sense of complex clinical data. This study developed a diabetes diagnosis system using a feed-forward neural network with a supervised learning algorithm. The neural…
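
A supervised feed-forward diagnosis model of the kind described above can be prototyped in a few lines with scikit-learn. A minimal sketch; the synthetic data stands in for clinical measurements and 0/1 diabetes labels, and the layer sizes are illustrative.

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # Synthetic stand-in for clinical features X and diagnosis labels y.
    X, y = make_classification(n_samples=500, n_features=8, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                        random_state=0)

    # Feed-forward network trained with a supervised (backprop) algorithm.
    clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000, random_state=0)
    clf.fit(X_train, y_train)
    print("test accuracy:", clf.score(X_test, y_test))
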
34

Dapkus, Paulius, Liudas Mažeika, and Vytautas Sliesoraitis. "A study of supervised combined neural-network-based ultrasonic method for reconstruction of spatial distribution of material properties." Information Technology And Control 49, no. 3 (September 23, 2020): 381–94. http://dx.doi.org/10.5755/j01.itc.49.3.26792.

Abstract:
This paper examines the performance of commonly used neural-network-based classifiers for investigating structural noise in metals as a grain-size estimation problem. The central problem is to identify the grain size of an object's structure based on metal features or on the structure itself. When the structure data are obtained, a proposed feature extraction method is used to extract features of the object. Afterwards, the extracted features are used as the inputs for the classifiers. This research focuses on using basic ultrasonic sensors to obtain an object's structural grain size…
35

Hodges, Jaret, and Soumya Mohan. "Machine Learning in Gifted Education: A Demonstration Using Neural Networks." Gifted Child Quarterly 63, no. 4 (September 9, 2019): 243–52. http://dx.doi.org/10.1177/0016986219867483.

Abstract:
Machine learning algorithms are used in language processing, automated driving, and prediction. Though the theory of machine learning has existed since the 1950s, it was not until the advent of advanced computing that their potential began to be realized. Gifted education is a field where machine learning has yet to be utilized, even though one of the underlying problems of gifted education is classification, which is an area where learning algorithms have become exceptionally accurate. We provide a brief overview of machine learning with a focus on neural networks and supervised learning…
36

Zhang, Yingxue, Soumyasundar Pal, Mark Coates, and Deniz Ustebay. "Bayesian Graph Convolutional Neural Networks for Semi-Supervised Classification." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5829–36. http://dx.doi.org/10.1609/aaai.v33i01.33015829.

Abstract:
Recently, techniques for applying convolutional neural networks to graph-structured data have emerged. Graph convolutional neural networks (GCNNs) have been used to address node and graph classification and matrix completion. Although the performance has been impressive, the current implementations have limited capability to incorporate uncertainty in the graph structure. Almost all GCNNs process a graph as though it is a ground-truth depiction of the relationship between nodes, but often the graphs employed in applications are themselves derived from noisy data or modelling assumptions. Spurious…
37

Ben Boubaker, Ourida. "Applying Neural Networks for Supervised Learning of Medical Data." International Journal of Data Mining & Knowledge Management Process 9, no. 3 (May 31, 2019): 29–38. http://dx.doi.org/10.5121/ijdkp.2019.9303.

38

Hu, Yipeng, Marc Modat, Eli Gibson, Wenqi Li, Nooshin Ghavami, Ester Bonmati, Guotai Wang, et al. "Weakly-supervised convolutional neural networks for multimodal image registration." Medical Image Analysis 49 (October 2018): 1–13. http://dx.doi.org/10.1016/j.media.2018.07.002.

39

Zhang, Malu, Hong Qu, Xiurui Xie, and Jürgen Kurths. "Supervised learning in spiking neural networks with noise-threshold." Neurocomputing 219 (January 2017): 333–49. http://dx.doi.org/10.1016/j.neucom.2016.09.044.

40

Ding, Zhengming, Nasser M. Nasrabadi, and Yun Fu. "Semi-supervised Deep Domain Adaptation via Coupled Neural Networks." IEEE Transactions on Image Processing 27, no. 11 (November 2018): 5214–24. http://dx.doi.org/10.1109/tip.2018.2851067.

41

Gyer, M. S. "Adjuncts and alternatives to neural networks for supervised classification." IEEE Transactions on Systems, Man, and Cybernetics 22, no. 1 (1992): 35–46. http://dx.doi.org/10.1109/21.141309.

42

Černá, Lenka, and Milan Chytrý. "Supervised classification of plant communities with artificial neural networks." Journal of Vegetation Science 16, no. 4 (February 24, 2005): 407–14. http://dx.doi.org/10.1111/j.1654-1103.2005.tb02380.x.

43

Amorim, Willian Paraguassu, Gustavo Henrique Rosa, Rogério Thomazella, José Eduardo Cogo Castanho, Fábio Romano Lofrano Dotto, Oswaldo Pons Rodrigues Júnior, Aparecido Nilceu Marana, and João Paulo Papa. "Semi-supervised learning with connectivity-driven convolutional neural networks." Pattern Recognition Letters 128 (December 2019): 16–22. http://dx.doi.org/10.1016/j.patrec.2019.08.012.

44

Ludwig, Oswaldo, and Urbano Nunes. "Novel Maximum-Margin Training Algorithms for Supervised Neural Networks." IEEE Transactions on Neural Networks 21, no. 6 (June 2010): 972–84. http://dx.doi.org/10.1109/tnn.2010.2046423.

45

López-Vázquez, G., M. Ornelas-Rodriguez, A. Espinal, J. A. Soria-Alcaraz, A. Rojas-Domínguez, H. J. Puga-Soberanes, J. M. Carpio, and H. Rostro-Gonzalez. "Evolutionary Spiking Neural Networks for Solving Supervised Classification Problems." Computational Intelligence and Neuroscience 2019 (March 28, 2019): 1–13. http://dx.doi.org/10.1155/2019/4182639.

Abstract:
This paper presents a grammatical evolution (GE)-based methodology to automatically design third-generation artificial neural networks (ANNs), also known as spiking neural networks (SNNs), for solving supervised classification problems. The proposal performs the SNN design by exploring the search space of three-layered feedforward topologies with configured synaptic connections (weights and delays), so that no explicit training is carried out. Besides, the designed SNNs have partial connections between input and hidden layers, which may contribute to avoiding redundancies and reducing the dimensionality…
46

Keyan, M. Karthi. "Semi Supervised Document Classification Model Using Artificial Neural Networks." International Journal of Computer Trends and Technology 34, no. 1 (April 25, 2016): 52–58. http://dx.doi.org/10.14445/22312803/ijctt-v34p109.

47

Tang, Rongxin, Hualin Liu, Jingbo Wei, and Wenchao Tang. "Supervised learning with convolutional neural networks for hyperspectral visualization." Remote Sensing Letters 11, no. 4 (February 6, 2020): 363–72. http://dx.doi.org/10.1080/2150704x.2020.1717014.

48

Bruzzone, L., and D. Fernández Prieto. "Supervised training technique for radial basis function neural networks." Electronics Letters 34, no. 11 (1998): 1115. http://dx.doi.org/10.1049/el:19980789.

49

Polikar, R., L. Udpa, S. S. Udpa, and V. Honavar. "Learn++: an incremental learning algorithm for supervised neural networks." IEEE Transactions on Systems, Man and Cybernetics, Part C (Applications and Reviews) 31, no. 4 (2001): 497–508. http://dx.doi.org/10.1109/5326.983933.

50

Jain, Lakhmi C., Manjeevan Seera, Chee Peng Lim, and P. Balasubramaniam. "A review of online learning in supervised neural networks." Neural Computing and Applications 25, no. 3-4 (December 31, 2013): 491–509. http://dx.doi.org/10.1007/s00521-013-1534-4.
