To see the other types of publications on this topic, follow the link: Neural network learning.

Journal articles on the topic 'Neural network learning'



Consult the top 50 journal articles for your research on the topic 'Neural network learning.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Jiang, Yiming, Chenguang Yang, Shi-lu Dai, and Beibei Ren. "Deterministic learning enhanced neural network control of unmanned helicopter." International Journal of Advanced Robotic Systems 13, no. 6 (2016): 172988141667111. http://dx.doi.org/10.1177/1729881416671118.

Full text
Abstract:
In this article, a neural network–based tracking controller is developed for an unmanned helicopter system with guaranteed global stability in the presence of uncertain system dynamics. Due to the coupling and modeling uncertainties of helicopter systems, neural network approximation techniques are employed to compensate for the unknown dynamics of each subsystem. In order to extend the semiglobal stability achieved by conventional neural control to global stability, a switching mechanism is also integrated into the control design, such that the resulting neural controller is always valid wit
APA, Harvard, Vancouver, ISO, and other styles
2

Mahat, Norpah, Nor Idayunie Nording, Jasmani Bidin, Suzanawati Abu Hasan, and Teoh Yeong Kin. "Artificial Neural Network (ANN) to Predict Mathematics Students’ Performance." Journal of Computing Research and Innovation 7, no. 1 (2022): 29–38. http://dx.doi.org/10.24191/jcrinn.v7i1.264.

Full text
Abstract:
Predicting students’ academic performance is essential to produce high-quality students. The main goal is to continuously help students increase their ability in the learning process and to help educators improve their teaching skills. Therefore, this study was conducted to predict mathematics students’ performance using an Artificial Neural Network (ANN). Secondary data from 382 mathematics students from the UCI Machine Learning Repository data sets were used to train the neural networks. The neural network model was built using nntool. Two inputs are used which are the first and the
APA, Harvard, Vancouver, ISO, and other styles
3

Lin, Shaobo, Jinshan Zeng, and Xiaoqin Zhang. "Constructive Neural Network Learning." IEEE Transactions on Cybernetics 49, no. 1 (2019): 221–32. http://dx.doi.org/10.1109/tcyb.2017.2771463.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Baba, Norio. "TD Learning with Neural Networks." Journal of Robotics and Mechatronics 10, no. 4 (1998): 289–94. http://dx.doi.org/10.20965/jrm.1998.p0289.

Full text
Abstract:
Temporal difference (TD) learning, proposed by Sutton in the late 1980s, is a very interesting approach to prediction that uses obtained predictions for future prediction. Applying this learning to neural networks helps improve their prediction performance, once certain problems are solved. The major problems are as follows: 1) Prediction Pt at time t is assumed to be scalar in Sutton's original paper, raising the problem of "what is the rule for updating the weight vector of the neural network if the neural network has multiple outputs?" 2) How do we derive individual components of gra
APA, Harvard, Vancouver, ISO, and other styles
5

Hamdan, Baida Abdulredha. "Neural Network Principles and its Application." Webology 19, no. 1 (2022): 3955–70. http://dx.doi.org/10.14704/web/v19i1/web19261.

Full text
Abstract:
Neural networks, also known as artificial neural networks, are generally a computing technique formed and designed to simulate the real human brain for use as a problem-solving method. Artificial neural networks gain their abilities through training or learning; each method has a certain input and an output, also called the result. This learning works by forming probability-weighted associations between the input and the result, which are stored across the net, specifically within its data structure. Any training process is
APA, Harvard, Vancouver, ISO, and other styles
6

Gao, Yuan, Laurence T. Yang, Dehua Zheng, Jing Yang, and Yaliang Zhao. "Quantized Tensor Neural Network." ACM/IMS Transactions on Data Science 2, no. 4 (2021): 1–18. http://dx.doi.org/10.1145/3491255.

Full text
Abstract:
Tensor networks, as an effective computing framework for efficient processing and analysis of high-dimensional data, have been successfully applied in many fields. However, the performance of traditional tensor networks still cannot match the strong fitting ability of neural networks, so some data processing algorithms based on tensor networks cannot achieve the same excellent performance as deep learning models. To further improve the learning ability of tensor networks, we propose a quantized tensor neural network (QTNN) in this article, which integrates the advantages of neural networks and tens
APA, Harvard, Vancouver, ISO, and other styles
7

Ma, Hongli, Fang Xie, Tao Chen, Lei Liang, and Jie Lu. "Image recognition algorithms based on deep learning." Journal of Physics: Conference Series 2137, no. 1 (2021): 012056. http://dx.doi.org/10.1088/1742-6596/2137/1/012056.

Full text
Abstract:
Convolutional neural networks are a very important research direction in deep learning technology. In view of the current development of convolutional networks, this paper surveys convolutional neural networks. Firstly, it reviews the development process of the convolutional neural network; then it introduces the structure of convolutional neural networks and some typical convolutional networks. Finally, several examples of the application of deep learning are introduced.
APA, Harvard, Vancouver, ISO, and other styles
8

Javed, Abbas, Hadi Larijani, Ali Ahmadinia, and Rohinton Emmanuel. "RANDOM NEURAL NETWORK LEARNING HEURISTICS." Probability in the Engineering and Informational Sciences 31, no. 4 (2017): 436–56. http://dx.doi.org/10.1017/s0269964817000201.

Full text
Abstract:
The random neural network (RNN) is a probabilistic queueing theory-based model for artificial neural networks, and it requires the use of optimization algorithms for training. Commonly used gradient descent learning algorithms may get stuck in local minima; evolutionary algorithms can be used to avoid local minima. Other techniques such as artificial bee colony (ABC), particle swarm optimization (PSO), and differential evolution algorithms also perform well in finding the global minimum, but they converge slowly. The sequential quadratic programming (SQP) optimization algorithm can find the o
APA, Harvard, Vancouver, ISO, and other styles
9

Peretto, Pierre. "LEARNING LEARNING SETS IN NEURAL NETWORKS." International Journal of Neural Systems 01, no. 01 (1989): 31–40. http://dx.doi.org/10.1142/s0129065789000438.

Full text
Abstract:
Learning sets are experiments in which animals use their past practical experience to improve their behaviors, in particular to yield convenient responses to unknown situations. We propose a neural network architecture which reproduces this generalization process. The model rests on three main ideas: the same motor coding networks are used as input networks in the learning stage and as output networks in the retrieving phase; the core of the system is made up of a number of randomly generated feedforward pathways; a simple Hebbian learning rule selects among the pathways those which fit
APA, Harvard, Vancouver, ISO, and other styles
10

Banzi, Jamal, Isack Bulugu, and Zhongfu Ye. "Deep Predictive Neural Network: Unsupervised Learning for Hand Pose Estimation." International Journal of Machine Learning and Computing 9, no. 4 (2019): 432–39. http://dx.doi.org/10.18178/ijmlc.2019.9.4.822.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Yin, Chun Hua, Jia Wei Chen, and Lei Chen. "Weight to Vision Neural Network Information Processing Influence Research." Advanced Materials Research 605-607 (December 2012): 2131–36. http://dx.doi.org/10.4028/www.scientific.net/amr.605-607.2131.

Full text
Abstract:
Many factors influence the vision neural network information processing process, for example: signal initial value, weight, time, and number of learning iterations. This paper discusses the importance of weight in the vision neural network information processing process. Different weight values can cause different results in neural network learning. We first structure a three-layer vision neural network model based on synapse dynamics. Then we change the weights of the vision neural network model to make the three layers a neural network for learning Chinese characters. At last we change the initial we
APA, Harvard, Vancouver, ISO, and other styles
12

Konečný, V., O. Trenz, and E. Svobodová. "Classification of companies with the assistance of self-learning neural networks." Agricultural Economics (Zemědělská ekonomika) 56, no. 2 (2010): 51–58. http://dx.doi.org/10.17221/60/2009-agricecon.

Full text
Abstract:
The article is focused on rating classification of the financial situation of enterprises using self-learning artificial neural networks. This is a situation where the sets of objects of the particular classes are not well known. Otherwise, it would be possible to use a multi-layer neural network with learning from models. The advantage of a self-learning network is particularly that its classification is not burdened by a subjective view. Given the complexity, this sorting into groups may be very difficult even for experienced experts. The article also comprises the
APA, Harvard, Vancouver, ISO, and other styles
13

Peng, Yun, and Zonglin Zhou. "A neural network learning method for belief networks." International Journal of Intelligent Systems 11, no. 11 (1996): 893–915. http://dx.doi.org/10.1002/(sici)1098-111x(199611)11:11<893::aid-int3>3.0.co;2-u.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Ramprasad, Nagesh. "Understanding Neural Networks for Machine Learning using Microsoft Neural Network Algorithm." International Journal of Computer Applications 150, no. 3 (2016): 32–38. http://dx.doi.org/10.5120/ijca2016911481.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

MATSUOKA, Kiyotoshi. "Learning models of neural network." Hikaku seiri seikagaku (Comparative Physiology and Biochemistry) 12, no. 4 (1995): 390–97. http://dx.doi.org/10.3330/hikakuseiriseika.12.390.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Sarigül, Mehmet, and Mutlu Avci. "Q LEARNING REGRESSION NEURAL NETWORK." Neural Network World 28, no. 5 (2018): 415–31. http://dx.doi.org/10.14311/nnw.2018.28.023.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Peterfreund, N., and A. Guez. "Structure-based neural network learning." IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications 44, no. 12 (1997): 1143–49. http://dx.doi.org/10.1109/81.645155.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Abbott, L. F. "Learning in neural network memories." Network: Computation in Neural Systems 1, no. 1 (1990): 105–22. http://dx.doi.org/10.1088/0954-898x_1_1_008.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Wilamowski, B. M., and Hao Yu. "Neural Network Learning Without Backpropagation." IEEE Transactions on Neural Networks 21, no. 11 (2010): 1793–803. http://dx.doi.org/10.1109/tnn.2010.2073482.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Ng, Geok See, D. Shi, A. Wahab, and H. Singh. "Entropy Learning in Neural Network." ASEAN Journal on Science and Technology for Development 20, no. 3&4 (2017): 307–22. http://dx.doi.org/10.29037/ajstd.362.

Full text
Abstract:
In this paper, an entropy term is used in the learning phase of a neural network. As learning progresses, more hidden nodes go into saturation. The early creation of such hidden nodes may impair generalisation. Hence an entropy approach is proposed to dampen the early creation of such nodes. Entropy learning also helps to increase the importance of relevant nodes while dampening the less important ones. At the end of learning, the less important nodes can then be eliminated to reduce the memory requirements of the neural network.
APA, Harvard, Vancouver, ISO, and other styles
21

Watkin, T. L. H. "On optimal neural network learning." Physica A: Statistical Mechanics and its Applications 200, no. 1-4 (1993): 628–35. http://dx.doi.org/10.1016/0378-4371(93)90569-p.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Afridi, Muhammad Ishaq. "Cognition in a Cognitive Routing System for Mobile Ad-Hoc Network through Learning Automata and Neural Network." Applied Mechanics and Materials 421 (September 2013): 694–700. http://dx.doi.org/10.4028/www.scientific.net/amm.421.694.

Full text
Abstract:
A cognitive routing system intelligently selects one protocol at a time for specific routing conditions and environment in MANET. Cognition or self-learning can be achieved in a cognitive routing system for mobile ad-hoc network (MANET) through a learning system like learning automata or neural networks. This article covers the application of learning automata and neural network to achieve cognition in MANET routing system. Mobile Ad-hoc networks are dynamic in nature and lack any fixed infrastructure, so the implementation of cognition enhances the performance of overall routing system in the
APA, Harvard, Vancouver, ISO, and other styles
23

Sun, Zengguo, Guodong Zhao, Rafał Scherer, Wei Wei, and Marcin Woźniak. "Overview of Capsule Neural Networks." Journal of Internet Technology 23, no. 1 (2022): 33–44. http://dx.doi.org/10.53106/160792642022012301004.

Full text
Abstract:
As a vector transmission network structure, the capsule neural network has been one of the research hotspots in deep learning since it was proposed in 2017. In this paper, the latest research progress of capsule networks is analyzed and summarized. Firstly, we summarize the shortcomings of convolutional neural networks and introduce the basic concept of the capsule network. Secondly, we analyze and summarize the improvements in the dynamic routing mechanism and network structure of the capsule network in recent years and the combination of the capsule network with other network structures
APA, Harvard, Vancouver, ISO, and other styles
24

Nizami Huseyn, Elcin. "APPLICATION OF DEEP LEARNING TECHNOLOGY IN DISEASE DIAGNOSIS." NATURE AND SCIENCE 04, no. 05 (2020): 4–11. http://dx.doi.org/10.36719/2707-1146/05/4-11.

Full text
Abstract:
The rapid development of deep learning technology provides new methods and ideas for assisting physicians in high-precision disease diagnosis. This article reviews the principles and features of deep learning models commonly used in medical disease diagnosis, namely convolutional neural networks, deep belief networks, restricted Boltzmann machines, and recurrent neural network models. Based on several typical diseases, the application of deep learning technology in the field of disease diagnosis is introduced; finally, the future development direction is proposed based on the limitations of cu
APA, Harvard, Vancouver, ISO, and other styles
25

Bashar, Dr Abul. "SURVEY ON EVOLVING DEEP LEARNING NEURAL NETWORK ARCHITECTURES." December 2019 2019, no. 2 (2019): 73–82. http://dx.doi.org/10.36548/jaicn.2019.2.003.

Full text
Abstract:
Deep learning, a subcategory of machine learning, follows the human instinct of learning by example to produce accurate results. Deep learning trains the computer framework to directly classify tasks from documents available in the form of text, images, or sound. Most often, deep learning utilizes neural networks to perform accurate classification; such networks are referred to as deep neural networks. One of the most common deep neural networks used in a broader range of applications is the convolutional neural network, which provides an automa
APA, Harvard, Vancouver, ISO, and other styles
26

Nitta, Tohru. "Learning Transformations with Complex-Valued Neurocomputing." International Journal of Organizational and Collective Intelligence 3, no. 2 (2012): 81–116. http://dx.doi.org/10.4018/joci.2012040103.

Full text
Abstract:
The ability of the 1-n-1 complex-valued neural network to learn 2D affine transformations has been applied to the estimation of optical flows and the generation of fractal images. The complex-valued neural network has adaptability and generalization ability as inherent properties. This is the main difference between the ability of the 1-n-1 complex-valued neural network to learn 2D affine transformations and standard techniques for 2D affine transformations such as the Fourier descriptor. It is important to clarify the properties of complex-valued neural networks in order to accel
APA, Harvard, Vancouver, ISO, and other styles
27

Popic, Jan, Borko Boskovic, and Janez Brest. "Deep Learning and the Game of Checkers." MENDEL 27, no. 2 (2021): 1–6. http://dx.doi.org/10.13164/mendel.2021.2.001.

Full text
Abstract:
In this paper we present an approach which, given only a set of rules, is able to learn to play the game of Checkers. We utilize neural networks and reinforcement learning combined with Monte Carlo Tree Search and alpha-beta pruning. Any human influence or knowledge is removed by generating the data needed for training the neural network using self-play. After a certain number of finished games, we initialize the training and transfer the better neural network version to the next iteration. We compare different obtained versions of neural networks and their progress in playing the game of Checkers. Every new ver
APA, Harvard, Vancouver, ISO, and other styles
28

Bodyansky, E. V., and T. E. Antonenko. "Deep neo-fuzzy neural network and its learning." Bionics of Intelligence 1, no. 92 (2019): 3–8. http://dx.doi.org/10.30837/bi.2019.1(92).01.

Full text
Abstract:
Optimizing the learning speed of deep neural networks is an extremely important issue. Modern approaches focus on the use of neural networks based on the Rosenblatt perceptron. But the results obtained are not satisfactory for industrial and scientific needs in the context of the speed of learning neural networks. Also, this approach stumbles upon the problems of a vanishing and exploding gradient. To solve the problem, the paper proposed using a neo-fuzzy neuron, whose properties are based on the F-transform. The article discusses the use of the neo-fuzzy neuron as the main component of the neural netw
APA, Harvard, Vancouver, ISO, and other styles
29

Sineglazov, V. M., and O. I. Chumachenko. "Structural-parametric synthesis of deep learning neural networks." Artificial Intelligence 25, no. 4 (2020): 42–51. http://dx.doi.org/10.15407/jai2020.04.042.

Full text
Abstract:
The structural-parametric synthesis of deep learning neural networks, in particular convolutional neural networks used in image processing, is considered. The classification of modern architectures of convolutional neural networks is given. It is shown that almost every convolutional neural network, depending on its topology, has unique blocks that determine its essential features (for example, the Squeeze-and-Excitation block, the Convolutional Block Attention Module (channel attention module, spatial attention module), the Residual block, the Inception module, and the ResNeXt block). The paper states the problem o
APA, Harvard, Vancouver, ISO, and other styles
30

Migalev, A. S., and P. M. Gotovtsev. "Modeling the Learning of a Spiking Neural Network with Synaptic Delays." Nelineinaya Dinamika 15, no. 3 (2019): 365–80. http://dx.doi.org/10.20537/nd190313.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Ito, Toshio. "Supervised Learning Methods of Bilinear Neural Network Systems Using Discrete Data." International Journal of Machine Learning and Computing 6, no. 5 (2016): 235–40. http://dx.doi.org/10.18178/ijmlc.2016.6.5.604.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Kumar, Vikash, Shivam Kumar Gupta, Harsh Sharma, Uchit Bhadauriya, and Chandra Prakash Varma. "Voice Isolation Using Artificial Neural Network." International Journal for Research in Applied Science and Engineering Technology 10, no. 5 (2022): 1249–53. http://dx.doi.org/10.22214/ijraset.2022.42237.

Full text
Abstract:
The paper reflects the use of Artificial Neural Networks with the help of various machine learning algorithms for voice isolation. In particular, we consider the case of voice sample recognition by analyzing speech signals with the help of machine learning techniques such as artificial neural networks, independent component analysis, and activation functions. The artificial neural network will analyze the given speech sample using strategies similar to those by which our central nervous system decodes stimuli. After the first step, a set of machine learning algorithms will be used like in
APA, Harvard, Vancouver, ISO, and other styles
33

Baldi, Pierre, and Fernando Pineda. "Contrastive Learning and Neural Oscillations." Neural Computation 3, no. 4 (1991): 526–45. http://dx.doi.org/10.1162/neco.1991.3.4.526.

Full text
Abstract:
The concept of Contrastive Learning (CL) is developed as a family of possible learning algorithms for neural networks. CL is an extension of Deterministic Boltzmann Machines to more general dynamical systems. During learning, the network oscillates between two phases. One phase has a teacher signal and one phase has no teacher signal. The weights are updated using a learning rule that corresponds to gradient descent on a contrast function that measures the discrepancy between the free network and the network with a teacher signal. The CL approach provides a general unified framework for develo
APA, Harvard, Vancouver, ISO, and other styles
34

Akil, Ibnu. "NEURAL NETWORK FUNDAMENTAL DAN IMPLEMENTASI DALAM PEMROGRAMAN." INTI Nusa Mandiri 14, no. 2 (2020): 189–94. http://dx.doi.org/10.33480/inti.v14i2.1179.

Full text
Abstract:
Nowadays machine learning and deep learning are becoming a trend in the world of information systems. They are actually part of the artificial intelligence domain. However, many people don't understand that machine learning and deep learning are built using neural networks. Therefore, in order to understand how machine learning and deep learning work, we must first understand the basic concept of the neural network. In this article, the writer describes the basic theory and math functions of a neural network, along with an example implementation in the Java programming language. The writer hop
APA, Harvard, Vancouver, ISO, and other styles
35

CRAVEN, MARK W., and JUDE W. SHAVLIK. "VISUALIZING LEARNING AND COMPUTATION IN ARTIFICIAL NEURAL NETWORKS." International Journal on Artificial Intelligence Tools 01, no. 03 (1992): 399–425. http://dx.doi.org/10.1142/s0218213092000260.

Full text
Abstract:
Scientific visualization is the process of using graphical images to form succinct and lucid representations of numerical data. Visualization has proven to be a useful method for understanding both learning and computation in artificial neural networks. While providing a powerful and general technique for inductive learning, artificial neural networks are difficult to comprehend because they form representations that are encoded by a large number of real-valued parameters. By viewing these parameters pictorially, a better understanding can be gained of how a network maps inputs into outputs. I
APA, Harvard, Vancouver, ISO, and other styles
36

Verma, Vikas, Meng Qu, Kenji Kawaguchi, et al. "GraphMix: Improved Training of GNNs for Semi-Supervised Learning." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 11 (2021): 10024–32. http://dx.doi.org/10.1609/aaai.v35i11.17203.

Full text
Abstract:
We present GraphMix, a regularization method for Graph Neural Network based semi-supervised object classification, whereby we propose to train a fully-connected network jointly with the graph neural network via parameter sharing and interpolation-based regularization. Further, we provide a theoretical analysis of how GraphMix improves the generalization bounds of the underlying graph neural network, without making any assumptions about the "aggregation" layer or the depth of the graph neural networks. We experimentally validate this analysis by applying GraphMix to various architectures such a
APA, Harvard, Vancouver, ISO, and other styles
37

Ho, Jiacang, and Dae-Ki Kang. "Brick Assembly Networks: An Effective Network for Incremental Learning Problems." Electronics 9, no. 11 (2020): 1929. http://dx.doi.org/10.3390/electronics9111929.

Full text
Abstract:
Deep neural networks have achieved high performance in image classification, image generation, voice recognition, natural language processing, etc.; however, they still have confronted several open challenges that need to be solved such as incremental learning problem, overfitting in neural networks, hyperparameter optimization, lack of flexibility and multitasking, etc. In this paper, we focus on the incremental learning problem which is related with machine learning methodologies that continuously train an existing model with additional knowledge. To the best of our knowledge, a simple and d
APA, Harvard, Vancouver, ISO, and other styles
38

Tsai, Chih-Fong, and Chihli Hung. "Modeling credit scoring using neural network ensembles." Kybernetes 43, no. 7 (2014): 1114–23. http://dx.doi.org/10.1108/k-01-2014-0016.

Full text
Abstract:
Purpose – Credit scoring is important for financial institutions in order to accurately predict the likelihood of business failure. Related studies have shown that machine learning techniques, such as neural networks, outperform many statistical approaches to solving this type of problem, and advanced machine learning techniques, such as classifier ensembles and hybrid classifiers, provide better prediction performance than single machine learning based classification techniques. However, it is not known which type of advanced classification technique performs better in terms of financial dist
APA, Harvard, Vancouver, ISO, and other styles
39

Tzougas, George, and Konstantin Kutzkov. "Enhancing Logistic Regression Using Neural Networks for Classification in Actuarial Learning." Algorithms 16, no. 2 (2023): 99. http://dx.doi.org/10.3390/a16020099.

Full text
Abstract:
We developed a methodology for the neural network boosting of logistic regression aimed at learning an additional model structure from the data. In particular, we constructed two classes of neural network-based models: shallow–dense neural networks with one hidden layer and deep neural networks with multiple hidden layers. Furthermore, several advanced approaches were explored, including the combined actuarial neural network approach, embeddings and transfer learning. The model training was achieved by minimizing either the deviance or the cross-entropy loss functions, leading to fourteen neur
APA, Harvard, Vancouver, ISO, and other styles
40

Matsumoto, Kazuma, Takato Tatsumi, Hiroyuki Sato, Tim Kovacs, and Keiki Takadama. "XCSR Learning from Compressed Data Acquired by Deep Neural Network." Journal of Advanced Computational Intelligence and Intelligent Informatics 21, no. 5 (2017): 856–67. http://dx.doi.org/10.20965/jaciii.2017.p0856.

Full text
Abstract:
The correctness rate of classification by neural networks is improved by deep learning, which is machine learning of neural networks, and its accuracy is higher than that of the human brain in some fields. This paper proposes a hybrid system of the neural network and the Learning Classifier System (LCS). LCS is evolutionary rule-based machine learning using reinforcement learning. To increase the correctness rate of classification, we combine the neural network and the LCS. This paper conducted benchmark experiments to verify the proposed system. The experiment revealed that: 1) the correctness rate
APA, Harvard, Vancouver, ISO, and other styles
41

Zhao, Jing, Zhao Lin Han, and Yuan Yuan Fang. "Fuzzy Neural Network Hybrid Learning Control on AUV." Advanced Materials Research 468-471 (February 2012): 1732–35. http://dx.doi.org/10.4028/www.scientific.net/amr.468-471.1732.

Full text
Abstract:
A novel controller based on the fuzzy B-spline neural network is presented, which combines the advantages of qualitative defining capability of fuzzy logic, quantitative learning ability of neural networks and excellent local controlling ability of B-spline basis functions, which are being used as fuzzy functions. A hybrid learning algorithm of the controller is proposed as well. The results show that it is feasible to design the fuzzy neural network control of autonomous underwater vehicle by the hybrid learning algorithm.
APA, Harvard, Vancouver, ISO, and other styles
42

Hindarto, Djarot, and Handri Santoso. "Performance Comparison of Supervised Learning Using Non-Neural Network and Neural Network." Jurnal Nasional Pendidikan Teknik Informatika (JANAPATI) 11, no. 1 (2022): 49. http://dx.doi.org/10.23887/janapati.v11i1.40768.

Full text
Abstract:
Currently, the development of mobile phones and mobile applications based on the Android operating system is increasing rapidly. Many new companies and startups are digitally transforming by using mobile apps to provide disruptive digital services to replace existing old-fashioned services. This transformation prompted attackers to create malicious software (malware) using sophisticated methods to target victims of Android phone users. The purpose of this study is to identify Android APK files by classifying them using Artificial Neural Network (ANN) and Non-Neural Network (NNN). ANN is a Mult
APA, Harvard, Vancouver, ISO, and other styles
43

Zambra, Matteo, Amos Maritan, and Alberto Testolin. "Emergence of Network Motifs in Deep Neural Networks." Entropy 22, no. 2 (2020): 204. http://dx.doi.org/10.3390/e22020204.

Full text
Abstract:
Network science can offer fundamental insights into the structural and functional properties of complex systems. For example, it is widely known that neuronal circuits tend to organize into basic functional topological modules, called network motifs. In this article, we show that network science tools can be successfully applied also to the study of artificial neural networks operating according to self-organizing (learning) principles. In particular, we study the emergence of network motifs in multi-layer perceptrons, whose initial connectivity is defined as a stack of fully-connected, bipart
APA, Harvard, Vancouver, ISO, and other styles
44

CHOI, Young-Seok. "Neuromorphic Learning: Deep Spiking Neural Network." Physics and High Technology 28, no. 4 (2019): 16–21. http://dx.doi.org/10.3938/phit.28.014.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Mardin, A. "Neural Network Learning and Expert Systems." AI Communications 7, no. 3-4 (1994): 238–40. http://dx.doi.org/10.3233/aic-1994-73-412.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Anguita, D., and A. Boni. "Improved neural network for SVM learning." IEEE Transactions on Neural Networks 13, no. 5 (2002): 1243–44. http://dx.doi.org/10.1109/tnn.2002.1031958.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Konečný, Vladimír, Anděla Matiášová, and Ivana Rábová. "Learning of N-layers neural network." Acta Universitatis Agriculturae et Silviculturae Mendelianae Brunensis 53, no. 6 (2005): 75–84. http://dx.doi.org/10.11118/actaun200553060075.

Full text
Abstract:
In the last decade we can observe an increasing number of applications based on Artificial Intelligence that are designed to solve problems from different areas of human activity. The reason there is so much interest in these technologies is that classical solutions either do not exist or are not suitable because they lack robustness. These technologies are often used in applications like Business Intelligence that make it possible to obtain useful information for high-quality decision-making and to increase competitive advantage. One of the most widespread tools for the Artificial Intelli
APA, Harvard, Vancouver, ISO, and other styles
48

Abbott, L. F., and T. B. Kepler. "Optimal learning in neural network memories." Journal of Physics A: Mathematical and General 22, no. 14 (1989): L711–L717. http://dx.doi.org/10.1088/0305-4470/22/14/011.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Burgess, Neil, J. L. Shapiro, and M. A. Moore. "Neural network models of list learning." Network: Computation in Neural Systems 2, no. 4 (1991): 399–422. http://dx.doi.org/10.1088/0954-898x_2_4_005.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Broadbent, H. A., and J. Lucas. "Neural network model of serial learning." ACM SIGAPL APL Quote Quad 19, no. 4 (1989): 54–61. http://dx.doi.org/10.1145/75145.75152.

Full text
APA, Harvard, Vancouver, ISO, and other styles