Academic literature on the topic 'Neural network learning'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Neural network learning.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Neural network learning"

1

Jiang, Yiming, Chenguang Yang, Shi-lu Dai, and Beibei Ren. "Deterministic learning enhanced neural network control of unmanned helicopter." International Journal of Advanced Robotic Systems 13, no. 6 (2016): 172988141667111. http://dx.doi.org/10.1177/1729881416671118.

Full text
Abstract:
In this article, a neural network–based tracking controller is developed for an unmanned helicopter system with guaranteed global stability in the presence of uncertain system dynamics. Due to the coupling and modeling uncertainties of the helicopter systems, neural network approximation techniques are employed to compensate the unknown dynamics of each subsystem. In order to extend the semiglobal stability achieved by conventional neural control to global stability, a switching mechanism is also integrated into the control design, such that the resulted neural controller is always valid wit
2

Mahat, Norpah, Nor Idayunie Nording, Jasmani Bidin, Suzanawati Abu Hasan, and Teoh Yeong Kin. "Artificial Neural Network (ANN) to Predict Mathematics Students’ Performance." Journal of Computing Research and Innovation 7, no. 1 (2022): 29–38. http://dx.doi.org/10.24191/jcrinn.v7i1.264.

Full text
Abstract:
Predicting students’ academic performance is very essential to produce high-quality students. The main goal is to continuously help students to increase their ability in the learning process and to help educators as well in improving their teaching skills. Therefore, this study was conducted to predict mathematics students’ performance using Artificial Neural Network (ANN). The secondary data from 382 mathematics students from UCI Machine Learning Repository Data Sets used to train the neural networks. The neural network model built using nntool. Two inputs are used which are the first and the
3

Lin, Shaobo, Jinshan Zeng, and Xiaoqin Zhang. "Constructive Neural Network Learning." IEEE Transactions on Cybernetics 49, no. 1 (2019): 221–32. http://dx.doi.org/10.1109/tcyb.2017.2771463.

Full text
4

Baba, Norio. "TD Learning with Neural Networks." Journal of Robotics and Mechatronics 10, no. 4 (1998): 289–94. http://dx.doi.org/10.20965/jrm.1998.p0289.

Full text
Abstract:
Temporal difference (TD) learning, proposed by Sutton in the late 1980s, is a very interesting prediction method that uses obtained predictions for future prediction. Applying this learning to neural networks helps improve prediction performance using neural networks, after certain problems are solved. Major problems are as follows: 1) Prediction Pt at time t is assumed to be scalar in Sutton's original paper, raising the problem of "what is the rule for updating the weight vector of the neural network if the neural network has multiple outputs?" 2) How do we derive individual components of gra
5

Hamdan, Baida Abdulredha. "Neural Network Principles and its Application." Webology 19, no. 1 (2022): 3955–70. http://dx.doi.org/10.14704/web/v19i1/web19261.

Full text
Abstract:
Neural networks which also known as artificial neural networks is generally a computing dependent technique that formed and designed to create a simulation to the real brain of a human to be used as a problem solving method. Artificial neural networks gain their abilities by the method of training or learning, each method have a certain input and output which called results too, this method of learning works to create forming probability-weighted associations among both of input and the result which stored and saved across the net specifically among its data structure, any training process is
6

Gao, Yuan, Laurence T. Yang, Dehua Zheng, Jing Yang, and Yaliang Zhao. "Quantized Tensor Neural Network." ACM/IMS Transactions on Data Science 2, no. 4 (2021): 1–18. http://dx.doi.org/10.1145/3491255.

Full text
Abstract:
Tensor network as an effective computing framework for efficient processing and analysis of high-dimensional data has been successfully applied in many fields. However, the performance of traditional tensor networks still cannot match the strong fitting ability of neural networks, so some data processing algorithms based on tensor networks cannot achieve the same excellent performance as deep learning models. To further improve the learning ability of tensor network, we propose a quantized tensor neural network in this article (QTNN), which integrates the advantages of neural networks and tens
7

Ma, Hongli, Fang Xie, Tao Chen, Lei Liang, and Jie Lu. "Image recognition algorithms based on deep learning." Journal of Physics: Conference Series 2137, no. 1 (2021): 012056. http://dx.doi.org/10.1088/1742-6596/2137/1/012056.

Full text
Abstract:
Abstract Convolutional neural network is a very important research direction in deep learning technology. According to the current development of convolutional network, in this paper, convolutional neural networks are induced. Firstly, this paper induces the development process of convolutional neural network; then it introduces the structure of convolutional neural network and some typical convolutional neural networks. Finally, several examples of the application of deep learning is introduced.
8

Javed, Abbas, Hadi Larijani, Ali Ahmadinia, and Rohinton Emmanuel. "RANDOM NEURAL NETWORK LEARNING HEURISTICS." Probability in the Engineering and Informational Sciences 31, no. 4 (2017): 436–56. http://dx.doi.org/10.1017/s0269964817000201.

Full text
Abstract:
The random neural network (RNN) is a probabilistic queueing theory-based model for artificial neural networks, and it requires the use of optimization algorithms for training. Commonly used gradient descent learning algorithms may reside in local minima; evolutionary algorithms can also be used to avoid local minima. Other techniques such as artificial bee colony (ABC), particle swarm optimization (PSO), and differential evolution algorithms also perform well in finding the global minimum but they converge slowly. The sequential quadratic programming (SQP) optimization algorithm can find the o
9

Peretto, Pierre. "LEARNING LEARNING SETS IN NEURAL NETWORKS." International Journal of Neural Systems 01, no. 01 (1989): 31–40. http://dx.doi.org/10.1142/s0129065789000438.

Full text
Abstract:
Learning sets are experiments in which animals use their past practical experience to improve their behaviors, in particular to yield convenient responses to unknown situations. We propose a neural network architecture which reproduces this generalization process. The model rests on three main ideas: — the same motor coding networks are used as input networks in the learning stage and as output networks in the retrieving phase; — the core of the system is made up of a number of randomly generated feedforward pathways, — a simple Hebbian learning rule selects among the pathways those which fit
10

Banzi, Jamal, Isack Bulugu, and Zhongfu Ye. "Deep Predictive Neural Network: Unsupervised Learning for Hand Pose Estimation." International Journal of Machine Learning and Computing 9, no. 4 (2019): 432–39. http://dx.doi.org/10.18178/ijmlc.2019.9.4.822.

Full text

Dissertations / Theses on the topic "Neural network learning"

1

Xu, Shuxiang. "Neuron-adaptive neural network models and applications." Thesis, University of Western Sydney, Faculty of Informatics, Science and Technology, 1999. http://handle.uws.edu.au:8081/1959.7/275.

Full text
Abstract:
Artificial Neural Networks have been widely probed by worldwide researchers to cope with the problems such as function approximation and data simulation. This thesis deals with Feed-forward Neural Networks (FNN's) with a new neuron activation function called Neuron-adaptive Activation Function (NAF), and Feed-forward Higher Order Neural Networks (HONN's) with this new neuron activation function. We have designed a new neural network model, the Neuron-Adaptive Neural Network (NANN), and mathematically proved that one NANN can approximate any piecewise continuous function to any desired accuracy
2

Rimer, Michael Edwin. "Improving Neural Network Classification Training." Diss., 2007. http://contentdm.lib.byu.edu/ETD/image/etd2094.pdf.

Full text
3

Kan, Wing Kay. "A probabilistic neural network for associative learning." Thesis, Imperial College London, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.283809.

Full text
4

O'Hara, Tobias Anthony Marett. "Learning classifier systems with neural network representation." Thesis, University of the West of England, Bristol, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.436911.

Full text
5

Keisala, Simon. "Designing an Artificial Neural Network for state evaluation in Arimaa : Using a Convolutional Neural Network." Thesis, Linköpings universitet, Artificiell intelligens och integrerade datorsystem, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-143188.

Full text
Abstract:
Agents being able to play board games such as Tic Tac Toe, Chess, Go and Arimaa has been, and still is, a major difficulty in Artificial Intelligence. For the mentioned board games, there is a certain amount of legal moves a player can do in a specific board state. Tic Tac Toe have in average around 4-5 legal moves, with a total amount of 255168 possible games. Both Chess, Go and Arimaa have an increased amount of possible legal moves to do, and an almost infinite amount of possible games, making it impossible to have complete knowledge of the outcome. This thesis work have created various Neu
6

Suárez-Varela, Macià José Rafael. "Enabling knowledge-defined networks : deep reinforcement learning, graph neural networks and network analytics." Doctoral thesis, Universitat Politècnica de Catalunya, 2020. http://hdl.handle.net/10803/669212.

Full text
Abstract:
Significant breakthroughs in the last decade in the Machine Learning (ML) field have ushered in a new era of Artificial Intelligence (AI). Particularly, recent advances in Deep Learning (DL) have enabled to develop a new breed of modeling and optimization tools with a plethora of applications in different fields like natural language processing, or computer vision. In this context, the Knowledge-Defined Networking (KDN) paradigm highlights the lack of adoption of AI techniques in computer networks and – as a result – proposes a novel architecture that relies on Software-Defined Networking (
7

Sarpangala, Kishan. "Semantic Segmentation Using Deep Learning Neural Architectures." University of Cincinnati / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ucin157106185092304.

Full text
8

Kabore, Raogo. "Hybrid deep neural network anomaly detection system for SCADA networks." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2020. http://www.theses.fr/2020IMTA0190.

Full text
Abstract:
SCADA systems are increasingly targeted by cyberattacks because of numerous vulnerabilities in their hardware, software, protocols, and communication stacks. These systems now use standard hardware, software, operating systems, and protocols. Moreover, SCADA systems that were once isolated are now interconnected with corporate networks and the Internet, widening the attack surface. In this thesis, we use a deep learning approach to propose an efficient hybrid deep neural network for
9

Mosher, Stephen Glenn. "Neural Network Applications in Seismology." Thesis, Université d'Ottawa / University of Ottawa, 2021. http://hdl.handle.net/10393/42329.

Full text
Abstract:
Neural networks are extremely versatile tools, as evidenced by their widespread adoption into many fields in the sciences and beyond, including the geosciences. In seismology neural networks have been primarily used to automatically detect and discriminate seismic signals within time-series data, as well as provide location estimates for their sources. However, as neural network research has significantly progressed over the past three decades, so too have its applications in seismology. Such applications now include earthquake early warning systems based on smartphone data collected from large
10

Batbayar, Batsukh. "Improving Time Efficiency of Feedforward Neural Network Learning." RMIT University. Electrical and Computer Engineering, 2009. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20090303.114706.

Full text
Abstract:
Feedforward neural networks have been widely studied and used in many applications in science and engineering. The training of this type of networks is mainly undertaken using the well-known backpropagation based learning algorithms. One major problem with this type of algorithms is the slow training convergence speed, which hinders their applications. In order to improve the training convergence speed of this type of algorithms, many researchers have developed different improvements and enhancements. However, the slow convergence problem has not been fully addressed. This thesis makes seve

Books on the topic "Neural network learning"

1

Thrun, Sebastian. Explanation-Based Neural Network Learning. Springer US, 1996. http://dx.doi.org/10.1007/978-1-4613-1381-6.

Full text
2

Modular learning in neural networks: A modularized approach to neural network classification. Wiley, 1992.

Find full text
3

Gallant, Stephen I. Neural network learning and expert systems. MIT Press, 1993.

Find full text
4

Gridin, Ivan. Automated Deep Learning Using Neural Network Intelligence. Apress, 2022. http://dx.doi.org/10.1007/978-1-4842-8149-9.

Full text
5

Thrun, Sebastian. Explanation-based neural network learning: A lifelong learning approach. Kluwer Academic Publishers, 1996.

Find full text
6

Thrun, Sebastian. Explanation-Based Neural Network Learning: A Lifelong Learning Approach. Springer US, 1996.

Find full text
7

Schmajuk, Nestor A. Animal learning and cognition: A neural network approach. Cambridge University Press, 1997.

Find full text
8

Neural network design and the complexity of learning. MIT Press, 1990.

Find full text
9

Radi, Amr Mohamed. Discovery of neural network learning rules using genetic programming. University of Birmingham, 2000.

Find full text
10

Choo, Kiam. Learning hyperparameters for neural network models using Hamiltonian dynamics. National Library of Canada, 2000.

Find full text

Book chapters on the topic "Neural network learning"

1

Kim, Phil. "Neural Network." In MATLAB Deep Learning. Apress, 2017. http://dx.doi.org/10.1007/978-1-4842-2845-6_2.

Full text
2

El-Amir, Hisham, and Mahmoud Hamdy. "Convolutional Neural Network." In Deep Learning Pipeline. Apress, 2019. http://dx.doi.org/10.1007/978-1-4842-5349-6_11.

Full text
3

Kim, Phil. "Convolutional Neural Network." In MATLAB Deep Learning. Apress, 2017. http://dx.doi.org/10.1007/978-1-4842-2845-6_6.

Full text
4

Sejnowski, Terrence J. "Neural Network Learning Algorithms." In Neural Computers. Springer Berlin Heidelberg, 1989. http://dx.doi.org/10.1007/978-3-642-83740-1_31.

Full text
5

Tsai, Kao-Tai. "Neural Network." In Machine Learning for Knowledge Discovery with R. Chapman and Hall/CRC, 2021. http://dx.doi.org/10.1201/9781003205685-7.

Full text
6

Lee, Taesam, Vijay P. Singh, and Kyung Hwa Cho. "Neural Network." In Deep Learning for Hydrometeorology and Environmental Science. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-64777-3_4.

Full text
7

Ayyadevara, V. Kishore. "Recurrent Neural Network." In Pro Machine Learning Algorithms. Apress, 2018. http://dx.doi.org/10.1007/978-1-4842-3564-5_10.

Full text
8

Ayyadevara, V. Kishore. "Artificial Neural Network." In Pro Machine Learning Algorithms. Apress, 2018. http://dx.doi.org/10.1007/978-1-4842-3564-5_7.

Full text
9

Ayyadevara, V. Kishore. "Convolutional Neural Network." In Pro Machine Learning Algorithms. Apress, 2018. http://dx.doi.org/10.1007/978-1-4842-3564-5_9.

Full text
10

Webb, Geoffrey I., Eamonn Keogh, Risto Miikkulainen, Risto Miikkulainen, and Michele Sebag. "Neural Network Architecture." In Encyclopedia of Machine Learning. Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-30164-8_587.

Full text

Conference papers on the topic "Neural network learning"

1

Zhan, Tiffany. "Hyper-Parameter Tuning in Deep Neural Network Learning." In 8th International Conference on Artificial Intelligence and Applications (AI 2022). Academy and Industry Research Collaboration Center (AIRCC), 2022. http://dx.doi.org/10.5121/csit.2022.121809.

Full text
Abstract:
Deep learning has been increasingly used in various applications such as image and video recognition, recommender systems, image classification, image segmentation, medical image analysis, natural language processing, brain–computer interfaces, and financial time series. In deep learning, a convolutional neural network (CNN) is regularized versions of multilayer perceptrons. Multilayer perceptrons usually mean fully connected networks, that is, each neuron in one layer is connected to all neurons in the next layer. The full connectivity of these networks makes them prone to overfitting data. T
2

Yu, Francis T. S., Taiwei Lu, and Don A. Gregory. "Self-Learning Optical Neural Network." In Spatial Light Modulators and Applications. Optica Publishing Group, 1990. http://dx.doi.org/10.1364/slma.1990.mb4.

Full text
Abstract:
One of the features in neural computing must be the adaptability to changeable environment and to recognize unknown objects. In general, there are two types of learning processes that are used in the human brain; supervised and unsupervised learnings [1]. In a supervised learning process, the artificial neural network has to be taught when to learn and when to process the information. Nevertheless, if an unknown object is presented to the artificial neural network during the processing, the network may provide an error output result. On the other hand, for unsupervised learning (also called se
3

Tang, Houcheng, and Leila Notash. "Neural Network Based Transfer Learning of Manipulator Inverse Displacement Analysis." In ASME 2020 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/detc2020-22508.

Full text
Abstract:
Abstract In this paper, a neural network based transfer learning approach of inverse displacement analysis of robot manipulators is studied. Neural networks with different structures are applied utilizing data from different configurations of a manipulator for training purposes. Then the transfer learning was conducted between manipulators with different geometric layouts. The training is performed on both the neural networks with pretrained initial parameters and the neural networks with random initialization. To investigate the rate of convergence of data fitting comprehensively, different v
4

Chen, Huiyuan, and Jing Li. "Learning Data-Driven Drug-Target-Disease Interaction via Neural Tensor Network." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/477.

Full text
Abstract:
Precise medicine recommendations provide more effective treatments and cause fewer drug side effects. A key step is to understand the mechanistic relationships among drugs, targets, and diseases. Tensor-based models have the ability to explore relationships of drug-target-disease based on large amount of labeled data. However, existing tensor models fail to capture complex nonlinear dependencies among tensor data. In addition, rich medical knowledge are far less studied, which may lead to unsatisfied results. Here we propose a Neural Tensor Network (NeurTN) to assist personalized medicine trea
5

Zheng, Shengjie, Lang Qian, Pingsheng Li, Chenggang He, Xiaoqi Qin, and Xiaojian Li. "An Introductory Review of Spiking Neural Network and Artificial Neural Network: From Biological Intelligence to Artificial Intelligence." In 8th International Conference on Artificial Intelligence (ARIN 2022). Academy and Industry Research Collaboration Center (AIRCC), 2022. http://dx.doi.org/10.5121/csit.2022.121010.

Full text
Abstract:
Stemming from the rapid development of artificial intelligence, which has gained expansive success in pattern recognition, robotics, and bioinformatics, neuroscience is also gaining tremendous progress. A kind of spiking neural network with biological interpretability is gradually receiving wide attention, and this kind of neural network is also regarded as one of the directions toward general artificial intelligence. This review summarizes the basic properties of artificial neural networks as well as spiking neural networks. Our focus is on the biological background and theoretical basis of s
6

Kumar, Manish, and Devendra P. Garg. "Neural Network Based Intelligent Learning of Fuzzy Logic Controller Parameters." In ASME 2004 International Mechanical Engineering Congress and Exposition. ASMEDC, 2004. http://dx.doi.org/10.1115/imece2004-59589.

Full text
Abstract:
Design of an efficient fuzzy logic controller involves the optimization of parameters of fuzzy sets and proper choice of rule base. There are several techniques reported in recent literature that use neural network architecture and genetic algorithms to learn and optimize a fuzzy logic controller. This paper presents methodologies to learn and optimize fuzzy logic controller parameters that use learning capabilities of neural network. Concepts of model predictive control (MPC) have been used to obtain optimal signal to train the neural network via backpropagation. The strategies developed have
7

Bian, Shaoping, Kebin Xu, and Jing Hong. "Near neighbor neurons interconnected neural network." In OSA Annual Meeting. Optica Publishing Group, 1989. http://dx.doi.org/10.1364/oam.1989.tht27.

Full text
Abstract:
When the Hopfield neural network is extended to deal with a 2-D image composed of N×N pixels, the weight interconnection is a fourth-rank tensor with N⁴ elements. Each neuron is interconnected with all other neurons of the network. For an image, N will be large. So N⁴, the number of elements of the interconnection tensor, will be so large as to make the neural network's learning time (which corresponds to the precalculation of the interconnection tensor elements) too long. It is also difficult to implement the 2-D Hopfield neural network optically.
8

Zhang, Zhen, Hongxia Yang, Jiajun Bu, et al. "ANRL: Attributed Network Representation Learning via Deep Neural Networks." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/438.

Full text
Abstract:
Network representation learning (RL) aims to transform the nodes in a network into low-dimensional vector spaces while preserving the inherent properties of the network. Though network RL has been intensively studied, most existing works focus on either network structure or node attribute information. In this paper, we propose a novel framework, named ANRL, to incorporate both the network structure and node attribute information in a principled way. Specifically, we propose a neighbor enhancement autoencoder to model the node attribute information, which reconstructs its target neighbors inste
9

Huynh, Alex V., John F. Walkup, and Thomas F. Krile. "Optical quadratic perceptron neural network." In OSA Annual Meeting. Optica Publishing Group, 1990. http://dx.doi.org/10.1364/oam.1990.thy35.

Full text
Abstract:
Optical quadratic neural networks are currently being investigated because of their advantages with respect to linear neural networks.1 A quadratic neuron has previously been implemented by using a photorefractive barium titanate crystal.2 This approach has been improved and enhanced to realize a neural network that implements the perceptron learning algorithm. The input matrix, which is an encoded version of the input vector, is placed on a mask, and the interconnection matrix is computer-generated on a monochrome liquid-crystal television. By performing the four-wave mixing operation, the ba
10

Ramakrishnan, Nipun, and Tarun Soni. "Network Traffic Prediction Using Recurrent Neural Networks." In 2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA). IEEE, 2018. http://dx.doi.org/10.1109/icmla.2018.00035.

Full text

Reports on the topic "Neural network learning"

1

Matteucci, Matteo. ELeaRNT: Evolutionary Learning of Rich Neural Network Topologies. Defense Technical Information Center, 2006. http://dx.doi.org/10.21236/ada456062.

Full text
2

Markova, Oksana, Serhiy Semerikov, and Maiia Popel. CoCalc as a Learning Tool for Neural Network Simulation in the Special Course “Foundations of Mathematic Informatics”. Sun SITE Central Europe, 2018. http://dx.doi.org/10.31812/0564/2250.

Full text
Abstract:
The role of neural network modeling in the learning content of special course “Foundations of Mathematic Informatics” was discussed. The course was developed for the students of technical universities – future IT-specialists and directed to breaking the gap between theoretic computer science and its applied applications: software, system and computing engineering. CoCalc was justified as a learning tool of mathematical informatics in general and neural network modeling in particular. The elements of technique of using CoCalc at studying topic “Neural network and pattern recognition” of the sp
3

Thompson, Richard F. A Biological Neural Network Analysis of Learning and Memory. Defense Technical Information Center, 1991. http://dx.doi.org/10.21236/ada241837.

Full text
4

Vassilev, Apostol. BowTie – A deep learning feedforward neural network for sentiment analysis. National Institute of Standards and Technology, 2019. http://dx.doi.org/10.6028/nist.cswp.04222019.

Full text
5

Vassilev, Apostol. BowTie – A deep learning feedforward neural network for sentiment analysis. National Institute of Standards and Technology, 2019. http://dx.doi.org/10.6028/nist.cswp.8.

Full text
6

Kirichek, Galina, Vladyslav Harkusha, Artur Timenko, and Nataliia Kulykovska. System for detecting network anomalies using a hybrid of an uncontrolled and controlled neural network. [n.p.], 2020. http://dx.doi.org/10.31812/123456789/3743.

Full text
Abstract:
In this article realization method of attacks and anomalies detection with the use of training of ordinary and attacking packages, respectively. The method that was used to teach an attack on is a combination of an uncontrollable and controlled neural network. In an uncontrolled network, attacks are classified in smaller categories, taking into account their features and using the self- organized map. To manage clusters, a neural network based on back-propagation method used. We use PyBrain as the main framework for designing, developing and learning perceptron data. This framework has a suffi
7

Thompson, Richard F. A Biological Neural Network Analysis of Learning and Memory: The Cerebellum and Sensory Motor Conditioning. Defense Technical Information Center, 1995. http://dx.doi.org/10.21236/ada304568.

Full text
8

Garcia, Gabe V. Eddy Current Signature Classification of Steam Generator Tube Defects Using a Learning Vector Quantization Neural Network. Office of Scientific and Technical Information (OSTI), 2005. http://dx.doi.org/10.2172/836575.

Full text
9

Grossberg, Stephen. Neural Network Models of Vector Coding, Learning, and Trajectory Formation During Planned and Reactive Arm and Eye Movements. Defense Technical Information Center, 1989. http://dx.doi.org/10.21236/ada206737.

Full text
10

Tayeb, Shahab. Taming the Data in the Internet of Vehicles. Mineta Transportation Institute, 2022. http://dx.doi.org/10.31979/mti.2022.2014.

Full text
Abstract:
As an emerging field, the Internet of Vehicles (IoV) has a myriad of security vulnerabilities that must be addressed to protect system integrity. To stay ahead of novel attacks, cybersecurity professionals are developing new software and systems using machine learning techniques. Neural network architectures improve such systems, including Intrusion Detection System (IDSs), by implementing anomaly detection, which differentiates benign data packets from malicious ones. For an IDS to best predict anomalies, the model is trained on data that is typically pre-processed through normalization and f