Academic literature on the topic 'Artificial neural networks; Learning algorithms'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Artificial neural networks; Learning algorithms.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Artificial neural networks; Learning algorithms"

1

Gumus, Fatma, and Derya Yiltas-Kaplan. "Congestion Prediction System With Artificial Neural Networks." International Journal of Interdisciplinary Telecommunications and Networking 12, no. 3 (July 2020): 28–43. http://dx.doi.org/10.4018/ijitn.2020070103.

Abstract:
Software Defined Network (SDN) is a programmable network architecture that provides innovative solutions to the problems of traditional networks. Congestion control is still an uncharted territory for this technology. In this work, a congestion prediction scheme has been developed by using neural networks. The Minimum Redundancy Maximum Relevance (mRMR) feature selection algorithm was performed on the data collected from the OMNET++ simulation. The novelty of this study also covers the implementation of mRMR in an SDN congestion prediction problem. After evaluating the relevance scores, the two highest-ranking features were used. In the learning stage, Nonlinear Autoregressive Exogenous Neural Network (NARX), Nonlinear Autoregressive Neural Network, and Nonlinear Feedforward Neural Network algorithms were executed. To the best of the authors' knowledge, these algorithms had not been used in SDNs before. The experiments showed that NARX was the best prediction algorithm. This machine learning approach can be easily integrated into different topologies and application areas.
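
To make the feature-selection step concrete, here is a minimal, hypothetical sketch of greedy mRMR-style ranking in Python. It is not the paper's code: absolute Pearson correlation stands in for mutual information, and the data are synthetic.

```python
import numpy as np

def mrmr_rank(X, y, k=2):
    """Greedy mRMR-style ranking: pick features with high relevance to y
    and low redundancy with already-selected features. Absolute Pearson
    correlation stands in for mutual information in this sketch."""
    n_features = X.shape[1]
    def corr(a, b):
        return abs(np.corrcoef(a, b)[0, 1])
    relevance = np.array([corr(X[:, j], y) for j in range(n_features)])
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best_j, best_score = None, -np.inf
        for j in range(n_features):
            if j in selected:
                continue
            redundancy = np.mean([corr(X[:, j], X[:, s]) for s in selected])
            score = relevance[j] - redundancy   # relevance minus redundancy
            if score > best_score:
                best_j, best_score = j, score
        selected.append(best_j)
    return selected

# Toy usage: rank 5 synthetic "traffic statistics" against a congestion signal.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 0.8 * X[:, 2] + 0.5 * X[:, 0] + 0.1 * rng.normal(size=200)
print(mrmr_rank(X, y, k=2))   # expected to pick features 2 and 0
```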
2

Javed, Abbas, Hadi Larijani, Ali Ahmadinia, and Rohinton Emmanuel. "RANDOM NEURAL NETWORK LEARNING HEURISTICS." Probability in the Engineering and Informational Sciences 31, no. 4 (May 22, 2017): 436–56. http://dx.doi.org/10.1017/s0269964817000201.

Abstract:
The random neural network (RNN) is a probabilistic queueing-theory-based model for artificial neural networks, and it requires the use of optimization algorithms for training. Commonly used gradient descent learning algorithms may become stuck in local minima; evolutionary algorithms can also be used to avoid local minima. Other techniques such as artificial bee colony (ABC), particle swarm optimization (PSO), and differential evolution algorithms also perform well in finding the global minimum, but they converge slowly. The sequential quadratic programming (SQP) optimization algorithm can find the optimum neural network weights, but can also get stuck in local minima. We propose to overcome the shortcomings of these various approaches by using hybridized ABC/PSO and SQP. The resulting algorithm is shown to compare favorably with other known techniques for training the RNN. The results show that hybrid ABC learning with SQP outperforms other training algorithms in terms of mean-squared error and normalized root-mean-squared error.
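
The two-stage idea in this abstract, a population-based global search followed by SQP refinement, can be sketched as a toy. This is not the authors' implementation: a random population stands in for ABC/PSO, a single sigmoid neuron stands in for the RNN, and SciPy's SLSQP solver plays the role of SQP.

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for network training: fit weights w of a single sigmoid
# neuron to data, minimizing mean-squared error.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
w_true = np.array([1.5, -2.0, 0.7])
y = 1.0 / (1.0 + np.exp(-X @ w_true))

def mse(w):
    pred = 1.0 / (1.0 + np.exp(-X @ w))
    return np.mean((pred - y) ** 2)

# Stage 1: crude population-based global search (stand-in for ABC/PSO).
population = rng.normal(scale=3.0, size=(50, 3))
w0 = min(population, key=mse)

# Stage 2: local SQP-style refinement from the best candidate (SciPy's SLSQP).
result = minimize(mse, w0, method="SLSQP")
print(result.x, result.fun)
```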
3

Başeski, Emre. "Heliport Detection Using Artificial Neural Networks." Photogrammetric Engineering & Remote Sensing 86, no. 9 (September 1, 2020): 541–46. http://dx.doi.org/10.14358/pers.86.9.541.

Abstract:
Automatic image exploitation is a critical technology for quick content analysis of high-resolution remote sensing images. The presence of a heliport on an image usually implies an important facility, such as a military facility. Therefore, detection of heliports can reveal critical information about the content of an image. In this article, two learning-based algorithms are presented that make use of artificial neural networks to detect H-shaped, light-colored heliports. The first algorithm is based on shape analysis of the heliport candidate segments using classical artificial neural networks. The second algorithm uses deep-learning techniques. While deep learning can solve difficult problems successfully, classical learning approaches can be tuned easily to obtain fast and reasonable results. Therefore, although the main objective of this article is heliport detection, it also compares a deep-learning-based approach with a classical learning-based approach and discusses the advantages and disadvantages of both techniques.
4

YAO, XIN. "EVOLUTIONARY ARTIFICIAL NEURAL NETWORKS." International Journal of Neural Systems 04, no. 03 (September 1993): 203–22. http://dx.doi.org/10.1142/s0129065793000171.

Abstract:
Evolutionary artificial neural networks (EANNs) can be considered as a combination of artificial neural networks (ANNs) and evolutionary search procedures such as genetic algorithms (GAs). This paper distinguishes among three levels of evolution in EANNs, i.e., the evolution of connection weights, architectures, and learning rules. It first reviews each kind of evolution in detail and then analyses major issues related to each kind of evolution. It is shown in the paper that although there is a lot of work on the evolution of connection weights and architectures, research on the evolution of learning rules is still in its early stages. Interactions among different levels of evolution are far from being understood. It is argued in the paper that the evolution of learning rules and its interactions with other levels of evolution play a vital role in EANNs.
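
The first of these three levels, evolving connection weights directly, is easy to illustrate. Below is a minimal, hypothetical sketch that evolves the nine weights of a fixed 2-2-1 network on XOR with truncation selection and Gaussian mutation; it is a toy, not code from the survey.

```python
import numpy as np

# Evolve the connection weights of a fixed 2-2-1 network on XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def forward(w, x):
    W1 = w[:4].reshape(2, 2); b1 = w[4:6]
    W2 = w[6:8];              b2 = w[8]
    h = np.tanh(x @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

def fitness(w):
    return -np.mean((forward(w, X) - y) ** 2)   # higher is better

rng = np.random.default_rng(0)
pop = rng.normal(size=(60, 9))
for _ in range(300):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-20:]]           # truncation selection
    children = parents[rng.integers(0, 20, size=40)]  # clone parents
    children = children + rng.normal(scale=0.3, size=children.shape)  # mutate
    pop = np.vstack([parents, children])

best = max(pop, key=fitness)
print(np.round(forward(best, X)))   # ideally [0, 1, 1, 0]
```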
5

Sporea, Ioana, and André Grüning. "Supervised Learning in Multilayer Spiking Neural Networks." Neural Computation 25, no. 2 (February 2013): 473–509. http://dx.doi.org/10.1162/neco_a_00396.

Abstract:
We introduce a supervised learning algorithm for multilayer spiking neural networks. The algorithm overcomes a limitation of existing learning algorithms: it can be applied to neurons firing multiple spikes in artificial neural networks with hidden layers. It can also, in principle, be used with any linearizable neuron model and allows different coding schemes of spike train patterns. The algorithm is applied successfully to classic linearly nonseparable benchmarks such as the XOR problem and the Iris data set, as well as to more complex classification and mapping problems. The algorithm has been successfully tested in the presence of noise, requires smaller networks than reservoir computing, and results in faster convergence than existing algorithms for similar tasks such as SpikeProp.
6

Shah, Habib, Rozaida Ghazali, Nazri Mohd Nawi, and Mustafa Mat Deris. "G-HABC Algorithm for Training Artificial Neural Networks." International Journal of Applied Metaheuristic Computing 3, no. 3 (July 2012): 1–19. http://dx.doi.org/10.4018/jamc.2012070101.

Abstract:
Learning problems for neural networks (NNs) have been widely explored in the past two decades. Researchers have focused more on population-based algorithms because of their nature-inspired processing behavior. Population-based algorithms include Ant Colony Optimization (ACO) and Artificial Bee Colony (ABC); recently, the Hybrid Ant Bee Colony (HABC) algorithm provided an easy way for NN training. These social-based techniques are mostly used for finding the best weight values and for escaping local minima in NN learning. Typically, an NN trained by the traditional approach, namely the Backpropagation (BP) algorithm, has difficulties such as trapping in local minima and slow convergence. The new method, named the Global Hybrid Ant Bee Colony (G-HABC) algorithm, which can overcome the gaps in BP, is used to train the NN for a Boolean function classification task. The simulation results of the NN when trained with the proposed hybrid method were compared with those of Levenberg-Marquardt (LM) and ordinary ABC. From the results, the proposed G-HABC algorithm has been shown to provide better learning performance for NNs, with reduced CPU time and higher success rates.
7

Ding, Shuo, and Qing Hui Wu. "A MATLAB-Based Study on Approximation Performances of Improved Algorithms of Typical BP Neural Networks." Applied Mechanics and Materials 313-314 (March 2013): 1353–56. http://dx.doi.org/10.4028/www.scientific.net/amm.313-314.1353.

Abstract:
BP neural networks are widely used, and their algorithms are various. This paper studies the advantages and disadvantages of improved algorithms of five typical BP networks, based on artificial neural network theories. First, the learning processes of the improved algorithms of the five typical BP networks are elaborated on mathematically. Then a specific network is designed on the platform of MATLAB 7.0 to conduct an approximation test for a given nonlinear function. At last, a comparison is made between the training speeds and memory consumption of the five BP networks. The simulation results indicate that for small-scale and medium-scale networks, the LM optimization algorithm has the best approximation ability, followed by the Quasi-Newton algorithm, the conjugate gradient method, the resilient BP algorithm, and the adaptive learning rate algorithm.
Keywords: BP neural network; Improved algorithm; Function approximation; MATLAB
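
As a rough illustration of the kind of comparison described, here is a hypothetical Python sketch in which SciPy's Levenberg-Marquardt solver (least_squares with method="lm") fits a small 1-5-1 tanh network to a nonlinear function; the target function, architecture, and sizes are invented for the example.

```python
import numpy as np
from scipy.optimize import least_squares

# Approximate f(x) = sin(2x) with a 1-5-1 tanh network whose weights are
# fit by SciPy's Levenberg-Marquardt solver (method="lm").
x = np.linspace(-2, 2, 100)
t = np.sin(2 * x)

def residuals(p):
    W1, b1 = p[:5], p[5:10]
    W2, b2 = p[10:15], p[15]
    h = np.tanh(np.outer(x, W1) + b1)   # (100, 5) hidden activations
    return h @ W2 + b2 - t

rng = np.random.default_rng(0)
fit = least_squares(residuals, rng.normal(size=16), method="lm")
print("final sum of squares:", np.sum(fit.fun ** 2))
```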
8

MAGOULAS, GEORGE D., and MICHAEL N. VRAHATIS. "ADAPTIVE ALGORITHMS FOR NEURAL NETWORK SUPERVISED LEARNING: A DETERMINISTIC OPTIMIZATION APPROACH." International Journal of Bifurcation and Chaos 16, no. 07 (July 2006): 1929–50. http://dx.doi.org/10.1142/s0218127406015805.

Abstract:
Networks of neurons can perform computations that even modern computers find very difficult to simulate. Most of the existing artificial neurons and artificial neural networks are considered biologically unrealistic; nevertheless, the practical success of the backpropagation algorithm and the powerful capabilities of feedforward neural networks have made neural computing very popular in several application areas. A challenging issue in this context is learning internal representations by adjusting the weights of the network connections. To this end, several first-order and second-order algorithms have been proposed in the literature. This paper provides an overview of approaches to backpropagation training, emphasizing first-order adaptive learning algorithms that build on the theory of nonlinear optimization, and proposes a framework for their analysis in the context of deterministic optimization.
9

Gülmez, Burak, and Sinem Kulluk. "Social Spider Algorithm for Training Artificial Neural Networks." International Journal of Business Analytics 6, no. 4 (October 2019): 32–49. http://dx.doi.org/10.4018/ijban.2019100103.

Abstract:
Artificial neural networks (ANNs) are one of the most widely used techniques for generalization, classification, and optimization. ANNs are inspired by the human brain and perform some abilities automatically, like learning new information and making new inferences. Back-propagation (BP) is the most common algorithm for training ANNs, but the BP algorithm is too slow and can be trapped in local optima. Meta-heuristic algorithms overcome these drawbacks and are frequently used in training ANNs. In this study, a new-generation meta-heuristic, the Social Spider (SS) algorithm, is adapted for training ANNs. The performance of the algorithm is compared with conventional and meta-heuristic algorithms on classification benchmark problems in the literature. The algorithm is also applied to real-world data in order to predict the production of a factory in Kayseri and compared with some regression-based algorithms and ANN models. The obtained results and comparisons on classification benchmark datasets have shown that the SS algorithm is a competitive algorithm for training ANNs. On the real-world production dataset, the SS algorithm outperformed all compared algorithms. As a result of the experimental studies, the SS algorithm is highly capable of training ANNs and can be used for both classification and regression.
10

Daskin, Ammar. "A Quantum Implementation Model for Artificial Neural Networks." Quanta 7, no. 1 (February 20, 2018): 7. http://dx.doi.org/10.12743/quanta.v7i1.65.

Abstract:
The learning process for multilayered neural networks with many nodes makes heavy demands on computational resources. In some neural network models, the learning formulas, such as the Widrow–Hoff formula, do not change the eigenvectors of the weight matrix while flattening the eigenvalues. In the limit, these iterative formulas result in terms formed by the principal components of the weight matrix, namely, the eigenvectors corresponding to the non-zero eigenvalues. In quantum computing, the phase estimation algorithm is known to provide speedups over conventional algorithms for eigenvalue-related problems. Combining quantum amplitude amplification with the phase estimation algorithm, a quantum implementation model for artificial neural networks using the Widrow–Hoff learning rule is presented. The complexity of the model is found to be linear in the size of the weight matrix. This provides a quadratic improvement over the classical algorithms. Quanta 2018; 7: 7–18.
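
The Widrow–Hoff (least-mean-squares) rule mentioned above is a one-line update, w <- w + eta * (d - w·x) * x. A minimal sketch on synthetic data, assuming a plain linear unit:

```python
import numpy as np

# Widrow-Hoff (LMS) rule: w <- w + eta * (d - w.x) * x for each sample.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
w_true = np.array([0.5, -1.0, 2.0, 0.0])
d = X @ w_true            # desired outputs of a linear unit

w = np.zeros(4)
eta = 0.01                # learning rate
for x_i, d_i in zip(X, d):
    error = d_i - w @ x_i
    w += eta * error * x_i

print(np.round(w, 3))     # should approach w_true
```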

Dissertations / Theses on the topic "Artificial neural networks; Learning algorithms"

1

Sannossian, Hermineh Y. "A study of artificial neural networks and their learning algorithms." Thesis, Loughborough University, 1992. https://dspace.lboro.ac.uk/2134/11194.

Abstract:
The work presented in this thesis is mainly concerned with the study of Artificial Neural Networks (ANNs) and their learning strategies. An ANN simulator incorporating the Backpropagation (BP) algorithm is designed, analysed, and run on a MIMD parallel computer, namely, the Balance 8000 multiprocessor machine. Initially, an overview of the learning algorithms of ANNs is given. Some of the acceleration techniques, including heuristic methods for BP-like algorithms, are introduced. The software design of the simulator for both on-line and batch BP is described. Two different strategies for parallelism are considered and the resulting speedups of both algorithms are compared. Later, a heuristic algorithm (GRBH) for accelerating the BP method is introduced and the results are compared with BP using a variety of illustrative examples. The simulator is used to train networks for invariant character recognition using moments. The trained networks are tested on different examples and the results are analysed. The thesis concludes with a chapter summarizing the main results and suggestions for further study.
2

Ghosh, Ranadhir. "A Novel Hybrid Learning Algorithm For Artificial Neural Networks." Griffith University, School of Information Technology, 2003. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20030808.162355.

Abstract:
The last few decades have witnessed the use of artificial neural networks (ANNs) in many real-world applications, and ANNs have offered an attractive paradigm for a broad range of adaptive complex systems. In recent years ANNs have enjoyed a great deal of success and have proven useful in a wide variety of pattern recognition and feature extraction tasks. Examples include optical character recognition, speech recognition and adaptive control, to name a few. To keep pace with the huge demand in diversified application areas, many different kinds of ANN architectures and learning types have been proposed by researchers to meet varying needs. A novel hybrid learning approach for the training of a feed-forward ANN is proposed in this thesis. The approach combines evolutionary algorithms with matrix solution methods such as singular value decomposition and Gram-Schmidt orthogonalization to achieve optimum weights for the hidden and output layers. The proposed hybrid method applies an evolutionary algorithm in the first layer and the least squares method (LS) in the second layer of the ANN. The methodology also finds the optimum number of hidden neurons using a hierarchical combination structure for weights and architecture. A learning algorithm has many facets that can make it suitable for a particular application area. Often there are trade-offs between classification accuracy and time complexity; nevertheless, the problem of memory complexity remains. This research explores all the different facets of the proposed new algorithm in terms of classification accuracy, convergence property, generalization ability, and time and memory complexity.
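
The hybrid structure described here, searching the hidden layer while solving the output layer exactly by least squares, can be sketched in a few lines. In this hypothetical toy, random candidates stand in for the evolutionary search and numpy's lstsq plays the role of the matrix solution step.

```python
import numpy as np

# Hybrid idea: search hidden-layer weights (randomly perturbed candidates
# standing in for an evolutionary algorithm) and solve the output layer
# exactly by linear least squares (np.linalg.lstsq).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2       # toy target function

def fit_output_layer(W1, b1):
    H = np.tanh(X @ W1 + b1)                  # hidden activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    mse = np.mean((H @ beta - y) ** 2)
    return beta, mse

best = None
for _ in range(50):                           # crude "evolutionary" search
    W1 = rng.normal(size=(2, 10)); b1 = rng.normal(size=10)
    beta, mse = fit_output_layer(W1, b1)
    if best is None or mse < best[0]:
        best = (mse, W1, b1, beta)

print("best hidden-layer MSE:", best[0])
```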
3

Chen, Hsinchun. "Machine Learning for Information Retrieval: Neural Networks, Symbolic Learning, and Genetic Algorithms." Wiley Periodicals, Inc, 1995. http://hdl.handle.net/10150/106427.

Abstract:
Artificial Intelligence Lab, Department of MIS, University of Arizona
Information retrieval using probabilistic techniques has attracted significant attention on the part of researchers in information and computer science over the past few decades. In the 1980s, knowledge-based techniques also made an impressive contribution to “intelligent” information retrieval and indexing. More recently, information science researchers have turned to other newer artificial-intelligence-based inductive learning techniques including neural networks, symbolic learning, and genetic algorithms. These newer techniques, which are grounded on diverse paradigms, have provided great opportunities for researchers to enhance the information processing and retrieval capabilities of current information storage and retrieval systems. In this article, we first provide an overview of these newer techniques and their use in information science research. To familiarize readers with these techniques, we present three popular methods: the connectionist Hopfield network; the symbolic ID3/ID5R; and evolution-based genetic algorithms. We discuss their knowledge representations and algorithms in the context of information retrieval. Sample implementation and testing results from our own research are also provided for each technique. We believe these techniques are promising in their ability to analyze user queries, identify users' information needs, and suggest alternatives for search. With proper user-system interactions, these methods can greatly complement the prevailing full-text, keyword-based, probabilistic, and knowledge-based techniques.
4

Bubie, Walter C. "Algorithm animation and its application to artificial neural network learning /." Online version of thesis, 1991. http://hdl.handle.net/1850/11055.

5

Hofer, Daniel G. Sbarbaro. "Connectionist feedforward networks for control of nonlinear systems." Thesis, University of Glasgow, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.390248.

6

Khalid, Fahad. "Measure-based Learning Algorithms : An Analysis of Back-propagated Neural Networks." Thesis, Blekinge Tekniska Högskola, Avdelningen för för interaktion och systemdesign, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4795.

Abstract:
In this thesis we present a theoretical investigation of the feasibility of using a problem-specific inductive bias for back-propagated neural networks. We argue that if a learning algorithm is biased towards optimizing a certain performance measure, it is plausible to assume that it will generate a higher performance score when evaluated using that particular measure. We use the term measure function for a multi-criteria evaluation function that can also be used as an inherent function in learning algorithms, in order to customize the bias of a learning algorithm for a specific problem. Hence, the term measure-based learning algorithms. We discuss different characteristics of the most commonly used performance measures and establish similarities among them. The characteristics of individual measures and the established similarities are then correlated to the characteristics of the backpropagation algorithm, in order to explore the applicability of introducing a measure function to backpropagated neural networks. Our study shows that there are certain characteristics of the error back-propagation mechanism and the inherent gradient search method that limit the set of measures that can be used for the measure function. Also, we highlight the significance of taking the representational bias of the neural network into account when developing methods for measure-based learning. The overall analysis shows that measure-based learning is a promising area of research with potential for further exploration. We suggest directions for future research that might help realize measure-based neural networks.
The study is an investigation of the feasibility of using a generic inductive bias for backpropagation artificial neural networks, which could incorporate any one or a combination of problem-specific performance metrics to be optimized. We have identified several limitations of both the standard error backpropagation mechanism and the inherent gradient search approach. These limitations suggest the exploration of methods other than backpropagation, as well as the use of global search methods instead of gradient search. Also, we emphasize the importance of taking the representational bias of the neural network into consideration, since only a combination of both procedural and representational bias can provide highly optimal solutions.
7

Rimer, Michael Edwin. "Improving Neural Network Classification Training." Diss., Brigham Young University, 2007. http://contentdm.lib.byu.edu/ETD/image/etd2094.pdf.

8

Singh, Y., and M. Mars. "A pilot study to integrate HIV drug resistance gold standard interpretation algorithms using neural networks." Journal for New Generation Sciences, Vol 11, Issue 2: Central University of Technology, Free State, Bloemfontein, 2013. http://hdl.handle.net/11462/639.

Abstract:
Published Article
There are several HIV drug resistance interpretation algorithms which produce different resistance measures even when applied to the same resistance profile. This discrepancy leads to confusion in the mind of the physician when choosing the best ARV therapy.
9

Ncube, Israel. "Stochastic approximation of artificial neural network-type learning algorithms, a dynamical systems approach." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/NQ60559.pdf.

10

Topalli, Ayca Kumluca. "Hybrid Learning Algorithm For Intelligent Short-term Load Forecasting." PhD thesis, METU, 2003. http://etd.lib.metu.edu.tr/upload/627505/index.pdf.

Abstract:
Short-term load forecasting (STLF) is an important part of the power generation process. For years, it has been achieved by traditional stochastic approaches like time series analysis, but new methods based on artificial intelligence have recently emerged in the literature and started to replace the old ones in the industry. In order to follow the latest developments and to have a modern system, it is aimed to carry out research on STLF in Turkey using neural networks. For this purpose, a method is proposed to forecast Turkey's total electric load one day in advance. A hybrid learning scheme that combines off-line learning with real-time forecasting is developed to make use of the available past data for adapting the weights and to further adjust these connections according to the changing conditions. It is also suggested to tune the step size iteratively for better accuracy. Since a single neural network model cannot cover all load types, data are clustered due to the differences in their characteristics. Apart from this, special days are extracted from the normal training sets and handled separately. In this way, a solution is proposed for all load types, including working days, weekends and special holidays. For the selection of input parameters, a technique based on principal component analysis is suggested. A traditional ARMA model is constructed for the same data as a benchmark and the results are compared. The proposed method gives lower percent errors all the time, especially for holiday loads. The average error for year 2002 data is obtained as 1.60%.

Books on the topic "Artificial neural networks; Learning algorithms"

1

Venetsanopoulos, A. N., ed. Artificial neural networks: Learning algorithms, performance evaluation, and applications. Boston: Kluwer Academic, 1993.

2

Vazirani, Umesh Virkumar, ed. An introduction to computational learning theory. Cambridge, Mass: MIT Press, 1994.

3

Duch, Włodzisław, Péter Érdi, Francesco Masulli, and Günther Palm, eds. Artificial Neural Networks and Machine Learning – ICANN 2012: 22nd International Conference on Artificial Neural Networks, Lausanne, Switzerland, September 11-14, 2012, Proceedings, Part I. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.

4

Duch, Włodzisław, Péter Érdi, Francesco Masulli, and Günther Palm, eds. Artificial Neural Networks and Machine Learning – ICANN 2012: 22nd International Conference on Artificial Neural Networks, Lausanne, Switzerland, September 11-14, 2012, Proceedings, Part II. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.

5

Kolehmainen, Mikko. Adaptive and Natural Computing Algorithms: 9th International Conference, ICANNGA 2009, Kuopio, Finland, April 23-25, 2009, Revised Selected Papers. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009.

6

International Conference on Artificial Neural Networks and Genetic Algorithms (2007, Warsaw, Poland). Adaptive and natural computing algorithms: 8th international conference, ICANNGA 2007, Warsaw, Poland, April 11-14, 2007: proceedings. Berlin: Springer, 2007.

7

Mars, P. Learning algorithms: Theory and applications in signal processing, control, and communications. Boca Raton: CRC Press, 1996.

8

Myers, Catherine E. Delay learning in artificial neural networks. London: Chapman & Hall, 1992.


Book chapters on the topic "Artificial neural networks; Learning algorithms"

1

Karayiannis, N. B., and A. N. Venetsanopoulos. "Fast Learning Algorithms for Neural Networks." In Artificial Neural Networks, 141–93. Boston, MA: Springer US, 1993. http://dx.doi.org/10.1007/978-1-4757-4547-4_4.

2

Karayiannis, N. B., and A. N. Venetsanopoulos. "ELEANNE: Efficient LEarning Algorithms for Neural NEtworks." In Artificial Neural Networks, 87–139. Boston, MA: Springer US, 1993. http://dx.doi.org/10.1007/978-1-4757-4547-4_3.

3

Karayiannis, N. B., and A. N. Venetsanopoulos. "ALADIN: Algorithms for Learning and Architecture DetermINation." In Artificial Neural Networks, 195–218. Boston, MA: Springer US, 1993. http://dx.doi.org/10.1007/978-1-4757-4547-4_5.

4

Ayyadevara, V. Kishore. "Artificial Neural Network." In Pro Machine Learning Algorithms, 135–65. Berkeley, CA: Apress, 2018. http://dx.doi.org/10.1007/978-1-4842-3564-5_7.

5

Davoian, Kristina, and Wolfram-M. Lippe. "Mixing Different Search Biases in Evolutionary Learning Algorithms." In Artificial Neural Networks – ICANN 2009, 111–20. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04274-4_12.

6

Giudici, Matteo, Filippo Queirolo, and Maurizio Valle. "Stochastic Supervised Learning Algorithms with Local and Adaptive Learning Rate for Recognising Hand-Written Characters." In Artificial Neural Networks — ICANN 2002, 619–24. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-46084-5_101.

7

Neruda, R. "Canonical Genetic Learning of RBF Networks Is Faster." In Artificial Neural Nets and Genetic Algorithms, 350–53. Vienna: Springer Vienna, 1998. http://dx.doi.org/10.1007/978-3-7091-6492-1_77.

8

Neruda, Roman. "Functional Equivalence and Genetic Learning of RBF Networks." In Artificial Neural Nets and Genetic Algorithms, 53–56. Vienna: Springer Vienna, 1995. http://dx.doi.org/10.1007/978-3-7091-7535-4_16.

9

Gas, B., and R. Natowicz. "Unsupervised Learning of Temporal Sequences by Neural Networks." In Artificial Neural Nets and Genetic Algorithms, 253–56. Vienna: Springer Vienna, 1995. http://dx.doi.org/10.1007/978-3-7091-7535-4_67.

10

Bermak, A., and H. Poulard. "On VLSI Implementation of Multiple Output Sequential Learning Networks." In Artificial Neural Nets and Genetic Algorithms, 93–97. Vienna: Springer Vienna, 1998. http://dx.doi.org/10.1007/978-3-7091-6492-1_20.


Conference papers on the topic "Artificial neural networks; Learning algorithms"

1

McNeill, D. K. "Competitive learning algorithms in adaptive educational toys." In Fifth International Conference on Artificial Neural Networks. IEE, 1997. http://dx.doi.org/10.1049/cp:19970727.

2

Ida, Yasutoshi, Yasuhiro Fujiwara, and Sotetsu Iwamura. "Adaptive Learning Rate via Covariance Matrix Based Preconditioning for Deep Neural Networks." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/267.

Abstract:
Adaptive learning rate algorithms such as RMSProp are widely used for training deep neural networks. RMSProp offers efficient training since it uses first-order gradients to approximate Hessian-based preconditioning. However, since the first-order gradients include noise caused by stochastic optimization, the approximation may be inaccurate. In this paper, we propose a novel adaptive learning rate algorithm called SDProp. Its key idea is effective handling of the noise by preconditioning based on the covariance matrix. For various neural networks, our approach is more efficient and effective than RMSProp and its variant.
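
For reference, the RMSProp baseline that SDProp refines keeps a running average of squared gradients as a diagonal preconditioner. A minimal sketch, with a toy quadratic loss standing in for a network:

```python
import numpy as np

def rmsprop_step(w, grad, v, lr=1e-3, rho=0.9, eps=1e-8):
    """One RMSProp update: the running average of squared gradients acts
    as a cheap diagonal preconditioner for the step size."""
    v = rho * v + (1 - rho) * grad ** 2
    w = w - lr * grad / (np.sqrt(v) + eps)
    return w, v

# Usage on a toy quadratic loss L(w) = 0.5 * ||w||^2, whose gradient is w.
w = np.array([5.0, -3.0])
v = np.zeros_like(w)
for _ in range(1000):
    w, v = rmsprop_step(w, grad=w, v=v, lr=0.05)
print(w)   # approaches the minimum at the origin
```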
3

Qi, Yu, Jiangrong Shen, Yueming Wang, Huajin Tang, Hang Yu, Zhaohui Wu, and Gang Pan. "Jointly Learning Network Connections and Link Weights in Spiking Neural Networks." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/221.

Abstract:
Spiking neural networks (SNNs) are considered to be biologically plausible and power-efficient on neuromorphic hardware. However, unlike the brain's mechanisms, most existing SNN algorithms have fixed network topologies and connection relationships. This paper proposes a method to jointly learn network connections and link weights simultaneously. The connection structures are optimized by the spike-timing-dependent plasticity (STDP) rule with timing information, and the link weights are optimized by a supervised algorithm. The connection structures and the weights are learned alternately until a termination condition is satisfied. Experiments are carried out using four benchmark datasets. Our approach outperforms classical learning methods such as STDP, Tempotron, SpikeProp, and a state-of-the-art supervised algorithm. In addition, the learned structures effectively reduce the number of connections by about 24%, thus facilitating the computational efficiency of the network.
4

Eggensperger, Katharina, Marius Lindauer, and Frank Hutter. "Neural Networks for Predicting Algorithm Runtime Distributions." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/200.

Abstract:
Many state-of-the-art algorithms for solving hard combinatorial problems in artificial intelligence (AI) include elements of stochasticity that lead to high variations in runtime, even for a fixed problem instance. Knowledge about the resulting runtime distributions (RTDs) of algorithms on given problem instances can be exploited in various meta-algorithmic procedures, such as algorithm selection, portfolios, and randomized restarts. Previous work has shown that machine learning can be used to individually predict the mean, median and variance of RTDs. To establish a new state of the art in predicting RTDs, we demonstrate that the parameters of an RTD should be learned jointly and that neural networks can do this well by directly optimizing the likelihood of an RTD given runtime observations. In an empirical study involving five algorithms for SAT solving and AI planning, we show that neural networks predict the true RTDs of unseen instances better than previous methods, and can even do so when only a few runtime observations are available per training instance.
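
To see what optimizing the likelihood of an RTD means concretely, here is a hypothetical sketch that fits a lognormal runtime distribution (a common parametric choice for randomized solvers; the paper's actual distribution families may differ) to observed runtimes by maximum likelihood:

```python
import numpy as np
from scipy import stats

# Fit a parametric runtime distribution to observed runtimes by maximum
# likelihood; the runtimes here are synthetic.
rng = np.random.default_rng(0)
observed_runtimes = rng.lognormal(mean=1.0, sigma=0.5, size=200)

shape, loc, scale = stats.lognorm.fit(observed_runtimes, floc=0)
print("sigma ~", shape, "median runtime ~", scale)

# Log-likelihood of the fitted RTD: the quantity a neural model would be
# trained to maximize when predicting RTD parameters per instance.
loglik = np.sum(stats.lognorm.logpdf(observed_runtimes, shape, loc, scale))
print("log-likelihood:", loglik)
```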
5

Fang, Haowen, Amar Shrestha, Ziyi Zhao, and Qinru Qiu. "Exploiting Neuron and Synapse Filter Dynamics in Spatial Temporal Learning of Deep Spiking Neural Network." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/388.

Abstract:
The recently discovered spatial-temporal information processing capability of bio-inspired spiking neural networks (SNN) has enabled some interesting models and applications. However, designing large-scale, high-performance models is still a challenge due to the lack of robust training algorithms. A bio-plausible SNN model with the spatial-temporal property is a complex dynamic system. Synapses and neurons behave as filters capable of preserving temporal information. When such neuron dynamics and filter effects are ignored in existing training algorithms, the SNN downgrades into a memoryless system and loses the ability of temporal signal processing. Furthermore, spike timing plays an important role in information representation, but conventional rate-based spike coding models only consider spike trains statistically and discard the information carried by their temporal structures. To address the above issues and exploit the temporal dynamics of SNNs, we formulate the SNN as a network of infinite impulse response (IIR) filters with neuron nonlinearity. We propose a training algorithm that is capable of learning spatial-temporal patterns by searching for the optimal synapse filter kernels and weights. The proposed model and training algorithm are applied to construct associative memories and classifiers for synthetic and public datasets including MNIST, NMNIST, DVS 128, etc. Their accuracy outperforms state-of-the-art approaches.
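
The neuron-as-IIR-filter view is easy to see in code: a leaky integrate-and-fire neuron is a first-order IIR filter followed by a threshold. A minimal hypothetical sketch, with parameters invented for illustration:

```python
import numpy as np

# A leaky integrate-and-fire neuron as a first-order IIR filter with a
# threshold nonlinearity: v[t] = alpha * v[t-1] + I[t], spike and reset
# when v crosses the threshold.
def lif(input_current, alpha=0.9, threshold=1.0):
    v, spikes, trace = 0.0, [], []
    for I in input_current:
        v = alpha * v + I          # leaky integration (IIR filtering)
        if v >= threshold:
            spikes.append(1)
            v = 0.0                # reset after spiking
        else:
            spikes.append(0)
        trace.append(v)
    return np.array(spikes), np.array(trace)

rng = np.random.default_rng(0)
spikes, trace = lif(rng.uniform(0, 0.3, size=100))
print("spike count:", spikes.sum())
```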
6

He, Yu, Jianxin Li, Yangqiu Song, Mutian He, and Hao Peng. "Time-evolving Text Classification with Deep Neural Networks." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/310.

Abstract:
Traditional text classification algorithms are based on the assumption that data are independent and identically distributed. However, in most non-stationary scenarios, data may change smoothly due to long-term evolution and short-term fluctuation, which raises new challenges to traditional methods. In this paper, we present the first attempt to explore evolutionary neural network models for time-evolving text classification. We first introduce a simple way to extend arbitrary neural networks to evolutionary learning by using a temporal smoothness framework, and then propose a diachronic propagation framework to incorporate the historical impact into currently learned features through diachronic connections. Experiments on real-world news data demonstrate that our approaches greatly and consistently outperform traditional neural network models in both accuracy and stability.
7

Prechelt, L. "A quantitative study of experimental neural network learning algorithm evaluation practices." In 4th International Conference on Artificial Neural Networks. IEE, 1995. http://dx.doi.org/10.1049/cp:19950558.

8

Banerjee, Amit, Issam Abu-Mahfouz, and AHM Esfakur Rahman. "Multi-Objective Optimization of Parameters for Milling Using Evolutionary Algorithms and Artificial Neural Networks." In ASME 2019 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/imece2019-11438.

Abstract:
Model-based design of manufacturing processes has been gaining popularity since the advent of machine learning algorithms such as evolutionary algorithms and artificial neural networks (ANN). The problem of selecting the best machining parameters can be cast as an optimization problem given a cost function and an input-output connectionist framework such as ANNs. In this paper, we present a comparison of various evolutionary algorithms for parameter optimization of an end-milling operation based on a well-known cost function from the literature. We propose a modification to the cost function for milling and include an additional objective of minimizing surface roughness by using NSGA-II, a multi-objective optimization algorithm. We also present a comparison of several population-based evolutionary search algorithms such as variants of particle swarm optimization, differential evolution, and NSGA-II.
9

Gu, Pengjie, Rong Xiao, Gang Pan, and Huajin Tang. "STCA: Spatio-Temporal Credit Assignment with Delayed Feedback in Deep Spiking Neural Networks." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/189.

Abstract:
The temporal credit assignment problem, which aims to discover the predictive features hidden in distracting background streams with delayed feedback, remains a core challenge in biological and machine learning. To address this issue, we propose a novel spatio-temporal credit assignment algorithm called STCA for training deep spiking neural networks (DSNNs). We present a new spatio-temporal error backpropagation policy by defining a temporal-based loss function, which is able to credit the network losses to the spatial and temporal domains simultaneously. Experimental results on the MNIST dataset and a music dataset (MedleyDB) demonstrate that STCA can achieve comparable performance with other state-of-the-art algorithms with simpler architectures. Furthermore, STCA successfully discovers predictive sensory features and shows the highest performance in unsegmented sensory event detection tasks.
10

Zhang, Yikai, Hui Qu, Chao Chen, and Dimitris Metaxas. "Taming the Noisy Gradient: Train Deep Neural Networks with Small Batch Sizes." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/604.

Abstract:
Deep learning architectures are usually proposed with millions of parameters, resulting in a memory issue when training deep neural networks with stochastic gradient descent type methods using large batch sizes. However, training with small batch sizes tends to produce low-quality solutions due to the large variance of stochastic gradients. In this paper, we tackle this problem by proposing a new framework for training deep neural networks with small batches/noisy gradients. During optimization, our method iteratively applies a proximal-type regularizer to make the loss function strongly convex. Such a regularizer stabilizes the gradient, leading to better training performance. We prove that our algorithm achieves a convergence rate comparable to vanilla SGD even with small batch sizes. Our framework is simple to implement and can potentially be combined with many existing optimization algorithms. Empirical results show that our method outperforms SGD and Adam when the batch size is small. Our implementation is available at https://github.com/huiqu18/TRAlgorithm.
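
The proximal idea above amounts to adding a quadratic anchor term to each step's objective. A hypothetical sketch, with the learning rate, anchor schedule, and toy loss all invented for illustration (see the authors' repository for the real implementation):

```python
import numpy as np

# Proximal-type regularization for small-batch training: each step
# minimizes loss(w) + (lam/2) * ||w - w_anchor||^2, which adds strong
# convexity and damps the noise of small-batch gradients.
def proximal_sgd_step(w, w_anchor, batch_grad, lr=0.1, lam=1.0):
    grad = batch_grad + lam * (w - w_anchor)   # gradient of regularized loss
    return w - lr * grad

# Toy usage: noisy gradients of L(w) = 0.5 * ||w||^2 (true gradient is w).
rng = np.random.default_rng(0)
w = np.array([4.0, -2.0])
for _ in range(200):
    w_anchor = w.copy()                        # re-anchor each outer step
    for _ in range(5):                         # a few inner proximal steps
        noisy_grad = w + rng.normal(scale=0.5, size=2)
        w = proximal_sgd_step(w, w_anchor, noisy_grad)
print(w)   # approaches the minimum at the origin
```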

Reports on the topic "Artificial neural networks; Learning algorithms"

1

Powell, James Estes, Jr. Learning Memento archive routing with Character-based Artificial Neural Networks. Office of Scientific and Technical Information (OSTI), October 2018. http://dx.doi.org/10.2172/1477616.

2

Arhin, Stephen, Babin Manandhar, Hamdiat Baba Adam, and Adam Gatiba. Predicting Bus Travel Times in Washington, DC Using Artificial Neural Networks (ANNs). Mineta Transportation Institute, April 2021. http://dx.doi.org/10.31979/mti.2021.1943.

Abstract:
Washington, DC is ranked second among cities in terms of highest public transit commuters in the United States, with approximately 9% of the working population using the Washington Metropolitan Area Transit Authority (WMATA) Metrobuses to commute. Deducing accurate travel times of these Metrobuses is an important task for transit authorities to provide reliable service to their patrons. This study, using Artificial Neural Networks (ANN), developed prediction models for transit buses to assist decision-makers in improving service quality and patronage. For this study, we used six months of Automatic Vehicle Location (AVL) and Automatic Passenger Counting (APC) data for six Washington Metropolitan Area Transit Authority (WMATA) bus routes operating in Washington, DC. We developed regression models and Artificial Neural Network (ANN) models for predicting travel times of buses for different peak periods (AM, Mid-Day and PM). Our analysis included variables such as the number of served bus stops, the length of the route between bus stops, the average number of passengers in the bus, the average dwell time of buses, and the number of intersections between bus stops. We obtained ANN models for travel times by using an approximation technique incorporating two separate algorithms: Quasi-Newton and Levenberg-Marquardt. The training strategy for the neural network models involved feed-forward and error-backpropagation processes that minimized the generated errors. We also evaluated the models with a comparison of the normalized squared errors (NSE). From the results, we observed that the travel times of buses and the dwell times at bus stops generally increased over the course of the day. We gathered travel time equations for buses for the AM, Mid-Day and PM peaks. The lowest NSE for the AM, Mid-Day and PM peak periods corresponded to training processes using the Quasi-Newton algorithm, which had 3, 2 and 5 perceptron layers, respectively. These prediction models could be adopted by transit agencies to provide patrons with accurate travel time information at bus stops or online.
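
One common definition of a normalized squared error divides the squared prediction error by the variance of the observations; the report's exact normalization may differ. A quick hypothetical sketch:

```python
import numpy as np

# Normalized squared error (NSE): squared prediction error normalized by
# the variance of the observed values (one common convention).
def normalized_squared_error(observed, predicted):
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    return np.sum((observed - predicted) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

travel_times = [12.1, 14.3, 11.8, 15.2, 13.5]   # minutes, hypothetical
predictions  = [12.5, 13.9, 12.0, 15.0, 13.1]
print(normalized_squared_error(travel_times, predictions))
```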
3

Hart, Carl R., D. Keith Wilson, Chris L. Pettit, and Edward T. Nykaza. Machine-Learning of Long-Range Sound Propagation Through Simulated Atmospheric Turbulence. U.S. Army Engineer Research and Development Center, July 2021. http://dx.doi.org/10.21079/11681/41182.

Abstract:
Conventional numerical methods can capture the inherent variability of long-range outdoor sound propagation. However, computational memory and time requirements are high. In contrast, machine-learning models provide very fast predictions. This comes by learning from experimental observations or surrogate data. Yet, it is unknown what type of surrogate data is most suitable for machine learning. This study used a Crank-Nicholson parabolic equation (CNPE) for generating the surrogate data. The CNPE input data were sampled by the Latin hypercube technique. Two separate datasets comprised 5000 samples of model input. The first dataset consisted of transmission loss (TL) fields for single realizations of turbulence. The second dataset consisted of average TL fields for 64 realizations of turbulence. Three machine-learning algorithms were applied to each dataset, namely, ensemble decision trees, neural networks, and cluster-weighted models. Observational data come from a long-range (out to 8 km) sound propagation experiment. In comparison to the experimental observations, regression predictions have a median absolute error of 5–7 dB. Surrogate data quality depends on an accurate characterization of refractive and scattering conditions. Predictions obtained through a single realization of turbulence agree better with the experimental observations.
4

Idakwo, Gabriel, Sundar Thangapandian, Joseph Luttrell, Zhaoxian Zhou, Chaoyang Zhang, and Ping Gong. Deep learning-based structure-activity relationship modeling for multi-category toxicity classification : a case study of 10K Tox21 chemicals with high-throughput cell-based androgen receptor bioassay data. Engineer Research and Development Center (U.S.), July 2021. http://dx.doi.org/10.21079/11681/41302.

Abstract:
Deep learning (DL) has attracted the attention of computational toxicologists as it offers potentially greater power for in silico predictive toxicology than existing shallow learning algorithms. However, contradictory reports have been documented. To further explore the advantages of DL over shallow learning, we conducted this case study using two cell-based androgen receptor (AR) activity datasets with 10K chemicals generated from the Tox21 program. A nested double-loop cross-validation approach was adopted, along with a stratified sampling strategy for partitioning chemicals of multiple AR activity classes (i.e., agonist, antagonist, inactive, and inconclusive) at the same distribution rates amongst the training, validation and test subsets. Deep neural networks (DNN) and random forest (RF), representing deep and shallow learning algorithms, respectively, were chosen to carry out structure-activity relationship-based chemical toxicity prediction. Results suggest that DNN significantly outperformed RF (p < 0.001, ANOVA) by 22–27% for four metrics (precision, recall, F-measure, and AUPRC) and by 11% for another (AUROC). Further in-depth analyses of chemical scaffolding shed insights on structural alerts for AR agonists/antagonists and inactive/inconclusive compounds, which may aid in future drug discovery and improvement of toxicity prediction modeling.