Academic literature on the topic 'Artificial neural networks; Learning algorithms'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Artificial neural networks; Learning algorithms.'
Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "Artificial neural networks; Learning algorithms"
Gumus, Fatma, and Derya Yiltas-Kaplan. "Congestion Prediction System With Artificial Neural Networks." International Journal of Interdisciplinary Telecommunications and Networking 12, no. 3 (July 2020): 28–43. http://dx.doi.org/10.4018/ijitn.2020070103.
Javed, Abbas, Hadi Larijani, Ali Ahmadinia, and Rohinton Emmanuel. "Random Neural Network Learning Heuristics." Probability in the Engineering and Informational Sciences 31, no. 4 (May 22, 2017): 436–56. http://dx.doi.org/10.1017/s0269964817000201.
Başeski, Emre. "Heliport Detection Using Artificial Neural Networks." Photogrammetric Engineering & Remote Sensing 86, no. 9 (September 1, 2020): 541–46. http://dx.doi.org/10.14358/pers.86.9.541.
Yao, Xin. "Evolutionary Artificial Neural Networks." International Journal of Neural Systems 4, no. 3 (September 1993): 203–22. http://dx.doi.org/10.1142/s0129065793000171.
Sporea, Ioana, and André Grüning. "Supervised Learning in Multilayer Spiking Neural Networks." Neural Computation 25, no. 2 (February 2013): 473–509. http://dx.doi.org/10.1162/neco_a_00396.
Shah, Habib, Rozaida Ghazali, Nazri Mohd Nawi, and Mustafa Mat Deris. "G-HABC Algorithm for Training Artificial Neural Networks." International Journal of Applied Metaheuristic Computing 3, no. 3 (July 2012): 1–19. http://dx.doi.org/10.4018/jamc.2012070101.
Ding, Shuo, and Qing Hui Wu. "A MATLAB-Based Study on Approximation Performances of Improved Algorithms of Typical BP Neural Networks." Applied Mechanics and Materials 313-314 (March 2013): 1353–56. http://dx.doi.org/10.4028/www.scientific.net/amm.313-314.1353.
Magoulas, George D., and Michael N. Vrahatis. "Adaptive Algorithms for Neural Network Supervised Learning: A Deterministic Optimization Approach." International Journal of Bifurcation and Chaos 16, no. 7 (July 2006): 1929–50. http://dx.doi.org/10.1142/s0218127406015805.
Gülmez, Burak, and Sinem Kulluk. "Social Spider Algorithm for Training Artificial Neural Networks." International Journal of Business Analytics 6, no. 4 (October 2019): 32–49. http://dx.doi.org/10.4018/ijban.2019100103.
Daskin, Ammar. "A Quantum Implementation Model for Artificial Neural Networks." Quanta 7, no. 1 (February 20, 2018): 7. http://dx.doi.org/10.12743/quanta.v7i1.65.
Dissertations / Theses on the topic "Artificial neural networks; Learning algorithms"
Sannossian, Hermineh Y. "A study of artificial neural networks and their learning algorithms." Thesis, Loughborough University, 1992. https://dspace.lboro.ac.uk/2134/11194.
Ghosh, Ranadhir. "A Novel Hybrid Learning Algorithm For Artificial Neural Networks." Griffith University. School of Information Technology, 2003. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20030808.162355.
Chen, Hsinchun. "Machine Learning for Information Retrieval: Neural Networks, Symbolic Learning, and Genetic Algorithms." Wiley Periodicals, Inc, 1995. http://hdl.handle.net/10150/106427.
Full textInformation retrieval using probabilistic techniques has attracted significant attention on the part of researchers in information and computer science over the past few decades. In the 1980s, knowledge-based techniques also made an impressive contribution to “intelligent” information retrieval and indexing. More recently, information science researchers have turned to other newer artificial-intelligence- based inductive learning techniques including neural networks, symbolic learning, and genetic algorithms. These newer techniques, which are grounded on diverse paradigms, have provided great opportunities for researchers to enhance the information processing and retrieval capabilities of current information storage and retrieval systems. In this article, we first provide an overview of these newer techniques and their use in information science research. To familiarize readers with these techniques, we present three popular methods: the connectionist Hopfield network; the symbolic ID3/ID5R; and evolution- based genetic algorithms. We discuss their knowledge representations and algorithms in the context of information retrieval. Sample implementation and testing results from our own research are also provided for each technique. We believe these techniques are promising in their ability to analyze user queries, identify users’ information needs, and suggest alternatives for search. With proper user-system interactions, these methods can greatly complement the prevailing full-text, keywordbased, probabilistic, and knowledge-based techniques.
Bubie, Walter C. "Algorithm animation and its application to artificial neural network learning." Online version of thesis, 1991. http://hdl.handle.net/1850/11055.
Hofer, Daniel G. Sbarbaro. "Connectionist feedforward networks for control of nonlinear systems." Thesis, University of Glasgow, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.390248.
Khalid, Fahad. "Measure-based Learning Algorithms: An Analysis of Back-propagated Neural Networks." Thesis, Blekinge Tekniska Högskola, Avdelningen för interaktion och systemdesign, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4795.
Full textThe study is an investigation on the feasibility of using a generic inductive bias for backpropagation artificial neural networks, which could incorporate any one or a combination of problem specific performance metrics to be optimized. We have identified several limitations of both the standard error backpropagation mechanism as well the inherent gradient search approach. These limitations suggest exploration of methods other than backpropagation, as well use of global search methods instead of gradient search. Also, we emphasize the importance of taking the representational bias of the neural network in consideration, since only a combination of both procedural and representational bias can provide highly optimal solutions.
Rimer, Michael Edwin. "Improving Neural Network Classification Training." Diss., CLICK HERE for online access, 2007. http://contentdm.lib.byu.edu/ETD/image/etd2094.pdf.
Singh, Y., and M. Mars. "A pilot study to integrate HIV drug resistance gold standard interpretation algorithms using neural networks." Journal for New Generation Sciences, Vol 11, Issue 2: Central University of Technology, Free State, Bloemfontein, 2013. http://hdl.handle.net/11462/639.
There are several HIV drug resistance interpretation algorithms which produce different resistance measures even when applied to the same resistance profile. This discrepancy leads to confusion in the mind of the physician when choosing the best ARV therapy.
Ncube, Israel. "Stochastic approximation of artificial neural network-type learning algorithms, a dynamical systems approach." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/NQ60559.pdf.
Full textTopalli, Ayca Kumluca. "Hybrid Learning Algorithm For Intelligent Short-term Load Forecasting." Phd thesis, METU, 2003. http://etd.lib.metu.edu.tr/upload/627505/index.pdf.
Full textbut, new methods based on artificial intelligence emerged recently in literature and started to replace the old ones in the industry. In order to follow the latest developments and to have a modern system, it is aimed to make a research on STLF in Turkey, by neural networks. For this purpose, a method is proposed to forecast Turkey&rsquo
s total electric load one day in advance. A hybrid learning scheme that combines off-line learning with real-time forecasting is developed to make use of the available past data for adapting the weights and to further adjust these connections according to the changing conditions. It is also suggested to tune the step size iteratively for better accuracy. Since a single neural network model cannot cover all load types, data are clustered due to the differences in their characteristics. Apart from this, special days are extracted from the normal training sets and handled separately. In this way, a solution is proposed for all load types, including working days, weekends and special holidays. For the selection of input parameters, a technique based on principal component analysis is suggested. A traditional ARMA model is constructed for the same data as a benchmark and results are compared. Proposed method gives lower percent errors all the time, especially for holiday loads. The average error for year 2002 data is obtained as 1.60%.
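As a rough illustration of the hybrid pattern the abstract describes, off-line learning on historical data followed by real-time adaptation with an iteratively tuned step size, the sketch below fits a forecaster offline and then keeps adjusting its weights online as new observations arrive. The linear model, the grow-on-improvement/shrink-on-worsening step-size rule, and the toy load series are illustrative assumptions, not the thesis's actual scheme.

import numpy as np

# Rough sketch of the hybrid pattern described above: fit a forecaster
# off-line on historical data, then keep adapting its weights on-line as new
# observations arrive, tuning the step size iteratively. The linear model,
# the step-size rule, and the toy load series are illustrative assumptions,
# not the thesis's actual scheme.

def offline_fit(X, y):
    """Least-squares fit on the available historical data."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def online_adapt(w, X_stream, y_stream, lr=0.01, grow=1.05, shrink=0.5):
    """Per-sample gradient updates with an iteratively tuned step size."""
    prev_sq_err = np.inf
    for x, y in zip(X_stream, y_stream):
        err = y - x @ w
        w = w + lr * err * x          # one stochastic-gradient step
        if err ** 2 < prev_sq_err:    # forecast improved: grow the step a little
            lr *= grow
        else:                         # forecast worsened: shrink the step
            lr *= shrink
        prev_sq_err = err ** 2
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(400)
    load = 100 + 10 * np.sin(2 * np.pi * t / 24) + rng.normal(scale=1.0, size=t.size)
    X = np.column_stack([np.ones(t.size), np.sin(2 * np.pi * t / 24)])
    w = offline_fit(X[:300], load[:300])      # off-line learning on past data
    w = online_adapt(w, X[300:], load[300:])  # real-time adaptation afterwards
    print("adapted weights:", w)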
Books on the topic "Artificial neural networks; Learning algorithms"
Venetsanopoulos, A. N., ed. Artificial neural networks: Learning algorithms, performance evaluation, and applications. Boston: Kluwer Academic, 1993.
Vazirani, Umesh Virkumar, ed. An introduction to computational learning theory. Cambridge, Mass: MIT Press, 1994.
Duch, Włodzisław, Péter Érdi, Francesco Masulli, and Günther Palm, eds. Artificial Neural Networks and Machine Learning – ICANN 2012: 22nd International Conference on Artificial Neural Networks, Lausanne, Switzerland, September 11-14, 2012, Proceedings, Part I. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.
Duch, Włodzisław, Péter Érdi, Francesco Masulli, and Günther Palm, eds. Artificial Neural Networks and Machine Learning – ICANN 2012: 22nd International Conference on Artificial Neural Networks, Lausanne, Switzerland, September 11-14, 2012, Proceedings, Part II. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.
Kolehmainen, Mikko. Adaptive and Natural Computing Algorithms: 9th International Conference, ICANNGA 2009, Kuopio, Finland, April 23-25, 2009, Revised Selected Papers. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009.
International Conference on Artificial Neural Networks and Genetic Algorithms (2007, Warsaw, Poland). Adaptive and natural computing algorithms: 8th international conference, ICANNGA 2007, Warsaw, Poland, April 11-14, 2007: proceedings. Berlin: Springer, 2007.
Mars, P. Learning algorithms: Theory and applications in signal processing, control, and communications. Boca Raton: CRC Press, 1996.
Myers, Catherine E. Delay learning in artificial neural networks. London: Chapman & Hall, 1992.
Book chapters on the topic "Artificial neural networks; Learning algorithms"
Karayiannis, N. B., and A. N. Venetsanopoulos. "Fast Learning Algorithms for Neural Networks." In Artificial Neural Networks, 141–93. Boston, MA: Springer US, 1993. http://dx.doi.org/10.1007/978-1-4757-4547-4_4.
Karayiannis, N. B., and A. N. Venetsanopoulos. "ELEANNE: Efficient LEarning Algorithms for Neural NEtworks." In Artificial Neural Networks, 87–139. Boston, MA: Springer US, 1993. http://dx.doi.org/10.1007/978-1-4757-4547-4_3.
Karayiannis, N. B., and A. N. Venetsanopoulos. "ALADIN: Algorithms for Learning and Architecture DetermINation." In Artificial Neural Networks, 195–218. Boston, MA: Springer US, 1993. http://dx.doi.org/10.1007/978-1-4757-4547-4_5.
Ayyadevara, V. Kishore. "Artificial Neural Network." In Pro Machine Learning Algorithms, 135–65. Berkeley, CA: Apress, 2018. http://dx.doi.org/10.1007/978-1-4842-3564-5_7.
Davoian, Kristina, and Wolfram-M. Lippe. "Mixing Different Search Biases in Evolutionary Learning Algorithms." In Artificial Neural Networks – ICANN 2009, 111–20. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04274-4_12.
Giudici, Matteo, Filippo Queirolo, and Maurizio Valle. "Stochastic Supervised Learning Algorithms with Local and Adaptive Learning Rate for Recognising Hand-Written Characters." In Artificial Neural Networks — ICANN 2002, 619–24. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-46084-5_101.
Neruda, R. "Canonical Genetic Learning of RBF Networks Is Faster." In Artificial Neural Nets and Genetic Algorithms, 350–53. Vienna: Springer Vienna, 1998. http://dx.doi.org/10.1007/978-3-7091-6492-1_77.
Neruda, Roman. "Functional Equivalence and Genetic Learning of RBF Networks." In Artificial Neural Nets and Genetic Algorithms, 53–56. Vienna: Springer Vienna, 1995. http://dx.doi.org/10.1007/978-3-7091-7535-4_16.
Gas, B., and R. Natowicz. "Unsupervised Learning of Temporal Sequences by Neural Networks." In Artificial Neural Nets and Genetic Algorithms, 253–56. Vienna: Springer Vienna, 1995. http://dx.doi.org/10.1007/978-3-7091-7535-4_67.
Bermak, A., and H. Poulard. "On VLSI Implementation of Multiple Output Sequential Learning Networks." In Artificial Neural Nets and Genetic Algorithms, 93–97. Vienna: Springer Vienna, 1998. http://dx.doi.org/10.1007/978-3-7091-6492-1_20.
Conference papers on the topic "Artificial neural networks; Learning algorithms"
McNeill, D. K. "Competitive learning algorithms in adaptive educational toys." In Fifth International Conference on Artificial Neural Networks. IEE, 1997. http://dx.doi.org/10.1049/cp:19970727.
Ida, Yasutoshi, Yasuhiro Fujiwara, and Sotetsu Iwamura. "Adaptive Learning Rate via Covariance Matrix Based Preconditioning for Deep Neural Networks." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/267.
Qi, Yu, Jiangrong Shen, Yueming Wang, Huajin Tang, Hang Yu, Zhaohui Wu, and Gang Pan. "Jointly Learning Network Connections and Link Weights in Spiking Neural Networks." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/221.
Eggensperger, Katharina, Marius Lindauer, and Frank Hutter. "Neural Networks for Predicting Algorithm Runtime Distributions." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/200.
Fang, Haowen, Amar Shrestha, Ziyi Zhao, and Qinru Qiu. "Exploiting Neuron and Synapse Filter Dynamics in Spatial Temporal Learning of Deep Spiking Neural Network." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/388.
He, Yu, Jianxin Li, Yangqiu Song, Mutian He, and Hao Peng. "Time-evolving Text Classification with Deep Neural Networks." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/310.
Prechelt, L. "A quantitative study of experimental neural network learning algorithm evaluation practices." In 4th International Conference on Artificial Neural Networks. IEE, 1995. http://dx.doi.org/10.1049/cp:19950558.
Banerjee, Amit, Issam Abu-Mahfouz, and AHM Esfakur Rahman. "Multi-Objective Optimization of Parameters for Milling Using Evolutionary Algorithms and Artificial Neural Networks." In ASME 2019 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/imece2019-11438.
Gu, Pengjie, Rong Xiao, Gang Pan, and Huajin Tang. "STCA: Spatio-Temporal Credit Assignment with Delayed Feedback in Deep Spiking Neural Networks." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/189.
Zhang, Yikai, Hui Qu, Chao Chen, and Dimitris Metaxas. "Taming the Noisy Gradient: Train Deep Neural Networks with Small Batch Sizes." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/604.
Reports on the topic "Artificial neural networks; Learning algorithms"
Powell, James Estes, Jr. Learning Memento archive routing with Character-based Artificial Neural Networks. Office of Scientific and Technical Information (OSTI), October 2018. http://dx.doi.org/10.2172/1477616.
Arhin, Stephen, Babin Manandhar, Hamdiat Baba Adam, and Adam Gatiba. Predicting Bus Travel Times in Washington, DC Using Artificial Neural Networks (ANNs). Mineta Transportation Institute, April 2021. http://dx.doi.org/10.31979/mti.2021.1943.
Hart, Carl R., D. Keith Wilson, Chris L. Pettit, and Edward T. Nykaza. Machine-Learning of Long-Range Sound Propagation Through Simulated Atmospheric Turbulence. U.S. Army Engineer Research and Development Center, July 2021. http://dx.doi.org/10.21079/11681/41182.
Idakwo, Gabriel, Sundar Thangapandian, Joseph Luttrell, Zhaoxian Zhou, Chaoyang Zhang, and Ping Gong. Deep learning-based structure-activity relationship modeling for multi-category toxicity classification: a case study of 10K Tox21 chemicals with high-throughput cell-based androgen receptor bioassay data. Engineer Research and Development Center (U.S.), July 2021. http://dx.doi.org/10.21079/11681/41302.