
Journal articles on the topic 'Artificial neural networks; Learning algorithms'



Consult the top 50 journal articles for your research on the topic 'Artificial neural networks; Learning algorithms.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Gumus, Fatma, and Derya Yiltas-Kaplan. "Congestion Prediction System With Artificial Neural Networks." International Journal of Interdisciplinary Telecommunications and Networking 12, no. 3 (July 2020): 28–43. http://dx.doi.org/10.4018/ijitn.2020070103.

Full text
Abstract:
Software Defined Networking (SDN) is a programmable network architecture that provides innovative solutions to the problems of traditional networks. Congestion control is still uncharted territory for this technology. In this work, a congestion prediction scheme has been developed using neural networks. The Minimum Redundancy Maximum Relevance (mRMR) feature selection algorithm was performed on data collected from an OMNET++ simulation. The novelty of this study also covers the application of mRMR to an SDN congestion prediction problem. After evaluating the relevance scores, the two highest-ranking features were used. In the learning stage, Nonlinear Autoregressive Exogenous Neural Network (NARX), Nonlinear Autoregressive Neural Network, and Nonlinear Feedforward Neural Network algorithms were executed. To the best of the authors' knowledge, these algorithms had not been used before in SDNs. The experiments showed that NARX was the best prediction algorithm. This machine learning approach can be easily integrated into different topologies and application areas.
APA, Harvard, Vancouver, ISO, and other styles
2

Javed, Abbas, Hadi Larijani, Ali Ahmadinia, and Rohinton Emmanuel. "RANDOM NEURAL NETWORK LEARNING HEURISTICS." Probability in the Engineering and Informational Sciences 31, no. 4 (May 22, 2017): 436–56. http://dx.doi.org/10.1017/s0269964817000201.

Full text
Abstract:
The random neural network (RNN) is a probabilistic, queueing-theory-based model for artificial neural networks, and it requires the use of optimization algorithms for training. Commonly used gradient descent learning algorithms may become trapped in local minima; evolutionary algorithms can be used to avoid them. Other techniques such as artificial bee colony (ABC), particle swarm optimization (PSO), and differential evolution algorithms also perform well in finding the global minimum, but they converge slowly. The sequential quadratic programming (SQP) optimization algorithm can find the optimum neural network weights, but can also get stuck in local minima. We propose to overcome the shortcomings of these various approaches by hybridizing ABC/PSO with SQP. The resulting algorithm is shown to compare favorably with other known techniques for training the RNN. The results show that hybrid ABC learning with SQP outperforms the other training algorithms in terms of mean-squared error and normalized root-mean-squared error.
APA, Harvard, Vancouver, ISO, and other styles
3

Başeski, Emre. "Heliport Detection Using Artificial Neural Networks." Photogrammetric Engineering & Remote Sensing 86, no. 9 (September 1, 2020): 541–46. http://dx.doi.org/10.14358/pers.86.9.541.

Full text
Abstract:
Automatic image exploitation is a critical technology for quick content analysis of high-resolution remote sensing images. The presence of a heliport in an image usually implies an important facility, such as a military installation. Therefore, detection of heliports can reveal critical information about the content of an image. In this article, two learning-based algorithms are presented that make use of artificial neural networks to detect H-shaped, light-colored heliports. The first algorithm is based on shape analysis of the heliport candidate segments using classical artificial neural networks. The second algorithm uses deep-learning techniques. While deep learning can solve difficult problems successfully, classical learning approaches can be tuned easily to obtain fast and reasonable results. Therefore, although the main objective of this article is heliport detection, it also compares a deep-learning-based approach with a classical learning-based approach and discusses the advantages and disadvantages of both techniques.
APA, Harvard, Vancouver, ISO, and other styles
4

YAO, XIN. "EVOLUTIONARY ARTIFICIAL NEURAL NETWORKS." International Journal of Neural Systems 04, no. 03 (September 1993): 203–22. http://dx.doi.org/10.1142/s0129065793000171.

Full text
Abstract:
Evolutionary artificial neural networks (EANNs) can be considered as a combination of artificial neural networks (ANNs) and evolutionary search procedures such as genetic algorithms (GAs). This paper distinguishes among three levels of evolution in EANNs, i.e. the evolution of connection weights, architectures and learning rules. It first reviews each kind of evolution in detail and then analyses major issues related to each kind of evolution. It is shown in the paper that although there is a lot of work on the evolution of connection weights and architectures, research on the evolution of learning rules is still in its early stages. Interactions among different levels of evolution are far from being understood. It is argued in the paper that the evolution of learning rules and its interactions with other levels of evolution play a vital role in EANNs.
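To make the first of Yao's three levels concrete, evolving connection weights for a fixed topology can be sketched as a simple genetic search. Everything below (the 2-2-1 topology, the XOR task, the population size, and the elitist Gaussian-mutation scheme) is our own illustrative choice, not taken from the paper:

```python
import random
import math

random.seed(0)

XOR = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
N_WEIGHTS = 9  # 2-2-1 net: 2 hidden neurons (3 params each) + 1 output neuron (3 params)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(w, x):
    """Evaluate the fixed 2-2-1 network on input x with weight vector w."""
    h1 = sigmoid(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = sigmoid(w[3] * x[0] + w[4] * x[1] + w[5])
    return sigmoid(w[6] * h1 + w[7] * h2 + w[8])

def mse(w):
    return sum((forward(w, x) - t) ** 2 for x, t in XOR) / len(XOR)

def evolve(pop_size=50, generations=300, sigma=0.5):
    """Evolve weight vectors: keep the best fifth, refill with mutated copies."""
    pop = [[random.uniform(-2, 2) for _ in range(N_WEIGHTS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=mse)
        elite = pop[: pop_size // 5]                    # elitist selection
        children = []
        while len(elite) + len(children) < pop_size:
            parent = random.choice(elite)
            children.append([g + random.gauss(0, sigma) for g in parent])  # Gaussian mutation
        pop = elite + children
    return min(pop, key=mse)

best = evolve()
```

Because elitism never discards the best individual, the error is non-increasing across generations; after a few hundred generations the evolved weights solve XOR without any gradient information.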
APA, Harvard, Vancouver, ISO, and other styles
5

Sporea, Ioana, and André Grüning. "Supervised Learning in Multilayer Spiking Neural Networks." Neural Computation 25, no. 2 (February 2013): 473–509. http://dx.doi.org/10.1162/neco_a_00396.

Full text
Abstract:
We introduce a supervised learning algorithm for multilayer spiking neural networks. The algorithm overcomes a limitation of existing learning algorithms: it can be applied to neurons firing multiple spikes in artificial neural networks with hidden layers. It can also, in principle, be used with any linearizable neuron model and allows different coding schemes of spike train patterns. The algorithm is applied successfully to classic linearly nonseparable benchmarks such as the XOR problem and the Iris data set, as well as to more complex classification and mapping problems. The algorithm has been successfully tested in the presence of noise, requires smaller networks than reservoir computing, and results in faster convergence than existing algorithms for similar tasks such as SpikeProp.
APA, Harvard, Vancouver, ISO, and other styles
6

Shah, Habib, Rozaida Ghazali, Nazri Mohd Nawi, and Mustafa Mat Deris. "G-HABC Algorithm for Training Artificial Neural Networks." International Journal of Applied Metaheuristic Computing 3, no. 3 (July 2012): 1–19. http://dx.doi.org/10.4018/jamc.2012070101.

Full text
Abstract:
Learning problems for Neural Networks (NNs) have been widely explored over the past two decades. Researchers have focused more on population-based algorithms because of their nature-inspired processing. Population-based algorithms include Ant Colony Optimization (ACO) and Artificial Bee Colony (ABC); recently, the Hybrid Ant Bee Colony (HABC) algorithm provided an easy way to train NNs. These social-behavior-based techniques are mostly used for finding the best weight values and for escaping local minima in NN learning. Typically, an NN trained by the traditional approach, namely the Backpropagation (BP) algorithm, has difficulties such as trapping in local minima and slow convergence. A new method, named the Global Hybrid Ant Bee Colony (G-HABC) algorithm, which can overcome the gaps in BP, is used to train the NN for a Boolean function classification task. The simulation results of the NN trained with the proposed hybrid method were compared with those of Levenberg-Marquardt (LM) and ordinary ABC. From the results, the proposed G-HABC algorithm was shown to provide better learning performance for NNs, with reduced CPU time and higher success rates.
APA, Harvard, Vancouver, ISO, and other styles
7

Ding, Shuo, and Qing Hui Wu. "A MATLAB-Based Study on Approximation Performances of Improved Algorithms of Typical BP Neural Networks." Applied Mechanics and Materials 313-314 (March 2013): 1353–56. http://dx.doi.org/10.4028/www.scientific.net/amm.313-314.1353.

Full text
Abstract:
BP neural networks are widely used and their algorithms are various. This paper studies the advantages and disadvantages of the improved algorithms of five typical BP networks, based on artificial neural network theory. First, the learning processes of the improved algorithms of the five typical BP networks are elaborated mathematically. Then a specific network is designed on the MATLAB 7.0 platform to conduct an approximation test for a given nonlinear function. Finally, a comparison is made between the training speeds and memory consumption of the five BP networks. The simulation results indicate that for small- and medium-scale networks, the LM optimization algorithm has the best approximation ability, followed by the quasi-Newton algorithm, the conjugate gradient method, the resilient BP algorithm, and the adaptive learning rate algorithm.
Keywords: BP neural network; improved algorithm; function approximation; MATLAB
APA, Harvard, Vancouver, ISO, and other styles
8

MAGOULAS, GEORGE D., and MICHAEL N. VRAHATIS. "ADAPTIVE ALGORITHMS FOR NEURAL NETWORK SUPERVISED LEARNING: A DETERMINISTIC OPTIMIZATION APPROACH." International Journal of Bifurcation and Chaos 16, no. 07 (July 2006): 1929–50. http://dx.doi.org/10.1142/s0218127406015805.

Full text
Abstract:
Networks of neurons can perform computations that even modern computers find very difficult to simulate. Most existing artificial neurons and artificial neural networks are considered biologically unrealistic; nevertheless, the practical success of the backpropagation algorithm and the powerful capabilities of feedforward neural networks have made neural computing very popular in several application areas. A challenging issue in this context is learning internal representations by adjusting the weights of the network connections. To this end, several first-order and second-order algorithms have been proposed in the literature. This paper provides an overview of approaches to backpropagation training, emphasizing first-order adaptive learning algorithms that build on the theory of nonlinear optimization, and proposes a framework for their analysis in the context of deterministic optimization.
APA, Harvard, Vancouver, ISO, and other styles
9

Gülmez, Burak, and Sinem Kulluk. "Social Spider Algorithm for Training Artificial Neural Networks." International Journal of Business Analytics 6, no. 4 (October 2019): 32–49. http://dx.doi.org/10.4018/ijban.2019100103.

Full text
Abstract:
Artificial neural networks (ANNs) are one of the most widely used techniques for generalization, classification, and optimization. ANNs are inspired by the human brain and perform some abilities automatically, like learning new information and making new inferences. Back-propagation (BP) is the most common algorithm for training ANNs, but BP is slow and can be trapped in local optima. Meta-heuristic algorithms overcome these drawbacks and are frequently used in training ANNs. In this study, a new-generation meta-heuristic, the Social Spider (SS) algorithm, is adapted for training ANNs. The performance of the algorithm is compared with conventional and meta-heuristic algorithms on classification benchmark problems from the literature. The algorithm is also applied to real-world data to predict the production of a factory in Kayseri and compared with some regression-based algorithms and ANN models. The results and comparisons on classification benchmark datasets show that the SS algorithm is competitive for training ANNs. On the real-world production dataset, the SS algorithm outperformed all compared algorithms. The experimental studies show the SS algorithm to be highly capable of training ANNs, usable for both classification and regression.
APA, Harvard, Vancouver, ISO, and other styles
10

Daskin, Ammar. "A Quantum Implementation Model for Artificial Neural Networks." Quanta 7, no. 1 (February 20, 2018): 7. http://dx.doi.org/10.12743/quanta.v7i1.65.

Full text
Abstract:
The learning process for multilayered neural networks with many nodes makes heavy demands on computational resources. In some neural network models, the learning formulas, such as the Widrow–Hoff formula, do not change the eigenvectors of the weight matrix while flattening the eigenvalues. In the limit, these iterative formulas converge to terms formed by the principal components of the weight matrix, namely, the eigenvectors corresponding to the non-zero eigenvalues. In quantum computing, the phase estimation algorithm is known to provide speedups over conventional algorithms for eigenvalue-related problems. Combining quantum amplitude amplification with the phase estimation algorithm, a quantum implementation model for artificial neural networks using the Widrow–Hoff learning rule is presented. The complexity of the model is found to be linear in the size of the weight matrix. This provides a quadratic improvement over the classical algorithms. Quanta 2018; 7: 7–18.
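For reference, the Widrow–Hoff (least-mean-squares) rule the abstract refers to updates the weight vector, in its standard textbook form (not taken from the paper), as

$$ w_{t+1} = w_t + \eta\,\bigl(d_t - w_t^{\top} x_t\bigr)\,x_t $$

where $x_t$ is the input, $d_t$ the desired output, and $\eta$ the learning rate. Iterated over a fixed data set, the expected update acts on the input correlation matrix only through its eigenvalues, which is why the eigenvectors (the principal components) are preserved while the eigenvalue spectrum is flattened.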
APA, Harvard, Vancouver, ISO, and other styles
11

Khan, Dr Rafiqul Zaman, and Haider Allamy. "Training Algorithms for Supervised Machine Learning: Comparative Study." INTERNATIONAL JOURNAL OF MANAGEMENT & INFORMATION TECHNOLOGY 4, no. 3 (July 25, 2013): 354–60. http://dx.doi.org/10.24297/ijmit.v4i3.773.

Full text
Abstract:
Supervised machine learning is an important approach to training artificial neural networks; therefore a demand has arisen for selected supervised learning algorithms, such as the back propagation algorithm, the decision tree learning algorithm, and the perceptron algorithm, to perform the learning stage of artificial neural networks. In this paper, a comparative study is presented that evaluates the performance of these algorithms on a range of specific parameters, such as speed of learning, overfitting avoidance, and accuracy. Besides these parameters, we include their benefits and limitations to unveil their hidden features and provide more detail on their performance. We found the decision tree algorithm to be the best of the compared algorithms, solving complex problems with remarkable speed.
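As a reminder of the simplest of the three algorithms compared, the classic perceptron update can be sketched as follows (a textbook sketch, not the authors' code; the AND task and all constants are illustrative):

```python
def train_perceptron(samples, epochs=20, lr=1.0):
    """Train a binary perceptron; samples are (features, label) pairs with label in {0, 1}."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                      # 0 if correct, +/-1 if wrong
            if err:                             # update only on mistakes
                w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                b += lr * err
    return w, b

# Learns the linearly separable AND function
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(data)
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this mistake-driven rule reaches a separating hyperplane in finitely many updates; on non-separable data it would cycle forever, which is one of the limitations such comparative studies weigh against back propagation and decision trees.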
APA, Harvard, Vancouver, ISO, and other styles
12

Zhang, Dai Yuan. "Shape Control Learning Algorithm for Neural Networks." Applied Mechanics and Materials 347-350 (August 2013): 2270–74. http://dx.doi.org/10.4028/www.scientific.net/amm.347-350.2270.

Full text
Abstract:
A new kind of shape control learning algorithm (SCLA) for training neural networks is proposed. We use the rational cubic spline (with quadratic denominator) to implement a new neural system for shape control, and construct a new kind of artificial neural network based on given patterns. The shape can be controlled by shape parameters, which differs greatly from known algorithms for training neural networks. The numerical experiments indicate that the new method proposed in this paper gives good results.
APA, Harvard, Vancouver, ISO, and other styles
13

Matveeva, Nataliya. "ARTIFICIAL NEURAL NETWORKS IN MEDICAL DIAGNOSIS." System technologies 2, no. 133 (March 1, 2021): 33–41. http://dx.doi.org/10.34185/1562-9945-2-133-2021-05.

Full text
Abstract:
Artificial neural networks are finding many uses in medical diagnosis applications. The article examines cases of renopathy in type 2 diabetes. The data are symptoms of the disease. A multilayer perceptron (MLP) network is used as a classifier to distinguish between a sick and a healthy person. The results of applying artificial neural networks to diagnose renopathy based on selected symptoms show the network's ability to recognize diseases corresponding to human symptoms. Various parameters, structures, and learning algorithms of neural networks were tested in the modeling process.
APA, Harvard, Vancouver, ISO, and other styles
14

Faris, Hossam, Ibrahim Aljarah, Nailah Al-Madi, and Seyedali Mirjalili. "Optimizing the Learning Process of Feedforward Neural Networks Using Lightning Search Algorithm." International Journal on Artificial Intelligence Tools 25, no. 06 (October 27, 2016): 1650033. http://dx.doi.org/10.1142/s0218213016500330.

Full text
Abstract:
Evolutionary Neural Networks are proven to be beneficial in solving challenging datasets, mainly due to their high local-optima avoidance. Stochastic operators in such techniques reduce the probability of stagnation in local solutions and help them supersede conventional training algorithms such as Back Propagation (BP) and Levenberg-Marquardt (LM). According to the No-Free-Lunch (NFL) theorem, however, there is no single optimization technique for solving all optimization problems. This means that a Neural Network trained by a new algorithm has the potential to solve a new set of problems or outperform current techniques on existing problems. This motivates our attempt to investigate the efficiency of the recently proposed evolutionary algorithm called the Lightning Search Algorithm (LSA) in training Neural Networks for the first time in the literature. The LSA-based trainer is benchmarked on 16 popular medical diagnosis problems and compared to BP, LM, and 6 other evolutionary trainers. The quantitative and qualitative results show that the LSA algorithm provides not only better avoidance of local solutions but also faster convergence compared to the other algorithms employed. In addition, the statistical tests conducted show that the LSA-based trainer is significantly superior to the current algorithms on the majority of datasets.
APA, Harvard, Vancouver, ISO, and other styles
15

Azzam, Jamal Abdul Fatah. "Learning Of Artificial Neural Networks by Genetic Algorithms (Dept.E)." MEJ. Mansoura Engineering Journal 35, no. 1 (November 17, 2020): 21–33. http://dx.doi.org/10.21608/bfemu.2020.123574.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Pehlevan, Cengiz, and Dmitri B. Chklovskii. "Neuroscience-Inspired Online Unsupervised Learning Algorithms: Artificial Neural Networks." IEEE Signal Processing Magazine 36, no. 6 (November 2019): 88–96. http://dx.doi.org/10.1109/msp.2019.2933846.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Zorins, Aleksejs, and Peteris Grabusts. "Artificial Neural Networks and Human Brain: Survey of Improvement Possibilities of Learning." Environment. Technology. Resources. Proceedings of the International Scientific and Practical Conference 3 (June 16, 2015): 228. http://dx.doi.org/10.17770/etr2015vol3.165.

Full text
Abstract:
There are numerous applications of Artificial Neural Networks (ANNs) at the present time, and there are different learning algorithms, topologies, hybrid methods, etc. It is strongly believed that ANNs are built using the functioning principles of the human brain, but the ANN is still a very primitive and tricky tool for real problem solving. In recent years modern neurophysiology has advanced greatly in understanding human brain function and structure; however, this knowledge is rarely applied to real ANN learning algorithms. Each learning algorithm and each network topology must be carefully developed to solve a more or less complex real-life problem. One may say that almost every serious application requires its own network topology, algorithm, and data pre-processing. This article presents a survey of several ways to improve ANN learning according to human brain structure and functioning, with particular attention to one example of this concept, neuroplasticity: the automatic adaptation of ANN topology to the problem domain.
APA, Harvard, Vancouver, ISO, and other styles
18

Plawiak, Pawel, and Ryszard Tadeusiewicz. "Approximation of phenol concentration using novel hybrid computational intelligence methods." International Journal of Applied Mathematics and Computer Science 24, no. 1 (March 1, 2014): 165–81. http://dx.doi.org/10.2478/amcs-2014-0013.

Full text
Abstract:
This paper presents two innovative evolutionary-neural systems based on feed-forward and recurrent neural networks used for quantitative analysis. These systems have been applied to the approximation of phenol concentration. Their performance was compared against conventional methods of artificial intelligence (artificial neural networks, fuzzy logic, and genetic algorithms). The proposed systems are a combination of data preprocessing methods, genetic algorithms, and the Levenberg-Marquardt (LM) algorithm used for learning feed-forward and recurrent neural networks. The initial weights and biases of the neural networks, chosen by means of a genetic algorithm, are then tuned with the LM algorithm. The evaluation is made on the basis of accuracy and complexity criteria. The main advantage of the proposed systems is the elimination of random selection of the network weights and biases, resulting in increased efficiency of the systems.
APA, Harvard, Vancouver, ISO, and other styles
19

Yang, Xu, Songgaojun Deng, Mengyao Ji, Jinfeng Zhao, and Wenhao Zheng. "Neural Network Evolving Algorithm Based on the Triplet Codon Encoding Method." Genes 9, no. 12 (December 13, 2018): 626. http://dx.doi.org/10.3390/genes9120626.

Full text
Abstract:
Artificial intelligence research is receiving more and more attention nowadays. Neural Evolution (NE) is a very important branch of AI, which harnesses the power of evolutionary algorithms to generate Artificial Neural Networks (ANNs). How to exploit the evolution of both network topology and weights in applications of Artificial Neural Networks is the main problem in the field of NE. In this paper, a novel DNA encoding method based on the triplet codon is proposed. Additionally, a NE algorithm based on this encoding method, the Triplet Codon Encoding Neural Network Evolving Algorithm (TCENNE), is presented to verify the rationality and validity of the coding design. The results show that TCENNE is very effective and more robust than other NE algorithms, due to the coding design. It is also shown that it can realize the co-evolution of network topology and weights and outperform other neural evolution systems in challenging reinforcement learning tasks.
APA, Harvard, Vancouver, ISO, and other styles
20

Garro, Beatriz A., and Roberto A. Vázquez. "Designing Artificial Neural Networks Using Particle Swarm Optimization Algorithms." Computational Intelligence and Neuroscience 2015 (2015): 1–20. http://dx.doi.org/10.1155/2015/369298.

Full text
Abstract:
Artificial Neural Network (ANN) design is a complex task because its performance depends on the architecture, the selected transfer function, and the learning algorithm used to train the set of synaptic weights. In this paper we present a methodology that automatically designs an ANN using particle swarm optimization algorithms such as Basic Particle Swarm Optimization (PSO), Second Generation of Particle Swarm Optimization (SGPSO), and a New Model of PSO called NMPSO. The aim of these algorithms is to evolve, at the same time, the three principal components of an ANN: the set of synaptic weights, the connections or architecture, and the transfer functions for each neuron. Eight different fitness functions were proposed to evaluate the fitness of each solution and find the best design. These functions are based on the mean square error (MSE) and the classification error (CER) and implement a strategy to avoid overtraining and to reduce the number of connections in the ANN. In addition, the ANN designed with the proposed methodology is compared with those designed manually using the well-known Back-Propagation and Levenberg-Marquardt Learning Algorithms. Finally, the accuracy of the method is tested with different nonlinear pattern classification problems.
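A minimal flavor of what evolving synaptic weights with PSO means in practice is sketched below. Only basic PSO is shown; the architecture and transfer-function evolution from the paper are omitted, and the single-neuron fitness task and all constants are our own illustrative assumptions:

```python
import random

random.seed(1)

def pso_minimize(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Basic PSO: each particle is a candidate weight vector; f is the training error."""
    pos = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal bests
    gbest = min(pbest, key=f)                   # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=f)
    return gbest

# Fitness: MSE of a single linear neuron y = w0*x + w1 fitting the target y = 2x + 1
data = [(x, 2 * x + 1) for x in range(-5, 6)]
def train_error(wts):
    return sum((wts[0] * x + wts[1] - y) ** 2 for x, y in data) / len(data)

best = pso_minimize(train_error, dim=2)
```

The same loop scales to a full network simply by letting the particle dimension equal the number of synaptic weights and making `f` the network's MSE, which is the core of the "evolve the set of synaptic weights" component the abstract describes.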
APA, Harvard, Vancouver, ISO, and other styles
21

Thakur, Amey. "Fundamentals of Neural Networks." International Journal for Research in Applied Science and Engineering Technology 9, no. VIII (August 15, 2021): 407–26. http://dx.doi.org/10.22214/ijraset.2021.37362.

Full text
Abstract:
The purpose of this study is to familiarise the reader with the foundations of neural networks. Artificial Neural Networks (ANNs) are algorithm-based systems that are modelled after Biological Neural Networks (BNNs). Neural networks are an effort to use the human brain's information processing skills to address challenging real-world AI issues. The evolution of neural networks and their significance are briefly explored. ANNs and BNNs are contrasted, and their qualities, benefits, and disadvantages are discussed. The drawbacks of the perceptron model and their improvement by the sigmoid neuron and ReLU neuron are briefly discussed. In addition, we give a bird's-eye view of the different Neural Network models. We study neural networks (NNs) and highlight the different learning approaches and algorithms used in Machine Learning and Deep Learning. We also discuss different types of NNs and their applications. A brief introduction to Neuro-Fuzzy and its applications with a comprehensive review of NN technological advances is provided.
APA, Harvard, Vancouver, ISO, and other styles
22

Amali, D. Geraldine Bessie, and Dinakaran M. "A Review of Heuristic Global Optimization Based Artificial Neural Network Training Approahes." IAES International Journal of Artificial Intelligence (IJ-AI) 6, no. 1 (March 1, 2017): 26. http://dx.doi.org/10.11591/ijai.v6.i1.pp26-32.

Full text
Abstract:
Artificial Neural Networks have earned popularity in recent years because of their ability to approximate nonlinear functions. Training a neural network involves minimizing the mean square error between the target and the network output. The error surface is nonconvex and highly multimodal. Finding the minimum of a multimodal function is an NP-complete problem and cannot be solved exactly. Thus the application to neural network training of heuristic global optimization algorithms that compute a good global minimum is of interest. This paper reviews the various heuristic global optimization algorithms used for training feedforward neural networks and recurrent neural networks. The training algorithms are compared in terms of the learning rate, convergence speed, and accuracy of the output produced by the neural network. The paper concludes by suggesting directions for novel ANN training algorithms based on recent advances in global optimization.
APA, Harvard, Vancouver, ISO, and other styles
23

Katerynych, L., M. Veres, and E. Safarov. "Neural networks’ learning process acceleration." PROBLEMS IN PROGRAMMING, no. 2-3 (September 2020): 313–21. http://dx.doi.org/10.15407/pp2020.02-03.313.

Full text
Abstract:
This study is devoted to evaluating the training process of a parallel system in the form of an artificial neural network, which is built using a genetic algorithm. The methods used to achieve this goal are computer simulation of a neural network on multi-core CPUs and a genetic algorithm for finding the weights of an artificial neural network. The performance of the sequential and parallel training processes of the artificial neural network is compared.
APA, Harvard, Vancouver, ISO, and other styles
24

Chauhan, Seema, and R. K. Shrivastava. "Reference evapotranspiration forecasting using different artificial neural networks algorithms." Canadian Journal of Civil Engineering 36, no. 9 (September 2009): 1491–505. http://dx.doi.org/10.1139/l09-074.

Full text
Abstract:
The present study aims to apply artificial neural networks (ANNs) to reference evapotranspiration (ETo) prediction. Three different feed-forward artificial neural network (ANN) models, each using varied input combinations of previous months' ETo, have been trained and tested. The output of the network was the one-month-ahead ETo. The networks learned to forecast one-month-ahead ETo for the Mahanadi reservoir project area using three learning methods, namely the quasi-Newton algorithm, the Levenberg–Marquardt algorithm, and backpropagation with an adaptive learning rate. The training results were compared with each other, and performance evaluations were done on untrained data. The performance measures were the standard error of estimates (SEE), the raw standard error of estimates (RSEE), and model efficiency. The best ANN architecture for prediction of ETo was obtained for the Mahanadi reservoir project area. The monthly reference evapotranspiration data were estimated by the Penman–Monteith method and used for training and testing of the ANN models. The ANN predictions were further compared with those obtained using the statistical multiple regression technique. Based on the results obtained, the ANN model with a 3-9-1 architecture (three, nine, and one neuron(s) in the input, hidden, and output layers, respectively) trained using the quasi-Newton algorithm was found to be the best amongst all the models, with minimum SEE and RSEE of 0.45 and 0.45 mm/d respectively and a maximum model efficiency of 93%. It is concluded that ANNs can be used to predict ETo.
APA, Harvard, Vancouver, ISO, and other styles
25

Chiueh, T. D., and R. M. Goodman. "Learning algorithms for neural networks with ternary weights." Neural Networks 1 (January 1988): 166. http://dx.doi.org/10.1016/0893-6080(88)90203-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Zhou, Gang, Yicheng Ji, Xiding Chen, and Fangfang Zhang. "Artificial Neural Networks and the Mass Appraisal of Real Estate." International Journal of Online Engineering (iJOE) 14, no. 03 (March 30, 2018): 180. http://dx.doi.org/10.3991/ijoe.v14i03.8420.

Full text
Abstract:
With the rapid development of computing, artificial intelligence, and big data technology, artificial neural networks have become one of the most powerful machine learning algorithms. In practice, most applications of artificial neural networks use the back propagation neural network and its variations. Beyond the back propagation neural network, various neural networks have been developed to improve on the performance of standard models. Though neural networks are a well-known method in real estate research, there is enormous space for future work to enhance their function. Some scholars combine genetic algorithms, geospatial information, support vector machine models, or particle swarm optimization with artificial neural networks to appraise real estate, which improves on the existing appraisal technology. The mass appraisal of real estate in this paper includes real estate valuation in transactions and tax base valuation for real estate holdings. In this study we focus on the theoretical development of artificial neural networks and the mass appraisal of real estate, the evolution of artificial neural network models and algorithm improvements, and artificial neural network practice and application, and we review the existing literature on artificial neural networks and the mass appraisal of real estate. Finally, we provide some suggestions for the mass appraisal of China's real estate.
APA, Harvard, Vancouver, ISO, and other styles
27

De Groff, Dolores, and Perambur Neelakanta. "Faster Convergent Artificial Neural Networks." INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 17, no. 1 (January 16, 2018): 7126–32. http://dx.doi.org/10.24297/ijct.v17i1.7106.

Full text
Abstract:
Proposed in this paper is a novel fast-convergence algorithm for artificial neural networks (ANNs) with a learning rate based on the eigenvalues of the Hessian matrix associated with the input data. That is, the learning rate applied to the backpropagation algorithm changes dynamically with the input data used for training. The best choice of learning rate for converging quickly to an accurate value is derived. This fast-convergence algorithm is applied to a traditional multilayer ANN architecture with feed-forward and backpropagation techniques, and the proposed strategy is applied to various functions learned by the ANN through training. Learning curves obtained with learning rates calculated by the proposed method are compared to learning curves obtained with an arbitrary learning rate to demonstrate the usefulness of the technique. The study shows that convergence to accurate values can be achieved much more quickly (a reduction in iterations by a factor of one hundred) using the proposed techniques, which are illustrated with derivations and pertinent examples.
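The eigenvalue-based learning-rate idea described above can be sketched on a toy quadratic loss. Everything here (the 2x2 Hessian, the step counts, the comparison rate) is an illustrative assumption, not the paper's actual derivation; for a quadratic, the classical optimal constant step is 2/(lambda_min + lambda_max):

```python
import math

# Hypothetical quadratic loss 0.5 * w^T H w, standing in for the local
# curvature of a network's error surface; H is its Hessian.
H = [[3.0, 1.0], [1.0, 2.0]]

def eigenvalues_2x2(m):
    # Closed-form eigenvalues of a symmetric 2x2 matrix.
    tr = m[0][0] + m[1][1]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    disc = math.sqrt(tr * tr / 4.0 - det)
    return tr / 2.0 - disc, tr / 2.0 + disc

def descend(eta, steps=20, w=(1.0, 1.0)):
    # Plain gradient descent on the quadratic; returns distance to minimum.
    w = list(w)
    for _ in range(steps):
        g = [H[0][0] * w[0] + H[0][1] * w[1],
             H[1][0] * w[0] + H[1][1] * w[1]]   # gradient = H w
        w = [w[0] - eta * g[0], w[1] - eta * g[1]]
    return math.hypot(w[0], w[1])

lam_min, lam_max = eigenvalues_2x2(H)
eta_opt = 2.0 / (lam_min + lam_max)   # eigenvalue-derived learning rate

print(descend(eta_opt), descend(0.05))
```

With the eigenvalue-derived rate the iterate contracts by the same factor along both eigendirections, which is why it beats an arbitrary small rate by orders of magnitude over the same number of steps.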
APA, Harvard, Vancouver, ISO, and other styles
28

Kłosowski, Grzegorz, and Tomasz Rymarczyk. "USING NEURAL NETWORKS AND DEEP LEARNING ALGORITHMS IN ELECTRICAL IMPEDANCE TOMOGRAPHY." Informatics Control Measurement in Economy and Environment Protection 7, no. 3 (September 30, 2017): 99–102. http://dx.doi.org/10.5604/01.3001.0010.5226.

Full text
Abstract:
This paper refers to cases of the use of Artificial Neural Networks and Convolutional Neural Networks in impedance tomography. Machine Learning methods can be used to teach computers to solve different technical problems. The efficient use of conventional artificial neural networks in tomography makes it possible to visualize objects effectively. A first step toward implementing Deep Learning methods in Electrical Impedance Tomography is presented in this work.
APA, Harvard, Vancouver, ISO, and other styles
29

Samimi, Ahmad Jafari. "Forecasting Government Size in Iran Using Artificial Neural Network." Journal of Economics and Behavioral Studies 3, no. 5 (November 15, 2011): 274–78. http://dx.doi.org/10.22610/jebs.v3i5.280.

Full text
Abstract:
In this study, an artificial neural network (ANN) is applied to forecast government size in Iran. The purpose of the study is to compare the effects of various architectures, transfer functions and learning algorithms on network performance; for this purpose, annual data from 1971-2007 for the selected variables are used. The variables are tax income, oil revenue, population, openness, government expenditure, GDP and GDP per capita; they were selected based on economic theories. The results show that networks with different training algorithms and transfer functions yield different results. The best architecture is a network with two hidden layers and twelve (12) neurons per hidden layer, with the hyperbolic tangent transfer function in both the hidden and output layers and the Quasi-Newton training algorithm. Based on these findings, the study suggests that users of neural networks must be careful in selecting the architecture, transfer function and training algorithms.
APA, Harvard, Vancouver, ISO, and other styles
30

BELATRECHE, AMMAR, LIAM P. MAGUIRE, MARTIN MCGINNITY, and QING XIANG WU. "EVOLUTIONARY DESIGN OF SPIKING NEURAL NETWORKS." New Mathematics and Natural Computation 02, no. 03 (November 2006): 237–53. http://dx.doi.org/10.1142/s179300570600049x.

Full text
Abstract:
Unlike traditional artificial neural networks (ANNs), which use a high abstraction of real neurons, spiking neural networks (SNNs) offer a biologically plausible model of realistic neurons. They differ from classical artificial neural networks in that SNNs handle and communicate information by means of the timing of individual pulses, an important feature of neuronal systems that is ignored by models based on a rate coding scheme. However, in order to make the most of these realistic neuronal models, good training algorithms are required. Most existing learning paradigms tune the synaptic weights in an unsupervised way using an adaptation of the famous Hebbian learning rule, which is based on the correlation between pre- and post-synaptic neuron activity. Nonetheless, supervised learning is more appropriate when prior knowledge about the outcome of the network is available. In this paper, a new approach for supervised training with a biologically plausible architecture is presented. An adapted evolutionary strategy (ES) is used for adjusting the synaptic strengths and delays, which underlie the learning and memory processes in the nervous system. The algorithm is applied to complex non-linearly separable problems, and the results show that the network is able to learn successfully by means of temporal encoding of the presented patterns.
APA, Harvard, Vancouver, ISO, and other styles
31

Kornev, P. A., and A. N. Pylkin. "Fundamentals of optimization of training algorithms for artificial neural networks." E3S Web of Conferences 224 (2020): 01022. http://dx.doi.org/10.1051/e3sconf/202022401022.

Full text
Abstract:
In the modern IT industry, artificial intelligence technologies and, in particular, artificial neural systems form the basis for near-term progress. These so-called neural networks are constantly being improved through their many learning algorithms for a wide range of tasks. In this paper, the class of approximation problems is singled out as one of the most common classes of problems in artificial intelligence systems. The aim of the paper is to study the most commonly recommended learning algorithms, select the optimal one and find ways to improve it with respect to various characteristics. Several of the most commonly used learning algorithms for approximation are considered. In the course of computational experiments, the most advantageous aspects of all the presented algorithms are revealed, and a method is proposed for improving the computational characteristics of the algorithms under study.
APA, Harvard, Vancouver, ISO, and other styles
32

Farmaha, Ihor, Marian Banaś, Vasyl Savchyn, Bohdan Lukashchuk, and Taras Farmaha. "Wound image segmentation using clustering based algorithms." New Trends in Production Engineering 2, no. 1 (October 1, 2019): 570–78. http://dx.doi.org/10.2478/ntpe-2019-0062.

Full text
Abstract:
Abstract Classic methods of measuring and analyzing wounds on images are very time consuming and inaccurate. Automating this process will improve measurement accuracy and speed. This research aims to create a machine-learning algorithm for automated segmentation based on clustering methods. The algorithms used are SLIC (Simple Linear Iterative Clustering) and Deep Embedded Clustering (which is based on artificial neural networks and k-means). Because of the insufficient amount of labeled data, classification with artificial neural networks cannot reach good results. Clustering, on the other hand, is an unsupervised learning technique and does not need human interaction. Combining traditional clustering methods for image segmentation with artificial neural networks yields the advantages of both. A preliminary step to adapt Deep Embedded Clustering to bio-medical images is introduced, based on the SLIC algorithm for image segmentation. After model training, segmentation with this method leads to better results than traditional SLIC.
APA, Harvard, Vancouver, ISO, and other styles
33

Thakur, Amey. "Neuro-Fuzzy: Artificial Neural Networks & Fuzzy Logic." International Journal for Research in Applied Science and Engineering Technology 9, no. 9 (September 30, 2021): 128–35. http://dx.doi.org/10.22214/ijraset.2021.37930.

Full text
Abstract:
Abstract: Neuro-Fuzzy is a hybrid system that combines Artificial Neural Networks with Fuzzy Logic, providing a great deal of freedom in reasoning; the phrase is frequently used to describe any system that combines both approaches. There are two basic streams of neural network and fuzzy system study: modelling several elements of the human brain (structure, reasoning, learning, perception, and so on), and modelling artificial systems and data (pattern clustering and recognition, function approximation, system parameter estimation, and so on). In general, neural networks and fuzzy logic systems are parameterized nonlinear computing methods for numerical data processing (signals, images, stimuli). These algorithms can be integrated into dedicated hardware or implemented on a general-purpose computer. The network system acquires knowledge through a learning process, and the learned information is stored in internal parameters (weights). Keywords: Artificial Neural Networks (ANNs), Neural Networks (NNs), Fuzzy Logic (FL), Neuro-Fuzzy, Probability Reasoning, Soft Computing, Fuzzification, Defuzzification, Fuzzy Inference Systems, Membership Function.
APA, Harvard, Vancouver, ISO, and other styles
34

Grabusts, Pēteris. "APPLICATION OF CLUSTERING METHOD IN THE RBF NEURAL NETWORKS." Environment. Technology. Resources. Proceedings of the International Scientific and Practical Conference 1 (June 20, 2001): 257. http://dx.doi.org/10.17770/etr2001vol1.1928.

Full text
Abstract:
This paper describes one of the classification algorithms, cluster analysis, which plays a significant role in the implementation of the learning algorithm for RBF-type artificial neural networks. The mathematical description of the K-means clustering algorithm is given and its implementation is demonstrated experimentally.
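A minimal sketch of the K-means step described above, as it is typically used to pick the centres of an RBF network's basis functions. The one-dimensional data and k = 2 are invented for illustration, not taken from the paper:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    # Plain Lloyd's algorithm on 1-D points.
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest centre.
            i = min(range(k), key=lambda j: (p - centers[j]) ** 2)
            buckets[i].append(p)
        # Recompute centres as bucket means (keep old centre if empty).
        centers = [sum(b) / len(b) if b else centers[i]
                   for i, b in enumerate(buckets)]
    return sorted(centers)

data = [0.1, 0.2, 0.15, 5.0, 5.1, 4.9]   # two obvious clusters
centers = kmeans(data, 2)
print(centers)
```

In an RBF network each recovered centre c would then parameterize a Gaussian basis function such as exp(-(x - c)**2 / (2 * sigma**2)), with the output layer trained on top of those activations.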
APA, Harvard, Vancouver, ISO, and other styles
35

Leema N., Khanna H. Nehemiah, Elgin Christo V. R., and Kannan A. "Evaluation of Parameter Settings for Training Neural Networks Using Backpropagation Algorithms." International Journal of Operations Research and Information Systems 11, no. 4 (October 2020): 62–85. http://dx.doi.org/10.4018/ijoris.2020100104.

Full text
Abstract:
Artificial neural networks (ANN) are widely used for classification, and the training algorithm commonly used is the backpropagation (BP) algorithm. The major bottleneck faced in the backpropagation neural network training is in fixing the appropriate values for network parameters. The network parameters are initial weights, biases, activation function, number of hidden layers and the number of neurons per hidden layer, number of training epochs, learning rate, minimum error, and momentum term for the classification task. The objective of this work is to investigate the performance of 12 different BP algorithms with the impact of variations in network parameter values for the neural network training. The algorithms were evaluated with different training and testing samples taken from the three benchmark clinical datasets, namely, Pima Indian Diabetes (PID), Hepatitis, and Wisconsin Breast Cancer (WBC) dataset obtained from the University of California Irvine (UCI) machine learning repository.
APA, Harvard, Vancouver, ISO, and other styles
36

Saiful, Muhammad, Lalu Muhammad Samsu, and Fathurrahman Fathurrahman. "Sistem Deteksi Infeksi COVID-19 Pada Hasil X-Ray Rontgen menggunakan Algoritma Convolutional Neural Network (CNN)." Infotek : Jurnal Informatika dan Teknologi 4, no. 2 (July 31, 2021): 217–27. http://dx.doi.org/10.29408/jit.v4i2.3582.

Full text
Abstract:
World technology is developing rapidly, especially in the field of health, in the form of tools for detecting various objects, including disease objects. The technology in question is part of artificial intelligence that is able to recognize a set of images and classify them automatically with deep learning techniques. One of the widely used deep learning networks is the convolutional neural network with computer vision technology. One computer vision problem that is still developing is object detection, a useful technology for recognizing objects in an image much as humans would. In this case, a computer is trained using artificial neural networks. One subtype of artificial neural network able to handle computer vision problems is the deep learning technique using the convolutional neural network algorithm. The purpose of this research is to establish the system design and the network architecture used for COVID-19 infection detection. The system cannot detect other objects. The results of COVID-19 infection detection with the convolutional neural network algorithm show accuracy values ranging from 60% to 99%.
APA, Harvard, Vancouver, ISO, and other styles
37

Bakurova, Anna, Olesia Yuskiv, Dima Shyrokorad, Anton Riabenko, and Elina Tereschenko. "NEURAL NETWORK FORECASTING OF ENERGY CONSUMPTION OF A METALLURGICAL ENTERPRISE." Innovative Technologies and Scientific Solutions for Industries, no. 1 (15) (March 31, 2021): 14–22. http://dx.doi.org/10.30837/itssi.2021.15.014.

Full text
Abstract:
The subject of the research is the methods of constructing and training neural networks as a nonlinear modeling apparatus for solving the problem of predicting the energy consumption of metallurgical enterprises. The purpose of this work is to develop a model for forecasting the consumption of the power system of a metallurgical enterprise and to test it experimentally on the data available for research from PJSC "Dneprospetsstal". The following tasks have been solved: analysis of the time series of power consumption; building a model with which data on electricity consumption for a historical period is processed; building the most accurate forecast of the actual amount of electricity for the day ahead; and assessment of the forecast quality. Methods used: time series analysis, neural network modeling, and short-term forecasting of energy consumption in the metallurgical industry. Results obtained: to develop a model for predicting the energy consumption of a metallurgical enterprise based on artificial neural networks, the MATLAB environment with the Neural Network Toolbox was chosen. In the experiments, based on the available statistical data of a metallurgical enterprise, architectures and training algorithms for the neural networks were selected. The best results were shown by the feedforward backpropagation network and the nonlinear autoregressive architecture with the following learning algorithms: Levenberg-Marquardt nonlinear optimization, Bayesian regularization and the conjugate gradient method. Another approach, deep learning, is also considered, namely a neural network with long short-term memory (LSTM) and the Adam learning algorithm. Such a deep neural network can process large amounts of input information in a short time and build dependencies even with uninformative inputs.
The LSTM network turned out to be the most effective among the considered neural networks, having the minimum value of the maximum prediction error. Conclusions: analysis of the forecasting results using the developed models showed that the chosen approach, with experimentally selected architectures and learning algorithms, meets the accuracy requirements for a forecasting model based on artificial neural networks. The models will allow automating high-precision operational hourly forecasting of energy consumption in market conditions. Keywords: energy consumption; forecasting; artificial neural network; time series.
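As a much-simplified stand-in for the autoregressive forecasters discussed above, a one-step-ahead predictor can be fit by least squares. The synthetic "consumption" series, the AR(1) form and all constants below are assumptions for illustration only, far simpler than the NARX and LSTM models in the paper:

```python
import random

rng = random.Random(1)

# Toy hourly "consumption" series generated by a known AR(1) process;
# least-squares fitting of the lag-1 coefficient recovers it.
series = [1.0]
for _ in range(500):
    series.append(0.8 * series[-1] + rng.gauss(0.0, 0.1))

x = series[:-1]            # lagged values
y = series[1:]             # next-step targets
# Ordinary least squares without intercept: a = sum(x*y) / sum(x*x).
a = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)

forecast = a * series[-1]  # one-step-ahead ("day ahead" analogue) prediction
print(a, forecast)
```

Real energy-consumption series need many exogenous inputs and nonlinear models, which is exactly why the paper moves from this kind of linear autoregression to NARX and LSTM networks.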
APA, Harvard, Vancouver, ISO, and other styles
38

Avdic, Senada, Roumiana Chakarova, and Imre Pazsit. "Analysis of the experimental positron lifetime spectra by neural networks." Nuclear Technology and Radiation Protection 18, no. 1 (2003): 16–21. http://dx.doi.org/10.2298/ntrp0301016a.

Full text
Abstract:
This paper deals with the analysis of experimental positron lifetime spectra in polymer materials using various neural network algorithms. A method based on artificial neural networks for unfolding the mean lifetime and intensity of the spectral components of simulated positron lifetime spectra was previously suggested and tested on simulated data [Pázsit et al., Applied Surface Science, 149 (1998), 97]. In this work, the applicability of the method to the analysis of experimental positron spectra has been verified for spectra from polymer materials with three components. It has been demonstrated that the backpropagation neural network can determine the spectral parameters with high accuracy and decompose lifetimes that differ by 10% or more. The backpropagation network was not suitable for identifying both the parameters and the number of spectral components, so a separate artificial neural network module was designed to solve the classification problem. Module types based on self-organizing map and learning vector quantization algorithms were tested, and the learning vector quantization algorithm was found to have better performance and reliability. A complete artificial neural network analysis tool for positron lifetime spectra has been constructed, including a spectra classification module and parameter evaluation modules for spectra with different numbers of components. In this way, both flexibility and high resolution can be achieved.
APA, Harvard, Vancouver, ISO, and other styles
39

Li, Xiao Guang. "Research on the Development and Applications of Artificial Neural Networks." Applied Mechanics and Materials 556-562 (May 2014): 6011–14. http://dx.doi.org/10.4028/www.scientific.net/amm.556-562.6011.

Full text
Abstract:
Intelligent control is a class of control techniques that use various AI computing approaches like neural networks, Bayesian probability, fuzzy logic, machine learning, evolutionary computation and genetic algorithms. In computer science and related fields, artificial neural networks are computational models inspired by animals’ central nervous systems (in particular the brain) that are capable of machine learning and pattern recognition. They are usually presented as systems of interconnected “neurons” that can compute values from inputs by feeding information through the network. Like other machine learning methods, neural networks have been used to solve a wide variety of tasks that are hard to solve using ordinary rule-based programming, including computer vision and speech recognition.
APA, Harvard, Vancouver, ISO, and other styles
40

Mohamed, Soha Abd El-Moamen, Marghany Hassan Mohamed, and Mohammed F. Farghally. "A New Cascade-Correlation Growing Deep Learning Neural Network Algorithm." Algorithms 14, no. 5 (May 19, 2021): 158. http://dx.doi.org/10.3390/a14050158.

Full text
Abstract:
In this paper, an algorithm that dynamically changes the neural network structure is proposed. The structure is changed based on features of the cascade correlation algorithm. Cascade correlation is an important constructive architecture and supervised learning algorithm for solving practical problems with artificial neural networks. The process optimizes the architecture of the network, which is intended to accelerate the learning process and produce better generalization performance. Many researchers have to date proposed several growing algorithms to optimize feedforward neural network architectures. The proposed algorithm has been tested on various medical data sets, and the results show that it achieves better accuracy and flexibility.
APA, Harvard, Vancouver, ISO, and other styles
41

Inoue, Isao H. "CREST project focused on neuromorphic devices and networks." Impact 2020, no. 5 (November 9, 2020): 6–9. http://dx.doi.org/10.21820/23987073.2020.5.6.

Full text
Abstract:
The artificial neural network is a type of electronic circuit modelled after the human brain. It contains thousands of artificial neurons and synapses that, in general, assemble to execute algorithms allowing the neural network to incorporate a large amount of input data. One such algorithm is deep learning (DL), a kind of statistical processing that learns and infers several features of big data while consuming tremendous energy. A team led by Dr Isao H Inoue of the National Institute of Advanced Industrial Science and Technology (AIST) is working on a five-and-a-half-year CREST project, running until March 2025, to develop a novel neuromorphic architecture that can perform learning and inference without such an algorithm, and thus with low power consumption.
APA, Harvard, Vancouver, ISO, and other styles
42

Ali, Irfan, and Lana Sularto. "Optimasi Parameter Artificial Neural Network Menggunakan Algoritma Genetika Untuk Prediksi Kelulusan Mahasiswa." Jurnal ICT : Information Communication & Technology 18, no. 1 (August 7, 2019): 54–59. http://dx.doi.org/10.36054/jict-ikmi.v18i1.52.

Full text
Abstract:
It is difficult to predict student graduation status in a college. Higher education institutions need to predict the behavior of active students so that the failure factors of students who do not graduate on time can be identified. The data mining classification technique used here to predict student graduation is the artificial neural network. This research applies the artificial neural network method combined with a genetic algorithm to predict student graduation. The study uses a learning rate parameter of 0.1, optimizes the network with a genetic algorithm, and then evaluates the accuracy. The results give an accuracy of 71.48% for the plain artificial neural network model and 99.33% for the artificial neural network model optimized by the genetic algorithm, a difference of 27.85%.
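The optimization step described above, letting a genetic algorithm choose a network hyperparameter, can be sketched as follows. The fitness function is a toy surrogate (its maximum is placed arbitrarily at lr = 0.3) standing in for the validation accuracy of a network trained with that learning rate; population size, operators and bounds are all illustrative assumptions:

```python
import random

rng = random.Random(42)

def fitness(lr):
    # Surrogate for "validation accuracy of a network trained with lr";
    # the true optimum here is lr = 0.3 by construction.
    return -(lr - 0.3) ** 2

def genetic_search(pop_size=20, generations=40):
    pop = [rng.uniform(0.0, 1.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]            # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a + b) / 2.0                # arithmetic crossover
            child += rng.gauss(0.0, 0.05)        # Gaussian mutation
            children.append(min(1.0, max(0.0, child)))
        pop = parents + children
    return max(pop, key=fitness)

best_lr = genetic_search()
print(best_lr)
```

Because the parents survive unmutated, the best candidate never gets worse from one generation to the next, which is the same elitist property that makes GA-based hyperparameter tuning reliable in practice.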
APA, Harvard, Vancouver, ISO, and other styles
43

MILARÉ, CLAUDIA R., ANDRÉ C. P. DE L. F. DE CARVALHO, and MARIA C. MONARD. "AN APPROACH TO EXPLAIN NEURAL NETWORKS USING SYMBOLIC ALGORITHMS." International Journal of Computational Intelligence and Applications 02, no. 04 (December 2002): 365–76. http://dx.doi.org/10.1142/s1469026802000695.

Full text
Abstract:
Although Artificial Neural Networks have been satisfactorily employed in several problems, such as clustering, pattern recognition, dynamic systems control and prediction, they still suffer from significant limitations. One of them is that the induced concept representation is not usually comprehensible to humans. Several techniques have been suggested to extract meaningful knowledge from trained networks. This paper proposes the use of symbolic learning algorithms, commonly used by the Machine Learning community, such as C4.5, C4.5rules and CN2, to extract symbolic representations from trained networks. The approach proposed is similar to that used by the Trepan algorithm, which extracts symbolic representations, expressed as decision trees, from trained networks. Experimental results are presented and discussed in order to compare the knowledge extracted from Artificial Neural Networks using the proposed approach and the Trepan approach. Results are compared regarding two aspects: fidelity and comprehensibility.
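The extraction idea above, querying the trained network as an oracle and fitting a symbolic learner to its answers, can be sketched with a decision stump standing in for C4.5. The threshold "network" and the sampling scheme are invented for illustration; fidelity is the fraction of oracle labels the symbolic model reproduces:

```python
import random

rng = random.Random(0)

def network(x):
    # Stand-in "trained network": any callable that labels inputs.
    return 1 if x > 0.5 else 0

# Query the oracle on sampled inputs (the Trepan-style step).
samples = [rng.random() for _ in range(200)]
labels = [network(x) for x in samples]

def fit_stump(xs, ys):
    # One-split decision stump: pick the threshold that best mimics the oracle.
    best_t, best_acc = None, -1.0
    for t in xs:                      # candidate thresholds at the data points
        acc = sum((x > t) == (y == 1) for x, y in zip(xs, ys)) / len(xs)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

threshold, fidelity = fit_stump(samples, labels)
print(threshold, fidelity)
```

A full extraction system would grow a tree (or rule set) instead of a single stump and could generate extra queries near the decision boundary, but the fidelity-versus-comprehensibility trade-off the paper evaluates is already visible at this scale.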
APA, Harvard, Vancouver, ISO, and other styles
44

Wohl, Peter. "EFFICIENCY THROUGH REDUCED COMMUNICATION IN MESSAGE PASSING SIMULATION OF NEURAL NETWORKS." International Journal on Artificial Intelligence Tools 02, no. 01 (March 1993): 133–62. http://dx.doi.org/10.1142/s0218213093000096.

Full text
Abstract:
Neural algorithms require massive computation and very high communication bandwidth, and are naturally expressed at a level of granularity finer than parallel systems can exploit efficiently. Mapping neural networks onto parallel computers has traditionally implied a form of clustering neurons and weights to increase the granularity. SIMD simulations may exceed a million connections per second using thousands of processors, but are often tailored to particular networks and learning algorithms. MIMD simulations require an even larger granularity to run efficiently and often trade flexibility for speed. An alternative technique based on pipelining fewer but larger messages through parallel "broadcast/accumulate trees" is explored. "Lazy" allocation of messages reduces communication and memory requirements, curbing excess parallelism at run time. The mapping is flexible to changes in network architecture and learning algorithm and is suited to a variety of computer configurations. The method pushes the limits of parallelizing backpropagation and feed-forward type algorithms. Results exceed a million connections per second already on 30 processors and are up to ten times superior to previous results on similar hardware. The implementation techniques can also be applied in conjunction with others, including systolic and VLSI approaches.
APA, Harvard, Vancouver, ISO, and other styles
45

Manjunath, R., Shyam vasudev, and Narendranath Udupa. "Differential Learning Algorithm for Artificial Neural Networks." International Journal of Computer Applications 1, no. 18 (February 25, 2010): 69–74. http://dx.doi.org/10.5120/381-571.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

DRAELOS, TIM, and DON HUSH. "A CONSTRUCTIVE NEURAL NETWORK ALGORITHM FOR FUNCTION APPROXIMATION USING LOCALLY FIT SIGMOIDS." International Journal on Artificial Intelligence Tools 07, no. 03 (September 1998): 373–98. http://dx.doi.org/10.1142/s0218213098000172.

Full text
Abstract:
A study of the function approximation capabilities of single hidden layer neural networks strongly motivates the investigation of constructive learning techniques as a means of realizing established error bounds. Learning characteristics employed by constructive algorithms provide ideas for the development of new algorithms applicable to the function approximation problem. In addition, constructive techniques offer efficient methods for network construction and weight determination. The development of a novel neural network algorithm, the Constructive Locally Fit Sigmoids (CLFS) function approximation algorithm, is presented in detail. Basis functions of global extent (piecewise linear sigmoidal functions) are locally fit to the target function, resulting in a pool of candidate hidden layer nodes from which a function approximation is obtained. This algorithm provides a methodology for selecting nodes in a meaningful way from the infinite set of possibilities and synthesizes an n node single hidden layer network with empirical and analytical results that strongly indicate an O(1/n) mean squared training error bound under certain assumptions. The algorithm operates in polynomial time in the number of network nodes and the input dimension. Empirical results demonstrate its effectiveness on several multidimensional function approximation problems relative to contemporary constructive and nonconstructive algorithms.
APA, Harvard, Vancouver, ISO, and other styles
47

Ding, Hong, and Madan M. Gupta. "Learning Fuzzy Set Neural Networks by Genetic Algorithms." Journal of Intelligent and Fuzzy Systems 5, no. 2 (1997): 113–27. http://dx.doi.org/10.3233/ifs-1997-5203.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Balli, Serkan, and Faruk Sen. "Performance evaluation of artificial neural networks for identification of failure modes in composite plates." Materials Testing 63, no. 6 (June 1, 2021): 565–70. http://dx.doi.org/10.1515/mt-2020-0094.

Full text
Abstract:
Abstract The aim of this work is to identify failure modes of double-pinned sandwich composite plates by using artificial neural network learning algorithms and then to analyze their identification accuracy. Mechanically pinned specimens with two serial pins/bolts for sandwich composite plates were used for the recognition of failure modes obtained in previous experimental studies. In addition, the empirical data of the preceding work were determined with various geometric parameters for various applied preload moments. In this study, these geometric parameters and fastened/bolted joint forms were used for training the artificial neural networks. Consequently, ten different backpropagation training algorithms were applied for classification using one hundred data values containing three geometrical parameters. According to the obtained results, the Levenberg-Marquardt backpropagation training algorithm was the most successful, with a 93% accuracy rate, and it was appropriate for modeling this problem. Additionally, the performances of all backpropagation training algorithms were discussed taking into account accuracy and error ratios.
APA, Harvard, Vancouver, ISO, and other styles
49

Świć, Antoni, Dariusz Wołos, Arkadiusz Gola, and Grzegorz Kłosowski. "The Use of Neural Networks and Genetic Algorithms to Control Low Rigidity Shafts Machining." Sensors 20, no. 17 (August 19, 2020): 4683. http://dx.doi.org/10.3390/s20174683.

Full text
Abstract:
The article presents an original machine-learning-based automated approach for controlling the process of machining of low-rigidity shafts using artificial intelligence methods. Three models of hybrid controllers based on different types of neural networks and genetic algorithms were developed. In this study, an objective function optimized by a genetic algorithm was replaced with a neural network trained on real-life data. The task of the genetic algorithm is to select the optimal values of the input parameters of a neural network to ensure minimum deviation. Both input vector values and the neural network’s output values are real numbers, which means the problem under consideration is regressive. The performance of three types of neural networks was analyzed: a classic multilayer perceptron network, a nonlinear autoregressive network with exogenous input (NARX) prediction network, and a deep recurrent long short-term memory (LSTM) network. Algorithmic machine learning methods were used to achieve a high level of automation of the control process. By training the network on data from real measurements, we were able to control the reliability of the turning process, taking into account many factors that are usually overlooked during mathematical modelling. Positive results of the experiments confirm the effectiveness of the proposed method for controlling low-rigidity shaft turning.
APA, Harvard, Vancouver, ISO, and other styles
50

Đuriš, Jelena, Ivana Kurćubić, and Svetlana Ibrić. "Review of machine learning algorithms' application in pharmaceutical technology." Arhiv za farmaciju 71, no. 4 (2021): 302–17. http://dx.doi.org/10.5937/arhfarm71-32499.

Full text
Abstract:
Machine learning algorithms, and artificial intelligence in general, have a wide range of applications in the field of pharmaceutical technology. Starting from the formulation development, through a great potential for integration within the Quality by design framework, these data science tools provide a better understanding of the pharmaceutical formulations and respective processing. Machine learning algorithms can be especially helpful with the analysis of the large volume of data generated by the Process analytical technologies. This paper provides a brief explanation of the artificial neural networks, as one of the most frequently used machine learning algorithms. The process of the network training and testing is described and accompanied with illustrative examples of machine learning tools applied in the context of pharmaceutical formulation development and related technologies, as well as an overview of the future trends. Recently published studies on more sophisticated methods, such as deep neural networks and light gradient boosting machine algorithm, have been described. The interested reader is also referred to several official documents (guidelines) that pave the way for a more structured representation of the machine learning models in their prospective submissions to the regulatory bodies.
APA, Harvard, Vancouver, ISO, and other styles