Journal articles on the topic 'Artificial Neural Network Training'

Consult the top 50 journal articles for your research on the topic 'Artificial Neural Network Training.'

1

Bui, Ngoc Tam, and Hiroshi Hasegawa. "Training Artificial Neural Network Using Modification of Differential Evolution Algorithm." International Journal of Machine Learning and Computing 5, no. 1 (February 2015): 1–6. http://dx.doi.org/10.7763/ijmlc.2015.v5.473.

2

Didmanidze, I. Sh, G. A. Kakhiani, and D. Z. Didmanidze. "TRAINING OF ARTIFICIAL NEURAL NETWORK." Journal of Numerical and Applied Mathematics, no. 1 (135) (2021): 110–14. http://dx.doi.org/10.17721/2706-9699.2021.1.14.

Abstract:
The methodology of neural networks is increasingly applied to management and decision-making tasks, including in the sphere of trade and finance. Neural networks are based on nonlinear adaptive systems, which have proven effective in solving forecasting problems.
3

Cavallaro, Lucia, Ovidiu Bagdasar, Pasquale De Meo, Giacomo Fiumara, and Antonio Liotta. "Artificial neural networks training acceleration through network science strategies." Soft Computing 24, no. 23 (September 9, 2020): 17787–95. http://dx.doi.org/10.1007/s00500-020-05302-y.

Abstract:
The development of deep learning has led to a dramatic increase in the number of applications of artificial intelligence. However, the training of deeper neural networks for stable and accurate models translates into artificial neural networks (ANNs) that become unmanageable as the number of features increases. This work extends our earlier study where we explored the acceleration effects obtained by enforcing, in turn, scale freeness, small worldness, and sparsity during the ANN training process. The efficiency of that approach was confirmed by recent studies (conducted independently) where a million-node ANN was trained on non-specialized laptops. Encouraged by those results, our study is now focused on some tunable parameters, to pursue a further acceleration effect. We show that, although optimal parameter tuning is unfeasible, due to the high non-linearity of ANN problems, we can actually come up with a set of useful guidelines that lead to speed-ups in practical cases. We find that significant reductions in execution time can generally be achieved by setting the revised fraction parameter ($\zeta$) to relatively low values.
4

Sagala, Noviyanti, Cynthia Hayat, and Frahselia Tandipuang. "Identification of fat-soluble vitamins deficiency using artificial neural network." Jurnal Teknologi dan Sistem Komputer 8, no. 1 (October 17, 2019): 6–11. http://dx.doi.org/10.14710/jtsiskom.8.1.2020.6-11.

Abstract:
Fat-soluble vitamin (A, D, E, K) deficiencies remain common worldwide, can have serious adverse consequences, and cause symptoms that appear slowly and intensify over time. Detecting a vitamin deficiency requires an experienced physician to recognize the symptoms and to review the results of a costly blood test. This research aims to create an early detection system for fat-soluble vitamin deficiency using a back-propagation artificial neural network. Deficiency-symptom data were converted into training data, used to produce the ANN weights, and testing data. Gradient descent was employed with logsig as the activation function. The training and test data comprised 71 and 30 samples, respectively. The best architecture achieved an accuracy of 95% with a parameter combination of 150 hidden layers, 10,000 epochs, an error target of 0.0001, and a learning rate of 0.25.
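The setup described above (logistic-sigmoid units trained by plain gradient descent on a squared-error target) is standard back-propagation. As a rough illustration only, and not the authors' code, a minimal one-hidden-layer version might look as follows; the hidden-layer size is an arbitrary placeholder, while the learning rate and error target echo the values quoted in the abstract.

```python
import numpy as np

def logsig(z):
    # Logistic sigmoid ("logsig" in MATLAB terminology)
    return 1.0 / (1.0 + np.exp(-z))

def train_backprop(X, y, n_hidden=20, lr=0.25, error_target=1e-4, max_epochs=10000, seed=0):
    """Plain gradient-descent back-propagation for a one-hidden-layer sigmoid network.
    X: (n_samples, n_features), y: (n_samples, n_outputs) with values in [0, 1]."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], n_hidden))
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(scale=0.5, size=(n_hidden, y.shape[1]))
    b2 = np.zeros(y.shape[1])
    for epoch in range(max_epochs):
        h = logsig(X @ W1 + b1)        # hidden activations
        out = logsig(h @ W2 + b2)      # network output
        err = out - y
        if np.mean(err ** 2) < error_target:
            break
        # Backpropagate the squared-error gradient through both sigmoid layers
        d_out = err * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out / len(X)
        b2 -= lr * d_out.mean(axis=0)
        W1 -= lr * X.T @ d_h / len(X)
        b1 -= lr * d_h.mean(axis=0)
    return W1, b1, W2, b2
```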
5

Golubinskiy, Andrey, and Andrey Tolstykh. "Hybrid method of conventional neural network training." Informatics and Automation 20, no. 2 (March 30, 2021): 463–90. http://dx.doi.org/10.15622/ia.2021.20.2.8.

Abstract:
The paper proposes a hybrid method for training convolutional neural networks that combines second- and first-order methods for different elements of the network architecture. The hybrid training method achieves significantly better convergence than Adam while requiring fewer computational operations to implement. Using the proposed method, it is possible to train networks on which learning paralysis occurs when first-order methods are used. Moreover, the proposed method can adjust its computational complexity to the hardware on which the computation is performed, and it supports mini-batch learning. The ratio of computations between convolutional neural networks and fully connected artificial neural networks is analyzed. The mathematical apparatus of error optimization for artificial neural networks is considered, including the error backpropagation method and the Levenberg-Marquardt algorithm, and the main limitations of these methods that arise when training a convolutional neural network are analyzed. The stability of the proposed method under changes in the initialization parameters is analyzed, and results on the applicability of the method to various problems are presented.
6

Gursoy, Osman, and Haidar Sharif. "Parallel computing for artificial neural network training." Periodicals of Engineering and Natural Sciences (PEN) 6, no. 1 (January 9, 2018): 1. http://dx.doi.org/10.21533/pen.v6i1.143.

7

Ding, Xue, and Hong Hong Yang. "A Study on the Image Classification Techniques Based on Wavelet Artificial Neural Network Algorithm." Applied Mechanics and Materials 602-605 (August 2014): 3512–14. http://dx.doi.org/10.4028/www.scientific.net/amm.602-605.3512.

Abstract:
With ever-changing education information technology, classifying the thousands of images produced during the art-examination marking process is a major problem for universities and colleges. This paper explores the application of artificial intelligence techniques to classify a large number of images accurately within a limited time with the help of a computer; results from actual work show that the proposed method is feasible. Artificial neural network training methods come in two main styles, incremental training and batch training, distinguished by how much of the training task the network handles at once. In incremental training [1], the connection weights and thresholds are adjusted each time the network receives an input vector and target vector; it is an online learning method. In batch training [2], the connections are not adjusted immediately but in bulk, after a given volume of input vectors and target vectors has been presented. Both training methods can be applied to static as well as dynamic neural networks, and different training methods lead an artificial neural network to different results. When using artificial neural networks to solve specific problems, the learning method, training method, and network function should be selected according to the expected results, the type of question, and its specific requirements [3-4]. The paper also considers the selection of parameters of wavelet neural networks and adaptive learning.
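To make the incremental-versus-batch distinction concrete, the following sketch (ours, not the paper's) contrasts the two update schedules on a single linear neuron; the learning rate and data shapes are illustrative assumptions.

```python
import numpy as np

def incremental_epoch(w, X, T, lr=0.01):
    # Incremental (online) training: weights are adjusted after every
    # input/target pair presented to the network.
    for x, t in zip(X, T):
        y = w @ x
        w = w + lr * (t - y) * x
    return w

def batch_epoch(w, X, T, lr=0.01):
    # Batch training: errors are accumulated over the whole epoch and the
    # weights are adjusted once, in bulk, at the end.
    grad = np.zeros_like(w)
    for x, t in zip(X, T):
        y = w @ x
        grad += (t - y) * x
    return w + lr * grad / len(X)
```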
8

Amali, D. Geraldine Bessie, and Dinakaran M. "A Review of Heuristic Global Optimization Based Artificial Neural Network Training Approahes." IAES International Journal of Artificial Intelligence (IJ-AI) 6, no. 1 (March 1, 2017): 26. http://dx.doi.org/10.11591/ijai.v6.i1.pp26-32.

Abstract:
Artificial neural networks have earned popularity in recent years because of their ability to approximate nonlinear functions. Training a neural network involves minimizing the mean square error between the target and network output. The error surface is nonconvex and highly multimodal. Finding the minimum of a multimodal function is an NP-complete problem and cannot be solved exactly in general. Thus the application of heuristic global optimization algorithms that compute a good global minimum to neural network training is of interest. This paper reviews the various heuristic global optimization algorithms used for training feedforward neural networks and recurrent neural networks. The training algorithms are compared in terms of the learning rate, convergence speed, and accuracy of the output produced by the neural network. The paper concludes by suggesting directions for novel ANN training algorithms based on recent advances in global optimization.
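For reference, the training objective referred to here, the mean square error between targets and network outputs viewed as a function of the weight vector, can be written as follows (the notation is ours, not the review's):

```latex
E(\mathbf{w}) = \frac{1}{N}\sum_{i=1}^{N}\bigl\|\mathbf{t}_i - f(\mathbf{x}_i;\mathbf{w})\bigr\|^{2},
\qquad
\mathbf{w}^{*} = \arg\min_{\mathbf{w}} E(\mathbf{w}).
```

Heuristic global optimizers are applied because this surface is nonconvex and multimodal, so purely gradient-based local search may stop at a poor minimum.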
9

Soylak, Mustafa, Tuğrul Oktay, and İlke Turkmen. "A simulation-based method using artificial neural networks for solving the inverse kinematic problem of articulated robots." Proceedings of the Institution of Mechanical Engineers, Part E: Journal of Process Mechanical Engineering 231, no. 3 (September 27, 2015): 470–79. http://dx.doi.org/10.1177/0954408915608755.

Abstract:
In this article, the inverse kinematic problem of a plasma cutting robot with three degrees of freedom is solved using artificial neural networks. The artificial neural network was trained on joint angle values corresponding to the Cartesian coordinates (x, y, z) of the end point of the robotic arm, using the Levenberg-Marquardt training algorithm. To validate the designed neural network, it was tested on a new test data set that was not used in training. A simulation was performed on a three-dimensional model in MSC.ADAMS software using the angle values obtained from the artificial neural network test. The simulation revealed that the trajectory of the plasma cutting torch obtained using the artificial neural network agreed well with the desired trajectory.
10

Kuptsov, P. V., A. V. Kuptsova, and N. V. Stankevich. "Artificial Neural Network as a Universal Model of Nonlinear Dynamical Systems." Nelineinaya Dinamika 17, no. 1 (2021): 5–21. http://dx.doi.org/10.20537/nd210102.

Abstract:
We suggest a universal map capable of recovering the behavior of a wide range of dynamical systems given by ODEs. The map is built as an artificial neural network whose weights encode a modeled system. We assume that the ODEs are known and prepare training datasets using the equations directly, without computing numerical time series. Parameter variations are taken into account in the course of training so that the network model captures bifurcation scenarios of the modeled system. The theoretical benefit of this approach is that the universal model admits the application of common mathematical methods without the need to develop a unique theory for each particular dynamical equation. From the practical point of view, the developed method can be considered an alternative numerical method for solving dynamical ODEs suitable for running on contemporary neural-network-specific hardware. We consider the Lorenz system, the Rössler system, and the Hindmarsh-Rose model. For these three examples the network model is created and its dynamics are compared with ordinary numerical solutions. A high similarity is observed for visual images of attractors, power spectra, bifurcation diagrams, and Lyapunov exponents.
11

Mohasseb, M., A. El-Rabbany, O. Abd El-Alim, and R. Rashad. "DGPS Correction Prediction Using Artificial Neural Networks." Journal of Navigation 60, no. 2 (April 20, 2007): 291–301. http://dx.doi.org/10.1017/s0373463307004158.

Abstract:
This paper focuses on modelling and predicting differential GPS corrections transmitted by marine radio-beacon systems using artificial neural networks. Various neural network structures with various training algorithms were examined, including linear, radial basis, and feedforward networks; the MATLAB Neural Network Toolbox was used for this purpose. The data sets used in building the model are the transmitted pseudorange corrections and the broadcast navigation message. Model design passed through several stages, namely data collection, preprocessing, model building, and finally model validation. It was found that a feedforward neural network with automated regularization is the most suitable for our data. In training the neural network, different approaches were used to take advantage of the pseudorange correction history while taking into account the required time for prediction and storage limitations. Three data structures were considered in training the neural network, namely all-round, compound, and average; of these, the average data structure was found to be the most suitable. It is shown that the developed model is capable of predicting the differential correction with an accuracy level comparable to that of the beacon-transmitted real-time DGPS correction.
12

Yamazaki, Akio, and Teresa B. Ludermir. "Neural Network Training with Global Optimization Techniques." International Journal of Neural Systems 13, no. 02 (April 2003): 77–86. http://dx.doi.org/10.1142/s0129065703001467.

Abstract:
This paper presents an approach that uses simulated annealing and tabu search for the simultaneous optimization of neural network architectures and weights. The problem considered is odor recognition in an artificial nose. Both methods produced networks with high classification performance and low complexity. Generalization was improved by using the backpropagation algorithm for fine tuning. The combination of simple and traditional search methods has been shown to be very suitable for generating compact and efficient networks.
13

Hui, Lucas Chi Kwong, Kwok-Yan Lam, and Chee Weng Chea. "Global optimisation in neural network training." Neural Computing & Applications 5, no. 1 (March 1997): 58–64. http://dx.doi.org/10.1007/bf01414103.

14

Vujicic, D., R. Pavlovic, D. Milosevic, B. Djordjevic, S. Randjic, and D. Stojic. "Classification of asteroid families with artificial neural networks." Serbian Astronomical Journal, no. 201 (2020): 39–48. http://dx.doi.org/10.2298/saj2001039v.

Abstract:
This paper describes an artificial neural network for the classification of asteroids into families. The data used for training and testing the artificial neural network were obtained by the Hierarchical Clustering Method (HCM). We have shown that an artificial neural network can be used as a validation method for the HCM on families with a large number of members.
15

Chatterjee, Baisakhi, and Himadri Nath Saha. "Parameter Training in MANET using Artificial Neural Network." International Journal of Computer Network and Information Security 11, no. 9 (September 8, 2019): 1–8. http://dx.doi.org/10.5815/ijcnis.2019.09.01.

16

Popova, Yu V. "Artificial Neural Network in the CATS Training System." Digital Transformation, no. 2 (August 6, 2019): 53–59. http://dx.doi.org/10.38086/2522-9613-2019-2-53-59.

Abstract:
This paper presents a variant of using an artificial neural network (ANN) for adaptive learning. The main idea is to apply the ANN to specific educational material so that, after completing a course or one of its topics, the student can determine his or her level of knowledge without the teacher's participation and also receive recommendations on what material needs to be studied further to close gaps in the topics covered. This approach makes it possible to build an individual learning trajectory, significantly reduce the time needed to study academic disciplines, and improve the quality of the educational process. The artificial neural network is trained by error backpropagation. The developed ANN can be applied to the study of any academic discipline with a different number of topics and control questions. The research results have been implemented and tested in the CATS adaptive training system, which is the author's own development.
17

Alwaisi, Shaimaa Safaa Ahmed. "Training Of Artificial Neural Network Using Metaheuristic Algorithm." International Journal of Intelligent Systems and Applications in Engineering Special Issue, Special Issue (July 31, 2017): 12–16. http://dx.doi.org/10.18201/ijisae.2017specialissue31417.

18

Yaghini, Masoud, Mohammad M. Khoshraftar, and Mehdi Fallahi. "A hybrid algorithm for artificial neural network training." Engineering Applications of Artificial Intelligence 26, no. 1 (January 2013): 293–301. http://dx.doi.org/10.1016/j.engappai.2012.01.023.

19

Baptista, Darío, and Fernando Morgado-Dias. "A survey of artificial neural network training tools." Neural Computing and Applications 23, no. 3-4 (June 14, 2013): 609–15. http://dx.doi.org/10.1007/s00521-013-1408-9.

20

Mohmad Hassim, Yana Mazwin, and Rozaida Ghazali. "Using Artificial Bee Colony to Improve Functional Link Neural Network Training." Applied Mechanics and Materials 263-266 (December 2012): 2102–8. http://dx.doi.org/10.4028/www.scientific.net/amm.263-266.2102.

Abstract:
Artificial neural networks have emerged as an important tool for classification and have been widely used to classify non-linearly separable patterns. The most popular artificial neural network model is the Multilayer Perceptron (MLP), which is able to perform classification tasks with significant success. However, the complexity of the MLP structure, together with problems such as local minima trapping, overfitting, and weight interference, has made neural network training difficult. An easy way to avoid these problems is to remove the hidden layers. This paper presents the ability of the Functional Link Neural Network (FLNN), with its single-layer architecture, to overcome the structural complexity of the MLP, and proposes an Artificial Bee Colony (ABC) optimization for training the FLNN. The proposed technique is expected to provide a better learning scheme for a classifier and thus more accurate classification results.
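As a rough sketch of the idea (our illustration, not the authors' implementation), a functional link network drops the hidden layer by expanding each input with fixed basis functions and training a single weight layer on the expanded features; the trigonometric expansion below is one common choice, and the weights w and b would be tuned by an optimizer such as ABC.

```python
import numpy as np

def functional_expansion(x):
    # Trigonometric functional expansion of one input vector x:
    # each feature x_j is mapped to [x_j, sin(pi x_j), cos(pi x_j), sin(2 pi x_j), cos(2 pi x_j)].
    parts = [x]
    for k in (1, 2):
        parts.append(np.sin(k * np.pi * x))
        parts.append(np.cos(k * np.pi * x))
    return np.concatenate(parts)

def flnn_output(w, b, x):
    # Single trainable layer: no hidden units, just weights on the expanded features.
    z = w @ functional_expansion(x) + b
    return 1.0 / (1.0 + np.exp(-z))   # sigmoid output for classification
```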
21

Biswas, Saroj Kumar, Manomita Chakraborty, Biswajit Purkayastha, Pinki Roy, and Dalton Meitei Thounaojam. "Rule Extraction from Training Data Using Neural Network." International Journal on Artificial Intelligence Tools 26, no. 03 (December 22, 2016): 1750006. http://dx.doi.org/10.1142/s0218213017500063.

Abstract:
Data mining is a powerful technology that helps organizations concentrate on the most important data by extracting useful information from large databases. One of the most commonly used techniques in data mining is the artificial neural network, due to its high performance in many application domains. Despite its many advantages, one of the main drawbacks of the artificial neural network is its inherent black-box nature, which is the main obstacle to using it in data mining. Therefore, this paper proposes a rule extraction algorithm that uses classified and misclassified data to convert the black-box nature of an artificial neural network into a white box. The proposed algorithm is a modification of an existing algorithm, Rule Extraction by Reverse Engineering (RxREN). It extracts rules from a trained neural network for datasets with mixed-mode attributes using a pedagogical approach, and it uses both classified and misclassified data to find the data ranges of significant attributes in their respective classes, which is the innovation of the proposed algorithm. The experimental results clearly show that the performance of the proposed algorithm is superior to that of existing algorithms.
22

Duan, Hong Yan, You Tang Li, and Shuai Tan. "Application of ANN Back-Propagation for Fracture Design Parameters of Medium Carbon Steel in Extra-Low Cycle Axial Fatigue Loading." Key Engineering Materials 345-346 (August 2007): 445–48. http://dx.doi.org/10.4028/www.scientific.net/kem.345-346.445.

Abstract:
The fracture behaviour of medium carbon steel under extra-low cycle axial fatigue loading was studied using an artificial neural network. Experimental data were used to form the training set of the network, and the resulting model compared very well with the experimental results; the fracture design parameters predicted by the trained model appear more reasonable than those obtained with approximate methods. The artificial neural network model is introduced first, and the training data for its development were obtained from the experiments. The notch depth and notch tip radius were used as input parameters and the number of cycles to fracture as the output during network training. The network was developed using a back-propagation architecture with three layers and jump connections, in which every layer is connected to every previous layer, and the number of hidden neurons was determined according to a special formula. Finally, the performance of the system is summarized: the trained model performs well, and the experimental data and the data predicted by the artificial neural network are in good agreement.
23

Samimi, Ahmad Jafari. "Forecasting Government Size in Iran Using Artificial Neural Network." Journal of Economics and Behavioral Studies 3, no. 5 (November 15, 2011): 274–78. http://dx.doi.org/10.22610/jebs.v3i5.280.

Abstract:
In this study, an artificial neural network (ANN) is applied to forecasting government size in Iran. The purpose of the study is to compare the effect of various architectures, transfer functions, and learning algorithms on the operation of the network; for this purpose, annual data from 1971-2007 for the selected variables are used. The variables are tax income, oil revenue, population, openness, government expenditure, GDP, and GDP per capita, selected on the basis of economic theory. The results show that networks with different training algorithms and transfer functions give different results. The best architecture is a network with two hidden layers and twelve (12) neurons in the hidden layers, with a hyperbolic tangent transfer function in both the hidden and output layers and a quasi-Newton training algorithm. Based on these findings, users of neural networks must be careful in selecting the architecture, transfer function, and training algorithm.
24

Akdeniz, Esra, Erol Egrioglu, Eren Bas, and Ufuk Yolcu. "An ARMA Type Pi-Sigma Artificial Neural Network for Nonlinear Time Series Forecasting." Journal of Artificial Intelligence and Soft Computing Research 8, no. 2 (April 1, 2018): 121–32. http://dx.doi.org/10.1515/jaiscr-2018-0009.

Abstract:
Real-life time series have complex and non-linear structures, and artificial neural networks have frequently been used in the literature to analyze non-linear time series. High-order artificial neural networks, compared with other artificial neural network types, are more adaptable to the data because of their expandable model order. In this paper, a new recurrent architecture for Pi-Sigma artificial neural networks is proposed, and a learning algorithm based on particle swarm optimization is used as a tool for training the proposed neural network. The proposed high-order artificial neural network is applied to three real-life time series data sets, and a simulation study is also performed on an Istanbul Stock Exchange data set.
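For context, a Pi-Sigma unit (shown here in a generic, non-recurrent form that is our own simplification, not the recurrent architecture proposed in the paper) multiplies the outputs of several linear summing units before a final nonlinearity; the weights would be found by an optimizer such as PSO.

```python
import numpy as np

def pi_sigma_output(W, b, x):
    # W: (k, n_inputs) weights of k summing (sigma) units; b: (k,) biases.
    # The product (pi) unit multiplies the k linear sums, then a sigmoid is applied.
    sums = W @ x + b          # outputs of the sigma layer
    net = np.prod(sums)       # product unit
    return 1.0 / (1.0 + np.exp(-net))
```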
25

KONOVALOV, S. "FEATURES OF DIAGNOSTIC ARTIFICIAL NEURAL NETWORKS FOR HYBRID EXPERT SYSTEMS." Digital Technologies 26 (2019): 36–46. http://dx.doi.org/10.33243/2313-7010-26-36-46.

Abstract:
This article investigates various methods of constructing an artificial neural network as one of the components of a hybrid expert system for diagnosis. A review of the foreign literature of recent years is conducted, in which hybrid expert systems are considered an integral part of complex technical systems in the field of safety. The advantages and disadvantages of artificial neural networks are listed, and the main problems in creating hybrid expert systems for diagnostics are indicated, demonstrating the relevance of further development of artificial neural networks for hybrid expert systems. Approaches to the analysis of natural-language sentences, used by hybrid expert systems with artificial neural networks, are considered. A bulletin board is shown and its structure and principle of operation are described: the board is divided into levels and sublevels, a confidence factor is applied at the sublevels, the dependence of the confidence factor on the fulfillment of particular conditions is shown, and the links between the levels and sublevels are described. The "key-threshold" model is used as the artificial neural network architecture, and the rule of neuron operation is shown. In addition, the network can be trained through a penalty mechanism whose value is calculated depending on the emergency situation. The behavior of a complex technical system, as well as its faulty states, is modeled using a model that describes the structure and behavior of the system. To optimize the data of a complex technical system, an evolutionary algorithm is used to minimize the objective function; solutions of the optimization problem form Pareto solution vectors, and the optimization and training tasks are solved using a Hopfield network. Overall, the hybrid expert system is described using semantic networks consisting of vertices and edges. The reference model of the complex technical system is stored in the knowledge base and updated as new knowledge is acquired. In an emergency, or when there are signs of one, neural networks are used to search for the cause and for the control action necessary to eliminate the accident. The considered approaches, interacting with each other, can improve the operation of diagnostic artificial neural networks in emergency management, providing more accurate data in a short time. In addition, the use of such a network for analyzing the state of health and for forecasting based on diagnostic data is presented using the example of a complex technical system.
26

Hertz, J. A., T. W. Kjær, E. N. Eskandar, and B. J. Richmond. "MEASURING NATURAL NEURAL PROCESSING WITH ARTIFICIAL NEURAL NETWORKS." International Journal of Neural Systems 03, supp01 (January 1992): 91–103. http://dx.doi.org/10.1142/s0129065792000425.

Abstract:
We show how to use artificial neural networks as a quantitative tool in studying real neuronal processing in the monkey visual system. Training a network to classify neuronal signals according to the stimulus that elicited them permits us to calculate the information transmitted by these signals. We illustrate this for neurons in the primary visual cortex with measurements of the information transmitted about visual stimuli and for cells in inferior temporal cortex with measurements of information about behavioral context. For the latter neurons we also illustrate how artificial neural networks can be used to model the computation they do.
27

Manjula Devi, R., S. Kuppuswami, and R. C. Suganthe. "Fast Linear Adaptive Skipping Training Algorithm for Training Artificial Neural Network." Mathematical Problems in Engineering 2013 (2013): 1–9. http://dx.doi.org/10.1155/2013/346949.

Abstract:
Artificial neural networks have been used extensively as trained models for solving pattern recognition tasks. However, training a complex neural network on a very large training data set requires an excessively long training time. In this correspondence, a new fast Linear Adaptive Skipping Training (LAST) algorithm for training artificial neural networks (ANNs) is introduced. The core idea of this paper is to improve the training speed of an ANN by presenting only the input samples that were not classified perfectly in the previous epoch, thereby dynamically reducing the number of input samples presented to the network at every single epoch without affecting the network's accuracy. Decreasing the size of the training set in this way reduces the training time and hence improves the training speed. The LAST algorithm also determines how many epochs a particular input sample has to skip, depending on the successful classification of that sample, and it can be incorporated into any supervised training algorithm. Experimental results show that the training speed attained by the LAST algorithm is considerably higher than that of other conventional training algorithms.
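One simplified reading of the skipping idea (a sketch under our own assumptions, not the published LAST algorithm) keeps a per-sample skip counter: a sample classified correctly sits out a number of subsequent epochs proportional to its streak of correct classifications, so each epoch presents fewer samples to the network.

```python
import numpy as np

def skipping_training(X, y, predict, update, epochs=50, skip_factor=2):
    """Generic training loop that skips well-learned samples for a few epochs.
    predict(x) -> class label; update(x, t) adjusts the model in place."""
    skip_until = np.zeros(len(X), dtype=int)   # epoch index until which a sample is skipped
    streak = np.zeros(len(X), dtype=int)       # consecutive correct classifications
    for epoch in range(epochs):
        for i, (x, t) in enumerate(zip(X, y)):
            if epoch < skip_until[i]:
                continue                       # sample sits this epoch out
            if predict(x) == t:
                streak[i] += 1
                skip_until[i] = epoch + skip_factor * streak[i]
            else:
                streak[i] = 0
                update(x, t)                   # only misclassified samples trigger learning
```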
28

Pomerleau, Dean A. "Efficient Training of Artificial Neural Networks for Autonomous Navigation." Neural Computation 3, no. 1 (February 1991): 88–97. http://dx.doi.org/10.1162/neco.1991.3.1.88.

Abstract:
The ALVINN (Autonomous Land Vehicle In a Neural Network) project addresses the problem of training artificial neural networks in real time to perform difficult perception tasks. ALVINN is a backpropagation network designed to drive the CMU Navlab, a modified Chevy van. This paper describes the training techniques that allow ALVINN to learn in under 5 minutes to autonomously control the Navlab by watching the reactions of a human driver. Using these techniques, ALVINN has been trained to drive in a variety of circumstances including single-lane paved and unpaved roads, and multilane lined and unlined roads, at speeds of up to 20 miles per hour.
29

Katerynych, L., M. Veres, and E. Safarov. "Neural networks’ learning process acceleration." PROBLEMS IN PROGRAMMING, no. 2-3 (September 2020): 313–21. http://dx.doi.org/10.15407/pp2020.02-03.313.

Abstract:
This study is devoted to evaluating the training process of a parallel system in the form of an artificial neural network that is built using a genetic algorithm. The methods used to achieve this goal are computer simulation of a neural network on multi-core CPUs and a genetic algorithm for finding the weights of the artificial neural network. The performance of the sequential and parallel training processes of the artificial neural network is compared.
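As a bare-bones illustration of searching network weights with a genetic algorithm (our sketch, not the system evaluated in the paper), the population holds candidate weight vectors and evolves them by selection, crossover, and mutation; the loss function, population size, and mutation scale are arbitrary assumptions.

```python
import numpy as np

def ga_train_weights(loss, dim, pop_size=40, gens=100, mut_scale=0.1, seed=0):
    """loss(w) -> scalar training error for a flattened weight vector w of length dim."""
    rng = np.random.default_rng(seed)
    pop = rng.normal(size=(pop_size, dim))
    for _ in range(gens):
        fitness = np.array([loss(w) for w in pop])
        parents = pop[np.argsort(fitness)[: pop_size // 2]]        # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = parents[rng.integers(len(parents), size=2)]
            mask = rng.random(dim) < 0.5                           # uniform crossover
            child = np.where(mask, p1, p2)
            child = child + rng.normal(scale=mut_scale, size=dim)  # Gaussian mutation
            children.append(child)
        pop = np.vstack([parents, np.array(children)])
    return pop[np.argmin([loss(w) for w in pop])]
```

The per-individual loss evaluations are independent, which is what makes this style of training straightforward to parallelise across CPU cores, the comparison the study reports.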
30

Hua Ang, Ji, Sheng-Uei Guan, Kay Chen Tan, and Abdullah Al Mamun. "Interference-less neural network training." Neurocomputing 71, no. 16-18 (October 2008): 3509–24. http://dx.doi.org/10.1016/j.neucom.2007.10.012.

31

Javed, Abbas, Hadi Larijani, Ali Ahmadinia, and Rohinton Emmanuel. "RANDOM NEURAL NETWORK LEARNING HEURISTICS." Probability in the Engineering and Informational Sciences 31, no. 4 (May 22, 2017): 436–56. http://dx.doi.org/10.1017/s0269964817000201.

Abstract:
The random neural network (RNN) is a probabilistic, queueing-theory-based model for artificial neural networks, and it requires the use of optimization algorithms for training. Commonly used gradient descent learning algorithms may get stuck in local minima; evolutionary algorithms can be used to avoid them. Other techniques such as artificial bee colony (ABC), particle swarm optimization (PSO), and differential evolution algorithms also perform well in finding the global minimum, but they converge slowly. The sequential quadratic programming (SQP) optimization algorithm can find the optimum neural network weights, but it too can get stuck in local minima. We propose to overcome the shortcomings of these various approaches by using hybridized ABC/PSO and SQP. The resulting algorithm is shown to compare favorably with other known techniques for training the RNN. The results show that hybrid ABC learning with SQP outperforms the other training algorithms in terms of mean-squared error and normalized root-mean-squared error.
32

Meier, Roger W., Don R. Alexander, and Reed B. Freeman. "Using Artificial Neural Networks as a Forward Approach to Backcalculation." Transportation Research Record: Journal of the Transportation Research Board 1570, no. 1 (January 1997): 126–33. http://dx.doi.org/10.3141/1570-15.

Abstract:
In recent years, artificial neural networks have successfully been trained to backcalculate pavement layer moduli from the results of falling weight deflectometer (FWD) tests. These neural networks provide the same solutions as existing programs, only thousands of times faster. Unfortunately, their use is constrained to the test conditions assumed during network training. These limitations arise from practical aspects of neural network training and cannot be circumvented easily. The goal of this research was to develop a backcalculation program combining the speed of neural networks and the flexibility of conventional programs to produce the same solutions as existing programs. This was accomplished by forgoing neural network backcalculation in favor of neural network forward-calculation, that is, using neural networks in place of complex numerical models for computing the forward-problem solutions used by the conventional backcalculation programs. A suite of neural networks, covering a range of flexible pavement structures, was trained using data generated by WESLEA, the forward-problem solver used in the WESDEF backcalculation program. When tested on 110 experimental FWD results, a version of WESDEF augmented by the neural networks provided statistically identical answers 42 times faster, on average, than the original. Provisions have been made for periodic upgrades as additional networks are trained for other pavement types and test conditions. Meanwhile, the original WESLEA can still be used when an appropriate network is unavailable. This preserves the flexibility of the original program while taking maximum advantage of the speed gains afforded by the neural networks.
33

Ni, S. H., C. H. Juang, and P. C. Lu. "Estimation of Dynamic Properties of Sand Using Artificial Neural Networks." Transportation Research Record: Journal of the Transportation Research Board 1526, no. 1 (January 1996): 1–5. http://dx.doi.org/10.1177/0361198196152600101.

Abstract:
Dynamic properties of soils are usually determined by time-consuming laboratory tests. This study presents a method for estimating dynamic soil parameters using artificial neural networks. A simple feedforward neural network with a back-propagation training algorithm is used. The neural network is trained with actual laboratory data consisting of six input variables: the standard penetration test value, the void ratio, the unit weight, the water content, the effective overburden pressure, and the mean effective confining pressure. The output layer consists of a single neuron representing shear modulus or damping ratio. The results of neural network training and testing show that prediction of shear modulus by the neural network approach is reliable, although the approach is less successful in predicting damping ratio.
34

Tripathi, Kshitij, Rajendra G. Vyas, and Anil K. Gupta. "Document Classification Using Artificial Neural Network." Asian Journal of Computer Science and Technology 8, no. 2 (May 5, 2019): 55–58. http://dx.doi.org/10.51983/ajcst-2019.8.2.2140.

Abstract:
Document classification is a field of data mining in which the data are represented in a bag-of-words (BoW), or document vector, format, and the task is to build a machine that, after successfully learning the characteristics of a given data set, predicts the category to which a document's word vector belongs. In this approach a document is represented by a BoW in which every word occurring in the document is used as a feature. This article presents an artificial neural network approach that is a hybrid of n-fold cross-validation and the training-validation-test approach for data classification.
35

Zhou, Shi Guan, Guo Jun Li, Zai Fei Luo, and Yan Zheng. "Analog Circuit Fault Diagnosis Based on LVQ Neural Network." Applied Mechanics and Materials 380-384 (August 2013): 828–32. http://dx.doi.org/10.4028/www.scientific.net/amm.380-384.828.

Abstract:
As an application of artificial intelligence techniques in the field of analog circuit fault diagnosis, intelligent fault diagnosis systems based on artificial neural networks have achieved a certain amount of success in practice. However, because a neural network needs its samples to be normalized before training, the time required for fault diagnosis is prolonged, which limits the actual use of such diagnosis systems. A characteristic of the LVQ (learning vector quantization) network is that it does not need normalization or other preprocessing of the training samples, thereby reducing the training time of the neural network. In this paper, the structure and training methods of the LVQ neural network are presented and the specific implementation of the diagnosis system is illustrated with examples. Simulation results show that the mathematical model has a good diagnostic effect; compared with other methods, this diagnostic method is simple and practical, and its structure and approach have broad application prospects.
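For orientation, the core LVQ1 competitive update (a textbook form, not code from the paper) moves the winning prototype toward a training sample of the same class and away from one of a different class; as the abstract stresses, the raw features can be used without normalization.

```python
import numpy as np

def lvq1_epoch(prototypes, proto_labels, X, y, lr=0.05):
    # prototypes: (n_prototypes, n_features) float array; proto_labels: (n_prototypes,)
    for x, t in zip(X, y):
        j = np.argmin(np.linalg.norm(prototypes - x, axis=1))  # winning prototype
        if proto_labels[j] == t:
            prototypes[j] += lr * (x - prototypes[j])   # attract toward same-class sample
        else:
            prototypes[j] -= lr * (x - prototypes[j])   # repel from other-class sample
    return prototypes
```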
36

Amran, Ikhwan Muzammil, and Anas Fathul Ariffin. "Forecasting Malaysian Exchange Rate using Artificial Neural Network." Jurnal Intelek 15, no. 2 (July 28, 2020): 136–45. http://dx.doi.org/10.24191/ji.v15i2.323.

Abstract:
In today's fast-paced global economy, accuracy in forecasting the foreign exchange rate, or at least predicting its trend, is critical for any future business. The use of computational-intelligence-based techniques for forecasting has proved successful for quite some time. This study presents a computational approach for forecasting the foreign exchange rate in Kuala Lumpur for the Malaysian Ringgit against the US Dollar. A neural-network-based model is used to forecast the exchange rate days ahead. The aims of this research are to predict the exchange rate using an artificial neural network and to determine the practicality of the model. The Alyuda NeuroIntelligence software was utilized to analyze the data and make predictions. After the data had been processed and the candidate network structures compared with each other, the 2-4-1 network was chosen because it outperformed the other networks; this selection criterion is based on the Akaike Information Criterion (AIC), for which it showed the lowest value of all. The training algorithm applied is quasi-Newton, chosen on the basis of the lowest recorded absolute training error. The experimental results demonstrate that an artificial-neural-network-based model can closely predict the future exchange rate.
37

Bas, Eren. "The Training Of Multiplicative Neuron Model Based Artificial Neural Networks With Differential Evolution Algorithm For Forecasting." Journal of Artificial Intelligence and Soft Computing Research 6, no. 1 (January 1, 2016): 5–11. http://dx.doi.org/10.1515/jaiscr-2016-0001.

Abstract:
In recent years, artificial neural networks have been commonly used for time series forecasting by researchers from various fields. There are several types of artificial neural networks, and the feedforward artificial neural network model is one of them. Although feedforward artificial neural networks give successful forecasting results, they have a basic problem: the architecture selection problem. In order to eliminate this problem, Yadav et al. (2007) proposed the multiplicative neuron model artificial neural network. In this study, a differential evolution algorithm is proposed for training the multiplicative neuron model for forecasting. The proposed method is applied to two well-known real-world time series data sets.
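A minimal sketch of the combination described here (ours, with arbitrary differential evolution settings): a single multiplicative neuron computes the product of weighted inputs, and DE/rand/1/bin searches for the weight and bias vector that minimizes the one-step-ahead forecast error on scaled data.

```python
import numpy as np

def mnm_forecast(params, x):
    # Single multiplicative neuron: output = sigmoid( prod_j (w_j * x_j + b_j) )
    n = len(x)
    w, b = params[:n], params[n:]
    net = np.prod(w * x + b)
    return 1.0 / (1.0 + np.exp(-net))

def de_train(X, y, pop_size=30, F=0.8, CR=0.9, gens=200, seed=0):
    """Differential evolution (DE/rand/1/bin) over the 2 * n_lags neuron parameters."""
    rng = np.random.default_rng(seed)
    dim = 2 * X.shape[1]
    cost = lambda p: np.mean([(mnm_forecast(p, x) - t) ** 2 for x, t in zip(X, y)])
    pop = rng.uniform(-1, 1, size=(pop_size, dim))
    fitness = np.array([cost(p) for p in pop])
    for _ in range(gens):
        for i in range(pop_size):
            a, b_, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
            mutant = a + F * (b_ - c)                       # mutation
            trial = np.where(rng.random(dim) < CR, mutant, pop[i])  # binomial crossover
            f_trial = cost(trial)
            if f_trial < fitness[i]:                        # greedy selection
                pop[i], fitness[i] = trial, f_trial
    return pop[np.argmin(fitness)]
```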
38

Aljarrah, Mohammad Fuad, Mohammad Ali Khasawneh, Aslam Ali Al-Omari, and Mohammad Emad Alshorman. "Prediction of Bending Beam Rheometer Test Outputs Using Artificial Neural Networks." Key Engineering Materials 821 (September 2019): 500–505. http://dx.doi.org/10.4028/www.scientific.net/kem.821.500.

Abstract:
The major objective of this study is to investigate the possibility of using artificial neural networks to create prediction models capable of estimating Bending Beam Rheometer (BBR) outputs, namely creep stiffness and m-value, based on test temperature, modifier content (in our case waste vegetable oil), and testing time interval. A feedforward backpropagation neural network with the Bayesian regularization training algorithm and an SSE performance function was implemented. The neural network model was found to show high predictive power, with training and testing performance of 99.8% and 99.2%, respectively. Plots of laboratory-obtained values against neural network predictions were also considered, and a strong correlation between the two methods was found. Therefore, it is reasonable to state that using neural networks to build prediction models for BBR test values is justified.
39

Zhou, P., and J. Austin. "Learning criteria for training neural network classifiers." Neural Computing & Applications 7, no. 4 (December 1998): 334–42. http://dx.doi.org/10.1007/bf01428124.

40

Yan, Peizhi, and Yi Feng. "Using Convolution and Deep Learning in Gomoku Game Artificial Intelligence." Parallel Processing Letters 28, no. 03 (September 2018): 1850011. http://dx.doi.org/10.1142/s0129626418500111.

Abstract:
Gomoku is an ancient board game. The traditional approach to solving Gomoku is to apply tree search to a Gomoku game tree. Although the rules of Gomoku are straightforward, the game tree complexity is enormous. Unlike many other board games such as chess and Shogi, the Gomoku board state is more intuitive; that is to say, analyzing the visual patterns on the board is fundamental to playing the game. In this paper, we designed a deep convolutional neural network model to help the machine learn from training data collected from human players. Based on this original neural network model, we made some changes and obtained two variant neural networks, and we compared the performance of the original network with its variants in our experiments. Our original neural network model achieved 69% accuracy on the training data and 38% accuracy on the testing data. Because the decisions made by the neural network are intuitive, we also designed a hard-coded convolution-based Gomoku evaluation function to assist the network in making decisions. This hybrid Gomoku artificial intelligence (AI) further improved the performance of a pure neural-network-based Gomoku AI.
41

Jang, Ilsik, Seeun Oh, Yumi Kim, Changhyup Park, and Hyunjeong Kang. "Well-placement optimisation using sequential artificial neural networks." Energy Exploration & Exploitation 36, no. 3 (September 6, 2017): 433–49. http://dx.doi.org/10.1177/0144598717729490.

Abstract:
In this study, a new algorithm is proposed that employs artificial neural networks in a sequential manner, termed the sequential artificial neural network, to obtain a global solution for optimizing the drilling location in oil or gas reservoirs. The sequential artificial neural network is used to successively narrow the search space so that the global solution can be obtained efficiently. When each artificial neural network is trained, a pre-defined amount of data from the new search space is added to the training dataset to improve estimation performance. When the size of the search space meets a stopping criterion, reservoir simulations are performed for the data in the search space, and a global solution is determined from the simulation results. The proposed method was applied to optimise a horizontal well placement in a coalbed methane reservoir. The results show superior optimisation performance while significantly reducing the number of simulations compared with the particle swarm optimisation algorithm.
42

Amalia, Sitti. "Identification Of Number Using Artificial Neural Network Backpropagation." MATEC Web of Conferences 215 (2018): 01011. http://dx.doi.org/10.1051/matecconf/201821501011.

Abstract:
This research proposes the design and implementation of a voice pattern recognition system for spoken numbers with offline pronunciation. Artificial intelligence with the backpropagation algorithm was used in the simulation test. The test was carried out on 100 voice files obtained from 10 speakers pronouncing 10 different numbers, the words consisting of the numbers 0 to 9. The trials varied artificial neural network parameters such as the tolerance value and the number of neurons. The best result was obtained when the tolerance value was varied and the number of neurons was fixed: the recognition rates of the network with the optimal architecture and network parameters were 82.2% for the training data and 53.3% for new data. If, instead, the tolerance value was fixed and the number of neurons varied, the network gave 82.2% for the training data and 54.4% for new data.
43

Akkar, Hanan A. R., and Faris B. Ali Jasim. "Intelligent Training Algorithm for Artificial Neural Network EEG Classifications." International Journal of Intelligent Systems and Applications 10, no. 5 (May 8, 2018): 33–41. http://dx.doi.org/10.5815/ijisa.2018.05.04.

44

WANG, Gai-Liang, and Yan WU. "TRAINING ARTIFICIAL NEURAL NETWORK BY INVADING ADAPTIVE GENETIC ALGORITHM." JOURNAL OF INFRARED AND MILLIMETER WAVES 29, no. 2 (May 6, 2010): 136–39. http://dx.doi.org/10.3724/sp.j.1010.2010.00136.

45

Hommertzheim, Don, John Huffman, and Ihsan Sabuncuoglu. "Training an artificial neural network the pure pursuit maneuver." Computers & Operations Research 18, no. 4 (January 1991): 343–53. http://dx.doi.org/10.1016/0305-0548(91)90095-9.

46

Scott, C. B., and Eric Mjolsness. "Multilevel Artificial Neural Network Training for Spatially Correlated Learning." SIAM Journal on Scientific Computing 41, no. 5 (January 2019): S297—S320. http://dx.doi.org/10.1137/18m1191506.

47

Havryliuk, Volodymyr. "Artificial neural network based detection of neutral relay defects." MATEC Web of Conferences 294 (2019): 03001. http://dx.doi.org/10.1051/matecconf/201929403001.

Abstract:
The problem considered in this work is the automatic detection and identification of defects in a neutral relay. The special design of electromechanical neutral relays is responsible for the strong asymmetry of their output signal under all possible safety-critical influences, and therefore neutral relays have a negligible rate of dangerous failures. To ensure the safe operation of relay-based train control systems, electromechanical relays must be periodically subjected to routine maintenance, during which their main operating parameters are measured and the relays are set up in accordance with technical regulations. These measurements are mainly done manually, so they take a lot of time (up to four hours per relay), are expensive, and give subjective results. In recent years, fault diagnosis methods based on artificial neural networks (ANNs) have received considerable attention. ANN-based classification of relay defects using the time dependence of the transient current in the relay coil during switching is very promising for practical use, but the efficient use of an ANN requires a lot of data to train it. To reduce the ANN training time, pre-processing of the time dependence of the relay transient current using the wavelet transform and wavelet energy entropy was proposed, which makes it possible to reveal the features of the main defects of the relay armature, contact springs, and magnetic system. The effectiveness of the proposed approach for automatically detecting and identifying neutral relay defects was confirmed by testing relays with various artificially created defects.
48

SONG, YANGPO, and XIAOQI PENG. "MODELING METHOD USING COMBINED ARTIFICIAL NEURAL NETWORK." International Journal of Computational Intelligence and Applications 10, no. 02 (June 2011): 189–98. http://dx.doi.org/10.1142/s1469026811003057.

Abstract:
To improve the modeling performance of artificial neural networks (ANNs), in particular accuracy and robustness, a new combined ANN and a corresponding optimal modeling method are proposed in this paper. The combined ANN consists of two parallel sub-networks, and methods such as "early stopping" and "data resampling" are used jointly in the training process to reduce the sensitivity of the modeling performance to the network structure. To achieve better performance, the structure of the combined ANN is adjusted dynamically according to the expected error and the real error. Simulation results verify that the optimal modeling method using the combined ANN achieves much better performance than the traditional method.
49

Bogomaz, O., M. Shulha, D. Kotov, A. Koloskov, and A. Zalizovski. "An artificial neural network for analysis of ionograms obtained by ionosonde at the Ukrainian Antarctic Akademik Vernadsky station." Ukrainian Antarctic Journal, no. 2 (December 2020): 59–67. http://dx.doi.org/10.33275/1727-7485.2.2020.653.

Abstract:
The article presents an artificial neural network developed for scaling F2 ionospheric layer traces on ionograms obtained with the IPS-42 ionosonde installed at the Ukrainian Antarctic Akademik Vernadsky station. The parameters of the IPS-42 ionosonde and the features of the data obtained with it, in particular the format of the output files, are presented, and the advantages of using an artificial neural network for identifying traces on ionograms are demonstrated. Automatic scaling of ionograms usually requires a lot of machine time, whereas implementing an artificial neural network speeds up the computations significantly, allowing incoming ionograms to be processed even in real time. The choice of network architecture is substantiated: the U-Net architecture was chosen. The method of creating and training the neural network is described; the development process included choosing the number of layers, the types of activation functions, the optimization method, and the input layer size. The software was written in the Python programming language using the Keras library. Examples of the data used for training are shown, and the results of testing the artificial neural network are presented, with the data obtained by the network compared against the results of manual processing of ionograms. The training data were obtained in March 2017 with the IPS-42 ionosonde at the Akademik Vernadsky station; the test data were obtained in 2017 and 2020. The developed artificial neural network has minor flaws, but they can easily be eliminated by retraining the network on a more representative dataset obtained in various years and seasons. The overall test results indicate good prospects for further development of this artificial neural network and the software for working with it.
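Purely to illustrate the kind of architecture the article names (a U-Net built with Keras in Python), here is a heavily trimmed sketch; the input dimensions, channel counts, and loss are our assumptions, not the station's actual network.

```python
from tensorflow import keras
from tensorflow.keras import layers

def tiny_unet(input_shape=(256, 256, 1)):
    # Minimal U-Net-style encoder/decoder with a single skip connection.
    inputs = keras.Input(shape=input_shape)
    c1 = layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
    p1 = layers.MaxPooling2D(2)(c1)                          # downsample
    c2 = layers.Conv2D(32, 3, padding="same", activation="relu")(p1)
    u1 = layers.UpSampling2D(2)(c2)                          # upsample back
    m1 = layers.concatenate([u1, c1])                        # skip connection
    c3 = layers.Conv2D(16, 3, padding="same", activation="relu")(m1)
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c3)  # per-pixel trace mask
    return keras.Model(inputs, outputs)

model = tiny_unet()
model.compile(optimizer="adam", loss="binary_crossentropy")
```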
50

Tutumluer, Erol, and Roger W. Meier. "Attempt at Resilient Modulus Modeling Using Artificial Neural Networks." Transportation Research Record: Journal of the Transportation Research Board 1540, no. 1 (January 1996): 1–6. http://dx.doi.org/10.1177/0361198196154000101.

Abstract:
The pitfalls inherent in the indiscriminate application of artificial neural networks to numerical modeling problems are illustrated. An example is used of an apparently successful (but ultimately unsuccessful) attempt at training a neural network constitutive model for computing the resilient modulus of gravels as a function of stress state and various material properties. Issues such as the quantity and quality of data needed to successfully train a neural network are explored, and the importance of an independent test set to verify network performance is examined.