Academic literature on the topic 'Artificial Neural Network Training'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Artificial Neural Network Training.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Artificial Neural Network Training"

1

Bui, Ngoc Tam, and Hiroshi Hasegawa. "Training Artificial Neural Network Using Modification of Differential Evolution Algorithm." International Journal of Machine Learning and Computing 5, no. 1 (February 2015): 1–6. http://dx.doi.org/10.7763/ijmlc.2015.v5.473.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Didmanidze, I. Sh, G. A. Kakhiani, and D. Z. Didmanidze. "TRAINING OF ARTIFICIAL NEURAL NETWORK." Journal of Numerical and Applied Mathematics, no. 1 (135) (2021): 110–14. http://dx.doi.org/10.17721/2706-9699.2021.1.14.

Full text
Abstract:
The methodology of neural networks is increasingly applied in management and decision-making tasks, including in the sphere of trade and finance. Neural networks are based on nonlinear adaptive systems, which have proved effective in solving forecasting problems.
APA, Harvard, Vancouver, ISO, and other styles
3

Cavallaro, Lucia, Ovidiu Bagdasar, Pasquale De Meo, Giacomo Fiumara, and Antonio Liotta. "Artificial neural networks training acceleration through network science strategies." Soft Computing 24, no. 23 (September 9, 2020): 17787–95. http://dx.doi.org/10.1007/s00500-020-05302-y.

Full text
Abstract:
The development of deep learning has led to a dramatic increase in the number of applications of artificial intelligence. However, the training of deeper neural networks for stable and accurate models translates into artificial neural networks (ANNs) that become unmanageable as the number of features increases. This work extends our earlier study where we explored the acceleration effects obtained by enforcing, in turn, scale freeness, small worldness, and sparsity during the ANN training process. The efficiency of that approach was confirmed by recent studies (conducted independently) where a million-node ANN was trained on non-specialized laptops. Encouraged by those results, our study is now focused on some tunable parameters, to pursue a further acceleration effect. We show that, although optimal parameter tuning is unfeasible, due to the high non-linearity of ANN problems, we can actually come up with a set of useful guidelines that lead to speed-ups in practical cases. We find that significant reductions in execution time can generally be achieved by setting the revised fraction parameter (ζ) to relatively low values.
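For readers new to this line of work, the sketch below (Python/NumPy; not the authors' code) illustrates the kind of sparsity-enforcing step such training schemes use: a fraction ζ of the smallest-magnitude active weights is pruned each epoch and the same number of random connections is regrown. The function name, the single-layer setting, and the 10% density are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def prune_and_regrow(weights, mask, zeta=0.3):
    # Illustrative rewiring step (an assumption, not the paper's exact rule):
    # drop the fraction `zeta` of active weights closest to zero, then regrow
    # the same number of connections at random inactive positions.
    active = np.flatnonzero(mask)
    k = int(zeta * active.size)
    if k == 0:
        return weights, mask
    smallest = active[np.argsort(np.abs(weights.flat[active]))[:k]]
    mask.flat[smallest] = False
    weights.flat[smallest] = 0.0
    inactive = np.flatnonzero(~mask)
    reborn = rng.choice(inactive, size=k, replace=False)
    mask.flat[reborn] = True
    weights.flat[reborn] = rng.normal(scale=0.1, size=k)
    return weights, mask

# Toy usage: a 64x32 layer kept at roughly 10% density across rewiring steps.
w = rng.normal(scale=0.1, size=(64, 32))
m = rng.random((64, 32)) < 0.10
w *= m
for epoch in range(5):
    # ... gradient updates on the active weights would go here ...
    w, m = prune_and_regrow(w, m, zeta=0.3)
print("density:", m.mean())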
APA, Harvard, Vancouver, ISO, and other styles
4

Sagala, Noviyanti, Cynthia Hayat, and Frahselia Tandipuang. "Identification of fat-soluble vitamins deficiency using artificial neural network." Jurnal Teknologi dan Sistem Komputer 8, no. 1 (October 17, 2019): 6–11. http://dx.doi.org/10.14710/jtsiskom.8.1.2020.6-11.

Full text
Abstract:
Fat-soluble vitamin (A, D, E, K) deficiencies remain frequent worldwide and may have serious adverse consequences, with symptoms that appear slowly and intensify over time. Detecting a vitamin deficiency requires an experienced physician to notice the symptoms and to review the results of an expensive blood test. This research aims to create an early detection system for fat-soluble vitamin deficiency using a back-propagation artificial neural network. The method was implemented by converting deficiency-symptom data into training data, used to produce the ANN weights, and testing data. We employed gradient descent for training and Logsig as the activation function. The training and test data comprised 71 and 30 cases, respectively. The best architecture achieved an accuracy of 95% with a parameter combination of 150 hidden neurons, 10000 epochs, an error target of 0.0001, and a learning rate of 0.25.
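As a concrete illustration of the training setup this abstract describes, here is a minimal back-propagation sketch in Python/NumPy with a logistic-sigmoid (Logsig) activation, a 0.25 learning rate, a 0.0001 error goal, and a 10000-epoch cap; the synthetic symptom data and the 15-neuron hidden layer are assumptions made only to keep the example self-contained.

import numpy as np

rng = np.random.default_rng(1)

def logsig(a):
    return 1.0 / (1.0 + np.exp(-a))     # logistic sigmoid ("Logsig")

# Synthetic stand-in for the symptom vectors (the study used 71 training
# cases): 10 binary symptoms -> deficiency yes/no.
X = rng.integers(0, 2, size=(71, 10)).astype(float)
y = (X.sum(axis=1) > 5).astype(float).reshape(-1, 1)

W1 = rng.normal(scale=0.5, size=(10, 15)); b1 = np.zeros(15)
W2 = rng.normal(scale=0.5, size=(15, 1)); b2 = np.zeros(1)
lr, goal = 0.25, 1e-4                   # learning rate and error target as reported

for epoch in range(10000):
    h = logsig(X @ W1 + b1)             # forward pass
    out = logsig(h @ W2 + b2)
    err = out - y
    if np.mean(err ** 2) < goal:
        break
    d_out = err * out * (1 - out)       # backpropagate through Logsig
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out / len(X)
    b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X)
    b1 -= lr * d_h.mean(axis=0)

print("epochs used:", epoch + 1, "final MSE:", np.mean(err ** 2))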
APA, Harvard, Vancouver, ISO, and other styles
5

Golubinskiy, Andrey, and Andrey Tolstykh. "Hybrid method of convolutional neural network training." Informatics and Automation 20, no. 2 (March 30, 2021): 463–90. http://dx.doi.org/10.15622/ia.2021.20.2.8.

Full text
Abstract:
The paper proposes a hybrid method for training convolutional neural networks. The method combines second- and first-order methods for different elements of the architecture of a convolutional neural network. The hybrid training method achieves significantly better convergence than Adam while requiring fewer computational operations to implement. Using the proposed method, it is possible to train networks on which learning paralysis occurs with first-order methods. Moreover, the proposed method can adjust its computational complexity to the hardware on which the computation is performed, while still allowing the mini-batch learning approach. An analysis of the ratio of computations between convolutional neural networks and fully connected artificial neural networks is presented. The mathematical apparatus of error optimization for artificial neural networks is considered, including error backpropagation and the Levenberg-Marquardt algorithm, and the main limitations of these methods that arise when training a convolutional neural network are analyzed. The stability of the proposed method under changes to the initialization parameters is analyzed, and results on the applicability of the method to various problems are presented.
APA, Harvard, Vancouver, ISO, and other styles
6

Gursoy, Osman, and Haidar Sharif. "Parallel computing for artificial neural network training." Periodicals of Engineering and Natural Sciences (PEN) 6, no. 1 (January 9, 2018): 1. http://dx.doi.org/10.21533/pen.v6i1.143.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Ding, Xue, and Hong Hong Yang. "A Study on the Image Classification Techniques Based on Wavelet Artificial Neural Network Algorithm." Applied Mechanics and Materials 602-605 (August 2014): 3512–14. http://dx.doi.org/10.4028/www.scientific.net/amm.602-605.3512.

Full text
Abstract:
With ever-changing educational information technology, it is a big problem for universities and colleges to classify the thousands of images produced during art examination marking. This paper explores the application of artificial intelligence techniques to classify a large number of images accurately within a limited time with the help of a computer; application to actual work shows that the proposed method is feasible. Artificial neural network training methods come in two main styles, Incremental Training and Batch Training, distinguished by how much of the training task the network handles per adjustment. In Incremental Training [1], the network adjusts its connection weights and thresholds once every time it receives an input vector and target vector; it is an online learning method. In Batch Training [2], the connections are not adjusted immediately but in bulk, after a given volume of input vectors and target vectors has been presented. Both training methods can be applied whether the neural network is static or dynamic, and an artificial neural network will produce different results with different training methods. When using artificial neural networks to solve specific problems, the learning method, training method, and artificial neural network function should be selected according to the question type and the specific requirements on the expected results [3-4]. The paper also covers the selection of wavelet neural network parameters and adaptive learning.
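The distinction the abstract draws can be made concrete with a short sketch. The following Python/NumPy fragment (an illustration, not the paper's code) trains the same toy single-neuron linear model both ways: incremental training updates the weights after every input/target pair, while batch training performs one bulk adjustment per pass over the whole set.

import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 3))           # input vectors
y = X @ np.array([0.5, -1.0, 2.0])      # targets of a toy linear task
lr = 0.01

# Incremental Training: adjust connection weights once per presented pattern.
w_inc = np.zeros(3)
for epoch in range(10):
    for x_i, y_i in zip(X, y):
        err = w_inc @ x_i - y_i
        w_inc -= lr * err * x_i         # immediate, online update

# Batch Training: accumulate the error over the set, then adjust in bulk.
w_bat = np.zeros(3)
for epoch in range(1000):
    err = X @ w_bat - y
    w_bat -= lr * (X.T @ err) / len(X)  # one bulk adjustment per pass

print(w_inc.round(2), w_bat.round(2))   # both approach [0.5, -1.0, 2.0]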
APA, Harvard, Vancouver, ISO, and other styles
8

Amali, D. Geraldine Bessie, and Dinakaran M. "A Review of Heuristic Global Optimization Based Artificial Neural Network Training Approaches." IAES International Journal of Artificial Intelligence (IJ-AI) 6, no. 1 (March 1, 2017): 26. http://dx.doi.org/10.11591/ijai.v6.i1.pp26-32.

Full text
Abstract:
Artificial Neural Networks have earned popularity in recent years because of their ability to approximate nonlinear functions. Training a neural network involves minimizing the mean square error between the target and network output. The error surface is nonconvex and highly multimodal, and finding the minimum of a multimodal function is an NP-complete problem that cannot be solved exactly in general. Thus the application to neural network training of heuristic global optimization algorithms that compute a good global minimum is of interest. This paper reviews the various heuristic global optimization algorithms used for training feedforward neural networks and recurrent neural networks. The training algorithms are compared in terms of the learning rate, convergence speed, and accuracy of the output produced by the neural network. The paper concludes by suggesting directions for novel ANN training algorithms based on recent advances in global optimization.
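To make the reviewed family of methods concrete, here is a minimal particle swarm optimization sketch in Python/NumPy (one representative heuristic; not from the paper) that searches the weight space of a tiny 2-2-1 network to minimize the mean square error on XOR. The swarm size, coefficients, and architecture are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(3)

# XOR: a classic multimodal training problem for a 2-2-1 tanh network.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def mse(w):
    # Mean square error of the 2-2-1 network encoded in the 9-vector w.
    W1, b1 = w[:4].reshape(2, 2), w[4:6]
    W2, b2 = w[6:8], w[8]
    h = np.tanh(X @ W1 + b1)
    return np.mean((h @ W2 + b2 - y) ** 2)

n, dim = 30, 9                            # swarm size, weights per particle
pos = rng.normal(size=(n, dim))
vel = np.zeros_like(pos)
pbest, pbest_err = pos.copy(), np.array([mse(p) for p in pos])
gbest = pbest[pbest_err.argmin()].copy()

for it in range(300):
    r1, r2 = rng.random((2, n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    errs = np.array([mse(p) for p in pos])
    better = errs < pbest_err
    pbest[better], pbest_err[better] = pos[better], errs[better]
    gbest = pbest[pbest_err.argmin()].copy()

print("final MSE on XOR:", mse(gbest))    # typically close to zero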
APA, Harvard, Vancouver, ISO, and other styles
9

Soylak, Mustafa, Tuğrul Oktay, and İlke Turkmen. "A simulation-based method using artificial neural networks for solving the inverse kinematic problem of articulated robots." Proceedings of the Institution of Mechanical Engineers, Part E: Journal of Process Mechanical Engineering 231, no. 3 (September 27, 2015): 470–79. http://dx.doi.org/10.1177/0954408915608755.

Full text
Abstract:
In our article, the inverse kinematic problem of a plasma cutting robot with three degrees of freedom is solved using artificial neural networks. The artificial neural network was trained on joint angle values corresponding to the Cartesian coordinates (x, y, z) of the end point of the robotic arm, using the Levenberg–Marquardt training algorithm. To validate the designed neural network, it was tested on a new test data set not used during training. A simulation was performed on a three-dimensional model in MSC.ADAMS software using the angle values obtained from the artificial neural network test. The simulation revealed that the trajectory of the plasma cutting torch obtained using the artificial neural network agreed well with the desired trajectory.
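The pipeline described here, generating Cartesian/joint-angle pairs and fitting a network with Levenberg–Marquardt, can be sketched as follows in Python (SciPy's least_squares with method="lm" wraps a Levenberg–Marquardt solver). The arm geometry, link lengths, joint-angle ranges, and 16-neuron hidden layer are invented for this illustration and are not the robot from the paper.

import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(4)

def fk(q):
    # Forward kinematics of an illustrative 3-DOF arm (l2 = 1.0, l3 = 0.5).
    t1, t2, t3 = q.T
    r = np.cos(t2) + 0.5 * np.cos(t2 + t3)
    return np.stack([np.cos(t1) * r, np.sin(t1) * r,
                     np.sin(t2) + 0.5 * np.sin(t2 + t3)], axis=-1)

# Training set: joint-angle samples and the end points they produce.
Q = rng.uniform([-1.2, -1.2, 0.2], [1.2, 1.2, 1.2], size=(300, 3))
P = fk(Q)

H = 16                                    # hidden neurons (assumed)
n_par = 3 * H + H + H * 3 + 3             # all weights and biases

def predict(w, P):
    W1, b1 = w[:3 * H].reshape(3, H), w[3 * H:4 * H]
    W2, b2 = w[4 * H:7 * H].reshape(H, 3), w[-3:]
    return np.tanh(P @ W1 + b1) @ W2 + b2

def residuals(w):
    return (predict(w, P) - Q).ravel()    # 900 residuals >= 115 parameters

sol = least_squares(residuals, rng.normal(scale=0.3, size=n_par), method="lm")
# The fit should roughly recover the joint angles for an unseen end point:
print(predict(sol.x, fk(np.array([[0.3, 0.5, 0.7]]))))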
APA, Harvard, Vancouver, ISO, and other styles
10

Kuptsov, P. V., A. V. Kuptsova, and N. V. Stankevich. "Artificial Neural Network as a Universal Model of Nonlinear Dynamical Systems." Nelineinaya Dinamika 17, no. 1 (2021): 5–21. http://dx.doi.org/10.20537/nd210102.

Full text
Abstract:
We suggest a universal map capable of recovering the behavior of a wide range of dynamical systems given by ODEs. The map is built as an artificial neural network whose weights encode a modeled system. We assume that ODEs are known and prepare training datasets using the equations directly, without computing numerical time series. Parameter variations are taken into account in the course of training so that the network model captures bifurcation scenarios of the modeled system. The theoretical benefit of this approach is that the universal model admits applying common mathematical methods without needing to develop a unique theory for each particular set of dynamical equations. From the practical point of view, the developed method can be considered as an alternative numerical method for solving dynamical ODEs, suitable for running on contemporary neural-network-specific hardware. We consider the Lorenz system, the Rössler system, and also the Hindmarsh–Rose model. For these three examples the network model is created and its dynamics are compared with ordinary numerical solutions. A high similarity is observed for visual images of attractors, power spectra, bifurcation diagrams, and Lyapunov exponents.
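The key idea, building training data from the equations themselves rather than from simulated time series, can be sketched as below (Python, with scikit-learn's MLPRegressor as a stand-in for the authors' network; the Euler one-step target, the sampling box, and the layer sizes are all assumptions of this illustration).

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)

def lorenz_rhs(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s.T
    return np.stack([sigma * (y - x), x * (rho - z) - y, x * y - beta * z],
                    axis=-1)

# Training pairs from the equations directly, no integrated time series:
# input = a random state, target = that state one small Euler step later.
dt = 0.01
states = rng.uniform([-20, -25, 0], [20, 25, 50], size=(20000, 3))
targets = states + dt * lorenz_rhs(states)

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=300)
net.fit(states, targets)

# The trained network now acts as a discrete-time model of the Lorenz flow.
s = np.array([[1.0, 1.0, 20.0]])
for _ in range(1000):
    s = net.predict(s)
print(s)                                  # a point wandering near the attractor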
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Artificial Neural Network Training"

1

Rimer, Michael Edwin. "Improving Neural Network Classification Training." Diss., Brigham Young University, 2007. http://contentdm.lib.byu.edu/ETD/image/etd2094.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Åström, Fredrik. "Neural Network on Compute Shader : Running and Training a Neural Network using GPGPU." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2036.

Full text
Abstract:
In this thesis I look into how one can train and run an artificial neural network using Compute Shader and what kind of performance can be expected. An artificial neural network is a computational model that is inspired by biological neural networks, e.g. a brain. The expected performance was determined by creating an implementation that uses Compute Shader and comparing it to the FANN library, i.e. a fast artificial neural network library written in C. The conclusion is that you can improve performance by training an artificial neural network on the compute shader as long as you are using non-trivial datasets and neural network configurations.
APA, Harvard, Vancouver, ISO, and other styles
3

Sneath, Evan B. "Artificial neural network training for semi-autonomous robotic surgery applications." University of Cincinnati / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1416231638.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Inoue, Isao. "On the Effect of Training Data on Artificial Neural Network Models for Prediction." Graduate School of International Languages and Cultures, Nagoya University, 2010. http://hdl.handle.net/2237/14090.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Kaster, Joshua M. "Training Convolutional Neural Network Classifiers Using Simultaneous Scaled Supercomputing." University of Dayton / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1588973772607826.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Buys, Stefan. "Genetic algorithm for Artificial Neural Network training for the purpose of Automated Part Recognition." Thesis, Nelson Mandela Metropolitan University, 2012. http://hdl.handle.net/10948/d1008356.

Full text
Abstract:
Object or part recognition is of major interest in industrial environments. Current methods implement expensive camera-based solutions, so there is a need for a cost-effective alternative. One of the proposed methods is to overcome the hardware (camera) problem by implementing a software solution, with Artificial Neural Networks (ANN) as the underlying intelligent software, since they have a high tolerance for noise and the ability to generalize. A colleague implemented a basic ANN-based system comprising an ANN and three cost-effective laser distance sensors. However, that system could identify only 3 different parts and required hard-coded changes made by trial and error. This is not practical for industrial use in a production environment where there is a large quantity of different parts to be identified that change relatively regularly; the ability to easily train more parts is required. Difficulties associated with traditional mathematically guided training methods are discussed, which leads to the development of a Genetic Algorithm (GA) based evolutionary training method that overcomes these difficulties and makes accurate part recognition possible. An ANN hybridised with GA training is introduced, along with a general solution encoding scheme used to encode the required ANN connection weights. Experimental tests were performed to determine the ideal GA performance and control parameters, as studies have indicated that different GA control parameters can lead to large differences in training accuracy. After performing these tests, the training accuracy was analyzed by investigating GA performance as well as hardware-based part recognition performance. This analysis identified the ideal GA control parameters for training an ANN for part recognition and showed that the ANN generally trained well and could generalize well on data not presented to it during training.
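A genetic algorithm that trains an ANN by evolving its connection weights, as this thesis does, can be sketched in a few dozen lines of Python/NumPy. Everything below — the 3-8-3 architecture, the synthetic sensor data, and the GA control parameters — is an assumed toy setup, not the thesis's configuration.

import numpy as np

rng = np.random.default_rng(6)

# Synthetic stand-in for part recognition: 3 "sensor readings" -> 3 classes.
X = rng.normal(size=(150, 3))
labels = (X[:, 0] > 0).astype(int) + (X[:, 1] > 0).astype(int)
Y = np.eye(3)[labels]                    # one-hot targets

DIM = 3 * 8 + 8 + 8 * 3 + 3              # chromosome: all 3-8-3 weights/biases

def fitness(chrom):
    W1, b1 = chrom[:24].reshape(3, 8), chrom[24:32]
    W2, b2 = chrom[32:56].reshape(8, 3), chrom[56:]
    out = np.tanh(X @ W1 + b1) @ W2 + b2
    return -np.mean((out - Y) ** 2)      # the GA maximizes fitness

pop = rng.normal(scale=0.5, size=(60, DIM))
for gen in range(200):
    fit = np.array([fitness(c) for c in pop])
    a, b = rng.integers(0, 60, size=(2, 60))
    parents = pop[np.where(fit[a] > fit[b], a, b)]      # tournament selection
    mates = parents[rng.permutation(60)]
    keep = rng.random((60, DIM)) < 0.5                  # uniform crossover
    children = np.where(keep, parents, mates)
    hit = rng.random((60, DIM)) < 0.1                   # sparse Gaussian mutation
    children = children + hit * rng.normal(scale=0.05, size=(60, DIM))
    children[0] = pop[fit.argmax()]                     # elitism
    pop = children

print("best MSE:", -max(fitness(c) for c in pop))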
APA, Harvard, Vancouver, ISO, and other styles
7

Griffin, Glenn R. "Predicting Naval Aviator Flight Training Performances using Multiple Regression and an Artificial Neural Network." NSUWorks, 1995. http://nsuworks.nova.edu/gscis_etd/548.

Full text
Abstract:
The Navy needs improved methods for assigning naval aviators (pilots) to fixed-wing and rotary-winged aircraft. At present, individual flight grades in primary training are used to assign naval aviator trainees to intermediate fixed wing or helicopter training. This study evaluated the potential of a series of single- and multitask tests to account for additional significant variance in the prediction of flight grade training performance for a sample of naval aviator trainees. Subjects were tested on a series of cognitive and perceptual psychomotor tests. The subjects then entered the Navy Flight Training Program. Subjects' flight grades were obtained at the end of primary training. Multiple regression and artificial neural network procedures were evaluated to determine their relative efficiency in the prediction of flight grade training performance. All single- and multitask test measures evaluated as a part of this study were significantly related to the primary training flight grade criterion. Two psychomotor and one dichotic listening test measures contributed significant added variance to a multiple regression equation, beyond that of selection tests, F(5, 428) = 27.19, R² = .24, multiple R = .49, p < .01. A follow-on analysis indicated a split-half validation correlation coefficient of r = .38, p < .01 using multiple regression and as high as r = .41, p < .01 using a neural network procedure. No statistically significant differences were found between the correlation coefficients resulting from the application of multiple regression and neural network validation procedures. Both procedures predicted the flight grade criterion equally well, although the neural network applications consistently provided slightly higher correlations between actual and predicted flight grades.
APA, Harvard, Vancouver, ISO, and other styles
8

Hsu, Kuo-Lin, Hoshin Vijai Gupta, and Soroosh Sorooshian. "A SUPERIOR TRAINING STRATEGY FOR THREE-LAYER FEEDFORWARD ARTIFICIAL NEURAL NETWORKS." Department of Hydrology and Water Resources, University of Arizona (Tucson, AZ), 1996. http://hdl.handle.net/10150/614171.

Full text
Abstract:
A new algorithm is proposed for the identification of three-layer feedforward artificial neural networks. The algorithm, entitled LLSSIM, partitions the weight space into two major groups: the input-hidden and hidden-output weights. The input-hidden weights are trained using a multi-start SIMPLEX algorithm, and the hidden-output weights are identified using a conditional linear-least-square estimation approach. Architectural design is accomplished by progressive addition of nodes to the hidden layer. The LLSSIM approach provides globally superior weight estimates with fewer function evaluations than the conventional back-propagation (BPA) and adaptive back-propagation (ABPA) strategies. Monte Carlo testing on the XOR problem, two function approximation problems, and a rainfall-runoff modeling problem shows LLSSIM to be more effective, efficient and stable than BPA and ABPA.
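The partitioning idea at the heart of LLSSIM can be illustrated briefly: once the input-hidden weights are fixed, the optimal hidden-output weights of a three-layer network follow in closed form from linear least squares. The Python/NumPy sketch below substitutes a plain multi-start random search for the paper's multi-start SIMPLEX, and the sine-fitting task and 8-node hidden layer are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(7)

# Toy function-approximation task: fit y = sin(x) with a 1-H-1 network.
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x).ravel()
H = 8                                     # hidden nodes

def solve_output_layer(Phi, y):
    # Conditional linear least squares: with the input-hidden weights fixed,
    # the best hidden-output weights (plus bias) have a closed-form solution.
    A = np.hstack([Phi, np.ones((len(Phi), 1))])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A, w

best_err = np.inf
for start in range(50):                   # multi-start search over W1, b1
    W1 = rng.normal(scale=2.0, size=(1, H))
    b1 = rng.normal(scale=2.0, size=H)
    Phi = np.tanh(x @ W1 + b1)            # hidden-layer activations
    A, w = solve_output_layer(Phi, y)
    err = np.mean((A @ w - y) ** 2)
    best_err = min(best_err, err)

print("best MSE over 50 restarts:", best_err)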
APA, Harvard, Vancouver, ISO, and other styles
9

George, Abhinav Kurian. "Fault tolerance and re-training analysis on neural networks." University of Cincinnati / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1552391639148868.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Chen, Lihui. "Modelling continuous sequential behaviour to enhance training and generalization in neural networks." Thesis, University of St Andrews, 1993. http://hdl.handle.net/10023/13485.

Full text
Abstract:
This thesis takes a conceptual and empirical approach to embedding the modelling of continuous sequential behaviour in neural learning. The aim is to enhance the feasibility of training and the capacity for generalisation. By examining the sequential aspects of the passing of time in a neural network, it is suggested that an alteration to the usual goal weight condition may be made to model these aspects. The notion of a goal weight path is introduced, and a path-based backpropagation (PBP) framework is proposed. Two models using PBP have been investigated in the thesis. One is called Feedforward Continuous BackPropagation (FCBP), which is a generalization of conventional BackPropagation; the other is called Recurrent Continuous BackPropagation (RCBP), which provides a neural dynamic system for I/O associations. Both models make use of the continuity underlying analogue-binary associations and analogue-analogue associations within a fixed neural network topology. A graphical simulator, cbptool, has been designed and implemented for Sun workstations to support the research. The capabilities of FCBP and RCBP have been explored through experiments, and the results confirm the modelling theory. The fundamental alteration made to conventional backpropagation brings substantial improvement in training and generalization, enhancing the power of backpropagation.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Artificial Neural Network Training"

1

Kattan, Ali. Artificial neural network training and software implementation techniques. Hauppauge, N.Y: Nova Science Publishers, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Guan, Biing T. Modeling training site vegetation coverage probability with a random optimization procedure: An artificial neural network approach. [Champaign, IL]: US Army Corps of Engineers, Construction Engineering Research Laboratories, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Sundararajan, N., and Shou King Foo, eds. Parallel implementations of backpropagation neural networks on transputers: A study of training set parallelism. Singapore: World Scientific, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Shanmuganathan, Subana, and Sandhya Samarasinghe, eds. Artificial Neural Network Modelling. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-28495-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Mohan, S. Artificial neural network modelling. Roorkee: Indian National Committee on Hydrology, 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Gacon, David M. Speeding up neural network training. Manchester: UMIST, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Neural network models in artificial intelligence. New York: E. Horwood, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Jain, L. C., and R. P. Johnson, eds. Neural network training using genetic algorithms. Singapore: World Scientific, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Artificial Neural Network Training"

1

da Silva, Ivan Nunes, Danilo Hernane Spatti, Rogerio Andrade Flauzino, Luisa Helena Bartocci Liboni, and Silas Franco dos Reis Alves. "Artificial Neural Network Architectures and Training Processes." In Artificial Neural Networks, 21–28. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-43162-8_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Wuraola, Adedamola, and Nitish Patel. "Stochasticity-Assisted Training in Artificial Neural Network." In Neural Information Processing, 591–602. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-04179-3_52.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Köppen, Mario. "On the Training of a Kolmogorov Network." In Artificial Neural Networks — ICANN 2002, 474–79. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-46084-5_77.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Livshin, Igor. "Neural Network Prediction Outside the Training Range." In Artificial Neural Networks with Java, 109–63. Berkeley, CA: Apress, 2019. http://dx.doi.org/10.1007/978-1-4842-4421-0_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Czarnowski, Ireneusz, and Piotr Jedrzejowicz. "An Approach to Artificial Neural Network Training." In Research and Development in Intelligent Systems XIX, 149–62. London: Springer London, 2003. http://dx.doi.org/10.1007/978-1-4471-0651-7_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Kordos, Mirosław. "Instance Selection Optimization for Neural Network Training." In Artificial Intelligence and Soft Computing, 610–20. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-39378-0_52.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Reeves, Colin R. "Training Set Selection in Neural Network Applications." In Artificial Neural Nets and Genetic Algorithms, 476–78. Vienna: Springer Vienna, 1995. http://dx.doi.org/10.1007/978-3-7091-7535-4_123.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Zhou, Guian, and Jennie Si. "Improving neural network training based on Jacobian rank deficiency." In Artificial Neural Networks — ICANN 96, 531–36. Berlin, Heidelberg: Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/3-540-61510-5_91.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Patan, Krzysztof, and Maciej Patan. "Selection of Training Data for Locally Recurrent Neural Network." In Artificial Neural Networks – ICANN 2010, 134–37. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15822-3_16.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Cavallaro, Lucia, Ovidiu Bagdasar, Pasquale De Meo, Giacomo Fiumara, and Antonio Liotta. "Artificial Neural Networks Training Acceleration Through Network Science Strategies." In Lecture Notes in Computer Science, 330–36. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-40616-5_27.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Artificial Neural Network Training"

1

Mehdizadeh, Nasser S., Payam Sinaei, and Ali L. Nichkoohi. "Modeling Jones’ Reduced Chemical Mechanism of Methane Combustion With Artificial Neural Network." In ASME 2010 3rd Joint US-European Fluids Engineering Summer Meeting collocated with 8th International Conference on Nanochannels, Microchannels, and Minichannels. ASMEDC, 2010. http://dx.doi.org/10.1115/fedsm-icnmm2010-31186.

Full text
Abstract:
The present work reports a way of using Artificial Neural Networks for modeling and integrating the governing chemical kinetics differential equations of Jones' reduced chemical mechanism for methane combustion. The chemical mechanism is applicable to both diffusion and premixed laminar flames. A feed-forward multi-layer neural network is incorporated as the neural network architecture. In order to find the sets of input-output data used to adapt the neural network's synaptic weights in the training phase, a thermochemical analysis is embedded to find the chemical species mole fractions. An analysis of computational performance, along with a comparison between the neural network approach and other conventional methods used to represent the chemistry, is presented, and the ability of neural networks to represent a non-linear chemical system is illustrated.
APA, Harvard, Vancouver, ISO, and other styles
2

Si, Tapas, Arunava De, and Anup Kumar Bhattacharjee. "Grammatical swarm for Artificial Neural Network training." In 2014 International Conference on Circuit, Power and Computing Technologies (ICCPCT). IEEE, 2014. http://dx.doi.org/10.1109/iccpct.2014.7055036.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Scanzio, Stefano, Sandro Cumani, Roberto Gemello, Franco Mana, and P. Laface. "Parallel implementation of artificial neural network training." In 2010 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2010. http://dx.doi.org/10.1109/icassp.2010.5495108.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Skokan, Marek, Marek Bundzel, and Peter Sincak. "Pseudo-distance based artificial neural network training." In 2008 6th International Symposium on Applied Machine Intelligence and Informatics. IEEE, 2008. http://dx.doi.org/10.1109/sami.2008.4469134.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Vaughan, Neil, Venketesh N. Dubey, Michael Y. K. Wee, and Richard Isaacs. "Artificial Neural Network to Predict Patient Body Circumferences and Ligament Thicknesses." In ASME 2013 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/detc2013-13088.

Full text
Abstract:
An artificial neural network has been implemented and trained with clinical data from 23088 patients. The aim was to predict a patient's body circumferences and ligament thickness from patient data. A fully connected feed-forward neural network is used, containing no loops and one hidden layer, and the learning mechanism is back-propagation of error. The neural network inputs were mass, height, age and gender. There are eight hidden neurons and one output. The network can generate estimates for waist, arm, calf and thigh circumferences and the thickness of skin, fat, supraspinous and interspinous ligaments, ligamentum flavum and epidural space. The data was divided into a training set of 11000 patients and an unseen test data set of 12088 patients. Twenty-five training cycles were completed. After each training cycle the neuron outputs advanced closer to the clinically measured data. Waist circumference was predicted within 3.92 cm (3.10% error), thigh circumference 2.00 cm (2.81% error), arm circumference 1.21 cm (2.48% error), calf circumference 1.41 cm (3.40% error), triceps skinfold 3.43 mm (7.80% error), subscapular skinfold 3.54 mm (8.46% error), and BMI was estimated within 0.46 (0.69% error). The neural network has been extended to predict ligament thicknesses using data from MRI. These predictions will then be used to configure a simulator to offer a patient-specific training experience.
APA, Harvard, Vancouver, ISO, and other styles
6

Vershkov, N., V. Kuchukov, N. Kuchukova, N. Kucherov, and E. Shiriaev. "Optimization of computational complexity of an artificial neural network." In 3rd International Workshop on Information, Computation, and Control Systems for Distributed Environments 2021. Crossref, 2021. http://dx.doi.org/10.47350/iccs-de.2021.17.

Full text
Abstract:
The article deals with modelling Artificial Neural Networks as an information transmission system in order to optimize their computational complexity. An analysis of existing theoretical approaches to optimizing the structure and training of neural networks is carried out. In constructing the model, the well-known problem of isolating a deterministic signal against a background of noise is considered and adapted to the problem of assigning an input realization to a certain cluster. A layer of neurons is considered as an information transformer with a kernel for solving a certain class of problems: orthogonal transformation, matched filtering, and nonlinear transformation for recognizing the input realization with a given accuracy. Based on the analysis of the proposed model, it is concluded that it is possible to reduce the number of neurons in the layers of the neural network and to reduce the number of features used to train the classifier.
APA, Harvard, Vancouver, ISO, and other styles
7

Boybat, Irem, Cecilia Giovinazzo, Elmira Shahrabi, Igor Krawczuk, Iason Giannopoulos, Christophe Piveteau, Manuel Le Gallo, et al. "Multi-ReRAM Synapses for Artificial Neural Network Training." In 2019 IEEE International Symposium on Circuits and Systems (ISCAS). IEEE, 2019. http://dx.doi.org/10.1109/iscas.2019.8702714.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Lari, Nazanin Sadeghi, and Mohammad Saniee Abadeh. "Training artificial neural network by krill-herd algorithm." In 2014 IEEE 7th Joint International Information Technology and Artificial Intelligence Conference (ITAIC). IEEE, 2014. http://dx.doi.org/10.1109/itaic.2014.7065006.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Akhand, M. A. H., Pintu Chandra Shill, and K. Murase. "Neural network ensembles based on Artificial Training Examples." In 2009 12th International Conference on Computer and Information Technology (ICCIT). IEEE, 2009. http://dx.doi.org/10.1109/iccit.2009.5407262.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Canayaz, Murat, and Recep Ozdag. "Training artificial neural network with Chaotic Cricket Algorithm." In 2018 26th Signal Processing and Communications Applications Conference (SIU). IEEE, 2018. http://dx.doi.org/10.1109/siu.2018.8404254.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Artificial Neural Network Training"

1

Reifman, Jaques, and Javier Vitela. Artificial Neural Network Training with Conjugate Gradients for Diagnosing Transients in Nuclear Power Plants. Office of Scientific and Technical Information (OSTI), March 1993. http://dx.doi.org/10.2172/10198077.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Arhin, Stephen, Babin Manandhar, Hamdiat Baba Adam, and Adam Gatiba. Predicting Bus Travel Times in Washington, DC Using Artificial Neural Networks (ANNs). Mineta Transportation Institute, April 2021. http://dx.doi.org/10.31979/mti.2021.1943.

Full text
Abstract:
Washington, DC ranks second among cities in the United States for public transit commuting, with approximately 9% of the working population using Washington Metropolitan Area Transit Authority (WMATA) Metrobuses to commute. Deducing accurate travel times for these Metrobuses is an important task for transit authorities seeking to provide reliable service to their patrons. This study used Artificial Neural Networks (ANN) to develop prediction models for transit buses to assist decision-makers in improving service quality and patronage. We used six months of Automatic Vehicle Location (AVL) and Automatic Passenger Counting (APC) data for six Washington Metropolitan Area Transit Authority (WMATA) bus routes operating in Washington, DC. We developed regression models and Artificial Neural Network (ANN) models for predicting travel times of buses for different peak periods (AM, Mid-Day and PM). Our analysis included variables such as the number of served bus stops, the length of route between bus stops, the average number of passengers in the bus, the average dwell time of buses, and the number of intersections between bus stops. We obtained ANN models for travel times by using an approximation technique incorporating two separate algorithms: Quasi-Newton and Levenberg-Marquardt. The training strategy for the neural network models involved feed-forward and error-backpropagation processes that minimized the generated errors. We also evaluated the models by comparing their normalized squared errors (NSE). From the results, we observed that the travel times of buses and the dwell times at bus stops generally increased over the day. We obtained travel time equations for buses for the AM, Mid-Day and PM peaks. The lowest NSE for the AM, Mid-Day and PM peak periods corresponded to training processes using the Quasi-Newton algorithm, which had 3, 2 and 5 perceptron layers, respectively. These prediction models could be adapted by transit agencies to provide patrons with accurate travel time information at bus stops or online.
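The report's model comparison rests on a normalized squared error. A minimal sketch of that evaluation pattern, in Python with scikit-learn standing in for the report's tooling, is below; the synthetic predictors (stops served, segment length, passengers, dwell time, intersections), the NSE formula, and the network size are all assumptions of the illustration.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(8)

# Synthetic stand-ins for the study's predictors -> travel time (minutes).
X = rng.uniform(0, 1, size=(2000, 5))
t = 5 + 8 * X[:, 0] + 6 * X[:, 1] + 3 * X[:, 2] ** 2 + rng.normal(0, 0.5, 2000)
Xtr, Xte, ttr, tte = X[:1500], X[1500:], t[:1500], t[1500:]

lin = LinearRegression().fit(Xtr, ttr)
ann = MLPRegressor(hidden_layer_sizes=(8, 8), max_iter=2000).fit(Xtr, ttr)

def nse(pred, obs):
    # One common normalization: residual power over the variance of the data.
    return np.sum((pred - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

print("regression NSE:", nse(lin.predict(Xte), tte))
print("ANN NSE:       ", nse(ann.predict(Xte), tte))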
APA, Harvard, Vancouver, ISO, and other styles
3

Powell, Bruce C. Artificial Neural Network Analysis System. Fort Belvoir, VA: Defense Technical Information Center, February 2001. http://dx.doi.org/10.21236/ada392390.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Karakowski, Joseph A., and Hai H. Phu. A Fuzzy Hypercube Artificial Neural Network Classifier. Fort Belvoir, VA: Defense Technical Information Center, October 1998. http://dx.doi.org/10.21236/ada354805.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Stengel, Robert F. New Methods of Neural Network Training. Fort Belvoir, VA: Defense Technical Information Center, June 1999. http://dx.doi.org/10.21236/ada370007.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Sgurev, Vassil. Artificial Neural Networks as a Network Flow with Capacities. "Prof. Marin Drinov" Publishing House of Bulgarian Academy of Sciences, September 2018. http://dx.doi.org/10.7546/crabs.2018.09.12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Vitela, J. E., U. R. Hanebutte, and J. Reifman. An artificial neural network controller for intelligent transportation systems applications. Office of Scientific and Technical Information (OSTI), April 1996. http://dx.doi.org/10.2172/219376.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Vela, Daniel. Forecasting Latin-American yield curves: An artificial neural network approach. Bogotá, Colombia: Banco de la República, March 2013. http://dx.doi.org/10.32468/be.761.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Wilson, Charles L., James L. Blue, and Omid M. Omidvar. The effect of training dynamics on neural network performance. Gaithersburg, MD: National Institute of Standards and Technology, 1995. http://dx.doi.org/10.6028/nist.ir.5696.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Hsieh, Bernard B., and Charles L. Bartos. Riverflow/River Stage Prediction for Military Applications Using Artificial Neural Network Modeling. Fort Belvoir, VA: Defense Technical Information Center, August 2000. http://dx.doi.org/10.21236/ada382991.

Full text
APA, Harvard, Vancouver, ISO, and other styles