Academic literature on the topic 'Artificial Neural Network Training'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Artificial Neural Network Training.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Artificial Neural Network Training"

1. Bui, Ngoc Tam, and Hiroshi Hasegawa. "Training Artificial Neural Network Using Modification of Differential Evolution Algorithm." International Journal of Machine Learning and Computing 5, no. 1 (2015): 1–6. http://dx.doi.org/10.7763/ijmlc.2015.v5.473.

2. Didmanidze, I. Sh, G. A. Kakhiani, and D. Z. Didmanidze. "Training of Artificial Neural Network." Journal of Numerical and Applied Mathematics, no. 1 (135) (2021): 110–14. http://dx.doi.org/10.17721/2706-9699.2021.1.14.
Abstract: The methodology of neural networks is applied ever more often to tasks of management and decision-making, including in trade and finance. Neural networks are built on nonlinear adaptive systems, which have proved their efficiency in solving forecasting problems.

3. Cavallaro, Lucia, Ovidiu Bagdasar, Pasquale De Meo, Giacomo Fiumara, and Antonio Liotta. "Artificial neural networks training acceleration through network science strategies." Soft Computing 24, no. 23 (2020): 17787–95. http://dx.doi.org/10.1007/s00500-020-05302-y.
Abstract: The development of deep learning has led to a dramatic increase in the number of applications of artificial intelligence. However, training deeper neural networks for stable and accurate models translates into artificial neural networks (ANNs) that become unmanageable as the number of features increases. This work extends our earlier study, in which we explored the acceleration effects obtained by enforcing, in turn, scale-freeness, small-worldness, and sparsity during the ANN training process. The efficiency of that approach was confirmed by recent studies (conducted independently) …

4. Sagala, Noviyanti, Cynthia Hayat, and Frahselia Tandipuang. "Identification of fat-soluble vitamins deficiency using artificial neural network." Jurnal Teknologi dan Sistem Komputer 8, no. 1 (2019): 6–11. http://dx.doi.org/10.14710/jtsiskom.8.1.2020.6-11.
Abstract: Fat-soluble vitamin (A, D, E, K) deficiencies remain frequent worldwide, may have serious adverse consequences, and cause symptoms that appear slowly and intensify over time. Detecting a vitamin deficiency requires an experienced physician to notice the symptoms and to review a (high-priced) blood-test result. This research aims to create an early-detection system for fat-soluble vitamin deficiency using a back-propagation artificial neural network. The method converts deficiency-symptom data into training data used to produce the ANN weights …

5. Golubinskiy, Andrey, and Andrey Tolstykh. "Hybrid method of conventional neural network training." Informatics and Automation 20, no. 2 (2021): 463–90. http://dx.doi.org/10.15622/ia.2021.20.2.8.
Abstract: The paper proposes a hybrid method for training convolutional neural networks, combining second- and first-order methods for different elements of the network architecture. The hybrid training method achieves significantly better convergence than Adam while requiring fewer computational operations. The proposed method also makes it possible to train networks on which learning paralysis occurs under first-order methods, and it can adjust its computational …

6. Gursoy, Osman, and Haidar Sharif. "Parallel computing for artificial neural network training." Periodicals of Engineering and Natural Sciences (PEN) 6, no. 1 (2018): 1. http://dx.doi.org/10.21533/pen.v6i1.143.

7. Ding, Xue, and Hong Hong Yang. "A Study on the Image Classification Techniques Based on Wavelet Artificial Neural Network Algorithm." Applied Mechanics and Materials 602–605 (August 2014): 3512–14. http://dx.doi.org/10.4028/www.scientific.net/amm.602-605.3512.
Abstract: With ever-changing education information technology, classifying the thousands of images produced during the art-examination marking process is a major problem for universities and colleges. This paper explores the application of artificial intelligence techniques to the accurate classification of large numbers of images within a limited time with the help of a computer. The results of applying the method in actual work show that it is feasible. …

8. Amali, D. Geraldine Bessie, and Dinakaran M. "A Review of Heuristic Global Optimization Based Artificial Neural Network Training Approaches." IAES International Journal of Artificial Intelligence (IJ-AI) 6, no. 1 (2017): 26. http://dx.doi.org/10.11591/ijai.v6.i1.pp26-32.
Abstract: Artificial neural networks have earned popularity in recent years because of their ability to approximate nonlinear functions. Training a neural network involves minimizing the mean square error between the target and the network output. The error surface is nonconvex and highly multimodal, and finding the minimum of a multimodal function is an NP-complete problem that cannot be solved exactly. The application of heuristic global optimization algorithms that compute a good global minimum to neural network training is therefore of interest. This paper reviews the various heuristic global optimization …

9. Soylak, Mustafa, Tuğrul Oktay, and İlke Turkmen. "A simulation-based method using artificial neural networks for solving the inverse kinematic problem of articulated robots." Proceedings of the Institution of Mechanical Engineers, Part E: Journal of Process Mechanical Engineering 231, no. 3 (2015): 470–79. http://dx.doi.org/10.1177/0954408915608755.
Abstract: In our article, the inverse kinematic problem of a plasma-cutting robot with three degrees of freedom is solved using artificial neural networks. The network was trained on joint-angle values corresponding to Cartesian coordinates (x, y, z) of the end point of the robotic arm, using the Levenberg–Marquardt training algorithm. To validate the designed network, it was tested on a new data set not used in training. A simulation was performed on a three-dimensional model in MSC.ADAMS software using angle values obtained from …

10. Kuptsov, P. V., A. V. Kuptsova, and N. V. Stankevich. "Artificial Neural Network as a Universal Model of Nonlinear Dynamical Systems." Nelineinaya Dinamika 17, no. 1 (2021): 5–21. http://dx.doi.org/10.20537/nd210102.
Abstract: We suggest a universal map capable of recovering the behavior of a wide range of dynamical systems given by ODEs. The map is built as an artificial neural network whose weights encode the modeled system. We assume the ODEs are known and prepare training datasets from the equations directly, without computing numerical time series. Parameter variations are taken into account during training so that the network model captures bifurcation scenarios of the modeled system. The theoretical benefit of this approach is that the universal model admits applying common mathematical methods …

More sources
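Several of the articles above (the differential-evolution paper by Bui and Hasegawa, and the Amali and Dinakaran review of heuristic global optimization) treat training as minimizing the mean square error over the weight space with a derivative-free, population-based search. A minimal sketch of that general idea, not any specific paper's algorithm: a tiny network's weights are flattened into one vector and optimized by plain differential evolution on a toy XOR task. All hyperparameters and the task itself are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-4-1 feed-forward net; all weights flattened into one vector.
SHAPES = [(2, 4), (4,), (4, 1), (1,)]
DIM = sum(int(np.prod(s)) for s in SHAPES)  # 8 + 4 + 4 + 1 = 17

def unpack(theta):
    parts, i = [], 0
    for s in SHAPES:
        n = int(np.prod(s))
        parts.append(theta[i:i + n].reshape(s))
        i += n
    return parts

def forward(theta, X):
    W1, b1, W2, b2 = unpack(theta)
    h = np.tanh(X @ W1 + b1)       # hidden layer
    return np.tanh(h @ W2 + b2)    # output in (-1, 1)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([[-1], [1], [1], [-1]], float)  # XOR with +/-1 targets

def mse(theta):
    return float(np.mean((forward(theta, X) - y) ** 2))

# Plain DE/rand/1/bin over the weight vector: mutate, crossover,
# then keep the trial only if it lowers the training error.
POP, F, CR = 30, 0.7, 0.9
pop = rng.uniform(-2, 2, (POP, DIM))
cost = np.array([mse(p) for p in pop])
for _ in range(300):
    for i in range(POP):
        idx = rng.choice([j for j in range(POP) if j != i], 3, replace=False)
        a, b, c = pop[idx]
        trial = np.where(rng.random(DIM) < CR, a + F * (b - c), pop[i])
        t_cost = mse(trial)
        if t_cost < cost[i]:
            pop[i], cost[i] = trial, t_cost

best = pop[np.argmin(cost)]
print(mse(best))  # training error shrinks steadily; no gradients used
```

Because selection is greedy, the per-individual cost never increases, which makes this family of methods robust on the nonconvex, multimodal error surfaces the Amali and Dinakaran abstract describes, at the price of many more error evaluations than gradient-based training.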

Dissertations / Theses on the topic "Artificial Neural Network Training"

1. Rimer, Michael Edwin. "Improving Neural Network Classification Training." Diss., Brigham Young University, 2007. http://contentdm.lib.byu.edu/ETD/image/etd2094.pdf.

2. Åström, Fredrik. "Neural Network on Compute Shader: Running and Training a Neural Network Using GPGPU." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2036.
Abstract: In this thesis I look into how one can train and run an artificial neural network using a compute shader and what kind of performance can be expected. An artificial neural network is a computational model inspired by biological neural networks, e.g. a brain. The expected performance was determined by creating an implementation that uses a compute shader and comparing it to the FANN library, i.e. a fast artificial neural network library written in C. The conclusion is that you can improve performance by training an artificial neural network on the compute shader as long as …

3. Sneath, Evan B. "Artificial neural network training for semi-autonomous robotic surgery applications." University of Cincinnati / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1416231638.

4. Inoue, Isao. "On the Effect of Training Data on Artificial Neural Network Models for Prediction." 名古屋大学大学院国際言語文化研究科, 2010. http://hdl.handle.net/2237/14090.

5. Kaster, Joshua M. "Training Convolutional Neural Network Classifiers Using Simultaneous Scaled Supercomputing." University of Dayton / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1588973772607826.

6. Buys, Stefan. "Genetic algorithm for Artificial Neural Network training for the purpose of Automated Part Recognition." Thesis, Nelson Mandela Metropolitan University, 2012. http://hdl.handle.net/10948/d1008356.
Abstract: Object or part recognition is of major interest in industrial environments. Current methods implement expensive camera-based solutions, so a cost-effective alternative is needed. One proposed approach overcomes the hardware (camera) problem with a software solution. Artificial Neural Networks (ANNs) serve as the underlying intelligent software, as they have a high tolerance for noise and the ability to generalize. A colleague implemented a basic ANN-based system comprising an ANN and three cost-effective laser distance sensors. …

7. Griffin, Glenn R. "Predicting Naval Aviator Flight Training Performances Using Multiple Regression and an Artificial Neural Network." NSUWorks, 1995. http://nsuworks.nova.edu/gscis_etd/548.
Abstract: The Navy needs improved methods for assigning naval aviators (pilots) to fixed-wing and rotary-wing aircraft. At present, individual flight grades in primary training are used to assign naval aviator trainees to intermediate fixed-wing or helicopter training. This study evaluated the potential of a series of single- and multitask tests to account for additional significant variance in the prediction of flight-grade training performance for a sample of naval aviator trainees. Subjects were tested on a series of cognitive and perceptual psychomotor tests and then entered the Navy …

8. Hsu, Kuo-Lin, Hoshin Vijai Gupta, and Soroosh Sorooshian. "A Superior Training Strategy for Three-Layer Feedforward Artificial Neural Networks." Department of Hydrology and Water Resources, University of Arizona (Tucson, AZ), 1996. http://hdl.handle.net/10150/614171.
Abstract: A new algorithm is proposed for the identification of three-layer feedforward artificial neural networks. The algorithm, entitled LLSSIM, partitions the weight space into two major groups: the input-hidden and hidden-output weights. The input-hidden weights are trained using a multi-start SIMPLEX algorithm, and the hidden-output weights are identified using a conditional linear-least-squares estimation approach. Architectural design is accomplished by progressively adding nodes to the hidden layer. The LLSSIM approach provides globally superior weight estimates with fewer …

9. George, Abhinav Kurian. "Fault tolerance and re-training analysis on neural networks." University of Cincinnati / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1552391639148868.

10. Chen, Lihui. "Modelling continuous sequential behaviour to enhance training and generalization in neural networks." Thesis, University of St Andrews, 1993. http://hdl.handle.net/10023/13485.
Abstract: This thesis is a conceptual and empirical approach to embodying the modelling of continuous sequential behaviour in neural learning, with the aim of enhancing the feasibility of training and the capacity for generalisation. By examining the sequential aspects of the passing of time in a neural network, it is suggested that the usual goal-weight condition may be altered to model these aspects. The notion of a goal weight path is introduced, and a path-based backpropagation (PBP) framework is proposed. Two models using PBP are investigated in the thesis. One is called Feedforward …

More sources
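The LLSSIM abstract by Hsu, Gupta, and Sorooshian rests on a useful observation: once the input-hidden weights are fixed, the hidden-output weights of a three-layer network have a closed-form linear-least-squares optimum. The sketch below shows only that conditional least-squares step; it is not their LLSSIM implementation. The random hidden weights stand in for their multi-start SIMPLEX search, and the sine-fitting data are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression data: y = sin(x) on [0, pi].
X = np.linspace(0.0, np.pi, 50).reshape(-1, 1)
y = np.sin(X)

# Step 1: input-hidden weights come from some outer search
# (LLSSIM uses a multi-start SIMPLEX; here they are just random).
W1 = rng.normal(size=(1, 10))
b1 = rng.normal(size=10)
H = np.tanh(X @ W1 + b1)                   # hidden-layer activations

# Step 2: hidden-output weights by linear least squares, which is
# optimal for squared error once the hidden layer is fixed.
H1 = np.hstack([H, np.ones((len(X), 1))])  # append bias column
w2, *_ = np.linalg.lstsq(H1, y, rcond=None)

pred = H1 @ w2
train_mse = float(np.mean((pred - y) ** 2))
print(train_mse)  # small: the linear solve fits the fixed features exactly once
```

Splitting the weights this way means the expensive derivative-free search only has to explore the input-hidden group, while the output layer is solved exactly in one step, which is where the abstract's claim of fewer function evaluations comes from.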

Books on the topic "Artificial Neural Network Training"

1. Kattan, Ali. Artificial neural network training and software implementation techniques. Nova Science Publishers, 2011.

2. Guan, Biing T. Modeling training site vegetation coverage probability with a random optimization procedure: An artificial neural network approach. US Army Corps of Engineers, Construction Engineering Research Laboratories, 1998.

3. Sundararajan, N., and Foo Shou King, eds. Parallel implementations of backpropagation neural networks on transputers: A study of training set parallelism. World Scientific, 1996.

4. Shanmuganathan, Subana, and Sandhya Samarasinghe, eds. Artificial Neural Network Modelling. Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-28495-8.

5. Mohan, S. Artificial neural network modelling. Indian National Committee on Hydrology, 2007.

6. Gacon, David M. Speeding up neural network training. UMIST, 1994.

7. Neural network models in artificial intelligence. E. Horwood, 1990.

8. Jain, L. C., and R. P. Johnson, eds. Neural network training using genetic algorithms. World Scientific, 1996.

More sources

Book chapters on the topic "Artificial Neural Network Training"

1. da Silva, Ivan Nunes, Danilo Hernane Spatti, Rogerio Andrade Flauzino, Luisa Helena Bartocci Liboni, and Silas Franco dos Reis Alves. "Artificial Neural Network Architectures and Training Processes." In Artificial Neural Networks. Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-43162-8_2.

2. Wuraola, Adedamola, and Nitish Patel. "Stochasticity-Assisted Training in Artificial Neural Network." In Neural Information Processing. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-04179-3_52.

3. Köppen, Mario. "On the Training of a Kolmogorov Network." In Artificial Neural Networks — ICANN 2002. Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-46084-5_77.

4. Livshin, Igor. "Neural Network Prediction Outside the Training Range." In Artificial Neural Networks with Java. Apress, 2019. http://dx.doi.org/10.1007/978-1-4842-4421-0_6.

5. Czarnowski, Ireneusz, and Piotr Jedrzejowicz. "An Approach to Artificial Neural Network Training." In Research and Development in Intelligent Systems XIX. Springer London, 2003. http://dx.doi.org/10.1007/978-1-4471-0651-7_11.

6. Kordos, Mirosław. "Instance Selection Optimization for Neural Network Training." In Artificial Intelligence and Soft Computing. Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-39378-0_52.

7. Reeves, Colin R. "Training Set Selection in Neural Network Applications." In Artificial Neural Nets and Genetic Algorithms. Springer Vienna, 1995. http://dx.doi.org/10.1007/978-3-7091-7535-4_123.

8. Zhou, Guian, and Jennie Si. "Improving neural network training based on Jacobian rank deficiency." In Artificial Neural Networks — ICANN 96. Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/3-540-61510-5_91.

9. Patan, Krzysztof, and Maciej Patan. "Selection of Training Data for Locally Recurrent Neural Network." In Artificial Neural Networks — ICANN 2010. Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15822-3_16.

10. Cavallaro, Lucia, Ovidiu Bagdasar, Pasquale De Meo, Giacomo Fiumara, and Antonio Liotta. "Artificial Neural Networks Training Acceleration Through Network Science Strategies." In Lecture Notes in Computer Science. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-40616-5_27.

Conference papers on the topic "Artificial Neural Network Training"

1. Mehdizadeh, Nasser S., Payam Sinaei, and Ali L. Nichkoohi. "Modeling Jones' Reduced Chemical Mechanism of Methane Combustion With Artificial Neural Network." In ASME 2010 3rd Joint US-European Fluids Engineering Summer Meeting collocated with 8th International Conference on Nanochannels, Microchannels, and Minichannels. ASMEDC, 2010. http://dx.doi.org/10.1115/fedsm-icnmm2010-31186.
Abstract: The present work reports a way of using artificial neural networks to model and integrate the governing chemical-kinetics differential equations of Jones' reduced chemical mechanism for methane combustion. The mechanism is applicable to both diffusion and premixed laminar flames. A feed-forward multi-layer neural network is used as the network architecture. To find sets of input-output data for adapting the network's synaptic weights in the training phase, a thermochemical analysis is embedded to find the chemical species mole fractions. …

2. Si, Tapas, Arunava De, and Anup Kumar Bhattacharjee. "Grammatical swarm for Artificial Neural Network training." In 2014 International Conference on Circuit, Power and Computing Technologies (ICCPCT). IEEE, 2014. http://dx.doi.org/10.1109/iccpct.2014.7055036.

3. Scanzio, Stefano, Sandro Cumani, Roberto Gemello, Franco Mana, and P. Laface. "Parallel implementation of artificial neural network training." In 2010 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2010. http://dx.doi.org/10.1109/icassp.2010.5495108.

4. Skokan, Marek, Marek Bundzel, and Peter Sincak. "Pseudo-distance based artificial neural network training." In 2008 6th International Symposium on Applied Machine Intelligence and Informatics. IEEE, 2008. http://dx.doi.org/10.1109/sami.2008.4469134.

5. Vaughan, Neil, Venketesh N. Dubey, Michael Y. K. Wee, and Richard Isaacs. "Artificial Neural Network to Predict Patient Body Circumferences and Ligament Thicknesses." In ASME 2013 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/detc2013-13088.
Abstract: An artificial neural network has been implemented and trained with clinical data from 23,088 patients. The aim was to predict a patient's body circumferences and ligament thicknesses from patient data. A fully connected feed-forward neural network is used, containing no loops and one hidden layer, and the learning mechanism is back-propagation of error. The network inputs were mass, height, age, and gender; there are eight hidden neurons and one output. The network can generate estimates for waist, arm, calf, and thigh circumferences and the thickness of skin, fat, and the supraspinous and interspinous …

6. Vershkov, N., V. Kuchukov, N. Kuchukova, N. Kucherov, and E. Shiriaev. "Optimization of computational complexity of an artificial neural network." In 3rd International Workshop on Information, Computation, and Control Systems for Distributed Environments 2021. Crossref, 2021. http://dx.doi.org/10.47350/iccs-de.2021.17.
Abstract: The article deals with modelling artificial neural networks as an information-transmission system in order to optimize their computational complexity. Existing theoretical approaches to optimizing the structure and training of neural networks are analysed. In constructing the model, the well-known problem of isolating a deterministic signal against background noise is considered and adapted to the problem of assigning an input realization to a certain cluster. A layer of neurons is considered as an information transformer with a kernel for …

7. Boybat, Irem, Cecilia Giovinazzo, Elmira Shahrabi, et al. "Multi-ReRAM Synapses for Artificial Neural Network Training." In 2019 IEEE International Symposium on Circuits and Systems (ISCAS). IEEE, 2019. http://dx.doi.org/10.1109/iscas.2019.8702714.

8. Lari, Nazanin Sadeghi, and Mohammad Saniee Abadeh. "Training artificial neural network by krill-herd algorithm." In 2014 IEEE 7th Joint International Information Technology and Artificial Intelligence Conference (ITAIC). IEEE, 2014. http://dx.doi.org/10.1109/itaic.2014.7065006.

9. Akha, M. A. H., Pintu Chandra Shill, and K. Murase. "Neural network ensembles based on Artificial Training Examples." In 2009 12th International Conference on Computer and Information Technology (ICCIT). IEEE, 2009. http://dx.doi.org/10.1109/iccit.2009.5407262.

10. Canayaz, Murat, and Recep Ozdag. "Training artificial neural network with Chaotic Cricket Algorithm." In 2018 26th Signal Processing and Communications Applications Conference (SIU). IEEE, 2018. http://dx.doi.org/10.1109/siu.2018.8404254.
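The Vaughan et al. abstract above spells out the classic recipe several of these papers start from: a fully connected feed-forward network with one hidden layer (four inputs, eight hidden neurons, one output) trained by back-propagation of error. A generic sketch of that architecture and update rule follows; the synthetic data merely stand in for their clinical inputs, which are of course not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in data: 4 inputs (cf. mass, height, age, gender) -> 1 output.
X = rng.normal(size=(200, 4))
true_w = np.array([[0.5], [-1.0], [0.25], [0.8]])
y = np.tanh(X @ true_w) + 0.01 * rng.normal(size=(200, 1))

# 4-8-1 fully connected feed-forward net, as in the abstract above.
W1 = rng.normal(scale=0.5, size=(4, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

lr = 0.05
for _ in range(3000):
    # forward pass
    h = np.tanh(X @ W1 + b1)
    out = h @ W2 + b2                       # linear output unit
    err = out - y
    # backward pass: gradients of mean squared error via the chain rule
    g_out = 2 * err / len(X)
    gW2 = h.T @ g_out; gb2 = g_out.sum(0)
    g_h = (g_out @ W2.T) * (1 - h ** 2)     # derivative of tanh
    gW1 = X.T @ g_h; gb1 = g_h.sum(0)
    # gradient-descent update
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

final_mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
print(final_mse)  # training error after full-batch gradient descent
```

This plain full-batch gradient descent is the baseline that the swarm, krill-herd, and chaotic-search papers in this section position themselves against: it is fast when gradients are informative but can stall in poor local minima, which is the gap the heuristic methods aim to fill.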

Reports on the topic "Artificial Neural Network Training"

1. Reifman, Jaques, and Javier Vitela. Artificial Neural Network Training with Conjugate Gradients for Diagnosing Transients in Nuclear Power Plants. Office of Scientific and Technical Information (OSTI), 1993. http://dx.doi.org/10.2172/10198077.

2. Arhin, Stephen, Babin Manandhar, Hamdiat Baba Adam, and Adam Gatiba. Predicting Bus Travel Times in Washington, DC Using Artificial Neural Networks (ANNs). Mineta Transportation Institute, 2021. http://dx.doi.org/10.31979/mti.2021.1943.
Abstract: Washington, DC ranks second among US cities in the share of public-transit commuters, with approximately 9% of the working population using Washington Metropolitan Area Transit Authority (WMATA) Metrobuses to commute. Deducing accurate travel times for these Metrobuses is an important task for transit authorities seeking to provide reliable service to their patrons. This study used Artificial Neural Networks (ANNs) to develop prediction models for transit buses, to assist decision-makers in improving service quality and patronage. For this study, we used six months of …

3. Powell, Bruce C. Artificial Neural Network Analysis System. Defense Technical Information Center, 2001. http://dx.doi.org/10.21236/ada392390.

4. Karakowski, Joseph A., and Hai H. Phu. A Fuzzy Hypercube Artificial Neural Network Classifier. Defense Technical Information Center, 1998. http://dx.doi.org/10.21236/ada354805.

5. Stengel, Robert F. New Methods of Neural Network Training. Defense Technical Information Center, 1999. http://dx.doi.org/10.21236/ada370007.

6. Sgurev, Vassil. Artificial Neural Networks as a Network Flow with Capacities. "Prof. Marin Drinov" Publishing House of Bulgarian Academy of Sciences, 2018. http://dx.doi.org/10.7546/crabs.2018.09.12.

7. Vitela, J. E., U. R. Hanebutte, and J. Reifman. An artificial neural network controller for intelligent transportation systems applications. Office of Scientific and Technical Information (OSTI), 1996. http://dx.doi.org/10.2172/219376.

8. Vela, Daniel. Forecasting Latin-American yield curves: an artificial neural network approach. Banco de la República, 2013. http://dx.doi.org/10.32468/be.761.

9. Wilson, Charles L., James L. Blue, and Omid M. Omidvar. The effect of training dynamics on neural network performance. National Institute of Standards and Technology, 1995. http://dx.doi.org/10.6028/nist.ir.5696.

10. Hsieh, Bernard B., and Charles L. Bartos. Riverflow/River Stage Prediction for Military Applications Using Artificial Neural Network Modeling. Defense Technical Information Center, 2000. http://dx.doi.org/10.21236/ada382991.