Journal articles on the topic 'Neural network: performance'

Consult the top 50 journal articles for your research on the topic 'Neural network: performance.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Panda, Subodh, Bikash Swain, and Sandeep Mishra. "Boiler Performance Optimization Using Process Neural Network." Indian Journal of Applied Research 3, no. 7 (October 1, 2011): 298–300. http://dx.doi.org/10.15373/2249555x/july2013/93.

2

CR, Dhivyaa, Sudhakar R, Nithya K, and Prabhakar E. "Performance Analysis of Convolutional Neural Network for Retinal Image Classification." International Journal of Psychosocial Rehabilitation 23, no. 4 (December 20, 2019): 1149–59. http://dx.doi.org/10.37200/ijpr/v23i4/pr190441.

3

Li, Xiao Hu, Feng Xu, Jin Hua Zhang, and Su Nan Wang. "A New Small-World Neural Network with its Performance on Fault Tolerance." Advanced Materials Research 629 (December 2012): 719–24. http://dx.doi.org/10.4028/www.scientific.net/amr.629.719.

Abstract:
Many artificial neural networks are simple simulations of the architecture and function of the brain's neural network. How to build new artificial neural networks whose architecture is closer to that of biological neural networks is therefore worth studying. In this study, a new multilayer feedforward small-world neural network is presented using results from research on complex networks. Firstly, a multilayer feedforward small-world neural network, which depends heavily on the rewiring probability, is built on the basis of the construction ideas of the Watts-Strogatz network model and community structure. Secondly, fault tolerance is used to investigate the performance of the new small-world neural network. When networks with connection faults or neuron damage are used to test fault tolerance under different rewiring probabilities, simulation results show that the fault tolerance of the small-world neural network outmatches that of a regular network of the same scale when the fault probability exceeds 40%, while the random network has the best fault tolerance.
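To make the construction concrete, the sketch below builds a layered feedforward connectivity mask and rewires each regular edge with probability p, loosely following the Watts-Strogatz idea described in the abstract; the layer sizes, the rewiring target rule, and all names are illustrative assumptions rather than the authors' exact procedure. Zeroing random entries of the resulting masks would then emulate the connection-fault experiments.

```python
import numpy as np

def small_world_masks(layer_sizes, p, rng=np.random.default_rng(0)):
    """Connectivity masks for a layered feedforward net, rewired with probability p.

    Starts from full connections between consecutive layers; each edge is then
    rewired, with probability p, to a randomly chosen neuron in some later layer
    (a simple stand-in for Watts-Strogatz-style rewiring). Returns a dict mapping
    (src_layer, dst_layer) -> boolean connection mask.
    """
    masks = {(l, l + 1): np.ones((layer_sizes[l], layer_sizes[l + 1]), bool)
             for l in range(len(layer_sizes) - 1)}
    for l in range(len(layer_sizes) - 1):
        for i in range(layer_sizes[l]):
            for j in range(layer_sizes[l + 1]):
                if rng.random() < p:
                    masks[(l, l + 1)][i, j] = False              # drop the regular edge
                    dst = rng.integers(l + 1, len(layer_sizes))  # pick a later layer
                    key = (l, dst)
                    masks.setdefault(key, np.zeros((layer_sizes[l], layer_sizes[dst]), bool))
                    masks[key][i, rng.integers(layer_sizes[dst])] = True  # shortcut edge
    return masks

masks = small_world_masks([4, 10, 10, 2], p=0.2)   # illustrative layer sizes
```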
4

Yen, Gary G., and Haiming Lu. "Hierarchical Rank Density Genetic Algorithm for Radial-Basis Function Neural Network Design." International Journal of Computational Intelligence and Applications 03, no. 03 (September 2003): 213–32. http://dx.doi.org/10.1142/s1469026803000975.

Abstract:
In this paper, we propose a genetic algorithm based design procedure for a radial-basis function neural network. A Hierarchical Rank Density Genetic Algorithm (HRDGA) is used to evolve the neural network's topology and parameters simultaneously. Compared with traditional genetic algorithm based designs for neural networks, the hierarchical approach addresses several deficiencies highlighted in the literature. In addition, the rank-density based fitness assignment technique is used to optimize the performance and topology of the evolved neural network and to deal with the conflict between training performance and network complexity. Instead of producing a single optimal solution, HRDGA provides a set of near-optimal neural networks to the designers so that they have more flexibility in the final decision-making based on certain preferences. In terms of searching for a near-complete set of candidate networks with high performance, the networks designed by the proposed algorithm prove to be competitive with, or even superior to, three other traditional radial-basis function networks for predicting the Mackey–Glass chaotic time series.
5

Jeong, Yeongsang, and Sungshin Kim. "A Study of Arrow Performance using Artificial Neural Network." Journal of Korean Institute of Intelligent Systems 24, no. 5 (October 25, 2014): 548–53. http://dx.doi.org/10.5391/jkiis.2014.24.5.548.

6

Tun, Myat Thida. "Implementation and Performance Evaluation of Neural Network for English Alphabet Recognition System." International Journal of Trend in Scientific Research and Development 2, no. 5 (August 31, 2018): 474–78. http://dx.doi.org/10.31142/ijtsrd15863.

7

Suhailayani Suhaimi, Nur, Zalinda Othman, and Mohd Ridzwan Yaakub. "Analyzing Prediction Performance between Wavelet Neural Network and Product-Unit Neural Network." Journal of Physics: Conference Series 1432 (January 2020): 012081. http://dx.doi.org/10.1088/1742-6596/1432/1/012081.

8

Stevens, R., J. Ikeda, A. Casillas, J. Palacio-Cayetano, and S. Clyman. "Artificial neural network-based performance assessments." Computers in Human Behavior 15, no. 3-4 (May 1999): 295–313. http://dx.doi.org/10.1016/s0747-5632(99)00025-4.

9

Jasic, Teo, and Douglas Wood. "Neural network protocols and model performance." Neurocomputing 55, no. 3-4 (October 2003): 747–53. http://dx.doi.org/10.1016/s0925-2312(03)00437-5.

10

Wilson, Charles L., James L. Blue, and Omid M. Omidvar. "Training Dynamics and Neural Network Performance." Neural Networks 10, no. 5 (July 1997): 907–23. http://dx.doi.org/10.1016/s0893-6080(96)00119-0.

11

Porto-Pazos, Ana B., Noha Veiguela, Pablo Mesejo, Marta Navarrete, Alberto Alvarellos, Oscar Ibáñez, Alejandro Pazos, and Alfonso Araque. "Artificial Astrocytes Improve Neural Network Performance." PLoS ONE 6, no. 4 (April 19, 2011): e19109. http://dx.doi.org/10.1371/journal.pone.0019109.

12

Ding, Shuo, and Xiao Heng Chang. "A MATLAB-Based Study on the Realization and Approximation Performance of RBF Neural Networks." Applied Mechanics and Materials 325-326 (June 2013): 1746–49. http://dx.doi.org/10.4028/www.scientific.net/amm.325-326.1746.

Abstract:
The BP neural network is a widely used feed-forward network. However, its innate shortcomings have gradually given rise to the study of other networks. Currently, one of the research focuses in the area of feed-forward networks is the radial basis function neural network. To test the nonlinear function approximation capability of the radial basis function neural network, this paper first introduces the theory of RBF networks, as well as the structure, function approximation and learning algorithm of the radial basis function neural network. A simulation test is then carried out to compare BPNN and RBFNN. The simulation results indicate that RBFNN is simpler in structure, faster and better in approximation performance; that is to say, RBFNN is superior to BPNN in many respects. However, when solving the same problem, the structure of a radial basis network is more complicated than that of a BP neural network.
Keywords: radial basis function; neural network; function approximation; simulation; MATLAB
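As a rough illustration of the kind of comparison described above, the following Python sketch fits a small Gaussian RBF model to a nonlinear function with a least-squares solve for the output weights; the centre count, width, target function, and the omission of the BPNN counterpart are all illustrative choices, not the paper's MATLAB setup.

```python
import numpy as np

# Target nonlinear function to approximate (illustrative)
x = np.linspace(-3, 3, 200)
y = np.sin(2 * x) + 0.3 * x**2

# Gaussian RBF layer: fixed centres and width, linear output weights
centres = np.linspace(-3, 3, 15)          # illustrative centre count
width = 0.5
Phi = np.exp(-((x[:, None] - centres[None, :]) ** 2) / (2 * width**2))
Phi = np.hstack([Phi, np.ones((len(x), 1))])   # bias column

w, *_ = np.linalg.lstsq(Phi, y, rcond=None)    # closed-form output weights
y_hat = Phi @ w
print("RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))
```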
13

Manoharan J, Samuel. "Capsule Network Algorithm for Performance Optimization of Text Classification." March 2021 3, no. 1 (April 3, 2021): 1–9. http://dx.doi.org/10.36548/jscp.2021.1.001.

Abstract:
Capsule networks have demonstrated optimized performance on structured data in the area of visual inference. In this paper, hierarchical multi-label text classification is performed with a simple capsule network algorithm. It is compared to the support vector machine (SVM), long short-term memory (LSTM), artificial neural network (ANN), convolutional neural network (CNN) and other neural and non-neural architectures to demonstrate its superior performance. The Blurb Genre Collection (BGC) and Web of Science (WOS) datasets are used for the experiments. The encoded latent data is combined with the algorithm to handle structurally diverse categories and rare events in hierarchical multi-label text applications.
14

Gevaert, Wouter, Georgi Tsenov, and Valeri Mladenov. "Neural networks used for speech recognition." Journal of Automatic Control 20, no. 1 (2010): 1–7. http://dx.doi.org/10.2298/jac1001001g.

Abstract:
This paper presents an investigation of speech recognition classification performance using two standard neural network structures as the classifier: a feed-forward neural network trained with the back-propagation algorithm and a radial basis function neural network.
15

Li, Wei, Shaogang Gong, and Xiatian Zhu. "Neural Graph Embedding for Neural Architecture Search." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 4707–14. http://dx.doi.org/10.1609/aaai.v34i04.5903.

Abstract:
Existing neural architecture search (NAS) methods often operate in discrete or continuous spaces directly, which ignores the graphical topology knowledge of neural networks. This leads to suboptimal search performance and efficiency, given that neural networks are essentially directed acyclic graphs (DAGs). In this work, we address this limitation by introducing a novel idea of neural graph embedding (NGE). Specifically, we represent the building block (i.e. the cell) of neural networks with a neural DAG, and learn it by leveraging a Graph Convolutional Network to propagate and model the intrinsic topology information of network architectures. This results in a generic neural network representation integrable with different existing NAS frameworks. Extensive experiments show the superiority of NGE over the state-of-the-art methods on image classification and semantic segmentation.
16

Jiang, Yiming, Chenguang Yang, Shi-lu Dai, and Beibei Ren. "Deterministic learning enhanced neural network control of unmanned helicopter." International Journal of Advanced Robotic Systems 13, no. 6 (November 28, 2016): 172988141667111. http://dx.doi.org/10.1177/1729881416671118.

Abstract:
In this article, a neural network–based tracking controller is developed for an unmanned helicopter system with guaranteed global stability in the presence of uncertain system dynamics. Due to the coupling and modeling uncertainties of helicopter systems, neural network approximation techniques are employed to compensate for the unknown dynamics of each subsystem. In order to extend the semiglobal stability achieved by conventional neural control to global stability, a switching mechanism is also integrated into the control design, so that the resulting neural controller is always valid without any concern about either initial conditions or the range of state variables. In addition, deterministic learning is applied to the neural network learning control, so that the adaptive neural networks are able to store the learned knowledge, which can be reused to construct a neural network controller with improved control performance. Simulation studies are carried out on a helicopter model to illustrate the effectiveness of the proposed control design.
17

Wen, Hui, Tao Yan, Zhiqiang Liu, and Deli Chen. "Integrated neural network model with pre-RBF kernels." Science Progress 104, no. 3 (July 2021): 003685042110261. http://dx.doi.org/10.1177/00368504211026111.

Abstract:
To improve the network performance of radial basis function (RBF) and back-propagation (BP) networks on complex nonlinear problems, an integrated neural network model with pre-RBF kernels is proposed. The proposed method is based on the framework of a single optimized BP network and an RBF network. By integrating and connecting the RBF kernel mapping layer and the BP neural network, the local features of a sample set can be effectively extracted to improve separability; subsequently, the connected BP network can be used to perform learning and classification in the kernel space. Experiments on an artificial dataset and three benchmark datasets show that the proposed model combines the advantages of RBF and BP networks and improves on the performance of both. Finally, the effectiveness of the proposed method is verified.
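A minimal sketch of the general idea, under the assumption of Gaussian kernels and with the BP part reduced to a single logistic unit (all sizes, the centre-selection rule, and the toy data are illustrative): inputs are first mapped through an RBF kernel layer, and the resulting features are then learned by an ordinary gradient-trained classifier.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1.0).astype(float)   # toy nonlinear labels

# Pre-RBF kernel mapping: Gaussian similarities to a handful of centres
centres = X[rng.choice(len(X), 20, replace=False)]
gamma = 1.0
K = np.exp(-gamma * ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1))

# "BP part" reduced to a single logistic unit trained by gradient descent
w = np.zeros(K.shape[1]); b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(K @ w + b)))
    w -= 0.5 * (K.T @ (p - y) / len(y))
    b -= 0.5 * np.mean(p - y)
print("train accuracy:", np.mean((p > 0.5) == (y == 1)))
```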
18

Li, Keping, Shuang Gu, and Dongyang Yan. "A Link Prediction Method Based on Neural Networks." Applied Sciences 11, no. 11 (June 3, 2021): 5186. http://dx.doi.org/10.3390/app11115186.

Abstract:
Link prediction to optimize network performance is of great significance in network evolution. Because of the complexity of network systems and the uncertainty of network evolution, it faces many challenges. This paper proposes a new link prediction method based on neural networks, trained with scale-free networks as input data and optimized networks produced by link prediction models as output data. To limit the influence of the neural network's generalization on the experiments, a greedy link pruning strategy is applied. We consider network efficiency and the proposed global network structure reliability as objectives to comprehensively evaluate link prediction performance and the advantages of the neural network method. The experimental results demonstrate that the neural network method generates optimized networks with better network efficiency and global network structure reliability than the traditional link prediction models.
19

Hanumantha Rao, T. V. K., Saurabh Mishra, and Sudhir Kumar Singh. "Automatic Electrocardiographic Analysis Using Artificial Neural Network Models." Advanced Materials Research 403-408 (November 2011): 3587–93. http://dx.doi.org/10.4028/www.scientific.net/amr.403-408.3587.

Abstract:
In this paper, artificial neural network methods were used for electrocardiogram (ECG) pattern analysis. ECG analysis can benefit from the wide availability of computing technology, in terms of both features and performance. This paper presents results achieved by carrying out classification tasks that integrate the most common features of ECG analysis. Four types of ECG patterns were chosen from the MIT-BIH database to be recognized: normal sinus rhythm, long-term atrial fibrillation, sudden cardiac death and congestive heart failure. R-R interval features were used as the characteristic representation of the original ECG signals fed into the neural network models. Two types of artificial neural network models, self-organizing map (SOM) and radial basis function (RBF) networks, were separately trained and tested for ECG pattern recognition, and the experimental results of the different models were compared. The trade-off between the time-consuming training of artificial neural networks and their performance is also explored. The radial basis function network exhibited the best performance, reaching an overall accuracy of 93%, while the Kohonen self-organizing map network reached an overall accuracy of 87.5%.
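The R-R interval features mentioned above are simply the successive differences between detected R-peak times; a minimal sketch of turning them into fixed-length feature vectors for a classifier such as an SOM or RBF network (the peak times and window length here are made up for illustration):

```python
import numpy as np

# Hypothetical R-peak times in seconds (in practice, detected from the ECG signal)
r_peaks = np.cumsum(0.8 + 0.05 * np.random.default_rng(0).normal(size=60))

rr = np.diff(r_peaks)                     # R-R intervals
window = 8                                # illustrative window length
features = np.array([rr[i:i + window] for i in range(len(rr) - window)])
# Each row could now be fed to an SOM or RBF classifier as in the paper
print(features.shape)
```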
20

Ding, Shuo, Xiao Heng Chang, and Qing Hui Wu. "Approximation Performance of BP Neural Networks Improved by Heuristic Approach." Applied Mechanics and Materials 411-414 (September 2013): 1952–55. http://dx.doi.org/10.4028/www.scientific.net/amm.411-414.1952.

Abstract:
Among the improved BP neural network algorithms, those improved by heuristic approaches are studied in this paper. Firstly, three types of heuristically improved BP neural network algorithms are programmed in the MATLAB 7.0 environment. Network training and simulation tests are then conducted, taking a nonlinear function as an example. The approximation performances of BP neural networks improved by different numerical optimization approaches are compared to aid the selection of a proper numerical optimization approach.
21

Azlah, Muhammad Azfar Firdaus, Lee Suan Chua, Fakhrul Razan Rahmad, Farah Izana Abdullah, and Sharifah Rafidah Wan Alwi. "Review on Techniques for Plant Leaf Classification and Recognition." Computers 8, no. 4 (October 21, 2019): 77. http://dx.doi.org/10.3390/computers8040077.

Abstract:
Plant systematics can be classified and recognized based on the reproductive system (flowers) and leaf morphology. Neural networks are among the most popular machine learning algorithms for plant leaf classification. The commonly used classifiers are the artificial neural network (ANN), probabilistic neural network (PNN), convolutional neural network (CNN), k-nearest neighbor (KNN) and support vector machine (SVM); some studies even combined techniques for accuracy improvement. The use of several varying preprocessing techniques and characteristic parameters in feature extraction appeared to improve the performance of plant leaf classification. The findings of previous studies are critically compared in terms of their accuracy based on the applied neural network techniques. This paper aims to review and analyze the implementation and performance of various methodologies for plant classification. Each technique has its advantages and limitations in leaf pattern recognition. The quality of the leaf images plays an important role, and therefore a reliable source of leaf database must be used to establish the machine learning algorithm prior to leaf recognition and validation.
22

Wongsathan, Rati, and Pasit Pothong. "Heart Disease Classification Using Artificial Neural Networks." Applied Mechanics and Materials 781 (August 2015): 624–27. http://dx.doi.org/10.4028/www.scientific.net/amm.781.624.

Abstract:
Neural networks (NNs) have emerged as an important tool for classification in the field of decision making. The main objective of this work is to design the structure and select the optimized parameters of neural networks to implement a heart disease classifier. Three types of neural networks, i.e. the multi-layered perceptron neural network (MLP-NN), radial basis function neural network (RBF-NN), and generalized regression neural network (GR-NN), were used to test the performance of heart disease classification. The classification accuracy obtained by the RBF-NN was considerably higher than that of the MLP-NN and GR-NN. The accuracy is very promising compared with previously reported results for other types of neural networks.
23

HUANG, WEI, KIN KEUNG LAI, YOSHITERU NAKAMORI, SHOUYANG WANG, and LEAN YU. "NEURAL NETWORKS IN FINANCE AND ECONOMICS FORECASTING." International Journal of Information Technology & Decision Making 06, no. 01 (March 2007): 113–40. http://dx.doi.org/10.1142/s021962200700237x.

Abstract:
Artificial neural networks (ANNs) have been widely applied to finance and economic forecasting as a powerful modeling technique. By reviewing the related literature, we discuss the input variables, types of neural network models, and performance comparisons for the prediction of foreign exchange rates, stock market indices and economic growth. Economic fundamentals are important in driving exchange rates, stock market index prices and economic growth. Most neural network inputs for exchange rate prediction are univariate, while those for stock market index price and economic growth predictions are multivariate in most cases. Comparisons of forecasting performance between neural networks and other models give mixed results; the reasons may be differences in data, forecasting horizons, types of neural network models, and so on. The prediction performance of neural networks can be improved by integrating them with other technologies. Nonlinear combined forecasting by neural networks also provides encouraging results.
24

DRUCKER, HARRIS, ROBERT SCHAPIRE, and PATRICE SIMARD. "BOOSTING PERFORMANCE IN NEURAL NETWORKS." International Journal of Pattern Recognition and Artificial Intelligence 07, no. 04 (August 1993): 705–19. http://dx.doi.org/10.1142/s0218001493000352.

Abstract:
A boosting algorithm, based on the probably approximately correct (PAC) learning model is used to construct an ensemble of neural networks that significantly improves performance (compared to a single network) in optical character recognition (OCR) problems. The effect of boosting is reported on four handwritten image databases consisting of 12000 digits from segmented ZIP Codes from the United States Postal Service and the following from the National Institute of Standards and Technology: 220000 digits, 45000 upper case letters, and 45000 lower case letters. We use two performance measures: the raw error rate (no rejects) and the reject rate required to achieve a 1% error rate on the patterns not rejected. Boosting improved performance significantly, and, in some cases, dramatically.
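The boosting idea can be sketched with a generic AdaBoost-style loop: each stage trains a weak learner on re-weighted examples and the ensemble votes with stage-dependent weights. The fragment below is not the authors' exact filtering scheme; the weighted single-neuron weak learner and toy data are stand-ins for the neural networks and OCR data used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 2))
y = np.where(X[:, 0] + 0.5 * X[:, 1] > 0, 1, -1)           # toy labels in {-1, +1}

def train_weak(X, y, w):
    """Weighted single-neuron classifier (stand-in for the per-stage network)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    theta = np.zeros(Xb.shape[1])
    for _ in range(200):
        p = np.tanh(Xb @ theta)
        theta -= 0.1 * Xb.T @ (w * (p - y))                 # weighted gradient step
    return theta

def predict(theta, X):
    return np.sign(np.hstack([X, np.ones((len(X), 1))]) @ theta)

w = np.full(len(y), 1 / len(y))                             # uniform initial example weights
ensemble = []
for _ in range(5):                                          # a few boosting rounds
    theta = train_weak(X, y, w)
    miss = predict(theta, X) != y
    err = np.clip(w[miss].sum(), 1e-9, 1 - 1e-9)
    alpha = 0.5 * np.log((1 - err) / err)                   # stage vote weight
    ensemble.append((alpha, theta))
    w *= np.exp(alpha * np.where(miss, 1, -1)); w /= w.sum()  # re-weight hard examples

H = np.sign(sum(a * predict(t, X) for a, t in ensemble))
print("ensemble training accuracy:", np.mean(H == y))
```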
25

Partal, Turgay. "River flow forecasting using different artificial neural network algorithms and wavelet transform." Canadian Journal of Civil Engineering 36, no. 1 (January 2009): 26–38. http://dx.doi.org/10.1139/l08-090.

Abstract:
In this study, the wavelet–neural network structure that combines the wavelet transform and artificial neural networks has been employed to forecast the river flows of Turkey. Discrete wavelet transforms, which are useful for obtaining the periodic components of the measured data, have a significantly positive effect on artificial neural network modeling performance. Generally, the feed-forward back-propagation method has been studied in artificial neural network applications to water resources data. In this study, the performance of generalized neural networks and radial basis neural networks was compared with feed-forward back-propagation methods. Six different models were studied for forecasting monthly river flows. The wavelet and feed-forward back-propagation model was superior to the other models in terms of the selected performance criteria.
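The preprocessing step described above can be sketched as a one-level Haar decomposition whose lagged sub-series feed a forecaster; the wavelet choice, lag count, synthetic flow series, and the linear read-out standing in for the neural network are all illustrative assumptions.

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar transform: approximation and detail coefficients."""
    x = x[: len(x) // 2 * 2]
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

rng = np.random.default_rng(0)
flow = 50 + 10 * np.sin(np.arange(240) * 2 * np.pi / 12) + rng.normal(0, 2, 240)

a, d = haar_dwt(flow)                       # periodic component + fluctuations
lags, X, y = 3, [], []
for t in range(lags, len(a) - 1):
    X.append(np.r_[a[t - lags:t], d[t - lags:t]])   # lagged wavelet features
    y.append(a[t + 1])                               # next-step target
X, y = np.array(X), np.array(y)

# Linear read-out as a stand-in for the feed-forward network in the paper
w, *_ = np.linalg.lstsq(np.hstack([X, np.ones((len(X), 1))]), y, rcond=None)
```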
26

SAWITRI, MADE NITA DWI, I. WAYAN SUMARJAYA, and NI KETUT TARI TASTRAWATI. "PERAMALAN MENGGUNAKAN METODE BACKPROPAGATION NEURAL NETWORK" [Forecasting Using the Backpropagation Neural Network Method]. E-Jurnal Matematika 7, no. 3 (September 2, 2018): 264. http://dx.doi.org/10.24843/mtk.2018.v07.i03.p213.

Abstract:
The purpose of this study is to forecast the price of rice in the city of Denpasar in 2017 using the backpropagation neural network method. A backpropagation neural network is an artificial neural network model trained by finding optimal weight values. Artificial neural networks are information processing systems with performance characteristics similar to those of human neural networks. The analysis uses time series data of rice prices in the city of Denpasar from January 2001 until December 2016. This research concludes that the lowest rice price is predicted for July 2017, at Rp9,791.5, while the highest rice price is predicted for April 2017, at Rp9,839.4.
27

Mansor, Muhammad Naufal, and Mohd Nazri Rejab. "Neural Network Performance Comparison in Infant Pain Expression Classifications." Applied Mechanics and Materials 475-476 (December 2013): 1104–9. http://dx.doi.org/10.4028/www.scientific.net/amm.475-476.1104.

Abstract:
Infant pain expression is a non-stationary response made by infants to certain situations. This facial expression can be used to identify the physical or psychological status of an infant. The aim of this work is to compare the performance of features for infant pain classification. Fast Fourier transform (FFT) and singular value decomposition (SVD) features are computed and evaluated with different classifiers. Two case studies, normal and pain, are performed. Two types of radial basis artificial neural networks, namely the probabilistic neural network (PNN) and the general regression neural network (GRNN), are used to classify infant pain. The results emphasize that the proposed features and classification algorithms can aid medical professionals in diagnosing the pathological status of infant pain.
28

Aleksendrić, Dragan, and David C. Barton. "Neural network prediction of disc brake performance." Tribology International 42, no. 7 (July 2009): 1074–80. http://dx.doi.org/10.1016/j.triboint.2009.03.005.

29

Wibig, Tadeusz. "Neural Network Performance for Complex Minimization Problem." Communications and Network 02, no. 01 (2010): 31–37. http://dx.doi.org/10.4236/cn.2010.21004.

30

Horiki, Ken-ichi, and Tadashi Fukuda. "Pavement Performance Forecasting Model Using Neural Network." Doboku Gakkai Ronbunshu, no. 496 (1994): 99–102. http://dx.doi.org/10.2208/jscej.1994.496_99.

31

Jain, Bharat A., and Barin N. Nag. "Performance Evaluation of Neural Network Decision Models." Journal of Management Information Systems 14, no. 2 (September 1997): 201–16. http://dx.doi.org/10.1080/07421222.1997.11518171.

32

Peterson, Kyle D. "Recurrent Neural Network to Forecast Sprint Performance." Applied Artificial Intelligence 32, no. 7-8 (August 2018): 692–706. http://dx.doi.org/10.1080/08839514.2018.1505214.

33

SAITO, Hiroki, Baotong WANG, Ryusuke NUMAKURA, Takahiro BAMBA, and Kazuo YONEKURA. "Performance Prediction of Turbocharger using Neural Network." Proceedings of Design & Systems Conference 2018.28 (2018): 3406. http://dx.doi.org/10.1299/jsmedsd.2018.28.3406.

34

Karbachevsky, Alex, Chaim Baskin, Evgenii Zheltonozhskii, Yevgeny Yermolin, Freddy Gabbay, Alex M. Bronstein, and Avi Mendelson. "Early-Stage Neural Network Hardware Performance Analysis." Sustainability 13, no. 2 (January 13, 2021): 717. http://dx.doi.org/10.3390/su13020717.

Abstract:
The demand for running NNs in embedded environments has increased significantly in recent years due to the significant success of convolutional neural network (CNN) approaches in various tasks, including image recognition and generation. The task of achieving high accuracy on resource-restricted devices, however, is still considered to be challenging, which is mainly due to the vast number of design parameters that need to be balanced. While the quantization of CNN parameters leads to a reduction of power and area, it can also generate unexpected changes in the balance between communication and computation. This change is hard to evaluate, and the lack of balance may lead to lower utilization of either memory bandwidth or computational resources, thereby reducing performance. This paper introduces a hardware performance analysis framework for identifying bottlenecks in the early stages of CNN hardware design. We demonstrate how the proposed method can help in evaluating different architecture alternatives of resource-restricted CNN accelerators (e.g., part of real-time embedded systems) early in design stages and, thus, prevent making design mistakes.
35

Behboodi, Bahareh, Sung-Ho Lim, Miguel Luna, Hyeon-Ae Jeon, and Ji-Woong Choi. "Artificial and convolutional neural networks for assessing functional connectivity in resting-state functional near infrared spectroscopy." Journal of Near Infrared Spectroscopy 27, no. 3 (March 21, 2019): 191–205. http://dx.doi.org/10.1177/0967033519836623.

Abstract:
Functional connectivity derived from resting-state functional near infrared spectroscopy has gained the attention of recent scholars because of its capability to provide valuable insight into intrinsic networks and various neurological disorders of the human brain. Several progressive methodologies for detecting resting-state functional connectivity patterns in functional near infrared spectroscopy, such as seed-based correlation analysis and independent component analysis as the most widely used methods, were adopted in previous studies. Although these two methods provide complementary information to each other, the conventional seed-based method shows degraded performance compared to the independent component analysis-based scheme in terms of sensitivity and specificity. In this study, an artificial neural network and a convolutional neural network were utilized to overcome the performance degradation of the conventional seed-based method. First, the results of the artificial neural network- and convolutional neural network-based methods illustrated superior performance in terms of specificity and sensitivity compared to both conventional approaches. Second, the artificial neural network, convolutional neural network, and independent component analysis methods showed more robustness than the seed-based method. Moreover, resting-state functional connectivity patterns derived from the artificial neural network- and convolutional neural network-based methods in sensorimotor and motor areas were consistent with previous findings. The main contribution of the present work is to emphasize that artificial neural networks as well as convolutional neural networks can be exploited as a high-performance seed-based method to estimate the temporal relation among brain networks during the resting state.
36

Tang, Dou Nan, Min Yang, and Mei Hui Zhang. "Travel Mode Choice Modeling: A Comparison of Bayesian Networks and Neural Networks." Applied Mechanics and Materials 209-211 (October 2012): 717–23. http://dx.doi.org/10.4028/www.scientific.net/amm.209-211.717.

Abstract:
In recent years, Bayesian networks and neural networks have been widely applied to travel demand prediction. However, their prediction performance is rarely directly compared. Using experimental tests conducted on the same dataset, a Bayesian network model and a neural network model are compared for travel mode analysis for the first time in this paper. It is found that the fully Bayesian network model tends to overfit the training set when the network itself is considerably complicated. The TAN structure, by contrast, has better generalization and achieves a better and more stable prediction performance, with a prediction accuracy of 75.4% ± 0.63%, compared to 72.2% ± 3.01% for the BP neural network model. Experiments and statistical tests demonstrate the superiority of Bayesian networks, and we propose using Bayesian networks, especially TAN, instead of neural networks in the travel mode choice prediction field.
37

Dong, Yiping, Ce Li, Zhen Lin, and Takahiro Watanabe. "Multiple Network-on-Chip Model for High Performance Neural Network." JSTS:Journal of Semiconductor Technology and Science 10, no. 1 (March 31, 2010): 28–36. http://dx.doi.org/10.5573/jsts.2010.10.1.028.

38

He, Feng Qin, Ping Zhou, and Jian Gang Wang. "Modeling on Hydrocyclone Separation Performance by Neural Network." Applied Mechanics and Materials 105-107 (September 2011): 185–88. http://dx.doi.org/10.4028/www.scientific.net/amm.105-107.185.

Abstract:
A 17-27-5 BP neural network model was built, whose sample data were obtained from hydrocyclone separation experiments; another 6-30-5 BP neural network was also built, whose sample data came from simulations of the LZVV of a hydrocyclone with the CFD code FLUENT. Both neural network models show good predictive validity for hydrocyclone separation performance. This demonstrates that LZVV structural parameters can capture hydrocyclone separation performance and reduce the number of input parameters of the neural network model. It also indicates that a predictive model of hydrocyclone separation performance can be built with a neural network.
39

Afridi, Muhammad Ishaq. "Cognition in a Cognitive Routing System for Mobile Ad-Hoc Network through Leaning Automata and Neural Network." Applied Mechanics and Materials 421 (September 2013): 694–700. http://dx.doi.org/10.4028/www.scientific.net/amm.421.694.

Abstract:
A cognitive routing system intelligently selects one protocol at a time for specific routing conditions and environments in a MANET. Cognition, or self-learning, can be achieved in a cognitive routing system for a mobile ad-hoc network (MANET) through a learning system such as learning automata or neural networks. This article covers the application of learning automata and neural networks to achieve cognition in a MANET routing system. Mobile ad-hoc networks are dynamic in nature and lack any fixed infrastructure, so implementing cognition enhances the performance of the overall routing system in these networks. In learning automata, the process of learning is different from reasoning or decision making, and little knowledge is required to take decisions. A neural network can be improved by increasing the number of neurons and changing parameters; self-training enhances neural network performance and selects a suitable protocol for a given network environment. Cognition in a MANET is based either on learning automata, as in some wireless sensor networks, or on specialized cognitive neural networks such as the Elman network. Learning automata do not follow predetermined rules and have the ability to learn and evolve; their interaction with the MANET environment results in the evolution of the cognition system.
40

Liu, Wei, Cheng Kun Liu, Da Min Zhuang, Zhong Qi Liu, and Xiu Gan Yuan. "Study on Pilot Performance Model." Advanced Materials Research 383-390 (November 2011): 2545–49. http://dx.doi.org/10.4028/www.scientific.net/amr.383-390.2545.

Abstract:
In order to evaluate pilot performance objectively, a back propagation (BP) neural network model (621423 topology) was established using eye movement data. The BP network's data, which came from an earlier experiment and random interpolation, were divided into a training set and a test set and normalized. Based on the neural network toolbox in Matlab, the number of hidden layer nodes of the BP network was determined using an empirical formula and experimental comparison; the BP algorithms in the toolbox were optimized; the training data and test data were input into the model for training and simulation; and pilot performance at three skill levels was predicted and evaluated. The research shows that pilot performance can be accurately evaluated by setting up a BP neural network model with eye movement data, and the evaluation method can provide a reference for flight training.
41

Huan, Li, and Li Chao. "FlexRay Vehicle Network Predictive Control Based on Neural Network." MATEC Web of Conferences 232 (2018): 01042. http://dx.doi.org/10.1051/matecconf/201823201042.

Abstract:
We propose a design method for FlexRay vehicle network predictive control based on a neural network, to address the security and reliability of the FlexRay network control system, whose control performance and stability are reduced when transmitting data under heavy load. The method samples the current working state of the vehicle network to predict the network state at the next time step, and adapts to the dynamic load in the vehicular network system through on-line adaptive workload adjustment. A nonlinear neural network model is used to predict future performance; the controller calculates the control input and optimizes the performance of the network model at the next time step. Simulation results in Matlab/Simulink show that the neural network predictive control has good learning ability and adaptability and can improve the performance of the FlexRay vehicle network control system.
42

Chaouachi, Aymen, Rashad M. Kamel, and Ken Nagasaka. "Neural Network Ensemble-Based Solar Power Generation Short-Term Forecasting." Journal of Advanced Computational Intelligence and Intelligent Informatics 14, no. 1 (January 20, 2010): 69–75. http://dx.doi.org/10.20965/jaciii.2010.p0069.

Abstract:
This paper presents the applicability of artificial neural networks for 24-hour-ahead solar power generation forecasting of a 20 kW photovoltaic system; the developed forecasting is suitable for reliable microgrid energy management. In total, four neural networks were proposed, namely a multi-layered perceptron, a radial basis function network, a recurrent network and a neural network ensemble consisting of bagged networks. The forecasting reliability of the proposed neural networks was assessed in terms of forecasting error performance using statistical and graphical methods. The experimental results showed that all the proposed networks achieved an acceptable forecasting accuracy. In terms of comparison, the neural network ensemble gives the highest forecasting precision compared to the conventional networks. In fact, each network of the ensemble over-fits to some extent, leading to a diversity which enhances noise tolerance and forecasting generalization performance compared to the conventional networks.
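A rough sketch of the bagged-ensemble member of the comparison: bootstrap-resample the training windows, fit one model per resample, and average the member forecasts. The linear autoregressive member and the synthetic generation profile below are stand-ins for the perceptron, RBF and recurrent members and the real photovoltaic data used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic daily generation profile (illustrative only)
power = np.clip(np.sin(np.arange(96) * np.pi / 48), 0, None) * 20 + rng.normal(0, 1, 96)

lags = 4
X = np.array([power[t - lags:t] for t in range(lags, len(power))])
y = power[lags:]

def fit_member(X, y):
    idx = rng.integers(0, len(X), len(X))          # bootstrap resample
    Xb = np.hstack([X[idx], np.ones((len(idx), 1))])
    w, *_ = np.linalg.lstsq(Xb, y[idx], rcond=None)
    return w

members = [fit_member(X, y) for _ in range(10)]
Xf = np.hstack([X, np.ones((len(X), 1))])
forecast = np.mean([Xf @ w for w in members], axis=0)   # ensemble average
print("MAE:", np.mean(np.abs(forecast - y)))
```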
43

Parks, Allen D. "Characterizing Computation in Artificial Neural Networks by their Diclique Covers and Forman-Ricci Curvatures." European Journal of Engineering Research and Science 5, no. 2 (February 13, 2020): 171–77. http://dx.doi.org/10.24018/ejers.2020.5.2.1689.

Abstract:
The relationships between the structural topology of artificial neural networks, their computational flow, and their performance are not well understood. Consequently, a unifying mathematical framework that describes computational performance in terms of the underlying structure does not exist. This paper makes a modest contribution to understanding the structure–computational flow relationship in artificial neural networks from the perspective of the dicliques that cover the structure of an artificial neural network and the Forman-Ricci curvature of an artificial neural network's connections. Special diclique cover digraph representations of artificial neural networks useful for network analysis are introduced, and it is shown that such covers generate semigroups that provide algebraic representations of neural network connectivity.
44

HUSAINI, NOOR AIDA, ROZAIDA GHAZALI, NAZRI MOHD NAWI, and LOKMAN HAKIM ISMAIL. "THE EFFECT OF NETWORK PARAMETERS ON PI-SIGMA NEURAL NETWORK FOR TEMPERATURE FORECASTING." International Journal of Modern Physics: Conference Series 09 (January 2012): 440–47. http://dx.doi.org/10.1142/s2010194512005521.

Abstract:
In this paper, we present the effect of network parameters on forecasting the temperature of a suburban area in Batu Pahat, Johor. Neural networks have commonly been applied to predict temperature and most other meteorological parameters. However, researchers have frequently neglected the network parameters that might affect a neural network's performance. Therefore, this study explores the effect of network parameters using a Pi-Sigma Neural Network (PSNN) with the backpropagation algorithm. The network's performance is evaluated using a historical dataset of temperature in Batu Pahat for one-step-ahead forecasting and is benchmarked against a Multilayer Perceptron (MLP) for comparison. We found that network parameters significantly affect the performance of PSNN for temperature forecasting. Towards the end of this paper, we conclude with the best forecasting model for predicting the temperature, based on the comparison in our study.
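For readers unfamiliar with the architecture, a pi-sigma unit multiplies the outputs of K linear summing units before a nonlinearity; the sketch below shows a single forward pass (the input size, order K = 3, and sigmoid output are illustrative, not the paper's configuration).

```python
import numpy as np

def pi_sigma_forward(x, W, b):
    """Pi-sigma unit: product of K linear summations, squashed by a sigmoid.

    W has shape (K, n_inputs) and b shape (K,); each row is one 'sigma' unit,
    and their outputs are multiplied ('pi') before the nonlinearity.
    """
    sums = W @ x + b                            # K linear summing units
    return 1.0 / (1.0 + np.exp(-np.prod(sums)))

rng = np.random.default_rng(0)
x = rng.normal(size=5)                          # e.g. lagged temperature readings
W = rng.normal(scale=0.1, size=(3, 5))          # K = 3 sigma units (network order 3)
b = np.zeros(3)
print(pi_sigma_forward(x, W, b))
```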
45

Li, Naihan, Shujie Liu, Yanqing Liu, Sheng Zhao, and Ming Liu. "Neural Speech Synthesis with Transformer Network." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 6706–13. http://dx.doi.org/10.1609/aaai.v33i01.33016706.

Abstract:
Although end-to-end neural text-to-speech (TTS) methods (such as Tacotron2) have been proposed and achieve state-of-the-art performance, they still suffer from two problems: 1) low efficiency during training and inference; 2) difficulty modeling long-range dependency using current recurrent neural networks (RNNs). Inspired by the success of the Transformer network in neural machine translation (NMT), in this paper we introduce and adapt the multi-head attention mechanism to replace the RNN structures and also the original attention mechanism in Tacotron2. With the help of multi-head self-attention, the hidden states in the encoder and decoder are constructed in parallel, which improves training efficiency. Meanwhile, any two inputs at different times are connected directly by a self-attention mechanism, which solves the long-range dependency problem effectively. Using phoneme sequences as input, our Transformer TTS network generates mel spectrograms, followed by a WaveNet vocoder to output the final audio results. Experiments are conducted to test the efficiency and performance of our new network. For efficiency, our Transformer TTS network speeds up training about 4.25 times compared with Tacotron2. For performance, rigorous human tests show that our proposed model achieves state-of-the-art performance (outperforming Tacotron2 with a gap of 0.048) and is very close to human quality (4.39 vs. 4.44 in MOS).
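The multi-head self-attention that replaces the recurrence follows the standard scaled dot-product formulation, softmax(QK^T / sqrt(d)) V; a minimal single-head sketch (shapes and weights are illustrative, not the paper's model):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
x = rng.normal(size=(12, 16))            # 12 phoneme embeddings of width 16
Wq, Wk, Wv = (rng.normal(scale=0.1, size=(16, 16)) for _ in range(3))
out = attention(x @ Wq, x @ Wk, x @ Wv)  # every position attends to every other
print(out.shape)
```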
46

Durga, V. Sathya, and Thangakumar Jeyaprakash. "Predicting Academic Performance of Deaf Students Using Feed Forward Neural Network and An Improved PSO Algorithm." Webology 18, Special Issue 01 (April 29, 2021): 112–26. http://dx.doi.org/10.14704/web/v18si01/web18048.

Abstract:
The literacy rate of deaf students in India is very low, so there is a need to build an effective academic prediction model for identifying weak deaf students. Many machine learning techniques, such as decision trees, support vector machines and neural networks, are used to build prediction models, but the most preferred technique is the neural network. It was found that regression models built with neural networks take more time to converge and the error rate is quite high. To address these problems, we use Particle Swarm Optimization (PSO) for weight adjustment in the neural network. However, one of the main drawbacks of PSO lies in setting the initial parameters. So a new PSO algorithm is proposed that determines the initial weights of the neural network using a regression equation. The results show that a neural network built with the proposed PSO algorithm performs better than one built with the basic PSO algorithm. The Mean Square Error (MSE) achieved in this work is 0.0998, which is comparatively less than in many existing models.
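The weight-adjustment idea can be sketched with a standard PSO loop over a flattened weight vector; the tiny 3-4-1 network, the toy data, and the random swarm initialisation are illustrative assumptions — the paper's specific contribution, seeding the initial weights from a regression equation, is only hinted at in a comment.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=200) > 0).astype(float)

def mse(theta):
    """Fitness: MSE of a 3-4-1 network whose weights are packed in theta (length 21)."""
    W1, b1 = theta[:12].reshape(3, 4), theta[12:16]
    W2, b2 = theta[16:20], theta[20]
    h = np.tanh(X @ W1 + b1)
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))
    return np.mean((p - y) ** 2)

n_particles, dim = 20, 21
pos = rng.normal(scale=0.5, size=(n_particles, dim))   # paper: seed from a regression fit instead
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([mse(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)]

for _ in range(100):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)  # PSO update
    pos += vel
    f = np.array([mse(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)]

print("best MSE:", pbest_f.min())
```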
47

Liu, Han, Fang Zhen Song, Ming Ming Li, and Bo Song. "Ship Performance Research Based on BP Neural Network." Applied Mechanics and Materials 709 (December 2014): 176–79. http://dx.doi.org/10.4028/www.scientific.net/amm.709.176.

Abstract:
This work addresses the problem that it is hard to provide analytical formulas relating a ship's maximum equivalent stress and maximum shear stress to its structural geometric parameters. Finite element calculations are carried out with an orthogonal experimental design under the most dangerous load case. The data obtained are used as training and test samples to establish BP neural network models of the ship's maximum equivalent stress and maximum shear stress. With the aid of the neural network toolbox in MATLAB, a BP neural network topology mapping the relationship between whole-ship performance indexes and design variables is established. Training and testing are completed with data measured by the shipyard, and the correctness of the network is verified. The neural network required for further optimization design is thus obtained and is helpful in reducing the ship mass without exceeding the allowable stress.
48

Rathi, K. J., and M. S. Ali. "Neural Network Controller for Power Electronics Circuits." IAES International Journal of Artificial Intelligence (IJ-AI) 6, no. 2 (June 1, 2017): 49. http://dx.doi.org/10.11591/ijai.v6.i2.pp49-55.

Abstract:
Artificial intelligence (AI) techniques, particularly neural networks, have recently had a significant impact on power electronics. This paper explores the perspective of neural network applications in the intelligent control of power electronics circuits. The neural network controller (NNC) is designed to track the output voltage and to improve the performance of power electronics circuits. The controller is designed and simulated using MATLAB-SIMULINK.
49

LEUNG, F. H. F., S. H. LING, and H. K. LAM. "AN IMPROVED GENETIC-ALGORITHM-BASED NEURAL-TUNED NEURAL NETWORK." International Journal of Computational Intelligence and Applications 07, no. 04 (December 2008): 469–92. http://dx.doi.org/10.1142/s1469026808002375.

Abstract:
This paper presents a neural-tuned neural network (NTNN), which is trained by an improved genetic algorithm (GA). The NTNN consists of a common neural network and a modified neural network (MNN). In the MNN, a neuron model with two activation functions is introduced. An improved GA is proposed to train the parameters of the proposed network. A set of improved genetic operations are presented, which show superior performance over the traditional GA. The proposed network structure can increase the search space of the network and offer better performance than the traditional feed-forward neural network. Two application examples are given to illustrate the merits of the proposed network and the improved GA.
50

Yang, Zhong, and Hua Du. "Application of Neural Network in Network Intrusion Detection." Applied Mechanics and Materials 644-650 (September 2014): 3334–37. http://dx.doi.org/10.4028/www.scientific.net/amm.644-650.3334.

Abstract:
In this paper, we combine clustering and neural network algorithms, propose a new FORBF neural network algorithm based on FCM and OLS, and apply it to intrusion detection systems. The simulation results show that the algorithm has satisfactory performance.