Journal articles on the topic "Neural network"

Below are the top 50 journal articles for research on the topic "Neural network". Abstracts are included where they are present in the publication metadata.

1

Navghare, Tukaram, Aniket Muley, and Vinayak Jadhav. "Siamese Neural Networks for Kinship Prediction: A Deep Convolutional Neural Network Approach." Indian Journal Of Science And Technology 17, no. 4 (January 26, 2024): 352–58. http://dx.doi.org/10.17485/ijst/v17i4.3018.

2

Abdelwahed, O. H., and M. El-Sayed Wahed. "Optimizing Single Layer Cellular Neural Network Simulator using Simulated Annealing Technique with Neural Networks." Indian Journal of Applied Research 3, no. 6 (October 1, 2011): 91–94. http://dx.doi.org/10.15373/2249555x/june2013/31.

3

Tran, Loc. "Directed Hypergraph Neural Network." Journal of Advanced Research in Dynamical and Control Systems 12, SP4 (March 31, 2020): 1434–41. http://dx.doi.org/10.5373/jardcs/v12sp4/20201622.

4

Antipova, E. S., and S. A. Rashkovskiy. "Autoassociative Hamming Neural Network." Nelineinaya Dinamika 17, no. 2 (2021): 175–93. http://dx.doi.org/10.20537/nd210204.

Abstract:
An autoassociative neural network is suggested which is based on the calculation of Hamming distances, while the principle of its operation is similar to that of the Hopfield neural network. Using standard patterns as an example, we compare the efficiency of pattern recognition for the autoassociative Hamming network and the Hopfield network. It is shown that the autoassociative Hamming network successfully recognizes standard patterns with a degree of distortion up to 40% and more than 60%, while the Hopfield network ceases to recognize the same patterns with a degree of distortion of more than 25% and less than 75%. A scheme of the autoassociative Hamming neural network based on McCulloch–Pitts formal neurons is proposed. It is shown that the autoassociative Hamming network can be considered as a dynamical system which has attractors that correspond to the reference patterns. The Lyapunov function of this dynamical system is found and the equations of its evolution are derived.
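The recall mechanism this abstract describes can be sketched in a few lines (an illustrative sketch, not the authors' McCulloch–Pitts circuit): store bipolar reference patterns and map a distorted input to the stored pattern at minimum Hamming distance.

```python
import numpy as np

# Illustrative sketch of minimum-Hamming-distance recall (not the authors'
# McCulloch-Pitts scheme): reference patterns are bipolar (+1/-1) vectors,
# and a distorted input is mapped to the closest stored pattern.
def hamming_recall(patterns, x):
    patterns = np.asarray(patterns)
    # For bipolar vectors a, b of length n: Hamming(a, b) = (n - a.b) / 2
    dists = (patterns.shape[1] - patterns @ x) // 2
    return patterns[np.argmin(dists)]

# Two 8-component reference patterns; distort one by flipping 3 of its 8
# components (37.5% distortion, within the recognition range reported).
p0 = np.array([1, 1, 1, 1, -1, -1, -1, -1])
p1 = np.array([1, -1, 1, -1, 1, -1, 1, -1])
noisy = p0.copy()
noisy[[0, 1, 2]] *= -1
assert np.array_equal(hamming_recall([p0, p1], noisy), p0)
```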
5

Perfetti, R. "A neural network to design neural networks." IEEE Transactions on Circuits and Systems 38, no. 9 (1991): 1099–103. http://dx.doi.org/10.1109/31.83884.

6

Sun, Zengguo, Guodong Zhao, Rafał Scherer, Wei Wei, and Marcin Woźniak. "Overview of Capsule Neural Networks." 網際網路技術學刊 23, no. 1 (January 2022): 033–44. http://dx.doi.org/10.53106/160792642022012301004.

Abstract:
As a vector transmission network structure, the capsule neural network has been one of the research hotspots in deep learning since it was proposed in 2017. In this paper, the latest research progress of capsule networks is analyzed and summarized. Firstly, we summarize the shortcomings of convolutional neural networks and introduce the basic concept of capsule network. Secondly, we analyze and summarize the improvements in the dynamic routing mechanism and network structure of the capsule network in recent years and the combination of the capsule network with other network structures. Finally, we compile the applications of capsule network in many fields, including computer vision, natural language, and speech processing. Our purpose in writing this article is to provide methods and means that can be used for reference in the research and practical applications of capsule networks.
7

D, Sreekanth. "Metro Water Fraudulent Prediction in Houses Using Convolutional Neural Network and Recurrent Neural Network." Revista Gestão Inovação e Tecnologias 11, no. 4 (July 10, 2021): 1177–87. http://dx.doi.org/10.47059/revistageintec.v11i4.2177.

8

Mahat, Norpah, Nor Idayunie Nording, Jasmani Bidin, Suzanawati Abu Hasan, and Teoh Yeong Kin. "Artificial Neural Network (ANN) to Predict Mathematics Students’ Performance." Journal of Computing Research and Innovation 7, no. 1 (March 30, 2022): 29–38. http://dx.doi.org/10.24191/jcrinn.v7i1.264.

Abstract:
Predicting students’ academic performance is essential to produce high-quality students. The main goal is to continuously help students to increase their ability in the learning process and to help educators as well in improving their teaching skills. Therefore, this study was conducted to predict mathematics students’ performance using an Artificial Neural Network (ANN). Secondary data from 382 mathematics students, taken from the UCI Machine Learning Repository, were used to train the neural networks. The neural network model was built using nntool. Two inputs are used, the first- and second-period grades, while one target output is used, the final grade. This study also aims to identify which training function is the best among three Feed-Forward Neural Networks, known as Network1, Network2 and Network3. Three types of training functions have been selected in this study: Levenberg-Marquardt (TRAINLM), Gradient descent with momentum (TRAINGDM) and Gradient descent with adaptive learning rate (TRAINGDA). Each training function is compared based on Performance value, correlation coefficient, gradient and epoch. MATLAB R2020a was used for data processing. The results show that the TRAINLM function is the most suitable for predicting mathematics students’ performance because it has a higher correlation coefficient and a lower Performance value.
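The two-input/one-output setup in this abstract can be mimicked in a few lines of NumPy (an illustrative sketch only: the data below are synthetic rather than the UCI set, and a single linear neuron trained by batch gradient descent stands in for nntool's training functions):

```python
import numpy as np

# Hypothetical stand-in for the study's setup: two inputs (first- and
# second-period grades, scaled to [0, 1]) and one output (final grade),
# fitted by a single linear neuron with batch gradient descent.
rng = np.random.default_rng(0)
G = rng.uniform(0, 20, size=(200, 2)) / 20.0   # synthetic G1, G2
y = 0.3 * G[:, 0] + 0.7 * G[:, 1]              # synthetic final grade

w, b, lr = np.zeros(2), 0.0, 0.5
for _ in range(5000):
    err = G @ w + b - y                        # prediction error
    w -= lr * (G.T @ err) / len(G)             # gradient of MSE / 2
    b -= lr * err.mean()

# Correlation coefficient, the comparison metric named in the abstract.
r = np.corrcoef(G @ w + b, y)[0, 1]
assert r > 0.99
```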
9

FUKUSHIMA, Kunihiko. "Neocognitron: Deep Convolutional Neural Network." Journal of Japan Society for Fuzzy Theory and Intelligent Informatics 27, no. 4 (2015): 115–25. http://dx.doi.org/10.3156/jsoft.27.4_115.

10

CVS, Rajesh, and Nadikoppula Pardhasaradhi. "Analysis of Artificial Neural-Network." International Journal of Trend in Scientific Research and Development 2, no. 6 (October 31, 2018): 418–28. http://dx.doi.org/10.31142/ijtsrd18482.

11

R, Adarsh, and Dr Suma. "Neural Network for Financial Forecasting." International Journal of Research Publication and Reviews 5, no. 5 (May 26, 2024): 13455–58. http://dx.doi.org/10.55248/gengpi.5.0524.1476.

12

Boonsatit, Nattakan, Santhakumari Rajendran, Chee Peng Lim, Anuwat Jirawattanapanit, and Praneesh Mohandas. "New Adaptive Finite-Time Cluster Synchronization of Neutral-Type Complex-Valued Coupled Neural Networks with Mixed Time Delays." Fractal and Fractional 6, no. 9 (September 13, 2022): 515. http://dx.doi.org/10.3390/fractalfract6090515.

Abstract:
The issue of adaptive finite-time cluster synchronization corresponding to neutral-type coupled complex-valued neural networks with mixed delays is examined in this research. A neutral-type coupled complex-valued neural network with mixed delays is more general than that of a traditional neural network, since it considers distributed delays, state delays and coupling delays. In this research, a new adaptive control technique is developed to synchronize neutral-type coupled complex-valued neural networks with mixed delays in finite time. To stabilize the resulting closed-loop system, the Lyapunov stability argument is leveraged to infer the necessary requirements on the control factors. The effectiveness of the proposed method is illustrated through simulation studies.
13

JORGENSEN, THOMAS D., BARRY P. HAYNES, and CHARLOTTE C. F. NORLUND. "PRUNING ARTIFICIAL NEURAL NETWORKS USING NEURAL COMPLEXITY MEASURES." International Journal of Neural Systems 18, no. 05 (October 2008): 389–403. http://dx.doi.org/10.1142/s012906570800166x.

Abstract:
This paper describes a new method for pruning artificial neural networks, using a measure of the neural complexity of the neural network. This measure is used to determine the connections that should be pruned. The measure computes the information-theoretic complexity of a neural network, which is similar to, yet different from, previous research on pruning. The method proposed here shows how overly large and complex networks can be reduced in size, whilst retaining learnt behaviour and fitness. The technique proposed here helps to discover a network topology that matches the complexity of the problem it is meant to solve. This novel pruning technique is tested in a robot control domain, simulating a racecar. It is shown that the proposed pruning method is a significant improvement over the most commonly used pruning method, Magnitude Based Pruning. Furthermore, some of the pruned networks prove to be faster learners than the benchmark network that they originate from. This means that this pruning method can also help to unleash hidden potential in a network, because the learning time decreases substantially for a pruned network, due to the reduction of dimensionality of the network.
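For contrast with the complexity-based method, the Magnitude Based Pruning baseline named in this abstract is easy to sketch (a generic illustration, not the paper's code): remove the connections with the smallest absolute weights.

```python
import numpy as np

# Generic magnitude-based pruning (the baseline the paper compares
# against): zero out the given fraction of smallest-magnitude weights.
def magnitude_prune(weights, fraction):
    pruned = weights.copy()
    k = int(pruned.size * fraction)
    if k > 0:
        threshold = np.sort(np.abs(pruned), axis=None)[k - 1]
        pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

W = np.array([[0.05, -2.0], [0.8, -0.01]])
assert np.count_nonzero(magnitude_prune(W, 0.5)) == 2  # two weights removed
```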
14

Kim Soon, Gan, Chin Kim On, Nordaliela Mohd Rusli, Tan Soo Fun, Rayner Alfred, and Tan Tse Guan. "Comparison of simple feedforward neural network, recurrent neural network and ensemble neural networks in phishing detection." Journal of Physics: Conference Series 1502 (March 2020): 012033. http://dx.doi.org/10.1088/1742-6596/1502/1/012033.

15

Tetko, Igor V. "Neural Network Studies. 4. Introduction to Associative Neural Networks." Journal of Chemical Information and Computer Sciences 42, no. 3 (March 26, 2002): 717–28. http://dx.doi.org/10.1021/ci010379o.

16

Hylton, Todd. "Thermodynamic Neural Network." Entropy 22, no. 3 (February 25, 2020): 256. http://dx.doi.org/10.3390/e22030256.

Abstract:
A thermodynamically motivated neural network model is described that self-organizes to transport charge associated with internal and external potentials while in contact with a thermal reservoir. The model integrates techniques for rapid, large-scale, reversible, conservative equilibration of node states and slow, small-scale, irreversible, dissipative adaptation of the edge states as a means to create multiscale order. All interactions in the network are local and the network structures can be generic and recurrent. Isolated networks show multiscale dynamics, and externally driven networks evolve to efficiently connect external positive and negative potentials. The model integrates concepts of conservation, potentiation, fluctuation, dissipation, adaptation, equilibration and causation to illustrate the thermodynamic evolution of organization in open systems. A key conclusion of the work is that the transport and dissipation of conserved physical quantities drives the self-organization of open thermodynamic systems.
17

Shpinareva, Irina M., Anastasia A. Yakushina, Lyudmila A. Voloshchuk, and Nikolay D. Rudnichenko. "Detection and classification of network attacks using the deep neural network cascade." Herald of Advanced Information Technology 4, no. 3 (October 15, 2021): 244–54. http://dx.doi.org/10.15276/hait.03.2021.4.

Abstract:
This article shows the relevance of developing a cascade of deep neural networks for detecting and classifying network attacks based on an analysis of the practical use of network intrusion detection systems to protect local computer networks. A cascade of deep neural networks consists of two elements. The first network is a hybrid deep neural network that contains convolutional neural network layers and long short-term memory layers to detect attacks. The second network is a CNN convolutional neural network for classifying the most popular classes of network attacks such as Fuzzers, Analysis, Backdoors, DoS, Exploits, Generic, Reconnaissance, Shellcode, and Worms. At the stage of tuning and training the cascade of deep neural networks, the selection of hyperparameters was carried out, which made it possible to improve the quality of the model. Among the available public datasets, one of the current UNSW-NB15 datasets was selected, taking into account modern traffic. For the dataset under consideration, a data preprocessing technology has been developed. The cascade of deep neural networks was trained, tested, and validated on the UNSW-NB15 dataset. The cascade of deep neural networks was tested on real network traffic, which showed its ability to detect and classify attacks in a computer network. The use of a cascade of deep neural networks, consisting of a hybrid neural network CNN + LSTM and a neural network CNN, has improved the accuracy of detecting and classifying attacks in computer networks and reduced the frequency of false alarms in detecting network attacks.
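The two-stage cascade this abstract describes can be sketched as control flow (toy stand-ins only: the rule-based stubs below are hypothetical placeholders for the paper's CNN+LSTM detector and CNN classifier):

```python
# Toy sketch of a two-stage cascade: a binary detector screens each flow,
# and only flows flagged as attacks reach the multi-class classifier.
# Both stub models are hypothetical placeholders, not the paper's networks.
ATTACK_CLASSES = ["Fuzzers", "Analysis", "Backdoors", "DoS", "Exploits",
                  "Generic", "Reconnaissance", "Shellcode", "Worms"]

def detect(flow):
    # Stage 1 (stands in for the hybrid CNN + LSTM): attack vs. benign.
    return flow["bytes"] > 1000

def classify(flow):
    # Stage 2 (stands in for the CNN classifier): which attack class.
    return ATTACK_CLASSES[flow["bytes"] % len(ATTACK_CLASSES)]

def cascade(flow):
    return classify(flow) if detect(flow) else "Normal"

assert cascade({"bytes": 120}) == "Normal"
assert cascade({"bytes": 2003}) in ATTACK_CLASSES
```

The design point the cascade illustrates: the cheap binary stage filters most traffic, so the multi-class stage only runs on suspicious flows.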
18

Gao, Yuan, Laurence T. Yang, Dehua Zheng, Jing Yang, and Yaliang Zhao. "Quantized Tensor Neural Network." ACM/IMS Transactions on Data Science 2, no. 4 (November 30, 2021): 1–18. http://dx.doi.org/10.1145/3491255.

Abstract:
Tensor network as an effective computing framework for efficient processing and analysis of high-dimensional data has been successfully applied in many fields. However, the performance of traditional tensor networks still cannot match the strong fitting ability of neural networks, so some data processing algorithms based on tensor networks cannot achieve the same excellent performance as deep learning models. To further improve the learning ability of tensor network, we propose a quantized tensor neural network in this article (QTNN), which integrates the advantages of neural networks and tensor networks, namely, the powerful learning ability of neural networks and the simplicity of tensor networks. The QTNN model can be further regarded as a generalized multilayer nonlinear tensor network, which can efficiently extract low-dimensional features of the data while maintaining the original structure information. In addition, to more effectively represent the local information of data, we introduce multiple convolution layers in QTNN to extract the local features. We also develop a high-order back-propagation algorithm for training the parameters of QTNN. We conducted classification experiments on multiple representative datasets to further evaluate the performance of proposed models, and the experimental results show that QTNN is simpler and more efficient while compared to the classic deep learning models.
19

Li, Wei, Shaogang Gong, and Xiatian Zhu. "Neural Graph Embedding for Neural Architecture Search." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 4707–14. http://dx.doi.org/10.1609/aaai.v34i04.5903.

Abstract:
Existing neural architecture search (NAS) methods often operate in discrete or continuous spaces directly, which ignores the graphical topology knowledge of neural networks. This leads to suboptimal search performance and efficiency, given the factor that neural networks are essentially directed acyclic graphs (DAG). In this work, we address this limitation by introducing a novel idea of neural graph embedding (NGE). Specifically, we represent the building block (i.e. the cell) of neural networks with a neural DAG, and learn it by leveraging a Graph Convolutional Network to propagate and model the intrinsic topology information of network architectures. This results in a generic neural network representation integrable with different existing NAS frameworks. Extensive experiments show the superiority of NGE over the state-of-the-art methods on image classification and semantic segmentation.
20

Jiang, Yiming, Chenguang Yang, Shi-lu Dai, and Beibei Ren. "Deterministic learning enhanced neural network control of unmanned helicopter." International Journal of Advanced Robotic Systems 13, no. 6 (November 28, 2016): 172988141667111. http://dx.doi.org/10.1177/1729881416671118.

Abstract:
In this article, a neural network–based tracking controller is developed for an unmanned helicopter system with guaranteed global stability in the presence of uncertain system dynamics. Due to the coupling and modeling uncertainties of the helicopter systems, neural network approximation techniques are employed to compensate for the unknown dynamics of each subsystem. In order to extend the semiglobal stability achieved by conventional neural control to global stability, a switching mechanism is also integrated into the control design, such that the resulting neural controller is always valid without any concern on either initial conditions or range of state variables. In addition, deterministic learning is applied to the neural network learning control, such that the adaptive neural networks are able to store the learned knowledge that could be reused to construct a neural network controller with improved control performance. Simulation studies are carried out on a helicopter model to illustrate the effectiveness of the proposed control design.
21

Hamdan, Baida Abdulredha. "Neural Network Principles and its Application." Webology 19, no. 1 (January 20, 2022): 3955–70. http://dx.doi.org/10.14704/web/v19i1/web19261.

Abstract:
Neural networks, also known as artificial neural networks, are a computing technique designed to simulate the human brain as a problem-solving method. Artificial neural networks gain their abilities through training or learning; each training example has an input and an output (the result), and learning forms probability-weighted associations between input and result, which are stored within the network's data structure. Training proceeds by determining the difference between the processed output (usually a prediction) and the target output (the error), after which a series of adjustments is made to improve the result; this process is called supervised learning. Artificial neural networks have proved themselves in applications across a variety of fields due to their capacity to reproduce and simulate nonlinear phenomena: system identification and control (process control, vehicle control, quantum chemistry, trajectory prediction, natural resource management, etc.), as well as face recognition, where they have proved very effective. The neural network has proved to be a very promising technique in many fields due to its accuracy and problem-solving properties.
22

Deeba, Farah, She Kun, Fayaz Ali Dharejo, Hameer Langah, and Hira Memon. "Digital Watermarking Using Deep Neural Network." International Journal of Machine Learning and Computing 10, no. 2 (February 2020): 277–82. http://dx.doi.org/10.18178/ijmlc.2020.10.2.932.

23

Hamid, Sofia, and Mrigana Walia. "Convolution Neural Network Based Image Recognition." International Journal of Science and Research (IJSR) 10, no. 2 (February 27, 2021): 1673–77. https://doi.org/10.21275/sr21225214136.

24

O., Sheeba, Jithin George, Rajin P. K., Nisha Thomas, and Thomas George. "Glaucoma Detection Using Artificial Neural Network." International Journal of Engineering and Technology 6, no. 2 (2014): 158–61. http://dx.doi.org/10.7763/ijet.2014.v6.687.

25

Mahmood, Suzan A., and Loay E. George. "Speaker Identification Using Backpropagation Neural Network." Journal of Zankoy Sulaimani - Part A 11, no. 1 (September 23, 2007): 61–66. http://dx.doi.org/10.17656/jzs.10181.

26

Chen, Chung-Hsing, and Ko-Wei Huang. "Document Classification Using Lightweight Neural Network." 網際網路技術學刊 24, no. 7 (December 2023): 1505–11. http://dx.doi.org/10.53106/160792642023122407012.

Abstract:
In recent years, OCR data has been used for learning and analyzing document classification. In addition, some neural networks have used image recognition for training, such as the network published by the ImageNet Large Scale Visual Recognition Challenge for document image training, AlexNet, GoogleNet, and MobileNet. Document image classification is important in data extraction processes and often requires significant computing power. Furthermore, it is difficult to implement image classification using general computers without a graphics processing unit (GPU). Therefore, this study proposes a lightweight neural network application that can perform document image classification on general computers or the Internet of Things (IoT) without a GPU. Plustek Inc. provided 3065 receipts belonging to 58 categories. Three datasets were considered as test samples while the remaining were considered as training samples to train the network to obtain a classifier. After the experiments, the classifier achieved 98.26% accuracy, and only 3 out of 174 samples showed errors.
27

Herold, Christopher D., Robert L. Fitzgerald, David A. Herold, and Taiwei Lu. "Neural Network." Laboratory Automation News 1, no. 3 (July 1996): 16–17. http://dx.doi.org/10.1177/221106829600100304.

Abstract:
A hybrid neural network (HNN) developed by Physical Optics Corporation (Torrance, CA) is helping a team of scientists with the San Diego Veterans Administration Medical Center and University of California, San Diego Pathology Department automate the detection and identification of Tuberculosis and other mycobacterial infections.
28

Kumar, G. Prem, and P. Venkataram. "Network restoration using recurrent neural networks." International Journal of Network Management 8, no. 5 (September 1998): 264–73. http://dx.doi.org/10.1002/(sici)1099-1190(199809/10)8:5<264::aid-nem298>3.0.co;2-o.

29

Ouyang, Xuming, and Cunguang Feng. "Interpretable Neural Network Construction: From Neural Network to Interpretable Neural Tree." Journal of Physics: Conference Series 1550 (May 2020): 032154. http://dx.doi.org/10.1088/1742-6596/1550/3/032154.

30

Sineglazov, Victor, and Petro Chynnyk. "Quantum Convolution Neural Network." Electronics and Control Systems 2, no. 76 (June 23, 2023): 40–45. http://dx.doi.org/10.18372/1990-5548.76.17667.

Abstract:
In this work, quantum convolutional neural networks are considered in the task of recognizing handwritten digits. A proprietary quantum scheme for the convolutional layer of a quantum convolutional neural network is proposed. A proprietary quantum scheme for the pooling layer of a quantum convolutional neural network is proposed. The results of learning quantum convolutional neural networks are analyzed. The built models were compared and the best one was selected based on the accuracy, recall, precision and f1-score metrics. A comparative analysis was made with the classic convolutional neural network based on accuracy, recall, precision and f1-score metrics. The object of the study is the task of recognizing numbers. The subject of research is convolutional neural network, quantum convolutional neural network. The result of this work can be applied in the further research of quantum computing in the tasks of artificial intelligence.
31

Rajesh, CVS. "Basics and Features of Artificial Neural Networks." International Journal of Trend in Scientific Research and Development 2, no. 2 (February 23, 2018): 1065–69. https://doi.org/10.31142/ijtsrd9578.

Abstract:
Computing models for pattern recognition are inspired by the performance and structure of biological neural networks. A network consists of computing units which can display the features of the biological network. In this paper, the features of neural networks that motivate the study of neural computing are discussed, the differences in processing by the brain and a computer are presented, and the historical development of neural network principles, artificial neural network (ANN) terminology, neuron models and topology are discussed.
32

Begum, Afsana, Md Masiur Rahman, and Sohana Jahan. "Medical diagnosis using artificial neural networks." Mathematics in Applied Sciences and Engineering 5, no. 2 (June 4, 2024): 149–64. http://dx.doi.org/10.5206/mase/17138.

Abstract:
Medical diagnosis using Artificial Neural Networks (ANN) and computer-aided diagnosis with deep learning is currently a very active research area in medical science. In recent years, for medical diagnosis, neural network models are broadly considered since they are ideal for recognizing different kinds of diseases including autism, cancer, tumor lung infection, etc. It is evident that early diagnosis of any disease is vital for successful treatment and improved survival rates. In this research, five neural networks, Multilayer neural network (MLNN), Probabilistic neural network (PNN), Learning vector quantization neural network (LVQNN), Generalized regression neural network (GRNN), and Radial basis function neural network (RBFNN) have been explored. These networks are applied to several benchmarking data collected from the University of California Irvine (UCI) Machine Learning Repository. Results from numerical experiments indicate that each network excels at recognizing specific physical issues. In the majority of cases, both the Learning Vector Quantization Neural Network and the Probabilistic Neural Network demonstrate superior performance compared to the other networks.
33

Zhang, Yongqiang, Haijie Pang, Jinlong Ma, Guilei Ma, Xiaoming Zhang, and Menghua Man. "Research on Anti-Interference Performance of Spiking Neural Network Under Network Connection Damage." Brain Sciences 15, no. 3 (February 20, 2025): 217. https://doi.org/10.3390/brainsci15030217.

Abstract:
Background: With the development of artificial intelligence, memristors have become an ideal choice to optimize new neural network architectures and improve computing efficiency and energy efficiency due to their combination of storage and computing power. In this context, spiking neural networks show the ability to resist Gaussian noise, spike interference, and AC electric field interference by adjusting synaptic plasticity. The anti-interference ability of spiking neural networks has become an important direction of electromagnetic protection bionics research. Methods: Therefore, this research constructs two types of spiking neural network models with the LIF model as nodes: VGG-SNN and FCNN-SNN, and combines a pruning algorithm to simulate network connection damage during the training process. By comparing and analyzing the millimeter wave radar human motion dataset and MNIST dataset with traditional artificial neural networks, the anti-interference performance of spiking neural networks and traditional artificial neural networks under the same probability of edge loss was deeply explored. Results: The experimental results show that on the millimeter wave radar human motion dataset, the accuracy of the spiking neural network decreased by 5.83% at a sparsity of 30%, while the accuracy of the artificial neural network decreased by 18.71%. On the MNIST dataset, the accuracy of the spiking neural network decreased by 3.91% at a sparsity of 30%, while the artificial neural network decreased by 10.13%. Conclusions: Therefore, under the same network connection damage conditions, spiking neural networks exhibit unique anti-interference performance advantages. The performance of spiking neural networks in information processing and pattern recognition is relatively more stable and outstanding. Further analysis reveals that factors such as network structure, encoding method, and learning algorithm have a significant impact on the anti-interference performance of both.
34

Gao, Wei. "New Evolutionary Neural Network Based on Continuous Ant Colony Optimization." Applied Mechanics and Materials 58-60 (June 2011): 1773–78. http://dx.doi.org/10.4028/www.scientific.net/amm.58-60.1773.

Abstract:
The evolutionary neural network can be generated combining the evolutionary optimization algorithm and neural network. Based on analysis of shortcomings of previously proposed evolutionary neural networks, combining the continuous ant colony optimization proposed by author and BP neural network, a new evolutionary neural network whose architecture and connection weights evolve simultaneously is proposed. At last, through the typical XOR problem, the new evolutionary neural network is compared and analyzed with BP neural network and traditional evolutionary neural networks based on genetic algorithm and evolutionary programming. The computing results show that the precision and efficiency of the new neural network are all better.
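The XOR benchmark used in this abstract is small enough to show explicitly (the weights below are hand-set purely to illustrate the problem; an evolutionary search like the paper's would discover such weights rather than fix them):

```python
import numpy as np

# A fixed 2-2-1 threshold network computing XOR. The weights are set by
# hand only to illustrate the benchmark; the paper's point is that
# architecture and weights are evolved, not hand-designed.
def step(v):
    return (v > 0).astype(float)

W1 = np.array([[1.0, 1.0], [1.0, 1.0]])
b1 = np.array([-0.5, -1.5])          # hidden units act as OR and AND
W2 = np.array([1.0, -1.0])
b2 = -0.5                            # output: OR and not AND => XOR

def xor_net(x):
    h = step(x @ W1 + b1)
    return float(step(h @ W2 + b2))

for a in (0.0, 1.0):
    for b in (0.0, 1.0):
        assert xor_net(np.array([a, b])) == float(int(a) ^ int(b))
```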
35

ABDI, H. "A NEURAL NETWORK PRIMER." Journal of Biological Systems 02, no. 03 (September 1994): 247–81. http://dx.doi.org/10.1142/s0218339094000179.

Abstract:
Neural networks are composed of basic units somewhat analogous to neurons. These units are linked to each other by connections whose strength is modifiable as a result of a learning process or algorithm. Each of these units integrates independently (in parallel) the information provided by its synapses in order to evaluate its state of activation. The unit response is then a linear or nonlinear function of its activation. Linear algebra concepts are used, in general, to analyze linear units, with eigenvectors and eigenvalues being the core concepts involved. This analysis makes clear the strong similarity between linear neural networks and the general linear model developed by statisticians. The linear models presented here are the perceptron and the linear associator. The behavior of nonlinear networks can be described within the framework of optimization and approximation techniques with dynamical systems (e.g., like those used to model spin glasses). One of the main notions used with nonlinear unit networks is the notion of attractor. When the task of the network is to associate a response with some specific input patterns, the most popular nonlinear technique consists of using hidden layers of neurons trained with back-propagation of error. The nonlinear models presented are the Hopfield network, the Boltzmann machine, the back-propagation network and the radial basis function network.
36

Gong, Xiao Lu, Zhi Jian Hu, Meng Lin Zhang, and He Wang. "Wind Power Forecasting Using Wavelet Decomposition and Elman Neural Network." Advanced Materials Research 608-609 (December 2012): 628–32. http://dx.doi.org/10.4028/www.scientific.net/amr.608-609.628.

Full text
Abstract (summary):
The relevant data sequences provided by numerical weather prediction are decomposed into different frequency bands using wavelet decomposition for wind power forecasting. Elman neural network models are established for each frequency band, and the outputs of the different networks are then combined to obtain the final prediction result. For comparison, an Elman neural network and a BP neural network are used to predict wind power directly. Several error indicators are given to evaluate the prediction results of the three methods. The simulation results show that the Elman neural network achieves good results and that prediction accuracy can be further improved by applying wavelet decomposition.
37

Khan, Zakiya Manzoor, et al. "Network Intrusion Detection Using Autoencode Neural Network." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 10 (November 2, 2023): 1678–88. http://dx.doi.org/10.17762/ijritcc.v11i10.8739.

Full text
Abstract (summary):
In today's interconnected digital landscape, safeguarding computer networks against unauthorized access and cyber threats is of paramount importance, and NIDS play a crucial role in identifying and mitigating potential security breaches. This research paper explores the application of autoencoder neural networks, a subset of deep learning techniques, to network intrusion detection. Autoencoder neural networks are known for their ability to learn and represent data in a compressed, low-dimensional form. This study investigates their potential for modeling network traffic patterns and identifying anomalous activities. By training autoencoder networks on both normal and malicious network traffic data, we aim to create effective intrusion detection models that can distinguish between benign and malicious network behavior. The paper provides an in-depth analysis of the architecture and training methodologies of autoencoder neural networks for intrusion detection. It also explores various data preprocessing techniques and feature engineering approaches to enhance the model's performance. Additionally, the research evaluates the robustness and scalability of autoencoder-based NIDS in real-world network environments. Furthermore, ethical considerations in network intrusion detection, including privacy concerns and false positive rates, are discussed, addressing the need for a balanced approach that ensures network security while respecting user privacy and minimizing disruptions. The approach compresses the majority samples and increases the minority sample count in difficult samples so that the IDS can achieve greater classification accuracy.
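The reconstruction-error idea behind autoencoder-based detection can be sketched as follows. This is a hypothetical toy, not the paper's model: a tiny linear autoencoder (2 inputs, 1 code unit, 2 outputs) trained on synthetic 2-D "normal traffic"; a real NIDS would use many features and a deeper network.

```python
# Train an autoencoder on normal data only, then flag inputs whose
# reconstruction error is large as anomalies.

normal = [(t, 2.0 * t) for t in [i / 10.0 - 1.0 for i in range(21)]]

w_enc = [0.5, 0.5]   # encoder weights (2 inputs -> 1 code unit)
w_dec = [0.5, 0.5]   # decoder weights (1 code unit -> 2 outputs)
lr = 0.05

def reconstruct(x):
    z = w_enc[0] * x[0] + w_enc[1] * x[1]   # compressed 1-D code
    return (w_dec[0] * z, w_dec[1] * z)

def recon_error(x):
    xh = reconstruct(x)
    return (xh[0] - x[0]) ** 2 + (xh[1] - x[1]) ** 2

for _ in range(500):                         # plain per-sample gradient descent
    for x in normal:
        z = w_enc[0] * x[0] + w_enc[1] * x[1]
        xh = (w_dec[0] * z, w_dec[1] * z)
        e = (xh[0] - x[0], xh[1] - x[1])
        g = e[0] * w_dec[0] + e[1] * w_dec[1]   # backprop through decoder
        w_dec = [w_dec[0] - lr * e[0] * z, w_dec[1] - lr * e[1] * z]
        w_enc = [w_enc[0] - lr * g * x[0], w_enc[1] - lr * g * x[1]]

# A point on the learned manifold reconstructs well; one off it does not.
err_normal = recon_error((0.5, 1.0))
err_anomaly = recon_error((1.0, -2.0))
print(err_normal < err_anomaly)
```

Choosing the error threshold separating "benign" from "anomalous" is exactly where the false-positive-rate concerns raised in the abstract come in.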
38

Jung, Suk-Hwan, and Yong-Joo Chung. "Sound event detection using deep neural networks." TELKOMNIKA Telecommunication, Computing, Electronics and Control 18, no. 5 (November 18, 2020): 2587–96. https://doi.org/10.12928/TELKOMNIKA.v18i5.14246.

Full text
Abstract (summary):
We applied various architectures of deep neural networks for sound event detection and compared their performance using two different datasets. Feed forward neural network (FNN), convolutional neural network (CNN), recurrent neural network (RNN) and convolutional recurrent neural network (CRNN) were implemented using hyper-parameters optimized for each architecture and dataset. The results show that the performance of deep neural networks varied significantly depending on the learning rate, which can be optimized by conducting a series of experiments on the validation data over predetermined ranges. Among the implemented architectures, the CRNN performed best under all testing conditions, followed by CNN. Although RNN was effective in tracking the time-correlation information in audio signals, it exhibited inferior performance compared to the CNN and the CRNN. Accordingly, it is necessary to develop more optimization strategies for implementing RNN in sound event detection.
39

Miao, Lu, Wei Fan, Yu Liu, Yingjie Qin, Deyang Chen, and Jiayan Cui. "Optimization of PSO-BP neural network for short-term wind power prediction." International Journal of Low-Carbon Technologies 19 (2024): 2687–92. http://dx.doi.org/10.1093/ijlct/ctae234.

Full text
Abstract (summary):
This paper uses a back propagation (BP) neural network to predict short-term wind power. Since the initial weights and thresholds of BP neural networks significantly impact their performance, we use an optimized particle swarm optimization (PSO) algorithm to obtain these critical parameters. Specifically, we optimize the PSO itself to make it easier to find good parameters. The experimental results show that the plain BP neural network's mean relative error (MRE) is 11.91%, 15.18%, and 8.56%, respectively, while the MRE of the optimized BP neural network is 5.09%, 7.21%, and 4.44%.
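The general PSO mechanism referred to above can be sketched like this. The swarm coefficients and the one-unit linear "network" (y = w*x + b fitted to y = 2x + 1) are illustrative assumptions, not the paper's optimized variant; the paper applies the same principle to the initial weights and thresholds of a full BP network.

```python
import random

random.seed(42)
data = [(x / 10.0, 2.0 * x / 10.0 + 1.0) for x in range(-10, 11)]

def mse(p):
    w, b = p
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

n_particles, iters = 20, 200
pos = [[random.uniform(-5, 5), random.uniform(-5, 5)] for _ in range(n_particles)]
vel = [[0.0, 0.0] for _ in range(n_particles)]
pbest = [p[:] for p in pos]          # each particle's best position so far
gbest = min(pbest, key=mse)          # swarm-wide best position

for _ in range(iters):
    for i in range(n_particles):
        for d in range(2):
            r1, r2 = random.random(), random.random()
            # inertia + cognitive pull (own best) + social pull (swarm best)
            vel[i][d] = (0.7 * vel[i][d]
                         + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                         + 1.5 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if mse(pos[i]) < mse(pbest[i]):
            pbest[i] = pos[i][:]
            if mse(pbest[i]) < mse(gbest):
                gbest = pbest[i][:]

print(gbest)  # approaches (2.0, 1.0), the true weight and bias
```

In the paper's setting, `gbest` would be a full vector of BP initial weights and thresholds, and `mse` the network's training error.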
40

Marton, Sascha, Stefan Lüdtke, and Christian Bartelt. "Explanations for Neural Networks by Neural Networks." Applied Sciences 12, no. 3 (January 18, 2022): 980. http://dx.doi.org/10.3390/app12030980.

Full text
Abstract (summary):
Understanding the function learned by a neural network is crucial in many domains, e.g., to detect a model’s adaption to concept drift in online learning. Existing global surrogate model approaches generate explanations by maximizing the fidelity between the neural network and a surrogate model on a per-sample basis, which can be very time-consuming. Therefore, these approaches are not applicable in scenarios where timely or frequent explanations are required. In this paper, we introduce a real-time approach for generating a symbolic representation of the function learned by a neural network. Our idea is to generate explanations via another neural network (called the Interpretation Network, or I-Net), which maps network parameters to a symbolic representation of the network function. We show that the training of an I-Net for a family of functions can be performed up-front and subsequent generation of an explanation only requires querying the I-Net once, which is computationally very efficient and does not require training data. We empirically evaluate our approach for the case of low-order polynomials as explanations, and show that it achieves competitive results for various data and function complexities. To the best of our knowledge, this is the first approach that attempts to learn a mapping from neural networks to symbolic representations.
41

Setiono, Rudy. "Feedforward Neural Network Construction Using Cross Validation." Neural Computation 13, no. 12 (December 1, 2001): 2865–77. http://dx.doi.org/10.1162/089976601317098565.

Full text
Abstract (summary):
This article presents an algorithm that constructs feedforward neural networks with a single hidden layer for pattern classification. The algorithm starts with a small number of hidden units in the network and adds more hidden units as needed to improve the network's predictive accuracy. To determine when to stop adding new hidden units, the algorithm makes use of a subset of the available training samples for cross validation. New hidden units are added to the network only if they improve the classification accuracy of the network on the training samples and on the cross-validation samples. Extensive experimental results show that the algorithm is effective in obtaining networks with predictive accuracy rates that are better than those obtained by state-of-the-art decision tree methods.
42

Bhujade, Rakesh Kumar, and Stuti Asthana. "An Novel Approach on the Number of Hidden Nodes Optimizing in Artificial Neural Network." International Journal of Applied Engineering & Technology 4, no. 2 (September 30, 2022): 106–9. https://doi.org/10.5281/zenodo.7413107.

Full text
Abstract (summary):
Forecasting, classification, and data analysis can all gain from improved pattern recognition results. Neural networks are effective and adaptable for pattern recognition and a variety of other real-world problems, such as signal processing and classification. To provide correct results, a neural network needs suitable data pre-processing, architecture selection, and training; even so, its performance still depends on the size of the network. Fundamental to the field of artificial neural networks is determining the number of hidden neurons. A random selection of hidden neurons may result in either underfitting or overfitting; overfitting happens when an abnormally close fit between the network and the training data hinders the network's ability to generalize to test data. After training the ANN with real-world data, the proposed method calculates an approximately optimal number of hidden nodes. The number of hidden nodes is determined depending on the similarity of the input data, as per the specified method.
43

Ding, Shuo, Xiao Heng Chang, and Qing Hui Wu. "Application of Probabilistic Neural Network in Pattern Classification." Applied Mechanics and Materials 441 (December 2013): 738–41. http://dx.doi.org/10.4028/www.scientific.net/amm.441.738.

Full text
Abstract (summary):
The network model of the probabilistic neural network and its method of pattern classification and discrimination are first introduced in this paper. A probabilistic neural network and three commonly used back propagation neural networks are then established in MATLAB 7.0. The pattern classification of dots on a two-dimensional plane is taken as an example: the probabilistic neural network and improved back propagation neural networks are used to classify these dots, and their classification results are compared with each other. The simulation results show that, compared with back propagation neural networks, the probabilistic neural network has simpler learning rules and faster training speed and needs fewer training samples; the pattern classification method based on the probabilistic neural network is very effective, and it is superior to the one based on back propagation neural networks in classification speed, accuracy and generalization ability.
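A probabilistic neural network of the kind compared in the paper can be sketched as follows: one Gaussian kernel per training sample in the pattern layer, class scores formed by the summation layer, and the decision taken by the highest score. The two-cluster dot data and the smoothing parameter are made-up stand-ins for the paper's MATLAB example.

```python
import math

SIGMA = 0.5  # smoothing parameter of the pattern layer

# Training dots on a 2-D plane, grouped by class.
train = {
    "A": [(0.0, 0.0), (0.2, 0.1), (-0.1, 0.3)],
    "B": [(2.0, 2.0), (2.2, 1.9), (1.8, 2.1)],
}

def class_score(x, samples):
    # Summation layer: average of Gaussian kernels centred on the samples.
    return sum(
        math.exp(-((x[0] - s[0]) ** 2 + (x[1] - s[1]) ** 2) / (2 * SIGMA ** 2))
        for s in samples
    ) / len(samples)

def classify(x):
    # Decision layer: pick the class with the highest density estimate.
    return max(train, key=lambda c: class_score(x, train[c]))

print(classify((0.1, 0.2)))  # prints A
print(classify((1.9, 2.0)))  # prints B
```

The "training" here is just storing the samples, which is why PNNs train quickly and need few samples, at the cost of evaluating one kernel per stored sample at classification time.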
44

Geraldine Bessie Amali, D., and M. Dinakaran. "A Review of Heuristic Global Optimization Based Artificial Neural Network Training Approaches." IAES International Journal of Artificial Intelligence (IJ-AI) 6, no. 1 (March 1, 2017): 26–32. https://doi.org/10.5281/zenodo.4108225.

Full text
Abstract (summary):
Artificial neural networks have earned popularity in recent years because of their ability to approximate nonlinear functions. Training a neural network involves minimizing the mean square error between the target and the network output. The error surface is nonconvex and highly multimodal, and finding the minimum of a multimodal function is an NP-complete problem that cannot, in general, be solved exactly. Thus the application to neural network training of heuristic global optimization algorithms that compute a good approximate global minimum is of interest. This paper reviews the various heuristic global optimization algorithms used for training feedforward and recurrent neural networks. The training algorithms are compared in terms of learning rate, convergence speed and the accuracy of the output produced by the neural network. The paper concludes by suggesting directions for novel ANN training algorithms based on recent advances in global optimization.
45

Labinsky, Alexander. "NEURAL NETWORK APPROACH TO COGNITIVE MODELING." MONITORING AND EXPERTISE IN SAFETY SYSTEM 2024, no. 3 (October 22, 2024): 38–44. http://dx.doi.org/10.61260/2304-0130-2024-3-38-44.

Full text
Abstract (summary):
Some features of cognitive modeling are presented, including the prerequisites for a cognitive approach to solving complex problems. Cognitive modeling involves the use of various artificial neural networks, including convolutional neural networks. A classification of artificial neural networks according to various characteristics is given, and the features of self-organizing neural networks and of networks using deep learning methods are considered. Three topics are examined in detail: an artificial neural network that is a three-layer unidirectional feedforward network; the interface of a computer program that uses this network to approximate functions; and the solution of an image recognition problem using an artificial convolutional neural network in which the network parameters are adjusted for each recognizable image fragment in order to adaptively filter the image. Analyzing images in video surveillance systems in order to detect fires allows them to be detected at an early stage and thus prevents fire propagation.
46

Sultana, Zakia, Md Ashikur Rahman Khan, and Nusrat Jahan. "Early Breast Cancer Detection Utilizing Artificial Neural Network." WSEAS TRANSACTIONS ON BIOLOGY AND BIOMEDICINE 18 (March 18, 2021): 32–42. http://dx.doi.org/10.37394/23208.2021.18.4.

Full text
Abstract (summary):
Breast cancer is one of the most dangerous cancer diseases for women worldwide. A computer-aided diagnosis system is very helpful for radiologists, diagnosing microcalcification patterns earlier and faster than typical screening techniques. Most breast cancer cells eventually form a lump or mass called a tumor. Some tumors are cancerous and some are not: cancerous tumors are called malignant and non-cancerous tumors are called benign. Benign tumors are not dangerous to health, but unchecked malignant tumors can spread to other organs of the body, so early detection of benign and malignant tumors is important for reducing deaths from breast cancer. In this research study, different neural networks, such as the Multilayer Perceptron (MLP) Neural Network, Jordan/Elman Neural Network, Modular Neural Network (MNN), Generalized Feed-Forward Neural Network (GFFNN), Self-Organizing Feature Map (SOFM) Neural Network, Support Vector Machine (SVM), Probabilistic Neural Network (PNN) and Recurrent Neural Network (RNN), are used to classify breast cancer tumors, and their results are compared to find the best network for detecting breast cancer. The networks are tested on the Wisconsin breast cancer (WBC) database. The comparison showed that the Probabilistic Neural Network gives the best detection results.
47

Levin, Maxim, Anastasia Sevostyanova, Stanislav Nagornov, Irina Kovalenko, and Ekaterina Levina. "METHOD OF CONSTRUCTING A NEURAL NETWORK BASED ON BIOMATERIALS." SCIENCE IN THE CENTRAL RUSSIA, no. 6 (December 27, 2024): 105–13. https://doi.org/10.35887/2305-2538-2024-6-105-113.

Full text
Abstract (summary):
The modern approach to building neural networks continues to develop by improving the mathematical model of neuron functioning, which leads to new differences from real biological analogues, since a highly simplified model of the basic element (the neuron) is used in modern neural networks. The purpose of this work is to calculate the information capacity of a neural network built on biological neurons and to provide evidence of the prospects of methods for building neural networks from biological neurons. A mathematical description of the main structural elements of a biological neural network is given: neurons, axons, dendrites and neurotransmitters, as the key components of data exchange between neurons. It is shown that biological neural networks, unlike artificial ones, require up to 6000 times less energy for computing work and have lower production costs. Artificial neural networks are an expensive technology due to the high costs of training and development, despite the fact that many manufacturers mass-produce integrated circuits. It was found that using a biochip these costs can be reduced many times over, because living cells learn much faster and exhibit high neuroplasticity. However, several problems in creating a neural network based on biological neurons were noted: maintaining the vitality of the biological neural network, synthesizing cells for the system, and integrating the biomaterials with a processor. The information capacity of a biological neural network was calculated and compared with different types of artificial networks (a unidirectional sigmoid neural network, a radial neural network, recurrent neural networks, recurrent networks based on a perceptron, self-organizing neural networks based on competition, and fuzzy neural networks). The results of experimental calculations showed that a neural network built on biomaterials has a 92% higher information capacity than the computer model, which indicates that the goals set in the work were achieved.
48

Kalinin, Maxim, Vasiliy Krundyshev, and Evgeny Zubkov. "Estimation of applicability of modern neural network methods for preventing cyberthreats to self-organizing network infrastructures of digital economy platforms." SHS Web of Conferences 44 (2018): 00044. http://dx.doi.org/10.1051/shsconf/20184400044.

Full text
Abstract (summary):
The problems of applying neural network methods to preventing cyberthreats in flexible self-organizing network infrastructures of digital economy platforms are considered: vehicular ad hoc networks, wireless sensor networks, industrial IoT, "smart buildings" and "smart cities". The applicability of the classic perceptron neural network, recurrent, deep and LSTM neural networks, and neural network ensembles is estimated under the restrictive conditions of fast training and big data processing. The use of neural networks with a complex architecture (recurrent and LSTM neural networks) is experimentally justified for building an intrusion detection system for self-organizing network infrastructures.
49

Azlah, Muhammad Azfar Firdaus, Lee Suan Chua, Fakhrul Razan Rahmad, Farah Izana Abdullah, and Sharifah Rafidah Wan Alwi. "Review on Techniques for Plant Leaf Classification and Recognition." Computers 8, no. 4 (October 21, 2019): 77. http://dx.doi.org/10.3390/computers8040077.

Full text
Abstract (summary):
Plant systematics can be classified and recognized based on the reproductive system (flowers) and leaf morphology. Neural networks are among the most popular machine learning algorithms for plant leaf classification. The commonly used techniques are the artificial neural network (ANN), probabilistic neural network (PNN), convolutional neural network (CNN), k-nearest neighbor (KNN) and support vector machine (SVM); some studies even combined techniques for accuracy improvement. The use of varying preprocessing techniques and characteristic parameters in feature extraction appeared to improve the performance of plant leaf classification. The findings of previous studies are critically compared in terms of their accuracy based on the applied techniques. This paper aims to review and analyze the implementation and performance of various methodologies for plant classification. Each technique has its advantages and limitations in leaf pattern recognition. The quality of leaf images plays an important role, and therefore a reliable source of leaf database must be used to establish the machine learning algorithm prior to leaf recognition and validation.
50

Kurniawati, Ika. "Deep Learning Model Based on Particle Swarm Optimization for Buzzer Detection." Journal of Informatics Information System Software Engineering and Applications (INISTA) 7, no. 1 (December 3, 2024): 22–32. https://doi.org/10.20895/inista.v7i1.1622.

Full text
Abstract (summary):
Along with the development of the internet, buzzers have become increasingly widespread on social platforms, especially on Twitter. Buzzers play an important role in spreading misinformation, manipulating public opinion, and harassing and intimidating social media users. Therefore, an effective detection algorithm is needed to detect buzzer accounts, which endanger social networks by undermining neutrality. In this research, we propose a Deep Neural Network model to detect buzzer accounts on Twitter. We conducted experiments on 1000 datasets using a PSO-based Deep Neural Network model and an AdaBoost-based Deep Neural Network to obtain the best model for detecting buzzer accounts. The results show that the PSO-based Deep Neural Network performs best, with 98.90% accuracy, compared with the AdaBoost-based Deep Neural Network (95.30%) or a network without feature weighting and boosting algorithms (46.60%). This clearly shows the superiority of the proposed method. These results are expected to help maintain neutral information on social media and minimize noise in the data that will be used for sentiment analysis research.