Follow this link to see other types of publications on the topic: Artificial neural networks.

Journal articles on the topic "Artificial neural networks"

Cite a source in APA, MLA, Chicago, Harvard, and many other styles


See the top 50 journal articles for research on the topic "Artificial neural networks".

Next to each source in the reference list there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication as a .pdf and read the abstract (summary) of the work online, if it is included in the metadata.

Browse journal articles from many scientific fields and compile a correct bibliography.

1

N, Vikram. "Artificial Neural Networks." International Journal of Research Publication and Reviews 4, no. 4 (April 23, 2023): 4308–9. http://dx.doi.org/10.55248/gengpi.4.423.37858.

2

Yashchenko, V. O. "Artificial brain. Biological and artificial neural networks, advantages, disadvantages, and prospects for development." Mathematical machines and systems 2 (2023): 3–17. http://dx.doi.org/10.34121/1028-9763-2023-2-3-17.

Abstract:
The article analyzes the problem of developing artificial neural networks within the framework of creating an artificial brain. The structure and functions of the biological brain are considered. The brain performs many functions, such as controlling the organism, coordinating movements, processing information, memory, thinking, attention, and regulating emotional states, and consists of billions of neurons interconnected by a multitude of connections in a biological neural network. The structure and functions of biological neural networks are discussed, and their advantages and disadvantages relative to artificial neural networks are described in detail. Biological neural networks solve various complex tasks in real time that are still inaccessible to artificial networks, such as simultaneous perception of information from different sources, including vision, hearing, smell, taste, and touch, and recognition and analysis of signals from the environment with simultaneous decision-making in known and uncertain situations. Despite all the advantages of biological neural networks, artificial intelligence continues to progress rapidly and gradually gain ground on the biological brain. It is assumed that in the future artificial neural networks will be able to approach the capabilities of the human brain and even surpass them. Neural networks of the human brain are compared with artificial neural networks. Deep neural networks, their training, and their use in various applications are described, and their advantages and disadvantages are discussed in detail. Possible ways to develop this direction further are analyzed. The Human Brain Project, aimed at creating a computer model that imitates the functions of the human brain, and the advanced artificial intelligence project ChatGPT are briefly considered. To develop an artificial brain, a new type of neural network is proposed: neural-like growing networks, whose structure and functions are similar to natural biological networks. A simplified scheme of the structure of an artificial brain based on a neural-like growing network is presented in the paper.
3

Parks, Allen D. "Characterizing Computation in Artificial Neural Networks by their Diclique Covers and Forman-Ricci Curvatures." European Journal of Engineering Research and Science 5, no. 2 (February 13, 2020): 171–77. http://dx.doi.org/10.24018/ejers.2020.5.2.1689.

Abstract:
The relationships between the structural topology of artificial neural networks, their computational flow, and their performance are not well understood. Consequently, a unifying mathematical framework that describes computational performance in terms of underlying structure does not exist. This paper makes a modest contribution to understanding the structure-computational flow relationship in artificial neural networks from the perspective of the dicliques that cover the structure of an artificial neural network and the Forman-Ricci curvature of an artificial neural network’s connections. Special diclique cover digraph representations of artificial neural networks useful for network analysis are introduced, and it is shown that such covers generate semigroups that provide algebraic representations of neural network connectivity.
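For intuition, the Forman-Ricci curvature mentioned above reduces, for an unweighted graph with edges only, to the combinatorial form F(u, v) = 4 - deg(u) - deg(v); the paper's directed, weighted treatment is richer than this sketch, and the 2-2-1 feed-forward graph below is hypothetical:

```python
def forman_ricci(adj: dict, u: str, v: str) -> int:
    """Combinatorial Forman-Ricci curvature of the edge (u, v) in an
    unweighted, undirected graph: F = 4 - deg(u) - deg(v)."""
    return 4 - len(adj[u]) - len(adj[v])

# Hypothetical 2-2-1 feed-forward network viewed as an undirected graph.
adj = {
    "x1": ["h1", "h2"], "x2": ["h1", "h2"],
    "h1": ["x1", "x2", "y"], "h2": ["x1", "x2", "y"],
    "y": ["h1", "h2"],
}
print(forman_ricci(adj, "x1", "h1"))  # → -1  (4 - 2 - 3)
```

Negative curvature concentrates on edges between high-degree nodes, which is one way such measures localize the "busy" connections of a network.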
4

Parks, Allen D. "Characterizing Computation in Artificial Neural Networks by their Diclique Covers and Forman-Ricci Curvatures." European Journal of Engineering and Technology Research 5, no. 2 (February 13, 2020): 171–77. http://dx.doi.org/10.24018/ejeng.2020.5.2.1689.

Abstract:
The relationships between the structural topology of artificial neural networks, their computational flow, and their performance are not well understood. Consequently, a unifying mathematical framework that describes computational performance in terms of underlying structure does not exist. This paper makes a modest contribution to understanding the structure-computational flow relationship in artificial neural networks from the perspective of the dicliques that cover the structure of an artificial neural network and the Forman-Ricci curvature of an artificial neural network’s connections. Special diclique cover digraph representations of artificial neural networks useful for network analysis are introduced, and it is shown that such covers generate semigroups that provide algebraic representations of neural network connectivity.
5

Partridge, Derek, Sarah Rae, and Wen Jia Wang. "Artificial Neural Networks." Journal of the Royal Society of Medicine 92, no. 7 (July 1999): 385. http://dx.doi.org/10.1177/014107689909200723.

6

Moore, K. L. "Artificial neural networks." IEEE Potentials 11, no. 1 (February 1992): 23–28. http://dx.doi.org/10.1109/45.127697.

7

Dalton, J., and A. Deshmane. "Artificial neural networks." IEEE Potentials 10, no. 2 (April 1991): 33–36. http://dx.doi.org/10.1109/45.84097.

8

Yoon, Youngohc, and Lynn Peterson. "Artificial neural networks." ACM SIGMIS Database: the DATABASE for Advances in Information Systems 23, no. 1 (March 1992): 55–57. http://dx.doi.org/10.1145/134347.134362.

9

MAKHOUL, JOHN. "Artificial Neural Networks." INVESTIGATIVE RADIOLOGY 25, no. 6 (June 1990): 748–50. http://dx.doi.org/10.1097/00004424-199006000-00027.

10

Watt, R. C., E. S. Maslana, M. J. Navabi, and K. C. Mylrea. "ARTIFICIAL NEURAL NETWORKS." Anesthesiology 77, Supplement (September 1992): A506. http://dx.doi.org/10.1097/00000542-199209001-00506.

11

Allinson, N. M. "Artificial Neural Networks." Electronics & Communications Engineering Journal 2, no. 6 (1990): 249. http://dx.doi.org/10.1049/ecej:19900051.

12

Drew, Philip J., and John R. T. Monson. "Artificial neural networks." Surgery 127, no. 1 (January 2000): 3–11. http://dx.doi.org/10.1067/msy.2000.102173.

13

McGuire, Tim. "Artificial neural networks." Computer Audit Update 1997, no. 7 (July 1997): 25–29. http://dx.doi.org/10.1016/s0960-2593(97)84495-3.

14

Fulcher, John. "Artificial neural networks." Computer Standards & Interfaces 16, no. 3 (July 1994): 183–84. http://dx.doi.org/10.1016/0920-5489(94)90010-8.

15

Buyse, Marc, and Pascal Piedbois. "Artificial neural networks." Lancet 350, no. 9085 (October 1997): 1175. http://dx.doi.org/10.1016/s0140-6736(05)63819-6.

16

Drew, Philip, Leonardo Bottaci, Graeme S. Duthie, and John RT Monson. "Artificial neural networks." Lancet 350, no. 9085 (October 1997): 1175–76. http://dx.doi.org/10.1016/s0140-6736(05)63820-2.

17

Piuri, Vincenzo, and Cesare Alippi. "Artificial neural networks." Journal of Systems Architecture 44, no. 8 (April 1998): 565–67. http://dx.doi.org/10.1016/s1383-7621(97)00063-5.

18

Roy, Asim. "Artificial neural networks." ACM SIGKDD Explorations Newsletter 1, no. 2 (January 2000): 33–38. http://dx.doi.org/10.1145/846183.846192.

19

Griffith, John. "Artificial Neural Networks." Medical Decision Making 20, no. 2 (April 2000): 243–44. http://dx.doi.org/10.1177/0272989x0002000210.

20

Hopfield, J. J. "Artificial neural networks." IEEE Circuits and Devices Magazine 4, no. 5 (September 1988): 3–10. http://dx.doi.org/10.1109/101.8118.

21

Dayhoff, Judith E., and James M. DeLeo. "Artificial neural networks." Cancer 91, S8 (2001): 1615–35. http://dx.doi.org/10.1002/1097-0142(20010415)91:8+<1615::aid-cncr1175>3.0.co;2-l.

22

Raol, Jitendra R., and Sunilkumar S. Mankame. "Artificial neural networks." Resonance 1, no. 2 (February 1996): 47–54. http://dx.doi.org/10.1007/bf02835699.

23

Sarwar, Abid. "Diagnosis of hyperglycemia using Artificial Neural Networks." International Journal of Trend in Scientific Research and Development 2, no. 1 (December 31, 2017): 606–10. http://dx.doi.org/10.31142/ijtsrd7045.

24

CVS, Rajesh, and M. Padmanabham. "Basics and Features of Artificial Neural Networks." International Journal of Trend in Scientific Research and Development 2, no. 2 (February 28, 2018): 1065–69. http://dx.doi.org/10.31142/ijtsrd9578.

25

Qahtan, Helal, Ekram Osman, and Ebrahim Alhamidi. "Transmission Line Protection Using Artificial Neural Networks." International Journal of Science and Research (IJSR) 11, no. 11 (November 5, 2022): 1404–8. http://dx.doi.org/10.21275/mr221123175925.

26

Prithvi, P., and T. Kishore Kumar. "Speech Emotion Recognition using Artificial Neural Networks." International Journal of Scientific Engineering and Research 4, no. 5 (May 27, 2016): 8–10. https://doi.org/10.70729/ijser15784.

27

Wang, Jun. "Artificial neural networks versus natural neural networks." Decision Support Systems 11, no. 5 (June 1994): 415–29. http://dx.doi.org/10.1016/0167-9236(94)90016-7.

28

Volodymyr, Dudnyk, Sinenko Yuriy, Matsyk Mykhailo, Demchenko Yevhen, Zhyvotovskyi Ruslan, Repilo Iurii, Zabolotnyi Oleg, Simonenko Alexander, Pozdniakov Pavlo, and Shyshatskyi Andrii. "DEVELOPMENT OF A METHOD FOR TRAINING ARTIFICIAL NEURAL NETWORKS FOR INTELLIGENT DECISION SUPPORT SYSTEMS." Eastern-European Journal of Enterprise Technologies 3, no. 2 (105) (June 30, 2020): 37–47. https://doi.org/10.15587/1729-4061.2020.203301.

Abstract:
A method for training artificial neural networks for intelligent decision support systems has been developed. The method provides training not only of the synaptic weights of the artificial neural network, but also the type and parameters of the membership function, architecture and parameters of an individual network node. The architecture of artificial neural networks is trained if it is not possible to ensure the specified quality of functioning of artificial neural networks due to the training of parameters of an artificial neural network. The choice of architecture, type and parameters of the membership function takes into account the computing resources of the tool and the type and amount of information received at the input of the artificial neural network. The specified method allows the training of an individual network node and the combination of network nodes. The development of the proposed method is due to the need for training artificial neural networks for intelligent decision support systems, in order to process more information, with unambiguous decisions being made. This training method provides on average 10–18 % higher learning efficiency of artificial neural networks and does not accumulate errors during training. The specified method will allow training artificial neural networks, identifying effective measures to improve the functioning of artificial neural networks, increasing the efficiency of artificial neural networks through training the parameters and architecture of artificial neural networks. The method will allow reducing the use of computing resources of decision support systems, developing measures aimed at improving the efficiency of training artificial neural networks and increasing the efficiency of information processing in artificial neural networks.
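The core idea, training the membership function's own parameters alongside the synaptic weights, can be illustrated with a deliberately tiny sketch. This is an assumption-laden reconstruction, not the authors' method: one Gaussian membership function with trainable center c and width s feeds a single output weight w, and all three are updated by gradient descent on a toy regression task.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data generated by a single Gaussian "membership" bump:
# y = 3 * exp(-(x - 2)^2).  The goal is to recover the output
# weight w AND the membership function's parameters (c, s).
x = rng.uniform(-1.0, 5.0, size=200)
y = 3.0 * np.exp(-((x - 2.0) ** 2))

w, c, s = 1.0, 0.0, 1.0          # all three quantities are trainable
lr = 0.05

def loss(w, c, s):
    mu = np.exp(-((x - c) ** 2) / s ** 2)
    return np.mean((w * mu - y) ** 2)

loss0 = loss(w, c, s)
for _ in range(5000):
    mu = np.exp(-((x - c) ** 2) / s ** 2)    # membership values
    err = w * mu - y
    # Mean-squared-error gradients for weight, center and width.
    w -= lr * np.mean(2 * err * mu)
    c -= lr * np.mean(2 * err * w * mu * 2 * (x - c) / s ** 2)
    s -= lr * np.mean(2 * err * w * mu * 2 * (x - c) ** 2 / s ** 3)
loss1 = loss(w, c, s)
print(loss0, loss1)              # training should reduce the error
```

The point of the sketch is only that the membership function is not fixed in advance: its shape is fitted to the data just like the weights, which is the feature the abstract emphasizes.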
29

Begum, Afsana, Md Masiur Rahman, and Sohana Jahan. "Medical diagnosis using artificial neural networks." Mathematics in Applied Sciences and Engineering 5, no. 2 (June 4, 2024): 149–64. http://dx.doi.org/10.5206/mase/17138.

Abstract:
Medical diagnosis using Artificial Neural Networks (ANN) and computer-aided diagnosis with deep learning is currently a very active research area in medical science. In recent years, neural network models have been broadly considered for medical diagnosis, since they are well suited to recognizing different kinds of diseases, including autism, cancer, tumors, lung infection, etc. It is evident that early diagnosis of any disease is vital for successful treatment and improved survival rates. In this research, five neural networks, Multilayer neural network (MLNN), Probabilistic neural network (PNN), Learning vector quantization neural network (LVQNN), Generalized regression neural network (GRNN), and Radial basis function neural network (RBFNN), have been explored. These networks are applied to several benchmark datasets collected from the University of California Irvine (UCI) Machine Learning Repository. Results from numerical experiments indicate that each network excels at recognizing specific physical issues. In the majority of cases, both the Learning Vector Quantization Neural Network and the Probabilistic Neural Network demonstrate superior performance compared to the other networks.
30

JORGENSEN, THOMAS D., BARRY P. HAYNES, and CHARLOTTE C. F. NORLUND. "PRUNING ARTIFICIAL NEURAL NETWORKS USING NEURAL COMPLEXITY MEASURES." International Journal of Neural Systems 18, no. 05 (October 2008): 389–403. http://dx.doi.org/10.1142/s012906570800166x.

Abstract:
This paper describes a new method for pruning artificial neural networks that uses a measure of the neural complexity of the network to determine which connections should be pruned. The measure computes the information-theoretic complexity of a neural network, which is similar to, yet different from, previous research on pruning. The method proposed here shows how overly large and complex networks can be reduced in size while retaining learnt behaviour and fitness. The technique helps to discover a network topology that matches the complexity of the problem the network is meant to solve. This novel pruning technique is tested in a robot control domain, simulating a racecar. It is shown that the proposed pruning method is a significant improvement over the most commonly used pruning method, Magnitude Based Pruning. Furthermore, some of the pruned networks prove to be faster learners than the benchmark network they originate from. This means that this pruning method can also help to unleash hidden potential in a network, because the learning time decreases substantially for a pruned network due to the reduction of the network's dimensionality.
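The benchmark the abstract compares against, Magnitude Based Pruning, is simple to state: delete the connections with the smallest absolute weights. A minimal sketch on a hypothetical random weight matrix (the paper's complexity-based criterion is not reproduced here):

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, fraction: float) -> np.ndarray:
    """Return a copy of `weights` with the given fraction of
    smallest-magnitude entries set to zero."""
    flat = np.abs(weights).ravel()
    k = int(fraction * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest |w|
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

rng = np.random.default_rng(1)
w = rng.normal(size=(4, 4))          # hypothetical layer weights
p = magnitude_prune(w, 0.5)
print(np.count_nonzero(p))           # → 8  (half of 16, barring ties)
```

Magnitude pruning ignores how a connection participates in the network's computation, which is precisely the gap the complexity-based criterion in the paper aims to close.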
31

Rajesh, CVS. "Basics and Features of Artificial Neural Networks." International Journal of Trend in Scientific Research and Development 2, no. 2 (February 23, 2018): 1065–69. https://doi.org/10.31142/ijtsrd9578.

Abstract:
Computing models that perform pattern recognition are motivated by the performance and structure of the biological neural network. A network consists of computing units that can display the features of the biological network. In this paper, the features of neural networks that motivate the study of neural computing are discussed, the differences between processing by the brain and by a computer are presented, and the historical development of neural network principles, artificial neural network (ANN) terminology, neuron models and topology are covered.
32

Mahat, Norpah, Nor Idayunie Nording, Jasmani Bidin, Suzanawati Abu Hasan, and Teoh Yeong Kin. "Artificial Neural Network (ANN) to Predict Mathematics Students’ Performance." Journal of Computing Research and Innovation 7, no. 1 (March 30, 2022): 29–38. http://dx.doi.org/10.24191/jcrinn.v7i1.264.

Abstract:
Predicting students’ academic performance is essential for producing high-quality students. The main goal is to continuously help students increase their ability in the learning process and to help educators improve their teaching skills. Therefore, this study was conducted to predict mathematics students’ performance using an Artificial Neural Network (ANN). Secondary data from 382 mathematics students, drawn from the UCI Machine Learning Repository, were used to train the neural networks. The neural network model was built using nntool. Two inputs are used, the first- and second-period grades, while one target output is used, the final grade. This study also aims to identify which training function is the best among three Feed-Forward Neural Networks, known as Network1, Network2 and Network3. Three types of training functions were selected in this study: Levenberg-Marquardt (TRAINLM), Gradient descent with momentum (TRAINGDM) and Gradient descent with adaptive learning rate (TRAINGDA). Each training function is compared based on performance value, correlation coefficient, gradient and epoch. MATLAB R2020a was used for data processing. The results show that the TRAINLM function is the most suitable for predicting mathematics students’ performance because it has a higher correlation coefficient and a lower performance value.
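The setup described, two grade inputs and one final-grade output through a small feed-forward network, can be sketched outside MATLAB. This is an illustrative Python analogue with synthetic data, not the paper's nntool experiment; plain gradient descent stands in for the TRAINLM/TRAINGDM/TRAINGDA functions it compares, and the evaluation uses the study's correlation-coefficient criterion.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for the student data (the UCI records are not
# reproduced here): two period grades on a 0-20 scale predicting the
# final grade.
g1 = rng.uniform(0, 20, size=300)
g2 = np.clip(g1 + rng.normal(0, 2, size=300), 0, 20)
final = np.clip(0.3 * g1 + 0.6 * g2 + rng.normal(0, 1, size=300), 0, 20)

X = np.column_stack([g1, g2]) / 20.0    # two inputs, normalized
t = final / 20.0                        # one target output

# Small feed-forward network: 2 inputs -> 5 tanh units -> 1 output.
W1 = rng.normal(0, 0.5, size=(2, 5)); b1 = np.zeros(5)
W2 = rng.normal(0, 0.5, size=5);      b2 = 0.0
lr = 0.3

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - t
    dh = np.outer(err, W2) * (1 - h ** 2)   # backprop through tanh
    W2 -= lr * (h.T @ err) / len(t); b2 -= lr * err.mean()
    W1 -= lr * (X.T @ dh) / len(t);  b1 -= lr * dh.mean(axis=0)

# Evaluate with the study's criterion: correlation of prediction vs target.
corr = np.corrcoef(pred, t)[0, 1]
print(round(corr, 3))
```

On this nearly linear synthetic relation the network fits easily; the study's point is that the choice of training function changes how quickly and how well such a fit is reached.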
33

Oleg, Sova, Shyshatskyi Andrii, Zhuravskyi Yurii, Salnikova Olha, Zubov Oleksandr, Zhyvotovskyi Ruslan, Romanenko Іgor, Kalashnikov Yevhen, Shulhin Artem, and Simonenko Alexander. "DEVELOPMENT OF A METHODOLOGY FOR TRAINING ARTIFICIAL NEURAL NETWORKS FOR INTELLIGENT DECISION SUPPORT SYSTEMS." Eastern-European Journal of Enterprise Technologies 2, no. 4 (104) (April 30, 2020): 6–14. https://doi.org/10.15587/1729-4061.2020.199469.

Abstract:
The method of training artificial neural networks for intelligent decision support systems is developed. A distinctive feature of the proposed method is that it provides training not only of the synaptic weights of the artificial neural network, but also the type and parameters of the membership function. If it is impossible to provide the specified quality of functioning of artificial neural networks due to the learning of the parameters of the artificial neural network, the architecture of artificial neural networks is trained. The choice of architecture, type and parameters of the membership function is based on the computing resources of the tool and takes into account the type and amount of information supplied to the input of the artificial neural network. Due to the use of the proposed methodology, there is no accumulation of errors of training artificial neural networks as a result of processing information that is fed to the input of artificial neural networks. Also, a distinctive feature of the developed method is that no preliminary calculation data are required. The development of the proposed methodology is due to the need to train artificial neural networks for intelligent decision support systems in order to process more information with the uniqueness of decisions made. According to the results of the study, it is found that the mentioned training method provides on average 10–18 % higher efficiency of training artificial neural networks and does not accumulate errors during training. This method will allow training artificial neural networks through the learning of parameters and architecture, identifying effective measures to improve the efficiency of artificial neural networks. This methodology will allow reducing the use of computing resources of decision support systems and developing measures aimed at improving the efficiency of training artificial neural networks, increasing the efficiency of information processing in artificial neural networks.
34

Walczak, Steven. "Artificial Neural Network Research in Online Social Networks." International Journal of Virtual Communities and Social Networking 10, no. 4 (October 2018): 1–15. http://dx.doi.org/10.4018/ijvcsn.2018100101.

Abstract:
Artificial neural networks are a machine learning method ideal for solving classification and prediction problems using Big Data. Online social networks and virtual communities provide a plethora of data. Artificial neural networks have been used to determine the emotional meaning of virtual community posts, determine age and sex of users, classify types of messages, and make recommendations for additional content. This article reviews and examines the utilization of artificial neural networks in online social network and virtual community research. An artificial neural network to predict the maintenance of online social network “friends” is developed to demonstrate the applicability of artificial neural networks for virtual community research.
35

Demiralay, Raziye. "Estimating of student success with artificial neural networks." New Trends and Issues Proceedings on Humanities and Social Sciences 03, no. 07 (July 23, 2017): 21–27. http://dx.doi.org/10.18844/prosoc.v2i7.1980.

36

Klyuchko, O. M. "APPLICATION OF ARTIFICIAL NEURAL NETWORKS METHOD IN BIOTECHNOLOGY." Biotechnologia Acta 10, no. 4 (August 2017): 5–13. http://dx.doi.org/10.15407/biotech10.04.005.

37

Oleg, Sova, Turinskyi Oleksandr, Shyshatskyi Andrii, Dudnyk Volodymyr, Zhyvotovskyi Ruslan, Prokopenko Yevgen, Hurskyi Taras, Hordiichuk Valerii, Nikitenko Anton, and Remez Artem. "DEVELOPMENT OF AN ALGORITHM TO TRAIN ARTIFICIAL NEURAL NETWORKS FOR INTELLIGENT DECISION SUPPORT SYSTEMS." Eastern-European Journal of Enterprise Technologies 1, no. 9 (103) (February 29, 2020): 46–55. https://doi.org/10.15587/1729-4061.2020.192711.

Abstract:
The algorithm to train artificial neural networks for intelligent decision support systems has been constructed. A distinctive feature of the proposed algorithm is that it conducts training not only for synaptic weights of an artificial neural network, but also for the type and parameters of membership function. In case of inability to ensure the assigned quality of functioning of artificial neural networks due to training of parameters of an artificial neural network, the architecture of artificial neural networks is trained. The choice of the architecture, type and parameters of membership function occurs taking into consideration the computation resources of the facility and the type and amount of information entering the input of an artificial neural network. In addition, when using the proposed algorithm, there is no accumulation of an error of artificial neural networks training as a result of processing the information entering the input of artificial neural networks. Development of the proposed algorithm was predetermined by the need to train artificial neural networks for intelligent decision support systems in order to process more information given the unambiguity of decisions being made. The research results revealed that the specified algorithm provides training efficiency that is on average 16–23 % higher and does not accumulate errors in the course of training. The specified algorithm will make it possible to conduct training of artificial neural networks and to determine effective measures to enhance the efficiency of their functioning. The developed algorithm will also enable the improvement of the efficiency of functioning of artificial neural networks due to training the parameters and the architecture of artificial neural networks. The proposed algorithm reduces the use of computational resources of decision support systems. The application of the developed algorithm makes it possible to work out measures aimed at improving the effectiveness of training artificial neural networks and to increase the efficiency of information processing.
38

Mahdi, Qasim Abbood, Andrii Shyshatskyi, Oleksandr Symonenko, Nadiia Protas, Oleksandr Trotsko, Volodymyr Kyvliuk, Artem Shulhin, Petro Steshenko, Eduard Ostapchuk, and Tetiana Holenkovska. "Development of a method for training artificial neural networks for intelligent decision support systems." Eastern-European Journal of Enterprise Technologies 1, no. 9(115) (February 28, 2022): 35–44. http://dx.doi.org/10.15587/1729-4061.2022.251637.

Abstract:
We developed a method of training artificial neural networks for intelligent decision support systems. A distinctive feature of the proposed method consists in training not only the synaptic weights of an artificial neural network, but also the type and parameters of the membership function. In case of impossibility to ensure a given quality of functioning of artificial neural networks by training the parameters of an artificial neural network, the architecture of artificial neural networks is trained. The choice of architecture, type and parameters of the membership function is based on the computing resources of the device and taking into account the type and amount of information coming to the input of the artificial neural network. Another distinctive feature of the developed method is that no preliminary calculation data are required to calculate the input data. The development of the proposed method is due to the need for training artificial neural networks for intelligent decision support systems, in order to process more information, while making unambiguous decisions. According to the results of the study, this training method provides on average 10–18 % higher efficiency of training artificial neural networks and does not accumulate training errors. This method will allow training artificial neural networks by training the parameters and architecture, determining effective measures to improve the efficiency of artificial neural networks. This method will allow reducing the use of computing resources of decision support systems, developing measures to improve the efficiency of training artificial neural networks, increasing the efficiency of information processing in artificial neural networks.
39

Qasim, Abbood Mahdi, Shyshatskyi Andrii, Symonenko Oleksandr, Protas Nadiia, Trotsko Oleksandr, Kyvliuk Volodymyr, Shulhin Artem, Steshenko Petro, Ostapchuk Eduard, and Holenkovska Tetiana. "Development of a method for training artificial neural networks for intelligent decision support systems." Eastern-European Journal of Enterprise Technologies 1, no. 9 (115) (February 28, 2022): 35–44. https://doi.org/10.15587/1729-4061.2022.251637.

Abstract:
We developed a method of training artificial neural networks for intelligent decision support systems. A distinctive feature of the proposed method consists in training not only the synaptic weights of an artificial neural network, but also the type and parameters of the membership function. In case of impossibility to ensure a given quality of functioning of artificial neural networks by training the parameters of an artificial neural network, the architecture of artificial neural networks is trained. The choice of architecture, type and parameters of the membership function is based on the computing resources of the device and taking into account the type and amount of information coming to the input of the artificial neural network. Another distinctive feature of the developed method is that no preliminary calculation data are required to calculate the input data. The development of the proposed method is due to the need for training artificial neural networks for intelligent decision support systems, in order to process more information, while making unambiguous decisions. According to the results of the study, this training method provides on average 10–18 % higher efficiency of training artificial neural networks and does not accumulate training errors. This method will allow training artificial neural networks by training the parameters and architecture, determining effective measures to improve the efficiency of artificial neural networks. This method will allow reducing the use of computing resources of decision support systems, developing measures to improve the efficiency of training artificial neural networks, increasing the efficiency of information processing in artificial neural networks.
40

Schaub, Nicholas J., and Nathan Hotaling. "Assessing Efficiency in Artificial Neural Networks." Applied Sciences 13, no. 18 (September 14, 2023): 10286. http://dx.doi.org/10.3390/app131810286.

Abstract:
The purpose of this work was to develop an assessment technique and subsequent metrics that help in developing an understanding of the balance between network size and task performance in simple model networks. Here, exhaustive tests on simple model neural networks and datasets are used to validate both the assessment approach and the metrics derived from it. The concept of neural layer state space is introduced as a simple mechanism for understanding layer utilization, where a state is the on/off activation state of all neurons in a layer for an input. Neural efficiency is computed from state space to measure neural layer utilization, and a second metric called the artificial intelligence quotient (aIQ) was created to balance neural network performance and neural efficiency. To study aIQ and neural efficiency, two simple neural networks were trained on MNIST: a fully connected network (LeNet-300-100) and a convolutional neural network (LeNet-5). The LeNet-5 network with the highest aIQ was 2.32% less accurate but contained 30,912 times fewer parameters than the network with the highest accuracy. Both batch normalization and dropout layers were found to increase neural efficiency. Finally, networks with a high aIQ are shown to be resistant to memorization and overtraining as well as capable of learning proper digit classification with an accuracy of 92.51%, even when 75% of the class labels are randomized. These results demonstrate the utility of aIQ and neural efficiency as metrics for determining the performance and size of a small network using exemplar data.
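The state-space idea above can be made concrete under one stated assumption: that neural efficiency is the empirical entropy of the observed on/off layer states divided by the layer's maximum possible entropy (one bit per neuron). This is a plausible reading of the abstract, not the paper's exact implementation:

```python
import numpy as np
from collections import Counter

def neural_efficiency(activations) -> float:
    """activations: array (n_inputs, n_neurons) of pre-activation values.
    A layer state is the on/off pattern of all neurons for one input;
    efficiency = entropy of observed states / max entropy (n_neurons bits)."""
    states = (np.asarray(activations) > 0).astype(int)
    counts = Counter(map(tuple, states))
    p = np.array(list(counts.values()), dtype=float) / len(states)
    entropy = -np.sum(p * np.log2(p))
    return abs(float(entropy / states.shape[1]))  # abs guards the -0.0 corner case

# A layer that responds identically to every input occupies one state:
same = np.tile([1.0, -1.0, 1.0], (8, 1))
print(neural_efficiency(same))   # → 0.0 (zero entropy, fully redundant layer)

# A layer whose states vary across inputs scores higher:
rng = np.random.default_rng(3)
varied = rng.normal(size=(8, 3))
print(neural_efficiency(varied) > 0.0)
```

Under this reading, a layer scores low when many neurons carry redundant information, which matches the abstract's use of efficiency to detect oversized networks.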
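The layer state space idea above can be sketched as follows. Here neural efficiency is taken as the Shannon entropy of the observed on/off activation states of a layer, normalized by the layer's maximum entropy of one bit per neuron; this normalization is an assumption about the paper's exact metric.

```python
import numpy as np
from collections import Counter

# Sketch of "layer state space": a state is the on/off pattern of all
# neurons in a layer for one input. Neural efficiency is computed here
# as observed state entropy / maximum entropy (assumed normalization).

def neural_efficiency(activations):
    """activations: (n_samples, n_neurons) array of real-valued outputs."""
    states = (activations > 0).astype(int)        # binarize: on/off
    counts = Counter(map(tuple, states))
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    entropy = -(p * np.log2(p)).sum()             # observed entropy, in bits
    return entropy / states.shape[1]              # max entropy = one bit/neuron

rng = np.random.default_rng(0)
diverse = rng.standard_normal((1000, 4))          # layer visiting many states
stuck = np.ones((1000, 4))                        # layer stuck in one state
eff_diverse = neural_efficiency(diverse)          # close to 1.0
eff_stuck = neural_efficiency(stuck)              # exactly 0.0
print(eff_diverse, eff_stuck)
```

A layer that always produces the same activation pattern contributes no information, which is what the zero efficiency of the constant layer captures.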
41

Еськов, В. М., М. А. Филатов, Г. В. Газя, and Н. Ф. Стратан. "Artificial Intellect with Artificial Neural Networks." Успехи кибернетики / Russian Journal of Cybernetics, no. 3 (October 11, 2021): 44–52. http://dx.doi.org/10.51790/2712-9942-2021-2-3-6.

Abstract:
Currently, there is no single definition of artificial intelligence. A categorization of the tasks to be solved by artificial intelligence systems is needed. The paper proposes such a categorization for artificial neural networks (in terms of obtaining subjectively and objectively new information). The advantages of such neural networks (non-algorithmizable problems) are shown, and a class of systems (third-type biosystems) that fundamentally cannot be studied by statistical methods (or by science as a whole) is presented. To study such biosystems (with unique samples), it is suggested to use artificial neural networks able to perform system synthesis (search for order parameters). Nowadays such problems are solved by humans through heuristics, a process that cannot be modeled by the existing artificial intelligence systems.
42

Basu, Abhirup, Pinaki Bisaws, Sarmi Ghosh, and Debarshi Datta. "Reconfigurable Artificial Neural Networks." International Journal of Computer Applications 179, no. 6 (December 15, 2017): 5–8. http://dx.doi.org/10.5120/ijca2017915961.

43

Chen, Tin-Chih, Cheng-Li Liu, and Hong-Dar Lin. "Advanced Artificial Neural Networks." Algorithms 11, no. 7 (July 10, 2018): 102. http://dx.doi.org/10.3390/a11070102.

44

Yao, Xin. "Evolving artificial neural networks." Proceedings of the IEEE 87, no. 9 (1999): 1423–47. http://dx.doi.org/10.1109/5.784219.

45

YAO, XIN. "EVOLUTIONARY ARTIFICIAL NEURAL NETWORKS." International Journal of Neural Systems 04, no. 03 (September 1993): 203–22. http://dx.doi.org/10.1142/s0129065793000171.

Abstract:
Evolutionary artificial neural networks (EANNs) can be considered as a combination of artificial neural networks (ANNs) and evolutionary search procedures such as genetic algorithms (GAs). This paper distinguishes among three levels of evolution in EANNs, i.e. the evolution of connection weights, architectures and learning rules. It first reviews each kind of evolution in detail and then analyses major issues related to each kind of evolution. It is shown in the paper that although there is a lot of work on the evolution of connection weights and architectures, research on the evolution of learning rules is still in its early stages. Interactions among different levels of evolution are far from being understood. It is argued in the paper that the evolution of learning rules and its interactions with other levels of evolution play a vital role in EANNs.
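The first of the three levels of evolution discussed above, evolving connection weights for a fixed architecture, can be sketched with a toy genetic algorithm on XOR. Population size, mutation scale, and the 2-2-1 tanh network are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Toy EANN sketch: a genome encodes all weights of a fixed 2-2-1 tanh
# network; selection + mutation with elitism evolves genomes toward
# solving XOR. All hyperparameters are illustrative assumptions.

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def forward(genome, x):
    W1 = genome[:4].reshape(2, 2); b1 = genome[4:6]
    W2 = genome[6:8];              b2 = genome[8]
    h = np.tanh(x @ W1 + b1)
    return np.tanh(h @ W2 + b2)

def fitness(genome):
    return -np.mean((forward(genome, X) - y) ** 2)   # higher is better

pop = rng.standard_normal((60, 9))                    # 60 random genomes
for gen in range(300):
    scores = np.array([fitness(g) for g in pop])
    elite = pop[np.argsort(scores)[-10:]]             # keep the 10 best
    parents = elite[rng.integers(0, 10, size=60)]
    pop = parents + 0.2 * rng.standard_normal((60, 9))  # Gaussian mutation
    pop[:10] = elite                                  # elitism

best = max(pop, key=fitness)
best_mse = -fitness(best)
print(np.round(forward(best, X), 2), best_mse)
```

Because no gradients are needed, the same loop could evolve networks with non-differentiable neurons, which is one of the motivations for EANNs the abstract alludes to.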
46

Boutsinas, B., and M. N. Vrahatis. "Artificial nonmonotonic neural networks." Artificial Intelligence 132, no. 1 (October 2001): 1–38. http://dx.doi.org/10.1016/s0004-3702(01)00126-6.

47

Anil, Nihal. "A Review on NEAT and Other Reinforcement Algorithms in Robotics." INTERANTIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 03 (March 28, 2024): 1–5. http://dx.doi.org/10.55041/ijsrem29667.

Abstract:
Artificial neural networks, or ANNs, find widespread use in real-world scenarios ranging from pattern recognition to robotics control. Choosing an architecture (which includes a model for the neurons) and a learning algorithm are important decisions when building a neural network for a particular purpose. Evolutionary search techniques provide an automated method for resolving these issues. Neuroevolution (NE), in which genetic algorithms are used to artificially evolve neural networks, has shown great promise in solving challenging reinforcement learning problems. This study offers a thorough review of the state-of-the-art techniques for evolving artificial neural networks (ANNs), with a focus on optimizing their performance-enhancing capabilities. Key Words: Artificial neural networks, Neuroevolution, NeuroEvolution of Augmenting Topologies, Brain-computer interface.
48

Maksutova, K., N. Saparkhojayev, and Dusmat Zhamangarin. "DEVELOPMENT OF AN ONTOLOGICAL MODEL OF DEEP LEARNING NEURAL NETWORKS." Bulletin D. Serikbayev of EKTU, no. 1 (March 2024): 190–201. http://dx.doi.org/10.51885/1561-4212_2024_1_190.

Abstract:
This research paper examines the challenges and prospects associated with the integration of artificial neural networks and knowledge bases. The focus is on leveraging this integration to address practical problems. The paper explores the development, training, and integration of artificial neural networks, emphasizing their adaptation to knowledge bases. This adaptation involves processes such as integration, communication, representation of ontological structures, and interpretation by the knowledge base of the artificial neural network's representation through input and output. The paper also delves into the direction of establishing an intellectual environment conducive to the development, training, and integration of adapted artificial neural networks with knowledge bases. The knowledge base embedded in an artificial neural network is constructed using a homogeneous semantic network, and knowledge processing employs a multi-agent approach. The representation of artificial neural networks and their specifications within a unified semantic model of knowledge representation is detailed, encompassing text-based specifications in the language of knowledge representation with theoretical semantics. The models shared with the knowledge base include dynamic and other types that vary in their capabilities for knowledge representation. Furthermore, the paper conducts an analysis of approaches to creating artificial neural networks across various libraries of the high-level programming language Python. It explores techniques for developing artificial neural networks within the Python development environment, investigating the key features and functions of these libraries. A comparative analysis of neural networks created in object-oriented programming languages is provided, along with the development of an ontological model for deep learning neural networks.
49

Yashchenko, V. O. "Neural-like growing networks in the development of general intelligence. Neural-like growing networks (P. II)." Mathematical machines and systems 1 (2023): 3–29. http://dx.doi.org/10.34121/1028-9763-2023-1-3-29.

Abstract:
This article is devoted to the development of general artificial intelligence (AGI) based on a new type of neural networks: "neural-like growing networks". It consists of two parts. The first one was published in N4, 2022, and describes an artificial neural-like element (artificial neuron) whose functionality is as close as possible to that of a biological neuron. An artificial neural-like element is the main element in building neural-like growing networks. The second part deals with the structures and functions of artificial and natural neural networks. The paper proposes a new approach to creating neural-like growing networks as a means of developing AGI that is as close as possible to the natural intelligence of a person. The intelligence of humans and living organisms is formed by their nervous system. According to I.P. Pavlov's definition, the main mechanism of higher nervous activity is the reflex activity of the nervous system. In the nerve cell, the main storage of unconditioned reflexes is the deoxyribonucleic acid (DNA) molecule. The article describes ribosomal protein synthesis, which contributes to the implementation of unconditioned reflexes and the formation of conditioned reflexes as the basis for learning in biological objects. The first part of the work shows that the structure and functions of ribosomes almost completely coincide with the structure and functions of the Turing machine. Turing invented this machine to prove the fundamental (theoretical) possibility of constructing arbitrarily complex algorithms from extremely simple operations that are themselves performed automatically. A striking analogy arises here: nature created DNA and the ribosome to build complex algorithms for creating biological objects and for their communication with each other and with the external environment, and ribosomal protein synthesis is carried out by many ribosomes simultaneously.
It was concluded that the nerve cells of the brain are analog multi-machine complexes – ultra-fast molecular supercomputers with an unusually simple analog programming device.
50

Næs, Tormod, Knut Kvaal, Tomas Isaksson, and Charles Miller. "Artificial Neural Networks in Multivariate Calibration." Journal of Near Infrared Spectroscopy 1, no. 1 (January 1993): 1–11. http://dx.doi.org/10.1255/jnirs.1.

Abstract:
This paper is about the use of artificial neural networks for multivariate calibration. We discuss network architecture and estimation as well as the relationship between neural networks and related linear and non-linear techniques. A feed-forward network is tested on two applications of near infrared spectroscopy, both of which have been treated previously and which have indicated non-linear features. In both cases, the network gives more precise prediction results than the linear calibration method of principal component regression (PCR).
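The paper's central point, that a feed-forward network can outperform a linear calibration when the data have non-linear features, can be illustrated on synthetic data. The two-channel "spectra", network size, and training schedule below are assumptions for demonstration only, not the paper's datasets or model.

```python
import numpy as np

# Hedged illustration: on data that depend non-linearly on one channel,
# a small feed-forward net fits better than ordinary least squares.
# The synthetic data and 2-8-1 tanh network are assumptions.

rng = np.random.default_rng(0)
n = 200
X = rng.uniform(-1, 1, size=(n, 2))
y = X[:, 0] + 0.8 * X[:, 1] ** 2          # non-linear in the 2nd channel

# Linear calibration: ordinary least squares with an intercept
A = np.hstack([X, np.ones((n, 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
lin_mse = np.mean((A @ coef - y) ** 2)

# Small feed-forward net (2-8-1, tanh hidden layer), batch gradient descent
W1 = 0.5 * rng.standard_normal((2, 8)); b1 = np.zeros(8)
W2 = 0.5 * rng.standard_normal(8);      b2 = 0.0
lr = 0.1
for _ in range(5000):
    H = np.tanh(X @ W1 + b1)              # (n, 8) hidden activations
    err = H @ W2 + b2 - y                 # (n,) prediction errors
    gW2 = H.T @ err / n; gb2 = err.mean()
    dH = np.outer(err, W2) * (1 - H ** 2) # backprop through tanh
    gW1 = X.T @ dH / n;  gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1
net_mse = np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2)

print(lin_mse, net_mse)  # the network should capture the curvature better
```

The linear model can only absorb the quadratic term into its intercept, so its residual error stays bounded away from zero, while the hidden layer can approximate the curvature directly.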