To see the other types of publications on this topic, follow the link: Artificial Neuron.

Journal articles on the topic 'Artificial Neuron'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Artificial Neuron.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Sharp, A. A., L. F. Abbott, and E. Marder. "Artificial electrical synapses in oscillatory networks." Journal of Neurophysiology 67, no. 6 (1992): 1691–94. http://dx.doi.org/10.1152/jn.1992.67.6.1691.

Full text
Abstract:
1. We use an electronic circuit to artificially electrically couple neurons. 2. Strengthening the coupling between an oscillating neuron and a hyperpolarized, passive neuron can either increase or decrease the frequency of the oscillator depending on the properties of the oscillator. 3. The result of electrically coupling two neuronal oscillators depends on the membrane potentials, intrinsic properties of the neurons, and the coupling strength. 4. The interplay between chemical inhibitory synapses and electrical synapses can be studied by creating both chemical and electrical synapses between two cultured neurons and by artificially strengthening the electrical synapse between the ventricular dilator and one pyloric dilator neuron of the stomatogastric ganglion.
APA, Harvard, Vancouver, ISO, and other styles
2

Torres-Treviño, Luis M., Angel Rodríguez-Liñán, Luis González-Estrada, and Gustavo González-Sanmiguel. "Single Gaussian Chaotic Neuron: Numerical Study and Implementation in an Embedded System." Discrete Dynamics in Nature and Society 2013 (2013): 1–11. http://dx.doi.org/10.1155/2013/318758.

Full text
Abstract:
Artificial Gaussian neurons are very common structures in artificial neural networks such as radial basis function networks. These artificial neurons use a Gaussian activation function with two parameters, called the center of mass (cm) and the sensibility factor (λ). Changes in these parameters determine the behavior of the neuron. When the neuron's output is fed back to its input, complex chaotic behavior is displayed. This paper presents a study and implementation of this particular neuron. Stability of fixed points, bifurcation diagrams, and Lyapunov exponents help to determine the dynamical nature of the neuron, and its implementation on an embedded system illustrates preliminary results toward embedded chaos computation.
APA, Harvard, Vancouver, ISO, and other styles
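The feedback mechanism described in the abstract above can be illustrated with a minimal sketch: a Gaussian activation whose output becomes the next input. The parameter names `cm` and `lam` follow the abstract, but the exact functional form and the sample values are assumptions, not the paper's implementation.

```python
import math

def gaussian(x, cm, lam):
    # Gaussian activation with center of mass cm and sensibility factor lam
    # (an assumed form; the paper's exact parameterization may differ).
    return math.exp(-((x - cm) ** 2) / lam)

def feedback_orbit(x0, cm, lam, steps):
    # Feed the neuron's output back as its next input.
    xs = [x0]
    for _ in range(steps):
        xs.append(gaussian(xs[-1], cm, lam))
    return xs

orbit = feedback_orbit(0.1, cm=0.5, lam=0.05, steps=200)
# Outputs stay bounded in (0, 1]; depending on cm and lam the orbit may
# converge, oscillate, or wander, which is what the bifurcation diagrams
# and Lyapunov exponents in the paper characterize.
```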
3

Alvarellos-González, Alberto, Alejandro Pazos, and Ana B. Porto-Pazos. "Computational Models of Neuron-Astrocyte Interactions Lead to Improved Efficacy in the Performance of Neural Networks." Computational and Mathematical Methods in Medicine 2012 (2012): 1–10. http://dx.doi.org/10.1155/2012/476324.

Full text
Abstract:
The importance of astrocytes, one part of the glial system, for information processing in the brain has recently been demonstrated. Regarding information processing in multilayer connectionist systems, it has been shown that systems which include artificial neurons and astrocytes (Artificial Neuron-Glia Networks) have well-known advantages over identical systems including only artificial neurons. Since the actual impact of astrocytes on neural network function is unknown, we have investigated, using computational models, different astrocyte-neuron interactions for information processing; different neuron-glia algorithms have been implemented for the training and validation of multilayer Artificial Neuron-Glia Networks oriented toward solving classification problems. The results of the tests performed suggest that all the algorithms modelling astrocyte-induced synaptic potentiation improved artificial neural network performance, but their efficacy depended on the complexity of the problem.
APA, Harvard, Vancouver, ISO, and other styles
4

Chen, Xiu, and Yi Wang. "A Chaotic Neuron and its Ability to Prevent Overfitting." Frontiers in Computing and Intelligent Systems 5, no. 1 (2023): 53–61. http://dx.doi.org/10.54097/fcis.v5i1.11673.

Full text
Abstract:
The chaotic neuron is a neural model based on chaos theory, which combines the complex dynamic behavior of biological neurons with the characteristics of chaotic systems. Inspired by the chaotic firing characteristics of biological neurons, a novel chaotic neuron model and its corresponding activation function, LMCU, are proposed in this paper. Based on a one-dimensional chaotic map, this chaotic neuron model takes the firing rate of biological neurons' chaotic firing as its response output, so that it has the nonlinear response and chaotic characteristics of biological neurons. Unlike the traditional neuron model, it makes full use of the nonlinear dynamics of the chaotic system to produce the activation output. In this paper, we apply the proposed chaotic neurons to artificial neural networks using LeNet-5 models on the MNIST and CIFAR-10 datasets, and compare them with common activation functions. The application of chaotic neurons can effectively reduce overfitting in artificial neural networks, significantly reduce the model's generalization error, and greatly improve the network's overall performance. The innovative design of this chaotic neuron model provides a new cornerstone for the future development of artificial neural networks.
APA, Harvard, Vancouver, ISO, and other styles
5

Zigunovs, Maksims. "THE ALZHEIMER’S DISEASE IMPACT ON ARTIFICIAL NEURAL NETWORKS." ENVIRONMENT. TECHNOLOGIES. RESOURCES. Proceedings of the International Scientific and Practical Conference 2 (June 17, 2021): 205–9. http://dx.doi.org/10.17770/etr2021vol2.6632.

Full text
Abstract:
The main impact of Alzheimer's disease on the brain is memory loss. In the "neuron world", this disorders signal impulses and disconnects neurons, causing neuron death and memory loss. The main aim of the research is to determine the average loss of signal and to develop memory-loss prediction models for artificial neural networks. The Izhikevich model is often used to model neurons' electrical signals. The signal rhythm and spikes of the neuron model are used as characteristics for understanding whether the system is stable at a certain moment and over time. In addition, the electrical signal parameters are used in a similar way to how they are used in a biological brain. During the research, the neural network's initial conditions are assumed to be randomly selected within a specified range of the working neurons' average sigma-I parameters.
APA, Harvard, Vancouver, ISO, and other styles
6

Panda, Sashmita. "Comparative study of single biological neuron with an artificial neuron." BOHR Journal of Biocomputing and Nano Technology 1, no. 1 (2023): 9–16. http://dx.doi.org/10.54646/bjbnt.2023.02.

Full text
Abstract:
A number of artificial neural models have been presented in the literature in an effort to suggest a more accurate representation of a single biological neuron. Numerous publications on artificial neurons have attempted to replicate a single biological neuron; however, such models were unable to generate the spiking patterns of a real biological neuron. There is therefore still scope to design and research improved spiking neural models that more accurately reflect the functions of a biological neuron. This motivation drives extensive modification of an artificial neuron model to produce the spike patterns of a real biological neuron. The proposed modified single artificial neuron model exhibits the functions of a biological neuron. Modeling spiking bio-neuron behavior remains an important exercise in view of possible applications of the underlying features in the areas of neuromorphic engineering, cognitive radio, and spiking neural networks.
APA, Harvard, Vancouver, ISO, and other styles
7

Mijwil, Maad M. "Artificial Neural Networks Advantages and Disadvantages." Mesopotamian Journal of Big Data 2021 (August 23, 2021): 29–31. http://dx.doi.org/10.58496/mjbd/2021/006.

Full text
Abstract:
Artificial neural networks (ANNs), in the simplest definition, are a model of the human brain, and their building blocks are neurons. There are about 100 billion neurons in the human brain, and each neuron has between 1,000 and 100,000 connection points. In the human brain, information is stored in a distributed way, and we can retrieve more than one piece of this information from our memory in parallel when necessary. It is no exaggeration to say that a human brain amounts to thousands of very powerful parallel processors. In multi-layer artificial neural networks, neurons are likewise arranged in a manner similar to the human brain. Each neuron is connected to other neurons with specific coefficients. During training, information is distributed across these connection points so that the network learns.
APA, Harvard, Vancouver, ISO, and other styles
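The building block described in this abstract, a neuron connected to others with specific coefficients, can be sketched as a weighted sum passed through an activation function. All names and values below are illustrative only.

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs (the "connection coefficients"), followed by
    # a sigmoid activation that squashes the result into (0, 1).
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

out = neuron([0.5, -1.0, 2.0], weights=[0.4, 0.3, 0.1], bias=0.0)
```

Training such a network amounts to adjusting the `weights` so the network's outputs match desired targets, which is the "distribution of information to connection points" the abstract refers to.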
8

Ruzek, Martin. "ARTIFICIAL NEURAL NETWORK FOR MODELS OF HUMAN OPERATOR." Acta Polytechnica CTU Proceedings 12 (December 15, 2017): 99. http://dx.doi.org/10.14311/app.2017.12.0099.

Full text
Abstract:
This paper presents a new approach to modeling mental functions with artificial neural networks. The artificial neural network seems to be a promising method for modeling a human operator because the architecture of the ANN is directly inspired by the biological neuron. On the other hand, the classical paradigms of artificial neural networks are not suitable because they oversimplify the real processes in biological neural networks. The search for a compromise between the complexity of the biological neural network and the practical feasibility of the artificial network led to a new learning algorithm. This algorithm is based on the classical multilayered neural network; however, the learning rule is different. The neurons update their parameters in a way that is similar to real biological processes. The basic idea is that the neurons compete for resources, and the criterion deciding which neuron will survive is the usefulness of the neuron to the whole neural network. The neuron does not use a "teacher" or any kind of superior system; it receives only the information that is present in the biological system. The learning process can be seen as a search for an equilibrium point, a state of maximal importance of the neuron for the neural network. This position can change if the environment changes. The name of this type of learning, the homeostatic artificial neural network, originates from this idea, as it is similar to the process of homeostasis known in any living cell. The simulation results suggest that this type of learning can also be useful in other tasks of artificial learning and recognition.
APA, Harvard, Vancouver, ISO, and other styles
9

Tomov, Konstantin, and Galina Momcheva. "Multi-Activation Dendritic Neural Network (MA-DNN) Working Example of Dendritic-Based Artificial Neural Network." Cybernetics and Information Technologies 23, no. 3 (2023): 145–62. http://dx.doi.org/10.2478/cait-2023-0030.

Full text
Abstract:
Throughout the years, neural networks have been based on the perceptron model of the artificial neuron. Attempts to stray from it are few to none. The perceptron simply works, and that has discouraged research around other neuron models. New discoveries highlight the importance of dendrites in the neuron, but the perceptron model does not include them. This brings us to the goal of the paper, which is to present and test different models of artificial neurons that utilize dendrites to create an artificial neuron that better represents the biological neuron. The authors propose two models. One is made with the purpose of testing the idea of the dendritic neuron. The distinguishing feature of the second model is that it implements activation functions after its dendrites. Results from the second model suggest that it performs as well as or even better than the perceptron model.
APA, Harvard, Vancouver, ISO, and other styles
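A minimal sketch of the second model's distinguishing idea, applying an activation function after each dendrite before the soma combines the branches. The grouping of inputs into dendrites, the choice of tanh and sigmoid, and all parameter values are assumptions for illustration, not the authors' implementation.

```python
import math

def dendrite(xs, ws):
    # Each dendrite sums its own weighted inputs and applies its own
    # activation before passing the result up to the soma.
    return math.tanh(sum(x * w for x, w in zip(xs, ws)))

def dendritic_neuron(inputs, dendrite_weights, soma_weights, bias):
    branch_outputs = [dendrite(inputs, ws) for ws in dendrite_weights]
    # The soma combines the already-activated branch outputs.
    z = sum(b * w for b, w in zip(branch_outputs, soma_weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

y = dendritic_neuron([1.0, -0.5],
                     dendrite_weights=[[0.2, 0.4], [0.7, -0.1]],
                     soma_weights=[0.5, 0.5], bias=0.0)
```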
10

Yashchenko, V. O. "Neural-like growing networks in the development of general intelligence. Neural-like element (P. I)." Mathematical machines and systems 4 (2022): 15–36. http://dx.doi.org/10.34121/1028-9763-2022-4-15-36.

Full text
Abstract:
The article discusses a new approach to the creation of artificial neurons and neural networks as a means of developing artificial intelligence similar to natural intelligence. The article consists of two parts. In the first, the system of artificial intelligence formation is considered in comparison with the system of natural intelligence formation. Based on consideration and analysis of the structure and functions of a biological neuron, it was concluded that memory is stored in brain neurons at the molecular level. Information perceived by a person from the moment of his birth and throughout his life is stored in the endoplasmic reticulum of the neuron. There are about 100 billion neurons in the human brain, and each neuron contains millions of ribosomes that synthesize a mediator consisting of about 10,000 molecules. If we assume that one molecule corresponds to one unit of information, then human memory is unlimited. In the nerve cell, there is synthesis of the biologically active substances necessary for analyzing and memorizing information. The "factory" for the production of proteins is the endoplasmic reticulum, which accumulates millions of ribosomes. One ribosome synthesizes protein at a rate of 15–20 amino acids per second. Considering that the functional structure of ribosomes is similar to a Turing machine, we can conclude that the neuron is an analog multimachine complex: an ultra-fast molecular multimachine supercomputer with an unusually simple analog programming device. The artificial neuron proposed by J. McCulloch and W. Pitts is considered a highly simplified mathematical model of a biological neuron. A maximally approximate analogue of a biological neuron, a neural-like element, is proposed. A description of the neural-like element is given. The process of perceiving and memorizing information in a neural-like element is shown in comparison with the similar process in a nerve cell of the brain.
APA, Harvard, Vancouver, ISO, and other styles
11

Lin, Haidan, and Yiran Shen. "A VO2 Neuristor Based on Microstrip Line Coupling." Micromachines 14, no. 2 (2023): 337. http://dx.doi.org/10.3390/mi14020337.

Full text
Abstract:
The neuromorphic network based on artificial neurons and synapses can overcome computational difficulties, and its energy efficiency is incomparable to that of the traditional von Neumann architecture. As a new type of circuit component, nonvolatile memristors are very similar to biological synapses in structure and function: a single memristor can simulate the function of a synapse. Therefore, memristors provide a new way to build hardware-based artificial neural networks. To build such an artificial neural network, in addition to artificial synapses, artificial neurons are also needed to realize the distribution of information and the adjustment of synaptic weights. As the VO2 volatile locally active memristor is complementary to nonvolatile memristors, it can be used to simulate the function of neurons. However, determining how to better realize the function of neurons with simple circuits is one of the key open problems in this field. This paper considers the influence of distributed parameters on circuit performance under high-frequency, high-speed signals. Two Mott VO2 memristor units are connected and coupled with microstrip lines to simulate the Hodgkin–Huxley neuron model. It is found that the proposed microstrip-line-based memristor neuron shows the characteristics of neuron action potentials: amplification and threshold.
APA, Harvard, Vancouver, ISO, and other styles
12

Jia, Dongbao, Weixiang Xu, Dengzhi Liu, Zhongxun Xu, Zhaoman Zhong, and Xinxin Ban. "Verification of Classification Model and Dendritic Neuron Model Based on Machine Learning." Discrete Dynamics in Nature and Society 2022 (July 4, 2022): 1–14. http://dx.doi.org/10.1155/2022/3259222.

Full text
Abstract:
Artificial neural networks have achieved great success in simulating the information-processing mechanisms of supervised neuron learning, such as classification. However, traditional artificial neurons still suffer from problems such as slow and difficult training. This paper proposes a new dendritic neuron model (DNM) that effectively combines metaheuristic algorithms with a dendritic neuron structure. Eight learning algorithms, including traditional backpropagation, classic evolutionary algorithms such as biogeography-based optimization, particle swarm optimization, the genetic algorithm, population-based incremental learning, competitive swarm optimization, and differential evolution, as well as the state-of-the-art jSO algorithm, are used to train the dendritic neuron model. The optimal combination of the model's user-defined parameters has been systematically investigated, and four different classification datasets are examined using the proposed DNM. Compared with common machine learning methods such as the decision tree, support vector machine, k-nearest neighbor, and artificial neural networks, the dendritic neuron model trained by biogeography-based optimization has significant advantages. It has a simple structure and low cost and can be used as a neuron model to solve practical problems with high precision.
APA, Harvard, Vancouver, ISO, and other styles
13

Han, Mengqiao, Liyuan Pan, and Xiabi Liu. "MA-Net: Rethinking Neural Unit in the Light of Astrocytes." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (2024): 2040–48. http://dx.doi.org/10.1609/aaai.v38i3.27975.

Full text
Abstract:
Artificial neuron (N-N) model-based networks have accomplished extraordinary success on various vision tasks. However, as a simplification of the mammalian neuron model, their structure is locked during training, resulting in overfitting and over-parameterization. The astrocyte, newly explored by biologists, can adaptively modulate neuronal communication by inserting itself between neurons. The communication between the astrocyte and the neuron is bidirectional and shows the potential to alleviate issues raised by the unidirectional communication in the N-N model. In this paper, we first elaborate the artificial Multi-Astrocyte-Neuron (MA-N) model, which enriches the functionality of the artificial neuron model. Our MA-N model is formulated at both the astrocyte and neuron levels, mimicking the bidirectional communication with temporal and joint mechanisms. Then, we construct the MA-Net network with the MA-N model, whose neural connections can be continuously and adaptively modulated during training. Experiments show that our MA-Net advances the state of the art on multiple tasks while significantly reducing its parameters through connection optimization.
APA, Harvard, Vancouver, ISO, and other styles
14

Ribar, Srdjan, Vojislav V. Mitic, and Goran Lazovic. "Neural Networks Application on Human Skin Biophysical Impedance Characterizations." Biophysical Reviews and Letters 16, no. 01 (2021): 9–19. http://dx.doi.org/10.1142/s1793048021500028.

Full text
Abstract:
Artificial neural networks (ANNs) are basically the structures that perform input–output mapping. This mapping mimics the signal processing in biological neural networks. The basic element of biological neural network is a neuron. Neurons receive input signals from other neurons or the environment, process them, and generate their output which represents the input to another neuron of the network. Neurons can change their sensitivity to input signals. Each neuron has a simple rule to process an input signal. Biological neural networks have the property that signals are processed through many parallel connections (massively parallel processing). The activity of all neurons in these parallel connections is summed and represents the output of the whole network. The main feature of biological neural networks is that changes in the sensitivity of the neurons lead to changes in the operation of the entire network. This is called adaptation and is correlated with the learning process of living organisms. In this paper, a set of artificial neural networks are used for classifying the human skin biophysical impedance data.
APA, Harvard, Vancouver, ISO, and other styles
15

Chafik, Sanaa, Mounim A. El Yacoubi, Imane Daoudi, and Hamid El Ouardi. "Unsupervised deep neuron-per-neuron hashing." Applied Intelligence 49, no. 6 (2019): 2218–32. http://dx.doi.org/10.1007/s10489-018-1353-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Wang, Yu, Xintong Chen, Daqi Shen, et al. "Artificial Neurons Based on Ag/V2C/W Threshold Switching Memristors." Nanomaterials 11, no. 11 (2021): 2860. http://dx.doi.org/10.3390/nano11112860.

Full text
Abstract:
Artificial synapses and neurons are two critical, fundamental bricks for constructing hardware neural networks. Owing to their high-density integration, outstanding nonlinearity, and modulated plasticity, memristors have attracted growing attention for emulating biological synapses and neurons. However, fabricating a low-power and robust memristor-based artificial neuron without extra electrical components is still a challenge for brain-inspired systems. In this work, we demonstrate a single two-dimensional (2D) MXene (V2C)-based threshold switching (TS) memristor that emulates a leaky integrate-and-fire (LIF) neuron without auxiliary circuits, originating from the Ag-diffusion-based filamentary mechanism. Moreover, our V2C-based artificial neurons faithfully achieve multiple neural functions, including leaky integration, threshold-driven firing, self-relaxation, and linear strength-modulated spike-frequency characteristics. This work demonstrates that three-atom-type MXene (e.g., V2C) memristors may provide an efficient way to construct hardware neuromorphic computing systems.
APA, Harvard, Vancouver, ISO, and other styles
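The leaky integrate-and-fire behavior that the device above emulates can be sketched in software: the membrane potential leaks toward rest, integrates the input current, and fires when it crosses a threshold. All constants below are illustrative, not device measurements.

```python
def lif_spikes(current, steps, dt=1.0, tau=20.0, v_rest=0.0,
               v_thresh=1.0, v_reset=0.0):
    # Leaky integrate-and-fire: dv/dt = (v_rest - v)/tau + I
    v, spikes = v_rest, 0
    for _ in range(steps):
        v += dt * ((v_rest - v) / tau + current)
        if v >= v_thresh:   # threshold-driven firing
            spikes += 1
            v = v_reset     # relaxation back to the reset potential
    return spikes

# Spike count grows with input strength (strength-modulated firing).
low, high = lif_spikes(0.06, 500), lif_spikes(0.12, 500)
```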
17

Behdad, Rachid, Stephane Binczak, Alexey S. Dmitrichev, Vladimir I. Nekorkin, and Jean-Marie Bilbault. "Artificial Electrical Morris–Lecar Neuron." IEEE Transactions on Neural Networks and Learning Systems 26, no. 9 (2015): 1875–84. http://dx.doi.org/10.1109/tnnls.2014.2360072.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Chen, Shinya E., Rajiv Giridharagopal, and David S. Ginger. "Artificial neuron transmits chemical signals." Nature Materials 22, no. 4 (2023): 416–18. http://dx.doi.org/10.1038/s41563-023-01509-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Yashchenko, V. "A new approach to the development of artificial intelligence similar to human intelligence." Artificial Intelligence 28, no. 1 (2023): 105–21. http://dx.doi.org/10.15407/jai2023.01.105.

Full text
Abstract:
The article discusses a new approach to the creation of artificial neurons and neural networks as a means of developing artificial intelligence similar to natural intelligence. Based on consideration and analysis of physiologists' work on the structure and functions of a biological neuron, it was found that the information perceived by a person is stored in the neurons of the brain at the molecular level; it was also suggested that the nucleus and endoplasmic reticulum are elements for processing, transforming, and storing short-term and long-term memory. In addition, it was suggested that the nerve cell of the brain is a molecular analog supercomputer, using signals or information represented by continuously variable physical quantities, that performs the analysis, synthesis, processing, and storage of information. Huge amounts of information perceived by a person from the moment of his birth and throughout his life are stored in the endoplasmic reticulum of the neuron. There are about 100 billion neurons in the human brain, and each neuron contains millions of membrane-bound ribosomes that biosynthesize and modify protein molecules, which are transferred to secretory vesicles. When one synaptic vesicle is emptied, a portion of the mediator of about 10,000 molecules is ejected into the synaptic cleft. If we assume that one molecule corresponds to one unit of information, then human memory is unlimited. A detailed consideration of the fundamentals of the functioning of a biological neuron from the standpoint of a cybernetic systems approach led to the understanding that it is necessary to develop an artificial neuron of a new type. This made it possible to develop the closest analog of a biological neuron, a neural-like element, and a neural-like growing network. A description of the neural-like element is given. The process of perception and memorization of information in a neural-like element, with the simultaneous formation of a neural-like growing network, and their application in models of intelligent systems are shown.
APA, Harvard, Vancouver, ISO, and other styles
20

Gerasimova, Svetlana A., Anna Beltyukova, Anastasia Fedulina, Maria Matveeva, Albina V. Lebedeva, and Alexander N. Pisarchik. "Living-Neuron-Based Autogenerator." Sensors 23, no. 16 (2023): 7016. http://dx.doi.org/10.3390/s23167016.

Full text
Abstract:
We present a novel closed-loop system designed to integrate biological and artificial neurons of the oscillatory type into a unified circuit. The system comprises an electronic circuit based on the FitzHugh-Nagumo model, which provides stimulation to living neurons in acute hippocampal mouse brain slices. The local field potentials generated by the living neurons trigger a transition in the FitzHugh–Nagumo circuit from an excitable state to an oscillatory mode, and in turn, the spikes produced by the electronic circuit synchronize with the living-neuron spikes. The key advantage of this hybrid electrobiological autogenerator lies in its capability to control biological neuron signals, which holds significant promise for diverse neuromorphic applications.
APA, Harvard, Vancouver, ISO, and other styles
21

Vazquez, Roberto A., and Beatriz A. Garro. "Training Spiking Neural Models Using Artificial Bee Colony." Computational Intelligence and Neuroscience 2015 (2015): 1–14. http://dx.doi.org/10.1155/2015/947098.

Full text
Abstract:
Spiking neurons are models designed to simulate, in a realistic manner, the behavior of biological neurons. Recently, it has been shown that this type of neuron can be applied to solve pattern recognition problems with great efficiency. However, the lack of learning strategies for training these models prevents their use in several pattern recognition problems. On the other hand, several bioinspired algorithms have been proposed in recent years for solving a broad range of optimization problems, including those related to the field of artificial neural networks (ANNs). Artificial bee colony (ABC) is a novel algorithm based on the behavior of bees exploring their environment to find a food source. In this paper, we describe how the ABC algorithm can be used as a learning strategy to train a spiking neuron to solve pattern recognition problems. Finally, the proposed approach is tested on several pattern recognition problems. It is important to note that, to demonstrate the power of this type of model, only one neuron is used. In addition, we analyze how the performance of these models improves with this kind of learning strategy.
APA, Harvard, Vancouver, ISO, and other styles
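The ABC learning strategy described above can be sketched as follows. This is a simplified variant (the employed- and onlooker-bee phases are merged, without fitness-weighted onlooker selection), and the objective here is a stand-in quadratic error rather than an actual spiking-neuron loss; all names and constants are illustrative assumptions.

```python
import random

def abc_minimize(objective, dim, bounds, colony=20, limit=15, iters=150, seed=1):
    # Simplified artificial bee colony. Each food source is a candidate
    # parameter vector (standing in for spiking-neuron parameters).
    rng = random.Random(seed)
    lo, hi = bounds
    fresh = lambda: [rng.uniform(lo, hi) for _ in range(dim)]
    foods = [fresh() for _ in range(colony)]
    fits = [objective(f) for f in foods]
    trials = [0] * colony
    best_f = min(fits)
    best_x = foods[fits.index(best_f)][:]
    for _ in range(iters):
        for i in range(colony):
            # Perturb one coordinate relative to a random neighbor source.
            j = rng.randrange(dim)
            k = rng.choice([p for p in range(colony) if p != i])
            cand = foods[i][:]
            step = rng.uniform(-1.0, 1.0) * (cand[j] - foods[k][j])
            cand[j] = min(hi, max(lo, cand[j] + step))
            f = objective(cand)
            if f < fits[i]:
                foods[i], fits[i], trials[i] = cand, f, 0
            else:
                trials[i] += 1
            if trials[i] > limit:   # scout bee replaces an exhausted source
                foods[i], trials[i] = fresh(), 0
                fits[i] = objective(foods[i])
            if fits[i] < best_f:    # memorize the best source seen so far
                best_f, best_x = fits[i], foods[i][:]
    return best_x, best_f

# Stand-in objective: squared error of a parameter vector from a target,
# where a real application would measure the spiking neuron's output error.
target = [0.5, -1.0]
params, err = abc_minimize(lambda p: sum((a - b) ** 2 for a, b in zip(p, target)),
                           dim=2, bounds=(-5.0, 5.0))
```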
22

Gupta, Pallavi, Nandhini Balasubramaniam, Hwan-You Chang, Fan-Gang Tseng, and Tuhin Subhra Santra. "A Single-Neuron: Current Trends and Future Prospects." Cells 9, no. 6 (2020): 1528. http://dx.doi.org/10.3390/cells9061528.

Full text
Abstract:
The brain is an intricate network with complex organizational principles facilitating concerted communication between single neurons, distinct neuron populations, and remote brain areas. This communication between single neurons, technically referred to as connectivity, is the center of many investigations aimed at elucidating pathophysiology, anatomical differences, and structural and functional features. In comparison with bulk analysis, single-neuron analysis can provide precise information about neuron- or even sub-neuron-level electrophysiology, anatomical differences, pathophysiology, and structural and functional features, in addition to their communications with other neurons, and can provide essential information for understanding the brain and its activity. This review highlights various single-neuron models and their behaviors, followed by different analysis methods. To elucidate cellular dynamics in terms of electrophysiology at the single-neuron level, we emphasize in detail the role of single-neuron mapping and electrophysiological recording. We also elaborate on recent developments in single-neuron isolation, manipulation, and therapeutic progress using advanced micro/nanofluidic devices, as well as microinjection, electroporation, microelectrode arrays, optical transfection, and optogenetic techniques. Further, developments in the field of artificial intelligence in relation to single neurons are highlighted. The review concludes with the limitations and future prospects of single-neuron analyses.
APA, Harvard, Vancouver, ISO, and other styles
23

Zhang, Yinxing, Ziliang Fang, and Xiaobing Yan. "HfO2-based memristor-CMOS hybrid implementation of artificial neuron model." Applied Physics Letters 120, no. 21 (2022): 213502. http://dx.doi.org/10.1063/5.0091286.

Full text
Abstract:
Memristors with threshold switching behavior are increasingly used in the study of neuromorphic computing and are frequently used to simulate synaptic functions due to their high integration density and simple structure. However, building a neuron circuit that simulates the characteristics of biological neurons is still a challenge. In this work, we demonstrate a leaky integrate-and-fire neuron model, implemented by a memristor-CMOS hybrid circuit based on a threshold device with a TiN/HfO2/InGaZnO4/Si structure. Moreover, we achieve multiple neural functions based on the neuron model, including leaky integration, threshold-driven firing, and strength-modulated spike frequency characteristics. This work shows that HfO2-based threshold devices can realize the basic functions of spiking neurons and have great potential in artificial neural networks.
APA, Harvard, Vancouver, ISO, and other styles
24

Volchikhin, V. I., A. I. Ivanov, T. A. Zolotareva, and D. M. Skudnev. "Synthesis of four new neuro-statistical tests for testing the hypothesis of independence of small samples of biometric data." Journal of Physics: Conference Series 2094, no. 3 (2021): 032013. http://dx.doi.org/10.1088/1742-6596/2094/3/032013.

Full text
Abstract:
The paper considers the analysis of small samples according to several statistical criteria to test the hypothesis of independence, since the direct calculation of correlation coefficients using the Pearson formula gives an unacceptably high error on small biometric samples. Each of the classical statistical criteria for testing the hypothesis of independence can be replaced with an equivalent artificial neuron. Neuron training is performed based on the condition of obtaining equal probabilities of errors of the first and second kind. To improve the quality of decisions made, it is necessary to use a variety of statistical criteria, both known and new, and to form networks of artificial neurons at the scale necessary for practical use. It is shown that the classical formula for calculating correlation coefficients can be modified in four ways. This allows one to create a network of 5 artificial neurons, which is not yet able to reduce the probability of errors in comparison with the classical formula. A gain in the confidence level can in future only be obtained with a network of more than 23 artificial neurons, if the simplest error-detecting and error-correcting code is applied.
APA, Harvard, Vancouver, ISO, and other styles
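The premise of the entry above — that the Pearson formula is unreliable on small biometric samples — is easy to check numerically. A sketch (the sample size and trial count are arbitrary choices, not the authors'):

```python
import numpy as np

rng = np.random.default_rng(0)

def pearson_r(x, y):
    """Classical Pearson correlation coefficient."""
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))

# Draw many pairs of *independent* samples; the true correlation is 0,
# but for n = 8 the individual estimates scatter widely around it.
n, trials = 8, 2000
rs = np.array([pearson_r(rng.standard_normal(n), rng.standard_normal(n))
               for _ in range(trials)])
spread = rs.std()   # roughly 1/sqrt(n - 1) ~ 0.38 for independent data
```

The estimates have a standard deviation near 0.4 even though the true correlation is zero — consistent with the authors' motivation for pooling many criteria rather than trusting a single coefficient.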
25

Whittle, P. "A stochastic model of an artificial neuron." Advances in Applied Probability 23, no. 4 (1991): 809–22. http://dx.doi.org/10.2307/1427677.

Full text
Abstract:
A simple model of a neuron is proposed, not intended to be biologically faithful, but to incorporate dynamic and stochastic features which seem to be realistic for both the natural and the artificial case. It is shown that the use of feedback for assemblies of such neurons can produce bistable behaviour and sharpen the discrimination of the assembly to the level of input. Particular attention is paid to bistable devices which are to serve as bit-stores (and so constitute components of a memory) and which suffer a disturbing input due to mutual interference. Approximate expressions are obtained for the equilibrium distribution of the excitation level for such assemblies and for the expected escape time of such an assembly from a metastable excitation level.
APA, Harvard, Vancouver, ISO, and other styles
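Whittle's observation that feedback in an assembly produces bistable bit-stores can be illustrated with a mean-field caricature; the sigmoidal update below is my own simplification, not the paper's stochastic model:

```python
import math

def settle(f0, gain, steps=200):
    """Iterate a mean-field firing fraction f under sigmoidal feedback.

    f_{t+1} = sigmoid(gain * (f_t - 0.5)): each unit's firing probability
    depends on the assembly's current activity via the feedback loop."""
    f = f0
    for _ in range(steps):
        f = 1.0 / (1.0 + math.exp(-gain * (f - 0.5)))
    return f

# With strong feedback the assembly is bistable: it remembers which side
# of the unstable midpoint it started on, acting as a one-bit store.
low = settle(0.1, gain=8.0)
high = settle(0.9, gain=8.0)
```

With weak feedback (gain below 4 in this parameterization) the two initial conditions collapse to the same mid-level fixed point, so bistability genuinely depends on feedback strength.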
26

Whittle, P. "A stochastic model of an artificial neuron." Advances in Applied Probability 23, no. 04 (1991): 809–22. http://dx.doi.org/10.1017/s0001867800023958.

Full text
Abstract:
A simple model of a neuron is proposed, not intended to be biologically faithful, but to incorporate dynamic and stochastic features which seem to be realistic for both the natural and the artificial case. It is shown that the use of feedback for assemblies of such neurons can produce bistable behaviour and sharpen the discrimination of the assembly to the level of input. Particular attention is paid to bistable devices which are to serve as bit-stores (and so constitute components of a memory) and which suffer a disturbing input due to mutual interference. Approximate expressions are obtained for the equilibrium distribution of the excitation level for such assemblies and for the expected escape time of such an assembly from a metastable excitation level.
APA, Harvard, Vancouver, ISO, and other styles
27

Kumar, Sushil, and Bipin Kumar Tripathi. "On the learning machine with compensatory aggregation based neurons in quaternionic domain." Journal of Computational Design and Engineering 6, no. 1 (2018): 33–48. http://dx.doi.org/10.1016/j.jcde.2018.04.002.

Full text
Abstract:
Abstract The nonlinear spatial grouping process of synapses is one of the fascinating methodologies by which neuro-computing researchers seek the computational power of a neuron. Researchers generally use neuron models based on summation (linear), product (linear) or radial basis (nonlinear) aggregation of synapses to construct multi-layered feed-forward neural networks, but each of these neuron models and their corresponding networks has its own advantages and disadvantages. The multi-layered network is generally used for global approximation of input–output mappings but sometimes gets stuck in local minima, while the nonlinear radial basis function (RBF) network, based on exponentially decaying functions, is used for local approximation of input–output mappings. These trade-offs motivated the design of two new artificial neuron models based on compensatory aggregation functions in the quaternionic domain. The net internal potentials of these neuron models are compositions of basic summation (linear) and radial basis (nonlinear) operations on quaternionic-valued input signals. Neuron models based on these aggregation functions ensure faster convergence and better training and prediction accuracy. The learning and generalization capabilities of these neurons are verified through various three-dimensional transformations and time series predictions as benchmark problems. Highlights: Two new CSU and CPU neuron models for quaternionic signals are proposed. Net potentials are based on compositions of summation and radial basis functions. The nonlinear grouping of synapses achieves the computational power of the proposed neurons. The neuron models ensure faster convergence and better training and prediction accuracy. The learning and generalization capabilities of CSU/CPU are verified on various benchmark problems.
APA, Harvard, Vancouver, ISO, and other styles
28

HERRMANN, CHRISTOPH S., and ANDREAS KLAUS. "AUTAPSE TURNS NEURON INTO OSCILLATOR." International Journal of Bifurcation and Chaos 14, no. 02 (2004): 623–33. http://dx.doi.org/10.1142/s0218127404009338.

Full text
Abstract:
Recently, neurobiologists have discovered axons on neurons which synapse on the same neuron's dendrites — so-called autapses. It is not yet clear what functional significance autapses offer for neural behavior. This is an ideal case for using a physical simulation to investigate how an autapse alters the firing of a neuron. We simulated a neural basket cell via the Hodgkin–Huxley equations and implemented an autapse which feeds back onto the soma of the neuron. The behavior of the cell was compared with and without autaptic feedback. Our artificial autapse neuron (AAN) displays oscillatory behavior which is not observed for the same model neuron without autapse. The neuron oscillates between two functional states: one where it fires at high frequency and another where firing is suppressed. This behavior is called "spike bursting" and represents a common pattern recorded from cerebral neurons.
APA, Harvard, Vancouver, ISO, and other styles
29

Abreu Rodrigues, Fabiano de. "IMPLANTAÇÃO DE NEURÔNIOS ARTIFICIAIS NO CÓRTEX PRÉ-FRONTAL." RECISATEC - REVISTA CIENTÍFICA SAÚDE E TECNOLOGIA - ISSN 2763-8405 2, no. 11 (2022): e211207. http://dx.doi.org/10.53612/recisatec.v2i11.207.

Full text
Abstract:
Neurons act in the nervous system, being responsible for the propagation of nerve impulses, and are considered the basic units of that system. The artificial neuron is inspired by the biological neuron: by understanding how the biological neuron functions in the brain, one builds a model for artificial intelligence. Objective: To understand the benefits of implanting artificial neurons in the prefrontal cortex and how their development occurs. Methods: This article is a literature review based on the SciELO, PubMed, and PsycINFO databases, using the keywords in Portuguese: cérebro, neurônios, córtex, neurônio artificial, and in English: brain, neurons, cortex, artificial neuron. Conclusion: The artificial neuron model is an advance in science, but further studies are still needed to refine it. The method may bring benefits for health and for mental illness.
APA, Harvard, Vancouver, ISO, and other styles
30

Alia, Giuseppe, and Enrico Martinelli. "NEUROM: a ROM based RNS digital neuron." Neural Networks 18, no. 2 (2005): 179–89. http://dx.doi.org/10.1016/j.neunet.2004.11.006.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Cazé, Romain D. "Any neuron can perform linearly non-separable computations." F1000Research 10 (July 6, 2021): 539. http://dx.doi.org/10.12688/f1000research.53961.1.

Full text
Abstract:
Multiple studies have shown how dendrites enable some neurons to perform linearly non-separable computations. These works focus on cells with an extended dendritic arbor where voltage can vary independently, turning dendritic branches into local non-linear subunits. However, these studies leave a large fraction of the nervous system unexplored. Many neurons, e.g. granule cells, have modest dendritic trees and are electrically compact. It is impossible to decompose them into multiple independent subunits. Here, we upgraded the integrate-and-fire neuron to account for saturating dendrites. This artificial neuron has a unique membrane voltage and can be seen as a single layer. We present a class of linearly non-separable computations and how our neuron can perform them. We thus demonstrate that even a single-layer neuron with dendrites has more computational capacity than without. Because any neuron has one or more layers, and all dendrites do saturate, we show that any dendrited neuron can implement linearly non-separable computations.
APA, Harvard, Vancouver, ISO, and other styles
32

Cazé, Romain D. "Any neuron can perform linearly non-separable computations." F1000Research 10 (September 16, 2021): 539. http://dx.doi.org/10.12688/f1000research.53961.2.

Full text
Abstract:
Multiple studies have shown how dendrites enable some neurons to perform linearly non-separable computations. These works focus on cells with an extended dendritic arbor where voltage can vary independently, turning dendritic branches into local non-linear subunits. However, these studies leave a large fraction of the nervous system unexplored. Many neurons, e.g. granule cells, have modest dendritic trees and are electrically compact. It is impossible to decompose them into multiple independent subunits. Here, we upgraded the integrate-and-fire neuron to account for saturating dendrites. This artificial neuron has a unique membrane voltage and can be seen as a single layer. We present a class of linearly non-separable computations and how our neuron can perform them. We thus demonstrate that even a single-layer neuron with dendrites has more computational capacity than without. Because any neuron has one or more layers, and all dendrites do saturate, we show that any dendrited neuron can implement linearly non-separable computations.
APA, Harvard, Vancouver, ISO, and other styles
33

Vidybida, Alexander. "Relation Between Firing Statistics of Spiking Neuron with Instantaneous Feedback and Without Feedback." Fluctuation and Noise Letters 14, no. 04 (2015): 1550034. http://dx.doi.org/10.1142/s0219477515500340.

Full text
Abstract:
We consider a class of spiking neuron models, defined by a set of conditions typical for basic threshold-type models, such as the leaky integrate-and-fire or binding neuron model, and also for some artificial neurons. The neuron is fed with a point renewal process. A relation between three probability density functions (PDFs) is derived: (i) the PDF of input interspike intervals (ISIs), (ii) the PDF of output interspike intervals of a neuron with feedback, and (iii) the PDF for the same neuron without feedback. This makes it possible to calculate any one of the three PDFs provided the remaining two are given. A similar relation between the corresponding means and variances is derived. The relations are checked exactly for the binding neuron model stimulated with a Poisson stream.
APA, Harvard, Vancouver, ISO, and other styles
34

Cazé, Romain D. "All neurons can perform linearly non-separable computations." F1000Research 10 (June 8, 2022): 539. http://dx.doi.org/10.12688/f1000research.53961.3.

Full text
Abstract:
Multiple studies have shown how dendrites enable some neurons to perform linearly non-separable computations. These works focus on cells with an extended dendritic arbor where voltage can vary independently, turning dendritic branches into local non-linear subunits. However, these studies leave a large fraction of the nervous system unexplored. Many neurons, e.g. granule cells, have modest dendritic trees and are electrically compact. It is impossible to decompose them into multiple independent subunits. Here, we upgraded the integrate-and-fire neuron to account for saturation due to interacting synapses. This artificial neuron has a unique membrane voltage and can be seen as a single layer. We present a class of linearly non-separable computations and how our neuron can perform them. We thus demonstrate that even a single-layer neuron with interacting synapses has more computational capacity than without. Because all neurons have one or more layers, we show that all neurons can potentially implement linearly non-separable computations.
APA, Harvard, Vancouver, ISO, and other styles
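One way to see the claim in the entries above is a toy neuron with two saturating synapse groups; the parameters below are illustrative, but the computed function — an AND of ORs — is provably not computable by any purely linear-threshold unit:

```python
from itertools import product

def saturating_neuron(x, theta=2.0, sat=1.0):
    """Single neuron with two saturating dendritic subunits.

    Subunit 1 pools synapses x[0], x[1]; subunit 2 pools x[2], x[3].
    Each subunit saturates at `sat`; the soma thresholds the summed
    subunit outputs at `theta`."""
    d1 = min(x[0] + x[1], sat)
    d2 = min(x[2] + x[3], sat)
    return int(d1 + d2 >= theta)

truth = {x: saturating_neuron(x) for x in product((0, 1), repeat=4)}

# The computed function is (x0 OR x1) AND (x2 OR x3). It is linearly
# non-separable: for any weights w and threshold t,
#   w.(1,1,0,0) + w.(0,0,1,1) == w.(1,0,1,0) + w.(0,1,0,1),
# yet the left-hand patterns would have to fall below t and the
# right-hand ones above it -- a contradiction for a linear unit.
```

The two "clustered" patterns (1,1,0,0) and (0,0,1,1) are rejected while the two "scattered" ones (1,0,1,0) and (0,1,0,1) are accepted, which no weighted sum plus threshold can reproduce.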
35

Wang, Qiguang, Guangchen Pan, and Yanfeng Jiang. "An Ultra-Low Power Threshold Voltage Variable Artificial Retina Neuron." Electronics 11, no. 3 (2022): 365. http://dx.doi.org/10.3390/electronics11030365.

Full text
Abstract:
An artificial retina neuron is proposed and implemented in CMOS technology. It can be used as an image sensor in the artificial intelligence (AI) field, with the benefit of ultra-low power consumption. The artificial neuron can generate spike-shaped signals at pre-designed frequencies under different light intensities. Power consumption is reduced by removing the film capacitor; a comparator is adopted to improve the stability of the circuit, and the comparator's power consumption is optimized, so the power consumption of the proposed CMOS neuron circuit is suppressed. The ultra-low-power artificial neuron with variable threshold shows a frequency range of 0.8–80 kHz as the input current is varied from 1 pA to 150 pA. The minimum DC power is 35 pW at an input current of 5 pA, and the minimum energy of the neuron is 3 fJ. The proposed ultra-low-power artificial retina neuron has wide potential applications in the field of AI.
APA, Harvard, Vancouver, ISO, and other styles
36

Hong, Yuwan, Yanming Liu, Ruonan Li, and He Tian. "Emerging functions of two-dimensional materials in memristive neurons." Journal of Physics: Materials 7, no. 3 (2024): 032001. http://dx.doi.org/10.1088/2515-7639/ad467b.

Full text
Abstract:
Abstract Neuromorphic computing (NC), considered a promising candidate for future computer architecture, can enable more biomimetic intelligence while reducing energy consumption. The neuron is one of the critical building blocks of NC systems, and researchers have been working to give neuron devices better electrical properties and more biomimetic functions. Two-dimensional (2D) materials, with their ultrathin layers and diverse band structures, featuring excellent electronic properties and various sensing abilities, promise to meet these requirements. Here, the progress in artificial neurons enabled by 2D materials is reviewed from the perspective of the electrical performance of neuron devices, from stability and tunability to power consumption and on/off ratio. Rising to system-level applications, algorithms and hardware implementations of spiking neural networks, stochastic neural networks and artificial perception systems based on 2D materials are reviewed. 2D materials not only facilitate the realization of NC systems but also increase integration density. Finally, current challenges and perspectives on developing 2D-material-based neurons and NC systems are systematically analyzed, from bottom-level 2D materials fabrication to novel neural devices, more brain-like computational algorithms and systems.
APA, Harvard, Vancouver, ISO, and other styles
37

Nakamura, K., and T. Ono. "Lateral hypothalamus neuron involvement in integration of natural and artificial rewards and cue signals." Journal of Neurophysiology 55, no. 1 (1986): 163–81. http://dx.doi.org/10.1152/jn.1986.55.1.163.

Full text
Abstract:
Involvement of rat lateral hypothalamus (LHA) neurons in the integration of motivation, reward, and learning processes was studied by recording single-neuron activity during cue-tone discrimination learning behavior to obtain glucose or electrically rewarding intracranial self-stimulation (ICSS) of the posterior LHA. To relate the activity of an LHA neuron to glucose, ICSS, and anticipatory cues, the same licking task was used to obtain both rewards. Each neuron was tested with rewards alone and then with rewards signaled by cue-tone stimuli (CTS): CTS1+ = 1,200 Hz for glucose, CTS2+ = 4,300 Hz for ICSS, and CTS- = 2,800 Hz for no reward. The activity of 318 neurons in the LHA was analyzed. Of these, 212 (66.7%) responded to one or both rewarding stimuli (glucose, 115; ICSS, 193). Usually, both rewards affected the same neuron in the same direction. Of 96 neurons that responded to both rewards, the responses of 72 (75%) were similar, i.e., either both excitatory or both inhibitory. When a tone was associated with glucose or ICSS reward, 81 of the 212 neurons that responded to either or both rewards, and none of the 106 neurons that failed to respond to either reward, acquired a response to the respective CTS. Usually, the response to a tone was in the same direction as the reward response. Of 45 neurons that responded to both glucose and CTS1+, 38 (84.4%) were similar, and of 66 that responded to both ICSS and CTS2+, 47 (71.2%) were similar. The neural response to a tone was acquired rapidly after licking behavior was learned and was extinguished equally rapidly before licking stopped in extinction. The latency of the neural response to CTS1+ was 10-150 ms (58.7 +/- 40.9 ms, mean +/- SE, n = 31), and that of the first lick was 100-370 ms (204.8 +/- 59.1 ms, n = 31). The latency of neural responses to CTS2+ was 10-230 ms (68.3 +/- 53.5 ms, n = 33), and that of the first lick was 90-370 ms (212.4 +/- 58.5 ms, n = 33). There was no significant difference between the neural response latencies for the two cue tones, nor between the lick latencies for the different rewards. Neurons inhibited by glucose or ICSS reward were distributed widely in the LHA, whereas most excited neurons were in the posterodorsal subarea; fewer were in the anteroventral subarea. Neurons responding to the CTS for glucose or ICSS were found more frequently in the posterior region. (ABSTRACT TRUNCATED AT 400 WORDS)
APA, Harvard, Vancouver, ISO, and other styles
38

Sharp, A. A., M. B. O'Neil, L. F. Abbott, and E. Marder. "Dynamic clamp: computer-generated conductances in real neurons." Journal of Neurophysiology 69, no. 3 (1993): 992–95. http://dx.doi.org/10.1152/jn.1993.69.3.992.

Full text
Abstract:
1. We describe a new method, the dynamic clamp, that uses a computer as an interactive tool to introduce simulated voltage and ligand mediated conductances into real neurons. 2. We simulate a gamma-aminobutyric acid (GABA) response of a cultured stomatogastric ganglion neuron to illustrate that the dynamic clamp effectively introduces a conductance into the target neuron. 3. To demonstrate an artificial voltage-dependent conductance, we simulate the action of a voltage-dependent proctolin response on a neuron in the intact stomatogastric ganglion. We show that shifts in the activation curve and the maximal conductance of the response produce different effects on the target neuron. 4. The dynamic clamp is used to construct reciprocal inhibitory synapses between two stomatogastric ganglion neurons that are not coupled naturally, illustrating that this method can be used to form new networks at will.
APA, Harvard, Vancouver, ISO, and other styles
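The dynamic-clamp loop in the entry above amounts to: read the membrane potential, compute I = g·(E_rev − V), inject, repeat. A simulated sketch with a passive membrane and made-up conductance values (the real method injects current into a live cell through an electrode):

```python
import numpy as np

def dynamic_clamp(g_max, e_rev, t_end=50.0, dt=0.01,
                  c_m=1.0, g_leak=0.1, e_leak=-65.0):
    """Passive membrane (time in ms, potentials in mV) receiving, at each
    step, the current of an artificial conductance: the dynamic-clamp loop
    reads V, computes I = g_max * (e_rev - V), and injects it."""
    v = e_leak
    trace = []
    for _ in range(int(t_end / dt)):
        i_clamp = g_max * (e_rev - v)          # simulated GABA conductance
        dv = (g_leak * (e_leak - v) + i_clamp) / c_m
        v += dt * dv
        trace.append(v)
    return np.array(trace)

# A GABA-like conductance (reversal below rest) hyperpolarizes the cell
# toward the conductance-weighted mean of e_leak and e_rev (-77.5 mV here).
v_gaba = dynamic_clamp(g_max=0.5, e_rev=-80.0)
```

Unlike plain current injection, the injected current here shrinks as V approaches the reversal potential — which is what makes the clamp behave like a true conductance.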
39

Sibai, F. N. "A fault-tolerant digital artificial neuron." IEEE Design & Test of Computers 10, no. 4 (1993): 76–82. http://dx.doi.org/10.1109/54.245965.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Li, Sai, Wang Kang, Yangqi Huang, Xichao Zhang, Yan Zhou, and Weisheng Zhao. "Magnetic skyrmion-based artificial neuron device." Nanotechnology 28, no. 31 (2017): 31LT01. http://dx.doi.org/10.1088/1361-6528/aa7af5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Yuan, Jia-Hui, Xiao-Kuo Yang, Bin Zhang, et al. "Activation function and computing performance of spin neuron driven by magnetic field and strain." Acta Physica Sinica 70, no. 20 (2021): 207502. http://dx.doi.org/10.7498/aps.70.20210611.

Full text
Abstract:
The spin neuron is an emerging artificial neural device with many advantages, such as ultra-low power consumption, strong nonlinearity, and high integration, as well as the ability to remember and compute at the same time, so it is seen as an excellent candidate for the new generation of neural networks. In this paper, a spin neuron driven by magnetic field and strain is proposed. The micromagnetic model of the device is realized using the OOMMF micromagnetic simulation software, and a numerical model of the device is also established using the LLG equation. More importantly, a three-layer neural network is composed of spin neurons constructed from three materials (Terfenol-D, FeGa, Ni), and is used to study the activation functions and the ability to recognize the MNIST handwritten dataset. Results show that the spin neuron can successfully achieve random magnetization switching to simulate the activation behavior of a biological neuron. Moreover, the results show that if the ranges of the input magnetic fields are allowed to differ, neurons of all three materials can reach the saturation accuracy; the spin neuron is thus expected to replace the traditional CMOS neuron, and the overall power consumption of intelligent computing can be further reduced by using appropriate materials. If the magnetic fields are input in the same range, the recognition speed of the spin neuron made of Ni is the slowest of the three materials. These results can establish a theoretical foundation for the design and application of new artificial neural networks and intelligent circuits.
APA, Harvard, Vancouver, ISO, and other styles
42

Sánchez, Carlos, Diego Caso, and Farkhad G. Aliev. "Artificial Neuron Based on the Bloch-Point Domain Wall in Ferromagnetic Nanowires." Materials 17, no. 10 (2024): 2425. http://dx.doi.org/10.3390/ma17102425.

Full text
Abstract:
Nanomagnetism and spintronics are currently active areas of research, with one of the main goals being the creation of low-energy-consuming magnetic memories based on nanomagnet switching. These types of devices could also be implemented in neuromorphic computing by crafting artificial neurons (ANs) that emulate the characteristics of biological neurons through the implementation of neuron models such as the widely used leaky integrate-and-fire (LIF) with a refractory period. In this study, we have carried out numerical simulations of a 120 nm diameter, 250 nm length ferromagnetic nanowire (NW) with the aim of exploring the design of an artificial neuron based on the creation and destruction of a Bloch-point domain wall. To replicate signal integration, we applied pulsed trains of spin currents to the opposite faces of the ferromagnetic NW. These pulsed currents (previously studied only in the continuous form) are responsible for inducing transitions between the stable single vortex (SV) state and the metastable Bloch point domain wall (BP-DW) state. To ensure the system exhibits leak and refractory properties, the NW was placed in a homogeneous magnetic field of the order of mT in the axial direction. The suggested configuration fulfills the requirements and characteristics of a biological neuron, potentially leading to the future creation of artificial neural networks (ANNs) based on reversible changes in the topology of magnetic NWs.
APA, Harvard, Vancouver, ISO, and other styles
43

Márquez-Vera, Carlos Antonio, Zaineb Yakoub, Marco Antonio Márquez Vera, and Alfian Ma'arif. "Spiking PID Control Applied in the Van de Vusse Reaction." International Journal of Robotics and Control Systems 1, no. 4 (2021): 488–500. http://dx.doi.org/10.31763/ijrcs.v1i4.490.

Full text
Abstract:
Artificial neural networks (ANN) can approximate signals and give interesting results in pattern recognition, and some works use neural networks for control applications. However, biological neurons do not generate signals similar to those obtained by ANN. Spiking neurons are an interesting topic since they simulate the real behavior depicted by biological neurons. This paper employs a spiking neuron to compute a PID control, which is then applied to the Van de Vusse reaction. This reaction, like the inverted pendulum, is a benchmark for working with systems that have an inverse response, which causes the output to undershoot. One problem is how to encode information so that the neuron can interpret it, and how to decode the peaks the neuron generates in order to interpret its behavior. In this work, a spiking neuron is used to compute a PID control by coding in time the peaks generated by the neuron. The neuron has the PID gains as synaptic weights, and the peak observed at the axon is the coded control signal. The neuron adaptation tries to obtain the weights necessary to generate the peak at the instant needed to control the chemical reaction. The simulation results show the possibility of using this kind of neuron for control problems, and the possibility of using a spiking neural network to overcome the undershoot caused by the inverse response of the chemical reaction.
APA, Harvard, Vancouver, ISO, and other styles
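Setting the spike coding aside, the PID law whose gains the neuron stores as synaptic weights is the standard discrete update. A sketch on a toy first-order plant (the Van de Vusse kinetics are not reproduced here; the plant, gains, and time step are illustrative):

```python
def pid_step(e, state, kp, ki, kd, dt):
    """One discrete PID update; `state` carries (integral, previous error)."""
    integral, e_prev = state
    integral += e * dt
    deriv = (e - e_prev) / dt
    u = kp * e + ki * integral + kd * deriv
    return u, (integral, e)

# Toy first-order plant y' = -y + u standing in for the real reactor.
dt, setpoint = 0.01, 1.0
y, state = 0.0, (0.0, 0.0)
for _ in range(2000):                 # 20 s of simulated time
    u, state = pid_step(setpoint - y, state, kp=2.0, ki=1.0, kd=0.0, dt=dt)
    y += dt * (-y + u)
```

The integral term is what drives the steady-state error to zero; in the paper's scheme the same three gains would act as the neuron's synaptic weights, with the control magnitude read out from spike timing.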
44

Sutariya, Vijaykumar, Anastasia Groshev, Prabodh Sadana, Deepak Bhatia, and Yashwant Pathak. "Artificial Neural Network in Drug Delivery and Pharmaceutical Research." Open Bioinformatics Journal 7, no. 1 (2013): 49–62. http://dx.doi.org/10.2174/1875036201307010049.

Full text
Abstract:
Artificial neural network (ANN) technology models the pattern-recognition capabilities of the neural networks of the brain. Like a single neuron in the brain, an artificial neuron unit receives inputs from many external sources, processes them, and makes decisions. Interestingly, the ANN simulates the biological nervous system and draws on analogues of adaptive biological neurons. ANNs do not require rigidly structured experimental designs and can map functions using historical or incomplete data, which makes them a powerful tool for the simulation of various non-linear systems. ANNs have many applications in various fields, including engineering, psychology, medicinal chemistry and pharmaceutical research. Because of their capacity for prediction, pattern recognition, and modeling, ANNs have been very useful in many aspects of pharmaceutical research, including modeling of the brain's neural network, analytical data analysis, drug modeling, protein structure and function, dosage optimization and manufacturing, pharmacokinetic and pharmacodynamic modeling, and in vitro–in vivo correlations. This review discusses the applications of ANNs in drug delivery and pharmacological research.
APA, Harvard, Vancouver, ISO, and other styles
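The "artificial neuron unit" described in the abstract above is simply a weighted sum passed through a nonlinearity; a minimal sketch with arbitrary weights:

```python
import math

def neuron(inputs, weights, bias):
    """McCulloch-Pitts-style unit: a weighted sum of inputs squashed by a
    logistic activation, mimicking a cell integrating synaptic drive."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Two neurons wired in sequence already form a tiny feed-forward network.
hidden = neuron([0.5, 0.2], weights=[1.0, -2.0], bias=0.1)
output = neuron([hidden], weights=[2.0], bias=-1.0)
```

Training such a network means adjusting the weights and biases against data, which is what lets ANNs map functions from historical or incomplete records as the review describes.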
45

Takahata, M., and M. Hisada. "Local nonspiking interneurons involved in gating of the descending motor pathway in crayfish." Journal of Neurophysiology 56, no. 3 (1986): 718–31. http://dx.doi.org/10.1152/jn.1986.56.3.718.

Full text
Abstract:
Uropod motor neurons in the terminal abdominal ganglion of crayfish are continuously excited during the abdominal posture movement so that subthreshold excitatory postsynaptic potentials from the descending statocyst pathway can elicit spike activity in the motor neurons only while the abdominal posture system is in operation. Local nonspiking interneurons in the terminal ganglion were also found to show sustained membrane potential change during the fictive abdominal posture movement. Artificial membrane potential change of these interneurons by intracellular current injection in the same direction as that actually observed during the abdominal movement caused similar excitation of uropod motor neurons. Artificial cancellation of the membrane potential change of these interneurons during the abdominal movement also caused cancellation of the excitation of uropod motor neurons. We concluded that the continuous excitation of uropod motor neurons during the fictive abdominal movement was mediated, at least partly, by the local nonspiking interneurons. Fourteen (36%) out of 39 examined nonspiking interneurons were judged to be involved in the excitation of uropod motor neurons during the fictive abdominal movement. Another 25 interneurons (64%) were found not to be involved in the excitation of motor neurons, although most of them had a strong effect on the uropod motor neuron activity when their membrane potential was changed artificially. The interneurons that were involved in the excitation of motor neurons during the abdominal movement included both of the two major structural types of nonspiking interneurons in the terminal ganglion, i.e., those in the anterolateral portion and those in the posterolateral portion. No strict correlation was found between the structure of nonspiking interneurons and their function in the control of uropod motor neuron activity.
APA, Harvard, Vancouver, ISO, and other styles
46

Nagai, T., H. Katayama, K. Aihara, and T. Yamamoto. "Pruning of rat cortical taste neurons by an artificial neural network model." Journal of Neurophysiology 74, no. 3 (1995): 1010–19. http://dx.doi.org/10.1152/jn.1995.74.3.1010.

Full text
Abstract:
1. Taste qualities are believed to be coded in the activity of ensembles of taste neurons. However, it is not clear whether all neurons are equally responsible for coding. To clarify the point, the relative contribution of each taste neuron to coding needs to be assessed. 2. We constructed simple three-layer neural networks with input units representing cortical taste neurons of the rat. The networks were trained by the back-propagation learning algorithm to classify the neural response patterns to the basic taste stimuli (sucrose, HCl, quinine hydrochloride, and NaCl). The networks had four output units representing the basic taste qualities, the values of which provide a measure for similarity of test stimuli (salts, tartaric acid, and umami substances) to the basic taste stimuli. 3. Trained networks discriminated the response patterns to the test stimuli in a plausible manner in light of previous physiological and psychological experiments. Profiles of output values of the networks paralleled those of across-neuron correlations with respect to the highest or second-highest values in the profiles. 4. We evaluated relative contributions of input units to the taste discrimination of the network by examining their significance Sj, which is defined as the sum of the absolute values of the connection weights from the jth input unit to the hidden layer. When the input units with weaker connection weights (e.g., 15 of 39 input units) were "pruned" from the trained network, the ability of the network to discriminate the basic taste qualities as well as other test stimuli was not greatly affected. On the other hand, the taste discrimination of the network progressively deteriorated much more rapidly with pruning of input units with stronger connection weights. 5. These results suggest that cortical taste neurons differentially contribute to the coding of taste qualities. The pruning technique may enable the evaluation of a given taste neuron in terms of its relative contribution to the coding, with Sj providing a quantitative measure for such evaluation.
APA, Harvard, Vancouver, ISO, and other styles
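The significance measure used for pruning in the entry above is straightforward to compute: Sj sums the absolute connection weights from input unit j to the hidden layer, and the inputs with the smallest Sj are pruned first. A sketch with random weights standing in for a trained network:

```python
import numpy as np

rng = np.random.default_rng(1)

def significance(w_in):
    """S_j = sum of |connection weights| from input j to the hidden layer.

    w_in has shape (n_hidden, n_inputs), as in a standard MLP layer."""
    return np.abs(w_in).sum(axis=0)

def prune_inputs(w_in, keep):
    """Zero out the columns (input units) with the smallest S_j."""
    s = significance(w_in)
    pruned = w_in.copy()
    drop = np.argsort(s)[:w_in.shape[1] - keep]
    pruned[:, drop] = 0.0
    return pruned

w = rng.normal(size=(10, 39))          # 39 input 'taste neurons'
w_pruned = prune_inputs(w, keep=24)    # prune the 15 weakest inputs
```

After pruning one would re-evaluate classification accuracy, as the authors do — accuracy should degrade slowly when weak-Sj inputs are removed and quickly when strong-Sj inputs are.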
47

Paeen Afrakoti, Iman Esmaili, Vahdat Nazerian, and Tole Sutikno. "Spiking ink drop spread clustering algorithm and its memristor crossbar conceptual hardware design." International Journal of Electrical and Computer Engineering (IJECE) 13, no. 6 (2023): 7125. http://dx.doi.org/10.11591/ijece.v13i6.pp7125-7136.

Full text
Abstract:
In this study, a novel neuro-fuzzy clustering algorithm is proposed based on spiking neural network and ink drop spread (IDS) concepts. The proposed structure is a one-layer artificial neural network with leaky integrate-and-fire (LIF) neurons. The structure implements the IDS algorithm as a fuzzy concept. Each training datum results in the firing of the corresponding input neuron and its neighboring neurons. A synchronous time-coding algorithm is used to manage the firing times of input and output neurons. For an input datum, one or several output neurons of the network will fire; the confidence degree of the network in an output is defined as the relative delay of its firing time with respect to the synchronous pulse. A memristor-crossbar-based hardware is utilized for the hardware implementation of the proposed algorithm. The simulation results corroborate that the proposed algorithm can be used as a neuro-fuzzy clustering and vector quantization algorithm.
APA, Harvard, Vancouver, ISO, and other styles
48

Jo, Yooyeon, Dae Kyu Lee, and Joon Young Kwak. "Recent Progress in Development of Artificial Neuromorphic Devices Based on Emerging Materials." Ceramist 25, no. 4 (2022): 454–74. http://dx.doi.org/10.31613/ceramist.2022.25.4.08.

Full text
Abstract:
In the fourth industrial revolution, the efficient processing of huge amounts of data has become important with the development of artificial intelligence (AI), the internet of things (IoT), and machine learning (ML). The conventional computing system, known as the von Neumann architecture, faces a bottleneck problem because of the physical separation of memory and the central processing unit (CPU). Many researchers have become interested in neuromorphic computing, inspired by the human brain, as a way to solve this bottleneck. The development of artificial neuromorphic devices, such as neuron and synaptic devices, is essential to successfully demonstrate neuromorphic computing hardware. Various Si CMOS transistor-based circuits have been investigated to implement the behaviors of the biological neuron and synapse; however, they are not suitable for mimicking large-scale biological neural networks because of the scalability and power consumption issues of Si CMOS transistors. In this report, we review recent research progress in artificial neuron and synaptic devices based on emerging materials and discuss future research directions for artificial neural networks.
APA, Harvard, Vancouver, ISO, and other styles
49

Ahmad, Maruf, Lei Zhang, Kelvin Tsun Wai Ng, and Muhammad E. H. Chowdhury. "Complex-Exponential-Based Bio-Inspired Neuron Model Implementation in FPGA Using Xilinx System Generator and Vivado Design Suite." Biomimetics 8, no. 8 (2023): 621. http://dx.doi.org/10.3390/biomimetics8080621.

Full text
Abstract:
This research investigates the implementation of complex-exponential-based neurons in FPGA, which can pave the way for implementing bio-inspired spiking neural networks that compensate for the computational constraints of conventional artificial neural networks. The increasing use of large neural networks and the complexity of models handling big data lead to higher power consumption and delays. Hence, finding ways to reduce computational complexity is crucial for addressing power consumption challenges. The complex exponential form effectively encodes oscillating features such as frequency, amplitude, and phase shift, streamlining the demanding calculations typical of conventional artificial neurons by leveraging the simple phase addition of complex exponential functions. The article implements both a two-neuron and a multi-neuron model of this kind using the Xilinx System Generator and Vivado Design Suite, employing 8-bit, 16-bit, and 32-bit fixed-point data formats. The study evaluates the accuracy of the proposed neuron model across different FPGA implementations and provides a detailed analysis of operating frequency, power consumption, and resource usage for the hardware implementations. BRAM-based Vivado designs outperformed Simulink in speed, power, and resource efficiency. Specifically, the Vivado BRAM-based approach supported up to 128 neurons, showcasing optimal LUT and FF resource utilization. These outcomes inform the choice of the optimal design procedure for implementing spiking neural networks on FPGAs.
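The phase-addition property this abstract leverages, that multiplying complex exponentials reduces to adding their phases, can be checked in a few lines (illustrative only; this makes no claim about the paper's fixed-point FPGA design):

```python
import cmath
import math

# e^{j*theta} encodes a phase theta; multiplying two complex
# exponentials reduces to adding their phases, which is the cheap
# operation the complex-exponential neuron model exploits.
theta1, theta2 = 0.7, 1.1
z = cmath.exp(1j * theta1) * cmath.exp(1j * theta2)

phase_sum = cmath.phase(z)  # ~ theta1 + theta2 = 1.8
magnitude = abs(z)          # stays on the unit circle, ~ 1.0
```

Replacing a multiply with an add is exactly the kind of simplification that pays off in fixed-point FPGA arithmetic, where multipliers are a scarce resource.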
APA, Harvard, Vancouver, ISO, and other styles
50

XU, JIAN-XIN, and XIN DENG. "STUDY ON CHEMOTAXIS BEHAVIORS OF C. ELEGANS USING DYNAMIC NEURAL NETWORK MODELS: FROM ARTIFICIAL TO BIOLOGICAL MODEL." Journal of Biological Systems 18, spec01 (2010): 3–33. http://dx.doi.org/10.1142/s0218339010003597.

Full text
Abstract:
With an anatomical understanding of the neural connections of the nematode Caenorhabditis elegans (C. elegans), its chemotaxis behaviors are investigated in this paper through association with the biological nerve connections. The chemotaxis behaviors include food attraction, toxin avoidance, and mixed behaviors (finding food and avoiding toxin concurrently). Eight dynamic neural network (DNN) models, two artificial models and six biological models, are used to learn and implement the chemotaxis behaviors of C. elegans. The eight DNN models are classified into two classes with either a single sensory neuron or dual sensory neurons. The DNN models are trained to learn certain switching logics according to different chemotaxis behaviors using the real-time recurrent learning (RTRL) algorithm. First, we show the good performance of the two artificial models in food attraction, toxin avoidance, and the mixed behaviors. Next, six neural wire diagrams from sensory neurons to motor neurons are extracted from the anatomical nerve connections of C. elegans. The extracted biological wire diagrams are then trained using RTRL directly, which is the first time in this field of research that chemotaxis behaviors have been associated with biological neural models. An interesting discovery is the need for a memory neuron when single-sensory models are used, which is consistent with the anatomical understanding of a specific neuron that functions as a memory. In the simulations, the chemotaxis behaviors of C. elegans can be depicted by several switching logical functions that can be learned by RTRL for both artificial and biological models.
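The RTRL algorithm mentioned in this abstract can be sketched for a single recurrent neuron by carrying forward the sensitivities of the state with respect to each weight (a toy sketch on a made-up switching task; the paper's eight models and their wiring are not reproduced):

```python
import math

def rtrl_train(data, lr=0.1, epochs=200):
    """Train y(t+1) = tanh(w*y(t) + v*x(t) + b) online with RTRL.
    Weight names, task, and hyperparameters are illustrative."""
    w = v = b = 0.0
    for _ in range(epochs):
        y = 0.0
        pw = pv = pb = 0.0  # sensitivities dy/dw, dy/dv, dy/db
        for x, target in data:
            y_new = math.tanh(w * y + v * x + b)
            d = 1.0 - y_new * y_new  # derivative of tanh at the net input
            # RTRL recursions: carry the sensitivities through time.
            pw, pv, pb = d * (y + w * pw), d * (x + w * pv), d * (1.0 + w * pb)
            err = target - y_new
            # Online gradient step at every time step.
            w += lr * err * pw
            v += lr * err * pv
            b += lr * err * pb
            y = y_new
    return w, v, b

# Toy "switching logic": the output should follow the sign of the input.
data = [(1.0, 0.9), (-1.0, -0.9), (1.0, 0.9), (-1.0, -0.9)]
w, v, b = rtrl_train(data)
```

Because the updates are applied at every time step rather than after a full sequence, RTRL is an online method, which is what makes it a natural fit for training the dynamic models described above.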
APA, Harvard, Vancouver, ISO, and other styles