Academic literature on the topic 'Artificial Neuron'



Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Artificial Neuron.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Artificial Neuron"

1

Sharp, A. A., L. F. Abbott, and E. Marder. "Artificial electrical synapses in oscillatory networks." Journal of Neurophysiology 67, no. 6 (1992): 1691–94. http://dx.doi.org/10.1152/jn.1992.67.6.1691.

Abstract:
1. We use an electronic circuit to artificially electrically couple neurons. 2. Strengthening the coupling between an oscillating neuron and a hyperpolarized, passive neuron can either increase or decrease the frequency of the oscillator depending on the properties of the oscillator. 3. The result of electrically coupling two neuronal oscillators depends on the membrane potentials, intrinsic properties of the neurons, and the coupling strength. 4. The interplay between chemical inhibitory synapses and electrical synapses can be studied by creating both chemical and electrical synapses between two cultured neurons and by artificially strengthening the electrical synapse between the ventricular dilator and one pyloric dilator neuron of the stomatogastric ganglion.
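The coupling manipulated in this study can be sketched numerically. The following is a minimal illustration, not the authors' circuit: two FitzHugh-Nagumo oscillators stand in for the biological neurons (an assumption), and the artificial electrical synapse injects a current g * (V_other - V_self) into each cell, the same form a gap junction takes.

```python
def fhn_pair(g, steps=2000, dt=0.05):
    """Two identical FitzHugh-Nagumo oscillators joined by an artificial
    electrical synapse of strength g; forward-Euler integration.
    Returns the two final membrane potentials."""
    v1, v2 = -1.0, 1.0          # different starting membrane potentials
    w1, w2 = 0.0, 0.5           # recovery variables
    a, b, eps, I = 0.7, 0.8, 0.08, 0.5
    for _ in range(steps):
        # coupling current: each cell is pulled toward the other's potential
        dv1 = v1 - v1**3 / 3 - w1 + I + g * (v2 - v1)
        dv2 = v2 - v2**3 / 3 - w2 + I + g * (v1 - v2)
        dw1 = eps * (v1 + a - b * w1)
        dw2 = eps * (v2 + a - b * w2)
        v1, v2 = v1 + dt * dv1, v2 + dt * dv2
        w1, w2 = w1 + dt * dw1, w2 + dt * dw2
    return v1, v2
```

With strong coupling the two oscillators synchronize, illustrating how the outcome depends on coupling strength as the abstract notes.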
2

Torres-Treviño, Luis M., Angel Rodríguez-Liñán, Luis González-Estrada, and Gustavo González-Sanmiguel. "Single Gaussian Chaotic Neuron: Numerical Study and Implementation in an Embedded System." Discrete Dynamics in Nature and Society 2013 (2013): 1–11. http://dx.doi.org/10.1155/2013/318758.

Abstract:
Artificial Gaussian neurons are very common structures in artificial neural networks such as radial basis function networks. These artificial neurons use a Gaussian activation function with two parameters, the center of mass (cm) and the sensibility factor (λ). Changes to these parameters determine the behavior of the neuron. When the neuron has a feedback output, complex chaotic behavior is displayed. This paper presents a study and implementation of this particular neuron. Stability of fixed points, bifurcation diagrams, and Lyapunov exponents help to determine the dynamical nature of the neuron, and its implementation on an embedded system illustrates preliminary results toward embedded chaos computation.
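The neuron described above can be sketched in a few lines. The parameterization below (a Gaussian of the distance to cm, with the output fed back as the next input) is an assumed form for illustration, not necessarily the paper's exact map:

```python
import math

def gaussian_activation(y, cm, lam):
    """Gaussian activation: maximal (1.0) when the input equals the
    center of mass cm; lam controls how sharply the response falls off."""
    return math.exp(-lam * (y - cm) ** 2)

def feedback_trajectory(y0, cm, lam, n):
    """Feed the neuron's output back as its next input -- the feedback
    loop that the abstract reports can produce chaotic behavior."""
    y, traj = y0, []
    for _ in range(n):
        y = gaussian_activation(y, cm, lam)
        traj.append(y)
    return traj
```

Sweeping cm or lam and plotting the long-run values of such a trajectory is how the bifurcation diagrams mentioned in the abstract are produced.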
3

Alvarellos-González, Alberto, Alejandro Pazos, and Ana B. Porto-Pazos. "Computational Models of Neuron-Astrocyte Interactions Lead to Improved Efficacy in the Performance of Neural Networks." Computational and Mathematical Methods in Medicine 2012 (2012): 1–10. http://dx.doi.org/10.1155/2012/476324.

Abstract:
The importance of astrocytes, one part of the glial system, for information processing in the brain has recently been demonstrated. Regarding information processing in multilayer connectionist systems, it has been shown that systems which include artificial neurons and astrocytes (Artificial Neuron-Glia Networks) have well-known advantages over identical systems including only artificial neurons. Since the actual impact of astrocytes in neural network function is unknown, we have investigated, using computational models, different astrocyte-neuron interactions for information processing; different neuron-glia algorithms have been implemented for training and validation of multilayer Artificial Neuron-Glia Networks oriented toward classification problem resolution. The results of the tests performed suggest that all the algorithms modelling astrocyte-induced synaptic potentiation improved artificial neural network performance, but their efficacy depended on the complexity of the problem.
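One simplified reading of astrocyte-induced synaptic potentiation can be sketched as follows. The counter/threshold/boost scheme here is an illustrative assumption, not the paper's exact neuron-glia algorithm:

```python
def astrocyte_step(weights, activation, counter, threshold=3, boost=1.25):
    """Hypothetical sketch: an astrocyte tracks consecutive strong
    activations of its neuron and, once a threshold of sustained activity
    is reached, multiplicatively potentiates the incoming weights."""
    counter = counter + 1 if activation > 0.5 else 0
    if counter >= threshold:
        weights = [w * boost for w in weights]   # synaptic potentiation
        counter = 0                              # astrocyte resets
    return weights, counter
```

Applied during training, this kind of slow, activity-dependent weight modulation is what distinguishes an Artificial Neuron-Glia Network from a plain ANN.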
4

Chen, Xiu, and Yi Wang. "A Chaotic Neuron and its Ability to Prevent Overfitting." Frontiers in Computing and Intelligent Systems 5, no. 1 (2023): 53–61. http://dx.doi.org/10.54097/fcis.v5i1.11673.

Abstract:
The chaotic neuron is a neural model based on chaos theory, which combines the complex dynamic behavior of biological neurons with the characteristics of chaotic systems. Inspired by the chaotic firing characteristics of biological neurons, a novel chaotic neuron model and its response activation function, LMCU, are proposed in this paper. Based on a one-dimensional chaotic map, this chaotic neuron model takes the emission rate associated with the chaotic firing of biological neurons as its response output, so that it has the nonlinear response and chaotic characteristics of biological neurons. Unlike the traditional neuron model, it makes full use of the nonlinear dynamics of the chaotic system to achieve the activation output. In this paper, we apply the proposed chaotic neurons to artificial neural networks, using LeNet-5 models on the MNIST and CIFAR-10 datasets, and compare them with common activation functions. The application of chaotic neurons can effectively reduce overfitting in artificial neural networks, significantly reduce the generalization error of the model, and greatly improve overall performance. The innovative design of this chaotic neuron model provides a new cornerstone for the future development of artificial neural networks.
5

Zigunovs, Maksims. "THE ALZHEIMER’S DISEASE IMPACT ON ARTIFICIAL NEURAL NETWORKS." ENVIRONMENT. TECHNOLOGIES. RESOURCES. Proceedings of the International Scientific and Practical Conference 2 (June 17, 2021): 205–9. http://dx.doi.org/10.17770/etr2021vol2.6632.

Abstract:
The main impact of Alzheimer's disease on the brain is memory loss. In the "neuron world," this disrupts signal impulses and disconnects neurons, which causes neuron death and memory loss. The main aim of the research is to determine the average loss of signal and to develop memory-loss prediction models for an artificial neuron network. The Izhikevich neural network model is often used for modeling neuronal electrical signals. The signal rhythm and spikes of the neuron model are used as characteristics for understanding whether the system is stable at a certain moment and over time. In addition, the electrical signal parameters are used in a similar way as in a biological brain. During the research, the neural network initial conditions are assumed to be randomly selected within the specified range of the working neurons' average sigma and I parameters.
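The Izhikevich model referenced in the abstract is well documented; a minimal forward-Euler integration with the standard regular-spiking parameters looks like this (the integration step and run length are illustrative choices):

```python
def izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0, I=10.0, T_ms=1000, dt=0.25):
    """Izhikevich neuron: v' = 0.04*v^2 + 5*v + 140 - u + I,
    u' = a*(b*v - u); on a spike (v >= 30 mV), v resets to c and u += d.
    Returns the list of spike times in ms."""
    v = -65.0
    u = b * v
    spikes = []
    for k in range(int(T_ms / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:              # spike detected: record and reset
            spikes.append(k * dt)
            v = c
            u += d
    return spikes
```

The spike rhythm returned by such a simulation is the kind of signal characteristic the abstract describes using to judge network stability.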
6

Panda, Sashmita. "Comparative study of single biological neuron with an artificial neuron." BOHR Journal of Biocomputing and Nano Technology 1, no. 1 (2023): 9–16. http://dx.doi.org/10.54646/bjbnt.2023.02.

Abstract:
A number of artificial neural models have been presented in the literature in an effort to provide a more accurate representation of a single biological neuron. There are numerous publications on artificial neurons that attempted to replicate a single biological neuron; however, such models were unable to generate the spiking patterns of a real biological neuron. Therefore, there is still scope to design and research improved spiking neural models that more accurately reflect the functions of a biological neuron. This motivation drives extensive modification of an artificial neuron model to produce the spike patterns of a real biological neuron. The proposed modified single artificial neuron model exhibits the functions of a biological neuron. Modeling a spiking bio-neuron remains an important exercise in view of possible applications of the underlying features in the areas of neuromorphic engineering, cognitive radio, and spiking neural networks.
7

M. Mijwil, Maad. "Artificial Neural Networks Advantages and Disadvantages." Mesopotamian Journal of Big Data 2021 (August 23, 2021): 29–31. http://dx.doi.org/10.58496/mjbd/2021/006.

Abstract:
Artificial neural networks (ANNs) are, in the simplest definition, a model of the human brain whose building blocks are neurons. There are about 100 billion neurons in the human brain. Each neuron has between 1,000 and 100,000 connection points. In the human brain, information is stored in a distributed way, and when necessary we can extract more than one piece of this information from our memory in parallel. We are not mistaken when we say that a human brain is made up of thousands of very powerful parallel processors. In multi-layer artificial neural networks, neurons are likewise arranged in a manner similar to the human brain. Each neuron is connected to other neurons with specific coefficients. During training, information is distributed to these connection points so that the network learns.
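The neuron-with-coefficients picture in the abstract corresponds to the standard artificial neuron: inputs weighted by their connection coefficients, summed, and passed through an activation. A minimal sketch (the sigmoid is one common choice of activation):

```python
import math

def neuron(inputs, weights, bias=0.0):
    """A single artificial neuron: multiply each input by its connection
    coefficient (weight), sum, add the bias, and squash with a sigmoid."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))
```

During training, it is these weights that are adjusted, which is what the abstract means by information being "distributed to these connection points."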
8

Ruzek, Martin. "ARTIFICIAL NEURAL NETWORK FOR MODELS OF HUMAN OPERATOR." Acta Polytechnica CTU Proceedings 12 (December 15, 2017): 99. http://dx.doi.org/10.14311/app.2017.12.0099.

Abstract:
This paper presents a new approach to modeling mental functions with artificial neural networks. Artificial neural networks seem to be a promising method for modeling a human operator because the architecture of the ANN is directly inspired by the biological neuron. On the other hand, the classical paradigms of artificial neural networks are not suitable because they oversimplify the real processes in biological neural networks. The search for a compromise between the complexity of the biological neural network and the practical feasibility of the artificial network led to a new learning algorithm. This algorithm is based on the classical multilayered neural network; however, the learning rule is different. The neurons update their parameters in a way that is similar to real biological processes. The basic idea is that the neurons compete for resources, and the criterion deciding which neuron will survive is the usefulness of the neuron to the whole neural network. The neuron does not use a "teacher" or any kind of superior system; the neuron receives only the information that is present in the biological system. The learning process can be seen as a search for an equilibrium point corresponding to a state of maximal importance of the neuron for the neural network. This position can change if the environment changes. The name of this type of learning, the homeostatic artificial neural network, originates from this idea, as it is similar to the process of homeostasis known in any living cell. The simulation results suggest that this type of learning can be useful also in other tasks of artificial learning and recognition.
9

Tomov, Konstantin, and Galina Momcheva. "Multi-Activation Dendritic Neural Network (MA-DNN) Working Example of Dendritic-Based Artificial Neural Network." Cybernetics and Information Technologies 23, no. 3 (2023): 145–62. http://dx.doi.org/10.2478/cait-2023-0030.

Abstract:
Throughout the years, neural networks have been based on the perceptron model of the artificial neuron. Attempts to stray from it are few to none. The perceptron simply works, and that has discouraged research around other neuron models. New discoveries highlight the importance of dendrites in the neuron, but the perceptron model does not include them. This brings us to the goal of the paper, which is to present and test different models of artificial neurons that utilize dendrites to create an artificial neuron that better represents the biological neuron. The authors propose two models. One is made with the purpose of testing the idea of the dendritic neuron. The distinguishing feature of the second model is that it implements activation functions after its dendrites. Results from the second model suggest that it performs as well as or even better than the perceptron model.
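A dendritic neuron of the kind described, with an activation applied after each dendrite, can be sketched as follows. The grouping of inputs into dendrites and the tanh activations are illustrative assumptions, not the paper's exact architecture:

```python
import math

def dendritic_neuron(x, dendrites, soma_weights):
    """Inputs are split among dendrites; each dendrite computes a weighted
    sum followed by its own activation (the second model's distinguishing
    feature), and the soma combines the dendrite outputs through a final
    activation."""
    branch = []
    for idxs, w in dendrites:                      # (input indices, weights)
        s = sum(x[i] * wi for i, wi in zip(idxs, w))
        branch.append(math.tanh(s))                # activation after dendrite
    z = sum(b * wo for b, wo in zip(branch, soma_weights))
    return math.tanh(z)
```

Setting all dendrite activations to the identity collapses this back to an ordinary perceptron, which makes the comparison reported in the abstract a natural one.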
10

Yashchenko, V. O. "Neural-like growing networks in the development of general intelligence. Neural-like element (P. I)." Mathematical machines and systems 4 (2022): 15–36. http://dx.doi.org/10.34121/1028-9763-2022-4-15-36.

Abstract:
The article discusses a new approach to the creation of artificial neurons and neural networks as the means of developing artificial intelligence similar to natural intelligence. The article consists of two parts. In the first one, the system of artificial intelligence formation is considered in comparison with the system of natural intelligence formation. Based on the consideration and analysis of the structure and functions of a biological neuron, it was concluded that memory is stored in brain neurons at the molecular level. Information perceived by a person from the moment of birth and throughout life is stored in the endoplasmic reticulum of the neuron. There are about 100 billion neurons in the human brain, and each neuron contains millions of ribosomes that synthesize a mediator consisting of about 10,000 molecules. If we assume that one molecule corresponds to one unit of information, then human memory is unlimited. In the nerve cell, there is a synthesis of biologically active substances necessary for analyzing and memorizing information. The "factory" for the production of proteins is the endoplasmic reticulum, which accumulates millions of ribosomes. One ribosome synthesizes protein at a rate of 15–20 amino acids per second. Considering that the functional structure of ribosomes is similar to a Turing machine, we can conclude that the neuron is an analog multimachine complex – an ultra-fast molecular multimachine supercomputer with an unusually simple analog programming device. The artificial neuron proposed by W. McCulloch and W. Pitts is considered a highly simplified mathematical model of a biological neuron. A maximally approximate analogue of a biological neuron, a neural-like element, is proposed. A description of the neural-like element is given. The process of perceiving and memorizing information in a neural-like element is shown in comparison with the similar process in a nerve cell of the brain.