Journal articles on the topic "Artificials neurons"

To see the other types of publications on this topic, follow the link: Artificials neurons.

Format your source in APA, MLA, Chicago, Harvard, and other citation styles

Consult the top 50 journal articles for your research on the topic "Artificials neurons".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style of your choice: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read its abstract online, whenever such data are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Herrmann, Christoph S., and Andreas Klaus. "Autapse Turns Neuron into Oscillator." International Journal of Bifurcation and Chaos 14, no. 02 (February 2004): 623–33. http://dx.doi.org/10.1142/s0218127404009338.

Abstract:
Recently, neurobiologists have discovered axons on neurons which synapse on the same neuron's dendrites — so-called autapses. It is not yet clear what functional significance autapses offer for neural behavior. This is an ideal case for using a physical simulation to investigate how an autapse alters the firing of a neuron. We simulated a neural basket cell via the Hodgkin–Huxley equations and implemented an autapse which feeds back onto the soma of the neuron. The behavior of the cell was compared with and without autaptic feedback. Our artificial autapse neuron (AAN) displays oscillatory behavior which is not observed for the same model neuron without autapse. The neuron oscillates between two functional states: one where it fires at high frequency and another where firing is suppressed. This behavior is called "spike bursting" and represents a common pattern recorded from cerebral neurons.
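The paper's Hodgkin–Huxley basket-cell model is not reproduced here, but the qualitative mechanism it describes, delayed self-feedback turning steady firing into alternating bursts and silence, can be sketched with a leaky integrate-and-fire neuron (all parameter values below are illustrative, not the authors'):

```python
import numpy as np

def lif_autapse(T=2000.0, dt=0.1, I_ext=2.0, g_aut=3.0,
                tau_m=10.0, tau_s=50.0, v_th=1.0, delay_ms=15.0):
    """Leaky integrate-and-fire neuron with a delayed inhibitory autapse.

    Each spike feeds back onto the soma after `delay_ms` as a slowly
    decaying inhibitory current, so periods of rapid firing alternate
    with silent periods: a toy analogue of the paper's spike bursting.
    """
    n, delay = int(T / dt), int(delay_ms / dt)
    v, s = 0.0, 0.0
    arrivals = np.zeros(n + delay + 1)   # queue of autaptic spike arrivals
    spike_times = []
    for t in range(n):
        s += -dt / tau_s * s + arrivals[t]           # autaptic current decays
        v += dt / tau_m * (-v + I_ext - g_aut * s)   # leaky membrane integration
        if v >= v_th:                                # threshold crossing: spike
            spike_times.append(t * dt)
            v = 0.0                                  # reset
            arrivals[t + delay] += 1.0               # delayed self-feedback
    return np.array(spike_times)
```

With `g_aut=0.0` the same neuron fires at a fixed rate; with the autapse enabled, inter-spike intervals split into short within-burst gaps and long silent-phase gaps.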
2

Sharp, A. A., L. F. Abbott, and E. Marder. "Artificial electrical synapses in oscillatory networks." Journal of Neurophysiology 67, no. 6 (June 1, 1992): 1691–94. http://dx.doi.org/10.1152/jn.1992.67.6.1691.

Abstract:
1. We use an electronic circuit to artificially electrically couple neurons. 2. Strengthening the coupling between an oscillating neuron and a hyperpolarized, passive neuron can either increase or decrease the frequency of the oscillator depending on the properties of the oscillator. 3. The result of electrically coupling two neuronal oscillators depends on the membrane potentials, intrinsic properties of the neurons, and the coupling strength. 4. The interplay between chemical inhibitory synapses and electrical synapses can be studied by creating both chemical and electrical synapses between two cultured neurons and by artificially strengthening the electrical synapse between the ventricular dilator and one pyloric dilator neuron of the stomatogastric ganglion.
3

Márquez-Vera, Carlos Antonio, Zaineb Yakoub, Marco Antonio Márquez Vera, and Alfian Ma'arif. "Spiking PID Control Applied in the Van de Vusse Reaction." International Journal of Robotics and Control Systems 1, no. 4 (November 25, 2021): 488–500. http://dx.doi.org/10.31763/ijrcs.v1i4.490.

Abstract:
Artificial neural networks (ANNs) can approximate signals and give interesting results in pattern recognition, and some works use neural networks for control applications. However, biological neurons do not generate signals similar to those obtained by ANNs. Spiking neurons are an interesting topic since they simulate the real behavior depicted by biological neurons. This paper employs a spiking neuron to compute a PID control, which is then applied to the Van de Vusse reaction. Like the inverted pendulum, this reaction is a benchmark for systems with an inverse response, which causes the output to undershoot. One problem is how to encode information so that the neuron can interpret it, and how to decode the spikes the neuron generates in order to interpret its behavior. In this work, a spiking neuron computes a PID control by coding the spikes it generates in time. The PID gains serve as the neuron's synaptic weights, and the spike observed in the axon is the coded control signal. The neuron adaptation tries to obtain the weights necessary to generate the spike at the instant required to control the chemical reaction. The simulation results show the possibility of using this kind of neuron for control tasks, and of using a spiking neural network to overcome the undershoot caused by the inverse response of the chemical reaction.
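The paper's spike-time coding is not reproduced here, but the discrete PID law whose gains become the neuron's synaptic weights can be sketched as a baseline, applied to a generic first-order plant (the plant, gains, and setpoint below are illustrative stand-ins, not the Van de Vusse kinetics):

```python
def pid_controller(kp, ki, kd, dt):
    """Discrete PID; in the paper the three gains play the role of the
    spiking neuron's synaptic weights."""
    state = {"integral": 0.0, "prev_err": 0.0}
    def step(err):
        state["integral"] += err * dt                  # integral term
        deriv = (err - state["prev_err"]) / dt         # derivative term
        state["prev_err"] = err
        return kp * err + ki * state["integral"] + kd * deriv
    return step

def simulate(setpoint=1.0, T=20.0, dt=0.01, tau=1.0):
    # first-order plant y' = (-y + u) / tau standing in for the reaction
    ctrl = pid_controller(kp=2.0, ki=1.0, kd=0.05, dt=dt)
    y = 0.0
    for _ in range(int(T / dt)):
        u = ctrl(setpoint - y)     # control from the tracking error
        y += dt * (-y + u) / tau   # Euler step of the plant
    return y
```

The integral term drives the steady-state error to zero, which is the property the adapted spiking neuron has to reproduce through its spike timing.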
4

Torres-Treviño, Luis M., Angel Rodríguez-Liñán, Luis González-Estrada, and Gustavo González-Sanmiguel. "Single Gaussian Chaotic Neuron: Numerical Study and Implementation in an Embedded System." Discrete Dynamics in Nature and Society 2013 (2013): 1–11. http://dx.doi.org/10.1155/2013/318758.

Abstract:
Artificial Gaussian neurons are very common structures in artificial neural networks such as radial basis function networks. These artificial neurons use a Gaussian activation function with two parameters, called the center of mass (cm) and the sensitivity factor (λ). Changes in these parameters determine the behavior of the neuron. When the neuron has a feedback output, complex chaotic behavior is displayed. This paper presents a study and implementation of this particular neuron. Stability of fixed points, bifurcation diagrams, and Lyapunov exponents help to determine the dynamical nature of the neuron, and its implementation on an embedded system illustrates preliminary results toward embedded chaos computation.
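A minimal sketch of such a neuron, assuming the feedback form x_{n+1} = exp(−λ(x_n − cm)²) (the paper's exact feedback equation may differ); the numerically estimated Lyapunov exponent is the standard way to distinguish stable from chaotic parameter settings:

```python
import math

def gaussian_neuron(x, lam, cm):
    # Gaussian activation; with the output fed back as the next input,
    # the center of mass (cm) and sensitivity (lam) set the dynamics
    return math.exp(-lam * (x - cm) ** 2)

def lyapunov_exponent(lam, cm, x0=0.1, n=5000, burn=500):
    # average log|f'(x)| along the orbit: negative means a stable fixed
    # point or cycle, positive means sensitive dependence (chaos)
    x, acc = x0, 0.0
    for i in range(n):
        slope = abs(2 * lam * (x - cm)) * gaussian_neuron(x, lam, cm)
        if i >= burn:
            acc += math.log(slope + 1e-300)
        x = gaussian_neuron(x, lam, cm)
    return acc / (n - burn)
```

Because the activation is an exponential of a non-positive argument, the orbit always stays in (0, 1]; sweeping λ at fixed cm reproduces the kind of bifurcation structure the paper studies.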
5

Alvarellos-González, Alberto, Alejandro Pazos, and Ana B. Porto-Pazos. "Computational Models of Neuron-Astrocyte Interactions Lead to Improved Efficacy in the Performance of Neural Networks." Computational and Mathematical Methods in Medicine 2012 (2012): 1–10. http://dx.doi.org/10.1155/2012/476324.

Abstract:
The importance of astrocytes, one part of the glial system, for information processing in the brain has recently been demonstrated. Regarding information processing in multilayer connectionist systems, it has been shown that systems which include artificial neurons and astrocytes (Artificial Neuron-Glia Networks) have well-known advantages over identical systems including only artificial neurons. Since the actual impact of astrocytes in neural network function is unknown, we have investigated, using computational models, different astrocyte-neuron interactions for information processing; different neuron-glia algorithms have been implemented for training and validation of multilayer Artificial Neuron-Glia Networks oriented toward classification problem resolution. The results of the tests performed suggest that all the algorithms modelling astrocyte-induced synaptic potentiation improved artificial neural network performance, but their efficacy depended on the complexity of the problem.
6

Takahata, M., and M. Hisada. "Local nonspiking interneurons involved in gating of the descending motor pathway in crayfish." Journal of Neurophysiology 56, no. 3 (September 1, 1986): 718–31. http://dx.doi.org/10.1152/jn.1986.56.3.718.

Abstract:
Uropod motor neurons in the terminal abdominal ganglion of crayfish are continuously excited during the abdominal posture movement so that subthreshold excitatory postsynaptic potentials from the descending statocyst pathway can elicit spike activity in the motor neurons only while the abdominal posture system is in operation. Local nonspiking interneurons in the terminal ganglion were also found to show sustained membrane potential change during the fictive abdominal posture movement. Artificial membrane potential change of these interneurons by intracellular current injection in the same direction as that actually observed during the abdominal movement caused similar excitation of uropod motor neurons. Artificial cancellation of the membrane potential change of these interneurons during the abdominal movement also caused cancellation of the excitation of uropod motor neurons. We concluded that the continuous excitation of uropod motor neurons during the fictive abdominal movement was mediated, at least partly, by the local nonspiking interneurons. Fourteen (36%) out of 39 examined nonspiking interneurons were judged to be involved in the excitation of uropod motor neurons during the fictive abdominal movement. Another 25 interneurons (64%) were found not to be involved in the excitation of motor neurons, although most of them had a strong effect on the uropod motor neuron activity when their membrane potential was changed artificially. The interneurons that were involved in the excitation of motor neurons during the abdominal movement included both of the two major structural types of nonspiking interneurons in the terminal ganglion, i.e., those in the anterolateral portion and those in the posterolateral portion. No strict correlation was found between the structure of nonspiking interneurons and their function in the control of uropod motor neuron activity.
7

Ribar, Srdjan, Vojislav V. Mitic, and Goran Lazovic. "Neural Networks Application on Human Skin Biophysical Impedance Characterizations." Biophysical Reviews and Letters 16, no. 01 (February 6, 2021): 9–19. http://dx.doi.org/10.1142/s1793048021500028.

Abstract:
Artificial neural networks (ANNs) are basically the structures that perform input–output mapping. This mapping mimics the signal processing in biological neural networks. The basic element of biological neural network is a neuron. Neurons receive input signals from other neurons or the environment, process them, and generate their output which represents the input to another neuron of the network. Neurons can change their sensitivity to input signals. Each neuron has a simple rule to process an input signal. Biological neural networks have the property that signals are processed through many parallel connections (massively parallel processing). The activity of all neurons in these parallel connections is summed and represents the output of the whole network. The main feature of biological neural networks is that changes in the sensitivity of the neurons lead to changes in the operation of the entire network. This is called adaptation and is correlated with the learning process of living organisms. In this paper, a set of artificial neural networks are used for classifying the human skin biophysical impedance data.
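For a single artificial neuron, the input–output mapping described above reduces to a weighted sum passed through an activation function; a minimal sketch (sigmoid activation assumed for illustration):

```python
import math

def artificial_neuron(inputs, weights, bias):
    # weighted sum of input signals followed by a sigmoid activation;
    # the neuron's "sensitivity" to each input lives in its weight
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))
```

Changing a weight changes the neuron's response to the same input, which is the elementary form of the network-wide adaptation the abstract describes.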
8

Wang, Yu, Xintong Chen, Daqi Shen, Miaocheng Zhang, Xi Chen, Xingyu Chen, Weijing Shao, et al. "Artificial Neurons Based on Ag/V2C/W Threshold Switching Memristors." Nanomaterials 11, no. 11 (October 27, 2021): 2860. http://dx.doi.org/10.3390/nano11112860.

Abstract:
Artificial synapses and neurons are two critical, fundamental bricks for constructing hardware neural networks. Owing to their high-density integration, outstanding nonlinearity, and modulated plasticity, memristors have attracted growing attention for emulating biological synapses and neurons. However, fabricating a low-power and robust memristor-based artificial neuron without extra electrical components is still a challenge for brain-inspired systems. In this work, we demonstrate a single two-dimensional (2D) MXene (V2C)-based threshold switching (TS) memristor that emulates a leaky integrate-and-fire (LIF) neuron without auxiliary circuits, based on the Ag-diffusion filamentary mechanism. Moreover, our V2C-based artificial neurons faithfully achieve multiple neural functions, including leaky integration, threshold-driven firing, self-relaxation, and a linear strength-modulated spike-frequency characteristic. This work demonstrates that three-atom-type MXene (e.g., V2C) memristors may provide an efficient method for constructing hardware neuromorphic computing systems.
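The LIF behavior the memristor emulates (leaky integration, threshold-driven firing, reset, and spike frequency growing with input strength) can be sketched in a few lines; the parameters are illustrative, not fitted to the V2C device:

```python
def lif_firing_rate(I, v_th=1.0, tau=10.0, dt=0.01, T=1000.0):
    # constant input I is leakily integrated; each threshold crossing
    # is a "fire" event followed by a reset (self-relaxation)
    v, spikes = 0.0, 0
    for _ in range(int(T / dt)):
        v += dt / tau * (-v + I)
        if v >= v_th:
            spikes += 1
            v = 0.0
    return spikes * 1000.0 / T   # spikes per second
```

Below threshold drive (I < v_th) the neuron never fires; above it the rate rises with I, the strength-to-frequency characteristic the paper reports for the device.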
9

Volchikhin, V. I., A. I. Ivanov, T. A. Zolotareva, and D. M. Skudnev. "Synthesis of four new neuro-statistical tests for testing the hypothesis of independence of small samples of biometric data." Journal of Physics: Conference Series 2094, no. 3 (November 1, 2021): 032013. http://dx.doi.org/10.1088/1742-6596/2094/3/032013.

Abstract:
The paper considers the analysis of small samples according to several statistical criteria to test the hypothesis of independence, since direct calculation of correlation coefficients using the Pearson formula gives an unacceptably high error on small biometric samples. Each of the classical statistical criteria for testing the hypothesis of independence can be replaced with an equivalent artificial neuron. Neuron training is performed under the condition of obtaining equal probabilities of errors of the first and second kind. To improve the quality of the decisions made, it is necessary to use a variety of statistical criteria, both known and new, and to form networks of artificial neurons of the size necessary for practical use. It is shown that the classical formula for calculating the correlation coefficients can be modified in four ways. This makes it possible to create a network of 5 artificial neurons, which is not yet able to reduce the probability of errors in comparison with the classical formula. A gain in confidence level can only be obtained with a network of more than 23 artificial neurons, if the simplest error-detecting and error-correcting code is applied.
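The small-sample problem the paper starts from is easy to reproduce: the sample Pearson coefficient of two genuinely independent variables scatters widely around zero when n is small. A quick simulation (illustrative only, not the authors' neuro-statistical tests):

```python
import random
import statistics

def pearson_r(x, y):
    # classical sample Pearson correlation coefficient
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def r_spread(n, trials=2000, seed=1):
    # spread of r for two *independent* normal variables of size n:
    # its width around 0 is the small-sample error the paper refers to
    rng = random.Random(seed)
    rs = [pearson_r([rng.gauss(0, 1) for _ in range(n)],
                    [rng.gauss(0, 1) for _ in range(n)])
          for _ in range(trials)]
    return statistics.stdev(rs)
```

The spread shrinks roughly as 1/√(n−1), so for biometric samples of a dozen or so observations the raw coefficient is a very noisy independence test, which is what motivates replacing each criterion with a trained neuron and fusing several of them.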
10

De-Miguel, Francisco F., Mariana Vargas-Caballero, and Elizabeth García-Pérez. "Spread of synaptic potentials through electrical synapses in Retzius neurones of the leech." Journal of Experimental Biology 204, no. 19 (October 1, 2001): 3241–50. http://dx.doi.org/10.1242/jeb.204.19.3241.

Abstract:
We studied the spread of excitatory postsynaptic potentials (EPSPs) through electrical synapses in Retzius neurones of the leech Haementeria officinalis. The pair of Retzius neurones in each ganglion is coupled by a non-rectifying electrical synapse. Both neurones displayed synchronous EPSPs of varying amplitudes and rise times. The kinetics of synchronous EPSPs was similar in 79 % of the EPSP pairs. In the remaining 21 %, one EPSP was smaller and slower than the other, suggesting its passive spread from the other neurone. The proportion of these events increased to 75 % in the presence of Mg2+ in the bathing fluid. This spread of EPSPs from one neurone to another was tested by producing artificial EPSPs by current injection into the soma of one Retzius neurone. The artificial EPSPs were smaller and arrived more slowly at the soma of the coupled neurone. The coupling ratios for the EPSPs were proportional to the coupling ratio for long steady-state pulses in different neuronal pairs. Our results showed that EPSPs spread from one Retzius neurone to the other and support the idea that EPSP spread between electrically coupled neurones may contribute to the integration processes of neurones.
11

Gupta, Pallavi, Nandhini Balasubramaniam, Hwan-You Chang, Fan-Gang Tseng, and Tuhin Subhra Santra. "A Single-Neuron: Current Trends and Future Prospects." Cells 9, no. 6 (June 23, 2020): 1528. http://dx.doi.org/10.3390/cells9061528.

Abstract:
The brain is an intricate network with complex organizational principles facilitating a concerted communication between single-neurons, distinct neuron populations, and remote brain areas. The communication, technically referred to as connectivity, between single-neurons is the center of many investigations aimed at elucidating pathophysiology, anatomical differences, and structural and functional features. In comparison with bulk analysis, single-neuron analysis can provide precise information about neurons or even sub-neuron-level electrophysiology, anatomical differences, pathophysiology, and structural and functional features, in addition to their communications with other neurons, and can promote essential information to understand the brain and its activity. This review highlights various single-neuron models and their behaviors, followed by different analysis methods. To elucidate cellular dynamics in terms of electrophysiology at the single-neuron level, we emphasize in detail the role of single-neuron mapping and electrophysiological recording. We also elaborate on the recent development of single-neuron isolation, manipulation, and therapeutic progress using advanced micro/nanofluidic devices, as well as microinjection, electroporation, microelectrode arrays, optical transfection, and optogenetic techniques. Further, developments in the field of artificial intelligence in relation to single-neurons are highlighted. The review concludes with the limitations and future prospects of single-neuron analyses.
12

Zigunovs, Maksims. "THE ALZHEIMER’S DISEASE IMPACT ON ARTIFICIAL NEURAL NETWORKS." ENVIRONMENT. TECHNOLOGIES. RESOURCES. Proceedings of the International Scientific and Practical Conference 2 (June 17, 2021): 205–9. http://dx.doi.org/10.17770/etr2021vol2.6632.

Abstract:
The main impact of Alzheimer's disease on the brain is memory loss. In the "neuron world", this disrupts signal impulses and disconnects neurons, which causes neuron death and memory loss. The main aim of the research is to determine the average loss of signal and to develop memory-loss prediction models for an artificial neuron network. The Izhikevich neuron model is often used for modeling neural electrical signals. The rhythm and spikes of the model's signal are used as neuron characteristics for understanding whether the system is stable at a certain moment and over time. In addition, the electrical signal parameters are used in a similar way as in a biological brain. During the research, the initial conditions of the neural network are assumed to be randomly selected within the specified range of the working neurons' average sigma-I parameters.
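The Izhikevich model mentioned here is a two-variable system whose spike rhythm is easy to reproduce; a sketch with the standard regular-spiking parameters (the study's disease-modified settings are not reproduced):

```python
def izhikevich_spikes(I, a=0.02, b=0.2, c=-65.0, d=8.0, T=500.0, dt=0.25):
    """Izhikevich neuron: v is the membrane potential (mV), u a recovery
    variable; (a, b, c, d) are the standard regular-spiking values."""
    v, u, spike_times = c, b * c, []
    for step in range(int(T / dt)):
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)   # membrane dynamics
        u += dt * a * (b * v - u)                        # recovery dynamics
        if v >= 30.0:                                    # spike: reset v, bump u
            spike_times.append(step * dt)
            v, u = c, u + d
    return spike_times
```

With no drive the model sits at rest; with sustained input it produces the spike train whose rhythm serves as the signal characteristic in the study.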
13

Vazquez, Roberto A., and Beatriz A. Garro. "Training Spiking Neural Models Using Artificial Bee Colony." Computational Intelligence and Neuroscience 2015 (2015): 1–14. http://dx.doi.org/10.1155/2015/947098.

Abstract:
Spiking neurons are models designed to simulate, in a realistic manner, the behavior of biological neurons. Recently, it has been proven that this type of neuron can be applied to solve pattern recognition problems with great efficiency. However, the lack of learning strategies for training these models does not allow them to be used in several pattern recognition problems. On the other hand, several bioinspired algorithms have been proposed in recent years for solving a broad range of optimization problems, including those related to the field of artificial neural networks (ANNs). Artificial bee colony (ABC) is a novel algorithm based on the behavior of bees in the task of exploring their environment to find a food source. In this paper, we describe how the ABC algorithm can be used as a learning strategy to train a spiking neuron aiming to solve pattern recognition problems. Finally, the proposed approach is tested on several pattern recognition problems. It is important to remark that, to show the power of this type of model, only one neuron is used. In addition, we analyze how the performance of these models is improved by this kind of learning strategy.
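A compact sketch of the idea: an ABC-style population search tuning a single sigmoid neuron's weights, here on a toy AND task with the employed/onlooker/scout phases heavily simplified (this is not the authors' spiking-neuron setup; all names and parameters are illustrative):

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss(w, data):
    # squared error of a single sigmoid neuron on (inputs, target) pairs
    return sum((sigmoid(w[0] * x1 + w[1] * x2 + w[2]) - t) ** 2
               for (x1, x2), t in data)

def abc_train(data, n_bees=20, limit=10, iters=200, seed=0):
    rng = random.Random(seed)
    foods = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(n_bees)]
    trials = [0] * n_bees
    for _ in range(iters):
        for i in range(n_bees):
            # perturb one weight toward a random partner's position,
            # keeping the move only if it improves the loss
            k, j = rng.randrange(n_bees), rng.randrange(3)
            cand = foods[i][:]
            cand[j] += rng.uniform(-1, 1) * (foods[i][j] - foods[k][j])
            if loss(cand, data) < loss(foods[i], data):
                foods[i], trials[i] = cand, 0
            else:
                trials[i] += 1
                if trials[i] > limit:   # scout phase: abandon the source
                    foods[i] = [rng.uniform(-1, 1) for _ in range(3)]
                    trials[i] = 0
    return min(foods, key=lambda w: loss(w, data))
```

No gradient is used anywhere, which is the point: the same derivative-free search can train spiking models whose output is not differentiable.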
14

Nakamura, K., and T. Ono. "Lateral hypothalamus neuron involvement in integration of natural and artificial rewards and cue signals." Journal of Neurophysiology 55, no. 1 (January 1, 1986): 163–81. http://dx.doi.org/10.1152/jn.1986.55.1.163.

Abstract:
Involvement of rat lateral hypothalamus (LHA) neurons in integration of motivation, reward, and learning processes was studied by recording single-neuron activity during cue-tone discrimination learning behavior to obtain glucose or electrical rewarding intracranial self-stimulation (ICSS) of the posterior LHA. To relate the activity of an LHA neuron to glucose, ICSS, and anticipatory cues, the same licking task was used to obtain both rewards. Each neuron was tested with rewards alone and then with rewards signaled by cue-tone stimuli (CTS), CTS1+ = 1,200 Hz for glucose, CTS2+ = 4,300 Hz for ICSS, and CTS- = 2,800 Hz for no reward. The activity of 318 neurons in the LHA was analyzed. Of these, 212 (66.7%) responded to one or both rewarding stimuli (glucose, 115; ICSS, 193). Usually, both rewards affected the same neuron in the same direction. Of 96 neurons that responded to both rewards, the responses of 72 (75%) were similar, i.e., either both excitatory or both inhibitory. When a tone was associated with glucose or ICSS reward, 81 of the 212 neurons that responded to either or both rewards and none of 106 neurons that failed to respond to either reward acquired a response to the respective CTS. Usually, the response to a tone was in the same direction as the reward response. Of 45 neurons that responded to both glucose and CTS1+, 38 (84.4%) were similar, and of 66 that responded to both ICSS and CTS2+, 47 (71.2%) were similar. The neural response to a tone was acquired rapidly after licking behavior was learned and was extinguished equally rapidly before licking stopped in extinction. The latency of the neural response to CTS1+ was 10-150 ms (58.7 +/- 40.9 ms, mean +/- SE, n = 31), and that of the first lick was 100-370 ms (204.8 +/- 59.1 ms, n = 31). The latency of neural responses to CTS2+ was 10-230 ms (68.3 +/- 53.5 ms, n = 33), and that of the first lick was 90-370 ms (212.4 +/- 58.5 ms, n = 33).
There was no significant difference between the neural response latencies for the two cue tones nor between the lick latencies for the different rewards. Neurons inhibited by glucose or ICSS reward were distributed widely in the LHA, whereas most excited neurons were in the posterodorsal subarea; fewer were in the anteroventral subarea. Neurons responding to the CTS for glucose or ICSS were found more frequently in the posterior region.(ABSTRACT TRUNCATED AT 400 WORDS)
15

Ruzek, Martin. "ARTIFICIAL NEURAL NETWORK FOR MODELS OF HUMAN OPERATOR." Acta Polytechnica CTU Proceedings 12 (December 15, 2017): 99. http://dx.doi.org/10.14311/app.2017.12.0099.

Abstract:
This paper presents a new approach to modeling mental functions with artificial neural networks. Artificial neural networks seem to be a promising method for modeling a human operator because the architecture of the ANN is directly inspired by the biological neuron. On the other hand, the classical paradigms of artificial neural networks are not suitable because they oversimplify the real processes in a biological neural network. The search for a compromise between the complexity of the biological neural network and the practical feasibility of the artificial network led to a new learning algorithm. This algorithm is based on the classical multilayered neural network; however, the learning rule is different. The neurons update their parameters in a way that is similar to real biological processes. The basic idea is that the neurons compete for resources, and the criterion deciding which neuron will survive is the usefulness of the neuron to the whole neural network. The neuron does not use a "teacher" or any kind of superior system; it receives only the information that is present in the biological system. The learning process can be seen as a search for an equilibrium point, a state of maximal importance of the neuron for the neural network. This position can change if the environment changes. The name of this type of learning, the homeostatic artificial neural network, originates from this idea, as it is similar to the process of homeostasis known in any living cell. The simulation results suggest that this type of learning can also be useful in other tasks of artificial learning and recognition.
16

Anasari, Rashid. "Expectation of Tourism Demand in Iraq by Using Artificial Neural Network." International Journal of Social Science Research and Review 2, no. 2 (June 1, 2019): 1–7. http://dx.doi.org/10.47814/ijssrr.v2i2.19.

Abstract:
This study surveys and demonstrates the effectiveness of artificial neural networks (ANNs) as an alternative approach in tourism research. The study uses demand in the travel industry in Japan as the case for forecasting. The outcome reveals that the use of ANNs in tourism research may result in better estimates in terms of prediction bias and accuracy. Further applications of ANNs in the context of tourism demand analysis are needed to establish and validate these effects. An ANN is based on a collection of connected units or nodes called artificial neurons, which loosely model the neurons in a biological brain. Each connection, like the synapses in a biological brain, can transmit a signal to other neurons. An artificial neuron that receives a signal processes it and can signal the neurons connected to it.
17

Beatty, Joseph A., Soomin C. Song, and Charles J. Wilson. "Cell-type-specific resonances shape the responses of striatal neurons to synaptic input." Journal of Neurophysiology 113, no. 3 (February 1, 2015): 688–700. http://dx.doi.org/10.1152/jn.00827.2014.

Abstract:
Neurons respond to synaptic inputs in cell-type-specific ways. Each neuron type may thus respond uniquely to shared patterns of synaptic input. We applied statistically identical barrages of artificial synaptic inputs to four striatal cell types to assess differences in their responses to a realistic input pattern. Each interneuron type fired in phase with a specific input-frequency component. The fast-spiking interneuron fired in relation to the gamma-band (and higher) frequencies, the low-threshold spike interneuron to the beta-band frequencies, and the cholinergic neurons to the delta-band frequencies. Low-threshold spiking and cholinergic interneurons showed input impedance resonances at frequencies matching their spiking resonances. Fast-spiking interneurons showed resonance of input impedance but at lower than gamma frequencies. The spiny projection neuron's frequency preference did not have a fixed frequency but instead tracked its own firing rate. Spiny cells showed no input impedance resonance. Striatal interneurons are each tuned to a specific frequency band corresponding to the major frequency components of local field potentials. Their influence in the circuit may fluctuate along with the contribution of that frequency band to the input. In contrast, spiny neurons may tune to any of the frequency bands by a change in firing rate.
18

Obana, Ichiro, and Yasuhiro Fukui. "Role of Chaos in Trial-and-Error Problem Solving by an Artificial Neural Network." International Journal of Neural Systems 07, no. 01 (March 1996): 101–8. http://dx.doi.org/10.1142/s0129065796000099.

Abstract:
One role of chaotic neural activity is illustrated by means of computer simulations of an imaginary agent’s goal-oriented behavior. The agent has a simplified neural network with seven neurons and three legs. The neural network consists of one photosensory neuron and three pairs of inter- and motor neurons. The three legs whose movements are governed by the three motor neurons allow the agent to walk in six concentric radial directions on a plane. It is intended that the neural network causes the agent to walk in a direction of greater brightness, to reach finally the most brightly lit place on the plane. The presence of only one sensory neuron has an important meaning. That is, no immediate information on directions of greater brightness is sensed by the agent. In other words, random walking in the manner of trial-and-error problem solving must be involved in the agent’s walking. Chaotic firing of the motor neurons is intended to play a crucial role in generating the random walking. Brief random walking and rapid straight walking in a direction of greater brightness were observed to occur alternately in the computer simulation. Controlled chaos in naturally occurring neural networks may play a similar role.
19

Gynther, I. C., and K. G. Pearson. "An evaluation of the role of identified interneurons in triggering kicks and jumps in the locust." Journal of Neurophysiology 61, no. 1 (January 1, 1989): 45–57. http://dx.doi.org/10.1152/jn.1989.61.1.45.

Abstract:
1. We have used intracellular recording and staining techniques to examine the importance of certain identified interneurons within the system responsible for triggering kicks and jumps in the locust, Locusta migratoria. In particular, our study focused on a pair of metathoracic interneurons called the M-neurons. These cells make strong inhibitory connections to hind-leg flexor motoneurons and are thought to play a key role in the termination of flexor activity which causes kicks and jumps to be triggered (8, 20, 24). 2. Simultaneous recordings from M-neurons and flexor motoneurons during bilateral hindleg kicks revealed that in most cases the onset of the M-neuron's high-frequency discharge coincided precisely with the start of the flexor's rapid repolarization. This result demonstrated that M's activity had the correct timing to be involved in the triggering process and so confirmed suggestions made in previous studies. At times, however, the flexor motoneurons began to repolarize slowly prior to the first spike in the M-neuron, indicating that triggering must involve other neurons and perhaps also an additional mechanism such as a reduction of flexor excitation. 3. The sufficiency and necessity of the M-neurons for triggering kicks were tested by experiments involving intracellular current injections. The application of a brief pulse of depolarizing current to an M-neuron, in order to evoke a burst of spikes in the cell prior to the time it would normally have become active, caused extension of the ipsilateral leg to be triggered prematurely but did not influence the motor program in the contralateral leg. This effect was only observed when the discharge frequency evoked artificially in the M-neuron was greater than that seen during natural performance of the behavior. Even then, the repolarization produced in the flexor motoneurons by the current pulses was not the same as occurs normally. 
We conclude that under natural circumstances the M-neurons, by themselves, are not sufficient to trigger kicks. 4. When the usual discharge in an M-neuron was prevented by the injection of hyperpolarizing current, both legs were still able to kick. This lack of necessity of the M-neurons confirms that additional neurons must be involved in the triggering process. The rate of repolarization of the flexor motoneurons during kicks in which M activity had been abolished was slower and more variable than is seen in normal kicks but this did not appear to alter the timing of leg extension.(ABSTRACT TRUNCATED AT 400 WORDS)
20

Segers, L. S., S. C. Nuding, M. M. Ott, J. B. Dean, D. C. Bolser, R. O'Connor, K. F. Morris, and B. G. Lindsey. "Peripheral chemoreceptors tune inspiratory drive via tonic expiratory neuron hubs in the medullary ventral respiratory column network." Journal of Neurophysiology 113, no. 1 (January 1, 2015): 352–68. http://dx.doi.org/10.1152/jn.00542.2014.

Abstract:
Models of brain stem ventral respiratory column (VRC) circuits typically emphasize populations of neurons, each active during a particular phase of the respiratory cycle. We have proposed that “tonic” pericolumnar expiratory (t-E) neurons tune breathing during baroreceptor-evoked reductions and central chemoreceptor-evoked enhancements of inspiratory (I) drive. The aims of this study were to further characterize the coordinated activity of t-E neurons and test the hypothesis that peripheral chemoreceptors also modulate drive via inhibition of t-E neurons and disinhibition of their inspiratory neuron targets. Spike trains of 828 VRC neurons were acquired by multielectrode arrays along with phrenic nerve signals from 22 decerebrate, vagotomized, neuromuscularly blocked, artificially ventilated adult cats. Forty-eight of 191 t-E neurons fired synchronously with another t-E neuron as indicated by cross-correlogram central peaks; 32 of the 39 synchronous pairs were elements of groups with mutual pairwise correlations. Gravitational clustering identified fluctuations in t-E neuron synchrony. A network model supported the prediction that inhibitory populations with spike synchrony reduce target neuron firing probabilities, resulting in offset or central correlogram troughs. In five animals, stimulation of carotid chemoreceptors evoked changes in the firing rates of 179 of 240 neurons. Thirty-two neuron pairs had correlogram troughs consistent with convergent and divergent t-E inhibition of I cells and disinhibitory enhancement of drive. Four of 10 t-E neurons that responded to sequential stimulation of peripheral and central chemoreceptors triggered 25 cross-correlograms with offset features. The results support the hypothesis that multiple afferent systems dynamically tune inspiratory drive in part via coordinated t-E neurons.
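The cross-correlogram central peaks used above as a synchrony signature can be sketched in a few lines. This is a hedged illustration with synthetic spike trains and made-up parameters (not the study's recordings): two neurons driven by a shared input, whose correlogram shows the expected central peak.

```python
import random

def cross_correlogram(ref, target, half_width=0.05, bin_size=0.005):
    """Histogram of target spike times relative to each reference spike."""
    n_bins = int(2 * half_width / bin_size)
    counts = [0] * n_bins
    for t_ref in ref:
        for t in target:
            lag = t - t_ref
            if -half_width <= lag < half_width:
                counts[int((lag + half_width) / bin_size)] += 1
    return counts

random.seed(1)
# Two neurons sharing a common input: shared event times plus independent jitter.
shared = [random.uniform(0, 100) for _ in range(400)]
neuron_a = sorted(t + random.gauss(0, 0.002) for t in shared)
neuron_b = sorted(t + random.gauss(0, 0.002) for t in shared)
ccg = cross_correlogram(neuron_a, neuron_b)
# Shared input produces a central peak: the middle bins dominate the edges.
centre = sum(ccg[len(ccg) // 2 - 1 : len(ccg) // 2 + 1])
flanks = ccg[0] + ccg[-1]
print(centre, flanks)
```

An inhibitory connection would instead show up as a trough (a deficit of counts) at or just after zero lag, which is the feature the authors look for.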
21

Cazé, Romain D. "Any neuron can perform linearly non-separable computations." F1000Research 10 (July 6, 2021): 539. http://dx.doi.org/10.12688/f1000research.53961.1.

Abstract:
Multiple studies have shown how dendrites enable some neurons to perform linearly non-separable computations. These works focus on cells with an extended dendritic arbor where voltage can vary independently, turning dendritic branches into local non-linear subunits. However, these studies leave a large fraction of the nervous system unexplored. Many neurons, e.g. granule cells, have modest dendritic trees and are electrically compact. It is impossible to decompose them into multiple independent subunits. Here, we upgraded the integrate and fire neuron to account for saturating dendrites. This artificial neuron has a unique membrane voltage and can be seen as a single layer. We present a class of linearly non-separable computations and how our neuron can perform them. We thus demonstrate that even a single layer neuron with dendrites has more computational capacity than without. Because any neuron has one or more layer, and all dendrites do saturate, we show that any dendrited neuron can implement linearly non-separable computations.
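The abstract's claim can be illustrated with a toy version of such a neuron. The sketch below is an assumption-laden simplification (unit weights, dendritic saturation modelled as min(·, 1), soma threshold 2), not the paper's exact model: a single-layer unit with two saturating dendrites computes (x1 OR x2) AND (x3 OR x4), which no single linear threshold unit can.

```python
from itertools import product

def dendrited_neuron(x, threshold=2):
    # Each dendrite sums its inputs and saturates at 1 (sublinear transfer);
    # the soma thresholds the sum of the dendritic contributions.
    d1 = min(x[0] + x[1], 1)
    d2 = min(x[2] + x[3], 1)
    return int(d1 + d2 >= threshold)

# Target: (x1 OR x2) AND (x3 OR x4) is linearly non-separable: summing the
# constraints for (1,1,0,0),(0,0,1,1) versus (1,0,1,0),(0,1,0,1) yields a
# contradiction for any single set of weights and threshold.
for x in product([0, 1], repeat=4):
    target = int((x[0] or x[1]) and (x[2] or x[3]))
    assert dendrited_neuron(x) == target
print("all 16 patterns match")
```

The saturation is what does the work: without the min(), the neuron reduces to a plain perceptron and the contradiction above applies.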
22

Cazé, Romain D. "Any neuron can perform linearly non-separable computations." F1000Research 10 (September 16, 2021): 539. http://dx.doi.org/10.12688/f1000research.53961.2.

Abstract:
Multiple studies have shown how dendrites enable some neurons to perform linearly non-separable computations. These works focus on cells with an extended dendritic arbor where voltage can vary independently, turning dendritic branches into local non-linear subunits. However, these studies leave a large fraction of the nervous system unexplored. Many neurons, e.g. granule cells, have modest dendritic trees and are electrically compact. It is impossible to decompose them into multiple independent subunits. Here, we upgraded the integrate and fire neuron to account for saturating dendrites. This artificial neuron has a unique membrane voltage and can be seen as a single layer. We present a class of linearly non-separable computations and how our neuron can perform them. We thus demonstrate that even a single layer neuron with dendrites has more computational capacity than without. Because any neuron has one or more layer, and all dendrites do saturate, we show that any dendrited neuron can implement linearly non-separable computations.
23

Sharp, A. A., M. B. O'Neil, L. F. Abbott, and E. Marder. "Dynamic clamp: computer-generated conductances in real neurons." Journal of Neurophysiology 69, no. 3 (March 1, 1993): 992–95. http://dx.doi.org/10.1152/jn.1993.69.3.992.

Abstract:
1. We describe a new method, the dynamic clamp, that uses a computer as an interactive tool to introduce simulated voltage and ligand mediated conductances into real neurons. 2. We simulate a gamma-aminobutyric acid (GABA) response of a cultured stomatogastric ganglion neuron to illustrate that the dynamic clamp effectively introduces a conductance into the target neuron. 3. To demonstrate an artificial voltage-dependent conductance, we simulate the action of a voltage-dependent proctolin response on a neuron in the intact stomatogastric ganglion. We show that shifts in the activation curve and the maximal conductance of the response produce different effects on the target neuron. 4. The dynamic clamp is used to construct reciprocal inhibitory synapses between two stomatogastric ganglion neurons that are not coupled naturally, illustrating that this method can be used to form new networks at will.
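The dynamic-clamp idea — read the membrane voltage each timestep, compute the current a simulated conductance would carry at that voltage, inject it — can be sketched as follows. Here a passive RC membrane stands in for the recorded cell, and all constants (conductances, reversal potentials, gating waveform) are illustrative assumptions, not values from the paper.

```python
import math

# Passive membrane standing in for the recorded cell (illustrative values).
C, g_leak, E_leak = 1.0, 0.1, -70.0   # nF, uS, mV
g_syn, E_syn = 0.5, -10.0             # simulated synaptic conductance, reversal
dt = 0.1                              # ms

def run(clamp_on, t_end=200.0):
    V = E_leak
    trace = []
    t = 0.0
    while t < t_end:
        # Dynamic clamp loop: read V, compute the current the simulated
        # conductance would carry at this V, and inject it.
        s = math.exp(-((t - 100.0) / 20.0) ** 2)  # assumed gating waveform
        I_inj = -g_syn * s * (V - E_syn) if clamp_on else 0.0
        dV = (-g_leak * (V - E_leak) + I_inj) / C
        V += dt * dV
        trace.append(V)
        t += dt
    return trace

control = run(False)
clamped = run(True)
print(max(control), max(clamped))
```

The point of the method is that the injected current depends on the measured voltage, so the cell behaves as if it actually possessed the simulated conductance, unlike a fixed current injection.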
24

Lindsey, B. G., L. S. Segers, and R. Shannon. "Functional associations among simultaneously monitored lateral medullary respiratory neurons in the cat. II. Evidence for inhibitory actions of expiratory neurons." Journal of Neurophysiology 57, no. 4 (April 1, 1987): 1101–17. http://dx.doi.org/10.1152/jn.1987.57.4.1101.

Abstract:
Arrays of extracellular electrodes were used to monitor simultaneously several (2-8) respiratory neurons in the lateral medulla of anesthetized, paralyzed, bilaterally vagotomized, artificially ventilated cats. Efferent phrenic nerve activity was also recorded. The average discharge rate as a function of time in the respiratory cycle was determined for each neuron. Most cells were tested for spinal or vagal axonal projections using antidromic stimulation methods. Cross-correlational methods were used to analyze spike trains of 480 cell pairs. Each pair included at least one neuron most active during the expiratory phase. All simultaneously recorded neurons were located in the same side of the brain stem. Twenty-six percent (33/129) of the expiratory (E) neuron pairs exhibited short time scale correlations indicative of paucisynaptic interactions or shared inputs, whereas 8% (27/351) of the pairs consisting of an E neuron and an inspiratory (I) cell were similarly correlated. Evidence for several inhibitory actions of E neurons was found: 1) inhibition of I neurons by E neurons with both decrementing (DEC) and augmenting (AUG) firing patterns; 2) inhibition of E-DEC and E-AUG neurons by E-DEC cells; 3) inhibition of E-DEC and E-AUG neurons by E-AUG neurons; and 4) inhibition of E-DEC neurons by tonic I-E phase-spanning cells. Because several cells were recorded simultaneously, direct evidence for concurrent parallel and serial inhibitory processes was also obtained. The results suggest and support several hypotheses for mechanisms that may help to generate and control the pattern and coordination of respiratory motoneuron activities.
25

Nagai, T., H. Katayama, K. Aihara, and T. Yamamoto. "Pruning of rat cortical taste neurons by an artificial neural network model." Journal of Neurophysiology 74, no. 3 (September 1, 1995): 1010–19. http://dx.doi.org/10.1152/jn.1995.74.3.1010.

Abstract:
1. Taste qualities are believed to be coded in the activity of ensembles of taste neurons. However, it is not clear whether all neurons are equally responsible for coding. To clarify the point, the relative contribution of each taste neuron to coding needs to be assessed. 2. We constructed simple three-layer neural networks with input units representing cortical taste neurons of the rat. The networks were trained by the back-propagation learning algorithm to classify the neural response patterns to the basic taste stimuli (sucrose, HCl, quinine hydrochloride, and NaCl). The networks had four output units representing the basic taste qualities, the values of which provide a measure for similarity of test stimuli (salts, tartaric acid, and umami substances) to the basic taste stimuli. 3. Trained networks discriminated the response patterns to the test stimuli in a plausible manner in light of previous physiological and psychological experiments. Profiles of output values of the networks paralleled those of across-neuron correlations with respect to the highest or second-highest values in the profiles. 4. We evaluated relative contributions of input units to the taste discrimination of the network by examining their significance Sj, which is defined as the sum of the absolute values of the connection weights from the jth input unit to the hidden layer. When the input units with weaker connection weights (e.g., 15 of 39 input units) were "pruned" from the trained network, the ability of the network to discriminate the basic taste qualities as well as other test stimuli was not greatly affected. On the other hand, the taste discrimination of the network progressively deteriorated much more rapidly with pruning of input units with stronger connection weights. 5. These results suggest that cortical taste neurons differentially contribute to the coding of taste qualities. 
The pruning technique may enable the evaluation of a given taste neuron in terms of its relative contribution to the coding, with Sj providing a quantitative measure for such evaluation.
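The significance measure Sj (sum of absolute connection weights from input unit j to the hidden layer) and the pruning of the weakest inputs can be sketched directly. The weights below are random placeholders standing in for a trained network, an assumption made purely for illustration.

```python
import random

random.seed(0)
n_inputs, n_hidden = 10, 4
# Hypothetical trained input-to-hidden weight matrix (random here).
W = [[random.gauss(0, 1) for _ in range(n_hidden)] for _ in range(n_inputs)]

# Significance of input unit j: sum of absolute weights to the hidden layer.
S = [sum(abs(w) for w in W[j]) for j in range(n_inputs)]

# Prune the k inputs with the smallest S_j by zeroing their outgoing weights.
k = 3
pruned = sorted(range(n_inputs), key=lambda j: S[j])[:k]
for j in pruned:
    W[j] = [0.0] * n_hidden

kept = [j for j in range(n_inputs) if j not in pruned]
# Every kept input is at least as significant as every pruned one.
assert min(S[j] for j in kept) >= max(S[j] for j in pruned)
print("pruned inputs:", sorted(pruned))
```

In the study, the network's classification accuracy is re-measured after each such pruning step; accuracy falling faster when high-Sj units are removed is what supports the differential-contribution conclusion.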
26

Koyuncu, İsmail, İbrahim Şahin, Clay Gloster, and Namık Kemal Sarıtekin. "A Neuron Library for Rapid Realization of Artificial Neural Networks on FPGA: A Case Study of Rössler Chaotic System." Journal of Circuits, Systems and Computers 26, no. 01 (October 4, 2016): 1750015. http://dx.doi.org/10.1142/s0218126617500153.

Abstract:
Artificial neural networks (ANNs) are implemented in hardware when software implementations are inadequate in terms of performance. Implementing an ANN as hardware without using design automation tools is a time-consuming process. On the other hand, this process can be automated using pre-designed neurons. Thus, in this work, several artificial neural cells were designed and implemented to form a library of neurons for rapid realization of ANNs on FPGA-based embedded systems. The library contains a total of 60 different neurons, two-, four- and six-input, biased and non-biased, each having 10 different activation functions. The neurons are highly pipelined and were designed to be connected to each other like Lego pieces. Chip statistics of the neurons showed that, depending on the type of the neuron, about 25 selected neurons can fit into the smallest Virtex-6 chip, and an ANN formed using the neurons can be clocked at up to 576.89 MHz. An ANN-based Rössler system was constructed to show the effectiveness of using the neurons in rapid realization of ANNs on embedded systems. Our experiments with the neurons showed that, using these neurons, ANNs can rapidly be implemented as hardware and design time can be significantly reduced.
27

Cazé, Romain D., and Marcel Stimberg. "Dendritic neurons can perform linearly separable computations with low resolution synaptic weights." F1000Research 9 (September 28, 2020): 1174. http://dx.doi.org/10.12688/f1000research.26486.1.

Abstract:
In theory, neurons modelled as single layer perceptrons can implement all linearly separable computations. In practice, however, these computations may require arbitrarily precise synaptic weights. This is a strong constraint since both biological neurons and their artificial counterparts have to cope with limited precision. Here, we explore how non-linear processing in dendrites helps overcome this constraint. We start by finding a class of computations which requires increasing precision with the number of inputs in a Perceptron and show that it can be implemented without this constraint in a neuron with sub-linear dendritic subunits. Then, we complement this analytical study by a simulation of a biophysical neuron model with two passive dendrites and a soma, and show that it can implement this computation. This work demonstrates a new role of dendrites in neural computation: by distributing the computation across independent subunits, the same computation can be performed more efficiently with less precise tuning of the synaptic weights. This work not only offers new insight into the importance of dendrites for biological neurons, but also paves the way for new, more efficient architectures of artificial neuromorphic chips.
28

Cazé, Romain D., and Marcel Stimberg. "Neurons with dendrites can perform linearly separable computations with low resolution synaptic weights." F1000Research 9 (January 20, 2021): 1174. http://dx.doi.org/10.12688/f1000research.26486.2.

Abstract:
In theory, neurons modelled as single layer perceptrons can implement all linearly separable computations. In practice, however, these computations may require arbitrarily precise synaptic weights. This is a strong constraint since both biological neurons and their artificial counterparts have to cope with limited precision. Here, we explore how non-linear processing in dendrites helps overcome this constraint. We start by finding a class of computations which requires increasing precision with the number of inputs in a perceptron and show that it can be implemented without this constraint in a neuron with sub-linear dendritic subunits. Then, we complement this analytical study by a simulation of a biophysical neuron model with two passive dendrites and a soma, and show that it can implement this computation. This work demonstrates a new role of dendrites in neural computation: by distributing the computation across independent subunits, the same computation can be performed more efficiently with less precise tuning of the synaptic weights. This work not only offers new insight into the importance of dendrites for biological neurons, but also paves the way for new, more efficient architectures of artificial neuromorphic chips.
29

Cazé, Romain D., and Marcel Stimberg. "Neurons with dendrites can perform linearly separable computations with low resolution synaptic weights." F1000Research 9 (April 18, 2021): 1174. http://dx.doi.org/10.12688/f1000research.26486.3.

Abstract:
In theory, neurons modelled as single layer perceptrons can implement all linearly separable computations. In practice, however, these computations may require arbitrarily precise synaptic weights. This is a strong constraint since both biological neurons and their artificial counterparts have to cope with limited precision. Here, we explore how non-linear processing in dendrites helps overcome this constraint. We start by finding a class of computations which requires increasing precision with the number of inputs in a perceptron and show that it can be implemented without this constraint in a neuron with sub-linear dendritic subunits. Then, we complement this analytical study by a simulation of a biophysical neuron model with two passive dendrites and a soma, and show that it can implement this computation. This work demonstrates a new role of dendrites in neural computation: by distributing the computation across independent subunits, the same computation can be performed more efficiently with less precise tuning of the synaptic weights. This work not only offers new insight into the importance of dendrites for biological neurons, but also paves the way for new, more efficient architectures of artificial neuromorphic chips.
30

Aileni, Raluca Maria, Sever Pasca, and Adriana Florescu. "EEG-Brain Activity Monitoring and Predictive Analysis of Signals Using Artificial Neural Networks." Sensors 20, no. 12 (June 12, 2020): 3346. http://dx.doi.org/10.3390/s20123346.

Abstract:
Predictive observation and real-time analysis of the values of biomedical signals and automatic detection of epileptic seizures before onset are beneficial for the development of warning systems for patients because the patient, once informed that an epilepsy seizure is about to start, can take safety measures in useful time. In this article, the Daubechies discrete wavelet transform (DWT) was used, coupled with analysis of the correlations between biomedical signals that measure the electrical activity in the brain by electroencephalogram (EEG), electrical currents generated in muscles by electromyogram (EMG), and heart rate monitored by photoplethysmography (PPG). In addition, we used artificial neural networks (ANN) for automatic detection of epileptic seizures before onset. We analyzed 30 EEG recordings 10 min before a seizure and during the seizure for 30 patients with epilepsy. In this work, we investigated ANN sizes of 10, 50, 100, and 150 neurons, and we found that an ANN with 150 neurons generates excellent performance in comparison to a 10-neuron ANN. However, this analysis requires more time than an ANN with a lower neuron number. For real-time monitoring, the number of neurons should therefore be balanced against the response time and power consumption of wearable devices.
31

Dalhoum, Abdel Latif Abu, and Mohammed Al-Rawi. "High-Order Neural Networks are Equivalent to Ordinary Neural Networks." Modern Applied Science 13, no. 2 (January 27, 2019): 228. http://dx.doi.org/10.5539/mas.v13n2p228.

Abstract:
Equivalence of computational systems can assist in obtaining abstract systems, and thus enable better understanding of issues related to their design and performance. For more than four decades, artificial neural networks have been used in many scientific applications to solve classification problems as well as other problems. Since the time of their introduction, the multilayer feedforward neural network referred to as the Ordinary Neural Network (ONN), which contains only summation activation (Sigma) neurons, and the multilayer feedforward High-order Neural Network (HONN), which contains Sigma neurons and product activation (Pi) neurons, have been treated in the literature as different entities. In this work, we studied whether HONNs are mathematically equivalent to ONNs. We have proved that every HONN can be converted to some equivalent ONN. In most cases, one just needs to modify the neuronal transfer function of the Pi neuron to convert it to a Sigma neuron. The theorems that we have derived clearly show that the original HONN and its corresponding equivalent ONN give exactly the same output, which means they can both be used to perform exactly the same functionality. We also derived equivalence theorems for several other non-standard neural networks, for example, recurrent HONNs and HONNs with translated multiplicative neurons. This work rejects the hypothesis that HONNs and ONNs are different entities, a conclusion that might initiate a new research frontier in artificial neural network research.
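One special case of the claimed equivalence is easy to check numerically: for strictly positive inputs, a Pi neuron computing a power-weighted product equals a Sigma neuron whose inputs are log-transformed and whose transfer function is the exponential. This sketch assumes positive inputs; the paper's general construction covers more cases.

```python
import math
import random

def pi_neuron(x, w):
    # High-order (Pi) unit: product of power-weighted inputs.
    out = 1.0
    for xi, wi in zip(x, w):
        out *= xi ** wi
    return out

def sigma_neuron(x, w):
    # Ordinary (Sigma) unit on log-preprocessed inputs with exp transfer:
    # exp(sum_i w_i * ln x_i) == prod_i x_i ** w_i for all x_i > 0.
    return math.exp(sum(wi * math.log(xi) for xi, wi in zip(x, w)))

random.seed(42)
x = [random.uniform(0.1, 5.0) for _ in range(6)]
w = [random.uniform(-2.0, 2.0) for _ in range(6)]
assert abs(pi_neuron(x, w) - sigma_neuron(x, w)) <= 1e-9 * abs(pi_neuron(x, w))
print(pi_neuron(x, w))
```

The two functions agree to floating-point precision, so the Pi unit can be replaced by a Sigma unit with a modified transfer function, which is the flavor of conversion the abstract describes.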
32

Zakaria, Mamang, Luther Pagiling, and Wa Ode Siti Nur Alam. "Sistem Penyiraman Otomatis Tanaman Semusim Berbasis Jaringan Saraf Tiruan Multilayer Perceptron." Jurnal Fokus Elektroda : Energi Listrik, Telekomunikasi, Komputer, Elektronika dan Kendali) 7, no. 1 (February 27, 2022): 35. http://dx.doi.org/10.33772/jfe.v7i1.24050.

Abstract:
In general, farmers water plants when certain conditions are met, such as dry soil, no rain, and cold temperatures. One efficient way to control this is to use an artificial neural network-based automatic plant watering system. The purpose of this study was to determine the success of artificial neural networks as decision-makers to water plants automatically. The stages of designing the system were to build the software, including artificial neural network modeling and Arduino microcontroller programming, to water plants automatically with the tool, to evaluate tool performance, and to test the tool in real time. The test results show that the artificial neural network-based automatic plant watering system can water plants according to the given input pattern. The artificial neural network structure obtained has three neurons in the input layer, eight neurons in the hidden layer, and one neuron in the output layer. The system succeeded in automatically watering two areas of land with a success rate of 100%. Keywords: Automatic Watering, Microcontroller, ANN, Annual Crops.
33

Giri, Santosh, and Basanta Joshi. "Multilayer Backpropagation Neural Networks for Implementation of Logic Gates." International Journal of Computer Science & Engineering Survey 12, no. 1 (February 28, 2021): 1–12. http://dx.doi.org/10.5121/ijcses.2021.12101.

Abstract:
An ANN is a computational model composed of several processing elements (neurons) that tries to solve a specific problem. Like the human brain, it provides the ability to learn from experience without being explicitly programmed. This article is based on the implementation of artificial neural networks for logic gates. At first, a 3-layer artificial neural network is designed with 2 input neurons, 2 hidden neurons, and 1 output neuron. After that, the model is trained using a backpropagation algorithm until it satisfies the predefined error criterion (e), which was set to 0.01 in this experiment. The learning rate (α) used for this experiment was 0.01. The NN model produces correct output at iteration (p) = 20000 for the AND, NAND, and NOR gates. For OR and XOR, the correct output is predicted at iterations (p) = 15000 and 80000, respectively.
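A minimal version of this experiment, a 2-2-1 sigmoid network trained by backpropagation on XOR, can be sketched as follows. The learning rate and iteration count here are illustrative and differ from the article's settings, and only a decrease in loss is asserted, since convergence from a random start is not guaranteed.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

random.seed(3)
# 2-2-1 network: input-to-hidden weights w1, hidden biases b1,
# hidden-to-output weights w2, output bias b2.
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b1 = [random.uniform(-1, 1) for _ in range(2)]
w2 = [random.uniform(-1, 1) for _ in range(2)]
b2 = random.uniform(-1, 1)

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR truth table
lr = 0.5

def forward(x):
    h = [sigmoid(sum(w1[i][j] * x[j] for j in range(2)) + b1[i]) for i in range(2)]
    y = sigmoid(sum(w2[i] * h[i] for i in range(2)) + b2)
    return h, y

def mse():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

initial_loss = mse()
for _ in range(20000):
    for x, t in data:
        h, y = forward(x)
        # Error signal for the output sigmoid (constant factors folded into lr).
        dy = (y - t) * y * (1 - y)
        for i in range(2):
            # Backpropagate through hidden unit i before updating w2[i].
            dh = dy * w2[i] * h[i] * (1 - h[i])
            w2[i] -= lr * dy * h[i]
            for j in range(2):
                w1[i][j] -= lr * dh * x[j]
            b1[i] -= lr * dh
        b2 -= lr * dy
final_loss = mse()
print(initial_loss, final_loss)
```

Single-gate targets such as AND or OR converge much faster than XOR because they are linearly separable, matching the iteration-count ordering reported in the abstract.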
34

Aziz, Mustafa Nizamul. "A Review on Artificial Neural Networks and its’ Applicability." Bangladesh Journal of Multidisciplinary Scientific Research 2, no. 1 (June 11, 2020): 48–51. http://dx.doi.org/10.46281/bjmsr.v2i1.609.

Abstract:
The field of artificial neural networks (ANN) started from humble beginnings in the 1950s but gained attention in the 1980s. An ANN tries to emulate the neural structure of the brain, which consists of many thousands of cells, neurons, interconnected in a large network. This is done through artificial neurons, handling the input and output, and connecting to other neurons, creating a large network. The potential for artificial neural networks is considered to be huge; today there are several different uses for ANN, ranging from academic research in such fields as mathematics and medicine to business-based purposes and sports prediction. The purpose of this paper is to give an overview of artificial neural networks and to show their applicability. Document analysis was used as the data collection method. The paper covers network structures, steps for constructing an ANN, architectures, and learning algorithms.
35

Pi, Ziqi, and Giovanni Zocchi. "Critical behavior in the artificial axon." Journal of Physics Communications 5, no. 12 (December 1, 2021): 125013. http://dx.doi.org/10.1088/2399-6528/ac43d0.

Abstract:
The Artificial Axon is a unique synthetic system, based on biomolecular components, which supports action potentials. Here we examine, experimentally and theoretically, the properties of the threshold for firing in this system. As in real neurons, this threshold corresponds to the critical point of a saddle-node bifurcation. We measure the delay time for firing as a function of the distance to threshold, recovering the expected scaling exponent of −1/2. We introduce a minimal model of the Morris-Lecar type, validate it on the experiments, and use it to extend analytical results obtained in the limit of ‘fast’ ion channel dynamics. In particular, we discuss the dependence of the firing threshold on the number of channels. The Artificial Axon is a simplified system, an Ur-neuron, relying on only one ion channel species for functioning. Nonetheless, universal properties such as the action potential behavior near threshold are the same as in real neurons. Thus we may think of the Artificial Axon as a cell-free breadboard for electrophysiology research.
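The −1/2 scaling of the firing delay near a saddle-node bifurcation can be checked numerically on the normal form dx/dt = a + x², a generic stand-in for the Morris-Lecar-type dynamics (the parameters and thresholds below are arbitrary choices, not values from the paper).

```python
import math

def escape_time(a, x0=-10.0, x_max=10.0, dt=1e-3):
    # Saddle-node normal form dx/dt = a + x^2: just above threshold (a > 0)
    # the trajectory lingers near x = 0 before escaping ("firing").
    x, t = x0, 0.0
    while x < x_max:
        x += dt * (a + x * x)
        t += dt
    return t

# The delay scales as a**(-1/2): the log-log slope should be close to -0.5.
a1, a2 = 1e-2, 1e-4
t1, t2 = escape_time(a1), escape_time(a2)
slope = math.log(t1 / t2) / math.log(a1 / a2)
print(t1, t2, slope)
```

Analytically the escape time approaches π/√a as the integration limits go to ±∞, so decreasing a by a factor of 100 should lengthen the delay by roughly a factor of 10, which the simulation reproduces.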
36

Zhou, Lei, Jian Hui Wu, Guo Li Wang, Yu Su, and Guo Bin Zhang. "Modeling Research on Hospitalization Cost in Patients with Cerebral Infarction Based on BP Neural Network." Applied Mechanics and Materials 50-51 (February 2011): 944–48. http://dx.doi.org/10.4028/www.scientific.net/amm.50-51.944.

Abstract:
The aim was to establish a model of hospitalization cost in patients with cerebral infarction based on an artificial neural network, and to set the appropriate parameters. Four algorithms (LM, BR, OSS, SCG) were used to build models with 3, 5, 10, 15, and 20 neurons in the hidden layer, and the fitting ability and generalization ability of the models were compared. The optimal model, obtained using the OSS algorithm, involves a single hidden layer, 8 input neurons, and 15 hidden neurons.
37

XU, JIAN-XIN, and XIN DENG. "STUDY ON CHEMOTAXIS BEHAVIORS OF C. ELEGANS USING DYNAMIC NEURAL NETWORK MODELS: FROM ARTIFICIAL TO BIOLOGICAL MODEL." Journal of Biological Systems 18, spec01 (October 2010): 3–33. http://dx.doi.org/10.1142/s0218339010003597.

Abstract:
With the anatomical understanding of the neural connections of the nematode Caenorhabditis elegans (C. elegans), its chemotaxis behaviors are investigated in this paper through association with the biological nerve connections. The chemotaxis behaviors include food attraction, toxin avoidance and mixed behaviors (finding food and avoiding toxin concurrently). Eight dynamic neural network (DNN) models, two artificial models and six biological models, are used to learn and implement the chemotaxis behaviors of C. elegans. The eight DNN models are classified into two classes with either a single sensory neuron or dual sensory neurons. The DNN models are trained to learn certain switching logics according to different chemotaxis behaviors using the real time recurrent learning algorithm (RTRL). First we show the good performance of the two artificial models in food attraction, toxin avoidance and the mixed behaviors. Next, six neural wire diagrams from sensory neurons to motor neurons are extracted from the anatomical nerve connections of C. elegans. Then the extracted biological wire diagrams are trained using RTRL directly, which is the first time in this field of research that chemotaxis behaviors have been associated with biological neural models. An interesting discovery is the need for a memory neuron when single-sensory models are used, which is consistent with the anatomical understanding of a specific neuron that functions as a memory. In the simulations, the chemotaxis behaviors of C. elegans can be depicted by several switching logical functions which can be learned by RTRL for both artificial and biological models.
38

Yuan, Jia-Hui, Xiao-Kuo Yang, Bin Zhang, Ya-Bo Chen, Jun Zhong, Bo Wei, Ming-Xu Song, and Huan-Qing Cui. "Activation function and computing performance of spin neuron driven by magnetic field and strain." Acta Physica Sinica 70, no. 20 (2021): 207502. http://dx.doi.org/10.7498/aps.70.20210611.

Abstract:
The spin neuron is an emerging artificial neural device which has many advantages such as ultra-low power consumption, strong nonlinearity, and high integration. Besides, it has the ability to remember and calculate at the same time. So it is seen as a suitable and excellent candidate for the new generation of neural networks. In this paper, a spin neuron driven by magnetic field and strain is proposed. The micromagnetic model of the device is realized using the OOMMF micromagnetic simulation software, and the numerical model of the device is also established using the LLG equation. More importantly, a three-layer neural network is composed of spin neurons constructed respectively using three materials (Terfenol-D, FeGa, Ni). It is used to study the activation functions and the ability to recognize the MNIST handwritten digit dataset. Results show that the spin neuron can successfully achieve the random magnetization switching to simulate the activation behavior of the biological neuron. Moreover, the results show that if the ranges of the input magnetic fields are different, the three materials' neurons can all reach the saturation accuracy. It is expected to replace the traditional CMOS neuron. And the overall power consumption of intelligent computing can be further reduced by using appropriate materials. If we input the magnetic fields in the same range, the recognition speed of the spin neuron made of Ni is the slowest of the three materials. The results can establish a theoretical foundation for the design and the applications of new artificial neural networks and intelligent circuits.
39

Moslehi, Saba, Conor Rowland, Julian H. Smith, William J. Watterson, David Miller, Cristopher M. Niell, Benjamín J. Alemán, Maria-Thereza Perez, and Richard P. Taylor. "Controlled assembly of retinal cells on fractal and Euclidean electrodes." PLOS ONE 17, no. 4 (April 6, 2022): e0265685. http://dx.doi.org/10.1371/journal.pone.0265685.

Abstract:
Controlled assembly of retinal cells on artificial surfaces is important for fundamental cell research and medical applications. We investigate fractal electrodes with branches of vertically-aligned carbon nanotubes and silicon dioxide gaps between the branches that form repeating patterns spanning from micro- to milli-meters, along with single-scaled Euclidean electrodes. Fluorescence and electron microscopy show neurons adhere in large numbers to branches while glial cells cover the gaps. This ensures neurons will be close to the electrodes’ stimulating electric fields in applications. Furthermore, glia won’t hinder neuron-branch interactions but will be sufficiently close for neurons to benefit from the glia’s life-supporting functions. This cell ‘herding’ is adjusted using the fractal electrode’s dimension and number of repeating levels. We explain how this tuning facilitates substantial glial coverage in the gaps which fuels neural networks with small-world structural characteristics. The large branch-gap interface then allows these networks to connect to the neuron-rich branches.
40

Fraile, Alberto, Emmanouil Panagiotakis, Nicholas Christakis, and Luis Acedo. "Cellular Automata and Artificial Brain Dynamics." Mathematical and Computational Applications 23, no. 4 (November 16, 2018): 75. http://dx.doi.org/10.3390/mca23040075.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Brain dynamics, neuron activity, information transfer in brains, etc., are a vast field where a large number of questions remain unsolved. Nowadays, computer simulation is playing a key role in the study of such an immense variety of problems. In this work, we explored the possibility of studying brain dynamics using cellular automata, more precisely the famous Game of Life (GoL). The model has some important features (i.e., pseudo-criticality, 1/f noise, universal computing), which represent good reasons for its use in brain dynamics modelling. We have also considered that the model maintains sufficient flexibility. For instance, the timestep is arbitrary, as are the spatial dimensions. As first steps in our study, we used the GoL to simulate the evolution of several neurons (i.e., a statistically significant set, typically a million neurons) and their interactions with the surrounding ones, as well as signal transfer in some simple scenarios. The way that signals (or life) propagate across the grid was described, along with a discussion on how this model could be compared with brain dynamics. Further work and variations of the model were also examined.
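As an illustration of the kind of cellular-automaton experiment described, the sketch below implements one synchronous Game of Life update on a toroidal grid and seeds it with a glider, a standard pattern whose diagonal propagation loosely resembles a travelling signal; the grid size and pattern are arbitrary choices, not the paper's setup.

```python
import numpy as np

def gol_step(grid):
    """One synchronous Game of Life update on a toroidal (wrap-around) grid."""
    # Count the eight neighbours of every cell via wrap-around shifts.
    n = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))
    # Birth with exactly 3 neighbours; survival with 2 or 3.
    return ((n == 3) | ((grid == 1) & (n == 2))).astype(np.uint8)

# Seed a glider; it translates by (+1, +1) every four generations.
grid = np.zeros((8, 8), dtype=np.uint8)
for r, c in [(1, 2), (2, 3), (3, 1), (3, 2), (3, 3)]:
    grid[r, c] = 1
for _ in range(4):
    grid = gol_step(grid)
```

After four updates the glider has shifted one cell down and one cell right, its five live cells intact, which is the "signal propagation" behaviour the abstract alludes to.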
41

Vidybida, Alexander. "Relation Between Firing Statistics of Spiking Neuron with Instantaneous Feedback and Without Feedback." Fluctuation and Noise Letters 14, no. 04 (November 9, 2015): 1550034. http://dx.doi.org/10.1142/s0219477515500340.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
We consider a class of spiking neuron models defined by a set of conditions which are typical for basic threshold-type models, like the leaky integrate-and-fire or binding neuron model, and also for some artificial neurons. A neuron is fed with a point renewal process. A relation between three probability density functions (PDFs) is derived: (i) the PDF of input interspike intervals (ISIs), (ii) the PDF of output ISIs of a neuron with feedback, and (iii) the PDF for that same neuron without feedback. This allows any one of the three PDFs to be calculated provided the remaining two are given. A similar relation between the corresponding means and variances is derived. The relations are checked exactly for the binding neuron model stimulated with a Poisson stream.
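For a concrete feel of the threshold-type setting studied here, the following is a minimal discrete-time leaky integrate-and-fire sketch driven by a Bernoulli (discretised Poisson) input stream, collecting output interspike intervals. All parameter values are illustrative, and the paper's instantaneous feedback line is deliberately omitted.

```python
import random

def lif_output_isis(input_rate=0.2, n_steps=50000, tau=20.0, w=0.6,
                    threshold=1.0, seed=0):
    """Discrete-time leaky integrate-and-fire neuron fed by a Bernoulli
    (discretised Poisson) spike train; returns output interspike intervals
    in time steps. Parameters are illustrative, not taken from the paper."""
    rng = random.Random(seed)
    v, last_spike, isis = 0.0, None, []
    for t in range(n_steps):
        v *= (1.0 - 1.0 / tau)            # membrane leak
        if rng.random() < input_rate:     # an input spike arrives
            v += w
        if v >= threshold:                # threshold crossed: fire and reset
            if last_spike is not None:
                isis.append(t - last_spike)
            last_spike, v = t, 0.0
    return isis
```

Histogramming the returned intervals yields an empirical estimate of the output ISI density, the quantity the paper's relations connect to the input ISI density.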
42

Ott, Mackenzie M., Sarah C. Nuding, Lauren S. Segers, Russell O'Connor, Kendall F. Morris, and Bruce G. Lindsey. "Central chemoreceptor modulation of breathing via multipath tuning in medullary ventrolateral respiratory column circuits." Journal of Neurophysiology 107, no. 2 (January 15, 2012): 603–17. http://dx.doi.org/10.1152/jn.00808.2011.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Ventrolateral respiratory column (VRC) circuits that modulate breathing in response to changes in central chemoreceptor drive are incompletely understood. We employed multielectrode arrays and spike train correlation methods to test predictions of the hypothesis that pre-Bötzinger complex (pre-BötC) and retrotrapezoid nucleus/parafacial (RTN-pF) circuits cooperate in chemoreceptor-evoked tuning of ventral respiratory group (VRG) inspiratory neurons. Central chemoreceptors were selectively stimulated by injections of CO2-saturated saline into the vertebral artery in seven decerebrate, vagotomized, neuromuscularly blocked, and artificially ventilated cats. Among sampled neurons in the Bötzinger complex (BötC)-to-VRG region, 70% (161 of 231) had a significant change in firing rate after chemoreceptor stimulation, as did 70% (101 of 144) of the RTN-pF neurons. Other responsive neurons (24 BötC-VRG; 11 RTN-pF) had a change in the depth of respiratory modulation without a significant change in average firing rate. Seventy BötC-VRG chemoresponsive neurons triggered 189 offset-feature correlograms (96 peaks; 93 troughs) with at least one responsive BötC-VRG cell. Functional input from at least one RTN-pF cell could be inferred for 45 BötC-VRG neurons (19%). Eleven RTN-pF cells were correlated with more than one BötC-VRG target neuron, providing evidence for divergent connectivity. Thirty-seven RTN-pF neurons, 24 of which were chemoresponsive, were correlated with at least one chemoresponsive BötC-VRG neuron. Correlation linkage maps and spike-triggered averages of phrenic nerve signals suggest transmission of chemoreceptor drive via a multipath network architecture: RTN-pF modulation of pre-BötC-VRG rostral-to-caudal excitatory inspiratory neuron chains is tuned by feedforward and recurrent inhibition from other inspiratory neurons and from “tonic” expiratory neurons.
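The spike train correlation method referred to above can be sketched as a cross-correlogram: a histogram of target-neuron spike times relative to each reference spike. The spike times below are synthetic (in ms) and purely illustrative.

```python
import numpy as np

def cross_correlogram(ref, target, max_lag=50.0, bin_size=1.0):
    """Histogram of target-neuron spike times relative to each reference
    spike. A peak at positive lag is consistent with an excitatory linkage,
    a trough with inhibition (times in ms; illustrative sketch only)."""
    target = np.asarray(target, dtype=float)
    lags = []
    for t in ref:
        near = target[(target >= t - max_lag) & (target <= t + max_lag)]
        lags.extend(near - t)
    bins = np.arange(-max_lag, max_lag + bin_size, bin_size)
    counts, edges = np.histogram(lags, bins=bins)
    return counts, edges

ref = [100.0, 200.0, 300.0]       # reference-neuron spike times (ms)
target = [105.0, 205.0, 305.0]    # target fires 5 ms after each
counts, edges = cross_correlogram(ref, target)
```

Here the histogram peaks at a +5 ms lag; in the study such offset peaks and troughs were the evidence for paucisynaptic connections or shared inputs.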
43

Gomez, Natalia, Shaochen Chen, and Christine E. Schmidt. "Polarization of hippocampal neurons with competitive surface stimuli: contact guidance cues are preferred over chemical ligands." Journal of The Royal Society Interface 4, no. 13 (November 14, 2006): 223–33. http://dx.doi.org/10.1098/rsif.2006.0171.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Neuronal behaviour is profoundly influenced by extracellular stimuli in many developmental and regeneration processes. Understanding neuron responses and integration of environmental signals could impact the design of successful therapies for neurodegenerative diseases and nerve injuries. Here, we have investigated the influence of localized extracellular cues on polarization (i.e. axon formation) of hippocampal neurons. Electron-beam lithography, microfabrication techniques and protein immobilization were used to create a unique system that provided simultaneous and independent chemical and physical cues to individual neurons. In particular, we analysed competitive responses between simultaneous stimulation with chemical ligands, including immobilized nerve growth factor and laminin, and contact guidance cues mediated by surface topography (i.e. microchannels). Contact guidance cues were preferred 70% of the time over chemical ligands by neurons extending axons, which suggests a stronger stimulation mechanism triggered by topography. This investigation contributes to the understanding of neuronal behaviour on artificial substrates, which is applicable to the creation of artificial environments for neural engineering applications.
44

Kim, Dong Woo, Young Jae Shin, Kyoung Taik Park, Eung Sug Lee, Jong Hyun Lee, and Myeong Woo Cho. "Prediction of Surface Roughness in High Speed Milling Process Using the Artificial Neural Networks." Key Engineering Materials 364-366 (December 2007): 713–18. http://dx.doi.org/10.4028/www.scientific.net/kem.364-366.713.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The objective of this research was to apply an artificial neural network algorithm to predict the surface roughness in high speed milling operations. Tool length, feed rate, spindle speed, cutting path interval, and run-out were used as the five input neurons, and an artificial neural network model based on the back-propagation algorithm was developed to predict the output neuron, surface roughness. A series of experiments was performed and the results were evaluated. The experimental results showed that the artificial neural network predicted the surface roughness with good accuracy under a variety of combinations of cutting conditions.
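A back-propagation network of the shape described (five input neurons, one output neuron) can be sketched in a few lines. The data below are synthetic stand-ins for the milling measurements, and the hidden-layer size and learning rate are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Five inputs standing in for tool length, feed rate, spindle speed,
# cutting path interval, and run-out; the target is a synthetic, smooth
# "roughness" function, not real measurement data.
X = rng.uniform(0.0, 1.0, size=(200, 5))
y = (0.3 * X[:, 1] + 0.2 * X[:, 2] ** 2 + 0.1 * X[:, 4]).reshape(-1, 1)

W1 = rng.normal(0.0, 0.5, (5, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)   # output neuron
lr = 0.5

for _ in range(2000):                      # plain batch back-propagation
    h = np.tanh(X @ W1 + b1)               # hidden activations
    pred = h @ W2 + b2                     # linear output neuron
    err = pred - y
    dW2 = h.T @ err / len(X); db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)     # chain rule through tanh
    dW1 = X.T @ dh / len(X); db1 = dh.mean(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

mse = float(((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2).mean())
```

On this smooth synthetic target the trained network reaches a small mean squared error, which is the kind of fit the paper reports for real cutting data.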
45

Segers, L. S., R. Shannon, S. Saporta, and B. G. Lindsey. "Functional associations among simultaneously monitored lateral medullary respiratory neurons in the cat. I. Evidence for excitatory and inhibitory actions of inspiratory neurons." Journal of Neurophysiology 57, no. 4 (April 1, 1987): 1078–100. http://dx.doi.org/10.1152/jn.1987.57.4.1078.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Data were obtained from 45 anesthetized (Dial), paralyzed, artificially ventilated, bilaterally vagotomized cats. Arrays of extracellular electrodes were used to monitor simultaneously the activities of lateral medullary respiratory neurons located in the rostral and caudal regions of the ventral respiratory group. The average discharge rate as a function of time in the respiratory cycle was determined for each neuron and concurrent phrenic nerve activity. Most cells were tested for axonal projections to the spinal cord or the ipsilateral vagus nerve using antidromic stimulation techniques. Seven hundred and sixty-one pairs of ipsilateral respiratory neurons that contained at least one neuron whose maximum discharge rate occurred during the inspiratory phase were analyzed by cross-correlation of the simultaneously recorded spike trains. Twenty-three percent of the 410 pairs of inspiratory (I) neurons showed short time scale correlations indicative of functional association due to paucisynaptic connections or shared inputs. Eight percent of the 351 pairs composed of an I cell and an expiratory (E) neuron were correlated. We found evidence for excitation of both bulbospinal I neurons and I cells that were not antidromically activated by stimulation of the spinal cord and vagus nerve (NAA neurons) by NAA I cells. We also obtained data suggesting inhibitory actions of cells whose maximum discharge rate occurred in the first half of the I phase (I-DEC neurons). These actions included inhibition of other I-DEC neurons, inhibition of cells whose greatest firing rate occurred in the last half of the I phase (I-AUG neurons), inhibition of E-DEC neurons, and inhibition of E-AUG cells. Sixty-two percent (31/50) of the correlations that could be interpreted as evidence for an excitatory or inhibitory paucisynaptic connection were detected in pairs composed of a caudal and a rostral ventral respiratory group neuron. 
Eighty-eight percent (14/16) of proposed intergroup excitatory connections involved a projection from the rostral neuron of the pair to the caudal cell, whereas 73% (11/15) of proposed inhibitory connections involved a caudal-to-rostral projection. These results support and suggest several hypotheses for mechanisms that may help to control the development of augmenting activity in and the timing of each phase of the respiratory cycle.
46

Garza-González, M. T., M. M. Alcalá-Rodríguez, R. Pérez-Elizondo, F. J. Cerino-Córdova, R. B. Garcia-Reyes, J. A. Loredo-Medrano, and E. Soto-Regalado. "Artificial Neural Network for predicting biosorption of methylene blue by Spirulina sp." Water Science and Technology 63, no. 5 (March 1, 2011): 977–83. http://dx.doi.org/10.2166/wst.2011.279.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
An artificial neural network (ANN) was used to predict the biosorption of methylene blue (MB) on Spirulina sp. biomass. Genetic and simulated annealing algorithms were tested with different numbers of neurons in the hidden layers to determine the optimal neurons in the ANN architecture. In addition, sensitivity analyses were conducted with the optimised ANN architecture to establish which input variables (temperature, pH, and biomass dose) significantly affect the predicted data (removal efficiency or biosorption capacity). A number of isotherm models were also compared with the optimised ANN architecture. The removal efficiency or the biosorption capacity of MB on Spirulina sp. biomass was adequately predicted with the optimised ANN architecture by using the genetic algorithm with three input neurons and 20 neurons in each of the two hidden layers. Sensitivity analyses demonstrated that initial pH and biomass dose have a strong influence on the predicted removal efficiency and biosorption capacity, respectively. When supplying two variables to the genetic algorithm, initial pH and biomass dose improved the prediction of the output neuron (biosorption capacity or removal efficiency). The optimised ANN architecture predicted the equilibrium data 5,000 times better than the best isotherm model. These results demonstrate that ANNs can be an effective way of predicting the experimental biosorption data of MB on Spirulina sp. biomass.
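The idea of using a genetic algorithm to choose the number of hidden neurons can be sketched with a toy fitness function; here the "validation score" is a stand-in peaked at 20 neurons, mirroring the reported optimum, and no network is actually trained.

```python
import random

def fitness(n_neurons):
    """Stand-in for a validation score of an ANN with n_neurons per hidden
    layer; peaked at 20 purely for illustration (no network is trained)."""
    return -abs(n_neurons - 20)

def genetic_search(pop_size=12, generations=30, lo=1, hi=64, seed=1):
    """Tiny genetic algorithm over a single integer hyperparameter."""
    rng = random.Random(seed)
    pop = [rng.randint(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = (a + b) // 2                  # arithmetic crossover
            if rng.random() < 0.3:                # occasional mutation
                child = min(hi, max(lo, child + rng.randint(-4, 4)))
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = genetic_search()
```

In practice the fitness would be the validation error of a trained ANN, and the chromosome would also encode the other architecture choices being searched.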
47

Piasecki, Adam, Jakub Jurasz, and Rajmund Skowron. "Application of artificial neural networks (ANN) in Lake Drwęckie water level modelling." Limnological Review 15, no. 1 (March 1, 2015): 21–30. http://dx.doi.org/10.2478/limre-2015-0003.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This paper presents an attempt to model water-level fluctuations in a lake based on artificial neural networks. The subject of research was the water level in Lake Drwęckie over the period 1980-2012. For modelling purposes, meteorological data from the weather station in Olsztyn were used. As a result of the research conducted, the model M_Meteo_Lag_3 was identified as the most accurate. This artificial neural network model has seven input neurons, four neurons in the hidden layer, and one neuron in the output layer. Meteorological parameters (minimum, maximum, and mean temperature, and humidity) and values of the dependent variable from the three preceding months were used as explanatory variables. The paper claims that artificial neural networks performed well in modelling the analysed phenomenon. In most cases (55%) the modelled value differed from the real value by an average of 7.25 cm; only in two cases did a substantial error occur, of 33 and 38 cm.
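Using values from three earlier months as inputs amounts to building a lagged design matrix. A small helper, shown here with a toy series rather than the lake data, could look like this:

```python
import numpy as np

def lagged_features(series, n_lags=3):
    """Row t of X holds the n_lags previous values of the series
    (t-1, t-2, ..., t-n_lags); y holds the value to predict at t."""
    series = np.asarray(series, dtype=float)
    X = np.column_stack([series[n_lags - lag : len(series) - lag]
                         for lag in range(1, n_lags + 1)])
    y = series[n_lags:]
    return X, y

# Toy monthly series: each row of X would feed three of the network's
# seven input neurons (the rest being the meteorological inputs).
X, y = lagged_features([1.0, 2.0, 3.0, 4.0, 5.0])
```

Each row pairs three consecutive past values with the next value to predict, which is how autoregressive inputs enter an otherwise static feed-forward network.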
48

Muresan, Raul C., and Cristina Savin. "Resonance or Integration? Self-Sustained Dynamics and Excitability of Neural Microcircuits." Journal of Neurophysiology 97, no. 3 (March 2007): 1911–30. http://dx.doi.org/10.1152/jn.01043.2006.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
We investigated spontaneous activity and excitability in large networks of artificial spiking neurons. We compared three different spiking neuron models: integrate-and-fire (IF), regular-spiking (RS), and resonator (RES). First, we show that different models have different frequency-dependent response properties, yielding large differences in excitability. Then, we investigate the responsiveness of these models to a single afferent inhibitory/excitatory spike and calibrate the total synaptic drive such that they would exhibit similar peaks of the postsynaptic potentials (PSP). Based on the synaptic calibration, we build large microcircuits of IF, RS, and RES neurons and show that the resonance property favors homeostasis and self-sustainability of the network activity. On the other hand, integration produces instability while it endows the network with other useful properties, such as responsiveness to external inputs. We also investigate other potential sources of stable self-sustained activity and their relation to the membrane properties of neurons. We conclude that resonance and integration at the neuron level might interact in the brain to promote stability as well as flexibility and responsiveness to external input and that membrane properties, in general, are essential for determining the behavior of large networks of neurons.
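The frequency-dependent response difference between integrators and resonators can be sketched by driving two toy subthreshold membranes with a sinusoid: a leaky RC membrane (integrator) responds best to low frequencies, while a damped oscillator (resonator) prefers drive near its natural frequency. All parameters are illustrative, not those of the IF/RS/RES models studied in the paper.

```python
import numpy as np

def response_amplitude(model, freq_hz, t_max=400.0, dt=0.05):
    """Amplitude of the subthreshold membrane response to a sinusoidal
    drive. 'integrator' is a leaky RC membrane; anything else is a damped
    oscillator with a ~20 Hz preferred frequency. Illustrative parameters."""
    t = np.arange(0.0, t_max, dt)                  # time in ms
    drive = np.sin(2 * np.pi * freq_hz * t / 1000.0)
    vs = np.empty_like(t)
    if model == "integrator":
        v, tau = 0.0, 10.0                         # membrane time constant (ms)
        for k, i in enumerate(drive):
            v += dt * (-v / tau + i)
            vs[k] = v
    else:
        v, u = 0.0, 0.0
        w0, zeta = 2 * np.pi * 0.02, 0.05          # 20 Hz natural frequency
        for k, i in enumerate(drive):
            v += dt * u                            # semi-implicit Euler
            u += dt * (-2 * zeta * w0 * u - w0 ** 2 * v + i)
            vs[k] = v
    return float(np.abs(vs[len(vs) // 2:]).max())  # skip initial transient
```

The integrator's amplitude falls monotonically with frequency, whereas the resonator's peaks near 20 Hz, which is the frequency-dependent excitability contrast the abstract describes.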
49

Rinco de Marques e Carmo, Naiara. "Use of Artificial Neural Networks for GHI Forecasting." Revista Virtual de Química 14, no. 1 (2022): 56–60. http://dx.doi.org/10.21577/1984-6835.20220013.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
50

Ling, Yu. "Research on Loading System of Landing Gear Turning Test Based on Single Neuron PID Controller." Applied Mechanics and Materials 556-562 (May 2014): 2313–16. http://dx.doi.org/10.4028/www.scientific.net/amm.556-562.2313.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This paper designs a single neuron PID controller for the loading system that simulates the load during landing gear turning. As artificial neurons have adaptive, self-learning, and fault-tolerant characteristics, a controller based on a single neuron PID can improve the performance of the loading system. To assess the effectiveness of the controller, a joint simulation between Matlab/Simulink and AMESim was conducted. The obtained results show that, compared with a traditional PID controller, the proposed approach is satisfactory in terms of fast response, small overshoot, high control accuracy, and strong anti-interference ability and robustness.
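A single-neuron PID of the kind described updates its P/I/D weights online with a supervised Hebb rule. The sketch below closes the loop around a toy first-order plant rather than the hydraulic loading dynamics, and all gains and learning rates are illustrative.

```python
def single_neuron_pid(setpoint=1.0, steps=600, K=0.1):
    """Single-neuron adaptive PID on a toy first-order plant
    y[k+1] = 0.9*y[k] + 0.1*u[k] -- an illustrative stand-in, not the
    landing-gear loading system. The neuron's three inputs are the
    incremental P/I/D error terms; weights adapt by a supervised Hebb rule."""
    w = [0.3, 0.3, 0.3]                     # synaptic weights
    eta = [0.4, 0.35, 0.3]                  # per-weight learning rates
    y = u = e1 = e2 = 0.0
    outputs = []
    for _ in range(steps):
        e = setpoint - y
        x = [e, e - e1, e - 2.0 * e1 + e2]  # incremental P/I/D inputs
        s = sum(abs(wi) for wi in w)        # weight normalisation
        u += K * sum(wi / s * xi for wi, xi in zip(w, x))
        w = [wi + et * e * u * xi for wi, et, xi in zip(w, eta, x)]
        e2, e1 = e1, e
        y = 0.9 * y + 0.1 * u               # plant step
        outputs.append(y)
    return outputs
```

Because the weights adapt online, the effective PID gains retune themselves as the error evolves, which is the self-learning property the abstract credits for the improved loading performance.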

To the bibliography