
Journal articles on the topic 'Spiking Neural network simulation'


Consult the top 50 journal articles for your research on the topic 'Spiking Neural network simulation.'


1

Li, Linyang. "Review of common spiking neural network simulation tools." Applied and Computational Engineering 37, no. 1 (2024): 81–85. http://dx.doi.org/10.54254/2755-2721/37/20230474.

Abstract:
As a recognized third-generation neural network model, the spiking neural network offers high concurrency and complexity. Although research on spiking neural networks is still far less mature than on previous generations of neural networks, they excel in performance and energy consumption. In this paper, the common spiking neural network simulation tools are reviewed. The most frequently used and mentioned tools are NEURON, NEST, and Brian. NEURON is more suitable for biologically oriented simulation, pays more attention to biological characteristics, and can support large-scale network simulation; the examples in its official documentation are neural simulations of invertebrates and mammals. Large heterogeneous networks of point neurons, or of neurons with a few compartments, are frequently simulated using NEST. In contrast to models that concentrate on the specific morphological and biophysical characteristics of individual neurons, NEST is appropriate for those that emphasize the dynamics, size, and structure of the nervous system. Brian was originally designed for research and teaching and is well suited as a teaching and presentation tool for simulating classical neural network projects, such as image classification, and observing the effects of different parameters. In addition, CSIM, SPLIT, SpiNNaker, and other tools have their own merits, but because they are referenced less frequently and are less universal, this study does not introduce them in detail.
2

Brette, Romain, and Dan F. M. Goodman. "Vectorized Algorithms for Spiking Neural Network Simulation." Neural Computation 23, no. 6 (2011): 1503–35. http://dx.doi.org/10.1162/neco_a_00123.

Abstract:
High-level languages (Matlab, Python) are popular in neuroscience because they are flexible and accelerate development. However, for simulating spiking neural networks, the cost of interpretation is a bottleneck. We describe a set of algorithms to simulate large spiking neural networks efficiently with high-level languages using vector-based operations. These algorithms constitute the core of Brian, a spiking neural network simulator written in the Python language. Vectorized simulation makes it possible to combine the flexibility of high-level languages with the computational efficiency usually associated with compiled languages.
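
For illustration, the vectorized update the authors describe can be sketched in a few NumPy array operations per time step (an illustrative reconstruction with arbitrary parameter values, not Brian's actual source code):

import numpy as np

# Minimal vectorized leaky integrate-and-fire (LIF) network step.
# All parameter values below are illustrative assumptions.
N, steps = 1000, 500
dt, tau = 1e-3, 20e-3               # time step and membrane time constant (s)
v_th, v_reset = 1.0, 0.0            # spike threshold and reset potential

rng = np.random.default_rng(0)
v = np.zeros(N)                     # membrane potentials, one entry per neuron
w = rng.normal(0.0, 0.05, (N, N))   # dense synaptic weight matrix

for _ in range(steps):
    i_ext = rng.normal(1.2, 0.5, N)   # noisy external drive
    v += dt / tau * (-v + i_ext)      # leaky integration of all neurons at once
    spiked = v >= v_th                # boolean spike vector
    v[spiked] = v_reset               # vectorized reset
    v += w[:, spiked].sum(axis=1)     # deliver every spike with one matrix slice

The point of the sketch is that no Python-level loop over neurons or synapses remains; interpreter overhead is paid once per time step rather than once per neuron.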
3

Rhodes, Oliver, Luca Peres, Andrew G. D. Rowley, et al. "Real-time cortical simulation on neuromorphic hardware." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 378, no. 2164 (2019): 20190160. http://dx.doi.org/10.1098/rsta.2019.0160.

Abstract:
Real-time simulation of a large-scale biologically representative spiking neural network is presented, through the use of a heterogeneous parallelization scheme and SpiNNaker neuromorphic hardware. A published cortical microcircuit model is used as a benchmark test case, representing ≈1 mm² of early sensory cortex, containing 77k neurons and 0.3 billion synapses. This is the first hard real-time simulation of this model, with 10 s of biological simulation time executed in 10 s wall-clock time. This surpasses best-published efforts on HPC neural simulators (3× slowdown) and GPUs running optimized spiking neural network (SNN) libraries (2× slowdown). Furthermore, the presented approach indicates that real-time processing can be maintained with increasing SNN size, breaking the communication barrier incurred by traditional computing machinery. Model results are compared to an established HPC simulator baseline to verify simulation correctness, comparing well across a range of statistical measures. Energy to solution and energy per synaptic event are also reported, demonstrating that the relatively low-tech SpiNNaker processors achieve a 10× reduction in energy relative to modern HPC systems, and comparable energy consumption to modern GPUs. Finally, system robustness is demonstrated through multiple 12 h simulations of the cortical microcircuit, each simulating 12 h of biological time, demonstrating the potential of neuromorphic hardware as a neuroscience research tool for studying complex spiking neural networks over extended time periods. This article is part of the theme issue ‘Harmonizing energy-autonomous computing and intelligence’.
4

Grinblat, Guillermo L., Hernán Ahumada, and Ernesto Kofman. "Quantized state simulation of spiking neural networks." SIMULATION 88, no. 3 (2011): 299–313. http://dx.doi.org/10.1177/0037549711399935.

Abstract:
In this work, we explore the use of quantized state system (QSS) methods in the simulation of networks of spiking neurons. We compare the simulation results obtained by these discrete-event algorithms with the results of the discrete-time methods in use by the neuroscience community. We found that the computational cost of the QSS methods grows almost linearly with the size of the network, while it grows at least quadratically in the discrete-time algorithms. We show that this advantage is mainly due to the fact that QSS methods only perform calculations in the components of the system that experience activity.
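
For context, the first-order QSS scheme replaces the continuous state of each ODE by a quantized companion variable, so that computation is triggered only by quantum crossings (standard QSS formulation paraphrased from the literature, not quoted from this paper):

\dot{x}(t) = f\big(q(t), t\big), \qquad |x(t) - q(t)| \le \Delta Q,

where q(t) is a piecewise-constant, hysteretically quantized version of x(t) with quantum \Delta Q. A simulation event occurs only when x(t) drifts a full quantum away from q(t), which is why quiescent neurons generate essentially no computation.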
5

Cancan, Murat. "On Ev-Degree and Ve-Degree Topological Properties of Tickysim Spiking Neural Network." Computational Intelligence and Neuroscience 2019 (June 2, 2019): 1–7. http://dx.doi.org/10.1155/2019/8429120.

Abstract:
Topological indices are indispensable tools for analyzing networks to understand their underlying topology. The spiking neural network architecture (SpiNNaker or TSNN) is a million-core computing engine which aims at simulating the behavior of aggregates of up to a billion neurons in real time. Tickysim is a timing-based simulator of the interchip interconnection network of the SpiNNaker architecture. The Tickysim spiking neural network is considered a highly symmetrical network class. Classical degree-based topological properties of the Tickysim spiking neural network have recently been determined. The ev-degree and ve-degree are two novel degree concepts recently defined in graph theory, and ev-degree and ve-degree topological indices have been defined in parallel with their classical counterparts. In this study, we investigate the ev-degree and ve-degree topological properties of the Tickysim spiking neural network. These calculations give information about the underlying topology of the Tickysim spiking neural network.
6

Busygin, Alexander N., Andrey N. Bobylev, Alexey A. Gubin, Alexander D. Pisarev, and Sergey Yu. Udovichenko. "Numerical Simulation and Experimental Study of a Hardware Pulse Neural Network with Memristor Synapses." Tyumen State University Herald. Physical and Mathematical Modeling. Oil, Gas, Energy 7, no. 2 (2021): 223–35. http://dx.doi.org/10.21684/2411-7978-2021-7-2-223-235.

Abstract:
This article presents the results of a numerical simulation and an experimental study of the electrical circuit of a hardware spiking perceptron based on a memristor-diode crossbar. This required developing and manufacturing a measuring bench whose electrical circuit consists of the hardware perceptron circuit and an input peripheral electrical circuit that implements the activation functions of the neurons and ensures the operation of the memory matrix in a spiking mode. The authors have studied the operation of the hardware spiking neural network with memristor synapses, in the form of a memory matrix, in the mode of a single-layer perceptron. The perceptron can be considered the first layer of a biomorphic neural network that performs primary processing of incoming information in a biomorphic neuroprocessor. The obtained experimental and simulated learning curves show the expected increase in the proportion of correct classifications with an increasing number of training epochs. The authors demonstrate the generation of a new association during retraining, caused by the presence of new input information. Comparing the results of modeling and of an experiment on training a small neural network with a small crossbar will allow the creation of adequate models of hardware neural networks with a large memristor-diode crossbar. The arrival of new, unknown information at the input of the hardware spiking neural network can be associated with the generation of new associations in the biomorphic neuroprocessor. With further improvement of the neural network, this information will be comprehended and will therefore allow the transition from weak to strong artificial intelligence.
7

Gelen, Aykut Görkem, and Ayten Atasoy. "SPAYK: An environment for spiking neural network simulation." Turkish Journal of Electrical Engineering and Computer Sciences 31, no. 2 (2023): 462–80. http://dx.doi.org/10.55730/1300-0632.3995.

8

Zhang, Hong, and Yu Zhang. "Memory-Efficient Reversible Spiking Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 15 (2024): 16759–67. http://dx.doi.org/10.1609/aaai.v38i15.29616.

Abstract:
Spiking neural networks (SNNs) are potential competitors to artificial neural networks (ANNs) due to their high energy efficiency on neuromorphic hardware. However, SNNs are unfolded over simulation time steps during the training process, so they require much more memory than ANNs, which impedes the training of deeper SNN models. In this paper, we propose the reversible spiking neural network to reduce the memory cost of intermediate activations and membrane potentials during training. First, we extend the reversible architecture along the temporal dimension and propose the reversible spiking block, which can reconstruct the computational graph and recompute all intermediate variables of the forward pass with a reverse process. On this basis, we adapt state-of-the-art SNN models into reversible variants, namely the reversible spiking ResNet (RevSResNet) and the reversible spiking transformer (RevSFormer). Through experiments on static and neuromorphic datasets, we demonstrate that the memory cost per image of our reversible SNNs does not increase with the network depth. On the CIFAR10 and CIFAR100 datasets, our RevSResNet37 and RevSFormer-4-384 achieve comparable accuracies and consume 3.79× and 3.00× less GPU memory per image than their counterparts with roughly identical model complexity and parameters. We believe that this work can lift the memory constraints in SNN training and pave the way for training extremely large and deep SNNs.
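
The generic reversible-coupling pattern behind such blocks can be sketched as follows: the outputs alone suffice to reconstruct the inputs, so intermediate activations need not be stored (a RevNet-style illustration with placeholder sub-modules, not the paper's actual RevSResNet/RevSFormer code):

import numpy as np

def F(x): return np.tanh(x)      # placeholder for a spiking sub-module
def G(x): return 0.5 * x         # placeholder for a second sub-module

def forward(x1, x2):
    y1 = x1 + F(x2)              # additive coupling, step 1
    y2 = x2 + G(y1)              # additive coupling, step 2
    return y1, y2                # x1 and x2 need not be kept in memory

def inverse(y1, y2):
    x2 = y2 - G(y1)              # invert step 2
    x1 = y1 - F(x2)              # invert step 1: inputs recomputed exactly
    return x1, x2

x1, x2 = np.random.randn(4), np.random.randn(4)
r1, r2 = inverse(*forward(x1, x2))
assert np.allclose(x1, r1) and np.allclose(x2, r2)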
9

Ekelmans, Pierre, Nataliya Kraynyukova, and Tatjana Tchumatchenko. "Targeting operational regimes of interest in recurrent neural networks." PLOS Computational Biology 19, no. 5 (2023): e1011097. http://dx.doi.org/10.1371/journal.pcbi.1011097.

Abstract:
Neural computations emerge from local recurrent neural circuits or computational units such as cortical columns that comprise hundreds to a few thousand neurons. Continuous progress in connectomics, electrophysiology, and calcium imaging requires tractable spiking network models that can consistently incorporate new information about the network structure and reproduce the recorded neural activity features. However, for spiking networks, it is challenging to predict which connectivity configurations and neural properties can generate fundamental operational states and specific experimentally reported nonlinear cortical computations. Theoretical descriptions of the computational state of cortical spiking circuits are diverse, including the balanced state, where excitatory and inhibitory inputs balance almost perfectly, or the inhibition-stabilized network (ISN) state, where the excitatory part of the circuit is unstable. It remains an open question whether these states can co-exist with experimentally reported nonlinear computations and whether they can be recovered in biologically realistic implementations of spiking networks. Here, we show how to identify spiking network connectivity patterns underlying diverse nonlinear computations such as XOR, bistability, inhibitory stabilization, supersaturation, and persistent activity. We establish a mapping between the stabilized supralinear network (SSN) and spiking activity which allows us to pinpoint the location in parameter space where these activity regimes occur. Notably, we find that biologically sized spiking networks can show irregular asynchronous activity that does not require a strong excitation-inhibition balance or large feedforward input, and we show that the dynamic firing rate trajectories in spiking networks can be precisely targeted without error-driven training algorithms.
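
For reference, the stabilized supralinear network that the abstract maps onto spiking activity is usually written as the rate dynamics (standard SSN form from the literature, not copied from this paper):

\tau \frac{d\mathbf{r}}{dt} = -\mathbf{r} + k \lfloor W\mathbf{r} + \mathbf{h} \rfloor_{+}^{\,n},

where r is the vector of firing rates, W the connectivity matrix, h the external input, ⌊·⌋₊ denotes rectification, and n > 1 is the supralinear exponent.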
10

Przewlocka-Rus, Dominika, and Tomasz Kryjak. "The bioinspired traffic sign classifier." Bio-Algorithms and Med-Systems 18, no. 1 (2022): 29–38. http://dx.doi.org/10.1515/bams-2021-0159.

Abstract:
Objectives: This paper presents research on developing convolutional spiking neural networks for traffic sign classification. Unlike classical networks, spiking networks reflect the behaviour of biological neurons much more closely by taking into account the time dimension and event-based operation. Spiking networks running on dedicated neuromorphic platforms, such as Intel Loihi, can operate with greater energy efficiency, so they are an interesting approach for embedded solutions. Methods: The convolutional spiking neural networks were designed and simulated with the Nengo and NengoDL libraries for Python. Numerous experiments using the leaky integrate-and-fire (LIF) neuron model were conducted, and training results with different augmentation methods and numbers of time steps for input image presentation were compared. Results: An accuracy of up to 97% on the test set was achieved, depending on the number of time steps for which the input was presented to the SNN. Conclusions: The experiments show that with a simple convolutional spiking neural network one can achieve accuracy comparable to a classical network with the same architecture trained on the same dataset. At the same time, running on dedicated neuromorphic hardware, such a solution should be characterized by low latency and low energy consumption.
11

Lin, Xianghong, Mengwei Zhang, and Xiangwen Wang. "Supervised Learning Algorithm for Multilayer Spiking Neural Networks with Long-Term Memory Spike Response Model." Computational Intelligence and Neuroscience 2021 (November 24, 2021): 1–16. http://dx.doi.org/10.1155/2021/8592824.

Abstract:
As a new brain-inspired computational model of artificial neural networks, spiking neural networks transmit and process information via precisely timed spike trains. Constructing efficient learning methods is a significant research field in spiking neural networks. In this paper, we present a supervised learning algorithm for multilayer feedforward spiking neural networks in which all neurons can fire multiple spikes in all layers. The feedforward network consists of spiking neurons governed by a biologically plausible long-term memory spike response model, in which the effect of earlier spikes on refractoriness is not neglected, so as to incorporate adaptation effects. The gradient descent method is employed to derive the synaptic weight updating rule for learning spike trains. The proposed algorithm is tested and verified on spatiotemporal pattern learning problems, including a set of spike train learning tasks and nonlinear pattern classification problems on four UCI datasets. Simulation results indicate that the proposed algorithm can improve learning accuracy in comparison with other supervised learning algorithms.
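
For orientation, in the generic spike response model that such algorithms differentiate, the membrane potential of neuron i is typically written as (standard SRM form; the paper's long-term memory variant extends the refractory term):

u_i(t) = \sum_{\hat{t}_i} \eta(t - \hat{t}_i) + \sum_{j} w_{ij} \sum_{t_j^{(f)}} \varepsilon(t - t_j^{(f)}),

where the first sum over the neuron's own past firing times \hat{t}_i models refractoriness (the lasting effect of earlier spikes), \varepsilon is the postsynaptic kernel evoked by each presynaptic spike t_j^{(f)}, and gradient descent differentiates a spike-train error with respect to the weights w_{ij}.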
12

Qu, Peng, Youhui Zhang, Xiang Fei, and Weimin Zheng. "High Performance Simulation of Spiking Neural Network on GPGPUs." IEEE Transactions on Parallel and Distributed Systems 31, no. 11 (2020): 2510–23. http://dx.doi.org/10.1109/tpds.2020.2994123.

13

Stewart, Robert D., and Kevin N. Gurney. "Spiking neural network simulation: memory-optimal synaptic event scheduling." Journal of Computational Neuroscience 30, no. 3 (2010): 721–28. http://dx.doi.org/10.1007/s10827-010-0288-6.

14

Huderek, Damian, Szymon Szczęsny, and Raul Rato. "Spiking Neural Network Based on Cusp Catastrophe Theory." Foundations of Computing and Decision Sciences 44, no. 3 (2019): 273–84. http://dx.doi.org/10.2478/fcds-2019-0014.

Abstract:
This paper addresses the problem of effective processing using third-generation neural networks. The article features two new models of spiking neurons based on cusp catastrophe theory. The effectiveness of the models is demonstrated with an example of a network composed of three neurons solving the problem of the linear inseparability of the XOR function. The proposed solutions are dedicated to hardware implementation using the edge computing strategy. The paper presents simulation results and outlines further research directions in the field of practical applications and implementations using nanometer CMOS technologies and the current processing mode.
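
As background, the cusp catastrophe on which these neuron models are based is the canonical two-parameter singularity of the potential (standard catastrophe-theory form, given here for orientation only):

V(x) = \tfrac{1}{4}x^{4} + \tfrac{1}{2}a x^{2} + b x, \qquad \frac{\partial V}{\partial x} = x^{3} + a x + b = 0.

The equilibrium surface x³ + ax + b = 0 folds over a cusp in the (a, b) control plane, producing the bistability and abrupt state jumps that spiking behavior can be built on.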
15

D'Haene, Michiel, Michiel Hermans, and Benjamin Schrauwen. "Toward Unified Hybrid Simulation Techniques for Spiking Neural Networks." Neural Computation 26, no. 6 (2014): 1055–79. http://dx.doi.org/10.1162/neco_a_00587.

Abstract:
In the field of neural network simulation techniques, the common conception is that spiking neural network simulators can be divided into two categories: time-step-based and event-driven methods. In this letter, we look at state-of-the-art simulation techniques in both categories and show that a clear distinction between the two methods is increasingly difficult to define. In attempts to improve the weak points of each simulation method, ideas from the alternative method are, sometimes unknowingly, incorporated into the simulation engine. Clearly, the ideal simulation method is a mix of both. We formulate the key properties of such an efficient and generally applicable hybrid approach.
16

Lightheart, Toby, Steven Grainger, and Tien-Fu Lu. "Spike-Timing-Dependent Construction." Neural Computation 25, no. 10 (2013): 2611–45. http://dx.doi.org/10.1162/neco_a_00501.

Abstract:
Spike-timing-dependent construction (STDC) is the production of new spiking neurons and connections in a simulated neural network in response to neuron activity. Following the discovery of spike-timing-dependent plasticity (STDP), significant effort has gone into the modeling and simulation of adaptation in spiking neural networks (SNNs). Limitations in computational power imposed by network topology, however, constrain learning capabilities through connection weight modification alone. Constructive algorithms produce new neurons and connections, allowing automatic structural responses for applications of unknown complexity and nonstationary solutions. A conceptual analogy is developed and extended to theoretical conditions for modeling synaptic plasticity as network construction. Generalizing past constructive algorithms, we propose a framework for the design of novel constructive SNNs and demonstrate its application in the development of simulations for the validation of developed theory. Potential directions of future research and applications of STDC for biological modeling and machine learning are also discussed.
17

Fadhil, Muthna Jasim, Maitham Ali Naji, and Ghalib Ahmed Salman. "Transceiver error reduction by design prototype system based on neural network analysis method." Indonesian Journal of Electrical Engineering and Computer Science (IJEECS) 18, no. 3 (2020): 1244–51. https://doi.org/10.11591/ijeecs.v18.i3.pp1244-1251.

Abstract:
Traditional code words can be decoded with artificial neural networks, but neural networks have rarely been explored for encoding. This paper therefore proposes a feedforward neural network encoder whose main structure is built from a Self-Organizing Feature Map (SOFM). The dimensions of the feedforward network are set according to the numbers of codeword and source bits; a weight-distribution scheme is then selected, an appropriate algorithm initializes the weights, and the generated codeword set is checked for uniqueness against existing codewords. A spiking neural network (SNN) serves as the decoder: its structure is likewise built from the numbers of codeword and source bits, the codeword sets generated by the feedforward network are used to train it, training stops once the overall error reaches a minimum, and the decoded codeword sets are accepted. Simulation tests show that neural-network encoding and decoding are feasible, with better performance when the feedforward structure satisfies a proper condition on the output-node degree γ. Because traditional mathematical methods cannot decode the codeword sets generated by the neural-network encoder, the approach holds good promise for communication security.
18

Nguyen, Quang Anh Pham, Philipp Andelfinger, Wen Jun Tan, Wentong Cai, and Alois Knoll. "Transitioning Spiking Neural Network Simulators to Heterogeneous Hardware." ACM Transactions on Modeling and Computer Simulation 31, no. 2 (2021): 1–26. http://dx.doi.org/10.1145/3422389.

Abstract:
Spiking neural networks (SNN) are among the most computationally intensive types of simulation models, with node counts on the order of up to 10¹¹. Currently, there is intensive research into hardware platforms suitable to support large-scale SNN simulations, whereas several of the most widely used simulators still rely purely on the execution on CPUs. Enabling the execution of these established simulators on heterogeneous hardware allows new studies to exploit the many-core hardware prevalent in modern supercomputing environments, while still being able to reproduce and compare with results from a vast body of existing literature. In this article, we propose a transition approach for CPU-based SNN simulators to enable the execution on heterogeneous hardware (e.g., CPUs, GPUs, and FPGAs), with only limited modifications to an existing simulator code base and without changes to model code. Our approach relies on manual porting of a small number of core simulator functionalities as found in common SNN simulators, whereas the unmodified model code is analyzed and transformed automatically. We apply our approach to the well-known simulator NEST and make a version executable on heterogeneous hardware available to the community. Our measurements show that at full utilization, a single GPU achieves the performance of about 9 CPU cores. A CPU-GPU co-execution with load balancing is also demonstrated, which shows better performance compared to CPU-only or GPU-only execution. Finally, an analytical performance model is proposed to heuristically determine the optimal parameters to execute the heterogeneous NEST.
19

Lu, Junqi, Xinning Wu, Su Cao, Xiangke Wang, and Huangchao Yu. "An Implementation of Actor-Critic Algorithm on Spiking Neural Network Using Temporal Coding Method." Applied Sciences 12, no. 20 (2022): 10430. http://dx.doi.org/10.3390/app122010430.

Abstract:
Taking advantage of the faster speed, lower resource consumption, and better biological interpretability of spiking neural networks, this paper develops a novel spiking neural network reinforcement learning method using an actor-critic architecture and temporal coding. A simple improved leaky integrate-and-fire (LIF) model is used to describe the behavior of a spiking neuron. The actor-critic network structure and the update formulas using temporally encoded information are then provided. The model is examined in a decision-making task, a gridworld task, a UAV flying-through-a-window task, and an avoiding-a-flying-basketball task. In the 5 × 5 grid map, the learned value function was close to the ideal one, and the quickest way from one state to another was found. A UAV trained by this method was able to fly through the window quickly in simulation. An actual flight test of a UAV avoiding a flying basketball was conducted; with this model, the success rate of the test was 96% and the average decision time was 41.3 ms. The results show the effectiveness and accuracy of the temporally coded spiking neural network RL method. In conclusion, an attempt was made to provide insights into developing spiking neural network reinforcement learning methods for decision-making and autonomous control of unmanned systems.
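
For reference, actor-critic updates of this kind are driven by the standard temporal-difference error (textbook form; the paper's spike-based update formulas are a temporally coded realization of this scheme):

\delta_t = r_t + \gamma V(s_{t+1}) - V(s_t), \qquad V(s_t) \leftarrow V(s_t) + \alpha_c \delta_t, \qquad \theta \leftarrow \theta + \alpha_a \delta_t \nabla_\theta \log \pi_\theta(a_t \mid s_t),

where the critic V estimates state values and the actor's policy parameters θ are adjusted in proportion to the TD error δ_t.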
20

Schliebs, Stefan, and Nikola Kasabov. "Evolving spiking neural network—a survey." Evolving Systems 4, no. 2 (2013): 87–98. http://dx.doi.org/10.1007/s12530-013-9074-9.

21

Wang, Guowei, and Yan Fu. "Spatiotemporal patterns and collective dynamics of bi-layer coupled Izhikevich neural networks with multi-area channels." Mathematical Biosciences and Engineering 20, no. 2 (2022): 3944–69. http://dx.doi.org/10.3934/mbe.2023184.

Abstract:
The firing behavior and bifurcations of different types of Izhikevich neurons are first analyzed through numerical simulation. Then, a bi-layer neural network driven by random boundaries is constructed by means of system simulation, in which each layer is a matrix network composed of 200 × 200 Izhikevich neurons and the two layers are connected by multi-area channels. Finally, the emergence and disappearance of spiral waves in the matrix neural network are investigated, and the synchronization property of the network is discussed. The obtained results show that random boundaries can induce spiral waves under appropriate conditions, and that the emergence and disappearance of spiral waves can be observed only when the matrix neural network is constructed from regular spiking Izhikevich neurons; they cannot be observed in networks constructed from other modes such as fast spiking, chattering, and intrinsically bursting neurons. Further research shows that the variation of the synchronization factor with the coupling strength between adjacent neurons follows an inverse bell-like curve, in the form of an "inverse stochastic resonance", while the variation of the synchronization factor with the coupling strength of the inter-layer channels is an approximately monotonically decreasing curve. More importantly, it is found that lower synchronicity is helpful for developing spatiotemporal patterns. These results enable people to further understand the collective dynamics of neural networks under random conditions.
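
For readers unfamiliar with it, the Izhikevich model used throughout this study takes the canonical form (the standard published equations; the regular-spiking parameters quoted are the model's usual values, not taken from this abstract):

\frac{dv}{dt} = 0.04v^{2} + 5v + 140 - u + I, \qquad \frac{du}{dt} = a(bv - u),

with the reset rule: if v ≥ 30 mV, then v ← c and u ← u + d. Regular spiking corresponds to roughly a = 0.02, b = 0.2, c = −65, d = 8, while fast spiking, chattering, and intrinsically bursting behaviors follow from other choices of (a, b, c, d).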
22

Bodyanskiy, Yevgeniy, and Artem Dolotov. "A Multilayered Self-Learning Spiking Neural Network and its Learning Algorithm Based on ‘Winner-Takes-More’ Rule in Hierarchical Clustering." Scientific Journal of Riga Technical University. Computer Sciences 40, no. 1 (2009): 66–74. http://dx.doi.org/10.2478/v10143-010-0009-7.

Abstract:
This paper introduces an architecture of a multilayered self-learning spiking neural network for hierarchical data clustering. It consists of a population-coding layer and several layers of spiking neurons. Contrary to the originally suggested multilayered spiking neural network, the proposed one does not require a separate learning algorithm for lateral connections. The capability to detect irregular clusters is achieved by improving the temporal Hebbian learning algorithm, which is generalized by replacing the 'Winner-Takes-All' rule with a 'Winner-Takes-More' one. It is shown that the layer of receptive neurons can be treated as a fuzzification layer, where a pool of receptive neurons is a linguistic variable and a receptive neuron within a pool is a linguistic term. The network architecture is designed in terms of control systems theory. Using the Laplace transform notion, the spiking neuron synapse is presented as a second-order critically damped response unit. The spiking neuron soma is modeled, on the basis of bang-bang control systems theory, as a threshold detection system. A simulation experiment confirms that the proposed architecture is effective in detecting irregular clusters.
23

Zhu, Xi, Yi Sun, Haijun Liu, Qingjiang Li, and Hui Xu. "Simulation of the Spiking Neural Network based on Practical Memristor." MATEC Web of Conferences 173 (2018): 01025. http://dx.doi.org/10.1051/matecconf/201817301025.

Abstract:
In order to gain a better understanding of the brain and explore biologically inspired computation, significant attention is being paid to research into spike-based neural computation. The spiking neural network (SNN), which is inspired by the understanding of observed biological structure, has been increasingly applied to pattern recognition tasks. In this work, a single-layer SNN architecture based on the characteristics of spike-timing-dependent plasticity (STDP), and in accordance with actual test data from the device, is proposed. The device data are derived from a fabricated Ag/GeSe/TiN memristor. The network has been tested on the MNIST dataset, and the classification accuracy attains 90.2%. Furthermore, the impact of device instability on SNN performance is discussed, which suggests guidelines for fabricating memristors used in STDP-based SNN architectures.
24

Liu, Chuang, Haojie Wang, Ning Liu, and Zhonghu Yuan. "Optimizing the Neural Structure and Hyperparameters of Liquid State Machines Based on Evolutionary Membrane Algorithm." Mathematics 10, no. 11 (2022): 1844. http://dx.doi.org/10.3390/math10111844.

Abstract:
As one of the important artificial intelligence fields, brain-like computing attempts to give machines a higher level of intelligence by studying and simulating the cognitive principles of the human brain. The spiking neural network (SNN) is one of the research directions of brain-like computing, characterized by closer biological plausibility and stronger computing power than traditional neural networks. The liquid state machine (LSM) is a neural computing model with a recurrent network structure based on SNNs. In this paper, a learning algorithm based on an evolutionary membrane algorithm is proposed to optimize the neural structure and hyperparameters of an LSM. First, the objective of the proposed algorithm is designed according to the neural structure and hyperparameters of the LSM. Second, the reaction rules of the proposed algorithm are employed to discover the best neural structure and hyperparameters of the LSM. Third, the membrane structure is such that the skin membrane contains several elementary membranes, which speeds up the search of the proposed algorithm. In the simulation experiments, effectiveness is verified on the MNIST and KTH datasets. On MNIST, the best test results of the proposed algorithm with 500, 1000, and 2000 spiking neurons are 86.8%, 90.6%, and 90.8%, respectively. On KTH, the best test results with 500, 1000, and 2000 spiking neurons are 82.9%, 85.3%, and 86.3%, respectively. The simulation results show that the proposed algorithm is more competitive than the other algorithms tested.
25

Szynkiewicz, Paweł. "A Novel GPU-Enabled Simulator for Large Scale Spiking Neural Networks." Journal of Telecommunications and Information Technology, no. 2 (June 30, 2016): 34–42. http://dx.doi.org/10.26636/jtit.2016.2.717.

Abstract:
The understanding of the structural and dynamic complexity of neural networks is greatly facilitated by computer simulations. An ongoing challenge for simulating realistic models is, however, computational speed. In this paper, a framework for modeling and parallel simulation of biologically inspired large-scale spiking neural networks on high-performance graphics processors is described. The tool is implemented in the OpenCL programming technology and enables simulation studies with three neuron models: integrate-and-fire, Hodgkin-Huxley, and Izhikevich. The results of extensive simulations are provided to illustrate the operation and performance of the presented software framework, with particular attention focused on the computational speed-up factor.
26

Golosio, Bruno, Gianmarco Tiddia, Chiara De Luca, Elena Pastorelli, Francesco Simula, and Pier Stanislao Paolucci. "Fast Simulations of Highly-Connected Spiking Cortical Models Using GPUs." Frontiers in Computational Neuroscience 15 (February 17, 2021): 13. https://doi.org/10.5281/zenodo.4661404.

Abstract:
Over the past decade there has been a growing interest in the development of parallel hardware systems for simulating large-scale networks of spiking neurons. Compared to other highly-parallel systems, GPU-accelerated solutions have the advantage of a relatively low cost and a great versatility, thanks also to the possibility of using the CUDA-C/C++ programming languages. NeuronGPU is a GPU library for large-scale simulations of spiking neural network models, written in the C++ and CUDA-C++ programming languages, based on a novel spike-delivery algorithm. This library includes simple LIF (leaky-integrate-and-fire) neuron models as well as several multisynapse AdEx (adaptive-exponential-integrate-and-fire) neuron models with current or conductance based synapses, different types of spike generators, tools for recording spikes, state variables and parameters, and it supports user-definable models. The numerical solution of the differential equations of the dynamics of the AdEx models is performed through a parallel implementation, written in CUDA-C++, of the fifth-order Runge-Kutta method with adaptive step-size control. In this work we evaluate the performance of this library on the simulation of a cortical microcircuit model, based on LIF neurons and current-based synapses, and on balanced networks of excitatory and inhibitory neurons, using AdEx or Izhikevich neuron models and conductance-based or current-based synapses. On these models, we will show that the proposed library achieves state-of-the-art performance in terms of simulation time per second of biological activity. In particular, using a single NVIDIA GeForce RTX 2080 Ti GPU board, the full-scale cortical-microcircuit model, which includes about 77,000 neurons and 3 · 10⁸ connections, can be simulated at a speed very close to real time, while the simulation time of a balanced network of 1,000,000 AdEx neurons with 1,000 connections per neuron was about 70 s per second of biological activity.
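
For reference, the multisynapse AdEx neurons mentioned here follow the standard adaptive-exponential-integrate-and-fire equations (textbook form, not quoted from the library's source):

C \frac{dV}{dt} = -g_L (V - E_L) + g_L \Delta_T \exp\!\left(\frac{V - V_T}{\Delta_T}\right) - w + I_{\mathrm{syn}}(t), \qquad \tau_w \frac{dw}{dt} = a (V - E_L) - w,

with the reset V ← V_r and w ← w + b whenever V crosses the spike threshold; it is this pair of equations that is integrated with the adaptive fifth-order Runge-Kutta scheme.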
27

Dan, Yongping, Zhida Wang, Hengyi Li, and Jintong Wei. "Sa-SNN: spiking attention neural network for image classification." PeerJ Computer Science 10 (November 25, 2024): e2549. http://dx.doi.org/10.7717/peerj-cs.2549.

Abstract:
Spiking neural networks (SNNs) are known as third-generation neural networks due to their energy efficiency and low power consumption, and they have received a lot of attention due to their biological plausibility. SNNs are closer to the way biological neural systems work, simulating the transmission of information through discrete spiking signals between neurons. Influenced by the great potential shown by the attention mechanism in convolutional neural networks, we propose a Spiking Attention Neural Network (Sa-SNN). The network includes a novel Spiking-Efficient Channel Attention (SECA) module that adopts a local cross-channel interaction strategy without dimensionality reduction, implemented with one-dimensional convolution; it involves a small number of model parameters but provides a significant performance improvement for the network. The design of local inter-channel interactions through adaptive convolutional kernel sizes, rather than global dependencies, allows the network to focus more on the selection of important features, reduces the impact of redundant features, and improves the network's recognition and generalisation capabilities. To investigate the effect of this structure on the network, we conducted a series of experiments. Experimental results show that Sa-SNN can perform image classification tasks more accurately: our network achieved 99.61%, 99.61%, 94.13%, and 99.63% on the MNIST, Fashion-MNIST, N-MNIST datasets, respectively, and Sa-SNN performed well in terms of accuracy compared with mainstream SNNs.
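
The ECA-style gating that the SECA module is described as using can be sketched as global pooling, a local one-dimensional convolution across channels, and a sigmoid gate (a generic NumPy reconstruction with an arbitrary fixed kernel, not the authors' code):

import numpy as np

def channel_attention(x, k=3):
    # x: feature map of shape (channels, height, width)
    pooled = x.mean(axis=(1, 2))                      # global average pooling
    kernel = np.full(k, 1.0 / k)                      # stand-in for a learned 1D kernel
    mixed = np.convolve(pooled, kernel, mode="same")  # local cross-channel interaction
    gate = 1.0 / (1.0 + np.exp(-mixed))               # sigmoid attention weights
    return x * gate[:, None, None]                    # reweight channels, no dimensionality reduction

features = np.random.rand(16, 8, 8)                   # toy feature map
weighted = channel_attention(features)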
28

Kleijnen, Robert, Markus Robens, Michael Schiek, and Stefan van Waasen. "A Network Simulator for the Estimation of Bandwidth Load and Latency Created by Heterogeneous Spiking Neural Networks on Neuromorphic Computing Communication Networks." Journal of Low Power Electronics and Applications 12, no. 2 (2022): 23. http://dx.doi.org/10.3390/jlpea12020023.

Abstract:
Accelerated simulations of biological neural networks are in demand to discover the principles of biological learning. Novel many-core simulation platforms, e.g., SpiNNaker, BrainScaleS and Neurogrid, allow one to study neuron behavior in the brain at an accelerated rate, with a high level of detail. However, they do not come anywhere near simulating the human brain, as the massive amount of spike communication has turned out to be a bottleneck. We developed a network simulator specifically to analyze in high detail the network loads and latencies caused by different network topologies and communication protocols in neuromorphic computing communication networks. This simulator allows simulating the impact of heterogeneous neural networks and evaluating neuron mapping algorithms, which is a unique feature among state-of-the-art network models and simulators. The simulator was cross-checked by comparing the results of a run based on a homogeneous neural network with corresponding bandwidth load results from comparable works. Additionally, the increased level of detail achieved by the new simulator is presented. We then show the impact heterogeneous connectivity can have on the network load, first for a small-scale test case and later for a large-scale one, and how different neuron mapping algorithms can influence this effect. Finally, we look at the latency estimations performed by the simulator for different mapping algorithms and at the impact of the node size.
29

Tao, Yingzhi, and Qiaoyun Wu. "Spiking PointCNN: An Efficient Converted Spiking Neural Network under a Flexible Framework." Electronics 13, no. 18 (2024): 3626. http://dx.doi.org/10.3390/electronics13183626.

Abstract:
Spiking neural networks (SNNs) are attracting wide attention due to their brain-like simulation capabilities and low energy consumption. Converting artificial neural networks (ANNs) to SNNs offers great advantages, combining the high accuracy of ANNs with the robustness and energy efficiency of SNNs. Existing point-cloud-processing SNNs have two issues to be solved: first, they lack a specialized surrogate gradient function; second, they are not robust enough to process a real-world dataset. In this work, we present a high-accuracy converted SNN for 3D point cloud processing. Specifically, we first revise and redesign the Spiking X-Convolution module based on the X-transformation. To address the problem of the non-differentiable activation function arising from the binary signals of spiking neurons, we propose an effective adjustable surrogate gradient function, which can fit various models well by tuning its parameters. Additionally, we introduce a versatile ANN-to-SNN conversion framework enabling modular transformations. Based on this framework and the Spiking X-Convolution module, we design Spiking PointCNN, a highly efficient converted SNN for processing 3D point clouds. We conduct experiments on the public 3D point cloud datasets ModelNet40 and ScanObjectNN, on which the proposed model achieves excellent accuracy. Code will be available on GitHub.
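
For context, surrogate-gradient training works around the non-differentiable spike as sketched below: the forward pass keeps the hard threshold while the backward pass substitutes a smooth, tunable derivative (one common sigmoid-shaped choice; the paper's adjustable function may differ):

s = \Theta(v - \vartheta) \quad \text{(forward: Heaviside spike)}, \qquad \frac{\partial s}{\partial v} := \alpha\, \sigma\big(\alpha(v - \vartheta)\big)\big(1 - \sigma(\alpha(v - \vartheta))\big) \quad \text{(backward)},

where σ is the logistic function and the steepness α is the parameter that is tuned to fit a given model.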
30

Panda, Gopikrishna, et al. "Insurance Fraud Detection using Spiking Neural Network along with NormAD Algorithm." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 11 (2021): 174–85. http://dx.doi.org/10.17762/turcomat.v12i11.5859.

Abstract:
General automobile insurance has in recent years seen a huge escalation of fraud cases, which requires well-organised and coherent techniques for identifying users who are potential fraudsters. The NormAD algorithm is therefore deployed with low delay to make the operative process safer and properly authorised. This paper describes an attribute-extraction method and a spiking neural network structure for identifying automobile insurance fraud. The second-level attribute-extraction algorithm introduced in this paper can efficiently derive key attributes and enhance the identification accuracy of subsequent algorithms. To resolve the issue of unbalanced sample allotment in the automobile insurance fraud identification scheme, a distribution method based on the idea of small-unit proportion balance is presented. Built on the above techniques of attribute extraction and sample division, a model based on a spiking neural network with the NormAD algorithm is proposed. This method relies on the spiking-neuron-based NormAD learning algorithm and ultimately succeeds in enhancing the accuracy of automobile insurance fraud detection.
31

Vidybida, Alexander. "Testing of Information Condensation in a Model Reverberating Spiking Neural Network." International Journal of Neural Systems 21, no. 03 (2011): 187–98. http://dx.doi.org/10.1142/s0129065711002742.

Abstract:
Information about the external world is delivered to the brain in the form of spike trains structured in time. During further processing in higher areas, this information is subjected to a certain condensation process, which results in the formation of abstract conceptual images of the external world, apparently represented as certain uniform spiking activity partially independent of the details of the input spike trains. A possible physical mechanism of condensation at the level of an individual neuron was discussed recently. In a reverberating spiking neural network, due to this mechanism, the dynamics should settle down to the same uniform/periodic activity in response to a set of various inputs. Since the same periodic activity may correspond to different input spike trains, we interpret this as a possible candidate for an information condensation mechanism in a network. Our purpose is to test this possibility in a network model consisting of five fully connected neurons, and in particular the influence of the geometric size of the network on its ability to condense information. The dynamics of 20 spiking neural networks of different geometric sizes are modelled by means of computer simulation. Each network is propelled into reverberating dynamics by applying various initial input spike trains, and we run the dynamics until it becomes periodic. Shannon's formula is used to calculate the amount of information in any input spike train and in any periodic state found. As a result, we obtain an explicit estimate of the degree of information condensation in the networks and conclude that it depends strongly on the network's geometric size.
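
The information measure referred to is the standard Shannon entropy (given here for orientation; the paper applies it to the probabilities of input spike trains and of the periodic states they settle into):

H = -\sum_{i} p_i \log_2 p_i \quad \text{bits},

so the degree of condensation can be read off as the drop from the entropy of the input spike-train ensemble to the entropy of the resulting periodic states.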
32

Luo, Zhenghao, Xingdong Wang, Shuo Yuan, and Zhangmeng Liu. "Radar Emitter Recognition Based on Spiking Neural Networks." Remote Sensing 16, no. 14 (2024): 2680. http://dx.doi.org/10.3390/rs16142680.

Abstract:
Efficient and effective radar emitter recognition is critical for electronic support measurement (ESM) systems. However, in complex electromagnetic environments, intercepted pulse trains generally contain substantial data noise, including spurious and missing pulses. Currently, radar emitter recognition methods utilizing traditional artificial neural networks (ANNs) like CNNs and RNNs are susceptible to data noise and require intensive computations, posing challenges to meeting the performance demands of modern ESM systems. Spiking neural networks (SNNs) exhibit stronger representational capabilities compared to traditional ANNs due to the temporal dynamics of spiking neurons and richer information encoded in precise spike timing. Furthermore, SNNs achieve higher computational efficiency by performing event-driven sparse addition calculations. In this paper, a lightweight spiking neural network is proposed by combining direct coding, leaky integrate-and-fire (LIF) neurons, and surrogate gradients to recognize radar emitters. Additionally, an improved SNN for radar emitter recognition is proposed, leveraging the local timing structure of pulses to enhance adaptability to data noise. Simulation results demonstrate the superior performance of the proposed method over existing methods.
33

Nobukawa, Sou, Haruhiko Nishimura, and Teruya Yamanishi. "Pattern Classification by Spiking Neural Networks Combining Self-Organized and Reward-Related Spike-Timing-Dependent Plasticity." Journal of Artificial Intelligence and Soft Computing Research 9, no. 4 (2019): 283–91. http://dx.doi.org/10.2478/jaiscr-2019-0009.

Abstract:
Many recent studies have applied spiking neural networks with spike-timing-dependent plasticity (STDP) to machine learning problems. The learning abilities of dopamine-modulated STDP (DA-STDP) for reward-related synaptic plasticity have also been gathering attention. Following these studies, we hypothesize that a network structure combining self-organized STDP and reward-related DA-STDP can solve the machine learning problem of pattern classification. Therefore, we studied the ability of a network, in which recurrent spiking neural networks are combined with STDP for unsupervised learning and an output layer is joined by DA-STDP for supervised learning, to perform pattern classification. We confirmed that this network could perform pattern classification using the STDP effect for emphasizing features of the input spike pattern and DA-STDP supervised learning. Our proposed spiking neural network may therefore prove to be a useful approach for machine learning problems.
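
For reference, the pair-based STDP rule underlying both mechanisms is commonly written as (standard form; in DA-STDP the resulting weight change is additionally gated by a dopamine/reward signal):

\Delta w = \begin{cases} A_{+} e^{-\Delta t/\tau_{+}}, & \Delta t > 0 \\ -A_{-} e^{\Delta t/\tau_{-}}, & \Delta t < 0 \end{cases}, \qquad \Delta t = t_{\mathrm{post}} - t_{\mathrm{pre}},

so presynaptic spikes that shortly precede postsynaptic ones potentiate a synapse, while the reverse order depresses it.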
34

Gnilenko, A. B. "Hardware Implementation of an Analog Spiking Neuron with Digital Control of Input Signals Weighing." Radio Electronics, Computer Science, Control, no. 4 (December 26, 2024): 92–101. https://doi.org/10.15588/1607-3274-2024-4-9.

Abstract:
Context. Significant challenges facing hardware developers of artificial intelligence systems force them to look for new, nonstandard architectural solutions. One promising solution is the transition from the classic von Neumann architecture to a neuromorphic architecture, which at the hardware level tries to imitate the work of the neural network of the human brain. A neuromorphic processor, built as a hardware implementation of a spiking neural network, consists of a large number of elementary electronic circuits that structurally and functionally correspond to neurons. Thus, the design of a hardware implementation of a spiking neuron, as the basic building element of a neuromorphic processor, is of great scientific interest. Objective. The goal of the work is to design an analog spiking neuron hardware implementation with digital control of input signal weighting by binary synaptic coefficients. Method. Designing is performed at the logical/schematic and topological levels of the design flow using modern tools of electronic design automation. All proposed schematic and layout solutions are verified and simulated using computer-aided design tools to prove their functionality. Results. Schematic and layout solutions have been developed and investigated for the hardware implementation of an analog spiking neuron with digital control of input signal weighting by binary synaptic coefficients, intended as the basic building element of the spiking neural network of a neuromorphic processor. Conclusions. The proposed hybrid design of the spiking neuron hardware implementation benefits from combining the simplicity of analog signal processing in the neuron with digital control of the neuron's state using binary weighting coefficients. The simulation results confirm the functionality of the obtained schematic/layout solutions and demonstrate the possibility of implementing the logical functions inherent in the perceptron. Prospects for further research include the design of a hardware implementation of a spiking neural network core based on the developed schematic and layout solutions for the spiking neuron.
35

López-Asunción, Samuel, and Pablo Ituero. "Enabling Efficient On-Edge Spiking Neural Network Acceleration with Highly Flexible FPGA Architectures." Electronics 13, no. 6 (2024): 1074. http://dx.doi.org/10.3390/electronics13061074.

Abstract:
Spiking neural networks (SNNs) promise to perform tasks currently performed by classical artificial neural networks (ANNs) faster, in smaller footprints, and using less energy. Neuromorphic processors are set to revolutionize computing at a large scale, but the move to edge-computing applications calls for finely tuned custom implementations to keep pushing towards more efficient systems. To that end, we examined the architectural design space for executing spiking neuron models on FPGA platforms, focusing on achieving ultra-low area and power consumption. This work presents an efficient clock-driven spiking neuron architecture used for the implementation of both fully-connected cores and 2D convolutional cores, which rely on deep pipelines for synaptic processing and distributed memory for weight and neuron states. With them, we developed an accelerator for an SNN version of the LeNet-5 network trained on the MNIST dataset. At around 5.5 slices/neuron and only 348 mW, it is able to use 33% less area and four times less power per neuron than current state-of-the-art implementations while keeping low simulation step times.
36

Sutton, Nate M., Blanca E. Gutiérrez-Guzmán, Holger Dannenberg, and Giorgio A. Ascoli. "A Continuous Attractor Model with Realistic Neural and Synaptic Properties Quantitatively Reproduces Grid Cell Physiology." International Journal of Molecular Sciences 25, no. 11 (2024): 6059. http://dx.doi.org/10.3390/ijms25116059.

Abstract:
Computational simulations with data-driven physiological detail can foster a deeper understanding of the neural mechanisms involved in cognition. Here, we utilize the wealth of cellular properties from Hippocampome.org to study neural mechanisms of spatial coding with a spiking continuous attractor network model of medial entorhinal cortex circuit activity. The primary goal is to investigate if adding such realistic constraints could produce firing patterns similar to those measured in real neurons. Biological characteristics included in the work are excitability, connectivity, and synaptic signaling of neuron types defined primarily by their axonal and dendritic morphologies. We investigate the spiking dynamics in specific neuron types and the synaptic activities between groups of neurons. Modeling the rodent hippocampal formation keeps the simulations to a computationally reasonable scale while also anchoring the parameters and results to experimental measurements. Our model generates grid cell activity that well matches the spacing, size, and firing rates of grid fields recorded in live behaving animals from both published datasets and new experiments performed for this study. Our simulations also recreate different scales of those properties, e.g., small and large, as found along the dorsoventral axis of the medial entorhinal cortex. Computational exploration of neuronal and synaptic model parameters reveals that a broad range of neural properties produce grid fields in the simulation. These results demonstrate that the continuous attractor network model of grid cells is compatible with a spiking neural network implementation sourcing data-driven biophysical and anatomical parameters from Hippocampome.org. The software (version 1.0) is released as open source to enable broad community reuse and encourage novel applications.
38

Stewart, Robert D., and Wyeth Bair. "Spiking neural network simulation: numerical integration with the Parker-Sochacki method." Journal of Computational Neuroscience 27, no. 1 (2009): 115–33. http://dx.doi.org/10.1007/s10827-008-0131-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Liu, Shuang, Guangyao Wang, Tianshuo Bai, et al. "Magnetic Skyrmion-Based Spiking Neural Network for Pattern Recognition." Applied Sciences 12, no. 19 (2022): 9698. http://dx.doi.org/10.3390/app12199698.

Full text
Abstract:
Spiking neural networks (SNNs) have emerged as one of the most powerful brain-inspired computing paradigms for complex pattern recognition tasks and can be enabled by neuromorphic hardware. However, owing to the fundamental mismatch between biological neural architectures and Boolean logic, CMOS implementations of SNNs are energy-inefficient. A low-power approach based on novel “neuro-mimetic” devices offering a direct mapping to synaptic and neuronal functionalities remains an open area. In this paper, an SNN constructed from novel magnetic skyrmion-based leaky-integrate-and-fire (LIF) spiking neurons and a skyrmionic synapse crossbar is proposed. We perform a systematic device-circuit-architecture co-design for pattern recognition to evaluate the feasibility of our proposal. The simulation results demonstrate that our device has a lower switching voltage and high energy efficiency, with two times lower programming energy in comparison with CMOS devices. This work paves a novel pathway for low-power hardware design using a full-skyrmion SNN architecture and opens promising avenues for implementing neuromorphic computing schemes.
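
A generic first stage of such pattern-recognition pipelines is converting static inputs into spike trains. The sketch below shows plain Poisson rate coding in Python, a common textbook scheme and not necessarily the encoding used in this paper:

    import numpy as np

    def poisson_encode(image, n_steps=100, max_rate=0.2, rng=None):
        """Intensities in [0, 1] -> (n_steps, n_pixels) binary spike trains."""
        rng = rng or np.random.default_rng()
        p = np.clip(image.ravel(), 0.0, 1.0) * max_rate  # per-step spike prob.
        return (rng.random((n_steps, p.size)) < p).astype(np.uint8)

    img = np.random.default_rng(0).random((8, 8))        # stand-in "pattern"
    spikes = poisson_encode(img)
    print("mean spike probability per pixel:", spikes.mean(axis=0).round(2)[:8])
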
APA, Harvard, Vancouver, ISO, and other styles
40

Schäfer, Martin, Tim Schönauer, Carsten Wolff, Georg Hartmann, Heinrich Klar, and Ulrich Rückert. "Simulation of spiking neural networks — architectures and implementations." Neurocomputing 48, no. 1-4 (2002): 647–79. http://dx.doi.org/10.1016/s0925-2312(01)00633-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Grassmann, Cyprian, and Joachim K. Anlauf. "Fast Digital Simulation of Spiking Neural Networks and Neuromorphic Integration with SPIKELAB." International Journal of Neural Systems 09, no. 05 (1999): 473–78. http://dx.doi.org/10.1142/s0129065799000502.

Full text
Abstract:
We present a simulation environment called SPIKELAB, which incorporates a simulator able to simulate large networks of spiking neurons using distributed event-driven simulation. In contrast to the time-driven simulation usually used for spiking neural networks, our simulation needs fewer computational resources because of the low average activity of typical networks. The paper addresses the speed-up of event-driven over time-driven simulation and shows how large networks can be simulated by distributing the simulation across already available computing resources. It also presents a solution for integrating digital or analogue neuromorphic circuits into the simulation process.
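
The core of an event-driven scheme can be conveyed in a few lines: pending spikes sit in a time-ordered priority queue, and neuron state is advanced analytically only when an event arrives, so quiet neurons cost nothing. The Python sketch below uses a toy feed-forward network with placeholder weights and delays; it illustrates the principle, not SPIKELAB's actual distributed algorithm:

    import heapq, math

    TAU, THRESHOLD = 5.0, 0.5
    targets = {0: [(1, 0.8, 1.0)],           # neuron -> [(target, weight, delay)]
               1: [(2, 0.9, 1.5)],
               2: []}
    potential, last_update = {}, {}
    events = [(0.0, 0)]                      # (spike time, neuron) seed event

    while events:
        t, n = heapq.heappop(events)         # earliest pending spike
        for tgt, w, delay in targets[n]:
            arrival = t + delay
            dt = arrival - last_update.get(tgt, arrival)
            v = potential.get(tgt, 0.0) * math.exp(-dt / TAU) + w
            last_update[tgt] = arrival
            print("t=%.1f: spike %d -> %d, v=%.2f" % (arrival, n, tgt, v))
            if v >= THRESHOLD:               # target fires: schedule and reset
                heapq.heappush(events, (arrival, tgt))
                v = 0.0
            potential[tgt] = v
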
APA, Harvard, Vancouver, ISO, and other styles
42

Ros, Eduardo, Richard Carrillo, Eva M. Ortigosa, Boris Barbour, and Rodrigo Agís. "Event-Driven Simulation Scheme for Spiking Neural Networks Using Lookup Tables to Characterize Neuronal Dynamics." Neural Computation 18, no. 12 (2006): 2959–93. http://dx.doi.org/10.1162/neco.2006.18.12.2959.

Full text
Abstract:
Nearly all neuronal information processing and interneuronal communication in the brain involves action potentials, or spikes, which drive the short-term synaptic dynamics of neurons, but also their long-term dynamics, via synaptic plasticity. In many brain structures, action potential activity is considered to be sparse. This sparseness of activity has been exploited to reduce the computational cost of large-scale network simulations, through the development of event-driven simulation schemes. However, existing event-driven simulation schemes use extremely simplified neuronal models. Here, we implement and critically evaluate an event-driven algorithm (ED-LUT) that uses precalculated look-up tables to characterize synaptic and neuronal dynamics. This approach enables the use of more complex (and realistic) neuronal models or data in representing the neurons, while retaining the advantage of high-speed simulation. We demonstrate the method's application for neurons containing exponential synaptic conductances, thereby implementing shunting inhibition, a phenomenon that is critical to cellular computation. We also introduce an improved two-stage event-queue algorithm, which allows the simulations to scale efficiently to highly connected networks with arbitrary propagation delays. Finally, the scheme readily accommodates implementation of synaptic plasticity mechanisms that depend on spike timing, enabling future simulations to explore issues of long-term learning and adaptation in large-scale networks.
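
In its simplest form, the lookup-table idea reduces to tabulating the state's evolution between events offline and replacing per-event integration with a table read. ED-LUT tabulates far richer conductance-based dynamics; the one-variable Python toy below, with arbitrary constants, only conveys the principle:

    import numpy as np

    TAU, DT, T_MAX = 10.0, 0.01, 100.0
    decay_table = np.exp(-np.arange(0.0, T_MAX, DT) / TAU)  # built offline

    def decay(v, elapsed):
        """Membrane value after `elapsed` ms, via a table read instead of exp()."""
        idx = min(int(elapsed / DT), decay_table.size - 1)
        return v * decay_table[idx]

    v = 1.0
    for gap in (0.3, 2.0, 7.5):              # inter-event intervals (ms)
        v = decay(v, gap) + 0.2              # decay up to the event, add input
        print("after %.1f ms gap: v = %.4f" % (gap, v))
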
APA, Harvard, Vancouver, ISO, and other styles
43

Dumont, Grégory, Alberto Pérez-Cervera, and Boris Gutkin. "A framework for macroscopic phase-resetting curves for generalised spiking neural networks." PLOS Computational Biology 18, no. 8 (2022): e1010363. http://dx.doi.org/10.1371/journal.pcbi.1010363.

Full text
Abstract:
Brain rhythms emerge from synchronization among interconnected spiking neurons. Key properties of such rhythms can be gleaned from the phase-resetting curve (PRC). Inferring the PRC and developing a systematic phase reduction theory for large-scale brain rhythms remain outstanding challenges. Here we present a theoretical framework and methodology to compute the PRC of generic spiking networks with emergent collective oscillations. We adopt a renewal approach where neurons are described by the time since their last action potential, a description that can reproduce the dynamical features of many cell types. For a sufficiently large number of neurons, the network dynamics are well captured by a continuity equation known as the refractory density equation. We develop an adjoint method for this equation, giving a semi-analytical expression of the infinitesimal PRC. We confirm the validity of our framework for specific examples of neural networks. Our theoretical framework can link key biological properties at the individual neuron scale to the macroscopic oscillatory network properties. Beyond spiking networks, the approach is applicable to a broad class of systems that can be described by renewal processes.
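
For intuition, a PRC can be estimated numerically by perturbing an oscillator at different phases and measuring how far the next spike advances. The sketch below does this for a single leaky integrate-and-fire oscillator driven above threshold, a toy single-neuron stand-in (with arbitrary parameters) for the macroscopic network-level PRC the paper derives:

    import numpy as np

    TAU, I, V_TH, DT, EPS = 10.0, 1.5, 1.0, 0.001, 0.01

    def time_to_spike(v0):
        """Euler-integrate from state v0 until threshold; return elapsed time."""
        v, t = v0, 0.0
        while v < V_TH:
            v += DT * (I - v) / TAU
            t += DT
        return t

    T = time_to_spike(0.0)                           # unperturbed period
    for phase in (0.1, 0.3, 0.5, 0.7, 0.9):
        t_kick = phase * T
        v = I * (1.0 - np.exp(-t_kick / TAU))        # exact state at the kick
        advance = (T - t_kick) - time_to_spike(v + EPS)
        print("phase %.1f: PRC ~ %.3f" % (phase, advance / EPS))
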
APA, Harvard, Vancouver, ISO, and other styles
44

Yu, Luyan, and Thibaud O. Taillefumier. "Metastable spiking networks in the replica-mean-field limit." PLOS Computational Biology 18, no. 6 (2022): e1010215. http://dx.doi.org/10.1371/journal.pcbi.1010215.

Full text
Abstract:
Characterizing metastable neural dynamics in finite-size spiking networks remains a daunting challenge. We propose to address this challenge in the recently introduced replica-mean-field (RMF) limit. In this limit, networks are made of infinitely many replicas of the finite network of interest, but with randomized interactions across replicas. Such randomization renders certain excitatory networks fully tractable at the cost of neglecting activity correlations, but with explicit dependence on the finite size of the neural constituents. However, metastable dynamics typically unfold in networks with mixed inhibition and excitation. Here, we extend the RMF computational framework to point-process-based neural network models with exponential stochastic intensities, allowing for mixed excitation and inhibition. Within this setting, we show that metastable finite-size networks admit multistable RMF limits, which are fully characterized by stationary firing rates. Technically, these stationary rates are determined as the solutions of a set of delayed differential equations under certain regularity conditions that any physical solution must satisfy. We solve this original problem by combining the resolvent formalism and singular-perturbation theory. Importantly, we find that these rates specify probabilistic pseudo-equilibria which accurately capture the neural variability observed in the original finite-size network. We also discuss the emergence of metastability as a stochastic bifurcation, which can be interpreted as a static phase transition in the RMF limits. In turn, we expect to leverage the static picture of RMF limits to infer purely dynamical features of metastable finite-size networks, such as the transition rates between pseudo-equilibria.
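
Although the paper's RMF system involves delayed equations solved with the resolvent formalism and singular perturbations, the general flavour of a stationary-rate self-consistency problem can be shown with a damped fixed-point iteration on an assumed two-population (excitatory/inhibitory) rate model:

    import numpy as np

    W = np.array([[0.8, -1.2],               # E->E, I->E coupling
                  [0.9, -0.3]])              # E->I, I->I coupling
    b = np.array([0.6, 0.4])                 # external drive

    def transfer(x):                         # saturating rate transfer function
        return 1.0 / (1.0 + np.exp(-4.0 * (x - 0.5)))

    r = np.zeros(2)
    for k in range(1000):
        r_new = transfer(W @ r + b)          # self-consistency map
        if np.max(np.abs(r_new - r)) < 1e-9:
            break
        r = 0.5 * r + 0.5 * r_new            # damping stabilises the iteration
    print("stationary rates after %d iterations: %s" % (k, r.round(4)))
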
APA, Harvard, Vancouver, ISO, and other styles
45

Cessac, B. "A discrete time neural network model with spiking neurons." Journal of Mathematical Biology 56, no. 3 (2007): 311–45. http://dx.doi.org/10.1007/s00285-007-0117-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Jang, Changhwan, Hong-Gi Kim, and Byeong-Hun Woo. "Machine Learning-Based Simulation of the Air Conditioner Operating Time in Concrete Structures with Bayesian Thresholding." Materials 17, no. 9 (2024): 2108. http://dx.doi.org/10.3390/ma17092108.

Full text
Abstract:
Efficient energy use is crucial for achieving carbon neutrality and emission reduction. As part of these efforts, research is being carried out on applying a phase change material (PCM), together with an aggregate, to concrete structures. In this study, an energy consumption simulation was performed using data from concrete mock-up structures. To perform the simulation, thresholds were determined through a Bayesian approach. Furthermore, the spiking part of a spiking neural network was modularized and integrated into a recurrent neural network (RNN) to estimate energy consumption accurately. The trained RNN predicted the test data with high accuracy, achieving an R2 value of 0.95 or higher. In addition, from the extracted spiking components it was found that PCM-containing concrete could consume 32% less energy than normal concrete. This result suggests that the use of PCM can be key to reducing the energy consumption of concrete structures. Furthermore, the approach of this study should be readily applicable by energy-related institutions for predicting energy consumption during the summer.
APA, Harvard, Vancouver, ISO, and other styles
47

Skaar, Jan-Eirik W., Nicolai Haug, Alexander J. Stasik, Gaute T. Einevoll, and Kristin Tøndel. "Metamodelling of a two-population spiking neural network." PLOS Computational Biology 19, no. 11 (2023): e1011625. http://dx.doi.org/10.1371/journal.pcbi.1011625.

Full text
Abstract:
In computational neuroscience, hypotheses are often formulated as bottom-up mechanistic models of the systems in question, consisting of differential equations that can be numerically integrated forward in time. Candidate models can then be validated by comparison against experimental data. The outputs of neural network models depend on neuron parameters, connectivity parameters, and other model inputs. Successful model fitting requires sufficient exploration of the model parameter space, which can be computationally demanding. Additionally, identifying degeneracy in the parameters, i.e., different combinations of parameter values that produce similar outputs, is of interest, as they define the subset of parameter values consistent with the data. In this computational study, we apply metamodels to a two-population recurrent spiking network of point-neurons, the so-called Brunel network. Metamodels are data-driven approximations to more complex models with more desirable computational properties, which can be run considerably faster than the original model. Specifically, we apply and compare two different metamodelling techniques, masked autoregressive flows (MAF) and deep Gaussian process regression (DGPR), to estimate the power spectra of two different signals: the population spiking activities and the local field potential (LFP). We find that the metamodels are able to accurately model the power spectra in the asynchronous irregular regime, and that the DGPR metamodel provides a more accurate representation of the simulator compared to the MAF metamodel. Using the metamodels, we estimate the posterior probability distributions over parameters given observed simulator outputs separately for both the LFP and the population spiking activities. We find that these distributions correctly identify parameter combinations that give similar model outputs, and that some parameters are significantly more constrained by observing the LFP than by observing the population spiking activities.
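
A Brunel-style two-population network of the kind being metamodelled can be assembled in a few lines with the Brian 2 simulator; the parameter values below are illustrative stand-ins rather than the ones explored in the paper:

    from brian2 import (NeuronGroup, PoissonInput, SpikeMonitor, Synapses,
                        Hz, ms, mV, run)

    N_E, N_I = 800, 200
    eqs = "dv/dt = -v / (20*ms) : volt (unless refractory)"
    G = NeuronGroup(N_E + N_I, eqs, threshold="v > 20*mV", reset="v = 0*mV",
                    refractory=2*ms, method="exact")
    exc, inh = G[:N_E], G[N_E:]

    S_e = Synapses(exc, G, on_pre="v += 0.1*mV")     # excitatory projections
    S_e.connect(p=0.1)
    S_i = Synapses(inh, G, on_pre="v -= 0.5*mV")     # inhibitory projections
    S_i.connect(p=0.1)

    ext = PoissonInput(G, "v", N=1000, rate=9*Hz, weight=0.1*mV)  # external drive
    mon = SpikeMonitor(G)
    run(500*ms)
    print("mean rate: %.1f Hz" % (mon.num_spikes / (N_E + N_I) / 0.5))
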
APA, Harvard, Vancouver, ISO, and other styles
48

Udovichenko, Sergey, Alexey Gubin, Andrey Bobylev, Abdulla Ebrahim, Alexander Busygin, and Alexander Pisarev. "Modeling of Information Processing in Biomorphic Neuroprocessor." OBM Neurobiology 06, no. 03 (2022): 1–19. http://dx.doi.org/10.21926/obm.neurobiol.2203134.

Full text
Abstract:
This study presents the results of modeling the processing of incoming information in a neuroprocessor that implements a biomorphic spiking neural network with numerous neurons and trainable synaptic connections between them. We developed physico-mathematical models of the encoding of information into biomorphic pulses and of their decoding into a binary code after the neural block, as well as models of the routing of neurons' output pulses by the logic matrix to the synapses of other neurons and of the associative self-learning of the memory matrix within the hardware spiking neural network, with long-term potentiation and the spike-timing-dependent plasticity of the memristor. Based on the developed models, numerical simulation demonstrates the performance of the individual devices of the biomorphic neuroprocessor in processing incoming information.
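
The spike-timing-dependent plasticity that the memristors are modelled to implement is commonly described by a pair-based exponential window; the Python sketch below uses placeholder amplitudes and time constants, not the paper's device parameters:

    import math

    def stdp_dw(t_pre, t_post, a_plus=0.05, a_minus=0.055,
                tau_plus=20.0, tau_minus=20.0):
        """Weight change for one pre/post spike pair (times in ms)."""
        dt = t_post - t_pre
        if dt > 0:     # pre before post: potentiation
            return a_plus * math.exp(-dt / tau_plus)
        if dt < 0:     # post before pre: depression
            return -a_minus * math.exp(dt / tau_minus)
        return 0.0

    w = 0.5
    for t_pre, t_post in [(10.0, 14.0), (30.0, 27.0), (50.0, 51.0)]:
        w = min(1.0, max(0.0, w + stdp_dw(t_pre, t_post)))  # clip to device range
        print("pair (%.0f, %.0f): w = %.4f" % (t_pre, t_post, w))
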
APA, Harvard, Vancouver, ISO, and other styles
49

Márquez-Vera, Carlos Antonio, Zaineb Yakoub, Marco Antonio Márquez Vera, and Alfian Ma'arif. "Spiking PID Control Applied in the Van de Vusse Reaction." International Journal of Robotics and Control Systems 1, no. 4 (2021): 488–500. http://dx.doi.org/10.31763/ijrcs.v1i4.490.

Full text
Abstract:
Artificial neural networks (ANNs) can approximate signals and give interesting results in pattern recognition, and some works use neural networks for control applications. However, biological neurons do not generate signals similar to those obtained from ANNs. Spiking neurons are an interesting topic, since they simulate the real behavior of biological neurons. This paper employs a spiking neuron to compute a PID control, which is then applied to the Van de Vusse reaction. Like the inverted pendulum, this reaction is a benchmark for systems with inverse response, which causes the output to undershoot. One problem is how to encode information in a form the neuron can interpret, and how to decode the spikes the neuron generates in order to interpret its behavior. In this work, a spiking neuron computes a PID control by coding the control signal in the timing of the spikes it generates: the neuron's synaptic weights are the PID gains, and the spike observed at the axon is the coded control signal. The neuron adaptation seeks the weights needed to generate the spike at the instant required to control the chemical reaction. The simulation results show that this kind of neuron can be used for control, and that a spiking neural network could overcome the undershoot caused by the inverse response of the chemical reaction.
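
For reference, the conventional discrete-time PID law whose gains play the role assigned here to synaptic weights takes only a few lines; the plant below is a toy first-order system, not the Van de Vusse kinetics:

    class PID:
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral, self.prev_error = 0.0, 0.0

        def step(self, setpoint, measurement):
            e = setpoint - measurement
            self.integral += e * self.dt
            derivative = (e - self.prev_error) / self.dt
            self.prev_error = e
            return self.kp * e + self.ki * self.integral + self.kd * derivative

    # Toy first-order plant: x' = -x + u, Euler-integrated.
    pid, x, dt = PID(kp=2.0, ki=1.0, kd=0.1, dt=0.01), 0.0, 0.01
    for _ in range(500):
        x += dt * (-x + pid.step(1.0, x))
    print("output after 5 s: %.3f" % x)
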
APA, Harvard, Vancouver, ISO, and other styles
50

Kunev, Martin, Petr Kuznetsov, and Denis Sheynikhovich. "Agreement in Spiking Neural Networks." Journal of Computational Biology 29, no. 4 (2022): 358–69. http://dx.doi.org/10.1089/cmb.2021.0365.

Full text
APA, Harvard, Vancouver, ISO, and other styles