Academic literature on the topic 'Spiking neural network (SNN)'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Spiking neural network (SNN).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Spiking neural network (SNN)"

1

Ngu, Huynh Cong Viet, and Keon Myung Lee. "Effective Conversion of a Convolutional Neural Network into a Spiking Neural Network for Image Recognition Tasks." Applied Sciences 12, no. 11 (2022): 5749. http://dx.doi.org/10.3390/app12115749.

Abstract:
Due to their energy efficiency, spiking neural networks (SNNs) have gradually been considered as an alternative to convolutional neural networks (CNNs) in various machine learning tasks. In image recognition tasks, leveraging the superior capability of CNNs, CNN–SNN conversion is considered one of the most successful approaches to training SNNs. However, previous works assume that a rather long inference time period, called the inference latency, is allowed, trading off inference latency against accuracy. One of the main reasons for this phenomenon stems from the difficulty of determining a proper firing threshold for spiking neurons. The threshold determination procedure is called a threshold balancing technique in the CNN–SNN conversion approach. This paper proposes a CNN–SNN conversion method with a new threshold balancing technique that obtains converted SNN models with good accuracy even at low latency. The proposed method builds the SNN models with soft-reset IF spiking neurons. The threshold balancing technique estimates the thresholds for spiking neurons based on the maximum input current in a layerwise and channelwise manner. The experimental results show that our converted SNN models attain even higher accuracy than the corresponding trained CNN model on the MNIST dataset at low latency. In addition, for the Fashion-MNIST and CIFAR-10 datasets, our converted SNNs show less conversion loss than other methods at low latencies. The proposed method can be beneficial for deploying efficient SNN models for recognition tasks on resource-limited systems, because inference latency is strongly associated with energy consumption.
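The threshold-balancing idea in this abstract lends itself to a compact illustration. Below is a minimal sketch (ours, not the authors' code; array shapes and names are assumptions) of a soft-reset IF layer whose per-channel thresholds are set from the maximum input current observed on each channel:

```python
import numpy as np

def channelwise_thresholds(currents):
    """Estimate per-channel firing thresholds from the maximum
    input current seen on each channel (axis 0 = time steps)."""
    return np.max(currents, axis=0)

def soft_reset_if(currents, thresholds):
    """Soft-reset integrate-and-fire: on firing, subtract the
    threshold instead of resetting the membrane to zero."""
    T, C = currents.shape
    v = np.zeros(C)                      # membrane potentials
    spikes = np.zeros((T, C))
    for t in range(T):
        v += currents[t]                 # integrate input current
        fired = v >= thresholds
        spikes[t, fired] = 1.0
        v[fired] -= thresholds[fired]    # soft reset keeps the surplus
    return spikes

rng = np.random.default_rng(0)
I = rng.uniform(0.0, 1.0, size=(100, 4))   # 100 steps, 4 channels
th = channelwise_thresholds(I)
s = soft_reset_if(I, th)
rates = s.mean(axis=0)                     # firing rate per channel
```

The soft reset is what preserves residual charge between time steps, which is one reason such conversions tolerate shorter latencies than hard-reset neurons.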
3

Zhang, Yongqiang, Haijie Pang, Jinlong Ma, Guilei Ma, Xiaoming Zhang, and Menghua Man. "Research on Anti-Interference Performance of Spiking Neural Network Under Network Connection Damage." Brain Sciences 15, no. 3 (2025): 217. https://doi.org/10.3390/brainsci15030217.

Abstract:
Background: With the development of artificial intelligence, memristors have become an ideal choice for optimizing new neural network architectures and improving computing and energy efficiency, owing to their combination of storage and computing power. In this context, spiking neural networks show the ability to resist Gaussian noise, spike interference, and AC electric field interference by adjusting synaptic plasticity. The anti-interference ability of spiking neural networks has therefore become an important direction in electromagnetic protection bionics research. Methods: This research constructs two types of spiking neural network models with the LIF model as nodes, VGG-SNN and FCNN-SNN, and combines a pruning algorithm to simulate network connection damage during the training process. Using a millimeter-wave radar human motion dataset and the MNIST dataset, and comparing against traditional artificial neural networks, the anti-interference performance of spiking neural networks and traditional artificial neural networks under the same probability of edge loss was explored in depth. Results: The experimental results show that, on the millimeter-wave radar human motion dataset, the accuracy of the spiking neural network decreased by 5.83% at a sparsity of 30%, while the accuracy of the artificial neural network decreased by 18.71%. On the MNIST dataset, the accuracy of the spiking neural network decreased by 3.91% at a sparsity of 30%, while that of the artificial neural network decreased by 10.13%. Conclusions: Under the same network connection damage conditions, spiking neural networks therefore exhibit unique anti-interference advantages; their performance in information processing and pattern recognition is comparatively more stable and outstanding. Further analysis reveals that factors such as network structure, encoding method, and learning algorithm have a significant impact on the anti-interference performance of both.
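The "network connection damage" studied here is simulated by pruning edges. A minimal sketch of that damage model (ours; the paper applies pruning during training, which is omitted here) zeroes a random 30% of the synaptic weights:

```python
import numpy as np

def prune_edges(weights, sparsity, rng):
    """Simulate connection damage by zeroing a random fraction
    (`sparsity`) of the nonzero synaptic weights."""
    w = weights.copy()
    idx = np.flatnonzero(w)                        # existing edges
    n_drop = int(round(sparsity * idx.size))
    drop = rng.choice(idx, size=n_drop, replace=False)
    w.flat[drop] = 0.0
    return w

rng = np.random.default_rng(42)
W = rng.normal(size=(64, 64))                      # dense synaptic matrix
W_damaged = prune_edges(W, sparsity=0.30, rng=rng)
lost = 1.0 - np.count_nonzero(W_damaged) / np.count_nonzero(W)
```

Running both network types with the same `W_damaged` mask is what makes the accuracy-drop comparison in the abstract an apples-to-apples one.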
4

Dan, Yongping, Zhida Wang, Hengyi Li, and Jintong Wei. "Sa-SNN: spiking attention neural network for image classification." PeerJ Computer Science 10 (November 25, 2024): e2549. http://dx.doi.org/10.7717/peerj-cs.2549.

Abstract:
Spiking neural networks (SNNs) are known as third-generation neural networks due to their energy efficiency and low power consumption, and they have received a lot of attention due to their biological plausibility. SNNs are closer to the way biological neural systems work, simulating the transmission of information through discrete spiking signals between neurons. Influenced by the great potential shown by the attention mechanism in convolutional neural networks, we propose a Spiking Attention Neural Network (Sa-SNN). The network includes a novel Spiking-Efficient Channel Attention (SECA) module that adopts a local cross-channel interaction strategy without dimensionality reduction, implemented by one-dimensional convolution, which involves a small number of model parameters but provides a significant performance improvement for the network. The design of local inter-channel interactions through adaptive convolutional kernel sizes, rather than global dependencies, allows the network to focus more on the selection of important features, reduces the impact of redundant features, and improves the network's recognition and generalisation capabilities. To investigate the effect of this structure on the network, we conducted a series of experiments. Experimental results show that Sa-SNN can perform image classification tasks more accurately. Our network achieved 99.61%, 99.61%, 94.13%, and 99.63% on the MNIST, Fashion-MNIST, N-MNIST datasets, respectively, and Sa-SNN performed well in terms of accuracy compared with mainstream SNNs.
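The SECA module is described as local cross-channel interaction without dimensionality reduction, realized by one-dimensional convolution with an adaptively sized kernel. A rough numpy sketch of that mechanism (our assumptions: the ECA-style kernel-size heuristic, a fixed averaging kernel standing in for the learned 1-D convolution, and a sigmoid gate):

```python
import numpy as np

def adaptive_kernel_size(channels, gamma=2, b=1):
    """Kernel size grows with log2(C) and is forced odd
    (the heuristic used by ECA-style attention)."""
    k = int(abs((np.log2(channels) + b) / gamma))
    return k if k % 2 else k + 1

def channel_attention(feature_map, kernel):
    """Local cross-channel interaction without dimensionality
    reduction: a 1-D convolution over the pooled channel vector."""
    # feature_map: (C, H, W) -> per-channel descriptor via mean pooling
    desc = feature_map.mean(axis=(1, 2))
    pad = kernel // 2
    padded = np.pad(desc, pad, mode="edge")
    # fixed averaging kernel as a stand-in for learned conv weights
    mixed = np.convolve(padded, np.ones(kernel) / kernel, mode="valid")
    gate = 1.0 / (1.0 + np.exp(-mixed))        # sigmoid gate per channel
    return feature_map * gate[:, None, None]

x = np.random.default_rng(1).random((32, 8, 8))  # 32 channels
k = adaptive_kernel_size(32)
y = channel_attention(x, k)
```

The point of the adaptive kernel is that each channel attends only to `k` neighbours rather than to all channels, keeping the parameter count tiny.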
5

Mohamed, Siti Aisyah, Muhaini Othman, and Mohd Hafizul Afifi. "A review on data clustering using spiking neural network (SNN) models." Indonesian Journal of Electrical Engineering and Computer Science 15, no. 3 (2019): 1392. http://dx.doi.org/10.11591/ijeecs.v15.i3.pp1392-1400.

Abstract:
The evolution of Artificial Neural Networks has recently given researchers an interest in exploring deep learning evolved from Spiking Neural Network clustering methods. Spiking Neural Network (SNN) models capture neuronal behaviour more precisely than traditional neural networks, as they incorporate the notion of time into their functioning model [1]. The aim of this paper is to review studies related to clustering problems employing Spiking Neural Network models. Even though there are many algorithms used to solve clustering problems, most of the methods are only suitable for static data and fixed windows of time series. Hence, there is a need to analyse complex data types, and the potential for improvement is encouraging. Therefore, this paper summarizes the significant results obtained by applying SNN models in different clustering approaches. Thus, the findings of this paper demonstrate the purpose of clustering methods using SNNs, for fellow researchers from various disciplines to discover and understand complex data.
6

Fu, Qiang, and Hongbin Dong. "Breast Cancer Recognition Using Saliency-Based Spiking Neural Network." Wireless Communications and Mobile Computing 2022 (March 24, 2022): 1–17. http://dx.doi.org/10.1155/2022/8369368.

Abstract:
Spiking neural networks (SNNs) use event-driven signals to encode physical information for neural computation. The SNN takes the spiking neuron as its basic unit, modelling the process of nerve cells from receiving stimuli to firing spikes; the SNN is therefore more biologically plausible. Although the SNN shares more characteristics with biological neurons, it is rarely used for medical image recognition due to its poor performance. In this paper, a reservoir spiking neural network is used for breast cancer image recognition. Due to the difficulty of extracting lesion features in medical images, a salient feature extraction method is used in image recognition. The salient feature extraction network is composed of spiking convolution layers, which can effectively extract the features of lesions. Two temporal encoding schemes, namely linear time encoding and entropy-based time encoding, are used to encode the input patterns. Readout neurons use the ReSuMe algorithm for training, and the Fruit Fly Optimization Algorithm (FOA) is employed to optimize the network architecture to further improve the reservoir SNN performance. Three modality datasets are used to verify the effectiveness of the proposed method. The results show an accuracy of 97.44% for the BreastMNIST database, a classification accuracy of 98.27% on the mini-MIAS database, and an overall accuracy of 95.83% for the BreaKHis database when using the saliency feature extraction, entropy-based time encoding, and network optimization.
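One of the two encodings mentioned, linear time encoding, maps a stimulus intensity linearly onto a spike latency. A minimal sketch under the common convention that stronger inputs spike earlier (our assumption; the paper's exact mapping may differ):

```python
import numpy as np

def linear_time_encode(intensities, t_max=100):
    """Linear latency code: the strongest input spikes first.
    Intensity 1.0 -> time 0; intensity 0.0 -> t_max (latest slot)."""
    x = np.clip(intensities, 0.0, 1.0)
    return np.round((1.0 - x) * t_max).astype(int)

pixels = np.array([0.0, 0.25, 0.5, 1.0])   # normalized pixel values
times = linear_time_encode(pixels)          # -> [100, 75, 50, 0]
```

Entropy-based encoding would replace the linear map with one shaped by the input distribution; the interface stays the same.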
7

Zhang, Hong, and Yu Zhang. "Memory-Efficient Reversible Spiking Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 15 (2024): 16759–67. http://dx.doi.org/10.1609/aaai.v38i15.29616.

Abstract:
Spiking neural networks (SNNs) are potential competitors to artificial neural networks (ANNs) due to their high energy efficiency on neuromorphic hardware. However, SNNs are unfolded over simulation time steps during the training process. Thus, SNNs require much more memory than ANNs, which impedes the training of deeper SNN models. In this paper, we propose the reversible spiking neural network to reduce the memory cost of intermediate activations and membrane potentials during training. Firstly, we extend the reversible architecture along the temporal dimension and propose the reversible spiking block, which can reconstruct the computational graph and recompute all intermediate variables of the forward pass with a reverse process. On this basis, we adapt state-of-the-art SNN models into reversible variants, namely the reversible spiking ResNet (RevSResNet) and the reversible spiking transformer (RevSFormer). Through experiments on static and neuromorphic datasets, we demonstrate that the memory cost per image of our reversible SNNs does not increase with the network depth. On the CIFAR10 and CIFAR100 datasets, our RevSResNet37 and RevSFormer-4-384 achieve comparable accuracies and consume 3.79× and 3.00× less GPU memory per image than their counterparts with roughly identical model complexity and parameters. We believe that this work can unleash the memory constraints in SNN training and pave the way for training extremely large and deep SNNs.
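The memory saving comes from reversible coupling: intermediate activations are not stored, because the inputs of each block can be recomputed from its outputs. A tiny non-spiking sketch of that reconstruction (stand-in residual functions `f` and `g`; the paper's blocks are spiking and also reverse along the temporal dimension):

```python
import numpy as np

def f(x): return np.tanh(x)       # stand-in residual functions
def g(x): return 0.5 * x

def rev_forward(x1, x2):
    """Reversible coupling: the outputs suffice to rebuild the inputs."""
    y1 = x1 + f(x2)
    y2 = x2 + g(y1)
    return y1, y2

def rev_inverse(y1, y2):
    """Recompute the inputs from the outputs: no activations stored."""
    x2 = y2 - g(y1)
    x1 = y1 - f(x2)
    return x1, x2

rng = np.random.default_rng(3)
a, b = rng.normal(size=8), rng.normal(size=8)
y1, y2 = rev_forward(a, b)
ra, rb = rev_inverse(y1, y2)      # exact reconstruction of (a, b)
```

Because the inverse is exact, backpropagation can regenerate each layer's activations on the fly, trading extra compute for near-constant memory per image.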
8

A, Mohamed Sikkander, R. RamaNachiar, and Yasmeen Khadeeja. "Spiking Neural Network (SNN) Using to Detect Breast Cancer." International Journal of Scientific Research and Innovative Studies 1, no. 1 (2022): 20–22. https://doi.org/10.5281/zenodo.6877740.

Abstract:
Spiking neural networks (SNNs) use event-driven signals to encode physical data for neural computation. The SNN takes the spiking neuron as its basic unit, modelling the process of nerve cells from receiving stimuli to firing spikes; the SNN is therefore more biologically plausible. Although the SNN shares more characteristics with biological neurons, it is rarely used for medical image recognition because of its poor performance. In this paper, a reservoir spiking neural network is employed for breast cancer image recognition. Because of the difficulty of extracting lesion features in medical images, a salient feature extraction technique is employed in image recognition. The salient feature extraction network consists of spiking convolution layers, which can effectively extract the features of lesions. Two temporal encoding schemes, namely linear time encoding and entropy-based time encoding, are used to encode the input patterns. Readout neurons use the ReSuMe algorithm for training, and the Fruit Fly Optimization Algorithm (FOA) is utilized to optimize the network architecture to further improve the reservoir SNN performance. Three modality datasets are used to verify the effectiveness of the proposed technique. The results show an accuracy of 97.44% for the BreastMNIST database. The classification accuracy is 98.27% on the mini-MIAS database. The overall accuracy is 95.83% for the BreaKHis database when using the saliency feature extraction, entropy-based time encoding, and network optimization.
9

Mo, Lingfei, and Minghao Wang. "LogicSNN: A Unified Spiking Neural Networks Logical Operation Paradigm." Electronics 10, no. 17 (2021): 2123. http://dx.doi.org/10.3390/electronics10172123.

Abstract:
LogicSNN, a unified spiking neural network (SNN) logical operation paradigm, is proposed in this paper. First, we define the logical variables under the semantics of SNNs. Then, we design the network structure of this paradigm and use spike-timing-dependent plasticity for training. According to this paradigm, six kinds of basic SNN binary logical operation modules and three kinds of combined logical networks based on these basic modules are implemented. Through these experiments, the rationality, cascading characteristics, and potential for building large-scale networks of this paradigm are verified. This study fills a gap in the logical operations of SNNs and provides a possible way to realize more complex machine learning capabilities.
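The paradigm is trained with spike-timing-dependent plasticity. A minimal sketch of the classic pair-based STDP window (textbook form with illustrative constants, not the paper's exact parameters):

```python
import math

def stdp_dw(delta_t, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Pair-based STDP: potentiate when the presynaptic spike
    precedes the postsynaptic one (delta_t = t_post - t_pre > 0),
    depress otherwise; both effects decay exponentially with |delta_t|."""
    if delta_t > 0:
        return a_plus * math.exp(-delta_t / tau)
    return -a_minus * math.exp(delta_t / tau)

dw_pot = stdp_dw(5.0)    # pre 5 ms before post -> potentiation
dw_dep = stdp_dw(-5.0)   # post 5 ms before pre -> depression
```

In a logic-module setting, repeated causal pairings strengthen exactly the synapses that implement the desired input-output truth table.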
10

Shiltagh, Nadia Adnan, and Hasnaa Ahmed Abas. "Spiking Neural Network in Precision Agriculture." Journal of Engineering 21, no. 7 (2015): 17–34. http://dx.doi.org/10.31026/j.eng.2015.07.02.

Abstract:
In this paper, a precision agriculture system based on a Wireless Sensor Network (WSN) is introduced. Soil moisture is considered one of the environmental factors that affect crops, so the irrigation period must be monitored. A neural network is capable of learning the behaviour of agricultural soil in the absence of a mathematical model. This paper introduces a modified type of neural network known as a Spiking Neural Network (SNN). In this work, the precision agriculture system is modelled with two SNNs that have been identified off-line from logged data: one represents the monitor, located at the sink, where the irrigation period is calculated, and the other represents the soil. In addition, to reduce the power consumption of sensor nodes, the Modified Chain-Cluster based Mixed (MCCM) routing algorithm is used. According to MCCM, the sensors send to the sink only those packets whose readings are below the threshold moisture level. The SNN with the Modified Spike-Prop (MSP) training algorithm is capable of identifying the soil and the irrigation periods and of monitoring the soil moisture level; that is, the SNN can act as both an identifier and a monitor. By applying this system, the agricultural area in question reaches the desired moisture level.
 

Dissertations / Theses on the topic "Spiking neural network (SNN)"

1

Buhry, Laure. "Estimation de paramètres de modèles de neurones biologiques sur une plate-forme de SNN (Spiking Neural Network) implantés "insilico"." Thesis, Bordeaux 1, 2010. http://www.theses.fr/2010BOR14057/document.

Abstract:
These works, which were conducted in a research group designing neuromimetic integrated circuits based on the Hodgkin-Huxley model, deal with the parameter estimation of biological neuron models. The first part of the manuscript tries to bridge the gap between neuron modeling and optimization. We focus our interest on the Hodgkin-Huxley model because it is used in the group. There already existed an estimation method associated with the voltage-clamp technique; nevertheless, this classical method does not allow all parameters of the model to be extracted precisely. In the second part, we therefore propose an alternative method to jointly estimate all parameters of one ionic channel while avoiding the usual approximations. This method is based on the differential evolution algorithm. The third chapter is divided into three sections: the first two sections present the application of our new estimation method to two different problems, model fitting from biological data and the development of an automated tuning protocol for neuromimetic chips, channel by channel. In the third section, we propose an estimation technique using only membrane voltage recordings, which are easier to measure than ionic currents. Finally, the fourth and last chapter is a theoretical study preparing the implementation of small networks of about a hundred electronic neurons on neuromimetic chips: more specifically, we study in software the influence of cellular intrinsic properties on the global behavior of a neural network in the context of gamma oscillations.
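The alternative estimation method in this thesis rests on the differential evolution algorithm. A minimal DE/rand/1/bin loop is sketched below on a toy quadratic objective (recovering a known parameter vector) rather than the Hodgkin-Huxley model itself:

```python
import numpy as np

def differential_evolution(loss, bounds, pop=20, gens=100,
                           f_scale=0.8, cr=0.9, seed=0):
    """Minimal DE/rand/1/bin minimizer: mutate three random members,
    binomially cross with the current member, keep the better one."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(bounds)
    x = lo + rng.random((pop, dim)) * (hi - lo)      # initial population
    fx = np.array([loss(v) for v in x])
    for _ in range(gens):
        for i in range(pop):
            r1, r2, r3 = rng.choice([j for j in range(pop) if j != i],
                                    size=3, replace=False)
            mutant = np.clip(x[r1] + f_scale * (x[r2] - x[r3]), lo, hi)
            mask = rng.random(dim) < cr
            mask[rng.integers(dim)] = True           # ensure one gene crosses
            trial = np.where(mask, mutant, x[i])
            ft = loss(trial)
            if ft < fx[i]:                           # greedy selection
                x[i], fx[i] = trial, ft
    return x[np.argmin(fx)], fx.min()

# Toy objective: recover a known parameter vector (stand-in for
# fitting ionic-channel parameters to voltage-clamp data).
target = np.array([1.5, -0.5, 2.0])
best, err = differential_evolution(
    lambda v: float(np.sum((v - target) ** 2)),
    bounds=np.array([[-5.0, 5.0]] * 3))
```

In the thesis setting, `loss` would compare simulated and recorded currents or membrane voltages, and the parameter vector would hold the conductances and kinetics of one ionic channel.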
2

Buhry, Laure. "Estimation de paramètres de modèles de neurones biologiques sur une plate-forme de SNN (Spiking Neural Network) implantés "in silico"." Phd thesis, Université Sciences et Technologies - Bordeaux I, 2010. http://tel.archives-ouvertes.fr/tel-00561396.

Abstract:
These thesis works, conducted in a team designing neuromimetic analog circuits based on the Hodgkin-Huxley model, concern the modeling of biological neurons, and more precisely the estimation of the parameters of neuron models. The first part of the manuscript bridges neuronal modeling and optimization, with emphasis on the Hodgkin-Huxley model, for which a parameter-extraction method associated with an electrophysiological measurement technique (the voltage clamp) already existed, but whose successive approximations made the precise determination of certain parameters impossible. In the second part, we propose an alternative method for estimating the parameters of the Hodgkin-Huxley model, based on the differential evolution algorithm, which overcomes the limitations of the classical method and allows all parameters of a given ionic channel to be estimated jointly. The third chapter is divided into three sections. In the first two, we apply our new technique to the estimation of the parameters of the same model from biological data, and then develop an automated protocol for tuning neuromimetic circuits, ionic channel by ionic channel. The third section presents a parameter estimation method based on recordings of a neuron's membrane voltage, data that are easier to acquire than ionic currents. The fourth and final chapter opens toward the use of small networks of about a hundred electronic neurons: we carry out a software study of the influence of the cell's intrinsic properties on the global behavior of the network in the context of gamma oscillations.
3

Patterson, James Cameron. "Managing a real-time massively-parallel neural architecture." Thesis, University of Manchester, 2012. https://www.research.manchester.ac.uk/portal/en/theses/managing-a-realtime-massivelyparallel-neural-architecture(dfab5ca7-fcd5-4ebe-887b-0a7c330c7206).html.

Abstract:
A human brain has billions of processing elements operating simultaneously; the only practical way to model this computationally is with a massively-parallel computer. A computer on such a significant scale requires hundreds of thousands of interconnected processing elements, a complex environment which requires many levels of monitoring, management and control. Management begins from the moment power is applied and continues whilst the application software loads, executes, and the results are downloaded. This is the story of the research and development of a framework of scalable management tools that support SpiNNaker, a novel computing architecture designed to model spiking neural networks of biologically-significant sizes. This management framework provides solutions from the most fundamental set of power-on self-tests, through to complex, real-time monitoring of the health of the hardware and the software during simulation. The framework devised uses standard tools where appropriate, covering hardware up / down events and capacity information, through to bespoke software developed to provide real-time insight to neural network software operation across multiple levels of abstraction. With this layered management approach, users (or automated agents) have access to results dynamically and are able to make informed decisions on required actions in real-time.
4

Spyrou, Theofilos. "Functional safety and reliability of neuromorphic computing systems." Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS118.

Abstract:
The recent rise of Artificial Intelligence (AI) has found a wide range of applications, essentially integrating it into almost every field of our lives. With this steep integration of AI, it is reasonable for concerns to arise, and these need to be eliminated before the deployment of AI in the field, especially in mission- and safety-critical applications such as autonomous vehicles. Spiking Neural Networks (SNNs), although biologically inspired, inherit only partially the remarkable fault-resilience capabilities of their biological counterparts, being vulnerable to electronic defects and faults occurring at the hardware level. Hence, a methodological exploration of the dependability characteristics of AI hardware accelerators and neuromorphic platforms is of utmost importance. This thesis tackles the subjects of testing and fault tolerance in SNNs and their neuromorphic implementations on hardware.
5

Johnson, Melissa. "A Spiking Bidirectional Associative Memory Neural Network." Thesis, Université d'Ottawa / University of Ottawa, 2021. http://hdl.handle.net/10393/42222.

Abstract:
Spiking neural networks (SNNs) are a more biologically realistic model of the brain than traditional analog neural networks and therefore should be better for modelling certain functions of the human brain. This thesis uses the concept of deriving an SNN from an accepted non-spiking neural network via analysis and modifications of the transmission function. We investigate this process to determine if and how the modifications can be made to minimize loss of information during the transition from non-spiking to spiking while retaining positive features and functionality of the non-spiking network. By comparing combinations of spiking neuron models and networks against each other, we determined that replacing the transmission function with a neural model that is similar to it allows for the easiest method to create a spiking neural network that works comparatively well. This similarity between transmission function and neuron model allows for easier parameter selection which is a key component in getting a functioning SNN. The parameters all play different roles, but for the most part, parameters that speed up spiking, such as large resistance values or small rheobases generally help the accuracy of the network. But the network is still incomplete for a spiking neural network since this conversion is often only performed after learning has been completed in analog form. The neuron model and subsequent network developed here are the initial steps in creating a bidirectional SNN that handles hetero-associative and auto-associative recall and can be switched easily between spiking and non-spiking with minimal to no loss of data. By tying everything to the transmission function, the non-spiking learning rule, which in our case uses the transmission function, and the neural model of the SNN, we are able to create a functioning SNN. 
Without this similarity, we find that creating SNNs is much more complicated and requires much more work in parameter optimization to achieve a functioning SNN.
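The abstract notes that parameters which speed up spiking, such as large resistances or small rheobases, generally helped accuracy. A minimal leaky integrate-and-fire sketch (illustrative constants, not the thesis model) shows how the resistance decides whether, and how soon, a constant input produces a spike:

```python
def lif_first_spike(i_in, r=10.0, tau=20.0, v_th=1.0, dt=0.1, t_max=200.0):
    """Leaky integrate-and-fire: return the time of the first spike
    for a constant input current, or None if the neuron never fires."""
    v, t = 0.0, 0.0
    while t < t_max:
        v += dt / tau * (-v + r * i_in)   # Euler step of leaky integration
        t += dt
        if v >= v_th:
            return t
    return None

slow = lif_first_spike(0.2, r=1.0)    # small R: steady state 0.2 < threshold
fast = lif_first_spike(0.2, r=10.0)   # large R: crosses threshold quickly
```

With `r=1.0` the membrane saturates below threshold and never spikes, while `r=10.0` fires within a few membrane time constants, which is the kind of parameter sensitivity the thesis describes.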
6

Goel, Piyush. "Spiking neural network based approach to EEG signal analysis." Thesis, University of Portsmouth, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.496600.

Abstract:
The research described in this thesis presents a new classification technique for continuous electroencephalographic (EEG) recordings, based on a network of spiking neurons. Analysis of the signals is performed on ensemble EEG, and the task of the neural network is to identify the P300 component in the signals. The network employs leaky integrate-and-fire neurons as nodes in a multi-layered structure. The method involves the formation of multiple weak classifiers that vote, and the collective results are used for final classification.
7

Davies, Sergio. "Learning in spiking neural networks." Thesis, University of Manchester, 2013. https://www.research.manchester.ac.uk/portal/en/theses/learning-in-spiking-neural-networks(2d2be0d7-9557-481e-b9f1-3889a5ca2447).html.

Abstract:
Artificial neural network simulators are a research field which attracts the interest of researchers from various fields, from biology to computer science. The final objectives are the understanding of the mechanisms underlying the human brain, how to reproduce them in an artificial environment, and how drugs interact with them. Multiple neural models have been proposed, each with their peculiarities, from the very complex and biologically realistic Hodgkin-Huxley neuron model to the very simple 'leaky integrate-and-fire' neuron. However, despite numerous attempts to understand the learning behaviour of the synapses, few models have been proposed. Spike-Timing-Dependent Plasticity (STDP) is one of the most relevant and biologically plausible models, and some variants (such as the triplet-based STDP rule) have been proposed to accommodate all biological observations. The research presented in this thesis focuses on a novel learning rule, based on the spike-pair STDP algorithm, which provides a statistical approach with the advantage of being less computationally expensive than the standard STDP rule, and is therefore suitable for its implementation on stand-alone computational units. The environment in which this research work has been carried out is the SpiNNaker project, which aims to provide a massively parallel computational substrate for neural simulation. To support such research, two other topics have been addressed: the first is a way to inject spikes into the SpiNNaker system through a non-real-time channel such as the Ethernet link, synchronising with the timing of the SpiNNaker system. The second research topic is focused on a way to route spikes in the SpiNNaker system based on populations of neurons. The three topics are presented in sequence after a brief introduction to the SpiNNaker project. 
Future work could include structural plasticity (also known as synaptic rewiring); here, during the simulation of neural networks on the SpiNNaker system, axons, dendrites and synapses may be grown or pruned according to biological observations.
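The pair-based STDP rule discussed in the abstract above can be sketched in a few lines; the amplitudes and time constants (`a_plus`, `a_minus`, `tau_plus`, `tau_minus`) below are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def stdp_update(w, dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP: potentiate when the pre-synaptic spike precedes the
    post-synaptic spike (dt > 0), depress otherwise; dt is in milliseconds."""
    if dt > 0:
        dw = a_plus * np.exp(-dt / tau_plus)    # long-term potentiation
    else:
        dw = -a_minus * np.exp(dt / tau_minus)  # long-term depression
    return np.clip(w + dw, 0.0, 1.0)            # keep the weight bounded

# Pre fires 5 ms before post -> the weight is potentiated.
w_pot = stdp_update(0.5, dt=5.0)
# Post fires 5 ms before pre -> the weight is depressed.
w_dep = stdp_update(0.5, dt=-5.0)
```

The statistical rule proposed in the thesis replaces this exact per-spike-pair update with a cheaper approximation, which is what makes it suitable for stand-alone computational units such as SpiNNaker cores.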
APA, Harvard, Vancouver, ISO, and other styles
8

Mekemeza, Ona Keshia. "Photonic spiking neuron network." Electronic Thesis or Diss., Bourgogne Franche-Comté, 2023. http://www.theses.fr/2023UBFCD052.

Full text
Abstract:
Today, neuromorphic networks play a crucial role in information processing, particularly as tasks become increasingly complex: voice recognition, dynamic image correlation, rapid multidimensional decision-making, data merging, behavioral optimization, etc. Neuromorphic networks come in several types; spiking networks are one of them. The latter's modus operandi is based on that of cortical neurons. As spiking networks are the most energy-efficient neuromorphic networks, they offer the greatest potential for scaling. Several demonstrations of artificial neurons have been conducted with electronic and, more recently, photonic circuits.
The integration density of silicon photonics is an asset for creating circuits that are complex enough to carry out a complete demonstration. Therefore, this thesis aims to exploit an architecture of a photonic spiking neural network based on Q-switched lasers integrated into silicon and an ultra-dense, reconfigurable interconnection circuit that can emulate synaptic weights. A complete modeling of the circuit is expected, with a practical demonstration of an application in solving a mathematical problem to be defined.
APA, Harvard, Vancouver, ISO, and other styles
9

Han, Bing. "ACCELERATION OF SPIKING NEURAL NETWORK ON GENERAL PURPOSE GRAPHICS PROCESSORS." University of Dayton / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1271368713.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

SUSI, GIANLUCA. "Asynchronous spiking neural networks: paradigma generale e applicazioni." Doctoral thesis, Università degli Studi di Roma "Tor Vergata", 2012. http://hdl.handle.net/2108/80567.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Spiking neural network (SNN)"

1

SpiNNaker: A Spiking Neural Network Architecture. now publishers, Inc., 2020. http://dx.doi.org/10.1561/9781680836523.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

SpiNNaker - a Spiking Neural Network Architecture. Now Publishers, 2020.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Li, Guoqi, Yam Song (Yansong) Chua, Haizhou Li, Peng Li, Emre O. Neftci, and Lei Deng, eds. Spiking Neural Network Learning, Benchmarking, Programming and Executing. Frontiers Media SA, 2020. http://dx.doi.org/10.3389/978-2-88963-767-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Wall, Julie, and Cornelius Glackin, eds. Spiking Neural Network Connectivity and its Potential for Temporal Sensory Processing and Variable Binding. Frontiers Media SA, 2014. http://dx.doi.org/10.3389/978-2-88919-239-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Spiking neural network (SNN)"

1

Penkov, Dimitar, Petia Koprinkova-Hristova, Nikola Kasabov, Simona Nedelcheva, Sofiya Ivanovska, and Svetlozar Yordanov. "Grid Search Optimization of Novel SNN-ESN Classifier on a Supercomputer Platform." In Large-Scale Scientific Computations. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-56208-2_45.

Full text
Abstract:
This work demonstrates the use of a supercomputer platform to optimise the hyper-parameters of a novel SNN-ESN computational model proposed by the team, which combines a brain template of spiking neurons in a spiking neural network (SNN) for feature extraction with an Echo State Network (ESN) for dynamic data series classification. A case study problem and data are used to illustrate the functionalities of the SNN-ESN. The overall SNN-ESN classifier has several hyper-parameters that are subject to refinement, such as: spiking threshold, duration of the refractory period and STDP learning rate for the SNN part; reservoir size, spectral radius of the connectivity matrix and leaking rate for the ESN part. In order to find the optimal hyper-parameter values, an exhaustive search over all possible combinations within reasonable intervals was performed using the supercomputer Avitohol. The resulting optimal parameters led to improved classification accuracy. This work demonstrates the importance of model parameter optimisation using a supercomputer platform, which improves the usability of the proposed SNN-ESN for real-time applications on complex spatio-temporal data.
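The exhaustive search over the hyper-parameters named in the abstract can be sketched as a plain grid search; the grid values and the toy scoring function below are assumptions for illustration (the actual intervals and the SNN-ESN evaluation run on Avitohol are not given in the source).

```python
from itertools import product

# Hypothetical grids over the hyper-parameters named in the abstract.
grids = {
    "spike_threshold": [0.5, 1.0],    # SNN part
    "refractory_ms":   [1, 2],
    "stdp_lr":         [0.001, 0.01],
    "reservoir_size":  [100, 500],    # ESN part
    "spectral_radius": [0.8, 0.95],
    "leak_rate":       [0.2, 0.5],
}

def grid_search(evaluate, grids):
    """Evaluate every combination exhaustively and return the best one."""
    keys = list(grids)
    best_params, best_score = None, float("-inf")
    for values in product(*(grids[k] for k in keys)):
        params = dict(zip(keys, values))
        score = evaluate(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy score standing in for SNN-ESN classification accuracy.
best, score = grid_search(
    lambda p: -abs(p["spectral_radius"] - 0.9) - abs(p["leak_rate"] - 0.3),
    grids)
```

On a supercomputer the inner loop is what gets parallelised: each hyper-parameter combination is an independent evaluation, so the combinations can simply be distributed across nodes.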
APA, Harvard, Vancouver, ISO, and other styles
2

Cao, Zhen, Hongwei Zhang, Qian Wang, and Chuanfeng Ma. "DNM-SNN: Spiking Neural Network Based on Dual Network Model." In IFIP Advances in Information and Communication Technology. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-14903-0_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Zhang, Yuan, Jian Cao, Jue Chen, Wenyu Sun, and Yuan Wang. "Razor SNN: Efficient Spiking Neural Network with Temporal Embeddings." In Artificial Neural Networks and Machine Learning – ICANN 2023. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-44192-9_33.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Bhowmik, Debanjan. "Introduction to Artificial Neural Networks (ANN) and Spiking Neural Networks (SNN)." In Spintronics-Based Neuromorphic Computing. Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-97-4445-9_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Li, Xinjie, Jianxiong Tang, and Jianhuang Lai. "EB-SNN: An Ensemble Binary Spiking Neural Network for Visual Recognition." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-78186-5_21.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Cachi, Paolo G., Sebastián Ventura Soto, and Krzysztof J. Cios. "TM-SNN: Threshold Modulated Spiking Neural Network for Multi-task Learning." In Advances in Computational Intelligence. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-43078-7_53.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Su, Jiahao, Kang You, Zekai Xu, Weizhi Xu, and Zhezhi He. "Obtaining Optimal Spiking Neural Network in Sequence Learning via CRNN-SNN Conversion." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-72359-9_29.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Bhowmik, Debanjan. "Design of Spiking Neural Networks (SNN) with Domain-Wall Devices." In Spintronics-Based Neuromorphic Computing. Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-97-4445-9_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Luo, Xiwen, Qiang Fu, Sheng Qin, and Kaiyang Wang. "Encrypted-SNN: A Privacy-Preserving Method for Converting Artificial Neural Networks to Spiking Neural Networks." In Neural Information Processing. Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-8082-6_40.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Liu, Chengzhi, Zihong Luo, Zheng Tao, Chenghao Liu, Yitao Xu, and Zile Huang. "MTSA-SNN: A Multi-modal Time Series Analysis Model Based on Spiking Neural Network." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2024. https://doi.org/10.1007/978-3-031-78341-8_27.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Spiking neural network (SNN)"

1

Lee, Donghyun, Ruokai Yin, Youngeun Kim, Abhishek Moitra, Yuhang Li, and Priyadarshini Panda. "TT-SNN: Tensor Train Decomposition for Efficient Spiking Neural Network Training." In 2024 Design, Automation & Test in Europe Conference & Exhibition (DATE). IEEE, 2024. http://dx.doi.org/10.23919/date58400.2024.10546679.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Xu, Lei, Guohui Nie, Haibin Zheng, Chenlu Ma, Zhijun Yang, and Jinyin Chen. "SNN-GST: Gradient-Based Security Testing Method for Spiking Neural Networks." In 2025 7th International Conference on Software Engineering and Computer Science (CSECS). IEEE, 2025. https://doi.org/10.1109/csecs64665.2025.11009396.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Hasssan, Ahmed, Jian Meng, Anupreetham Anupreetham, and Jae-sun Seo. "IM-SNN: Memory-Efficient Spiking Neural Network with Low-Precision Membrane Potentials and Weights." In 2024 International Conference on Neuromorphic Systems (ICONS). IEEE, 2024. https://doi.org/10.1109/icons62911.2024.00029.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Lukashov, Ivan, and Alexander Antonov. "TRANSACTION-LEVEL DESIGNING OF NEUROMORPHIC PROCESSORS MICROARCHITECTURE." In 24th SGEM International Multidisciplinary Scientific GeoConference 2024. STEF92 Technology, 2024. https://doi.org/10.5593/sgem2024/2.1/s07.11.

Full text
Abstract:
Spiking neural networks (SNNs) are a promising research direction due to their ability to imitate certain functions of the brain. Hardware acceleration of SNNs can offer orders-of-magnitude increases in performance and power efficiency. However, traditional hardware description languages present a barrier to rapid development and prototyping of custom internal hardware mechanisms that affect hardware construction throughout the entire processor structure. Mainstream high-level design methods also have disadvantages, e.g. poor support for describing the management of transaction streams in dynamically scheduled pipelined structures. To accelerate the development of custom neuromorphic processors, we propose the Neuromorphix software library, which implements a flexible, reconfigurable microarchitectural template enabling selection of a set of transactions specific to neuromorphic processors. Neuromorphix is based on the previously developed ActiveCore open-source framework, which provides a hardware-oriented intermediate representation for the generation of hardware data types, operations and behavioral logic. The development process is accelerated by automatic generation of hardware structures typical for neuromorphic processors using a transaction-level approach. At the same time, Neuromorphix supports the integration of user-defined hardware blocks and enables reuse of high-level hardware mechanisms, which substantially lowers the entry barrier for a wide range of neuromorphic processor developers.
APA, Harvard, Vancouver, ISO, and other styles
5

Ban, Chaoyi, Linbo Shan, Gaoqi Yang, et al. "Artificial VO2 Spiking Neurons with Protective Mechanism for Enhancing Resilience of Spiking Neural Network Against Adversarial Attacks." In 2024 IEEE Silicon Nanoelectronics Workshop (SNW). IEEE, 2024. http://dx.doi.org/10.1109/snw63608.2024.10639247.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Bodden, Lennard, Duc Bach Ha, Franziska Schwaiger, Lars Kreuzberg, and Sven Behnke. "Spiking CenterNet: A Distillation-boosted Spiking Neural Network for Object Detection." In 2024 International Joint Conference on Neural Networks (IJCNN). IEEE, 2024. http://dx.doi.org/10.1109/ijcnn60899.2024.10650418.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Li, Yang, and Yi Zeng. "Efficient and Accurate Conversion of Spiking Neural Network with Burst Spikes." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/345.

Full text
Abstract:
Spiking neural network (SNN), as a brain-inspired energy-efficient neural network, has attracted the interest of researchers, while the training of spiking neural networks is still an open problem. One effective way is to map the weights of a trained ANN to an SNN to achieve high reasoning ability. However, the converted spiking neural network often suffers from performance degradation and a considerable time delay. To speed up the inference process and obtain higher accuracy, we theoretically analyze the errors in the conversion process from three perspectives: the differences between IF and ReLU, the time dimension, and the pooling operation. We propose a neuron model for releasing burst spikes, a cheap but highly efficient method to resolve residual information. In addition, Lateral Inhibition Pooling (LIPooling) is proposed to solve the inaccuracy problem caused by MaxPooling in the conversion process. Experimental results on CIFAR and ImageNet demonstrate that our algorithm is efficient and accurate. For example, our method can ensure nearly lossless conversion of SNN using only about 1/10 of the simulation time (less than 100 time steps) at 0.693x the energy consumption of the typical method. Our code is available at https://github.com/Brain-Inspired-Cognitive-Engine/Conversion_Burst.
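The residual-information problem that burst spikes address can be seen in a minimal soft-reset integrate-and-fire step; this is a generic sketch of the idea, not the authors' implementation, and the `max_burst` parameter name is an assumption.

```python
def if_neuron_step(v, input_current, threshold, max_burst=1):
    """Soft-reset integrate-and-fire step. With max_burst > 1 the neuron may
    emit several spikes per time step ('burst spikes'), draining the residual
    membrane potential that a large input would otherwise leave behind."""
    v = v + input_current
    spikes = 0
    while v >= threshold and spikes < max_burst:
        v -= threshold   # soft reset: subtract the threshold, keep the residue
        spikes += 1
    return v, spikes

# With one spike per step, a large input leaves residual charge behind...
v1, s1 = if_neuron_step(0.0, 2.5, threshold=1.0, max_burst=1)
# ...while burst spikes drain it within the same time step.
v2, s2 = if_neuron_step(0.0, 2.5, threshold=1.0, max_burst=4)
```

In the single-spike case the neuron fires once and retains 1.5 units of charge; with bursts allowed it fires twice and retains only 0.5, so the converted SNN tracks the ANN's ReLU activations with fewer simulation steps.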
APA, Harvard, Vancouver, ISO, and other styles
8

Cheng, Xiang, Yunzhe Hao, Jiaming Xu, and Bo Xu. "LISNN: Improving Spiking Neural Networks with Lateral Interactions for Robust Object Recognition." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/211.

Full text
Abstract:
Spiking Neural Network (SNN) is considered more biologically plausible and energy-efficient on emerging neuromorphic hardware. Recently the backpropagation algorithm has been utilized for training SNNs, which allows SNNs to go deeper and achieve higher performance. However, most existing SNN models for object recognition are mainly convolutional or fully-connected structures, which only have inter-layer connections but no intra-layer connections. Inspired by lateral interactions in neuroscience, we propose a high-performance and noise-robust Spiking Neural Network (dubbed LISNN). Based on the convolutional SNN, we model the lateral interactions between spatially adjacent neurons and integrate them into the spiking neuron membrane potential formula, then build a multi-layer SNN on a popular deep learning framework, i.e., PyTorch. We utilize the pseudo-derivative method to solve the non-differentiable problem when applying backpropagation to train LISNN, and test LISNN on multiple standard datasets. Experimental results demonstrate that the proposed model can achieve competitive or better performance compared to current state-of-the-art spiking neural networks on the MNIST, Fashion-MNIST, and N-MNIST datasets. Besides, thanks to lateral interactions, our model possesses stronger noise robustness than other SNNs. Our work brings a biologically plausible mechanism into SNN, hoping that it can help us understand the visual information processing in the brain.
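The intra-layer coupling described above can be sketched as an extra term in a leaky membrane update: each neuron receives, besides its feed-forward input, input from its neighbours' previous-step spikes. The weight matrix, decay constant, and function names below are illustrative assumptions, not LISNN's actual formulation.

```python
import numpy as np

def membrane_update(v, input_current, spikes_prev, w_lateral, decay=0.9):
    """One leaky membrane step with lateral interactions: the term
    w_lateral @ spikes_prev couples each neuron to the previous-step
    spikes of its spatially adjacent neighbours."""
    lateral = w_lateral @ spikes_prev
    return decay * v + input_current + lateral

# Three neurons in a row; only adjacent pairs are laterally connected.
w = np.array([[0.0, 0.2, 0.0],
              [0.2, 0.0, 0.2],
              [0.0, 0.2, 0.0]])
v = membrane_update(np.zeros(3),
                    input_current=np.array([0.1, 0.1, 0.1]),
                    spikes_prev=np.array([1.0, 0.0, 1.0]),
                    w_lateral=w)
# The middle neuron is driven by both spiking neighbours: 0.1 + 0.2 + 0.2.
```

Because this extra term is differentiable in the weights, it slots directly into pseudo-derivative backpropagation training on a framework such as PyTorch.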
APA, Harvard, Vancouver, ISO, and other styles
9

Arnold, Elias, Georg Böcherer, Eric Müller, et al. "Spiking Neural Network Equalization for IM/DD Optical Communication." In Signal Processing in Photonic Communications. Optica Publishing Group, 2022. http://dx.doi.org/10.1364/sppcom.2022.sptu1j.2.

Full text
Abstract:
A spiking neural network (SNN) equalizer model suitable for electronic neuromorphic hardware is designed for an IM/DD link. The SNN achieves the same bit-error-rate as an artificial neural network, outperforming linear equalization.
APA, Harvard, Vancouver, ISO, and other styles
10

von Bank, Alexander, Eike-Manuel Edelmann, and Laurent Schmalen. "Spiking Neural Network Decision Feedback Equalization for IM/DD Systems." In Integrated Photonics Research, Silicon and Nanophotonics. Optica Publishing Group, 2023. http://dx.doi.org/10.1364/iprsn.2023.jw2e.3.

Full text
Abstract:
A spiking neural network (SNN) equalizer with a decision feedback structure is applied to an IM/DD link with various parameters. The SNN outperforms linear and artificial neural network (ANN) based equalizers.
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Spiking neural network (SNN)"

1

Wang, Felix, Nick Alonso, and Corinne Teeter. Combining Spike Time Dependent Plasticity (STDP) and Backpropagation (BP) for Robust and Data Efficient Spiking Neural Networks (SNN). Office of Scientific and Technical Information (OSTI), 2022. http://dx.doi.org/10.2172/1902866.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Pasupuleti, Murali Krishna. Neural Computation and Learning Theory: Expressivity, Dynamics, and Biologically Inspired AI. National Education Services, 2025. https://doi.org/10.62311/nesx/rriv425.

Full text
Abstract:
Neural computation and learning theory provide the foundational principles for understanding how artificial and biological neural networks encode, process, and learn from data. This research explores expressivity, computational dynamics, and biologically inspired AI, focusing on theoretical expressivity limits, infinite-width neural networks, recurrent and spiking neural networks, attractor models, and synaptic plasticity. The study investigates mathematical models of function approximation, kernel methods, dynamical systems, and stability properties to assess the generalization capabilities of deep learning architectures. Additionally, it explores biologically plausible learning mechanisms such as Hebbian learning, spike-timing-dependent plasticity (STDP), and neuromodulation, drawing insights from neuroscience and cognitive computing. The role of spiking neural networks (SNNs) and neuromorphic computing in low-power AI and real-time decision-making is also analyzed, with applications in robotics, brain-computer interfaces, edge AI, and cognitive computing. Case studies highlight the industrial adoption of biologically inspired AI, focusing on adaptive neural controllers, neuromorphic vision, and memory-based architectures. This research underscores the importance of integrating theoretical learning principles with biologically motivated AI models to develop more interpretable, generalizable, and scalable intelligent systems.
Keywords: Neural computation, learning theory, expressivity, deep learning, recurrent neural networks, spiking neural networks, biologically inspired AI, infinite-width networks, kernel methods, attractor networks, synaptic plasticity, STDP, neuromodulation, cognitive computing, dynamical systems, function approximation, generalization, AI stability, neuromorphic computing, robotics, brain-computer interfaces, edge AI, biologically plausible learning.
APA, Harvard, Vancouver, ISO, and other styles
3

Aimone, James, Christopher Bennett, Suma Cardwell, Ryan Dellana, and Tianyao Xiao. Mosaic The Best of Both Worlds: Analog devices with Digital Spiking Communication to build a Hybrid Neural Network Accelerator. Office of Scientific and Technical Information (OSTI), 2020. http://dx.doi.org/10.2172/1673175.

Full text
APA, Harvard, Vancouver, ISO, and other styles