
Journal articles on the topic 'Spiking neural network (SNN)'

Consult the top 50 journal articles for your research on the topic 'Spiking neural network (SNN).'


1

Ngu, Huynh Cong Viet, and Keon Myung Lee. "Effective Conversion of a Convolutional Neural Network into a Spiking Neural Network for Image Recognition Tasks." Applied Sciences 12, no. 11 (2022): 5749. http://dx.doi.org/10.3390/app12115749.

Abstract:
Due to their energy efficiency, spiking neural networks (SNNs) have gradually been considered as an alternative to convolutional neural networks (CNNs) in various machine learning tasks. In image recognition tasks, CNN–SNN conversion, which leverages the superior capability of CNNs, is considered one of the most successful approaches to training SNNs. However, previous works assume that a rather long inference time period, called the inference latency, is allowed, and face a trade-off between inference latency and accuracy. One of the main reasons for this phenomenon is the difficulty of determining a proper firing threshold for spiking neurons. The threshold determination procedure is called a threshold balancing technique in the CNN–SNN conversion approach. This paper proposes a CNN–SNN conversion method with a new threshold balancing technique that obtains converted SNN models with good accuracy even at low latency. The proposed method organizes the SNN models with soft-reset IF spiking neurons. The threshold balancing technique estimates the thresholds for spiking neurons based on the maximum input current in a layerwise and channelwise manner. The experimental results show that our converted SNN models attain even higher accuracy than the corresponding trained CNN model on the MNIST dataset at low latency. In addition, for the Fashion-MNIST and CIFAR-10 datasets, our converted SNNs show less conversion loss than other methods at low latencies. The proposed method can be beneficial for deploying efficient SNN models for recognition tasks on resource-limited systems, because inference latency is strongly associated with energy consumption.
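The two ingredients this abstract names (soft-reset IF neurons and thresholds taken from the maximum input current per channel) can be sketched roughly as follows; `estimate_threshold` and `soft_reset_if_step` are hypothetical helper names, and this is a minimal illustration under simplifying assumptions, not the authors' implementation:

```python
import numpy as np

def estimate_threshold(input_currents):
    """Channelwise threshold estimate: the maximum input current seen
    over calibration samples (rows = samples, cols = channels)."""
    return np.max(input_currents, axis=0)

def soft_reset_if_step(v, i_in, theta):
    """One timestep of a soft-reset IF neuron: the membrane potential v
    integrates the input; on firing, theta is subtracted (soft reset)
    instead of resetting v to zero, so residual charge is preserved."""
    v = v + i_in
    spikes = (v >= theta).astype(float)
    v = v - spikes * theta  # soft reset
    return v, spikes
```

The soft reset matters because the residual potential carries over the part of the input that exceeded the threshold, which reduces conversion error at short latencies.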
3

Zhang, Yongqiang, Haijie Pang, Jinlong Ma, Guilei Ma, Xiaoming Zhang, and Menghua Man. "Research on Anti-Interference Performance of Spiking Neural Network Under Network Connection Damage." Brain Sciences 15, no. 3 (2025): 217. https://doi.org/10.3390/brainsci15030217.

Abstract:
Background: With the development of artificial intelligence, memristors have become an ideal choice for optimizing new neural network architectures and improving computing and energy efficiency, owing to their combination of storage and computing. In this context, spiking neural networks show the ability to resist Gaussian noise, spike interference, and AC electric field interference by adjusting synaptic plasticity. The anti-interference ability of spiking neural networks has thus become an important direction of electromagnetic protection bionics research. Methods: This research constructs two types of spiking neural network models with LIF neurons as nodes, VGG-SNN and FCNN-SNN, and uses a pruning algorithm to simulate network connection damage during training. By comparing spiking neural networks with traditional artificial neural networks on a millimeter-wave radar human motion dataset and the MNIST dataset, the anti-interference performance of both under the same probability of edge loss was deeply explored. Results: The experimental results show that on the millimeter-wave radar human motion dataset, the accuracy of the spiking neural network decreased by 5.83% at a sparsity of 30%, while the accuracy of the artificial neural network decreased by 18.71%. On the MNIST dataset, the accuracy of the spiking neural network decreased by 3.91% at a sparsity of 30%, while that of the artificial neural network decreased by 10.13%. Conclusions: Under the same network connection damage conditions, spiking neural networks therefore exhibit unique anti-interference advantages; their performance in information processing and pattern recognition is comparatively more stable. Further analysis reveals that factors such as network structure, encoding method, and learning algorithm have a significant impact on the anti-interference performance of both.
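Simulating connection damage by pruning, as this abstract describes, amounts to zeroing a random fraction of synaptic weights. A minimal sketch (the function name `prune_edges` is hypothetical, and the paper's exact pruning procedure may differ):

```python
import numpy as np

def prune_edges(weights, sparsity, rng=None):
    """Randomly zero out a fraction `sparsity` of synaptic connections,
    a simple stand-in for simulating network connection damage."""
    rng = np.random.default_rng(rng)
    mask = rng.random(weights.shape) >= sparsity
    return weights * mask, mask
```

Applying the same mask-based damage to an SNN and an ANN of matching architecture gives the controlled comparison the study performs.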
4

Dan, Yongping, Zhida Wang, Hengyi Li, and Jintong Wei. "Sa-SNN: spiking attention neural network for image classification." PeerJ Computer Science 10 (November 25, 2024): e2549. http://dx.doi.org/10.7717/peerj-cs.2549.

Abstract:
Spiking neural networks (SNNs) are known as third-generation neural networks due to their energy efficiency and low power consumption, and they have received much attention for their biological plausibility. SNNs are closer to the way biological neural systems work, simulating the transmission of information between neurons through discrete spiking signals. Influenced by the great potential shown by the attention mechanism in convolutional neural networks, we propose a Spiking Attention Neural Network (Sa-SNN). The network includes a novel Spiking Efficient Channel Attention (SECA) module that adopts a local cross-channel interaction strategy without dimensionality reduction, implemented by one-dimensional convolution; it involves a small number of model parameters but provides a significant performance improvement for the network. The design of local inter-channel interactions through adaptive convolutional kernel sizes, rather than global dependencies, allows the network to focus more on the selection of important features, reduces the impact of redundant features, and improves the network's recognition and generalisation capabilities. To investigate the effect of this structure on the network, we conducted a series of experiments. Experimental results show that Sa-SNN can perform image classification tasks more accurately. Our network achieved 99.61%, 99.61%, 94.13%, and 99.63% on the MNIST, Fashion-MNIST, N-MNIST datasets, respectively, and Sa-SNN performed well in terms of accuracy compared with mainstream SNNs.
5

Mohamed, Siti Aisyah, Muhaini Othman, and Mohd Hafizul Afifi. "A review on data clustering using spiking neural network (SNN) models." Indonesian Journal of Electrical Engineering and Computer Science 15, no. 3 (2019): 1392. http://dx.doi.org/10.11591/ijeecs.v15.i3.pp1392-1400.

Abstract:
The recent evolution of artificial neural networks has given researchers an interest in exploring deep learning evolved by Spiking Neural Network clustering methods. Spiking Neural Network (SNN) models capture neuronal behaviour more precisely than traditional neural networks, as they incorporate the notion of time into their functioning model [1]. The aim of this paper is to review studies related to clustering problems employing Spiking Neural Network models. Even though there are many algorithms used to solve clustering problems, most of the methods are only suitable for static data and fixed windows of time series; hence, there is a need to analyse complex data types, and the potential for improvement is encouraging. This paper therefore summarizes the significant results obtained by applying SNN models in different clustering approaches. The findings could demonstrate the purpose of clustering methods using SNNs for researchers from various disciplines seeking to discover and understand complex data.
6

Fu, Qiang, and Hongbin Dong. "Breast Cancer Recognition Using Saliency-Based Spiking Neural Network." Wireless Communications and Mobile Computing 2022 (March 24, 2022): 1–17. http://dx.doi.org/10.1155/2022/8369368.

Abstract:
The spiking neural networks (SNNs) use event-driven signals to encode physical information for neural computation. SNN takes the spiking neuron as the basic unit. It modulates the process of nerve cells from receiving stimuli to firing spikes. Therefore, SNN is more biologically plausible. Although the SNN has more characteristics of biological neurons, SNN is rarely used for medical image recognition due to its poor performance. In this paper, a reservoir spiking neural network is used for breast cancer image recognition. Due to the difficulties of extracting the lesion features in medical images, a salient feature extraction method is used in image recognition. The salient feature extraction network is composed of spiking convolution layers, which can effectively extract the features of lesions. Two temporal encoding manners, namely, linear time encoding and entropy-based time encoding methods, are used to encode the input patterns. Readout neurons use the ReSuMe algorithm for training, and the Fruit Fly Optimization Algorithm (FOA) is employed to optimize the network architecture to further improve the reservoir SNN performance. Three modality datasets are used to verify the effectiveness of the proposed method. The results show an accuracy of 97.44% for the BreastMNIST database. The classification accuracy is 98.27% on the mini-MIAS database. And the overall accuracy is 95.83% for the BreaKHis database by using the saliency feature extraction, entropy-based time encoding, and network optimization.
7

Zhang, Hong, and Yu Zhang. "Memory-Efficient Reversible Spiking Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 15 (2024): 16759–67. http://dx.doi.org/10.1609/aaai.v38i15.29616.

Abstract:
Spiking neural networks (SNNs) are potential competitors to artificial neural networks (ANNs) due to their high energy efficiency on neuromorphic hardware. However, SNNs are unfolded over simulation time steps during the training process, so they require much more memory than ANNs, which impedes the training of deeper SNN models. In this paper, we propose the reversible spiking neural network to reduce the memory cost of intermediate activations and membrane potentials during training. First, we extend the reversible architecture along the temporal dimension and propose the reversible spiking block, which can reconstruct the computational graph and recompute all intermediate variables of the forward pass with a reverse process. On this basis, we adapt state-of-the-art SNN models into reversible variants, namely the reversible spiking ResNet (RevSResNet) and the reversible spiking transformer (RevSFormer). Through experiments on static and neuromorphic datasets, we demonstrate that the memory cost per image of our reversible SNNs does not increase with network depth. On the CIFAR10 and CIFAR100 datasets, our RevSResNet37 and RevSFormer-4-384 achieve comparable accuracies and consume 3.79x and 3.00x less GPU memory per image than their counterparts with roughly identical model complexity and parameters. We believe this work can relax the memory constraints in SNN training and pave the way for training extremely large and deep SNNs.
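The core idea of a reversible block is additive coupling: the outputs fully determine the inputs, so intermediate activations can be recomputed instead of stored. A minimal sketch with placeholder branch functions `F` and `G` (the paper's blocks are spiking sub-networks, not these toy functions):

```python
import numpy as np

def F(x):  # placeholder for the first branch (e.g., a spiking conv path)
    return np.tanh(x)

def G(x):  # placeholder for the second branch
    return 0.5 * x

def reversible_forward(x1, x2):
    """Additive coupling: (x1, x2) -> (y1, y2)."""
    y1 = x1 + F(x2)
    y2 = x2 + G(y1)
    return y1, y2

def reversible_inverse(y1, y2):
    """Reconstruct the inputs from the outputs exactly, so no
    intermediate activations need to be kept in memory."""
    x2 = y2 - G(y1)
    x1 = y1 - F(x2)
    return x1, x2
```

Because the inverse reruns the same `F` and `G` on recomputed values, memory per image stays constant in depth at the cost of extra forward computation.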
8

A, Mohamed Sikkander, R. RamaNachiar, and Yasmeen Khadeeja. "Spiking Neural Network (SNN) Using to Detect Breast Cancer." International Journal of Scientific Research and Innovative Studies 1, no. 1 (2022): 20–22. https://doi.org/10.5281/zenodo.6877740.

Abstract:
Spiking neural networks (SNNs) use event-driven signals to encode physical information for neural computation. The SNN takes the spiking neuron as its basic unit, modelling the process of nerve cells from receiving stimuli to firing spikes; it is therefore more biologically plausible. Although the SNN has more characteristics of biological neurons, it is rarely used for medical image recognition due to its poor performance. In this paper, a reservoir spiking neural network is employed for breast cancer image recognition. Because of the difficulty of extracting lesion features in medical images, a salient feature extraction method is used in image recognition. The salient feature extraction network consists of spiking convolution layers, which can effectively extract the features of lesions. Two temporal encoding manners, namely linear time encoding and entropy-based time encoding, are used to encode the input patterns. Readout neurons use the ReSuMe algorithm for training, and the Fruit Fly Optimization Algorithm (FOA) is employed to optimize the network architecture to further improve the reservoir SNN's performance. Three modality datasets are used to verify the effectiveness of the proposed method. The results show an accuracy of 97.44% on the BreastMNIST database. The classification accuracy is 98.27% on the mini-MIAS database, and the overall accuracy is 95.83% on the BreaKHis database using the saliency feature extraction, entropy-based time encoding, and network optimization.
9

Mo, Lingfei, and Minghao Wang. "LogicSNN: A Unified Spiking Neural Networks Logical Operation Paradigm." Electronics 10, no. 17 (2021): 2123. http://dx.doi.org/10.3390/electronics10172123.

Abstract:
LogicSNN, a unified spiking neural network (SNN) logical operation paradigm, is proposed in this paper. First, we define logical variables under the semantics of SNNs. Then, we design the network structure of this paradigm and use spike-timing-dependent plasticity for training. According to this paradigm, six kinds of basic SNN binary logical operation modules and three kinds of combined logical networks based on these basic modules are implemented. Through these experiments, the rationality, cascading characteristics, and potential for building large-scale networks of this paradigm are verified. This study fills a gap in the logical operation of SNNs and provides a possible way to realize more complex machine learning capabilities.
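To see why spiking neurons can express binary logic at all, consider a single integrate-and-fire step over two input spikes: with unit weights, a firing threshold of 2 realizes AND and a threshold of 1 realizes OR. This is a hand-set simplification for intuition only; the paper's modules are trained with STDP rather than wired by hand:

```python
def spiking_gate(spike_a, spike_b, threshold, weights=(1.0, 1.0)):
    """One integrate-and-fire step: the neuron fires (returns 1) iff
    the weighted sum of input spikes reaches the threshold."""
    potential = weights[0] * spike_a + weights[1] * spike_b
    return 1 if potential >= threshold else 0
```

Cascading such modules, as the paper does for its combined logical networks, follows from feeding one gate's output spike into the next gate's input.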
10

Shiltagh, Nadia Adnan, and Hasnaa Ahmed Abas. "Spiking Neural Network in Precision Agriculture." Journal of Engineering 21, no. 7 (2015): 17–34. http://dx.doi.org/10.31026/j.eng.2015.07.02.

Abstract:
In this paper, a precision agriculture system based on a Wireless Sensor Network (WSN) is introduced. Soil moisture is considered one of the environmental factors that affect crops, so the irrigation period must be monitored. A neural network is capable of learning the behavior of agricultural soil in the absence of a mathematical model. This paper uses a modified type of neural network known as the Spiking Neural Network (SNN). In this work, the precision agriculture system is modeled with two SNNs, identified off-line from logged data: one represents the monitor located at the sink, where the irrigation period is calculated, and the other represents the soil. In addition, to reduce the power consumption of sensor nodes, the Modified Chain-Cluster based Mixed (MCCM) routing algorithm is used. According to MCCM, the sensors send to the sink only packets whose readings are below the threshold moisture level. The SNN with the Modified Spike-Prop (MSP) training algorithm is capable of identifying the soil and irrigation periods and of monitoring the soil moisture level; that is, the SNN can act as both an identifier and a monitor. By applying this system, the particular agricultural area reaches the desired moisture level.
 
11

Yang, Geunbo, Wongyu Lee, Youjung Seo, et al. "Unsupervised Spiking Neural Network with Dynamic Learning of Inhibitory Neurons." Sensors 23, no. 16 (2023): 7232. http://dx.doi.org/10.3390/s23167232.

Abstract:
A spiking neural network (SNN) is a type of artificial neural network that operates based on discrete spikes to process timing information, similar to the manner in which the human brain processes real-world problems. In this paper, we propose a new spiking neural network (SNN) based on conventional, biologically plausible paradigms, such as the leaky integrate-and-fire model, spike timing-dependent plasticity, and the adaptive spiking threshold, by suggesting new biological models; that is, dynamic inhibition weight change, a synaptic wiring method, and Bayesian inference. The proposed network is designed for image recognition tasks, which are frequently used to evaluate the performance of conventional deep neural networks. To manifest the bio-realistic neural architecture, the learning is unsupervised, and the inhibition weight is dynamically changed; this, in turn, affects the synaptic wiring method based on Hebbian learning and the neuronal population. In the inference phase, Bayesian inference successfully classifies the input digits by counting the spikes from the responding neurons. The experimental results demonstrate that the proposed biological model ensures a performance improvement compared with other biologically plausible SNN models.
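The leaky integrate-and-fire dynamics with an adaptive spiking threshold that this abstract builds on can be sketched in a few lines. All parameter values here are illustrative defaults, not the paper's, and `lif_step` is a hypothetical helper name:

```python
import numpy as np

def lif_step(v, theta, i_in, tau_v=20.0, tau_theta=100.0,
             v_rest=0.0, theta_base=1.0, theta_inc=0.05, dt=1.0):
    """One step of a leaky integrate-and-fire neuron with an adaptive
    threshold: the membrane leaks toward rest while integrating input;
    each spike resets the membrane and bumps the threshold, which then
    decays back toward its base value (a homeostatic mechanism)."""
    v = v + dt * ((v_rest - v) / tau_v + i_in)
    spike = v >= theta
    if spike:
        v = v_rest              # hard reset after firing
        theta = theta + theta_inc
    theta = theta + dt * (theta_base - theta) / tau_theta
    return v, theta, spike
```

The adaptive threshold is what discourages any single neuron from dominating, which matters for the unsupervised, competition-based learning the paper describes.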
12

Wang, Junyi. "A Review of Spiking Neural Networks." SHS Web of Conferences 144 (2022): 03004. http://dx.doi.org/10.1051/shsconf/202214403004.

Abstract:
The spiking neural network (SNN) attracts much attention from researchers in neuromorphic engineering and brain-like computing because of its advantages in spatio-temporal dynamics, diverse coding mechanisms, and event-driven properties. This paper is a review of SNNs intended to help researchers from other areas become familiar with, and perhaps interested in, the field. Neuron models, coding methods, training algorithms, and neuromorphic computing platforms are introduced. The paper analyzes the advantages and disadvantages of several kinds of neuron models, coding methods, learning algorithms, and neuromorphic computing platforms, and accordingly proposes some expected developments, such as improving the balance between bio-mimicry and computing cost for neuron models, compound coding methods, unsupervised learning algorithms for SNNs, and digital-analog computing platforms.
13

Stasenko, Sergey V., and Victor B. Kazantsev. "Dynamic Image Representation in a Spiking Neural Network Supplied by Astrocytes." Mathematics 11, no. 3 (2023): 561. http://dx.doi.org/10.3390/math11030561.

Abstract:
The mathematical model of the spiking neural network (SNN) supplied by astrocytes is investigated. The astrocytes are a specific type of brain cells which are not electrically excitable but induce chemical modulations of neuronal firing. We analyze how the astrocytes influence images encoded in the form of the dynamic spiking pattern of the SNN. Serving at a much slower time scale, the astrocytic network interacting with the spiking neurons can remarkably enhance the image representation quality. The spiking dynamics are affected by noise distorting the information image. We demonstrate that the activation of astrocytes can significantly suppress noise influence, improving the dynamic image representation by the SNN.
14

Su, Jing, and Jing Li. "HF-SNN: High-Frequency Spiking Neural Network." IEEE Access 9 (2021): 51950–57. http://dx.doi.org/10.1109/access.2021.3068159.

15

Islam, Riadul, Patrick Majurski, Jun Kwon, Anurag Sharma, and Sri Ranga Sai Krishna Tummala. "Benchmarking Artificial Neural Network Architectures for High-Performance Spiking Neural Networks." Sensors 24, no. 4 (2024): 1329. http://dx.doi.org/10.3390/s24041329.

Abstract:
Organizations managing high-performance computing systems face a multitude of challenges, including overarching concerns such as overall energy consumption, microprocessor clock frequency limitations, and the escalating costs associated with chip production. Evidently, processor speeds have plateaued over the last decade, persisting within the range of 2 GHz to 5 GHz. Scholars assert that brain-inspired computing holds substantial promise for mitigating these challenges. The spiking neural network (SNN) particularly stands out for its commendable power efficiency when juxtaposed with conventional design paradigms. Nevertheless, our scrutiny has brought to light several pivotal challenges impeding the seamless implementation of large-scale neural networks (NNs) on silicon. These challenges encompass the absence of automated tools, the need for multifaceted domain expertise, and the inadequacy of existing algorithms to efficiently partition and place extensive SNN computations onto hardware infrastructure. In this paper, we posit the development of an automated tool flow capable of transmuting any NN into an SNN. This undertaking involves the creation of a novel graph-partitioning algorithm designed to strategically place SNNs on a network-on-chip (NoC), thereby paving the way for future energy-efficient and high-performance computing paradigms. The presented methodology showcases its effectiveness by successfully transforming ANN architectures into SNNs with a marginal average error penalty of merely 2.65%. The proposed graph-partitioning algorithm enables a 14.22% decrease in inter-synaptic communication and an 87.58% reduction in intra-synaptic communication, on average, underscoring the effectiveness of the proposed algorithm in optimizing NN communication pathways. Compared to a baseline graph-partitioning algorithm, the proposed approach exhibits an average decrease of 79.74% in latency and a 14.67% reduction in energy consumption. 
Using existing NoC tools, the energy-latency product of SNN architectures is, on average, 82.71% lower than that of the baseline architectures.
16

Andreeva, N. V., E. A. Ryndin, and I. A. Mavrin. "Approaches to Optimization of Spiking Neural Network Hardware Implementation." Nano- i Mikrosistemnaya Tehnika 26, no. 3 (2024): 117–30. http://dx.doi.org/10.17587/nmst.26.117-130.

Abstract:
The neuromorphic approach to hardware implementation of neural networks is usually considered separately from in-memory computing. Here we mean the hardware execution of spiking neural networks (SNNs), which are the most biologically realistic compared to deep neural network algorithms. The operating principle of an SNN is that information propagates in the form of spikes (voltage pulses), and the network weights (in training mode) are updated using the time delays in the spike sequences according to the spike-timing-dependent plasticity (STDP) rule. This results in asynchronous operation of the networks, which significantly improves their energy efficiency. In this article, optimization approaches for the architecture of SNN classifiers are proposed, aimed at improving the performance of neuromorphic devices based on memristive crossbars and at minimizing the training cost. The integration of a spiking convolutional layer and the implementation of dendritic computing principles in analog feedforward SNN architectures are considered. Both approaches are shown to significantly simplify network training. To evaluate the performance, analog circuits of the optimized SNN architectures were modeled and designed. The functionality of one of the architectures under study is demonstrated on an experimental prototype implemented with commercial electronic components. The cost of processing a data packet by the developed SNNs was estimated. The obtained results indicate that modifying SNN architectures using the principles of dendritic computing not only significantly reduces the consumption of synaptic and neuronal resources in the hardware design, but also reduces training costs.
The research performed at the Saint Petersburg Electrotechnical University was funded by the grant FSEE-2020-0013 of the Ministry of Science and Higher Education of the Russian Federation.
17

Shahsavari, Mahyar, Jonathan Beaumont, David Thomas, and Andrew D. Brown. "POETS: A Parallel Cluster Architecture for Spiking Neural Network." International Journal of Machine Learning and Computing 11, no. 4 (2021): 281–85. http://dx.doi.org/10.18178/ijmlc.2021.11.4.1048.

Abstract:
Spiking Neural Networks (SNNs) are known as a branch of neuromorphic computing and are currently used in neuroscience applications to understand and model the biological brain. SNNs could also potentially be used in many other application domains such as classification, pattern recognition, and autonomous control. This work presents a highly-scalable hardware platform called POETS, and uses it to implement SNN on a very large number of parallel and reconfigurable FPGA-based processors. The current system consists of 48 FPGAs, providing 3072 processing cores and 49152 threads. We use this hardware to implement up to four million neurons with one thousand synapses. Comparison to other similar platforms shows that the current POETS system is twenty times faster than the Brian simulator, and at least two times faster than SpiNNaker.
18

Cılasun, Hüsrev, Salonik Resch, Zamshed I. Chowdhury, et al. "Spiking Neural Networks in Spintronic Computational RAM." ACM Transactions on Architecture and Code Optimization 18, no. 4 (2021): 1–21. http://dx.doi.org/10.1145/3475963.

Abstract:
Spiking Neural Networks (SNNs) represent a biologically inspired computation model capable of emulating neural computation in the human brain and brain-like structures. The main promise is very low energy consumption. Classic von Neumann architecture based SNN accelerators in hardware, however, often fall short of addressing demanding computation and data transfer requirements efficiently at scale. In this article, we propose a promising alternative to overcome scalability limitations, based on a network of in-memory SNN accelerators, which can reduce the energy consumption by up to 150.25× when compared to a representative ASIC solution. The significant reduction in energy comes from two key aspects of the hardware design to minimize data communication overheads: (1) each node represents an in-memory SNN accelerator based on a spintronic Computational RAM array, and (2) a novel De Bruijn graph based architecture establishes the SNN array connectivity.
19

Soupizet, Thomas, Zalfa Jouni, Siqi Wang, Aziz Benlarbi-Delai, and Pietro M. Ferreira. "Analog Spiking Neural Network Synthesis for the MNIST." Journal of Integrated Circuits and Systems 18, no. 1 (2023): 1–12. http://dx.doi.org/10.29292/jics.v18i1.663.

Abstract:
Unlike a classical artificial neural network, which processes digital data, the spiking neural network (SNN) processes spike trains. Its event-driven property helps capture the rich dynamics that neurons have within the brain, and the sparsity of collected spikes helps reduce computational power. A novel synthesis framework is proposed, and an algorithm is detailed to guide designers toward deep learning and energy-efficient analog SNNs using MNIST. An analog SNN composed of 86 electronic neurons (eNeurons) and 1238 synapses interacting through two hidden layers is illustrated. Three different eNeuron implementation models are tested: (Leaky) Integrate-and-Fire (LIF), simplified (simp.) Morris–Lecar (ML), and biomimetic (bio.) ML. The proposed SNN, coupling deep learning and ultra-low power, is trained using a common machine learning system (TensorFlow) for MNIST. LIF eNeuron implementations present some limitations and weaknesses in terms of dynamic range. Both ML eNeurons achieve robust accuracy of approximately 0.82.
20

Wang, Manman, Yuhai Yuan, and Yanfeng Jiang. "Realization of Artificial Neurons and Synapses Based on STDP Designed by an MTJ Device." Micromachines 14, no. 10 (2023): 1820. http://dx.doi.org/10.3390/mi14101820.

Abstract:
As the third-generation neural network, the spiking neural network (SNN) has become one of the most promising neuromorphic computing paradigms for mimicking brain neural networks over the past decade. The SNN shows many advantages in performing classification and recognition tasks in the artificial intelligence field. In the SNN, the communication between the pre-synaptic neuron (PRE) and the post-synaptic neuron (POST) is conducted by the synapse. The corresponding synaptic weights depend on the spiking patterns of both the PRE and the POST, and are updated by spike-timing-dependent plasticity (STDP) rules. The emergence and growing maturity of spintronic devices present a new approach for constructing the SNN. In this paper, a novel SNN is proposed in which both the synapse and the neuron are mimicked with the spin transfer torque magnetic tunnel junction (STT-MTJ) device. The synaptic weight is represented by the conductance of the MTJ device. The mapping of the probabilistic spiking nature of the neuron to the stochastic switching behavior of the MTJ with thermal noise is presented based on the stochastic Landau–Lifshitz–Gilbert (LLG) equation. In this way, a simplified SNN is mimicked with the MTJ device. The function of the mimicked SNN is verified by a handwritten digit recognition task based on the MNIST database.
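The STDP rule that several of these entries rely on has a standard pair-based form: the weight change depends on the time difference between post- and pre-synaptic spikes. A minimal sketch with illustrative constants (not the values used in any of the cited papers):

```python
import math

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP weight change for delta_t = t_post - t_pre (ms).
    Pre-before-post (delta_t > 0) potentiates the synapse;
    post-before-pre (delta_t < 0) depresses it, both with
    exponentially decaying magnitude in |delta_t|."""
    if delta_t > 0:
        return a_plus * math.exp(-delta_t / tau)
    return -a_minus * math.exp(delta_t / tau)
```

In the MTJ setting of this entry, such a weight change would be realized as a conductance update of the junction rather than a stored number.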
APA, Harvard, Vancouver, ISO, and other styles
21

Karimah, Hasna Nur, Chankyu Lee, and Yeongkyo Seo. "Batchnorm-Free Binarized Deep Spiking Neural Network for a Lightweight Machine Learning Model." Electronics 14, no. 8 (2025): 1602. https://doi.org/10.3390/electronics14081602.

Full text
Abstract:
The development of deep neural networks, although demonstrating astounding capabilities, leads to more complex models, high energy consumption, and expensive hardware costs. While network quantization is a widely used method to address this problem, typical binary neural networks often require the batch normalization (batchnorm) layer to preserve their classification performance. The batchnorm layer contains full-precision multiplication and addition operations that require extra hardware and memory access. To address this issue, we present a batch normalization-free binarized deep spiking neural network (B-SNN). We combine spike-based backpropagation in a spiking neural network with weight binarization to further reduce the memory and computation overhead while maintaining comparable accuracy. Weight binarization reduces the huge amount of memory storage for a large number of parameters by replacing the full-precision weights (32 bit) with binary weights (1 bit). Moreover, the proposed B-SNN employs a stochastic input encoding scheme together with a spiking neuron model, thereby enabling networks to perform efficient bitwise computations without the necessity of using a batchnorm layer. As a result, our experimental results demonstrate that the proposed binarization scheme on deep SNNs outperforms the conventional binarized convolutional neural network.
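The weight binarization step described above can be sketched as the standard sign-based 1-bit scheme; this is a generic illustration, not the paper's exact training procedure (which typically keeps full-precision shadow weights for the gradient update):

```python
def binarize(weights):
    """Replace full-precision weights with their sign (+1.0 / -1.0),
    the standard 1-bit binarization scheme. Generic sketch; the cited
    B-SNN combines this with spike-based backpropagation."""
    return [1.0 if w >= 0 else -1.0 for w in weights]

# A binarized layer then needs only sign flips and additions
# instead of full-precision multiply-accumulate operations.
binary_layer = binarize([0.31, -0.72, 0.05, -0.01])
```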
APA, Harvard, Vancouver, ISO, and other styles
22

Wang, Qingyu, Tielin Zhang, Minglun Han, Yi Wang, Duzhen Zhang, and Bo Xu. "Complex Dynamic Neurons Improved Spiking Transformer Network for Efficient Automatic Speech Recognition." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 1 (2023): 102–9. http://dx.doi.org/10.1609/aaai.v37i1.25081.

Full text
Abstract:
The spiking neural network (SNN) using leaky integrate-and-fire (LIF) neurons has been commonly used in automatic speech recognition (ASR) tasks. However, the LIF neuron is still relatively simple compared to that in the biological brain. Further research on more types of neurons with different scales of neuronal dynamics is necessary. Here we introduce four types of neuronal dynamics to post-process the sequential patterns generated from the spiking transformer to get the complex dynamic neuron improved spiking transformer neural network (DyTr-SNN). We found that the DyTr-SNN could handle the non-toy automatic speech recognition task well, achieving a lower phoneme error rate, lower computational cost, and higher robustness. These results indicate that the further cooperation of SNNs and neural dynamics at the neuron and network scales might have much in store for the future, especially on ASR tasks.
APA, Harvard, Vancouver, ISO, and other styles
23

Jiang, Wenwu, Jie Li, Hongbo Liu, et al. "Memristor-based multi-synaptic spiking neuron circuit for spiking neural network." Chinese Physics B 31, no. 4 (2022): 040702. http://dx.doi.org/10.1088/1674-1056/ac380b.

Full text
Abstract:
Spiking neural networks (SNNs) are widely used in many fields because they work closer to biological neurons. However, due to their computational complexity, many SNN implementations are limited to computer programs. First, this paper proposes a multi-synaptic circuit (MSC) based on a memristor, which realizes the multi-synapse connection between neurons and the multi-delay transmission of pulse signals. The synapse circuit participates in the calculation of the network while transmitting the pulse signal, completing in hardware the complex calculations otherwise done in software. Secondly, a new spiking neuron circuit based on the leaky integrate-and-fire (LIF) model is designed in this paper. The amplitude and width of the pulse emitted by the spiking neuron circuit can be adjusted as required. The combination of the spiking neuron circuit and the MSC forms the multi-synaptic spiking neuron (MSSN). The MSSN was simulated in PSPICE and the expected result was obtained, which verified the feasibility of the circuit. Finally, a small SNN was designed based on the mathematical model of the MSSN. After the SNN is trained and optimized, it obtains good accuracy in the classification of the Iris dataset, which verifies the practicability of the design in the network.
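The LIF model that this neuron circuit implements can be sketched in discrete time; the threshold, reset, and leak values below are illustrative, since the cited paper realizes the model as an analog circuit rather than in software:

```python
def lif_simulate(input_current, v_th=1.0, v_reset=0.0, leak=0.9):
    """Simulate a leaky integrate-and-fire neuron over discrete time steps.

    Parameters are illustrative placeholders. Returns a binary spike
    train of the same length as the input.
    """
    v = 0.0
    spikes = []
    for i in input_current:
        v = leak * v + i       # leaky integration of the input current
        if v >= v_th:          # threshold crossing emits a spike
            spikes.append(1)
            v = v_reset        # hard reset after firing
        else:
            spikes.append(0)
    return spikes
```

For example, a constant sub-threshold input produces a periodic spike train once the membrane potential accumulates past the threshold.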
APA, Harvard, Vancouver, ISO, and other styles
24

Pawar, Anuradha, and Nidhi Tiwari. "A Novel Approach of DDOS Attack Classification with Genetic Algorithm-optimized Spiking Neural Network." International Journal of Computer Network and Information Security 16, no. 2 (2024): 103–16. http://dx.doi.org/10.5815/ijcnis.2024.02.09.

Full text
Abstract:
Spiking Neural Networks (SNNs) use spiking neurons that transmit information through discrete spikes, similar to the way biological neurons communicate through action potentials. This unique property of SNNs makes them suitable for applications that require real-time processing and low power consumption. This paper proposes a new method for detecting DDoS attacks using a spiking neural network (SNN) with a distance-based rate coding mechanism, optimizing the SNN with a genetic algorithm (GA). The proposed GA-SNN approach achieved a remarkable accuracy rate of 99.98% in detecting DDoS attacks, outperforming existing state-of-the-art methods. The GA optimization helps to overcome the challenges of setting the initial weights and biases in the SNN, and the distance-based rate coding mechanism enhances the accuracy of the SNN in detecting DDoS attacks. Additionally, the proposed approach is designed to be computationally efficient, which is essential for practical implementation in real-time systems. Overall, the proposed GA-SNN approach is a promising solution for accurate and efficient detection of DDoS attacks in network security applications.
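Rate coding, the general family the paper's encoding belongs to, can be sketched as a Poisson-style spike generator; the distance-based variant the authors use is not reproduced here, only the generic proportional-rate idea:

```python
import random

def rate_encode(x, n_steps=100, seed=0):
    """Encode a normalized feature x in [0, 1] as a binary spike train
    whose firing rate is proportional to x (Poisson-style rate coding).

    Generic sketch; the cited paper uses a distance-based variant whose
    exact form is not given in the abstract.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    return [1 if rng.random() < x else 0 for _ in range(n_steps)]
```

A feature value of 0.8 thus yields roughly 80 spikes per 100 time steps, while 0.0 yields a silent train.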
APA, Harvard, Vancouver, ISO, and other styles
25

Losh, Michael, and Daniel Llamocca. "A Low-Power Spike-Like Neural Network Design." Electronics 8, no. 12 (2019): 1479. http://dx.doi.org/10.3390/electronics8121479.

Full text
Abstract:
Modern massively-parallel Graphics Processing Units (GPUs) and Machine Learning (ML) frameworks enable neural network implementations of unprecedented performance and sophistication. However, state-of-the-art GPU hardware platforms are extremely power-hungry, while microprocessors cannot meet the performance requirements. Biologically-inspired Spiking Neural Networks (SNN) have inherent characteristics that lead to lower power consumption. We thus present a bit-serial SNN-like hardware architecture. By using counters, comparators, and an indexing scheme, the design effectively implements the sum-of-products inherent in neurons. In addition, we experimented with various strength-reduction methods to lower neural network resource usage. The proposed Spiking Hybrid Network (SHiNe), validated on an FPGA, has been found to achieve reasonable performance with low resource utilization, with some trade-off with respect to hardware throughput and signal representation.
APA, Harvard, Vancouver, ISO, and other styles
26

Liu, Yuqian, Chujie Zhao, Yizhou Jiang, Ying Fang, and Feng Chen. "LDD: High-Precision Training of Deep Spiking Neural Network Transformers Guided by an Artificial Neural Network." Biomimetics 9, no. 7 (2024): 413. http://dx.doi.org/10.3390/biomimetics9070413.

Full text
Abstract:
The rise of large-scale Transformers has led to challenges regarding computational costs and energy consumption. In this context, spiking neural networks (SNNs) offer potential solutions due to their energy efficiency and processing speed. However, the inaccuracy of surrogate gradients and feature space quantization pose challenges for directly training deep SNN Transformers. To tackle these challenges, we propose a method (called LDD) to align ANN and SNN features across different abstraction levels in a Transformer network. LDD incorporates structured feature knowledge from ANNs to guide SNN training, ensuring the preservation of crucial information and addressing inaccuracies in surrogate gradients through designing layer-wise distillation losses. The proposed approach outperforms existing methods on the CIFAR10 (96.1%), CIFAR100 (82.3%), and ImageNet (80.9%) datasets, and enables training of the deepest SNN Transformer network using ImageNet.
APA, Harvard, Vancouver, ISO, and other styles
27

Bensimon, Moshe, Shlomo Greenberg, and Moshe Haiut. "Using a Low-Power Spiking Continuous Time Neuron (SCTN) for Sound Signal Processing." Sensors 21, no. 4 (2021): 1065. http://dx.doi.org/10.3390/s21041065.

Full text
Abstract:
This work presents a new approach based on a spiking neural network for sound preprocessing and classification. The proposed approach is biologically inspired, using spiking neurons that mimic the characteristics of biological neurons and a Spike-Timing-Dependent Plasticity (STDP)-based learning rule. We propose a biologically plausible sound classification framework that uses a Spiking Neural Network (SNN) for detecting the embedded frequencies contained within an acoustic signal. This work also demonstrates an efficient hardware implementation of the SNN based on the low-power Spike Continuous Time Neuron (SCTN). The proposed sound classification framework suggests direct Pulse Density Modulation (PDM) interfacing of the acoustic sensor with the SCTN-based network, avoiding the usage of costly digital-to-analog conversions. This paper presents a new connectivity approach applied to Spiking Neuron (SN)-based neural networks. We suggest considering the SCTN as a basic building block in the design of programmable analog electronic circuits. Usually, a neuron is used as a repeated modular element in any neural network structure, and the connectivity between the neurons located at different layers is well defined, thus generating a modular neural network structure composed of several layers with full or partial connectivity. The proposed approach suggests controlling the behavior of the spiking neurons and applying smart connectivity to enable the design of simple analog circuits based on SNNs. Unlike existing NN-based solutions, for which the preprocessing phase is carried out using analog circuits and analog-to-digital conversion, we suggest integrating the preprocessing phase into the network. This approach allows referring to the basic SCTN as an analog module, enabling the design of simple analog circuits based on SNNs with unique inter-connections between the neurons.
The efficiency of the proposed approach is demonstrated by implementing SCTN-based resonators for sound feature extraction and classification. The proposed SCTN-based sound classification approach demonstrates a classification accuracy of 98.73% using the Real-World Computing Partnership (RWCP) database.
APA, Harvard, Vancouver, ISO, and other styles
28

Liu, Shuang, Guangyao Wang, Tianshuo Bai, et al. "Magnetic Skyrmion-Based Spiking Neural Network for Pattern Recognition." Applied Sciences 12, no. 19 (2022): 9698. http://dx.doi.org/10.3390/app12199698.

Full text
Abstract:
Spiking neural network (SNN) has emerged as one of the most powerful brain-inspired computing paradigms in complex pattern recognition tasks that can be enabled by neuromorphic hardware. However, owing to the fundamental architecture mismatch between biological and Boolean logic, CMOS implementation of SNN is energy inefficient. A low-power approach with novel "neuro-mimetic" devices offering a direct mapping to synaptic and neuronal functionalities is still an open area. In this paper, an SNN constructed with a novel magnetic skyrmion-based leaky-integrate-fire (LIF) spiking neuron and a skyrmionic synapse crossbar is proposed. We perform a systematic device-circuit-architecture co-design for pattern recognition to evaluate the feasibility of our proposal. The simulation results demonstrated that our device has a superior, lower switching voltage and high energy efficiency, with programming energy two times lower than that of CMOS devices. This work paves a novel pathway for low-power hardware design using a full-skyrmion SNN architecture, as well as promising avenues for implementing neuromorphic computing schemes.
APA, Harvard, Vancouver, ISO, and other styles
29

Aoun, Mario Antoine. "A STDP Rule that Favours Chaotic Spiking over Regular Spiking of Neurons." International Journal of Artificial Intelligence & Applications 12, no. 03 (2021): 25–33. http://dx.doi.org/10.5121/ijaia.2021.12303.

Full text
Abstract:
We compare the number of states of a Spiking Neural Network (SNN) composed of chaotic spiking neurons versus the number of states of an SNN composed of regular spiking neurons, while both SNNs implement a Spike Timing Dependent Plasticity (STDP) rule that we created. We find that this STDP rule favors chaotic spiking, since the number of states is larger in the chaotic SNN than in the regular SNN. This chaotic favorability is not general; it is exclusive to this STDP rule. This research falls under our long-term investigation of STDP and chaos theory.
APA, Harvard, Vancouver, ISO, and other styles
30

Tao, Yingzhi, and Qiaoyun Wu. "Spiking PointCNN: An Efficient Converted Spiking Neural Network under a Flexible Framework." Electronics 13, no. 18 (2024): 3626. http://dx.doi.org/10.3390/electronics13183626.

Full text
Abstract:
Spiking neural networks (SNNs) are generating wide attention due to their brain-like simulation capabilities and low energy consumption. Converting artificial neural networks (ANNs) to SNNs provides great advantages, combining the high accuracy of ANNs with the robustness and energy efficiency of SNNs. Existing point-cloud-processing SNNs have two issues to be solved: first, they lack a specialized surrogate gradient function; second, they are not robust enough to process a real-world dataset. In this work, we present a high-accuracy converted SNN for 3D point cloud processing. Specifically, we first revise and redesign the Spiking X-Convolution module based on the X-transformation. To address the problem of the non-differentiable activation function arising from the binary signal of spiking neurons, we propose an effective adjustable surrogate gradient function, which can fit various models well by tuning the parameters. Additionally, we introduce a versatile ANN-to-SNN conversion framework enabling modular transformations. Based on this framework and the spiking X-Convolution module, we design the Spiking PointCNN, a highly efficient converted SNN for processing 3D point clouds. We conduct experiments on the public 3D point cloud datasets ModelNet40 and ScanObjectNN, on which our proposed model achieves excellent accuracy. Code will be available on GitHub.
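An adjustable surrogate gradient of the kind described above can be sketched as a sigmoid-shaped pseudo-derivative around the firing threshold; the functional form and the sharpness parameter below are illustrative assumptions, not the paper's exact function:

```python
import math

def surrogate_grad(v, v_th=1.0, alpha=2.0):
    """Sigmoid-shaped surrogate gradient for the non-differentiable
    spike function. The gradient peaks at the threshold v_th and
    decays on either side; alpha tunes the sharpness.
    Illustrative form, not the exact function from the cited paper."""
    s = 1.0 / (1.0 + math.exp(-alpha * (v - v_th)))
    return alpha * s * (1.0 - s)
```

During backpropagation this pseudo-derivative replaces the true (zero-almost-everywhere) derivative of the spike step function, so gradients can flow through spiking layers.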
APA, Harvard, Vancouver, ISO, and other styles
31

Nguyen, Quang Anh Pham, Philipp Andelfinger, Wen Jun Tan, Wentong Cai, and Alois Knoll. "Transitioning Spiking Neural Network Simulators to Heterogeneous Hardware." ACM Transactions on Modeling and Computer Simulation 31, no. 2 (2021): 1–26. http://dx.doi.org/10.1145/3422389.

Full text
Abstract:
Spiking neural networks (SNN) are among the most computationally intensive types of simulation models, with node counts on the order of up to 10^11. Currently, there is intensive research into hardware platforms suitable to support large-scale SNN simulations, whereas several of the most widely used simulators still rely purely on execution on CPUs. Enabling the execution of these established simulators on heterogeneous hardware allows new studies to exploit the many-core hardware prevalent in modern supercomputing environments, while still being able to reproduce and compare with results from a vast body of existing literature. In this article, we propose a transition approach for CPU-based SNN simulators to enable execution on heterogeneous hardware (e.g., CPUs, GPUs, and FPGAs), with only limited modifications to an existing simulator code base and without changes to model code. Our approach relies on manual porting of a small number of core simulator functionalities as found in common SNN simulators, whereas the unmodified model code is analyzed and transformed automatically. We apply our approach to the well-known simulator NEST and make a version executable on heterogeneous hardware available to the community. Our measurements show that at full utilization, a single GPU achieves the performance of about 9 CPU cores. A CPU-GPU co-execution with load balancing is also demonstrated, which shows better performance compared to CPU-only or GPU-only execution. Finally, an analytical performance model is proposed to heuristically determine the optimal parameters to execute the heterogeneous NEST.
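The load-balancing idea behind the CPU-GPU co-execution can be sketched with a simple proportional-throughput split based on the article's measurement that one GPU matches roughly 9 CPU cores; the real analytical performance model for NEST is more detailed than this heuristic:

```python
def co_execution_split(n_cpu_cores, gpu_equiv_cores=9.0):
    """Fraction of the workload (e.g., neurons) to place on the GPU for a
    balanced CPU-GPU co-execution, assuming throughput proportional to
    equivalent core count. gpu_equiv_cores=9 reflects the article's
    measurement that a single GPU matches about 9 CPU cores; the
    function itself is a simplified heuristic, not the paper's model."""
    return gpu_equiv_cores / (gpu_equiv_cores + n_cpu_cores)
```

With 9 CPU cores available, the heuristic assigns half of the workload to the GPU; with fewer cores, the GPU's share grows accordingly.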
APA, Harvard, Vancouver, ISO, and other styles
32

Liu, Yang, Meng Tian, Ruijia Liu, et al. "Spike-Based Approximate Backpropagation Algorithm of Brain-Inspired Deep SNN for Sonar Target Classification." Computational Intelligence and Neuroscience 2022 (October 20, 2022): 1–11. http://dx.doi.org/10.1155/2022/1633946.

Full text
Abstract:
With the development of neuromorphic computing, more and more attention has been paid to brain-inspired spiking neural networks (SNNs) because of their ultralow energy consumption and high-performance spatiotemporal information processing. Due to the discontinuity of the spiking neuronal activation function, it is still a difficult problem to train brain-inspired deep SNNs directly, so SNNs have not yet shown performance comparable to that of artificial neural networks. For this reason, the spike-based approximate backpropagation (SABP) algorithm and a general brain-inspired SNN framework are proposed in this paper. The combination of the two can be used for end-to-end direct training of brain-inspired deep SNNs. Experiments show that compared with other spike-based methods of directly training SNNs, the classification accuracy of this method is close to the best results on the MNIST and CIFAR-10 datasets and achieves the best classification accuracy on sonar image target classification (SITC) of small-sample datasets. Further analysis shows that compared with artificial neural networks, our brain-inspired SNN has great advantages in computational complexity and energy consumption in sonar target classification.
APA, Harvard, Vancouver, ISO, and other styles
33

Liu, Zhao. "Research on handwritten digits recognition system based on spiking neuron network." Applied and Computational Engineering 30, no. 1 (2024): 284–91. http://dx.doi.org/10.54254/2755-2721/30/20230052.

Full text
Abstract:
In the 21st century, deep learning has revolutionized the fields of machine learning and computer science, attaining high accuracy in tasks such as image recognition. More layers and more parameters are stuffed into the network to achieve higher performance, making the network extremely large. A new, radically different approach has been proposed to complete tasks such as image recognition using a spiking neural network (SNN). The spiking neural network is event-driven rather than data-driven, which makes it more physiologically realistic and uses far less power. This study reviews the development of spiking neural networks and their differences from non-spiking neural networks, as well as the different encoding methods, neuronal models and update rules that have an impact on the performance of the network. It can be concluded that though SNNs can hardly achieve the same accuracy as artificial neural networks (ANNs), the gap is narrowing. More strategies from ANNs, such as back-propagation and convolutional layers, have been applied to SNNs, making them more accurate, stable and comprehensive.
APA, Harvard, Vancouver, ISO, and other styles
34

Pang, Lili, Junxiu Liu, Jim Harkin, et al. "Case Study—Spiking Neural Network Hardware System for Structural Health Monitoring." Sensors 20, no. 18 (2020): 5126. http://dx.doi.org/10.3390/s20185126.

Full text
Abstract:
This case study provides a feasibility analysis of adapting a Spiking Neural Network (SNN)-based Structural Health Monitoring (SHM) system to explore a low-cost solution for inspecting the structural health of damaged buildings that survived a natural disaster, i.e., earthquakes or similar events. Various techniques are used to detect the structural health status of a building for performance benchmarking, including different feature extraction methods and classification techniques (e.g., SNN, K-means and artificial neural network etc.). The SNN is utilized to process the sensory data generated from a full-scale seven-story reinforced concrete building to verify the classification performance. Results show that the proposed SNN hardware has high classification accuracy, reliability, longevity and low hardware area overhead.
APA, Harvard, Vancouver, ISO, and other styles
35

Zhu, Xi, Yi Sun, Haijun Liu, Qingjiang Li, and Hui Xu. "Simulation of the Spiking Neural Network based on Practical Memristor." MATEC Web of Conferences 173 (2018): 01025. http://dx.doi.org/10.1051/matecconf/201817301025.

Full text
Abstract:
In order to gain a better understanding of the brain and explore biologically-inspired computation, significant attention is being paid to research into spike-based neural computation. The spiking neural network (SNN), which is inspired by the understanding of observed biological structure, has been increasingly applied to pattern recognition tasks. In this work, a single-layer SNN architecture based on spike-timing-dependent plasticity (STDP) characteristics, in accordance with actual device test data, has been proposed. The device data is derived from the fabricated Ag/GeSe/TiN memristor. The network has been tested on the MNIST dataset, and the classification accuracy attains 90.2%. Furthermore, the impact of device instability on the SNN performance has been discussed, which can propose guidelines for fabricating memristors used for SNN architectures based on STDP characteristics.
APA, Harvard, Vancouver, ISO, and other styles
36

Kim, Youngeun, Yeshwanth Venkatesha, and Priyadarshini Panda. "PrivateSNN: Privacy-Preserving Spiking Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 1 (2022): 1192–200. http://dx.doi.org/10.1609/aaai.v36i1.20005.

Full text
Abstract:
How can we bring both privacy and energy-efficiency to a neural system? In this paper, we propose PrivateSNN, which aims to build low-power Spiking Neural Networks (SNNs) from a pre-trained ANN model without leaking sensitive information contained in a dataset. Here, we tackle two types of leakage problems: 1) Data leakage is caused when the networks access real training data during an ANN-SNN conversion process. 2) Class leakage is caused when class-related features can be reconstructed from network parameters. In order to address the data leakage issue, we generate synthetic images from the pre-trained ANNs and convert ANNs to SNNs using the generated images. However, converted SNNs remain vulnerable to class leakage since the weight parameters have the same (or scaled) value with respect to ANN parameters. Therefore, we encrypt SNN weights by training SNNs with a temporal spike-based learning rule. Updating weight parameters with temporal data makes SNNs difficult to be interpreted in the spatial domain. We observe that the encrypted PrivateSNN eliminates data and class leakage issues with a slight performance drop (less than ~2%) and significant energy-efficiency gain (about 55x) compared to the standard ANN. We conduct extensive experiments on various datasets including CIFAR10, CIFAR100, and TinyImageNet, highlighting the importance of privacy-preserving SNN training.
APA, Harvard, Vancouver, ISO, and other styles
37

Lobov, Sergey A., Alexey I. Zharinov, Valeri A. Makarov, and Victor B. Kazantsev. "Spatial Memory in a Spiking Neural Network with Robot Embodiment." Sensors 21, no. 8 (2021): 2678. http://dx.doi.org/10.3390/s21082678.

Full text
Abstract:
Cognitive maps and spatial memory are fundamental paradigms of brain functioning. Here, we present a spiking neural network (SNN) capable of generating an internal representation of the external environment and implementing spatial memory. The SNN initially has a non-specific architecture, which is then shaped by Hebbian-type synaptic plasticity. The network receives stimuli at specific loci, while the memory retrieval operates as a functional SNN response in the form of population bursts. The SNN function is explored through its embodiment in a robot moving in an arena with safe and dangerous zones. We propose a measure of the global network memory using the synaptic vector field approach to validate results and calculate information characteristics, including learning curves. We show that after training, the SNN can effectively control the robot's cognitive behavior, allowing it to avoid dangerous regions in the arena. However, the learning is not perfect. The robot eventually visits dangerous areas. Such behavior, also observed in animals, enables relearning in time-evolving environments. If a dangerous zone moves to another place, the SNN remaps positive and negative areas, allowing it to escape the catastrophic interference phenomenon known for some AI architectures. Thus, the robot adapts to a changing world.
APA, Harvard, Vancouver, ISO, and other styles
38

Zhou, Shaoheng. "Text Classification based on Spiking Neural Network with LSTM Conversion." Applied and Computational Engineering 99, no. 1 (2024): 186–93. http://dx.doi.org/10.54254/2755-2721/99/20251772.

Full text
Abstract:
In recent years, spiking neural networks (SNNs) have attracted much attention due to their low energy consumption and have achieved remarkable results in the fields of vision and information processing. However, the application of SNNs in the field of natural language processing (NLP) is still relatively limited. Given that current popular large-scale language models rely on huge amounts of computation and energy, it is of great practical importance to explore SNN-based approaches to implement NLP tasks in a more energy-efficient way. This paper investigates the conversion method from an LSTM network to an SNN and compares the performance of the original LSTM network with the converted SNN in a text classification task. The paper experimentally compares the accuracy of different LSTM-based models using different methods on the same dataset. The experimental results show that the converted SNN is able to achieve similar performance to the original LSTM network with significantly lower power consumption in text classification tasks on multiple datasets.
APA, Harvard, Vancouver, ISO, and other styles
39

Li, Ming, Haibo Ruan, Yu Qi, Tiantian Guo, Ping Wang, and Gang Pan. "Odor Recognition with a Spiking Neural Network for Bioelectronic Nose." Sensors 19, no. 5 (2019): 993. http://dx.doi.org/10.3390/s19050993.

Full text
Abstract:
Electronic noses recognize odors using sensor arrays, and usually face difficulties with complex odors, while animals have their own biological sensory capabilities for various types of odors. By implanting electrodes into the olfactory bulb of mammalian animals, odors may be recognized by decoding the recorded neural signals, in order to construct a bioelectronic nose. This paper proposes a spiking neural network (SNN)-based odor recognition method from spike trains recorded by the implanted electrode array. The proposed SNN-based approach exploits the rich timing information carried in the precise time points of spikes. To alleviate the overfitting problem, we design a new SNN learning method with a voltage-based regulation strategy. Experiments are carried out using spike train signals recorded from the main olfactory bulb in rats. Results show that our SNN-based approach achieves state-of-the-art performance, compared with other methods. With the proposed voltage regulation strategy, it achieves about 15% improvement compared with a classical SNN model.
APA, Harvard, Vancouver, ISO, and other styles
40

Zhang, Tao, Shuiying Xiang, Wenzhuo Liu, Yanan Han, Xingxing Guo, and Yue Hao. "Hybrid Spiking Fully Convolutional Neural Network for Semantic Segmentation." Electronics 12, no. 17 (2023): 3565. http://dx.doi.org/10.3390/electronics12173565.

Full text
Abstract:
The spiking neural network (SNN) exhibits distinct advantages in terms of low power consumption due to its event-driven nature. However, it is limited to simple computer vision tasks because the direct training of SNNs is challenging. In this study, we propose a hybrid architecture called the spiking fully convolutional neural network (SFCNN) to expand the application of SNNs in the field of semantic segmentation. To train the SNN, we employ the surrogate gradient method along with backpropagation. The accuracy of mean intersection over union (mIoU) for the VOC2012 dataset is higher than that of existing spiking FCNs by almost 30%. The accuracy of mIoU can reach 39.6%. Moreover, the proposed hybrid SFCNN achieved excellent segmentation performance for other datasets such as COCO2017, DRIVE, and Cityscapes. Our hybrid SFCNN is a valuable and interesting contribution to extending the functionality of SNNs, especially for power-constrained applications.
APA, Harvard, Vancouver, ISO, and other styles
41

Hulea, Mircea, George Iulian Uleru, and Constantin Florin Caruntu. "Adaptive SNN for Anthropomorphic Finger Control." Sensors 21, no. 8 (2021): 2730. http://dx.doi.org/10.3390/s21082730.

Full text
Abstract:
Anthropomorphic hands that mimic the smoothness of human hand motions should be controlled by artificial units of high biological plausibility. Adaptability is among the characteristics of such control units, which provides the anthropomorphic hand with the ability to learn motions. This paper presents a simple structure of an adaptive spiking neural network implemented in analogue hardware that can be trained using Hebbian learning mechanisms to rotate the metacarpophalangeal joint of a robotic finger towards targeted angle intervals. Being bioinspired, the spiking neural network drives actuators made of shape memory alloy and receives feedback from neuromorphic sensors that convert the joint rotation angle and compression force into spiking frequency. The adaptive SNN activates independent neural paths that correspond to angle intervals and learns in which of these intervals the finger rotation is stopped by an external force. Learning occurs when angle-specific neural paths are stimulated concurrently with the supraliminar stimulus that activates all the neurons that inhibit the SNN output stopping the finger. The results showed that after learning, the finger stopped in the angle interval in which the angle-specific neural path was active, without the activation of the supraliminar stimulus. The proposed concept can be used to implement control units for anthropomorphic robots that are able to learn motions unsupervised, based on principles of high biological plausibility.
APA, Harvard, Vancouver, ISO, and other styles
42

Stasenko, Sergey V., and Victor B. Kazantsev. "Information Encoding in Bursting Spiking Neural Network Modulated by Astrocytes." Entropy 25, no. 5 (2023): 745. http://dx.doi.org/10.3390/e25050745.

Full text
Abstract:
We investigated a mathematical model composed of a spiking neural network (SNN) interacting with astrocytes. We analysed how information content in the form of two-dimensional images can be represented by an SNN in the form of a spatiotemporal spiking pattern. The SNN includes excitatory and inhibitory neurons in some proportion, sustaining the excitation–inhibition balance of autonomous firing. The astrocytes accompanying each excitatory synapse provide a slow modulation of synaptic transmission strength. An information image was uploaded to the network in the form of excitatory stimulation pulses distributed in time, reproducing the shape of the image. We found that astrocytic modulation prevented stimulation-induced SNN hyperexcitation and non-periodic bursting activity. Such homeostatic astrocytic regulation of neuronal activity makes it possible to restore the image supplied during stimulation and lost in the raster diagram of neuronal activity due to non-periodic neuronal firing. From a biological standpoint, our model shows that astrocytes can act as an additional adaptive mechanism for regulating neural activity, which is crucial for sensory cortical representations.
APA, Harvard, Vancouver, ISO, and other styles
43

Stasenko, Sergey V., and Victor B. Kazantsev. "Bursting Dynamics of Spiking Neural Network Induced by Active Extracellular Medium." Mathematics 11, no. 9 (2023): 2109. http://dx.doi.org/10.3390/math11092109.

Full text
Abstract:
We propose a mathematical model of a spiking neural network (SNN) that interacts with an active extracellular field formed by the brain extracellular matrix (ECM). The SNN exhibits irregular spiking dynamics induced by a constant noise drive. Following neurobiological facts, neuronal firing leads to the production of the ECM that occupies the extracellular space. In turn, active components of the ECM can modulate neuronal signaling and synaptic transmission, for example, through the effect of so-called synaptic scaling. By simulating the model, we discovered that the ECM-mediated regulation of neuronal activity promotes spike grouping into quasi-synchronous population discharges called population bursts. We investigated how model parameters, particularly the strengths of ECM influence on synaptic transmission, may facilitate SNN bursting and increase the degree of neuronal population synchrony.
APA, Harvard, Vancouver, ISO, and other styles
44

Peng, Cheng, and Yangong Zheng. "Robust gas recognition with mixed interference using a spiking neural network." Measurement Science and Technology 33, no. 1 (2021): 015105. http://dx.doi.org/10.1088/1361-6501/ac3199.

Full text
Abstract:
Spiking neural networks (SNNs) have attracted significant interest owing to their high computing efficiency. However, few studies have focused on the robustness of SNNs and their application to electronic noses for gas recognition under strong interference. The goal of this study was to explore the robustness of an SNN for gas recognition under mixed interference. Data on mixed gases with different levels of interference were simulated by fitting experimental data. Two layers of an SNN based on leaky integrate-and-fire (LIF) neurons were constructed and the network was trained solely on datasets of pure targeted gases. Testing was then performed using data with mixed interference. The SNN achieved superior performance compared to other algorithms and remained 100% accurate for gas recognition up to a 10% interference ratio. The interval distance of spiking times between classes represents the robustness of the SNN according to the LIF neuron algorithm. SNNs have an excellent capacity to maximize the differences between data of different classes and are promising candidates for electronic noses.
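The LIF dynamics this abstract relies on can be illustrated with a minimal discrete-time simulation. This is our sketch, not the authors' implementation; the time constant, threshold, and input currents are hypothetical values chosen only to show how spike timing separates input classes:

```python
import numpy as np

def lif_spike_times(input_current, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Simulate a single leaky integrate-and-fire neuron and
    return the time steps at which it emits spikes."""
    v = v_reset
    spikes = []
    for t, i_in in enumerate(input_current):
        # Euler step of the membrane equation: tau * dv/dt = -v + I(t)
        v += dt / tau * (-v + i_in)
        if v >= v_thresh:          # threshold crossing -> spike
            spikes.append(t)
            v = v_reset            # hard reset after firing
    return spikes

# A stronger input current drives earlier and more frequent spikes,
# so the spacing of spike times carries class information.
weak = lif_spike_times(np.full(200, 1.2))
strong = lif_spike_times(np.full(200, 2.0))
```

In this toy setting, the stronger input fires both earlier and more often, which mirrors the abstract's point that the interval distance of spiking times between classes is what makes the classifier robust.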
APA, Harvard, Vancouver, ISO, and other styles
45

Szczęsny, Szymon, Damian Huderek, and Łukasz Przyborowski. "Spiking Neural Network with Linear Computational Complexity for Waveform Analysis in Amperometry." Sensors 21, no. 9 (2021): 3276. http://dx.doi.org/10.3390/s21093276.

Full text
Abstract:
The paper describes the architecture of a Spiking Neural Network (SNN) for time waveform analyses using edge computing. The network model was based on the principles of preprocessing signals in the diencephalon and using tonic spiking and inhibition-induced spiking models typical for the thalamus area. The research focused on a significant reduction of the complexity of the SNN algorithm by eliminating most synaptic connections and ensuring zero dispersion of weight values with respect to connections between neuron layers. The paper describes a network mapping and learning algorithm in which the number of variables in the learning process is linearly dependent on the size of the patterns. The work included testing the stability of the accuracy parameter for various network sizes. The described approach used the ability of spiking neurons to process currents of less than 100 pA, typical of amperometric techniques. An example of a practical application is an analysis of vesicle fusion signals using an amperometric system based on Carbon NanoTube (CNT) sensors. The paper concludes with a discussion of the costs of implementing the network as a semiconductor structure.
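The tonic spiking behaviour mentioned above is commonly reproduced with the Izhikevich neuron model; the sketch below is an illustration under assumed textbook parameters, not the paper's hardware design, showing how a constant suprathreshold drive elicits sustained firing:

```python
def izhikevich_spike_count(a, b, c, d, i_ext, t_end=500.0, dt=0.25):
    """Count output spikes of an Izhikevich neuron driven by a
    constant external current i_ext over t_end milliseconds."""
    v, u = -65.0, b * (-65.0)   # membrane potential and recovery variable
    spikes = 0
    for _ in range(int(t_end / dt)):
        # Izhikevich model: v' = 0.04 v^2 + 5 v + 140 - u + I,  u' = a (b v - u)
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + i_ext)
        u += dt * a * (b * v - u)
        if v >= 30.0:            # spike cutoff
            v, u = c, u + d      # reset after the spike
            spikes += 1
    return spikes

# Textbook tonic-spiking parameters (a=0.02, b=0.2, c=-65, d=6):
# constant drive yields sustained firing, zero drive leaves the neuron silent.
tonic = izhikevich_spike_count(0.02, 0.2, -65.0, 6.0, i_ext=14.0)
silent = izhikevich_spike_count(0.02, 0.2, -65.0, 6.0, i_ext=0.0)
```

Other firing regimes, including the inhibition-induced spiking the paper mentions, are obtained in this model simply by changing the four parameters, which is one reason it is a popular reference point for thalamic-style dynamics.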
APA, Harvard, Vancouver, ISO, and other styles
46

Oikonomou, Katerina Maria, Ioannis Kansizoglou, and Antonios Gasteratos. "A Hybrid Spiking Neural Network Reinforcement Learning Agent for Energy-Efficient Object Manipulation." Machines 11, no. 2 (2023): 162. http://dx.doi.org/10.3390/machines11020162.

Full text
Abstract:
Due to the wide spread of robotics technologies in everyday activities, from industrial automation to domestic assisted living applications, cutting-edge techniques such as deep reinforcement learning are intensively investigated with the aim to advance the technological robotics front. The mandatory limitation of power consumption remains an open challenge in contemporary robotics, especially in real-case applications. Spiking neural networks (SNNs) constitute an ideal compromise as a strong computational tool with low-power capacities. This paper introduces a spiking neural network actor for a baseline robotic manipulation task using a dual-finger gripper. To achieve that, we used a hybrid deep deterministic policy gradient (DDPG) algorithm designed with a spiking actor and a deep critic network to train the robotic agent. Thus, the agent learns to obtain the optimal policies for the three main tasks of the robotic manipulation approach: target-object reach, grasp, and transfer. The proposed method retains one of the main advantages of SNNs, namely, their capacity for neuromorphic hardware implementation, which results in energy-efficient systems. This advantage is clearly demonstrated in the evaluation results of the SNN actor, since the deep critic network was exploited only during training. Aiming to further display the capabilities of the introduced approach, we compare our model with the well-established DDPG algorithm.
APA, Harvard, Vancouver, ISO, and other styles
47

Zheng, Yu, Jingfeng Xue, Jing Liu, and Yanjun Zhang. "Biologically Inspired Spatial–Temporal Perceiving Strategies for Spiking Neural Network." Biomimetics 10, no. 1 (2025): 48. https://doi.org/10.3390/biomimetics10010048.

Full text
Abstract:
A future unmanned system needs the ability to perceive, decide and control in an open dynamic environment. In order to fulfill this requirement, it needs a method with a universal environmental perception ability. Moreover, this perceptual process needs to be interpretable and understandable, so that future interactions between unmanned systems and humans can be unimpeded. However, current mainstream DNN (deep neural network)-based AI (artificial intelligence) is a ‘black box’. We cannot interpret or understand how the decision is made by these AIs. An SNN (spiking neural network), which is more similar to a biological brain than a DNN, has the potential to implement interpretable or understandable AI. In this work, we propose a neuron group-based structural learning method for an SNN to better capture the spatial and temporal information from the external environment, and propose a time-slicing scheme to better interpret the spatial and temporal information of responses generated by an SNN. Results show that our method indeed helps to enhance the environment perception ability of the SNN, and that it possesses a certain degree of robustness, strengthening the potential to build an interpretable or understandable AI in the future.
APA, Harvard, Vancouver, ISO, and other styles
48

Mavrin, I. A., E. A. Ryndin, N. V. Andreeva, and V. V. Luchinin. "Design of spiking neural network architecture based on dendritic computation principles." Genes & Cells 18, no. 4 (2023): 821–24. http://dx.doi.org/10.17816/gc623425.

Full text
Abstract:
The paper presents the hardware architecture design of a spiking neural network (SNN) based on dendritic computation principles. The integration of active dendritic properties into the neuronal structure of the SNN aims to minimize the number of functional blocks required for hardware implementation, including synaptic connections and neurons. The available memory on the neuromorphic architecture imposes limitations on implementation, hence the need to reduce the number of functional blocks. As a test task for the SNN based on dendritic computations, we selected the image classification of eight symbols, consisting of digits one through eight. These symbols are depicted as 3×7 pixel, 1-bit images. Active dendritic properties were analyzed using the “delay plasticity” [1] principle, which introduces the mechanism of adjusting input signal delays in spiking neuron inputs. We designed an SNN model with complementary delay inputs, referred to as the active dendrite SNN, as a proof-of-principle implementation. Input spikes arriving at the primary inputs are duplicated to the delay inputs after a modifiable time delay. For convenience, each delay input was set at a single value. The input images were scanned sequentially. The neural network received three direct and three inverse inputs from the six main inputs that were coded with spikes corresponding to three pixels of a string. An “on” pixel was coded with a spike arriving at a direct input, while an “off” pixel was coded with a spike arriving at the corresponding inverse input. The line scanning time was 10 μs, input width was 1 μs, and delay time was 5 μs. The optimization of spiking neuron parameters was performed through a stochastic search algorithm based on simulated annealing. The parameters optimized for the Leaky-Integrate-and-Fire (LIF) neurons included the leakage time constant (22.8 μs), firing threshold (1150 arbitrary units), and refractory period (1 μs).
The active dendrite SNN training employed the tempotron learning rule [2]. The training optimized the following parameters: the maximum change in synaptic weight on potentiation and depression (0.7 and –3 arbitrary units, respectively) and the synaptic weight’s upper bound (195 arbitrary units). Complementary delayed inputs facilitated the learning of the order in which input patterns arrived for SNN neurons during training. The paper compares an SNN architecture based on dendritic computations to our previously designed two-layer SNN with a hidden perceptron layer and an output layer consisting of LIF neurons [3]. Using the same LIF neuron design, input image coding, and LIF neuron layer structure as in the proposed architecture, our two-layer SNN with a hidden perceptron layer and output layer of LIF neurons successfully recognized 3×5 images of three symbols with only 10 neurons and 63 synapses. Alternatively, the active dendrite SNN was able to recognize 3×7 images of eight symbols with four neurons and 48 synaptic weights. In conclusion, incorporating active dendrite properties into the SNN architecture for image recognition resulted in optimized functional block usage, lowering the number of neurons and synapses used by 60 and 24%, respectively.
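The reported neuron and learning parameters can be placed in a minimal discrete-time sketch. The following is our illustration, not the authors' hardware model, and the tempotron update is deliberately simplified to a uniform, clip-bounded change per active synapse, whereas the actual rule [2] scales each synapse's update by its postsynaptic-potential contribution:

```python
import numpy as np

TAU = 22.8                  # leakage time constant, us (from the abstract)
V_TH = 1150.0               # firing threshold, arbitrary units
T_REF = 1.0                 # refractory period, us
W_MAX = 195.0               # upper bound on synaptic weights
DW_POT, DW_DEP = 0.7, -3.0  # max weight change on potentiation / depression

def simulate_lif(weights, spike_trains, dt=0.1):
    """Count output spikes of a LIF neuron given binary input spike trains
    (one row per synapse, one column per time step of width dt us)."""
    v, refractory, n_spikes = 0.0, 0.0, 0
    for k in range(spike_trains.shape[1]):
        if refractory > 0:
            refractory -= dt          # neuron is silent while refractory
            continue
        i_syn = float(weights @ spike_trains[:, k])   # weighted input spikes
        v += dt / TAU * (-v) + i_syn                  # leak + synaptic input
        if v >= V_TH:
            n_spikes += 1
            v, refractory = 0.0, T_REF
    return n_spikes

def tempotron_update(weights, spike_trains, should_fire, fired):
    """Simplified tempotron-style step: potentiate on a missed target spike,
    depress on a false alarm; weights stay within [0, W_MAX]."""
    if fired == should_fire:
        return weights
    dw = DW_POT if should_fire else DW_DEP
    active = spike_trains.any(axis=1)   # only synapses that actually spiked
    return np.clip(weights + dw * active, 0.0, W_MAX)

# Six synapses (three direct, three inverse inputs), spikes at two time steps.
st = np.zeros((6, 300))
st[:, 10] = 1.0
st[:, 100] = 1.0
n_strong = simulate_lif(np.full(6, 195.0), st)   # weights at the upper bound
n_weak = simulate_lif(np.full(6, 100.0), st)     # subthreshold weights
```

With all six weights at the 195-unit upper bound, the summed input (6 × 195 = 1170) exceeds the 1150-unit threshold and the neuron fires at both stimulation steps; at 100 units it stays silent, and the simplified update then raises each active weight by 0.7 units, clipped at W_MAX.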
APA, Harvard, Vancouver, ISO, and other styles
49

Schliebs, Stefan, Nikola Kasabov, and Michaël Defoin-Platel. "On the Probabilistic Optimization of Spiking Neural Networks." International Journal of Neural Systems 20, no. 06 (2010): 481–500. http://dx.doi.org/10.1142/s0129065710002565.

Full text
Abstract:
The construction of a Spiking Neural Network (SNN), i.e. the choice of an appropriate topology and the configuration of its internal parameters, represents a great challenge for SNN-based applications. Evolutionary Algorithms (EAs) offer an elegant solution for these challenges, and methods capable of exploring both types of search spaces simultaneously appear to be the most promising ones. A variety of such heterogeneous optimization algorithms have emerged recently, in particular in the field of probabilistic optimization. In this paper, a literature review on heterogeneous optimization algorithms is presented and an example of probabilistic optimization of an SNN is discussed in detail. The paper provides an experimental analysis of a novel Heterogeneous Multi-Model Estimation of Distribution Algorithm (hMM-EDA). First, practical guidelines for configuring the method are derived and then the performance of hMM-EDA is compared to state-of-the-art optimization algorithms. Results show that hMM-EDA is a lightweight, fast and reliable optimization method that requires the configuration of only very few parameters. Its performance on a synthetic heterogeneous benchmark problem is highly competitive and suggests its suitability for the optimization of SNNs.
APA, Harvard, Vancouver, ISO, and other styles
50

Zhu, Shurui. "Theoretical framework and advanced applications of spiking neural networks." Journal of Physics: Conference Series 2634, no. 1 (2023): 012044. http://dx.doi.org/10.1088/1742-6596/2634/1/012044.

Full text
Abstract:
Developed from traditional artificial neural networks (ANNs), the spiking neural network (SNN) faithfully mimics the biological behaviours of natural neurons. SNNs transmit information through the firing of spiking neurons only when the membrane potential reaches a certain threshold. Because of this property, SNNs are referred to as the most biologically plausible neural model. They are also regarded as time-efficient and low in power consumption when dealing with complex computational tasks. In this paper, the differences between SNNs and ANNs are first identified. The theoretical framework of the SNN, including the biomedical background, classical spiking neuron models, neural coding mechanisms, as well as the learning algorithm, is then thoroughly introduced. From the theories, the SNN’s biological plausibility, working principles, strengths and limitations are discussed. Additionally, two applications in the medical and robotics fields that use the SNN for pattern recognition and classification are described in detail, indicating its potential for more innovative studies. More imaginative uses of SNNs are in demand, given their potential for a dominant role in future computational fields.
APA, Harvard, Vancouver, ISO, and other styles