
Dissertations / Theses on the topic 'Spiking neural network (SNN)'

Consult the top 50 dissertations / theses for your research on the topic 'Spiking neural network (SNN).'

Next to every source in the list of references there is an 'Add to bibliography' button. Press it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Buhry, Laure. "Estimation de paramètres de modèles de neurones biologiques sur une plate-forme de SNN (Spiking Neural Network) implantés "in silico"." Thesis, Bordeaux 1, 2010. http://www.theses.fr/2010BOR14057/document.

Full text
Abstract:
This work, conducted in a research group designing neuromimetic integrated circuits based on the Hodgkin-Huxley model, deals with the modelling of biological neurons and, more precisely, with the estimation of neuron model parameters. The first part of the manuscript bridges the gap between neuron modelling and optimization, focusing on the Hodgkin-Huxley model because it is the one used in the group. An estimation method associated with the voltage-clamp electrophysiological technique already existed, but its successive approximations made the precise determination of some parameters impossible. In the second part we therefore propose an alternative method, based on the differential evolution algorithm, which overcomes the limitations of the classical approach and jointly estimates all parameters of a given ionic channel while avoiding the usual approximations. The third chapter is divided into three sections. In the first two, we apply our new estimation method to two different problems: fitting the model to biological data, and developing an automated protocol for tuning neuromimetic chips, ionic channel by ionic channel. In the third section, we propose an estimation technique that uses only membrane-voltage recordings, which are easier to measure than ionic currents. Finally, the fourth and last chapter is a theoretical study preparing the implementation of small networks of about a hundred electronic neurons on neuromimetic chips: in software, we study the influence of intrinsic cellular properties on the global behaviour of the network in the context of gamma oscillations.
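The differential evolution approach described in this abstract can be illustrated with a toy fit. The sketch below is not the thesis's actual procedure: it recovers the two parameters of a Boltzmann steady-state activation curve (a standard Hodgkin-Huxley ingredient) from noisy synthetic voltage-clamp data; the function names, parameter values, and noise level are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Boltzmann steady-state activation curve, a standard ingredient of the
# Hodgkin-Huxley formalism. The two parameters (half-activation voltage
# and slope) are what the fit must recover.
def m_inf(v, v_half, k):
    return 1.0 / (1.0 + np.exp((v_half - v) / k))

rng = np.random.default_rng(0)
v = np.linspace(-80.0, 40.0, 60)                 # command potentials (mV)
true_params = (-35.0, 8.0)                       # hypothetical (v_half, k)
data = m_inf(v, *true_params) + rng.normal(0.0, 0.01, v.size)

# Cost function: squared error between the model and the noisy
# "recordings". Differential evolution searches both parameters jointly,
# which is the point made in the abstract.
def cost(params):
    return float(np.sum((m_inf(v, *params) - data) ** 2))

result = differential_evolution(cost, bounds=[(-70.0, 0.0), (1.0, 20.0)],
                                seed=1, tol=1e-8)
v_half_est, k_est = result.x
```

Joint estimation over bounded parameter ranges, as here, is what distinguishes this approach from sequential voltage-clamp approximations.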
APA, Harvard, Vancouver, ISO, and other styles
2

Buhry, Laure. "Estimation de paramètres de modèles de neurones biologiques sur une plate-forme de SNN (Spiking Neural Network) implantés "in silico"." Phd thesis, Université Sciences et Technologies - Bordeaux I, 2010. http://tel.archives-ouvertes.fr/tel-00561396.

Full text
Abstract:
This thesis work, carried out in a team designing analogue neuromimetic circuits based on the Hodgkin-Huxley model, concerns the modelling of biological neurons and, more precisely, the estimation of neuron model parameters. The first part of the manuscript links neuron modelling to optimization. The focus is on the Hodgkin-Huxley model, for which a parameter-extraction method associated with an electrophysiological measurement technique (the voltage clamp) already existed, but whose successive approximations made the precise determination of some parameters impossible. In the second part we propose an alternative method for estimating the Hodgkin-Huxley model parameters, based on the differential evolution algorithm, which overcomes the limitations of the classical method and allows all parameters of a given ionic channel to be estimated jointly. The third chapter is divided into three sections. In the first two, we apply our new technique to estimating the parameters of the same model from biological data, then develop an automated protocol for tuning neuromimetic circuits, ionic channel by ionic channel. The third section presents a method for estimating the parameters from recordings of a neuron's membrane voltage, data that are easier to acquire than ionic currents. The fourth and final chapter opens onto the use of small networks of about a hundred electronic neurons: we carry out a software study of the influence of the cell's intrinsic properties on the global behaviour of the network in the context of gamma oscillations.
APA, Harvard, Vancouver, ISO, and other styles
3

Patterson, James Cameron. "Managing a real-time massively-parallel neural architecture." Thesis, University of Manchester, 2012. https://www.research.manchester.ac.uk/portal/en/theses/managing-a-realtime-massivelyparallel-neural-architecture(dfab5ca7-fcd5-4ebe-887b-0a7c330c7206).html.

Full text
Abstract:
A human brain has billions of processing elements operating simultaneously; the only practical way to model this computationally is with a massively-parallel computer. A computer on such a significant scale requires hundreds of thousands of interconnected processing elements, a complex environment which requires many levels of monitoring, management and control. Management begins from the moment power is applied and continues whilst the application software loads, executes, and the results are downloaded. This is the story of the research and development of a framework of scalable management tools that support SpiNNaker, a novel computing architecture designed to model spiking neural networks of biologically-significant sizes. This management framework provides solutions from the most fundamental set of power-on self-tests, through to complex, real-time monitoring of the health of the hardware and the software during simulation. The framework devised uses standard tools where appropriate, covering hardware up / down events and capacity information, through to bespoke software developed to provide real-time insight into neural network software operation across multiple levels of abstraction. With this layered management approach, users (or automated agents) have access to results dynamically and are able to make informed decisions on required actions in real-time.
APA, Harvard, Vancouver, ISO, and other styles
4

Spyrou, Theofilos. "Functional safety and reliability of neuromorphic computing systems." Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS118.

Full text
Abstract:
The recent rise of Artificial Intelligence (AI) has found a wide range of applications, and AI is gaining more and more ground in almost every field of our lives. With this steep integration of AI, it is reasonable for concerns to arise, and these need to be addressed before AI is deployed in the field, especially in mission- and safety-critical applications like autonomous vehicles. Spiking Neural Networks (SNNs), although biologically inspired, inherit only partially the remarkable fault-resilience capabilities of their biological counterparts, being vulnerable to electronic defects and faults occurring at the hardware level. Hence, a methodological exploration of the dependability characteristics of AI hardware accelerators and neuromorphic platforms is of utmost importance. This thesis tackles the subjects of testing and fault tolerance in SNNs and their neuromorphic implementations in hardware.
APA, Harvard, Vancouver, ISO, and other styles
5

Johnson, Melissa. "A Spiking Bidirectional Associative Memory Neural Network." Thesis, Université d'Ottawa / University of Ottawa, 2021. http://hdl.handle.net/10393/42222.

Full text
Abstract:
Spiking neural networks (SNNs) are a more biologically realistic model of the brain than traditional analog neural networks and should therefore be better for modelling certain functions of the human brain. This thesis uses the concept of deriving an SNN from an accepted non-spiking neural network via analysis and modification of the transmission function. We investigate this process to determine whether, and how, the modifications can be made so as to minimize the loss of information in the transition from non-spiking to spiking while retaining the positive features and functionality of the non-spiking network. By comparing combinations of spiking neuron models and networks against each other, we determined that replacing the transmission function with a neuron model similar to it is the easiest way to create a spiking neural network that works comparatively well. This similarity between transmission function and neuron model simplifies parameter selection, which is a key component of obtaining a functioning SNN. The parameters all play different roles, but for the most part parameters that speed up spiking, such as large resistance values or small rheobases, improve the accuracy of the network. The network is still incomplete as a spiking neural network, since this conversion is often performed only after learning has been completed in analog form. The neuron model and network developed here are the initial steps in creating a bidirectional SNN that handles hetero-associative and auto-associative recall and can be switched easily between spiking and non-spiking modes with minimal to no loss of data. By tying everything (the non-spiking learning rule, which in our case uses the transmission function, and the neuron model of the SNN) to the transmission function, we are able to create a functioning SNN. Without this similarity, we find that creating an SNN is much more complicated and requires much more work in parameter optimization to achieve a functioning SNN.
APA, Harvard, Vancouver, ISO, and other styles
6

Goel, Piyush. "Spiking neural network based approach to EEG signal analysis." Thesis, University of Portsmouth, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.496600.

Full text
Abstract:
The research described in this thesis presents a new classification technique for continuous electroencephalographic (EEG) recordings, based on a network of spiking neurons. Analysis of the signals is performed on ensemble EEG, and the task of the neural network is to identify the P300 component in the signals. The network employs leaky integrate-and-fire neurons as nodes in a multi-layered structure. The method involves the formation of multiple weak classifiers that vote, and the collective results are used for the final classification.
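The leaky integrate-and-fire neuron named in this abstract can be sketched in a few lines. This is a generic discrete-time LIF model with illustrative parameter values, not the network used in the thesis.

```python
import numpy as np

# Discrete-time leaky integrate-and-fire (LIF) neuron: the membrane
# potential leaks toward its resting value, integrates the input
# current, and is reset when it crosses the firing threshold.
def lif(current, dt=1e-3, tau=20e-3, r=1e7, v_rest=0.0, v_thresh=0.015):
    v = v_rest
    spike_times = []
    for i, amp in enumerate(current):
        v += (-(v - v_rest) + r * amp) * (dt / tau)
        if v >= v_thresh:
            spike_times.append(i * dt)   # spike time in seconds
            v = v_rest                   # hard reset after the spike
    return spike_times

# A constant 2 nA input drives the neuron to fire regularly.
spike_times = lif(np.full(200, 2e-9))
```

In a classifier like the one described, each node would integrate weighted EEG-derived input currents of this kind, and the presence or timing of output spikes would feed the voting stage.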
APA, Harvard, Vancouver, ISO, and other styles
7

Davies, Sergio. "Learning in spiking neural networks." Thesis, University of Manchester, 2013. https://www.research.manchester.ac.uk/portal/en/theses/learning-in-spiking-neural-networks(2d2be0d7-9557-481e-b9f1-3889a5ca2447).html.

Full text
Abstract:
Artificial neural network simulators are a research field which attracts the interest of researchers from various fields, from biology to computer science. The final objectives are the understanding of the mechanisms underlying the human brain, how to reproduce them in an artificial environment, and how drugs interact with them. Multiple neural models have been proposed, each with their peculiarities, from the very complex and biologically realistic Hodgkin-Huxley neuron model to the very simple 'leaky integrate-and-fire' neuron. However, despite numerous attempts to understand the learning behaviour of the synapses, few models have been proposed. Spike-Timing-Dependent Plasticity (STDP) is one of the most relevant and biologically plausible models, and some variants (such as the triplet-based STDP rule) have been proposed to accommodate all biological observations. The research presented in this thesis focuses on a novel learning rule, based on the spike-pair STDP algorithm, which provides a statistical approach with the advantage of being less computationally expensive than the standard STDP rule, and is therefore suitable for its implementation on stand-alone computational units. The environment in which this research work has been carried out is the SpiNNaker project, which aims to provide a massively parallel computational substrate for neural simulation. To support such research, two other topics have been addressed: the first is a way to inject spikes into the SpiNNaker system through a non-real-time channel such as the Ethernet link, synchronising with the timing of the SpiNNaker system. The second research topic is focused on a way to route spikes in the SpiNNaker system based on populations of neurons. The three topics are presented in sequence after a brief introduction to the SpiNNaker project. 
Future work could include structural plasticity (also known as synaptic rewiring); here, during the simulation of neural networks on the SpiNNaker system, axons, dendrites and synapses may be grown or pruned according to biological observations.
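The pair-based (spike-pair) STDP rule that this abstract builds on can be sketched as follows. The amplitudes and time constants are illustrative assumptions, not the values used on SpiNNaker, and this is the standard rule rather than the thesis's statistical variant.

```python
import math

# Pair-based STDP: the weight change depends on the time difference
# between a pre- and a postsynaptic spike. Pre-before-post potentiates,
# post-before-pre depresses, both decaying exponentially with the gap.
def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012,
            tau_plus=20.0, tau_minus=20.0):
    dt = t_post - t_pre                               # ms; positive = causal
    if dt >= 0:
        return a_plus * math.exp(-dt / tau_plus)      # potentiation
    return -a_minus * math.exp(dt / tau_minus)        # depression

# Apply every pre/post pairing to a starting weight, clamped to [0, 1].
def apply_stdp(w, pre_spikes, post_spikes, w_min=0.0, w_max=1.0):
    for tp in pre_spikes:
        for tq in post_spikes:
            w += stdp_dw(tp, tq)
    return min(max(w, w_min), w_max)

w = apply_stdp(0.5, pre_spikes=[10.0, 30.0], post_spikes=[12.0, 29.0])
```

The all-pairs loop here is exactly the per-spike cost that motivates cheaper statistical approximations on stand-alone computational units.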
APA, Harvard, Vancouver, ISO, and other styles
8

Mekemeza, Ona Keshia. "Photonic spiking neuron network." Electronic Thesis or Diss., Bourgogne Franche-Comté, 2023. http://www.theses.fr/2023UBFCD052.

Full text
Abstract:
Les réseaux neuromorphiques pour le traitement d'informations ont pris une placeimportante aujourd'hui notamment du fait de la montée en complexité des tâches à effectuer : reconnaissance vocale, corrélation d'images dynamiques, prise de décision rapide multidimensionnelle, fusion de données, optimisation comportementale, etc... Il existe plusieurs types de tels réseaux et parmi ceux- ci les réseaux impulsionnels, c'est-à-dire, ceux dont le fonctionnement est calqué sur celui des neurones corticaux. Ce sont ceux qui devraient offrir le meilleur rendement énergétique donc le meilleur passage à l'échelle. Plusieurs démonstrations de neurones artificielles ont été menées avec des circuits électroniques et plus récemment photoniques. La densité d'intégration de la filière photonique sur silicium est un atout pour créer des circuits suffisamment complexes pour espérer réaliser des démonstrations complètes. Le but de la thèse est donc d'exploiter une architecture de réseau neuromorphique impulsionnel à base de lasers à bascule de gain (Q switch) intégrés sur silicium et d'un circuit d'interconnexion ultra-dense et reconfigurable apte à imiter les poids synaptiques. Une modélisation complète ducircuit est attendue avec, à la clé la démonstration pratique d'une application dans la résolution d'un problème mathématique à définir<br>Today, neuromorphic networks play a crucial role in information processing,particularly as tasks become increasingly complex: voice recognition, dynamic image correlation, rapid multidimensional decision- making, data merging, behavioral optimization, etc... Neuromorphic networks come in several types; spiking networks are one of them. The latter's modus operandi is based on that of cortical neurons. As spiking networks are the most energy-efficient neuromorphic networks, they offer the greatest potential for scaling. Several demonstrations of artificial neurons have been conducted with electronic and more recently photonic circuits. 
The integration density of silicon photonics is an asset to create circuits that are complex enough to hopefully carry out a complete demonstration. Therefore, this thesis aims to exploit an architecture of a photonic spiking neural network based on Q-switched lasers integrated into silicon and an ultra-dense and reconfigurable interconnection circuit that can simulate synaptic weights. A complete modeling of the circuit is expected with a practical demonstration of an application in solving a mathematical problem to be defined
APA, Harvard, Vancouver, ISO, and other styles
9

Han, Bing. "Acceleration of Spiking Neural Network on General Purpose Graphics Processors." University of Dayton / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1271368713.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Susi, Gianluca. "Asynchronous spiking neural networks: paradigma generale e applicazioni." Doctoral thesis, Università degli Studi di Roma "Tor Vergata", 2012. http://hdl.handle.net/2108/80567.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Evans, Benjamin D. "Learning transformation-invariant visual representations in spiking neural networks." Thesis, University of Oxford, 2012. https://ora.ox.ac.uk/objects/uuid:15bdf771-de28-400e-a1a7-82228c7f01e4.

Full text
Abstract:
This thesis aims to understand the learning mechanisms which underpin the process of visual object recognition in the primate ventral visual system. The computational crux of this problem lies in the ability to retain specificity to recognize particular objects or faces, while exhibiting generality across natural variations and distortions in the view (DiCarlo et al., 2012). In particular, the work presented is focussed on gaining insight into the processes through which transformation-invariant visual representations may develop in the primate ventral visual system. The primary motivation for this work is the belief that some of the fundamental mechanisms employed in the primate visual system may only be captured through modelling the individual action potentials of neurons and therefore, existing rate-coded models of this process constitute an inadequate level of description to fully understand the learning processes of visual object recognition. To this end, spiking neural network models are formulated and applied to the problem of learning transformation-invariant visual representations, using a spike-time dependent learning rule to adjust the synaptic efficacies between the neurons. The ways in which the existing rate-coded CT (Stringer et al., 2006) and Trace (Földiák, 1991) learning mechanisms may operate in a simple spiking neural network model are explored, and these findings are then applied to a more accurate model using realistic 3-D stimuli. Three mechanisms are then examined, through which a spiking neural network may solve the problem of learning separate transformation-invariant representations in scenes composed of multiple stimuli by temporally segmenting competing input representations. The spike-time dependent plasticity in the feed-forward connections is then shown to be able to exploit these input layer dynamics to form individual stimulus representations in the output layer. 
Finally, the work is evaluated and future directions of investigation are proposed.
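The rate-coded trace rule (Földiák, 1991) referred to in this abstract can be sketched in a few lines. This is a generic illustration with assumed learning-rate and decay values, not the spike-time dependent variant developed in the thesis.

```python
import numpy as np

# Rate-coded trace learning rule (Foldiak, 1991): an exponentially
# decaying trace of postsynaptic activity carries memory across
# successive views of a transforming stimulus, so consecutive
# transforms are bound onto the same output neuron.
def trace_learning(views, w, eta=0.05, decay=0.8):
    y_bar = 0.0
    for x in views:                      # successive transforms of one object
        y = float(np.dot(w, x))          # postsynaptic rate (linear unit)
        y_bar = decay * y_bar + (1.0 - decay) * y
        w = w + eta * y_bar * x          # Hebbian update gated by the trace
    return w

rng = np.random.default_rng(0)
views = [rng.random(8) for _ in range(5)]   # toy "transform sequence"
w = trace_learning(views, np.full(8, 0.1))  # small positive initial weights
```

Because the trace `y_bar` outlives any single view, weights grow for inputs that co-occur in time even when the instantaneous patterns differ, which is the invariance mechanism the thesis transfers to spiking networks.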
APA, Harvard, Vancouver, ISO, and other styles
12

Hunter, Russell I. "Improving associative memory in a network of spiking neurons." Thesis, University of Stirling, 2011. http://hdl.handle.net/1893/6177.

Full text
Abstract:
In this thesis we use computational neural network models to examine the dynamics and functionality of the CA3 region of the mammalian hippocampus. The emphasis of the project is to investigate how the dynamic control structures provided by inhibitory circuitry and cellular modification may affect the CA3 region during the recall of previously stored information. The CA3 region is commonly thought to work as a recurrent auto-associative neural network due to its neurophysiological characteristics, such as recurrent collaterals, strong and sparse synapses from external inputs, and plasticity between coactive cells. Associative memory models have been developed using various configurations of mathematical artificial neural networks, first developed over 40 years ago. Within these models we can store information via changes in the strength of connections between simplified two-state model neurons. These memories can be recalled when a cue (noisy or partial) is instantiated upon the net. The type of information they can store is quite limited due to restrictions caused by the simplicity of the hard-limiting nodes, which are commonly associated with a binary activation threshold. We build a much more biologically plausible model with complex spiking cell models and with realistic synaptic properties between cells. This model is based upon some of the many details we now know of the neuronal circuitry of the CA3 region. We implemented the model in computer software using Neuron and Matlab and tested it by running simulations of storage and recall in the network. By building this model we gain new insights into how different types of neurons, and the complex circuits they form, actually work. The mammalian brain consists of complex resistive-capacitive electrical circuitry which is formed by the interconnection of large numbers of neurons.
A principal cell type is the pyramidal cell within the cortex, which is the main information processor in our neural networks. Pyramidal cells are surrounded by diverse populations of interneurons, proportionally smaller in number than the pyramidal cells, which form connections with pyramidal cells and other inhibitory cells. By building detailed computational models of recurrent neural circuitry we explore how these microcircuits of interneurons control the flow of information through pyramidal cells and regulate the efficacy of the network. We also explore the effect of cellular modification due to neuronal activity and the effect of incorporating spatially dependent connectivity on the network during recall of previously stored information. In particular we implement a spiking neural network proposed by Sommer and Wennekers (2001). We consider methods for improving associative memory recall inspired by the work of Graham and Willshaw (1995), who applied mathematical transforms to an artificial neural network to improve the recall quality within the network. The networks tested contain either 100 or 1000 pyramidal cells with 10% connectivity applied, a partial cue instantiated, and a global pseudo-inhibition. We investigate three methods. Firstly, applying localised disynaptic inhibition, which will proportionalise the excitatory postsynaptic potentials and provide a fast-acting reversal potential; this should help to reduce the variability in signal propagation between cells and provide further inhibition to help synchronise the network activity. Secondly, implementing a persistent sodium channel in the cell body, which will non-linearise the activation threshold: beyond a given membrane potential, the amplitude of the excitatory postsynaptic potential (EPSP) is boosted, pushing cells which receive slightly more excitation (most likely high units) over the firing threshold. Finally, implementing spatial characteristics of the dendritic tree, which will allow a greater probability of a modified synapse existing after 10% random connectivity has been applied throughout the network. We apply spatial characteristics by scaling the conductance weights of excitatory synapses to simulate the loss of potential in synapses in the outer dendritic regions due to increased resistance. To further increase the biological plausibility of the network we remove the pseudo-inhibition and apply realistic basket cell models with differing configurations of a global inhibitory circuit. The networks are configured with: a single basket cell providing feedback inhibition; 10% basket cells providing feedback inhibition, where 10 pyramidal cells connect to each basket cell; and 100% basket cells providing feedback inhibition. These networks are compared and contrasted for recall quality and their effect on network behaviour. We have found promising results from applying biologically plausible recall strategies and network configurations, which suggests the roles of inhibition and cellular dynamics are pivotal in learning and memory.
APA, Harvard, Vancouver, ISO, and other styles
13

Sánchez, Rivera Giovanny. "Efficient multiprocessing architectures for spiking neural network emulation based on configurable devices." Doctoral thesis, Universitat Politècnica de Catalunya, 2014. http://hdl.handle.net/10803/285319.

Full text
Abstract:
The exploration of the dynamics of bio-inspired neural networks has allowed neuroscientists to understand some clues and structures of the brain. Electronic neural network implementations are useful tools for this exploration. However, appropriate architectures are necessary due to the extremely high complexity of those networks. Reconfigurable computing devices (FPGAs) have developed extraordinarily within a short period of time, especially in resource availability, speed, and reconfigurability, which makes them suitable for emulating such networks. A reconfigurable parallel hardware architecture is proposed in this thesis to emulate complex, biologically realistic spiking neural networks (SNNs) in real time. Relevant SNN models and their hardware approaches have been studied and analyzed in order to create an architecture that supports the implementation of these models efficiently. The key factors, which involve flexibility in algorithm programmability, high-performance processing, and low area and power consumption, have been taken into account. To boost the performance of the proposed architecture, several techniques have been developed: time-to-space mapping, neural virtualization, flexible synapse-neuron mapping, and specific learning and execution modes, among others. In addition, an interface unit has been developed in order to build a bio-inspired system that can process sensory information from the environment. The spiking-neuron-based system combines analog and digital multi-processor implementations. Several applications have been developed as proofs of concept to show the capabilities of the proposed architecture for processing this type of information.
APA, Harvard, Vancouver, ISO, and other styles
14

Al, Nawasrah A. "Fast flux botnet detection based on adaptive dynamic evolving spiking neural network." Thesis, University of Salford, 2018. http://usir.salford.ac.uk/47199/.

Full text
Abstract:
A botnet, a set of compromised machines controlled remotely by an attacker, is the basis of numerous security threats around the world. Command and Control (C&C) servers are the backbone of botnet communications, where the bots and the botmaster send reports and attack orders to each other, respectively. Botnets are also categorised according to their C&C protocols. A Domain Name System (DNS) technique known as a Fast-Flux Service Network (FFSN) is a special type of botnet that bot herders use to cover malicious botnet activities and to increase the lifetime of malicious servers by quickly changing the IP addresses of the domain name over time. Although several methods have been suggested for detecting FFSN domains, they have low detection accuracy, especially with zero-day domains, quite long detection times, and high memory consumption. In this research we propose a new system called the Fast Flux Killer System (FFKA), which can detect 'zero-day' FF-domains in online mode, with an implementation constructed on an Adaptive Dynamic evolving Spiking Neural Network (ADeSNN), and in an offline mode to enhance the classification process, which is a novelty in this field. The adaptation includes the initial weights, testing criteria, parameter customization, and parameter adjustment. The proposed system is expected to detect fast flux domains in online mode with high detection accuracy and low false positive and false negative rates. It is also expected to perform at a high level over a long lifetime with low memory usage. Three public datasets are exploited in the experiments to show the effects of the adaptive ADeSNN algorithm: two of them concern the ADeSNN algorithm itself and the last one the process of detecting fast flux domains. The experiments showed improved accuracy when using the proposed adaptive ADeSNN over the original algorithm. The system also achieved a high detection accuracy of about 99.54% in detecting zero-day fast flux domains in online mode, when using the public fast flux dataset. Finally, the improvements made to the performance of the adaptive algorithm are confirmed by the experiments.
APA, Harvard, Vancouver, ISO, and other styles
15

Nowshin, Fabiha. "Spiking Neural Network with Memristive Based Computing-In-Memory Circuits and Architecture." Thesis, Virginia Tech, 2021. http://hdl.handle.net/10919/103854.

Full text
Abstract:
In recent years neuromorphic computing systems have achieved a lot of success due to their ability to process data much faster and using much less power than traditional Von Neumann computing architectures. There are two main types of Artificial Neural Networks (ANNs), the Feedforward Neural Network (FNN) and the Recurrent Neural Network (RNN). In this thesis we first study the types of RNNs and then move on to Spiking Neural Networks (SNNs). SNNs are an improved version of ANNs that mimic biological neurons closely through the emission of spikes. This shows significant advantages in terms of power and energy when carrying out data-intensive applications by allowing spatio-temporal information processing. On the other hand, emerging non-volatile memory (eNVM) technology is key to emulating neurons and synapses for in-memory computations in neuromorphic hardware. One eNVM technology in particular, the memristor, has received wide attention due to its scalability, compatibility with CMOS technology and low power consumption. In this work we develop a spiking neural network by incorporating an inter-spike-interval encoding scheme to convert the incoming input signal to spikes and use a memristive crossbar to carry out in-memory computing operations. We develop a novel input and output processing engine for our network and demonstrate its spatio-temporal information processing capability. We demonstrate an accuracy of 100% with our design through a small-scale hardware simulation for digit recognition and an accuracy of 87% in software through MNIST simulations.
M.S.
In recent years neuromorphic computing systems have achieved a lot of success due to their ability to process data much faster and using much less power than traditional Von Neumann computing architectures. Artificial Neural Networks (ANNs) are models that mimic biological neurons, where artificial neurons, or neurodes, are connected together via synapses, similar to the nervous system in the human body. There are two main types of Artificial Neural Networks (ANNs), the Feedforward Neural Network (FNN) and the Recurrent Neural Network (RNN). In this thesis we first study the types of RNNs and then move on to Spiking Neural Networks (SNNs). SNNs are an improved version of ANNs that mimic biological neurons closely through the emission of spikes. This shows significant advantages in terms of power and energy when carrying out data-intensive applications by allowing spatio-temporal information processing. On the other hand, emerging non-volatile memory (eNVM) technology is key to emulating neurons and synapses for in-memory computations in neuromorphic hardware. One eNVM technology in particular, the memristor, has received wide attention due to its scalability, compatibility with CMOS technology and low power consumption. In this work we develop a spiking neural network by incorporating an inter-spike-interval encoding scheme to convert the incoming input signal to spikes and use a memristive crossbar to carry out in-memory computing operations. We demonstrate the accuracy of our design through a small-scale hardware simulation for digit recognition and demonstrate an accuracy of 87% in software through MNIST simulations.
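For orientation, the inter-spike-interval encoding idea mentioned in this abstract can be sketched in a few lines: larger input values map to shorter intervals between successive spikes. The mapping below is a hypothetical illustration, not the thesis's hardware encoder; the interval bounds `t_min` and `t_max` are assumed values.

```python
# Hypothetical inter-spike-interval (ISI) encoding sketch: each sample of the
# input signal produces one spike, and the time since the previous spike is
# inversely related to the sample's amplitude (high value -> short interval).
def isi_encode(signal, t_min=1.0, t_max=10.0):
    lo, hi = min(signal), max(signal)
    spike_times, t = [], 0.0
    for x in signal:
        frac = (x - lo) / (hi - lo) if hi > lo else 0.0
        t += t_max - frac * (t_max - t_min)  # high amplitude -> interval near t_min
        spike_times.append(t)
    return spike_times

times = isi_encode([0.0, 1.0, 0.5])
```

A downstream decoder can recover the amplitude of each sample from the interval alone, which is what makes the scheme suitable for spike-based in-memory processing.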
APA, Harvard, Vancouver, ISO, and other styles
16

Öberg, Oskar. "Critical Branching Regulation of the E-I Net Spiking Neural Network Model." Thesis, Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-76770.

Full text
Abstract:
Spiking neural networks (SNN) are dynamic models of biological neurons that communicate with event-based signals called spikes. SNNs that reproduce observed properties of biological senses like vision are developed to better understand how such systems function, and to learn how more efficient sensor systems can be engineered. A branching parameter describes the average probability for spikes to propagate between two different neuron populations. The adaptation of branching parameters towards critical values is known to be important for maximizing the sensitivity and dynamic range of an SNN. In this thesis, a recently proposed SNN model for visual feature learning and pattern recognition known as the E-I Net model is studied and extended with a critical branching mechanism. The resulting modified E-I Net model is studied with numerical experiments and two different types of sensory cues. The experiments show that the modified E-I Net model demonstrates critical branching and power-law scaling behavior, as expected from an SNN near criticality, but the power-laws are broken and the stimuli reconstruction error is higher compared to that of the original E-I Net model. Thus, on the basis of these experiments, it is not clear how to properly extend the E-I Net model with a critical branching mechanism. The E-I Net model has a particular structure in which the inhibitory neurons (I) are tuned to decorrelate the excitatory neurons (E), so that the visual features learned match the angular and frequency distributions of feature detectors in visual cortex V1 and different stimuli are represented by sparse subsets of the neurons. The broken power-laws correspond to different scaling behavior at low and high spike rates, which may be related to the efficacy of inhibition in the model.
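As a rough illustration of the branching parameter described in this abstract, one naive way to estimate it from simulation output is to compare spike counts in consecutive time bins: a value near 1 indicates criticality, below 1 a decaying cascade, above 1 a runaway one. This simplified estimator is an assumption for illustration, not the E-I Net mechanism from the thesis.

```python
# Naive branching-parameter estimate from binned spike counts: the ratio of
# activity in consecutive bins, averaged over bins that contain activity.
def branching_parameter(counts):
    ratios = [counts[t + 1] / counts[t]
              for t in range(len(counts) - 1) if counts[t] > 0]
    return sum(ratios) / len(ratios) if ratios else 0.0

# Roughly constant activity across bins yields an estimate near 1 (critical).
sigma = branching_parameter([10, 10, 11, 9, 10, 10])
```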
APA, Harvard, Vancouver, ISO, and other styles
17

De, Pasquale Daniele. "Modelli di neural network ispirati alla biofisica." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2021.

Find full text
Abstract:
Machine learning has been one of the most revolutionary disciplines of the last century: it has made it possible to tackle problems beyond the reach of ordinary computational algorithms, and it has opened new perspectives for the study of intelligence in nature. Thanks to the advent of big data and modern computers, deep learning in particular has received great attention since 2006 and has delivered extraordinary results in cognitive and commercial applications. The aim of this thesis is to survey the main historical results in these disciplines, starting with a brief analysis of machine learning (ML) systems, then focusing on the main types of artificial neural networks (ANN), and finally presenting the results, advantages and challenges of the most recent type of neural network, the spiking neural network (SNN). Unlike classical ANNs, SNNs are characterized by an activation state defined by a differential equation: this makes them more versatile on the one hand, but more computationally expensive on the other. The history of machine learning is marked by moments of enthusiasm and moments of crisis, examples of which appear throughout the text, and SNNs are no exception: their strengths and weaknesses have fuelled much debate about their practical usefulness, and although several key elements for a true applicative breakthrough are still missing today, SNNs are regarded by many as the most powerful version of ANNs conceived so far.
APA, Harvard, Vancouver, ISO, and other styles
18

Kourkoulas-Chondrorizos, Alexandros. "Online optimisation of information transmission in stochastic spiking neural systems." Thesis, University of Edinburgh, 2012. http://hdl.handle.net/1842/5832.

Full text
Abstract:
An Information Theoretic approach is used for studying the effect of noise on various spiking neural systems. Detailed statistical analyses of neural behaviour under the influence of stochasticity are carried out and their results related to other work and also biological neural networks. The neurocomputational capabilities of the neural systems under study are put on an absolute scale. This approach was also used in order to develop an optimisation framework. A proof-of-concept algorithm is designed, based on information theory and the coding fraction, which optimises noise through maximising information throughput. The algorithm is applied with success to a single neuron and then generalised to an entire neural population with various structural characteristics (feedforward, lateral, recurrent connections). It is shown that there are certain positive and persistent phenomena due to noise in spiking neural networks and that these phenomena can be observed even under simplified conditions and therefore exploited. The transition is made from detailed and computationally expensive tools to efficient approximations. These phenomena are shown to be persistent and exploitable under a variety of circumstances. The results of this work provide evidence that noise can be optimised online in both single neurons and neural populations of varying structures.
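The coding fraction used in this abstract to optimise noise is a standard stimulus-reconstruction measure (after Gabbiani and Koch): one minus the ratio of the RMS reconstruction error to the stimulus standard deviation, so perfect reconstruction gives 1 and chance-level reconstruction gives 0 or below. A minimal sketch, with illustrative data:

```python
import math

# Coding fraction: 1 - (RMS estimation error / stimulus standard deviation).
def coding_fraction(stimulus, estimate):
    n = len(stimulus)
    mean = sum(stimulus) / n
    std = math.sqrt(sum((s - mean) ** 2 for s in stimulus) / n)
    rmse = math.sqrt(sum((s - e) ** 2 for s, e in zip(stimulus, estimate)) / n)
    return 1.0 - rmse / std

# A close (but imperfect) reconstruction scores a little below 1.
cf = coding_fraction([0.0, 1.0, 2.0, 1.0], [0.1, 0.9, 1.8, 1.1])
```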
APA, Harvard, Vancouver, ISO, and other styles
19

Wang, Zhenzhong. "System Design and Implementation of a Fast and Accurate Bio-Inspired Spiking Neural Network." FIU Digital Commons, 2015. http://digitalcommons.fiu.edu/etd/2227.

Full text
Abstract:
Neuron models are the elementary units which determine the performance of an artificial spiking neural network (ASNN). This study introduces a new Generalized Leaky Integrate-and-Fire (GLIF) neuron model with a variable leaking resistor and bias current in order to reproduce accurately the membrane voltage dynamics of a biological neuron. The accuracy of this model is ensured by adjusting its parameters to the statistical properties of the Hodgkin-Huxley model outputs, while the speed is enhanced by introducing a Generalized Exponential Moving Average method that converts the parameterized kernel functions into pre-calculated lookup tables based on an analytic solution of the dynamic equations of the GLIF model. Spike encoding is the initial yet crucial step for any application domain of an ASNN. However, current encoding methods are not suitable for processing complex temporal signals. Motivated by the modulation relationship found between afferent synaptic currents in biological neurons, this study proposes a biologically plausible spike phase encoding method based on a novel spiking neuron model which can perform wavelet decomposition of the input signal and encode the wavelet spectrum into synchronized output spike trains. The spike delays in each synchronizing period represent the spectrum amplitudes. The encoding method was tested on encoding of human voice records for speech recognition purposes. Empirical evaluations confirm that the encoded spike trains constitute a good representation of the continuous wavelet transform of the original signal. The interictal spike (IS) is a type of transient discharge commonly found in electroencephalography (EEG) records from epilepsy patients. The detection of IS remains an essential task for 3D source localization as well as for developing algorithms for seizure prediction and guided therapy.
We present in this work a new IS detection method using the phase encoding method with a customized wavelet sensor neuron and a specially designed ASNN structure. The detection results confirm the ability of such an ASNN to capture IS automatically from multichannel EEG records.
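For readers unfamiliar with the model family, the ordinary leaky integrate-and-fire (LIF) dynamics that the GLIF generalizes can be sketched in a few lines. This is the textbook LIF, not the thesis's GLIF: the variable leak resistor, bias current and Hodgkin-Huxley-fitted parameters are omitted, and all parameter values below are illustrative assumptions.

```python
# Textbook leaky integrate-and-fire sketch: the membrane potential v decays
# toward rest, is driven by a constant input current I, and emits a spike
# (then resets) whenever it crosses the threshold v_th.
def lif(I=1.5, t_total=200.0, dt=0.1, tau=10.0,
        v_rest=0.0, v_th=1.0, v_reset=0.0):
    v = v_rest
    spike_times = []
    for step in range(int(t_total / dt)):
        # dv/dt = (-(v - v_rest) + I) / tau, Euler-integrated
        v += dt * (-(v - v_rest) + I) / tau
        if v >= v_th:
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

spikes = lif()
```

With I above threshold, the neuron fires periodically; the GLIF's extra degrees of freedom let it reshape this trajectory to match Hodgkin-Huxley statistics.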
APA, Harvard, Vancouver, ISO, and other styles
20

Schliebs, Stefan. "Heterogeneous probabilistic models for optimisation and modelling of evolving spiking neural networks." AUT University, 2010. http://hdl.handle.net/10292/963.

Full text
Abstract:
This thesis proposes a novel feature selection and classification method employing evolving spiking neural networks (eSNN) and evolutionary algorithms (EA). The method is named the Quantum-inspired Spiking Neural Network (QiSNN) framework. QiSNN represents an integrated wrapper approach. An evolutionary process evolves appropriate feature subsets for a given classification task and simultaneously optimises the neural and learning-related parameters of the network. Unlike other methods, the connection weights of this network are determined by a fast one-pass learning algorithm, which dramatically reduces the training time. At its core, QiSNN employs the Thorpe neural model, which allows the efficient simulation of even large networks. In QiSNN, the presence or absence of features is represented by a string of concatenated bits, while the parameters of the neural network are continuous. For the exploration of these two entirely different search spaces, a novel Estimation of Distribution Algorithm (EDA) is developed. The method maintains a population of probabilistic models specialised for the optimisation of either binary, continuous or heterogeneous search spaces while utilising a small and intuitive set of parameters. The EDA extends the Quantum-inspired Evolutionary Algorithm (QEA) proposed by Han and Kim (2002) and is named the Heterogeneous Hierarchical Model EDA (hHM-EDA). The algorithm is compared to numerous contemporary optimisation methods and studied in terms of convergence speed, solution quality and robustness in noisy search spaces. The thesis investigates the functioning and the characteristics of QiSNN using both synthetic feature selection benchmarks and a real-world case study on ecological modelling. By evolving suitable feature subsets, QiSNN significantly enhances the classification accuracy of eSNN.
Compared to numerous other feature selection techniques, like the wrapper-based Multilayer Perceptron (MLP) and the Naive Bayesian Classifier (NBC), QiSNN demonstrates a competitive classification and feature selection performance while requiring comparatively low computational costs.
APA, Harvard, Vancouver, ISO, and other styles
21

Hampo, Michael J. "Implementation Of Associative Memory With Online Learning into a Spiking Neural Network On Neuromorphic Hardware." University of Dayton / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1605800559072791.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Wu, Jiaming. "A modular dynamic Neuro-Synaptic platform for Spiking Neural Networks." Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASP145.

Full text
Abstract:
Biological and artificial neural networks share a fundamental computational unit: the neuron. These neurons are coupled by synapses, forming complex networks that enable various functions. Similarly, neuromorphic hardware, or more generally neuro-computers, also requires two hardware elements: neurons and synapses. In this work, we introduce a bio-inspired spiking Neuro-Synaptic hardware unit, fully implemented with conventional electronic components. Our hardware is based on a textbook theoretical model of the spiking neuron and its synaptic and membrane currents. The spiking neuron is fully analog, and the various models that we introduce are defined by their hardware implementation. The neuron excitability is achieved through a memristive device made from off-the-shelf electronic components. Both synaptic and membrane currents feature tunable intensities and bio-mimetic dynamics, including excitatory and inhibitory currents. All model parameters are adjustable, allowing the system to be tuned to bio-compatible timescales, which is crucial in applications such as brain-machine interfaces.
Building on these two modular units, we demonstrate various basic neural network motifs (or neuro-computing primitives) and show how to combine these fundamental motifs to implement more complex network functionalities, such as dynamical memories and central pattern generators. Our hardware design also carries potential extensions for integrating oxide-based memristors (which are widely studied in materials science), or porting the design to very large-scale integration (VLSI) to implement large-scale networks. The Neuro-Synaptic unit can be considered a building block for implementing spiking neural networks of arbitrary geometry. Its compact and modular design, as well as the wide availability of ordinary electronic components, makes our approach an attractive platform for building neural interfaces in medical devices, robotics, and artificial intelligence systems such as reservoir computing.
APA, Harvard, Vancouver, ISO, and other styles
23

Ghosh, Dastidar Samanwoy. "Models of EEG data mining and classification in temporal lobe epilepsy: wavelet-chaos-neural network methodology and spiking neural networks." Columbus, Ohio : Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1180459585.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Jansson, Ylva. "Normalization in a cortical hypercolumn : The modulatory effects of a highly structured recurrent spiking neural network." Thesis, KTH, Beräkningsbiologi, CB, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-158990.

Full text
Abstract:
Normalization is important for a large range of phenomena in biological neural systems, such as light adaptation in the retina, context-dependent decision making and probabilistic inference. In a normalizing circuit the activity of one neuron or group of neurons is divisively rescaled in relation to the activity of other neurons or groups. This creates neural responses invariant to certain stimulus dimensions and dynamically adapts the range over which a neural system can respond discriminatively to stimuli. This thesis examines whether a biologically realistic normalizing circuit can be implemented by a spiking neural network model based on the columnar structure found in cortex. This was done by constructing and evaluating a highly structured spiking neural network model, modelling layer 2/3 of a cortical hypercolumn using a group of neurons as the basic computational unit. The results show that the structure of this hypercolumn module does not per se create a normalizing network. For most model versions the modulatory effect is better described as subtractive inhibition. However, three mechanisms that shift the modulatory effect towards normalization were found: an increase in membrane variance for increased modulatory inputs; variability in neuron excitability and connections; and short-term depression on the driving synapses. Moreover, it is shown that by combining those mechanisms it is possible to create a spiking neural network that implements approximate normalization over at least a tenfold increase in input magnitude. These results point towards possible normalizing mechanisms in a cortical hypercolumn; however, more studies are needed to assess whether any of those could in fact be a viable explanation for normalization in the biological nervous system.
APA, Harvard, Vancouver, ISO, and other styles
25

Stewart, Robert Douglas. "Spiking neural network models of the basal ganglia : cortically-evoked response patterns and action selection properties." Thesis, University of Sheffield, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.445121.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Fox, Paul James. "Massively parallel neural computation." Thesis, University of Cambridge, 2013. https://www.repository.cam.ac.uk/handle/1810/245013.

Full text
Abstract:
Reverse-engineering the brain is one of the US National Academy of Engineering’s “Grand Challenges.” The structure of the brain can be examined at many different levels, spanning many disciplines from low-level biology through psychology and computer science. This thesis focusses on real-time computation of large neural networks using the Izhikevich spiking neuron model. Neural computation has been described as “embarrassingly parallel” as each neuron can be thought of as an independent system, with behaviour described by a mathematical model. However, the real challenge lies in modelling neural communication. While the connectivity of neurons has some parallels with that of electrical systems, its high fan-out results in massive data processing and communication requirements when modelling neural communication, particularly for real-time computations. It is shown that memory bandwidth is the most significant constraint to the scale of real-time neural computation, followed by communication bandwidth, which leads to a decision to implement a neural computation system on a platform based on a network of Field Programmable Gate Arrays (FPGAs), using commercial off-the-shelf components with some custom supporting infrastructure. This brings implementation challenges, particularly lack of on-chip memory, but also many advantages, particularly high-speed transceivers. An algorithm to model neural communication that makes efficient use of memory and communication resources is developed and then used to implement a neural computation system on the multi-FPGA platform. Finding suitable benchmark neural networks for a massively parallel neural computation system proves to be a challenge. A synthetic benchmark that has biologically-plausible fan-out, spike frequency and spike volume is proposed and used to evaluate the system.
It is shown to be capable of computing the activity of a network of 256k Izhikevich spiking neurons with a fan-out of 1k in real-time using a network of 4 FPGA boards. This compares favourably with previous work, with the added advantage of scalability to larger neural networks using more FPGAs. It is concluded that communication must be considered as a first-class design constraint when implementing massively parallel neural computation systems.
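The Izhikevich model that this thesis computes in real time is compact enough to sketch directly. The regular-spiking parameter set (a, b, c, d) below follows Izhikevich's standard formulation; the constant input current and the Euler step size are illustrative assumptions, not values from the thesis.

```python
# Minimal Izhikevich spiking neuron sketch:
#   v' = 0.04 v^2 + 5 v + 140 - u + I
#   u' = a (b v - u)
# with reset v <- c, u <- u + d whenever v reaches the 30 mV spike peak.
def izhikevich(t_total=1000.0, dt=0.25,
               a=0.02, b=0.2, c=-65.0, d=8.0, I=10.0):
    v, u = c, b * c              # membrane potential (mV), recovery variable
    spike_times = []
    for step in range(int(t_total / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:            # spike: record time and reset
            spike_times.append(step * dt)
            v, u = c, u + d
    return spike_times

spikes = izhikevich()
```

Each neuron is this cheap to update; as the abstract notes, the hard part at scale is not this arithmetic but distributing the resulting spikes across a high-fan-out network in real time.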
APA, Harvard, Vancouver, ISO, and other styles
27

Diesmann, Markus. "Conditions for stable propagation of synchronous spiking in cortical neural networks single neuron dynamics and network properties /." [S.l.] : [s.n.], 2002. http://deposit.ddb.de/cgi-bin/dokserv?idn=968772781.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Li, Zhengrong. "Aerial image analysis using spiking neural networks with application to power line corridor monitoring." Thesis, Queensland University of Technology, 2011. https://eprints.qut.edu.au/46161/1/Zhengrong_Li_Thesis.pdf.

Full text
Abstract:
Trees, shrubs and other vegetation are of continued importance to the environment and our daily life. They provide shade around our roads and houses, offer a habitat for birds and wildlife, and absorb air pollutants. However, vegetation touching power lines is a risk to public safety and the environment, and one of the main causes of power supply problems. Vegetation management, which includes tree trimming and vegetation control, is a significant cost component of the maintenance of electrical infrastructure. For example, Ergon Energy, Australia's largest energy distributor by geographic footprint, currently spends over $80 million a year inspecting and managing vegetation that encroaches on power line assets. Currently, most vegetation management programs for distribution systems are calendar-based ground patrols. However, calendar-based inspection by linesmen is labour-intensive, time-consuming and expensive. It also results in some zones being trimmed more frequently than needed and others not cut often enough. Moreover, it is seldom practicable to measure all the plants around power line corridors by field methods. Remote sensing data captured from airborne sensors has great potential to assist vegetation management in power line corridors. This thesis presents a comprehensive study of using spiking neural networks in a specific image analysis application: power line corridor monitoring. Theoretically, the thesis focuses on a biologically inspired spiking cortical model: the pulse coupled neural network (PCNN). The original PCNN model was simplified in order to better analyze the pulse dynamics and control the performance. Some new and effective algorithms were developed based on the proposed spiking cortical model for object detection, image segmentation and invariant feature extraction. The developed algorithms were evaluated in a number of experiments using real image data collected from our flight trials.
The experimental results demonstrated the effectiveness and advantages of spiking neural networks in image processing tasks. Operationally, the knowledge gained from this research project offers a good reference to our industry partner (i.e. Ergon Energy) and other energy utilities that want to improve their vegetation management activities. The novel approaches described in this thesis show the potential of using cutting-edge sensor technologies and intelligent computing techniques to improve power line corridor monitoring. The lessons learnt from this project are also expected to increase the confidence of energy companies to move from a traditional vegetation management strategy to a more automated, accurate and cost-effective solution using aerial remote sensing techniques.
APA, Harvard, Vancouver, ISO, and other styles
29

Guo, Lilin. "A Biologically Plausible Supervised Learning Method for Spiking Neurons with Real-world Applications." FIU Digital Commons, 2016. http://digitalcommons.fiu.edu/etd/2982.

Full text
Abstract:
Learning is central to infusing intelligence into any biologically inspired system. This study introduces a novel Cross-Correlated Delay Shift (CCDS) learning method for spiking neurons with the ability to learn and reproduce arbitrary spike patterns in a supervised fashion, with applicability to spatiotemporal information encoded in the precise timing of spikes. By integrating the cross-correlated term, axonal delays and synapse delays, the CCDS rule is proven to be both biologically plausible and computationally efficient. The proposed learning algorithm is evaluated in terms of reliability, adaptive learning performance, generality to different neuron models, learning in the presence of noise, effects of its learning parameters and classification performance. The results indicate that the proposed CCDS learning rule greatly improves classification accuracy when compared to the standards reached with the Spike Pattern Association Neuron (SPAN) learning rule and the Tempotron learning rule. Network structure is the crucial part for any application domain of an Artificial Spiking Neural Network (ASNN). Thus, temporal learning rules in multilayer spiking neural networks are investigated. As an extension of the single-layer learning rule, the multilayer CCDS (MutCCDS) is also developed. Correlated neurons are connected through fine-tuned weights and delays. In contrast to the multilayer Remote Supervised Method (MutReSuMe) and the multilayer tempotron rule (MutTmptr), the newly developed MutCCDS shows better generalization ability and faster convergence. The proposed multilayer rules provide an efficient and biologically plausible mechanism, describing how delays and synapses in multilayer networks are adjusted to facilitate learning. Interictal spikes (IS) are morphologically defined brief events observed in electroencephalography (EEG) records from patients with epilepsy.
The detection of IS remains an essential task for 3D source localization as well as for developing algorithms for seizure prediction and guided therapy. In this work, we present a new IS detection method using the Wavelet Encoding Device (WED) method together with the CCDS learning rule and a specially designed Spiking Neural Network (SNN) structure. The results confirm the ability of such an SNN to achieve good performance in automatically detecting such events from multichannel EEG records.
APA, Harvard, Vancouver, ISO, and other styles
30

Ternstedt, Andreas. "Pattern recognition with spiking neural networks and the ROLLS low-power online learning neuromorphic processor." Thesis, Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-63033.

Full text
Abstract:
Online monitoring applications requiring advanced pattern recognition capabilities implemented in resource-constrained wireless sensor systems are challenging to construct using standard digital computers. An interesting alternative solution is to use a low-power neuromorphic processor like the ROLLS, with subthreshold mixed analog/digital circuits and online learning capabilities that approximate the behavior of real neurons and synapses. This requires that the monitoring algorithm is implemented with spiking neural networks, which in principle are efficient computational models for tasks such as pattern recognition. In this work, I investigate how spiking neural networks can be used as a pre-processing and feature learning system in a condition monitoring application where the vibration of a machine with healthy and faulty rolling-element bearings is considered. Pattern recognition with spiking neural networks is investigated using simulations with Brian -- a Python-based open source toolbox -- and an implementation is developed for the ROLLS neuromorphic processor. I analyze the learned feature-response properties of individual neurons. When pre-processing the input signals with a neuromorphic cochlea known as the AER-EAR system, the ROLLS chip learns to classify the resulting spike patterns with a training error of less than 1%, at a combined power consumption of approximately 30 mW. Thus, the neuromorphic hardware system can potentially be realized in a resource-constrained wireless sensor for online monitoring applications. However, further work is needed for testing and cross validation of the feature learning and pattern recognition networks.
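As background for the abstract above, the basic neuron model underlying such spiking-network simulations is the leaky integrate-and-fire (LIF) neuron. The sketch below is a minimal plain-Python illustration of its dynamics, independent of Brian; all parameter values are illustrative, not taken from the thesis.

```python
# Plain-Python leaky integrate-and-fire (LIF) neuron: the membrane
# potential integrates input current, leaks toward rest, and emits a
# spike (then resets) whenever it crosses a threshold.

def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Euler integration of dV/dt = (v_rest - V + I) / tau.
    Returns the indices of time steps at which the neuron spiked."""
    v = v_rest
    spikes = []
    for i, current in enumerate(input_current):
        v += dt * (v_rest - v + current) / tau
        if v >= v_thresh:
            spikes.append(i)   # threshold crossed: emit a spike
            v = v_reset        # and reset the membrane potential
    return spikes

# A constant supra-threshold current produces a regular spike train:
spikes = simulate_lif([2.0] * 1000)
print(len(spikes) > 1)  # True: repeated firing
```

A constant sub-threshold current (here, any value below 1.0) would instead let the potential settle below threshold and produce no spikes at all.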
APA, Harvard, Vancouver, ISO, and other styles
31

Humble, James. "Learning, self-organisation and homeostasis in spiking neuron networks using spike-timing dependent plasticity." Thesis, University of Plymouth, 2013. http://hdl.handle.net/10026.1/1499.

Full text
Abstract:
Spike-timing dependent plasticity is a learning mechanism used extensively within neural modelling. The learning rule has been shown to allow a neuron to find the onset of a spatio-temporal pattern repeated among its afferents. In this thesis, the first question addressed is ‘what does this neuron learn?’ With a spiking neuron model and linear prediction, evidence is adduced that the neuron learns two components: (1) the level of average background activity and (2) specific spike times of a pattern. Taking advantage of these findings, a network is developed that can train recognisers for longer spatio-temporal input signals using spike-timing dependent plasticity. Using a number of neurons that are mutually connected by plastic synapses and subject to a global winner-takes-all mechanism, chains of neurons can form where each neuron is selective to a different segment of a repeating input pattern, and the neurons are feedforwardly connected in such a way that both the correct stimulus and the firing of the previous neurons are required in order to activate the next neuron in the chain. This is akin to a simple class of finite state automata. Following this, a novel resource-based STDP learning rule is introduced. The learning rule has several advantages over typical implementations of STDP and results in synaptic statistics which match favourably with those observed experimentally. For example, synaptic weight distributions and the presence of silent synapses match experimental data.
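For readers unfamiliar with the mechanism discussed above, the following is a minimal sketch of the generic pair-based STDP weight update (a textbook form, not the thesis's resource-based variant); the amplitude and time-constant values are illustrative only.

```python
# Minimal pair-based STDP: a presynaptic spike shortly before a
# postsynaptic spike potentiates the synapse; the reverse order
# depresses it. Parameter values are illustrative.
import math

A_PLUS, A_MINUS = 0.01, 0.012     # learning-rate amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # time constants (ms)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre before post -> long-term potentiation
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    elif dt < 0:  # post before pre -> long-term depression
        return -A_MINUS * math.exp(dt / TAU_MINUS)
    return 0.0

# Causal pairing strengthens the synapse, anti-causal weakens it:
print(stdp_dw(10.0, 15.0) > 0)   # True
print(stdp_dw(15.0, 10.0) < 0)   # True
```

The exponential decay means that only spike pairs falling within a few tens of milliseconds of each other change the weight appreciably, which is what lets a neuron lock onto the precise spike times of a repeating pattern.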
APA, Harvard, Vancouver, ISO, and other styles
32

Henderson, Stephen Alexander Jr. "A Memristor-Based Liquid State Machine for Auditory Signal Recognition." University of Dayton / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1628251263454753.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Jin, Xin. "Parallel simulation of neural networks on SpiNNaker universal neuromorphic hardware." Thesis, University of Manchester, 2010. https://www.research.manchester.ac.uk/portal/en/theses/parallel-simulation-of-neural-networks-on-spinnaker-universal-neuromorphic-hardware(d6b8b72a-63c4-44ee-963a-ae349b0e379c).html.

Full text
Abstract:
Artificial neural networks have shown great potential and have attracted much research interest. One problem faced when simulating such networks is speed. As the number of neurons increases, the time to simulate and train a network increases dramatically. This makes it difficult to simulate and train a large-scale network system without the support of a high-performance computer system. The solution we present is a "real" parallel system - using a parallel machine to simulate neural networks which are intrinsically parallel applications. SpiNNaker is a scalable massively-parallel computing system under development with the aim of building a general-purpose platform for the parallel simulation of large-scale neural systems. This research investigates how to model large-scale neural networks efficiently on such a parallel machine. While providing increased overall computational power, a parallel architecture introduces a new problem - the increased communication reduces the speedup gains. Modeling schemes, which take into account communication, processing, and storage requirements, are investigated to solve this problem. Since modeling schemes are application-dependent, two different types of neural network are examined - spiking neural networks with spike-time dependent plasticity, and the parallel distributed processing model with the backpropagation learning rule. Different modeling schemes are developed and evaluated for the two types of neural network. The research shows the feasibility of the approach as well as the performance of SpiNNaker as a general-purpose platform for the simulation of neural networks. The linear scalability shown in this architecture provides a path to the further development of parallel solutions for the simulation of extremely large-scale neural networks.
APA, Harvard, Vancouver, ISO, and other styles
34

Holanda, Priscila Cavalcante. "DHyANA : neuromorphic architecture for liquid computing." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2016. http://hdl.handle.net/10183/169343.

Full text
Abstract:
Neural networks have been a subject of research for at least sixty years. From the effectiveness in processing information to the amazing ability to tolerate faults, there are countless processing mechanisms in the brain that fascinate us. It therefore comes as no surprise that, as enabling technologies have become available, scientists and engineers have stepped up efforts to understand, simulate and mimic parts of it. 
In an approach similar to that of the Human Genome Project, the quest for innovative technologies within the field has given birth to billion-dollar projects and global efforts, what some call a global blossoming of neuroscience research. Advances in hardware have made the simulation of millions or even billions of neurons possible. However, existing approaches cannot yet provide the dense interconnect required by the massive number of neurons and synapses. In this regard, this work proposes DHyANA (Digital HierArchical Neuromorphic Architecture), a new hardware architecture for spiking neural networks using hierarchical network-on-chip communication. The architecture is optimized for Liquid State Machine (LSM) implementations. DHyANA was exhaustively tested in simulation platforms, as well as implemented on an Altera Stratix IV FPGA. Furthermore, a logic synthesis analysis using 65-nm CMOS technology was performed in order to evaluate and better compare the resulting system with similar designs, achieving an area of 0.23 mm² and a power dissipation of 147 mW for a 256-neuron implementation.
APA, Harvard, Vancouver, ISO, and other styles
35

Norton, R. David. "Improving Liquid State Machines Through Iterative Refinement of the Reservoir." Diss., CLICK HERE for online access, 2008. http://contentdm.lib.byu.edu/ETD/image/etd2316.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Painkras, Eustace. "A chip multiprocessor for a large-scale neural simulator." Thesis, University of Manchester, 2013. https://www.research.manchester.ac.uk/portal/en/theses/a-chip-multiprocessor-for-a-largescale-neural-simulator(d3637073-2669-4a81-985a-2da9eec46480).html.

Full text
Abstract:
A Chip Multiprocessor for a Large-scale Neural Simulator. Eustace Painkras. A thesis submitted to The University of Manchester for the degree of Doctor of Philosophy, 17 December 2012. The modelling and simulation of large-scale spiking neural networks in biological real-time places very high demands on computational processing capabilities and communications infrastructure. These demands are difficult to satisfy even with powerful general-purpose high-performance computers. Taking advantage of the remarkable progress in semiconductor technologies it is now possible to design and build an application-driven platform to support large-scale spiking neural network simulations. This research investigates the design and implementation of a power-efficient chip multiprocessor (CMP) which constitutes the basic building block of a spiking neural network modelling and simulation platform. The neural modelling requirements of many processing elements, high-fanout communications and local memory are addressed in the design and implementation of the low-level modules in the design hierarchy as well as in the CMP. By focusing on a power-efficient design, the energy consumption and related cost of SpiNNaker, the massively-parallel computation engine, are kept low compared with other state-of-the-art hardware neural simulators. The SpiNNaker CMP is composed of many simple power-efficient processors with small local memories, asynchronous networks-on-chip and numerous bespoke modules specifically designed to serve the demands of neural computation with a globally asynchronous, locally synchronous (GALS) architecture. The SpiNNaker CMP, realised as part of this research, fulfills the demands of neural simulation in a power-efficient and scalable manner, with added fault-tolerance features. The CMPs have, to date, been incorporated into three versions of SpiNNaker system PCBs with up to 48 chips onboard. 
All chips on the PCBs are performing successfully, during both functional testing and their targeted role of neural simulation.
APA, Harvard, Vancouver, ISO, and other styles
37

FALCIONELLI, NICOLA. "From Symbolic Artificial Intelligence to Neural Networks Universality with Event-based Modeling." Doctoral thesis, Università Politecnica delle Marche, 2020. http://hdl.handle.net/11566/274620.

Full text
Abstract:
Representing knowledge, modeling human reasoning, and understanding thought processes have always been central parts of intellectual activity, since the first attempts by the Greek philosophers. It is not just by chance that, as soon as computers started to spread, remarkable scientists and mathematicians such as John McCarthy, Marvin Minsky and Claude Shannon started creating artificially intelligent systems with a symbolic-oriented perspective. Even though this was a partially forced path due to the very limited computing capabilities of the time, it marked the beginning of what is now known as Classical (or Symbolic) Artificial Intelligence: essentially, a set of techniques for implementing "intelligent" behaviours by means of logic formalisms and theorem proving. Classical AI techniques are indeed very direct and human-centered processes, whose strengths lie in straightforward human interpretability and knowledge reusability. On the other hand, they suffer from computability problems when applied to real-world tasks, mostly due to combinatorial explosion of the search space (especially when reasoning with time) and to undecidability. 
However, the ever-increasing capabilities of computer hardware opened new possibilities for other, more statistically oriented methods to grow, such as neural networks. Even though the theory behind these methods was long known, it was only in recent years that they managed to achieve significant breakthroughs and to surpass Classical AI techniques on many tasks. At the moment, the main hurdles of such statistical AI techniques are the high energy consumption and the lack of easy ways for humans to understand the process that led to a particular result. Summing up, Classical and Statistical AI techniques can be seen as two faces of the same coin: if a domain presents structured information, little uncertainty, and clear decision processes, then Classical AI might be the right tool; otherwise, when the information is less structured and carries more uncertainty and ambiguity, and clear decision processes cannot be identified, Statistical AI should be chosen. The main purpose of this thesis is thus (i) to show the capabilities and limits of current (Classical and Statistical) Artificial Intelligence techniques in both structured and unstructured domains, and (ii) to demonstrate how event-based modeling can tackle some of their critical issues, providing new potential connections and novel perspectives.
APA, Harvard, Vancouver, ISO, and other styles
38

Dai, Jing. "Reservoir-computing-based, biologically inspired artificial neural networks and their applications in power systems." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/47646.

Full text
Abstract:
Computational intelligence techniques, such as artificial neural networks (ANNs), have been widely used to improve the performance of power system monitoring and control. Although inspired by the neurons in the brain, ANNs are largely different from living neuron networks (LNNs) in many aspects. Due to the oversimplification, the huge computational potential of LNNs cannot be realized by ANNs. Therefore, a more brain-like artificial neural network is highly desired to bridge the gap between ANNs and LNNs. The focus of this research is to develop a biologically inspired artificial neural network (BIANN), which is not only biologically meaningful, but also computationally powerful. The BIANN can serve as a novel computational intelligence tool in monitoring, modeling and control of the power systems. A comprehensive survey of ANNs applications in power system is presented. It is shown that novel types of reservoir-computing-based ANNs, such as echo state networks (ESNs) and liquid state machines (LSMs), have stronger modeling capability than conventional ANNs. The feasibility of using ESNs as modeling and control tools is further investigated in two specific power system applications, namely, power system nonlinear load modeling for true load harmonic prediction and the closed-loop control of active filters for power quality assessment and enhancement. It is shown that in both applications, ESNs are capable of providing satisfactory performances with low computational requirements. A novel, more brain-like artificial neural network, i.e. biologically inspired artificial neural network (BIANN), is proposed in this dissertation to bridge the gap between ANNs and LNNs and provide a novel tool for monitoring and control in power systems. A comprehensive survey of the spiking models of living neurons as well as the coding approaches is presented to review the state-of-the-art in BIANN research. 
The proposed BIANNs are based on spiking models of living neurons with the adoption of reservoir-computing approaches. It is shown that the proposed BIANNs have strong modeling capability and low computational requirements, which makes them strong candidates for online monitoring and control applications in power systems. BIANN-based modeling and control techniques are also proposed for power system applications. The proposed modeling and control schemes are validated for the modeling and control of a generator in a single-machine infinite-bus system under various operating conditions and disturbances. It is shown that the proposed BIANN-based technique can provide better control of the power system to enhance its reliability and tolerance to disturbances. To sum up, this dissertation proposes the BIANN as a more brain-like artificial neural network that bridges the gap between ANNs and LNNs and provides a novel tool for monitoring and control in power systems. It is clearly shown that the proposed BIANN-based modeling and control schemes can provide faster and more accurate control for power system applications. The conclusions, the recommendations for future research, as well as the major contributions of this research are presented at the end.
APA, Harvard, Vancouver, ISO, and other styles
39

PRONO, LUCIANO. "Methods and Applications for Low-power Deep Neural Networks on Edge Devices." Doctoral thesis, Politecnico di Torino, 2023. https://hdl.handle.net/11583/2976593.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Hofmann, Jaco [Verfasser], Andreas [Akademischer Betreuer] Koch, and Mladen [Akademischer Betreuer] Berekovic. "An Improved Framework for and Case Studies in FPGA-Based Application Acceleration - Computer Vision, In-Network Processing and Spiking Neural Networks / Jaco Hofmann ; Andreas Koch, Mladen Berekovic." Darmstadt : Universitäts- und Landesbibliothek Darmstadt, 2020. http://d-nb.info/1202923097/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Zhao, Chenyuan. "Spike Processing Circuit Design for Neuromorphic Computing." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/93591.

Full text
Abstract:
The Von Neumann bottleneck, which refers to the limited throughput between the CPU and memory, has already become the major factor hindering the technical advances of computing systems. In recent years, neuromorphic systems have started to gain increasing attention as compact and energy-efficient computing platforms. Spike-based neuromorphic computing systems require high-performance and low-power neural encoders and decoders to emulate the spiking behavior of neurons. These two spike/analog signal-converting interfaces determine the performance of the whole spiking neuromorphic computing system. Many state-of-the-art neuromorphic systems typically operate in the frequency range between 10^0 kHz and 10^2 kHz due to the limitation of encoding/decoding speed. In this dissertation, the popular encoding and decoding schemes, i.e. rate encoding, latency encoding and ISI encoding, together with related hardware implementations, are discussed and analyzed. The contributions included in this dissertation can be classified into three main parts: neuron improvement, three kinds of ISI encoder design, and two types of ISI decoder design. A two-path leakage LIF neuron has been fabricated and a modular design methodology is introduced. Three kinds of ISI encoding schemes, including parallel signal encoding, full signal iteration encoding, and partial signal encoding, are discussed. The first two ISI encoders have been fabricated successfully and the last ISI encoder will be taped out by the end of 2019. The two types of ISI decoders adopt different techniques: a sample-and-hold based mixed-signal design and a spike-timing-dependent-plasticity (STDP) based analog design, respectively. Both ISI decoders have been evaluated successfully through post-layout simulations. The STDP-based ISI decoder will be taped out by the end of 2019. 
A test bench based on correlation inspection has been built to evaluate the information recovery capability of the proposed spike processing link.<br>Doctor of Philosophy<br>Neuromorphic computing systems are specific electronic systems that mimic the behavior of biological bodies. In most cases, a neuromorphic computing system is built with analog circuits, which have benefits in power efficiency and low thermal radiation. Among the components of a neuromorphic computing system, one of the most important is the signal-processing interface, i.e. the encoder/decoder. To increase the whole system's performance, novel encoders and decoders are proposed in this dissertation: three kinds of temporal encoders, one rate encoder, one latency encoder, one temporal decoder, and one general spike decoder. These designs can be combined to build a highly efficient spike-based data link which guarantees the processing performance of the whole neuromorphic computing system.
APA, Harvard, Vancouver, ISO, and other styles
42

Bailey, Tony J. "Neuromorphic Architecture with Heterogeneously Integrated Short-Term and Long-Term Learning Paradigms." University of Cincinnati / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1554217105047975.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

IACONO, MASSIMILIANO. "Object detection and recognition with event driven cameras." Doctoral thesis, Università degli studi di Genova, 2020. http://hdl.handle.net/11567/1005981.

Full text
Abstract:
This thesis presents the study, analysis and implementation of algorithms to perform object detection and recognition using an event-based camera. This sensor represents a novel paradigm which opens a wide range of possibilities for future developments of computer vision. In particular it allows to produce a fast, compressed, illumination-invariant output, which can be exploited for robotic tasks, where fast dynamics and significant illumination changes are frequent. The experiments are carried out on the neuromorphic version of the iCub humanoid platform. The robot is equipped with a novel dual camera setup mounted directly in the robot's eyes, used to generate data with a moving camera. The motion causes the presence of background clutter in the event stream. In such a scenario the detection problem has been addressed with an attention mechanism, specifically designed to respond to the presence of objects, while discarding clutter. The proposed implementation takes advantage of the nature of the data to simplify the original proto-object saliency model which inspired this work. Successively, the recognition task was first tackled with a feasibility study to demonstrate that the event stream carries sufficient information to classify objects, and then with the implementation of a spiking neural network. The feasibility study provides the proof-of-concept that events are informative enough in the context of object classification, whereas the spiking implementation improves the results by employing an architecture specifically designed to process event data. The spiking network was trained with a three-factor local learning rule which overcomes the weight transport, update locking and non-locality problems. The presented results prove that both detection and classification can be carried out in the target application using the event data.
APA, Harvard, Vancouver, ISO, and other styles
44

Ambroise, Matthieu. "Hybridation des réseaux de neurones : de la conception du réseau à l’interopérabilité des systèmes neuromorphiques." Thesis, Bordeaux, 2015. http://www.theses.fr/2015BORD0394/document.

Full text
Abstract:
Hybrid experiments connect a biological neural network with an artificial one, and are used in neuroscience research and for therapeutic purposes. During these three years of PhD, this thesis focused on hybridization in a close-up view (bi-directional direct communication between the artificial and the living) and in a broader view (interoperability of neuromorphic systems). In the early 2000s, an analog neuromorphic system was connected to a biological neural network. This work is firstly about the design of a digital neural network for hybrid experiments in two multi-disciplinary projects underway in the AS2N team of IMS, presented in this document: HYRENE (ANR 2010-Blan-031601), aiming at the development of a hybrid system for the restoration of motor activity in the case of a spinal cord lesion, and BRAINBOW (European project FP7-ICT-2011-C), aiming at the development of innovative neuro-prostheses that can restore communication around cortical lesions. Having a configurable architecture, a digital neural network was designed for these two projects. For the first project, the artificial neural network emulates the activity of CPGs (Central Pattern Generators), which drive locomotion in the animal kingdom. This activity triggers a series of stimuli in the injured spinal cord in vitro, recreating the locomotion previously lost. 
In the second project, the neural network topology is determined by the analysis and decryption of biological signals from groups of neurons grown on electrodes, as well as by modeling and simulations performed by our partners. The neural network will then be able to repair the injured neural network. This work shows the design approach of the two different networks and preliminary results obtained in the two projects. Secondly, this work extends hybridization to the interoperability of neuromorphic systems. Through a communication protocol using Ethernet, it is possible to interconnect electronic, software-based and biological neural networks. In the near future, it will allow increasing the complexity and size of the networks.
APA, Harvard, Vancouver, ISO, and other styles
45

Lecerf, Gwendal. "Développement d'un réseau de neurones impulsionnels sur silicium à synapses memristives." Thesis, Bordeaux, 2014. http://www.theses.fr/2014BORD0219/document.

Full text
Abstract:
Durant ces trois années de doctorat, financées par le projet ANR MHANN (Memristive Hardware Analog Neural Network), nous nous sommes intéressés au développement d’une nouvelle architecture de calculateur à l’aide de réseaux de neurones. Les réseaux de neurones artificiels sont particulièrement bien adaptés à la reconnaissance d’images et peuvent être utilisés en complément des processeurs séquentiels. En 2008, une nouvelle technologie de composant a vu le jour : le memristor. Classé comme étant le quatrième élément passif, il est possible de modifier sa résistance en fonction de la densité de courant qui le traverse et de garder en mémoire ces changements. Grâce à leurs propriétés, les composants memristifs sont des candidats idéaux pour jouer le rôle des synapses au sein des réseaux de neurones artificiels. En effectuant des mesures sur la technologie des memristors ferroélectriques de l’UMjCNRS/Thalès de l’équipe de Julie Grollier, nous avons pu démontrer qu’il était possible d’obtenir un apprentissage de type STDP (Spike Timing Dependent Plasticity) classiquement utilisé avec les réseaux de neurones impulsionnels. Cette forme d’apprentissage, inspirée de la biologie, impose une variation des poids synaptiques en fonction des évènements neuronaux. En s’appuyant sur les mesures réalisées sur ces memristors et sur des simulations provenant d’un programme élaboré avec nos partenaires de l’INRIA Saclay, nous avons conçu successivement deux puces en silicium pour deux technologies de memristors ferroélectriques. La première technologie (BTO), moins performante, a été mise de côté au profit d’une seconde technologie (BFO). La seconde puce a été élaborée avec les retours d’expérience de la première puce. Elle contient deux couches d’un réseau de neurones impulsionnels dédié à l’apprentissage d’images de 81 pixels.
En la connectant à un boitier contenant un crossbar de memristors, nous pourrons réaliser un démonstrateur d’un réseau de neurones hybride réalisé avec des synapses memristives ferroélectriques<br>Supported financially by the ANR MHANN project, this work proposes an architecture of spiking neural network for picture recognition, a task at which traditional sequential processing units are inefficient. In 2008, a new passive electrical component was discovered: the memristor. Its resistance can be adjusted by applying a potential between its terminals. Behaving intrinsically as artificial synapses, memristive devices can be used inside artificial neural networks. We measured the variation in resistance of a ferroelectric memristor (obtained from UMjCNRS/Thalès) and found it similar to the biological law STDP (Spike Timing Dependent Plasticity) used with spiking neurons. Based on our measurements on the memristor and our network simulations (aided by INRIA Saclay), we designed successively two versions of the IC. The second IC design is driven by the specifications of the first IC with additional functionalities. The second IC contains two layers of a spiking neural network dedicated to learning a picture of 81 pixels. A demonstrator of hybrid neural networks will be achieved by integrating a chip of memristive crossbars interfaced with the second IC
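The pair-based STDP rule mentioned in this abstract can be illustrated with a short sketch: a causal pre-before-post spike pair potentiates the synapse, the reverse order depresses it, with exponentially decaying windows. The amplitudes and time constants below are generic textbook values, not the measured memristor characteristics.

```python
import math

# Pair-based STDP sketch (illustrative constants, not fitted device values).
A_PLUS, A_MINUS = 0.05, 0.04      # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # decay time constants (ms)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair, dt = t_post - t_pre."""
    dt = t_post - t_pre
    if dt >= 0:   # pre before post: potentiation (LTP)
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    else:         # post before pre: depression (LTD)
        return -A_MINUS * math.exp(dt / TAU_MINUS)

w = 0.5
w = min(1.0, max(0.0, w + stdp_dw(t_pre=10.0, t_post=15.0)))  # causal pair -> w increases
print(w)
```

In a memristive implementation, the weight `w` corresponds to the device conductance, and the overlap of pre- and post-synaptic voltage pulses realizes the same dt-dependent update in the physics of the device.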
APA, Harvard, Vancouver, ISO, and other styles
46

Gómez Cerdà, Vicenç. "Algorithms and complex phenomena in networks: Neural ensembles, statistical inference and online communities." Doctoral thesis, Universitat Pompeu Fabra, 2008. http://hdl.handle.net/10803/7548.

Full text
Abstract:
Aquesta tesi tracta d'algoritmes i fenòmens complexos en xarxes.<br/><br/>En la primera part s'estudia un model de neurones estocàstiques inter-comunicades mitjançant potencials d'acció. Proposem una tècnica de modelització a escala mesoscòpica i estudiem una transició de fase en un acoblament crític entre les neurones. Derivem una regla de plasticitat sinàptica local que fa que la xarxa s'auto-organitzi en el punt crític.<br/><br/>Seguidament tractem el problema d'inferència aproximada en xarxes probabilístiques mitjançant un algorisme que corregeix la solució obtinguda via belief propagation en grafs cíclics basada en una expansió en sèries. Afegint termes de correcció que corresponen a cicles generals en la xarxa, s'obté el resultat exacte. Introduïm i analitzem numèricament una manera de truncar aquesta sèrie.<br/><br/>Finalment analitzem la interacció social en una comunitat d'Internet caracteritzant l'estructura de la xarxa d'usuaris, els fluxos de discussió en forma de comentaris i els patrons de temps de reacció davant una nova notícia.<br>This thesis is about algorithms and complex phenomena in networks.<br/><br/>In the first part we study a network model of stochastic spiking neurons. We propose a modelling technique based on a mesoscopic description level and show the presence of a phase transition around a critical coupling strength. We derive a local plasticity rule which drives the network towards the critical point.<br/><br/>We then deal with approximate inference in probabilistic networks. We develop an algorithm which corrects the belief propagation solution for loopy graphs based on a loop series expansion. By adding correction terms, one for each "generalized loop" in the network, the exact result is recovered.
We introduce and analyze numerically a particular way of truncating the series.<br/><br/>Finally, we analyze the social interaction of an Internet community by characterizing the structure of the network of users, their discussion threads and the temporal patterns of reaction times to a new post.
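The belief propagation algorithm that the loop-series expansion corrects can be illustrated on a small tree-structured model, where plain sum-product is already exact (the correction terms matter only when the graph has cycles). The potentials below are arbitrary illustrative values.

```python
import itertools

# Sum-product BP on a 3-node binary chain 0-1-2 (a tree, so BP is exact).
phi = [[1.0, 2.0], [1.5, 0.5], [1.0, 1.0]]   # unary potentials phi_i(x_i)
psi01 = [[1.2, 0.4], [0.4, 1.2]]             # pairwise potential psi(x_0, x_1)
psi12 = [[0.9, 1.1], [1.1, 0.9]]             # pairwise potential psi(x_1, x_2)

def bp_marginal_mid():
    """Marginal p(x_1) from the two leaf-to-root messages."""
    m0 = [sum(phi[0][x0] * psi01[x0][x1] for x0 in (0, 1)) for x1 in (0, 1)]
    m2 = [sum(phi[2][x2] * psi12[x1][x2] for x2 in (0, 1)) for x1 in (0, 1)]
    b = [phi[1][x1] * m0[x1] * m2[x1] for x1 in (0, 1)]
    z = sum(b)
    return [v / z for v in b]

def brute_marginal_mid():
    """Same marginal by exhaustive enumeration over all 8 configurations."""
    b = [0.0, 0.0]
    for x0, x1, x2 in itertools.product((0, 1), repeat=3):
        b[x1] += (phi[0][x0] * phi[1][x1] * phi[2][x2]
                  * psi01[x0][x1] * psi12[x1][x2])
    z = sum(b)
    return [v / z for v in b]

print(bp_marginal_mid())   # agrees with brute_marginal_mid() on a tree
```

On a loopy graph the two functions would disagree, and the loop-series terms quantify exactly that gap.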
APA, Harvard, Vancouver, ISO, and other styles
47

Tully, Philip. "Spike-Based Bayesian-Hebbian Learning in Cortical and Subcortical Microcircuits." Doctoral thesis, KTH, Beräkningsvetenskap och beräkningsteknik (CST), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-205568.

Full text
Abstract:
Cortical and subcortical microcircuits are continuously modified throughout life. Despite ongoing changes these networks stubbornly maintain their functions, which persist although destabilizing synaptic and nonsynaptic mechanisms should ostensibly propel them towards runaway excitation or quiescence. What dynamical phenomena exist to act together to balance such learning with information processing? What types of activity patterns do they underpin, and how do these patterns relate to our perceptual experiences? What enables learning and memory operations to occur despite such massive and constant neural reorganization? Progress towards answering many of these questions can be pursued through large-scale neuronal simulations. In this thesis, a Hebbian learning rule for spiking neurons inspired by statistical inference is introduced. The spike-based version of the Bayesian Confidence Propagation Neural Network (BCPNN) learning rule involves changes in both synaptic strengths and intrinsic neuronal currents. The model is motivated by molecular cascades whose functional outcomes are mapped onto biological mechanisms such as Hebbian and homeostatic plasticity, neuromodulation, and intrinsic excitability. Temporally interacting memory traces enable spike-timing dependence, a stable learning regime that remains competitive, postsynaptic activity regulation, spike-based reinforcement learning and intrinsic graded persistent firing levels. The thesis seeks to demonstrate how multiple interacting plasticity mechanisms can coordinate reinforcement, auto- and hetero-associative learning within large-scale, spiking, plastic neuronal networks. Spiking neural networks can represent information in the form of probability distributions, and a biophysical realization of Bayesian computation can help reconcile disparate experimental observations.
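The core of the BCPNN rule can be sketched as a log-odds weight computed from exponentially filtered activity traces, w_ij = log(p_ij / (p_i p_j)). The time constants, the floor value and the single-trace update below are illustrative simplifications of the published rule, not its full cascade of traces.

```python
import math

# Minimal BCPNN-style sketch: filter pre, post and coincidence activity into
# probability estimates, then read the weight out as their log-odds ratio.
TAU = 50.0   # trace time constant (in simulation steps)
DT = 1.0     # step size
EPS = 1e-3   # floor keeping the logarithm finite

def bcpnn_weight(pre_spikes, post_spikes):
    """pre_spikes/post_spikes: equal-length lists of 0/1 per time step."""
    a = DT / TAU
    pi = pj = pij = EPS
    for s_i, s_j in zip(pre_spikes, post_spikes):
        pi += a * (s_i - pi)            # estimate of P(pre active)
        pj += a * (s_j - pj)            # estimate of P(post active)
        pij += a * (s_i * s_j - pij)    # estimate of P(pre and post co-active)
    return math.log(max(pij, EPS) / (max(pi, EPS) * max(pj, EPS)))

train = [1, 0] * 40
print(bcpnn_weight(train, train))                  # co-active pair: positive weight
print(bcpnn_weight(train, [1 - s for s in train])) # anti-correlated pair: negative weight
```

The sign of the weight thus directly encodes whether two units fire together more or less often than chance, which is what gives the rule its Bayesian reading.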
APA, Harvard, Vancouver, ISO, and other styles
48

Louis, Thomas. "Conventionnel ou bio-inspiré ? Stratégies d'optimisation de l'efficacité énergétique des réseaux de neurones pour environnements à ressources limitées." Electronic Thesis or Diss., Université Côte d'Azur, 2025. http://www.theses.fr/2025COAZ4001.

Full text
Abstract:
Intégrer des algorithmes d'intelligence artificielle (IA) directement dans des satellites présente de nombreux défis. Ces systèmes embarqués, fortement limités en consommation d'énergie et en empreinte mémoire, doivent également résister aux interférences. Cela nécessite systématiquement l'utilisation de systèmes sur puce (SoC) afin de combiner deux systèmes dits « hétérogènes » : un microcontrôleur polyvalent et un accélérateur de calcul économe en énergie (comme un FPGA ou un ASIC). Pour relever les défis liés au portage de telles architectures, cette thèse se concentre sur l'optimisation et le déploiement de réseaux de neurones sur des architectures embarquées hétérogènes, dans le but de trouver un compromis entre la consommation d'énergie et la performance de l'IA. Dans le chapitre 2 de cette thèse, une étude approfondie des techniques de compression récentes pour des réseaux de neurones formels (FNN) tels que les MLP ou CNN a tout d'abord été effectuée. Ces techniques, qui permettent de réduire la complexité calculatoire et l'empreinte mémoire de ces modèles, sont essentielles pour leur déploiement dans des environnements aux ressources limitées. Les réseaux de neurones impulsionnels (SNN) ont également été explorés. Ces réseaux bio-inspirés peuvent en effet offrir une plus grande efficacité énergétique par rapport aux FNN. Dans le chapitre 3, nous avons ainsi adapté et élaboré des méthodes de quantification innovantes afin de réduire le nombre de bits utilisés pour représenter les valeurs d'un réseau impulsionnel. Nous avons ainsi pu confronter la quantification des SNN et des FNN, afin d'en comparer et comprendre les pertes et gains respectifs. Néanmoins, réduire l'activité d'un SNN (e.g. le nombre d'impulsions générées lors de l'inférence) améliore directement l'efficacité énergétique des SNN. Dans ce but, nous avons exploité dans le chapitre 4 des techniques de distillation de connaissances et de régularisation. 
Ces méthodes permettent de réduire l'activité impulsionnelle du réseau tout en préservant son accuracy, ce qui garantit un fonctionnement efficace des SNN sur du matériel à ressources limitées. Dans la dernière partie de cette thèse, nous nous sommes intéressés à l'hybridation des SNN et FNN. Ces réseaux hybrides (HNN) visent à optimiser encore davantage l'efficacité énergétique tout en améliorant les performances. Nous avons également proposé des réseaux multi-timesteps innovants, qui traitent l'information à des latences différentes à travers les couches d'un même SNN. Les résultats expérimentaux montrent que cette approche permet une réduction de la consommation d'énergie globale tout en maintenant les performances sur un ensemble de tâches. Ce travail de thèse constitue une base pour déployer les futures applications des réseaux de neurones dans l'espace. Pour valider nos méthodes, nous fournissons une analyse comparative sur différents jeux de données publics (CIFAR-10, CIFAR-100, MNIST, Google Speech Commands) et sur un jeu de données privé pour la segmentation des nuages. Nos approches sont évaluées sur la base de métriques telles que l'accuracy, la consommation d'énergie ou l'activité du SNN. Ce travail de recherche ne se limite pas aux applications aérospatiales. Nous avons en effet mis en évidence le potentiel des SNN quantifiés, des réseaux de neurones hybrides et des réseaux multi-timesteps pour une variété de scénarios réels où l'efficacité énergétique est cruciale. Ce travail offre ainsi des perspectives intéressantes pour des domaines tels que les dispositifs IoT, les véhicules autonomes et d'autres systèmes nécessitant un déploiement efficace de l'IA<br>Integrating artificial intelligence (AI) algorithms directly into satellites presents numerous challenges. These embedded systems, which are heavily limited in energy consumption and memory footprint, must also withstand interference. 
This systematically requires the use of system-on-chip (SoC) solutions to combine two so-called “heterogeneous” systems: a versatile microcontroller and an energy-efficient computing accelerator (such as an FPGA or ASIC). To address the challenges related to deploying such architectures, this thesis focuses on optimizing and deploying neural networks on heterogeneous embedded architectures, aiming to balance energy consumption and AI performance. In Chapter 2 of this thesis, an in-depth study of recent compression techniques for feedforward neural networks (FNN) like MLPs or CNNs was conducted. These techniques, which reduce the computational complexity and memory footprint of these models, are essential for deployment in resource-constrained environments. Spiking neural networks (SNN) were also explored. These bio-inspired networks can indeed offer greater energy efficiency compared to FNNs. In Chapter 3, we adapted and developed innovative quantization methods to reduce the number of bits used to represent the values in a spiking network. This allowed us to compare the quantization of SNNs and FNNs, to understand and assess their respective trade-offs in terms of losses and gains. Reducing the activity of an SNN (e.g., the number of spikes generated during inference) directly improves the energy efficiency of SNNs. To this end, in Chapter 4, we leveraged knowledge distillation and regularization techniques. These methods reduce the spiking activity of the network while preserving its accuracy, ensuring effective operation of SNNs on resource-limited hardware. In the final part of this thesis, we explored the hybridization of SNNs and FNNs. These hybrid networks (HNN) aim to further optimize energy efficiency while enhancing performance. We also proposed innovative multi-timestep networks, which process information with different latencies across layers within the same SNN.
Experimental results show that this approach enables a reduction in overall energy consumption while maintaining performance across a range of tasks. This thesis serves as a foundation for deploying future neural network applications in space. To validate our methods, we provide a comparative analysis on various public datasets (CIFAR-10, CIFAR-100, MNIST, Google Speech Commands) as well as on a private dataset for cloud segmentation. Our approaches are evaluated based on metrics such as accuracy, energy consumption, or SNN activity. This research extends beyond aerospace applications. We have demonstrated the potential of quantized SNNs, hybrid neural networks, and multi-timestep networks for a variety of real-world scenarios where energy efficiency is critical. This work offers promising prospects for fields such as IoT devices, autonomous vehicles, and other systems requiring efficient AI deployment
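A generic sketch of post-training weight quantization applied to a spiking layer, the kind of scheme Chapter 3 compares: weights are mapped to n-bit integers with a single scale factor, and inference runs with integrate-and-fire dynamics. The symmetric uniform quantizer and the LIF parameters are illustrative assumptions, not the thesis' exact method.

```python
import numpy as np

def quantize(w, n_bits):
    """Symmetric uniform quantizer: map float weights to n-bit integers."""
    scale = np.max(np.abs(w)) / (2 ** (n_bits - 1) - 1)
    return np.round(w / scale).astype(np.int32), scale

def lif_layer(spikes_in, w_q, scale, threshold=1.0):
    """Integrate-and-fire inference with quantized weights.
    spikes_in: (T, n_in) binary array; returns (T, n_out) binary spikes."""
    T = spikes_in.shape[0]
    v = np.zeros(w_q.shape[1])
    out = np.zeros((T, w_q.shape[1]))
    for t in range(T):
        v += spikes_in[t] @ (w_q * scale)   # accumulate weighted input spikes
        fired = v >= threshold
        out[t] = fired
        v[fired] = 0.0                      # reset membrane after a spike
    return out

rng = np.random.default_rng(0)
w = rng.normal(0, 0.3, size=(16, 4))
w_q, s = quantize(w, n_bits=4)              # 4-bit integers in [-7, 7]
spikes = (rng.random((20, 16)) < 0.2).astype(float)
print(lif_layer(spikes, w_q, s).sum())      # total output spike count
```

In hardware, the multiply by `scale` folds into the firing threshold, so the accumulation itself runs entirely on small integers, which is where the memory and energy savings come from.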
APA, Harvard, Vancouver, ISO, and other styles
49

Cherdo, Yann. "Détection d'anomalie non supervisée sur les séries temporelle à faible coût énergétique utilisant les SNNs." Electronic Thesis or Diss., Université Côte d'Azur, 2024. http://www.theses.fr/2024COAZ4018.

Full text
Abstract:
Dans le cadre de la maintenance prédictive du constructeur automobile Renault, cette thèse vise à fournir des solutions à faible coût énergétique pour la détection non supervisée d'anomalies sur des séries temporelles. Avec l'évolution récente de l'automobile, de plus en plus de données sont produites et doivent être traitées par des algorithmes d'apprentissage automatique. Ce traitement peut être effectué dans le cloud ou directement à bord de la voiture. Dans un tel cas, la bande passante du réseau, les coûts des services cloud, la gestion de la confidentialité des données et la perte de données peuvent être économisés. L'intégration d'un modèle d'apprentissage automatique dans une voiture est un défi car elle nécessite des modèles frugaux en raison des contraintes de mémoire et de calcul. Dans ce but, nous étudions l'utilisation de réseaux de neurones impulsionnels (SNN) pour la détection d'anomalies, la prédiction et la classification sur des séries temporelles. Les performances et les coûts énergétiques des modèles d'apprentissage automatique sont évalués dans un scénario Edge à l'aide de modèles matériels génériques qui prennent en compte tous les coûts de calcul et de mémoire. Pour exploiter autant que possible l'activité neuronale parcimonieuse des SNN, nous proposons un modèle avec des connexions peu denses et entraînables qui consomme la moitié de l'énergie de sa version dense. Ce modèle est évalué sur des benchmarks publics de détection d'anomalies, un cas d'utilisation réel de détection d'anomalies sur les voitures de Renault Alpine, les prévisions météorologiques et le dataset Google Speech Command. Nous comparons également ses performances avec d'autres modèles d'apprentissage automatique existants. Nous concluons que, pour certains cas d'utilisation, les modèles SNN peuvent atteindre les performances de l'état de l'art tout en consommant 2 à 8 fois moins d'énergie.
Pourtant, d'autres études devraient être entreprises pour évaluer ces modèles une fois embarqués dans une voiture. Inspirés par les neurosciences, nous soutenons que d'autres propriétés bio-inspirées telles que l'attention, l'activité parcimonieuse, la hiérarchie ou la dynamique des assemblées de neurones pourraient être exploitées pour obtenir une meilleure efficacité énergétique et de meilleures performances avec des modèles SNN. Enfin, nous terminons cette thèse par un essai à la croisée des neurosciences cognitives, de la philosophie et de l'intelligence artificielle. En plongeant dans les difficultés conceptuelles liées à la conscience et en considérant les mécanismes déterministes de la mémoire, nous soutenons que la conscience et le soi pourraient être constitutivement indépendants de la mémoire. L'objectif de cet essai est de questionner la nature de l'humain par opposition à celle des machines et de l'IA<br>In the context of the predictive maintenance of the car manufacturer Renault, this thesis aims at providing low-power solutions for unsupervised anomaly detection on time series. With the recent evolution of cars, more and more data are produced and need to be processed by machine learning algorithms. This processing can be performed in the cloud or directly at the edge, inside the car. In such a case, network bandwidth, cloud service costs, data privacy management and data loss can be saved. Embedding a machine learning model inside a car is challenging, as it requires frugal models due to memory and processing constraints. To this aim, we study the usage of spiking neural networks (SNNs) for anomaly detection, prediction and classification on time series. SNN models' performance and energy costs are evaluated in an edge scenario using generic hardware models that consider all calculation and memory costs.
To leverage as much as possible the sparsity of SNNs, we propose a model with trainable sparse connections that consumes half the energy compared to its non-sparse version. This model is evaluated on public anomaly detection benchmarks, a real use case of anomaly detection on Renault Alpine cars, weather forecasts and the Google Speech Command dataset. We also compare its performance with other existing SNN and non-spiking models. We conclude that, for some use cases, spiking models can provide state-of-the-art performance while consuming 2 to 8 times less energy. Yet, further studies should be undertaken to evaluate these models once embedded in a car. Inspired by neuroscience, we argue that other bio-inspired properties such as attention, sparsity, hierarchy or neural assembly dynamics could be exploited to achieve even better energy efficiency and performance with spiking models. Finally, we end this thesis with an essay dealing with cognitive neuroscience, philosophy and artificial intelligence. Diving into conceptual difficulties linked to consciousness and considering the deterministic mechanisms of memory, we argue that consciousness and the self could be constitutively independent from memory. The aim of this essay is to question the nature of humans by contrast with that of machines and AI
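The prediction-error scheme underlying unsupervised anomaly detection on time series can be sketched independently of the spiking predictor: fit a model on normal data, then flag any point whose one-step prediction error exceeds a threshold calibrated on that normal data. Here a toy AR(1) predictor stands in for the SNN, and the threshold rule is an illustrative choice.

```python
import numpy as np

def fit_ar1(x):
    """Least-squares AR(1) coefficient on a 1-D series."""
    return float(np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1]))

def anomaly_scores(x, a):
    """Absolute one-step prediction error at each point (0 for the first)."""
    return np.concatenate([[0.0], np.abs(x[1:] - a * x[:-1])])

rng = np.random.default_rng(1)
normal = np.sin(np.linspace(0, 20, 400)) + 0.05 * rng.normal(size=400)
a = fit_ar1(normal)
thresh = anomaly_scores(normal, a).max() * 1.5   # calibrated on normal data only

test = normal.copy()
test[200] += 3.0                                 # inject an anomaly
flags = anomaly_scores(test, a) > thresh
print(np.flatnonzero(flags))                     # the injected point is flagged
```

Swapping the AR(1) predictor for a spiking network changes only the first two functions; the calibration and flagging logic stay the same, which is what makes the energy comparison between predictors fair.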
APA, Harvard, Vancouver, ISO, and other styles
50

Bichler, Olivier. "Contribution à la conception d'architecture de calcul auto-adaptative intégrant des nanocomposants neuromorphiques et applications potentielles." Phd thesis, Université Paris Sud - Paris XI, 2012. http://tel.archives-ouvertes.fr/tel-00781811.

Full text
Abstract:
Dans cette thèse, nous étudions les applications potentielles des nano-dispositifs mémoires émergents dans les architectures de calcul. Nous montrons que des architectures neuro-inspirées pourraient apporter l'efficacité et l'adaptabilité nécessaires à des applications de traitement et de classification complexes pour la perception visuelle et sonore. Cela, à un coût moindre en termes de consommation énergétique et de surface silicium que les architectures de type Von Neumann, grâce à une utilisation synaptique de ces nano-dispositifs. Ces travaux se focalisent sur les dispositifs dits "memristifs", récemment (ré)-introduits avec la découverte du memristor en 2008, et leur utilisation comme synapses dans des réseaux de neurones impulsionnels. Cela concerne la plupart des technologies mémoire émergentes : mémoire à changement de phase - "Phase-Change Memory" (PCM), "Conductive-Bridging RAM" (CBRAM), mémoire résistive - "Resistive RAM" (RRAM)... Ces dispositifs sont bien adaptés pour l'implémentation d'algorithmes d'apprentissage non supervisés issus des neurosciences, comme "Spike-Timing-Dependent Plasticity" (STDP), ne nécessitant que peu de circuits de contrôle. L'intégration de dispositifs memristifs dans des matrices, ou "crossbar", pourrait en outre permettre d'atteindre l'énorme densité d'intégration nécessaire pour ce type d'implémentation (plusieurs milliers de synapses par neurone), qui reste hors de portée d'une technologie purement en "Complementary Metal Oxide Semiconductor" (CMOS). C'est l'une des raisons majeures pour lesquelles les réseaux de neurones basés sur la technologie CMOS n'ont pas eu le succès escompté dans les années 1990. À cela s'ajoutent la relative complexité et l'inefficacité de l'algorithme d'apprentissage de rétro-propagation du gradient, et ce malgré tous les aspects prometteurs des architectures neuro-inspirées, tels que l'adaptabilité et la tolérance aux fautes.
Dans ces travaux, nous proposons des modèles synaptiques de dispositifs memristifs et des méthodologies de simulation pour des architectures les exploitant. Des architectures neuro-inspirées de nouvelle génération sont introduites et simulées pour le traitement de données naturelles. Celles-ci tirent profit des caractéristiques synaptiques des nano-dispositifs memristifs, combinées avec les dernières avancées dans les neurosciences. Nous proposons enfin des implémentations matérielles adaptées pour plusieurs types de dispositifs. Nous évaluons leur potentiel en termes d'intégration, d'efficacité énergétique et également leur tolérance à la variabilité et aux défauts inhérents à l'échelle nano-métrique de ces dispositifs. Ce dernier point est d'une importance capitale, puisqu'il constitue aujourd'hui encore la principale difficulté pour l'intégration de ces technologies émergentes dans des mémoires numériques.
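The crossbar computation that motivates these integration densities can be stated in two lines: applying voltages to the rows of a matrix of conductances yields, by Ohm's and Kirchhoff's laws, column currents equal to a vector-matrix product, I_j = sum_i V_i G_ij. The numerical values below are illustrative.

```python
import numpy as np

# Idealized memristor crossbar: each device's conductance G_ij (siemens) is
# a synaptic weight, row voltages V_i (volts) encode the input activity.
G = np.array([[1e-6, 5e-6],
              [2e-6, 1e-6],
              [4e-6, 3e-6]])          # 3 rows x 2 columns of memristors
V = np.array([0.1, 0.0, 0.2])        # row voltages

I = V @ G                            # column currents: I[0] = 9e-7 A, I[1] = 1.1e-6 A
print(I)
```

The whole multiply-accumulate happens in parallel in the analog domain, which is why a crossbar of N x M devices can serve thousands of synapses per neuron at negligible marginal cost.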
APA, Harvard, Vancouver, ISO, and other styles
