Academic literature on the topic 'STDP [Spike Timing Dependent Plasticity]'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'STDP [Spike Timing Dependent Plasticity].'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "STDP [Spike Timing Dependent Plasticity]"

1

Badoual, Mathilde, Quan Zou, Andrew P. Davison, Michael Rudolph, Thierry Bal, Yves Frégnac, and Alain Destexhe. "Biophysical and Phenomenological Models of Multiple Spike Interactions in Spike-Timing Dependent Plasticity." International Journal of Neural Systems 16, no. 02 (April 2006): 79–97. http://dx.doi.org/10.1142/s0129065706000524.

Abstract:
Spike-timing dependent plasticity (STDP) is a form of associative synaptic modification which depends on the respective timing of pre- and post-synaptic spikes. The biophysical mechanisms underlying this form of plasticity are currently not known. We present here a biophysical model which captures the characteristics of STDP, such as its frequency dependency, and the effects of spike pair or spike triplet interactions. We also make links with other well-known plasticity rules. A simplified phenomenological model is also derived, which should be useful for fast numerical simulation and analytical investigation of the impact of STDP at the network level.
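The standard pairwise STDP rule that models like this one build on can be sketched in a few lines. The exponential window below is the textbook form; the amplitudes and time constants are arbitrary illustration values, not parameters fitted by the authors:

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for a single pre/post spike pair.

    dt = t_post - t_pre (ms): positive dt (pre before post) potentiates,
    negative dt (post before pre) depresses. Parameter values are
    illustrative only.
    """
    if dt > 0:
        return a_plus * math.exp(-dt / tau_plus)
    elif dt < 0:
        return -a_minus * math.exp(dt / tau_minus)
    return 0.0

# Pre 10 ms before post -> potentiation; the reverse order -> depression.
ltp = stdp_dw(10.0)
ltd = stdp_dw(-10.0)
```

Multi-spike interactions of the kind studied in the paper arise precisely because summing this pairwise kernel over spike triplets and trains fails to match experiments, which is what motivates the biophysical model.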
2

Uramoto, Takumi, and Hiroyuki Torikai. "A Calcium-Based Simple Model of Multiple Spike Interactions in Spike-Timing-Dependent Plasticity." Neural Computation 25, no. 7 (July 2013): 1853–69. http://dx.doi.org/10.1162/neco_a_00462.

Abstract:
Spike-timing-dependent plasticity (STDP) is a form of synaptic modification that depends on the relative timings of presynaptic and postsynaptic spikes. In this letter, we propose a calcium-based simple STDP model, described by an ordinary differential equation having only three state variables: one represents the density of intracellular calcium, one represents a fraction of open-state NMDARs, and one represents the synaptic weight. We show that in spite of its simplicity, the model can reproduce the properties of the plasticity that have been experimentally measured in various brain areas (e.g., layer 2/3 and 5 visual cortical slices, hippocampal cultures, and layer 2/3 somatosensory cortical slices) with respect to various patterns of presynaptic and postsynaptic spikes. In addition, comparisons with other STDP models are made, and the significance and advantages of the proposed model are discussed.
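The general flavor of a calcium-based three-variable model can be illustrated with a toy Euler integration. This sketch only mirrors the structure described in the abstract (an NMDAR trace gating calcium influx, and calcium thresholds driving the weight); every equation and constant here is an assumption for illustration, not the model from the paper:

```python
def simulate_synapse(pre_spikes, post_spikes, t_end=100.0, dt=0.1):
    """Euler-integrate a toy three-state synapse model.

    State variables: r (fraction of open NMDARs), c (calcium), w (weight).
    All dynamics and constants are illustrative, not the published equations.
    Spike times are in ms, aligned to the 0.1 ms grid.
    """
    tau_r, tau_c = 10.0, 20.0        # decay time constants (ms)
    theta_p, theta_d = 0.4, 0.1      # calcium thresholds for LTP / LTD
    eta_p, eta_d = 0.005, 0.001      # learning rates
    pre = {round(t, 1) for t in pre_spikes}
    post = {round(t, 1) for t in post_spikes}
    r, c, w = 0.0, 0.0, 0.5
    for step in range(int(t_end / dt)):
        t = round(step * dt, 1)
        if t in pre:
            r += 1.0                 # presynaptic glutamate opens NMDARs
        if t in post:
            c += r                   # backpropagating spike: Ca influx gated by r
        r -= dt * r / tau_r          # leaky decay of both traces
        c -= dt * c / tau_c
        if c > theta_p:              # high calcium -> potentiation
            w += dt * eta_p * (1.0 - w)
        elif c > theta_d:            # moderate calcium -> depression
            w -= dt * eta_d * w
    return w

# Causal pairings (pre 5 ms before post) push calcium past the LTP
# threshold; anti-causal pairings barely move it in this toy version.
w_causal = simulate_synapse([10.0, 50.0], [15.0, 55.0])
w_anti = simulate_synapse([15.0, 55.0], [10.0, 50.0])
```

The appeal of such low-dimensional models, as the abstract notes, is that one cheap ODE per synapse is enough to reproduce a wide range of pairing protocols.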
3

Dan, Yang, and Mu-Ming Poo. "Spike Timing-Dependent Plasticity: From Synapse to Perception." Physiological Reviews 86, no. 3 (July 2006): 1033–48. http://dx.doi.org/10.1152/physrev.00030.2005.

Abstract:
Information in the nervous system may be carried by both the rate and timing of neuronal spikes. Recent findings of spike timing-dependent plasticity (STDP) have fueled the interest in the potential roles of spike timing in processing and storage of information in neural circuits. Induction of long-term potentiation (LTP) and long-term depression (LTD) in a variety of in vitro and in vivo systems has been shown to depend on the temporal order of pre- and postsynaptic spiking. Spike timing-dependent modification of neuronal excitability and dendritic integration was also observed. Such STDP at the synaptic and cellular level is likely to play important roles in activity-induced functional changes in neuronal receptive fields and human perception.
4

Echeveste, Rodrigo, and Claudius Gros. "Two-Trace Model for Spike-Timing-Dependent Synaptic Plasticity." Neural Computation 27, no. 3 (March 2015): 672–98. http://dx.doi.org/10.1162/neco_a_00707.

Abstract:
We present an effective model for spike-timing-dependent synaptic plasticity (STDP) in terms of two interacting traces, corresponding to the fraction of activated NMDA receptors and the calcium concentration in the dendritic spine of the postsynaptic neuron. This model intends to bridge the worlds of existing simplistic phenomenological rules and highly detailed models, thus constituting a practical tool for the study of the interplay of neural activity and synaptic plasticity in extended spiking neural networks. For isolated pairs of pre- and postsynaptic spikes, the standard pairwise STDP rule is reproduced, with appropriate parameters determining the respective weights and timescales for the causal and the anticausal contributions. The model contains otherwise only three free parameters, which can be adjusted to reproduce triplet nonlinearities in hippocampal culture and cortical slices. We also investigate the transition from time-dependent to rate-dependent plasticity occurring for both correlated and uncorrelated spike patterns.
5

Florian, Răzvan V. "Reinforcement Learning Through Modulation of Spike-Timing-Dependent Synaptic Plasticity." Neural Computation 19, no. 6 (June 2007): 1468–502. http://dx.doi.org/10.1162/neco.2007.19.6.1468.

Abstract:
The persistent modification of synaptic efficacy as a function of the relative timing of pre- and postsynaptic spikes is a phenomenon known as spike-timing-dependent plasticity (STDP). Here we show that the modulation of STDP by a global reward signal leads to reinforcement learning. We first derive analytically learning rules involving reward-modulated spike-timing-dependent synaptic and intrinsic plasticity, by applying a reinforcement learning algorithm to the stochastic spike response model of spiking neurons. These rules have several features common to plasticity mechanisms experimentally found in the brain. We then demonstrate in simulations of networks of integrate-and-fire neurons the efficacy of two simple learning rules involving modulated STDP. One rule is a direct extension of the standard STDP model (modulated STDP), and the other one involves an eligibility trace stored at each synapse that keeps a decaying memory of the relationships between recent pre- and postsynaptic spike pairs (modulated STDP with eligibility trace). This latter rule permits learning even if the reward signal is delayed. The proposed rules are able to solve the XOR problem with both rate coded and temporally coded input and to learn a target output firing-rate pattern. These learning rules are biologically plausible, may be used for training generic artificial spiking neural networks, regardless of the neural model used, and suggest the experimental investigation in animals of the existence of reward-modulated STDP.
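The eligibility-trace idea described above can be sketched as follows. This is a generic illustration of reward-modulated STDP, not Florian's derived learning rule; the class name and all parameter values are invented for the example:

```python
import math

class RSTDPSynapse:
    """Reward-modulated STDP with an eligibility trace (illustrative sketch).

    Spike pairings do not change the weight directly; they charge a slowly
    decaying eligibility trace, and a (possibly delayed) global reward
    converts the trace into an actual weight change.
    """

    def __init__(self, w=0.5, tau_e=200.0, a_plus=1.0, a_minus=1.2,
                 tau_stdp=20.0, lr=0.01):
        self.w, self.e = w, 0.0
        self.tau_e, self.tau_stdp = tau_e, tau_stdp
        self.a_plus, self.a_minus, self.lr = a_plus, a_minus, lr

    def on_pairing(self, dt_spikes):
        """dt_spikes = t_post - t_pre (ms): charge the eligibility trace."""
        if dt_spikes > 0:
            self.e += self.a_plus * math.exp(-dt_spikes / self.tau_stdp)
        elif dt_spikes < 0:
            self.e -= self.a_minus * math.exp(dt_spikes / self.tau_stdp)

    def advance(self, dt):
        """Let the trace decay for dt ms while no reward arrives."""
        self.e *= math.exp(-dt / self.tau_e)

    def on_reward(self, r):
        """A global reward signal gates the trace into the weight."""
        self.w += self.lr * r * self.e

syn = RSTDPSynapse()
syn.on_pairing(10.0)   # causal pre->post pairing
syn.advance(100.0)     # reward arrives 100 ms later
syn.on_reward(1.0)     # positive reward still reinforces the recent pairing
```

Because the trace outlives the pairing, credit can be assigned to spike pairs that happened well before the reward, which is the key property the abstract highlights.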
6

Lightheart, Toby, Steven Grainger, and Tien-Fu Lu. "Spike-Timing-Dependent Construction." Neural Computation 25, no. 10 (October 2013): 2611–45. http://dx.doi.org/10.1162/neco_a_00501.

Abstract:
Spike-timing-dependent construction (STDC) is the production of new spiking neurons and connections in a simulated neural network in response to neuron activity. Following the discovery of spike-timing-dependent plasticity (STDP), significant effort has gone into the modeling and simulation of adaptation in spiking neural networks (SNNs). Limitations in computational power imposed by network topology, however, constrain learning capabilities through connection weight modification alone. Constructive algorithms produce new neurons and connections, allowing automatic structural responses for applications of unknown complexity and nonstationary solutions. A conceptual analogy is developed and extended to theoretical conditions for modeling synaptic plasticity as network construction. Generalizing past constructive algorithms, we propose a framework for the design of novel constructive SNNs and demonstrate its application in the development of simulations for the validation of developed theory. Potential directions of future research and applications of STDC for biological modeling and machine learning are also discussed.
7

Lu, Hui, Hyungju Park, and Mu-Ming Poo. "Spike-timing-dependent BDNF secretion and synaptic plasticity." Philosophical Transactions of the Royal Society B: Biological Sciences 369, no. 1633 (January 5, 2014): 20130132. http://dx.doi.org/10.1098/rstb.2013.0132.

Abstract:
In acute hippocampal slices, we found that the presence of extracellular brain-derived neurotrophic factor (BDNF) is essential for the induction of spike-timing-dependent long-term potentiation (tLTP). To determine whether BDNF could be secreted from postsynaptic dendrites in a spike-timing-dependent manner, we used a reduced system of dissociated hippocampal neurons in culture. Repetitive pairing of iontophoretically applied glutamate pulses at the dendrite with neuronal spikes could induce persistent alterations of glutamate-induced responses at the same dendritic site in a manner that mimics spike-timing-dependent plasticity (STDP)—the glutamate-induced responses were potentiated and depressed when the glutamate pulses were applied 20 ms before and after neuronal spiking, respectively. By monitoring changes in the green fluorescent protein (GFP) fluorescence at the dendrite of hippocampal neurons expressing GFP-tagged BDNF, we found that pairing of iontophoretic glutamate pulses with neuronal spiking resulted in BDNF secretion from the dendrite at the iontophoretic site only when the glutamate pulses were applied within a time window of approximately 40 ms prior to neuronal spiking, consistent with the timing requirement of synaptic potentiation via STDP. Thus, BDNF is required for tLTP and BDNF secretion could be triggered in a spike-timing-dependent manner from the postsynaptic dendrite.
8

Leen, Todd K., and Robert Friel. "Stochastic Perturbation Methods for Spike-Timing-Dependent Plasticity." Neural Computation 24, no. 5 (May 2012): 1109–46. http://dx.doi.org/10.1162/neco_a_00267.

Abstract:
Online machine learning rules and many biological spike-timing-dependent plasticity (STDP) learning rules generate jump process Markov chains for the synaptic weights. We give a perturbation expansion for the dynamics that, unlike the usual approximation by a Fokker-Planck equation (FPE), is well justified. Our approach extends the related system size expansion by giving an expansion for the probability density as well as its moments. We apply the approach to two observed STDP learning rules and show that in regimes where the FPE breaks down, the new perturbation expansion agrees well with Monte Carlo simulations. The methods are also applicable to the dynamics of stochastic neural activity. Like previous ensemble analyses of STDP, we focus on equilibrium solutions, although the methods can in principle be applied to transients as well.
9

Hunzinger, Jason F., Victor H. Chan, and Robert C. Froemke. "Learning complex temporal patterns with resource-dependent spike timing-dependent plasticity." Journal of Neurophysiology 108, no. 2 (July 15, 2012): 551–66. http://dx.doi.org/10.1152/jn.01150.2011.

Abstract:
Studies of spike timing-dependent plasticity (STDP) have revealed that long-term changes in the strength of a synapse may be modulated substantially by temporal relationships between multiple presynaptic and postsynaptic spikes. Whereas long-term potentiation (LTP) and long-term depression (LTD) of synaptic strength have been modeled as distinct or separate functional mechanisms, here, we propose a new shared resource model. A functional consequence of our model is fast, stable, and diverse unsupervised learning of temporal multispike patterns with a biologically consistent spiking neural network. Due to interdependencies between LTP and LTD, dendritic delays, and proactive homeostatic aspects of the model, neurons are equipped to learn to decode temporally coded information within spike bursts. Moreover, neurons learn spike timing with few exposures in substantial noise and jitter. Surprisingly, despite having only one parameter, the model also accurately predicts in vitro observations of STDP in more complex multispike trains, as well as rate-dependent effects. We discuss candidate commonalities in natural long-term plasticity mechanisms.
10

Mendes, Alexandre, Gaetan Vignoud, Sylvie Perez, Elodie Perrin, Jonathan Touboul, and Laurent Venance. "Concurrent Thalamostriatal and Corticostriatal Spike-Timing-Dependent Plasticity and Heterosynaptic Interactions Shape Striatal Plasticity Map." Cerebral Cortex 30, no. 8 (March 7, 2020): 4381–401. http://dx.doi.org/10.1093/cercor/bhaa024.

Abstract:
The striatum integrates inputs from the cortex and thalamus, which display concomitant or sequential activity. The striatum assists in forming memory, with acquisition of the behavioral repertoire being associated with corticostriatal (CS) plasticity. The literature has mainly focused on CS plasticity, and little is known about thalamostriatal (TS) plasticity rules or CS and TS plasticity interactions. We undertook here the study of these plasticity rules. We found bidirectional Hebbian and anti-Hebbian spike-timing-dependent plasticity (STDP) at the thalamic and cortical inputs, respectively, which were driving concurrent changes at the striatal synapses. Moreover, TS- and CS-STDP induced heterosynaptic plasticity. We developed a calcium-based mathematical model of the coupled TS and CS plasticity, and simulations predict complex changes in the CS and TS plasticity maps depending on the precise cortex–thalamus–striatum engram. These predictions were experimentally validated using triplet-based STDP stimulations, which revealed significant remodeling of the CS-STDP map upon TS activity, notably the induction of LTD areas in the CS-STDP for specific timing regimes. TS-STDP exerts a greater influence on CS plasticity than CS-STDP on TS plasticity. These findings highlight the major impact of precise timing in cortical and thalamic activity for the memory engram of striatal synapses.

Dissertations / Theses on the topic "STDP [Spike Timing Dependent Plasticity]"

1

Strain, Thomas. "A spiking neuron training approach using spike timing-dependent plasticity (STDP)." Thesis, Ulster University, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.538954.

Abstract:
Spiking Neural Networks (SNNs) model the dynamics and learning capabilities of the brain in a more biologically inspired way than previous generations of neural networks. However, training these networks is still problematic due to over-training and weight instabilities. This research focuses on the design and implementation of a more biologically inspired training algorithm, based on the Spike Timing Dependent Plasticity (STDP) learning rule, where weight changes are dependent on the temporal distribution of the input data; the algorithm cross-correlates (CC) similarities in the input data across all classes and hence implements global training. The implementation required that the original STDP training rule be extended to take into account both global and local similarities in the input data across all classes. Other novel features of the approach are the use of multiple synaptic connections, axonal delays, and dynamic thresholds. Results from the benchmark problems, the Iris and Wisconsin Breast Cancer data, for two- and three-layer SNNs are presented. The three-layer SNN showed a classification accuracy superior to the two-layer network and comparable to other approaches. Unlike the two-layer SNN, the three-layer structure utilised the CC rule together with dynamic thresholds, where the latter correlated similarities in the spatial patterns across the different data classes. Further investigations were carried out on a temporal application, the TI46 speech corpus. The TI46 data was pre-processed using the most widely applied method, Mel-Frequency Cepstral Coefficients (MFCC). The results obtained are highly comparable with results from more complex state-of-the-art classification systems such as Liquid State Machines. The proposed learning algorithm facilitates competition between neurons, eradicating the problem of a bipolar weight distribution and consequently stabilising learning.
Results have demonstrated that this novel learning technique produces stability in the learning process and is a significant contribution in enabling SNNs to be applied to realistic real world classification problems.
2

Iglesias, Javier. "Emergence of oriented circuits driven by synaptic pruning associated with spike-timing-dependent plasticity (STDP)." Université Joseph Fourier (Grenoble), 2005. https://tel.archives-ouvertes.fr/tel-00010650.

Abstract:
Massive synaptic pruning following over-growth is a general feature of mammalian brain maturation. Pruning starts near the time of birth and is completed by the time of sexual maturation. Trigger signals able to induce synaptic pruning could be related to dynamic functions that depend on the timing of action potentials. Spike-timing-dependent synaptic plasticity (STDP) is a change in synaptic strength based on the ordering of pre- and postsynaptic spikes. The relation between synaptic efficacy and synaptic pruning suggests that weak synapses may be modified and removed through competitive "learning" rules. This plasticity rule might produce the strengthening of the connections among neurons that belong to cell assemblies characterized by recurrent patterns of firing. Conversely, the connections that are not recurrently activated might decrease in efficacy and eventually be eliminated. The main goal of our study is to determine whether or not, and under which conditions, such cell assemblies may emerge out of a locally connected random network of integrate-and-fire units distributed on a 2D lattice receiving background noise and content-related input organized in both temporal and spatial dimensions. The originality of our study lies in the relatively large size of the network (10,000 units), the duration of the experiment (1,000,000 time units, one time unit corresponding to the duration of a spike), and the application of an original bio-inspired STDP modification rule compatible with hardware implementation.
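The competitive "learning then pruning" scheme described in the abstract can be illustrated with a deliberately simple toy (the thesis's actual STDP-driven rule, network size, and thresholds are not reproduced here; all names and values below are invented):

```python
def prune_weak_synapses(weights, threshold=0.05):
    """Return only the synapses whose efficacy survived the competition.

    weights maps (pre, post) synapse ids to efficacies; synapses that were
    not recurrently reinforced (efficacy below threshold) are eliminated,
    mimicking developmental pruning after over-growth.
    """
    return {syn: w for syn, w in weights.items() if w >= threshold}

# Toy competition: synapses inside a recurrently active assembly are
# repeatedly reinforced, while the others slowly decay.
weights = {(i, 0): 0.5 for i in range(10)}
assembly = {(0, 0), (1, 0), (2, 0)}   # hypothetical cell assembly
for _ in range(100):
    for syn in weights:
        if syn in assembly:
            weights[syn] = min(1.0, weights[syn] + 0.01)
        else:
            weights[syn] = max(0.0, weights[syn] - 0.01)

surviving = prune_weak_synapses(weights)
# Only the reinforced assembly survives pruning.
```

In the thesis, the reinforcement step is of course STDP itself rather than a fixed increment, but the selection logic, strengthen recurrently co-active connections and eliminate the rest, is the same.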
3

Humble, James. "Learning, self-organisation and homeostasis in spiking neuron networks using spike-timing dependent plasticity." Thesis, University of Plymouth, 2013. http://hdl.handle.net/10026.1/1499.

Abstract:
Spike-timing dependent plasticity is a learning mechanism used extensively within neural modelling. The learning rule has been shown to allow a neuron to find the onset of a spatio-temporal pattern repeated among its afferents. In this thesis, the first question addressed is ‘what does this neuron learn?’ With a spiking neuron model and linear prediction, evidence is adduced that the neuron learns two components: (1) the level of average background activity and (2) specific spike times of a pattern. Taking advantage of these findings, a network is developed that can train recognisers for longer spatio-temporal input signals using spike-timing dependent plasticity. Using a number of neurons that are mutually connected by plastic synapses and subject to a global winner-takes-all mechanism, chains of neurons can form where each neuron is selective to a different segment of a repeating input pattern, and the neurons are feedforwardly connected in such a way that both the correct stimulus and the firing of the previous neurons are required in order to activate the next neuron in the chain. This is akin to a simple class of finite state automata. Following this, a novel resource-based STDP learning rule is introduced. The learning rule has several advantages over typical implementations of STDP and results in synaptic statistics which match favourably with those observed experimentally. For example, synaptic weight distributions and the presence of silent synapses match experimental data.
4

René, Alice. "Plasticité synaptique et fonctionnelle dans le cortex visuel primaire : une étude par conditionnement theta - burst in vivo." Paris 6, 2007. http://www.theses.fr/2007PA066656.

Abstract:
Can 100 Hz bursts repeated at theta-burst frequency in thalamic afferents induce synaptic modifications and affect the organization of the visual receptive fields (RFs) of cortical neurons in vivo? Intracellular recordings were performed in area 17 of the adult cat, together with stimulation and extracellular recording of the LGN. Responses to electrical stimulation of the LGN (PSPs) were potentiated (50%) or depressed (32%) by theta-burst conditioning. Selective RF modifications were observed in 40% of cases; they were specific to the RF location of the thalamic inputs and showed an anisotropy consistent with the sign of the PSP change. The synaptic modifications follow an associative plasticity rule close to coincidence detection, and the temporal order of pre- and postsynaptic activity matters less than the postsynaptic context of depolarization and inhibition present at the time of discharge.
5

Falez, Pierre. "Improving spiking neural networks trained with spike timing dependent plasticity for image recognition." Thesis, Lille 1, 2019. http://www.theses.fr/2019LIL1I101.

Abstract:
Computer vision is a strategic field, owing to its great number of potential applications with a high impact on society. The area has improved quickly over the last decades, especially thanks to advances in artificial intelligence and, more particularly, the advent of deep learning. Nevertheless, these methods present two main drawbacks compared with biological brains: they are extremely energy-intensive and they need large labeled training sets. Spiking neural networks are alternative models offering an answer to the energy-consumption issue. One attribute of these models is that they can be implemented very efficiently in hardware, in order to build ultra-low-power architectures. In return, these models impose certain limitations, such as the use of only local memory and computation, which prevents the use of traditional learning methods such as gradient back-propagation. STDP is a learning rule, observed in biology, which can be used in spiking neural networks. This rule reinforces the synapses in which local correlations of spike timings are detected and weakens the other synapses. Its local and unsupervised nature makes it possible to abide by the constraints of neuromorphic architectures, which means it can be implemented efficiently, and it also provides a solution to the dataset-labeling issue. However, spiking neural networks trained with the STDP rule currently show lower performance than those trained with deep learning. The literature on STDP still mostly uses simple data; the behavior of this rule has seldom been studied on more complex data, such as sets made of a large variety of real-world images. The aim of this manuscript is to study the behavior of these spiking models, trained through the STDP rule, on image classification tasks.
The main goal is to improve the performance of these models while respecting as much as possible the constraints of neuromorphic architectures. The first contribution focuses on software simulation of spiking neural networks. Hardware implementation being a long and costly process, simulation is a good alternative for studying the behavior of different models more quickly. The following contributions focus on building multi-layered spiking networks; networks made of several stacked layers, as in deep learning methods, can process more complex data. One chapter revolves around the frequency loss observed in several spiking neural networks, an issue that prevents the stacking of multiple spiking layers. The focus then switches to a study of STDP behavior on more complex data, especially colored real-world images. Multiple measurements are used, such as the coherence of filters and the sparsity of activations, to better understand the reasons for the performance gap between STDP and more traditional methods. Lastly, the manuscript describes the construction of multi-layered networks. To this end, a new threshold-adaptation mechanism is introduced, along with a multi-layer training protocol. It is shown that such networks improve the state of the art for STDP-trained networks.
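As a rough illustration of what a threshold-adaptation mechanism in a spiking network looks like (a generic homeostatic scheme, not the specific mechanism introduced in this thesis; all parameter values are invented), consider a leaky integrate-and-fire neuron whose firing threshold rises after each spike and relaxes back toward a resting value:

```python
class AdaptiveThresholdNeuron:
    """Leaky integrate-and-fire neuron with a homeostatic threshold.

    Each output spike raises the firing threshold, which then decays back
    toward a resting value, discouraging any one neuron from dominating
    the network's activity. Units and constants are illustrative only.
    """

    def __init__(self, v_rest=0.0, th_rest=1.0, th_boost=0.5,
                 tau_v=20.0, tau_th=100.0):
        self.v = v_rest
        self.th = th_rest
        self.v_rest, self.th_rest = v_rest, th_rest
        self.th_boost, self.tau_v, self.tau_th = th_boost, tau_v, tau_th

    def step(self, input_current, dt=1.0):
        """Integrate one time step; return True if the neuron spiked."""
        self.v += dt * (-(self.v - self.v_rest) / self.tau_v + input_current)
        self.th += dt * (self.th_rest - self.th) / self.tau_th
        if self.v >= self.th:
            self.v = self.v_rest       # reset membrane potential
            self.th += self.th_boost   # homeostasis: next spike is harder
            return True
        return False

# Under constant drive, the firing rate settles down as the threshold adapts.
neuron = AdaptiveThresholdNeuron()
spikes = [neuron.step(0.2) for _ in range(200)]
```

Mechanisms in this family keep activity flowing through deeper layers without any single unit saturating, which is the kind of problem the thesis's frequency-loss chapter addresses.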
6

Jacob, Vincent. "Intégration spatio-temporelle de scènes tactiles et plasticité fonctionnelle dans le cortex à tonneaux du rat." Paris 6, 2007. http://www.theses.fr/2007PA066222.

Abstract:
Classically, the connections between whiskers and cortical barrels are considered independent pathways. However, the rat generates complex patterns of contacts during active exploration, and cortical receptive fields (RFs) are very large, suggesting that multiwhisker information converges on each neuron. In order to study the integration of tactile scenes in the cortex, we developed a matrix of 25 stimulators, which allowed us to study the RFs, their dependence on the omission of a predictable stimulus, and the selectivity to the global direction generated by sequential deflections of the whiskers. The primary cortex thus performs an integrated, non-linear analysis of sensory information. Certain conditions of activity induce long-term modifications of the RFs. We observed modifications of the sensory response whose sign and intensity depended on the order of, and time interval between, the stimulation and the postsynaptic activation of the recorded neuron. This result is compatible with STDP rules, for which this work constitutes the first validation in the somatosensory cortex in vivo.
7

Bernert, Marie. "Développement d'un réseau de neurones STDP pour le tri en ligne et non-supervisé de potentiels d'action." Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAS001.

Abstract:
Pattern recognition is a fundamental task for living beings, performed very efficiently by the brain. Artificial deep neural networks are making rapid progress in reproducing this performance and have many applications, such as image recognition and natural language processing. However, they require extensive training on large datasets and heavy computation. A promising alternative is spiking neural networks, which more closely mimic what happens in the brain, with spiking neurons and spike-timing dependent plasticity (STDP). They can perform unsupervised learning and have been used for visual and auditory pattern recognition, but applications of STDP networks still lag far behind classical deep learning. Developing new applications for this kind of network is all the more worthwhile because they can be implemented in low-power neuromorphic hardware, a field currently undergoing important developments, in particular with miniaturized analog memristive devices able to mimic synaptic plasticity. In this work, we chose to develop an STDP neural network for a specific task: spike-sorting, a crucial problem in neuroscience. Brain implants based on microelectrode arrays can record the activity of individual neurons, which appears in the recorded signal as peaks of potential called action potentials. However, several neurons can be recorded by the same electrode. The goal of spike-sorting is to extract and separate the activity of the different neural cells from a common extracellular recording, taking advantage of the fact that the shape of an action potential on an electrode depends on the neuron it stems from. Spike-sorting can thus be seen as an unsupervised pattern recognition task in which the goal is to detect and classify different waveforms. 
Most classical spike-sorting approaches use three separate steps: detecting all action potentials in the signal, extracting features characterizing their shapes, and separating these features into clusters that should correspond to different neural cells. Although online methods exist, the most widespread spike-sorting methods are offline or require an offline preprocessing step, which is not compatible with online applications such as brain-computer interfaces (BCIs). Moreover, the development of ever larger microelectrode arrays creates a need for fully automatic and computationally efficient algorithms. Using an STDP network brings a new approach to meet these requirements. We designed a network that takes the electrode signal as input and outputs spikes corresponding to the spiking activity of the recorded neural cells. It is organized into several layers, connected in a feedforward way, each performing one processing step. The first layer, composed of neurons acting as sensory neurons, converts the input signal into spike trains. The following layers learn patterns from the previous layer thanks to STDP rules. Each layer implements different mechanisms that improve its performance, such as resource-dependent STDP, threshold adaptation, plasticity triggered by inhibition, and neuron models with rebound-spiking properties. An attention mechanism makes the network sensitive only to the parts of the signal containing action potentials. The network was first designed to process data from a single electrode, and then adapted to process signals from multiple electrodes. It was tested on simulated data, which allowed the network output to be compared to a known ground truth, and on real extracellular recordings paired with intracellular recordings giving a partial ground truth. 
The different versions of the network were evaluated and compared to other spike-sorting algorithms, giving very satisfying results. Following these software simulations, we initiated an FPGA implementation of the method, a first step toward an embedded neuromorphic implementation.
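The first of the three classical stages described above (detection) is commonly done by thresholding the signal at a multiple of a robust noise estimate. The sketch below illustrates that stage only; the sampling rate, threshold factor, and refractory period are illustrative assumptions, and this is not the network-based method of the thesis.

```python
import statistics

def detect_spikes(signal, fs=30000, thresh_factor=4.5, refractory_ms=1.0):
    """Naive threshold-crossing spike detection, the first stage of the
    classical pipeline (detect -> extract features -> cluster).
    The noise standard deviation is estimated from the median absolute
    deviation, a common robust choice; parameter values are illustrative."""
    mad = statistics.median(abs(x) for x in signal)
    sigma = mad / 0.6745                      # robust noise estimate
    threshold = thresh_factor * sigma
    refractory = int(refractory_ms * fs / 1000)
    spikes, last = [], -refractory
    for i, x in enumerate(signal):
        # accept a crossing only if outside the refractory window
        if abs(x) > threshold and i - last >= refractory:
            spikes.append(i)
            last = i
    return spikes
```

The feature-extraction and clustering stages that follow in the classical pipeline are what the STDP network of this thesis replaces with online, unsupervised learning.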
8

Masquelier, Timothée. "Mécanismes d'apprentissage pour expliquer la rapidité, la sélectivité et l'invariance des réponses dans le cortex visuel." Phd thesis, Université Paul Sabatier - Toulouse III, 2008. http://tel.archives-ouvertes.fr/tel-00271070.

Abstract:
In this thesis I propose several synaptic plasticity mechanisms that could explain the speed, selectivity, and invariance of neuronal responses in the visual cortex, and I discuss their biological plausibility. I also report the results of a relevant psychophysics experiment, which show that familiarity can speed up visual processing. Beyond these results specific to the visual system, the work presented here supports the hypothesis that spike times are used to encode, decode, and process information in the brain, the so-called 'temporal coding' theory. In such a framework, Spike Timing Dependent Plasticity could play a key role, by detecting repeating spike patterns and allowing responses to them to become faster and faster.
9

Helson, Pascal. "Étude de la plasticité pour des neurones à décharge en interaction." Thesis, Université Côte d'Azur, 2021. http://www.theses.fr/2021COAZ4013.

Abstract:
In this thesis, we study a phenomenon that may be responsible for our memory capacity: synaptic plasticity, the modification over time of the links between neurons. This phenomenon is stochastic: it is the result of a series of diverse and numerous chemical processes. The aim of the thesis is to propose a model of plasticity for interacting spiking neurons. The main difficulty is to find a model that satisfies the following conditions: it must be both consistent with the biological results of the field and simple enough to be studied mathematically and simulated with a large number of neurons. In a first step, starting from a rather simple model of plasticity, we study the learning of external signals by a neural network, as well as the forgetting time of a signal when the network is subjected to other signals (noise). The mathematical analysis allows us to control the probability of misevaluating the signal. From this, we deduce explicit lower bounds on the time during which a given signal is kept in memory. Next, we propose a model based on stochastic plasticity rules that depend on the timing of neural spikes (Spike Timing Dependent Plasticity, STDP). This model is described by a Piecewise Deterministic Markov Process (PDMP). The long-time behaviour of such a neural network is studied using a slow-fast analysis; in particular, sufficient conditions are established under which the process associated with the synaptic weights is ergodic. Finally, we make the link between two levels of modelling: the microscopic and the macroscopic approaches. Starting from the dynamics presented at the microscopic level (the neuron model and its interaction with other neurons), we derive an asymptotic dynamics that represents the evolution of a typical neuron and its incoming synaptic weights: this is the mean-field analysis of the model. We thus condense the information on the dynamics of the weights and that of the neurons into a single equation, that of a typical neuron.
10

Lecerf, Gwendal. "Développement d'un réseau de neurones impulsionnels sur silicium à synapses memristives." Thesis, Bordeaux, 2014. http://www.theses.fr/2014BORD0219/document.

Abstract:
Funded by the ANR MHANN (Memristive Hardware Analog Neural Network) project, this doctoral work addresses the development of a new computing architecture based on neural networks. Artificial neural networks are particularly well suited to image recognition and can be used to complement sequential processors. In 2008, a new device technology emerged: the memristor. Classified as the fourth passive circuit element, its resistance can be modified as a function of the current density flowing through it, and it retains these changes. Thanks to these properties, memristive devices are ideal candidates to play the role of synapses in artificial neural networks. By performing measurements on the ferroelectric memristor technology of Julie Grollier's team at UMR CNRS/Thalès, we demonstrated that it is possible to obtain STDP (Spike Timing Dependent Plasticity) learning, a biologically inspired rule classically used with spiking neural networks in which synaptic weights vary as a function of neuronal events. Building on these memristor measurements and on simulations from a program developed with our partners at INRIA Saclay, we successively designed two silicon chips for two ferroelectric memristor technologies. The first technology (BTO), less performant, was set aside in favour of a second technology (BFO). The second chip was designed using the lessons learned from the first; it contains two layers of a spiking neural network dedicated to learning images of 81 pixels. By connecting it to a package containing a memristor crossbar, we will be able to build a demonstrator of a hybrid neural network with ferroelectric memristive synapses.

Book chapters on the topic "STDP [Spike Timing Dependant Plasticity]"

1

Börgers, Christoph. "Spike Timing-Dependent Plasticity (STDP)." In An Introduction to Modeling Neuronal Dynamics, 349–59. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-51171-9_40.

2

Griffith, Thom, Jack Mellor, and Krasimira Tsaneva-Atanasova. "Spike Timing-Dependent Plasticity (STDP), Biophysical Models." In Encyclopedia of Computational Neuroscience, 1–5. New York, NY: Springer New York, 2014. http://dx.doi.org/10.1007/978-1-4614-7320-6_359-1.

3

Griffith, Thom, Jack Mellor, and Krasimira Tsaneva-Atanasova. "Spike-Timing Dependent Plasticity (STDP), Biophysical Models." In Encyclopedia of Computational Neuroscience, 2803–7. New York, NY: Springer New York, 2015. http://dx.doi.org/10.1007/978-1-4614-6675-8_359.

4

Koprinkova-Hristova, Petia, Nadejda Bocheva, Simona Nedelcheva, Miroslava Stefanova, Bilyana Genova, Radoslava Kraleva, and Velin Kralev. "STDP Plasticity in TRN Within Hierarchical Spike Timing Model of Visual Information Processing." In IFIP Advances in Information and Communication Technology, 279–90. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-49161-1_24.

5

Buonomano, D. V., and T. P. Carvalho. "Spike-Timing-Dependent Plasticity (STDP)." In Encyclopedia of Neuroscience, 265–68. Elsevier, 2009. http://dx.doi.org/10.1016/b978-008045046-9.00822-6.


Conference papers on the topic "STDP [Spike Timing Dependant Plasticity]"

1

Kiani, Mahdi, Nan Du, Danilo Burger, Ilona Skorupa, Ramona Ecke, Stefan E. Schulz, and Heidemarie Schmidt. "Electroforming-free BiFeO3 switches for neuromorphic computing: Spike-timing dependent plasticity (STDP) and cycle-number dependent plasticity (CNDP)." In 2019 26th IEEE International Conference on Electronics, Circuits and Systems (ICECS). IEEE, 2019. http://dx.doi.org/10.1109/icecs46596.2019.8965060.

2

Di Hu, Xu Zhang, Ziye Xu, Silvia Ferrari, and Pinaki Mazumder. "Digital implementation of a spiking neural network (SNN) capable of spike-timing-dependent plasticity (STDP) learning." In 2014 IEEE 14th International Conference on Nanotechnology (IEEE-NANO). IEEE, 2014. http://dx.doi.org/10.1109/nano.2014.6968000.

3

Marukame, T., R. Ichihara, M. Mori, Y. Nishi, S. Yasuda, T. Tanamoto, and Y. Mitani. "Artificial neuron operations and spike-timing-dependent plasticity (STDP) using memristive devices for brain-inspired computing." In 2017 International Conference on Solid State Devices and Materials. The Japan Society of Applied Physics, 2017. http://dx.doi.org/10.7567/ssdm.2017.d-4-03.

4

Qi, Yu, Jiangrong Shen, Yueming Wang, Huajin Tang, Hang Yu, Zhaohui Wu, and Gang Pan. "Jointly Learning Network Connections and Link Weights in Spiking Neural Networks." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/221.

Abstract:
Spiking neural networks (SNNs) are considered to be biologically plausible and power-efficient on neuromorphic hardware. However, unlike the brain, most existing SNN algorithms have fixed network topologies and connection relationships. This paper proposes a method to jointly learn network connections and link weights simultaneously. The connection structures are optimized by the spike-timing-dependent plasticity (STDP) rule with timing information, and the link weights are optimized by a supervised algorithm. The connection structures and the weights are learned alternately until a termination condition is satisfied. Experiments are carried out using four benchmark datasets. Our approach outperforms classical learning methods such as STDP, Tempotron, SpikeProp, and a state-of-the-art supervised algorithm. In addition, the learned structures effectively reduce the number of connections by about 24%, thus improving the computational efficiency of the network.
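A toy sketch of the alternating scheme the abstract describes — a structure step that keeps the highest-scoring connections, followed by a weight step on the survivors — is shown below. The scoring function, keep ratio, and weight update are placeholders invented for illustration; the paper's actual STDP-based criterion and supervised algorithm are not reproduced here.

```python
import random

def train_alternating(n_in, n_out, steps=5, keep_ratio=0.76):
    """Alternately (1) select connections by a surrogate 'timing score'
    (standing in for the paper's STDP-based structure learning) and
    (2) update the weights of the surviving links (standing in for the
    supervised weight step). Purely illustrative."""
    random.seed(0)
    # start fully connected
    conn = {(i, j): True for i in range(n_in) for j in range(n_out)}
    w = {k: random.uniform(0, 1) for k in conn}
    for _ in range(steps):
        # structure step: keep only the highest-scoring active links
        scores = {k: random.uniform(0, 1) for k in conn if conn[k]}
        n_keep = int(len(scores) * keep_ratio)
        kept = set(sorted(scores, key=scores.get, reverse=True)[:n_keep])
        for k in conn:
            conn[k] = k in kept
        # weight step: nudge surviving weights (placeholder for supervision)
        for k in kept:
            w[k] = min(1.0, w[k] + 0.01)
    return conn, w
```

With a keep ratio below 1, the active connection count shrinks each round, mirroring the pruning effect (about 24% fewer connections) reported in the abstract.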
5

Zhang, Tielin, Yi Zeng, Dongcheng Zhao, and Bo Xu. "Brain-inspired Balanced Tuning for Spiking Neural Networks." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/229.

Abstract:
Due to the nature of Spiking Neural Networks (SNNs), it is challenging to train them with biologically plausible learning principles. Multi-layered SNNs have non-differentiable neurons and temporally-centric synapses, which make them nearly impossible to tune directly by back-propagation. Here we propose an alternative biologically inspired balanced tuning approach to train SNNs. The approach draws on three main inspirations from the brain: firstly, a biological network is usually trained towards a state where the temporal updates of its variables (e.g. membrane potential) are at equilibrium; secondly, specific proportions of excitatory and inhibitory neurons usually contribute to stable representations; thirdly, short-term plasticity (STP) is a general principle for keeping the input and output of synapses balanced, leading to better learning convergence. With these inspirations, we train SNNs in three steps: first, the SNN model is trained with the three brain-inspired principles; then weakly supervised learning is used to tune the membrane potential in the final layer for network classification; finally, the learned information is consolidated from membrane potentials into synaptic weights by Spike-Timing Dependent Plasticity (STDP). The proposed approach is verified on the MNIST handwritten-digit recognition dataset, and the performance (an accuracy of 98.64%) indicates that the idea of a balanced state can indeed improve the learning ability of SNNs, showing the power of the proposed brain-inspired approach for tuning biologically plausible SNNs.
6

Sarim, Mohammad, Thomas Schultz, Manish Kumar, and Rashmi Jha. "An Artificial Brain Mechanism to Develop a Learning Paradigm for Robot Navigation." In ASME 2016 Dynamic Systems and Control Conference. American Society of Mechanical Engineers, 2016. http://dx.doi.org/10.1115/dscc2016-9903.

Abstract:
Advancement in solid-state device technology has made it possible to replicate on such devices some of the learning behaviors observed in biological neurons. A widely used mechanism for realizing an unsupervised learning process is Spike Timing Dependent Plasticity (STDP). In this work, we have developed a novel solution for learning with such networks in robots that can be implemented on novel resistive memory devices. This artificial brain mechanism is capable of learning by observing the environment and taking actions to achieve a desired goal. We have demonstrated this learning scheme using a mathematical model representing the reconfiguration of synaptic strengths. This model can easily be implemented on miniaturized microprocessors using resistive crossbar memories. The reconfigurable resistive memory devices in crossbar arrays can mimic the synapses in human brains by changing their resistance when an appropriate voltage signal is applied. In this work, we have demonstrated the potential of this learning scheme by applying it to navigate a two-wheeled differential-drive robot in an environment cluttered with obstacles. The robot is able to navigate the environment autonomously and can reach a given target while avoiding obstacles.