
Journal articles on the topic 'Temporal Hebbian learning'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'Temporal Hebbian learning.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Rao, Rajesh P. N., and Terrence J. Sejnowski. "Spike-Timing-Dependent Hebbian Plasticity as Temporal Difference Learning." Neural Computation 13, no. 10 (2001): 2221–37. http://dx.doi.org/10.1162/089976601750541787.

Abstract:
A spike-timing-dependent Hebbian mechanism governs the plasticity of recurrent excitatory synapses in the neocortex: synapses that are activated a few milliseconds before a postsynaptic spike are potentiated, while those that are activated a few milliseconds after are depressed. We show that such a mechanism can implement a form of temporal difference learning for prediction of input sequences. Using a biophysical model of a cortical neuron, we show that a temporal difference rule used in conjunction with dendritic backpropagating action potentials reproduces the temporally asymmetric window of Hebbian plasticity observed physiologically. Furthermore, the size and shape of the window vary with the distance of the synapse from the soma. Using a simple example, we show how a spike-timing-based temporal difference learning rule can allow a network of neocortical neurons to predict an input a few milliseconds before the input's expected arrival.
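To make the learning-window idea concrete, here is a minimal Python sketch of a temporally asymmetric STDP window of the kind described above; the exponential shape, the amplitudes, and the 20 ms time constant are illustrative assumptions, not the paper's fitted values:

    import numpy as np

    def stdp_window(dt_ms, a_plus=1.0, a_minus=1.0, tau=20.0):
        # Potentiation when the presynaptic spike precedes the
        # postsynaptic spike (dt > 0), depression when it follows.
        return np.where(dt_ms > 0,
                        a_plus * np.exp(-dt_ms / tau),
                        -a_minus * np.exp(dt_ms / tau))

    # A synapse active 5 ms before the postsynaptic spike is potentiated;
    # one active 5 ms after is depressed.
    print(stdp_window(np.array([5.0, -5.0])))

In a sequence-prediction setting, the potentiation branch plays the role of a positive temporal difference error for inputs that predict the upcoming spike.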
2

Cho, Myoung Won. "Temporal Hebbian plasticity designed for efficient competitive learning." Journal of the Korean Physical Society 64, no. 8 (2014): 1213–19. http://dx.doi.org/10.3938/jkps.64.1213.

3

Tully, Philip J., Henrik Lindén, Matthias H. Hennig, and Anders Lansner. "Spike-Based Bayesian-Hebbian Learning of Temporal Sequences." PLOS Computational Biology 12, no. 5 (2016): e1004954. http://dx.doi.org/10.1371/journal.pcbi.1004954.

4

Girolami, Mark, and Colin Fyfe. "A temporal model of linear anti-Hebbian learning." Neural Processing Letters 4, no. 3 (1996): 139–48. http://dx.doi.org/10.1007/bf00426022.

5

Zenke, Friedemann, Wulfram Gerstner, and Surya Ganguli. "The temporal paradox of Hebbian learning and homeostatic plasticity." Current Opinion in Neurobiology 43 (April 2017): 166–76. http://dx.doi.org/10.1016/j.conb.2017.03.015.

6

Lee, Yun-Parn. "Multidimensional Hebbian Learning With Temporal Coding in Neocognitron Visual Recognition." IEEE Transactions on Systems, Man, and Cybernetics: Systems 47, no. 12 (2017): 3386–96. http://dx.doi.org/10.1109/tsmc.2016.2599200.

7

Kolodziejski, Christoph, Bernd Porr, and Florentin Wörgötter. "On the Asymptotic Equivalence Between Differential Hebbian and Temporal Difference Learning." Neural Computation 21, no. 4 (2009): 1173–202. http://dx.doi.org/10.1162/neco.2008.04-08-750.

Abstract:
In this theoretical contribution, we provide mathematical proof that two of the most important classes of network learning—correlation-based differential Hebbian learning and reward-based temporal difference learning—are asymptotically equivalent when timing the learning with a modulatory signal. This opens the opportunity to consistently reformulate most of the abstract reinforcement learning framework from a correlation-based perspective more closely related to the biophysics of neurons.
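A toy numerical check conveys the flavor of the equivalence; the Gaussian activity traces and the learning rate below are assumptions for illustration:

    import numpy as np

    t = np.arange(100.0)
    u = np.exp(-0.5 * ((t - 30) / 5) ** 2)   # presynaptic activity trace
    v = np.exp(-0.5 * ((t - 40) / 5) ** 2)   # postsynaptic activity trace
    eta = 0.01

    # Differential Hebbian learning: correlate the input with the
    # temporal derivative of the output, dw/dt = eta * u(t) * v'(t).
    dw_diff_hebbian = eta * np.sum(u[:-1] * np.diff(v))

    # Temporal-difference flavour: correlate the input with the
    # one-step prediction difference v(t+1) - v(t).
    dw_td_like = eta * np.sum(u[:-1] * (v[1:] - v[:-1]))

    print(dw_diff_hebbian, dw_td_like)   # identical in discrete time

The discrete-time identity is of course immediate; the paper's contribution is the asymptotic equivalence of the full learning systems when the updates are timed by a modulatory signal.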
8

El-Laithy, Karim, and Martin Bogdan. "A Reinforcement Learning Framework for Spiking Networks with Dynamic Synapses." Computational Intelligence and Neuroscience 2011 (2011): 1–12. http://dx.doi.org/10.1155/2011/869348.

Abstract:
An integration of both the Hebbian-based and reinforcement learning (RL) rules is presented for dynamic synapses. The proposed framework permits the Hebbian rule to update the hidden synaptic model parameters regulating the synaptic response, rather than the synaptic weights. This is performed using both the value and the sign of the temporal difference in the reward signal after each trial. Applying this framework, a spiking network with spike-timing-dependent synapses is tested on learning the exclusive-OR computation on a temporally coded basis. Reward values are calculated from the distance between the output spike train of the network and a reference target train. Results show that the network is able to capture the required dynamics and that the proposed framework can indeed be seen as an integrated version of Hebbian learning and RL. The proposed framework is tractable and computationally inexpensive. It is applicable to a wide class of synaptic models and is not restricted to the neural representation used. This generality, along with the reported results, supports adopting the introduced approach to benefit from biologically plausible synaptic models in a wide range of signal processing applications.
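As a loose illustration of the framework's central move, steering a Hebbian update of hidden synaptic parameters with the temporal difference in reward rather than updating weights, consider this sketch; the parameter names, the reward stand-in, and the multiplicative combination are all assumptions, not the paper's equations:

    import random

    eta = 0.05
    params = {"facilitation": 0.5, "time_constant": 0.5}  # hidden synaptic parameters
    prev_reward = 0.0

    for trial in range(100):
        reward = random.random()        # stand-in for the spike-train distance reward
        td = reward - prev_reward       # temporal difference in the reward signal
        coincidence = random.random()   # stand-in for a Hebbian pre/post coincidence term
        for key in params:
            # Both the value and the sign of td steer the Hebbian update.
            params[key] = min(max(params[key] + eta * td * coincidence, 0.0), 1.0)
        prev_reward = reward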
9

Mitchison, Graeme. "Removing Time Variation with the Anti-Hebbian Differential Synapse." Neural Computation 3, no. 3 (1991): 312–20. http://dx.doi.org/10.1162/neco.1991.3.3.312.

Abstract:
I describe a local synaptic learning rule that can be used to remove the effects of certain types of systematic temporal variation in the inputs to a unit. According to this rule, changes in synaptic weight result from a conjunction of short-term temporal changes in the inputs and the output. Formally, this is like the differential rule proposed by Klopf (1986) and Kosko (1986), except for a change of sign, which gives it an anti-Hebbian character. By itself this rule is insufficient. A weight conservation condition is needed to prevent the weights from collapsing to zero, and some further constraint—implemented here by a biasing term—is needed to select particular sets of weights from the subspace of those that give minimal variation. As an example, I show that this rule will generate center-surround receptive fields that remove temporally varying linear gradients from the inputs.
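A minimal sketch of the rule's three ingredients, the anti-Hebbian product of short-term changes, weight conservation, and a small biasing term, with all step sizes and input statistics assumed:

    import numpy as np

    rng = np.random.default_rng(0)
    w = rng.random(10)
    eta, bias = 0.01, 1e-3
    x_prev = rng.random(10)

    for step in range(1000):
        x = x_prev + 0.01 * np.arange(10)   # inputs carrying a temporal gradient
        dx = x - x_prev                     # short-term change in the inputs
        dy = w @ dx                         # short-term change in the output
        w -= eta * dx * dy                  # anti-Hebbian differential update
        w += bias                           # biasing term to select among solutions
        w /= np.sum(np.abs(w))              # weight conservation
        x_prev = x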
10

Kempter, Richard, Wulfram Gerstner, and J. Leo van Hemmen. "Intrinsic Stabilization of Output Rates by Spike-Based Hebbian Learning." Neural Computation 13, no. 12 (2001): 2709–41. http://dx.doi.org/10.1162/089976601317098501.

Abstract:
We study analytically a model of long-term synaptic plasticity where synaptic changes are triggered by presynaptic spikes, postsynaptic spikes, and the time differences between presynaptic and postsynaptic spikes. The changes due to correlated input and output spikes are quantified by means of a learning window. We show that plasticity can lead to an intrinsic stabilization of the mean firing rate of the postsynaptic neuron. Subtractive normalization of the synaptic weights (summed over all presynaptic inputs converging on a postsynaptic neuron) follows if, in addition, the mean input rates and the mean input correlations are identical at all synapses. If the integral over the learning window is positive, firing-rate stabilization requires a non-Hebbian component, whereas such a component is not needed if the integral of the learning window is negative. A negative integral corresponds to anti-Hebbian learning in a model with slowly varying firing rates. For spike-based learning, a strict distinction between Hebbian and anti-Hebbian rules is questionable since learning is driven by correlations on the timescale of the learning window. The correlations between presynaptic and postsynaptic firing are evaluated for a piecewise-linear Poisson model and for a noisy spiking neuron model with refractoriness. While a negative integral over the learning window leads to intrinsic rate stabilization, the positive part of the learning window picks up spatial and temporal correlations in the input.
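The sign condition on the window integral is easy to probe numerically; a sketch with an assumed exponential learning window in which depression slightly dominates:

    import numpy as np

    dt = np.linspace(-100.0, 100.0, 20001)   # spike time difference (ms)
    a_plus, a_minus, tau = 1.0, 1.05, 20.0
    window = np.where(dt > 0, a_plus * np.exp(-dt / tau),
                      -a_minus * np.exp(dt / tau))

    # A negative integral is the regime in which the analysis above
    # predicts intrinsic stabilization of the output firing rate.
    print("window integral:", np.trapz(window, dt))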
11

Stone, James V. "Learning Perceptually Salient Visual Parameters Using Spatiotemporal Smoothness Constraints." Neural Computation 8, no. 7 (1996): 1463–92. http://dx.doi.org/10.1162/neco.1996.8.7.1463.

Abstract:
A model is presented for unsupervised learning of low level vision tasks, such as the extraction of surface depth. A key assumption is that perceptually salient visual parameters (e.g., surface depth) vary smoothly over time. This assumption is used to derive a learning rule that maximizes the long-term variance of each unit's outputs, whilst simultaneously minimizing its short-term variance. The length of the half-life associated with each of these variances is not critical to the success of the algorithm. The learning rule involves a linear combination of anti-Hebbian and Hebbian weight changes, over short and long time scales, respectively. This maximizes the information throughput with respect to low-frequency parameters implicit in the input sequence. The model is used to learn stereo disparity from temporal sequences of random-dot and gray-level stereograms containing synthetically generated subpixel disparities. The presence of temporal discontinuities in disparity does not prevent learning or generalization to previously unseen image sequences. The implications of this class of unsupervised methods for learning in perceptual systems are discussed.
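The objective can be written down in a few lines; a sketch using exponentially weighted short- and long-term averages, with the half-life constants assumed rather than taken from the paper:

    import numpy as np

    def merit(z, lam_short=0.9, lam_long=0.999):
        # F = log(long-term variance / short-term variance) of the output z.
        # Maximizing F gives Hebbian weight changes on the long time scale
        # and anti-Hebbian changes on the short one.
        short_avg = long_avg = z[0]
        v_short = v_long = 1e-9
        for zt in z[1:]:
            short_avg = lam_short * short_avg + (1 - lam_short) * zt
            long_avg = lam_long * long_avg + (1 - lam_long) * zt
            v_short += (zt - short_avg) ** 2
            v_long += (zt - long_avg) ** 2
        return np.log(v_long / v_short)

    # A slowly varying output scores higher than white noise.
    rng = np.random.default_rng(0)
    print(merit(np.sin(np.linspace(0, 20, 2000))), merit(rng.standard_normal(2000)))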
12

Bush, Daniel, Andrew Philippides, Phil Husbands, and Michael O'Shea. "Reconciling the STDP and BCM Models of Synaptic Plasticity in a Spiking Recurrent Neural Network." Neural Computation 22, no. 8 (2010): 2059–85. http://dx.doi.org/10.1162/neco_a_00003-bush.

Abstract:
Rate-coded Hebbian learning, as characterized by the BCM formulation, is an established computational model of synaptic plasticity. Recently it has been demonstrated that changes in the strength of synapses in vivo can also depend explicitly on the relative timing of pre- and postsynaptic firing. Computational modeling of this spike-timing-dependent plasticity (STDP) has demonstrated that it can provide inherent stability or competition based on local synaptic variables. However, it has also been demonstrated that these properties rely on synaptic weights being either depressed or unchanged by an increase in mean stochastic firing rates, which directly contradicts empirical data. Several analytical studies have addressed this apparent dichotomy and identified conditions under which distinct and disparate STDP rules can be reconciled with rate-coded Hebbian learning. The aim of this research is to verify, unify, and expand on these previous findings by manipulating each element of a standard computational STDP model in turn. This allows us to identify the conditions under which this plasticity rule can replicate experimental data obtained using both rate and temporal stimulation protocols in a spiking recurrent neural network. Our results describe how the relative scale of mean synaptic weights and their dependence on stochastic pre- or postsynaptic firing rates can be manipulated by adjusting the exact profile of the asymmetric learning window and temporal restrictions on spike pair interactions respectively. These findings imply that previously disparate models of rate-coded autoassociative learning and temporally coded heteroassociative learning, mediated by symmetric and asymmetric connections respectively, can be implemented in a single network using a single plasticity rule. However, we also demonstrate that forms of STDP that can be reconciled with rate-coded Hebbian learning do not generate inherent synaptic competition, and thus some additional mechanism is required to guarantee long-term input-output selectivity.
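The rate dependence at issue can be probed directly with a toy pair-based STDP model under independent Poisson firing; the window parameters and the all-to-all pairing scheme are assumptions:

    import numpy as np

    rng = np.random.default_rng(1)

    def mean_drift(rate_hz, a_plus=1.0, a_minus=1.05, tau=20.0, T=10_000.0):
        # Average weight change per ms for independent Poisson pre/post trains.
        pre = np.sort(rng.uniform(0, T, rng.poisson(rate_hz * T / 1000.0)))
        post = np.sort(rng.uniform(0, T, rng.poisson(rate_hz * T / 1000.0)))
        dw = 0.0
        for tp in pre:                     # all-to-all spike-pair interactions
            d = post - tp
            dw += a_plus * np.exp(-d[d > 0] / tau).sum()
            dw -= a_minus * np.exp(d[d <= 0] / tau).sum()
        return dw / T

    for r in (5, 20, 80):
        print(r, "Hz:", mean_drift(r))

With depression dominating the window, mean weights are pushed down ever harder as stochastic firing rates rise, which is exactly the behavior the abstract notes is contradicted by empirical data; reshaping the window or restricting spike-pair interactions changes this dependence.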
13

Nguyen, Tien, Khoa Pham, and Kyeong-Sik Min. "Memristor-CMOS Hybrid Circuit for Temporal-Pooling of Sensory and Hippocampal Responses of Cortical Neurons." Materials 12, no. 6 (2019): 875. http://dx.doi.org/10.3390/ma12060875.

Abstract:
As a software framework, Hierarchical Temporal Memory (HTM) has been developed to perform the brain’s neocortical functions, such as spatial and temporal pooling. However, it should be realized in hardware, not software, not only to mimic the neocortical function but also to exploit its architectural benefit. To do so, we propose here a new memristor-CMOS (Complementary Metal-Oxide-Semiconductor) hybrid circuit for temporal pooling, which is composed of input-layer and output-layer neurons mimicking the neocortex. In the hybrid circuit, the input-layer neurons have proximal and basal/distal dendrites to combine sensory information with the temporal/location information from the brain’s hippocampus. Using the same crossbar architecture, the output-layer neurons can perform a prediction by integrating the temporal information on the basal/distal dendrites. For training the proposed circuit, we used only simple Hebbian learning, not the complicated backpropagation algorithm. Owing to the simple hardware of Hebbian learning, the proposed hybrid circuit is well suited to online learning. The proposed memristor-CMOS hybrid circuit has been verified by circuit simulation using a real memristor model. It has been verified to predict both ordinal and out-of-order sequences, and it has been tested against external noise and memristance variation.
14

Shigematsu, Yukifumi, Hiroshi Okamoto, Kazuhisa Ichikawa, and Gen Matsumoto. "Temporal Event Association and Output-Dependent Learning: A Proposed Scheme of Neural Molecular Connections." Journal of Advanced Computational Intelligence and Intelligent Informatics 3, no. 4 (1999): 234–44. http://dx.doi.org/10.20965/jaciii.1999.p0234.

Abstract:
We introduce a model of a temporal-event-associated and output-dependent learning rule, genetically acquired and expressed in a single neuron. Such a rule is essentially indispensable for the brain to acquire, by itself, algorithms for processing its self-selected information. The proposed learning rule is a revised Hebbian rule with a synaptic history trace that correlates one temporal event with others. Temporal events are memorized so as to be expressed at the synaptic site of the inputs, in the form of asymmetric neural strength corrections associated with those events. This learning algorithm has the advantage of associating one temporal event with others, giving the neuron predictive ability while also making recall flexible. Recall is, according to this learning rule, independent of the timing supposed to be crucial in learning. Underlying molecular mechanisms for the proposed learning rule are discussed, and we identify three important factors: (1) the back-propagating action potentials experimentally observed in a single neuron play a crucial role in output-dependent learning; (2) temporally associated, nonlinear couplings are modeled at the molecular level with glutamate receptors, voltage-dependent channels, intracellular calcium concentration, protein kinases, and phosphatases; and (3) the intracellular concentration of inositol trisphosphate [IP3] is the memory substrate of the synaptic history.
15

Gillett, Maxwell, Ulises Pereira, and Nicolas Brunel. "Characteristics of sequential activity in networks with temporally asymmetric Hebbian learning." Proceedings of the National Academy of Sciences 117, no. 47 (2020): 29948–58. http://dx.doi.org/10.1073/pnas.1918674117.

Abstract:
Sequential activity has been observed in multiple neuronal circuits across species, neural structures, and behaviors. It has been hypothesized that sequences could arise from learning processes. However, it is still unclear whether biologically plausible synaptic plasticity rules can organize neuronal activity to form sequences whose statistics match experimental observations. Here, we investigate temporally asymmetric Hebbian rules in sparsely connected recurrent rate networks and develop a theory of the transient sequential activity observed after learning. These rules transform a sequence of random input patterns into synaptic weight updates. After learning, recalled sequential activity is reflected in the transient correlation of network activity with each of the stored input patterns. Using mean-field theory, we derive a low-dimensional description of the network dynamics and compute the storage capacity of these networks. Multiple temporal characteristics of the recalled sequential activity are consistent with experimental observations. We find that the degree of sparseness of the recalled sequences can be controlled by nonlinearities in the learning rule. Furthermore, sequences maintain robust decoding, but display highly labile dynamics, when synaptic connectivity is continuously modified due to noise or storage of other patterns, similar to recent observations in hippocampus and parietal cortex. Finally, we demonstrate that our results also hold in recurrent networks of spiking neurons with separate excitatory and inhibitory populations.
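The essence of such a temporally asymmetric rule is an outer product linking each pattern to its successor; a minimal sketch with binary random patterns and sign dynamics (the network size, pattern count, and gain functions are assumptions):

    import numpy as np

    rng = np.random.default_rng(2)
    N, P = 500, 10
    xi = rng.choice([-1.0, 1.0], size=(P, N))    # sequence of random patterns

    # Temporally asymmetric Hebbian learning: pattern mu (pre) is bound
    # to its successor mu+1 (post), storing the transition mu -> mu+1.
    W = sum(np.outer(xi[m + 1], xi[m]) for m in range(P - 1)) / N

    # Recall: seed the network with the first pattern and iterate; the
    # state sweeps through the stored sequence.
    r = xi[0].copy()
    for step in range(1, P):
        r = np.sign(W @ r)
        print(step, int(np.argmax(xi @ r / N)))   # prints 1, 2, ..., P-1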
16

Brunel, Nicolas. "Hebbian Learning of Context in Recurrent Neural Networks." Neural Computation 8, no. 8 (1996): 1677–710. http://dx.doi.org/10.1162/neco.1996.8.8.1677.

Abstract:
Single electrode recordings in the inferotemporal cortex of monkeys during delayed visual memory tasks provide evidence for attractor dynamics in the observed region. The persistent elevated delay activities could be internal representations of features of the learned visual stimuli shown to the monkey during training. When uncorrelated stimuli are presented during training in a fixed sequence, these experiments display significant correlations between the internal representations. Recently a simple model of attractor neural network has reproduced quantitatively the measured correlations. An underlying assumption of the model is that the synaptic matrix formed during the training phase contains in its efficacies information about the contiguity of persistent stimuli in the training sequence. We present here a simple unsupervised learning dynamics that produces such a synaptic matrix if sequences of stimuli are repeatedly presented to the network at fixed order. The resulting matrix is then shown to convert temporal correlations during training into spatial correlations between attractors. The scenario is that, in the presence of selective delay activity, at the presentation of each stimulus, the activity distribution in the neural assembly contains information of both the current stimulus and the previous one (carried by the attractor). Thus the recurrent synaptic matrix can code not only for each of the stimuli presented to the network but also for their context. We combine the idea that for learning to be effective, synaptic modification should be stochastic, with the fact that attractors provide learnable information about two consecutive stimuli. We calculate explicitly the probability distribution of synaptic efficacies as a function of training protocol, that is, the order in which stimuli are presented to the network. We then solve for the dynamics of a network composed of integrate-and-fire excitatory and inhibitory neurons with a matrix of synaptic collaterals resulting from the learning dynamics. The network has a stable spontaneous activity, and stable delay activity develops after a critical learning stage. The availability of a learning dynamics makes possible a number of experimental predictions for the dependence of the delay activity distributions and the correlations between them, on the learning stage and the learning protocol. In particular it makes specific predictions for pair-associates delay experiments.
17

Stone, James V. "A Canonical Microfunction for Learning Perceptual Invariances." Perception 25, no. 2 (1996): 207–20. http://dx.doi.org/10.1068/p250207.

Abstract:
An unsupervised method is presented which permits a set of model neurons, or a microcircuit, to learn low-level vision tasks, such as the extraction of surface depth. Each microcircuit implements a simple, generic strategy which is based on a key assumption: perceptually salient visual invariances, such as surface depth, vary smoothly over time. In the process of learning to extract smoothly varying invariances, each microcircuit maximises a microfunction. This is achieved by means of a learning rule which maximises the long-term variance of the state of a model neuron and simultaneously minimises its short-term variance. The learning rule involves a linear combination of anti-Hebbian and Hebbian weight changes, over short and long time scales, respectively. The method is demonstrated on a hyperacuity task: estimating subpixel stereo disparity from a temporal sequence of random-dot stereograms. After learning, the microcircuit generalises, without additional learning, to previously unseen image sequences. It is proposed that the approach adopted here may be used to define a canonical microfunction, which can be used to learn many perceptually salient invariances.
18

Barreto, Guilherme de A., and Aluizio F. R. Araújo. "Unsupervised Learning and Recall of Temporal Sequences: An Application to Robotics." International Journal of Neural Systems 9, no. 3 (1999): 235–42. http://dx.doi.org/10.1142/s012906579900023x.

Abstract:
This paper describes an unsupervised neural network model for learning and recall of temporal patterns. The model comprises two groups of synaptic weights, named competitive feedforward and Hebbian feedback, which are responsible for encoding the static and temporal features of the sequence respectively. Three additional mechanisms allow the network to deal with complex sequences: context units, a neuron commitment equation, and redundancy in the representation of sequence states. The proposed network encodes a set of robot trajectories which may contain states in common, and retrieves them accurately in the correct order. Further tests evaluate the fault-tolerance and noise sensitivity of the proposed model.
19

Schulz, Reiner, and James A. Reggia. "Temporally Asymmetric Learning Supports Sequence Processing in Multi-Winner Self-Organizing Maps." Neural Computation 16, no. 3 (2004): 535–61. http://dx.doi.org/10.1162/089976604772744901.

Abstract:
We examine the extent to which modified Kohonen self-organizing maps (SOMs) can learn unique representations of temporal sequences while still supporting map formation. Two biologically inspired extensions are made to traditional SOMs: selection of multiple simultaneous rather than single “winners” and the use of local intramap connections that are trained according to a temporally asymmetric Hebbian learning rule. The extended SOM is then trained with variable-length temporal sequences that are composed of phoneme feature vectors, with each sequence corresponding to the phonetic transcription of a noun. The model transforms each input sequence into a spatial representation (final activation pattern on the map). Training improves this transformation by, for example, increasing the uniqueness of the spatial representations of distinct sequences, while still retaining map formation based on input patterns. The closeness of the spatial representations of two sequences is found to correlate significantly with the sequences' similarity. The extended model presented here raises the possibility that SOMs may ultimately prove useful as visualization tools for temporal sequences and as preprocessors for sequence pattern recognition systems.
20

Porr, Bernd, and Florentin Wörgötter. "Strongly Improved Stability and Faster Convergence of Temporal Sequence Learning by Using Input Correlations Only." Neural Computation 18, no. 6 (2006): 1380–412. http://dx.doi.org/10.1162/neco.2006.18.6.1380.

Abstract:
Currently all important, low-level, unsupervised network learning algorithms follow the paradigm of Hebb, where input and output activity are correlated to change the connection strength of a synapse. However, as a consequence, classical Hebbian learning always carries a potentially destabilizing autocorrelation term, which is due to the fact that every input is in a weighted form reflected in the neuron's output. This self-correlation can lead to positive feedback, where increasing weights will increase the output, and vice versa, which may result in divergence. This can be avoided by different strategies like weight normalization or weight saturation, which, however, can cause different problems. Consequently, in most cases, high learning rates cannot be used for Hebbian learning, leading to relatively slow convergence. Here we introduce a novel correlation-based learning rule that is related to our isotropic sequence order (ISO) learning rule (Porr & Wörgötter, 2003a), but replaces the derivative of the output in the learning rule with the derivative of the reflex input. Hence, the new rule uses input correlations only, effectively implementing strict heterosynaptic learning. This looks like a minor modification but leads to dramatically improved properties. Elimination of the output from the learning rule removes the unwanted, destabilizing autocorrelation term, allowing us to use high learning rates. As a consequence, we can mathematically show that the theoretical optimum of one-shot learning can be reached under ideal conditions with the new rule. This result is then tested against four different experimental setups, and we will show that in all of them, very few (and sometimes only one) learning experiences are needed to achieve the learning goal. As a consequence, the new learning rule is up to 100 times faster and in general more stable than ISO learning.
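The key substitution, using the derivative of the reflex input rather than the derivative of the output, can be sketched as follows; the Gaussian stand-ins for the filtered reflex and predictive signals and the learning rate are assumptions:

    import numpy as np

    t = np.arange(300.0)
    x0 = np.exp(-0.5 * ((t - 150) / 10) ** 2)   # filtered reflex input (late)
    x1 = np.exp(-0.5 * ((t - 120) / 10) ** 2)   # filtered predictive input (earlier)
    w1, eta = 0.0, 0.5

    for trial in range(20):
        # Input-correlation-only update: dw1/dt = eta * x1(t) * x0'(t).
        # The output never enters, so the destabilizing autocorrelation
        # term of classical Hebbian learning is absent.
        w1 += eta * np.sum(x1[:-1] * np.diff(x0))

    print(w1)   # grows while the reflex input is still being triggered

In the closed-loop setting of the paper, this growth stops by itself once the learned anticipatory action prevents the reflex input from occurring, which is what permits the high learning rates and one-shot learning discussed above.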
21

Wallis, Guy, and Roland Baddeley. "Optimal, Unsupervised Learning in Invariant Object Recognition." Neural Computation 9, no. 4 (1997): 883–94. http://dx.doi.org/10.1162/neco.1997.9.4.883.

Abstract:
A means for establishing transformation-invariant representations of objects is proposed and analyzed, in which different views are associated on the basis of the temporal order of the presentation of these views, as well as their spatial similarity. Assuming knowledge of the distribution of presentation times, an optimal linear learning rule is derived. Simulations of a competitive network trained on a character recognition task are then used to highlight the success of this learning rule in relation to simple Hebbian learning and to show that the theory can give accurate quantitative predictions for the optimal parameters for such networks.
22

Barreto, Guilherme de A., and Aluizio F. R. Araújo. "Unsupervised Learning and Temporal Context to Recall Complex Robot Trajectories." International Journal of Neural Systems 11, no. 1 (2001): 11–22. http://dx.doi.org/10.1142/s0129065701000461.

Abstract:
An unsupervised neural network is proposed to learn and recall complex robot trajectories. Two cases are considered: (i) A single trajectory in which a particular arm configuration (state) may occur more than once, and (ii) trajectories sharing states with each other. Ambiguities occur in both cases during recall of such trajectories. The proposed model consists of two groups of synaptic weights trained by competitive and Hebbian learning laws. They are responsible for encoding spatial and temporal features of the input sequences, respectively. Three mechanisms allow the network to deal with repeated or shared states: local and global context units, neurons disabled from learning, and redundancy. The network reproduces the current and the next state of the learned sequences and is able to resolve ambiguities. The model was simulated over various sets of robot trajectories in order to evaluate learning and recall, trajectory sampling effects and robustness.
23

Kawata, Sotaro, and Akira Hirose. "Frequency-Multiplexing Ability of Complex-Valued Hebbian Learning in Logic Gates." International Journal of Neural Systems 18, no. 2 (2008): 173–84. http://dx.doi.org/10.1142/s0129065708001488.

Abstract:
Lightwave has attractive characteristics such as spatial parallelism, temporal rapidity in signal processing, and frequency band vastness. In particular, the vast carrier frequency bandwidth promises novel information processing. In this paper, we propose a novel optical logic gate that learns multiple functions at frequencies different from one another, and we analyze the frequency-domain multiplexing ability of learning based on the complex-valued Hebbian rule. We evaluate the averaged error function values in the learning process and the error probabilities in the realized logic functions. We investigate optimal learning parameters as well as performance dependence on the number of learning iterations and the number of parallel paths per neuron. Results show a trade-off among learning parameters such as the learning time constant and the learning gain. We also find that when we prepare 10 optical path differences and conduct 200 learning iterations, the error probability decreases completely to zero in a three-function multiplexing case. At the same time, the error probability is tolerant of the path number: even if the number of paths is reduced by half, the error probability remains almost zero. The results can be useful for determining neural parameters for future optical neural network systems and devices that utilize the vast frequency bandwidth for frequency-domain multiplexing.
24

Lobov, Sergey A., Andrey V. Chernyshov, Nadia P. Krilova, Maxim O. Shamshin, and Victor B. Kazantsev. "Competitive Learning in a Spiking Neural Network: Towards an Intelligent Pattern Classifier." Sensors 20, no. 2 (2020): 500. http://dx.doi.org/10.3390/s20020500.

Abstract:
One of the modern trends in the design of human–machine interfaces (HMI) is to involve the so-called spiking neural networks (SNNs) in signal processing. The SNNs can be trained by simple and efficient biologically inspired algorithms. In particular, we have shown that sensory neurons in the input layer of SNNs can simultaneously encode the input signal based both on the spiking frequency rate and on varying the latency in generating spikes. In the case of such mixed temporal-rate coding, the SNN should implement learning that works properly for both types of coding. Based on this, we investigate how a single neuron can be trained with pure rate and temporal patterns, and then build a universal SNN that is trained using mixed coding. In particular, we study Hebbian and competitive learning in SNNs in the context of temporal and rate coding problems. We show that the use of Hebbian learning through the pair-based and triplet-based spike-timing-dependent plasticity (STDP) rules is achievable for temporal coding, but not for rate coding. Synaptic competition inducing depression of poorly used synapses is required to ensure neural selectivity in rate coding. This kind of competition can be implemented by the so-called forgetting function that is dependent on neuron activity. We show that coherent use of the triplet-based STDP and synaptic competition with the forgetting function is sufficient for rate coding. Next, we propose an SNN capable of classifying electromyographical (EMG) patterns using an unsupervised learning procedure. The neuron competition achieved via lateral inhibition ensures the “winner takes all” principle among classifier neurons. The SNN also provides a gradual output response dependent on muscular contraction strength. Furthermore, we modify the SNN to implement a supervised learning method based on stimulation of the target classifier neuron synchronously with the network input. In a problem of discriminating three EMG patterns, the SNN with supervised learning shows a median accuracy of 99.5%, which is close to the result demonstrated by a multilayer perceptron trained by error backpropagation.
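A rough sketch of the competition mechanism, pair-based STDP plus an activity-dependent forgetting term that lets poorly used synapses decay; the functional form of the forgetting and all constants are assumptions for illustration, not the paper's model:

    import numpy as np

    rng = np.random.default_rng(3)
    n_syn, T = 50, 5000
    w = np.full(n_syn, 0.5)
    a_plus, a_minus, tau, forget = 0.01, 0.012, 20.0, 0.005
    pre_rate = np.linspace(2, 40, n_syn) / 1000.0    # per-ms spike probabilities
    last_pre = np.full(n_syn, -1e9)
    last_post = -1e9

    for t in range(T):
        pre = rng.random(n_syn) < pre_rate
        last_pre[pre] = t
        w[pre] -= a_minus * np.exp(-(t - last_post) / tau)   # LTD on pre spikes
        if rng.random() < 0.02:                              # postsynaptic spike
            last_post = t
            w += a_plus * np.exp(-(t - last_pre) / tau)      # LTP on post spike
            w -= forget * w       # forgetting tied to neuron activity
        w = np.clip(w, 0.0, 1.0)

    print(w[:5].round(3), w[-5:].round(3))   # weakly driven synapses decay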
25

Zappacosta, Stefano, Francesco Mannella, Marco Mirolli, and Gianluca Baldassarre. "General differential Hebbian learning: Capturing temporal relations between events in neural networks and the brain." PLOS Computational Biology 14, no. 8 (2018): e1006227. http://dx.doi.org/10.1371/journal.pcbi.1006227.

26

Saudargiene, Ausra, Bernd Porr, and Florentin Wörgötter. "How the Shape of Pre- and Postsynaptic Signals Can Influence STDP: A Biophysical Model." Neural Computation 16, no. 3 (2004): 595–625. http://dx.doi.org/10.1162/089976604772744929.

Abstract:
Spike-timing-dependent plasticity (STDP) is described by long-term potentiation (LTP), when a presynaptic event precedes a postsynaptic event, and by long-term depression (LTD), when the temporal order is reversed. In this article, we present a biophysical model of STDP based on a differential Hebbian learning rule (ISO learning). This rule correlates presynaptically the NMDA channel conductance with the derivative of the membrane potential at the synapse as the postsynaptic signal. The model is able to reproduce the generic STDP weight change characteristic. We find that (1) The actual shape of the weight change curve strongly depends on the NMDA channel characteristics and on the shape of the membrane potential at the synapse. (2) The typical antisymmetrical STDP curve (LTD and LTP) can become similar to a standard Hebbian characteristic (LTP only) without having to change the learning rule. This occurs if the membrane depolarization has a shallow onset and is long lasting. (3) It is known that the membrane potential varies along the dendrite as a result of the active or passive backpropagation of somatic spikes or because of local dendritic processes. As a consequence, our model predicts that learning properties will be different at different locations on the dendritic tree. In conclusion, such site-specific synaptic plasticity would provide a neuron with powerful learning capabilities.
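The core of the rule, correlating the NMDA conductance with the derivative of the postsynaptic membrane potential, fits in a few lines; the kernel shapes and time constants below are assumptions rather than the paper's biophysical fits:

    import numpy as np

    t = np.arange(0.0, 100.0, 0.1)   # ms

    def dw(t_pre, t_post, eta=1.0, tau_nmda=40.0, tau_bp=3.0):
        # NMDA conductance opened by the presynaptic spike.
        g = np.where(t >= t_pre, np.exp(-(t - t_pre) / tau_nmda), 0.0)
        # Postsynaptic potential dominated by a back-propagating spike.
        v = np.where(t >= t_post, np.exp(-(t - t_post) / tau_bp), 0.0)
        # ISO-style differential Hebbian update: dw/dt = eta * g(t) * v'(t).
        return eta * np.sum(g[:-1] * np.diff(v))

    # Pre 10 ms before post gives LTP; reversing the order gives LTD.
    print(dw(20.0, 30.0), dw(30.0, 20.0))

Because the result depends on the shapes of g and v, swapping in a shallower, longer-lasting depolarization for v reproduces the paper's point that the weight-change curve tracks the postsynaptic signal's shape.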
27

Cui, Yuwei, Subutai Ahmad, and Jeff Hawkins. "Continuous Online Sequence Learning with an Unsupervised Neural Network Model." Neural Computation 28, no. 11 (2016): 2474–504. http://dx.doi.org/10.1162/neco_a_00893.

Abstract:
The ability to recognize and predict temporal sequences of sensory inputs is vital for survival in natural environments. Based on many known properties of cortical neurons, hierarchical temporal memory (HTM) sequence memory recently has been proposed as a theoretical framework for sequence learning in the cortex. In this letter, we analyze properties of HTM sequence memory and apply it to sequence learning and prediction problems with streaming data. We show the model is able to continuously learn a large number of variable order temporal sequences using an unsupervised Hebbian-like learning rule. The sparse temporal codes formed by the model can robustly handle branching temporal sequences by maintaining multiple predictions until there is sufficient disambiguating evidence. We compare the HTM sequence memory with other sequence learning algorithms, including statistical methods—autoregressive integrated moving average; feedforward neural networks—time delay neural network and online sequential extreme learning machine; and recurrent neural networks—long short-term memory and echo-state networks on sequence prediction problems with both artificial and real-world data. The HTM model achieves comparable accuracy to other state-of-the-art algorithms. The model also exhibits properties that are critical for sequence learning, including continuous online learning, the ability to handle multiple predictions and branching sequences with high-order statistics, robustness to sensor noise and fault tolerance, and good performance without task-specific hyperparameter tuning. Therefore, the HTM sequence memory not only advances our understanding of how the brain may solve the sequence learning problem but is also applicable to real-world sequence learning problems from continuous data streams.
28

Senn, Walter, Martin Schneider, and Berthold Ruf. "Activity-Dependent Development of Axonal and Dendritic Delays, or, Why Synaptic Transmission Should Be Unreliable." Neural Computation 14, no. 3 (2002): 583–619. http://dx.doi.org/10.1162/089976602317250915.

Abstract:
Systematic temporal relations between single neuronal activities or population activities are ubiquitous in the brain. No experimental evidence, however, exists for a direct modification of neuronal delays during Hebbian-type stimulation protocols. We show that in fact an explicit delay adaptation is not needed if one assumes that the synaptic strengths are modified according to the recently observed temporally asymmetric learning rule with the downregulating branch dominating the upregulating branch. During development, slow, unbiased fluctuations in the transmission time, together with temporally correlated network activity, may control neural growth and implicitly induce drifts in the axonal delays and dendritic latencies. These delays and latencies become optimally tuned in the sense that the synaptic response tends to peak in the soma of the postsynaptic cell if this is most likely to fire. The nature of the selection process requires unreliable synapses in order to give successful synapses an evolutionary advantage over the others. The width of the learning function also determines the preferred dendritic delay and the preferred width of the postsynaptic response. Hence, it may implicitly determine whether a synaptic connection provides a precisely timed or a broadly tuned “contextual” signal.
29

Wörgötter, Florentin, and Bernd Porr. "Temporal Sequence Learning, Prediction, and Control: A Review of Different Models and Their Relation to Biological Mechanisms." Neural Computation 17, no. 2 (2005): 245–319. http://dx.doi.org/10.1162/0899766053011555.

Abstract:
In this review, we compare methods for temporal sequence learning (TSL) across the disciplines machine-control, classical conditioning, neuronal models for TSL as well as spike-timing-dependent plasticity (STDP). This review introduces the most influential models and focuses on two questions: To what degree are reward-based (e.g., TD learning) and correlation-based (Hebbian) learning related? and How do the different models correspond to possibly underlying biological mechanisms of synaptic plasticity? We first compare the different models in an open-loop condition, where behavioral feedback does not alter the learning. Here we observe that reward-based and correlation-based learning are indeed very similar. Machine control is then used to introduce the problem of closed-loop control (e.g., actor-critic architectures). Here the problem of evaluative (rewards) versus nonevaluative (correlations) feedback from the environment will be discussed, showing that both learning approaches are fundamentally different in the closed-loop condition. In trying to answer the second question, we compare neuronal versions of the different learning architectures to the anatomy of the involved brain structures (basal-ganglia, thalamus, and cortex) and the molecular biophysics of glutamatergic and dopaminergic synapses. Finally, we discuss the different algorithms used to model STDP and compare them to reward-based learning rules. Certain similarities are found in spite of the strongly different timescales. Here we focus on the biophysics of the different calcium-release mechanisms known to be involved in STDP.
30

Treur, Jan. "Relating Emerging Adaptive Network Behavior to Network Structure: A Declarative Network Analysis Perspective." Vietnam Journal of Computer Science 08, no. 01 (2020): 39–92. http://dx.doi.org/10.1142/s2196888821500020.

Abstract:
In this paper, a central challenge for dynamic network modeling is addressed: how can the emerging behavior of an adaptive network be related to characteristics of the adaptive network’s structure? By applying network reification, the adaptation structure is modeled in a declarative manner as a subnetwork of a reified network extending the base network. This construction can be used to model and analyze any adaptive network in a neat and declarative manner, where the adaptation principles are described by declarative mathematical relations and functions in reified temporal-causal network format. In different examples, it is shown how certain adaptation principles known from the literature can be formulated easily in such a declarative reified temporal-causal network format. The main question of how emerging adaptive network behavior relates to network structure is addressed, among others, by means of a number of theorems of the format “properties of reified network structure characteristics imply emerging adaptive behavior properties”. In such theorems, classes of networks are considered that satisfy certain network structure properties concerning connectivity and aggregation characteristics. Results include, for example, that under some conditions on the network structure characteristics, all states eventually get the same value. Similar analysis methods are applied to reification states, in particular for adaptation principles for Hebbian learning and for bonding by homophily, respectively. Here, results include how certain properties of the aggregation characteristics of the reified network structure for Hebbian learning entail behavioral properties relating to the maximal final values of the adaptive connection weights. Similarly, results are discussed on how properties of the aggregation characteristics of the reified network structure for bonding by homophily entail behavioral properties relating to clustering and community formation in a social network.
31

Bodyanskiy, Yevgeniy, and Artem Dolotov. "A Multilayered Self-Learning Spiking Neural Network and its Learning Algorithm Based on ‘Winner-Takes-More’ Rule in Hierarchical Clustering." Scientific Journal of Riga Technical University. Computer Sciences 40, no. 1 (2009): 66–74. http://dx.doi.org/10.2478/v10143-010-0009-7.

Abstract:
This paper introduces the architecture of a multilayered self-learning spiking neural network for hierarchical data clustering. It consists of a layer of population coding and several layers of spiking neurons. Contrary to the originally suggested multilayered spiking neural network, the proposed one does not require a separate learning algorithm for lateral connections. The capability to detect irregular clusters is achieved by improving the temporal Hebbian learning algorithm: it is generalized by replacing the ‘Winner-Takes-All’ rule with a ‘Winner-Takes-More’ one. It is shown that the layer of receptive neurons can be treated as a fuzzification layer, where a pool of receptive neurons is a linguistic variable and a receptive neuron within a pool is a linguistic term. The network architecture is designed in terms of control systems theory. Using the Laplace transform notion, the spiking neuron synapse is presented as a second-order critically damped response unit. The spiking neuron soma is modeled, on the basis of bang-bang control systems theory, as a threshold detection system. A simulation experiment confirms that the proposed architecture is effective in detecting irregular clusters.
32

Belatreche, Ammar, Liam P. Maguire, Martin McGinnity, and Qing Xiang Wu. "Evolutionary Design of Spiking Neural Networks." New Mathematics and Natural Computation 2, no. 3 (2006): 237–53. http://dx.doi.org/10.1142/s179300570600049x.

Abstract:
Unlike traditional artificial neural networks (ANNs), which use a high abstraction of real neurons, spiking neural networks (SNNs) offer a biologically plausible model of realistic neurons. They differ from classical artificial neural networks in that SNNs handle and communicate information by means of the timing of individual pulses, an important feature of neuronal systems that is ignored by models based on a rate coding scheme. However, in order to make the most of these realistic neuronal models, good training algorithms are required. Most existing learning paradigms tune the synaptic weights in an unsupervised way using an adaptation of the famous Hebbian learning rule, which is based on the correlation between pre- and post-synaptic neuron activity. Nonetheless, supervised learning is more appropriate when prior knowledge about the outcome of the network is available. In this paper, a new approach for supervised training is presented with a biologically plausible architecture. An adapted evolutionary strategy (ES) is used for adjusting the synaptic strengths and delays, which underlie the learning and memory processes in the nervous system. The algorithm is applied to complex non-linearly separable problems, and the results show that the network is able to perform learning successfully by means of temporal encoding of the presented patterns.
33

Boerlin, Martin, Tobi Delbruck, and Kynan Eng. "Getting to Know Your Neighbors: Unsupervised Learning of Topography from Real-World, Event-Based Input." Neural Computation 21, no. 1 (2009): 216–38. http://dx.doi.org/10.1162/neco.2009.06-07-554.

Abstract:
Biological neural systems must grow their own connections and maintain topological relations between elements that are related to the sensory input surface. Artificial systems have traditionally prewired such maps, but the sensor arrangement is not always known and can be expensive to specify before run time. Here we present a method for learning and updating topographic maps in systems comprising modular, event-based elements. Using an unsupervised neural spike-timing-based learning rule combined with Hebbian learning, our algorithm uses the spatiotemporal coherence of the external world to train its network. It improves on existing algorithms by not assuming a known topography of the target map and includes a novel method for automatically detecting edge elements. We show how, for stimuli that are small relative to the sensor resolution, the temporal learning window parameters can be determined without using any user-specified constants. For stimuli that are larger relative to the sensor resolution, we provide a parameter extraction method that generally outperforms the small-stimulus method but requires one user-specified constant. The algorithm was tested on real data from a 64 × 64-pixel section of an event-based temporal contrast silicon retina and a 360-tile tactile luminous floor. It learned 95.8% of the correct neighborhood relations for the silicon retina within about 400 seconds of real-world input from a driving scene and 98.1% correct for the sensory floor after about 160 minutes of human pedestrian traffic. Residual errors occurred in regions receiving little or ambiguous input, and the learned topological representations were able to update automatically in response to simulated damage. Our algorithm has applications in the design of modular autonomous systems in which the interfaces between components are learned during operation rather than at design time.
34

James, Logan S., and Jon T. Sakata. "Vocal motor changes beyond the sensitive period for song plasticity." Journal of Neurophysiology 112, no. 9 (2014): 2040–52. http://dx.doi.org/10.1152/jn.00217.2014.

Abstract:
Behavior is critically shaped during sensitive periods in development. Birdsong is a learned vocal behavior that undergoes dramatic plasticity during a sensitive period of sensorimotor learning. During this period, juvenile songbirds engage in vocal practice to shape their vocalizations into relatively stereotyped songs. By the time songbirds reach adulthood, their songs are relatively stable and thought to be “crystallized.” Recent studies, however, highlight the potential for adult song plasticity and suggest that adult song could naturally change over time. As such, we investigated the degree to which temporal and spectral features of song changed over time in adult Bengalese finches. We observed that the sequencing and timing of song syllables became more stereotyped over time. Increases in the stereotypy of syllable sequencing were due to the pruning of infrequently produced transitions and, to a lesser extent, increases in the prevalence of frequently produced transitions. Changes in song tempo were driven by decreases in the duration and variability of intersyllable gaps. In contrast to significant changes to temporal song features, we found little evidence that the spectral structure of adult song syllables changed over time. These data highlight differences in the degree to which temporal and spectral features of adult song change over time and support evidence for distinct mechanisms underlying the control of syllable sequencing, timing, and structure. Furthermore, the observed changes to temporal song features are consistent with a Hebbian framework of behavioral plasticity and support the notion that adult song should be considered a form of vocal practice.
35

Wagatsuma, Hiroaki, and Yoko Yamaguchi. "Cognitive Map Formation Through Sequence Encoding by Theta Phase Precession." Neural Computation 16, no. 12 (2004): 2665–97. http://dx.doi.org/10.1162/0899766042321742.

Abstract:
The rodent hippocampus has been thought to represent the spatial environment as a cognitive map. The associative connections in the hippocampus imply that a neural entity represents the map as a geometrical network of hippocampal cells in terms of a chart. According to recent experimental observations, the cells fire successively relative to the theta oscillation of the local field potential, called theta phase precession, when the animal is running. This observation suggests the learning of temporal sequences with asymmetric connections in the hippocampus, but it also gives rather inconsistent implications on the formation of the chart that should consist of symmetric connections for space coding. In this study, we hypothesize that the chart is generated with theta phase coding through the integration of asymmetric connections. Our computer experiments use a hippocampal network model to demonstrate that a geometrical network is formed through running experiences in a few minutes. Asymmetric connections are found to remain and distribute heterogeneously in the network. The obtained network exhibits the spatial localization of activities at each instance as the chart does and their propagation that represents behavioral motions with multidirectional properties. We conclude that theta phase precession and the Hebbian rule with a time delay can provide the neural principles for learning the cognitive map.
36

O'Reilly, Randall C., and Mark H. Johnson. "Object Recognition and Sensitive Periods: A Computational Analysis of Visual Imprinting." Neural Computation 6, no. 3 (1994): 357–89. http://dx.doi.org/10.1162/neco.1994.6.3.357.

Abstract:
Using neural and behavioral constraints from a relatively simple biological visual system, we evaluate the mechanism and behavioral implications of a model of invariant object recognition. Evidence from a variety of methods suggests that a localized portion of the domestic chick brain, the intermediate and medial hyperstriatum ventrale (IMHV), is critical for object recognition. We have developed a neural network model of translation-invariant object recognition that incorporates features of the neural circuitry of IMHV, and exhibits behavior qualitatively similar to a range of findings in the filial imprinting paradigm. We derive several counter-intuitive behavioral predictions that depend critically upon the biologically derived features of the model. In particular, we propose that the recurrent excitatory and lateral inhibitory circuitry in the model, and observed in IMHV, produces hysteresis on the activation state of the units in the model and the principal excitatory neurons in IMHV. Hysteresis, when combined with a simple Hebbian covariance learning mechanism, has been shown in this and earlier work (Földiák 1991; O'Reilly and McClelland 1992) to produce translation-invariant visual representations. The hysteresis and learning rule are responsible for a sensitive period phenomenon in the network, and for a series of novel temporal blending phenomena. These effects are empirically testable. Further, physiological and anatomical features of mammalian visual cortex support a hysteresis-based mechanism, arguing for the generality of the algorithm.
37

Pham, D. T., M. S. Packianather, and E. Y. A. Charles. "Control chart pattern clustering using a new self-organizing spiking neural network." Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture 222, no. 10 (2008): 1201–11. http://dx.doi.org/10.1243/09544054jem1054.

Abstract:
This paper focuses on the architecture and learning algorithm associated with using a new self-organizing delay adaptation spiking neural network model for clustering control chart patterns. This temporal coding spiking neural network model employs a Hebbian-based rule to shift the connection delays, instead of the previous approaches of delay selection. Here the tuned delays compensate for the differences in the input firing times of temporal patterns and enable them to coincide. The coincidence detection capability of the spiking neuron has been utilized for pattern clustering. The structure of the network is similar to that of a Kohonen self-organizing map (SOM), except that the output layer neurons are coincidence-detecting spiking neurons. An input pattern is represented by the neuron that is the first to fire among all the competing spiking neurons. Clusters within the input data are identified by the locations of the winning neurons and their firing times. The proposed self-organized delay adaptation spiking neural network (SODA_SNN) has been utilized to cluster control chart patterns. The trained network obtained an average clustering accuracy of 96.1 per cent on previously unseen test data. This was achieved with a network of 8 × 8 spiking neurons trained for 20 epochs on 1000 training examples. Compared with the SOM, the improvement in clustering accuracy achieved by the proposed SODA_SNN on the unseen test data was twice as large as that on the training data.
38

Rolls, E. T., L. Franco, and S. M. Stringer. "The Perirhinal Cortex and Long-Term Familiarity Memory." Quarterly Journal of Experimental Psychology Section B 58, no. 3-4b (2005): 234–45. http://dx.doi.org/10.1080/02724990444000122.

Abstract:
To analyse the functions of the perirhinal cortex, the activity of single neurons in the perirhinal cortex was recorded while macaques performed a delayed matching-to-sample task with up to three intervening stimuli. Some neurons had activity related to working memory, in that they responded more to the sample than to the match image within a trial, as shown previously. However, when a novel set of stimuli was introduced, the neuronal responses were on average only 47% of the magnitude of the responses to the set of very familiar stimuli. Moreover, it was shown in three monkeys that the responses of the perirhinal cortex neurons gradually increased over hundreds of presentations (mean = 400 over 7–13 days) of the new set of (initially novel) stimuli to become as large as those to the already familiar stimuli. Thus perirhinal cortex neurons represent the very long-term familiarity of visual stimuli. Part of the impairment in temporal lobe amnesia may be related to the difficulty of building representations of the degree of familiarity of stimuli. A neural network model of how the perirhinal cortex could implement long-term familiarity memory is proposed using Hebbian associative learning.
APA, Harvard, Vancouver, ISO, and other styles
39

Buonomano, D. V., and M. M. Merzenich. "Associative synaptic plasticity in hippocampal CA1 neurons is not sensitive to unpaired presynaptic activity." Journal of Neurophysiology 76, no. 1 (1996): 631–36. http://dx.doi.org/10.1152/jn.1996.76.1.631.

Full text
Abstract:
1. Hebbian or associative synaptic plasticity has been proposed to play an important role in learning and memory. Whereas many behaviorally relevant stimuli are time-varying, most experimental and theoretical work on synaptic plasticity has focused on stimuli or induction protocols without temporal structure. Recent theoretical studies have suggested that associative plasticity sensitive to only the conjunction of pre- and postsynaptic activity is not an effective learning rule for networks required to learn time-varying stimuli. Our goal in the current experiment was to determine whether associative long-term potentiation (LTP) is sensitive to temporal structure. We examined whether the presentation of unpaired presynaptic pulses in addition to paired pre- and postsynaptic activity altered the induction of associative LTP. 2. Using intracellular recordings from CA1 pyramidal cells, associative LTP was induced in a control pathway by pairing a single presynaptic pulse with postsynaptic depolarization every 5 s (50–70 times). The experimental pathway received the same training, with additional unpaired presynaptic pulses delivered in close temporal proximity, either after or before associative pairing. Five separate sets of experiments were performed with intervals of -200, -50, +50, +200, or +800 ms. Negative intervals indicate that the unpaired presynaptic pulse was presented before the depolarizing pulse. Our results showed that the presence of unpaired presynaptic pulses, occurring either before or after pairing, did not significantly alter the magnitude of LTP. 3. The experimental design permitted an analysis of whether changes in paired-pulse facilitation (PPF) occur as a result of associative LTP. The average degree of PPF was the same before and after LTP. However, there was a significant inverse correlation between the initial degree of PPF and the degree of PPF after LTP. There was no relationship between the change in PPF and whether the first or second pulse had been paired with depolarization. 4. These results indicate that the presence of unpaired presynaptic pulses does not alter the induction of synaptic plasticity, suggesting that plasticity of the Schaffer collateral-CA1 synapse is primarily conjunctive rather than correlative.
APA, Harvard, Vancouver, ISO, and other styles
40

Rolls, Edmund T., and T. Milward. "A Model of Invariant Object Recognition in the Visual System: Learning Rules, Activation Functions, Lateral Inhibition, and Information-Based Performance Measures." Neural Computation 12, no. 11 (2000): 2547–72. http://dx.doi.org/10.1162/089976600300014845.

Full text
Abstract:
VisNet2 is a model to investigate some aspects of invariant visual object recognition in the primate visual system. It is a four-layer feedforward network with convergence to each part of a layer from a small region of the preceding layer, with competition between the neurons within a layer and with a trace learning rule to help it learn transform invariance. The trace rule is a modified Hebbian rule, which modifies synaptic weights according to both the current firing rates and the firing rates to recently seen stimuli. This enables neurons to learn to respond similarly to the gradually transforming inputs they receive, which over the short term are likely to arise from the same object, given the statistics of normal visual inputs. First, we introduce for VisNet2 both single-neuron and multiple-neuron information-theoretic measures of its ability to respond to transformed stimuli. Second, using these measures, we show quantitatively that resetting the trace between stimuli is not necessary for good performance. Third, it is shown that the sigmoid activation functions used in VisNet2, which allow the sparseness of the representation to be controlled, allow good performance when using sparse distributed representations. Fourth, it is shown that VisNet2 operates well with medium-range lateral inhibition, with a radius of the same order of size as the region of the preceding layer from which neurons receive inputs. Fifth, in an investigation of different learning rules for learning transform invariance, it is shown that VisNet2 operates better with a trace rule that incorporates in the trace only activity from the preceding presentations of a given stimulus, with no contribution to the trace from the current presentation, and that this is related to temporal difference learning.
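
The trace rule lends itself to a compact sketch. The form of the trace below follows the standard VisNet formulation as summarized in the abstract; the learning rate `eta` and trace constant `nu` are illustrative assumptions.

```python
import numpy as np

def trace_update(w, x, y_bar_prev, y_now, eta=0.1, nu=0.8):
    """One trace-rule update.

    Standard form:  dw = eta * y_bar(t) * x(t),
    with the trace  y_bar(t) = (1 - nu) * y(t) + nu * y_bar(t - 1).

    The variant the paper finds superior uses only the earlier trace,
    dw = eta * y_bar(t - 1) * x(t): the current presentation makes no
    contribution to its own trace, relating the rule to temporal
    difference learning.
    """
    y_bar = (1.0 - nu) * y_now + nu * y_bar_prev
    dw = eta * y_bar_prev * x          # the 'preceding presentations only' variant
    return w + dw, y_bar

w, y_bar = np.zeros(4), 0.0
transforms = [np.array([1, 0, 0, 0.0]),   # the same object at three positions
              np.array([0, 1, 0, 0.0]),
              np.array([0, 0, 1, 0.0])]
for x in transforms:
    w, y_bar = trace_update(w, x, y_bar, y_now=1.0)
# w now associates all three transforms with the same postsynaptic neuron
```
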
APA, Harvard, Vancouver, ISO, and other styles
41

CARTLING, BO. "GENERATION OF ASSOCIATIVE PROCESSES IN A NEURAL NETWORK WITH REALISTIC FEATURES OF ARCHITECTURE AND UNITS." International Journal of Neural Systems 05, no. 03 (1994): 181–94. http://dx.doi.org/10.1142/s0129065794000207.

Full text
Abstract:
A recent neural network model of cortical associative memory incorporating neuronal adaptation by a simplified description of its underlying ionic mechanisms is extended towards more realistic network units and architecture. Excitatory units correspond to groups of adapting pyramidal neurons and inhibitory units to groups of nonadapting interneurons. The network architecture is formed from pairs of one pyramidal and one interneuron unit each with inhibitory connections within and excitatory connections between pairs. The degree of adaptability of the pyramidal units controls the character of the network dynamics. An intermediate adaptability generates limit cycles of transitions between stored patterns and regulates oscillation frequencies in the range of theta rhythms observed in the brain. In particular, neuronal adaptation can impose a direction of transitions between overlapping patterns also in a symmetrically connected network. The model permits a detailed analysis of the transition mechanisms. Temporal sequences of patterns thus formed may constitute parts of associative processes, such as recall of stored sequences or search of pattern subspaces. As a special case, neuronal adaptation can accomplish pattern segmentation by which overlapping patterns are temporally resolved. The type of limit cycles produced by neuronal adaptation may also be of significance for central pattern generators, including networks involving motor neurons. The applied learning rule of Hebbian type is compared to a modified version also common in neural network modelling. It is also shown that the dependence of the network dynamic behaviour on neuronal adaptability, from fixed point attractors at weak adaptability towards more complex dynamics of limit cycles and chaos at strong adaptability, agrees with that recently observed in a more abstract version of the model. The present description of neuronal adaptation is compared to models based on dynamic firing thresholds.
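
A toy of the adaptability-controlled dynamics, for intuition only: this is a binary Hopfield-style caricature, not Cartling's biophysical model, and all constants are assumptions. Weak adaptation leaves the fixed-point attractors intact; stronger adaptation destabilizes them and drives ongoing transitions, echoing the fixed-point-to-limit-cycle progression described above.

```python
import numpy as np

rng = np.random.default_rng(3)
N, P = 200, 3
patterns = rng.choice([-1.0, 1.0], size=(P, N))
W = patterns.T @ patterns / N                  # Hebbian outer-product weights
np.fill_diagonal(W, 0.0)

def run(g, steps=300, tau=15.0):
    """Binary dynamics with an adaptation current of strength g."""
    s = patterns[0].copy()                     # start in the first stored pattern
    a = np.zeros(N)                            # per-neuron adaptation variable
    trace = []
    for _ in range(steps):
        s = np.sign(W @ s - g * a + 1e-9)      # adaptation opposes the drive
        a += (s - a) / tau                     # adaptation tracks recent activity
        trace.append(patterns @ s / N)         # overlap with each stored pattern
    return np.array(trace)

weak = run(g=0.3)    # overlaps stay pinned: fixed-point regime
strong = run(g=1.5)  # the starting attractor is destabilized; overlaps oscillate
```
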
APA, Harvard, Vancouver, ISO, and other styles
42

Stein, Barry E., Liping Yu, Jinghong Xu, and Benjamin A. Rowland. "Plasticity in the acquisition of multisensory integration capabilities in superior colliculus." Seeing and Perceiving 25 (2012): 133. http://dx.doi.org/10.1163/187847612x647658.

Full text
Abstract:
The multisensory integration capabilities of superior colliculus (SC) neurons are normally acquired during early postnatal life and adapted to the environment in which they will be used. Recent evidence shows that they can even be acquired in adulthood, and require neither consciousness nor any of the reinforcement contingencies generally associated with learning. This process is believed to be based on Hebbian mechanisms, whereby the temporal coupling of multiple sensory inputs initiates development of a means of integrating their information. This predicts that co-activation of those input channels is sufficient to induce multisensory integration capabilities regardless of the specific spatiotemporal properties of the initiating stimuli. However, one might expect that the stimuli to be integrated should be consonant with the functional role of the neurons involved. For the SC, this would involve stimuli that can be localized. Experience with a non-localizable cue in one modality (e.g., ambient sound) and a discrete stimulus in another (e.g., a light flash) should not be sufficient for this purpose. Indeed, experiments with cats reared in omnidirectional sound (effectively masking discrete auditory events) reveal that the simple co-activation of two sensory input channels is not sufficient for this purpose. The data suggest that experience with the kinds of cross-modal events that facilitate the role of the SC in detecting, locating, and orienting to localized external events is a guiding factor in this maturational process. Supported by NIH grants NS 036916 and EY016716.
APA, Harvard, Vancouver, ISO, and other styles
43

Song, Sen, and L. F. Abbott. "Temporally asymmetric Hebbian learning and neuronal response variability." Neurocomputing 32-33 (June 2000): 523–28. http://dx.doi.org/10.1016/s0925-2312(00)00208-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Suri, Roland E., and Terrence J. Sejnowski. "Spike propagation synchronized by temporally asymmetric Hebbian learning." Biological Cybernetics 87, no. 5-6 (2002): 440–45. http://dx.doi.org/10.1007/s00422-002-0355-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Gütig, R., R. Aharonov, S. Rotter, and Haim Sompolinsky. "Learning Input Correlations through Nonlinear Temporally Asymmetric Hebbian Plasticity." Journal of Neuroscience 23, no. 9 (2003): 3697–714. http://dx.doi.org/10.1523/jneurosci.23-09-03697.2003.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Matsumoto, Narihisa, and Masato Okada. "Self-Regulation Mechanism of Temporally Asymmetric Hebbian Plasticity." Neural Computation 14, no. 12 (2002): 2883–902. http://dx.doi.org/10.1162/089976602760805322.

Full text
Abstract:
Recent biological experimental findings have shown that synaptic plasticity depends on the relative timing of the pre- and postsynaptic spikes. This determines whether long-term potentiation (LTP) or long-term depression (LTD) is induced. This synaptic plasticity has been called temporally asymmetric Hebbian plasticity (TAH). Many authors have numerically demonstrated that neural networks are capable of storing spatiotemporal patterns. However, the mathematical mechanism of the storage of spatiotemporal patterns is still unknown, and the effect of LTD in particular is poorly understood. In this article, we employ a simple neural network model and show that interference between LTP and LTD disappears in a sparse coding scheme. On the other hand, the covariance learning rule is known to be indispensable for the storage of sparse patterns. We also show that TAH has the same qualitative effect as the covariance rule when spatiotemporal patterns are embedded in the network.
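
The TAH window analyzed here is commonly written as a pair of exponentials. A minimal sketch, with assumed amplitudes and time constants:

```python
import numpy as np

def stdp_dw(dt_ms, a_plus=1.0, a_minus=1.0, tau_plus=20.0, tau_minus=20.0):
    """Weight change for a spike pair separated by dt_ms = t_post - t_pre."""
    if dt_ms >= 0:
        return a_plus * np.exp(-dt_ms / tau_plus)    # pre before post: LTP
    return -a_minus * np.exp(dt_ms / tau_minus)      # post before pre: LTD

print(stdp_dw(+5.0), stdp_dw(-5.0))   # ~ +0.78 and -0.78
# In a sparse coding scheme the LTP and LTD lobes interfere little, and the
# net effect on stored spatiotemporal patterns resembles the covariance rule.
```
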
APA, Harvard, Vancouver, ISO, and other styles
47

Wallenstein, Gene V., and Michael E. Hasselmo. "GABAergic Modulation of Hippocampal Population Activity: Sequence Learning, Place Field Development, and the Phase Precession Effect." Journal of Neurophysiology 78, no. 1 (1997): 393–408. http://dx.doi.org/10.1152/jn.1997.78.1.393.

Full text
Abstract:
Wallenstein, Gene V. and Michael E. Hasselmo. GABAergic modulation of hippocampal population activity: sequence learning, place field development, and the phase precession effect. J. Neurophysiol. 78: 393–408, 1997. A detailed biophysical model of hippocampal region CA3 was constructed to study how GABAergic modulation influences place field development and the learning and recall of sequence information. Simulations included 1,000 multicompartmental pyramidal cells, each consisting of seven intrinsic and four synaptic currents, and 200 multicompartmental interneurons, consisting of two intrinsic and four synaptic currents. Excitatory rhythmic septal input to the apical dendrites of pyramidal cells and both excitatory and inhibitory input to interneurons at theta frequencies provided a cellular basis for the development of theta and gamma frequency oscillations in population activity. The fundamental frequency of theta oscillations was dictated by the driving rhythm from the septum. Gamma oscillation frequency, however, was determined by both the decay time of the γ-aminobutyric acid-A (GABAA)-receptor-mediated synaptic current and the overall level of excitability in interneurons due to α-amino-3-hydroxy-5-methyl-4-isoxazole propionic acid and N-methyl-d-aspartate (NMDA)-receptor-gated channel activation. During theta population activity, total GABAB-receptor-mediated conductance levels were found to gradually rise and fall in rhythmic fashion with the predominant population frequency (theta rhythm). This resulted in periodic GABAB-receptor-mediated suppression of excitatory synaptic transmission at recurrent collaterals (intrinsic fibers) of pyramidal cells and suppression of inhibitory synaptic transmission to both pyramidal cells and interneurons. To test the ability of the model to learn and recall temporal sequence information, a completion task was employed. During learning, the network was presented a sequence of nonorthogonal spatial patterns. Each input pattern represented a spatial “location” of a simulated rat running a specific navigational path. Hebbian-type learning was expressed as an increase in postsynaptic NMDA-receptor-mediated conductances. Because of several factors including the sparse, asymmetric excitatory synaptic connections among pyramidal cells in the model and a sufficient degree of random “background” firing unrelated to the input patterns, repeated simulated runs resulted in the gradual emergence of place fields where a given cell began to respond to a contiguous segment of locations on the path. During recall, the simulated rat was placed at a random location on the previously learned path and tested to see whether the sequence of locations could be completed on the basis of this initial position. Periodic GABAB-receptor-mediated suppression of excitatory and inhibitory transmission at intrinsic but not afferent fibers resulted in sensory information about location being dominant during early portions of each theta cycle when GABAB-receptor-related effects were highest. This suppression declined with levels of GABAB receptor activation toward the end of a theta cycle, resulting in an increase in synaptic transmission at intrinsic fibers and the subsequent recall of a segment of the entire location sequence. This scenario typically continued across theta cycles until the full sequence was recalled.
When the GABAB-receptor-mediated suppression of excitatory and inhibitory transmission at intrinsic fibers was not included in the model, place field development was curtailed and the network consequently exhibited poor learning and recall performance. This was, in part, due to increased competition of information from intrinsic and afferent fibers during early portions of each theta cycle. Because afferent sensory information did not dominate early in each cycle, the current location of the rat was obscured by ongoing activity from intrinsic sources. Furthermore, even when the current location was accurately identified, competition between afferent and intrinsic sources resulted in a tendency for rapid recall of several locations at once, which often led to inaccuracies in the sequence. Thus the rat often recalled a path different from the particular one that was learned. GABAB-receptor-mediated modulation of excitatory synaptic transmission within a theta cycle resulted in a systematic relationship between single-unit activity and peaks in pyramidal cell population behavior (theta rhythm). Because presynaptic inhibition of intrinsic fibers was strongest at early portions of each theta cycle, single-unit firing usually started late in a cycle as the place field of the associated cell was approached. This firing typically advanced to progressively earlier phases in a theta cycle as the place field was traversed. Thus, as the rat moved through successive locations along a learned trajectory during completion trials, place cell firing gradually shifted from late phases of a theta cycle, where future locations were “predicted” (intrinsic information dominated), to early phases of a cycle, where the current location was “perceived” (afferent sources dominated). This result suggests that the GABAergic modulation of temporal sequence learning may serve as a general framework for understanding navigational phenomena such as the phase precession effect.
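
The gating principle in the abstract — intrinsic (recurrent) transmission suppressed early in each theta cycle and released late — can be caricatured in a few lines. This is a schematic sketch, not the 1,200-cell biophysical model; the suppression profile and all constants are assumptions.

```python
import numpy as np

theta_freq = 8.0                             # Hz, so one cycle is 125 ms

def gaba_b_suppression(t):
    """0..1 suppression of intrinsic synapses; strongest early in each cycle."""
    phase = (t * theta_freq) % 1.0           # fraction of the current theta cycle
    return np.exp(-5.0 * phase)              # decays as the cycle progresses

def net_drive(t, afferent, intrinsic):
    """Afferent input always passes; intrinsic input is gated in late in the cycle."""
    return afferent + (1.0 - gaba_b_suppression(t)) * intrinsic

print(net_drive(0.010, afferent=1.0, intrinsic=1.0))  # early cycle: ~1.33, afferent dominates
print(net_drive(0.110, afferent=1.0, intrinsic=1.0))  # late cycle:  ~1.99, recurrent recall joins
# As the simulated rat nears a learned place field, the recurrent 'prediction'
# is expressed late in the cycle first and drifts earlier on later cycles --
# the phase-precession tendency described above.
```
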
APA, Harvard, Vancouver, ISO, and other styles
48

YANG, ZHIJUN, KATHERINE L. CAMERON, ALAN F. MURRAY, and VASIN BOONSOBHAK. "AN ADAPTIVE VISUAL NEURONAL MODEL IMPLEMENTING COMPETITIVE, TEMPORALLY ASYMMETRIC HEBBIAN LEARNING." International Journal of Neural Systems 16, no. 03 (2006): 151–62. http://dx.doi.org/10.1142/s0129065706000573.

Full text
Abstract:
A novel depth-from-motion vision model based on leaky integrate-and-fire (I&F) neurons incorporates the implications of recent neurophysiological findings into an algorithm for object discovery and depth analysis. Pulse-coupled I&F neurons capture the edges in an optical flow field, and the associated time of travel of those edges is encoded in the neuron parameters, mainly the membrane-potential time constant and the synaptic weight. Correlations between spikes and their timing thus code depth in the visual field. Neurons have multiple output synapses connecting to neighbouring neurons with an initial Gaussian weight distribution. A temporally asymmetric learning rule is used to adapt the synaptic weights online, during which competitive behaviour emerges between the different input synapses of a neuron. It is shown that the competition mechanism can further improve the model performance. After training, the weights of synapses sourced from a neuron do not display a Gaussian distribution, having adapted to encode features of the scenes to which they have been exposed.
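
A minimal leaky integrate-and-fire neuron with a temporally asymmetric update on its input synapses conveys the flavor of such a model. The code below is a hedged sketch: the constants, the instantaneous synaptic kick, and the weight normalization used to induce competition are assumptions rather than the paper's implementation.

```python
import numpy as np

tau_m, v_th, v_reset, dt = 20.0, 1.0, 0.0, 1.0   # ms, threshold, reset, step

def run_lif(weights, spike_times, t_max=100.0):
    """Integrate weighted input spikes with a leak; return output spike times."""
    v, out = 0.0, []
    for step in range(int(t_max / dt)):
        t = step * dt
        v += dt * (-v / tau_m)                    # membrane leak
        for w_i, ts in zip(weights, spike_times):
            if any(abs(t - s) < dt / 2 for s in ts):
                v += w_i                          # instantaneous synaptic kick
        if v >= v_th:
            out.append(t)
            v = v_reset
    return out

def asymmetric_update(weights, spike_times, t_post, eta=0.05, tau=10.0):
    """Potentiate synapses whose spikes preceded t_post; depress the others."""
    for i, ts in enumerate(spike_times):
        for s in ts:
            d = t_post - s
            weights[i] += eta * np.exp(-d / tau) if d >= 0 else -eta * np.exp(d / tau)
    return weights * (weights.size * 0.4 / weights.sum())  # competitive renorm

weights = np.full(4, 0.4)
spikes_in = [[10.0], [12.0], [14.0], [40.0]]     # edge arrival times per synapse
out = run_lif(weights, spikes_in)                # near-coincident inputs -> spike
if out:
    weights = asymmetric_update(weights, spikes_in, out[0])
# Synapses carrying the coincident early spikes are strengthened at the
# expense of the late, uncorrelated one -- the emergent competition noted above.
```
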
APA, Harvard, Vancouver, ISO, and other styles
49

Pulvermüller, Friedemann. "Words in the brain's language." Behavioral and Brain Sciences 22, no. 2 (1999): 253–79. http://dx.doi.org/10.1017/s0140525x9900182x.

Full text
Abstract:
If the cortex is an associative memory, strongly connected cell assemblies will form when neurons in different cortical areas are frequently active at the same time. The cortical distributions of these assemblies must be a consequence of where in the cortex correlated neuronal activity occurred during learning. An assembly can be considered a functional unit exhibiting activity states such as full activation (“ignition”) after appropriate sensory stimulation (possibly related to perception) and continuous reverberation of excitation within the assembly (a putative memory process). This has implications for cortical topographies and activity dynamics of cell assemblies forming during language acquisition, in particular for those representing words. Cortical topographies of assemblies should be related to aspects of the meaning of the words they represent, and physiological signs of cell assembly ignition should be followed by possible indicators of reverberation. The following postulates are discussed in detail: (1) assemblies representing phonological word forms are strongly lateralized and distributed over perisylvian cortices; (2) assemblies representing highly abstract words such as grammatical function words are also strongly lateralized and restricted to these perisylvian regions; (3) assemblies representing concrete content words include additional neurons in both hemispheres; (4) assemblies representing words referring to visual stimuli include neurons in visual cortices; and (5) assemblies representing words referring to actions include neurons in motor cortices. Two main sources of evidence are used to evaluate these proposals: (a) imaging studies focusing on localizing word processing in the brain, based on stimulus-triggered event-related potentials (ERPs), positron emission tomography (PET), and functional magnetic resonance imaging (fMRI), and (b) studies of the temporal dynamics of fast activity changes in the brain, as revealed by high-frequency responses recorded in the electroencephalogram (EEG) and magnetoencephalogram (MEG). These data provide evidence for processing differences between words and matched meaningless pseudowords, and between word classes, such as concrete content and abstract function words, and words evoking visual or motor associations. There is evidence for early word class-specific spreading of neuronal activity and for equally specific high-frequency responses occurring later. These results support a neurobiological model of language in the Hebbian tradition. Competing large-scale neuronal theories of language are discussed in light of the data summarized. Neurobiological perspectives on the problem of serial order of words in syntactic strings are considered in closing.
APA, Harvard, Vancouver, ISO, and other styles
50

Howe, Michael, and Risto Miikkulainen. "Hebbian learning and temporary storage in the convergence-zone model of episodic memory." Neurocomputing 32-33 (June 2000): 817–21. http://dx.doi.org/10.1016/s0925-2312(00)00248-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles