
Dissertations / Theses on the topic 'Spiking Neural network simulation'

Consult the top 50 dissertations / theses for your research on the topic 'Spiking Neural network simulation.'


You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Hunter, Russell I. "Improving associative memory in a network of spiking neurons." Thesis, University of Stirling, 2011. http://hdl.handle.net/1893/6177.

Full text
Abstract:
In this thesis we use computational neural network models to examine the dynamics and functionality of the CA3 region of the mammalian hippocampus. The emphasis of the project is to investigate how the dynamic control structures provided by inhibitory circuitry and cellular modification may affect the CA3 region during the recall of previously stored information. The CA3 region is commonly thought to work as a recurrent auto-associative neural network due to the neurophysiological characteristics found there, such as recurrent collaterals, strong and sparse synapses from external inputs, and plasticity between coactive cells. Associative memory models have been developed using various configurations of mathematical artificial neural networks, which were first developed over 40 years ago. Within these models we can store information via changes in the strength of connections between simplified (two-state) model neurons. These memories can be recalled when a cue (noisy or partial) is instantiated upon the net. The type of information they can store is quite limited due to restrictions caused by the simplicity of the hard-limiting nodes, which are commonly associated with a binary activation threshold. We build a much more biologically plausible model with complex spiking cell models and with realistic synaptic properties between cells. This model is based upon some of the many details we now know of the neuronal circuitry of the CA3 region. We implemented the model in computer software using NEURON and MATLAB and tested it by running simulations of storage and recall in the network. By building this model we gain new insights into how different types of neurons, and the complex circuits they form, actually work. The mammalian brain consists of complex resistive-capacitive electrical circuitry which is formed by the interconnection of large numbers of neurons.
A principal cell type is the pyramidal cell within the cortex, which is the main information processor in our neural networks. Pyramidal cells are surrounded by diverse populations of interneurons, which are proportionally fewer in number than the pyramidal cells and which form connections with pyramidal cells and other inhibitory cells. By building detailed computational models of recurrent neural circuitry we explore how these microcircuits of interneurons control the flow of information through pyramidal cells and regulate the efficacy of the network. We also explore the effect of cellular modification due to neuronal activity and the effect of incorporating spatially dependent connectivity on the network during recall of previously stored information. In particular we implement a spiking neural network proposed by Sommer and Wennekers (2001). We consider methods for improving associative memory recall inspired by the work of Graham and Willshaw (1995), in which mathematical transforms are applied to an artificial neural network to improve the recall quality within the network. The networks tested contain either 100 or 1000 pyramidal cells with 10% connectivity applied, a partial cue instantiated, and a global pseudo-inhibition. We investigate three methods. Firstly, applying localised disynaptic inhibition, which will proportionalise the excitatory postsynaptic potentials and provide a fast-acting reversal potential, which should help to reduce the variability in signal propagation between cells and provide further inhibition to help synchronise the network activity. Secondly, implementing a persistent sodium channel in the cell body, which will act to non-linearise the activation threshold so that after a given membrane potential the amplitude of the excitatory postsynaptic potential (EPSP) is boosted, pushing cells which receive slightly more excitation (most likely high units) over the firing threshold.
Finally, implementing spatial characteristics of the dendritic tree will allow a greater probability of a modified synapse existing after 10% random connectivity has been applied throughout the network. We apply spatial characteristics by scaling the conductance weights of excitatory synapses, which simulates the loss in potential found in synapses in the outer dendritic regions due to increased resistance. To further increase the biological plausibility of the network we remove the pseudo-inhibition and apply realistic basket cell models with differing configurations for a global inhibitory circuit. The networks are configured with: a single basket cell providing feedback inhibition; 10% basket cells providing feedback inhibition, where 10 pyramidal cells connect to each basket cell; and, finally, 100% basket cells providing feedback inhibition. These networks are compared and contrasted for efficacy of recall quality and for the effect on network behaviour. We have found promising results from applying biologically plausible recall strategies and network configurations, which suggest that the role of inhibition and cellular dynamics is pivotal in learning and memory.
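The storage-and-recall scheme the abstract describes for simplified two-state neurons can be illustrated with a minimal clipped-Hebbian (Willshaw-style) sketch. The network size, pattern sparsity, and thresholding below are illustrative choices, not the thesis's spiking model:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100      # number of two-state (binary) units
ACTIVE = 10  # active units per stored pattern (sparse coding)

# Store patterns with clipped Hebbian learning: W[i, j] = 1 if units i
# and j were ever coactive in any stored pattern.
W = np.zeros((N, N), dtype=int)
patterns = []
for _ in range(5):
    p = np.zeros(N, dtype=int)
    p[rng.choice(N, ACTIVE, replace=False)] = 1
    patterns.append(p)
    W |= np.outer(p, p)
np.fill_diagonal(W, 0)  # no self-connections

def recall(cue):
    """One-step recall: a unit becomes active if it receives input from
    every active unit in the cue (threshold = cue size)."""
    dendritic_sum = W @ cue
    out = (dendritic_sum >= cue.sum()).astype(int)
    return out | cue  # cue units stay active

# Build a partial cue from half of the first pattern's active units.
target = patterns[0]
cue = target.copy()
cue[np.flatnonzero(target)[: ACTIVE // 2]] = 0
completed = recall(cue)  # pattern completion from the partial cue
```

With sparse patterns the chance of a spurious unit being connected to every cue unit is tiny, so the one-step recall recovers the stored pattern; the noisy-cue and graded-signal problems the thesis addresses arise when this idealised scheme is replaced by spiking cells and realistic synapses.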
APA, Harvard, Vancouver, ISO, and other styles
2

Jin, Xin. "Parallel simulation of neural networks on SpiNNaker universal neuromorphic hardware." Thesis, University of Manchester, 2010. https://www.research.manchester.ac.uk/portal/en/theses/parallel-simulation-of-neural-networks-on-spinnaker-universal-neuromorphic-hardware(d6b8b72a-63c4-44ee-963a-ae349b0e379c).html.

Full text
Abstract:
Artificial neural networks have shown great potential and have attracted much research interest. One problem faced when simulating such networks is speed. As the number of neurons increases, the time to simulate and train a network increases dramatically. This makes it difficult to simulate and train a large-scale network system without the support of a high-performance computer system. The solution we present is a "real" parallel system - using a parallel machine to simulate neural networks which are intrinsically parallel applications. SpiNNaker is a scalable massively-parallel computing system under development with the aim of building a general-purpose platform for the parallel simulation of large-scale neural systems. This research investigates how to model large-scale neural networks efficiently on such a parallel machine. While providing increased overall computational power, a parallel architecture introduces a new problem - the increased communication reduces the speedup gains. Modeling schemes, which take into account communication, processing, and storage requirements, are investigated to solve this problem. Since modeling schemes are application-dependent, two different types of neural network are examined - spiking neural networks with spike-time dependent plasticity, and the parallel distributed processing model with the backpropagation learning rule. Different modeling schemes are developed and evaluated for the two types of neural network. The research shows the feasibility of the approach as well as the performance of SpiNNaker as a general-purpose platform for the simulation of neural networks. The linear scalability shown in this architecture provides a path to the further development of parallel solutions for the simulation of extremely large-scale neural networks.
3

Mundy, Andrew. "Real time Spaun on SpiNNaker : functional brain simulation on a massively-parallel computer architecture." Thesis, University of Manchester, 2017. https://www.research.manchester.ac.uk/portal/en/theses/real-time-spaun-on-spinnaker--functional-brain-simulation-on-a-massivelyparallel-computer-architecture(fcf5388c-4893-4b10-a6b4-577ffee2d562).html.

Full text
Abstract:
Model building is a fundamental scientific tool. Increasingly there is interest in building neurally-implemented models of cognitive processes with the intention of modelling brains. However, simulation of such models can be prohibitively expensive in both the time and energy required. For example, Spaun - "the world's first functional brain model", comprising 2.5 million neurons - required 2.5 hours of computation for every second of simulation on a large compute cluster. SpiNNaker is a massively parallel, low power architecture specifically designed for the simulation of large neural models in biological real time. Ideally, SpiNNaker could be used to facilitate rapid simulation of models such as Spaun. However, the Neural Engineering Framework (NEF), with which Spaun is built, maps poorly to the architecture - to the extent that models such as Spaun would consume vast portions of SpiNNaker machines and still not run as fast as biology. This thesis investigates whether real time simulation of Spaun on SpiNNaker is at all possible. Three techniques which facilitate such a simulation are presented. The first reduces the memory, compute and network loads consumed by the NEF. Consequently, it is demonstrated that a core component of the Spaun network can be simulated with a twentieth of the cores that would otherwise have been needed. The second technique uses a small number of additional cores to significantly reduce the network traffic required to simulate this core component. As a result, simulation in real time is shown to be feasible. The final technique is a novel logic minimisation algorithm which reduces the size of the routing tables which are used to direct information around the SpiNNaker machine. This last technique is necessary to allow the routing of models of the scale and complexity of Spaun.
Together these provide the ability to simulate the Spaun model in biological real time - representing a speed-up of 9000 times over previously reported results - with room for much larger models on full-scale SpiNNaker machines.
4

Painkras, Eustace. "A chip multiprocessor for a large-scale neural simulator." Thesis, University of Manchester, 2013. https://www.research.manchester.ac.uk/portal/en/theses/a-chip-multiprocessor-for-a-largescale-neural-simulator(d3637073-2669-4a81-985a-2da9eec46480).html.

Full text
Abstract:
A Chip Multiprocessor for a Large-scale Neural Simulator. Eustace Painkras. A thesis submitted to The University of Manchester for the degree of Doctor of Philosophy, 17 December 2012. The modelling and simulation of large-scale spiking neural networks in biological real-time places very high demands on computational processing capabilities and communications infrastructure. These demands are difficult to satisfy even with powerful general-purpose high-performance computers. Taking advantage of the remarkable progress in semiconductor technologies it is now possible to design and build an application-driven platform to support large-scale spiking neural network simulations. This research investigates the design and implementation of a power-efficient chip multiprocessor (CMP) which constitutes the basic building block of a spiking neural network modelling and simulation platform. The neural modelling requirements of many processing elements, high-fanout communications and local memory are addressed in the design and implementation of the low-level modules in the design hierarchy as well as in the CMP. By focusing on a power-efficient design, the energy consumption and related cost of SpiNNaker, the massively-parallel computation engine, are kept low compared with other state-of-the-art hardware neural simulators. The SpiNNaker CMP is composed of many simple power-efficient processors with small local memories, asynchronous networks-on-chip and numerous bespoke modules specifically designed to serve the demands of neural computation with a globally asynchronous, locally synchronous (GALS) architecture. The SpiNNaker CMP, realised as part of this research, fulfills the demands of neural simulation in a power-efficient and scalable manner, with added fault-tolerance features. The CMPs have, to date, been incorporated into three versions of SpiNNaker system PCBs with up to 48 chips onboard.
All chips on the PCBs are performing successfully, during both functional testing and their targeted role of neural simulation.
5

Harischandra, Nalin. "Computer Simulation of the Neural Control of Locomotion in the Cat and the Salamander." Doctoral thesis, KTH, Beräkningsbiologi, CB, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-47362.

Full text
Abstract:
Locomotion is an integral part of a whole range of animal behaviours. The basic rhythm for locomotion in vertebrates has been shown to arise from local networks residing in the spinal cord, and these networks are known as central pattern generators (CPGs). However, during locomotion these centres are constantly interacting with the sensory feedback signals coming from muscles, joints and peripheral skin receptors in order to adapt the stepping or swimming to varying environmental conditions. Conceptual models of vertebrate locomotion have been constructed using mathematical models of locomotor subsystems based on the neurophysiological evidence obtained primarily in the cat and the salamander, an amphibian with a sprawling posture. Such models provide an opportunity for studying the key elements in the transition from aquatic to terrestrial locomotion. Several aspects of locomotor control using the cat or the salamander as an animal model have been investigated employing computer simulations, and here we use the same approach to address a number of questions and hypotheses related to rhythmic locomotion in quadrupeds. Among the questions involved are the role of mechanical linkage during deafferented walking, the inherent stabilities and instabilities of muscle-joint interactions during normal walking, and phase-dependent controllability of muscle action over joints. We also investigate limb and body coordination for different gaits, the use of side-stepping in the front limbs for turning, and the role of sensory feedback in gait generation and transitions in salamanders. This thesis presents the basics of the biologically realistic models of cat and salamander locomotion and summarizes computational methods for modeling quadruped locomotor subsystems such as CPGs, limb muscles and sensory pathways. In the case of the cat hind limb, we conclude that the mechanical linkages between the legs play a major role in producing the alternating gait.
In another experiment we use the model to identify open-loop linear transfer functions between muscle activations and joint angles during ongoing locomotion. We hypothesize that the musculo-skeletal system for locomotion in animals, at least in cats, operates in a critically damped condition. The 3D model of the salamander is successfully used to mimic locomotion on level ground and in water. We compare the walking gait with the trotting gait in simulations. We also found that for turning, the use of side-stepping alone or in combination with trunk bending is more effective than the use of trunk bending alone. The same model, together with a more realistic CPG composed of spiking neurons, was used to investigate the role of sensory feedback in gait generation and transition. We found that proprioceptive sensory inputs are essential in obtaining the walking gait, whereas the trotting gait is more under central (CPG) influence than under peripheral sensory feedback. This thesis work sheds light on the neural control mechanisms behind vertebrate locomotion. Additionally, both neuro-mechanical models can be used for further investigations aimed at finding new control algorithms which give robust, adaptive, efficient and realistic stepping in each leg, which would be advantageous since they could be implemented on the controller of a quadruped-robotic device. This work was funded by the Swedish International Development Cooperation Agency (SIDA).
6

Johnson, Melissa. "A Spiking Bidirectional Associative Memory Neural Network." Thesis, Université d'Ottawa / University of Ottawa, 2021. http://hdl.handle.net/10393/42222.

Full text
Abstract:
Spiking neural networks (SNNs) are a more biologically realistic model of the brain than traditional analog neural networks and therefore should be better for modelling certain functions of the human brain. This thesis uses the concept of deriving an SNN from an accepted non-spiking neural network via analysis and modification of the transmission function. We investigate this process to determine if and how the modifications can be made to minimize loss of information during the transition from non-spiking to spiking while retaining the positive features and functionality of the non-spiking network. By comparing combinations of spiking neuron models and networks against each other, we determined that replacing the transmission function with a similar neuron model is the easiest way to create a spiking neural network that works comparatively well. This similarity between transmission function and neuron model allows for easier parameter selection, which is a key component in obtaining a functioning SNN. The parameters all play different roles, but for the most part, parameters that speed up spiking, such as large resistance values or small rheobases, generally help the accuracy of the network. But the network is still incomplete as a spiking neural network, since this conversion is often only performed after learning has been completed in analog form. The neuron model and subsequent network developed here are the initial steps in creating a bidirectional SNN that handles hetero-associative and auto-associative recall and can be switched easily between spiking and non-spiking with minimal to no loss of data. By tying everything to the transmission function - the non-spiking learning rule, which in our case uses the transmission function, and the neural model of the SNN - we are able to create a functioning SNN.
Without this similarity, we find that creating an SNN is much more complicated and requires much more work in parameter optimization to achieve a functioning network.
7

Goel, Piyush. "Spiking neural network based approach to EEG signal analysis." Thesis, University of Portsmouth, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.496600.

Full text
Abstract:
The research described in this thesis presents a new classification technique for continuous electroencephalographic (EEG) recordings, based on a network of spiking neurons. Analysis of the signals is performed on ensemble EEG and the task of the neural network is to identify the P300 component in the signals. The network employs leaky-integrate-and-fire neurons as nodes in a multi-layered structure. The method involves formation of multiple weak classifiers to perform voting and collective results are used for final classification.
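The leaky integrate-and-fire nodes used in such networks follow standard membrane dynamics. A minimal Euler-integration sketch, with all constants illustrative rather than taken from the thesis:

```python
import numpy as np

def simulate_lif(input_current, dt=1e-4, tau_m=0.02, r_m=1e7,
                 v_rest=-0.070, v_thresh=-0.054, v_reset=-0.070):
    """Euler integration of a leaky integrate-and-fire neuron:
        tau_m * dV/dt = -(V - v_rest) + R_m * I(t)
    A spike is emitted and V is reset when it crosses v_thresh.
    All constants here are illustrative, not from the thesis."""
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        v += (-(v - v_rest) + r_m * i_in) * (dt / tau_m)
        if v >= v_thresh:
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

# A constant 2 nA drive (200 ms at 0.1 ms resolution) produces
# regular spiking, since R_m * I exceeds the threshold offset.
current = np.full(2000, 2e-9)
spike_times = simulate_lif(current)
```

With these values the steady-state depolarisation (R_m * I = 20 mV) exceeds the 16 mV threshold offset, so the neuron fires periodically at roughly 30 ms intervals.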
8

Davies, Sergio. "Learning in spiking neural networks." Thesis, University of Manchester, 2013. https://www.research.manchester.ac.uk/portal/en/theses/learning-in-spiking-neural-networks(2d2be0d7-9557-481e-b9f1-3889a5ca2447).html.

Full text
Abstract:
Artificial neural network simulation is a research field which attracts the interest of researchers from various fields, from biology to computer science. The final objectives are understanding the mechanisms underlying the human brain, how to reproduce them in an artificial environment, and how drugs interact with them. Multiple neural models have been proposed, each with its peculiarities, from the very complex and biologically realistic Hodgkin-Huxley neuron model to the very simple 'leaky integrate-and-fire' neuron. However, despite numerous attempts to understand the learning behaviour of the synapses, few models have been proposed. Spike-Timing-Dependent Plasticity (STDP) is one of the most relevant and biologically plausible models, and some variants (such as the triplet-based STDP rule) have been proposed to accommodate all biological observations. The research presented in this thesis focuses on a novel learning rule, based on the spike-pair STDP algorithm, which provides a statistical approach with the advantage of being less computationally expensive than the standard STDP rule, and is therefore suitable for implementation on stand-alone computational units. The environment in which this research work has been carried out is the SpiNNaker project, which aims to provide a massively parallel computational substrate for neural simulation. To support such research, two other topics have been addressed: the first is a way to inject spikes into the SpiNNaker system through a non-real-time channel such as the Ethernet link, synchronising with the timing of the SpiNNaker system. The second research topic is focused on a way to route spikes in the SpiNNaker system based on populations of neurons. The three topics are presented in sequence after a brief introduction to the SpiNNaker project.
Future work could include structural plasticity (also known as synaptic rewiring); here, during the simulation of neural networks on the SpiNNaker system, axons, dendrites and synapses may be grown or pruned according to biological observations.
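The spike-pair STDP rule that the thesis's statistical variant builds on is conventionally written as an exponential window over pre/post spike-time differences. A minimal sketch, with illustrative amplitudes and time constants (not the parameters used on SpiNNaker):

```python
import math

# Illustrative pair-based STDP window: potentiate when the presynaptic
# spike precedes the postsynaptic one (causal), depress otherwise.
A_PLUS, A_MINUS = 0.01, 0.012
TAU_PLUS = TAU_MINUS = 0.020  # seconds

def stdp_dw(t_pre, t_post):
    """Weight change contributed by one pre/post spike pair."""
    dt = t_post - t_pre
    if dt >= 0:  # pre before post: long-term potentiation
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    return -A_MINUS * math.exp(dt / TAU_MINUS)  # post before pre: LTD

def apply_stdp(w, pre_spikes, post_spikes, w_min=0.0, w_max=1.0):
    """Accumulate the pair rule over all spike pairs, then clip."""
    for t_pre in pre_spikes:
        for t_post in post_spikes:
            w += stdp_dw(t_pre, t_post)
    return min(max(w, w_min), w_max)

# A single causal pair (pre at 10 ms, post at 15 ms) strengthens w.
w = apply_stdp(0.5, pre_spikes=[0.010], post_spikes=[0.015])
```

The statistical rule described in the thesis trades this exhaustive pair-by-pair accumulation for a cheaper approximation suited to SpiNNaker's stand-alone cores.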
9

Mekemeza, Ona Keshia. "Photonic spiking neuron network." Electronic Thesis or Diss., Bourgogne Franche-Comté, 2023. http://www.theses.fr/2023UBFCD052.

Full text
Abstract:
Today, neuromorphic networks play a crucial role in information processing, particularly as tasks become increasingly complex: voice recognition, dynamic image correlation, rapid multidimensional decision-making, data merging, behavioural optimization, etc. Neuromorphic networks come in several types; spiking networks are one of them. The latter's modus operandi is based on that of cortical neurons. As spiking networks are the most energy-efficient neuromorphic networks, they offer the greatest potential for scaling. Several demonstrations of artificial neurons have been conducted with electronic and, more recently, photonic circuits. 
The integration density of silicon photonics is an asset for creating circuits that are complex enough to carry out a complete demonstration. Therefore, this thesis aims to exploit an architecture for a photonic spiking neural network based on Q-switched lasers integrated on silicon and an ultra-dense, reconfigurable interconnection circuit that can emulate synaptic weights. A complete modelling of the circuit is expected, with a practical demonstration of an application in solving a mathematical problem to be defined.
10

Han, Bing. "ACCELERATION OF SPIKING NEURAL NETWORK ON GENERAL PURPOSE GRAPHICS PROCESSORS." University of Dayton / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1271368713.

Full text
11

SUSI, GIANLUCA. "Asynchronous spiking neural networks: paradigma generale e applicazioni." Doctoral thesis, Università degli Studi di Roma "Tor Vergata", 2012. http://hdl.handle.net/2108/80567.

Full text
12

Evans, Benjamin D. "Learning transformation-invariant visual representations in spiking neural networks." Thesis, University of Oxford, 2012. https://ora.ox.ac.uk/objects/uuid:15bdf771-de28-400e-a1a7-82228c7f01e4.

Full text
Abstract:
This thesis aims to understand the learning mechanisms which underpin the process of visual object recognition in the primate ventral visual system. The computational crux of this problem lies in the ability to retain specificity to recognize particular objects or faces, while exhibiting generality across natural variations and distortions in the view (DiCarlo et al., 2012). In particular, the work presented is focussed on gaining insight into the processes through which transformation-invariant visual representations may develop in the primate ventral visual system. The primary motivation for this work is the belief that some of the fundamental mechanisms employed in the primate visual system may only be captured through modelling the individual action potentials of neurons and therefore, existing rate-coded models of this process constitute an inadequate level of description to fully understand the learning processes of visual object recognition. To this end, spiking neural network models are formulated and applied to the problem of learning transformation-invariant visual representations, using a spike-time dependent learning rule to adjust the synaptic efficacies between the neurons. The ways in which the existing rate-coded CT (Stringer et al., 2006) and Trace (Földiák, 1991) learning mechanisms may operate in a simple spiking neural network model are explored, and these findings are then applied to a more accurate model using realistic 3-D stimuli. Three mechanisms are then examined, through which a spiking neural network may solve the problem of learning separate transformation-invariant representations in scenes composed of multiple stimuli by temporally segmenting competing input representations. The spike-time dependent plasticity in the feed-forward connections is then shown to be able to exploit these input layer dynamics to form individual stimulus representations in the output layer. 
Finally, the work is evaluated and future directions of investigation are proposed.
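The rate-coded trace rule of Földiák (1991) referred to above binds temporally adjacent views of a stimulus by correlating presynaptic input with a low-pass-filtered memory trace of postsynaptic activity. A minimal sketch with illustrative learning rate, trace decay, and toy input vectors:

```python
import numpy as np

def trace_learning(inputs, w, eta=0.05, delta=0.8):
    """Rate-coded trace rule in the spirit of Foldiak (1991):
        y_bar(t) = (1 - delta) * y(t) + delta * y_bar(t - 1)
        dw       = eta * y_bar(t) * x(t)
    The trace y_bar carries activity across time, so views presented
    close together reinforce the same weights.  eta and delta are
    illustrative, not values from the thesis."""
    y_bar = 0.0
    for x in inputs:                 # x: presynaptic rate vector
        y = float(np.dot(w, x))      # linear postsynaptic response
        y_bar = (1 - delta) * y + delta * y_bar
        w = w + eta * y_bar * x      # Hebbian update with the trace
        w = w / np.linalg.norm(w)    # normalise to keep weights bounded
    return w

# Two "transforms" of the same stimulus presented in temporal sequence
# strengthen the shared input dimensions.
views = [np.array([1.0, 0.9, 0.0]), np.array([0.9, 1.0, 0.0])]
w = trace_learning(views, w=np.array([0.4, 0.3, 0.5]))
```

The spiking models in the thesis replace this rate-coded trace with spike-time dependent mechanisms, but the underlying idea of temporal binding is the same.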
13

Schmidt, Maximilian [Verfasser], Markus [Akademischer Betreuer] Diesmann, and Andreas [Akademischer Betreuer] Offenhäusser. "Modeling and simulation of multi-scale spiking neuronal networks / Maximilian Schmidt ; Markus Diesmann, Andreas Offenhäusser." Aachen : Universitätsbibliothek der RWTH Aachen, 2016. http://d-nb.info/1126040851/34.

Full text
14

Sánchez, Rivera Giovanny. "Efficient multiprocessing architectures for spiking neural network emulation based on configurable devices." Doctoral thesis, Universitat Politècnica de Catalunya, 2014. http://hdl.handle.net/10803/285319.

Full text
Abstract:
The exploration of the dynamics of bioinspired neural networks has allowed neuroscientists to understand some clues and structures of the brain. Electronic neural network implementations are useful tools for this exploration. However, appropriate architectures are necessary due to the extremely high complexity of those networks. There has been an extraordinary development in reconfigurable computing devices (FPGAs) within a short period of time, especially in their resource availability, speed, and reconfigurability, which makes these devices suitable for emulating such networks. A reconfigurable parallel hardware architecture is proposed in this thesis to emulate complex and biologically realistic spiking neural networks (SNNs) in real time. Some relevant SNN models and their hardware approaches have been studied and analyzed in order to create an architecture that supports the implementation of these SNN models efficiently. The key factors, which involve flexibility in algorithm programmability, high performance processing, and low area and power consumption, have been taken into account. In order to boost the performance of the proposed architecture, several techniques have been developed: time to space mapping, neural virtualization, flexible synapse-neuron mapping, and specific learning and execution modes, among others. Besides this, an interface unit has been developed in order to build a bio-inspired system which can process sensory information from the environment. The spiking-neuron-based system combines analog and digital multi-processor implementations. Several applications have been developed as a proof-of-concept in order to show the capabilities of the proposed architecture for processing this type of information.
APA, Harvard, Vancouver, ISO, and other styles
15

Al, Nawasrah A. "Fast flux botnet detection based on adaptive dynamic evolving spiking neural network." Thesis, University of Salford, 2018. http://usir.salford.ac.uk/47199/.

Full text
Abstract:
A botnet, a set of compromised machines controlled remotely by an attacker, is the basis of numerous security threats around the world. Command and Control (C&C) servers are the backbone of botnet communications: the bots send reports to, and receive attack orders from, the botmaster. Botnets are also categorised according to their C&C protocols. A Domain Name System (DNS) technique known as a Fast-Flux Service Network (FFSN) is employed by bot herders to hide malicious botnet activities and increase the lifetime of malicious servers by rapidly changing the IP addresses associated with a domain name over time. Although several methods have been suggested for detecting FFSN domains, they suffer from low detection accuracy (especially with zero-day domains), long detection times, and high memory consumption. In this research we propose a new system, called the Fast Flux Killer System (FFKA), that is able to detect "zero-day" FF domains in online mode, with an implementation built on an Adaptive Dynamic evolving Spiking Neural Network (ADeSNN), and an offline mode to enhance the classification process, which is a novelty in this field. The adaptation includes the initial weights, testing criteria, parameter customization, and parameter adjustment. The proposed system is expected to detect fast flux domains in online mode with high detection accuracy and low false positive and false negative rates. It is also designed to perform well over its lifetime with low memory usage. Three public datasets are exploited in the experiments to show the effects of the adaptive ADeSNN algorithm: two experiments concern the ADeSNN algorithm itself and the third the process of detecting fast flux domains. The experiments showed improved accuracy when using the proposed adaptive ADeSNN over the original algorithm.
The system also achieved a high detection accuracy of about 99.54% in detecting zero-day fast flux domains in online mode, using the public fast flux dataset. Finally, the improvements made to the performance of the adaptive algorithm are confirmed by the experiments.
APA, Harvard, Vancouver, ISO, and other styles
16

Nowshin, Fabiha. "Spiking Neural Network with Memristive Based Computing-In-Memory Circuits and Architecture." Thesis, Virginia Tech, 2021. http://hdl.handle.net/10919/103854.

Full text
Abstract:
In recent years neuromorphic computing systems have achieved a lot of success thanks to their ability to process data much faster, using much less power, than traditional von Neumann computing architectures. There are two main types of Artificial Neural Networks (ANNs): Feedforward Neural Networks (FNNs) and Recurrent Neural Networks (RNNs). In this thesis we first study the types of RNNs and then move on to Spiking Neural Networks (SNNs). SNNs are an improved version of ANNs that mimic biological neurons closely through the emission of spikes, which brings significant power and energy advantages in data-intensive applications by allowing spatio-temporal information processing. On the other hand, emerging non-volatile memory (eNVM) technology is key to emulating neurons and synapses for in-memory computation in neuromorphic hardware. One eNVM technology in particular, the memristor, has received wide attention due to its scalability, compatibility with CMOS technology, and low power consumption. In this work we develop a spiking neural network that uses an inter-spike interval encoding scheme to convert the incoming input signal into spikes and a memristive crossbar to carry out in-memory computing operations. We develop a novel input and output processing engine for our network and demonstrate its spatio-temporal information processing capability. We demonstrate an accuracy of 100% with our design through a small-scale hardware simulation for digit recognition and an accuracy of 87% in software through MNIST simulations.

M.S.

In recent years neuromorphic computing systems have achieved a lot of success thanks to their ability to process data much faster, using much less power, than traditional von Neumann computing architectures. Artificial Neural Networks (ANNs) are models that mimic biological neurons: artificial neurons, or neurodes, are connected together via synapses, similar to the nervous system in the human body. There are two main types of ANNs: Feedforward Neural Networks (FNNs) and Recurrent Neural Networks (RNNs). In this thesis we first study the types of RNNs and then move on to Spiking Neural Networks (SNNs). SNNs are an improved version of ANNs that mimic biological neurons closely through the emission of spikes, which brings significant power and energy advantages in data-intensive applications by allowing spatio-temporal information processing. On the other hand, emerging non-volatile memory (eNVM) technology is key to emulating neurons and synapses for in-memory computation in neuromorphic hardware. One eNVM technology in particular, the memristor, has received wide attention due to its scalability, compatibility with CMOS technology, and low power consumption. In this work we develop a spiking neural network that uses an inter-spike interval encoding scheme to convert the incoming input signal into spikes and a memristive crossbar to carry out in-memory computing operations. We demonstrate the accuracy of our design through a small-scale hardware simulation for digit recognition and demonstrate an accuracy of 87% in software through MNIST simulations.
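The inter-spike interval encoding idea mentioned above can be sketched in a few lines. This is a minimal illustration only, not the thesis's actual scheme: the linear amplitude-to-interval mapping, the function name, and the `t_min`/`t_max` bounds are assumptions made for the example.

```python
import numpy as np

def isi_encode(signal, t_min=1.0, t_max=20.0):
    """Encode a signal into spike times via inter-spike intervals (ISI).

    Larger sample values produce shorter intervals (the next spike arrives
    sooner). t_min/t_max bound the interval in arbitrary time units; the
    linear mapping here is chosen purely for illustration.
    """
    s = np.asarray(signal, dtype=float)
    rng = s.max() - s.min()
    # Normalise to [0, 1]; a constant signal maps to the longest interval.
    norm = (s - s.min()) / rng if rng > 0 else np.zeros_like(s)
    intervals = t_max - norm * (t_max - t_min)
    return np.cumsum(intervals)  # absolute spike times

spikes = isi_encode([0.1, 0.9, 0.5, 0.9])
```

The information is carried by the gaps between spikes: the large sample 0.9 is followed after a short interval, the small sample 0.1 after a long one.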
APA, Harvard, Vancouver, ISO, and other styles
17

Öberg, Oskar. "Critical Branching Regulation of the E-I Net Spiking Neural Network Model." Thesis, Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-76770.

Full text
Abstract:
Spiking neural networks (SNNs) are dynamic models of biological neurons that communicate with event-based signals called spikes. SNNs that reproduce observed properties of biological senses such as vision are developed to better understand how such systems function, and to learn how more efficient sensor systems can be engineered. A branching parameter describes the average probability for spikes to propagate between two different neuron populations. The adaptation of branching parameters towards critical values is known to be important for maximizing the sensitivity and dynamic range of an SNN. In this thesis, a recently proposed SNN model for visual feature learning and pattern recognition known as the E-I Net model is studied and extended with a critical branching mechanism. The resulting modified E-I Net model is studied with numerical experiments and two different types of sensory cues. The experiments show that the modified E-I Net model demonstrates critical branching and power-law scaling behavior, as expected from an SNN near criticality, but the power laws are broken and the stimulus reconstruction error is higher compared to that of the original E-I Net model. Thus, on the basis of these experiments, it is not clear how to properly extend the E-I Net model with a critical branching mechanism. The E-I Net model has a particular structure in which the inhibitory neurons (I) are tuned to decorrelate the excitatory neurons (E), so that the visual features learned match the angular and frequency distributions of feature detectors in visual cortex V1 and different stimuli are represented by sparse subsets of the neurons. The broken power laws correspond to different scaling behavior at low and high spike rates, which may be related to the efficacy of inhibition in the model.
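The branching parameter described above can be estimated from binned population spike counts as the average number of "descendant" spikes per "ancestor" spike. The sketch below is a generic ratio estimator for illustration only; the function name and the estimator choice are assumptions, not the thesis's method.

```python
import numpy as np

def branching_parameter(spike_counts):
    """Estimate the branching parameter from binned population spike counts.

    The branching parameter is the average number of spikes in bin t+1
    produced per spike in bin t; a value of 1 indicates critical branching
    (activity is on average exactly self-sustaining). This simple estimator
    averages the descendant/ancestor ratio over bins with nonzero ancestors.
    """
    c = np.asarray(spike_counts, dtype=float)
    ancestors, descendants = c[:-1], c[1:]
    mask = ancestors > 0
    return float(np.mean(descendants[mask] / ancestors[mask]))

sigma = branching_parameter([4, 4, 4, 4])  # constant activity: sigma == 1.0
```

Values below 1 correspond to decaying (subcritical) activity and values above 1 to runaway (supercritical) activity, which is why adaptation toward 1 maximizes dynamic range.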
APA, Harvard, Vancouver, ISO, and other styles
18

De, Pasquale Daniele. "Modelli di neural network ispirati alla biofisica." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2021.

Find full text
Abstract:
Machine learning has been one of the most revolutionary disciplines of the last century: it has made it possible to tackle problems beyond the reach of ordinary computational algorithms, and it has opened new perspectives for the study of intelligence in nature. Thanks to the advent of big data and modern computers, deep learning in particular has received great attention since 2006 and has achieved extraordinary results in cognitive and commercial applications. The aim of this thesis is to survey the main historical results in these disciplines, starting with a brief analysis of machine learning (ML) systems, then focusing on the main types of artificial neural networks (ANNs), and finally presenting the results, advantages, and challenges of the most recent type of neural network, the spiking neural network (SNN). Unlike classical ANNs, SNNs are characterised by an activation state defined by a differential equation: this makes them more versatile on the one hand, but more computationally expensive on the other. The history of machine learning is marked by moments of enthusiasm and moments of crisis, of which we will see examples throughout the text, and SNNs are no exception to this pattern: their strengths and weaknesses have fuelled much debate about their actual usefulness, and although several key elements for a true breakthrough in applications are still missing today, SNNs are considered by many to be the most powerful version of ANNs conceived so far.
APA, Harvard, Vancouver, ISO, and other styles
19

Littlewort, G. C. "Neural network analysis and simulation." Thesis, University of Oxford, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.292677.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Kourkoulas-Chondrorizos, Alexandros. "Online optimisation of information transmission in stochastic spiking neural systems." Thesis, University of Edinburgh, 2012. http://hdl.handle.net/1842/5832.

Full text
Abstract:
An Information Theoretic approach is used for studying the effect of noise on various spiking neural systems. Detailed statistical analyses of neural behaviour under the influence of stochasticity are carried out and their results related to other work and also biological neural networks. The neurocomputational capabilities of the neural systems under study are put on an absolute scale. This approach was also used in order to develop an optimisation framework. A proof-of-concept algorithm is designed, based on information theory and the coding fraction, which optimises noise through maximising information throughput. The algorithm is applied with success to a single neuron and then generalised to an entire neural population with various structural characteristics (feedforward, lateral, recurrent connections). It is shown that there are certain positive and persistent phenomena due to noise in spiking neural networks and that these phenomena can be observed even under simplified conditions and therefore exploited. The transition is made from detailed and computationally expensive tools to efficient approximations. These phenomena are shown to be persistent and exploitable under a variety of circumstances. The results of this work provide evidence that noise can be optimised online in both single neurons and neural populations of varying structures.
APA, Harvard, Vancouver, ISO, and other styles
21

Weidel, Philipp [Verfasser], Abigail [Akademischer Betreuer] Morrison, and Gerhard [Akademischer Betreuer] Lakemeyer. "Learning and decision making in closed loop simulations of plastic spiking neural networks / Philipp Weidel ; Abigail Morrison, Gerhard Lakemeyer." Aachen : Universitätsbibliothek der RWTH Aachen, 2020. http://d-nb.info/1240838581/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Wang, Zhenzhong. "System Design and Implementation of a Fast and Accurate Bio-Inspired Spiking Neural Network." FIU Digital Commons, 2015. http://digitalcommons.fiu.edu/etd/2227.

Full text
Abstract:
Neuron models are the elementary units that determine the performance of an artificial spiking neural network (ASNN). This study introduces a new Generalized Leaky Integrate-and-Fire (GLIF) neuron model with a variable leaking resistor and bias current in order to reproduce accurately the membrane voltage dynamics of a biological neuron. The accuracy of this model is ensured by adjusting its parameters to the statistical properties of the Hodgkin-Huxley model outputs, while its speed is enhanced by introducing a Generalized Exponential Moving Average method that converts the parameterized kernel functions into pre-calculated lookup tables based on an analytic solution of the dynamic equations of the GLIF model. Spike encoding is the initial yet crucial step for any application domain of ASNNs. However, current encoding methods are not suitable for processing complex temporal signals. Motivated by the modulation relationship found between afferent synaptic currents in biological neurons, this study proposes a biologically plausible spike phase encoding method based on a novel spiking neuron model which performs a wavelet decomposition of the input signal and encodes the wavelet spectrum into synchronized output spike trains. The spike delays in each synchronizing period represent the spectrum amplitudes. The encoding method was tested on recordings of human voices for speech recognition purposes. Empirical evaluations confirm that the encoded spike trains constitute a good representation of the continuous wavelet transform of the original signal. The interictal spike (IS) is a type of transient discharge commonly found in electroencephalography (EEG) records from epilepsy patients. The detection of ISs remains an essential task for 3D source localization and for developing algorithms for seizure prediction and guided therapy.
We present in this work a new IS detection method using the phase encoding method with a customized wavelet sensor neuron and a specially designed ASNN structure. The detection results confirm the ability of such an ASNN to capture ISs automatically from multichannel EEG records.
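For readers unfamiliar with the integrate-and-fire family that the GLIF model generalizes, a plain leaky integrate-and-fire neuron can be sketched with forward-Euler integration. This is not the thesis's GLIF model (which adds a variable leaking resistor and bias current fitted to Hodgkin-Huxley statistics); the function name and all parameter values are illustrative assumptions.

```python
import numpy as np

def lif_simulate(I, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0, R=1.0):
    """Forward-Euler simulation of a basic leaky integrate-and-fire neuron.

    Membrane dynamics: dv/dt = (-(v - v_rest) + R * I) / tau.
    When v crosses v_thresh the neuron emits a spike and v resets.
    """
    v = v_rest
    spikes, trace = [], []
    for step, i_in in enumerate(I):
        v += dt * (-(v - v_rest) + R * i_in) / tau
        if v >= v_thresh:
            spikes.append(step * dt)  # record spike time
            v = v_reset
        trace.append(v)
    return np.array(trace), spikes

# Constant suprathreshold input drives regular tonic spiking.
trace, spikes = lif_simulate(np.full(2000, 20.0))
```

With constant input, the interval between spikes is fixed, which is exactly the limitation (no rich temporal structure) that motivates more expressive models such as the GLIF.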
APA, Harvard, Vancouver, ISO, and other styles
23

Schliebs, Stefan. "Heterogeneous probabilistic models for optimisation and modelling of evolving spiking neural networks." AUT University, 2010. http://hdl.handle.net/10292/963.

Full text
Abstract:
This thesis proposes a novel feature selection and classification method employing evolving spiking neural networks (eSNN) and evolutionary algorithms (EA). The method is named the Quantum-inspired Spiking Neural Network (QiSNN) framework. QiSNN represents an integrated wrapper approach. An evolutionary process evolves appropriate feature subsets for a given classification task and simultaneously optimises the neural and learning-related parameters of the network. Unlike other methods, the connection weights of this network are determined by a fast one-pass learning algorithm, which dramatically reduces the training time. At its core, QiSNN employs the Thorpe neural model, which allows the efficient simulation of even large networks. In QiSNN, the presence or absence of features is represented by a string of concatenated bits, while the parameters of the neural network are continuous. For the exploration of these two entirely different search spaces, a novel Estimation of Distribution Algorithm (EDA) is developed. The method maintains a population of probabilistic models specialised for the optimisation of either binary, continuous or heterogeneous search spaces while utilising a small and intuitive set of parameters. The EDA extends the Quantum-inspired Evolutionary Algorithm (QEA) proposed by Han and Kim (2002) and was named the Heterogeneous Hierarchical Model EDA (hHM-EDA). The algorithm is compared to numerous contemporary optimisation methods and studied in terms of convergence speed, solution quality and robustness in noisy search spaces. The thesis investigates the functioning and the characteristics of QiSNN using both synthetic feature selection benchmarks and a real-world case study on ecological modelling. By evolving suitable feature subsets, QiSNN significantly enhances the classification accuracy of eSNN.
Compared to numerous other feature selection techniques, like the wrapper-based Multilayer Perceptron (MLP) and the Naive Bayesian Classifier (NBC), QiSNN demonstrates a competitive classification and feature selection performance while requiring comparatively low computational costs.
APA, Harvard, Vancouver, ISO, and other styles
24

Hampo, Michael J. "Implementation Of Associative Memory With Online Learning into a Spiking Neural Network On Neuromorphic Hardware." University of Dayton / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1605800559072791.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Garrihy, G. "Neural network simulation of dynamic speech perception." Thesis, University of Essex, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.317930.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Wu, Jiaming. "A modular dynamic Neuro-Synaptic platform for Spiking Neural Networks." Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASP145.

Full text
Abstract:
Biological and artificial neural networks share a fundamental computational unit: the neuron. Neurons are coupled by synapses, forming complex networks that enable a variety of functions. Similarly, neuromorphic hardware, and more generally neuro-computers, also require two hardware elements: neurons and synapses. In this work, we introduce a bio-inspired spiking neuro-synaptic hardware unit, fully implemented with conventional electronic components. Our hardware is based on a textbook theoretical model of the spiking neuron and its synaptic and membrane currents. The spiking neuron is fully analog, and the various models that we introduce are defined by their hardware implementation. The neuron's excitability is achieved through a memristive device made from off-the-shelf electronic components. Both synaptic and membrane currents feature tunable intensities and bio-mimetic dynamics, including excitatory and inhibitory currents. All model parameters are adjustable, allowing the system to be tuned to bio-compatible timescales, which is crucial in applications such as brain-machine interfaces.
Building on these two modular units, we demonstrate various basic neural network motifs (or neuro-computing primitives) and show how to combine these fundamental motifs to implement more complex network functionalities, such as dynamical memories and central pattern generators. Our hardware design also carries potential extensions for integrating oxide-based memristors (which are widely studied in materials science), or porting the design to very large-scale integration (VLSI) to implement large-scale networks. The neuro-synaptic unit can be considered a building block for implementing spiking neural networks of arbitrary geometry. Its compact and modular design, as well as the wide availability of ordinary electronic components, makes our approach an attractive platform for building neural interfaces in medical devices, robotics, and artificial intelligence systems such as reservoir computing.
APA, Harvard, Vancouver, ISO, and other styles
27

Ghosh, Dastidar Samanwoy. "Models of EEG data mining and classification in temporal lobe epilepsy: wavelet-chaos-neural network methodology and spiking neural networks." Columbus, Ohio : Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1180459585.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Vitay, Julien, Helge Ülo Dinkelbach, and Fred Henrik Hamker. "ANNarchy: a code generation approach to neural simulations on parallel hardware." Universitätsbibliothek Chemnitz, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-182490.

Full text
Abstract:
Many modern neural simulators focus on the simulation of networks of spiking neurons on parallel hardware. Another important framework in computational neuroscience, rate-coded neural networks, is mostly difficult or impossible to implement using these simulators. We present here the ANNarchy (Artificial Neural Networks architect) neural simulator, which makes it easy to define and simulate rate-coded and spiking networks, as well as combinations of both. The Python interface has been designed to be close to the PyNN interface, while the definition of neuron and synapse models can be specified using an equation-oriented mathematical description similar to that of the Brian neural simulator. This information is used to generate C++ code that will efficiently perform the simulation on the chosen parallel hardware (multi-core system or graphics processing unit). Several numerical methods are available to transform ordinary differential equations into efficient C++ code. We compare the parallel performance of the simulator to existing solutions.
APA, Harvard, Vancouver, ISO, and other styles
29

Jansson, Ylva. "Normalization in a cortical hypercolumn : The modulatory effects of a highly structured recurrent spiking neural network." Thesis, KTH, Beräkningsbiologi, CB, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-158990.

Full text
Abstract:
Normalization is important for a large range of phenomena in biological neural systems, such as light adaptation in the retina, context-dependent decision making, and probabilistic inference. In a normalizing circuit, the activity of one neuron or group of neurons is divisively rescaled in relation to the activity of other neurons or groups. This creates neural responses invariant to certain stimulus dimensions and dynamically adapts the range over which a neural system can respond discriminatively to stimuli. This thesis examines whether a biologically realistic normalizing circuit can be implemented by a spiking neural network model based on the columnar structure found in cortex. This was done by constructing and evaluating a highly structured spiking neural network model of layer 2/3 of a cortical hypercolumn, using a group of neurons as the basic computational unit. The results show that the structure of this hypercolumn module does not per se create a normalizing network. For most model versions the modulatory effect is better described as subtractive inhibition. However, three mechanisms that shift the modulatory effect towards normalization were found: an increase in membrane variance for increased modulatory inputs; variability in neuron excitability and connections; and short-term depression on the driving synapses. Moreover, it is shown that by combining those mechanisms it is possible to create a spiking neural network that implements approximate normalization over at least a tenfold increase in input magnitude. These results point towards possible normalizing mechanisms in a cortical hypercolumn; however, more studies are needed to assess whether any of those could in fact be a viable explanation for normalization in the biological nervous system.
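The contrast drawn in this abstract between divisive rescaling and subtractive inhibition can be made concrete with the textbook normalization equation r_i = d_i / (sigma + sum_j p_j). The sketch below is a generic illustration, not the thesis's spiking circuit; the function name, the pooling rule, and the constant `sigma` are assumptions.

```python
import numpy as np

def divisive_normalization(drive, pool, sigma=1.0):
    """Divisive normalization: rescale each response by pooled activity.

    r_i = drive_i / (sigma + sum(pool)). Scaling drive and pool together
    changes the absolute responses but leaves the response pattern's shape
    invariant -- the hallmark of normalization, unlike subtractive
    inhibition (r_i = drive_i - c), which shifts rather than rescales.
    """
    drive = np.asarray(drive, dtype=float)
    return drive / (sigma + float(np.sum(pool)))

r1 = divisive_normalization([2.0, 4.0], pool=[2.0, 4.0])
r2 = divisive_normalization([4.0, 8.0], pool=[4.0, 8.0])  # input doubled
```

In both cases the second response stays exactly twice the first, even though the overall input magnitude doubled; this stimulus-invariant shape is what the thesis tests the hypercolumn module against.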
APA, Harvard, Vancouver, ISO, and other styles
30

Stewart, Robert Douglas. "Spiking neural network models of the basal ganglia : cortically-evoked response patterns and action selection properties." Thesis, University of Sheffield, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.445121.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Fox, Paul James. "Massively parallel neural computation." Thesis, University of Cambridge, 2013. https://www.repository.cam.ac.uk/handle/1810/245013.

Full text
Abstract:
Reverse-engineering the brain is one of the US National Academy of Engineering’s “Grand Challenges.” The structure of the brain can be examined at many different levels, spanning many disciplines from low-level biology through psychology and computer science. This thesis focusses on real-time computation of large neural networks using the Izhikevich spiking neuron model. Neural computation has been described as “embarrassingly parallel” as each neuron can be thought of as an independent system, with behaviour described by a mathematical model. However, the real challenge lies in modelling neural communication. While the connectivity of neurons has some parallels with that of electrical systems, its high fan-out results in massive data processing and communication requirements when modelling neural communication, particularly for real-time computations. It is shown that memory bandwidth is the most significant constraint to the scale of real-time neural computation, followed by communication bandwidth, which leads to a decision to implement a neural computation system on a platform based on a network of Field Programmable Gate Arrays (FPGAs), using commercial off-the-shelf components with some custom supporting infrastructure. This brings implementation challenges, particularly lack of on-chip memory, but also many advantages, particularly high-speed transceivers. An algorithm to model neural communication that makes efficient use of memory and communication resources is developed and then used to implement a neural computation system on the multi-FPGA platform. Finding suitable benchmark neural networks for a massively parallel neural computation system proves to be a challenge. A synthetic benchmark that has biologically-plausible fan-out, spike frequency and spike volume is proposed and used to evaluate the system.
It is shown to be capable of computing the activity of a network of 256k Izhikevich spiking neurons with a fan-out of 1k in real-time using a network of 4 FPGA boards. This compares favourably with previous work, with the added advantage of scalability to larger neural networks using more FPGAs. It is concluded that communication must be considered as a first-class design constraint when implementing massively parallel neural computation systems.
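The Izhikevich model used throughout this thesis has a compact standard form, which can be sketched in software as follows. This is a forward-Euler sketch at 1 ms steps using the regular-spiking parameters from Izhikevich's 2003 paper; it illustrates the model only and reproduces none of the FPGA implementation details.

```python
import numpy as np

def izhikevich(I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=1.0):
    """Izhikevich (2003) spiking neuron model, forward-Euler integration.

    v' = 0.04 v^2 + 5 v + 140 - u + I
    u' = a (b v - u)
    with reset v <- c, u <- u + d when v reaches 30 mV.
    Defaults are the standard regular-spiking parameter set.
    """
    v, u = c, b * c
    spikes = []
    for t, i_in in enumerate(I):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + i_in)
        u += dt * a * (b * v - u)
        if v >= 30.0:
            spikes.append(t)      # spike time in ms
            v, u = c, u + d       # reset membrane, bump recovery variable
    return spikes

spikes = izhikevich(np.full(1000, 14.0))  # 1 s of constant input current
```

Only two state variables and four parameters per neuron make the model cheap enough to evaluate in parallel for hundreds of thousands of neurons, which is what makes the real-time FPGA computation described above feasible; the communication of the resulting spikes, not this arithmetic, is the bottleneck the thesis addresses.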
APA, Harvard, Vancouver, ISO, and other styles
32

Bazargan-Harandi, Hamid. "Neural network based simulation of sea-state sequences." Thesis, Brunel University, 2006. http://bura.brunel.ac.uk/handle/2438/379.

Full text
Abstract:
The present PhD study, in its first part, uses artificial neural networks (ANNs), an optimization technique called simulated annealing, and statistics to simulate the significant wave height (Hs) and mean zero-up-crossing period of 3-hourly sea-states at a location in the North East Pacific, using a proposed distribution called the hepta-parameter spline distribution for the conditional distribution of Hs or the zero-up-crossing period given some inputs. Two different seven-network sets of ANNs for the simulation and prediction of Hs and the zero-up-crossing period were trained using 20 years of observations. The preceding values of Hs and the zero-up-crossing period were the most important inputs given to the networks, but the starting day of the simulated period was also necessary; the code replaced the day with the corresponding time and season. The networks were trained by a simulated annealing algorithm, and the outputs of the two sets of networks were used to calculate the parameters of the probability density function (pdf) of the proposed hepta-parameter distribution. After the calculation of the seven parameters of the pdf from the network outputs, the Hs and zero-up-crossing period of the future sea-state are predicted by generating random numbers from the corresponding pdf. In another part of the thesis, vertical piles have been studied with the goal of identifying the range of sea-states suitable for safe pile-driving operations. The pile configuration, including the non-linear foundation and the gap between the pile and the pile sleeve shims, was modelled using the finite element analysis facilities within ABAQUS. Dynamic analyses of the system for a sea-state characterized by Hs and the zero-up-crossing period, modelled as a combination of several wave components, were performed. A table of safe and unsafe sea-states was generated by repeating the analysis for various sea-states. If the prediction for a particular sea-state is repeated N times, of which n prove to be safe, then the predicted sea-state is safe with a probability of 100(n/N)%.
The last part of the thesis deals with Hs return values. The return value is a widely used measure of wave extremes and plays an important role in determining the design wave used in the design of maritime structures. In this part, the Hs return value was calculated, demonstrating another application of the above simulation of future 3-hourly Hs values. The maxima method for calculating return values was applied in a way that avoids the conventional need for unrealistic assumptions. The significant wave height return value has also been calculated using the convolution concept from a model presented by Anderson et al. (2001).
APA, Harvard, Vancouver, ISO, and other styles
33

May, Norman L. "Fault simulation of a wafer-scale neural network." Full text open access at:, 1988. http://content.ohsu.edu/u?/etd,159.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Diesmann, Markus. "Conditions for stable propagation of synchronous spiking in cortical neural networks single neuron dynamics and network properties /." [S.l.] : [s.n.], 2002. http://deposit.ddb.de/cgi-bin/dokserv?idn=968772781.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Franke, Cameron. "Autonomous Driving with a Simulation Trained Convolutional Neural Network." Scholarly Commons, 2017. https://scholarlycommons.pacific.edu/uop_etds/2971.

Full text
Abstract:
Autonomous vehicles will help society if they can easily support a broad range of driving environments, conditions, and vehicles. Achieving this requires reducing the complexity of the algorithmic system, easing the collection of training data, and verifying operation using real-world experiments. Our work addresses these issues by utilizing a reflexive neural network that translates images into steering and throttle commands. This network is trained using simulation data from Grand Theft Auto V, which we augment to reduce the number of simulation hours driven. We then validate our work using an RC car system through numerous tests. Our system successfully drives 98 of 100 laps of a track with multiple road types and difficult turns; it also successfully avoids collisions with another vehicle in 90% of the trials.
APA, Harvard, Vancouver, ISO, and other styles
36

Patterson, James Cameron. "Managing a real-time massively-parallel neural architecture." Thesis, University of Manchester, 2012. https://www.research.manchester.ac.uk/portal/en/theses/managing-a-realtime-massivelyparallel-neural-architecture(dfab5ca7-fcd5-4ebe-887b-0a7c330c7206).html.

Full text
Abstract:
A human brain has billions of processing elements operating simultaneously; the only practical way to model this computationally is with a massively-parallel computer. A computer on such a significant scale requires hundreds of thousands of interconnected processing elements, a complex environment which requires many levels of monitoring, management and control. Management begins from the moment power is applied and continues whilst the application software loads, executes, and the results are downloaded. This is the story of the research and development of a framework of scalable management tools that support SpiNNaker, a novel computing architecture designed to model spiking neural networks of biologically-significant sizes. This management framework provides solutions from the most fundamental set of power-on self-tests, through to complex, real-time monitoring of the health of the hardware and the software during simulation. The framework devised uses standard tools where appropriate, covering hardware up / down events and capacity information, through to bespoke software developed to provide real-time insight to neural network software operation across multiple levels of abstraction. With this layered management approach, users (or automated agents) have access to results dynamically and are able to make informed decisions on required actions in real-time.
APA, Harvard, Vancouver, ISO, and other styles
37

Li, Zhengrong. "Aerial image analysis using spiking neural networks with application to power line corridor monitoring." Thesis, Queensland University of Technology, 2011. https://eprints.qut.edu.au/46161/1/Zhengrong_Li_Thesis.pdf.

Full text
Abstract:
Trees, shrubs and other vegetation are of continued importance to the environment and our daily life. They provide shade around our roads and houses, offer a habitat for birds and wildlife, and absorb air pollutants. However, vegetation touching power lines is a risk to public safety and the environment, and one of the main causes of power supply problems. Vegetation management, which includes tree trimming and vegetation control, is a significant cost component of the maintenance of electrical infrastructure. For example, Ergon Energy, Australia’s largest energy distributor by geographic footprint, currently spends over $80 million a year inspecting and managing vegetation that encroaches on power line assets. Currently, most vegetation management programs for distribution systems are calendar-based ground patrols. However, calendar-based inspection by linesmen is labour-intensive, time-consuming and expensive. It also results in some zones being trimmed more frequently than needed and others not cut often enough. Moreover, it is seldom practicable to measure all the plants around power line corridors by field methods. Remote sensing data captured from airborne sensors has great potential to assist vegetation management in power line corridors. This thesis presented a comprehensive study on using spiking neural networks in a specific image analysis application: power line corridor monitoring. Theoretically, the thesis focuses on a biologically inspired spiking cortical model: the pulse coupled neural network (PCNN). The original PCNN model was simplified in order to better analyse the pulse dynamics and control the performance. Some new and effective algorithms were developed based on the proposed spiking cortical model for object detection, image segmentation and invariant feature extraction. The developed algorithms were evaluated in a number of experiments using real image data collected from our flight trials. 
The experimental results demonstrated the effectiveness and advantages of spiking neural networks in image processing tasks. Operationally, the knowledge gained from this research project offers a good reference to our industry partner (i.e. Ergon Energy) and other energy utilities who want to improve their vegetation management activities. The novel approaches described in this thesis showed the potential of using cutting-edge sensor technologies and intelligent computing techniques to improve power line corridor monitoring. The lessons learnt from this project are also expected to increase the confidence of energy companies to move from a traditional vegetation management strategy to a more automated, accurate and cost-effective solution using aerial remote sensing techniques.
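The pulse coupled neural network named above iterates a standard set of feeding, linking and dynamic-threshold equations over an image. The following is a minimal sketch of one such iteration in the textbook PCNN form with illustrative constants, not the simplified model developed in the thesis:

```python
import numpy as np

def pcnn_step(S, F, L, Y, theta, beta=0.2, aF=0.1, aL=1.0, aT=0.5,
              VF=0.5, VL=0.2, VT=20.0):
    """One iteration of a basic pulse coupled neural network (PCNN).

    S: input image (stimulus); F, L: feeding and linking states;
    Y: binary pulse output from the previous step; theta: dynamic threshold.
    """
    # 3x3 linking kernel: each neuron is coupled to its neighbours' pulses
    K = np.ones((3, 3))
    K[1, 1] = 0.0
    link = sum(np.roll(np.roll(Y, dy, 0), dx, 1) * K[dy + 1, dx + 1]
               for dy in (-1, 0, 1) for dx in (-1, 0, 1))
    F = np.exp(-aF) * F + VF * link + S          # feeding input
    L = np.exp(-aL) * L + VL * link              # linking input
    U = F * (1.0 + beta * L)                     # internal activity
    Y = (U > theta).astype(float)                # pulse when activity beats threshold
    theta = np.exp(-aT) * theta + VT * Y         # threshold jumps after a pulse, decays otherwise
    return F, L, Y, theta
```

Iterating `pcnn_step` over an image produces pulse trains in which pixels with similar intensity and connected neighbourhoods tend to fire in the same epochs, which is the property exploited for segmentation and feature extraction.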
APA, Harvard, Vancouver, ISO, and other styles
38

Guo, Lilin. "A Biologically Plausible Supervised Learning Method for Spiking Neurons with Real-world Applications." FIU Digital Commons, 2016. http://digitalcommons.fiu.edu/etd/2982.

Full text
Abstract:
Learning is central to infusing intelligence into any biologically inspired system. This study introduces a novel Cross-Correlated Delay Shift (CCDS) learning method for spiking neurons with the ability to learn and reproduce arbitrary spike patterns in a supervised fashion, with applicability to spatiotemporal information encoded in the precise timing of spikes. By integrating the cross-correlated term, axonal and synapse delays, the CCDS rule is proven to be both biologically plausible and computationally efficient. The proposed learning algorithm is evaluated in terms of reliability, adaptive learning performance, generality to different neuron models, learning in the presence of noise, effects of its learning parameters and classification performance. The results indicate that the proposed CCDS learning rule greatly improves classification accuracy when compared with the Spike Pattern Association Neuron (SPAN) learning rule and the Tempotron learning rule. Network structure is a crucial part of any application domain of an Artificial Spiking Neural Network (ASNN). Thus, temporal learning rules in multilayer spiking neural networks are investigated. As an extension of the single-layer learning rule, the multilayer CCDS (MutCCDS) is also developed. Correlated neurons are connected through fine-tuned weights and delays. In contrast to the multilayer Remote Supervised Method (MutReSuMe) and the multilayer Tempotron rule (MutTmptr), the newly developed MutCCDS shows better generalization ability and faster convergence. The proposed multilayer rules provide an efficient and biologically plausible mechanism, describing how delays and synapses in multilayer networks are adjusted to facilitate learning. Interictal spikes (IS) are morphologically defined brief events observed in electroencephalography (EEG) records from patients with epilepsy. 
The detection of IS remains an essential task for 3D source localization as well as for developing algorithms for seizure prediction and guided therapy. In this work, we present a new IS detection method using the Wavelet Encoding Device (WED) together with the CCDS learning rule and a specially designed Spiking Neural Network (SNN) structure. The results confirm the ability of such an SNN to achieve good performance in automatically detecting such events from multichannel EEG records.
APA, Harvard, Vancouver, ISO, and other styles
39

Xu, Shuxiang, University of Western Sydney, and Faculty of Informatics, Science and Technology. "Neuron-adaptive neural network models and applications." THESIS_FIST_XXX_Xu_S.xml, 1999. http://handle.uws.edu.au:8081/1959.7/275.

Full text
Abstract:
Artificial Neural Networks have been widely explored by researchers worldwide to address problems such as function approximation and data simulation. This thesis deals with Feed-forward Neural Networks (FNN's) with a new neuron activation function called the Neuron-adaptive Activation Function (NAF), and Feed-forward Higher Order Neural Networks (HONN's) with this new neuron activation function. We have designed a new neural network model, the Neuron-Adaptive Neural Network (NANN), and mathematically proved that one NANN can approximate any piecewise continuous function to any desired accuracy. In the neural network literature, only Zhang proved the universal approximation ability of an FNN group to any piecewise continuous function. Next, we have developed the approximation properties of Neuron Adaptive Higher Order Neural Networks (NAHONN's), a combination of HONN's and NAF, to any continuous function, functional and operator. Finally, we have created a software program called MASFinance, which runs on the Solaris system, for the approximation of continuous or discontinuous functions, and for the simulation of any continuous or discontinuous data (especially financial data). Our work distinguishes itself from previous work in the following ways: we use a new neuron-adaptive activation function, while the neuron activation functions in most existing work are fixed and can't be tuned to adapt to different approximation problems; we use only one NANN to approximate any piecewise continuous function, while a neural network group must be utilised in previous research; we combine HONN's with NAF and investigate the approximation properties to any continuous function, functional, and operator; and we present a new software program, MASFinance, for function approximation and data simulation. 
Experiments running MASFinance indicate that the proposed NANN's present several advantages over traditional neuron-fixed networks (such as greatly reduced network size, faster learning, and lessened simulation errors), and that the suggested NANN's can approximate piecewise continuous functions more effectively than neural network groups. Experiments also indicate that NANN's are especially suitable for data simulation.
Doctor of Philosophy (PhD)
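The idea of a neuron-adaptive activation function can be illustrated by giving each neuron a parameterised nonlinearity whose shape parameters are trained alongside the network weights. The form below, a sigmoid-plus-sine mix, is a hypothetical example of such a function, not the NAF actually defined in the thesis:

```python
import math

def adaptive_activation(x, a1=1.0, b1=1.0, a2=0.0, b2=1.0):
    """A hypothetical neuron-adaptive activation function.

    The shape parameters (a1, b1, a2, b2) are per-neuron trainable values,
    so each neuron can tune its own nonlinearity to the approximation task
    instead of using one fixed activation for the whole network.
    """
    return a1 / (1.0 + math.exp(-b1 * x)) + a2 * math.sin(b2 * x)

# With default parameters this reduces to a plain logistic sigmoid;
# training can bend it towards oscillatory shapes via a2 and b2.
y = adaptive_activation(0.0)
```

During training, gradients are taken with respect to the shape parameters as well as the weights, which is what lets a single adaptive network cover cases that would otherwise need a group of fixed-activation networks.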
APA, Harvard, Vancouver, ISO, and other styles
40

Buhry, Laure. "Estimation de paramètres de modèles de neurones biologiques sur une plate-forme de SNN (Spiking Neural Network) implantés "in silico"." Thesis, Bordeaux 1, 2010. http://www.theses.fr/2010BOR14057/document.

Full text
Abstract:
These works, which were conducted in a research group designing neuromimetic integrated circuits based on the Hodgkin-Huxley model, deal with the parameter estimation of biological neuron models. The first part of the manuscript tries to bridge the gap between neuron modeling and optimization. We focus our interest on the Hodgkin-Huxley model because it is the one used in the group. There already existed an estimation method associated with the voltage-clamp measurement technique. Nevertheless, this classical estimation method does not allow all parameters of the model to be extracted precisely, so in the second part we propose an alternative method, based on the differential evolution algorithm, to jointly estimate all parameters of one ionic channel while avoiding the usual approximations. The third chapter is divided into three sections: the first two sections present the application of our new estimation method to two different problems, model fitting from biological data and the development of an automated tuning procedure for neuromimetic chips, ionic channel by ionic channel. In the third section, we propose an estimation technique using only membrane voltage recordings, which are easier to measure than ionic currents. Finally, the fourth and last chapter is a theoretical study preparing the implementation of small networks of around one hundred electronic neurons: more specifically, we study the influence of cellular intrinsic properties on the global behavior of a neural network in the context of gamma oscillations.
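The differential evolution algorithm used here for parameter estimation can be sketched in a few lines. The example below fits a toy two-parameter exponential relaxation curve, a stand-in for a channel activation curve rather than the actual Hodgkin-Huxley fitting problem; all names and constants are illustrative:

```python
import math
import random

def differential_evolution(cost, bounds, pop_size=25, F=0.7, CR=0.9,
                           generations=80, seed=1):
    """Minimal DE/rand/1/bin minimiser over a box-bounded search space."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    costs = [cost(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # pick three distinct partners and build a mutant vector
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)   # clip mutant to the search box
                else:
                    v = pop[i][j]
                trial.append(v)
            tc = cost(trial)
            if tc <= costs[i]:                # greedy one-to-one selection
                pop[i], costs[i] = trial, tc
    best = min(range(pop_size), key=costs.__getitem__)
    return pop[best], costs[best]

# Toy fitting target: v(t) = v_inf * (1 - exp(-t / tau)), parameters (tau, v_inf)
ts = [0.5 * k for k in range(1, 21)]
data = [-65.0 * (1.0 - math.exp(-t / 3.0)) for t in ts]

def cost(p):
    tau, v_inf = p
    return sum((v_inf * (1.0 - math.exp(-t / tau)) - d) ** 2
               for t, d in zip(ts, data))

params, err = differential_evolution(cost, [(0.1, 10.0), (-100.0, 0.0)])
```

The appeal of DE in this setting is that it needs only cost-function evaluations, no gradients, so all parameters of a channel can be searched jointly against recorded traces.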
APA, Harvard, Vancouver, ISO, and other styles
41

Ternstedt, Andreas. "Pattern recognition with spiking neural networks and the ROLLS low-power online learning neuromorphic processor." Thesis, Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-63033.

Full text
Abstract:
Online monitoring applications requiring advanced pattern recognition capabilities implemented in resource-constrained wireless sensor systems are challenging to construct using standard digital computers. An interesting alternative is to use a low-power neuromorphic processor like the ROLLS, with subthreshold mixed analog/digital circuits and online learning capabilities that approximate the behavior of real neurons and synapses. This requires that the monitoring algorithm be implemented with spiking neural networks, which in principle are efficient computational models for tasks such as pattern recognition. In this work, I investigate how spiking neural networks can be used as a pre-processing and feature learning system in a condition monitoring application where the vibration of a machine with healthy and faulty rolling-element bearings is considered. Pattern recognition with spiking neural networks is investigated using simulations with Brian -- a Python-based open source toolbox -- and an implementation is developed for the ROLLS neuromorphic processor. I analyze the learned feature-response properties of individual neurons. When pre-processing the input signals with a neuromorphic cochlea known as the AER-EAR system, the ROLLS chip learns to classify the resulting spike patterns with a training error of less than 1%, at a combined power consumption of approximately 30 mW. Thus, the neuromorphic hardware system can potentially be realized in a resource-constrained wireless sensor for online monitoring applications. However, further work is needed for testing and cross-validation of the feature learning and pattern recognition networks.
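The model neurons simulated in such spiking networks, in Brian and on chips like the ROLLS, are typically variants of the leaky integrate-and-fire neuron. A bare-bones sketch in plain Python, with illustrative constants rather than the chip's actual circuit parameters:

```python
def simulate_lif(input_current, dt=1e-4, tau=0.02, R=1e7,
                 v_rest=-0.065, v_thresh=-0.050, v_reset=-0.065):
    """Leaky integrate-and-fire neuron: dv/dt = (v_rest - v + R*I) / tau.

    input_current: list of current samples (A), one per time step of dt seconds.
    Returns the spike times (s) produced by the input current trace.
    """
    v = v_rest
    spikes = []
    for step, I in enumerate(input_current):
        v += dt * (v_rest - v + R * I) / tau   # forward-Euler membrane update
        if v >= v_thresh:                      # threshold crossing: spike and reset
            spikes.append(step * dt)
            v = v_reset
    return spikes

# 100 ms of constant 2 nA drive produces a regular spike train
spike_times = simulate_lif([2e-9] * 1000)
```

Feeding such neurons with filtered sensor signals (here, the cochlea-style spike encoding of vibration data) turns a continuous monitoring problem into spike-pattern classification.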
APA, Harvard, Vancouver, ISO, and other styles
42

Humble, James. "Learning, self-organisation and homeostasis in spiking neuron networks using spike-timing dependent plasticity." Thesis, University of Plymouth, 2013. http://hdl.handle.net/10026.1/1499.

Full text
Abstract:
Spike-timing dependent plasticity is a learning mechanism used extensively within neural modelling. The learning rule has been shown to allow a neuron to find the onset of a spatio-temporal pattern repeated among its afferents. In this thesis, the first question addressed is ‘what does this neuron learn?’ With a spiking neuron model and linear prediction, evidence is adduced that the neuron learns two components: (1) the level of average background activity and (2) specific spike times of a pattern. Taking advantage of these findings, a network is developed that can train recognisers for longer spatio-temporal input signals using spike-timing dependent plasticity. Using a number of neurons that are mutually connected by plastic synapses and subject to a global winner-takes-all mechanism, chains of neurons can form where each neuron is selective to a different segment of a repeating input pattern, and the neurons are feedforwardly connected in such a way that both the correct stimulus and the firing of the previous neurons are required in order to activate the next neuron in the chain. This is akin to a simple class of finite state automata. Following this, a novel resource-based STDP learning rule is introduced. The learning rule has several advantages over typical implementations of STDP and results in synaptic statistics which match favourably with those observed experimentally. For example, synaptic weight distributions and the presence of silent synapses match experimental data.
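The pair-based STDP rule underlying this work is commonly written as an exponential window over the pre/post spike-time difference. A minimal sketch of that generic textbook form (not the resource-based variant the thesis introduces):

```python
import math

def stdp_dw(delta_t, A_plus=0.01, A_minus=0.012,
            tau_plus=0.020, tau_minus=0.020):
    """Pair-based STDP weight change for delta_t = t_post - t_pre (seconds).

    Pre-before-post pairings (delta_t > 0) potentiate the synapse;
    post-before-pre pairings depress it, both with exponential decay
    over the timing windows tau_plus and tau_minus.
    """
    if delta_t > 0:
        return A_plus * math.exp(-delta_t / tau_plus)
    return -A_minus * math.exp(delta_t / tau_minus)

# Causal pairing strengthens the synapse, anti-causal pairing weakens it
dw_pot = stdp_dw(0.005)    # pre fires 5 ms before post
dw_dep = stdp_dw(-0.005)   # post fires 5 ms before pre
```

Because only synapses whose presynaptic spikes reliably precede the postsynaptic spike are strengthened, a neuron trained this way drifts towards the onset of a repeating spatio-temporal pattern, which is the behaviour the thesis analyses.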
APA, Harvard, Vancouver, ISO, and other styles
43

Rastogi, Preeti. "Assessing Wireless Network Dependability Using Neural Networks." Ohio University / OhioLINK, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1129134364.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Henderson, Stephen Alexander Jr. "A Memristor-Based Liquid State Machine for Auditory Signal Recognition." University of Dayton / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1628251263454753.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Buhry, Laure. "Estimation de paramètres de modèles de neurones biologiques sur une plate-forme de SNN (Spiking Neural Network) implantés "in silico"." Phd thesis, Université Sciences et Technologies - Bordeaux I, 2010. http://tel.archives-ouvertes.fr/tel-00561396.

Full text
Abstract:
This thesis work, carried out in a team designing neuromimetic analog circuits based on the Hodgkin-Huxley model, deals with the modeling of biological neurons, more precisely with the estimation of neuron model parameters. The first part of the manuscript links neuron modeling to optimization, focusing on the Hodgkin-Huxley model, for which a parameter extraction method associated with an electrophysiological measurement technique (the voltage clamp) already existed, but whose successive approximations made the precise determination of certain parameters impossible. In the second part we propose an alternative method for estimating the parameters of the Hodgkin-Huxley model, based on the differential evolution algorithm, which overcomes the limitations of the classical method and allows all parameters of a given ionic channel to be estimated jointly. The third chapter is divided into three sections. In the first two, we apply our new technique to the estimation of the parameters of the same model from biological data, and then develop an automated protocol for tuning neuromimetic circuits, ionic channel by ionic channel. The third section presents a method for estimating the parameters from recordings of a neuron's membrane voltage, data that are easier to acquire than ionic currents. The fourth and last chapter opens towards the use of small networks of around one hundred electronic neurons: we carry out a software study of the influence of the intrinsic properties of the cell on the global behavior of the network in the context of gamma oscillations.
APA, Harvard, Vancouver, ISO, and other styles
46

Dickson, Scott M. "Stochastic neural network dynamics : synchronisation and control." Thesis, Loughborough University, 2014. https://dspace.lboro.ac.uk/2134/16508.

Full text
Abstract:
Biological brains exhibit many interesting and complex behaviours. Understanding of the mechanisms behind brain behaviours is critical for continuing advancement in fields of research such as artificial intelligence and medicine. In particular, synchronisation of neuronal firing is associated with both improvements to and degeneration of the brain's performance; increased synchronisation can lead to enhanced information-processing or neurological disorders such as epilepsy and Parkinson's disease. As a result, it is desirable to research under which conditions synchronisation arises in neural networks and the possibility of controlling its prevalence. Stochastic ensembles of FitzHugh-Nagumo elements are used to model neural networks for numerical simulations and bifurcation analysis. The FitzHugh-Nagumo model is employed because of its realistic representation of the flow of sodium and potassium ions in addition to its advantageous property of allowing phase plane dynamics to be observed. Network characteristics such as connectivity, configuration and size are explored to determine their influences on global synchronisation generation in their respective systems. Oscillations in the mean-field are used to detect the presence of synchronisation over a range of coupling strength values. To ensure simulation efficiency, coupling strengths between neurons that are identical and fixed with time are investigated initially. Such networks where the interaction strengths are fixed are referred to as homogeneously coupled. The capacity of controlling and altering behaviours produced by homogeneously coupled networks is assessed through the application of weak and strong delayed feedback independently with various time delays. To imitate learning, the coupling strengths later deviate from one another and evolve with time in networks that are referred to as heterogeneously coupled. 
The intensity of coupling strength fluctuations and the rate at which coupling strengths converge to a desired mean value are studied to determine their impact upon synchronisation performance. The stochastic delay differential equations governing the numerically simulated networks are then converted into a finite set of deterministic cumulant equations by virtue of the Gaussian approximation method. Cumulant equations for maximal and sub-maximal connectivity are used to generate two-parameter bifurcation diagrams on the noise intensity and coupling strength plane, which provide qualitative agreement with numerical simulations. Analysis of artificial brain networks, with respect to biological brain networks, is discussed in light of recent research in sleep theory.
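A stochastic ensemble of coupled FitzHugh-Nagumo elements of the kind described above can be integrated with the Euler-Maruyama scheme. The sketch below uses global (all-to-all, homogeneous) coupling and illustrative parameter values, and takes the variance of the mean-field as a crude synchronisation measure, mirroring the oscillation-based detection used in the thesis:

```python
import numpy as np

def simulate_fhn_network(n=50, coupling=0.1, noise=0.05,
                         steps=20000, dt=0.01, seed=0):
    """Euler-Maruyama integration of n globally coupled stochastic
    FitzHugh-Nagumo elements:

        dv_i = (v_i - v_i^3/3 - w_i + I + coupling*(mean(v) - v_i)) dt + noise dW_i
        dw_i = eps*(v_i + a - b*w_i) dt

    Returns the mean-field trace X(t) = mean_i v_i(t).
    """
    rng = np.random.default_rng(seed)
    a, b, eps, I = 0.7, 0.8, 0.08, 0.5      # illustrative oscillatory regime
    v = rng.uniform(-1, 1, n)
    w = rng.uniform(-1, 1, n)
    mean_field = np.empty(steps)
    for t in range(steps):
        X = v.mean()
        dv = (v - v**3 / 3 - w + I + coupling * (X - v)) * dt \
             + noise * np.sqrt(dt) * rng.standard_normal(n)
        dw = eps * (v + a - b * w) * dt
        v, w = v + dv, w + dw
        mean_field[t] = X
    return mean_field

# Large mean-field oscillations indicate synchronised firing:
# uncoupled units drift out of phase, strongly coupled units lock together.
X_weak = simulate_fhn_network(coupling=0.0)
X_strong = simulate_fhn_network(coupling=1.0)
```

Scanning `coupling` (or `noise`) and recording the mean-field variance reproduces, in miniature, the kind of synchronisation-onset curves that the bifurcation analysis of the cumulant equations predicts.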
APA, Harvard, Vancouver, ISO, and other styles
47

Holanda, Priscila Cavalcante. "DHyANA : neuromorphic architecture for liquid computing." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2016. http://hdl.handle.net/10183/169343.

Full text
Abstract:
Neural networks have been a subject of research for at least sixty years. From their effectiveness in processing information to their amazing ability to tolerate faults, there are countless processing mechanisms in the brain that fascinate us. Thereupon, it comes as no surprise that, as enabling technologies have become available, scientists and engineers have raised their efforts to understand, simulate and mimic parts of it. 
In a similar approach to that of the Human Genome Project, the quest for innovative technologies within the field has given birth to billion-dollar projects and global efforts, what some call a global blossom of neuroscience research. Advances in hardware have made the simulation of millions or even billions of neurons possible. However, existing approaches cannot yet provide the ever denser interconnect required by the massive number of neurons and synapses. In this regard, this work proposes DHyANA (Digital HierArchical Neuromorphic Architecture), a new hardware architecture for a spiking neural network using hierarchical network-on-chip communication. The architecture is optimized for Liquid State Machine (LSM) implementations. DHyANA was exhaustively tested in simulation platforms, as well as implemented in an Altera Stratix IV FPGA. Furthermore, a logic synthesis analysis using 65-nm CMOS technology was performed in order to evaluate and better compare the resulting system with similar designs, achieving an area of 0.23 mm² and a power dissipation of 147 mW for a 256-neuron implementation.
APA, Harvard, Vancouver, ISO, and other styles
48

Norton, R. David. "Improving Liquid State Machines Through Iterative Refinement of the Reservoir." Diss., CLICK HERE for online access, 2008. http://contentdm.lib.byu.edu/ETD/image/etd2316.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Manuputty, James David. "Shipbuilding evaluation using neural network based response surface and adaptive optimisation." Thesis, University of Newcastle Upon Tyne, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.366515.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Novak, Martina. "A neural network approach for simulation and forecasting of chaotic time series." Thesis, Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/19087.

Full text
APA, Harvard, Vancouver, ISO, and other styles