Dissertations / Theses on the topic 'Reservoir computing'

Consult the top 50 dissertations / theses for your research on the topic 'Reservoir computing.'


1. Dale, Matthew. "Reservoir Computing in materio." Thesis, University of York, 2018. http://etheses.whiterose.ac.uk/22306/.

Abstract:
Reservoir Computing first emerged as an efficient mechanism for training recurrent neural networks and later evolved into a general theoretical model for dynamical systems. By applying only a simple training mechanism, many physical systems have become exploitable unconventional computers. However, at present, many of these systems require careful selection and tuning by hand to produce usable or optimal reservoir computers. In this thesis we show the first steps to applying the reservoir model as a simple computational layer to extract exploitable information from complex material substrates. We argue that many physical substrates, even systems that in their natural state might not form usable or "good" reservoirs, can be configured into working reservoirs given some stimulation. To achieve this we apply techniques from evolution in materio, whereby configuration is through evolved input-output signal mappings and targeted stimuli. In preliminary experiments the combined model and configuration method is applied to carbon nanotube/polymer composites. The results show substrates can be configured and trained as reservoir computers of varying quality. It is shown that applying the reservoir model adds greater functionality and programmability to physical substrates, without sacrificing performance. Next, the weaknesses of the technique are addressed with the creation of a new high input-output hardware system and an alternative multi-substrate framework. Lastly, a substantial effort is put into characterising the quality of a substrate for reservoir computing, i.e. its ability to realise many reservoirs. From this, a methodological framework is devised. Using the framework, radically different computing substrates are compared and assessed, something previously not possible. As a result, a new understanding of the relationships between substrates, tasks and properties is possible, paving the way for future exploration and optimisation of new computing substrates.
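The reservoir model the thesis applies to material substrates has a simple software analogue: a fixed random recurrent network whose only trained part is a linear readout. The sketch below (plain NumPy; the sizes and the delay-recall task are illustrative choices, not taken from the thesis) shows why training is cheap: it reduces to ridge regression on collected states.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 100, 500                 # reservoir size and sequence length (illustrative)

# Fixed random reservoir, rescaled so the spectral radius is below 1
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, size=N)

u = rng.uniform(-1, 1, size=T)  # input signal
target = np.roll(u, 1)          # toy task: recall the previous input
target[0] = 0.0

# Drive the untrained reservoir and collect its states
x = np.zeros(N)
states = np.empty((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])
    states[t] = x

# Training touches only the linear readout (ridge regression)
ridge = 1e-6
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(N), states.T @ target)
pred = states @ W_out
```

The same split, a fixed dynamical system plus a cheap linear readout, is what lets a physical substrate stand in for the random network.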
2. Kulkarni, Manjari S. "Memristor-based Reservoir Computing." PDXScholar, 2012. https://pdxscholar.library.pdx.edu/open_access_etds/899.

Abstract:
In today's nanoscale era, scaling down to even smaller feature sizes poses a significant challenge in device fabrication and in circuit and system design and integration. On the other hand, nanoscale technology has also led to novel materials and devices with unique properties. The memristor is one such emergent nanoscale device that exhibits non-linear current-voltage characteristics and has an inherent memory property, i.e., its current state depends on the past. Both the non-linearity and the memory property of memristors have the potential to enable solving spatial and temporal pattern recognition tasks in radically different ways from traditional binary transistor-based technology. The goal of this thesis is to explore the use of memristors in a novel computing paradigm called "Reservoir Computing" (RC). RC is a new paradigm that belongs to the class of artificial recurrent neural networks (RNN). However, it differs architecturally from traditional RNN techniques in that the pre-processor (i.e., the reservoir) is made up of random, recurrently connected non-linear elements. Learning is implemented only at the readout (i.e., the output) layer, which reduces the learning complexity significantly. To the best of our knowledge, memristors have never been used as reservoir components. We use pattern recognition and classification tasks as benchmark problems. Real-world applications associated with these tasks include process control, speech recognition, and signal processing. We have built a software framework, RCspice (Reservoir Computing Simulation Program with Integrated Circuit Emphasis), for this purpose. The framework allows us to create random memristor networks, simulate and evaluate them in Ngspice, and train the readout layer by means of Genetic Algorithms (GA). We have explored reservoir-related parameters, such as network connectivity and reservoir size, along with the GA parameters.
Our results show that we are able to efficiently and robustly classify time-series patterns using memristor-based dynamical reservoirs. This presents an important step towards computing with memristor-based nanoscale systems.
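The device property the thesis exploits, a current that depends on the past, can be illustrated with the textbook linear ion-drift memristor model. This is a generic sketch with invented parameter values, not the thesis's Ngspice/RCspice device models.

```python
import numpy as np

# Illustrative HP-style linear ion-drift memristor (parameters are made up
# for demonstration). The internal state w integrates past current, so the
# I-V trace under a sinusoidal drive forms a pinched hysteresis loop.
R_on, R_off = 100.0, 16e3        # limiting resistances (ohms)
k = 1e4                          # lumped mobility term mu * R_on / D^2
dt = 1e-5                        # time step (s)
w = 0.5                          # internal state in [0, 1]: the "memory"

ts = np.arange(0.0, 2e-2, dt)
v = np.sin(2 * np.pi * 50 * ts)  # 50 Hz sinusoidal drive (volts)
i = np.empty_like(ts)
ws = np.empty_like(ts)
for n in range(len(ts)):
    R = R_on * w + R_off * (1.0 - w)            # state-dependent resistance
    i[n] = v[n] / R
    w = min(max(w + k * i[n] * dt, 0.0), 1.0)   # state integrates past current
    ws[n] = w
```

Because the resistance at any instant depends on the accumulated charge, the same applied voltage produces different currents at different times, which is the nonlinearity-with-memory that makes the device usable as a reservoir element.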
3. Tran, Dat Tien. "Memcapacitive Reservoir Computing Architectures." PDXScholar, 2019. https://pdxscholar.library.pdx.edu/open_access_etds/5001.

Abstract:
In this thesis, I propose novel brain-inspired and energy-efficient computing systems. Designing such systems has been a foremost goal of neuromorphic scientists over the last few decades. The results from my research show that it is possible to design such systems with emerging nanoscale memcapacitive devices. Technological development has advanced greatly over the years with the conventional von Neumann architecture. The current architectures and materials, however, will inevitably reach their physical limitations. While conventional computing systems have achieved great performance on general tasks, they are often not power-efficient in performing tasks with large input data, such as natural image recognition and tracking objects in streaming video. Moreover, in the von Neumann architecture, all computations take place in the Central Processing Unit (CPU) and the results are saved in the memory. As a result, information is shuffled back and forth between the memory and the CPU for processing, which creates a bottleneck due to the limited bandwidth of data paths. Adding cache memory and using general-purpose Graphics Processing Units (GPUs) do not completely resolve this bottleneck. Neuromorphic architectures offer an alternative to the conventional architecture by mimicking the functionality of a biological neural network. In a biological neural network, neurons communicate with each other through a large number of dendrites and synapses. Each neuron (a processing unit) locally processes the information that is stored in its input synapses (memory units). Distributing information to neurons and localizing computation at the synapse level alleviate the bottleneck problem and allow a large amount of data to be processed in parallel. Furthermore, biological neural networks are highly adaptable to complex environments, tolerant of system noise and variations, and capable of processing complex information with extremely low power.
Over the past five decades, researchers have proposed various brain-inspired architectures to perform neuromorphic tasks. IBM's TrueNorth is considered the state-of-the-art brain-inspired architecture. It has 10⁶ CMOS neurons with 256 x 256 programmable synapses per core and consumes about 60 nW/neuron. Even though TrueNorth is power-efficient, its number of neurons and synapses is small compared to the human brain, which has 10¹¹ neurons, each with, on average, 7,000 synaptic connections to other neurons, while consuming only 2.3 nW/neuron. The memristor brought neuromorphic computing one step closer to the human brain target. A memristor is a passive nano-device with memory: its resistance changes with applied voltages, and this resistive change is similar to the function of a synapse. Memristors have been a prominent option for designing low-power systems with high area density. In fact, Truong and Min reported that an improved memristor-based crossbar performed a neuromorphic task with a 50% reduction in area and 48% power savings compared to CMOS arrays. However, memristive devices are, by their nature, still resistors, and their power consumption is bounded by their resistance. Here, a memcapacitor offers a promising alternative. My initial work indicated that memcapacitive networks performed complex tasks with performance equivalent to memristive networks, but with much higher energy efficiency. A memcapacitor is also a two-terminal nano-device whose capacitance varies with applied voltages, again similar to the function of a synapse. The memcapacitor is a storage device and does not consume static energy. Its switching energy is also small due to its small capacitance (nF to pF range). As a result, networks of memcapacitors have the potential to perform complex tasks with much higher power efficiency.
Several memcapacitive synaptic models have been proposed as artificial synapses. Pershin and Di Ventra illustrated that a memcapacitor with two diodes has the functionality of a synapse. Flak suggested that a memcapacitor behaves as a synapse when it is connected with three CMOS switches in a Cellular Nanoscale Network (CNN). Li et al. demonstrated that when four identical memcapacitors are connected in a bridge network, they likewise characterize the function of a synapse. Reservoir Computing (RC) has been used to explain higher-order cognitive functions and the interaction of short-term memory with other cognitive processes. Rigotti et al. observed that a dynamic system with short-term memory is essential in defining the internal brain states of a test agent. Although both traditional Recurrent Neural Networks (RNNs) and RC are dynamical systems, RC has a great benefit over RNNs in that its learning process is simple and confined to training the output layer. RC harnesses the computing nature of a random network of nonlinear devices, such as memcapacitors. Appeltant et al. showed that RC with a simplified reservoir structure is sufficient to perform speech recognition: a few nonlinear units connected in a delay feedback loop provide enough dynamic response for RC. Fewer units in the reservoir mean fewer connections and inputs, and therefore lower power consumption. As Goudarzi and Teuscher indicated, RC architectures still have inherent challenges that need to be addressed. First, theoretical studies have shown that both regular and random reservoirs achieve similar performance on particular tasks. A random reservoir, however, is more appropriate for unstructured networks of nanoscale devices. What is the role of network structure in RC for solving a task (Q1)? Secondly, the nonlinear characteristics of nanoscale devices contribute directly to the dynamics of a physical network, which influences the overall performance of an RC system.
To what degree is a mixture of nonlinear devices able to improve the performance of reservoirs (Q2)? Thirdly, modularity, as with CMOS building blocks in digital design, is an essential key to building a complex system from fundamental blocks. Are hierarchical RCs able to solve complex tasks? What network topologies/hierarchies will lead to optimal performance? What is the learning complexity of such a system (Q3)? My research goal is to address the above RC challenges by exploring memcapacitive reservoir architectures. The analysis of memcapacitive monolithic reservoirs addresses both questions Q1 and Q2 above by showing that the Small-World Power-Law (SWPL) structure is an optimal topological structure for RCs performing time-series prediction (NARMA-10), temporal recognition (Isolated Spoken Digits), and a spatial task (MNIST) with minimal power consumption. On average, the SWPL reservoirs significantly reduce power consumption, by factors of 1.21x, 31x, and 31.2x compared to the regular, random, and small-world reservoirs, respectively. Further analysis of SWPL structures underlines that high locality α and low randomness β decrease the cost to the systems in terms of wiring and nanowire dissipated power but do not guarantee optimal reservoir performance. With a genetic algorithm to refine network structure, SWPL reservoirs with optimal network parameters are able to achieve comparable performance with less power. Compared to the regular reservoirs, the SWPL reservoirs consume less power, by factors of 1.3x, 1.4x, and 1.5x. Similarly, compared to the random topology, the SWPL reservoirs reduce power consumption by factors of 4.8x, 1.6x, and 2.1x, respectively. The simulation results of mixed-device reservoirs (memristive and memcapacitive reservoirs) provide evidence that the combination of memristive and memcapacitive devices potentially enhances the nonlinear dynamics of reservoirs in three tasks: NARMA-10, Isolated Spoken Digits, and MNIST.
In addressing the third question (Q3), the kernel quality measurements show that hierarchical reservoirs have better dynamic responses than monolithic reservoirs. The improvement in dynamic response allows hierarchical reservoirs to achieve comparable performance on Isolated Spoken Digit tasks but with less power consumption, by factors of 1.4x, 8.8x, 9.5x, and 6.3x for delay-line, delay-line feedback, simple cycle, and random structures, respectively. Similarly, for the CIFAR-10 image task, hierarchical reservoirs gain higher performance with less power, by factors of 5.6x, 4.2x, 4.8x, and 1.9x. The results suggest that hierarchical reservoirs have better dynamics than monolithic reservoirs for solving sufficiently complex tasks. Although the performance of deep mem-device reservoirs is low compared to state-of-the-art deep Echo State Networks, the initial results demonstrate that deep mem-device reservoirs are able to solve a high-dimensional and complex task such as polyphonic music prediction. The performance of deep mem-device reservoirs can be further improved with better settings of network parameters and architectures. My research illustrates the potential of novel memcapacitive systems with SWPL structures that are brain-inspired and energy-efficient in performing tasks. My research offers novel memcapacitive systems that are applicable to low-power applications, such as mobile devices and the Internet of Things (IoT), and provides an initial design step toward incorporating nano memcapacitive devices into future applications of nanotechnology.
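For reference, the NARMA-10 benchmark used in these evaluations is a tenth-order nonlinear autoregressive series. A common formulation, with inputs drawn uniformly from [0, 0.5], can be generated as follows; note that the recursion occasionally diverges for unlucky input draws, in which case the sequence is typically regenerated.

```python
import numpy as np

def narma10(u):
    """NARMA-10 series: y[t+1] depends on the last 10 outputs and inputs."""
    y = np.zeros(len(u))
    for t in range(9, len(u) - 1):
        y[t + 1] = (0.3 * y[t]
                    + 0.05 * y[t] * y[t - 9:t + 1].sum()
                    + 1.5 * u[t] * u[t - 9]
                    + 0.1)
    return y

rng = np.random.default_rng(1)
u = rng.uniform(0.0, 0.5, 500)  # standard input range for NARMA-10
y = narma10(u)
```

A reservoir is then trained to predict y[t] from the input history, which probes both the nonlinearity and the memory of the system.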
4. Melandri, Luca. "Introduction to Reservoir Computing Methods." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2015. http://amslaurea.unibo.it/8268/.

Abstract:
The document covers the family of methods for training and exploiting recurrent neural networks known as Reservoir Computing. A general introduction to Machine Learning supplies all the tools needed to understand the topic. Implementation details are then given, along with an analysis of the advantages and weaknesses of the various approaches, all supported by code and explanatory figures. The thesis closes with conclusions on the approaches, on what can be improved, and on practical applications.
5. Weddell, Stephen John. "Optical Wavefront Prediction with Reservoir Computing." Thesis, University of Canterbury. Electrical and Computer Engineering, 2010. http://hdl.handle.net/10092/4070.

Abstract:
Over the last four decades there has been considerable research into improving the imaging of exo-atmospheric objects through air turbulence from ground-based instruments. Whilst such research was initially motivated by military purposes, the benefits to the astronomical community have been significant. A key topic in this research is isoplanatism. The isoplanatic angle is the angular limit separating two point-source objects within which independent measurements of wavefront perturbations from each source would yield equivalent wavefront distortion. In classical adaptive optics, perturbations from a point-source reference, such as a bright, natural guide star, are used to partially negate perturbations distorting an image of a fainter, nearby science object. Various techniques, such as atmospheric tomography, maximum a posteriori (MAP) estimation, and parameterised modelling, have been used to estimate wavefront perturbations when the distortion function is spatially variant, i.e., angular separations exceed the isoplanatic angle, θ₀, where θ₀ ≈ 10 μrad for mild distortion at visual wavelengths. However, the effectiveness of such techniques also depends on a priori knowledge of turbulence profiles and configuration data. This dissertation describes a new method for estimating the eigenvalues that comprise wavefront perturbations over a wide spatial field. To reduce dependency on prior knowledge of specific configurations, machine learning is used with a recurrent neural network trained on a posteriori wavefront ensembles from multiple point-source objects. Using a spatiotemporal framework for prediction, the eigenvalues, in terms of Zernike polynomials, are used to reconstruct the spatially-variant point spread function (SVPSF) for image restoration. The overall requirement is to counter the adverse effects of atmospheric turbulence on images of extended astronomical objects.
The method outlined in this thesis combines optical wavefront sensing using multiple natural guide stars with a reservoir-based artificial neural network. The network is used to predict aberrations caused by atmospheric turbulence that degrade the images of faint science objects. A modified geometric wavefront sensor was used to simultaneously measure phase perturbations from multiple point-source reference objects in the pupil. A specialised recurrent neural network (RNN) was used to learn the spatiotemporal effects of phase perturbations measured from several source references. Modal expansions, in terms of Zernike coefficients, were used to build time-series ensembles that defined wavefront maps of point-source reference objects. The ensembles were used first to train an RNN with a spatiotemporal training algorithm; second, new data ensembles presented to the trained RNN were used to estimate the wavefront map of science objects over a wide field. Both simulations and experiments were used to evaluate this method. The results of this study showed that by employing three or more source references over an angular separation of 24 μrad from a target, and given mild turbulence with a Fried coherence length of 20 cm, the normalised mean squared error of low-order Zernike modes could be estimated to within 0.086. A key benefit of estimating phase perturbations from a time series of short-exposure point-spread functions (PSFs) is that the long-exposure PSF can then be determined. Based on the summation of successive, corrected, short-exposure frames, high-resolution images of the science object can be obtained. The method was shown to predict a contiguous series of short-exposure aberrations as a phase screen was moved over a simulated aperture. By quantifying the temporal decorrelation of atmospheric turbulence in terms of Taylor's hypothesis, long-exposure estimates of the PSF were obtained.
6. Sergio, Anderson Tenório. "Otimização de Reservoir Computing com PSO." Universidade Federal de Pernambuco, 2013. https://repositorio.ufpe.br/handle/123456789/11498.

Abstract:
Reservoir Computing (RC) is an Artificial Neural Network paradigm with important real-world applications. RC uses an architecture similar to Recurrent Neural Networks for temporal processing, with the advantage of not needing to train the weights of the hidden layer. In general, the RC concept is based on constructing a recurrent network at random (the reservoir), whose weights are left unchanged. After this phase, a linear regression function is used to train the system output. The nonlinear dynamic transformation offered by the reservoir is sufficient for the output layer to extract the output signals through a simple linear mapping, making training considerably faster. However, like conventional neural networks, Reservoir Computing has some problems. It can be computationally expensive, many parameters influence its efficiency, and it is unlikely that random weight generation combined with training the output layer by a simple linear regression function is the ideal solution for generalizing the data. PSO is an optimization algorithm with some advantages over other global search techniques: it is simple to implement and, in some cases, converges faster at lower computational cost. This dissertation investigated the use of PSO (and two of its extensions, EPUS-PSO and APSO) to optimize the global parameters, architecture, and reservoir weights of an RC system, applied to the problem of time-series forecasting. The results showed that optimizing Reservoir Computing with PSO, as well as with its selected extensions, performed satisfactorily on all the datasets studied, including benchmark time series and wind-energy datasets. The optimization outperformed several works in the literature, establishing itself as an important approach to the time-series forecasting problem.
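The global-best PSO loop applied in the dissertation can be sketched generically. Here it minimizes a toy quadratic; the inertia and acceleration constants are common textbook defaults, not the dissertation's tuned values. Swapping the objective for "reservoir validation error as a function of its global parameters" gives the dissertation's setup.

```python
import numpy as np

def pso(f, dim, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best particle swarm minimizing f over R^dim."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5.0, 5.0, (n, dim))      # particle positions
    vel = np.zeros((n, dim))
    pbest = pos.copy()                          # per-particle best positions
    pbest_val = np.array([f(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()    # swarm-wide best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        # inertia + cognitive pull (own best) + social pull (swarm best)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([f(p) for p in pos])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, float(pbest_val.min())

# Toy objective: sphere centered at (1, 1, 1)
best, val = pso(lambda x: ((x - 1.0) ** 2).sum(), dim=3)
```

For reservoir optimization the position vector would encode quantities such as spectral radius, input scaling, and reservoir size, with the objective evaluated by training and validating a reservoir per candidate.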
7. Appeltant, Lennert. "Reservoir computing based on delay-dynamical systems." Doctoral thesis, Universitat de les Illes Balears, 2012. http://hdl.handle.net/10803/84144.

Abstract:
Today, except for mathematical operations, our brain functions much faster and more efficiently than any supercomputer. It is precisely this form of information processing in neural networks that inspires researchers to create systems that mimic the brain's information processing capabilities. In this thesis we propose a novel approach to implementing these alternative computer architectures, based on delayed feedback. We show that one single nonlinear node with delayed feedback can replace a large network of nonlinear nodes. First we numerically investigate the architecture and performance of delayed feedback systems as information processing units. Then we elaborate on electronic and opto-electronic implementations of the concept. Besides evaluating their performance on standard benchmarks, we also study task-independent properties of the system, extracting information on how to further improve the initial scheme. Finally, some simple modifications are suggested, yielding improvements in terms of speed or performance.
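The core idea, one nonlinear node plus a delay line standing in for a whole network, can be sketched in discrete time: virtual nodes are points along the delay line, each driven by a randomly masked copy of the input. All constants below are illustrative, not the thesis's hardware parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50                                  # virtual nodes along one delay span
mask = rng.choice([-0.5, 0.5], size=N)  # random binary input mask

def delay_reservoir(u, eta=0.5):
    """Single tanh node with delayed feedback, time-multiplexed into N nodes."""
    x = np.zeros(N)                     # contents of the delay line
    states = np.empty((len(u), N))
    for t, ut in enumerate(u):
        for i in range(N):
            # each virtual node mixes its own delayed state, the masked input,
            # and the state of the neighbouring node on the delay line
            # (x[-1] wraps to the last node of the previous delay span)
            x[i] = np.tanh(eta * x[i] + mask[i] * ut + 0.2 * x[i - 1])
        states[t] = x
    return states

u = rng.uniform(-1, 1, 300)
S = delay_reservoir(u)
# a linear readout would be trained on S exactly as in a standard reservoir
```

The mask plays the role of the random input weights of a conventional reservoir, giving each virtual node a distinct response to the same input stream.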
8. Andersson, Casper. "Reservoir Computing Approach for Network Intrusion Detection." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-54983.

Abstract:
Identifying intrusions in computer networks is important for protecting the network. The network is the entry point that attackers use to try to gain access to valuable information from a company or organization, or simply to destroy digital property. Many good methods already exist, but there is always room for improvement. This thesis proposes using reservoir computing as a feature extractor on network traffic data, treated as a time series, to train machine learning models for anomaly detection. The models used in this thesis are a neural network, a support vector machine, and linear discriminant analysis. Performance is measured in terms of detection rate, false alarm rate, and overall accuracy of attack identification on the test data. The results show that the neural network generally improved with the use of a reservoir network, the support vector machine was not strongly affected by the reservoir, and linear discriminant analysis always performed worse. Overall, the temporal aspect of the reservoir did not have a large effect. The performance of my experiments is inferior to that of previous works, but it might improve if a separate feature selection or extraction step is performed first. Extracting a sequence into a single vector and determining whether it contained any attacks worked very well when the sequences contained several attacks, but otherwise not so well.
9. Fu, Kaiwei. "Reservoir Computing with Neuro-memristive Nanowire Networks." Thesis, The University of Sydney, 2021. https://hdl.handle.net/2123/25900.

Abstract:
We present simulation results based on a model of self-assembled nanowire networks with memristive junctions and neural network-like topology. We analyse the dynamical voltage distribution in response to an applied bias and explain the network conductance fluctuations observed in previous experimental studies. We show I-V curves under AC stimulation and compare these to other bulk memristors. We then study the capacity of these nanowire networks for neuro-inspired reservoir computing by demonstrating higher harmonic generation and short- and long-term memory. Benchmark tasks in a reservoir computing framework are implemented, including nonlinear wave transformation, wave auto-generation, and hand-written digit classification.
10. Alomar Barceló, Miquel Lleó. "Methodologies for hardware implementation of reservoir computing systems." Doctoral thesis, Universitat de les Illes Balears, 2017. http://hdl.handle.net/10803/565422.

Abstract:
Inspired by the way the brain processes information, artificial neural networks (ANNs) were created with the aim of reproducing human capabilities in tasks that are hard to solve using classical algorithmic programming. The ANN paradigm has been applied to numerous fields of science and engineering thanks to its ability to learn from examples, its adaptation, parallelism and fault-tolerance. Reservoir computing (RC), based on the use of a random recurrent neural network (RNN) as the processing core, is a powerful model that is highly suited to time-series processing. Hardware realizations of ANNs are crucial to exploit the parallel properties of these models, which favor higher speed and reliability. On the other hand, hardware neural networks (HNNs) may offer appreciable advantages in terms of power consumption and cost. Low-cost compact devices implementing HNNs are useful to support or replace software in real-time applications, such as control, medical monitoring, robotics and sensor networks. However, the hardware realization of ANNs with large neuron counts, such as in RC, is a challenging task due to the large resource requirements of the involved operations. Despite the potential benefits of hardware digital circuits for RC-based neural processing, most implementations are realized in software using sequential processors. In this thesis, I propose and analyze several methodologies for the digital implementation of RC systems using limited hardware resources. The neural network design is described in detail for both a conventional implementation and the diverse alternative approaches. The advantages and shortcomings of the various techniques regarding accuracy, computation speed and required silicon area are discussed. Finally, the proposed approaches are applied to solve different real-life engineering problems.
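A recurring trade-off in such hardware designs is replacing floating point with fixed-point arithmetic to save silicon area. A minimal sketch of the idea follows; the Q4.12 format and the weight values are illustrative assumptions, not the thesis's chosen precision.

```python
import numpy as np

# Fixed-point sketch: values stored as integers scaled by 2^FRAC (here Q4.12,
# an illustrative format). Hardware RC designs trade precision for area this way.
FRAC = 12  # fractional bits

def to_fixed(x):
    """Quantize a float array to Q4.12 integers."""
    return np.round(x * (1 << FRAC)).astype(np.int32)

def fixed_mac(w_fx, x_fx):
    """Integer multiply-accumulate, rescaled back to Q4.12."""
    return (w_fx.astype(np.int64) * x_fx).sum() >> FRAC

w = np.array([0.25, -0.5, 0.125])
x = np.array([0.5, 0.5, 1.0])
acc = fixed_mac(to_fixed(w), to_fixed(x)) / (1 << FRAC)
# float reference: 0.25*0.5 - 0.5*0.5 + 0.125*1.0 = 0.0
```

A reservoir neuron's weighted sum becomes one such integer MAC chain, with the nonlinearity typically replaced by a lookup table or a piecewise-linear approximation.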
APA, Harvard, Vancouver, ISO, and other styles
11

Mohamed, Abdalla Mohab Sameh. "Reservoir computing in lithium niobate on insulator platforms." Electronic Thesis or Diss., Ecully, Ecole centrale de Lyon, 2024. http://www.theses.fr/2024ECDL0051.

Full text
Abstract:
Cette étude concerne le calcul par réservoir à retard temporel, en anglais Time-Delay Reservoir Computing (TDRC) dans les plateformes de photonique intégrée, en particulier la plateforme Lithium Niobate On Insulator (LNOI). Nous proposons une nouvelle architecture intégrée « tout optique », avec seulement un déphaseur comme paramètre modifiable pouvant atteindre de bonnes performances sur plusieurs tâches de référence de calcul par réservoir. Nous étudions également l'espace de conception de cette architecture et le fonctionnement asynchrone du TDRC, qui s'écarte du cadre plus courant consistant à envisager les ordinateurs TDRC comme des réseaux. En outre, nous suggérons d'exploiter le schéma tout optique pour se passer du masque d'entrée, ce qui permet de contourner la conversion Optique/Electronique/Optique (O/E/O), souvent nécessaire pour appliquer le masque dans les architectures TDRC. Dans des travaux futurs, cela pourra permettre le traitement de signaux entrants en temps réel, éventuellement pour des applications de télécommunication de pointe. Les effets de la lecture électronique de sortie sur cette architecture sont également étudiés. Aussi, nous suggérons d'utiliser la corrélation de Pearson comme une métrique nous permettant de concevoir un réservoir capable de traiter plusieurs tâches en même temps sur le même signal entrant (et éventuellement sur des signaux dans des canaux différents). Les premiers travaux expérimentaux menés à l'université RMIT sont également présentés. Par ces travaux, nous voulons étudier la performance de ces nouvelles architectures TDRC tout en minimisant la complexité du matériel photonique. Pour cela on s’appuiera principalement sur les faibles pertes du LNOI qui permettent l'intégration du guide d'onde de rétroaction, et en utilisant uniquement l'interférence et la conversion d'intensité à la sortie (par le biais d'un photodétecteur) en tant que non-linéarité.
Cela constitue une base sur laquelle pourront s’appuyer de futurs travaux étudiant les gains de performance lorsque des non-linéarités supplémentaires sont prises en compte (telles que celles de la plateforme LNOI) et lorsque la complexité globale du système augmente par l'introduction d'un plus grand nombre de paramètres. Ces travaux portent donc sur l'exploration d'une approche informatique non conventionnelle particulière (TDRC), utilisant une technologie particulière (la photonique intégrée), sur une plateforme particulière (LNOI). Ces travaux s'appuient sur l'intérêt croissant pour l'informatique non conventionnelle puisqu'il a été démontré au fil des ans que les ordinateurs numériques ne peuvent plus être une solution unique, en particulier pour les applications émergentes telles que l'intelligence artificielle (IA). Le paysage futur de l'informatique englobera probablement une grande variété de paradigmes informatiques, d'architectures et de hardware, afin de répondre aux besoins d'applications spécialisées croissantes, tout en coexistant avec les ordinateurs numériques qui restent - du moins pour l'instant - mieux adaptés à l'informatique à usage général
This work concerns time-delay reservoir computing (TDRC) in integrated photonic platforms, specifically the Lithium Niobate on Insulator (LNOI) platform. We propose a novel all-optical integrated architecture, which has only one tunable parameter in the form of a phase-shifter, and which can achieve good performance on several reservoir computing benchmark tasks. We also investigate the design space of this architecture and the asynchronous operation, which represents a departure from the more common framework of envisioning time-delay reservoir computers as networks in the stricter sense. Additionally, we suggest leveraging the all-optical scheme to dispense with the input mask, which allows the bypassing of an O/E/O conversion, often necessary to apply the mask in TDRC architectures. In future work, this can allow the processing of real-time incoming signals, possibly for telecom/edge applications. The effects of the output electronic readout on this architecture are also investigated. Furthermore, it is suggested to use the Pearson correlation as a simple way to design a reservoir which can handle multiple tasks at the same time, on the same incoming signal (and possibly on signals in different channels). Initial experimental work carried out at RMIT University is also reported. The unifying theme of this work is to investigate the performance possibilities with minimum photonic hardware requirements, relying mainly on LNOI’s low losses, which enable the integration of the feedback waveguide, and using only interference and subsequent intensity conversion (through a photodetector) as the nonlinearity.
Thus, the scope of this work is about the exploration of one particular unconventional computing approach (reservoir computing), using one particular technology (photonics), on one particular platform (lithium niobate on insulator). This work builds on the increasing interest in unconventional computing, since it has been shown over the years that digital computers can no longer be a `one-size-fits-all', especially for emerging applications like artificial intelligence (AI). The future landscape of computing will likely encompass a rich variety of computing paradigms, architectures, and hardware, to meet the needs of rising specialized applications, and all in coexistence with digital computers which remain --- at least for now --- better suited for general-purpose computing.
APA, Harvard, Vancouver, ISO, and other styles
12

Lukoševičius, Mantas [Verfasser]. "Reservoir Computing and Self-Organized Neural Hierarchies / Mantas Lukoševičius." Bremen : IRC-Library, Information Resource Center der Jacobs University Bremen, 2013. http://d-nb.info/1035433168/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Canaday, Daniel M. "Modeling and Control of Dynamical Systems with Reservoir Computing." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu157469471458874.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Grein, Ederson Augusto. "A parallel computing approach applied to petroleum reservoir simulation." reponame:Repositório Institucional da UFSC, 2015. https://repositorio.ufsc.br/xmlui/handle/123456789/160633.

Full text
Abstract:
Master's dissertation (mestrado) - Universidade Federal de Santa Catarina, Centro Tecnológico, Programa de Pós-Graduação em Engenharia Mecânica, Florianópolis, 2015.
A simulação numérica é uma ferramenta de extrema importância à indústria do petróleo e gás. Entretanto, para que os resultados advindos da simulação sejam fidedignos, é fundamental o emprego de modelos físicos fiéis e de uma boa caracterização geométrica do reservatório. Isso tende a introduzir elevada carga computacional e, consequentemente, a obtenção da solução do modelo numérico correspondente pode demandar um excessivo tempo de simulação. É evidente que a redução desse tempo interessa profundamente à engenharia de reservatórios. Dentre as técnicas de melhoria de performance, uma das mais promissoras é a aplicação da computação paralela. Nessa técnica, a carga computacional é dividida entre diversos processadores. Idealmente, a carga computacional é dividida de maneira igualitária e, assim, se N é o número de processadores, o tempo computacional é N vezes menor. No presente estudo, a computação paralela foi aplicada a dois simuladores numéricos: UTCHEM e EFVLib. UTCHEM é um simulador químico-composicional desenvolvido pela The University of Texas at Austin. A EFVLib, por sua vez, é uma biblioteca desenvolvida pelo laboratório SINMEC, laboratório ligado ao Departamento de Engenharia Mecânica da Universidade Federal de Santa Catarina, cujo intuito é prover suporte à aplicação do Método dos Volumes Finitos Baseado em Elementos. Em ambos os casos a metodologia de paralelização é baseada na decomposição de domínio.

Abstract: Numerical simulation is an extremely relevant tool to the oil and gas industry. It makes it feasible to predict the production scenario in a given reservoir and to design more advantageous exploitation strategies from its results. However, in order to obtain reliable numerical results, it is essential to employ reliable numerical models and an accurate geometrical characterization of the reservoir. This leads to a high computational load, and consequently the solution of the corresponding numerical model may require an exceedingly large simulation time. Evidently, reducing this time is of great interest to reservoir engineering. Among the techniques for boosting performance, parallel computing is one of the most promising. In this technique, the computational load is split across the set of processors. Ideally, the computational load is split in an egalitarian way, such that if N is the number of processors then the computational time is N times smaller. In this study, parallel computing was applied to two distinct numerical simulators: UTCHEM and EFVLib. UTCHEM is a compositional reservoir simulator developed at The University of Texas at Austin. EFVLib, in its turn, is a computational library developed at SINMEC, a laboratory at the Mechanical Engineering Department of the Federal University of Santa Catarina, with the aim of supporting the employment of the Element-based Finite Volume Method. In both cases, the parallelization was based on domain decomposition.
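The domain-decomposition strategy described in this abstract can be illustrated with a toy 1D diffusion problem: split the grid into blocks, advance each block independently, and exchange one ghost cell per interface each step. This is a hedged sketch with invented sizes, and plain array slices stand in for processors; the thesis of course targets far richer element-based finite-volume models running on real parallel hardware.

```python
import numpy as np

# Toy domain decomposition: a 1D grid split into two subdomains that
# each advance an explicit diffusion step and exchange one ghost cell
# with their neighbour, reproducing the single-domain result exactly.

def step(u, alpha=0.25):
    """One explicit diffusion update on the interior of u (ends fixed)."""
    new = u.copy()
    new[1:-1] = u[1:-1] + alpha * (u[2:] - 2 * u[1:-1] + u[:-2])
    return new

n = 64
u = np.zeros(n)
u[n // 2] = 1.0                      # initial spike

# Reference: single-domain solve.
ref = u.copy()
for _ in range(50):
    ref = step(ref)

# Decomposed: each block carries one ghost cell at the cut.
left = u[: n // 2 + 1].copy()        # owns cells 0..31, ghost at 32
right = u[n // 2 - 1 :].copy()       # owns cells 32..63, ghost at 31
for _ in range(50):
    left[-1] = right[1]              # ghost-cell exchange at the interface
    right[0] = left[-2]
    left, right = step(left), step(right)

merged = np.concatenate([left[:-1], right[1:]])
print(np.allclose(merged, ref))      # True: same answer, computed in pieces
```

With the ghost exchange done before every step, the decomposed update performs exactly the same arithmetic as the single-domain one, which is why the two solutions agree to machine precision.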
APA, Harvard, Vancouver, ISO, and other styles
15

Reinhart, Felix [Verfasser]. "Reservoir computing with output feedback / René Felix Reinhart. Technische Fakultät." Bielefeld : Universitätsbibliothek Bielefeld, Hochschulschriften, 2012. http://d-nb.info/1019275367/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Pauwels, Jaël. "High performance optical reservoir computing based on spatially extended systems." Doctoral thesis, Universite Libre de Bruxelles, 2021. https://dipot.ulb.ac.be/dspace/bitstream/2013/331699/3/thesis.pdf.

Full text
Abstract:
In this thesis we study photonic computation within the framework of reservoir computing. Inspired by the insight that the human brain processes information by generating patterns of transient neuronal activity excited by input sensory signals, reservoir computing exploits the transient dynamics of an analogue nonlinear dynamical system to solve tasks that are hard to solve by algorithmic approaches. Harnessing the massive parallelism offered by optics, we consider a generic class of nonlinear dynamical systems which are suitable for reservoir computing and which we label photonic computing liquids. These are spatially extended systems which exhibit dispersive or diffractive signal coupling and nonlinear signal distortion. We demonstrate that a wide range of optical systems meet these requirements and allow for elegant and performant implementations of optical reservoirs. These advances address the limitations of current photonic reservoirs in terms of scalability, ease of implementation and the transition towards truly all-optical computing systems. We start with an abstract presentation of a photonic computing liquid and an in-depth analysis of what makes these kinds of systems function as potent reservoir computers. We then present an experimental study of two photonic reservoir computers, the first based on a diffractive free-space cavity, the second based on a fiber-loop cavity. These systems allow us to validate the promising concept of photonic computing liquids, to investigate the effects of symmetries in the neural interconnectivity and to demonstrate the effectiveness of weak and distributed optical nonlinearities. We also investigate the ability to recover performance lost due to uncontrolled parameter variations in unstable operating environments by introducing an easily scalable way to expand a reservoir’s output layer.
Finally, we show how to exploit random diffraction in a strongly dispersive optical system, including applications in optical telecommunications. In the conclusion we discuss future perspectives and identify the characteristics of the optical systems that we consider most promising for the future of photonic reservoir computing.
Doctorat en Sciences
info:eu-repo/semantics/nonPublished
APA, Harvard, Vancouver, ISO, and other styles
17

Martinenghi, Romain. "Démonstration opto-électronique du concept de calculateur neuromorphique par Reservoir Computing." Thesis, Besançon, 2013. http://www.theses.fr/2013BESA2052/document.

Full text
Abstract:
Le Reservoir Computing (RC) est un paradigme s’inspirant du cerveau humain, apparu récemment au début des années 2000. Il s'agit d'un calculateur neuromorphique habituellement décomposé en trois parties dont la plus importante appelée "réservoir" est très proche d'un réseau de neurones récurrent. Il se démarque des autres réseaux de neurones artificiels notamment grâce aux traditionnelles phases d'apprentissage et d’entraînement qui ne sont plus appliquées sur la totalité du réseau de neurones mais uniquement sur la lecture du réservoir, ce qui simplifie le fonctionnement et facilite une réalisation physique. C'est précisément dans ce contexte qu’ont été réalisés les travaux de recherche de cette thèse, durant laquelle nous avons réalisé une première implémentation physique opto-électronique de système RC. Notre approche des systèmes physiques RC repose sur l'utilisation de dynamiques non-linéaires à retards multiples dans l'objectif de reproduire le comportement complexe d'un réservoir. L'utilisation d'un système dynamique purement temporel pour reproduire la dimension spatio-temporelle d'un réseau de neurones traditionnel nécessite une mise en forme particulière des signaux d'entrée et de sortie, appelée multiplexage temporel ou encore étape de masquage. Trois années auront été nécessaires pour étudier et construire expérimentalement nos démonstrateurs physiques basés sur des dynamiques non-linéaires à retards multiples opto-électroniques, en longueur d'onde et en intensité. La validation expérimentale de nos systèmes RC a été réalisée en utilisant deux tests de calcul standards. Le test NARMA10 (test de prédiction de séries temporelles) et la reconnaissance vocale de chiffres prononcés (test de classification de données) ont permis de quantifier la puissance de calcul de nos systèmes RC et d'atteindre pour certaines configurations l'état de l'art.
Reservoir Computing (RC) is a currently emerging new brain-inspired computational paradigm, which appeared in the early 2000s. It is similar to conventional recurrent neural network (RNN) computing concepts, exhibiting essentially three parts: (i) an input layer to inject the information into the computing system; (ii) a central computational layer called the Reservoir; (iii) and an output layer which extracts the computed result through a so-called Read-Out procedure, the latter being determined after a learning and training step. The main originality compared to RNN consists in the last part, which is the only one concerned by the training step, the input layer and the Reservoir being originally randomly determined and fixed. This specificity brings attractive features to RC compared to RNN, in terms of simplification, efficiency, rapidity, and feasibility of the learning, as well as in terms of dedicated hardware implementation of the RC scheme. This thesis is indeed concerned with one of the first hardware implementations of RC, moreover with an optoelectronic architecture. Our approach to physical RC implementation is based on the use of a special class of complex system for the Reservoir, a nonlinear delay dynamics involving multiple delayed feedback paths. The Reservoir thus appears as a spatio-temporal emulation of a purely temporal dynamics, the delay dynamics. Specific designs of the input and output layers are shown to be possible, e.g. through time division multiplexing techniques, and amplitude modulation for the realization of an input mask to address the virtual nodes in the delay dynamics. Two optoelectronic setups are explored, one involving a wavelength nonlinear dynamics with a tunable laser, and another one involving an intensity nonlinear dynamics with an integrated optics Mach-Zehnder modulator.
Experimental validation of the computational efficiency is performed through two standard benchmark tasks: the NARMA10 test (prediction task), and a spoken digit recognition test (classification task), the latter showing results very close to state-of-the-art performances, even compared with pure numerical simulation approaches.
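The time-multiplexing (masking) step described in this abstract can be caricatured in a few lines. In this hedged sketch, a random binary mask spreads each input sample over the virtual nodes of a single delayed nonlinear node; the coupling used here is a drastic simplification of the real optoelectronic delay dynamics, and all names, sizes and constants are invented for illustration.

```python
import numpy as np

# Hedged sketch of input masking in a time-delay reservoir: each input
# sample is held for one delay period and multiplied by a random mask,
# so one physical nonlinear node emulates N_VIRTUAL spatial neurons.

rng = np.random.default_rng(0)
N_VIRTUAL = 50                                   # virtual nodes per delay loop
mask = rng.choice([-1.0, 1.0], size=N_VIRTUAL)   # random binary input mask

def run_delay_reservoir(u, eta=0.5, gamma=0.3):
    """Drive a simplified nonlinear delay map with a masked input sequence."""
    states = np.zeros((len(u), N_VIRTUAL))
    x = np.zeros(N_VIRTUAL)          # virtual-node responses, previous loop
    for t, sample in enumerate(u):
        x = np.tanh(eta * x + gamma * mask * sample)
        states[t] = x
    return states

u = rng.standard_normal(200)
X = run_delay_reservoir(u)
print(X.shape)                       # one row of virtual-node states per sample
```

The `states` matrix plays the role of the reservoir state trajectory that a subsequent linear readout would be trained on.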
APA, Harvard, Vancouver, ISO, and other styles
18

Vinckier, Quentin. "Analog bio-inspired photonic processors based on the reservoir computing paradigm." Doctoral thesis, Universite Libre de Bruxelles, 2016. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/237069.

Full text
Abstract:
For many challenging problems where the mathematical description is not explicitly defined, artificial intelligence methods appear to be much more robust compared to traditional algorithms. Such methods share the common property of learning from examples in order to “explore” the problem to solve. Then, they generalize these examples to new and unseen input signals. The reservoir computing paradigm is a bio-inspired approach drawn from the theory of artificial Recurrent Neural Networks (RNNs) to process time-dependent data. This machine learning method was proposed independently by several research groups in the early 2000s. It has enabled a breakthrough in analog information processing, with several experiments demonstrating state-of-the-art performance for a wide range of hard nonlinear tasks. These tasks include for instance dynamic pattern classification, grammar modeling, speech recognition, nonlinear channel equalization, detection of epileptic seizures, robot control, time series prediction, brain-machine interfacing, power system monitoring, financial forecasting, or handwriting recognition. A Reservoir Computer (RC) is composed of three different layers. There is first the neural network itself, called “reservoir”, which consists of a large number of internal variables (i.e. reservoir states) all interconnected together to exchange information. The internal dynamics of such a system, driven by a function of the inputs and the former reservoir states, is thus extremely rich. Through an input layer, a time-dependent input signal is applied to all the internal variables to disturb the neural network dynamics. Then, in the output layer, all these reservoir states are processed, often by taking a linear combination thereof at each time-step, to compute the output signal. Let us note that the presence of a non-linearity somewhere in the system is essential to reach high performance computing on nonlinear tasks.
The principal novelty of the reservoir computing paradigm was to propose an RNN where most of the connection weights are generated randomly, except for the weights adjusted to compute the output signal from a linear combination of the reservoir states. In addition, some global parameters can be tuned to get the best performance, depending on the reservoir architecture and on the task. This simple and easy process considerably decreases the training complexity compared to traditional RNNs, for which all the weights needed to be optimized. RC algorithms can be programmed using modern traditional processors. But these electronic processors are better suited to digital processing for which a lot of transistors continuously need to be switched on and off, leading to higher power consumption. As we can intuitively understand, processors with hardware directly dedicated to RC operations – in other words analog bio-inspired processors – could be much more efficient regarding both speed and power consumption. Based on the same idea of high speed and low power consumption, the last few decades have seen an increasing use of coherent optics in the transport of information thanks to its high bandwidth and high power efficiency advantages. In order to address the future challenge of high performance, high speed, and power efficient nontrivial computing, it is thus natural to turn towards optical implementations of RCs using coherent light. Over the last few years, several physical implementations of RCs using optics and (opto)electronics have been successfully demonstrated. In the present PhD thesis, the reservoirs are based on a large coherently driven linear passive fiber cavity. The internal states are encoded by time-multiplexing in the cavity. Each reservoir state is therefore processed sequentially.
This reservoir architecture exhibits many qualities that were either absent or not simultaneously present in previous works: we can perform analog optical signal processing; the easy tunability of each key parameter achieves the best operating point for each task; the system is able to reach a strikingly weak noise floor thanks to the absence of active elements in the reservoir itself; a richer dynamics is provided by operating in coherent light, as the reservoir states are encoded in both the amplitude and the phase of the electromagnetic field; high power efficiency is obtained as a result of the passive nature and simplicity of the setup. However, it is important to note that at this stage we have only obtained low optical power consumption for the reservoir itself. We have not tried to minimize the overall power consumption, including all control electronics. The first experiment reported in chapter 4 uses a quadratic non-linearity on each reservoir state in the output layer. This non-linearity is provided by a readout photodiode since it produces a current proportional to the intensity of the light. On a number of benchmark tasks widely used in the reservoir computing community, the error rates demonstrated with this RC architecture – both in simulation and experimentally – are, to our knowledge, the lowest obtained so far. Furthermore, the analytic model describing our experiment is also of interest, as it constitutes a very simple high performance RC algorithm. The setup reported in chapter 4 requires offline digital post-processing to compute its output signal by summing the weighted reservoir states at each time-step. In chapter 5, we numerically study a realistic model of an optoelectronic “analog readout layer” adapted to the setup presented in chapter 4. This readout layer is based on an RLC low-pass filter acting as an integrator over the weighted reservoir states to autonomously generate the RC output signal.
On three benchmark tasks, we obtained very good simulation results that need to be confirmed experimentally in the future. These promising simulation results pave the way for standalone high performance physical reservoir computers. The RC architecture presented in chapter 5 is an autonomous optoelectronic implementation able to electrically generate its output signal. In order to contribute to the challenge of all-optical computing, chapter 6 highlights the possibility of processing information autonomously and optically using an RC based on two coherently driven passive linear cavities. The first one constitutes the reservoir itself and pumps the second one, which acts as an optical integrator on the weighted reservoir states to optically generate the RC output signal after sampling. A sine non-linearity is implemented on the input signal, whereas both the reservoir and the readout layer are kept linear. Let us note that, because the non-linearity in this system is provided by a Mach-Zehnder modulator on the input signal, the input signal of this RC configuration needs to be an electrical signal. On the contrary, the RC implementation presented in chapter 5 processes optical input signals, but its output is electrical. We obtained very good simulation results on a single task and promising experimental results on two tasks. At the end of this chapter, interesting perspectives are pointed out to improve the performance of this challenging experiment. This system constitutes the first autonomous photonic RC able to optically generate its output signal.
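The readout principle running through this abstract, a trained weighted sum over the reservoir states at each time step, can be sketched numerically. The toy below uses a generic tanh reservoir (not the passive fiber cavity of the thesis) on a 3-step delayed-recall task, with ridge regression training only the output weights; all sizes and constants are invented assumptions.

```python
import numpy as np

# Hedged sketch of the RC readout: harvest reservoir states driven by a
# random input, then train only a linear readout (ridge regression) to
# reproduce the input delayed by 3 steps.

rng = np.random.default_rng(7)
N, T, DELAY = 80, 1000, 3
W = rng.standard_normal((N, N))
W *= 0.8 / max(abs(np.linalg.eigvals(W)))   # contractive: fading memory
W_in = rng.uniform(-1.0, 1.0, N)

u = rng.uniform(-0.5, 0.5, T)               # random input sequence
X = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])        # reservoir state update
    X[t] = x

target = np.roll(u, DELAY)                  # task: output u[t - 3]
Xk, yk = X[50:], target[50:]                # discard the initial transient

# Ridge regression on the readout weights, the only trained stage.
w = np.linalg.solve(Xk.T @ Xk + 1e-6 * np.eye(N), Xk.T @ yk)
nmse = np.mean((Xk @ w - yk) ** 2) / np.var(yk)
print(round(float(nmse), 4))
```

The low normalized error illustrates the fading-memory property that tasks such as NARMA10 also rely on: the reservoir keeps a usable trace of recent inputs that a purely linear readout can tap.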
Doctorat en Sciences de l'ingénieur et technologie
info:eu-repo/semantics/nonPublished
APA, Harvard, Vancouver, ISO, and other styles
19

Alalshekmubarak, Abdulrahman. "Towards a robust Arabic speech recognition system based on reservoir computing." Thesis, University of Stirling, 2014. http://hdl.handle.net/1893/21733.

Full text
Abstract:
In this thesis we investigate the potential of developing a speech recognition system based on a recently introduced artificial neural network (ANN) technique, namely Reservoir Computing (RC). This technique has, in theory, a higher capability for modelling dynamic behaviour compared to feed-forward ANNs due to the recurrent connections between the nodes in the reservoir layer, which serves as a memory. We conduct this study on the Arabic language, (one of the most spoken languages in the world and the official language in 26 countries), because there is a serious gap in the literature on speech recognition systems for Arabic, making the potential impact high. The investigation covers a variety of tasks, including the implementation of the first reservoir-based Arabic speech recognition system. In addition, a thorough evaluation of the developed system is conducted including several comparisons to other state-of-the-art models found in the literature, and baseline models. The impact of feature extraction methods is studied in this work, and a new biologically inspired feature extraction technique, namely the Auditory Nerve feature, is applied to the speech recognition domain. Comparing different feature extraction methods requires access to the original recorded sound, which is not possible in the only publicly accessible Arabic corpus. We have developed the largest public Arabic corpus for isolated words, which contains roughly 10,000 samples. Our investigation has led us to develop two novel approaches based on reservoir computing, ESNSVMs (Echo State Networks with Support Vector Machines) and ESNEKMs (Echo State Networks with Extreme Kernel Machines). These aim to improve the performance of the conventional RC approach by proposing different readout architectures. These two approaches have been compared to the conventional RC approach and other state-of-the-art systems.
Finally, these developed approaches have been evaluated in the presence of different types and levels of noise to examine their resilience to noise, which is crucial for real-world applications.
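The conventional RC pipeline that this thesis builds on (fixed random reservoir, trained readout) can be sketched as follows. This is a hedged illustration: ridge regression stands in for the SVM and kernel readouts (ESNSVMs/ESNEKMs) that the thesis proposes, the speech corpus is replaced by synthetic tone-versus-chirp signals, and all sizes are invented assumptions.

```python
import numpy as np

# Hedged sketch of an echo state network classifier: fixed random
# reservoir, time-averaged states as features, trained linear readout.

rng = np.random.default_rng(1)
N = 100
W_in = rng.uniform(-0.5, 0.5, size=N)
W = rng.standard_normal((N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # spectral radius < 1

def esn_features(sig):
    """Average the reservoir state over the signal: one vector per signal."""
    x = np.zeros(N)
    acc = np.zeros(N)
    for s in sig:
        x = np.tanh(W @ x + W_in * s)
        acc += x
    return acc / len(sig)

t = np.linspace(0.0, 1.0, 100)
signals, labels = [], []
for _ in range(40):
    f = rng.uniform(3.0, 5.0)
    signals.append(np.sin(2 * np.pi * f * t))          # class 0: pure tone
    labels.append(0)
    signals.append(np.sin(2 * np.pi * f * 8 * t * t))  # class 1: chirp
    labels.append(1)

X = np.array([esn_features(s) for s in signals])
y = np.array(labels)

# Ridge readout: only these weights are trained.
w = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ (2 * y - 1))
pred = (X @ w > 0).astype(int)
print((pred == y).mean())
```

Replacing the ridge step with an SVM or kernel machine trained on the same feature matrix `X` is, in spirit, the readout-architecture change the ESNSVM/ESNEKM approaches explore.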
APA, Harvard, Vancouver, ISO, and other styles
20

Bai, Kang Jun. "Moving Toward Intelligence: A Hybrid Neural Computing Architecture for Machine Intelligence Applications." Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/103711.

Full text
Abstract:
Rapid advances in machine learning have made information analysis more efficient than ever before. However, to extract valuable information from trillions of bytes of data for learning and decision-making, general-purpose computing systems or cloud infrastructures are often deployed to train a large-scale neural network, resulting in a colossal amount of resources in use while also exposing significant security issues. Among potential approaches, the neuromorphic architecture, which is not only amenable to low-cost implementation, but can also be deployed with an in-memory computing strategy, has been recognized as an important method to accelerate machine intelligence applications. In this dissertation, theoretical and practical properties of a hybrid neural computing architecture are introduced, which utilizes a dynamic reservoir with short-term memory to enable historical learning capability with the potential to classify non-separable functions. The hybrid neural computing architecture integrates both spatial and temporal processing structures, sidestepping the limitations introduced by the vanishing gradient. To be specific, this is made possible through four critical features: (i) a feature extractor built based upon the in-memory computing strategy, (ii) a high-dimensional mapping with the Mackey-Glass neural activation, (iii) a delay-dynamic system with historical learning capability, and (iv) a unique learning mechanism by only updating readout weights. To support the integration of neuromorphic architecture and deep learning strategies, the first generation of the delay-feedback reservoir network was successfully fabricated in 2017; better yet, the spatial-temporal hybrid neural network with an improved delay-feedback reservoir network was successfully fabricated in 2020.
To demonstrate the effectiveness and performance across diverse machine intelligence applications, the introduced network structures are evaluated through (i) time series prediction, (ii) image classification, (iii) speech recognition, (iv) modulation symbol detection, (v) radio fingerprint identification, and (vi) clinical disease identification.
Doctor of Philosophy
Deep learning strategies are the cutting edge of artificial intelligence, in which artificial neural networks are trained to extract key features or find similarities from raw sensory information. This is made possible through multiple processing layers with a colossal number of neurons, in a similar way to humans. Deep learning strategies running on von Neumann computers are deployed worldwide. However, in today's data-driven society, the use of general-purpose computing systems and cloud infrastructures can no longer offer a timely response while also exposing significant security issues. With the rise of the neuromorphic architecture, application-specific integrated circuit chips have paved the way for machine intelligence applications in recent years. The major contributions in this dissertation include designing and fabricating a new class of hybrid neural computing architecture and applying various deep learning strategies to diverse machine intelligence applications. The resulting hybrid neural computing architecture offers an alternative solution to accelerate the neural computations required for sophisticated machine intelligence applications with a simple system-level design, and therefore opens the door to low-power system-on-chip design for future intelligence computing; what is more, it provides prominent design solutions and performance improvements for internet-of-things applications.
APA, Harvard, Vancouver, ISO, and other styles
21

Dai, Jing. "Reservoir-computing-based, biologically inspired artificial neural networks and their applications in power systems." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/47646.

Full text
Abstract:
Computational intelligence techniques, such as artificial neural networks (ANNs), have been widely used to improve the performance of power system monitoring and control. Although inspired by the neurons in the brain, ANNs are largely different from living neuron networks (LNNs) in many aspects. Due to the oversimplification, the huge computational potential of LNNs cannot be realized by ANNs. Therefore, a more brain-like artificial neural network is highly desired to bridge the gap between ANNs and LNNs. The focus of this research is to develop a biologically inspired artificial neural network (BIANN), which is not only biologically meaningful, but also computationally powerful. The BIANN can serve as a novel computational intelligence tool in monitoring, modeling and control of the power systems. A comprehensive survey of ANNs applications in power system is presented. It is shown that novel types of reservoir-computing-based ANNs, such as echo state networks (ESNs) and liquid state machines (LSMs), have stronger modeling capability than conventional ANNs. The feasibility of using ESNs as modeling and control tools is further investigated in two specific power system applications, namely, power system nonlinear load modeling for true load harmonic prediction and the closed-loop control of active filters for power quality assessment and enhancement. It is shown that in both applications, ESNs are capable of providing satisfactory performances with low computational requirements. A novel, more brain-like artificial neural network, i.e. biologically inspired artificial neural network (BIANN), is proposed in this dissertation to bridge the gap between ANNs and LNNs and provide a novel tool for monitoring and control in power systems. A comprehensive survey of the spiking models of living neurons as well as the coding approaches is presented to review the state-of-the-art in BIANN research. 
The proposed BIANNs are based on spiking models of living neurons with the adoption of reservoir-computing approaches. It is shown that the proposed BIANNs have strong modeling capability and low computational requirements, which makes them a perfect candidate for online monitoring and control applications in power systems. BIANN-based modeling and control techniques are also proposed for power system applications. The proposed modeling and control schemes are validated for the modeling and control of a generator in a single-machine infinite-bus system under various operating conditions and disturbances. It is shown that the proposed BIANN-based technique can provide better control of the power system to enhance its reliability and tolerance to disturbances. To sum up, the proposed BIANN bridges the gap between ANNs and LNNs and provides a novel tool for monitoring and control in power systems. It is clearly shown that the proposed BIANN-based modeling and control schemes can provide faster and more accurate control for power system applications. The conclusions, recommendations for future research, and major contributions of this research are presented at the end.
APA, Harvard, Vancouver, ISO, and other styles
22

Enel, Pierre. "Représentation dynamique dans le cortex préfrontal : comparaison entre reservoir computing et neurophysiologie du primate." Phd thesis, Université Claude Bernard - Lyon I, 2014. http://tel.archives-ouvertes.fr/tel-01056696.

Full text
Abstract:
Primates must be able to recognize new situations in order to adapt to them. How such situations are represented in cortical activity is the subject of this thesis. Complex situations are often explained by the interaction between sensory, internal, and motor information. Unit activities termed mixed selectivity, which are widespread in the prefrontal cortex (PFC), are a possible mechanism for representing arbitrary interactions between pieces of information. In parallel, Reservoir Computing has shown that recurrent networks have the property of recombining current and past inputs in a higher-dimensional space, thereby providing a potentially universal pre-coding of combinations that can then be selected and used according to their relevance to the current task. Combining these two approaches, we argue that the highly recurrent nature of local PFC connectivity gives rise to a dynamic form of mixed selectivity. Moreover, we attempt to show that a simple linear regression, implementable by a single neuron, can extract any information or contingency encoded in these complex, dynamic combinations. Finally, the preceding inputs to these PFC networks, whether sensory or motor, must be maintained in order to influence ongoing processing. We argue that representations of the contexts defined by these preceding inputs must be expressed explicitly and fed back to local PFC networks in order to influence the current combinations underlying the representation of contingencies
APA, Harvard, Vancouver, ISO, and other styles
23

Denis-Le, Coarer Florian. "Neuromorphic computing using nonlinear ring resonators on a Silicon photonic chip." Electronic Thesis or Diss., CentraleSupélec, 2020. http://www.theses.fr/2020CSUP0001.

Full text
Abstract:
With the exponential volumes of digital data generated every day, there is a need for real-time, energy-efficient data processing. These challenges have motivated research on unconventional information processing. Among existing techniques, machine learning is a very effective paradigm of cognitive computing. Through many implementations, including artificial neural networks, it provides a set of techniques for teaching a computer or a physical system to perform complex tasks such as classification, pattern recognition or signal generation. Reservoir computing was proposed about ten years ago to simplify the training procedure of artificial neural networks: the network is kept fixed, and only the connections feeding the readout (output) layer are trained by a simple linear regression. The internal architecture of a reservoir computer lends itself to physical implementations, and several have been proposed on different technological platforms, including photonic devices. On-chip reservoir computing is a very promising candidate to meet these challenges. The objective of this thesis work was to propose three different integrated reservoir architectures based on micro-ring resonators. We numerically studied their performance and demonstrated data processing speeds of up to several tens of gigabits per second with an energy consumption of a few milliwatts
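Several abstracts in this listing describe reservoir computing's defining trait: the recurrent network is left fixed and only a linear readout is trained by regression. A minimal echo state network sketch of that training scheme, where all sizes, scalings and the toy one-step-ahead task are illustrative assumptions, not any thesis's actual setup:

```python
import numpy as np

# Fixed random reservoir; only the linear readout below is fitted.
rng = np.random.default_rng(0)
n_in, n_res = 1, 100

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # rescale spectral radius (a common choice)

def run_reservoir(u):
    """Collect reservoir states for a 1-D input sequence u."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Train the readout alone, by ridge regression, on one-step-ahead prediction.
u = np.sin(0.2 * np.arange(501))
X, y = run_reservoir(u[:-1]), u[1:]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)

pred = X @ W_out
print(np.mean((pred - y) ** 2))  # small training error
```

No reservoir weight is ever updated; this is what makes the approach attractive for physical substrates, where the "network" can be a material, an optical cavity or a MEMS array and only the readout lives in software.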
APA, Harvard, Vancouver, ISO, and other styles
24

Masominia, Amir Hossein. "Neuro-inspired computing with excitable microlasers." Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASP053.

Full text
Abstract:
This thesis presents research on alternative computing systems, with a focus on analog and neuromimetic computing. The pursuit of more general artificial intelligence has underscored limitations of conventional computing units based on Von Neumann architectures, particularly regarding energy efficiency and complexity. Brain-inspired computing architectures and analog computers are key contenders in this field. Among the various proposed approaches, photonic spiking systems offer significant advantages in processing and communication speed, as well as potential energy efficiency. We propose a novel approach to classification and image recognition tasks using an in-house developed micropillar laser as the artificial neuron. The nonlinearity of the excitable micropillar laser, resulting from its internal dynamics, maps incoming information, optically injected into the micropillar through its gain, into higher dimensions, making it possible to find linearly separable regions for classification. The micropillar laser exhibits all the fundamental properties of a biological neuron, including excitability, a refractory period and a summation effect, with sub-nanosecond characteristic timescales. This makes it a strong candidate for spiking systems in which the dynamics of the spike itself carries information, as opposed to systems that consider spiking rates only. We designed and studied several systems using the micropillar laser, based on a reservoir computer with a single physical node that emulates a reservoir computer with several nodes, exploiting different dynamical regimes of the microlaser. These systems achieved higher classification accuracy than systems without the micropillar. Additionally, we introduce a novel system inspired by receptive fields in the visual cortex, capable of classifying a digit dataset entirely online, eliminating the need for a conventional computer in the process. This system was successfully implemented experimentally using a combined fiber and free-space optical setup, opening promising prospects for ultra-fast, hardware-based feature selection and classification systems
APA, Harvard, Vancouver, ISO, and other styles
25

FERREIRA, Aida Araújo. "Um Método para Design e Treinamento de Reservoir Computing Aplicado à Previsão de Séries Temporais." Universidade Federal de Pernambuco, 2011. https://repositorio.ufpe.br/handle/123456789/1786.

Full text
Abstract:
Instituto Federal de Educação, Ciência e Tecnologia de Pernambuco
Reservoir Computing is a type of recurrent neural network that allows black-box modeling of (nonlinear) dynamical systems. In contrast to other recurrent neural network approaches, in Reservoir Computing there is no need to train the input-layer weights or the internal weights of the network (the reservoir); only the output-layer (readout) weights are trained. However, the network's parameters and topology must be tuned to create an optimal reservoir suited to a given application. In this work, a method called RCDESIGN was created to find the best reservoir for time series forecasting. The method combines an evolutionary algorithm with Reservoir Computing and searches simultaneously for the best parameter values, network topology and weights, without rescaling the reservoir weight matrix by its spectral radius. The idea of adjusting the spectral radius to lie within the unit circle in the complex plane comes from linear systems theory, which clearly shows that stability is necessary for obtaining useful responses in linear systems. However, this argument does not necessarily apply to nonlinear systems, which is the case of Reservoir Computing. The method also considers Reservoir Computing in its full nonlinearity, since it allows the use of all possible connections rather than only the mandatory ones. The results obtained with the proposed method are compared with two other methods. The first, called here RS Search, uses a genetic algorithm to optimize the main Reservoir Computing parameters: reservoir size, spectral radius and connection density. The second, called here TR Search, uses a genetic algorithm to optimize the topology and weights of Reservoir Computing based on the spectral radius. 
Seven classical time series were used for the academic validation of the proposed method on the time series forecasting task. A case study was developed to verify the suitability of the proposed method for the problem of forecasting hourly wind speed in the northeast region of Brazil. Wind generation is one of the renewable energy sources with the lowest production cost and the greatest amount of available resources. Thus, the use of efficient models for forecasting wind speed and wind power generation can reduce the operational difficulties of an electrical system composed of traditional energy sources and wind power
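As a simplified stand-in for the kind of search this abstract describes, the sketch below scores candidate reservoirs on a one-step forecasting task using plain random search, whereas RCDESIGN uses a full evolutionary algorithm over parameters, topology and weights (like RCDESIGN, the sketch applies no spectral-radius rescaling). All ranges, sizes and the fitness task are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
# Toy target series for the fitness evaluation.
u = np.sin(0.15 * np.arange(401)) + 0.1 * np.sin(0.5 * np.arange(401))

def fitness(n_res, density, scale):
    """One-step-ahead forecasting error of a candidate reservoir."""
    # Sparse random reservoir; no spectral-radius normalization applied.
    W = rng.normal(0, scale, (n_res, n_res)) * (rng.random((n_res, n_res)) < density)
    w_in = rng.uniform(-scale, scale, n_res)
    x, states = np.zeros(n_res), []
    for u_t in u[:-1]:
        x = np.tanh(W @ x + w_in * u_t)
        states.append(x.copy())
    X, y = np.array(states), u[1:]
    w_out = np.linalg.lstsq(X, y, rcond=None)[0]   # linear readout
    return np.mean((X @ w_out - y) ** 2)

# Random search over (reservoir size, connection density, weight scale).
candidates = [(rng.integers(20, 80), rng.uniform(0.1, 1.0), rng.uniform(0.1, 1.5))
              for _ in range(10)]
best = min(candidates, key=lambda c: fitness(*c))
print(best)  # (size, density, scale) of the best-scoring candidate
```

A genetic algorithm would replace the independent draws above with selection, crossover and mutation over the same genome, and RCDESIGN additionally evolves the individual weights and topology.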
APA, Harvard, Vancouver, ISO, and other styles
26

Penkovsky, Bogdan. "Theory and modeling of complex nonlinear delay dynamics applied to neuromorphic computing." Thesis, Bourgogne Franche-Comté, 2017. http://www.theses.fr/2017UBFCD059/document.

Full text
Abstract:
This thesis develops a novel approach to the design of a reservoir computer, one of the challenges of modern science and technology. It consists of two parts, both built on the correspondence between optoelectronic delayed-feedback systems and spatio-temporal nonlinear dynamics. In the first part (Chapters 1 and 2), this correspondence is used from a fundamental perspective to study the self-organized patterns known as chimera states, observed here for the first time in purely temporal systems. The study of chimera states may shed light on mechanisms at work in many structurally similar high-dimensional systems such as neural systems or power grids. In the second part (Chapters 3 and 4), the same spatio-temporal analogy is exploited from an applied perspective to design and implement a brain-inspired information processing device: a real-time digital reservoir computer built in FPGA hardware. The implementation uses delay dynamics and realizes the input and output layers needed for an autonomous cognitive computing system
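The delay-dynamics idea behind this kind of implementation can be caricatured in software: a single nonlinear node plus a delay line is time-multiplexed into N "virtual" nodes. This is a simplified sketch of the principle, not the FPGA design of the thesis; the mask values, gains and node count are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50                               # virtual nodes along one delay loop
mask = rng.choice([-0.1, 0.1], N)    # input mask spreading u(t) over the loop

def delay_reservoir(u, eta=0.5, gamma=0.05):
    """Return the virtual-node states for each input sample."""
    x = np.zeros(N)                  # states stored along the delay line
    states = []
    for u_t in u:
        new_x = np.empty(N)
        for i in range(N):
            # each virtual node is driven by its own state one loop earlier
            # plus the masked input; tanh stands in for the hardware nonlinearity
            new_x[i] = np.tanh(eta * x[i] + gamma * mask[i] * u_t)
        x = new_x
        states.append(x.copy())
    return np.array(states)

states = delay_reservoir(np.sin(0.3 * np.arange(200)))
print(states.shape)  # (200, 50)
```

The appeal of the scheme is that one physical nonlinearity and one delay line replace N separate nodes, which is what makes single-node optoelectronic and FPGA reservoirs practical.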
APA, Harvard, Vancouver, ISO, and other styles
27

Mejaouri, Salim. "Conception et fabrication de micro-résonateurs pour la réalisation d'une puce neuromorphique." Mémoire, Université de Sherbrooke, 2018. http://hdl.handle.net/11143/11930.

Full text
Abstract:
With transistor miniaturization reaching its limits, alternative technologies capable of processing data are now widely studied. In this context, we are developing a mechanical neural network architecture able to efficiently solve non-trivial problems such as classification or the prediction of chaotic functions. This architecture is inspired by work on recurrent neural networks (RNNs), and more specifically by reservoir computing. The device is a network of anharmonic MEMS oscillators, allowing it to be compact and energy-efficient. Doubly clamped silicon beams were chosen to realize the device, as they have been widely studied and are simple to implement. We present here the experimental work on the nonlinear MEMS that will later be used to build the device. Numerical simulations of the network first identified the requirements on the resonators' dynamics. The resonators were then designed to meet these requirements as closely as possible. An efficient mechanical coupling was developed to connect the oscillators. Finite-element analyses were carried out to accurately predict the behavior of the coupled resonators in the linear and nonlinear regimes. A fast and simple microfabrication process was developed. Finally, the structures were characterized optically and electrically. The experimental results agree with the simulations, which suggests that our approach is suitable for the design and fabrication of a neuromorphic device.
APA, Harvard, Vancouver, ISO, and other styles
28

Butcher, John B. "Reservoir Computing with high non-linear separation and long-term memory for time-series data analysis." Thesis, Keele University, 2012. http://eprints.keele.ac.uk/1185/.

Full text
Abstract:
Left unchecked, the degradation of reinforced concrete can weaken a structure and lead to both hazardous and costly problems throughout the built environment. In some cases, failure to recognise the problem and apply appropriate remedies has already resulted in fatalities. The problem grows with the age of a structure and has consequently become more pressing throughout the latter half of the 20th century. It is therefore of paramount importance to assess and repair these structures using an accurate and cost-effective approach. ElectroMagnetic Anomaly Detection (EMAD) is one such approach, where analysis is currently performed visually, which is undesirable. A relatively new Recurrent Artificial Neural Network (RANN) approach which overcomes problems that have prohibited the widespread use of RANNs, Reservoir Computing (RC), is investigated here. This research aimed to automate the detection of defects within reinforced concrete using RC while gaining further insights into fundamental properties of an RC architecture when applied to real-world time-series datasets. As a product of these studies a novel RC architecture, Reservoir with Random Static Projections (R2SP), has been developed. R2SP helps to address what this research shows to be an antagonistic trade-off between a standard RC architecture's ability to transform its input data into a highly non-linear state space and its ability to retain a short-term memory of previous inputs. The R2SP architecture provided a significant improvement in performance for each dataset investigated when compared to a standard RC approach, as a result of overcoming the aforementioned trade-off. The implementation of an R2SP architecture is now planned to be incorporated into a new version of the EMAD data collection apparatus to give fast or near-real-time information about areas of potential problems in real-world concrete structures.
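The trade-off described in this abstract motivates combining a memory-bearing dynamic reservoir with memoryless random static projections, so the readout sees both. A rough sketch of that combination, assuming illustrative sizes, gains and a simple concatenation scheme rather than the thesis's exact R2SP formulation:

```python
import numpy as np

rng = np.random.default_rng(3)
n_res, n_static = 80, 40

W = rng.normal(0, 1, (n_res, n_res))
W *= 0.95 / np.max(np.abs(np.linalg.eigvals(W)))   # long-ish memory
w_in = rng.uniform(-0.1, 0.1, n_res)               # weak input drive preserves memory
R = rng.uniform(-2, 2, n_static)                   # strong static projection adds nonlinearity

def r2sp_states(u):
    """Concatenate dynamic reservoir states with static input projections."""
    x = np.zeros(n_res)
    out = []
    for u_t in u:
        x = np.tanh(W @ x + w_in * u_t)            # dynamic, memory-bearing part
        s = np.tanh(R * u_t)                       # static, memoryless, highly nonlinear part
        out.append(np.concatenate([x, s]))         # the readout sees both
    return np.array(out)

states = r2sp_states(np.sin(0.1 * np.arange(100)))
print(states.shape)  # (100, 120)
```

Decoupling the two roles lets each part be tuned independently: the reservoir for memory depth, the static layer for non-linear separation.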
APA, Harvard, Vancouver, ISO, and other styles
29

Almassian, Amin. "Information Representation and Computation of Spike Trains in Reservoir Computing Systems with Spiking Neurons and Analog Neurons." PDXScholar, 2016. http://pdxscholar.library.pdx.edu/open_access_etds/2724.

Full text
Abstract:
Real-time processing of space-and-time-variant signals is imperative for perception and real-world problem-solving. In the brain, spatio-temporal stimuli are converted into spike trains by sensory neurons and projected to the neurons in subcortical and cortical layers for further processing. Reservoir Computing (RC) is a neural computation paradigm that is inspired by cortical Neural Networks (NN). It is promising for real-time, on-line computation of spatio-temporal signals. An RC system incorporates a Recurrent Neural Network (RNN) called the reservoir, the state of which is changed by a trajectory of perturbations caused by a spatio-temporal input sequence. A trained, non-recurrent, linear readout layer interprets the dynamics of the reservoir over time. Echo-State Network (ESN) [1] and Liquid-State Machine (LSM) [2] are two popular and canonical types of RC system. The former uses non-spiking analog sigmoidal neurons – and, more recently, Leaky Integrator (LI) neurons – and a normalized random connectivity matrix in the reservoir, whereas the reservoir in the latter is composed of Leaky Integrate-and-Fire (LIF) neurons, distributed in a 3-D space, which are connected with dynamic synapses through a probability function. The major difference between analog neurons and spiking neurons lies in their neuron model dynamics and their inter-neuron communication mechanism. However, RC systems share a mysterious common property: they exhibit the best performance when reservoir dynamics undergo a criticality [1–6] – governed by the reservoirs' connectivity parameters, |λmax| ≈ 1 in ESN, λ ≈ 2 and w in LSM – which is referred to as the edge of chaos in [3–5]. In this study, we are interested in exploring the possible reasons for this commonality, despite the differences imposed by the different neuron types on the reservoir dynamics. We address this concern from the perspective of information representation in both spiking and non-spiking reservoirs. 
We measure the Mutual Information (MI) between the state of the reservoir and a spatio-temporal spike-train input, as well as that between the reservoir and a linearly inseparable function of the input, temporal parity. In addition, we derive a Mean Cumulative Mutual Information (MCMI) quantity from MI to measure the amount of stable memory in the reservoir and its correlation with temporal parity task performance. We complement our investigation by conducting isolated spoken-digit recognition and spoken-digit sequence-recognition tasks. We hypothesize that a performance analysis of these two tasks will agree with our MI and MCMI results with regard to the impact of stable memory on task performance. It turns out that, in all reservoir types and in all the tasks conducted, reservoir performance peaks when the amount of stable memory in the reservoir is maximized. Likewise, in the chaotic regime (when the network connectivity parameter is greater than a critical value), the absence of stable memory in the reservoir seems to be an evident cause of performance decrease in all conducted tasks. Our results also show that the reservoir with LIF neurons possesses a higher stable memory of the input (quantified by input-reservoir MCMI) and outperforms the reservoirs with analog sigmoidal and LI neurons in processing the temporal parity and spoken-digit recognition tasks. From an efficiency standpoint, the reservoir with 100 LIF neurons outperforms the reservoir with 500 LI neurons in spoken-digit recognition tasks. The sigmoidal reservoir falls short of solving this task. The optimum input-reservoir MCMIs and output-reservoir MCMIs we obtained for the reservoirs with LIF, LI, and sigmoidal neurons are 4.21, 3.79, and 3.71, and 2.92, 2.51, and 2.47, respectively. In our isolated spoken-digit recognition experiments, the maximum mean performance achieved by the reservoirs with N = 500 LIF, LI, and sigmoidal neurons is 97%, 79%, and 2%, respectively. 
The reservoirs with N = 100 neurons could solve the task with 80%, 68%, and 0.9% accuracy, respectively. Our study sheds light on the impact of the reservoir's information representation and memory on the performance of RC systems. The results of our experiments reveal the advantage of using LIF neurons in RC systems for computing spike trains to solve memory-demanding, real-world, spatio-temporal problems. Our findings have applications in engineering nano-electronic RC systems that can be used to solve real-world spatio-temporal problems.
APA, Harvard, Vancouver, ISO, and other styles
30

Cazin, Nicolas. "A replay driven model of spatial sequence learning in the hippocampus-prefrontal cortex network using reservoir computing." Thesis, Lyon, 2018. http://www.theses.fr/2018LYSE1133/document.

Full text
Abstract:
As rats learn to search for multiple sources of food or water in a complex environment, processes of spatial sequence learning and recall take place in the hippocampus (HC) and prefrontal cortex (PFC). Recent studies (De Jong et al. 2011; Carr, Jadhav, and Frank 2011) show that spatial navigation in the rat hippocampus involves the replay of place-cell firing during awake and sleep states, generating small contiguous subsequences of spatially related place-cell activations that we will call "snippets". These snippets occur primarily during sharp-wave-ripple (SPWR) events. Much attention has been paid to replay during sleep in the context of long-term memory consolidation. Here we focus on the role of replay during the awake state, as the animal is learning across multiple trials. We hypothesize that these snippets can be used by the PFC to achieve multi-goal spatial sequence learning. We propose to develop an integrated model of HC and PFC that is able to form place-cell activation sequences based on snippet replay. The proposed collaborative research will extend an existing spatial cognition model for simpler goal-oriented tasks (Barrera and Weitzenfeld 2008; Barrera et al. 2015) with a new replay-driven model for memory formation in the hippocampus and spatial sequence learning and recall in the PFC. In contrast to existing work on sequence learning that relies heavily on sophisticated learning algorithms and synaptic modification rules, we propose to use an alternative computational framework known as reservoir computing (Dominey 1995), in which large pools of prewired neural elements process information dynamically through reverberations. This reservoir computing model will consolidate snippets into larger place-cell activation sequences that may later be recalled by subsets of the original sequences. The proposed work is expected to generate a new understanding of the role of replay in memory acquisition in complex tasks such as sequence learning. 
That operational understanding will be leveraged and tested on an embodied cognitive real-time robotic framework, related to the animat paradigm (Wilson 1991) [etc...]
APA, Harvard, Vancouver, ISO, and other styles
31

Nowshin, Fabiha. "Spiking Neural Network with Memristive Based Computing-In-Memory Circuits and Architecture." Thesis, Virginia Tech, 2021. http://hdl.handle.net/10919/103854.

Full text
Abstract:
In recent years, neuromorphic computing systems have achieved a lot of success due to their ability to process data much faster while using much less power than traditional Von Neumann computing architectures. There are two main types of Artificial Neural Networks (ANNs): Feedforward Neural Networks (FNNs) and Recurrent Neural Networks (RNNs). In this thesis we first study the types of RNNs and then move on to Spiking Neural Networks (SNNs). SNNs are an improved version of ANNs that mimic biological neurons closely through the emission of spikes. This shows significant advantages in terms of power and energy when carrying out data-intensive applications by allowing spatio-temporal information processing. On the other hand, emerging non-volatile memory (eNVM) technology is key to emulating neurons and synapses for in-memory computations in neuromorphic hardware. One particular eNVM technology, the memristor, has received wide attention due to its scalability, compatibility with CMOS technology and low power consumption. In this work we develop a spiking neural network by incorporating an inter-spike interval encoding scheme to convert the incoming input signal to spikes, and we use a memristive crossbar to carry out in-memory computing operations. We develop a novel input and output processing engine for our network and demonstrate its spatio-temporal information processing capability. We demonstrate an accuracy of 100% with our design through a small-scale hardware simulation for digit recognition, and an accuracy of 87% in software through MNIST simulations.
M.S.
In recent years neuromorphic computing systems have achieved a lot of success due to their ability to process data much faster and with much less power than traditional von Neumann computing architectures. Artificial Neural Networks (ANNs) are models that mimic biological neurons, where artificial neurons or neurodes are connected together via synapses, similar to the nervous system in the human body. There are two main types of ANNs: the Feedforward Neural Network (FNN) and the Recurrent Neural Network (RNN). In this thesis we first study the types of RNNs and then move on to Spiking Neural Networks (SNNs). SNNs are an improved version of ANNs that mimic biological neurons closely through the emission of spikes. This offers significant advantages in terms of power and energy when carrying out data-intensive applications by allowing spatio-temporal information processing. On the other hand, emerging non-volatile memory (eNVM) technology is key to emulating neurons and synapses for in-memory computations in neuromorphic hardware. One particular eNVM technology, the memristor, has received wide attention due to its scalability, compatibility with CMOS technology and low power consumption. In this work we develop a spiking neural network by incorporating an inter-spike interval encoding scheme to convert the incoming input signal to spikes and use a memristive crossbar to carry out in-memory computing operations. We demonstrate the accuracy of our design through a small-scale hardware simulation for digit recognition and an accuracy of 87% in software through MNIST simulations.
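As a rough illustration of the two ingredients named in this abstract — inter-spike-interval encoding and in-memory multiply-accumulate on a memristive crossbar — the following sketch uses an assumed linear intensity-to-interval mapping and idealized Ohm's-law conductances; none of the values, units or function names come from the thesis.

```python
import numpy as np

def isi_encode(signal, t_max=100.0, t_min=1.0):
    """Map each input intensity to an inter-spike interval:
    stronger inputs -> shorter gaps between spikes (illustrative mapping)."""
    signal = np.asarray(signal, dtype=float)
    x = (signal - signal.min()) / (np.ptp(signal) + 1e-12)  # normalise to [0, 1]
    return t_max - x * (t_max - t_min)                      # interval, e.g. in ms

def crossbar_mac(conductance, voltage):
    """A memristive crossbar performs multiply-accumulate in place:
    column current j = sum_i G[i, j] * V[i] (Ohm's + Kirchhoff's laws)."""
    return voltage @ conductance

rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(4, 3))   # synaptic conductances (S)
V = np.array([0.1, 0.0, 0.2, 0.05])        # spike-derived input voltages (V)
print(isi_encode([0.0, 0.5, 1.0]))          # intervals shrink as input grows
print(crossbar_mac(G, V))                   # one output current per column
```

The crossbar thus computes a full matrix-vector product in a single analog read step, which is the "in-memory computing" advantage the abstract refers to.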
APA, Harvard, Vancouver, ISO, and other styles
32

Marquez, Alfonzo Bicky. "Reservoir computing photonique et méthodes non-linéaires de représentation de signaux complexes : Application à la prédiction de séries temporelles." Thesis, Bourgogne Franche-Comté, 2018. http://www.theses.fr/2018UBFCD042/document.

Full text
Abstract:
Les réseaux de neurones artificiels constituent des systèmes alternatifs pour effectuer des calculs complexes, ainsi que pour contribuer à l'étude des systèmes neuronaux biologiques. Ils sont capables de résoudre des problèmes complexes, tel que la prédiction de signaux chaotiques, avec des performances à l'état de l'art. Cependant, la compréhension du fonctionnement des réseaux de neurones dans la résolution de problèmes comme la prédiction reste vague ; l'analogie avec une boîte-noire est souvent employée. En combinant la théorie des systèmes dynamiques non linéaires avec celle de l'apprentissage automatique (Machine Learning), nous avons développé un nouveau concept décrivant à la fois le fonctionnement des réseaux neuronaux ainsi que les mécanismes à l'œuvre dans leurs capacités de prédiction. Grâce à ce concept, nous avons pu imaginer un processeur neuronal hybride composé d'un réseaux de neurones et d'une mémoire externe. Nous avons également identifié les mécanismes basés sur la synchronisation spatio-temporelle avec lesquels des réseaux neuronaux aléatoires récurrents peuvent effectivement fonctionner, au-delà de leurs états de point fixe habituellement utilisés. Cette synchronisation a entre autre pour effet de réduire l'impact de la dynamique régulière spontanée sur la performance du système. Enfin, nous avons construit physiquement un réseau récurrent à retard dans un montage électro-optique basé sur le système dynamique d'Ikeda. Celui-ci a dans un premier temps été étudié dans le contexte de la dynamique non-linéaire afin d'en explorer certaines propriétés, puis nous l'avons utilisé pour implémenter un processeur neuromorphique dédié à la prédiction de signaux chaotiques
Artificial neural networks are systems prominently used in computation and in investigations of biological neural systems. They provide state-of-the-art performance in challenging problems like the prediction of chaotic signals. Yet the understanding of how neural networks actually solve problems like prediction remains vague; the black-box analogy is often employed. Merging nonlinear dynamical systems theory with machine learning, we develop a new concept which describes neural networks and prediction within the same framework. Taking advantage of the insight obtained, we design a priori a hybrid computer, which extends a neural network by an external memory. Furthermore, we identify mechanisms based on spatio-temporal synchronization with which random recurrent neural networks operated beyond their fixed point can reduce the negative impact of regular spontaneous dynamics on their computational performance. Finally, we build a recurrent delay network in an electro-optical setup inspired by the Ikeda system, which is first investigated in a nonlinear dynamics framework. We then implement a neuromorphic processor dedicated to a prediction task
APA, Harvard, Vancouver, ISO, and other styles
33

Oliverio, Lucas. "Nonlinear dynamics from a laser diode with both optical injection and optical feedback for telecommunication applications." Electronic Thesis or Diss., CentraleSupélec, 2024. http://www.theses.fr/2024CSUP0002.

Full text
Abstract:
Le traitement actuel de l'information dans les grands clusters de calcul est responsable d'un fort impact énergétique au niveau mondial. Le paradigme actuel est à repenser, et une architecture de calcul basée sur des composants photoniques (laser à semi-conducteur notamment) est étudiée dans cette thèse. La structure envisagée est un réseau de neurones artificiels pour du traitement de données de télécommunications. Nous étudions une diode laser et ses états dynamiques lorsqu'elle est soumise à une injection optique et à un feedback optique simultanés, ainsi que les liens avec sa capacité de calcul neuro-inspirée, par de la simulation et de l'expérimentation
The current processing of information in large computing clusters is responsible for a strong energetic impact at a global level. The current paradigm needs to be rethought, and a computing architecture based on photonic components (semiconductor lasers in particular) is studied in this thesis. The considered structure is a network of artificial neurons for telecommunications data processing. We study a laser diode and its dynamical states under simultaneous optical injection and optical feedback, and the links between these dynamics and its neuro-inspired computing capacity, through both simulation and experimental work
APA, Harvard, Vancouver, ISO, and other styles
34

Chaix-Eichel, Naomi. "Etude du rôle de l’architecture des réseaux neuronaux dans la prise de décision à l’aide de modèles de reservoir computing." Electronic Thesis or Diss., Bordeaux, 2024. http://www.theses.fr/2024BORD0279.

Full text
Abstract:
Une similarité frappante existe dans l'organisation et la structure de certaines régions du cerveau chez diverses espèces. Par exemple, la structure cérébrale des vertébrés, des poissons aux mammifères, présente une similarité remarquable dans des régions telles que le cortex, l'hippocampe, le cervelet et les ganglions de la base. Cela suggère que ces régions sont apparues tôt dans l'évolution des vertébrés et ont été conservées au fil du temps. La persistance de ces structures soulève des questions fondamentales sur leurs origines évolutives : sont-elles des solutions uniques et optimales pour le traitement de l'information et le contrôle du comportement, ou d'autres architectures cérébrales pourraient-elles émerger pour offrir des propriétés fonctionnelles équivalentes ? Cette thèse étudie la relation entre la structure du cerveau et les fonctions cognitives, en se concentrant particulièrement sur le processus de prise de décision. Nous proposons d'utiliser un type de réseau de neurones récurrent appelé Echo State Network, qui est structurellement minimal et dans lequel les neurones sont connectés de manière aléatoire. Nous voulons déterminer si ce modèle minimal peut capturer tout processus décisionnel et, si ce n'est pas le cas, nous chercherons l'existence de structures alternatives. Premièrement, nous démontrons qu'un modèle minimal est capable de résoudre des tâches décisionnelles simples dans le contexte de la navigation spatiale. Ensuite, nous montrons que cette structure minimale a des limitations de performance lorsqu'il s'agit de tâches plus complexes, nécessitant davantage de structure pour retrouver de bonnes performances. Troisièmement, nous utilisons un algorithme génétique faisant évoluer la structure du réseau vers des configurations plus complexes, ce qui nous conduit à découvrir plusieurs solutions réalisables émergeant de variations structurelles.
De plus, nos résultats révèlent que des architectures identiques peuvent manifester une gamme de comportements différents, nous incitant à explorer les facteurs supplémentaires pouvant contribuer à ces différences comportementales, au-delà des variations structurelles. Notre analyse du comportement de 24 singes vivant en communauté révèle que des facteurs sociaux, tels que la hiérarchie sociale, jouent un rôle significatif dans l'influence du comportement. Cette thèse adopte une approche qui diffère des méthodologies traditionnelles en neurosciences. Plutôt que de construire directement des architectures biologiques, les modèles sont construits en faisant évoluer leurs structures de simples à complexes, reproduisant ainsi le processus de l'évolution biologique. En s'appuyant sur les principes de réalisabilité multiple, cette approche permet l'évolution de configurations structurelles diverses capables de parvenir à des résultats fonctionnels équivalents
A striking similarity exists in the organization and structure of certain brain regions across diverse species. For instance, the brain structure of vertebrates, from fish to mammals, includes regions like the cortex, hippocampus, cerebellum and basal ganglia with remarkable similarity. The presence of these structures across a wide range of species strongly suggests that they emerged early in vertebrate evolution and have been conserved throughout evolution. The persistence of these structures raises intriguing questions about their evolutionary origins: are they unique and optimal solutions for processing information and controlling behavior, or could alternative brain architectures emerge to achieve similar functional properties? To investigate this question, this thesis explores the relationship between brain architecture and cognitive function, with a focus on decision-making processes. We propose to use variants of a recurrent neural network model (the echo state network) that is structurally minimal and randomly connected. We aim to identify whether a minimal model can capture any decision-making process and, if it cannot, we explore whether multiple realizable solutions emerge through structural variations. First, we demonstrate that a minimal model is able to solve simple decision tasks in the context of spatial navigation. Second, we show that this minimal structure has performance limitations when handling more complex tasks, requiring additional structural constraints to achieve better results. Third, by employing a genetic algorithm to evolve the network structure into more complex ones, we discover multiple realizable solutions emerging through structural variations. Furthermore, we reveal that identical architectures can exhibit a range of different behaviors, leading us to investigate additional factors contributing to these different behaviors beyond structural variations.
Our analysis of the behavior of 24 monkeys living in a community reveals that social factors, such as social hierarchy, play a significant role in their behavior. This thesis takes an approach that differs from traditional neuroscience methodologies. Rather than directly constructing biologically inspired architectures, the models are designed from simple to complex structures, reproducing the process of biological evolution. By leveraging the principles of multiple realizability, this approach enables the evolution of diverse structural configurations that can achieve equivalent functional outcomes
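The structurally minimal echo state network that this abstract takes as its starting point can be sketched in a few lines: fixed random input and recurrent weights, and only the linear readout is trained. The dimensions, spectral radius and next-step-prediction task below are illustrative choices, not those of the thesis.

```python
import numpy as np

rng = np.random.default_rng(42)
n_in, n_res = 1, 100

# Fixed random input and recurrent weights; only W_out is trained.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius below 1

def run_reservoir(u):
    """Collect reservoir states for an input sequence u (T x n_in)."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ u_t + W @ x)            # standard ESN update
        states.append(x.copy())
    return np.array(states)

# Train a linear readout (ridge regression) to predict the next input.
u = np.sin(np.linspace(0, 20 * np.pi, 500))[:, None]
X, Y = run_reservoir(u[:-1]), u[1:]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ Y)
pred = X @ W_out
print("train MSE:", np.mean((pred - Y) ** 2))
```

The genetic algorithm mentioned in the abstract would then act on the structure of `W` (its connectivity pattern) rather than on the readout.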
APA, Harvard, Vancouver, ISO, and other styles
35

Röhm, André [Verfasser], Kathy [Akademischer Betreuer] Lüdge, Kathy [Gutachter] Lügde, and Ingo [Gutachter] Fischer. "Symmetry-Breaking bifurcations and reservoir computing in regular oscillator networks / André Röhm ; Gutachter: Kathy Lügde, Ingo Fischer ; Betreuer: Kathy Lüdge." Berlin : Technische Universität Berlin, 2019. http://d-nb.info/1183789491/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Vincent-Lamarre, Philippe. "Learning Long Temporal Sequences in Spiking Networks by Multiplexing Neural Oscillations." Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/39960.

Full text
Abstract:
Many living organisms have the ability to execute complex behaviors and cognitive processes that are reliable. In many cases, such tasks are generated in the absence of an ongoing external input that could drive the activity of their underlying neural populations. For instance, writing the word "time" requires a precise sequence of muscle contractions in the hand and wrist. There have to be patterns of activity in the areas of the brain responsible for this behaviour that are endogenously generated every time an individual performs this action. Whereas the question of how such a neural code is transformed into the target motor sequence is a question of its own, its origin is perhaps even more puzzling. Most models of cortical and sub-cortical circuits suggest that many of their neural populations are chaotic. This means that very small amounts of noise, such as an additional action potential in a neuron of a network, can lead to completely different patterns of activity. Reservoir computing is one of the first frameworks that provided an efficient solution for biologically relevant neural networks to learn complex temporal tasks in the presence of chaos. We showed that although reservoirs (i.e. recurrent neural networks) are robust to noise, they are extremely sensitive to some forms of structural perturbation, such as removing one neuron out of thousands. We proposed an alternative to these models, where the source of autonomous activity no longer originates from the reservoir, but from a set of oscillating networks projecting to the reservoir. In our simulations, we show that this solution produces rich patterns of activity and leads to networks that are resistant to both noise and structural perturbations. The model can learn a wide variety of temporal tasks such as interval timing, motor control, speech production and spatial navigation.
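The core idea of this abstract — replacing fragile chaotic autonomous dynamics with a bank of stable oscillations that a trained readout combines into the target trace — can be caricatured as follows. The frequencies, the target and the phase-jitter perturbation test are all invented for illustration.

```python
import numpy as np

t = np.linspace(0, 2, 1000)

# A bank of stable oscillations at multiplexed frequencies replaces
# chaotic autonomous dynamics as the source of activity.
freqs = np.arange(1, 11)                                  # Hz (illustrative)
basis = np.concatenate([np.sin(2 * np.pi * freqs[:, None] * t),
                        np.cos(2 * np.pi * freqs[:, None] * t)])

# An arbitrary "motor-like" target trace to be produced endogenously.
target = np.sin(2 * np.pi * 3 * t) + 0.5 * np.cos(2 * np.pi * 5 * t)

# Least-squares readout over the oscillation bank.
W = np.linalg.lstsq(basis.T, target, rcond=None)[0]

# Perturbation test: jitter all oscillator phases slightly; the output
# degrades gracefully instead of diverging as a chaotic reservoir would.
noisy = np.concatenate([np.sin(2 * np.pi * freqs[:, None] * t + 0.01),
                        np.cos(2 * np.pi * freqs[:, None] * t + 0.01)])
print("clean error:", np.mean((W @ basis - target) ** 2))
print("jittered error:", np.mean((W @ noisy - target) ** 2))
```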
APA, Harvard, Vancouver, ISO, and other styles
37

Strock, Anthony. "Mémoire de travail dans les réseaux de neurones récurrents aléatoires." Thesis, Bordeaux, 2020. http://www.theses.fr/2020BORD0195.

Full text
Abstract:
La mémoire de travail peut être définie comme la capacité à stocker temporairement et à manipuler des informations de toute nature. Par exemple, imaginez que l'on vous demande d'additionner mentalement une série de nombres. Afin de réaliser cette tâche, vous devez garder une trace de la somme partielle qui doit être mise à jour à chaque fois qu'un nouveau nombre est donné. La mémoire de travail est précisément ce qui permettrait de maintenir (i.e. stocker temporairement) la somme partielle et de la mettre à jour (i.e. manipuler). Dans cette thèse, nous proposons d'explorer les implémentations neuronales de cette mémoire de travail en utilisant un nombre restreint d'hypothèses. Pour ce faire, nous nous plaçons dans le contexte général des réseaux de neurones récurrents et nous proposons d'utiliser en particulier le paradigme du reservoir computing. Ce type de modèle très simple permet néanmoins de produire des dynamiques dont l'apprentissage peut tirer parti pour résoudre une tâche donnée. Dans ce travail, la tâche à réaliser est une mémoire de travail à porte (gated working memory). Le modèle reçoit en entrée un signal qui contrôle la mise à jour de la mémoire. Lorsque la porte est fermée, le modèle doit maintenir son état de mémoire actuel, alors que lorsqu'elle est ouverte, il doit la mettre à jour en fonction d'une entrée. Dans notre approche, cette entrée supplémentaire est présente à tout instant, même lorsqu'il n'y a pas de mise à jour à faire. En d'autres termes, nous exigeons que notre modèle soit un système ouvert, i.e. un système qui est toujours perturbé par ses entrées mais qui doit néanmoins apprendre à conserver une mémoire stable. Dans la première partie de ce travail, nous présentons l'architecture du modèle et ses propriétés, puis nous montrons sa robustesse au travers d'une étude de sensibilité aux paramètres. Celle-ci montre que le modèle est extrêmement robuste pour une large gamme de paramètres. Peu ou prou, toute population aléatoire de neurones peut être utilisée pour effectuer le gating. Par ailleurs, après apprentissage, nous mettons en évidence une propriété intéressante du modèle, à savoir qu'une information peut être maintenue de manière entièrement distribuée, i.e. sans être corrélée à aucun des neurones mais seulement à la dynamique du groupe. Plus précisément, la mémoire de travail n'est pas corrélée avec l'activité soutenue des neurones, ce qui a pourtant longtemps été observé dans la littérature et remis en cause récemment de façon expérimentale. Ce modèle vient confirmer ces résultats au niveau théorique. Dans la deuxième partie de ce travail, nous montrons comment ces modèles obtenus par apprentissage peuvent être étendus afin de manipuler l'information qui se trouve dans l'espace latent. Nous proposons pour cela de considérer les conceptors, qui peuvent être conceptualisés comme un jeu de poids synaptiques venant contraindre la dynamique du réservoir et la diriger vers des sous-espaces particuliers, par exemple des sous-espaces correspondant au maintien d'une valeur particulière. Plus généralement, nous montrons que ces conceptors peuvent non seulement maintenir des informations, mais aussi maintenir des fonctions. Dans le cas du calcul mental évoqué précédemment, ces conceptors permettent alors de se rappeler et d'appliquer l'opération à effectuer sur les différentes entrées données au système. Ces conceptors permettent donc d'instancier une mémoire de type procédural en complément de la mémoire de travail de type déclaratif. Nous concluons ce travail en remettant en perspective ce modèle théorique vis-à-vis de la biologie et des neurosciences
Working memory can be defined as the ability to temporarily store and manipulate information of any kind. For example, imagine that you are asked to mentally add a series of numbers. In order to accomplish this task, you need to keep track of the partial sum that needs to be updated every time a new number is given. The working memory is precisely what makes it possible to maintain (i.e. temporarily store) the partial sum and to update it (i.e. manipulate it). In this thesis, we propose to explore the neuronal implementations of this working memory using a limited number of hypotheses. To do this, we place ourselves in the general context of recurrent neural networks and we propose in particular to use the reservoir computing paradigm. This type of very simple model nevertheless makes it possible to produce dynamics that learning can take advantage of to solve a given task. In this work, the task to be performed is a gated working memory task. The model receives as input a signal which controls the update of the memory. When the gate is closed, the model should maintain its current memory state, while when it is open, it should update it based on an input. In our approach, this additional input is present at all times, even when there is no update to do. In other words, we require our model to be an open system, i.e. a system which is always disturbed by its inputs but which must nevertheless learn to keep a stable memory. In the first part of this work, we present the architecture of the model and its properties, then we show its robustness through a parameter sensitivity study. This shows that the model is extremely robust for a wide range of parameters. More or less any random population of neurons can be used to perform gating. Furthermore, after learning, we highlight an interesting property of the model, namely that information can be maintained in a fully distributed manner, i.e. without being correlated to any of the neurons but only to the dynamics of the group. More precisely, working memory is not correlated with the sustained activity of neurons, which has nevertheless long been observed in the literature and recently been questioned experimentally. This model confirms these results at the theoretical level. In the second part of this work, we show how these models obtained by learning can be extended in order to manipulate the information in the latent space. We propose to consider conceptors, which can be conceptualized as a set of synaptic weights which constrain the dynamics of the reservoir and direct it towards particular subspaces; for example, subspaces corresponding to the maintenance of a particular value. More generally, we show that these conceptors can not only maintain information but also maintain functions. In the case of the mental arithmetic mentioned previously, these conceptors make it possible to remember and apply the operation to be carried out on the various inputs given to the system. These conceptors therefore make it possible to instantiate a procedural memory in addition to the declarative working memory. We conclude this work by putting this theoretical model into perspective with respect to biology and neuroscience
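A minimal sketch of the conceptor idea referred to above, in Jaeger's standard formulation: a conceptor is computed as C = R(R + alpha^-2 I)^-1 from the correlation matrix R of reservoir states driven by a pattern, and acts almost as the identity on that pattern's subspace while attenuating unrelated directions. Reservoir size, drive signal and the aperture alpha below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50
W = rng.normal(0, 1, (n, n))
W *= 0.8 / np.max(np.abs(np.linalg.eigvals(W)))   # contractive reservoir
W_in = rng.uniform(-1, 1, n)

def collect_states(u):
    """Drive the reservoir with signal u and record its states."""
    x, xs = np.zeros(n), []
    for u_t in u:
        x = np.tanh(W @ x + W_in * u_t)
        xs.append(x.copy())
    return np.array(xs)

# Drive the reservoir with a pattern and compute its conceptor
# C = R (R + alpha^-2 I)^-1, with R the state correlation matrix.
u = np.sin(0.3 * np.arange(500))
X = collect_states(u)[100:]                     # discard the transient
R = X.T @ X / len(X)
alpha = 10.0
C = R @ np.linalg.inv(R + alpha ** -2 * np.eye(n))

# C barely changes states from the learned pattern, but strongly
# attenuates an unrelated random direction.
x_pat = X[-1]
x_rand = rng.normal(0, 1, n)
print(np.linalg.norm(C @ x_pat) / np.linalg.norm(x_pat))    # close to 1
print(np.linalg.norm(C @ x_rand) / np.linalg.norm(x_rand))  # smaller
```

Inserting `C` into the update loop (`x = C @ np.tanh(...)`) is what confines the dynamics to the pattern's subspace, the mechanism the abstract uses to store values and operations.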
APA, Harvard, Vancouver, ISO, and other styles
38

Antonik, Piotr. "Application of FPGA to real-time machine learning: hardware reservoir computers and software image processing." Doctoral thesis, Universite Libre de Bruxelles, 2017. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/257660.

Full text
Abstract:
Reservoir computing is a set of techniques that simplify the use of artificial neural networks. Experimental realizations of this concept, notably optical ones, have shown performance close to the state of the art in recent years. The high speed of optical experiments does not allow real-time intervention with a standard computer. In this work, we use a very fast field-programmable gate array (FPGA) to interact with the experiment in real time, which makes it possible to develop new functionalities. Four experiments were carried out in this framework. The first aimed to implement an online training algorithm, allowing the parameters of the neural network to be optimized in real time. We showed that such a system was capable of accomplishing realistic tasks whose requirements varied over time. The goal of the second experiment was to create an optical reservoir computer allowing the optimization of its input weights with the backpropagation-through-time algorithm. The experiment showed that this idea was entirely feasible, despite a few technical difficulties encountered. We tested the resulting system on complex tasks (beyond the capabilities of classical reservoir computers) and obtained results close to the state of the art. In the third experiment, we fed our optical reservoir computer back on itself in order to generate time series autonomously. The system was successfully tested on periodic series and chaotic attractors. The experiment also allowed us to highlight the effects of experimental noise in closed-loop systems. The fourth experiment, although numerical, aimed at the development of an analogue output layer.
We were able to verify that the online training method developed previously was robust against all the experimental problems studied. Consequently, we have all the information needed to realize this idea experimentally. Finally, during the last months of my thesis, I carried out an internship whose goal was to apply my knowledge of FPGA programming and artificial neural networks to a concrete problem in cardiovascular imaging. We developed a program capable of analysing images in real time, suitable for clinical applications.
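The online-training idea of the first experiment — updating the readout weights sample by sample so the system can follow a task whose target drifts over time — is commonly realized with recursive least squares (RLS). The sketch below uses that generic scheme with invented parameters; it is not the exact algorithm or hardware of the thesis.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 80
W = rng.normal(0, 1, (n, n))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # echo-state scaling
W_in = rng.uniform(-1, 1, n)

# Online readout training with recursive least squares (RLS):
# weights are updated at every time step, so no batch regression is needed
# and the target definition may change during operation.
P = np.eye(n) / 1e-2          # running estimate of the inverse correlation
w_out = np.zeros(n)
x = np.zeros(n)
errors = []
for t in range(2000):
    u = np.sin(0.2 * t)
    x = np.tanh(W @ x + W_in * u)
    y = w_out @ x                         # current readout output
    target = np.sin(0.2 * (t + 1))        # task: predict the next input
    e = target - y
    k = P @ x / (1.0 + x @ P @ x)         # RLS gain
    w_out += e * k
    P -= np.outer(k, x @ P)
    errors.append(e ** 2)

print("early MSE:", np.mean(errors[:100]))
print("late MSE:", np.mean(errors[-100:]))
```

Because each update costs only a few vector operations, this kind of loop is what an FPGA can execute fast enough to keep pace with an optical experiment.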
Doctorat en Sciences
info:eu-repo/semantics/nonPublished
APA, Harvard, Vancouver, ISO, and other styles
39

Baylon, Fuentes Antonio. "Ring topology of an optical phase delayed nonlinear dynamics for neuromorphic photonic computing." Thesis, Besançon, 2016. http://www.theses.fr/2016BESA2047/document.

Full text
Abstract:
Aujourd'hui, la plupart des ordinateurs sont encore basés sur des concepts développés il y a plus de 60 ans par Alan Turing et John von Neumann. Cependant, ces ordinateurs numériques ont déjà commencé à atteindre certaines limites physiques via la technologie de la microélectronique au silicium (dissipation, vitesse, limites d'intégration, consommation d'énergie). Des approches alternatives, plus puissantes, plus efficaces et moins consommatrices d'énergie, constituent depuis plusieurs années un enjeu scientifique majeur. Beaucoup de ces approches s'inspirent naturellement du cerveau humain, dont les principes opérationnels sont encore loin d'être compris. Au début des années 2000, la communauté scientifique s'est aperçue qu'une modification du réseau neuronal récurrent (RNN), plus simple et maintenant appelée Reservoir Computing (RC), est parfois plus efficace pour certaines fonctionnalités, et est un nouveau paradigme de calcul qui s'inspire du cerveau. Sa structure est assez semblable aux concepts classiques de RNN, présentant généralement trois parties: une couche d'entrée pour injecter l'information dans un système dynamique non-linéaire (Write-In), une seconde couche où l'information d'entrée est projetée dans un espace de grande dimension (appelé réservoir dynamique) et une couche de sortie à partir de laquelle les informations traitées sont extraites par une fonction dite de lecture-sortie. Dans l'approche RC, la procédure d'apprentissage est effectuée uniquement dans la couche de sortie, tandis que la couche d'entrée et la couche réservoir sont fixées de manière aléatoire, ce qui constitue l'originalité principale du RC par rapport aux méthodes RNN. Cette fonctionnalité permet d'obtenir plus d'efficacité, de rapidité, de convergence d'apprentissage, et permet une mise en œuvre expérimentale. Cette thèse de doctorat a pour objectif d'implémenter pour la première fois le RC photonique en utilisant des dispositifs de télécommunication.
Notre mise en œuvre expérimentale est basée sur un système dynamique non linéaire à retard, qui repose sur un oscillateur électro-optique (EO) avec une modulation de phase différentielle. Cet oscillateur EO a été largement étudié dans le contexte de la cryptographie optique du chaos. La dynamique présentée par de tels systèmes est en effet exploitée pour développer des comportements complexes dans un espace de phase à dimension infinie, et des analogies avec la dynamique spatio-temporelle (tels que les réseaux neuronaux) sont également trouvés dans la littérature. De telles particularités des systèmes à retard ont conforté l'idée de remplacer le RNN traditionnel (généralement difficile à concevoir technologiquement) par une architecture à retard d'EO non linéaire. Afin d'évaluer la puissance de calcul de notre approche RC, nous avons mis en œuvre deux tests de reconnaissance de chiffres parlés (tests de classification) à partir d'une base de données standard en intelligence artificielle (TI-46 et AURORA-2), et nous avons obtenu des performances très proches de l'état de l'art tout en établissant un nouvel état de l'art en ce qui concerne la vitesse de classification. Notre approche RC photonique nous a en effet permis de traiter environ 1 million de mots par seconde, améliorant la vitesse de traitement de l'information d'un facteur supérieur à ~3
Nowadays most computers are still based on concepts developed more than 60 years ago by Alan Turing and John von Neumann. However, these digital computers have already begun to reach certain physical limits of their implementation via silicon microelectronics technology (dissipation, speed, integration limits, energy consumption). Alternative approaches that are more powerful, more efficient and consume less energy have constituted a major scientific issue for several years. Many of these approaches naturally draw inspiration from the human brain, whose operating principles are still far from being understood. In this line of research, a surprisingly simple variation of the recurrent neural network (RNN), sometimes even more efficient for certain features or processing cases, appeared in the early 2000s; now known as Reservoir Computing (RC), it is an emerging brain-inspired computational paradigm. Its structure is quite similar to classical RNN computing concepts, generally exhibiting three parts: an input layer to inject the information into a nonlinear dynamical system (Write-In), a second layer where the input information is projected into a high-dimensional space called the dynamical reservoir, and an output layer from which the processed information is extracted through a so-called Read-Out function. In the RC approach the learning procedure is performed in the output layer only, while the input and reservoir layers are randomly fixed; this is the main originality of RC compared to RNN methods. This feature yields greater efficiency, rapidity and learning convergence, and also makes an experimental implementation practical. This PhD thesis is dedicated to one of the first photonic RC implementations using telecommunication devices. Our experimental implementation is based on a nonlinear delayed dynamical system, which relies on an electro-optic (EO) oscillator with differential phase modulation.
This EO oscillator was extensively studied in the context of optical chaos cryptography. The dynamics exhibited by such systems are indeed known to develop complex behaviors in an infinite-dimensional phase space, and analogies with spatio-temporal dynamics (of which neural networks are one kind) are also found in the literature. Such peculiarities of delay systems supported the idea of replacing the traditional RNN (usually difficult to design technologically) with a nonlinear EO delay architecture. In order to evaluate the computational power of our RC approach, we implement two spoken digit recognition tests (classification tests) taken from standard databases in artificial intelligence (TI-46 and AURORA-2), obtaining results very close to state-of-the-art performance and establishing a new state of the art in classification speed. Our photonic RC approach allowed us to process around 1 million words per second, improving the information processing speed by a factor of ~3
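A software caricature of the delay-based reservoir described in this abstract: a single nonlinear node with an Ikeda-like sin² response is time-multiplexed into N virtual nodes, and a random input mask plays the role of the input layer. All parameters, the weak local coupling and the u³ benchmark target are illustrative assumptions, not the values of the electro-optic setup.

```python
import numpy as np

rng = np.random.default_rng(5)
N = 50                  # virtual nodes multiplexed along the delay line
beta, phi = 0.9, 0.2    # Ikeda-style nonlinearity parameters (illustrative)
mask = rng.uniform(-1, 1, N)   # random input mask replaces input weights

def delay_reservoir(u):
    """One nonlinear node plus a delay loop: each input sample is held
    over N virtual nodes; f(x) = beta * sin^2(x + phi) is Ikeda-like."""
    state = np.zeros(N)
    states = []
    for u_t in u:
        new = np.empty(N)
        for i in range(N):
            prev = new[i - 1] if i > 0 else state[-1]   # coupling along the loop
            new[i] = beta * np.sin(state[i] + mask[i] * u_t + phi) ** 2 + 0.1 * prev
        state = new
        states.append(state.copy())
    return np.array(states)

u = np.sin(np.linspace(0, 8 * np.pi, 400))
X = delay_reservoir(u)

# Linear readout (ridge) trained to recover a nonlinear function of the input.
Y = u ** 3
w = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ Y)
print("readout MSE:", np.mean((X @ w - Y) ** 2))
```

In the physical system the inner loop over virtual nodes is carried out by the light circulating in the fibre delay line, which is what makes the architecture fast despite having only one hardware nonlinearity.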
APA, Harvard, Vancouver, ISO, and other styles
40

Mwamsojo, Nickson. "Neuromorphic photonic systems for information processing." Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAS002.

Full text
Abstract:
Par une utilisation performante de nombreux algorithmes dont les réseaux neuronaux, l'intelligence artificielle révolutionne le développement de la société numérique. Néanmoins, la tendance actuelle dépasse les limites prédites par la loi de Moore et celle de Koomey, ce qui implique des limitations éventuelles des implémentations numériques de ces systèmes. Pour répondre plus efficacement aux besoins calculatoires spécifiques de cette révolution, des systèmes physiques innovants tentent en amont d'apporter des solutions, nommées "neuro-morphiques" puisqu'elles imitent le fonctionnement des cerveaux biologiques. Les systèmes existants sont basés sur des techniques dites de "Reservoir Computing" ou "coherent Ising Machine." Leurs versions photoniques, ont permis de démontrer l'intérêt de ces techniques notamment pour la reconnaissance vocale avec un état de l'art en 2017 attestant de bonnes performances en termes de reconnaissance à un rythme d'1 million de mots par seconde. Nous proposons dans un premier temps une technique d'ajustement automatique des hyperparamètres pour le "Reservoir Computing", accompagnée d'une étude théorique de convergence. Nous proposons ensuite une solution au problème de la détection précoce de la maladie d'Alzheimer de type "Reservoir Computing" optoélectronique. En plus des taux de classifications obtenus meilleurs que l'état de l'art, une étude complète du compromis coût énergétique performance démontre la validité de cette approche. Enfin, le problème de la restauration d'image par maximum de vraisemblance est abordé à l'aide d'une implémentation optoélectronique appropriée de type "coherent Ising Machine"
Artificial Intelligence has revolutionized the scientific community thanks to the advent of a robust computation workforce and Artificial Neural Networks. However, current implementation trends introduce a rapidly growing demand for computational power, surpassing the rates and limits of Moore's and Koomey's Laws, which implies an eventual efficiency barrier. To respond to these demands, bio-inspired techniques known as 'neuromorphic' systems have been proposed using physical devices. Of these systems, we focus on 'Reservoir Computing' and 'Coherent Ising Machines' in our work. Reservoir Computing, for instance, has demonstrated its computational power, such as the state-of-the-art performance of up to 1 million words per second achieved with photonic hardware in 2017. We propose an automatic hyperparameter tuning algorithm for Reservoir Computing and give a theoretical study of its convergence. Moreover, we propose Reservoir Computing for early-stage Alzheimer's disease detection, with a thorough assessment of the energy cost versus performance compromise. Finally, we address the noisy image restoration problem by maximum a posteriori estimation using an optoelectronic implementation of a Coherent Ising Machine
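The thesis's tuning algorithm and its convergence proof are not reproduced in the abstract. As a rough illustration of the kind of problem it addresses, here is a minimal random search over two standard echo state network hyperparameters (spectral radius and leak rate) on a toy prediction task; all sizes, search ranges and the task itself are illustrative assumptions, not the method studied in the thesis.

```python
import numpy as np

def make_esn(n_in, n_res, spectral_radius, seed=0):
    """Random input/reservoir weights, reservoir rescaled to a target spectral radius."""
    rng = np.random.default_rng(seed)
    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
    return W_in, W

def run_reservoir(W_in, W, u, leak):
    """Collect leaky-integrator reservoir states driven by the scalar sequence u."""
    x, states = np.zeros(W.shape[0]), []
    for u_t in u:
        x = (1 - leak) * x + leak * np.tanh(W_in[:, 0] * u_t + W @ x)
        states.append(x.copy())
    return np.array(states)

def fit_readout(states, y, ridge=1e-6):
    """Ridge-regression readout: the only trained part of an echo state network."""
    A = states.T @ states + ridge * np.eye(states.shape[1])
    return np.linalg.solve(A, states.T @ y)

# Toy task: one-step-ahead prediction of a sine wave.
t = np.arange(400)
u, y = np.sin(0.1 * t), np.sin(0.1 * (t + 1))

best = (np.inf, None)
rng = np.random.default_rng(1)
for trial in range(20):  # naive random search over two hyperparameters
    rho, leak = rng.uniform(0.5, 1.3), rng.uniform(0.1, 1.0)
    W_in, W = make_esn(1, 50, rho, seed=trial)
    S = run_reservoir(W_in, W, u, leak)
    w_out = fit_readout(S[50:300], y[50:300])                  # train split, transient dropped
    err = np.sqrt(np.mean((S[300:] @ w_out - y[300:]) ** 2))   # held-out RMSE
    if err < best[0]:
        best = (err, (rho, leak))

print(f"best RMSE {best[0]:.4f} at rho={best[1][0]:.2f}, leak={best[1][1]:.2f}")
```

A blind search like this has no convergence guarantee, which is precisely the gap an automatic tuning algorithm with a convergence analysis aims to close.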
APA, Harvard, Vancouver, ISO, and other styles
41

Bazzanella, Davide. "Microring Based Neuromorphic Photonics." Doctoral thesis, Università degli studi di Trento, 2022. http://hdl.handle.net/11572/344624.

Full text
Abstract:
This manuscript investigates the use of microring resonators to create all-optical reservoir-computing networks implemented in silicon photonics. Artificial neural networks and reservoir-computing are promising applications for integrated photonics, as they could make use of the bandwidth and the intrinsic parallelism of optical signals. This work mainly illustrates two aspects: the modelling of photonic integrated circuits and the experimental results obtained with all-optical devices. The modelling of photonic integrated circuits is examined in detail, both concerning fundamental theory and from the point of view of numerical simulations. In particular, the simulations focus on the nonlinear effects present in integrated optical cavities, which increase the inherent complexity of their optical response. Toward this objective, I developed a new numerical tool, precise, which can simulate arbitrary circuits, taking into account both linear propagation and nonlinear effects. The experimental results concentrate on the use of SCISSORs and a single microring resonator as reservoirs and the complex perceptron scheme. The devices have been extensively tested with logical operations, achieving bit error rates of less than 10^−5 at 16 Gbps in the case of the complex perceptron. Additionally, an in-depth explanation of the experimental setup and the description of the manufactured designs are provided. The achievements reported in this work mark an encouraging first step in the direction of the development of novel networks that employ the full potential of all-optical devices.
APA, Harvard, Vancouver, ISO, and other styles
42

Pinto, Rafael Coimbra. "Online incremental one-shot learning of temporal sequences." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2011. http://hdl.handle.net/10183/49063.

Full text
Abstract:
Este trabalho introduz novos algoritmos de redes neurais para o processamento online de padrões espaço-temporais, estendendo o algoritmo Incremental Gaussian Mixture Network (IGMN). O algoritmo IGMN é uma rede neural online incremental que aprende a partir de uma única passada através de dados por meio de uma versão incremental do algoritmo Expectation-Maximization (EM) combinado com regressão localmente ponderada (Locally Weighted Regression, LWR). Quatro abordagens diferentes são usadas para dar capacidade de processamento temporal para o algoritmo IGMN: linhas de atraso (Time-Delay IGMN), uma camada de reservoir (Echo-State IGMN), média móvel exponencial do vetor de entrada reconstruído (Merge IGMN) e auto-referência (Recursive IGMN). Isso resulta em algoritmos que são online, incrementais, agressivos e têm capacidades temporais e, portanto, são adequados para tarefas com memória ou estados internos desconhecidos, caracterizados por fluxo contínuo ininterrupto de dados, e que exigem operação perpétua provendo previsões sem etapas separadas para aprendizado e execução. Os algoritmos propostos são comparados a outras redes neurais espaço-temporais em 8 tarefas de previsão de séries temporais. Dois deles mostram desempenhos satisfatórios, em geral, superando as abordagens existentes. Uma melhoria geral para o algoritmo IGMN também é descrita, eliminando um dos parâmetros ajustáveis manualmente e provendo melhores resultados.
This work introduces novel neural network algorithms for online spatio-temporal pattern processing by extending the Incremental Gaussian Mixture Network (IGMN). The IGMN algorithm is an online incremental neural network that learns from a single scan through the data by means of an incremental version of the Expectation-Maximization (EM) algorithm combined with locally weighted regression (LWR). Four different approaches are used to give temporal processing capabilities to the IGMN algorithm: time-delay lines (Time-Delay IGMN), a reservoir layer (Echo-State IGMN), an exponential moving average of the reconstructed input vector (Merge IGMN) and self-referencing (Recursive IGMN). This results in algorithms that are online, incremental, aggressive and temporally capable, and therefore suitable for tasks with memory or unknown internal states, characterized by continuous non-stopping data flows, and requiring life-long operation with predictions delivered without separate learning and execution stages. The proposed algorithms are compared to other spatio-temporal neural networks on 8 time-series prediction tasks. Two of them show satisfactory performance, generally improving upon existing approaches. A general enhancement to the IGMN algorithm is also described, eliminating one of the algorithm's manually tunable parameters and giving better results.
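The Echo-State IGMN idea, namely a fixed random reservoir supplying temporal context to an online single-pass learner, can be sketched as follows. IGMN itself is not reproduced here; a recursive least squares readout stands in for it as the online one-shot learner, and all sizes and constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_res = 40
W_in = rng.uniform(-0.5, 0.5, (n_res, 1))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # spectral radius < 1: echo state heuristic

# Online readout: recursive least squares stands in for the IGMN learner.
P = np.eye(n_res) / 1e-2   # inverse correlation estimate (large initial value)
w = np.zeros(n_res)
x = np.zeros(n_res)

t = np.arange(500)
u, y = np.sin(0.2 * t), np.sin(0.2 * (t + 1))  # predict the next sample

errors = []
for u_t, y_t in zip(u, y):
    x = np.tanh(W_in[:, 0] * u_t + W @ x)   # fixed (untrained) reservoir update
    e = y_t - w @ x                         # prediction error *before* the update
    k = P @ x / (1.0 + x @ P @ x)           # RLS gain
    w = w + k * e                           # single-pass online update
    P = P - np.outer(k, x @ P)
    errors.append(abs(e))

print(f"mean |error|, last 100 steps: {np.mean(errors[-100:]):.4f}")
```

Everything temporal lives in the fixed reservoir; the learner only ever sees the current state vector, which is what lets a purely feed-forward incremental algorithm handle sequences.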
APA, Harvard, Vancouver, ISO, and other styles
43

Passey, Jr David Joseph. "Growing Complex Networks for Better Learning of Chaotic Dynamical Systems." BYU ScholarsArchive, 2020. https://scholarsarchive.byu.edu/etd/8146.

Full text
Abstract:
This thesis advances the theory of network specialization by characterizing the effect of network specialization on the eigenvectors of a network. We prove and provide explicit formulas for the eigenvectors of specialized graphs based on the eigenvectors of their parent graphs. The second portion of this thesis applies network specialization to learning problems. Our work focuses on training reservoir computers to mimic the Lorenz equations. We experiment with random graph, preferential attachment and small-world topologies and demonstrate that the random removal of directed edges increases the predictive capability of a reservoir topology. We then create a new network model by growing networks via targeted application of the specialization model. This is accomplished iteratively by selecting top-performing nodes within the reservoir computer and specializing them. Our generated topology outperforms all other topologies on average.
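The task of training a reservoir computer to mimic the Lorenz system can be sketched with a plain echo state network. The thesis's specialized and grown topologies are not reproduced here; the sparse random reservoir, the sizes and the Euler integrator below are illustrative stand-ins.

```python
import numpy as np

def lorenz(n, dt=0.005):
    """Euler integration of the Lorenz system (sigma=10, rho=28, beta=8/3)."""
    v = np.array([1.0, 1.0, 1.0])
    out = []
    for _ in range(n):
        x, y, z = v
        v = v + dt * np.array([10.0 * (y - x), x * (28.0 - z) - y, x * y - (8.0 / 3.0) * z])
        out.append(v.copy())
    return np.array(out)

data = lorenz(3000)
data = (data - data.mean(0)) / data.std(0)   # normalize each coordinate

rng = np.random.default_rng(42)
n_res = 200
W_in = rng.uniform(-0.1, 0.1, (n_res, 3))
W = (rng.random((n_res, n_res)) < 0.05) * rng.uniform(-1, 1, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))    # sparse reservoir, spectral radius 0.9

x = np.zeros(n_res)
states = []
for v in data[:-1]:
    x = np.tanh(W_in @ v + W @ x)            # drive the reservoir with the trajectory
    states.append(x.copy())
S, Y = np.array(states)[200:], data[201:]    # drop the transient; one-step-ahead targets

# Ridge-regression readout mapping reservoir states to the next Lorenz point.
w_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(n_res), S.T @ Y)
rmse = np.sqrt(np.mean((S @ w_out - Y) ** 2))
print(f"one-step training RMSE: {rmse:.4f}")
```

To mimic the system rather than merely predict it, the trained readout is fed back as the reservoir's input so the network runs autonomously; it is the quality of that closed-loop trajectory that topology choices such as specialization aim to improve.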
APA, Harvard, Vancouver, ISO, and other styles
44

Vatin, Jeremy. "Photonique neuro-inspirée pour des applications télécoms." Electronic Thesis or Diss., CentraleSupélec, 2020. http://www.theses.fr/2020CSUP0004.

Full text
Abstract:
Nous produisons chaque jour de grandes quantités de données, que nous échangeons sur le réseau Internet. Ces données sont traitées grâce à des clusters de calcul, responsables de la consommation énergétique d’internet. Dans cette thèse, nous étudions une architecture faite de composants photoniques, pour se débarrasser des composants électroniques consommant de l'énergie. Grâce aux composants actuellement utilisés dans le réseau Internet (laser et fibre optique), nous réalisons un réseau neuronal artificiel capable de traiter les données de télécommunication. Le réseau de neurones artificiel est constitué d'un laser et d'une fibre optique qui renvoie la lumière dans ce laser. Le comportement complexe de ce système est utilisé pour alimenter les neurones artificiels qui sont répartis le long de la fibre. Nous sommes en mesure de prouver que ce système est capable de traiter soit un signal avec une grande efficacité, soit deux signaux au prix d'une petite perte de précision
We produce every day thousands of gigabits of data, exchanged over the internet. These data are processed by computation clusters, which are responsible for the large amount of energy consumed by the internet network. In this work, we study an architecture made of photonic components, to get rid of power-consuming electronic components. Using components already deployed in the internet network (a laser and an optical fiber), we aim at building an artificial neural network able to process telecommunication data. The artificial neural network is made of a laser and an optical fiber that sends the light back into the laser. The complex behavior of this system is used to feed the artificial neurons distributed along the fiber. We are able to prove that this system can process either one signal with high efficiency, or two signals at the expense of a small loss of accuracy
APA, Harvard, Vancouver, ISO, and other styles
45

Ismail, Ali Rida. "Commensurable and Chaotic Nano-Contact Vortex Oscillator (NCVO) study for information processing." Electronic Thesis or Diss., Université de Lorraine, 2022. http://www.theses.fr/2022LORR0003.

Full text
Abstract:
La quantité de données utilisées dans les technologies de l'information augmente considérablement. Cela s'accompagne de la prolifération de technologies électroniques très avancées. Les problèmes thermiques, résultant de ces grands processus de données, imposent l'utilisation de nouvelles technologies et de nouveaux paradigmes à la place des circuits CMOS. Les dispositifs spintroniques sont l'une des nombreuses alternatives proposées jusqu'à présent dans la littérature. Dans ce travail, nous considérons un dispositif spintronique appelé oscillateur vortex à nano-contact (NCVO), qui a récemment commencé à attirer l'attention en raison de sa dynamique riche et variable. Cet oscillateur est actionné par un courant continu de polarisation et soumis à un champ magnétique, qui détermine sa dynamique de sortie. L'utilisation pratique du NCVO nécessite l'existence d'un modèle précis qui imite son aimantation de sortie et la trajectoire du vortex tournant autour du centre dans la couche supérieure de l'appareil. Ces deux variables sont nécessaires au calcul de la résistance équivalente du NCVO. Pour cela, nous construisons dans ce travail de thèse un modèle pour le NCVO produisant ces deux variables en utilisant une approche de calcul réservoir appelée réseau piloté par conceptor. Le réseau est formé sur les données NCVO acquises par simulation micromagnétique. Le modèle construit capture avec succès la dynamique du NCVO dans ses différents régimes (chaotique, périodique et quasi-périodique) avec un passage facile entre les régimes. Le même réseau est ensuite utilisé pour la détection du chaos dans la série des temps d'entrée. La méthode de détection de chaos proposée s'est révélée efficace et plus robuste par rapport aux méthodes existantes. Enfin, le modèle NCVO est exploité pour la génération de nombres véritablement aléatoires (TRNG) où une conception matérielle, alimentée par un signal chaotique généré par le modèle, est proposée. 
Cette conception a montré la capacité de concurrencer les techniques RNG existantes en termes de vitesse, de coût et de qualité
The amount of data used in information technology is increasing dramatically. This comes with the proliferation of highly advanced electronic technologies. The thermal issues arising from such large-scale data processing impose the use of novel technologies and paradigms in place of CMOS circuits. Spintronic devices are one of many alternatives proposed so far in the literature. In this work, we consider a spintronic device called the nano-contact vortex oscillator (NCVO), which has recently begun to gain attention due to its rich and variable dynamics. This oscillator is driven by a DC bias current and subjected to a magnetic field, which determines its output dynamics. The practical use of the NCVO requires an accurate model that imitates its output magnetization and the trajectory of the vortex rotating around the center of the upper layer of the device. These two variables are needed to calculate the equivalent resistance of the NCVO. To that end, in this PhD work we build a model of the NCVO producing these two variables, using a reservoir computing approach called a conceptor-driven network. The network is trained on NCVO data obtained by micromagnetic simulation. The resulting model successfully captures the NCVO dynamics in its different regimes (chaotic, periodic and quasi-periodic), with easy shifting between regimes. The same network is then used for the detection of chaos in the input time series. The proposed chaos-detection method has proven efficient and more robust than existing methods. Finally, the NCVO model is exploited for true random number generation (TRNG): a hardware design, fed by a chaotic signal generated by the model, is proposed. This design has shown the ability to compete with existing RNG techniques in terms of speed, cost and quality
APA, Harvard, Vancouver, ISO, and other styles
46

Baldini, Paolo. "Online adaptation of robots controlled by nanowire networks." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amslaurea.unibo.it/25396/.

Full text
Abstract:
The use of current computational systems is sometimes problematic in robotics, as their cost and power consumption might be too high to find a place in low-cost robots. Additionally, their computational capabilities are notable, but not always suited to real-time operation. These limits drastically reduce the range of applications that can be envisioned. A basic example is the use of micro- and nanobots, which adds the further issue of size. The need for different technologies is therefore strong. Here, one alternative computational system is proposed and used: Nanowire Networks. These are a novel type of electrical circuit, able to show dynamical properties. Their low cost and consumption, in addition to their high computational capabilities, make them a perfect candidate for robotic applications. Here, this possibility is assessed by evaluating their use as robot controllers. This research begins with preliminary studies of their behaviour and considerations about their use. The initial analysis is then used to define an online, adaptive learning approach, allowing the robot to exploit the network to adapt to different tasks and environments. The tested capabilities are: simple collision avoidance, with fault-tolerance considerations; a fast, reactive behaviour to avoid illegal areas; and a memory-aware behaviour that can navigate a maze according to an initial stimulus. The results support the promising capabilities of the robotic controller. Additionally, the power of the online adaptation is clearly shown. This thesis therefore paves the way for a new type of computation in robotics, allowing plastic, fault-tolerant, cheap and efficient systems to be developed.
APA, Harvard, Vancouver, ISO, and other styles
47

Bou-Fakhreddine, Bassam. "Modeling, Control and Optimization Of Cascade Hydroelectric-Irrigation Plants : Operation and Planning." Thesis, Paris, CNAM, 2018. http://www.theses.fr/2018CNAM1172.

Full text
Abstract:
Ce travail de recherche vise à optimiser la procédure opérationnelle des centrales hydroélectriques en cascade afin de les utiliser efficacement pour la production d’électricité et l’irrigation. Le défi consistait à trouver le modèle le plus réaliste basé sur la caractéristique stochastique des ressources en eau, sur la demande en énergie et sur le profil d'irrigation. Tous ces aspects sont affectés à court et à long terme par un large éventail de conditions différentes (hydrologique, météorologique et hydraulique). Au cours de ce projet, une étude bibliographique a été réalisée afin d'identifier les problèmes techniques qui empêchent l'utilisation efficace des centrales hydroélectriques dans les pays en développement. Le système est modélisé numériquement en tenant compte de toutes les variables et paramètres impliqués dans le fonctionnement optimal. L'approche la plus appropriée est choisie afin de maximiser l'utilisation efficace de l'eau et de minimiser les pertes économiques, où différents scénarios sont simulés afin de valider les solutions adoptées
This research work aims to optimize the operational procedure of cascade hydro plants so that they can be used efficiently for power generation and irrigation. The challenge was to find the most realistic model based on the stochastic nature of water resources, on the power demand and on the irrigation profile. All these aspects are affected in both the short and the long run by a wide range of different conditions (hydrological, meteorological and hydraulic). During this project a bibliographic study was carried out to identify the technical issues that prevent the efficient use of hydro plants in developing countries. The system is numerically modelled taking into consideration all the variables and parameters involved in optimal operation. The most appropriate approach is chosen in order to maximize the efficient use of water and to minimize economic losses, and different scenarios are simulated in order to validate the adopted solutions
APA, Harvard, Vancouver, ISO, and other styles
48

Ayyad, Marouane. "Le traitement d'information avec des états chimères optiques." Electronic Thesis or Diss., Université de Lille (2022-....), 2023. http://www.theses.fr/2023ULILR058.

Full text
Abstract:
Dans la mythologie grecque, une chimère est une créature fantastique dont certaines parties du corps appartiennent à des animaux différents. Par analogie à cette mythologie, en physique et plus particulièrement dans l'étude des systèmes complexes discrets spatialement étendus, ces états chimères correspondent à la coexistence de deux comportements dynamiques spatio-temporels opposés. La coexistence de deux domaines l'un cohérent et l'autre incohérent dans une chaîne d'oscillateurs non-linéaires couplés en est l'exemple historique, à l'image des différentes parties du corps d'une chimère. Ces auto-organisations spatio-temporelles ont été largement étudiées théoriquement et expérimentalement. Cependant, rares sont les études menées pour explorer les liens entre ce type de dynamique et les automates cellulaires. Ces automates, malgré leur simplicité, possèdent des propriétés dynamiques remarquables et, par conséquent, représentent un des socles de la théorie d'information. Pour répondre à cette problématique, nous avons considéré des états chimères obtenus dans une chaîne de résonateurs optiques identiques couplés. Ces structures ont alors fait l'objet d'analyses quantitatives et qualitatives par les mêmes outils que ceux utilisés pour caractériser les automates cellulaires. Cela nous a permis de mettre en évidence une dynamique de type automate cellulaire élémentaire cachée dans l'évolution de nos états chimères. Nous avons alors été en mesure de déduire un ensemble de propriétés en terme de calculabilité, ouvrant des perspectives vers des potentielles applications pour le traitement de l'information. Par la suite, nous avons utilisé nos états chimères optiques dans le cadre des réseaux de neurones récurrents. Il s'agit d'un nouveau paradigme, qui se distingue par sa grande simplicité, sa rapidité ainsi que son efficacité incontournable dans le traitement de l'information. Cependant, les performances de cette technique d'apprentissage automatique dépendent notamment du design du réservoir. Nos résultats montrent que l'implémentation de nos états chimères optiques au lieu des réservoirs ‘classiques', peut fournir une alternative architecturale prometteuse permettant d'améliorer davantage la vitesse du traitement d'information
In Greek mythology, a chimera is a fantastic creature whose body parts belong to different animals. By analogy with this mythology, in physics and more particularly in the study of spatially extended discrete complex systems, chimera states correspond to the coexistence of two opposing spatio-temporal dynamical behaviors. The coexistence of two domains, one coherent and the other incoherent, in a chain of coupled nonlinear oscillators is the historical example, like the different parts of the body of a chimera. These spatio-temporal self-organizations have been widely studied theoretically and experimentally. However, few studies have explored the links between this type of dynamics and cellular automata. These automata, despite their simplicity, have remarkable dynamical properties and consequently represent one of the foundations of information theory. To address this problem, we considered chimera states obtained in a chain of identical coupled optical resonators. These structures were then subjected to quantitative and qualitative analysis using the same tools as those used to characterize cellular automata. This allowed us to reveal elementary-cellular-automaton-like dynamics hidden in the evolution of our chimera states. We were then able to deduce a set of properties in terms of computability, opening perspectives towards potential applications in information processing. Subsequently, we used our optical chimera states in the context of recurrent neural networks. This is a new paradigm, which stands out for its great simplicity, speed and remarkable efficiency in information processing. However, the performance of this machine learning technique depends in particular on the design of the reservoir. Our results show that implementing our optical chimera states in place of 'classic' reservoirs can provide a promising architectural alternative to further improve the speed of information processing
APA, Harvard, Vancouver, ISO, and other styles
49

Lawrie, Sofía. "Information representation and processing in neuronal networks: from biological to artificial systems and from first to second-order statistics." Doctoral thesis, Universitat Pompeu Fabra, 2022. http://hdl.handle.net/10803/673989.

Full text
Abstract:
Neuronal networks are today hypothesized to be the basis of the computing capabilities of biological nervous systems. In the same manner, artificial neuronal systems are intensively exploited for a diversity of industrial and scientific applications. However, how information is represented and processed by these networks remains under debate, meaning that it is not clear which sets of neuronal activity features are useful for computation. In this thesis, I present a set of results that link the first-order statistics of neuronal activity with behavior, in the general context of encoding/decoding, to analyse experimental data collected while non-human primates performed a working memory task. Subsequently, I go beyond the first order and show that the second-order statistics of neuronal activity in reservoir computing, a recurrent artificial network model, make up a robust candidate for information representation and transmission for the classification of multivariate inputs.
Las redes neuronales se presentan hoy, hipotéticamente, como las responsables de las capacidades computacionales de los sistemas nerviosos biológicos. De la misma manera, los sistemas neuronales artificiales son intensamente explotados en una diversidad de aplicaciones industriales y científicas. No obstante, cómo la información es representada y procesada por estas redes está aún sujeto a debate. Es decir, no está claro qué propiedades de la actividad neuronal son útiles para llevar a cabo computaciones. En esta tesis, presento un conjunto de resultados que relaciona el primer orden estadístico de la actividad neuronal con comportamiento, en el contexto general de codificación/decodificación, para analizar datos recolectados mientras primates no humanos realizaban una tarea de memoria de trabajo. Subsecuentemente, voy más allá del primer orden y muestro que las estadísticas de segundo orden en computación de reservorios, un modelo de red neuronal artificial y recurrente, constituyen un candidato robusto para la representación y transmisión de información con el fin de clasificar señales multidimensionales.
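The claim that second-order statistics of reservoir activity support robust classification can be illustrated with a small stand-in experiment: two input classes that share the same mean but differ in cross-correlation, separated by a nearest-class-mean classifier acting on the vectorized covariance of the reservoir states. All sizes, the input model and the classifier are illustrative assumptions, not the thesis's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, n_in, T = 60, 2, 100
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.8 / max(abs(np.linalg.eigvals(W)))   # fixed recurrent reservoir

def cov_features(u):
    """Drive the reservoir and return the vectorized covariance of its states."""
    x, states = np.zeros(n_res), []
    for u_t in u:
        x = np.tanh(W_in @ u_t + W @ x)
        states.append(x)
    C = np.cov(np.array(states).T)           # second-order statistics of activity
    return C[np.triu_indices(n_res)]

def sample(cls):
    """Two input classes with identical (zero) mean but opposite cross-correlation."""
    base = rng.standard_normal((T, 1))
    mix = 1.0 if cls else -1.0
    return np.hstack([base, mix * base]) + 0.3 * rng.standard_normal((T, n_in))

X = np.array([cov_features(sample(c)) for c in [0, 1] * 30])
y = np.array([0, 1] * 30)

# Nearest-class-mean classifier on covariance features (first 40 train, last 20 test).
mu0 = X[:40][y[:40] == 0].mean(0)
mu1 = X[:40][y[:40] == 1].mean(0)
pred = (np.linalg.norm(X[40:] - mu1, axis=1) < np.linalg.norm(X[40:] - mu0, axis=1)).astype(int)
acc = (pred == y[40:]).mean()
print(f"test accuracy: {acc:.2f}")
```

Because the two classes are indistinguishable by their means, any first-order readout fails here by construction; the covariance of the reservoir states is what carries the class information.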
APA, Harvard, Vancouver, ISO, and other styles
50

Bou-Fakhreddine, Bassam. "Modeling, Control and Optimization Of Cascade Hydroelectric-Irrigation Plants : Operation and Planning." Electronic Thesis or Diss., Paris, CNAM, 2018. http://www.theses.fr/2018CNAM1172.

Full text
Abstract:
Ce travail de recherche vise à optimiser la procédure opérationnelle des centrales hydroélectriques en cascade afin de les utiliser efficacement pour la production d’électricité et l’irrigation. Le défi consistait à trouver le modèle le plus réaliste basé sur la caractéristique stochastique des ressources en eau, sur la demande en énergie et sur le profil d'irrigation. Tous ces aspects sont affectés à court et à long terme par un large éventail de conditions différentes (hydrologique, météorologique et hydraulique). Au cours de ce projet, une étude bibliographique a été réalisée afin d'identifier les problèmes techniques qui empêchent l'utilisation efficace des centrales hydroélectriques dans les pays en développement. Le système est modélisé numériquement en tenant compte de toutes les variables et paramètres impliqués dans le fonctionnement optimal. L'approche la plus appropriée est choisie afin de maximiser l'utilisation efficace de l'eau et de minimiser les pertes économiques, où différents scénarios sont simulés afin de valider les solutions adoptées
This research work aims to optimize the operational procedure of cascade hydro plants so that they can be used efficiently for power generation and irrigation. The challenge was to find the most realistic model based on the stochastic nature of water resources, on the power demand and on the irrigation profile. All these aspects are affected in both the short and the long run by a wide range of different conditions (hydrological, meteorological and hydraulic). During this project a bibliographic study was carried out to identify the technical issues that prevent the efficient use of hydro plants in developing countries. The system is numerically modelled taking into consideration all the variables and parameters involved in optimal operation. The most appropriate approach is chosen in order to maximize the efficient use of water and to minimize economic losses, and different scenarios are simulated in order to validate the adopted solutions
APA, Harvard, Vancouver, ISO, and other styles
