Dissertations / Theses on the topic 'Reservoir computing'
Consult the top 50 dissertations / theses for your research on the topic 'Reservoir computing.'
Dale, Matthew. "Reservoir Computing in materio." Thesis, University of York, 2018. http://etheses.whiterose.ac.uk/22306/.
Kulkarni, Manjari S. "Memristor-based Reservoir Computing." PDXScholar, 2012. https://pdxscholar.library.pdx.edu/open_access_etds/899.
Tran, Dat Tien. "Memcapacitive Reservoir Computing Architectures." PDXScholar, 2019. https://pdxscholar.library.pdx.edu/open_access_etds/5001.
Melandri, Luca. "Introduction to Reservoir Computing Methods." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2015. http://amslaurea.unibo.it/8268/.
Weddell, Stephen John. "Optical Wavefront Prediction with Reservoir Computing." Thesis, University of Canterbury. Electrical and Computer Engineering, 2010. http://hdl.handle.net/10092/4070.
Sergio, Anderson Tenório. "Otimização de Reservoir Computing com PSO." Universidade Federal de Pernambuco, 2013. https://repositorio.ufpe.br/handle/123456789/11498.
Full textMade available in DSpace on 2015-03-09T14:34:23Z (GMT). No. of bitstreams: 2 Dissertaçao Anderson Sergio.pdf: 1358589 bytes, checksum: fdd2a84a1ce8a69596fa45676bc522e4 (MD5) license_rdf: 1232 bytes, checksum: 66e71c371cc565284e70f40736c94386 (MD5) Previous issue date: 2013-03-07
Reservoir Computing (RC) is an Artificial Neural Network paradigm with important real-world applications. RC uses an architecture similar to Recurrent Neural Networks for temporal processing, with the advantage of not having to train the weights of the intermediate layer. In general terms, the RC concept is based on building a recurrent network at random (the reservoir), whose weights are never altered. After this stage, a linear regression function is used to train the output of the system. The nonlinear dynamic transformation provided by the reservoir is sufficient for the output layer to extract the output signals using a simple linear mapping, which makes training considerably faster. However, like conventional neural networks, Reservoir Computing has some problems. Its use can be computationally expensive, several parameters influence its efficiency, and it is unlikely that random weight generation plus training of the output layer with a simple linear regression function is the ideal solution for generalizing the data. PSO is an optimization algorithm with some advantages over other global search techniques: it is simple to implement and, in some cases, converges faster at a lower computational cost. This dissertation investigated the use of PSO (and two of its extensions, EPUS-PSO and APSO) to optimize the global parameters, architecture and reservoir weights of an RC system, applied to the problem of time-series forecasting. The results showed that optimizing Reservoir Computing with PSO, as well as with the selected extensions, achieved satisfactory performance on all the datasets studied, namely benchmark time series and wind-energy datasets. The optimization outperformed several works in the literature, establishing itself as an important solution to the time-series forecasting problem.
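The training scheme summarized in this abstract (a fixed random recurrent reservoir whose readout is fitted by linear regression) is compact enough to sketch in code. The snippet below is an illustrative minimal echo state network in Python/NumPy, not the dissertation's implementation; the reservoir size, spectral radius, leak rate and ridge factor are exactly the kind of global parameters a PSO search would tune, and the values shown here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Global parameters of the kind a PSO search would optimize (illustrative values).
n_res, spectral_radius, leak, ridge = 200, 0.9, 0.3, 1e-6

# Fixed random input and reservoir weights: never trained in reservoir computing.
W_in = rng.uniform(-0.5, 0.5, n_res)
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= spectral_radius / max(abs(np.linalg.eigvals(W)))

def run_reservoir(u):
    """Collect reservoir states for a 1-D input sequence u."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = (1 - leak) * x + leak * np.tanh(W_in * u_t + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy one-step-ahead prediction task on a sine wave.
u = np.sin(0.2 * np.arange(2000))
X, y = run_reservoir(u[:-1]), u[1:]
X, y = X[100:], y[100:]                      # discard the initial transient

# Only the readout is trained, by ridge regression.
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
print("train NRMSE:", np.sqrt(np.mean((X @ W_out - y) ** 2)) / np.std(y))
```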
Appeltant, Lennert. "Reservoir computing based on delay-dynamical systems." Doctoral thesis, Universitat de les Illes Balears, 2012. http://hdl.handle.net/10803/84144.
Andersson, Casper. "Reservoir Computing Approach for Network Intrusion Detection." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-54983.
Fu, Kaiwei. "Reservoir Computing with Neuro-memristive Nanowire Networks." Thesis, The University of Sydney, 2021. https://hdl.handle.net/2123/25900.
Full textAlomar, Barceló Miquel Lleó. "Methodologies for hardware implementation of reservoir computing systems." Doctoral thesis, Universitat de les Illes Balears, 2017. http://hdl.handle.net/10803/565422.
Inspired by the way the brain processes information, artificial neural networks (ANNs) were created with the aim of reproducing human capabilities in tasks that are hard to solve using classical algorithmic programming. The ANN paradigm has been applied to numerous fields of science and engineering thanks to its ability to learn from examples, its adaptation, parallelism and fault tolerance. Reservoir computing (RC), based on the use of a random recurrent neural network (RNN) as the processing core, is a powerful model that is highly suited to time-series processing. Hardware realizations of ANNs are crucial to exploit the parallel properties of these models, which favor higher speed and reliability. On the other hand, hardware neural networks (HNNs) may offer appreciable advantages in terms of power consumption and cost. Low-cost compact devices implementing HNNs are useful to support or replace software in real-time applications, such as control, medical monitoring, robotics and sensor networks. However, the hardware realization of ANNs with large neuron counts, such as in RC, is a challenging task due to the large resource requirements of the involved operations. Despite the potential benefits of digital hardware circuits for RC-based neural processing, most implementations are realized in software using sequential processors. In this thesis, I propose and analyze several methodologies for the digital implementation of RC systems using limited hardware resources. The neural network design is described in detail for both a conventional implementation and the diverse alternative approaches. The advantages and shortcomings of the various techniques regarding accuracy, computation speed and required silicon area are discussed. Finally, the proposed approaches are applied to solve different real-life engineering problems.
Mohamed, Abdalla Mohab Sameh. "Reservoir computing in lithium niobate on insulator platforms." Electronic Thesis or Diss., Ecully, Ecole centrale de Lyon, 2024. http://www.theses.fr/2024ECDL0051.
This work concerns time-delay reservoir computing (TDRC) in integrated photonic platforms, specifically the Lithium Niobate on Insulator (LNOI) platform. We propose a novel all-optical integrated architecture, which has only one tunable parameter in the form of a phase-shifter, and which can achieve good performance on several reservoir computing benchmark tasks. We also investigate the design space of this architecture and its asynchronous operation, which represents a departure from the more common framework of envisioning time-delay reservoir computers as networks in the stricter sense. Additionally, we suggest leveraging the all-optical scheme to dispense with the input mask, which allows bypassing an O/E/O conversion that is often necessary to apply the mask in TDRC architectures. In future work, this can allow the processing of real-time incoming signals, possibly for telecom/edge applications. The effects of the output electronic readout on this architecture are also investigated. Furthermore, it is suggested to use the Pearson correlation as a simple way to design a reservoir which can handle multiple tasks at the same time, on the same incoming signal (and possibly on signals in different channels). Initial experimental work carried out at RMIT University is also reported. The unifying theme of this work is to investigate the performance possibilities with minimum photonic hardware requirements, relying mainly on LNOI's low losses, which enable the integration of the feedback waveguide, and using only interference and subsequent intensity conversion (through a photodetector) as the nonlinearity. This provides a baseline for future work to compare against in terms of performance gains when additional nonlinearities are considered (such as those of the LNOI platform), and when overall system complexity is increased by introducing more tunable parameters. Thus, the scope of this work is the exploration of one particular unconventional computing approach (reservoir computing), using one particular technology (photonics), on one particular platform (lithium niobate on insulator). This work builds on the growing interest in unconventional computing, since it has been shown over the years that digital computers can no longer be a 'one-size-fits-all', especially for emerging applications like artificial intelligence (AI). The future landscape of computing will likely encompass a rich variety of computing paradigms, architectures and hardware to meet the needs of rising specialized applications, all in coexistence with digital computers, which remain, at least for now, better suited for general-purpose computing.
Lukoševičius, Mantas [Verfasser]. "Reservoir Computing and Self-Organized Neural Hierarchies / Mantas Lukoševičius." Bremen : IRC-Library, Information Resource Center der Jacobs University Bremen, 2013. http://d-nb.info/1035433168/34.
Canaday, Daniel M. "Modeling and Control of Dynamical Systems with Reservoir Computing." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu157469471458874.
Grein, Ederson Augusto. "A parallel computing approach applied to petroleum reservoir simulation." Repositório Institucional da UFSC, 2015. https://repositorio.ufsc.br/xmlui/handle/123456789/160633.
Abstract: Numerical simulation is an extremely relevant tool for the oil and gas industry. It makes it feasible to predict the production scenario of a given reservoir and to design more advantageous exploitation strategies from its results. However, in order to obtain reliable numerical results, it is essential to employ reliable numerical models and an accurate geometrical characterization of the reservoir. This leads to a high computational load, and consequently obtaining the solution of the corresponding numerical model may require an exceedingly long simulation time. Reducing this time is therefore of great interest to reservoir engineering. Among the techniques for boosting performance, parallel computing is one of the most promising. In this technique, the computational load is split across a set of processors. In the ideal situation, the load is split evenly, so that if N is the number of processors the computational time is N times smaller. In this study, parallel computing was applied to two distinct numerical simulators: UTCHEM and EFVLib. UTCHEM is a compositional reservoir simulator developed at The University of Texas at Austin. EFVLib, in turn, is a computational library developed at SINMEC, a laboratory of the Mechanical Engineering Department of the Federal University of Santa Catarina, to support the use of the Element-based Finite Volume Method. In both cases the parallelization is based on domain decomposition.
Reinhart, Felix [Verfasser]. "Reservoir computing with output feedback / René Felix Reinhart. Technische Fakultät." Bielefeld : Universitätsbibliothek Bielefeld, Hochschulschriften, 2012. http://d-nb.info/1019275367/34.
Pauwels, Jaël. "High performance optical reservoir computing based on spatially extended systems." Doctoral thesis, Universite Libre de Bruxelles, 2021. https://dipot.ulb.ac.be/dspace/bitstream/2013/331699/3/thesis.pdf.
Martinenghi, Romain. "Démonstration opto-électronique du concept de calculateur neuromorphique par Reservoir Computing." Thesis, Besançon, 2013. http://www.theses.fr/2013BESA2052/document.
Reservoir Computing (RC) is a currently emerging brain-inspired computational paradigm, which appeared in the early 2000s. It is similar to conventional recurrent neural network (RNN) computing concepts, exhibiting essentially three parts: (i) an input layer to inject the information into the computing system; (ii) a central computational layer called the Reservoir; (iii) and an output layer which extracts the computed result through a so-called Read-Out procedure, the latter being determined after a learning and training step. The main originality compared to RNNs lies in this last part, which is the only one concerned by the training step, the input layer and the Reservoir being randomly determined and fixed. This specificity brings attractive features to RC compared to RNNs, in terms of simplification, efficiency, rapidity and feasibility of the learning, as well as in terms of dedicated hardware implementation of the RC scheme. This thesis is indeed concerned with one of the first hardware implementations of RC, moreover with an optoelectronic architecture. Our approach to physical RC implementation is based on the use of a special class of complex system for the Reservoir: a nonlinear delay dynamics involving multiple delayed feedback paths. The Reservoir thus appears as a spatio-temporal emulation of a purely temporal dynamics, the delay dynamics. Specific designs of the input and output layers are shown to be possible, e.g. through time-division multiplexing techniques, and amplitude modulation for the realization of an input mask to address the virtual nodes in the delay dynamics. Two optoelectronic setups are explored, one involving a wavelength nonlinear dynamics with a tunable laser, and another one involving an intensity nonlinear dynamics with an integrated-optics Mach-Zehnder modulator. Experimental validation of the computational efficiency is performed through two standard benchmark tasks: the NARMA10 test (prediction task) and a spoken digit recognition test (classification task), the latter showing results very close to state-of-the-art performance, even compared with purely numerical simulation approaches.
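The idea of emulating a spatial network with a single delayed nonlinear node, a random input mask and time-multiplexed virtual nodes can be illustrated with a schematic discrete-time simulation. The sketch below assumes a sin² nonlinearity as a stand-in for the intensity response of a Mach-Zehnder modulator and treats each virtual node as seeing its own value one delay period earlier; the gains, phase offset and node count are illustrative, and this is not the thesis's optoelectronic setup.

```python
import numpy as np

rng = np.random.default_rng(1)

n_virtual = 50                       # virtual nodes per delay period
beta, gamma, phi = 0.9, 0.5, 0.2     # feedback gain, input scaling, offset phase (assumed)

mask = rng.choice([-1.0, 1.0], n_virtual)   # random input mask, fixed once

def delay_reservoir(u):
    """One delay period per input sample; each mask step addresses one virtual node."""
    x = np.zeros(n_virtual)
    states = []
    for u_t in u:
        new_x = np.empty(n_virtual)
        for i in range(n_virtual):
            # Each virtual node is driven by its own delayed value and the masked input,
            # passed through a sin^2 nonlinearity (stand-in for a Mach-Zehnder response).
            new_x[i] = np.sin(beta * x[i] + gamma * mask[i] * u_t + phi) ** 2
        x = new_x
        states.append(x.copy())
    return np.array(states)

u = np.sin(0.3 * np.arange(500))
print(delay_reservoir(u).shape)      # (500, 50): time steps x virtual nodes
```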
Vinckier, Quentin. "Analog bio-inspired photonic processors based on the reservoir computing paradigm." Doctoral thesis, Universite Libre de Bruxelles, 2016. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/237069.
Alalshekmubarak, Abdulrahman. "Towards a robust Arabic speech recognition system based on reservoir computing." Thesis, University of Stirling, 2014. http://hdl.handle.net/1893/21733.
Bai, Kang Jun. "Moving Toward Intelligence: A Hybrid Neural Computing Architecture for Machine Intelligence Applications." Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/103711.
Deep learning strategies are the cutting edge of artificial intelligence, in which artificial neural networks are trained to extract key features or find similarities in raw sensory information. This is made possible through multiple processing layers with a colossal number of neurons, in a similar way to humans. Deep learning strategies running on von Neumann computers are deployed worldwide. However, in today's data-driven society, the use of general-purpose computing systems and cloud infrastructures can no longer offer a timely response while also exposing significant security issues. With the introduction of neuromorphic architectures, application-specific integrated circuit chips have paved the way for machine intelligence applications in recent years. The major contributions of this dissertation include designing and fabricating a new class of hybrid neural computing architecture and applying various deep learning strategies to diverse machine intelligence applications. The resulting hybrid neural computing architecture offers an alternative solution to accelerate the neural computations required for sophisticated machine intelligence applications with a simple system-level design, thereby opening the door to low-power system-on-chip design for future intelligent computing and providing prominent design solutions and performance improvements for internet-of-things applications.
Dai, Jing. "Reservoir-computing-based, biologically inspired artificial neural networks and their applications in power systems." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/47646.
Enel, Pierre. "Représentation dynamique dans le cortex préfrontal : comparaison entre reservoir computing et neurophysiologie du primate." PhD thesis, Université Claude Bernard - Lyon I, 2014. http://tel.archives-ouvertes.fr/tel-01056696.
Denis-Le Coarer, Florian. "Neuromorphic computing using nonlinear ring resonators on a Silicon photonic chip." Electronic Thesis or Diss., CentraleSupélec, 2020. http://www.theses.fr/2020CSUP0001.
With the exponential volumes of digital data generated every day, there is a need for real-time, energy-efficient data processing. These challenges have motivated research on unconventional information processing. Among the existing techniques, machine learning is a very effective paradigm of cognitive computing. It provides, through many implementations including that of artificial neural networks, a set of techniques to teach a computer or a physical system to perform complex tasks, such as classification, pattern recognition or signal generation. Reservoir computing was proposed about ten years ago to simplify the procedure for training the artificial neural network: the network is kept fixed and only the connections to the read-out layer are trained by a simple linear regression. The internal architecture of a reservoir computer lends itself to physical implementations, and several have been proposed on different technological platforms, including photonic devices. On-chip reservoir computing is a very promising candidate to meet these challenges. The objective of this thesis work was to propose three different integrated reservoir architectures based on the use of resonant micro-rings. We studied their performance numerically and demonstrated data processing speeds of up to several tens of gigabits per second with a power consumption of a few milliwatts.
Masominia, Amir Hossein. "Neuro-inspired computing with excitable microlasers." Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASP053.
This thesis presents research on alternative computing systems, with a focus on analog and neuromimetic computing. The pursuit of more general artificial intelligence has underscored limitations in conventional computing units based on von Neumann architectures, particularly regarding energy efficiency and complexity. Brain-inspired computing architectures and analog computers are key contenders in this field. Among the various proposed methods, photonic spiking systems offer significant advantages in processing and communication speeds, as well as potential energy efficiency. We propose a novel approach to classification and image recognition tasks using an in-house developed micropillar laser as the artificial neuron. The nonlinearity of the spiking micropillar laser, resulting from the internal dynamics of the system, allows incoming information, optically injected into the micropillar through the gain, to be mapped into higher dimensions. This makes it possible to find linearly separable regions for classification. The micropillar laser exhibits all the fundamental properties of a biological neuron, including excitability, refractory period and summation effect, with sub-nanosecond characteristic timescales. This makes it a strong candidate for spiking systems in which the dynamics of the spike itself carries information, as opposed to systems that consider spiking rates only. We designed and studied several systems using the micropillar laser, based on a reservoir computer with a single physical node that emulates a reservoir computer with several nodes, using different dynamical regimes of the microlaser. These systems achieved higher classification accuracy than systems without the micropillar. Additionally, we introduce a novel system inspired by receptive fields in the visual cortex, capable of classifying a digit dataset entirely online, eliminating the need for a conventional computer in the process. This system was successfully implemented experimentally using a combined fiber and free-space optical setup, opening promising prospects for ultra-fast, hardware-based feature selection and classification systems.
Ferreira, Aida Araújo. "Um Método para Design e Treinamento de Reservoir Computing Aplicado à Previsão de Séries Temporais." Universidade Federal de Pernambuco, 2011. https://repositorio.ufpe.br/handle/123456789/1786.
Reservoir Computing is a type of recurrent neural network that allows black-box modeling of (nonlinear) dynamical systems. In contrast with other recurrent neural network approaches, Reservoir Computing requires no training of the input-layer weights nor of the internal network (reservoir) weights; only the output-layer (readout) weights are trained. However, the network parameters and topology must be adjusted to create an optimal reservoir suited to a given application. In this work, a method called RCDESIGN was created to find the best reservoir for the task of time-series forecasting. The method combines an evolutionary algorithm with Reservoir Computing and searches simultaneously for the best parameter values, network topology and weights, without rescaling the reservoir weight matrix by the spectral radius. The idea of adjusting the spectral radius to lie within the unit circle of the complex plane comes from linear systems theory, which clearly shows that stability is necessary to obtain useful responses in linear systems. However, this argument does not necessarily apply to nonlinear systems, which is the case of Reservoir Computing. The method also considers Reservoir Computing in all its nonlinearity, since it allows the use of all possible connections instead of only the mandatory ones. The results obtained with the proposed method are compared against two different methods. The first, called RS Search in this work, uses a genetic algorithm to optimize the main Reservoir Computing parameters: reservoir size, spectral radius and connection density. The second, called TR Search in this work, uses a genetic algorithm to optimize the Reservoir Computing topology and weights based on the spectral radius. Seven classical series were used for the academic validation of the proposed method on the time-series forecasting task. A case study was developed to verify the suitability of the proposed method for forecasting hourly wind speed in the northeast region of Brazil. Wind generation is one of the renewable energy sources with the lowest production cost and the largest amount of available resources; therefore, efficient models for forecasting wind speed and wind generation can reduce the operational difficulties of an electrical system composed of traditional energy sources together with wind power.
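To make the role of the global parameters concrete, here is a deliberately simplified random search over reservoir size, spectral radius and connection density, scored by one-step-ahead prediction error. It is only a sketch of the kind of parameter search the dissertation compares against (not RCDESIGN itself, which evolves topology and weights without spectral-radius rescaling); the search ranges, the toy signal and the candidate count are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def evaluate(n_res, rho, density, u, washout=100, ridge=1e-6):
    """Draw a fresh random reservoir, train a linear readout on one-step prediction,
    and return the validation NRMSE."""
    W_in = rng.uniform(-0.5, 0.5, n_res)
    W = rng.uniform(-0.5, 0.5, (n_res, n_res)) * (rng.random((n_res, n_res)) < density)
    radius = max(abs(np.linalg.eigvals(W)))
    if radius > 0:
        W *= rho / radius
    x, states = np.zeros(n_res), []
    for u_t in u[:-1]:
        x = np.tanh(W_in * u_t + W @ x)
        states.append(x.copy())
    X, y = np.array(states)[washout:], u[1 + washout:]
    split = len(y) * 2 // 3
    W_out = np.linalg.solve(X[:split].T @ X[:split] + ridge * np.eye(n_res),
                            X[:split].T @ y[:split])
    err = X[split:] @ W_out - y[split:]
    return np.sqrt(np.mean(err ** 2)) / np.std(y[split:])

u = np.sin(0.2 * np.arange(1500)) + 0.5 * np.sin(0.311 * np.arange(1500))

candidates = [(rng.integers(50, 300), rng.uniform(0.1, 1.4), rng.uniform(0.05, 1.0))
              for _ in range(20)]
best = min(candidates, key=lambda p: evaluate(*p, u))
print("best (reservoir size, spectral radius, connection density):", best)
```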
Penkovsky, Bogdan. "Theory and modeling of complex nonlinear delay dynamics applied to neuromorphic computing." Thesis, Bourgogne Franche-Comté, 2017. http://www.theses.fr/2017UBFCD059/document.
The thesis develops a novel approach to the design of a reservoir computer, one of the challenges of modern science and technology. It consists of two parts, both connected by the correspondence between optoelectronic delayed-feedback systems and spatio-temporal nonlinear dynamics. In the first part (Chapters 1 and 2), this correspondence is used in a fundamental perspective, studying self-organized patterns known as chimera states, discovered for the first time in purely temporal systems. The study of chimera states may shed light on mechanisms occurring in many structurally similar high-dimensional systems such as neural systems or power grids. In the second part (Chapters 3 and 4), the same spatio-temporal analogy is exploited from an applied perspective, designing and implementing a brain-inspired information processing device: a real-time digital reservoir computer is constructed in FPGA hardware. The implementation utilizes delay dynamics and realizes input as well as output layers for an autonomous cognitive computing system.
Mejaouri, Salim. "Conception et fabrication de micro-résonateurs pour la réalisation d'une puce neuromorphique." Mémoire, Université de Sherbrooke, 2018. http://hdl.handle.net/11143/11930.
Butcher, John B. "Reservoir Computing with high non-linear separation and long-term memory for time-series data analysis." Thesis, Keele University, 2012. http://eprints.keele.ac.uk/1185/.
Almassian, Amin. "Information Representation and Computation of Spike Trains in Reservoir Computing Systems with Spiking Neurons and Analog Neurons." PDXScholar, 2016. http://pdxscholar.library.pdx.edu/open_access_etds/2724.
Cazin, Nicolas. "A replay driven model of spatial sequence learning in the hippocampus-prefrontal cortex network using reservoir computing." Thesis, Lyon, 2018. http://www.theses.fr/2018LYSE1133/document.
As rats learn to search for multiple sources of food or water in a complex environment, processes of spatial sequence learning and recall in the hippocampus (HC) and prefrontal cortex (PFC) take place. Recent studies (De Jong et al. 2011; Carr, Jadhav, and Frank 2011) show that spatial navigation in the rat hippocampus involves the replay of place-cell firing during awake and sleep states, generating small contiguous subsequences of spatially related place-cell activations that we will call "snippets". These "snippets" occur primarily during sharp-wave-ripple (SPWR) events. Much attention has been paid to replay during sleep in the context of long-term memory consolidation. Here we focus on the role of replay during the awake state, as the animal is learning across multiple trials. We hypothesize that these "snippets" can be used by the PFC to achieve multi-goal spatial sequence learning. We propose to develop an integrated model of HC and PFC that is able to form place-cell activation sequences based on snippet replay. The proposed collaborative research will extend an existing spatial cognition model for simpler goal-oriented tasks (Barrera and Weitzenfeld 2008; Barrera et al. 2015) with a new replay-driven model for memory formation in the hippocampus and spatial sequence learning and recall in the PFC. In contrast to existing work on sequence learning that relies heavily on sophisticated learning algorithms and synaptic modification rules, we propose to use an alternative computational framework known as reservoir computing (Dominey 1995), in which large pools of prewired neural elements process information dynamically through reverberations. This reservoir computational model will consolidate snippets into larger place-cell activation sequences that may later be recalled by subsets of the original sequences. The proposed work is expected to generate a new understanding of the role of replay in memory acquisition in complex tasks such as sequence learning. That operational understanding will be leveraged and tested on an embodied-cognitive real-time robot framework, related to the animat paradigm (Wilson 1991) [etc...]
Nowshin, Fabiha. "Spiking Neural Network with Memristive Based Computing-In-Memory Circuits and Architecture." Thesis, Virginia Tech, 2021. http://hdl.handle.net/10919/103854.
In recent years, neuromorphic computing systems have achieved a lot of success due to their ability to process data much faster and with much less power than traditional von Neumann computing architectures. Artificial Neural Networks (ANNs) are models that mimic biological neurons, where artificial neurons or neurodes are connected together via synapses, similar to the nervous system in the human body. There are two main types of ANNs: the Feedforward Neural Network (FNN) and the Recurrent Neural Network (RNN). In this thesis we first study the types of RNNs and then move on to Spiking Neural Networks (SNNs). SNNs are an improved version of ANNs that mimic biological neurons closely through the emission of spikes. This shows significant advantages in terms of power and energy when carrying out data-intensive applications by allowing spatio-temporal information processing. On the other hand, emerging non-volatile memory (eNVM) technology is key to emulating neurons and synapses for in-memory computation in neuromorphic hardware. One particular eNVM technology, the memristor, has received wide attention due to its scalability, compatibility with CMOS technology and low power consumption. In this work we develop a spiking neural network by incorporating an inter-spike-interval encoding scheme to convert the incoming input signal into spikes, and use a memristive crossbar to carry out in-memory computing operations. We demonstrate the accuracy of our design through a small-scale hardware simulation for digit recognition, achieving an accuracy of 87% in software through MNIST simulations.
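As an illustration of the two ingredients named in this abstract, the sketch below shows one plausible inter-spike-interval encoding (larger amplitudes produce shorter intervals) and the vector-matrix product that a memristive crossbar computes in a single step. The mapping range, conductance values and sizes are assumptions, not the thesis's circuit.

```python
import numpy as np

def isi_encode(signal, t_min=1.0, t_max=10.0):
    """Map each normalized sample to the gap before its spike: larger amplitudes
    give shorter inter-spike intervals (one plausible ISI scheme, units arbitrary)."""
    lo, hi = signal.min(), signal.max()
    s = (signal - lo) / (hi - lo + 1e-12)          # normalize to [0, 1]
    intervals = t_max - s * (t_max - t_min)        # high value -> short interval
    return np.cumsum(intervals)                    # absolute spike times

signal = np.abs(np.sin(0.25 * np.arange(16)))
print(np.round(isi_encode(signal), 2))

# A memristive crossbar performs a vector-matrix product in one step: the current
# collected on each output line is a conductance-weighted sum of the input voltages.
G = np.random.default_rng(3).uniform(0.0, 1.0, (4, 16))   # conductances (arbitrary units)
print(G @ signal)                                          # 4 output currents
```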
Marquez, Alfonzo Bicky. "Reservoir computing photonique et méthodes non-linéaires de représentation de signaux complexes : Application à la prédiction de séries temporelles." Thesis, Bourgogne Franche-Comté, 2018. http://www.theses.fr/2018UBFCD042/document.
Artificial neural networks are systems prominently used in computation and in investigations of biological neural systems. They provide state-of-the-art performance in challenging problems like the prediction of chaotic signals. Yet the understanding of how neural networks actually solve problems like prediction remains vague; the black-box analogy is often employed. Merging nonlinear dynamical systems theory with machine learning, we develop a new concept which describes neural networks and prediction within the same framework. Taking advantage of the insight obtained, we design a priori a hybrid computer which extends a neural network with an external memory. Furthermore, we identify mechanisms based on spatio-temporal synchronization with which random recurrent neural networks operated beyond their fixed point can reduce the negative impact of regular spontaneous dynamics on their computational performance. Finally, we build a recurrent delay network in an electro-optical setup inspired by the Ikeda system, which is first investigated in a nonlinear dynamics framework. We then implement a neuromorphic processor dedicated to a prediction task.
Oliverio, Lucas. "Nonlinear dynamics from a laser diode with both optical injection and optical feedback for telecommunication applications." Electronic Thesis or Diss., CentraleSupélec, 2024. http://www.theses.fr/2024CSUP0002.
The current processing of information in large computing clusters is responsible for a significant energy impact at the global level. The current paradigm needs to be rethought, and a computing architecture based on photonic components (semiconductor lasers in particular) is studied in this thesis. The considered structure is a network of artificial neurons for processing telecommunications data. A laser diode is used to study the relationship between the dynamics under optical injection and optical feedback and the neuro-inspired computing capacity, through simulations and experimental work.
Chaix-Eichel, Naomi. "Etude du rôle de l’architecture des réseaux neuronaux dans la prise de décision à l’aide de modèles de reservoir computing." Electronic Thesis or Diss., Bordeaux, 2024. http://www.theses.fr/2024BORD0279.
A striking similarity exists in the organization and structure of certain brain regions across diverse species. For instance, the brain structure of vertebrates, from fish to mammals, includes regions like the cortex, hippocampus, cerebellum and basal ganglia with remarkable similarity. The presence of these structures across a wide range of species strongly suggests that they emerged early in vertebrate evolution and have been conserved throughout evolution. The persistence of these structures raises intriguing questions about their evolutionary origins: are they unique and optimal solutions for processing information and controlling behavior, or could alternative brain architectures emerge to achieve similar functional properties? To investigate this question, this thesis explores the relationship between brain architecture and cognitive function, with a focus on decision-making processes. We propose to use variants of a recurrent neural network model (echo state network) that is structurally minimal and randomly connected. We aim to identify whether a minimal model can capture any decision-making process and, if it cannot, we explore whether multiple realizable solutions emerge through structural variations. First, we demonstrate that a minimal model is able to solve simple decision tasks in the context of spatial navigation. Second, we show that this minimal structure has performance limitations when handling more complex tasks, requiring additional structural constraints to achieve better results. Third, by employing a genetic algorithm to evolve the network structure towards more complex ones, we discover that multiple realizable solutions emerge through structural variations. Furthermore, we reveal that identical architectures can exhibit a range of different behaviors, leading us to investigate additional factors contributing to these differences beyond structural variations. Our analysis of the behavior of 24 monkeys living in a community reveals that social factors, such as social hierarchy, play a significant role in their behavior. This thesis takes an approach that differs from traditional neuroscience methodologies. Rather than directly constructing biologically inspired architectures, the models are designed from simple to complex structures, reproducing the process of biological evolution. By leveraging the principles of multiple realizability, this approach enables the evolution of diverse structural configurations that can achieve equivalent functional outcomes.
Röhm, André [Verfasser], Kathy [Akademischer Betreuer] Lüdge, Kathy [Gutachter] Lüdge, and Ingo [Gutachter] Fischer. "Symmetry-Breaking bifurcations and reservoir computing in regular oscillator networks / André Röhm ; Gutachter: Kathy Lüdge, Ingo Fischer ; Betreuer: Kathy Lüdge." Berlin : Technische Universität Berlin, 2019. http://d-nb.info/1183789491/34.
Vincent-Lamarre, Philippe. "Learning Long Temporal Sequences in Spiking Networks by Multiplexing Neural Oscillations." Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/39960.
Strock, Anthony. "Mémoire de travail dans les réseaux de neurones récurrents aléatoires." Thesis, Bordeaux, 2020. http://www.theses.fr/2020BORD0195.
Working memory can be defined as the ability to temporarily store and manipulate information of any kind. For example, imagine that you are asked to mentally add a series of numbers. In order to accomplish this task, you need to keep track of the partial sum that must be updated every time a new number is given. Working memory is precisely what makes it possible to maintain (i.e. temporarily store) the partial sum and to update it (i.e. manipulate it). In this thesis, we propose to explore neuronal implementations of this working memory using a limited number of hypotheses. To do this, we place ourselves in the general context of recurrent neural networks and we propose in particular to use the reservoir computing paradigm. This type of very simple model nevertheless makes it possible to produce dynamics that learning can take advantage of to solve a given task. In this work, the task to be performed is a gated working memory task. The model receives as input a signal which controls the update of the memory. When the gate is closed, the model should maintain its current memory state, while when it is open, it should update it based on an input. In our approach, this additional input is present at all times, even when there is no update to do. In other words, we require our model to be an open system, i.e. a system which is always disturbed by its inputs but which must nevertheless learn to keep a stable memory. In the first part of this work, we present the architecture of the model and its properties, then we show its robustness through a parameter sensitivity study. This shows that the model is extremely robust for a wide range of parameters: more or less any random population of neurons can be used to perform gating. Furthermore, after learning, we highlight an interesting property of the model, namely that information can be maintained in a fully distributed manner, i.e. without being correlated to any single neuron but only to the dynamics of the group. More precisely, the working memory is not correlated with sustained neural activity, which has nevertheless long been observed in the literature and has recently been questioned experimentally. This model confirms these results at the theoretical level. In the second part of this work, we show how these models obtained by learning can be extended in order to manipulate the information held in the latent space. We therefore propose to consider conceptors, which can be conceptualized as a set of synaptic weights that constrain the dynamics of the reservoir and direct it towards particular subspaces, for example subspaces corresponding to the maintenance of a particular value. More generally, we show that these conceptors can not only maintain information, they can also maintain functions. In the case of the mental arithmetic mentioned previously, these conceptors make it possible to remember and apply the operation to be carried out on the various inputs given to the system. These conceptors therefore make it possible to instantiate a procedural working memory in addition to the declarative working memory. We conclude this work by putting this theoretical model into perspective with respect to biology and neuroscience.
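The gated working-memory task described here, and the conceptor idea from the second part, can both be written down compactly. The generator and the formula below are illustrative sketches under assumed sizes and an assumed gate probability; the expression C = R (R + a^-2 I)^-1 follows Jaeger's conceptor framework, and the state matrix used here is a random placeholder rather than states actually collected from a trained reservoir.

```python
import numpy as np

def gated_memory_task(n_steps=1000, p_gate=0.1, seed=0):
    """Inputs: a value stream plus a gate; target: the value seen at the last open gate."""
    rng = np.random.default_rng(seed)
    values = rng.uniform(-1.0, 1.0, n_steps)        # always-present, "disturbing" input
    gate = (rng.random(n_steps) < p_gate).astype(float)
    target = np.empty(n_steps)
    held = 0.0
    for t in range(n_steps):
        if gate[t] == 1.0:                          # gate open: update the memory
            held = values[t]
        target[t] = held                            # gate closed: maintain it
    return np.stack([values, gate], axis=1), target

X, y = gated_memory_task()
print(X.shape, y[:5])

# Conceptor sketch: from states S (time x neurons) collected while maintaining one value,
# C = R @ inv(R + a**-2 * I) with R = S.T @ S / T softly confines the dynamics to the
# corresponding subspace. S here is a random placeholder for collected reservoir states.
S = np.random.default_rng(1).standard_normal((500, 50))
R = S.T @ S / len(S)
C = R @ np.linalg.inv(R + (1 / 4.0 ** 2) * np.eye(50))     # aperture a = 4
print(C.shape)
```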
Antonik, Piotr. "Application of FPGA to real-time machine learning: hardware reservoir computers and software image processing." Doctoral thesis, Universite Libre de Bruxelles, 2017. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/257660.
Baylon Fuentes, Antonio. "Ring topology of an optical phase delayed nonlinear dynamics for neuromorphic photonic computing." Thesis, Besançon, 2016. http://www.theses.fr/2016BESA2047/document.
Nowadays most computers are still based on concepts developed more than 60 years ago by Alan Turing and John von Neumann. However, these digital computers have already begun to reach certain physical limits of their implementation via silicon microelectronics technology (dissipation, speed, integration limits, energy consumption). Alternative approaches, more powerful, more efficient and less energy-hungry, have constituted a major scientific issue for several years. Many of these approaches naturally seek inspiration from the human brain, whose operating principles are still far from being understood. In this line of research, a surprising variation of the recurrent neural network (RNN), simpler and sometimes even more efficient for certain processing tasks, appeared in the early 2000s and is now known as Reservoir Computing (RC), a currently emerging brain-inspired computational paradigm. Its structure is quite similar to classical RNN computing concepts, generally exhibiting three parts: an input layer to inject the information into a nonlinear dynamical system (Write-In), a second layer where the input information is projected into a high-dimensional space called the dynamical reservoir, and an output layer from which the processed information is extracted through a so-called Read-Out function. In the RC approach the learning procedure is performed in the output layer only, while the input and reservoir layers are randomly fixed, which is the main originality of RC compared to RNN methods. This feature brings greater efficiency, rapidity and learning convergence, and makes experimental implementation feasible. This PhD thesis is dedicated to one of the first photonic RC implementations using telecommunication devices. Our experimental implementation is based on a nonlinear delayed dynamical system, which relies on an electro-optic (EO) oscillator with differential phase modulation. This EO oscillator was extensively studied in the context of optical chaos cryptography. The dynamics exhibited by such systems are indeed known to develop complex behaviors in an infinite-dimensional phase space, and analogies with space-time dynamics (of which neural networks are a kind) are also found in the literature. Such peculiarities of delay systems supported the idea of replacing the traditional RNN (usually difficult to design technologically) by a nonlinear EO delay architecture. In order to evaluate the computational power of our RC approach, we implemented two spoken digit recognition tests (classification tests) taken from standard databases in artificial intelligence, TI-46 and AURORA-2, obtaining results very close to state-of-the-art performance and establishing a new state of the art in classification speed. Our photonic RC approach allowed us to process around 1 million words per second, improving the information processing speed by a factor of ~3.
Mwamsojo, Nickson. "Neuromorphic photonic systems for information processing." Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAS002.
Artificial Intelligence has revolutionized the scientific community thanks to the advent of powerful computational resources and Artificial Neural Networks. However, current implementation trends introduce a rapidly growing demand for computational power surpassing the rates and limitations of Moore's and Koomey's Laws, which implies an eventual efficiency barrier. To respond to these demands, bio-inspired techniques, known as 'neuromorphic' systems, are proposed using physical devices. Of these systems, we focus on 'Reservoir Computing' and 'Coherent Ising Machines' in our work. Reservoir Computing, for instance, has demonstrated its computational power, for example with state-of-the-art performance of up to 1 million words per second using photonic hardware in 2017. We propose an automatic hyperparameter tuning algorithm for Reservoir Computing and give a theoretical study of its convergence. Moreover, we propose Reservoir Computing for early-stage Alzheimer's disease detection with a thorough assessment of the energy-cost-versus-performance compromise. Finally, we address the noisy image restoration problem by maximum a posteriori estimation using an optoelectronic implementation of a Coherent Ising Machine.
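For the last point, a conventional software analogue helps fix ideas: maximum-a-posteriori restoration of a binary image under an Ising prior amounts to minimizing an Ising energy, which is the kind of problem a Coherent Ising Machine is built to solve physically. The greedy single-pixel sketch below (iterated conditional modes) is only that analogue, with assumed coupling and field strengths, not the optoelectronic implementation.

```python
import numpy as np

rng = np.random.default_rng(4)

# Binary test image with +/-1 pixels and 10% of pixels flipped by noise.
clean = np.ones((32, 32))
clean[8:24, 8:24] = -1.0
noisy = clean * np.where(rng.random(clean.shape) < 0.1, -1.0, 1.0)

beta, h = 1.0, 1.5            # smoothness coupling and data-fidelity field (assumed)
s = noisy.copy()
for _ in range(10):           # a few sweeps of greedy energy minimization
    for i in range(32):
        for j in range(32):
            nb = (s[(i - 1) % 32, j] + s[(i + 1) % 32, j]
                  + s[i, (j - 1) % 32] + s[i, (j + 1) % 32])
            # Choose the sign of s[i, j] that minimizes its local Ising energy
            # E_i = -beta * s_i * sum(neighbors) - h * s_i * noisy_i
            s[i, j] = 1.0 if beta * nb + h * noisy[i, j] >= 0 else -1.0

print("pixels matching the clean image:", int((s == clean).sum()), "/", clean.size)
```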
Bazzanella, Davide. "Microring Based Neuromorphic Photonics." Doctoral thesis, Università degli studi di Trento, 2022. http://hdl.handle.net/11572/344624.
Pinto, Rafael Coimbra. "Online incremental one-shot learning of temporal sequences." Biblioteca Digital de Teses e Dissertações da UFRGS, 2011. http://hdl.handle.net/10183/49063.
Full textThis work introduces novel neural networks algorithms for online spatio-temporal pattern processing by extending the Incremental Gaussian Mixture Network (IGMN). The IGMN algorithm is an online incremental neural network that learns from a single scan through data by means of an incremental version of the Expectation-Maximization (EM) algorithm combined with locally weighted regression (LWR). Four different approaches are used to give temporal processing capabilities to the IGMN algorithm: time-delay lines (Time-Delay IGMN), a reservoir layer (Echo-State IGMN), exponential moving average of reconstructed input vector (Merge IGMN) and self-referencing (Recursive IGMN). This results in algorithms that are online, incremental, aggressive and have temporal capabilities, and therefore are suitable for tasks with memory or unknown internal states, characterized by continuous non-stopping data-flows, and that require life-long learning while operating and giving predictions without separated stages. The proposed algorithms are compared to other spatio-temporal neural networks in 8 time-series prediction tasks. Two of them show satisfactory performances, generally improving upon existing approaches. A general enhancement for the IGMN algorithm is also described, eliminating one of the algorithm’s manually tunable parameters and giving better results.
Passey, David Joseph, Jr. "Growing Complex Networks for Better Learning of Chaotic Dynamical Systems." BYU ScholarsArchive, 2020. https://scholarsarchive.byu.edu/etd/8146.
Vatin, Jeremy. "Photonique neuro-inspirée pour des applications télécoms." Electronic Thesis or Diss., CentraleSupélec, 2020. http://www.theses.fr/2020CSUP0004.
Every day we produce thousands of gigabits of data, exchanged over the internet. These data are processed in computing clusters, which are responsible for a large share of the energy consumed by the internet network. In this work, we study an architecture made of photonic components in order to move away from power-hungry electronic components. Using components that are already deployed in the internet network (a laser and an optical fiber), we aim to build an artificial neural network able to process telecommunication data. The artificial neural network is made of a laser and an optical fiber that sends light back into the laser. The complex behavior of this system is used to feed the artificial neurons distributed along the fiber. We show that this system is able either to process one signal with high efficiency, or two signals at the expense of a small loss of accuracy.
Ismail, Ali Rida. "Commensurable and Chaotic Nano-Contact Vortex Oscillator (NCVO) study for information processing." Electronic Thesis or Diss., Université de Lorraine, 2022. http://www.theses.fr/2022LORR0003.
The amount of data used in information technology is increasing dramatically, along with the proliferation of highly advanced electronic technologies. The thermal issues arising from such large-scale data processing call for novel technologies and paradigms in place of CMOS circuits. Spintronic devices are one of many alternatives proposed so far in the literature. In this work, we consider a spintronic device called the nano-contact vortex oscillator (NCVO), which has recently begun to gain attention due to its rich and variable dynamics. This oscillator is driven by a DC bias current and subjected to a magnetic field, which determine its output dynamics. The practical use of the NCVO requires an accurate model that imitates its output magnetization and the trajectory of the vortex rotating around the center of the upper layer of the device; these two variables are needed to calculate the equivalent resistance of the NCVO. To that end, in this PhD work we build a model of the NCVO producing these two variables using a reservoir computing approach called a conceptor-driven network. The network is trained on NCVO data obtained by micromagnetic simulation. The resulting model successfully captures the NCVO dynamics in its different regimes (chaotic, periodic and quasi-periodic) with an easy shift between regimes. The same network is then used for the detection of chaos in the input time series. The proposed chaos-detection method has proven to be efficient and more robust than existing methods. Finally, the NCVO model is exploited for true random number generation (TRNG), where a hardware design fed by a chaotic signal generated by the model is proposed. This design has shown the ability to compete with existing RNG techniques in terms of speed, cost and quality.
Baldini, Paolo. "Online adaptation of robots controlled by nanowire networks." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amslaurea.unibo.it/25396/.
Bou-Fakhreddine, Bassam. "Modeling, Control and Optimization Of Cascade Hydroelectric-Irrigation Plants : Operation and Planning." Thesis, Paris, CNAM, 2018. http://www.theses.fr/2018CNAM1172.
This research work aims to optimize the operational procedure of cascade hydro plants so that they can be used efficiently for power generation and irrigation. The challenge was to find the most realistic model based on the stochastic features of water resources, on the power demand and on the irrigation profile. All these aspects are affected in the short and in the long run by a wide range of different conditions (hydrological, meteorological and hydraulic). During this project a bibliographic study was carried out in order to identify the technical issues that prevent the efficient use of hydro plants in developing countries. The system is numerically modelled taking into consideration all the variables and parameters involved in optimal operation. The most appropriate approach is chosen in order to maximize the efficient use of water and to minimize economic losses, and different scenarios are simulated in order to validate the adopted suggestions.
Ayyad, Marouane. "Le traitement d'information avec des états chimères optiques." Electronic Thesis or Diss., Université de Lille (2022-....), 2023. http://www.theses.fr/2023ULILR058.
In Greek mythology, a chimera is a fantastic creature whose body parts belong to different animals. By analogy with this mythology, in physics, and more particularly in the study of spatially extended discrete complex systems, chimera states correspond to the coexistence of two opposing spatio-temporal dynamical behaviors. The coexistence of a coherent domain and an incoherent one in a chain of coupled nonlinear oscillators is the historical example, like the different parts of the body of a chimera. These spatio-temporal self-organizations have been widely studied theoretically and experimentally. However, few studies have explored the links between this type of dynamics and cellular automata. These automata, despite their simplicity, have remarkable dynamical properties and, consequently, represent one of the foundations of information theory. To address this question, we considered chimera states obtained in a chain of identical coupled optical resonators. These structures were then the subject of quantitative and qualitative analyses using the same tools as those used to characterize cellular automata. This allowed us to highlight an elementary-cellular-automaton-like dynamics hidden in the evolution of our chimera states. We were then able to deduce a set of properties in terms of computability, opening perspectives towards potential applications for information processing. Subsequently, we used our optical chimera states in the context of recurrent neural networks. This is a new paradigm, which stands out for its great simplicity, speed and efficiency in the processing of information. However, the performance of this machine learning technique depends in particular on the design of the reservoir. Our results show that implementing our optical chimera states in place of 'classic' reservoirs can provide a promising architectural alternative to further improve the speed of information processing.
Lawrie, Sofía. "Information representation and processing in neuronal networks: from biological to artificial systems and from first to second-order statistics." Doctoral thesis, Universitat Pompeu Fabra, 2022. http://hdl.handle.net/10803/673989.
Neural networks are hypothesized today to be responsible for the computational capabilities of biological nervous systems. Likewise, artificial neural systems are intensively exploited in a variety of industrial and scientific applications. Nevertheless, how information is represented and processed by these networks is still a matter of debate; that is, it is not clear which properties of neuronal activity are useful for carrying out computations. In this thesis, I present a set of results relating the first-order statistics of neuronal activity to behavior, in the general encoding/decoding context, to analyze data collected while non-human primates performed a working-memory task. Subsequently, I go beyond first order and show that second-order statistics in reservoir computing, a recurrent artificial neural network model, constitute a robust candidate for representing and transmitting information in order to classify multidimensional signals.
Bou-Fakhreddine, Bassam. "Modeling, Control and Optimization Of Cascade Hydroelectric-Irrigation Plants : Operation and Planning." Electronic Thesis or Diss., Paris, CNAM, 2018. http://www.theses.fr/2018CNAM1172.
Full textThis research work aims to optimize the operational procedure of cascade hydro plants in order to be efficiently used for power generation and irrigation. The challenge was to find the most realistic model based on the stochastic feature of water resources, on the power demand and on the irrigation profile. All these aspects are affected on the short and on the long run by a wide range of different conditions (hydrological, meteorological and hydraulic). During this project a bibliographic study was done in order to identify the technical issues that prevent the efficient use of hydro plants in developing countries. The system is numerically modelled taking into consideration all the variables and parameters involved in the optimal operation. The most appropriate approach is chosen in order to maximize the efficient use of water and to minimize economical losses, where different scenarios are simulated in order to validate the adopted suggestions