Academic literature on the topic 'Reservoir computing'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Reservoir computing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Reservoir computing"

1

Van der Sande, Guy, Daniel Brunner, and Miguel C. Soriano. "Advances in photonic reservoir computing." Nanophotonics 6, no. 3 (May 12, 2017): 561–76. http://dx.doi.org/10.1515/nanoph-2016-0132.

Abstract:
We review a novel paradigm that has emerged in analogue neuromorphic optical computing. The goal is to implement a reservoir computer in optics, where information is encoded in the intensity and phase of the optical field. Reservoir computing is a bio-inspired approach especially suited for processing time-dependent information. The reservoir’s complex and high-dimensional transient response to the input signal is capable of universal computation. The reservoir does not need to be trained, which makes it very well suited for optics. As such, much of the promise of photonic reservoirs lies in their minimal hardware requirements, a tremendous advantage over other hardware-intensive neural network models. We review the two main approaches to optical reservoir computing: networks implemented with multiple discrete optical nodes and the continuous system of a single nonlinear device coupled to delayed feedback.
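To make the "reservoir does not need to be trained" idea in this abstract concrete, here is a minimal echo-state-network sketch in Python: the recurrent reservoir is fixed and random, and only a linear readout is fitted by ridge regression. It is a generic software illustration rather than the photonic hardware the paper discusses; the network size, spectral radius, toy task and variable names are all assumptions.

```python
# Minimal echo state network: fixed random reservoir, trained linear readout.
# Illustrative sketch only -- sizes and parameter values are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 200

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))   # fixed input weights (never trained)
W = rng.uniform(-0.5, 0.5, (n_res, n_res))     # fixed recurrent reservoir weights
W *= 0.9 / max(abs(np.linalg.eigvals(W)))      # scale spectral radius below 1

def run_reservoir(u):
    """Collect reservoir states for an input sequence u of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ u_t + W @ x)        # simple leak-free state update
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a sine wave.
u = np.sin(np.linspace(0, 20 * np.pi, 2000))[:, None]
y = np.roll(u, -1, axis=0)                     # target = next input sample
X = run_reservoir(u)

# Training touches only the readout: ridge regression from states to targets.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
print("training MSE:", np.mean((X @ W_out - y) ** 2))
```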
2

Tanaka, Gouhei. "Reservoir Computing." Journal of The Institute of Image Information and Television Engineers 74, no. 3 (2020): 532–34. http://dx.doi.org/10.3169/itej.74.532.

3

Antonik, Piotr, Serge Massar, and Guy Van Der Sande. "Photonic reservoir computing using delay dynamical systems." Photoniques, no. 104 (September 2020): 45–48. http://dx.doi.org/10.1051/photon/202010445.

Abstract:
The recent progress in artificial intelligence has spurred renewed interest in hardware implementations of neural networks. Reservoir computing is a powerful, highly versatile machine learning algorithm well suited for experimental implementations. The simplest high-performance architecture is based on delay dynamical systems. We illustrate its power through a series of photonic examples, including the first all-optical reservoir computer and reservoir computers based on lasers with delayed feedback. We also show how reservoirs can be used to emulate dynamical systems. We discuss the perspectives of photonic reservoir computing.
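The delay-based architecture mentioned here replaces a spatial network of nodes with a single nonlinear node whose delayed response is sampled at many "virtual nodes" along the delay line. The sketch below is a simplified discrete-time caricature of that idea, with a tanh nonlinearity and an arbitrary input mask standing in for the optoelectronic hardware; all parameter values are illustrative.

```python
# Simplified delay-based reservoir: one nonlinear node, N virtual nodes per delay.
# A caricature for illustration, not the authors' photonic implementation.
import numpy as np

rng = np.random.default_rng(1)
N = 50                                  # virtual nodes per delay period
mask = rng.uniform(-1.0, 1.0, N)        # fixed input mask applied to each sample

def delay_reservoir(u, eta=0.5, gamma=0.05):
    """Return a (len(u), N) state matrix for a scalar input sequence u."""
    delayed = np.zeros(N)               # virtual-node responses one delay period ago
    states = []
    for u_t in u:
        current = np.tanh(eta * delayed + gamma * mask * u_t)
        delayed = current
        states.append(current)
    return np.array(states)

u = np.sin(np.linspace(0, 8 * np.pi, 400))
X = delay_reservoir(u)
print(X.shape)   # (400, 50): each row plays the role of a high-dimensional state
```

A linear readout trained on these rows, as in the echo-state-network sketch shown earlier, completes the system.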
4

Senn, Christoph Walter, and Itsuo Kumazawa. "Abstract Reservoir Computing." AI 3, no. 1 (March 10, 2022): 194–210. http://dx.doi.org/10.3390/ai3010012.

Abstract:
Noise of any kind can be an issue when translating results from simulations to the real world. We suddenly have to deal with building tolerances, faulty sensors, or just noisy sensor readings. This is especially evident in systems with many free parameters, such as the ones used in physical reservoir computing. By abstracting away these kinds of noise sources using intervals, we derive a regularized training regime for reservoir computing using sets of possible reservoir states. Numerical simulations are used to show the effectiveness of our approach against different sources of errors that can appear in real-world scenarios and compare them with standard approaches. Our results support the application of interval arithmetics to improve the robustness of mass-spring networks trained in simulations.
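One way to picture the interval-based training described above is to widen every collected reservoir state into a set of possible states and require a single readout to fit all of them. The rough sketch below does this by fitting a ridge-regression readout to the lower and upper endpoints of each state interval; the interval half-width and every other detail are our own simplification, not the paper's exact interval-arithmetic formulation.

```python
# Hedged illustration: train one readout on interval endpoints of each state.
# A simplification of the idea, not the authors' exact method.
import numpy as np

rng = np.random.default_rng(2)
T, n_res = 500, 100
X = np.tanh(rng.normal(size=(T, n_res)))   # stand-in for collected reservoir states
y = rng.normal(size=(T, 1))                # stand-in training targets
eps = 0.05                                 # assumed half-width of each state interval

# Lower and upper endpoints of every interval share the same target,
# so the fitted readout must work for the whole set of possible states.
X_aug = np.vstack([X - eps, X + eps])
y_aug = np.vstack([y, y])

ridge = 1e-3
W_out = np.linalg.solve(X_aug.T @ X_aug + ridge * np.eye(n_res), X_aug.T @ y_aug)
print("error on nominal states:", np.mean((X @ W_out - y) ** 2))
```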
5

Lukoševičius, Mantas, Herbert Jaeger, and Benjamin Schrauwen. "Reservoir Computing Trends." KI - Künstliche Intelligenz 26, no. 4 (May 16, 2012): 365–71. http://dx.doi.org/10.1007/s13218-012-0204-5.

6

NAITOH, Yasuhisa, and Yoshiyuki YAMASHITA. "Physical Reservoir Computing." Vacuum and Surface Science 67, no. 11 (November 10, 2024): 520. http://dx.doi.org/10.1380/vss.67.520.

7

Yue, Dianzuo, Yushuang Hou, Chunxia Hu, Cunru Zang, and Yingzhe Kou. "Handwritten Digits Recognition Based on a Parallel Optoelectronic Time-Delay Reservoir Computing System." Photonics 10, no. 3 (February 22, 2023): 236. http://dx.doi.org/10.3390/photonics10030236.

Abstract:
In this work, the performance of an optoelectronic time-delay reservoir computing system for performing a handwritten digit recognition task is numerically investigated, and a scheme to improve the recognition speed using multiple parallel reservoirs is proposed. By comparing four image injection methods based on a single time-delay reservoir, we find that when injecting the histograms of oriented gradient (HOG) features of the digit image, the accuracy rate (AR) is relatively high and is less affected by the offset phase. To improve the recognition speed, we construct a parallel time-delay reservoir system including multi-reservoirs, where each reservoir processes part of the HOG features of one image. Based on 6 parallel reservoirs with each reservoir possessing 100 virtual nodes, the AR can reach about 97.8%, and the reservoir processing speed can reach about 1 × 10^6 digits per second. Meanwhile, the parallel reservoir system shows strong robustness to the parameter mismatch between multi-reservoirs.
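The parallel scheme described in this abstract amounts to splitting a feature vector (here, HOG features of a digit image) across several small independent reservoirs and concatenating their responses for a single readout. The sketch below shows that splitting step with ordinary random reservoirs; the six reservoirs of 100 nodes follow the abstract, but the feature count, drive length and everything else are assumptions.

```python
# Parallel reservoirs: each one sees only its share of the feature vector.
# Generic software sketch, not the optoelectronic time-delay system of the paper.
import numpy as np

rng = np.random.default_rng(3)
n_features, n_reservoirs, nodes = 120, 6, 100
chunk = n_features // n_reservoirs

def make_reservoir():
    W_in = rng.uniform(-0.1, 0.1, (nodes, chunk))
    W = rng.uniform(-0.5, 0.5, (nodes, nodes))
    W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # keep the dynamics stable
    return W_in, W

reservoirs = [make_reservoir() for _ in range(n_reservoirs)]

def respond(features, steps=5):
    """Drive each reservoir with its share of the features and concatenate states."""
    parts = []
    for k, (W_in, W) in enumerate(reservoirs):
        u = features[k * chunk:(k + 1) * chunk]
        x = np.zeros(nodes)
        for _ in range(steps):
            x = np.tanh(W_in @ u + W @ x)
        parts.append(x)
    return np.concatenate(parts)                # one long state vector for the readout

state = respond(rng.uniform(0.0, 1.0, n_features))
print(state.shape)   # (600,)
```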
8

Asadullah, M., P. Behrenbruch, and S. Pham. "RESERVOIR SIMULATION—UPSCALING, STREAMLINES AND PARALLEL COMPUTING." APPEA Journal 47, no. 1 (2007): 199. http://dx.doi.org/10.1071/aj06013.

Abstract:
Simulation of petroleum reservoirs is becoming more and more complex due to increasing necessity to model heterogeneity of reservoirs for accurate reservoir performance prediction. With high oil prices and less easy oil, accurate reservoir management tools such as simulation models are in more demand than ever before. The aim is to capture and preserve reservoir heterogeneity when changing over from a detailed geocellular model to a flow simulation model, minimising errors when upscaling and preventing excessive numerical dispersion by employing variable and innovative grids, as well as improved computational algorithms. For accurate and efficient simulation of large-scale models there are essentially three choices: upscaling, which involves averaging of parameters for several blocks, resulting in a coarser model that executes faster; the use of streamline simulation, which uses a more optimal grid, combined with a different computational algorithm for increased efficiency; and, the use of parallel computing techniques, which use superior hardware configurations for efficiency gains. With uncertainty screening of various multiple geostatistical realisations and investigation of alternative development scenarios—now commonplace for determining reservoir performance—computational efficiency and accuracy in modelling are paramount. This paper summarises the main techniques and methodologies involved in considering geocellular models for flow simulation of reservoirs, commenting on advantages and disadvantages among the various possibilities. Starting with some historic comments, the three modes of simulation are reviewed and examples are given for illustrative purposes, including a case history for the Bayu-Undan Field, Timor Sea.
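As a small worked illustration of the upscaling step described above (averaging fine-grid properties into coarser blocks), the snippet below coarsens an invented 4x4 permeability grid into 2x2 blocks using arithmetic or harmonic averaging. It shows only the basic averaging idea, not the workflow of the paper.

```python
# Toy upscaling: average 2x2 blocks of an invented fine permeability grid (mD).
import numpy as np

fine_perm = np.array([[100., 120.,  90., 110.],
                      [ 95., 105., 115., 100.],
                      [ 10.,  12.,   9.,  11.],   # a low-permeability layer
                      [ 98., 102., 108.,  97.]])

def upscale(block, kind="arithmetic"):
    """Average a block of fine cells into one coarse value."""
    if kind == "arithmetic":                     # suited to flow along the layers
        return block.mean()
    return block.size / np.sum(1.0 / block)      # harmonic mean, flow across layers

coarse = np.array([[upscale(fine_perm[i:i + 2, j:j + 2]) for j in (0, 2)]
                   for i in (0, 2)])
print(coarse)
```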
9

Govia, L. C. G., G. J. Ribeill, G. E. Rowlands, and T. A. Ohki. "Nonlinear input transformations are ubiquitous in quantum reservoir computing." Neuromorphic Computing and Engineering 2, no. 1 (February 18, 2022): 014008. http://dx.doi.org/10.1088/2634-4386/ac4fcd.

Abstract:
The nascent computational paradigm of quantum reservoir computing presents an attractive use of near-term, noisy-intermediate-scale quantum processors. To understand the potential power and use cases of quantum reservoir computing, it is necessary to define a conceptual framework to separate its constituent components and determine their impacts on performance. In this manuscript, we utilize such a framework to isolate the input encoding component of contemporary quantum reservoir computing schemes. We find that across the majority of schemes the input encoding implements a nonlinear transformation on the input data. As nonlinearity is known to be a key computational resource in reservoir computing, this calls into question the necessity and function of further, post-input, processing. Our findings will impact the design of future quantum reservoirs, as well as the interpretation of results and fair comparison between proposed designs.
10

Oliveira, Estevao Rada, and Fernando Juliani. "Reservoir Computing: uma Abordagem Conceitual." Revista de Ciências Exatas e Tecnologia 13, no. 13 (December 30, 2018): 09. http://dx.doi.org/10.17921/1890-1793.2018v13n13p09-12.

Abstract:
Reservoir computing is a randomly constructed recurrent neural network paradigm in which the hidden layer does not need to be trained. This article summarizes the main concepts, methods and recent research on the reservoir computing paradigm, aiming to offer theoretical support for other articles. A bibliographic review was carried out using reliable scientific knowledge bases, emphasizing research published between 2007 and 2017 and focused on the implementation and optimization of the aforementioned paradigm. As a result, recent works that contribute in general to the development of reservoir computing are presented and, given how current the topic is, a variety of topics still open to research is outlined, which may serve as a guide for the research community. Keywords: Artificial Intelligence. Machine Learning. Recurrent Neural Networks.

Dissertations / Theses on the topic "Reservoir computing"

1

Dale, Matthew. "Reservoir Computing in materio." Thesis, University of York, 2018. http://etheses.whiterose.ac.uk/22306/.

Abstract:
Reservoir Computing first emerged as an efficient mechanism for training recurrent neural networks and later evolved into a general theoretical model for dynamical systems. By applying only a simple training mechanism many physical systems have become exploitable unconventional computers. However, at present, many of these systems require careful selection and tuning by hand to produce usable or optimal reservoir computers. In this thesis we show the first steps to applying the reservoir model as a simple computational layer to extract exploitable information from complex material substrates. We argue that many physical substrates, even systems that in their natural state might not form usable or "good" reservoirs, can be configured into working reservoirs given some stimulation. To achieve this we apply techniques from evolution in materio whereby configuration is through evolved input-output signal mappings and targeted stimuli. In preliminary experiments the combined model and configuration method is applied to carbon nanotube/polymer composites. The results show substrates can be configured and trained as reservoir computers of varying quality. It is shown that applying the reservoir model adds greater functionality and programmability to physical substrates, without sacrificing performance. Next, the weaknesses of the technique are addressed, with the creation of new high input-output hardware system and an alternative multi-substrate framework. Lastly, a substantial effort is put into characterising the quality of a substrate for reservoir computing, i.e its ability to realise many reservoirs. From this, a methodological framework is devised. Using the framework, radically different computing substrates are compared and assessed, something previously not possible. As a result, a new understanding of the relationships between substrate, tasks and properties is possible, outlining the way for future exploration and optimisation of new computing substrates.
2

Kulkarni, Manjari S. "Memristor-based Reservoir Computing." PDXScholar, 2012. https://pdxscholar.library.pdx.edu/open_access_etds/899.

Abstract:
In today's nanoscale era, scaling down to even smaller feature sizes poses a significant challenge in the device fabrication, the circuit, and the system design and integration. On the other hand, nanoscale technology has also led to novel materials and devices with unique properties. The memristor is one such emergent nanoscale device that exhibits non-linear current-voltage characteristics and has an inherent memory property, i.e., its current state depends on the past. Both the non-linear and the memory property of memristors have the potential to enable solving spatial and temporal pattern recognition tasks in radically different ways from traditional binary transistor-based technology. The goal of this thesis is to explore the use of memristors in a novel computing paradigm called "Reservoir Computing" (RC). RC is a new paradigm that belongs to the class of artificial recurrent neural networks (RNN). However, it architecturally differs from the traditional RNN techniques in that the pre-processor (i.e., the reservoir) is made up of random recurrently connected non-linear elements. Learning is only implemented at the readout (i.e., the output) layer, which reduces the learning complexity significantly. To the best of our knowledge, memristors have never been used as reservoir components. We use pattern recognition and classification tasks as benchmark problems. Real world applications associated with these tasks include process control, speech recognition, and signal processing. We have built a software framework, RCspice (Reservoir Computing Simulation Program with Integrated Circuit Emphasis), for this purpose. The framework allows to create random memristor networks, to simulate and evaluate them in Ngspice, and to train the readout layer by means of Genetic Algorithms (GA). We have explored reservoir-related parameters, such as the network connectivity and the reservoir size along with the GA parameters. Our results show that we are able to efficiently and robustly classify time-series patterns using memristor-based dynamical reservoirs. This presents an important step towards computing with memristor-based nanoscale systems.
3

Tran, Dat Tien. "Memcapacitive Reservoir Computing Architectures." PDXScholar, 2019. https://pdxscholar.library.pdx.edu/open_access_etds/5001.

Abstract:
In this thesis, I propose novel brain-inspired and energy-efficient computing systems. Designing such systems has been the forefront goal of neuromorphic scientists over the last few decades. The results from my research show that it is possible to design such systems with emerging nanoscale memcapacitive devices. Technological development has advanced greatly over the years with the conventional von Neumann architecture. The current architectures and materials, however, will inevitably reach their physical limitations. While conventional computing systems have achieved great performances in general tasks, they are often not power-efficient in performing tasks with large input data, such as natural image recognition and tracking objects in streaming video. Moreover, in the von Neumann architecture, all computations take place in the Central Processing Unit (CPU) and the results are saved in the memory. As a result, information is shuffled back and forth between the memory and the CPU for processing, which creates a bottleneck due to the limited bandwidth of data paths. Adding cache memory and using general-purpose Graphic Processing Units (GPUs) do not completely resolve this bottleneck. Neuromorphic architectures offer an alternative to the conventional architecture by mimicking the functionality of a biological neural network. In a biological neural network, neurons communicate with each other through a large number of dendrites and synapses. Each neuron (a processing unit) locally processes the information that is stored in its input synapses (memory units). Distributing information to neurons and localizing computation at the synapse level alleviate the bottleneck problem and allow for the processing of a large amount of data in parallel. Furthermore, biological neural networks are highly adaptable to complex environments, tolerant of system noise and variations, and capable of processing complex information with extremely low power. Over the past five decades, researchers have proposed various brain-inspired architectures to perform neuromorphic tasks. IBM's TrueNorth is considered as the state-of-the-art brain-inspired architecture. It has 10^6 CMOS neurons with 256 x 256 programmable synapses and consumes about 60nW/neuron. Even though TrueNorth is power-efficient, its number of neurons and synapses is nothing compared to a human brain that has 10^11 neurons and each neuron has, on average, 7,000 synaptic connections to other neurons. The human brain only consumes 2.3nW/neuron. The memristor brought neuromorphic computing one step closer to the human brain target. A memristor is a passive nano-device that has a memory. Its resistance changes with applied voltages. The resistive change with an applied voltage is similar to the function of a synapse. Memristors have been the prominent option for designing low power systems with high-area density. In fact, Truong and Min reported that an improved memristor-based crossbar performed a neuromorphic task with 50% reduction in area and 48% of power savings compared to CMOS arrays. However, memristive devices, by their nature, are still resistors, and the power consumption is bounded by their resistance. Here, a memcapacitor offers a promising alternative. My initial work indicated that memcapacitive networks performed complex tasks with equivalent performance, compared to memristive networks, but with much higher energy efficiency. A memcapacitor is also a two-terminal nano-device and its capacitance varies with applied voltages.
Similar to a memristor, the capacitance of the memcapacitor changes with an applied voltage, similar to the function of a synapse. The memcapacitor is a storage device and does not consume static energy. Its switching energy is also small due to its small capacitance (nF to pF range). As a result, networks of memcapacitors have the potential to perform complex tasks with much higher power efficiency. Several memcapacitive synaptic models have been proposed as artificial synapses. Pershin and Di Ventra illustrated that a memcapacitor with two diodes has the functionality of a synapse. Flak suggested that a memcapacitor behaves as a synapse when it is connected with three CMOS switches in a Cellular Nanoscale Network (CNN). Li et al. demonstrated that when four identical memcapacitors are connected in a bridge network, they characterize the function of a synapse as well. Reservoir Computing (RC) has been used to explain higher-order cognitive functions and the interaction of short-term memory with other cognitive processes. Rigotti et al. observed that a dynamic system with short-term memory is essential in defining the internal brain states of a test agent. Although both traditional Recurrent Neural Networks (RNNs) and RC are dynamical systems, RC has a great benefit over RNNs due to the fact that the learning process of RC is simple and based on the training of the output layer. RC harnesses the computing nature of a random network of nonlinear devices, such as memcapacitors. Appeltant et al. showed that RC with a simplified reservoir structure is sufficient to perform speech recognition. Fewer nonlinear units connecting in a delay feedback loop provide enough dynamic responses for RC. Fewer units in reservoirs mean fewer connections and inputs, and therefore lower power consumption. As Goudarzi and Teuscher indicated, RC architectures still have inherent challenges that need to be addressed. First, theoretical studies have shown that both regular and random reservoirs achieve similar performances for particular tasks. A random reservoir, however, is more appropriate for unstructured networks of nanoscale devices. What is the role of network structure in RC for solving a task (Q1)? Secondly, the nonlinear characteristics of nanoscale devices contribute directly to the dynamics of a physical network, which influences the overall performance of an RC system. To what degree is a mixture of nonlinear devices able to improve the performances of reservoirs (Q2)? Thirdly, modularity, such as CMOS circuits in a digital building, is an essential key in building a complex system from fundamental blocks. Is hierarchical RCs able to solve complex tasks? What network topologies/hierarchies will lead to optimal performance? What is the learning complexity of such a system (Q3)? My research goal is to address the above RC challenges by exploring memcapacitive reservoir architectures. The analysis of memcapacitive monolithic reservoirs addresses both questions Q1 and Q2 above by showing that Small-World Power-Law (SWPL) structure is an optimal topological structure for RCs to perform time series prediction (NARMA-10), temporal recognition (Isolate Spoken Digits), and spatial task (MNIST) with minimal power consumption. On average, the SWPL reservoirs reduce significantly the power consumption by a factor of 1.21x, 31x, and 31.2x compared to the regular, the random, and the small-world reservoirs, respectively. 
Further analysis of SWPL structures underlines that high locality α and low randomness β decrease the cost to the systems in terms of wiring and nanowire dissipated power but do not guarantee the optimal performance of reservoirs. With a genetic algorithm to refine network structure, SWPL reservoirs with optimal network parameters are able to achieve comparable performance with less power. Compared to the regular reservoirs, the SWPL reservoirs consume less power, by a factor of 1.3x, 1.4x, and 1.5x. Similarly, compared to the random topology, the SWPL reservoirs save power consumption by a factor of 4.8x, 1.6x, and 2.1x, respectively. The simulation results of mixed-device reservoirs (memristive and memcapacitive reservoirs) provide evidence that the combination of memristive and memcapacitive devices potentially enhances the nonlinear dynamics of reservoirs in three tasks: NARMA-10, Isolated Spoken Digits, and MNIST. In addressing the third question (Q3), the kernel quality measurements show that hierarchical reservoirs have better dynamic responses than monolithic reservoirs. The improvement of dynamic responses allows hierarchical reservoirs to achieve comparable performance for Isolated Spoken Digit tasks but with less power consumption by a factor of 1.4x, 8.8x, 9.5, and 6.3x for delay-line, delay-line feedback, simple cycle, and random structures, respectively. Similarly, for the CIFAR-10 image tasks, hierarchical reservoirs gain higher performance with less power, by a factor of 5.6x, 4.2x, 4.8x, and 1.9x. The results suggest that hierarchical reservoirs have better dynamics than the monolithic reservoirs to solve sufficiently complex tasks. Although the performance of deep mem-device reservoirs is low compared to the state-of-the-art deep Echo State Networks, the initial results demonstrate that deep mem-device reservoirs are able to solve a high-dimensional and complex task such as polyphonic music task. The performance of deep mem-device reservoirs can be further improved with better settings of network parameters and architectures. My research illustrates the potentials of novel memcapacitive systems with SWPL structures that are brained-inspired and energy-efficient in performing tasks. My research offers novel memcapacitive systems that are applicable to low-power applications, such as mobile devices and the Internet of Things (IoT), and provides an initial design step to incorporate nano memcapacitive devices into future applications of nanotechnology.
4

Melandri, Luca. "Introduction to Reservoir Computing Methods." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2015. http://amslaurea.unibo.it/8268/.

Abstract:
The document covers the family of methodologies for training and exploiting recurrent neural networks known as Reservoir Computing. A general introduction to Machine Learning is given to provide all the tools needed to understand the topic. Implementation details and an analysis of the advantages and weak points of the various approaches follow, supported by code and explanatory images. Finally, conclusions are drawn about the approaches, what can be improved, and their practical applications.
5

Weddell, Stephen John. "Optical Wavefront Prediction with Reservoir Computing." Thesis, University of Canterbury. Electrical and Computer Engineering, 2010. http://hdl.handle.net/10092/4070.

Abstract:
Over the last four decades there has been considerable research in the improvement of imaging exo-atmospheric objects through air turbulence from ground-based instruments. Whilst such research was initially motivated for military purposes, the benefits to the astronomical community have been significant. A key topic in this research is isoplanatism. The isoplanatic angle is an angular limit that separates two point-source objects, where if independent measurements of wavefront perturbations were obtained from each source, the wavefront distortion would be considered equivalent. In classical adaptive optics, perturbations from a point-source reference, such as a bright, natural guide star, are used to partially negate perturbations distorting an image of a fainter, nearby science object. Various techniques, such as atmospheric tomography, maximum a posteriori (MAP), and parameterised modelling, have been used to estimate wavefront perturbations when the distortion function is spatially variant, i.e., angular separations exceed the isoplanatic angle, θ₀, where θ₀ ≈ 10 μrad for mild distortion at visual wavelengths. However, the effectiveness of such techniques is also dependent on knowledge a priori of turbulence profiles and configuration data. This dissertation describes a new method used to estimate the eigenvalues that comprise wavefront perturbations over a wide, spatial field. To help reduce dependency on prior knowledge for specific configurations, machine learning is used with a recurrent neural network trained using a posteriori wavefront ensembles from multiple point-source objects. Using a spatiotemporal framework for prediction, the eigenvalues, in terms of Zernike polynomials, are used to reconstruct the spatially-variant, point spread function (SVPSF) for image restoration. The overall requirement is to counter the adverse effects of atmospheric turbulence on the images of extended astronomical objects. The method outlined in this thesis combines optical wavefront sensing using multiple natural guide stars, with a reservoir-based, artificial neural network. The network is used to predict aberrations caused by atmospheric turbulence that degrade the images of faint science objects. A modified geometric wavefront sensor was used to simultaneously measure phase perturbations from multiple, point-source reference objects in the pupil. A specialised recurrent neural network (RNN) was used to learn the spatiotemporal effects of phase perturbations measured from several source references. Modal expansions, in terms of Zernike coefficients, were used to build time-series ensembles that defined wavefront maps of point-source reference objects. The ensembles were used to firstly train an RNN by applying a spatiotemporal training algorithm, and secondly, new data ensembles presented to the trained RNN were used to estimate the wavefront map of science objects over a wide field. Both simulations and experiments were used to evaluate this method. The results of this study showed that by employing three or more source references over an angular separation of 24 μrad from a target, and given mild turbulence with Fried coherence length of 20 cm, the normalised mean squared error of low-order Zernike modes could be estimated to within 0.086. A key benefit in estimating phase perturbations using a time-series of short exposure point-spread functions (PSFs) is that it is then possible to determine the long exposure PSF. 
Based on the summation of successive, corrected, short-exposure frames, high resolution images of the science object can be obtained. The method was shown to predict a contiguous series of short exposure aberrations, as a phase screen was moved over a simulated aperture. By qualifying temporal decorrelation of atmospheric turbulence, in terms of Taylor's hypothesis, long exposure estimates of the PSF were obtained.
6

Sergio, Anderson Tenório. "Otimização de Reservoir Computing com PSO." Universidade Federal de Pernambuco, 2013. https://repositorio.ufpe.br/handle/123456789/11498.

Abstract:
Reservoir Computing (RC) is an Artificial Neural Network paradigm with important real-world applications. RC uses an architecture similar to Recurrent Neural Networks for temporal processing, with the advantage of not needing to train the weights of the hidden layer. In general, the RC concept is based on building a recurrent network randomly (the reservoir), without changing its weights. After this phase, a linear regression function is used to train the output of the system. The nonlinear dynamic transformation offered by the reservoir is sufficient for the output layer to extract the output signals using a simple linear mapping, which makes training considerably faster. However, like conventional neural networks, Reservoir Computing has some problems. Its use can be computationally costly, several parameters influence its efficiency, and it is unlikely that random generation of the weights and training of the output layer with a simple linear regression function is the ideal solution for generalising the data. PSO is an optimisation algorithm with some advantages over other global search techniques: it is simple to implement and, in some cases, converges faster with lower computational cost. This dissertation investigated the use of PSO (and two of its extensions, EPUS-PSO and APSO) in the task of optimising the global parameters, architecture and reservoir weights of an RC system, applied to the problem of time series forecasting. The results showed that the optimisation of Reservoir Computing with PSO, as well as with the selected extensions, performed satisfactorily on all the data sets studied, both benchmark time series and data sets from wind energy applications. The optimisation outperformed several works in the literature, making it an important solution to the time series forecasting problem.
7

Appeltant, Lennert. "Reservoir computing based on delay-dynamical systems." Doctoral thesis, Universitat de les Illes Balears, 2012. http://hdl.handle.net/10803/84144.

Abstract:
Today, except for mathematical operations, our brain functions much faster and more efficient than any supercomputer. It is precisely this form of information processing in neural networks that inspires researchers to create systems that mimic the brain’s information processing capabilities. In this thesis we propose a novel approach to implement these alternative computer architectures, based on delayed feedback. We show that one single nonlinear node with delayed feedback can replace a large network of nonlinear nodes. First we numerically investigate the architecture and performance of delayed feedback systems as information processing units. Then we elaborate on electronic and opto-electronic implementations of the concept. Next to evaluating their performance for standard benchmarks, we also study task independent properties of the system, extracting information on how to further improve the initial scheme. Finally, some simple modifications are suggested, yielding improvements in terms of speed or performance.
8

Andersson, Casper. "Reservoir Computing Approach for Network Intrusion Detection." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-54983.

Abstract:
Identifying intrusions in computer networks is important to be able to protect the network. The network is the entry point that attackers use in an attempt to gain access to valuable information from a company or organization or to simply destroy digital property. There exist many good methods already but there is always room for improvement. This thesis proposes to use reservoir computing as a feature extractor on network traffic data as a time series to train machine learning models for anomaly detection. The models used in this thesis are neural network, support vector machine, and linear discriminant analysis. The performance is measured in terms of detection rate, false alarm rate, and overall accuracy of the identification of attacks in the test data. The results show that the neural network generally improved with the use of a reservoir network. Support vector machine wasn't hugely affected by the reservoir. Linear discriminant analysis always got worse performance. Overall, the time aspect of the reservoir didn't have a huge effect. The performance of my experiments is inferior to those of previous works, but it might perform better if a separate feature selection or extraction is done first. Extracting a sequence to a single vector and determining if it contained any attacks worked very well when the sequences contained several attacks, otherwise not so well.
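The pipeline sketched in this abstract, a fixed random reservoir used as a feature extractor followed by ordinary classifiers, can be illustrated roughly as follows. The traffic data here is synthetic, the classifiers use scikit-learn defaults, and nothing reproduces the thesis experiments.

```python
# Reservoir as feature extractor for anomaly detection, with SVM and LDA readouts.
# Synthetic stand-in data; sizes, labels and parameters are assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(4)
n_feat, n_res, T = 10, 80, 30          # features per record, reservoir size, sequence length

W_in = rng.uniform(-0.5, 0.5, (n_res, n_feat))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))

def final_state(seq):
    """Final reservoir state after feeding one (T, n_feat) traffic sequence."""
    x = np.zeros(n_res)
    for u_t in seq:
        x = np.tanh(W_in @ u_t + W @ x)
    return x

# Synthetic labelled sequences: 0 = benign, 1 = attack.
y = rng.integers(0, 2, 200)
X = np.array([final_state(rng.normal(lab, 1.0, (T, n_feat))) for lab in y])

print("SVM accuracy:", SVC().fit(X[:150], y[:150]).score(X[150:], y[150:]))
print("LDA accuracy:",
      LinearDiscriminantAnalysis().fit(X[:150], y[:150]).score(X[150:], y[150:]))
```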
9

Fu, Kaiwei. "Reservoir Computing with Neuro-memristive Nanowire Networks." Thesis, The University of Sydney, 2021. https://hdl.handle.net/2123/25900.

Abstract:
We present simulation results based on a model of self–assembled nanowire networks with memristive junctions and neural network–like topology. We analyse the dynamical voltage distribution in response to an applied bias and explain the network conductance fluctuations observed in previous experimental studies. We show I − V curves under AC stimulation and compare these to other bulk memristors. We then study the capacity of these nanowire networks for neuro-inspired reservoir computing by demonstrating higher harmonic generation and short/long–term memory. Benchmark tasks in a reservoir computing framework are implemented. The tasks include nonlinear wave transformation, wave auto-generation, and hand-written digit classification.
10

Alomar, Barceló Miquel Lleó. "Methodologies for hardware implementation of reservoir computing systems." Doctoral thesis, Universitat de les Illes Balears, 2017. http://hdl.handle.net/10803/565422.

Abstract:
Inspired by the way the brain processes information, artificial neural networks (ANNs) were created with the aim of reproducing human capabilities in tasks that are hard to solve using classical algorithmic programming. The ANN paradigm has been applied to numerous fields of science and engineering thanks to its ability to learn from examples, its adaptation, parallelism and fault tolerance. Reservoir computing (RC), based on the use of a random recurrent neural network (RNN) as the processing core, is a powerful model that is highly suited to time-series processing. Hardware realizations of ANNs are crucial to exploit the parallel properties of these models, which favor higher speed and reliability. On the other hand, hardware neural networks (HNNs) may offer appreciable advantages in terms of power consumption and cost. Low-cost compact devices implementing HNNs are useful to support or replace software in real-time applications, such as control, medical monitoring, robotics and sensor networks. However, the hardware realization of ANNs with large neuron counts, such as in RC, is a challenging task due to the large resource requirements of the involved operations. Despite the potential benefits of digital hardware circuits to perform RC-based neural processing, most implementations are realized in software using sequential processors. In this thesis, I propose and analyze several methodologies for the digital implementation of RC systems using a limited amount of hardware resources. The neural network design is described in detail for both a conventional implementation and the diverse alternative approaches. The advantages and shortcomings of the various techniques regarding accuracy, computation speed and required silicon area are discussed. Finally, the proposed approaches are applied to solve different real-life engineering problems.

Books on the topic "Reservoir computing"

1

Nakajima, Kohei, and Ingo Fischer, eds. Reservoir Computing. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-13-1687-6.

2

Brunner, Daniel, Miguel C. Soriano, and Guy Van der Sande, eds. Photonic Reservoir Computing. Berlin, Boston: De Gruyter, 2019. http://dx.doi.org/10.1515/9783110583496.

3

Wong, Patrick, Fred Aminzadeh, and Masoud Nikravesh, eds. Soft Computing for Reservoir Characterization and Modeling. Heidelberg: Physica-Verlag HD, 2002. http://dx.doi.org/10.1007/978-3-7908-1807-9.

4

Bruk, Stevan. Methods of computing sedimentation in lakes and reservoirs: A contribution to the International Hydrological Programme, IHP - II Project A. 2.6.1 panel. Paris: UNESCO, 1985.

5

1959-, Nikravesh Masoud, Aminzadeh Fred, and Zadeh Lotfi Asker, eds. Soft computing and intelligent data analysis in oil exploration. Amsterdam: Elsevier, 2003.

6

Stevan, Bruk, and International Hydrological Programme, eds. Methods of computing sedimentation in lakes and reservoirs: A contribution to the International Hydrological Programme, IHP - II Project A. 2.6.1 panel. Paris: UNESCO, 1985.

7

Reservoir Computing. Springer Nature, 2021.

8

Brunner, Daniel, Miguel C. Soriano, and Guy Van der Sande. Photonic Reservoir Computing: Optical Recurrent Neural Networks. Walter de Gruyter GmbH, 2019.

9

Aminzadeh, Fred, Masoud Nikravesh, and Patrick Wong. Soft Computing for Reservoir Characterization and Modeling. Physica-Verlag, 2013.


Book chapters on the topic "Reservoir computing"

1

Tate, Naoya. "Quantum-Dot-Based Photonic Reservoir Computing." In Photonic Neural Networks with Spatiotemporal Dynamics, 71–87. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-5072-0_4.

Abstract:
Reservoir computing is a novel computational framework based on the characteristic behavior of recurrent neural networks. In particular, a recurrent neural network for reservoir computing is defined as a reservoir, which is implemented as a fixed and nonlinear system. Recently, to overcome the limitation of data throughput between processors and storage devices in conventional computer systems during processing, known as the Von Neumann bottleneck, physical implementations of reservoirs have been actively investigated in various research fields. The author’s group has been currently studying a quantum dot reservoir, which consists of coupled structures of randomly dispersed quantum dots, as a physical reservoir. The quantum dot reservoir is driven by sequential signal inputs using radiation with laser pulses, and the characteristic dynamics of the excited energy in the network are exhibited with the corresponding spatiotemporal fluorescence outputs. We have presented the fundamental physics of a quantum dot reservoir. Subsequently, experimental methods have been introduced to prepare a practical quantum dot reservoir. Next, we have presented the experimental input/output properties of our quantum dot reservoir. Here, we experimentally focused on the relaxation of fluorescence outputs, which indicates the characteristics of optical energy dynamics in the reservoir, and qualitatively discussed the usability of quantum dot reservoirs based on their properties. Finally, we have presented experimental reservoir computing based on spatiotemporal fluorescence outputs from a quantum dot reservoir. We consider that the achievements of quantum dot reservoirs can be effectively utilized for advanced reservoir computing.
2

Buhmann, M. D., Prem Melville, Vikas Sindhwani, Novi Quadrianto, Wray L. Buntine, Luís Torgo, Xinhua Zhang, et al. "Reservoir Computing." In Encyclopedia of Machine Learning, 863. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-30164-8_726.

3

Konkoli, Zoran. "Reservoir Computing." In Encyclopedia of Complexity and Systems Science, 1–12. Berlin, Heidelberg: Springer Berlin Heidelberg, 2017. http://dx.doi.org/10.1007/978-3-642-27737-5_683-1.

4

Miikkulainen, Risto. "Reservoir Computing." In Encyclopedia of Machine Learning and Data Mining, 1. Boston, MA: Springer US, 2014. http://dx.doi.org/10.1007/978-1-4899-7502-7_731-1.

5

Miikkulainen, Risto. "Reservoir Computing." In Encyclopedia of Machine Learning and Data Mining, 1103–4. Boston, MA: Springer US, 2017. http://dx.doi.org/10.1007/978-1-4899-7687-1_731.

6

Konkoli, Zoran. "Reservoir Computing." In Unconventional Computing, 619–29. New York, NY: Springer US, 2018. http://dx.doi.org/10.1007/978-1-4939-6883-1_683.

7

Miikkulainen, Risto. "Reservoir Computing." In Encyclopedia of Machine Learning and Data Science, 1. New York, NY: Springer US, 2023. http://dx.doi.org/10.1007/978-1-4899-7502-7_731-2.

8

Gallicchio, Claudio, and Alessio Micheli. "Deep Reservoir Computing." In Natural Computing Series, 77–95. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-13-1687-6_4.

9

Conti, Claudio. "Quantum Reservoir Computing." In Quantum Science and Technology, 219–38. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-44226-1_9.

10

Brunner, Daniel, Piotr Antonik, and Xavier Porte. "1. Introduction to novel photonic computing." In Photonic Reservoir Computing, edited by Daniel Brunner, Miguel C. Soriano, and Guy Van der Sande, 1–32. Berlin, Boston: De Gruyter, 2019. http://dx.doi.org/10.1515/9783110583496-001.


Conference papers on the topic "Reservoir computing"

1

Bednyakova, A., E. Manuylovich, D. A. Ivoilov, I. S. Terekhov, and S. K. Turitsyn. "SOA-based reservoir computing." In 2024 International Conference Laser Optics (ICLO), 279. IEEE, 2024. http://dx.doi.org/10.1109/iclo59702.2024.10624399.

2

Nikiruy, K., T. Ivanov, M. Ziegler, D. Rossetti, F. Corinto, A. Ascoli, R. Tetzlaff, A. S. Demirkol, and N. Schmitt. "Next Generation Memristor Reservoir Computing." In 2024 IEEE International Conference on Metrology for eXtended Reality, Artificial Intelligence and Neural Engineering (MetroXRAINE), 912–17. IEEE, 2024. https://doi.org/10.1109/metroxraine62247.2024.10796786.

3

Castro, Bernard J. Giron, Christophe Peucheret, and Francesco Da Ros. "Microring Resonator-based Photonic Reservoir Computing." In 2024 24th International Conference on Transparent Optical Networks (ICTON), 1–4. IEEE, 2024. http://dx.doi.org/10.1109/icton62926.2024.10648245.

4

Nishimura, Ryo, and Makoto Fukushima. "Comparing Connectivity-To-Reservoir Conversion Methods for Connectome-Based Reservoir Computing." In 2024 International Joint Conference on Neural Networks (IJCNN), 1–8. IEEE, 2024. http://dx.doi.org/10.1109/ijcnn60899.2024.10650803.

5

Gaur, Prabhav, Chengkuan Gao, Karl Johnson, Shimon Rubin, Yeshaiahu Fainman, and Tzu-Chien Hsueh. "Optimization of hybrid photonic electrical reservoir computing." In Photonic Computing: From Materials and Devices to Systems and Applications, edited by Xingjie Ni and Wenshan Cai, 11. SPIE, 2024. http://dx.doi.org/10.1117/12.3027516.

6

Garcia-Beni, Jorge, Gian Luca Giorgi, Miguel C. Soriano, and Roberta Zambrini. "Quantum reservoir computing for time series processing." In Quantum Communications and Quantum Imaging XXII, edited by Keith S. Deacon and Ronald E. Meyers, 31. SPIE, 2024. http://dx.doi.org/10.1117/12.3027999.

7

Goudarzi, Alireza, and Christof Teuscher. "Reservoir Computing." In NANOCOM'16: ACM The Third Annual International Conference on Nanoscale Computing and Communication. New York, NY, USA: ACM, 2016. http://dx.doi.org/10.1145/2967446.2967448.

8

Dat Tran, S. J., and Christof Teuscher. "Memcapacitive reservoir computing." In 2017 IEEE/ACM International Symposium on Nanoscale Architectures (NANOARCH). IEEE, 2017. http://dx.doi.org/10.1109/nanoarch.2017.8053719.

9

Reid, David, and Mark Barrett-Baxendale. "Glial Reservoir Computing." In 2008 Second UKSIM European Symposium on Computer Modeling and Simulation (EMS). IEEE, 2008. http://dx.doi.org/10.1109/ems.2008.74.

10

Sharma, Divyam, Si En Ng, and Nripan Mathews. "Photovoltaic Reservoir Computing." In Neuromorphic Materials, Devices, Circuits and Systems. València: FUNDACIO DE LA COMUNITAT VALENCIANA SCITO, 2023. http://dx.doi.org/10.29363/nanoge.neumatdecas.2023.033.


Reports on the topic "Reservoir computing"

1

Kulkarni, Manjari. Memristor-based Reservoir Computing. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.899.

2

Tran, Dat. Memcapacitive Reservoir Computing Architectures. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.6877.

3

Zyvoloski, G., L. Auer, and J. Dendy. High performance computing for domestic petroleum reservoir simulation. Office of Scientific and Technical Information (OSTI), June 1996. http://dx.doi.org/10.2172/237335.

4

Guppy, Babur, and Remezani. L51555 Estimation of Average Reservoir Pressure in Underground Storage Reservoirs. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), January 1988. http://dx.doi.org/10.55274/r0011302.

Abstract:
The objective of this study was to examine different methods available for estimating the average reservoir pressure of an underground storage reservoir and to determine the relative accuracy of each technique under varying conditions. Three techniques were reviewed, including a new procedure developed as a part of this research project. Approach 1: Shutting-in all wells and monitoring several wells or one representative well upstructure after a selected length of time. Either the arithmetic or weighted average of the final pressure for all the wells is used to determine the average reservoir pressure. Approach 2: Pressure build-up analysis was performed on several wells and the average reservoir pressure for that drainage area was determined from the Horner plots. The average reservoir pressure for the entire reservoir was found by computing the weighted average of average drainage pressures for the individual wells. Approach 3: All data collected from Approach 2 was combined into one composite Horner plot from which the average reservoir pressure could be determined.
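As a small worked example of the weighted-average step in Approaches 1 and 2, the snippet below combines invented per-well pressures into an arithmetic and a volume-weighted average reservoir pressure; in practice the weights would come from each well's estimated drainage pore volume.

```python
# Arithmetic vs. weighted average reservoir pressure; all numbers are invented.
well_pressure = [2850.0, 2920.0, 2790.0]   # psia, from shut-in or Horner analysis
drainage_volume = [1.2e6, 0.8e6, 2.0e6]    # reservoir barrels, used as weights

arithmetic_avg = sum(well_pressure) / len(well_pressure)
weighted_avg = (sum(p * v for p, v in zip(well_pressure, drainage_volume))
                / sum(drainage_volume))

print(f"arithmetic average:      {arithmetic_avg:.1f} psia")
print(f"volume-weighted average: {weighted_avg:.1f} psia")
```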
5

Luff, Kenneth D. Intelligent Computing System for Reservoir Analysis and Risk Assessment of the Red River Formation. Office of Scientific and Technical Information (OSTI), September 2002. http://dx.doi.org/10.2172/808958.

6

Luff, Kenneth D. Intelligent Computing System for Reservoir Analysis and Risk Assessment of the Red River Formation. Office of Scientific and Technical Information (OSTI), June 2002. http://dx.doi.org/10.2172/834753.

7

Sippel, Mark A., William C. Carrigan, Kenneth D. Luff, and Lyn Canter. Intelligent Computing System for Reservoir Analysis and Risk Assessment of the Red River Formation. Office of Scientific and Technical Information (OSTI), November 2003. http://dx.doi.org/10.2172/823509.

8

Caravelli, Francesco, Ruomin Zhu, Valentina Baccetti, and Zdenka Kuncic. Ergodicity, lack thereof, and the performance of reservoir computing with memristive networks and nanowire. Office of Scientific and Technical Information (OSTI), September 2023. http://dx.doi.org/10.2172/2386906.

9

Sippel, Mark A. Intelligent Computing System for Reservoir Analysis and Risk Assessment of Red River Formation, Class Revisit. Office of Scientific and Technical Information (OSTI), September 2002. http://dx.doi.org/10.2172/801440.

10

Almassian, Amin. Information Representation and Computation of Spike Trains in Reservoir Computing Systems with Spiking Neurons and Analog Neurons. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.2720.
