
Journal articles on the topic 'Chaotic Recurrent Neural Networks'


Consult the top 50 journal articles for your research on the topic 'Chaotic Recurrent Neural Networks.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Marković, Dimitrije, and Claudius Gros. "Intrinsic Adaptation in Autonomous Recurrent Neural Networks." Neural Computation 24, no. 2 (2012): 523–40. http://dx.doi.org/10.1162/neco_a_00232.

Full text
Abstract:
A massively recurrent neural network responds on one side to input stimuli and is autonomously active, on the other side, in the absence of sensory inputs. Stimuli and information processing depend crucially on the qualia of the autonomous-state dynamics of the ongoing neural activity. This default neural activity may be dynamically structured in time and space, showing regular, synchronized, bursting, or chaotic activity patterns. We study the influence of nonsynaptic plasticity on the default dynamical state of recurrent neural networks. The nonsynaptic adaption considered acts on intrinsic
APA, Harvard, Vancouver, ISO, and other styles
2

Wang, Jeff, and Raymond Lee. "Chaotic Recurrent Neural Networks for Financial Forecast." American Journal of Neural Networks and Applications 7, no. 1 (2021): 7. http://dx.doi.org/10.11648/j.ajnna.20210701.12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Wang, Xing-Yuan, and Yi Zhang. "Chaotic diagonal recurrent neural network." Chinese Physics B 21, no. 3 (2012): 038703. http://dx.doi.org/10.1088/1674-1056/21/3/038703.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Manantsoa, Franci Zara, Hery Zo Randrianandraina, Minoson Sendrahasina Rakotomalala, and Modeste Kameni Nematchoua. "Chaos Control in Recurrent Neural Networks Using a Sinusoidal Activation Function via the Periodic Pulse Method." Research on Intelligent Manufacturing and Assembly 4, no. 1 (2025): 168–79. https://doi.org/10.25082/rima.2025.01.003.

Full text
Abstract:
Controlling chaos in recurrent neural networks (RNNs) is a crucial challenge in both computational neuroscience and artificial intelligence. Chaotic behavior in these networks can hinder stability and predictability, particularly in systems requiring structured memory and temporal processing. In this study, we apply the periodic pulse method to stabilize the dynamics of chaotic RNNs using a sinusoidal activation function. Two network configurations (2 and 3 neurons) were analyzed using numerical simulations in MATLAB. Our results show that the periodic pulse method effectively suppresses chaot
APA, Harvard, Vancouver, ISO, and other styles
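The sketch below gives a concrete feel for the setup summarized in entry 4: a discrete-time two-neuron network with a sinusoidal activation, perturbed by an additive periodic pulse. All weights, pulse amplitudes, and pulse periods are illustrative placeholders rather than the values or the exact model equations of the paper (whose simulations were run in MATLAB).

```python
# Illustrative sketch only: a discrete-time two-neuron network with a sinusoidal
# activation, with and without an additive periodic pulse. Weights and pulse
# settings are placeholders; whether the free-running map is actually chaotic
# should be checked (e.g., with a Lyapunov estimate) before drawing conclusions.
import numpy as np

W = np.array([[2.8, -1.6],
              [1.9,  2.4]])          # illustrative recurrent weights
b = np.array([0.3, -0.2])            # illustrative biases

def step(x, pulse=0.0):
    # x_{t+1} = sin(W x_t + b) + pulse, the pulse added to every unit
    return np.sin(W @ x + b) + pulse

def simulate(T, pulse_period=None, pulse_amp=0.0, x0=(0.1, -0.05)):
    x = np.array(x0, dtype=float)
    traj = np.empty((T, 2))
    for t in range(T):
        p = pulse_amp if (pulse_period and t % pulse_period == 0) else 0.0
        x = step(x, p)
        traj[t] = x
    return traj

free = simulate(2000)                                    # uncontrolled run
pulsed = simulate(2000, pulse_period=5, pulse_amp=0.8)   # periodic-pulse run
print("free-run  std of last 500 steps:", free[-500:].std(axis=0))
print("pulsed    std of last 500 steps:", pulsed[-500:].std(axis=0))
```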
5

Bertschinger, Nils, and Thomas Natschläger. "Real-Time Computation at the Edge of Chaos in Recurrent Neural Networks." Neural Computation 16, no. 7 (2004): 1413–36. http://dx.doi.org/10.1162/089976604323057443.

Full text
Abstract:
Depending on the connectivity, recurrent networks of simple computational units can show very different types of dynamics, ranging from totally ordered to chaotic. We analyze how the type of dynamics (ordered or chaotic) exhibited by randomly connected networks of threshold gates driven by a time-varying input signal depends on the parameters describing the distribution of the connectivity matrix. In particular, we calculate the critical boundary in parameter space where the transition from ordered to chaotic dynamics takes place. Employing a recently developed framework for analyzing real-tim
APA, Harvard, Vancouver, ISO, and other styles
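A minimal version of the separation experiment behind entry 5 can be run as follows: two copies of a randomly connected threshold-gate network receive the same binary input stream but start one bit apart, and whether their Hamming distance dies out or persists is a simple ordered-versus-chaotic diagnostic. The update rule and parameters here are simplified stand-ins, not the paper's exact model or its analytically computed critical boundary.

```python
# Simplified stand-in for the separation experiment: two copies of a random
# threshold-gate network get the same input stream but start one bit apart.
# A persisting Hamming distance indicates chaotic dynamics, a vanishing one
# ordered dynamics. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def final_separation(N=250, K=4, sigma2=1.0, u_bias=0.5, T=500):
    W = np.zeros((N, N))
    for i in range(N):
        idx = rng.choice(N, size=K, replace=False)     # K incoming connections per gate
        W[i, idx] = rng.normal(0.0, np.sqrt(sigma2), size=K)
    u = rng.choice([1.0, -1.0], size=T) + u_bias       # common time-varying input
    x = rng.choice([-1.0, 1.0], size=N)
    y = x.copy()
    y[0] *= -1                                         # flip one gate in the second copy
    for t in range(T):
        x = np.sign(W @ x + u[t])
        y = np.sign(W @ y + u[t])
    return float(np.mean(x != y))                      # surviving Hamming distance

for s2 in (0.2, 0.5, 1.0, 2.0, 4.0):
    print(f"sigma^2 = {s2:.1f}  separation = {final_separation(sigma2=s2):.3f}")
```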
6

Echenausía-Monroy, José Luis, Daniel Alejandro Magallón-García, Luis Javier Ontañón-García, Raul Rivera Rodriguez, Jonatan Pena Ramirez, and Joaquín Álvarez. "Does a Fractional-Order Recurrent Neural Network Improve the Identification of Chaotic Dynamics?" Fractal and Fractional 8, no. 11 (2024): 632. http://dx.doi.org/10.3390/fractalfract8110632.

Full text
Abstract:
This paper presents a quantitative study of the effects of using arbitrary-order operators in Neural Networks. It is based on a Recurrent Wavelet First-Order Neural Network (RWFONN), which can accurately identify several chaotic systems (measured by the mean square error and the coefficient of determination, also known as R-Squared, r2) under a fixed parameter scheme in the neural algorithm. Using fractional operators, we analyze whether the identification capabilities of the RWFONN are improved, and whether it can identify signals from fractional-order chaotic systems. The results presented i
APA, Harvard, Vancouver, ISO, and other styles
7

Fournier, Samantha J., and Pierfrancesco Urbani. "Statistical physics of learning in high-dimensional chaotic systems." Journal of Statistical Mechanics: Theory and Experiment 2023, no. 11 (2023): 113301. http://dx.doi.org/10.1088/1742-5468/ad082d.

Full text
Abstract:
In many complex systems, elementary units live in a chaotic environment and need to adapt their strategies to perform a task by extracting information from the environment and controlling the feedback loop on it. One of the main examples of systems of this kind is provided by recurrent neural networks. In this case, recurrent connections between neurons drive chaotic behavior, and when learning takes place, the response of the system to a perturbation should also take into account its feedback on the dynamics of the network itself. In this work, we consider an abstract model of a high
APA, Harvard, Vancouver, ISO, and other styles
8

Dong, En Zeng, Yang Du, Cheng Cheng Li, and Zai Ping Chen. "Image Encryption Scheme Based on Dual Hyper-Chaotic Recurrent Neural Networks." Key Engineering Materials 474-476 (April 2011): 599–604. http://dx.doi.org/10.4028/www.scientific.net/kem.474-476.599.

Full text
Abstract:
Based on two hyper-chaotic recurrent neural networks, a new image encryption scheme is presented in this paper. In the encryption scheme, the shuffling matrix is generated by using a Hopfield neural network, which is used to shuffle the pixel locations; the diffusing matrix is generated by using a cellular neural network, which is used to diffuse the pixel grey values by an XOR operation. Finally, through numerical simulation and security analysis, the effectiveness of the encryption scheme is verified. Due to the complex dynamical behavior of the hyper-chaotic systems, the encryption scheme has t
APA, Harvard, Vancouver, ISO, and other styles
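Entry 8's shuffle-then-diffuse structure can be illustrated with a toy sketch in which a logistic map stands in for the two hyper-chaotic neural networks that generate the shuffling and diffusing sequences; none of the paper's actual key schedule or network equations are reproduced.

```python
# Illustrative sketch of the shuffle-then-diffuse idea (not the paper's scheme):
# a logistic map stands in for the hyper-chaotic neural networks that generate
# the shuffling permutation and the diffusing keystream.
import numpy as np

def chaotic_sequence(n, x0=0.7, r=3.99):
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        seq[i] = x
    return seq

def encrypt(img, key1=0.7, key2=0.3):
    flat = img.flatten()
    n = flat.size
    # Shuffling: the sort order of a chaotic sequence gives a pixel permutation.
    perm = np.argsort(chaotic_sequence(n, x0=key1))
    shuffled = flat[perm]
    # Diffusion: XOR each pixel with a chaotic keystream byte.
    keystream = (chaotic_sequence(n, x0=key2) * 256).astype(np.uint8)
    cipher = shuffled.astype(np.uint8) ^ keystream
    return cipher.reshape(img.shape), perm, keystream

def decrypt(cipher, perm, keystream):
    flat = cipher.flatten() ^ keystream
    out = np.empty_like(flat)
    out[perm] = flat                   # invert the permutation
    return out.reshape(cipher.shape)

img = np.random.randint(0, 256, (8, 8), dtype=np.uint8)   # toy "image"
enc, perm, ks = encrypt(img)
assert np.array_equal(decrypt(enc, perm, ks), img)
```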
9

Kandıran, Engin, and Avadis Hacınlıyan. "Comparison of Feedforward and Recurrent Neural Network in Forecasting Chaotic Dynamical System." AJIT-e Online Academic Journal of Information Technology 10, no. 37 (2019): 31–44. http://dx.doi.org/10.5824/1309-1581.2019.2.002.x.

Full text
Abstract:
Artificial neural networks are commonly accepted as a very successful tool for global function approximation. Because of this reason, they are considered as a good approach to forecasting chaotic time series in many studies. For a given time series, the Lyapunov exponent is a good parameter to characterize the series as chaotic or not. In this study, we use three different neural network architectures to test capabilities of the neural network in forecasting time series generated from different dynamical systems. In addition to forecasting time series, using the feedforward neural network with
APA, Harvard, Vancouver, ISO, and other styles
10

Wu, Xiaoying, Yuanlong Chen, Jing Tian, and Liangliang Li. "Chaotic Dynamics of Discrete Multiple-Time Delayed Neural Networks of Ring Architecture Evoked by External Inputs." International Journal of Bifurcation and Chaos 26, no. 11 (2016): 1650179. http://dx.doi.org/10.1142/s0218127416501790.

Full text
Abstract:
In this paper, we consider a general class of discrete multiple-time delayed recurrent neural networks with external inputs. By applying a new transformation, we transform an m-neuron network model into a parameterized map from [Formula: see text] to [Formula: see text]. A chaotic invariant set of the neural networks system is obtained by using a family of projections from [Formula: see text] onto [Formula: see text]. Furthermore, we prove that the dynamics of this neural networks system restricted to the chaotic invariant set is topologically conjugate to the dynamics of the full shift map wi
APA, Harvard, Vancouver, ISO, and other styles
11

Wen, Tan, and Wang Yao-Nan. "Synchronization of an uncertain chaotic system via recurrent neural networks." Chinese Physics 14, no. 1 (2004): 72–76. http://dx.doi.org/10.1088/1009-1963/14/1/015.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Ryeu, Jin Kyung, and Ho Sun Chung. "Chaotic recurrent neural networks and their application to speech recognition." Neurocomputing 13, no. 2-4 (1996): 281–94. http://dx.doi.org/10.1016/0925-2312(95)00093-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Cechin, Adelmo L., Denise R. Pechmann, and Luiz P. L. de Oliveira. "Optimizing Markovian modeling of chaotic systems with recurrent neural networks." Chaos, Solitons & Fractals 37, no. 5 (2008): 1317–27. http://dx.doi.org/10.1016/j.chaos.2006.10.018.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Echenausía-Monroy, José Luis, Jonatan Pena Ramirez, Joaquín Álvarez, Raúl Rivera-Rodríguez, Luis Javier Ontañón-García, and Daniel Alejandro Magallón-García. "A Recurrent Neural Network for Identifying Multiple Chaotic Systems." Mathematics 12, no. 12 (2024): 1835. http://dx.doi.org/10.3390/math12121835.

Full text
Abstract:
This paper presents a First-Order Recurrent Neural Network activated by a wavelet function, in particular a Morlet wavelet, with a fixed set of parameters and capable of identifying multiple chaotic systems. By maintaining a fixed structure for the neural network and using the same activation function, the network can successfully identify the three state variables of several different chaotic systems, including the Chua, PWL-Rössler, Anishchenko–Astakhov, Álvarez-Curiel, Aizawa, and Rucklidge models. The performance of this approach was validated by numerical simulations in which the accuracy
APA, Harvard, Vancouver, ISO, and other styles
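To make the architecture of entry 14 more tangible, the toy sketch below defines a real-valued Morlet wavelet activation and uses it inside a single first-order recurrent unit with a generic gradient adaptation law. The dynamics, adaptation law, and gains are illustrative assumptions, not the RWFONN equations or parameters of the paper, and the problem is reduced to tracking a scalar test signal.

```python
# Toy illustration of a first-order recurrent unit with a Morlet wavelet
# activation. The identifier dynamics and the gradient adaptation law are
# generic, untuned assumptions made for illustration only.
import numpy as np

def morlet(v, omega0=5.0):
    # Real-valued Morlet wavelet: exp(-v^2 / 2) * cos(omega0 * v)
    return np.exp(-0.5 * v ** 2) * np.cos(omega0 * v)

def identify(signal, dt=0.01, a=10.0, gamma=50.0):
    """Track a measured scalar signal with xhat' = -a*xhat + w*morlet(x)."""
    xhat, w = 0.0, 0.0
    estimate = np.empty_like(signal)
    for k, x in enumerate(signal):
        e = x - xhat
        w += dt * gamma * e * morlet(x)            # untuned gradient adaptation of the weight
        xhat += dt * (-a * xhat + w * morlet(x))   # first-order recurrent dynamics
        estimate[k] = xhat
    return estimate

t = np.arange(0.0, 20.0, 0.01)
target = np.sin(t) + 0.5 * np.sin(2.7 * t)         # stand-in for one measured state variable
est = identify(target)
print("mean |error| over the last 500 samples:", np.mean(np.abs(target[-500:] - est[-500:])))
```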
15

Liu, Lu Bin, Attila Losonczy, and Zhenrui Liao. "tension: A Python package for FORCE learning." PLOS Computational Biology 18, no. 12 (2022): e1010722. http://dx.doi.org/10.1371/journal.pcbi.1010722.

Full text
Abstract:
First-Order, Reduced and Controlled Error (FORCE) learning and its variants are widely used to train chaotic recurrent neural networks (RNNs), and outperform gradient methods on certain tasks. However, there is currently no standard software framework for FORCE learning. We present tension, an object-oriented, open-source Python package that implements a TensorFlow / Keras API for FORCE. We show how rate networks, spiking networks, and networks constrained by biological data can all be trained using a shared, easily extensible high-level API. With the same resources, our implementation outperf
APA, Harvard, Vancouver, ISO, and other styles
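For orientation, the core FORCE algorithm that the tension package (entry 15) implements can be sketched in plain numpy: recursive least squares on the readout weights of a chaotic rate network whose output is fed back into it. This is an illustration of the algorithm only; it does not use or imitate the package's TensorFlow/Keras API, and all sizes and gains below are arbitrary.

```python
# Minimal numpy sketch of FORCE learning: recursive least squares (RLS) on a
# readout that is fed back into a chaotic rate network. Not the tension API.
import numpy as np

rng = np.random.default_rng(0)
N, dt, tau, g = 300, 0.1, 1.0, 1.5                 # g > 1 puts the random network in the chaotic regime
J = g * rng.standard_normal((N, N)) / np.sqrt(N)   # recurrent weights
w_fb = rng.uniform(-1, 1, N)                       # feedback weights
w_out = np.zeros(N)                                # readout weights to be learned
P = np.eye(N)                                      # RLS inverse-correlation matrix

T = 10000
target = np.sin(2 * np.pi * np.arange(T) * dt / 6.0)   # periodic target signal

x = 0.5 * rng.standard_normal(N)
for t in range(T):
    r = np.tanh(x)
    z = w_out @ r                                  # network output
    x += dt / tau * (-x + J @ r + w_fb * z)        # rate dynamics with output feedback
    if t % 2 == 0:                                 # RLS update every other step
        Pr = P @ r
        k = Pr / (1.0 + r @ Pr)
        P -= np.outer(k, Pr)
        w_out += (target[t] - z) * k               # reduce the instantaneous error

print("final |error|:", abs(target[-1] - w_out @ np.tanh(x)))
```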
16

Anh, Duong Tuan, and Ta Ngoc Huy Nam. "Chaotic time series prediction with deep belief networks: an empirical evaluation." Science & Technology Development Journal - Engineering and Technology 3, SI1 (2020): SI102–SI112. http://dx.doi.org/10.32508/stdjet.v3isi1.571.

Full text
Abstract:
Chaotic time series are widespread in several real world areas such as finance, environment, meteorology, traffic flow, weather. A chaotic time series is considered as generated from the deterministic dynamics of a nonlinear system. The chaotic system is sensitive to initial conditions; points that are arbitrarily close initially become exponentially further apart with progressing time. Therefore, it is challenging to make accurate prediction in chaotic time series. The prediction using conventional statistical techniques, the k-nearest neighbors algorithm, Multi-Layer Perceptron (MLP) neu
APA, Harvard, Vancouver, ISO, and other styles
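One of the simple baselines mentioned in entry 16, k-nearest-neighbour prediction of a chaotic series, can be reproduced in a few lines; the logistic map, embedding dimension, and neighbour count below are arbitrary toy choices, not the paper's datasets or settings.

```python
# Toy k-nearest-neighbour one-step predictor on a logistic-map series.
import numpy as np

def logistic(n, x0=0.4, r=4.0):
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = r * x[i - 1] * (1.0 - x[i - 1])
    return x

series = logistic(3000)
d, k, n_train = 3, 5, 2500                       # embedding dimension, neighbours, training size
emb = np.stack([series[i:i + d] for i in range(len(series) - d)])  # delay vectors
targets = series[d:]                             # value following each delay vector

errors = []
for i in range(n_train, len(emb)):
    dist = np.linalg.norm(emb[:n_train] - emb[i], axis=1)
    nn = np.argsort(dist)[:k]                    # indices of the k closest training vectors
    pred = targets[nn].mean()                    # average the neighbours' successors
    errors.append((pred - targets[i]) ** 2)
print("k-NN one-step MSE:", np.mean(errors))
```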
17

Zhou, Shuyu, Bo Liu, Jianxin Ren, et al. "A High-Security Probabilistic Constellation Shaping Transmission Scheme Based on Recurrent Neural Networks." Photonics 10, no. 10 (2023): 1078. http://dx.doi.org/10.3390/photonics10101078.

Full text
Abstract:
In this paper, a high-security probabilistic constellation shaping transmission scheme based on recurrent neural networks (RNNs) is proposed, in which the constellation point probabilistic distribution is generated based on recurrent neural network training. A 4D biplane fractional-order chaotic system is introduced to ensure the security performance of the system. The performance of the proposed scheme is verified in a 2 km seven-core optical transmission system. The RNN-trained probabilistic shaping scheme achieves a transmission gain of 1.23 dB compared to the standard 16QAM signal, 0.39 dB
APA, Harvard, Vancouver, ISO, and other styles
18

Chen, Yuan, and Abdul Khaliq. "Quantum Recurrent Neural Networks: Predicting the Dynamics of Oscillatory and Chaotic Systems." Algorithms 17, no. 4 (2024): 163. http://dx.doi.org/10.3390/a17040163.

Full text
Abstract:
In this study, we investigate Quantum Long Short-Term Memory and Quantum Gated Recurrent Unit integrated with Variational Quantum Circuits in modeling complex dynamical systems, including the Van der Pol oscillator, coupled oscillators, and the Lorenz system. We implement these advanced quantum machine learning techniques and compare their performance with traditional Long Short-Term Memory and Gated Recurrent Unit models. The results of our study reveal that the quantum-based models deliver superior precision and more stable loss metrics throughout 100 epochs for both the Van der Pol oscillat
APA, Harvard, Vancouver, ISO, and other styles
19

Soma, Ken-ichiro, Ryota Mori, Ryuichi Sato, Noriyuki Furumai, and Shigetoshi Nara. "Simultaneous Multichannel Signal Transfers via Chaos in a Recurrent Neural Network." Neural Computation 27, no. 5 (2015): 1083–101. http://dx.doi.org/10.1162/neco_a_00715.

Full text
Abstract:
We propose a neural network model that demonstrates the phenomenon of signal transfer between separated neuron groups via other chaotic neurons that show no apparent correlations with the input signal. The model is a recurrent neural network in which it is supposed that synchronous behavior between small groups of input and output neurons has been learned as fragments of high-dimensional memory patterns, and depletion of neural connections results in chaotic wandering dynamics. Computer experiments show that when a strong oscillatory signal is applied to an input group in the chaotic regime, the
APA, Harvard, Vancouver, ISO, and other styles
20

Zhang, Jia-Shu, and Xian-Ci Xiao. "Predicting Chaotic Time Series Using Recurrent Neural Network." Chinese Physics Letters 17, no. 2 (2000): 88–90. http://dx.doi.org/10.1088/0256-307x/17/2/004.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Smith, Anthony W., and David Zipser. "LEARNING SEQUENTIAL STRUCTURE WITH THE REAL-TIME RECURRENT LEARNING ALGORITHM." International Journal of Neural Systems 01, no. 02 (1989): 125–31. http://dx.doi.org/10.1142/s0129065789000037.

Full text
Abstract:
Recurrent connections in neural networks potentially allow information about events occurring in the past to be preserved and used in current computations. How effectively this potential is realized depends on the power of the learning algorithm used. As an example of a task requiring recurrency, Servan-Schreiber, Cleeremans, and McClelland [1] have applied a simple recurrent learning algorithm to the task of recognizing finite-state grammars of increasing difficulty. These nets showed considerable power and were able to learn fairly complex grammars by emulating the state machines that produced
APA, Harvard, Vancouver, ISO, and other styles
22

Engelken, Rainer, Alessandro Ingrosso, Ramin Khajeh, Sven Goedeke, and L. F. Abbott. "Input correlations impede suppression of chaos and learning in balanced firing-rate networks." PLOS Computational Biology 18, no. 12 (2022): e1010590. http://dx.doi.org/10.1371/journal.pcbi.1010590.

Full text
Abstract:
Neural circuits exhibit complex activity patterns, both spontaneously and evoked by external stimuli. Information encoding and learning in neural circuits depend on how well time-varying stimuli can control spontaneous network activity. We show that in firing-rate networks in the balanced state, external control of recurrent dynamics, i.e., the suppression of internally-generated chaotic variability, strongly depends on correlations in the input. A distinctive feature of balanced networks is that, because common external input is dynamically canceled by recurrent feedback, it is far more diffi
APA, Harvard, Vancouver, ISO, and other styles
23

Serrano-Pérez, José de Jesús, Guillermo Fernández-Anaya, Salvador Carrillo-Moreno, and Wen Yu. "New Results for Prediction of Chaotic Systems Using Deep Recurrent Neural Networks." Neural Processing Letters 53, no. 2 (2021): 1579–96. http://dx.doi.org/10.1007/s11063-021-10466-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Lu, Zhao, Leang-San Shieh, Guanrong Chen, and Jagdish Chandra. "Identification and Control Of Chaotic Systems Via Recurrent High-Order Neural Networks." Intelligent Automation & Soft Computing 13, no. 4 (2007): 357–72. http://dx.doi.org/10.1080/10798587.2007.10642969.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Chandra, Rohitash, and Mengjie Zhang. "Cooperative coevolution of Elman recurrent neural networks for chaotic time series prediction." Neurocomputing 86 (June 2012): 116–23. http://dx.doi.org/10.1016/j.neucom.2012.01.014.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Perez-Padron, J., C. Posadas-Castillo, J. Paz-Perez, E. Zambrano-Serrano, and M. A. Platas-Garza. "FPGA Realization and Lyapunov–Krasovskii Analysis for a Master-Slave Synchronization Scheme Involving Chaotic Systems and Time-Delay Neural Networks." Mathematical Problems in Engineering 2021 (September 23, 2021): 1–17. http://dx.doi.org/10.1155/2021/2604874.

Full text
Abstract:
In this paper, the trajectory tracking control and the field programmable gate array (FPGA) implementation between a recurrent neural network with time delay and a chaotic system are presented. The tracking error is globally asymptotically stabilized by means of a control law generated from the Lyapunov–Krasovskii and Lur’e theory. The applicability of the approach is illustrated by considering two different chaotic systems: Liu chaotic system and Genesio–Tesi chaotic system. The numerical results have shown the effectiveness of obtained theoretical results. Finally, the theoretical results ar
APA, Harvard, Vancouver, ISO, and other styles
27

Thivierge, Jean-Philippe, Eloïse Giraud, Michael Lynn, and Annie Théberge Charbonneau. "Key role of neuronal diversity in structured reservoir computing." Chaos: An Interdisciplinary Journal of Nonlinear Science 32, no. 11 (2022): 113130. http://dx.doi.org/10.1063/5.0111131.

Full text
Abstract:
Chaotic time series have been captured by reservoir computing models composed of a recurrent neural network whose output weights are trained in a supervised manner. These models, however, are typically limited to randomly connected networks of homogeneous units. Here, we propose a new class of structured reservoir models that incorporates a diversity of cell types and their known connections. In a first version of the model, the reservoir was composed of mean-rate units separated into pyramidal, parvalbumin, and somatostatin cells. Stability analysis of this model revealed two distinct dynamic
APA, Harvard, Vancouver, ISO, and other styles
28

Yidi, Zhang, Guo Shan, and Sun Mingzhu. "Front Waves of Chemical Reactions and Travelling Waves of Neural Activity." Journal of NeuroPhilosophy 1, no. 2 (2022): 222–39. https://doi.org/10.5281/zenodo.7254050.

Full text
Abstract:
Travelling waves crossing the nervous networks at mesoscopic/macroscopic scales have been correlated with different brain functions, from long-term memory to visual stimuli. Here we investigate a feasible relationship between wave generation/propagation in recurrent nervous networks and a physical/chemical model, namely the Belousov–Zhabotinsky reaction (BZ). Since BZ’s nonlinear, chaotic chemical process generates concentric/intersecting waves that closely resemble the diffusive nonlinear/chaotic oscillatory patterns crossing the nervous tissue, we aimed to investigate whether wav
APA, Harvard, Vancouver, ISO, and other styles
29

Seifter, Jared, and James A. Reggia. "Lambda and the Edge of Chaos in Recurrent Neural Networks." Artificial Life 21, no. 1 (2015): 55–71. http://dx.doi.org/10.1162/artl_a_00152.

Full text
Abstract:
The idea that there is an edge of chaos, a region in the space of dynamical systems having special meaning for complex living entities, has a long history in artificial life. The significance of this region was first emphasized in cellular automata models when a single simple measure, λCA, identified it as a transitional region between order and chaos. Here we introduce a parameter λNN that is inspired by λCA but is defined for recurrent neural networks. We show through a series of systematic computational experiments that λNN generally orders the dynamical behaviors of randomly connected/weig
APA, Harvard, Vancouver, ISO, and other styles
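Entry 29 defines its own order parameter λNN; as a generic stand-in for that kind of edge-of-chaos diagnostic, the sketch below estimates the largest Lyapunov exponent of a random tanh network by tracking the growth of a small perturbation while the weight gain g is varied. The measure and parameters are illustrative and are not the λNN of the paper.

```python
# Generic edge-of-chaos diagnostic: largest Lyapunov exponent of the map
# x_{t+1} = tanh(W x_t) for a random weight matrix W with gain g, estimated by
# repeatedly renormalizing a small perturbation. A sign change near g = 1 marks
# the order-to-chaos transition.
import numpy as np

def lyapunov(g, N=200, T=2000, eps=1e-8, seed=0):
    rng = np.random.default_rng(seed)
    W = g * rng.standard_normal((N, N)) / np.sqrt(N)
    x = rng.standard_normal(N)
    y = x + eps * rng.standard_normal(N) / np.sqrt(N)
    acc = 0.0
    for _ in range(T):
        x, y = np.tanh(W @ x), np.tanh(W @ y)
        d = np.linalg.norm(y - x)
        acc += np.log(d / eps)          # one-step expansion of the separation
        y = x + (y - x) * (eps / d)     # renormalize the separation to eps
    return acc / T

for g in (0.5, 0.9, 1.0, 1.1, 1.5, 2.0):
    print(f"g = {g:.1f}  lambda ~ {lyapunov(g):+.3f}")
```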
30

Antony, Veena, and Nainan Thangarasu. "Chaotic crow search enhanced CRNN: a next-gen approach for IoT botnet attack detection." Indonesian Journal of Electrical Engineering and Computer Science 38, no. 3 (2025): 1745. https://doi.org/10.11591/ijeecs.v38.i3.pp1745-1754.

Full text
Abstract:
Internet of things (IoT) botnet attack detection is crucial for reducing and identifying hostile threats in networks. To create efficient threat detection systems, deep learning (DL) and machine learning (ML) are currently being used in many sectors, mostly in information security. The botnet attack categorization problem is difficult as data dimensionality increases. By combining convolutional and recurrent neural layers, our work effectively addressed the vanishing and expanding gradient difficulties, improving the ability to capture spatial and temporal connections. The problem of weight de
APA, Harvard, Vancouver, ISO, and other styles
31

Vlachas, Pantelis R., Wonmin Byeon, Zhong Y. Wan, Themistoklis P. Sapsis, and Petros Koumoutsakos. "Data-driven forecasting of high-dimensional chaotic systems with long short-term memory networks." Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 474, no. 2213 (2018): 20170844. http://dx.doi.org/10.1098/rspa.2017.0844.

Full text
Abstract:
We introduce a data-driven forecasting method for high-dimensional chaotic systems using long short-term memory (LSTM) recurrent neural networks. The proposed LSTM neural networks perform inference of high-dimensional dynamical systems in their reduced order space and are shown to be an effective set of nonlinear approximators of their attractor. We demonstrate the forecasting performance of the LSTM and compare it with Gaussian processes (GPs) in time series obtained from the Lorenz 96 system, the Kuramoto–Sivashinsky equation and a prototype climate model. The LSTM networks outperform the GP
APA, Harvard, Vancouver, ISO, and other styles
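A toy one-step-ahead version of the LSTM forecasting setup in entry 31 is sketched below on Lorenz-96 data; the paper's reduced-order-space formulation, Gaussian-process comparison, and Kuramoto-Sivashinsky experiments are not reproduced, and the network size, integrator, and training budget are arbitrary.

```python
# Toy one-step-ahead LSTM forecaster on Lorenz-96 data (sketch choices only).
import numpy as np
import tensorflow as tf

def lorenz96(T, D=8, F=8.0, dt=0.01):
    x = F + 0.01 * np.random.randn(D)
    out = np.empty((T, D))
    for t in range(T):
        dx = (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F
        x = x + dt * dx                  # forward-Euler step (toy integrator)
        out[t] = x
    return out

data = lorenz96(5000)
window = 20
X = np.stack([data[i:i + window] for i in range(len(data) - window)])
Y = data[window:]                        # state immediately after each window

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, data.shape[1])),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(data.shape[1]),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:4000], Y[:4000], epochs=5, batch_size=64, verbose=0)
print("held-out MSE:", model.evaluate(X[4000:], Y[4000:], verbose=0))
```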
32

Tino, P., and M. Koteles. "Extracting finite-state representations from recurrent neural networks trained on chaotic symbolic sequences." IEEE Transactions on Neural Networks 10, no. 2 (1999): 284–302. http://dx.doi.org/10.1109/72.750555.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Wang, Dingsu, Huiyue Tang, Yuan Wang, and JingShen Wu. "Beautiful chaotic patterns generated using simple untrained recurrent neural networks under harmonic excitation." Nonlinear Dynamics 100, no. 4 (2020): 3887–905. http://dx.doi.org/10.1007/s11071-020-05640-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Li, Zhanying, Jun Xing, Li Bo, and Jue Wang. "Prediction of Ship Roll Motion based on Optimized Chaotic Diagonal Recurrent Neural Networks." International Journal of Multimedia and Ubiquitous Engineering 10, no. 4 (2015): 231–42. http://dx.doi.org/10.14257/ijmue.2015.10.4.22.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Lu, Z., L. Shieh, G. Chen, and N. Coleman. "Adaptive feedback linearization control of chaotic systems via recurrent high-order neural networks." Information Sciences 176, no. 16 (2006): 2337–54. http://dx.doi.org/10.1016/j.ins.2005.08.002.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Li, Xiaofan, Jian-an Fang, and Huiyuan Li. "Exponential Synchronization of Memristive Chaotic Recurrent Neural Networks Via Alternate Output Feedback Control." Asian Journal of Control 20, no. 1 (2017): 469–82. http://dx.doi.org/10.1002/asjc.1562.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Alomar, Miquel L., Vincent Canals, Nicolas Perez-Mora, Víctor Martínez-Moll, and Josep L. Rosselló. "FPGA-Based Stochastic Echo State Networks for Time-Series Forecasting." Computational Intelligence and Neuroscience 2016 (2016): 1–14. http://dx.doi.org/10.1155/2016/3917892.

Full text
Abstract:
Hardware implementation of artificial neural networks (ANNs) allows exploiting the inherent parallelism of these systems. Nevertheless, they require a large amount of resources in terms of area and power dissipation. Recently, Reservoir Computing (RC) has arisen as a strategic technique to design recurrent neural networks (RNNs) with simple learning capabilities. In this work, we show a new approach to implement RC systems with digital gates. The proposed method is based on the use of probabilistic computing concepts to reduce the hardware required to implement different arithmetic operations.
APA, Harvard, Vancouver, ISO, and other styles
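The reservoir-computing structure underlying entry 37 can be written down compactly in software; the sketch below is a plain echo state network with a ridge-regression readout and deliberately ignores the paper's actual contribution, the stochastic-computing FPGA implementation. The input series and hyperparameters are arbitrary.

```python
# Plain software echo state network (ESN) with a ridge-regression readout.
import numpy as np

rng = np.random.default_rng(1)
N, rho, leak = 400, 0.9, 0.3
W = rng.standard_normal((N, N))
W *= rho / np.max(np.abs(np.linalg.eigvals(W)))   # set the spectral radius to rho
W_in = rng.uniform(-0.5, 0.5, N)

def run_reservoir(u):
    x = np.zeros(N)
    states = np.empty((len(u), N))
    for t, ut in enumerate(u):
        x = (1 - leak) * x + leak * np.tanh(W @ x + W_in * ut)   # leaky reservoir update
        states[t] = x
    return states

t = np.arange(4000)
u = np.sin(0.2 * t) * np.cos(0.031 * t)           # stand-in scalar input series
states = run_reservoir(u[:-1])                    # drive the reservoir with u(t)
targets = u[1:]                                   # predict u(t+1)
washout = 100
A, y = states[washout:], targets[washout:]
W_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(N), A.T @ y)   # ridge-regression readout
pred = states[-500:] @ W_out
print("one-step MSE on the last 500 points:", np.mean((pred - targets[-500:]) ** 2))
```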
38

Ríos-Rivera, Daniel, Alma Y. Alanis, and Edgar N. Sanchez. "Neural-Impulsive Pinning Control for Complex Networks Based on V-Stability." Mathematics 8, no. 9 (2020): 1388. http://dx.doi.org/10.3390/math8091388.

Full text
Abstract:
In this work, a neural impulsive pinning controller for a twenty-node dynamical discrete complex network is presented. The node dynamics of the network are all different types of discrete versions of chaotic attractors of three dimensions. Using the V-stability method, we propose a criterion for selecting nodes to design pinning control, in which only a small fraction of the nodes is locally controlled in order to stabilize the network states at zero. A discrete recurrent high order neural network (RHONN) trained with extended Kalman filter (EKF) is used to identify the dynamics of controlled
APA, Harvard, Vancouver, ISO, and other styles
39

Zhou, Zhan, Jinliang Wang, Zhujun Jing, and Ruqi Wang. "COMPLEX DYNAMICAL BEHAVIORS IN DISCRETE-TIME RECURRENT NEURAL NETWORKS WITH ASYMMETRIC CONNECTION MATRIX." International Journal of Bifurcation and Chaos 16, no. 08 (2006): 2221–33. http://dx.doi.org/10.1142/s0218127406016021.

Full text
Abstract:
This paper investigates the discrete-time recurrent neural networks and aims to extend the previous works with symmetric connection matrix to the asymmetric connection matrix. We provide the sufficient conditions of existence for asymptotical stability of fixed point, flip and fold bifurcations, Marotto's chaos. Besides, we state the conditions of existence for the bounded trapping region including many fixed points, and attracting set contained in bounded region and chaotic set. To demonstrate the theoretical results of the paper, several numerical examples are provided. The theorems in this p
APA, Harvard, Vancouver, ISO, and other styles
40

Jacobsson, Henrik. "The Crystallizing Substochastic Sequential Machine Extractor: CrySSMEx." Neural Computation 18, no. 9 (2006): 2211–55. http://dx.doi.org/10.1162/neco.2006.18.9.2211.

Full text
Abstract:
This letter presents an algorithm, CrySSMEx, for extracting minimal finite state machine descriptions of dynamic systems such as recurrent neural networks. Unlike previous algorithms, CrySSMEx is parameter free and deterministic, and it efficiently generates a series of increasingly refined models. A novel finite stochastic model of dynamic systems and a novel vector quantization function have been developed to take into account the state-space dynamics of the system. The experiments show that (1) extraction from systems that can be described as regular grammars is trivial, (2) extraction from
APA, Harvard, Vancouver, ISO, and other styles
41

Wang, Yufei, Cheng Hua, and Ameer Hamza Khan. "Advances in Zeroing Neural Networks: Bio-Inspired Structures, Performance Enhancements, and Applications." Biomimetics 10, no. 5 (2025): 279. https://doi.org/10.3390/biomimetics10050279.

Full text
Abstract:
Zeroing neural networks (ZNN), as a specialized class of bio-inspired neural networks, emulate the adaptive mechanisms of biological systems, allowing for continuous adjustments in response to external variations. Compared to traditional numerical methods and common neural networks (such as gradient-based and recurrent neural networks), this adaptive capability enables the ZNN to rapidly and accurately solve time-varying problems. By leveraging dynamic zeroing error functions, the ZNN exhibits distinct advantages in addressing complex time-varying challenges, including matrix inversion, nonli
APA, Harvard, Vancouver, ISO, and other styles
42

Magallón-García, Daniel Alejandro, Luis Javier Ontanon-Garcia, Juan Hugo García-López, Guillermo Huerta-Cuéllar, and Carlos Soubervielle-Montalvo. "Identification of Chaotic Dynamics in Jerky-Based Systems by Recurrent Wavelet First-Order Neural Networks with a Morlet Wavelet Activation Function." Axioms 12, no. 2 (2023): 200. http://dx.doi.org/10.3390/axioms12020200.

Full text
Abstract:
Considering that chaotic systems are immersed in multiple areas of science and nature and that their dynamics are governed by a great sensitivity to the initial conditions and variations in their parameters, it is of great interest for the scientific community to have tools to characterize and reproduce these trajectories. Two dynamic chaotic systems whose equations are based on the jerky system are used as benchmarks, i.e., the Memristive Shaking Chaotic System (MSCS) and the Unstable Dissipative System of type I (UDSI). One characteristic common to them is their simple mathematical structure
APA, Harvard, Vancouver, ISO, and other styles
43

Li, Qinghai, and Rui-Chang Lin. "A New Approach for Chaotic Time Series Prediction Using Recurrent Neural Network." Mathematical Problems in Engineering 2016 (2016): 1–9. http://dx.doi.org/10.1155/2016/3542898.

Full text
Abstract:
A self-constructing fuzzy neural network (SCFNN) has been successfully used for chaotic time series prediction in the literature. In this paper, we propose the strategy of adding a recurrent path in each node of the hidden layer of SCFNN, resulting in a self-constructing recurrent fuzzy neural network (SCRFNN). This novel network does not increase complexity in fuzzy inference or learning process. Specifically, the structure learning is based on partition of the input space, and the parameter learning is based on the supervised gradient descent method using a delta adaptation law. This novel n
APA, Harvard, Vancouver, ISO, and other styles
44

Li, Xiaofan, Jian-an Fang, and Huiyuan Li. "Exponential adaptive synchronization of stochastic memristive chaotic recurrent neural networks with time-varying delays." Neurocomputing 267 (December 2017): 396–405. http://dx.doi.org/10.1016/j.neucom.2017.06.049.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Cui, Baotong, and Xuyang Lou. "Synchronization of chaotic recurrent neural networks with time-varying delays using nonlinear feedback control." Chaos, Solitons & Fractals 39, no. 1 (2009): 288–94. http://dx.doi.org/10.1016/j.chaos.2007.01.100.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Lee, Seungwon, Sung Hwan Won, Iickho Song, Seokho Yoon, and Sun Yong Kim. "On the Identification and Generation of Discrete-Time Chaotic Systems with Recurrent Neural Networks." Journal of Electrical Engineering & Technology 14, no. 4 (2019): 1699–706. http://dx.doi.org/10.1007/s42835-019-00103-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Yan, Zhilian, Yamin Liu, Xia Huang, Jianping Zhou, and Hao Shen. "Mixed ℋ∞ and ℒ2–ℒ∞ Anti-synchronization Control for Chaotic Delayed Recurrent Neural Networks." International Journal of Control, Automation and Systems 17, no. 12 (2019): 3158–69. http://dx.doi.org/10.1007/s12555-019-0263-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Nara, Shigetoshi, Peter Davis, Masayoshi Kawachi, and Hiroo Totsuji. "CHAOTIC MEMORY DYNAMICS IN A RECURRENT NEURAL NETWORK WITH CYCLE MEMORIES EMBEDDED BY PSEUDO-INVERSE METHOD." International Journal of Bifurcation and Chaos 05, no. 04 (1995): 1205–12. http://dx.doi.org/10.1142/s0218127495000867.

Full text
Abstract:
It is shown that hierarchical bifurcation of chaotic intermittency among memories can be induced by reducing neural connectivity when sequences of similar patterns are stored in a recurrent neural network using the pseudo-inverse method. This chaos is potentially useful for memory search and synthesis.
APA, Harvard, Vancouver, ISO, and other styles
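Entry 48's pseudo-inverse (projection) rule for embedding a cycle of patterns, and the connection dilution that triggers chaotic wandering among them, can be sketched as follows; the pattern size, cycle length, and dilution fraction are illustrative, not the paper's settings.

```python
# Sketch of the pseudo-inverse (projection) rule for embedding a pattern cycle
# xi^1 -> xi^2 -> ... -> xi^p -> xi^1, plus a connection-dilution step of the
# kind used to induce chaotic wandering among the memories.
import numpy as np

rng = np.random.default_rng(2)
N, p = 100, 5
Xi = rng.choice([-1.0, 1.0], size=(N, p))           # stored patterns as columns
Xi_next = np.roll(Xi, -1, axis=1)                   # desired successor of each pattern
W = Xi_next @ np.linalg.pinv(Xi)                    # pseudo-inverse rule: W xi^mu = xi^(mu+1)

def step(x, W):
    return np.sign(W @ x)

x = Xi[:, 0].copy()
for mu in range(1, p + 1):                          # full connectivity retrieves the cycle
    x = step(x, W)
    overlap = float(x @ Xi[:, mu % p]) / N
    print(f"step {mu}: overlap with xi^{mu % p} = {overlap:.2f}")

# Randomly deleting connections degrades the embedded cycle; reduced connectivity
# of this kind is what produces chaotic itinerancy among memories in the paper.
mask = rng.random((N, N)) < 0.5
W_diluted = W * mask
x = Xi[:, 0].copy()
for _ in range(p):
    x = step(x, W_diluted)
print("overlap after dilution:", float(x @ Xi[:, 0]) / N)
```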
49

Zhang, Lei. "Chaotic System Design Based on Recurrent Artificial Neural Network for the Simulation of EEG Time Series." International Journal of Cognitive Informatics and Natural Intelligence 13, no. 1 (2019): 25–35. http://dx.doi.org/10.4018/ijcini.2019010103.

Full text
Abstract:
Electroencephalogram (EEG) signals captured from brain activities demonstrate chaotic features, and can be simulated by nonlinear dynamic time series outputs of chaotic systems. This article presents the research work of chaotic system generator design based on artificial neural network (ANN), for studying the chaotic features of human brain dynamics. The ANN training performances of Nonlinear Auto-Regressive (NAR) model are evaluated for the generation and prediction of chaotic system time series outputs, based on varying the ANN architecture and the precision of the generated training data.
APA, Harvard, Vancouver, ISO, and other styles
50

Faranda, Davide, Mathieu Vrac, Pascal Yiou, et al. "Enhancing geophysical flow machine learning performance via scale separation." Nonlinear Processes in Geophysics 28, no. 3 (2021): 423–43. http://dx.doi.org/10.5194/npg-28-423-2021.

Full text
Abstract:
Recent advances in statistical and machine learning have opened the possibility of forecasting the behaviour of chaotic systems using recurrent neural networks. In this article we investigate the applicability of such a framework to geophysical flows, known to involve multiple scales in length, time and energy and to feature intermittency. We show that both multiscale dynamics and intermittency introduce severe limitations to the applicability of recurrent neural networks, both for short-term forecasts as well as for the reconstruction of the underlying attractor. We suggest that pos
APA, Harvard, Vancouver, ISO, and other styles