Journal articles on the topic 'Continuous time recurrent neural network'

Consult the top 50 journal articles for your research on the topic 'Continuous time recurrent neural network.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Osipov, Vasiliy, and Dmitriy Miloserdov. "Neural network event forecasting for robots with continuous training." Information and Control Systems, no. 5 (October 20, 2020): 33–42. http://dx.doi.org/10.31799/1684-8853-2020-5-33-42.

Abstract:
Introduction: High hopes for a significant expansion of human capabilities in various fields of activity are pinned on the creation and use of highly intelligent robots. To achieve this level of robot intelligence, it is necessary to successfully solve the problems of predicting the external environment and the state of the robots themselves. Solutions based on recurrent neural networks with controlled elements are promising neural network forecasting systems. Purpose: Search for appropriate neural network structures for predicting events. Development of approaches to controlling the associati
2

Gavaldà, Ricard, and Hava T. Siegelmann. "Discontinuities in Recurrent Neural Networks." Neural Computation 11, no. 3 (1999): 715–45. http://dx.doi.org/10.1162/089976699300016638.

Abstract:
This article studies the computational power of various discontinuous real computational models that are based on the classical analog recurrent neural network (ARNN). This ARNN consists of a finite number of neurons; each neuron computes a polynomial net function and a sigmoid-like continuous activation function. We introduce arithmetic networks as ARNN augmented with a few simple discontinuous (e.g., threshold or zero test) neurons. We argue that even with weights restricted to polynomial time computable reals, arithmetic networks are able to compute arbitrarily complex recursive functions. We
3

Cauwenberghs, G. "An analog VLSI recurrent neural network learning a continuous-time trajectory." IEEE Transactions on Neural Networks 7, no. 2 (1996): 346–61. http://dx.doi.org/10.1109/72.485671.

4

Sontag, Eduardo, and Héctor Sussmann. "Complete controllability of continuous-time recurrent neural networks." Systems & Control Letters 30, no. 4 (1997): 177–83. http://dx.doi.org/10.1016/s0167-6911(97)00002-9.

5

Das, S., and O. Olurotimi. "Noisy recurrent neural networks: the continuous-time case." IEEE Transactions on Neural Networks 9, no. 5 (1998): 913–36. http://dx.doi.org/10.1109/72.712164.

6

Yu, Jiali, Huajin Tang, and Haizhou Li. "Continuous attractors of discrete-time recurrent neural networks." Neural Computing and Applications 23, no. 1 (2012): 89–96. http://dx.doi.org/10.1007/s00521-012-0975-5.

7

Wang, Xin, Arun Jagota, Fernanda Botelho, and Max Garzon. "Absence of Cycles in Symmetric Neural Networks." Neural Computation 10, no. 5 (1998): 1235–49. http://dx.doi.org/10.1162/089976698300017430.

Abstract:
For a given recurrent neural network, a discrete-time model may have asymptotic dynamics different from that of a related continuous-time model. In this article, we consider a discrete-time model that discretizes the continuous-time leaky integrator model and study its parallel, sequential, block-sequential, and distributed dynamics for symmetric networks. We provide sufficient (and in many cases necessary) conditions for the discretized model to have the same cycle-free dynamics of the corresponding continuous-time model in symmetric networks.
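The discretized leaky-integrator dynamics studied in this abstract can be sketched in a few lines. The weights, step size, and activation below are illustrative assumptions of this listing, not parameters from the paper:

```python
import numpy as np

def step_parallel(x, W, theta, dt=0.5):
    # Synchronous Euler step: every unit reads the same previous state.
    return x + dt * (-x + np.tanh(W @ x + theta))

def step_sequential(x, W, theta, dt=0.5):
    # Sequential step: each unit immediately sees updates made before it.
    x = x.copy()
    for i in range(len(x)):
        x[i] += dt * (-x[i] + np.tanh(W[i] @ x + theta[i]))
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
W = (A + A.T) / 2                       # symmetric weights, as the article assumes
W *= 0.8 / np.abs(W).sum(axis=1).max()  # keep the update map contractive for this demo
theta = np.zeros(4)

xp = xs = rng.normal(size=4)
for _ in range(200):
    xp = step_parallel(xp, W, theta)
    xs = step_sequential(xs, W, theta)

# With symmetric weights and a small enough step, both update schedules
# settle on the same cycle-free fixed point.
print(np.allclose(xp, xs, atol=1e-5))  # → True
```

The article's question is precisely when such a discretization preserves the cycle-free behaviour of the continuous model; the contractive scaling above is one sufficient (but not necessary) way to guarantee it.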
8

Sato, Shozo, and Kazutoshi Gohara. "Fractal Transition in Continuous Recurrent Neural Networks." International Journal of Bifurcation and Chaos 11, no. 02 (2001): 421–34. http://dx.doi.org/10.1142/s0218127401002158.

Abstract:
A theory for continuous dynamical systems stochastically excited by temporal external inputs has been presented. The theory suggests that the dynamics of continuous-time recurrent neural networks (RNNs) is generally characterized by a set of continuous trajectories with a fractal-like structure in hyper-cylindrical phase space. We refer to this dynamics as the fractal transition. In this paper, three types of numerical experiments are discussed in order to investigate the learning process and noise effects in terms of the fractal transition. First, to analyze how an RNN learns desired input–ou
9

Tsung, Fu-Sheng, and Garrison W. Cottrell. "Learning in Recurrent Finite Difference Networks." International Journal of Neural Systems 06, no. 03 (1995): 249–56. http://dx.doi.org/10.1142/s0129065795000184.

Abstract:
A recurrent learning algorithm based on a finite difference discretization of continuous equations for neural networks is derived. This algorithm has the simplicity of discrete algorithms while retaining some essential characteristics of the continuous equations. In discrete networks learning smooth oscillations is difficult if the period of oscillation is too large. The network either grossly distorts the waveforms or is unable to learn at all. We show how the finite difference formulation can explain and overcome this problem. Formulas for learning time constants and time delays in this fram
10

Wang, Jun, and Guang Wu. "A multilayer recurrent neural network for solving continuous-time algebraic Riccati equations." Neural Networks 11, no. 5 (1998): 939–50. http://dx.doi.org/10.1016/s0893-6080(98)00034-3.

11

Beer, Randall D. "Parameter Space Structure of Continuous-Time Recurrent Neural Networks." Neural Computation 18, no. 12 (2006): 3009–51. http://dx.doi.org/10.1162/neco.2006.18.12.3009.

Abstract:
A fundamental challenge for any general theory of neural circuits is how to characterize the structure of the space of all possible circuits over a given model neuron. As a first step in this direction, this letter begins a systematic study of the global parameter space structure of continuous-time recurrent neural networks (CTRNNs), a class of neural models that is simple but dynamically universal. First, we explicitly compute the local bifurcation manifolds of CTRNNs. We then visualize the structure of these manifolds in net input space for small circuits. These visualizations reveal a set o
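The CTRNN class Beer analyzes is the standard model τᵢ·dyᵢ/dt = −yᵢ + Σⱼ wᵢⱼ σ(yⱼ + θⱼ) + Iᵢ with a logistic σ. A minimal forward-Euler simulation of a two-neuron circuit is easy to sketch; the parameter values here are an illustration for this listing, not taken from the article:

```python
import numpy as np

def sigma(x):
    return 1.0 / (1.0 + np.exp(-x))  # logistic activation

def ctrnn_step(y, W, theta, tau, I, dt=0.01):
    # Forward-Euler step of: tau_i dy_i/dt = -y_i + sum_j w_ij sigma(y_j + theta_j) + I_i
    return y + dt * (-y + W @ sigma(y + theta) + I) / tau

# Hypothetical two-neuron circuit (illustrative parameters)
W = np.array([[4.5, -1.0],
              [1.0,  4.5]])
theta = np.array([-2.75, -1.75])
tau = np.ones(2)
I = np.zeros(2)

y = np.zeros(2)
trace = [sigma(y + theta)]          # record firing rates, which stay in (0, 1)
for _ in range(5000):               # integrate to t = 50
    y = ctrnn_step(y, W, theta, tau, I)
    trace.append(sigma(y + theta))
print(np.array(trace).shape)  # → (5001, 2)
```

Sweeping W and θ over circuits like this one, and asking where fixed points appear or vanish, is exactly the kind of parameter-space exploration the abstract describes.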
12

Sontag, Eduardo D. "A learning result for continuous-time recurrent neural networks." Systems & Control Letters 34, no. 3 (1998): 151–58. http://dx.doi.org/10.1016/s0167-6911(98)00006-1.

13

Hermans, Michiel, and Benjamin Schrauwen. "Memory in linear recurrent neural networks in continuous time." Neural Networks 23, no. 3 (2010): 341–55. http://dx.doi.org/10.1016/j.neunet.2009.08.008.

14

Sato, Shozo, and Kazutoshi Gohara. "Poincaré Mapping of Continuous Recurrent Neural Networks Excited by Temporal External Input." International Journal of Bifurcation and Chaos 10, no. 07 (2000): 1677–95. http://dx.doi.org/10.1142/s0218127400001055.

Abstract:
This paper presents qualitative analyses of the dynamics of continuous-time recurrent neural networks (RNNs) with continuous temporal external input. We show how to analyze continuous-time RNNs using Poincaré mapping. We introduce an input space in which the external input is parametrized, and define the product space which consists of the input space and the phase space. We numerically examine the bifurcation caused by changing the external input in the product space. It is shown that the network dynamics can be considered as rapid transitions in the bifurcation diagram. From the bifurcation
15

Zhang, Quan-Ju, and Xiao Qing Lu. "A Recurrent Neural Network for Nonlinear Fractional Programming." Mathematical Problems in Engineering 2012 (2012): 1–18. http://dx.doi.org/10.1155/2012/807656.

Abstract:
This paper presents a novel recurrent time continuous neural network model which performs nonlinear fractional optimization subject to interval constraints on each of the optimization variables. The network is proved to be complete in the sense that the set of optima of the objective function to be minimized with interval constraints coincides with the set of equilibria of the neural network. It is also shown that the network is primal and globally convergent in the sense that its trajectory cannot escape from the feasible region and will converge to an exact optimal solution for any initial p
16

Funahashi, Ken-ichi, and Yuichi Nakamura. "Approximation of dynamical systems by continuous time recurrent neural networks." Neural Networks 6, no. 6 (1993): 801–6. http://dx.doi.org/10.1016/s0893-6080(05)80125-x.

17

Beer, Randall D. "On the Dynamics of Small Continuous-Time Recurrent Neural Networks." Adaptive Behavior 3, no. 4 (1995): 469–509. http://dx.doi.org/10.1177/105971239500300405.

18

Fung, C. C. Alan, K. Y. Michael Wong, and Si Wu. "A Moving Bump in a Continuous Manifold: A Comprehensive Study of the Tracking Dynamics of Continuous Attractor Neural Networks." Neural Computation 22, no. 3 (2010): 752–92. http://dx.doi.org/10.1162/neco.2009.07-08-824.

Abstract:
Understanding how the dynamics of a neural network is shaped by the network structure and, consequently, how the network structure facilitates the functions implemented by the neural system is at the core of using mathematical models to elucidate brain functions. This study investigates the tracking dynamics of continuous attractor neural networks (CANNs). Due to the translational invariance of neuronal recurrent interactions, CANNs can hold a continuous family of stationary states. They form a continuous manifold in which the neural system is neutrally stable. We systematically explore how th
19

Li, Xiao-Dong, J. K. L. Ho, and T. W. S. Chow. "Approximation of dynamical time-variant systems by continuous-time recurrent neural networks." IEEE Transactions on Circuits and Systems II: Express Briefs 52, no. 10 (2005): 656–60. http://dx.doi.org/10.1109/tcsii.2005.852006.

20

Liu, Pingzhou, and Qing-Long Han. "Discrete-Time Analogs for a Class of Continuous-Time Recurrent Neural Networks." IEEE Transactions on Neural Networks 18, no. 5 (2007): 1343–55. http://dx.doi.org/10.1109/tnn.2007.891593.

21

Jurado, F., and S. Lopez. "A wavelet neural control scheme for a quadrotor unmanned aerial vehicle." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 376, no. 2126 (2018): 20170248. http://dx.doi.org/10.1098/rsta.2017.0248.

Abstract:
Wavelets are designed to have compact support in both time and frequency, giving them the ability to represent a signal in the two-dimensional time–frequency plane. The Gaussian, the Mexican hat and the Morlet wavelets are crude wavelets that can be used only in continuous decomposition. The Morlet wavelet is complex-valued and suitable for feature extraction using the continuous wavelet transform. Continuous wavelets are favoured when high temporal and spectral resolution is required at all scales. In this paper, considering the properties from the Morlet wavelet and based on the structure of
22

Miloserdov, D. I. "Architectural Features of Neural Network Forecasting Software Systems with Continuous Training." INFORMACIONNYE TEHNOLOGII 26, no. 11 (2020): 641–47. http://dx.doi.org/10.17587/it.26.641-647.

Abstract:
In recent years, a method of neural network event forecasting has been developed, based on the use of a pair of recurrent neural networks with controlled elements. This method allows predictions to be made without interrupting training. However, its full use requires a sound software implementation. This study considers the problem of finding a software architecture that implements the method of neural network forecasting with continuous learning. It offers an improved prediction method that significantly reduces the required amount of memory. A procedure for accelerated calc
23

Ye, Feng, and Jun Yang. "A Deep Neural Network Model for Speaker Identification." Applied Sciences 11, no. 8 (2021): 3603. http://dx.doi.org/10.3390/app11083603.

Abstract:
Speaker identification is a classification task which aims to identify a subject from a given time-series sequential data. Since the speech signal is a continuous one-dimensional time series, most of the current research methods are based on convolutional neural network (CNN) or recurrent neural network (RNN). Indeed, these methods perform well in many tasks, but there is no attempt to combine these two network models to study the speaker identification task. Due to the spectrogram that a speech signal contains, the spatial features of voiceprint (which corresponds to the voice spectrum) and C
24

Qiao, Chen, Wen-Feng Jing, Jian Fang, and Yu-Ping Wang. "The general critical analysis for continuous-time UPPAM recurrent neural networks." Neurocomputing 175 (January 2016): 40–46. http://dx.doi.org/10.1016/j.neucom.2015.09.103.

25

Galicki, M., L. Leistritz, and H. Witte. "Learning continuous trajectories in recurrent neural networks with time-dependent weights." IEEE Transactions on Neural Networks 10, no. 4 (1999): 741–56. http://dx.doi.org/10.1109/72.774210.

26

Hu, Sanqing, and Jun Wang. "Global stability of a class of continuous-time recurrent neural networks." IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications 49, no. 9 (2002): 1334–47. http://dx.doi.org/10.1109/tcsi.2002.802360.

27

Williams, Hywel, and Jason Noble. "Homeostatic plasticity improves signal propagation in continuous-time recurrent neural networks." Biosystems 87, no. 2-3 (2007): 252–59. http://dx.doi.org/10.1016/j.biosystems.2006.09.020.

28

Zhang, Weiwei, Hui Liu, Xuncheng Wu, Lingyun Xiao, Yubin Qian, and Zhi Fang. "Lane marking detection and classification with combined deep neural network for driver assistance." Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering 233, no. 5 (2018): 1259–68. http://dx.doi.org/10.1177/0954407018768659.

Abstract:
An efficient approach for lane marking detection and classification by the combination of convolution neural network and recurrent neural network is proposed in this paper. First, convolution neural network is trained for lane marking features extraction, and then these convolution neural network features of continuous frames are transferred to recurrent neural network model for lane boundary detection and classification in the time domain. At last, a lane boundary fitting method based on dynamic programming is proposed to improve the lane detection accuracy and robustness. The method presente
29

Liao, Bolin, and Qiuhong Xiang. "Robustness Analyses and Optimal Sampling Gap of Recurrent Neural Network for Dynamic Matrix Pseudoinversion." Journal of Advanced Computational Intelligence and Intelligent Informatics 21, no. 5 (2017): 778–84. http://dx.doi.org/10.20965/jaciii.2017.p0778.

Abstract:
This study analyses the robustness and convergence characteristics of a neural network. First, a special class of recurrent neural network (RNN), termed a continuous-time Zhang neural network (CTZNN) model, is presented and investigated for dynamic matrix pseudoinversion. Theoretical analysis of the CTZNN model demonstrates that it has good robustness against various types of noise. In addition, considering the requirements of digital implementation and online computation, the optimal sampling gap for a discrete-time Zhang neural network (DTZNN) model under noisy environments is proposed. Fina
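The Zhang neural network (ZNN) design principle behind this model can be illustrated on a scalar analogue: to track x(t) = 1/a(t), define the error e(t) = a(t)x(t) − 1 and impose exponential decay ė = −γe, then solve for ẋ. This toy version is a simplification for this listing; the paper treats the full dynamic matrix pseudoinverse case:

```python
import math

gamma, dt = 10.0, 1e-3
a  = lambda t: 2.0 + math.sin(t)   # time-varying coefficient (illustrative)
da = lambda t: math.cos(t)         # its known derivative

x, t = 1.0, 0.0                    # deliberately wrong initial state
for _ in range(5000):              # forward-Euler integration to t = 5
    e = a(t) * x - 1.0             # Zhang error function
    # ZNN law: a*dx/dt + da*x = -gamma*e  =>  dx/dt = (-gamma*e - da*x) / a
    x += dt * (-gamma * e - da(t) * x) / a(t)
    t += dt

print(abs(x - 1.0 / a(t)))         # tracking error stays small as a(t) moves
```

Because the derivative da(t) enters the update law, the design tracks the moving solution rather than chasing it with a lag, which is the property the abstract's robustness analysis builds on.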
30

Wu, Xing, Hanlu Jin, Xueming Ye, et al. "Multiscale Convolutional and Recurrent Neural Network for Quality Prediction of Continuous Casting Slabs." Processes 9, no. 1 (2020): 33. http://dx.doi.org/10.3390/pr9010033.

Abstract:
Quality prediction in the continuous casting process is of great significance to the quality improvement of casting slabs. Due to the uncertainty and nonlinear relationship between the quality of continuous casting slabs (CCSs) and various factors, reliable prediction of CCS quality poses a challenge to the steel industry. However, traditional prediction models based on domain knowledge and expertise are difficult to adapt to the changes in multiple operating conditions and raw materials from various enterprises. To meet the challenge, we propose a framework with a multiscale convolutional and
31

Jia, YuKang, Zhicheng Wu, Yanyan Xu, Dengfeng Ke, and Kaile Su. "Long Short-Term Memory Projection Recurrent Neural Network Architectures for Piano’s Continuous Note Recognition." Journal of Robotics 2017 (2017): 1–7. http://dx.doi.org/10.1155/2017/2061827.

Abstract:
Long Short-Term Memory (LSTM) is a kind of Recurrent Neural Network (RNN) relating to time series, which has achieved good performance in speech recognition and image recognition. Long Short-Term Memory Projection (LSTMP) is a variant of LSTM to further optimize speed and performance of LSTM by adding a projection layer. As LSTM and LSTMP have performed well in pattern recognition, in this paper, we combine them with Connectionist Temporal Classification (CTC) to study piano’s continuous note recognition for robotics. Based on the Beijing Forestry University music library, we conduct experimen
32

Kulawik, Adam, Joanna Wróbel, and Alexey Mikhailovich Ikonnikov. "Model of the Austenite Decomposition during Cooling of the Medium Carbon Steel Using LSTM Recurrent Neural Network." Materials 14, no. 16 (2021): 4492. http://dx.doi.org/10.3390/ma14164492.

Abstract:
The motivation of the presented paper is the desire to create a universal tool to analyse the process of austenite decomposition during the cooling process of various steel grades. The presented analysis concerns the application of Recurrent Artificial Neural Networks (RANN) of the Long Short-Term Memory (LSTM) type for the analysis of the transition path of the cooling curve. This type of network was selected due to its ability to predict events in time sequences. The proposed generalisation allows for the determination of the austenite transformation during the continuous cooling process for
33

Chow, T. W. S., and Xiao-Dong Li. "Modeling of continuous time dynamical systems with input by recurrent neural networks." IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications 47, no. 4 (2000): 575–78. http://dx.doi.org/10.1109/81.841860.

34

Zhang, Huaguang, Zhanshan Wang, and Derong Liu. "A Comprehensive Review of Stability Analysis of Continuous-Time Recurrent Neural Networks." IEEE Transactions on Neural Networks and Learning Systems 25, no. 7 (2014): 1229–62. http://dx.doi.org/10.1109/tnnls.2014.2317880.

35

Hu, Sanqing, and Jun Wang. "Absolute exponential stability of a class of continuous-time recurrent neural networks." IEEE Transactions on Neural Networks 14, no. 1 (2003): 35–45. http://dx.doi.org/10.1109/tnn.2002.806954.

36

Luna-Perejón, Francisco, Manuel Jesús Domínguez-Morales, and Antón Civit-Balcells. "Wearable Fall Detector Using Recurrent Neural Networks." Sensors 19, no. 22 (2019): 4885. http://dx.doi.org/10.3390/s19224885.

Abstract:
Falls have become a relevant public health issue due to their high prevalence and negative effects in elderly people. Wearable fall detector devices allow the implementation of continuous and ubiquitous monitoring systems. The effectiveness for analyzing temporal signals with low energy consumption is one of the most relevant characteristics of these devices. Recurrent neural networks (RNNs) have demonstrated a great accuracy in some problems that require analyzing sequential inputs. However, getting appropriate response times in low power microcontrollers remains a difficult task due to their
37

Cui, Yuwei, Subutai Ahmad, and Jeff Hawkins. "Continuous Online Sequence Learning with an Unsupervised Neural Network Model." Neural Computation 28, no. 11 (2016): 2474–504. http://dx.doi.org/10.1162/neco_a_00893.

Abstract:
The ability to recognize and predict temporal sequences of sensory inputs is vital for survival in natural environments. Based on many known properties of cortical neurons, hierarchical temporal memory (HTM) sequence memory recently has been proposed as a theoretical framework for sequence learning in the cortex. In this letter, we analyze properties of HTM sequence memory and apply it to sequence learning and prediction problems with streaming data. We show the model is able to continuously learn a large number of variable order temporal sequences using an unsupervised Hebbian-like learning r
38

Vázquez, Luis A., Francisco Jurado, and Alma Y. Alanís. "Decentralized Identification and Control in Real-Time of a Robot Manipulator via Recurrent Wavelet First-Order Neural Network." Mathematical Problems in Engineering 2015 (2015): 1–12. http://dx.doi.org/10.1155/2015/451049.

Abstract:
A decentralized recurrent wavelet first-order neural network (RWFONN) structure is presented. The use of a wavelet Morlet activation function allows proposing a neural structure in continuous time of a single layer and a single neuron in order to identify online in a series-parallel configuration, using the filtered error (FE) training algorithm, the dynamics behavior of each joint for a two-degree-of-freedom (DOF) vertical robot manipulator, whose parameters such as friction and inertia are unknown. Based on the RWFONN subsystem, a decentralized neural controller is designed via backstepping
39

Sun, Min, Maoying Tian, and Yiju Wang. "Discrete-Time Zhang Neural Networks for Time-Varying Nonlinear Optimization." Discrete Dynamics in Nature and Society 2019 (April 8, 2019): 1–14. http://dx.doi.org/10.1155/2019/4745759.

Abstract:
As a special kind of recurrent neural networks, Zhang neural network (ZNN) has been successfully applied to various time-variant problems solving. In this paper, we present three Zhang et al. discretization (ZeaD) formulas, including a special two-step ZeaD formula, a general two-step ZeaD formula, and a general five-step ZeaD formula, and prove that the special and general two-step ZeaD formulas are convergent while the general five-step ZeaD formula is not zero-stable and thus is divergent. Then, to solve the time-varying nonlinear optimization (TVNO) in real time, based on the Taylor series
40

Khan, Muhammad Ashfaq. "HCRNNIDS: Hybrid Convolutional Recurrent Neural Network-Based Network Intrusion Detection System." Processes 9, no. 5 (2021): 834. http://dx.doi.org/10.3390/pr9050834.

Abstract:
Nowadays, network attacks are the most crucial problem of modern society. All networks, from small to large, are vulnerable to network threats. An intrusion detection (ID) system is critical for mitigating and identifying malicious threats in networks. Currently, deep learning (DL) and machine learning (ML) are being applied in different domains, especially information security, for developing effective ID systems. These ID systems are capable of detecting malicious threats automatically and on time. However, malicious threats are occurring and changing continuously, so the network requires a
41

Wu, Ga, Buser Say, and Scott Sanner. "Scalable Planning with Deep Neural Network Learned Transition Models." Journal of Artificial Intelligence Research 68 (July 20, 2020): 571–606. http://dx.doi.org/10.1613/jair.1.11829.

Abstract:
In many complex planning problems with factored, continuous state and action spaces such as Reservoir Control, Heating Ventilation and Air Conditioning (HVAC), and Navigation domains, it is difficult to obtain a model of the complex nonlinear dynamics that govern state evolution. However, the ubiquity of modern sensors allows us to collect large quantities of data from each of these complex systems and build accurate, nonlinear deep neural network models of their state transitions. But there remains one major problem for the task of control – how can we plan with deep network learned transitio
42

Chow, T. W. S., Xiao-Dong Li, and Yong Fang. "A real-time learning control approach for nonlinear continuous-time system using recurrent neural networks." IEEE Transactions on Industrial Electronics 47, no. 2 (2000): 478–86. http://dx.doi.org/10.1109/41.836364.

43

Miller, Paul, and Xiao-Jing Wang. "Power-Law Neuronal Fluctuations in a Recurrent Network Model of Parametric Working Memory." Journal of Neurophysiology 95, no. 2 (2006): 1099–114. http://dx.doi.org/10.1152/jn.00491.2005.

Abstract:
In a working memory system, persistent activity maintains information in the absence of external stimulation, therefore the time scale and structure of correlated neural fluctuations reflect the intrinsic microcircuit dynamics rather than direct responses to sensory inputs. Here we show that a parametric working memory model capable of graded persistent activity is characterized by arbitrarily long correlation times, with Fano factors and power spectra of neural activity described by the power laws of a random walk. Collective drifts of the mnemonic firing pattern induce long-term noise correl
44

Kimura, Masahiro. "On Unique Representations of Certain Dynamical Systems Produced by Continuous-Time Recurrent Neural Networks." Neural Computation 14, no. 12 (2002): 2981–96. http://dx.doi.org/10.1162/089976602760805377.

Abstract:
This article extends previous mathematical studies on elucidating the redundancy for describing functions by feedforward neural networks (FNNs) to the elucidation of redundancy for describing dynamical systems (DSs) by continuous-time recurrent neural networks (RNNs). In order to approximate a DS on Rn using an RNN with n visible units, an n—dimensional affine neural dynamical system (A-NDS) can be used as the DS actually produced by the above RNN under an affine map from its visible state-space Rn to its hidden state-space. Therefore, we consider the problem of clarifying the redundancy for d
45

Dash, Debadatta, Paul Ferrari, Satwik Dutta, and Jun Wang. "NeuroVAD: Real-Time Voice Activity Detection from Non-Invasive Neuromagnetic Signals." Sensors 20, no. 8 (2020): 2248. http://dx.doi.org/10.3390/s20082248.

Abstract:
Neural speech decoding-driven brain-computer interface (BCI) or speech-BCI is a novel paradigm for exploring communication restoration for locked-in (fully paralyzed but aware) patients. Speech-BCIs aim to map a direct transformation from neural signals to text or speech, which has the potential for a higher communication rate than the current BCIs. Although recent progress has demonstrated the potential of speech-BCIs from either invasive or non-invasive neural signals, the majority of the systems developed so far still assume knowing the onset and offset of the speech utterances within the c
46

Hu, Sanqing, and Jun Wang. "Global asymptotic stability and global exponential stability of continuous-time recurrent neural networks." IEEE Transactions on Automatic Control 47, no. 5 (2002): 802–7. http://dx.doi.org/10.1109/tac.2002.1000277.

47

Santos, José, and Ángel Campo. "Biped locomotion control with evolved adaptive center-crossing continuous time recurrent neural networks." Neurocomputing 86 (June 2012): 86–96. http://dx.doi.org/10.1016/j.neucom.2012.01.009.

48

Zeng, Zhigang, and Tingwen Huang. "New passivity analysis of continuous-time recurrent neural networks with multiple discrete delays." Journal of Industrial & Management Optimization 7, no. 2 (2011): 283–89. http://dx.doi.org/10.3934/jimo.2011.7.283.

49

Albertini, Francesca, and Paolo Dai Pra. "Recurrent neural networks coupled with linear systems: Observability in continuous and discrete time." Systems & Control Letters 27, no. 2 (1996): 109–16. http://dx.doi.org/10.1016/0167-6911(95)00042-9.

50

Onyekpe, Uche, Vasile Palade, and Stratis Kanarachos. "Learning to Localise Automated Vehicles in Challenging Environments Using Inertial Navigation Systems (INS)." Applied Sciences 11, no. 3 (2021): 1270. http://dx.doi.org/10.3390/app11031270.

Abstract:
An approach based on Artificial Neural Networks is proposed in this paper to improve the localisation accuracy of Inertial Navigation Systems (INS)/Global Navigation Satellite System (GNSS) based aided navigation during the absence of GNSS signals. The INS can be used to continuously position autonomous vehicles during GNSS signal losses around urban canyons, bridges, tunnels and trees, however, it suffers from unbounded exponential error drifts cascaded over time during the multiple integrations of the accelerometer and gyroscope measurements to position. More so, the error drift is character