To see the other types of publications on this topic, follow the link: Recurrent neural networks.

Journal articles on the topic 'Recurrent neural networks'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Recurrent neural networks.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Mu, Yangzi, Mengxing Huang, Chunyang Ye, and Qingzhou Wu. "Diagnosis Prediction via Recurrent Neural Networks." International Journal of Machine Learning and Computing 8, no. 2 (2018): 117–20. http://dx.doi.org/10.18178/ijmlc.2018.8.2.673.

2

Lee, Changki. "Image Caption Generation using Recurrent Neural Network." Journal of KIISE 43, no. 8 (2016): 878–82. http://dx.doi.org/10.5626/jok.2016.43.8.878.

3

D, Sreekanth. "Metro Water Fraudulent Prediction in Houses Using Convolutional Neural Network and Recurrent Neural Network." Revista Gestão Inovação e Tecnologias 11, no. 4 (2021): 1177–87. http://dx.doi.org/10.47059/revistageintec.v11i4.2177.

4

Wutsqa, Dhoriva Urwatul, and Anisa Nurjanah. "Breast Cancer Classification Using Fuzzy Elman Recurrent Neural Network." Journal of Advanced Research in Dynamical and Control Systems 11, no. 11-SPECIAL ISSUE (2019): 946–53. http://dx.doi.org/10.5373/jardcs/v11sp11/20193119.

5

Kumar, G. Prem, and P. Venkataram. "Network restoration using recurrent neural networks." International Journal of Network Management 8, no. 5 (1998): 264–73. http://dx.doi.org/10.1002/(sici)1099-1190(199809/10)8:5<264::aid-nem298>3.0.co;2-o.

6

Ziemke, Tom. "Radar Image Segmentation Using Self-Adapting Recurrent Networks." International Journal of Neural Systems 08, no. 01 (1997): 47–54. http://dx.doi.org/10.1142/s0129065797000070.

Abstract:
This paper presents a novel approach to the segmentation and integration of (radar) images using a second-order recurrent artificial neural network architecture consisting of two sub-networks: a function network that classifies radar measurements into four different categories of objects in sea environments (water, oil spills, land and boats), and a context network that dynamically computes the function network's input weights. It is shown that in experiments (using simulated radar images) this mechanism outperforms conventional artificial neural networks since it allows the network to learn t
7

Fomenko, Volodymyr, Heorhii Loutskii, Pavlo Rehida, and Artem Volokyta. "Thematic Texts Generation Issues Based on Recurrent Neural Networks and Word2Vec." Technical Sciences and Technologies, no. 4(10) (2017): 110–15. http://dx.doi.org/10.25140/2411-5363-2017-4(10)-110-115.

8

Deshmukh, Rahul P., and A. A. Ghatol. "Short Term Flood Forecasting Using Recurrent Neural Networks: A Comparative Study." International Journal of Engineering and Technology 2, no. 5 (2010): 430–34. http://dx.doi.org/10.7763/ijet.2010.v2.160.

9

Kalinin, Maxim, Vasiliy Krundyshev, and Evgeny Zubkov. "Estimation of applicability of modern neural network methods for preventing cyberthreats to self-organizing network infrastructures of digital economy platforms." SHS Web of Conferences 44 (2018): 00044. http://dx.doi.org/10.1051/shsconf/20184400044.

Abstract:
The problems of applying neural network methods to the prevention of cyberthreats against flexible self-organizing network infrastructures of digital economy platforms (vehicular ad hoc networks, wireless sensor networks, industrial IoT, “smart buildings” and “smart cities”) are considered. The applicability of the classic perceptron neural network, recurrent, deep and LSTM neural networks, and neural network ensembles under the restrictive conditions of fast training and big data processing is estimated. The use of neural networks with a complex architecture – recurrent and LSTM neural network
10

Back, Andrew D., and Ah Chung Tsoi. "A Low-Sensitivity Recurrent Neural Network." Neural Computation 10, no. 1 (1998): 165–88. http://dx.doi.org/10.1162/089976698300017935.

Abstract:
The problem of high sensitivity in modeling is well known. Small perturbations in the model parameters may result in large, undesired changes in the model behavior. A number of authors have considered the issue of sensitivity in feedforward neural networks from a probabilistic perspective. Less attention has been given to such issues in recurrent neural networks. In this article, we present a new recurrent neural network architecture that is capable of significantly improved parameter sensitivity properties compared to existing recurrent neural networks. The new recurrent neural network gener
11

Bandyopadhyay, Samir Kumar. "Detection of Fraud Transactions Using Recurrent Neural Network during COVID-19." Journal of Advanced Research in Medical Science & Technology 07, no. 03 (2020): 16–21. http://dx.doi.org/10.24321/2394.6539.202012.

Abstract:
Online transactions are becoming more popular in the present situation, where the globe is facing an unknown disease, COVID-19. Authorities of several countries have now requested that people use cashless transactions as far as possible. Practically, it is not always possible to use them in all transactions. Since the number of such cashless transactions has been increasing during the lockdown period due to COVID-19, fraudulent transactions are also increasing rapidly. Fraud can be analysed by examining the series of transactions a customer has made previously. Normally banks or
12

Liu, Qingshan, Jinde Cao, and Guanrong Chen. "A Novel Recurrent Neural Network with Finite-Time Convergence for Linear Programming." Neural Computation 22, no. 11 (2010): 2962–78. http://dx.doi.org/10.1162/neco_a_00029.

Abstract:
In this letter, a novel recurrent neural network based on the gradient method is proposed for solving linear programming problems. Finite-time convergence of the proposed neural network is proved by using the Lyapunov method. Compared with the existing neural networks for linear programming, the proposed neural network is globally convergent to exact optimal solutions in finite time, which is remarkable and rare in the literature of neural networks for optimization. Some numerical examples are given to show the effectiveness and excellent performance of the new recurrent neural network.
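A minimal sketch of the general idea behind gradient-based recurrent networks for linear programming: the network state follows a (sub)gradient flow of a penalty energy, which an Euler loop can simulate. The exact-penalty form, the constants mu and dt, and the toy problem below are illustrative assumptions; the paper's specific dynamics and its finite-time Lyapunov analysis are not reproduced.

```python
import numpy as np

# Toy LP: minimize c @ x  subject to  A @ x = b, x >= 0.
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

x = np.array([0.8, 0.8])  # network state
mu, dt = 10.0, 1e-3       # penalty weight and Euler step (assumed)

for _ in range(20000):
    # Subgradient of the exact-penalty energy
    #   E(x) = c @ x + mu * ||A @ x - b||_1 + mu * sum(max(-x, 0))
    grad = c + mu * (A.T @ np.sign(A @ x - b)) - mu * (x < 0)
    x = x - dt * grad     # Euler step of the recurrent dynamics

print(np.round(x, 2))     # settles near the optimal vertex [1, 0]
```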
13

Vahed, A., and C. W. Omlin. "A Machine Learning Method for Extracting Symbolic Knowledge from Recurrent Neural Networks." Neural Computation 16, no. 1 (2004): 59–71. http://dx.doi.org/10.1162/08997660460733994.

Abstract:
Neural networks do not readily provide an explanation of the knowledge stored in their weights as part of their information processing. Until recently, neural networks were considered to be black boxes, with the knowledge stored in their weights not readily accessible. Since then, research has resulted in a number of algorithms for extracting knowledge in symbolic form from trained neural networks. This article addresses the extraction of knowledge in symbolic form from recurrent neural networks trained to behave like deterministic finite-state automata (DFAs). To date, methods used to extract
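The extraction recipe this abstract refers to is commonly summarized as: run the trained network, quantize its hidden states, and read DFA transitions off the quantized states. The sketch below uses a random, untrained network and a crude sign-pattern quantizer purely to show the mechanics; the paper's actual extraction algorithm differs.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)
n = 8
W = 0.5 * rng.standard_normal((n, n))
V = rng.standard_normal((n, 2))          # one input column per symbol

def step(h, sym):
    return np.tanh(W @ h + V[:, sym])

# Run the network over all short binary strings, logging transitions.
states, transitions = [], []
for bits in product([0, 1], repeat=5):
    h = np.zeros(n)
    for bit in bits:
        h_next = step(h, bit)
        states.append(h_next)
        transitions.append((h.copy(), bit, h_next))
        h = h_next

# Quantize hidden states (sign pattern as a stand-in for clustering)
# and read off DFA transitions; conflicts are resolved arbitrarily
# in this crude sketch.
label = lambda h: tuple(np.sign(h).astype(int))
dfa = {(label(h), bit): label(h_next) for h, bit, h_next in transitions}
print(len({label(s) for s in states}), "states,", len(dfa), "transitions")
```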
14

Nguyen, Viet-Hung, Minh-Tuan Nguyen, Jeongsik Choi, and Yong-Hwa Kim. "NLOS Identification in WLANs Using Deep LSTM with CNN Features." Sensors 18, no. 11 (2018): 4057. http://dx.doi.org/10.3390/s18114057.

Abstract:
Identifying channel states as line-of-sight or non-line-of-sight helps to optimize location-based services in wireless communications. The received signal strength identification and channel state information are used to estimate channel conditions for orthogonal frequency division multiplexing systems in indoor wireless local area networks. This paper proposes a joint convolutional neural network and recurrent neural network architecture to classify channel conditions. Convolutional neural networks extract features from frequency-domain characteristics of channel state information data and
15

Grossberg, Stephen. "Recurrent neural networks." Scholarpedia 8, no. 2 (2013): 1888. http://dx.doi.org/10.4249/scholarpedia.1888.

16

Hammer, Barbara, and Peter Tiňo. "Recurrent Neural Networks with Small Weights Implement Definite Memory Machines." Neural Computation 15, no. 8 (2003): 1897–929. http://dx.doi.org/10.1162/08997660360675080.

Abstract:
Recent experimental studies indicate that recurrent neural networks initialized with “small” weights are inherently biased toward definite memory machines (Tiňo, Čerňanský, & Beňušková, 2002a, 2002b). This article establishes a theoretical counterpart: the transition function of a recurrent network with small weights and a squashing activation function is a contraction. We prove that recurrent networks with a contractive transition function can be approximated arbitrarily well on input sequences of unbounded length by a definite memory machine. Conversely, every definite memory machine can be simul
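The contraction claim is easy to check numerically: for a squashing nonlinearity with unit-bounded derivative such as tanh, the transition function's Lipschitz constant in the state is at most the spectral norm of the recurrent weight matrix, so small weights imply fading memory. A sketch with assumed sizes and weight scales:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16
W = 0.05 * rng.standard_normal((n, n))   # "small" recurrent weights
V = rng.standard_normal((n, 4))          # input weights

def step(h, x):
    # Squashing transition function of a vanilla recurrent network
    return np.tanh(W @ h + V @ x)

# |tanh'| <= 1, so the state-to-state Lipschitz constant is bounded
# by the spectral norm of W; a value below 1 certifies a contraction.
print("||W||_2 =", np.linalg.norm(W, 2))

# Empirical check: two different initial states become
# indistinguishable under the same input sequence.
h1, h2 = rng.standard_normal(n), rng.standard_normal(n)
for _ in range(30):
    x = rng.standard_normal(4)
    h1, h2 = step(h1, x), step(h2, x)
print("state gap after 30 steps:", np.linalg.norm(h1 - h2))
```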
17

Rivera, Patricio, Edwin Valarezo, Mun-Taek Choi, and Tae-Seong Kim. "Recognition of Human Hand Activities Based on a Single Wrist IMU Using Recurrent Neural Networks." International Journal of Pharma Medicine and Biological Sciences 6, no. 4 (2017): 114–18. http://dx.doi.org/10.18178/ijpmbs.6.4.114-118.

18

Kundu, Sourav, and Rajshekhar Singhania. "Forecasting the United States Unemployment Rate by Using Recurrent Neural Networks with Google Trends Data." International Journal of Trade, Economics and Finance 11, no. 6 (2020): 135–40. http://dx.doi.org/10.18178/ijtef.2020.11.6.679.

Abstract:
We study the problem of obtaining an accurate forecast of the unemployment claims using online search data. The motivation for this study arises from the fact that there is a need for nowcasting or providing a reliable short-term estimate of the unemployment rate. The data regarding initial jobless claims are published by the US Department of Labor weekly. To tackle the problem of getting an accurate forecast, we propose the use of the novel Long Short-Term Memory (LSTM) architecture of Recurrent Neural Networks, to predict the unemployment claims (initial jobless claims) using the Google Tren
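A minimal sketch of the kind of LSTM nowcasting model the abstract describes, using synthetic placeholder data in place of the weekly claims and Google Trends series; the layer sizes and training settings are assumptions, not the paper's.

```python
import numpy as np
import tensorflow as tf

# Placeholder data: column 0 stands in for weekly initial claims,
# the rest for Google Trends search-volume features.
T, window, n_features = 500, 8, 3
series = np.random.rand(T, n_features).astype("float32")

X = np.stack([series[i:i + window] for i in range(T - window)])
y = series[window:, 0]                 # next week's claims

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.predict(X[-1:]))           # one-step-ahead nowcast
```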
19

Munro, Edwin E., Larry E. Shupe, and Eberhard E. Fetz. "Integration and Differentiation in Dynamic Recurrent Neural Networks." Neural Computation 6, no. 3 (1994): 405–19. http://dx.doi.org/10.1162/neco.1994.6.3.405.

Abstract:
Dynamic neural networks with recurrent connections were trained by backpropagation to generate the differential or the leaky integral of a nonrepeating frequency-modulated sinusoidal signal. The trained networks performed these operations on arbitrary input waveforms. Reducing the network size by deleting ineffective hidden units and combining redundant units, and then retraining the network produced a minimal network that computed the same function and revealed the underlying computational algorithm. Networks could also be trained to compute simultaneously the differential and integral of the
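The two operations the trained networks perform have one-line recurrent implementations, which is what makes the task a clean probe of network dynamics. A sketch with hand-set constants (not the paper's learned weights): a single self-connected linear unit realizes a leaky integrator, and a finite difference gives the differentiation target.

```python
import numpy as np

dt, lam = 0.01, 0.05
t = np.arange(0, 10, dt)
x = np.sin((1 + 0.1 * t) * t)        # frequency-modulated input

h, leaky_integral = 0.0, []
for xt in x:
    h = (1 - lam) * h + dt * xt      # self-connection weight 1 - lam
    leaky_integral.append(h)

derivative = np.gradient(x, dt)      # differentiation target
print(leaky_integral[-1], derivative[-1])
```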
20

Fabien-Ouellet, Gabriel, and Rahul Sarkar. "Seismic velocity estimation: A deep recurrent neural-network approach." GEOPHYSICS 85, no. 1 (2019): U21–U29. http://dx.doi.org/10.1190/geo2018-0786.1.

Abstract:
Applying deep learning to 3D velocity model building remains a challenge due to the sheer volume of data required to train large-scale artificial neural networks. Moreover, little is known about what types of network architectures are appropriate for such a complex task. To ease the development of a deep-learning approach for seismic velocity estimation, we have evaluated a simplified surrogate problem — the estimation of the root-mean-square (rms) and interval velocity in time from common-midpoint gathers — for 1D layered velocity models. We have developed a deep neural network, whose design
21

Pascual, Santiago, Joan Serrà, and Antonio Bonafonte. "Exploring Efficient Neural Architectures for Linguistic–Acoustic Mapping in Text-To-Speech." Applied Sciences 9, no. 16 (2019): 3391. http://dx.doi.org/10.3390/app9163391.

Abstract:
Conversion from text to speech relies on the accurate mapping from linguistic to acoustic symbol sequences, for which current practice employs recurrent statistical models such as recurrent neural networks. Despite the good performance of such models (in terms of low distortion in the generated speech), their recursive structure with intermediate affine transformations tends to make them slow to train and to sample from. In this work, we explore two different mechanisms that enhance the operational efficiency of recurrent neural networks, and study their performance–speed trade-off. The first
22

Kim Soon, Gan, Chin Kim On, Nordaliela Mohd Rusli, Tan Soo Fun, Rayner Alfred, and Tan Tse Guan. "Comparison of simple feedforward neural network, recurrent neural network and ensemble neural networks in phishing detection." Journal of Physics: Conference Series 1502 (March 2020): 012033. http://dx.doi.org/10.1088/1742-6596/1502/1/012033.

23

Gavaldà, Ricard, and Hava T. Siegelmann. "Discontinuities in Recurrent Neural Networks." Neural Computation 11, no. 3 (1999): 715–45. http://dx.doi.org/10.1162/089976699300016638.

Abstract:
This article studies the computational power of various discontinuous real computational models that are based on the classical analog recurrent neural network (ARNN). This ARNN consists of a finite number of neurons; each neuron computes a polynomial net function and a sigmoid-like continuous activation function. We introduce arithmetic networks as ARNNs augmented with a few simple discontinuous (e.g., threshold or zero test) neurons. We argue that even with weights restricted to polynomial time computable reals, arithmetic networks are able to compute arbitrarily complex recursive functions. We
24

Ceni, Andrea, Peter Ashwin, and Lorenzo Livi. "Interpreting Recurrent Neural Networks Behaviour via Excitable Network Attractors." Cognitive Computation 12, no. 2 (2019): 330–56. http://dx.doi.org/10.1007/s12559-019-09634-2.

25

Chen, Jieh-Haur, Mu-Chun Su, Vidya Trisandini Azzizi, Ting-Kwei Wang, and Wei-Jen Lin. "Smart Project Management: Interactive Platform Using Natural Language Processing Technology." Applied Sciences 11, no. 4 (2021): 1597. http://dx.doi.org/10.3390/app11041597.

Abstract:
Technological developments have made the construction industry efficient. The aim of this research is to solve communication interaction problems to build a project management platform using the interactive concept of natural language processing technology. A comprehensive literature review and expert interviews associated with techniques dealing with natural languages suggest the proposed system containing the Progressive Scale Expansion Network (PSENet), Convolutional Recurrent Neural Network (CRNN), and Bi-directional Recurrent Neural Networks Convolutional Recurrent Neural Network (BRNN-
26

Bitzer, Sebastian, and Stefan J. Kiebel. "Recognizing recurrent neural networks (rRNN): Bayesian inference for recurrent neural networks." Biological Cybernetics 106, no. 4-5 (2012): 201–17. http://dx.doi.org/10.1007/s00422-012-0490-x.

27

Osipov, Vasiliy, and Dmitriy Miloserdov. "Neural network event forecasting for robots with continuous training." Information and Control Systems, no. 5 (October 20, 2020): 33–42. http://dx.doi.org/10.31799/1684-8853-2020-5-33-42.

Abstract:
Introduction: High hopes for a significant expansion of human capabilities in various fields of activity are pinned on the creation and use of highly intelligent robots. To achieve this level of robot intelligence, it is necessary to successfully solve the problems of predicting the external environment and the state of the robots themselves. Solutions based on recurrent neural networks with controlled elements are promising neural network forecasting systems. Purpose: Search for appropriate neural network structures for predicting events. Development of approaches to controlling the associati
28

Wu, Wei, Bao Tong Cui, and Zhigang Zeng. "Improved Sufficient Conditions for Global Exponential Stability of Recurrent Neural Networks with Distributed Delays." International Journal of Bifurcation and Chaos 18, no. 07 (2008): 2029–37. http://dx.doi.org/10.1142/s021812740802152x.

Abstract:
In this paper, the globally exponential stability of recurrent neural networks with continuously distributed delays is investigated. New theoretical results are presented in the presence of external stimuli. It is shown that the recurrent neural network is globally exponentially stable, and the estimated location of the equilibrium point can be obtained. As typical representatives, the Hopfield neural network (HNN) and the cellular neural network (CNN) are examined in detail. Comparison between our results and the previous results admits the improvement of our results.
29

Fredman, T. P., and H. Saxén. "On a Recurrent Neural Network Producing Oscillations." International Journal of Neural Systems 08, no. 05n06 (1997): 499–508. http://dx.doi.org/10.1142/s0129065797000483.

Abstract:
A recurrent two-node neural network producing oscillations is analyzed. The network has no true inputs and the outputs from the network exhibit a circular phase portrait. The weight configuration of the network is investigated, resulting in analytical weight expressions, which are compared with numerical weight estimates obtained by training the network on the desired trajectories. The values predicted by the analytical expressions agree well with the findings from the numerical study, and can also explain the asymptotic properties of the networks studied.
30

Kwak, Jin-Yeol, and Yong-Joo Chung. "Sound Event Detection Using Derivative Features in Deep Neural Networks." Applied Sciences 10, no. 14 (2020): 4911. http://dx.doi.org/10.3390/app10144911.

Abstract:
We propose using derivative features for sound event detection based on deep neural networks. As input to the networks, we used log-mel-filterbank and its first and second derivative features for each frame of the audio signal. Two deep neural networks were used to evaluate the effectiveness of these derivative features. Specifically, a convolutional recurrent neural network (CRNN) was constructed by combining a convolutional neural network and a recurrent neural network (RNN) followed by a feed-forward neural network (FNN) acting as a classification layer. In addition, a mean-teacher model b
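The input features the abstract describes (log-mel energies plus their first and second time derivatives, stacked as channels) can be sketched with librosa; the audio signal and the parameter values below are stand-ins, not the paper's configuration.

```python
import numpy as np
import librosa

sr = 16000
y = np.random.randn(2 * sr).astype(np.float32)   # stand-in audio

mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)
logmel = librosa.power_to_db(mel)
d1 = librosa.feature.delta(logmel, order=1)      # first derivative
d2 = librosa.feature.delta(logmel, order=2)      # second derivative

features = np.stack([logmel, d1, d2])            # CRNN input channels
print(features.shape)                            # (3, 64, n_frames)
```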
31

Scellier, Benjamin, and Yoshua Bengio. "Equivalence of Equilibrium Propagation and Recurrent Backpropagation." Neural Computation 31, no. 2 (2019): 312–29. http://dx.doi.org/10.1162/neco_a_01160.

Abstract:
Recurrent backpropagation and equilibrium propagation are supervised learning algorithms for fixed-point recurrent neural networks, which differ in their second phase. In the first phase, both algorithms converge to a fixed point that corresponds to the configuration where the prediction is made. In the second phase, equilibrium propagation relaxes to another nearby fixed point corresponding to smaller prediction error, whereas recurrent backpropagation uses a side network to compute error derivatives iteratively. In this work, we establish a close connection between these two algorithms. We s
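A toy rendering of the two phases the abstract contrasts, on a small symmetric energy-based network: phase 1 relaxes to a free fixed point; phase 2 relaxes again with a weak nudge toward the target, and equilibrium propagation contrasts the two fixed points to obtain a weight gradient (recurrent backpropagation would instead run a side network for error derivatives). The quadratic cost, whole-state nudging, and all constants are simplifying assumptions, not the authors' formulation.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
W = rng.standard_normal((n, n))
W = 0.05 * (W + W.T)                     # symmetric, energy-based
x = rng.standard_normal(n)               # input drive
target = rng.standard_normal(n)

def relax(beta, steps=500, lr=0.05):
    """Gradient descent on E(s) + beta * C(s) to a fixed point."""
    s = np.zeros(n)
    for _ in range(steps):
        rho, drho = np.tanh(s), 1 - np.tanh(s) ** 2
        grad_E = s - drho * (W @ rho + x)        # Hopfield-style energy
        s -= lr * (grad_E + beta * (s - target)) # beta = 0: free phase
    return s

s_free = relax(beta=0.0)                 # phase 1: free fixed point
s_nudged = relax(beta=0.1)               # phase 2: nudged fixed point

# Equilibrium-propagation weight gradient: contrast the two states.
rho_f, rho_n = np.tanh(s_free), np.tanh(s_nudged)
dW = (np.outer(rho_n, rho_n) - np.outer(rho_f, rho_f)) / 0.1
print(np.linalg.norm(dW))
```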
32

Patan, Krzysztof. "Local stability conditions for discrete-time cascade locally recurrent neural networks." International Journal of Applied Mathematics and Computer Science 20, no. 1 (2010): 23–34. http://dx.doi.org/10.2478/v10006-010-0002-x.

Abstract:
The paper deals with a specific kind of discrete-time recurrent neural network designed with dynamic neuron models. Dynamics are reproduced within each single neuron; hence the network considered is locally recurrent, globally feedforward. A crucial problem with neural networks of the dynamic type is stability, as well as stabilization in learning problems. The paper formulates local stability conditions for the analysed class of neural networks using Lyapunov's first method. Moreover, a stabilization problem
33

Świć, Antoni, Dariusz Wołos, Arkadiusz Gola, and Grzegorz Kłosowski. "The Use of Neural Networks and Genetic Algorithms to Control Low Rigidity Shafts Machining." Sensors 20, no. 17 (2020): 4683. http://dx.doi.org/10.3390/s20174683.

Abstract:
The article presents an original machine-learning-based automated approach for controlling the process of machining of low-rigidity shafts using artificial intelligence methods. Three models of hybrid controllers based on different types of neural networks and genetic algorithms were developed. In this study, an objective function optimized by a genetic algorithm was replaced with a neural network trained on real-life data. The task of the genetic algorithm is to select the optimal values of the input parameters of a neural network to ensure minimum deviation. Both input vector values and the
34

Kawano, Makoto, and Kazuhiro Ueda. "Microblog Geolocation Estimation with Recurrent Neural Networks." Transactions of the Japanese Society for Artificial Intelligence 32, no. 1 (2017): WII-E_1–8. http://dx.doi.org/10.1527/tjsai.wii-e.

35

Wang, Jie, Jun Wang, Wen Fang, and Hongli Niu. "Financial Time Series Prediction Using Elman Recurrent Random Neural Networks." Computational Intelligence and Neuroscience 2016 (2016): 1–14. http://dx.doi.org/10.1155/2016/4742515.

Abstract:
In recent years, financial market dynamics forecasting has been a focus of economic research. To predict the price indices of stock markets, we developed an architecture which combined Elman recurrent neural networks with a stochastic time effective function. By analyzing the proposed model with the linear regression, complexity invariant distance (CID), and multiscale CID (MCID) analysis methods, and comparing the model with different models such as the backpropagation neural network (BPNN), the stochastic time effective neural network (STNN), and the Elman recurrent neural network (ERNN),
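The Elman recurrence at the core of the proposed architecture is compact enough to state in full; the stochastic time-effective weighting of training samples that the paper adds is not reproduced here, and all sizes and scales are assumed.

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_hid = 4, 16
W_xh = 0.1 * rng.standard_normal((n_hid, n_in))
W_hh = 0.1 * rng.standard_normal((n_hid, n_hid))  # context weights
W_hy = 0.1 * rng.standard_normal((1, n_hid))

def forward(inputs):
    h = np.zeros(n_hid)                   # context layer starts empty
    for x in inputs:
        h = np.tanh(W_xh @ x + W_hh @ h)  # Elman hidden update
    return W_hy @ h                       # next-value prediction

print(forward(rng.standard_normal((10, n_in))))
```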
36

Zhang, Zao, and Yuan Dong. "Temperature Forecasting via Convolutional Recurrent Neural Networks Based on Time-Series Data." Complexity 2020 (March 20, 2020): 1–8. http://dx.doi.org/10.1155/2020/3536572.

Abstract:
Today, artificial intelligence and deep neural networks have been successfully used in many applications that have fundamentally changed people’s lives in many areas. However, very limited research has been done in the meteorology area, where meteorological forecasts still rely on simulations via extensive computing resources. In this paper, we propose an approach to using the neural network to forecast the future temperature according to the past temperature values. Specifically, we design a convolutional recurrent neural network (CRNN) model that is composed of convolution neural network (CN
37

Hupkes, Dieuwke, Sara Veldhoen, and Willem Zuidema. "Visualisation and 'Diagnostic Classifiers' Reveal How Recurrent and Recursive Neural Networks Process Hierarchical Structure." Journal of Artificial Intelligence Research 61 (April 30, 2018): 907–26. http://dx.doi.org/10.1613/jair.1.11196.

Abstract:
We investigate how neural networks can learn and process languages with hierarchical, compositional semantics. To this end, we define the artificial task of processing nested arithmetic expressions, and study whether different types of neural networks can learn to compute their meaning. We find that recursive neural networks can implement a generalising solution to this problem, and we visualise this solution by breaking it up in three steps: project, sum and squash. As a next step, we investigate recurrent neural networks, and show that a gated recurrent unit that processes its input increme
38

Laura, Juan Andres, Gabriel Omar Masi, and Luis Argerich. "From Imitation to Prediction, Data Compression vs Recurrent Neural Networks for Natural Language Processing." Inteligencia Artificial 21, no. 61 (2018): 30. http://dx.doi.org/10.4114/intartif.vol21iss61pp30-46.

Abstract:
In recent studies Recurrent Neural Networks were used for generative processes and their surprising performance can be explained by their ability to create good predictions. In addition, Data Compression is also based on prediction. What the problem comes down to is whether a data compressor could be used to perform as well as recurrent neural networks in the natural language processing tasks of sentiment analysis and automatic text generation. If this is possible, then the problem comes down to determining if a compression algorithm is even more intelligent than a neural network in such tasks
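The compression side of the comparison can be made concrete with a classic trick: assign a document to the class whose corpus it compresses best against. The corpora below are toy placeholders, and this is only one simple instance of the compressor-as-predictor idea the paper investigates.

```python
import gzip

corpus = {
    "pos": "great wonderful excellent loved amazing " * 50,
    "neg": "terrible awful boring hated worst " * 50,
}

def clen(s: str) -> int:
    return len(gzip.compress(s.encode()))

def classify(doc: str) -> str:
    # Fewest extra bits needed to append doc to a class corpus wins.
    return min(corpus, key=lambda k: clen(corpus[k] + doc) - clen(corpus[k]))

print(classify("an excellent and wonderful film, loved it"))
```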
39

Kouroshnia, Hengameh, and Fardad Farokhi. "Adapting and optimization in small signal modeling of nanoscale MOS-FET Using artificial recurrent neural networks." International Academic Journal of Science and Engineering 05, no. 02 (2018): 227–34. http://dx.doi.org/10.9756/iajse/v5i1/1810038.

40

Mackin, Kenneth J., Ryutaro Fukushima, and Makoto Fujiyoshi. "Prediction of Incinerator Dioxin Emission using Recurrent Neural Networks (Strategic Soft Computing 1, Session: MA1-C)." Abstracts of the international conference on advanced mechatronics: toward evolutionary fusion of IT and mechatronics: ICAM 2004.4 (2004): 23. http://dx.doi.org/10.1299/jsmeicam.2004.4.23_2.

41

Fantaye, Tessfu Geteye, Junqing Yu, and Tulu Tilahun Hailu. "Advanced Convolutional Neural Network-Based Hybrid Acoustic Models for Low-Resource Speech Recognition." Computers 9, no. 2 (2020): 36. http://dx.doi.org/10.3390/computers9020036.

Abstract:
Deep neural networks (DNNs) have shown great achievements in acoustic modeling for the speech recognition task. Of these networks, the convolutional neural network (CNN) is an effective network for representing the local properties of speech formants. However, CNNs are not suitable for modeling the long-term context dependencies between speech signal frames. Recently, recurrent neural networks (RNNs) have shown great abilities for modeling long-term context dependencies. However, the performance of RNNs is not good for low-resource speech recognition tasks, and is even worse than the convention
42

Frean, Marcus, Matt Lilley, and Phillip Boyle. "Implementing Gaussian Process Inference with Neural Networks." International Journal of Neural Systems 16, no. 05 (2006): 321–27. http://dx.doi.org/10.1142/s012906570600072x.

Abstract:
Gaussian processes compare favourably with backpropagation neural networks as a tool for regression, and Bayesian neural networks have Gaussian process behaviour when the number of hidden neurons tends to infinity. We describe a simple recurrent neural network with connection weights trained by one-shot Hebbian learning. This network amounts to a dynamical system which relaxes to a stable state in which it generates predictions identical to those of Gaussian process regression. In effect an infinite number of hidden units in a feed-forward architecture can be replaced by a merely finite number
43

Dautel, Alexander Jakob, Wolfgang Karl Härdle, Stefan Lessmann, and Hsin-Vonn Seow. "Forex exchange rate forecasting using deep recurrent neural networks." Digital Finance 2, no. 1-2 (2020): 69–96. http://dx.doi.org/10.1007/s42521-020-00019-x.

Abstract:
Deep learning has substantially advanced the state of the art in computer vision, natural language processing, and other fields. The paper examines the potential of deep learning for exchange rate forecasting. We systematically compare long short-term memory networks and gated recurrent units to traditional recurrent network architectures as well as feedforward networks in terms of their directional forecasting accuracy and the profitability of trading model predictions. Empirical results indicate the suitability of deep networks for exchange rate forecasting in general but also evide
44

Singh, Kirti, Suju M. George, and P. Rambabu. "Use of a System of Recurrent Neural Networks for Solving the Cell Placement Problem in VLSI Design." International Journal on Artificial Intelligence Tools 06, no. 01 (1997): 15–35. http://dx.doi.org/10.1142/s0218213097000037.

Abstract:
Cell placement in VLSI design is an NP-complete problem. In this paper, we have tried to solve the standard cell placement problem using the Hopfield neural network model. Furthermore, a new system of coupled recurrent neural networks, which was designed to eliminate the drawbacks of the Hopfield neural network, is introduced. The performance of Hopfield networks with discrete and graded neurons is also investigated. The energy function corresponding to the chosen representation is given and the weight matrix and the inputs needed for the network are also computed in this work. Several differe
45

Akdeniz, Esra, Erol Egrioglu, Eren Bas, and Ufuk Yolcu. "An ARMA Type Pi-Sigma Artificial Neural Network for Nonlinear Time Series Forecasting." Journal of Artificial Intelligence and Soft Computing Research 8, no. 2 (2018): 121–32. http://dx.doi.org/10.1515/jaiscr-2018-0009.

Abstract:
Real-life time series have complex and non-linear structures. Artificial Neural Networks have been frequently used in the literature to analyze non-linear time series. High order artificial neural networks, in view of other artificial neural network types, are more adaptable to the data because of their expandable model order. In this paper, a new recurrent architecture for Pi-Sigma artificial neural networks is proposed. A learning algorithm based on particle swarm optimization is also used as a tool for the training of the proposed neural network. The proposed new high order artific
46

Schuster, M., and K. K. Paliwal. "Bidirectional recurrent neural networks." IEEE Transactions on Signal Processing 45, no. 11 (1997): 2673–81. http://dx.doi.org/10.1109/78.650093.

47

Cierniak, Robert. "A New Approach to Image Reconstruction from Projections Using a Recurrent Neural Network." International Journal of Applied Mathematics and Computer Science 18, no. 2 (2008): 147–57. http://dx.doi.org/10.2478/v10006-008-0014-y.

Abstract:
A new neural network approach to image reconstruction from projections considering the parallel geometry of the scanner is presented. To solve this key problem in computed tomography, a special recurrent neural network is proposed. The reconstruction process is performed during the minimization of the energy function in this network. The performed computer simulations show that the neural network reconstruction algorithm designed to work in this way outperforms conventional methods in the obtained image qua
48

Nikolic, Kostantin. "Training Neural Network Elements Created From Long Shot Term Memory." Oriental journal of computer science and technology 10, no. 1 (2017): 01–10. http://dx.doi.org/10.13005/ojcst/10.01.01.

Abstract:
This paper presents the application of stochastic search algorithms to training artificial neural networks. The methodology was created primarily to train complex recurrent neural networks; training recurrent networks is known to be more difficult than training feedforward neural networks. Simulation of the recurrent network propagates the signal from input to output, and the training process performs a stochastic search in the space of parameters. The performance of this type of algorithm is superior to most of the training algorithms, which are
49

Vlasov, Konstantin. "Neural Cryptographic Information Security System of Recurrent Convergent Neural Networks." Voprosy kiberbezopasnosti, no. 4(38) (2020): 44–55. http://dx.doi.org/10.21681/2311-3456-2020-04-44-55.

Abstract:
The purpose: to construct an algorithm for information transformation by recurrent convergent neural networks with a given set of local minima of the energy functional, for its subsequent application in the field of information security. Method: system analysis of the existing neural network paradigms that can be used for classification of images. The neural cryptographic system is synthesized using analogy methods, recurrent convergent neural networks, noise-resistant encoding, and block cipher algorithms. The result: a promising neural cryptographic system is proposed that can be used to
50

Kim, Christopher M., and Carson C. Chow. "Training Spiking Neural Networks in the Strong Coupling Regime." Neural Computation 33, no. 5 (2021): 1199–233. http://dx.doi.org/10.1162/neco_a_01379.

Abstract:
Recurrent neural networks trained to perform complex tasks can provide insight into the dynamic mechanism that underlies computations performed by cortical circuits. However, due to a large number of unconstrained synaptic connections, the recurrent connectivity that emerges from network training may not be biologically plausible. Therefore, it remains unknown if and how biological neural circuits implement dynamic mechanisms proposed by the models. To narrow this gap, we developed a training scheme that, in addition to achieving learning goals, respects the structural and dynamic pro