Academic literature on the topic 'RNN (recurrent neural networks)'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'RNN (recurrent neural networks).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "RNN (recurrent neural networks)"

1

Ma, Xiao, Peter Karkus, David Hsu, and Wee Sun Lee. "Particle Filter Recurrent Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (2020): 5101–8. http://dx.doi.org/10.1609/aaai.v34i04.5952.

Full text
Abstract:
Recurrent neural networks (RNNs) have been extraordinarily successful for prediction with sequential data. To tackle highly variable and multi-modal real-world data, we introduce Particle Filter Recurrent Neural Networks (PF-RNNs), a new RNN family that explicitly models uncertainty in its internal structure: while an RNN relies on a long, deterministic latent state vector, a PF-RNN maintains a latent state distribution, approximated as a set of particles. For effective learning, we provide a fully differentiable particle filter algorithm that updates the PF-RNN latent state distribution…
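The core idea of the abstract above, replacing a single deterministic latent vector with a weighted set of particles, can be sketched in a few lines. Everything below (dimensions, weights, and the stand-in observation likelihood) is an illustrative assumption of mine, not the paper's actual algorithm:

```python
import numpy as np

def pf_rnn_step(particles, weights, x, W_s, W_x, obs_fn):
    """One toy PF-RNN step: each particle is a candidate RNN latent state; weights
    are re-scored by an observation likelihood, and the belief is their weighted mean."""
    # Transition: apply the same RNN cell to every particle (vectorized).
    particles = np.tanh(particles @ W_s.T + x @ W_x.T)
    # Re-weight each particle by how well it explains the observation.
    weights = weights * obs_fn(particles)
    weights = weights / weights.sum()
    belief = weights @ particles  # weighted-mean latent state
    return particles, weights, belief

rng = np.random.default_rng(1)
K, d_state, d_in = 16, 4, 3                    # K particles, illustrative sizes
particles = rng.normal(size=(K, d_state))
weights = np.full(K, 1.0 / K)
W_s = rng.normal(scale=0.5, size=(d_state, d_state))
W_x = rng.normal(size=(d_state, d_in))
obs = lambda p: np.exp(-np.sum(p**2, axis=1))  # stand-in likelihood, not the paper's
particles, weights, belief = pf_rnn_step(
    particles, weights, rng.normal(size=d_in), W_s, W_x, obs)
```

The paper additionally makes resampling differentiable so the whole filter can be trained end-to-end; that step is omitted here.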
2

Schäfer, Anton Maximilian, and Hans-Georg Zimmermann. "Recurrent Neural Networks Are Universal Approximators." International Journal of Neural Systems 17, no. 04 (2007): 253–63. http://dx.doi.org/10.1142/s0129065707001111.

Full text
Abstract:
Recurrent Neural Networks (RNN) have been developed for a better understanding and analysis of open dynamical systems. Still, the question often arises whether RNN are able to map every open dynamical system, which would be desirable for a broad spectrum of applications. In this article we give a proof of the universal approximation ability of RNN in state space model form and even extend it to Error Correction and Normalized Recurrent Neural Networks.
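The "state space model form" this result concerns can be written as s_t = tanh(A·s_{t-1} + B·x_t + b) with read-out y_t = C·s_t. A minimal sketch, with all matrices and dimensions chosen arbitrarily for illustration:

```python
import numpy as np

def rnn_state_space(inputs, A, B, C, b):
    """Run an RNN in state-space form: s_t = tanh(A s_{t-1} + B x_t + b), y_t = C s_t."""
    state = np.zeros(A.shape[0])
    outputs = []
    for x in inputs:
        state = np.tanh(A @ state + B @ x + b)  # state transition
        outputs.append(C @ state)               # linear read-out
    return np.array(outputs)

rng = np.random.default_rng(0)
d_state, d_in, d_out, T = 8, 3, 2, 5            # illustrative sizes
A = rng.normal(scale=0.5, size=(d_state, d_state))
B = rng.normal(size=(d_state, d_in))
C = rng.normal(size=(d_out, d_state))
b = np.zeros(d_state)
ys = rnn_state_space(rng.normal(size=(T, d_in)), A, B, C, b)
print(ys.shape)  # one output vector per time step
```

The universal-approximation result says that, with enough state dimensions, this form can approximate any open dynamical system on a compact domain; the random weights above are only there to make the sketch runnable.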
3

Ma, Qianli, Zhenxi Lin, Enhuan Chen, and Garrison Cottrell. "Temporal Pyramid Recurrent Neural Network." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (2020): 5061–68. http://dx.doi.org/10.1609/aaai.v34i04.5947.

Full text
Abstract:
Learning long-term and multi-scale dependencies in sequential data is a challenging task for recurrent neural networks (RNNs). In this paper, a novel RNN structure called temporal pyramid RNN (TP-RNN) is proposed to achieve these two goals. TP-RNN is a pyramid-like structure and generally has multiple layers. In each layer of the network, there are several sub-pyramids connected by a shortcut path to the output, which can efficiently aggregate historical information from hidden states and provide many gradient feedback short-paths. This avoids back-propagating through many hidden states…
4

Jung, Suk-Hwan, and Yong-Joo Chung. "Sound event detection using deep neural networks." TELKOMNIKA Telecommunication, Computing, Electronics and Control 18, no. 5 (2020): 2587–96. https://doi.org/10.12928/TELKOMNIKA.v18i5.14246.

Full text
Abstract:
We applied various architectures of deep neural networks for sound event detection and compared their performance using two different datasets. Feed-forward neural network (FNN), convolutional neural network (CNN), recurrent neural network (RNN) and convolutional recurrent neural network (CRNN) were implemented using hyper-parameters optimized for each architecture and dataset. The results show that the performance of deep neural networks varied significantly depending on the learning rate, which can be optimized by conducting a series of experiments on the validation data…
5

Lyu, Shengfei, and Jiaqi Liu. "Convolutional Recurrent Neural Networks for Text Classification." Journal of Database Management 32, no. 4 (2021): 65–82. http://dx.doi.org/10.4018/jdm.2021100105.

Full text
Abstract:
Recurrent neural network (RNN) and convolutional neural network (CNN) are two prevailing architectures used in text classification. Traditional approaches combine the strengths of these two networks by straightly streamlining them or linking features extracted from them. In this article, a novel approach is proposed to maintain the strengths of RNN and CNN to a great extent. In the proposed approach, a bi-directional RNN encodes each word into forward and backward hidden states. Then, a neural tensor layer is used to fuse the bi-directional hidden states to get word representations…
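The bi-directional encoding step described above, one forward and one backward hidden state per word, can be illustrated as follows. Note that the paper fuses the two states with a neural tensor layer; the plain concatenation and random weights here are simplifications of mine:

```python
import numpy as np

def rnn_pass(xs, W_h, W_x):
    """Simple tanh RNN: return the hidden state after each input."""
    h = np.zeros(W_h.shape[0])
    states = []
    for x in xs:
        h = np.tanh(W_h @ h + W_x @ x)
        states.append(h)
    return np.array(states)

def bidirectional_encode(xs, W_h, W_x):
    """Encode each word with a forward and a backward hidden state, concatenated."""
    fwd = rnn_pass(xs, W_h, W_x)
    bwd = rnn_pass(xs[::-1], W_h, W_x)[::-1]  # run right-to-left, then realign
    return np.concatenate([fwd, bwd], axis=1)

rng = np.random.default_rng(2)
T, d_in, d_h = 6, 5, 4                         # 6 word embeddings, illustrative sizes
words = rng.normal(size=(T, d_in))
H = bidirectional_encode(words,
                         rng.normal(scale=0.5, size=(d_h, d_h)),
                         rng.normal(size=(d_h, d_in)))
print(H.shape)  # forward + backward state per word
```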
6

Hindarto, Djarot. "Comparison of RNN Architectures and Non-RNN Architectures in Sentiment Analysis." sinkron 8, no. 4 (2023): 2537–46. http://dx.doi.org/10.33395/sinkron.v8i4.13048.

Full text
Abstract:
This study compares the sentiment analysis performance of multiple Recurrent Neural Network architectures and One-Dimensional Convolutional Neural Networks. The methods evaluated are the simple Recurrent Neural Network, Long Short-Term Memory, Gated Recurrent Unit, Bidirectional Recurrent Neural Network, and 1D ConvNets. A dataset comprising text reviews with positive or negative sentiment labels was evaluated. All evaluated models demonstrated extremely high accuracy, ranging from 99.81% to 99.99%. In addition, the loss generated by these models is also low, ranging from 0.0043 to 0.0021.
7

Tridarma, Panggih, and Sukmawati Nur Endah. "Pengenalan Ucapan Bahasa Indonesia Menggunakan MFCC dan Recurrent Neural Network." JURNAL MASYARAKAT INFORMATIKA 11, no. 2 (2020): 36–44. http://dx.doi.org/10.14710/jmasif.11.2.34874.

Full text
Abstract:
Speech recognition is a technological development in the field of voice processing. It allows software to recognize the words spoken by humans and display them in written form. However, problems remain in recognizing spoken words, such as differing voice characteristics, age, health, and gender. This study discusses Indonesian speech recognition using Mel-Frequency Cepstral Coefficients (MFCC) as the feature extraction method and a Recurrent Neural Network (RNN) as the recognition method, with…
8

Kao, Jonathan C. "Considerations in using recurrent neural networks to probe neural dynamics." Journal of Neurophysiology 122, no. 6 (2019): 2504–21. http://dx.doi.org/10.1152/jn.00467.2018.

Full text
Abstract:
Recurrent neural networks (RNNs) are increasingly being used to model complex cognitive and motor tasks performed by behaving animals. RNNs are trained to reproduce animal behavior while also capturing key statistics of empirically recorded neural activity. In this manner, the RNN can be viewed as an in silico circuit whose computational elements share similar motifs with the cortical area it is modeling. Furthermore, because the RNN's governing equations and parameters are fully known, they can be analyzed to propose hypotheses for how neural populations compute. In this context, we present…
9

Rath, Ankit, and Subhrajyoti Ranjan Sahu. "Recurrent Neural Networks for Recommender Systems." Computational Intelligence and Machine Learning 1, no. 1 (2020): 31–36. http://dx.doi.org/10.36647/ciml/01.01.a004.

Full text
Abstract:
The Internet has become one of the biggest sources of information in recent years, keeping people updated about everyday events. The information available on it is also growing with the increase in the use of the Internet. Due to this, it takes a great deal of time and effort to locate the relevant knowledge that the user wants. Recommender systems are software mechanisms that automatically suggest relevant, user-needed information. Recurrent Neural Networks have lately gained importance in the field of recommender systems, since they give improved results in building deep learning models with…
10

Venkateswarlu, B., and C. Gulzar. "Spam Classification using Recurrent Neural Networks." International Journal of Engineering Technology and Management Sciences 9, no. 2 (2025): 684–89. https://doi.org/10.46647/ijetms.2025.v09i02.087.

Full text
Abstract:
Spam classification is a critical task in email filtering systems to distinguish between legitimate and spam emails. Traditional machine learning methods have been used for this purpose, but they often struggle to capture the complex patterns and variations in spam emails. In this paper, we propose a novel approach using Recurrent Neural Networks (RNNs) for spam classification. RNNs are well suited for sequence modeling tasks like this, as they can capture dependencies between words in an email. We use a Long Short-Term Memory (LSTM) RNN architecture, known for its ability to retain information over…
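The LSTM mechanism this abstract relies on, gates that decide what a cell state keeps, discards, and exposes, can be sketched as a single recurrence step. Weights are random and dimensions illustrative; this is not the paper's trained classifier:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step: input/forget/output gates control what the cell state
    keeps, discards, and exposes -- the mechanism that retains information
    across long token sequences."""
    z = W @ x + U @ h + b                           # all four pre-activations at once
    i, f, o, g = np.split(z, 4)                     # input, forget, output, candidate
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)    # gated cell-state update
    h = sigmoid(o) * np.tanh(c)                     # new hidden state
    return h, c

rng = np.random.default_rng(3)
d_in, d_h = 5, 4                                   # illustrative sizes
W = rng.normal(size=(4 * d_h, d_in))
U = rng.normal(size=(4 * d_h, d_h))
b = np.zeros(4 * d_h)
h, c = np.zeros(d_h), np.zeros(d_h)
for x in rng.normal(size=(7, d_in)):               # run over 7 token embeddings
    h, c = lstm_step(x, h, c, W, U, b)
```

In a spam classifier, the final hidden state `h` would typically feed a small output layer producing a spam/ham probability.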
More sources

Dissertations / Theses on the topic "RNN (recurrent neural networks)"

1

Berlati, Alessandro. "Ambiguity in Recurrent Models: Predicting Multiple Hypotheses with Recurrent Neural Networks." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/16611/.

Full text
Abstract:
Multiple Hypothesis Prediction (MHP) models have been introduced to deal with uncertainty in feedforward neural networks; in particular, it has been shown how to easily convert a standard single-prediction neural network into one able to show many feasible outcomes. Ambiguity, however, is also present in problems where feedback models are needed, such as sequence generation and time series classification. In our work, we propose an extension of MHP to Recurrent Neural Networks (RNNs), especially those consisting of Long Short-Term Memory units. We test the resulting models on both regression and…
2

Ljungehed, Jesper. "Predicting Customer Churn Using Recurrent Neural Networks." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-210670.

Full text
Abstract:
Churn prediction is used to identify customers that are becoming less loyal and is an important tool for companies that want to stay competitive in a rapidly growing market. In retail, a dynamic definition of churn is needed to identify churners correctly. Customer Lifetime Value (CLV) is the monetary value of a customer relationship. No change in CLV for a given customer indicates a decrease in loyalty. This thesis proposes a novel approach to churn prediction. The proposed model uses a Recurrent Neural Network to identify churners based on Customer Lifetime Value time series regression…
3

Bonato, Tommaso. "Time Series Predictions With Recurrent Neural Networks." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2018.

Find full text
Abstract:
The main goal of this thesis is to study how machine learning algorithms, and in particular LSTM (Long Short-Term Memory) neural networks, can be used to predict the future values of a regular time series such as, for example, the sine and cosine functions. A time series is defined as a sequence of observations s_t ordered in time. We then attempt to apply the same principles to predict the values of a time series produced from the sales data of a cosmetic product over a three-year period.
4

Martins, Helder. "Predicting user churn on streaming services using recurrent neural networks." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-217109.

Full text
Abstract:
Providers of online services have witnessed a rapid growth of their user base in the last few years. The phenomenon has attracted an increasing number of competitors determined to obtain their own share of the market. In this context, the cost of attracting new customers has increased significantly, raising the importance of retaining existing clients. Therefore, it has become progressively more important for the companies to improve user experience and ensure they keep a larger share of their users active in consuming their product. Companies are thus compelled to build tools that can identify…
5

Fors, Johansson Christoffer. "Arrival Time Predictions for Buses using Recurrent Neural Networks." Thesis, Linköpings universitet, Artificiell intelligens och integrerade datorsystem, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-165133.

Full text
Abstract:
In this thesis, two different types of bus passengers are identified. These two types, namely current passengers and passengers-to-be, have different needs in terms of arrival time predictions. A set of machine learning models based on recurrent neural networks and long short-term memory units were developed to meet these needs. Furthermore, bus data from the public transport in Östergötland county, Sweden, were collected and used for training new machine learning models. These new models are compared with the current prediction system that is used today to provide passengers with arrival time…
6

Vikström, Filip. "A recurrent neural network approach to quantification of risks surrounding the Swedish property market." Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-126192.

Full text
Abstract:
As the real estate market plays a central role in a country's financial situation, Skandia, as a life insurer, a bank and a property developer, wants a method for better assessing the risks connected to the real estate market. The goal of this paper is to increase the understanding of property market risk and its covariate risks and to conduct an analysis of how a fall in real estate prices could affect Skandia's exposed assets. This paper explores a recurrent neural network model with the aim of quantifying identified risk factors using exogenous data. The recurrent neural network model is…
7

Rosell, Felicia. "Tracking a ball during bounce and roll using recurrent neural networks." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-239733.

Full text
Abstract:
In many types of sports, on-screen graphics, such as a reconstructed ball trajectory, can be displayed for spectators or players in order to increase understanding. One sub-problem of trajectory reconstruction is the tracking of ball positions, which is a difficult problem due to the fast and often complex ball movement. Historically, physics-based techniques have been used to track ball positions, but this thesis investigates using a recurrent neural network design in the application of tracking bouncing golf balls. The network is trained and tested on synthetically created golf ball shots…
8

Jansson, Anton. "Predicting trajectories of golf balls using recurrent neural networks." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-210552.

Full text
Abstract:
This thesis is concerned with the problem of predicting the remaining part of the trajectory of a golf ball as it travels through the air, where only the three-dimensional position of the ball is captured. The approach taken to solve this problem relied on recurrent neural networks in the form of long short-term memory (LSTM) networks. The motivation behind this choice was that this type of network had led to state-of-the-art performance on similar problems, such as predicting the trajectory of pedestrians. The results show that using LSTMs led to an average reduction of 36.6% of the error…
9

Wen, Tsung-Hsien. "Recurrent neural network language generation for dialogue systems." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/275648.

Full text
Abstract:
Language is the principal medium for ideas, while dialogue is the most natural and effective way for humans to interact with and access information from machines. Natural language generation (NLG) is a critical component of spoken dialogue and it has a significant impact on usability and perceived quality. Many commonly used NLG systems employ rules and heuristics, which tend to generate inflexible and stylised responses without the natural variation of human language. However, the frequent repetition of identical output forms can quickly make dialogue become tedious for most real-world users.
10

Lousseief, Elias. "MahlerNet : Unbounded Orchestral Music with Neural Networks." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-264993.

Full text
Abstract:
Modelling music with mathematical and statistical methods in general, and with neural networks in particular, has a long history and has been well explored in the last decades. Exactly when the first attempt at strictly systematic music took place is hard to say; some would say in the days of Mozart, others would say even earlier, but it is safe to say that the field of algorithmic composition has a long history. Even though composers have always had structure and rules as part of the writing process, implicitly or explicitly, following rules at a stricter level was well investigated in the…
More sources

Books on the topic "RNN (recurrent neural networks)"

1

Salem, Fathi M. Recurrent Neural Networks. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-89929-5.

Full text
2

Tyagi, Amit Kumar, and Ajith Abraham. Recurrent Neural Networks. CRC Press, 2022. http://dx.doi.org/10.1201/9781003307822.

Full text
3

Hu, Xiaolin, and P. Balasubramaniam. Recurrent neural networks. InTech, 2008.

Find full text
4

Hammer, Barbara. Learning with recurrent neural networks. Springer London, 2000. http://dx.doi.org/10.1007/bfb0110016.

Full text
5

Yi, Zhang, and K. K. Tan. Convergence Analysis of Recurrent Neural Networks. Springer US, 2004. http://dx.doi.org/10.1007/978-1-4757-3819-3.

Full text
6

ElHevnawi, Mahmoud, and Mohamed Mysara. Recurrent neural networks and soft computing. InTech, 2012.

Find full text
7

Medsker, L. R., and L. C. Jain, eds. Recurrent neural networks: Design and applications. CRC Press, 2000.

Find full text
8

Tan, K. K., ed. Convergence analysis of recurrent neural networks. Kluwer Academic Publishers, 2004.

Find full text
9

Graves, Alex. Supervised Sequence Labelling with Recurrent Neural Networks. Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-24797-2.

Full text
10

Graves, Alex. Supervised Sequence Labelling with Recurrent Neural Networks. Springer Berlin Heidelberg, 2012.

Find full text
More sources

Book chapters on the topic "RNN (recurrent neural networks)"

1

Salem, Fathi M. "Gated RNN: The Gated Recurrent Unit (GRU) RNN." In Recurrent Neural Networks. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-89929-5_5.

Full text
2

Salem, Fathi M. "Gated RNN: The Minimal Gated Unit (MGU) RNN." In Recurrent Neural Networks. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-89929-5_6.

Full text
3

Salem, Fathi M. "Gated RNN: The Long Short-Term Memory (LSTM) RNN." In Recurrent Neural Networks. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-89929-5_4.

Full text
4

Salem, Fathi M. "Recurrent Neural Networks (RNN)." In Recurrent Neural Networks. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-89929-5_3.

Full text
5

Gharehbaghi, Arash. "Recurrent Neural Networks (RNN)." In Deep Learning in Time Series Analysis. CRC Press, 2023. http://dx.doi.org/10.1201/9780429321252-14.

Full text
6

Xiao, Cao, and Jimeng Sun. "Recurrent Neural Networks (RNN)." In Introduction to Deep Learning for Healthcare. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-82184-5_7.

Full text
7

Okadome, Takeshi. "RNN: Recurrent Neural Network." In Essentials of Generative AI. Springer Nature Singapore, 2025. https://doi.org/10.1007/978-981-96-0029-8_3.

Full text
8

Yellin, Daniel M., and Gail Weiss. "Synthesizing Context-free Grammars from Recurrent Neural Networks." In Tools and Algorithms for the Construction and Analysis of Systems. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-72016-2_19.

Full text
Abstract:
We present an algorithm for extracting a subclass of the context-free grammars (CFGs) from a trained recurrent neural network (RNN). We develop a new framework, pattern rule sets (PRSs), which describe sequences of deterministic finite automata (DFAs) that approximate a non-regular language. We present an algorithm for recovering the PRS behind a sequence of such automata, and apply it to the sequences of automata extracted from trained RNNs using the L* algorithm. We then show how the PRS may be converted into a CFG, enabling a familiar and useful presentation of the learned…
9

Das, Susmita, Amara Tariq, Thiago Santos, Sai Sandeep Kantareddy, and Imon Banerjee. "Recurrent Neural Networks (RNNs): Architectures, Training Tricks, and Introduction to Influential Research." In Machine Learning for Brain Disorders. Springer US, 2023. http://dx.doi.org/10.1007/978-1-0716-3195-9_4.

Full text
Abstract:
Recurrent neural networks (RNNs) are neural network architectures with a hidden state that use feedback loops to process a sequence of data, which ultimately informs the final output. RNN models can therefore recognize sequential characteristics in the data and help to predict the next likely data point in the data sequence. Leveraging the power of sequential data processing, RNN use cases tend to be connected to either language models or time-series data analysis. However, multiple popular RNN architectures have been introduced in the field, ranging from SimpleRNN and LSTM to deep…
10

Ghatak, Abhijit. "Recurrent Neural Networks (RNN) or Sequence Models." In Deep Learning with R. Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-5850-0_8.

Full text

Conference papers on the topic "RNN (recurrent neural networks)"

1

Anjaneyulu, Battula Prasanna, Chadipiralla Pavan Kunar Reddy, Gangireddy Venkata AjayKumar Reddy, Chava Yogitha, Pandiselvam Pandiyarajan, and Baskaran Maheshwaran. "DeepFake Detection using Convolutional Neural Networks (CNN) and Recurrent Neural Network (RNN)." In 2024 5th International Conference on Data Intelligence and Cognitive Informatics (ICDICI). IEEE, 2024. https://doi.org/10.1109/icdici62993.2024.10810970.

Full text
2

Machikuri, Santoshi Kumari, Kanchan Yadav, Myasar Mundher Adnan, Pokala Krishnaiah, S. Subburam, and K. Sambath Kumar. "Leveraging Recurrent Neural Networks (RNN) for Workforce Planning and Optimization." In 2024 International Conference on IoT, Communication and Automation Technology (ICICAT). IEEE, 2024. https://doi.org/10.1109/icicat62666.2024.10923237.

Full text
3

Adebiyi, Marion O., Oladayo G. Atanda, Chidinma Okeke, Ayodele A. Adebiyi, and Abayomi A. Adebiyi. "Network Intrusion Detection Using K-Nearest Neighbors (KNN) and Recurrent Neural Networks (RNN)." In 2024 International Conference on Science, Engineering and Business for Driving Sustainable Development Goals (SEB4SDG). IEEE, 2024. http://dx.doi.org/10.1109/seb4sdg60871.2024.10629867.

Full text
4

Ori, C. Valvil, Srigitha S. Nath, S. Kumaran, Naradasu Subash, Gorla Mahadeva Reddy, and Gurram Venu. "Detecting Chronic Obstructive Pulmonary Disease with Deep Learning Using Recurrent Neural Networks (RNN)." In 2024 International Conference on Sustainable Communication Networks and Application (ICSCNA). IEEE, 2024. https://doi.org/10.1109/icscna63714.2024.10864241.

Full text
5

Fan, Zhengyang, Dan Shen, Yajie Bao, Khanh Pham, Erik Blasch, and Genshe Chen. "RNN-UKF: Enhancing Hyperparameter Auto-Tuning in Unscented Kalman Filters through Recurrent Neural Networks." In 2024 27th International Conference on Information Fusion (FUSION). IEEE, 2024. http://dx.doi.org/10.23919/fusion59988.2024.10706523.

Full text
6

Jadhav, Sairaje S., Shoyeb S. Tahasildar, Shubhada D. Kamble, Pranit N. Sankpal, Aprupa S. Pawar, and Abhijeet A. Urunkar. "Deep Fake Detection using ResNext Convolutional Neural Network (CNN) combined with a Recurrent Neural Network (RNN)." In 2024 5th IEEE Global Conference for Advancement in Technology (GCAT). IEEE, 2024. https://doi.org/10.1109/gcat62922.2024.10923866.

Full text
7

Hu, Yujie, Lingyu Zhu, Han Gong, and Xi Chen. "Data-Driven Dynamic Process Modeling Using Temporal RNN Incorporating Output Variable Autocorrelation and Stacked Autoencoder." In The 35th European Symposium on Computer Aided Process Engineering. PSE Press, 2025. https://doi.org/10.69997/sct.150053.

Full text
Abstract:
Dynamic process modeling in process industries has been extensively studied, especially with the development of deep learning techniques. Recurrent neural networks (RNN) and stacked autoencoders (SAE) are two powerful tools for dynamic modeling and data processing. However, most existing research primarily focuses on extracting features from process input data, often neglecting the temporal autocorrelation of output variables. In this work, a hierarchical model based on a time-series RNN structure is proposed. The upper layer employs a long short-term memory (LSTM) network to extract temporal…
8

Santosa, Muhammad Patriot Bayu, Ni Luh Wiwik Sri Rahayu Ginantra, Ida Bagus Ary Indra Iswara, and Desak Made Dwi Utami Putra. "Analysis Cryptocurrency Prediction Price Using Recurrent Neural Network (RNN) Gate Recurrent Unit (GRU) Long Short-Term Memory (LSTM)." In 2024 Ninth International Conference on Informatics and Computing (ICIC). IEEE, 2024. https://doi.org/10.1109/icic64337.2024.10957408.

Full text
9

Chen, Weiliang. "Research on Agricultural Enterprise Risk Management Decision Making Based on Recurrent Neural Network (RNN)." In 2025 5th International Conference on Consumer Electronics and Computer Engineering (ICCECE). IEEE, 2025. https://doi.org/10.1109/iccece65250.2025.10984678.

Full text
10

Razio, Aji Awang, Yesi Novaria Kunang, Ilman Zuhri Yadi, and Susan Dian Purnamasari. "Implementation of Neural Machine Transliteration from Komering Language to Indonesian Language Using Recurrent Neural Network (RNN) Model." In 2024 International Conference on Electrical Engineering and Computer Science (ICECOS). IEEE, 2024. https://doi.org/10.1109/icecos63900.2024.10791254.

Full text

Reports on the topic "RNN (recurrent neural networks)"

1

Pearlmutter, Barak A. Learning State Space Trajectories in Recurrent Neural Networks: A preliminary Report. Defense Technical Information Center, 1988. http://dx.doi.org/10.21236/ada219114.

Full text
2

Talathi, S. S. Deep Recurrent Neural Networks for seizure detection and early seizure detection systems. Office of Scientific and Technical Information (OSTI), 2017. http://dx.doi.org/10.2172/1366924.

Full text
3

Mathia, Karl. Solutions of linear equations and a class of nonlinear equations using recurrent neural networks. Portland State University Library, 2000. http://dx.doi.org/10.15760/etd.1354.

Full text
4

Lin, Linyu, Joomyung Lee, Bikash Poudel, Timothy McJunkin, Nam Dinh, and Vivek Agarwal. Enhancing the Operational Resilience of Advanced Reactors with Digital Twins by Recurrent Neural Networks. Office of Scientific and Technical Information (OSTI), 2021. http://dx.doi.org/10.2172/1835892.

Full text
5

Pasupuleti, Murali Krishna. Neural Computation and Learning Theory: Expressivity, Dynamics, and Biologically Inspired AI. National Education Services, 2025. https://doi.org/10.62311/nesx/rriv425.

Full text
Abstract:
Neural computation and learning theory provide the foundational principles for understanding how artificial and biological neural networks encode, process, and learn from data. This research explores expressivity, computational dynamics, and biologically inspired AI, focusing on theoretical expressivity limits, infinite-width neural networks, recurrent and spiking neural networks, attractor models, and synaptic plasticity. The study investigates mathematical models of function approximation, kernel methods, dynamical systems, and stability properties to assess the generalization…
6

Engel, Bernard, Yael Edan, James Simon, Hanoch Pasternak, and Shimon Edelman. Neural Networks for Quality Sorting of Agricultural Produce. United States Department of Agriculture, 1996. http://dx.doi.org/10.32747/1996.7613033.bard.

Full text
Abstract:
The objectives of this project were to develop procedures and models, based on neural networks, for quality sorting of agricultural produce. Two research teams, one at Purdue University and the other in Israel, coordinated their research efforts on different aspects of each objective, utilizing both melons and tomatoes as case studies. At Purdue, an expert system was developed to measure variances in human grading. Data were acquired from eight sensors: vision, two firmness sensors (destructive and nondestructive), chlorophyll from fluorescence, color sensor, electronic sniffer for odor detection…
7

Yu, Nanpeng, Koji Yamashita, Brandon Foggo, et al. Final Project Report: Discovery of Signatures, Anomalies, and Precursors in Synchrophasor Data with Matrix Profile and Deep Recurrent Neural Networks. Office of Scientific and Technical Information (OSTI), 2022. http://dx.doi.org/10.2172/1874793.

Full text