Academic literature on the topic 'Recursive Neural Networks (RNNs)'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Recursive Neural Networks (RNNs).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Recursive Neural Networks (RNNs)"

1

Zelios, Andreas, Achilleas Grammenos, Maria Papatsimouli, Nikolaos Asimopoulos, and George Fragulis. "Recursive neural networks: recent results and applications." SHS Web of Conferences 139 (2022): 03007. http://dx.doi.org/10.1051/shsconf/202213903007.

Full text
Abstract:
A neural network's basic principles and functions are modeled on the nervous system of living organisms: it aims to simulate the neurons of the human brain in order to solve complicated real-world problems, working in a forward-only manner. A recursive neural network, on the other hand, applies a recursive design principle over a given input sequence to arrive at a scalar assessment of the structured input. This makes it well suited to input sequences whose processing depends on the preceding inputs, a situation that arises in many problems of our era. A common example is a device such as Amazon Alexa, which uses speech recognition: given an audio input source that receives audio signals, it tries to predict logical expressions extracted from the different audio segments in order to form complete sentences. RNNs, however, do not come without problems or difficulties. Today's problems are becoming more and more complex, involving parameters in big-data form, which creates a need for bigger and deeper RNNs. This paper aims to explore these problems and ways to reduce them, while also describing the beneficial nature of RNNs and listing different uses of state-of-the-art RNNs in problems such as those mentioned above.
APA, Harvard, Vancouver, ISO, and other styles
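To make the recursive-over-a-sequence idea in the abstract above concrete, here is a minimal sketch of a plain recurrent cell in Python: each hidden state depends on the current input and the previous state, and a final readout turns the accumulated state into a scalar assessment. Sizes, weights, and the readout are illustrative assumptions, not anything taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

input_size, hidden_size = 3, 5                                  # illustrative sizes
W_xh = rng.normal(scale=0.1, size=(hidden_size, input_size))    # input-to-hidden weights
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))   # hidden-to-hidden (recurrent) weights
b_h = np.zeros(hidden_size)
w_out = rng.normal(scale=0.1, size=hidden_size)                 # toy scalar readout

def rnn_score(sequence):
    """Process a sequence step by step; each new state depends on the previous one."""
    h = np.zeros(hidden_size)
    for x_t in sequence:
        h = np.tanh(W_xh @ x_t + W_hh @ h + b_h)
    return float(w_out @ h)            # scalar assessment of the whole sequence

sequence = rng.normal(size=(7, input_size))   # e.g. 7 audio frames or word vectors
print(rnn_score(sequence))
```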
2

Wang, Qinglong, Kaixuan Zhang, Alexander G. Ororbia II, Xinyu Xing, Xue Liu, and C. Lee Giles. "An Empirical Evaluation of Rule Extraction from Recurrent Neural Networks." Neural Computation 30, no. 9 (2018): 2568–91. http://dx.doi.org/10.1162/neco_a_01111.

Full text
Abstract:
Rule extraction from black box models is critical in domains that require model validation before implementation, as can be the case in credit scoring and medical diagnosis. Though already a challenging problem in statistical learning in general, the difficulty is even greater when highly nonlinear, recursive models, such as recurrent neural networks (RNNs), are fit to data. Here, we study the extraction of rules from second-order RNNs trained to recognize the Tomita grammars. We show that production rules can be stably extracted from trained RNNs and that in certain cases, the rules outperform the trained RNNs.
APA, Harvard, Vancouver, ISO, and other styles
3

Socher, Richard, Andrej Karpathy, Quoc V. Le, Christopher D. Manning, and Andrew Y. Ng. "Grounded Compositional Semantics for Finding and Describing Images with Sentences." Transactions of the Association for Computational Linguistics 2 (December 2014): 207–18. http://dx.doi.org/10.1162/tacl_a_00177.

Full text
Abstract:
Previous work on Recursive Neural Networks (RNNs) shows that these models can produce compositional feature vectors for accurately representing and classifying sentences or images. However, the sentence vectors of previous models cannot accurately represent visually grounded meaning. We introduce the DT-RNN model which uses dependency trees to embed sentences into a vector space in order to retrieve images that are described by those sentences. Unlike previous RNN-based models which use constituency trees, DT-RNNs naturally focus on the action and agents in a sentence. They are better able to abstract from the details of word order and syntactic expression. DT-RNNs outperform other recursive and recurrent neural networks, kernelized CCA and a bag-of-words baseline on the tasks of finding an image that fits a sentence description and vice versa. They also give more similar representations to sentences that describe the same image.
APA, Harvard, Vancouver, ISO, and other styles
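The contrast the abstract above draws between recursive (tree-structured) networks and flat sequence models can be illustrated with a small sketch that composes child vectors up a binary tree using one shared weight matrix. This is a generic hedged example, not the paper's DT-RNN; the toy tree, dimensions, and random embeddings are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 4
W = rng.normal(scale=0.1, size=(dim, 2 * dim))  # shared composition weights
b = np.zeros(dim)

def compose(node):
    """Recursively build a vector for a binary tree whose leaves are word vectors."""
    if isinstance(node, np.ndarray):            # leaf: a word embedding
        return node
    left, right = node
    children = np.concatenate([compose(left), compose(right)])
    return np.tanh(W @ children + b)            # parent vector computed from its children

# Toy tree for "(a (small dog))": leaves are random stand-ins for word embeddings.
a, small, dog = (rng.normal(size=dim) for _ in range(3))
sentence_vector = compose((a, (small, dog)))
print(sentence_vector.shape)                    # (4,) -- one fixed-size vector for the tree
```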
4

Cai, Guo-Rong, and Shui-Li Chen. "Recursive Neural Networks Based on PSO for Image Parsing." Abstract and Applied Analysis 2013 (2013): 1–7. http://dx.doi.org/10.1155/2013/617618.

Full text
Abstract:
This paper presents an image parsing algorithm which is based on Particle Swarm Optimization (PSO) and Recursive Neural Networks (RNNs). State-of-the-art methods such as the traditional RNN-based parsing strategy use L-BFGS over the complete data to learn the parameters. However, this can cause problems due to the nondifferentiable objective function. To solve this problem, the PSO algorithm is employed to tune the weights of the RNN so as to minimize the objective. Experimental results obtained on the Stanford background dataset show that our PSO-based training algorithm outperforms traditional RNN, Pixel CRF, region-based energy, simultaneous MRF, and superpixel MRF.
APA, Harvard, Vancouver, ISO, and other styles
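For readers unfamiliar with the training idea described above, the sketch below runs a generic particle swarm optimization loop over a weight vector against an arbitrary, possibly nondifferentiable objective. The objective, swarm size, and PSO constants are assumptions for illustration; this is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

def objective(w):
    """Stand-in for a nondifferentiable objective (lower is better); the real parsing
    objective from the paper is not reproduced here."""
    return np.sum(np.abs(w - 0.5)) + 0.1 * np.sum(np.round(w) != 0)

n_particles, dim, iters = 20, 10, 100
w_inertia, c_personal, c_global = 0.7, 1.5, 1.5   # commonly used PSO constants (assumed)

pos = rng.uniform(-1, 1, size=(n_particles, dim))  # candidate weight vectors
vel = np.zeros_like(pos)
best_pos = pos.copy()                              # each particle's best position so far
best_val = np.array([objective(p) for p in pos])
g_best = best_pos[best_val.argmin()].copy()        # swarm-wide best position

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = (w_inertia * vel
           + c_personal * r1 * (best_pos - pos)
           + c_global * r2 * (g_best - pos))
    pos += vel
    vals = np.array([objective(p) for p in pos])
    improved = vals < best_val
    best_pos[improved], best_val[improved] = pos[improved], vals[improved]
    g_best = best_pos[best_val.argmin()].copy()

print("best objective found:", best_val.min())
```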
5

Pike, Xander, and Jordan Cheer. "Active noise and vibration control of systems with primary path nonlinearities using FxLMS, Neural Networks and Recursive Neural Networks." Journal of the Acoustical Society of America 150, no. 4 (2021): A345. http://dx.doi.org/10.1121/10.0008532.

Full text
Abstract:
Active control systems are often used to surmount the challenges associated with using passive control measures to control low frequencies, since they achieve control without the application of large or heavy control treatments. Historically, linear active control strategies have been used in feed-forward control systems to drive the control source to minimise the signal measured at the error sensor. However, when the response from noise source to error sensor becomes nonlinear, either in the primary or secondary path, the performance of such controllers can suffer. To overcome this limitation, it has previously been shown that Neural Networks (NNs) can outperform linear controllers. Furthermore, Recursive Neural Networks (RNNs) have been shown to outperform classical feed-forward networks in some cases. This is usually explained by the RNN's ability to form a rudimentary “memory.” This paper compares the behaviour of the linear FxLMS algorithm, an NN and an RNN through their application to the control of a simulated system with variable levels of saturation and hysteretic nonlinearities in the primary path. It is demonstrated that the NN is capable of greater control of saturation nonlinearities than FxLMS. Similarly, the RNN is capable of greater control of hysteretic nonlinearities than the NN.
APA, Harvard, Vancouver, ISO, and other styles
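As background on the linear baseline mentioned above, a bare-bones filtered-x LMS loop is sketched below under toy assumptions: the primary path, the secondary-path estimate, the filter length, and the step size are all made up for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

L_taps = 32                            # adaptive control filter length (assumed)
mu = 1e-3                              # LMS step size (assumed)
s_hat = np.array([0.0, 0.6, 0.3])      # toy secondary-path impulse response estimate

w = np.zeros(L_taps)                   # control filter taps
x_buf = np.zeros(L_taps)               # recent reference samples
fx_buf = np.zeros(L_taps)              # recent filtered-reference samples
y_buf = np.zeros(len(s_hat))           # recent control outputs (toy secondary path memory)
xs_buf = np.zeros(len(s_hat))          # recent reference samples used to build filtered-x

for n in range(5000):
    x = rng.normal()                              # reference signal sample
    x_buf = np.roll(x_buf, 1); x_buf[0] = x
    d = 0.8 * x_buf[2] + 0.01 * rng.normal()      # toy delayed primary-path disturbance

    y = w @ x_buf                                 # control signal from the adaptive filter
    y_buf = np.roll(y_buf, 1); y_buf[0] = y
    e = d + s_hat @ y_buf                         # error = disturbance + control via secondary path

    xs_buf = np.roll(xs_buf, 1); xs_buf[0] = x
    fx_buf = np.roll(fx_buf, 1); fx_buf[0] = s_hat @ xs_buf   # the "filtered-x" signal

    w -= mu * e * fx_buf                          # FxLMS update: gradient step on e**2

print("residual error magnitude after adaptation:", abs(e))
```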
6

Raghu, Nagashree, and Gowda Kishore. "Electric vehicle charging state predictions through hybrid deep learning: A review." GSC Advanced Research and Reviews 15, no. 1 (2023): 076–80. https://doi.org/10.5281/zenodo.7929676.

Full text
Abstract:
This review paper discusses the application of hybrid deep learning techniques for predicting the charging state of electric vehicles. The paper highlights the importance of accurate predictions for the efficient management of electric vehicle charging stations. The review focuses on the use of recursive neural networks (RNNs) and the gated recurrent unit (GRU) framework in hybrid deep learning models, which have shown promising results in previous studies. In addition to hybrid deep learning, the paper also examines the use of support vector machines (SVMs) and artificial neural networks (ANNs) in charging state prediction. The strengths and weaknesses of these different approaches are analyzed and compared. The paper concludes that hybrid deep learning models, particularly those using RNNs and GRUs, are a promising approach for accurately predicting electric vehicle charging states. The paper also suggests potential areas for future research to further improve the accuracy and efficiency of charging state predictions.
APA, Harvard, Vancouver, ISO, and other styles
7

Raghu Nagashree and Kishore Gowda. "Electric vehicle charging state predictions through hybrid deep learning: A review." GSC Advanced Research and Reviews 15, no. 1 (2023): 076–80. http://dx.doi.org/10.30574/gscarr.2023.15.1.0116.

Full text
Abstract:
This review paper discusses the application of hybrid deep learning techniques for predicting the charging state of electric vehicles. The paper highlights the importance of accurate predictions for the efficient management of electric vehicle charging stations. The review focuses on the use of recursive neural networks (RNNs) and the gated recurrent unit (GRU) framework in hybrid deep learning models, which have shown promising results in previous studies. In addition to hybrid deep learning, the paper also examines the use of support vector machines (SVMs) and artificial neural networks (ANNs) in charging state prediction. The strengths and weaknesses of these different approaches are analyzed and compared. The paper concludes that hybrid deep learning models, particularly those using RNNs and GRUs, are a promising approach for accurately predicting electric vehicle charging states. The paper also suggests potential areas for future research to further improve the accuracy and efficiency of charging state predictions.
APA, Harvard, Vancouver, ISO, and other styles
8

Yang, Lei, Saddam Aziz, and Zhenyang Yu. "Cybersecurity Challenges in PV-Hydrogen Transport Networks: Leveraging Recursive Neural Networks for Resilient Operation." Energies 18, no. 9 (2025): 2262. https://doi.org/10.3390/en18092262.

Full text
Abstract:
In the rapidly evolving landscape of transportation technologies, hydrogen vehicle networks integrated with photovoltaic (PV) systems represent a significant advancement toward sustainable mobility. However, the integration of such technologies also introduces complex cybersecurity challenges that must be meticulously managed to ensure operational integrity and system resilience. This paper explores the intricate dynamics of cybersecurity in PV-powered hydrogen vehicle networks, focusing on the real-time challenges posed by cyber threats such as False Data Injection Attacks (FDIAs) and their impact on network operations. Our research utilizes a novel hierarchical robust optimization model enhanced by Recursive Neural Networks (RNNs) to improve detection rates and response times to cyber incidents across various severity levels. The initial findings reveal that as the severity of incidents escalates from level 1 to 10, the response time significantly increases from an average of 7 min for low-severity incidents to over 20 min for high-severity scenarios, demonstrating the escalating complexity and resource demands of more severe incidents. Additionally, the study introduces an in-depth examination of the detection dynamics, illustrating that while detection rates generally decrease as incident frequency increases—due to system overload—the employment of advanced RNNs effectively mitigates this trend, sustaining high detection rates of up to 95% even under high-frequency scenarios. Furthermore, we analyze the cybersecurity risks specifically associated with the intermittency of PV-based hydrogen production, demonstrating how fluctuations in solar energy availability can create vulnerabilities that cyberattackers may exploit. We also explore the relationship between incident frequency, detection sensitivity, and the resulting false positive rates, revealing that the optimal adjustment of detection thresholds can reduce false positives by as much as 30%, even under peak load conditions. This paper not only provides a detailed empirical analysis of the cybersecurity landscape in PV-integrated hydrogen vehicle networks but also offers strategic insights into the deployment of AI-enhanced cybersecurity frameworks. The findings underscore the critical need for scalable, responsive cybersecurity solutions that can adapt to the dynamic threat environment of modern transport infrastructures, ensuring the sustainability and safety of solar-powered hydrogen mobility solutions.
APA, Harvard, Vancouver, ISO, and other styles
9

Venturini, M. "Simulation of Compressor Transient Behavior Through Recurrent Neural Network Models." Journal of Turbomachinery 128, no. 3 (2005): 444–54. http://dx.doi.org/10.1115/1.2183315.

Full text
Abstract:
In the paper, self-adapting models capable of reproducing time-dependent data with high computational speed are investigated. The considered models are recurrent feed-forward neural networks (RNNs) with one feedback loop in a recursive computational structure, trained by using a back-propagation learning algorithm. The data used for both training and testing the RNNs have been generated by means of a nonlinear physics-based model for compressor dynamic simulation, which was calibrated on a multistage axial-centrifugal small size compressor. The first step of the analysis is the selection of the compressor maneuver to be used for optimizing RNN training. The subsequent step consists in evaluating the most appropriate RNN structure (optimal number of neurons in the hidden layer and number of outputs) and RNN proper delay time. Then, the robustness of the model response towards measurement uncertainty is ascertained, by comparing the performance of RNNs trained on data uncorrupted or corrupted with measurement errors with respect to the simulation of data corrupted with measurement errors. Finally, the best RNN model is tested on field data taken on the axial-centrifugal compressor on which the physics-based model was calibrated, by comparing physics-based model and RNN predictions against measured data. The comparison between RNN predictions and measured data shows that the agreement can be considered acceptable for inlet pressure, outlet pressure and outlet temperature, while errors are significant for inlet mass flow rate.
APA, Harvard, Vancouver, ISO, and other styles
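The "recurrent feed-forward network with one feedback loop" described above can be pictured as a feed-forward net whose previous output is fed back as an extra input at the next time step. The sketch below shows only that structure; layer sizes and the untrained random weights are assumptions, and the back-propagation training used in the paper is omitted.

```python
import numpy as np

rng = np.random.default_rng(4)

n_in, n_hidden, n_out = 4, 8, 3      # assumed sizes: boundary inputs -> predicted measurements
W1 = rng.normal(scale=0.1, size=(n_hidden, n_in + n_out))  # inputs plus fed-back previous outputs
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_out, n_hidden))
b2 = np.zeros(n_out)

def simulate(inputs):
    """Feed-forward net with one feedback loop: the previous output re-enters as input."""
    y_prev = np.zeros(n_out)
    outputs = []
    for u_t in inputs:
        h = np.tanh(W1 @ np.concatenate([u_t, y_prev]) + b1)
        y_prev = W2 @ h + b2
        outputs.append(y_prev)
    return np.array(outputs)

trajectory = simulate(rng.normal(size=(50, n_in)))   # 50 time steps of a toy maneuver
print(trajectory.shape)                              # (50, 3)
```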
10

Hong, Chaoqun, Zhiqiang Zeng, Xiaodong Wang, and Weiwei Zhuang. "Multiple Network Fusion with Low-Rank Representation for Image-Based Age Estimation." Applied Sciences 8, no. 9 (2018): 1601. http://dx.doi.org/10.3390/app8091601.

Full text
Abstract:
Image-based age estimation is a challenging task since there are ambiguities between the apparent age of face images and the actual ages of people. Therefore, data-driven methods are popular. To improve data utilization and estimation performance, we propose an image-based age estimation method. Theoretically speaking, the key idea of the proposed method is to integrate multi-modal features of face images. To achieve this, we propose a multi-modal learning framework called Multiple Network Fusion with Low-Rank Representation (MNF-LRR). In this process, different deep neural network (DNN) structures, such as autoencoders, Convolutional Neural Networks (CNNs), Recursive Neural Networks (RNNs), and so on, can be used to extract semantic information from facial images. The outputs of these neural networks are then represented in a low-rank feature space. In this way, feature fusion is obtained in this space, and robust multi-modal image features can be computed. An experimental evaluation is conducted on two challenging face datasets for image-based age estimation extracted from the Internet Movie Database (IMDB) and Wikipedia (WIKI). The results show the effectiveness of the proposed MNF-LRR.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Recursive Neural Networks (RNNs)"

1

Billingsley, Richard John. "Deep Learning for Semantic and Syntactic Structures." Thesis, The University of Sydney, 2014. http://hdl.handle.net/2123/12825.

Full text
Abstract:
Deep machine learning has enjoyed recent success in vision and speech-to-text tasks, using deep multi-layered neural networks. They have obtained remarkable results particularly where the internal representation of the task is unclear. In parsing, where the structure of syntax is well studied and understood from linguistics, neural networks have so far not performed so well. State-of-the-art parsers use a tree-based graphical model that requires a large number of equivalent classes to represent each parse node and its phrase label. A recursive neural network (RNN) parser has been developed that works well on short sentences, but falls short of the state-of-the-art results on longer sentences. This thesis aims to investigate deep learning and improve parsing by examining how neural networks could perform state-of-the-art parsing by comparison with PCFG parsers. We hypothesize that a neural network could be configured to implement an algorithm parallel to PCFG parsers, and examine their suitability to this task from an analytic perspective. This highlights a missing term that the RNN parser is unable to model, and we identify the role of this missing term in parsing. We finally present two methods to improve the RNN parser by building upon the analysis in earlier chapters, one using an iterative process similar to belief propagation that yields a 0.38% improvement and another replacing the scoring method with a deeper neural model yielding a 0.83% improvement. By examining an RNN parser as an exemplar of a deep neural network, we gain insights to deep machine learning and some of the approximations it must make by comparing it with well studied non-neural parsers that achieve state-of-the-art results. In this way, our research provides a better understanding of deep machine learning and a step towards improvements in parsing that will lead to smarter algorithms that can learn more accurate representations of information and the syntax and semantics of text.
APA, Harvard, Vancouver, ISO, and other styles
2

Braga, Antônio de Pádua. "Design models for recursive binary neural networks." Thesis, Imperial College London, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.336442.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Raj, Chahat. "Convolutional Neural Networks for Multimodal Fake News Detection." Thesis, Delhi Technological University, 2021. http://dspace.dtu.ac.in:8080/jspui/handle/repository/18816.

Full text
Abstract:
An upsurge of false information is circulating on the internet. Social media and websites are flooded with unverified news posts. These posts are composed of text, images, audio, and video. There is a requirement for a system that detects fake content in multiple data modalities. We have seen a considerable amount of research on classification techniques for textual fake news detection, while frameworks dedicated to visual fake news detection are very few. We explored the state-of-the-art methods using deep networks such as CNNs and RNNs for multi-modal online information credibility analysis. They show rapid improvement in classification tasks without requiring pre-processing. To aid the ongoing research on fake news detection using CNN models, we build textual and visual modules to analyze their performance over multi-modal datasets. We exploit latent features present inside text and images using layers of convolutions. We see how well these convolutional neural networks perform classification when provided with only latent features and analyze what type of images need to be fed to perform efficient fake news detection. We propose a multi-modal Coupled ConvNet architecture that fuses both the data modules and efficiently classifies online news depending on its textual and visual content. We then offer a comparative analysis of the results of all the models utilized over three datasets. The proposed architecture outperforms various state-of-the-art methods for fake news detection with considerably high accuracies.
APA, Harvard, Vancouver, ISO, and other styles
4

St. Aubyn, Michael. "Connectionist rule processing using recursive auto-associative memory." Thesis, University of Hertfordshire, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.269445.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Acuna, David A. Elizondo. "The recursive deterministic perceptron and topology reduction strategies for neural networks." Université Louis Pasteur (Strasbourg) (1971-2008), 1997. http://www.theses.fr/1997STR13001.

Full text
Abstract:
Strategies for reducing the topology of neural networks can potentially offer advantages in terms of training and usage time, generalization capacity, reduced hardware requirements, or closer correspondence to the biological model. After presenting a state of the art of the existing methods for developing partially connected neural networks, we propose several new methods for reducing the number of hidden neurons in a neural network topology. These methods are based on the notion of higher-order connections. A new algorithm for testing linear separability is given, together with an upper bound on the convergence of the perceptron learning algorithm. We present a generalization of the perceptron neural network, which we call the recursive deterministic perceptron (RDP), that can in every case separate two classes deterministically (even when the two classes are not directly linearly separable). This generalization is based on increasing the dimension of the input vector, which yields more degrees of freedom. We propose a new notion of linear separability for m classes and show how to generalize the RDP to m classes using this new notion.
APA, Harvard, Vancouver, ISO, and other styles
6

Mirshekarianbabaki, Sadegh. "Blood Glucose Level Prediction via Seamless Incorporation of Raw Features Using RNNs." Ohio University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1523988526094778.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Riddarhaage, Teodor. "Identifying Content Blocks on Web Pages using Recursive Neural Networks and DOM-tree Features." Thesis, Linköpings universitet, Interaktiva och kognitiva system, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-166927.

Full text
Abstract:
The internet is a source of abundant information spread across different web pages. The identification and extraction of information from the internet has long been an active area of research for multiple purposes relating to both research and business intelligence. However, many of the existing systems and techniques rely on assumptions that limit their general applicability and negatively affect their performance as the web changes and evolves. This work explores the use of Recursive Neural Networks (RecNNs) along with the extensive amount of features present in the DOM-trees for web pages as a technique for identifying information on the internet without the need for strict assumptions on the structure or content of web pages. Furthermore, the use of Sparse Group LASSO (SGL) is explored as an effective tool for performing feature selection in the context of web information extraction. The results show that a RecNN model outperforms a similarly structured feedforward baseline for the task of identifying cookie consent dialogs across various web pages. Furthermore, the results suggest that SGL can be used as an effective tool for feature selection of DOM-tree features.
APA, Harvard, Vancouver, ISO, and other styles
8

Octavian, Stan. "New recursive algorithms for training feedforward multilayer perceptrons." Diss., Georgia Institute of Technology, 1999. http://hdl.handle.net/1853/13534.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Mohammadisohrabi, Ali. "Design and implementation of a Recurrent Neural Network for Remaining Useful Life prediction." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020.

Find full text
Abstract:
A key idea underlying many Predictive Maintenance solutions is the Remaining Useful Life (RUL) of machine parts, which is simply a prediction of the time remaining before a machine part is likely to require repair or replacement. Nowadays, given that systems are getting more complex, innovative Machine Learning and Deep Learning algorithms can be deployed to study the more sophisticated correlations in complex systems. The exponential increase in both data accumulation and processing power makes Deep Learning algorithms more desirable than before. In this paper, a Long Short-Term Memory (LSTM) network, which is a Recurrent Neural Network, is designed to predict the Remaining Useful Life (RUL) of turbofan engines. The dataset is taken from the NASA data repository. Finally, the performance obtained by the RNN is compared to the best Machine Learning algorithm for the dataset.
APA, Harvard, Vancouver, ISO, and other styles
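A minimal PyTorch sketch of the kind of model the abstract above describes: an LSTM regressor mapping a window of multivariate sensor readings to a single remaining-useful-life value. The feature count, window length, layer sizes, and toy data are assumptions, not the thesis's configuration.

```python
import torch
import torch.nn as nn

class RULRegressor(nn.Module):
    """LSTM over a window of sensor readings -> one remaining-useful-life estimate."""
    def __init__(self, n_features=14, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                              # x: (batch, window_length, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :]).squeeze(-1)    # read out the last time step

model = RULRegressor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Toy batch: 8 engines, 30-cycle windows, 14 sensor channels; targets are RUL in cycles.
x = torch.randn(8, 30, 14)
y = torch.rand(8) * 200

for _ in range(5):                                     # a few illustrative training steps
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
print(float(loss))
```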
10

Day, Nathan McClain. "Tactile Sensing and Position Estimation Methods for Increased Proprioception of Soft-Robotic Platforms." BYU ScholarsArchive, 2018. https://scholarsarchive.byu.edu/etd/7004.

Full text
Abstract:
Soft robots have the potential to transform the way robots interact with their environment. This is due to their low inertia and inherent ability to more safely interact with the world without damaging themselves or the people around them. However, existing sensing for soft robots has at least partially limited their ability to control interactions with their environment. Tactile sensors could enable soft robots to sense interaction, but most tactile sensors are made from rigid substrates and are not well suited to applications for soft robots that can deform. In addition, the benefit of being able to cheaply manufacture soft robots may be lost if the tactile sensors that cover them are expensive and their resolution does not scale well for manufacturability. Soft robots not only need to know their interaction forces due to contact with their environment, they also need to know where they are in Cartesian space. Because soft robots lack a rigid structure, traditional methods of joint estimation found in rigid robots cannot be employed on soft robotic platforms. This requires a different approach to soft robot pose estimation. This thesis will discuss both tactile force sensing and pose estimation methods for soft robots. A method to make affordable, high-resolution tactile sensor arrays (manufactured in rows and columns) that can be used for sensorizing soft robots and other soft bodies is developed. However, the construction results in a sensor array that exhibits significant amounts of cross-talk when two taxels in the same row are compressed. Using the same fabric-based tactile sensor array construction design, two different methods for cross-talk compensation are presented. The first uses a mathematical model to calculate a change in resistance of each taxel directly. The second method introduces additional simple circuit components that enable us to isolate each taxel electrically and relate voltage to force directly. This thesis also discusses various approaches in soft robot pose estimation along with a method for characterizing sensors using machine learning. Particular emphasis is placed on the effectiveness of parameter-based learning versus parameter-free learning, in order to determine which method of machine learning is more appropriate and accurate for soft robot pose estimation. Various machine learning architectures, such as recursive neural networks and convolutional neural networks, are also tested to demonstrate the most effective architecture to use for characterizing soft-robot sensors.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Recursive Neural Networks (RNNs)"

1

Kamp, Yves. Recursive neural networks for associative memory. John Wiley & Sons, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

RNNS/IEEE Symposium on Neuroinformatics and Neurocomputers (1992 Rostov-na-Donu, Russia). The RNNS/IEEE Symposium on Neuroinformatics and Neurocomputers, Rostov-on-Don, Russia, October 7-10, 1992. IEEE Services Center, 1992.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Chen, S. A parallel recursive prediction error algorithm for training layered neural networks. University of Sheffield, Dept. of Control Engineering, 1989.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Billings, S. A. A comparison of the backpropagation and recursive prediction error algorithms for training neural networks. University of Sheffield, Dept. of Control Engineering, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Sangeetha, V., and S. Kevin Andrews. Introduction to Artificial Intelligence and Neural Networks. Magestic Technology Solutions (P) Ltd, Chennai, Tamil Nadu, India, 2023. http://dx.doi.org/10.47716/mts/978-93-92090-24-0.

Full text
Abstract:
Artificial Intelligence (AI) has emerged as a defining force in the current era, shaping the contours of technology and deeply permeating our everyday lives. From autonomous vehicles to predictive analytics and personalized recommendations, AI continues to revolutionize various facets of human existence, progressively becoming the invisible hand guiding our decisions. Simultaneously, its growing influence necessitates the need for a nuanced understanding of AI, thereby providing the impetus for this book, “Introduction to Artificial Intelligence and Neural Networks.” This book aims to equip its readers with a comprehensive understanding of AI and its subsets, machine learning and deep learning, with a particular emphasis on neural networks. It is designed for novices venturing into the field, as well as experienced learners who desire to solidify their knowledge base or delve deeper into advanced topics. In Chapter 1, we provide a thorough introduction to the world of AI, exploring its definition, historical trajectory, and categories. We delve into the applications of AI, and underscore the ethical implications associated with its proliferation. Chapter 2 introduces machine learning, elucidating its types and basic algorithms. We examine the practical applications of machine learning and delve into challenges such as overfitting, underfitting, and model validation. Deep learning and neural networks, an integral part of AI, form the crux of Chapter 3. We provide a lucid introduction to deep learning, describe the structure of neural networks, and explore forward and backward propagation. This chapter also delves into the specifics of Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). In Chapter 4, we outline the steps to train neural networks, including data preprocessing, cost functions, gradient descent, and various optimizers. We also delve into regularization techniques and methods for evaluating a neural network model. Chapter 5 focuses on specialized topics in neural networks such as autoencoders, Generative Adversarial Networks (GANs), Long Short-Term Memory Networks (LSTMs), and Neural Architecture Search (NAS). In Chapter 6, we illustrate the practical applications of neural networks, examining their role in computer vision, natural language processing, predictive analytics, autonomous vehicles, and the healthcare industry. Chapter 7 gazes into the future of AI and neural networks. It discusses the current challenges in these fields, emerging trends, and future ethical considerations. It also examines the potential impacts of AI and neural networks on society. Finally, Chapter 8 concludes the book with a recap of key learnings, implications for readers, and resources for further study. This book aims not only to provide a robust theoretical foundation but also to kindle a sense of curiosity and excitement about the endless possibilities AI and neural networks offer. The journ
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Recursive Neural Networks (RNNs)"

1

Yellin, Daniel M., and Gail Weiss. "Synthesizing Context-free Grammars from Recurrent Neural Networks." In Tools and Algorithms for the Construction and Analysis of Systems. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-72016-2_19.

Full text
Abstract:
We present an algorithm for extracting a subclass of the context-free grammars (CFGs) from a trained recurrent neural network (RNN). We develop a new framework, pattern rule sets (PRSs), which describe sequences of deterministic finite automata (DFAs) that approximate a non-regular language. We present an algorithm for recovering the PRS behind a sequence of such automata, and apply it to the sequences of automata extracted from trained RNNs using the L* algorithm. We then show how the PRS may be converted into a CFG, enabling a familiar and useful presentation of the learned language. Extracting the learned language of an RNN is important to facilitate understanding of the RNN and to verify its correctness. Furthermore, the extracted CFG can augment the RNN in classifying correct sentences, as the RNN's predictive accuracy decreases when the recursion depth and distance between matching delimiters of its input sequences increases.
APA, Harvard, Vancouver, ISO, and other styles
2

Beysolow II, Taweh. "Recurrent Neural Networks (RNNs)." In Introduction to Deep Learning Using R. Apress, 2017. http://dx.doi.org/10.1007/978-1-4842-2734-3_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Bisong, Ekaba. "Recurrent Neural Networks (RNNs)." In Building Machine Learning and Deep Learning Models on Google Cloud Platform. Apress, 2019. http://dx.doi.org/10.1007/978-1-4842-4470-8_36.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Kobayashi, Naoki, and Minchao Wu. "Neural Network-Guided Synthesis of Recursive List Functions." In Tools and Algorithms for the Construction and Analysis of Systems. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-30823-9_12.

Full text
Abstract:
Kobayashi et al. have recently proposed NeuGuS, a framework of neural-network-guided synthesis of logical formulas or simple program fragments, where a neural network is first trained based on data, and then a logical formula over integers is constructed by using the weights and biases of the trained network as hints. The previous method was, however, restricted to the class of formulas of quantifier-free linear integer arithmetic. In this paper, we propose a NeuGuS method for the synthesis of recursive predicates over lists definable by using the left fold function. To this end, we design and train a special-purpose recurrent neural network (RNN), and use the weights of the trained RNN to synthesize a recursive predicate. We have implemented the proposed method and conducted preliminary experiments to confirm the effectiveness of the method.
APA, Harvard, Vancouver, ISO, and other styles
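The target class of predicates, those definable with a left fold, can be illustrated directly. The example below is mine, not the chapter's: a "sorted in non-decreasing order" predicate written with functools.reduce, Python's analogue of foldl. Roughly this kind of step function is what the chapter aims to synthesize from a trained RNN's weights.

```python
from functools import reduce

def is_sorted(xs):
    """Predicate over integer lists defined by a left fold.

    The fold state is (previous element, still_sorted); the whole predicate is
    the left fold of `step` over the list, starting from a neutral state.
    """
    def step(state, x):
        prev, ok = state
        return (x, ok and prev <= x)
    return reduce(step, xs, (float("-inf"), True))[1]

print(is_sorted([1, 2, 2, 5]))   # True
print(is_sorted([3, 1, 4]))      # False
```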
5

Ji, Jinbao, Zongxiang Hu, Weiqi Zhang, and Sen Yang. "Development of Deep Learning Algorithms, Frameworks and Hardwares." In Proceeding of 2021 International Conference on Wireless Communications, Networking and Applications. Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-2456-9_71.

Full text
Abstract:
As the core algorithm of artificial intelligence, deep learning has brought new breakthroughs and opportunities to all walks of life. This paper summarizes the principles of deep learning algorithms such as the Autoencoder (AE), Boltzmann Machine (BM), Deep Belief Network (DBM), Convolutional Neural Network (CNN), Recurrent Neural Network (RNN) and Recursive Neural Network (RNN). The characteristics and differences of deep learning frameworks such as Tensorflow, Caffe, Theano and PyTorch are compared and analyzed. Finally, the application and performance of hardware platforms such as CPUs and GPUs in deep learning acceleration are introduced. This overview of the development and application of deep learning algorithms, frameworks and hardware technology can provide a reference and basis for the selection of deep learning technologies.
APA, Harvard, Vancouver, ISO, and other styles
6

Korytkowski, Marcin, Marcin Gabryel, and Adam Gaweda. "Recursive Probabilistic Neural Networks." In Parallel Processing and Applied Mathematics. Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-24669-5_82.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Bianchini, Monica, Marco Gori, Paolo Mazzoni, Lorenzo Sarti, and Franco Scarselli. "Face Localization with Recursive Neural Networks." In Neural Nets. Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-45216-4_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Wolter, Moritz, Jürgen Gall, and Angela Yao. "Sequence Prediction Using Spectral RNNs." In Artificial Neural Networks and Machine Learning – ICANN 2020. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-61609-0_65.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Laplante, Phillip A., and Satish Mahadevan Srinivasan. "Recurrent Neural Networks (RNNs) for Predictive Analytics." In What Every Engineer Should Know About Data-Driven Analytics. CRC Press, 2023. http://dx.doi.org/10.1201/9781003278177-10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Maggini, Marco. "Recursive neural networks and automata." In Adaptive Processing of Sequences and Data Structures. Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/bfb0054002.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Recursive Neural Networks (RNNs)"

1

Wang, Yiqing, Mingming Song, Ye Xia, Wenjun Cao, and Limin Sun. "Joint Input-State Estimation Based on Recurrent Neural Network Assisted-Augmented Kalman Filter." In IABSE Symposium, Tokyo 2025: Environmentally Friendly Technologies and Structures: Focusing on Sustainable Approaches. International Association for Bridge and Structural Engineering (IABSE), 2025. https://doi.org/10.2749/tokyo.2025.0338.

Full text
Abstract:
The joint estimation of system states and unknown input loads in dynamic civil structures, based on limited observations, has garnered significant attention in recent years. A widely used method for this is the augmented Kalman filter (AKF), which extends the state vector to include unknown inputs, allowing for simultaneous state and input estimation. However, like the classical Kalman filter (KF), the AKF is highly sensitive to the tuning of hyperparameters—specifically, the covariance matrices of process and measurement noise—and to inaccuracies in the state-space model, which hampers its accuracy and robustness in practical applications. To address these challenges, this study proposes a recurrent neural network-assisted AKF (AKFNet) for joint input-response estimation. The AKFNet is a hybrid model that combines data-driven and model-driven approaches by incorporating a recurrent neural network (RNN) into the AKF's recursive framework. Rather than relying solely on state-space information to compute the Kalman gain, the RNN module learns to refine this computation from real data, thereby enhancing the filter's ability to integrate model predictions with measured data. This approach mitigates the limitations of traditional AKF methods in civil engineering, particularly the challenges posed by unknown noise covariances and model errors. The proposed AKFNet is thoroughly validated through both numerical simulations and experimental tests, demonstrating its superiority over the AKF for real-time state and input estimation from sparse response data, even in the presence of complex noise and model errors.
APA, Harvard, Vancouver, ISO, and other styles
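For context, one recursive predict/update step of a linear Kalman filter is sketched below, with a comment marking the gain computation that the AKFNet idea described above would refine with an RNN. The matrices and numbers are illustrative placeholders; in the augmented formulation the state vector additionally stacks the unknown inputs.

```python
import numpy as np

def kalman_step(x, P, y, A, H, Q, R):
    """One predict/update cycle of a linear Kalman filter.

    In an augmented Kalman filter (AKF) the state x also stacks the unknown
    inputs; in the AKFNet idea, an RNN refines the gain K computed below.
    """
    # Predict
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)        # analytic Kalman gain (RNN-refined in AKFNet)
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy 2-state system observed through one noisy sensor (all numbers assumed).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q, R = 1e-4 * np.eye(2), np.array([[1e-2]])
x, P = np.zeros(2), np.eye(2)
for y in [0.1, 0.18, 0.32, 0.41]:
    x, P = kalman_step(x, P, np.array([y]), A, H, Q, R)
print(x)
```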
2

Ciaramella, Angelo, Emanuel Di Nardo, and Giuseppe Vettigli. "Recursive Learning Framework for Structured Data Agglomeration." In 2024 International Joint Conference on Neural Networks (IJCNN). IEEE, 2024. http://dx.doi.org/10.1109/ijcnn60899.2024.10650385.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Hashimoto, Kazuma, Makoto Miwa, Yoshimasa Tsuruoka, and Takashi Chikayama. "Simple Customization of Recursive Neural Networks for Semantic Relation Classification." In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 2013. http://dx.doi.org/10.18653/v1/d13-1137.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Pomponi, Jary, Mattia Merluzzi, Alessio Devoto, Mateus Pontes Mota, Paolo Di Lorenzo, and Simone Scardapane. "Goal-Oriented Communications Based on Recursive Early Exit Neural Networks." In 2024 58th Asilomar Conference on Signals, Systems, and Computers. IEEE, 2024. https://doi.org/10.1109/ieeeconf60004.2024.10942792.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Venturini, M. "Simulation of Compressor Transient Behavior Through Recurrent Neural Network Models." In ASME Turbo Expo 2005: Power for Land, Sea, and Air. ASMEDC, 2005. http://dx.doi.org/10.1115/gt2005-68030.

Full text
Abstract:
In the paper, self-adapting models capable of reproducing time-dependent data with high computational speed are investigated. The considered models are recurrent feed-forward neural networks (RNNs) with one feedback loop in a recursive computational structure, trained by using a back-propagation learning algorithm. The data used for both training and testing the RNNs have been generated by means of a non-linear physics-based model for compressor dynamic simulation, which was calibrated on a multi-stage axial-centrifugal small size compressor. The first step of the analysis is the selection of the compressor maneuver to be used for optimizing RNN training. The subsequent step consists in evaluating the most appropriate RNN structure (optimal number of neurons in the hidden layer and number of outputs) and RNN proper delay time. Then, the robustness of the model response towards measurement uncertainty is ascertained, by comparing the performance of RNNs trained on data uncorrupted or corrupted with measurement errors with respect to the simulation of data both uncorrupted and corrupted with measurement errors. Finally, the best RNN model is tested on field data taken on the axial-centrifugal compressor on which the physics-based model was calibrated, by comparing physics-based model and RNN predictions against measured data. The comparison between RNN predictions and measured data shows that the agreement can be considered acceptable for inlet pressure, outlet pressure and outlet temperature, while errors are significant for inlet mass flow rate.
APA, Harvard, Vancouver, ISO, and other styles
6

Srikonda, Rohit, Rune Haakonsen, Massimiliano Russo, and Peri Periyasamy. "Real-Time Wellhead Bending Moment Measurement Using Motion Reference Unit (MRU) Sensors and Machine Learning." In ASME 2018 37th International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/omae2018-78301.

Full text
Abstract:
In order to facilitate real-time monitoring of accumulated wellhead fatigue damage, it is necessary to measure the wellhead bending moment in real time. This paper presents a novel method to estimate the wellhead bending moment in real time using acceleration and inclination data from the motion reference unit (MRU) sensors installed on the BOP and LRJ, riser tension data and a trained neural network model. The proposed method uses a Recursive Neural Network (RNN) model trained to estimate the wellhead bending moment in real time with high accuracy based on MRU sensor data and riser tension time series from a few previous cycles. In addition to the power of modeling complex nonlinearities, RNNs provide the advantage of better capturing the dynamic effects by learning to recognize the patterns in the sensor data and riser tension time series. The RNN model is trained using virtual sensor data and wellhead bending moment from a finite element (FE) model of the drilling riser subjected to irregular wave time domain analyses based on a training matrix with a limited number of significant height (Hs) and peak period (Tp) combinations. Once trained, tested and deployed, the RNN model can make real-time estimations of the wellhead bending moment based on MRU sensor data and riser tension time series. The RNN model can be an efficient and accurate alternative to a physical model based on the indirect method for real-time calculation of the wellhead bending moment using real-time sensor data. A case study is presented to explain the training procedures for the RNN model. A set of test cases not included in the training dataset is used to demonstrate the accuracy of the RNN model using Root Mean Squared Error (RMSE), Normalized Root Mean Squared Error (NRMSE) and coefficient of determination (R2) as metrics.
APA, Harvard, Vancouver, ISO, and other styles
7

Faller, William E., William E. Smith, and Thomas T. Huang. "Application of Recursive Neural Networks to Six Degree-of-Freedom Simulation of an Experimental Model Undergoing Severe Maneuvers." In ASME 1996 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 1996. http://dx.doi.org/10.1115/imece1996-0954.

Full text
Abstract:
Recursive neural networks (RNN) have been applied to six degree-of-freedom (6-DOF) simulation of an experimental radio controlled model undergoing severe 6-DOF maneuvers. The inputs to the RNN were the time-varying control signals (plane deflection angles, propeller RPM). The required outputs were the time-varying 6-DOF vehicle dynamics [u(t), v(t), w(t), p(t), q(t), r(t)]. The vehicle accelerations, Euler angles and trajectories were also calculated throughout the maneuver. The results showed that all variables (accelerations, velocities, angles and displacements) were accurately predicted throughout the entire maneuver. Further, all maneuvers could be accurately predicted over any time period during which the control inputs were a known function. The results also clearly showed that the RNN 6-DOF simulation generalized to severe maneuvers not used in the development process.
APA, Harvard, Vancouver, ISO, and other styles
8

Omeke, James, and Albertus Retnanto. "Advances in Virtual Flow Metering Using Deep Composite Lstm-Autoencoder Network for Gas-Condensate Wells." In Middle East Oil, Gas and Geosciences Show. SPE, 2023. http://dx.doi.org/10.2118/213614-ms.

Full text
Abstract:
In terms of cost and execution time, data-driven Virtual Flow Meters (VFM) are alternative solutions to traditional well testing (WT) and physical multiphase flow meters (MPFM) for production rate determination, which is needed for critical decisions by operators, but they are faced with the challenge of low accuracy due to the transient and dynamic state of multiphase flow systems. Recently, some progress has been recorded by training steady-state feed-forward neural networks to learn to approximate production rate based on a certain number of input features (e.g., choke opening, pressure, temperature, etc.) without any recursive feedback connection between the network outputs and inputs. This disconnection has impacted their accuracy. Dynamic artificial neural networks, for example recurrent neural networks (RNNs) such as the LSTM, have shown good performance, as their architecture allows the use of data from past time steps to predict the current time step. Forecast accuracy for RNNs is limited to a short period due to their inherent vanishing gradient issues. While a majority of VFM applications have been developed for oil and gas systems, little or none is applied to gas-condensate systems. In this project, a sequence-to-sequence deep composite LSTM-Autoencoder neural network (LSTM-A-NN) was explored and used to demonstrate the ability to leverage its architecture to accurately predict multiphase flow rate for the highly dynamic multiphase flow phenomena associated with retrograde condensate reservoirs. Data used in training and validating the LSTM-A-NN were generated from simulations. First, a 3D compositional simulator (ECLIPSE 300) was used to simulate, as closely as possible, a realistic case of a compositional reservoir with flow from the subsurface to the wellhead to generate production rate data. Second, an integrated production system was built using GAP to simulate a coupled material-balance-based inflow with wells and a surface separator. The production output data in this case include production rates, wellhead pressure, bottom-hole pressure, temperatures, condensate-gas ratio, etc. For both cases, the LSTM-A-NN performance was impressive (mean square error less than 1) and demonstrated its flexibility and scalability with an increasing number of input features (production data). The LSTM-A-NN learns the physics of complex fluid flow through non-linear dimensionality reduction while passing the temporal sequence of production data through its encoder network. The encoded data representation is thereafter decoded and reconstructed such that the output has the same dimension as the input. The ability to leverage advanced artificial intelligence frameworks such as a composite LSTM-A-NN has proven that it is possible to achieve the desired accuracy needed in data-driven VFM to meet the requirements of low cost, low execution time, and high accuracy. This project has also demonstrated the ability of the data-driven model to learn the complex dynamics within the temporal ordering of input sequences of production data, with an internal memory adapted to remember or use information across long input sequences, hence yielding longer and more reliable forecasts, unlike other networks.
APA, Harvard, Vancouver, ISO, and other styles
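A hedged sketch of the general composite encoder-decoder shape described in the abstract above: an LSTM encoder compresses a window of production data, a decoder reconstructs it, and a small head maps the code to a flow-rate estimate. Channel counts, window length, and the single regression head are assumptions; the paper's exact architecture and training data are not reproduced here.

```python
import torch
import torch.nn as nn

class LSTMAutoencoderVFM(nn.Module):
    """Encoder compresses a window of production data; decoder reconstructs it;
    a small head maps the compressed code to an estimated flow rate."""
    def __init__(self, n_features=6, latent=32):
        super().__init__()
        self.encoder = nn.LSTM(n_features, latent, batch_first=True)
        self.decoder = nn.LSTM(latent, n_features, batch_first=True)
        self.rate_head = nn.Linear(latent, 1)

    def forward(self, x):                       # x: (batch, window, n_features)
        _, (h, _) = self.encoder(x)
        code = h[-1]                            # (batch, latent) summary of the window
        repeated = code.unsqueeze(1).repeat(1, x.size(1), 1)
        reconstruction, _ = self.decoder(repeated)
        rate = self.rate_head(code).squeeze(-1)
        return reconstruction, rate

model = LSTMAutoencoderVFM()
x = torch.randn(4, 60, 6)                       # 4 windows of 60 time steps, 6 sensor channels
recon, rate = model(x)
print(recon.shape, rate.shape)                  # torch.Size([4, 60, 6]) torch.Size([4])
```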
9

Sun, J., X. Ma, and M. Kazi. "Comparison of Decline Curve Analysis DCA with Recursive Neural Networks RNN for Production Forecast of Multiple Wells." In SPE Western Regional Meeting. Society of Petroleum Engineers, 2018. http://dx.doi.org/10.2118/190104-ms.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Luo, Donghao, Bingbing Ni, Yichao Yan, and Xiaokang Yang. "Image Matching via Loopy RNN." In Twenty-Sixth International Joint Conference on Artificial Intelligence. International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/335.

Full text
Abstract:
Most existing matching algorithms are one-off algorithms, i.e., they usually measure the distance between the two image feature representation vectors only once. In contrast, the human vision system achieves this task, i.e., image matching, by recursively looking at specific/related parts of both images and then making the final judgement. Towards this end, we propose a novel loopy recurrent neural network (Loopy RNN), which is capable of aggregating relationship information of two input images in a progressive/iterative manner and outputting the consolidated matching score in the final iteration. A Loopy RNN has two unique features. First, built on conventional long short-term memory (LSTM) nodes, it links the output gate of the tail node to the input gate of the head node, thus bringing up the symmetry property required for matching. Second, a monotonous loss designed for the proposed network guarantees increasing confidence during the recursive matching process. Extensive experiments on several image matching benchmarks demonstrate the great potential of the proposed method.
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Recursive Neural Networks (RNNs)"

1

Cerulli, Giovanni. Deep Learning and AI for Research in Python. Instats Inc., 2023. http://dx.doi.org/10.61700/g6nxp3uxsvu3l469.

Full text
Abstract:
This seminar is an introduction to Deep Learning and Artificial Intelligence methods for the social, economic, and health sciences using Python. After introducing the subject, the seminar will cover the following methods: (i) Feedforward Neural Networks (FNNs); (ii) Convolutional Neural Networks (CNNs); and (iii) Recursive Neural Networks (RNNs). The course will offer various instructional examples using real datasets in Python. An Instats certificate of completion is provided at the end of the seminar, and 2 ECTS equivalent points are offered.
APA, Harvard, Vancouver, ISO, and other styles