Academic literature on the topic 'Simple recurrent neural network'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Simple recurrent neural network.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of an academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Simple recurrent neural network"

1

Shapalin, Vitaliy Gennadiyevich, and Denis Vladimirovich Nikolayenko. "Comparison of the structure, efficiency, and speed of operation of feedforward, convolutional, and recurrent neural networks." Research Result. Information technologies 9, no. 4 (2024): 21–35. https://doi.org/10.18413/2518-1092-2024-9-4-0-3.

Abstract:
This article examines the efficiency of fully connected, recurrent, and convolutional neural networks in the context of developing a simple model for weather forecasting. The architectures and working principles of fully connected neural networks, the structure of one-dimensional and two-dimensional convolutional neural networks, as well as the architecture, features, advantages, and disadvantages of recurrent neural networks—specifically, simple recurrent neural networks, LSTM, and GRU, along with their bidirectional variants for each of the three aforementioned types—are discussed. Based on the available theoretical materials, simple neural networks were developed to compare the efficiency of each architecture, with training time and error magnitude serving as criteria, and temperature, wind speed, and atmospheric pressure as training data. The training speed, minimum and average error values for the fully connected neural network, convolutional neural network, simple recurrent network, LSTM, and GRU, as well as for bidirectional recurrent neural networks, were examined. Based on the results obtained, an analysis was conducted to explore the possible reasons for the effectiveness of each architecture. Graphs were plotted to show the relationship between processing speed and error magnitude for the three datasets examined: temperature, wind speed, and atmospheric pressure. Conclusions were drawn about the efficiency of specific models in the context of forecasting time series of meteorological data.
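For illustration, the comparison this abstract describes can be reproduced in miniature. The sketch below assumes TensorFlow/Keras and a synthetic univariate series standing in for the meteorological data; the window length, layer sizes, and training settings are placeholders, not the authors' configuration.

```python
import numpy as np
from tensorflow.keras import layers, models

# Synthetic stand-in for a temperature series; the article uses real data.
series = np.sin(np.linspace(0, 100, 5000)) + 0.1 * np.random.randn(5000)

def make_windows(data, window=24):
    # Predict the next value from the previous `window` values.
    X = np.stack([data[i:i + window] for i in range(len(data) - window)])
    return X[..., None], data[window:]  # add a feature axis for the RNNs

X, y = make_windows(series)

# Train one small model per recurrent cell type and compare training error.
for name, cell in [("SimpleRNN", layers.SimpleRNN),
                   ("LSTM", layers.LSTM),
                   ("GRU", layers.GRU)]:
    model = models.Sequential([layers.Input(shape=(24, 1)),
                               cell(32),
                               layers.Dense(1)])
    model.compile(optimizer="adam", loss="mae")
    history = model.fit(X, y, epochs=3, batch_size=64, verbose=0)
    print(name, "final training MAE:", round(history.history["loss"][-1], 4))
```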
2

Hindarto, Djarot. "Comparison of RNN Architectures and Non-RNN Architectures in Sentiment Analysis." sinkron 8, no. 4 (2023): 2537–46. http://dx.doi.org/10.33395/sinkron.v8i4.13048.

Abstract:
This study compares the sentiment analysis performance of multiple Recurrent Neural Network architectures and One-Dimensional Convolutional Neural Networks. The methods evaluated are the simple Recurrent Neural Network, Long Short-Term Memory, Gated Recurrent Unit, Bidirectional Recurrent Neural Network, and 1D ConvNets. A dataset comprising text reviews with positive or negative sentiment labels was evaluated. All evaluated models demonstrated extremely high accuracy, ranging from 99.81% to 99.99%. Apart from that, the loss generated by these models is also low, ranging from 0.0043 to 0.0021. However, there are minor performance differences between the evaluated architectures. The Long Short-Term Memory and Gated Recurrent Unit models mainly perform marginally better than the Simple Recurrent Neural Network, albeit with slightly lower accuracy and loss. In the meantime, the Bidirectional Recurrent Neural Network model demonstrates competitive performance, as it can effectively manage text context from both directions. Additionally, One-Dimensional Convolutional Neural Networks provide satisfactory results, indicating that convolution-based approaches are also effective in sentiment analysis. The findings of this study provide practitioners with essential insights for selecting an appropriate architecture for sentiment analysis tasks. While all models yield excellent performance, the choice of architecture can impact computational efficiency and training time. Therefore, a comprehensive understanding of the respective characteristics of Recurrent Neural Network architectures and One-Dimensional Convolutional Neural Networks is essential for making more informed decisions when constructing sentiment analysis models.
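As a sketch of the smallest architecture in this comparison, the following Keras model feeds token ids through an embedding into a simple recurrent layer with a sigmoid output for the positive/negative label. The vocabulary size, sequence length, and unit counts are illustrative assumptions, not the paper's settings.

```python
from tensorflow.keras import layers, models

vocab_size, max_len = 10_000, 200  # illustrative placeholders

model = models.Sequential([
    layers.Input(shape=(max_len,)),
    layers.Embedding(vocab_size, 64),        # token ids -> dense vectors
    layers.SimpleRNN(64),                    # swap in layers.LSTM / layers.GRU,
                                             # or wrap with layers.Bidirectional
    layers.Dense(1, activation="sigmoid"),   # positive vs. negative sentiment
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```

Replacing the recurrent layer is the only change needed to reproduce the kind of architecture comparison the study reports.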
3

Bartsev, S. I., and G. M. Markova. "Decoding of stimuli time series by neural activity patterns of recurrent neural network." Journal of Physics: Conference Series 2388, no. 1 (2022): 012052. http://dx.doi.org/10.1088/1742-6596/2388/1/012052.

Abstract:
The study is concerned with the question of whether it is possible to identify the specific sequence of input stimuli received by an artificial neural network using its neural activity pattern. We used the neural activity of a simple recurrent neural network in the course of an “Even-Odd” game simulation. For identification of input sequences we applied the method of neural network-based decoding; a multilayer decoding neural network is required for this task. The decoding accuracy reaches up to 80%. Based on the results that (1) residual excitation levels of the recurrent network's neurons are important for processing stimuli time series, and (2) the trajectories of neural activity of recurrent networks while receiving a specific input stimulus sequence are complex cycles, we claim the presence of neural activity attractors even in extremely simple neural networks. This result suggests the fundamental role of attractor dynamics in reflexive processes.
4

Parra Hernández, Rafael, Jaime Álvarez Gallegos, and José A. Hernández Reyes. "Simple recurrent neural network: A neural network structure for control systems." Neurocomputing 23, no. 1-3 (1998): 277–89. http://dx.doi.org/10.1016/s0925-2312(98)00084-8.

5

Back, Andrew D., and Ah Chung Tsoi. "A Low-Sensitivity Recurrent Neural Network." Neural Computation 10, no. 1 (1998): 165–88. http://dx.doi.org/10.1162/089976698300017935.

Abstract:
The problem of high sensitivity in modeling is well known. Small perturbations in the model parameters may result in large, undesired changes in the model behavior. A number of authors have considered the issue of sensitivity in feedforward neural networks from a probabilistic perspective. Less attention has been given to such issues in recurrent neural networks. In this article, we present a new recurrent neural network architecture that is capable of significantly improved parameter sensitivity properties compared to existing recurrent neural networks. The new recurrent neural network generalizes previous architectures by employing alternative discrete-time operators in place of the shift operator normally used. An analysis of the model demonstrates the existence of parameter sensitivity in recurrent neural networks and supports the proposed architecture. The new architecture performs significantly better than previous recurrent neural networks, as shown by a series of simple numerical experiments.
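For context, the shift operator q and the delta operator, one common alternative considered in this line of work, are related as follows (standard definitions with sampling interval Δ; the article's exact operator family may differ):

```latex
q\,x(k) = x(k+1), \qquad
\delta = \frac{q - 1}{\Delta}, \qquad
\delta\,x(k) = \frac{x(k+1) - x(k)}{\Delta}
```

For small Δ the delta operator approximates differentiation, which is one reason delta-parameterized models can behave less sensitively than shift-operator models at fast sampling rates.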
6

Kim Soon, Gan, Chin Kim On, Nordaliela Mohd Rusli, Tan Soo Fun, Rayner Alfred, and Tan Tse Guan. "Comparison of simple feedforward neural network, recurrent neural network and ensemble neural networks in phishing detection." Journal of Physics: Conference Series 1502 (March 2020): 012033. http://dx.doi.org/10.1088/1742-6596/1502/1/012033.

7

Andriyanov, Nikita A., David A. Petrosov, and Andrey V. Polyakov. "Selecting an artificial neural network architecture for assessing the state of a genetic algorithm population in the problem of structural-parametric synthesis of simulation models of business processes." Soft Measurements and Computing 12, no. 73 (2023): 70–81. http://dx.doi.org/10.36871/2618-9976.2023.12.007.

Abstract:
This article proposes a study aimed at determining an artificial neural network architecture to solve the problem of determining the population state of a genetic algorithm adapted to the problem of structural-parametric synthesis of simulation models of business processes. As the initial data for training the artificial neural network, we used the results of computational experiments obtained when operating a genetic algorithm model based on mathematical nested Petri nets, which solves the problem of synthesizing business process models (Petri net models) based on a given behavior. As examples of artificial neural network architectures for managing the process of finding solutions based on an evolutionary procedure, the following are considered: a fully connected artificial neural network (FCNN), a simple recurrent artificial neural network (Simple RNN), a long short-term memory recurrent network (LSTM), a gated recurrent unit network (GRU), and a bidirectional LSTM (Bidirectional LSTM). The machine learning algorithms used were: Support Vector Classifier, Decision Tree Classifier and Random Forest Classifier. The article discusses the presented architectures of artificial neural networks and various training methods. Based on the computational experiments carried out and the analysis of the results obtained, conclusions were drawn about the feasibility of using artificial neural networks with the RNN architecture to solve the problem of recognizing the state of the population and controlling the process of synthesis of solutions.
8

Cheng, Wei-Chen, Jau-Chi Huang, and Cheng-Yuan Liou. "Segmentation of DNA using simple recurrent neural network." Knowledge-Based Systems 26 (February 2012): 271–80. http://dx.doi.org/10.1016/j.knosys.2011.09.001.

9

Telmoudi, Achraf Jabeur, Hatem Tlijani, Lotfi Nabli, Maaruf Ali, and Radhi M'hiri. "A New RBF Neural Network for Prediction in Industrial Control." International Journal of Information Technology &amp; Decision Making 11, no. 04 (2012): 749–75. http://dx.doi.org/10.1142/s0219622012500198.

Abstract:
A novel neural architecture for prediction in industrial control, the 'Double Recurrent Radial Basis Function network' (R2RBF), is introduced for dynamic monitoring and prognosis of industrial processes. Three applications of the R2RBF network to prediction tasks confirmed that the proposed architecture minimizes the prediction error. The proposed R2RBF is excited by the recurrence of the output looped neurons on the input layer, which produces a dynamic memory on both the input and output layers. Given the learning complexity of neural networks trained with back-propagation, a simple architecture is proposed consisting of two simple Recurrent Radial Basis Function networks (RRBF). Each RRBF has only the input layer with looped neurons using the sigmoid activation function. The output of the first RRBF also serves as an additional input for the second RRBF. An unsupervised learning algorithm is proposed to determine the parameters of the Radial Basis Function (RBF) nodes. The K-means unsupervised learning algorithm used for the hidden layer is enhanced by initializing these input parameters with the output parameters of the RCE algorithm.
10

Bartsev, S. I., and G. M. Markova. "Recognition of stimulus received by recurrent neural network." Journal of Physics: Conference Series 2094, no. 3 (2021): 032041. http://dx.doi.org/10.1088/1742-6596/2094/3/032041.

Abstract:
The study is concerned with the comparison of two methods for identifying the stimulus received by an artificial neural network using the neural activity pattern that corresponds to the period of storing information about this stimulus in working memory. We used simple recurrent neural networks trained to pass the delayed matching-to-sample test. Neural activity was recorded during the pause between receiving stimuli. The analysis of neural excitation patterns showed that the neural networks encoded variables that were relevant for the task during the delayed matching-to-sample test, and their activity patterns were dynamic. The method of centroids allowed identifying the type of the received stimulus with efficiency up to 75%, while the neural network-based decoder showed 100% efficiency. In addition, this method was applied to determine the minimal set of neurons whose activity was the most significant for stimulus recognition.
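The centroid method mentioned in the abstract can be sketched in a few lines of numpy: average the activity patterns per stimulus class, then assign each trial to the nearest centroid. The array shapes and random data below are illustrative stand-ins for recorded network activity.

```python
import numpy as np

rng = np.random.default_rng(0)
patterns = rng.normal(size=(200, 30))   # (trials, neurons) delay-period activity
labels = rng.integers(0, 2, size=200)   # stimulus type received on each trial

# Centroid method: one mean activity pattern per stimulus class...
centroids = np.stack([patterns[labels == c].mean(axis=0) for c in (0, 1)])

# ...then identify each trial's stimulus by the nearest centroid.
dists = np.linalg.norm(patterns[:, None, :] - centroids[None, :, :], axis=2)
predicted = dists.argmin(axis=1)
print("identification accuracy:", (predicted == labels).mean())
```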

Dissertations / Theses on the topic "Simple recurrent neural network"

1

Parfitt, Shan Helen. "Explorations in anaphora resolution in artificial neural networks : implications for nativism." Thesis, Imperial College London, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.267247.

2

Rodriguez, Paul Fabian. "Mathematical foundations of simple recurrent networks." Diss., University of California, San Diego, 1999. http://wwwlib.umi.com/cr/ucsd/fullcit?p9935464.

3

Jacobsson, Henrik. "A Comparison of Simple Recurrent and Sequential Cascaded Networks for Formal Language Recognition." Thesis, University of Skövde, Department of Computer Science, 1999. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-391.

Abstract:
Two classes of recurrent neural network models are compared in this report, simple recurrent networks (SRNs) and sequential cascaded networks (SCNs), which are first- and second-order networks respectively. The comparison is aimed at describing and analysing the behaviour of the networks such that the differences between them become clear. A theoretical analysis, using techniques from dynamic systems theory (DST), shows that the second-order network has more possibilities in terms of dynamical behaviours than the first-order network. It also revealed that the second-order network could interpret its context with an input-dependent function in the output nodes. The experiments were based on training with backpropagation (BP) and an evolutionary algorithm (EA) on the AnBn grammar, which requires the ability to count. This analysis revealed some differences between the two training regimes tested and also between the performance of the two types of networks. The EA was found to be far more reliable than BP in this domain. Another important finding from the experiments was that although the SCN had more possibilities than the SRN in how it could solve the problem, these were not exploited in the domain tested in this project.
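The first-order/second-order distinction the report draws can be written compactly. In standard notation (not necessarily the thesis's), an SRN combines input and context additively, while a second-order network such as the SCN uses multiplicative input-context connections:

```latex
\text{SRN (first order):}\quad h_t = \sigma\left(W h_{t-1} + U x_t + b\right)
\qquad
\text{second order:}\quad h^{(k)}_t = \sigma\Big(\sum_{i,j} W_{kij}\, h^{(i)}_{t-1}\, x^{(j)}_t + b_k\Big)
```

The multiplicative weights are what let the second-order network realise an input-dependent mapping of its context, as noted in the abstract.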
4

Tekin, Mim Kemal. "Vehicle Path Prediction Using Recurrent Neural Network." Thesis, Linköpings universitet, Statistik och maskininlärning, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-166134.

Abstract:
Vehicle path prediction can be used to support Advanced Driver Assistance Systems (ADAS) that cover technologies like Autonomous Braking Systems, Adaptive Cruise Control, etc. In this thesis, the vehicle's future path, parameterized as 5 coordinates along the path, is predicted by using only visual data collected by a front vision sensor. This approach provides cheaper application opportunities without using different sensors. The predictions are made by deep convolutional neural networks (CNN), and the goal of the project is to use recurrent neural networks (RNN) and to investigate the benefits of adding recurrence to the task. Two different approaches are used for the models. The first approach is a single-frame approach that makes predictions by using only one image frame as input and predicts the future location points of the car. The single-frame approach is the baseline model. The second approach is a sequential approach that enables the network to use historical information from previous image frames in order to predict the vehicle's future path for the current frame. With this approach, the effect of using recurrence is investigated. Moreover, uncertainty is important for model reliability. Having a small uncertainty in most of the predictions, or a high uncertainty in situations unfamiliar to the model, will increase the success of the model. In this project, the uncertainty estimation approach is based on capturing the uncertainty by following a method that can be applied to deep learning models. The uncertainty approach uses the same models that are defined by the first two approaches. Finally, the approaches are evaluated by the mean absolute error and by defining two reasonable tolerance levels for the distance between the predicted path and the ground-truth path: the first is a strict tolerance level and the second is a more relaxed one. When using the strict tolerance level on test data, 36% of the predictions are accepted for the single-frame model and 48% for the sequential model, while 27% and 13% are accepted for the single-frame and sequential variants of the uncertainty models. When using the relaxed tolerance level on test data, 60% of the predictions are accepted for the single-frame model and 67% for the sequential model, while 65% and 53% are accepted for the single-frame and sequential variants of the uncertainty models. Furthermore, by using stored information for each sequence, the methods are evaluated for different conditions such as day/night, road type, and road cover. As a result, the sequential model outperforms the others in the majority of the evaluation results.
5

Wen, Tsung-Hsien. "Recurrent neural network language generation for dialogue systems." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/275648.

Abstract:
Language is the principal medium for ideas, while dialogue is the most natural and effective way for humans to interact with and access information from machines. Natural language generation (NLG) is a critical component of spoken dialogue and it has a significant impact on usability and perceived quality. Many commonly used NLG systems employ rules and heuristics, which tend to generate inflexible and stylised responses without the natural variation of human language. However, the frequent repetition of identical output forms can quickly make dialogue become tedious for most real-world users. Additionally, these rules and heuristics are not scalable and hence not trivially extensible to other domains or languages. A statistical approach to language generation can learn language decisions directly from data without relying on hand-coded rules or heuristics, which brings scalability and flexibility to NLG. Statistical models also provide an opportunity to learn in-domain human colloquialisms and cross-domain model adaptations. A robust and quasi-supervised NLG model is proposed in this thesis. The model leverages a Recurrent Neural Network (RNN)-based surface realiser and a gating mechanism applied to input semantics. The model is motivated by the Long Short-Term Memory (LSTM) network. The RNN-based surface realiser and gating mechanism use a neural network to learn end-to-end language generation decisions from input dialogue act and sentence pairs; it also integrates sentence planning and surface realisation into a single optimisation problem. The single optimisation not only bypasses the costly intermediate linguistic annotations but also generates more natural and human-like responses. Furthermore, a domain adaptation study shows that the proposed model can be readily adapted and extended to new dialogue domains via a proposed recipe. Continuing the success of end-to-end learning, the second part of the thesis speculates on building an end-to-end dialogue system by framing it as a conditional generation problem. The proposed model encapsulates a belief tracker with a minimal state representation and a generator that takes the dialogue context to produce responses. These features suggest comprehension and fast learning. The proposed model is capable of understanding requests and accomplishing tasks after training on only a few hundred human-human dialogues. A complementary Wizard-of-Oz data collection method is also introduced to facilitate the collection of human-human conversations from online workers. The results demonstrate that the proposed model can talk to human judges naturally, without any difficulty, for a sample application domain. In addition, the results also suggest that the introduction of a stochastic latent variable can help the system model intrinsic variation in communicative intention much better.
6

He, Jian. "Adaptive power system stabilizer based on recurrent neural network." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape8/PQDD_0008/NQ38471.pdf.

7

Gangireddy, Siva Reddy. "Recurrent neural network language models for automatic speech recognition." Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/28990.

Abstract:
The goal of this thesis is to advance the use of recurrent neural network language models (RNNLMs) for large vocabulary continuous speech recognition (LVCSR). RNNLMs are currently state-of-the-art and have been shown to consistently reduce the word error rates (WERs) of LVCSR tasks when compared to other language models. In this thesis we propose various advances to RNNLMs: improved learning procedures, enhanced context, and adaptation. We learned better parameters by a novel pre-training approach and enhanced the context using prosody and syntactic features. We present a pre-training method for RNNLMs, in which the output weights of a feed-forward neural network language model (NNLM) are shared with the RNNLM. This is accomplished by first fine-tuning the weights of the NNLM, which are then used to initialise the output weights of an RNNLM with the same number of hidden units. To investigate the effectiveness of the proposed pre-training method, we have carried out text-based experiments on the Penn Treebank Wall Street Journal data, and ASR experiments on the TED lectures data. Across the experiments, we observe small but significant improvements in perplexity (PPL) and ASR WER. Next, we present unsupervised adaptation of RNNLMs. We adapted the RNNLMs to a target domain (topic, genre, or television programme) at test time using ASR transcripts from first-pass recognition. We investigated two approaches to adapt the RNNLMs. In the first approach the forward-propagating hidden activations are scaled, using learning hidden unit contributions (LHUC). In the second approach we adapt all parameters of the RNNLM. We evaluated the adapted RNNLMs by reporting the WERs on multi-genre broadcast speech data. We observe small (on average 0.1% absolute) but significant improvements in WER compared to a strong unadapted RNNLM. Finally, we present the context enhancement of RNNLMs using prosody and syntactic features. The prosody features were computed from the acoustics of the context words, and the syntactic features were derived from the surface form of the words in the context. We trained the RNNLMs with word duration, pause duration, final phone duration, syllable duration, syllable F0, part-of-speech tag and Combinatory Categorial Grammar (CCG) supertag features. The proposed context-enhanced RNNLMs were evaluated by reporting PPL and WER on two speech recognition tasks, Switchboard and TED lectures. We observed substantial improvements in PPL (5% to 15% relative) and small but significant improvements in WER (0.1% to 0.5% absolute).
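The LHUC scaling mentioned for the first adaptation approach is usually formulated as an element-wise re-scaling of each hidden activation by a learned, bounded amplitude; this is the common form from the LHUC literature, and the thesis's exact parameterisation may differ:

```latex
h_j^{\text{adapted}} = a(r_j)\, h_j, \qquad a(r_j) = \frac{2}{1 + e^{-r_j}}
```

Only the per-unit parameters r_j are learned on the adaptation data, which keeps the number of adapted parameters small compared with re-training the whole RNNLM.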
8

Amartur, Sundar C. "Competitive recurrent neural network model for clustering of multispectral data." Case Western Reserve University School of Graduate Studies / OhioLINK, 1995. http://rave.ohiolink.edu/etdc/view?acc_num=case1058445974.

9

Bopaiah, Jeevith. "A recurrent neural network architecture for biomedical event trigger classification." UKnowledge, 2018. https://uknowledge.uky.edu/cs_etds/73.

Abstract:
A “biomedical event” is a broad term used to describe the roles and interactions between entities (such as proteins, genes and cells) in a biological system. The task of biomedical event extraction aims at identifying and extracting these events from unstructured texts. An important component in the early stage of the task is biomedical trigger classification, which involves identifying and classifying words or phrases that indicate an event. In this thesis, we present our work on biomedical trigger classification developed using the multi-level event extraction dataset. We restrict the scope of our classification to 19 biomedical event types grouped under four broad categories: Anatomical, Molecular, General and Planned. While most of the existing approaches are based on traditional machine learning algorithms which require extensive feature engineering, our model relies on neural networks to implicitly learn important features directly from the text. We use natural language processing techniques to transform the text into vectorized inputs that can be used in a neural network architecture. To the best of our knowledge, this is the first time neural attention strategies are being explored in the area of biomedical trigger classification. Our best results were obtained from an ensemble of 50 models which produced a micro F-score of 79.82%, an improvement of 1.3% over the previous best score.
10

Ljungehed, Jesper. "Predicting Customer Churn Using Recurrent Neural Networks." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-210670.

Abstract:
Churn prediction is used to identify customers that are becoming less loyal and is an important tool for companies that want to stay competitive in a rapidly growing market. In retail, a dynamic definition of churn is needed to identify churners correctly. Customer Lifetime Value (CLV) is the monetary value of a customer relationship. No change in CLV for a given customer indicates a decrease in loyalty. This thesis proposes a novel approach to churn prediction. The proposed model uses a Recurrent Neural Network to identify churners based on Customer Lifetime Value time series regression. The results show that the model performs better than random. This thesis also investigated the use of the K-means algorithm as a replacement for a rule-extraction algorithm. The K-means algorithm contributed to a more comprehensive analytical context regarding the churn prediction of the proposed model.

Books on the topic "Simple recurrent neural network"

1

Jones, Steven P. Neural network models of simple mechanical systems illustrating the feasibility of accelerated life testing. National Aeronautics and Space Administration, 1996.

2

Salem, Fathi M. Recurrent Neural Networks: From Simple to Gated Architectures. Springer International Publishing AG, 2023.

3

Salem, Fathi M. Recurrent Neural Networks: From Simple to Gated Architectures. Springer International Publishing AG, 2021.

4

Magic, John, and Mark Magic. Action Recognition Using Python and Recurrent Neural Network. Independently Published, 2019.

5

Yi, Zhang, and K. K. Tan. Convergence Analysis of Recurrent Neural Networks (Network Theory and Applications). Springer, 2003.

6

Bosco, Joish, and Fateh Khan. Stock Market Prediction and Efficiency Analysis Using Recurrent Neural Network. GRIN Verlag GmbH, 2018.

7

V, David. Neural Network Programming with Java: Simple Guide on Neural Networks. CreateSpace Independent Publishing Platform, 2017.

8

Magic, John, and Mark Magic. Action Recognition: Step-By-step Recognizing Actions with Python and Recurrent Neural Network. Independently Published, 2019.

9

Shan, Yunting, John Magic, and Mark Magic. Action Recognition: Step-By-step Recognizing Actions with Python and Recurrent Neural Network. Independently Published, 2019.

10

CNND simulator, cellular neural network embedded in a simple dual computing structure: User's guide version 1.1. Computer and Automation Institute, Hungarian Academy of Sciences, 1989.


Book chapters on the topic "Simple recurrent neural network"

1

Golea, Mostefa, Masahiro Matsuoka, and Yasubumi Sakakibara. "Stochastic simple recurrent neural networks." In Grammatical Inference: Learning Syntax from Sentences. Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/bfb0033360.

2

Rodan, Ali, and Peter Tiňo. "Simple Deterministically Constructed Recurrent Neural Networks." In Intelligent Data Engineering and Automated Learning – IDEAL 2010. Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15381-5_33.

3

Hu, Xiaolin, and Bo Zhang. "Another Simple Recurrent Neural Network for Quadratic and Linear Programming." In Advances in Neural Networks – ISNN 2009. Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-01513-7_13.

4

Castaño, M. A., F. Casacuberta, and E. Vidal. "Simulation of stochastic regular grammars through simple recurrent networks." In New Trends in Neural Computation. Springer Berlin Heidelberg, 1993. http://dx.doi.org/10.1007/3-540-56798-4_149.

5

Sifa, Rafet, Daniel Paurat, Daniel Trabold, and Christian Bauckhage. "Simple Recurrent Neural Networks for Support Vector Machine Training." In Artificial Neural Networks and Machine Learning – ICANN 2018. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01424-7_2.

6

Kobayashi, Naoki, and Minchao Wu. "Neural Network-Guided Synthesis of Recursive List Functions." In Tools and Algorithms for the Construction and Analysis of Systems. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-30823-9_12.

Abstract:
Kobayashi et al. have recently proposed NeuGuS, a framework of neural-network-guided synthesis of logical formulas or simple program fragments, where a neural network is first trained on data, and a logical formula over integers is then constructed by using the weights and biases of the trained network as hints. The previous method was, however, restricted to the class of formulas of quantifier-free linear integer arithmetic. In this paper, we propose a NeuGuS method for the synthesis of recursive predicates over lists definable by using the left fold function. To this end, we design and train a special-purpose recurrent neural network (RNN), and use the weights of the trained RNN to synthesize a recursive predicate. We have implemented the proposed method and conducted preliminary experiments to confirm the effectiveness of the method.
7

Możaryn, Jakub. "NARX Recurrent Neural Network Model of the Graphene-Based Electronic Skin Sensors with Hysteretic Behaviour." In Digital Interaction and Machine Intelligence. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-37649-8_23.

Abstract:
The electronic skin described in the article comprises screen-printed graphene-based sensors intended for robotic applications. A precise mathematical model allowing touch pressure estimation is required during its calibration. The article describes a recurrent neural network model for graphene-based electronic skin calibration, in which the parameters are not homogeneous and the touch force characteristics show visible hysteretic behaviour. The presented method provides a simple alternative to the models known in the literature.
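For reference, a NARX (nonlinear autoregressive with exogenous input) model predicts the next output from lagged outputs and lagged inputs; in this setting one would expect u to be the applied touch force and y the sensor response, though that mapping is our assumption rather than the chapter's stated notation:

```latex
\hat{y}(t) = f\big(y(t-1), \dots, y(t-n_y),\; u(t-1), \dots, u(t-n_u)\big)
```

Feeding past outputs back as inputs is what lets the model represent hysteresis, where the response depends on the loading history and not only on the current input.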
8

Manoonpong, Poramate, Frank Pasemann, Christoph Kolodziejski, and Florentin Wörgötter. "Designing Simple Nonlinear Filters Using Hysteresis of Single Recurrent Neurons for Acoustic Signal Recognition in Robots." In Artificial Neural Networks – ICANN 2010. Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15819-3_50.

9

Saxén, Henrik. "On the Equivalence Between ARMA Models and Simple Recurrent Neural Networks." In Applications of Computer Aided Time Series Modeling. Springer New York, 1997. http://dx.doi.org/10.1007/978-1-4612-2252-1_11.

10

Li, Yongtao, and Shigetoshi Nara. "Solving Complex Control Tasks via Simple Rule(s): Using Chaotic Dynamics in a Recurrent Neural Network Model." In The Relevance of the Time Domain to Neural Network Models. Springer US, 2012. http://dx.doi.org/10.1007/978-1-4614-0724-9_9.


Conference papers on the topic "Simple recurrent neural network"

1

Tan, Zhi Qin, Hao Shan Wong, and Chee Seng Chan. "An Embarrassingly Simple Approach for Intellectual Property Rights Protection on Recurrent Neural Networks." In Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, 2022. http://dx.doi.org/10.18653/v1/2022.aacl-main.8.

2

Leal, Sergio, and Luis Lago. "Recurrent Neural Network based Counter Automata." In ESANN 2024. Ciaco - i6doc.com, 2024. http://dx.doi.org/10.14428/esann/2024.es2024-211.

3

Coelho, Pedro Henrique Gouvêa. "A Simple Recurrent Neural Network Equalizer Structure." In 5. Congresso Brasileiro de Redes Neurais. CNRN, 2016. http://dx.doi.org/10.21528/cbrn2001-106.

4

McCann, P. J., and B. L. Kalman. "Parallel training of simple recurrent neural networks." In Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94). IEEE, 1994. http://dx.doi.org/10.1109/icnn.1994.374157.

5

Nonaka, Hiroki, and Toshimichi Saito. "A Simple Discrete-Time Recurrent Neural Network and its Application." In 2023 International Technical Conference on Circuits/Systems, Computers, and Communications (ITC-CSCC). IEEE, 2023. http://dx.doi.org/10.1109/itc-cscc58803.2023.10212780.

6

Wong, F. C. K., J. W. Minett, and W. S. Y. Wang. "Reassessing Combinatorial Productivity Exhibited by Simple Recurrent Networks in Language Acquisition." In The 2006 IEEE International Joint Conference on Neural Network Proceedings. IEEE, 2006. http://dx.doi.org/10.1109/ijcnn.2006.246624.

7

Takahashi, Kazuhiko. "Remarks on Feedforward-Feedback Controller Using Simple Recurrent Quaternion Neural Network." In 2018 IEEE Conference on Control Technology and Applications (CCTA). IEEE, 2018. http://dx.doi.org/10.1109/ccta.2018.8511593.

8

Hupkes, Dieuwke, and Willem Zuidema. "Visualisation and 'Diagnostic Classifiers' Reveal how Recurrent and Recursive Neural Networks Process Hierarchical Structure (Extended Abstract)." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/796.

Abstract:
In this paper, we investigate how recurrent neural networks can learn and process languages with hierarchical, compositional semantics. To this end, we define the artificial task of processing nested arithmetic expressions, and study whether different types of neural networks can learn to compute their meaning. We find that simple recurrent networks cannot find a generalising solution to this task, but gated recurrent neural networks perform surprisingly well: networks learn to predict the outcome of the arithmetic expressions with high accuracy, although performance deteriorates somewhat with increasing length. We test multiple hypotheses on the information that is encoded and processed by the networks using a method called diagnostic classification. In this method, simple neural classifiers are used to test sequences of predictions about features of the hidden state representations at each time step. Our results indicate that the networks follow a strategy similar to our hypothesised ‘cumulative strategy’, which explains the high accuracy of the network on novel expressions, the generalisation to longer expressions than seen in training, and the mild deterioration with increasing length. This, in turn, shows that diagnostic classifiers can be a useful technique for opening up the black box of neural networks.
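Diagnostic classification as described here reduces to fitting a simple probe on the hidden state at each time step and checking whether a hypothesised variable can be read out. A minimal sketch with scikit-learn follows; the random arrays are stand-ins for states extracted from a trained network and for the hypothesised feature (e.g. the cumulative result of the expression so far).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
T, n, d = 10, 500, 32                     # time steps, sequences, hidden size
hidden = rng.normal(size=(T, n, d))       # hidden[t]: states at step t
target = rng.integers(0, 2, size=(T, n))  # hypothesised feature at step t

# Fit one simple classifier per time step; high readout accuracy suggests the
# hypothesised variable is linearly decodable from the hidden state.
for t in range(T):
    probe = LogisticRegression(max_iter=1000).fit(hidden[t], target[t])
    print(f"step {t}: readout accuracy {probe.score(hidden[t], target[t]):.2f}")
```

With real hidden states, accuracy well above chance at a given step is evidence that the network tracks the hypothesised quantity there.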
9

"IMPLICIT SEQUENCE LEARNING - A Case Study with a 4–2–4 Encoder Simple Recurrent Network." In International Conference on Neural Computation. SciTePress - Science and and Technology Publications, 2010. http://dx.doi.org/10.5220/0003061402790288.

10

Zhao, Yi, Yanyan Shen, and Junjie Yao. "Recurrent Neural Network for Text Classification with Hierarchical Multiscale Dense Connections." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/757.

Abstract:
Text classification is a fundamental task in many Natural Language Processing applications. While recurrent neural networks have achieved great success in performing text classification, they fail to capture the hierarchical structure and long-term semantic dependencies which are common features of text data. Inspired by the advent of the dense connection pattern in advanced convolutional neural networks, we propose a simple yet effective recurrent architecture, named Hierarchical Multiscale Densely Connected RNNs (HM-DenseRNNs), which: 1) enables direct access to the hidden states of all preceding recurrent units via dense connections, and 2) organizes multiple densely connected recurrent units into a hierarchical multi-scale structure, where the layers are updated at different scales. HM-DenseRNNs can effectively capture long-term dependencies among words in long text data, and a dense recurrent block is further introduced to reduce the number of parameters and enhance training efficiency. We evaluate the performance of the proposed architecture on three text datasets, and the results verify the advantages of HM-DenseRNNs over the baseline methods in terms of classification accuracy.
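The dense-connection half of the idea can be sketched in Keras: each recurrent layer reads the concatenation of the input and every preceding layer's output sequence. GRUs stand in for the paper's recurrent units, the hierarchical multiscale update schedule is omitted, and all sizes are illustrative.

```python
from tensorflow.keras import layers, models

inputs = layers.Input(shape=(None, 128))   # (time steps, features); illustrative
feats = [inputs]
for units in (64, 64, 64):
    # Dense connections: concatenate the input and all previous layers' outputs.
    layer_in = feats[0] if len(feats) == 1 else layers.Concatenate()(feats)
    feats.append(layers.GRU(units, return_sequences=True)(layer_in))

pooled = layers.GlobalAveragePooling1D()(feats[-1])
outputs = layers.Dense(5, activation="softmax")(pooled)  # e.g. 5 text classes
model = models.Model(inputs, outputs)
model.summary()
```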

Reports on the topic "Simple recurrent neural network"

1

Merkel, Justin. Quantized Recurrent Neural Network on FPGA. Iowa State University, 2022. http://dx.doi.org/10.31274/cc-20240624-1184.

2

Shao, Lu. Automatic Seizure Detection based on a Convolutional Neural Network-Recurrent Neural Network Model. Iowa State University, 2022. http://dx.doi.org/10.31274/cc-20240624-269.

3

Brabel, Michael J. Basin Sculpting a Hybrid Recurrent Feedforward Neural Network. Defense Technical Information Center, 1998. http://dx.doi.org/10.21236/ada336386.

4

Lee, Tsair-Fwu. Advancing Meta-Analysis of Post-Radiotherapy Nasopharyngeal Carcinoma Complications through Recurrent Neural Network-Enabled Natural Language Processing. INPLASY - International Platform of Registered Systematic Review and Meta-analysis Protocols, 2024. http://dx.doi.org/10.37766/inplasy2024.10.0030.

5

Tarasenko, Andrii O., Yuriy V. Yakimov, and Vladimir N. Soloviev. Convolutional neural networks for image classification. [n.p.], 2020. http://dx.doi.org/10.31812/123456789/3682.

Abstract:
This paper shows the theoretical basis for the creation of convolutional neural networks for image classification and their application in practice. To achieve this goal, the main types of neural networks were considered, starting from the structure of a simple neuron and ending with the convolutional multilayer network necessary for the solution of this problem. The paper shows how the training data are structured, the training cycle of the network, and the calculation of recognition errors at the training and verification stages. At the end of the work, the results of network training, the calculated recognition error, and the training accuracy are presented.
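A minimal convolutional classifier of the kind the paper builds up to might look as follows in Keras; the input size, filter counts, and ten-class output are illustrative assumptions rather than the paper's exact architecture.

```python
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(32, 32, 3)),            # small RGB images, illustrative
    layers.Conv2D(32, 3, activation="relu"),    # convolutions extract local features
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),     # one probability per class
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```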
6

Bodruzzaman, M., and M. A. Essawy. Iterative prediction of chaotic time series using a recurrent neural network. Quarterly progress report, January 1, 1995--March 31, 1995. Office of Scientific and Technical Information (OSTI), 1996. http://dx.doi.org/10.2172/283610.

7

Mohanty, Subhasish, and Joseph Listwan. Development of Digital Twin Predictive Model for PWR Components: Updates on Multi Times Series Temperature Prediction Using Recurrent Neural Network, DMW Fatigue Tests, System Level Thermal-Mechanical-Stress Analysis. Office of Scientific and Technical Information (OSTI), 2021. http://dx.doi.org/10.2172/1822853.

8

Engel, Bernard, Yael Edan, James Simon, Hanoch Pasternak, and Shimon Edelman. Neural Networks for Quality Sorting of Agricultural Produce. United States Department of Agriculture, 1996. http://dx.doi.org/10.32747/1996.7613033.bard.

Abstract:
The objectives of this project were to develop procedures and models, based on neural networks, for quality sorting of agricultural produce. Two research teams, one at Purdue University and the other in Israel, coordinated their research efforts on different aspects of each objective utilizing both melons and tomatoes as case studies. At Purdue: An expert system was developed to measure variances in human grading. Data were acquired from eight sensors: vision, two firmness sensors (destructive and nondestructive), chlorophyll from fluorescence, color sensor, electronic sniffer for odor detection, refractometer and a scale (mass). Data were analyzed and provided input for five classification models. Chlorophyll from fluorescence was found to give the best estimation for ripeness stage while the combination of machine vision and firmness from impact performed best for quality sorting. A new algorithm was developed to estimate and minimize training size for supervised classification. A new criterion was established to choose a training set such that a recurrent auto-associative memory neural network is stabilized. Moreover, this method provides for rapid and accurate updating of the classifier over growing seasons, production environments and cultivars. Different classification approaches (parametric and non-parametric) for grading were examined. Statistical methods were found to be as accurate as neural networks in grading. Classification models by voting did not enhance the classification significantly. A hybrid model that incorporated heuristic rules and either a numerical classifier or neural network was found to be superior in classification accuracy, with half the processing required by the numerical classifier or neural network alone. In Israel: A multi-sensing approach utilizing non-destructive sensors was developed. Shape, color, stem identification, surface defects and bruises were measured using a color image processing system. Flavor parameters (sugar, acidity, volatiles) and ripeness were measured using a near-infrared system and an electronic sniffer. Mechanical properties were measured using three sensors: drop impact, resonance frequency and cyclic deformation. Classification algorithms for quality sorting of fruit based on multi-sensory data were developed and implemented. The algorithms included a dynamic artificial neural network, a back propagation neural network and multiple linear regression. Results indicated that classification based on multiple sensors may be applied in real-time sorting and can improve overall classification. Advanced image processing algorithms were developed for shape determination, bruise and stem identification and general color and color homogeneity. An unsupervised method was developed to extract necessary vision features. The primary advantage of the algorithms developed is their ability to learn to determine the visual quality of almost any fruit or vegetable with no need for specific modification and no a-priori knowledge. Moreover, since there is no assumption as to the type of blemish to be characterized, the algorithm is capable of distinguishing between stems and bruises. This enables sorting of fruit without knowing the fruits' orientation. A new algorithm for on-line clustering of data was developed. The algorithm's adaptability is designed to overcome some of the difficulties encountered when incrementally clustering sparse data and preserves information even with memory constraints.

Large quantities of data (many images) of high dimensionality (due to multiple sensors) and new information arriving incrementally (a function of the temporal dynamics of any natural process) can now be processed. Furthermore, since the learning is done on-line, it can be implemented in real-time. The methodology developed was tested to determine external quality of tomatoes based on visual information. An improved model for color sorting which is stable and does not require recalibration for each season was developed for color determination. Excellent classification results were obtained for both color and firmness classification. Results indicated that maturity classification can be obtained using a drop-impact and a vision sensor in order to predict the storability and marketing of harvested fruits. In conclusion: We have been able to define quantitatively the critical parameters in the quality sorting and grading of both fresh market cantaloupes and tomatoes. We have been able to accomplish this using nondestructive measurements and in a manner consistent with expert human grading and in accordance with market acceptance. This research constructed and used large databases of both commodities, for comparative evaluation and optimization of expert system, statistical and/or neural network models. The models developed in this research were successfully tested, and should be applicable to a wide range of other fruits and vegetables. These findings are valuable for the development of on-line grading and sorting of agricultural produce through the incorporation of multiple measurement inputs that rapidly define quality in an automated manner, and in a manner consistent with the human graders and inspectors.
9

Bouchouev, Ilia, and Wu-Yen (Jonathan) Sun. Myths and Mysteries About Speculation in the Oil Market. King Abdullah Petroleum Studies and Research Center, 2025. https://doi.org/10.30573/ks--2025-dp13.

Abstract:
The article provides unique quantitative insights into the highly secretive and poorly understood world of speculation in the oil market. To demystify the behavior of speculators, we look at the problem from five different angles. First, we explain how the presence of a large over-the-counter (OTC) oil derivatives market leads to mischaracterization of traditional hedgers and speculators. We then explain what makes oil speculation special and different from speculation in many other commodities. Consequently, we identify the winners and the losers in the oil futures market. To model the behavior of the winners, which we associate with fast-moving quantitative hedge funds, we develop a novel framework based on a simple neural-network algorithm. We conclude by analyzing a popular investment strategy of following the winners, or the hedge funds' flows.
10

Panta, Manisha, Padam Thapa, Md Hoque, et al. Application of deep learning for segmenting seepages in levee systems. Engineer Research and Development Center (U.S.), 2024. http://dx.doi.org/10.21079/11681/49453.

Abstract:
Seepage is a typical hydraulic factor that can initiate the breaching process in a levee system. If not identified and treated in time, seepage can be a severe problem for levees, weakening the levee structure and eventually leading to collapse. Therefore, it is essential to remain vigilant, with regular monitoring procedures to identify seepages throughout these levee systems and perform adequate repairs to limit potential threats from unforeseen levee failures. This paper introduces a fully convolutional neural network to identify and segment seepage from images of levee systems. To the best of our knowledge, this is the first work in this domain. Applying deep learning techniques for semantic segmentation tasks in real-world scenarios has its own challenges, especially the difficulty for models to effectively learn from complex backgrounds while focusing on simpler objects of interest. This challenge is particularly evident in the task of detecting seepages in levee systems, where the fault is relatively simple compared to the complex and varied background. We addressed this problem by introducing negative images and a controlled transfer learning approach for accurate seepage segmentation in levee systems.