To see the other types of publications on this topic, follow the link: Recurrent neural networks.

Dissertations / Theses on the topic 'Recurrent neural networks'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Recurrent neural networks.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Černík, Tomáš. "Neuronové sítě s proměnnou topologií." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2016. http://www.nusl.cz/ntk/nusl-255440.

Full text
Abstract:
This master's thesis deals with constructive neural networks. The first part describes neural networks and the corresponding mathematical models. It then presents basic algorithms for training neural networks and describes basic constructive algorithms and their modifications. The second part deals with the implementation details of selected algorithms and compares them; a further comparison with the backpropagation algorithm is also provided.
APA, Harvard, Vancouver, ISO, and other styles
2

Nodžák, Petr. "Automatické rozpoznání akordů pomocí hlubokých neuronových sítí." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2020. http://www.nusl.cz/ntk/nusl-433596.

Full text
Abstract:
This work deals with automatic chord recognition using neural networks. The problem was separated into two subproblems: the first aims at experimentally finding the most suitable solution for an acoustic model, and the second at experimentally finding the most suitable solution for a language model. The problem was solved by an iterative method: first a suboptimal solution of the first subproblem was found, and then of the second. A total of 19 acoustic and 12 language models were made. Ten training datasets were created for the acoustic models and three for the language models. In total, over
APA, Harvard, Vancouver, ISO, and other styles
3

Żbikowski, Rafal Waclaw. "Recurrent neural networks some control aspects /." Connect to electronic version, 1994. http://hdl.handle.net/1905/180.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Ahamed, Woakil Uddin. "Quantum recurrent neural networks for filtering." Thesis, University of Hull, 2009. http://hydra.hull.ac.uk/resources/hull:2411.

Full text
Abstract:
The essence of stochastic filtering is to compute the time-varying probability density function (pdf) for the measurements of the observed system. In this thesis, a filter is designed based on the principles of quantum mechanics, where the Schrödinger wave equation (SWE) plays the key part. This equation is transformed to fit into the neural network architecture. Each neuron in the network mediates a spatio-temporal field with a unified quantum activation function that aggregates the pdf information of the observed signals. The activation function is the result of the solution of the SWE. The incorpor
APA, Harvard, Vancouver, ISO, and other styles
5

Zbikowski, Rafal Waclaw. "Recurrent neural networks : some control aspects." Thesis, University of Glasgow, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.390233.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Jacobsson, Henrik. "Rule extraction from recurrent neural networks." Thesis, University of Sheffield, 2006. http://etheses.whiterose.ac.uk/6081/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Ayoub, Issa. "Multimodal Affective Computing Using Temporal Convolutional Neural Network and Deep Convolutional Neural Networks." Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/39337.

Full text
Abstract:
Affective computing has gained significant attention from researchers in the last decade due to the wide variety of applications that can benefit from this technology. Often, researchers describe affect using emotional dimensions such as arousal and valence. Valence refers to the spectrum of negative to positive emotions while arousal determines the level of excitement. Describing emotions through continuous dimensions (e.g. valence and arousal) allows us to encode subtle and complex affects as opposed to discrete emotions, such as the basic six emotions: happy, anger, fear, disgust, sad and n
APA, Harvard, Vancouver, ISO, and other styles
8

Chan, Heather Y. "Gene Network Inference and Expression Prediction Using Recurrent Neural Networks and Evolutionary Algorithms." BYU ScholarsArchive, 2010. https://scholarsarchive.byu.edu/etd/2648.

Full text
Abstract:
We demonstrate the success of recurrent neural networks in gene network inference and expression prediction using a hybrid of particle swarm optimization and differential evolution to overcome the classic obstacle of local minima in training recurrent neural networks. We also provide an improved validation framework for the evaluation of genetic network modeling systems that will result in better generalization and long-term prediction capability. Success in the modeling of gene regulation and prediction of gene expression will lead to more rapid discovery and development of therapeutic medici
APA, Harvard, Vancouver, ISO, and other styles
9

Bonato, Tommaso. "Time Series Predictions With Recurrent Neural Networks." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2018.

Find full text
Abstract:
The main goal of this thesis is to study how machine learning algorithms, and in particular LSTM (Long Short Term Memory) neural networks, can be used to predict the future values of a regular time series such as, for example, the sine and cosine functions. A time series is defined as a sequence of observations s_t ordered in time. We also try to apply the same principles to predict the values of a time series produced from the sales data of a cosmetic product over a period of three years.
APA, Harvard, Vancouver, ISO, and other styles
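The forecasting setup described in the abstract above rests on one preprocessing step: turning a regular series such as a sine wave into supervised (window, next value) pairs before any LSTM sees it. A minimal sketch of that step, with all names and sizes being illustrative assumptions rather than details from the thesis:

```python
import numpy as np

def make_windows(series, window):
    """Return (X, y): each row of X holds `window` consecutive
    observations, and y holds the observation that follows them."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X, y

# A regular series: a sampled sine wave.
t = np.linspace(0, 8 * np.pi, 400)
series = np.sin(t)

X, y = make_windows(series, window=20)

# Each training example pairs 20 past values with the next one.
print(X.shape, y.shape)  # (380, 20) (380,)
```

Any sequence model (LSTM or otherwise) can then be fit to map each row of `X` to its target in `y`.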
10

Silfa, Franyell. "Energy-efficient architectures for recurrent neural networks." Doctoral thesis, Universitat Politècnica de Catalunya, 2021. http://hdl.handle.net/10803/671448.

Full text
Abstract:
Deep Learning algorithms have been remarkably successful in applications such as Automatic Speech Recognition and Machine Translation. Thus, these kinds of applications are ubiquitous in our lives and are found in a plethora of devices. These algorithms are composed of Deep Neural Networks (DNNs), such as Convolutional Neural Networks and Recurrent Neural Networks (RNNs), which have a large number of parameters and require a large amount of computations. Hence, the evaluation of DNNs is challenging due to their large memory and power requirements. RNNs are employed to solve sequence to sequ
APA, Harvard, Vancouver, ISO, and other styles
11

Brax, Christoffer. "Recurrent neural networks for time-series prediction." Thesis, University of Skövde, Department of Computer Science, 2000. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-480.

Full text
Abstract:
Recurrent neural networks have been used for time-series prediction with good results. In this dissertation recurrent neural networks are compared with time-delayed feed-forward networks, feed-forward networks and linear regression models on a prediction task. The data used in all experiments is real-world sales data containing two kinds of segments: campaign segments and non-campaign segments. The task is to make predictions of sales under campaigns. It is evaluated whether more accurate predictions can be made when only using the campaign segments of the data. Throughout the entire proje
APA, Harvard, Vancouver, ISO, and other styles
12

Rabi, Gihad. "Visual speech recognition by recurrent neural networks." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape16/PQDD_0010/MQ36169.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Miller, Paul Ian. "Recurrent neural networks and adaptive motor control." Thesis, University of Stirling, 1997. http://hdl.handle.net/1893/21520.

Full text
Abstract:
This thesis is concerned with the use of neural networks for motor control tasks. The main goal of the thesis is to investigate ways in which the biological notions of motor programs and Central Pattern Generators (CPGs) may be implemented in a neural network framework. Biological CPGs can be seen as components within a larger control scheme, which is basically modular in design. In this thesis, these ideas are investigated through the use of modular recurrent networks, which are used in a variety of control tasks. The first experimental chapter deals with learning in recurrent networks, and i
APA, Harvard, Vancouver, ISO, and other styles
14

Graves, Alex. "Supervised sequence labelling with recurrent neural networks." kostenfrei, 2008. http://mediatum2.ub.tum.de/doc/673554/673554.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Xie, Xiaohui 1972. "Dynamics and learning in recurrent neural networks." Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/8393.

Full text
Abstract:
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Brain and Cognitive Sciences, 2002. Includes bibliographical references (p. 141-151). This thesis is a study of dynamics and learning in recurrent neural networks. Many computations of neural systems are carried out through a network of a large number of neurons. With massive feedback connections among these neurons, a study of its dynamics is necessary in order to understand the network's function. In this thesis, I aim at studying several recurrent network models and relating the dynamics with the networks' computation. Fo
APA, Harvard, Vancouver, ISO, and other styles
16

Besharat, Pour Shiva. "Hierarchical sales forecasting using Recurrent Neural Networks." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-290892.

Full text
Abstract:
Sales forecasting equips businesses with the essential basis for planning future investments, controlling costs, and production. This research is in cooperation with a property development company for the purpose of improving the accuracy of manual sales forecasting. The objective is to investigate the effects of using the underlying factors that affect the individual sales of the company in forecasting the company’s income. One approach uses an aggregation of the estimates of the individual sales to approximate the company’s income. This approach uses the underlying hierarchical factors of th
APA, Harvard, Vancouver, ISO, and other styles
17

Alam, Samiul. "Recurrent neural networks in electricity load forecasting." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-233254.

Full text
Abstract:
In this thesis two main studies are conducted to compare the predictive capabilities of feed-forward neural networks (FFNN) and long short-term memory networks (LSTM) in electricity load forecasting. The first study compares univariate networks using past electricity load, as well as multivariate networks using past electricity load and air temperature, in day-ahead load forecasting using varying lookback periods and sparsity of past observations. The second study compares FFNNs and LSTMs of different complexities (i.e. network sizes) when restrictions imposed by limitations of the real world
APA, Harvard, Vancouver, ISO, and other styles
18

Tegnér, Gustaf. "Recurrent neural networks for financial asset forecasting." Thesis, KTH, Matematisk statistik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-229924.

Full text
Abstract:
The application of neural networks in finance has found renewed interest in the past few years. Neural networks have a proven capability of modeling non-linear relationships and have been proven widely successful in domains such as image and speech recognition. These favorable properties of the Neural Network make them an alluring choice of model when studying the financial markets. This thesis is concerned with investigating the use of recurrent neural networks for predicting future financial asset price movements on a set of futures contracts. To aid our research, we compare them to a set of
APA, Harvard, Vancouver, ISO, and other styles
19

Perumal, Subramoniam. "Stability and Switchability in Recurrent Neural Networks." University of Cincinnati / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1227194814.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Ljungehed, Jesper. "Predicting Customer Churn Using Recurrent Neural Networks." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-210670.

Full text
Abstract:
Churn prediction is used to identify customers that are becoming less loyal and is an important tool for companies that want to stay competitive in a rapidly growing market. In retail, a dynamic definition of churn is needed to identify churners correctly. Customer Lifetime Value (CLV) is the monetary value of a customer relationship. No change in CLV for a given customer indicates a decrease in loyalty. This thesis proposes a novel approach to churn prediction. The proposed model uses a Recurrent Neural Network to identify churners based on Customer Lifetime Value time series regression. The
APA, Harvard, Vancouver, ISO, and other styles
21

Potter, Chris, Kurt Kosbar, and Adam Panagos. "MIMO Channel Prediction Using Recurrent Neural Networks." International Foundation for Telemetering, 2008. http://hdl.handle.net/10150/606193.

Full text
Abstract:
ITC/USA 2008 Conference Proceedings / The Forty-Fourth Annual International Telemetering Conference and Technical Exhibition / October 27-30, 2008 / Town and Country Resort & Convention Center, San Diego, California. Adaptive modulation is a communication technique capable of maximizing throughput while guaranteeing a fixed symbol error rate (SER). However, this technique requires instantaneous channel state information at the transmitter. This can be obtained by predicting channel states at the receiver and feeding them back to the transmitter. Existing algorithms used to predict single-inp
APA, Harvard, Vancouver, ISO, and other styles
22

Berlati, Alessandro. "Ambiguity in Recurrent Models: Predicting Multiple Hypotheses with Recurrent Neural Networks." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/16611/.

Full text
Abstract:
Multiple Hypothesis Prediction (MHP) models have been introduced to deal with uncertainty in feedforward neural networks; in particular, it has been shown how to easily convert a standard single-prediction neural network into one able to show many feasible outcomes. Ambiguity, however, is also present in problems where feedback models are needed, such as sequence generation and time series classification. In our work, we propose an extension of MHP to Recurrent Neural Networks (RNNs), especially those consisting of Long Short-Term Memory units. We test the resulting models on both regression an
APA, Harvard, Vancouver, ISO, and other styles
23

Le, Ngan Thi Hoang. "Contextual Recurrent Level Set Networks and Recurrent Residual Networks for Semantic Labeling." Research Showcase @ CMU, 2018. http://repository.cmu.edu/dissertations/1166.

Full text
Abstract:
Semantic labeling is becoming more and more popular among researchers in computer vision and machine learning. Many applications, such as autonomous driving, tracking, indoor navigation, augmented reality systems, semantic searching, medical imaging are on the rise, requiring more accurate and efficient segmentation mechanisms. In recent years, deep learning approaches based on Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) have dramatically emerged as the dominant paradigm for solving many problems in computer vision and machine learning. The main focus of this thes
APA, Harvard, Vancouver, ISO, and other styles
24

Sarti, Paolo. "Embeddings for text classification with recurrent neural networks." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018.

Find full text
Abstract:
The importance of automatic methods for classifying and extracting information from text has grown significantly in recent years, owing to the ever-increasing production of this kind of data, especially through web platforms. This has led to the development of new algorithms for analysing unstructured text. Embedding techniques, which map words or variable-length pieces of text to fixed-size vectors while preserving semantic similarity relations, have been a major advance for the field of Natural Language Processing. Moreover, advance
APA, Harvard, Vancouver, ISO, and other styles
25

Gers, Félix. "Long short-term memory in recurrent neural networks /." [S.l.] : [s.n.], 2001. http://library.epfl.ch/theses/?nr=2366.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Tino, Peter, and Georg Dorffner. "Recurrent neural networks with iterated function systems dynamics." SFB Adaptive Information Systems and Modelling in Economics and Management Science, WU Vienna University of Economics and Business, 1998. http://epub.wu.ac.at/948/1/document.pdf.

Full text
Abstract:
We suggest a recurrent neural network (RNN) model with a recurrent part corresponding to iterated function systems (IFS) introduced by Barnsley [1] as a fractal image compression mechanism. The key ideas are that 1) in our model we avoid learning the RNN state part by having non-trainable connections between the context and recurrent layers (this makes the training process less problematic and faster), and 2) the RNN state part codes the information processing states in the symbolic input stream in a well-organized and intuitively appealing way. We show that there is a direct correspondence between
APA, Harvard, Vancouver, ISO, and other styles
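The non-trainable recurrent dynamics described in this abstract can be sketched in a few lines: each input symbol applies a fixed contractive affine map, so the state after a symbol sequence encodes its history geometrically, with sequences sharing a recent suffix landing close together. The particular maps below (contraction 0.5 toward a per-symbol corner) are an illustrative assumption, not the paper's exact construction.

```python
import numpy as np

# One fixed target point per input symbol (non-trainable).
corners = {"a": np.array([0.0, 0.0]),
           "b": np.array([1.0, 0.0]),
           "c": np.array([0.5, 1.0])}

def ifs_state(sequence, k=0.5):
    """Drive the state through one fixed contractive map per symbol."""
    x = np.array([0.5, 0.5])                 # initial state
    for sym in sequence:
        x = k * x + (1 - k) * corners[sym]   # fixed affine contraction
    return x

# States cluster by recent history: "aab" and "bab" share the suffix
# "ab" and end up closer together than "aab" and "aba" do.
d_same_suffix = np.linalg.norm(ifs_state("aab") - ifs_state("bab"))
d_diff_suffix = np.linalg.norm(ifs_state("aab") - ifs_state("aba"))
print(d_same_suffix < d_diff_suffix)  # True
```

Because the maps are contractions, no learning of the state dynamics is needed; only a readout on top of the state would be trained.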
27

Steinberger, Thomas, and Lucas Zinner. "Complete controllability of discrete-time recurrent neural networks." SFB Adaptive Information Systems and Modelling in Economics and Management Science, WU Vienna University of Economics and Business, 1999. http://epub.wu.ac.at/440/1/document.pdf.

Full text
Abstract:
This paper presents a characterization of complete controllability for the class of discrete-time recurrent neural networks. We prove that complete controllability holds if and only if the rank of the control matrix equals the state space dimension. (author's abstract)
Series: Report Series SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
APA, Harvard, Vancouver, ISO, and other styles
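The rank criterion stated in this abstract is easy to check numerically. A toy illustration, assuming the network is written as x_{t+1} = sigma(A x_t + B u_t) with control matrix B and state dimension n (the matrices below are made up, not from the paper):

```python
import numpy as np

n, m = 3, 3                                   # state and input dimensions
rng = np.random.default_rng(0)

# A generic control matrix is full rank with probability one.
B_full = rng.standard_normal((n, m))

# A deficient control matrix: only the first state is directly driven.
B_deficient = np.zeros((n, m))
B_deficient[0, 0] = 1.0

# The stated criterion: completely controllable iff rank(B) == n.
controllable = np.linalg.matrix_rank(B_full) == n
not_controllable = np.linalg.matrix_rank(B_deficient) == n
print(controllable, not_controllable)  # True False
```

Note how this differs from the linear-systems Kalman test, which involves powers of A; per the abstract, for this recurrent class the rank of B alone decides.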
28

Mastrogiuseppe, Francesca. "From dynamics to computations in recurrent neural networks." Thesis, Paris Sciences et Lettres (ComUE), 2017. http://www.theses.fr/2017PSLEE048/document.

Full text
Abstract:
The cerebral cortex of mammals consists of large and complex networks of neurons. The task of these cell assemblies is to encode and process, as precisely as possible, the sensory information coming from our external environment. Surprisingly, electrophysiological recordings performed on behaving animals have shown that cortical activity is highly irregular. The temporal patterns of activity as well as the mean firing rates of the cells vary considerably from one experiment to the next, despite experim
APA, Harvard, Vancouver, ISO, and other styles
29

Shao, Yuanlong. "Learning Sparse Recurrent Neural Networks in Language Modeling." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1398942373.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Kolen, John F. "Exploring the computational capabilities of recurrent neural networks /." The Ohio State University, 1994. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487853913100192.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Tekin, Mim Kemal. "Vehicle Path Prediction Using Recurrent Neural Network." Thesis, Linköpings universitet, Statistik och maskininlärning, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-166134.

Full text
Abstract:
Vehicle Path Prediction can be used to support Advanced Driver Assistance Systems (ADAS) that covers different technologies like Autonomous Braking System, Adaptive Cruise Control, etc. In this thesis, the vehicle’s future path, parameterized as 5 coordinates along the path, is predicted by using only visual data collected by a front vision sensor. This approach provides cheaper application opportunities without using different sensors. The predictions are done by deep convolutional neural networks (CNN) and the goal of the project is to use recurrent neural networks (RNN) and to investigate t
APA, Harvard, Vancouver, ISO, and other styles
32

Mehta, Manish P. "Prediction of manufacturing operations sequence using recurrent neural networks." Ohio : Ohio University, 1997. http://www.ohiolink.edu/etd/view.cgi?ohiou1177089656.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Vartak, Aniket Arun. "GAUSS-NEWTON BASED LEARNING FOR FULLY RECURRENT NEURAL NETWORKS." Master's thesis, University of Central Florida, 2004. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4429.

Full text
Abstract:
The thesis discusses a novel off-line and on-line learning approach for Fully Recurrent Neural Networks (FRNNs). The most popular algorithm for training FRNNs, the Real Time Recurrent Learning (RTRL) algorithm, employs the gradient descent technique for finding the optimum weight vectors in the recurrent neural network. Within the framework of the research presented, a new off-line and on-line variation of RTRL is presented, that is based on the Gauss-Newton method. The method itself is an approximate Newton's method tailored to the specific optimization problem, (non-linear least squares), wh
APA, Harvard, Vancouver, ISO, and other styles
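The optimization idea this abstract contrasts with RTRL's gradient descent can be shown on a tiny nonlinear least-squares problem. The model below (fitting y = exp(a*x)) is a stand-in for the recurrent network, chosen only to make the Gauss-Newton step concrete:

```python
import numpy as np

def residuals(a, x, y):
    return np.exp(a * x) - y

def jacobian(a, x):
    # d(residual)/d(a), as a column vector (one parameter here).
    return (x * np.exp(a * x)).reshape(-1, 1)

x = np.linspace(0.0, 1.0, 20)
y = np.exp(0.7 * x)            # clean data generated with a_true = 0.7

a = 0.0                         # poor initial guess
for _ in range(5):
    r = residuals(a, x, y)
    J = jacobian(a, x)
    # Gauss-Newton: solve (J^T J) delta = -J^T r, i.e. an approximate
    # Newton step for least squares, instead of a raw gradient step.
    delta = np.linalg.solve(J.T @ J, -J.T @ r)
    a += delta.item()

print(a)  # converges toward 0.7
```

On zero-residual problems like this one, Gauss-Newton converges far faster than gradient descent with a fixed step, which is the practical motivation the abstract gives for replacing RTRL's update.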
34

Senior, Andrew William. "Off-line cursive handwriting recognition using recurrent neural networks." Thesis, University of Cambridge, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.338024.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Fors, Johansson Christoffer. "Arrival Time Predictions for Buses using Recurrent Neural Networks." Thesis, Linköpings universitet, Artificiell intelligens och integrerade datorsystem, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-165133.

Full text
Abstract:
In this thesis, two different types of bus passengers are identified. These two types, namely current passengers and passengers-to-be have different needs in terms of arrival time predictions. A set of machine learning models based on recurrent neural networks and long short-term memory units were developed to meet these needs. Furthermore, bus data from the public transport in Östergötland county, Sweden, were collected and used for training new machine learning models. These new models are compared with the current prediction system that is used today to provide passengers with arrival time
APA, Harvard, Vancouver, ISO, and other styles
36

Alvarez, Mouravskaia Kevin. "Metaphor identification for Spanish sentences using recurrent neural networks." Master's thesis, Pontificia Universidad Católica del Perú, 2019. http://hdl.handle.net/20.500.12404/16531.

Full text
Abstract:
Metaphors are an important literary figure found in books and in daily use. Their identification is nowadays an essential task for Natural Language Processing (NLP), but the dependence on context and the lack of corpora in other languages make it a bottleneck for tasks such as translation or interpretation of texts. We present a classification model using recurrent neural networks for metaphor identification in Spanish sentences. We tested our model and its variants on a new corpus in Spanish and compared it with the current baseline using an English corpus. Our best model reports an F-sco
APA, Harvard, Vancouver, ISO, and other styles
37

Chen, Jacob. "Embodied perception during walking using Deep Recurrent Neural Networks." Thesis, University of British Columbia, 2017. http://hdl.handle.net/2429/62171.

Full text
Abstract:
Movements such as walking require knowledge of the environment in order to be robust. This knowledge can be gleaned via embodied perception. While information about the upcoming terrain such as compliance, friction, or slope may be difficult to directly estimate, using the walking motion itself allows for these properties to be implicitly observed over time from the stream of movement data. However, the relationship between a parameter such as ground compliance and the movement data may be complex and difficult to discover. In this thesis, we demonstrate the use of a Deep LSTM Network to estim
APA, Harvard, Vancouver, ISO, and other styles
38

Jansson, Anton. "Predicting trajectories of golf balls using recurrent neural networks." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-210552.

Full text
Abstract:
This thesis is concerned with the problem of predicting the remaining part of the trajectory of a golf ball as it travels through the air where only the three-dimensional position of the ball is captured. The approach taken to solve this problem relied on recurrent neural networks in the form of the long short-term memory networks (LSTM). The motivation behind this choice was that this type of networks had led to state-of-the-art performance for similar problems such as predicting the trajectory of pedestrians. The results show that using LSTMs led to an average reduction of 36.6 % of the erro
APA, Harvard, Vancouver, ISO, and other styles
39

Molter, Colin. "Storing information through complex dynamics in recurrent neural networks." Doctoral thesis, Universite Libre de Bruxelles, 2005. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/211039.

Full text
Abstract:
The neural net computer simulations which will be presented here are based on the acceptance of a set of assumptions that for the last twenty years have been expressed in the fields of information processing, neurophysiology and cognitive sciences. First of all, that neural networks and their dynamical behavior in terms of attractors are the natural way adopted by the brain to encode information. Any information item to be stored in the neural net should be coded in some way or another in one of the dynamical attractors of the brain and retrieved by stimulating the net so as to trap its dynamics in
APA, Harvard, Vancouver, ISO, and other styles
40

Salihoglu, Utku. "Toward a brain-like memory with recurrent neural networks." Doctoral thesis, Universite Libre de Bruxelles, 2009. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210221.

Full text
Abstract:
For the last twenty years, several assumptions have been expressed in the fields of information processing, neurophysiology and cognitive sciences. First, that neural networks and their dynamical behavior in terms of attractors are the natural way adopted by the brain to encode information. Any information item to be stored in the neural network should be coded in some way or another in one of the dynamical attractors of the brain, and retrieved by stimulating the network to trap its dynamics in the desired item's basin of attraction. The second view shared by neural network researchers is to base
APA, Harvard, Vancouver, ISO, and other styles
41

Pan, YaDung. "Fuzzy adaptive recurrent counterpropagation neural networks: A neural network architecture for qualitative modeling and real-time simulation of dynamic processes." Diss., The University of Arizona, 1995. http://hdl.handle.net/10150/187101.

Full text
Abstract:
In this dissertation, a new artificial neural network (ANN) architecture called fuzzy adaptive recurrent counterpropagation neural network (FARCNN) is presented. FARCNNs can be directly synthesized from a set of training data, making system behavioral learning extremely fast. FARCNNs can be applied directly and effectively to model both static and dynamic system behavior based on observed input/output behavioral patterns alone without need of knowing anything about the internal structure of the system under study. The FARCNN architecture is derived from the methodology of fuzzy inductive reaso
APA, Harvard, Vancouver, ISO, and other styles
42

Oskarsson, Gustav. "Aktieprediktion med neurala nätverk : En jämförelse av statistiska modeller, neurala nätverk och kombinerade neurala nätverk." Thesis, Blekinge Tekniska Högskola, Institutionen för industriell ekonomi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-18214.

Full text
Abstract:
This study is about prediction of the stock market through a comparison of neural networks and statistical models, and it aims to improve the accuracy of stock prediction. Much of the research on predicting shares deals with statistical models, but also with neural networks, mainly the RNN and CNN types. No research has been done on how these neural networks can be combined, which is what this study aims to do. Tests are made on statistical models, neural networks and combined neural networks to predict stocks at minute level. The result shows that a combination of two neural netwo
APA, Harvard, Vancouver, ISO, and other styles
43

Wen, Tsung-Hsien. "Recurrent neural network language generation for dialogue systems." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/275648.

Full text
Abstract:
Language is the principal medium for ideas, while dialogue is the most natural and effective way for humans to interact with and access information from machines. Natural language generation (NLG) is a critical component of spoken dialogue and it has a significant impact on usability and perceived quality. Many commonly used NLG systems employ rules and heuristics, which tend to generate inflexible and stylised responses without the natural variation of human language. However, the frequent repetition of identical output forms can quickly make dialogue become tedious for most real-world users.
APA, Harvard, Vancouver, ISO, and other styles
44

Rodriguez, Paul Fabian. "Mathematical foundations of simple recurrent networks /." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 1999. http://wwwlib.umi.com/cr/ucsd/fullcit?p9935464.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Putchala, Manoj Kumar. "Deep Learning Approach for Intrusion Detection System (IDS) in the Internet of Things (IoT) Network using Gated Recurrent Neural Networks (GRU)." Wright State University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=wright1503680452498351.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Otte, Sebastian [Verfasser]. "Recurrent Neural Networks for Sequential Pattern Recognition Applications / Sebastian Otte." München : Verlag Dr. Hut, 2017. http://d-nb.info/1149579382/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Do, Ngoc. "Použití rekurentních neuronových sítí pro automatické rozpoznávání řečníka, jazyka a pohlaví." Master's thesis, 2016. http://www.nusl.cz/ntk/nusl-346774.

Full text
Abstract:
Title: Neural networks for automatic speaker, language, and sex identification. Author: Bich-Ngoc Do. Department: Institute of Formal and Applied Linguistics. Supervisor: Ing. Mgr. Filip Jurek, Ph.D., Institute of Formal and Applied Linguistics, and Dr. Marco Wiering, Faculty of Mathematics and Natural Sciences, University of Groningen. Abstract: Speaker recognition is a challenging task and has applications in many areas, such as access control or forensic science. On the other hand, in recent years, the deep learning paradigm and its branch, deep neural networks, have emerged as powerful machine lea
APA, Harvard, Vancouver, ISO, and other styles
48

Sutskever, Ilya. "Training Recurrent Neural Networks." Thesis, 2013. http://hdl.handle.net/1807/36012.

Full text
Abstract:
Recurrent Neural Networks (RNNs) are powerful sequence models that were believed to be difficult to train, and as a result they were rarely used in machine learning applications. This thesis presents methods that overcome the difficulty of training RNNs, and applications of RNNs to challenging problems. We first describe a new probabilistic sequence model that combines Restricted Boltzmann Machines and RNNs. The new model is more powerful than similar models while being less difficult to train. Next, we present a new variant of the Hessian-free (HF) optimizer and show that it can train RNNs
APA, Harvard, Vancouver, ISO, and other styles
49

"Locally connected recurrent neural networks." Chinese University of Hong Kong, 1993. http://library.cuhk.edu.hk/record=b5887724.

Full text
Abstract:
by Evan, Fung-yu Young. Thesis (M.Phil.)--Chinese University of Hong Kong, 1993. Includes bibliographical references (leaves 161-166). Contents: List of Figures; List of Tables; List of Graphs; Abstract; Part I: Learning Algorithms; Chapter 1: Representing Time in Connectionist Models; 1.1 Introduction; 1.2 Temporal Sequences; 1.2.1 Recognition Tasks; 1.2.2 Reproduction Tasks; 1.2.3 Generation Tasks; Chap
APA, Harvard, Vancouver, ISO, and other styles
50

Hammer, Barbara. "Learning with Recurrent Neural Networks." Doctoral thesis, 2000. https://repositorium.ub.uni-osnabrueck.de/handle/urn:nbn:de:gbv:700-2000091564.

Full text
Abstract:
This thesis examines so called folding neural networks as a mechanism for machine learning. Folding networks form a generalization of partial recurrent neural networks such that they are able to deal with tree structured inputs instead of simple linear lists. In particular, they can handle classical formulas - they were proposed originally for this purpose. After a short explanation of the neural architecture we show that folding networks are well suited as a learning mechanism in principle. This includes three parts: the proof of their universal approximation ability, the aspect of informatio
APA, Harvard, Vancouver, ISO, and other styles