Academic literature on the topic 'Artificial Neural Networks and Recurrent Neural Networks'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Artificial Neural Networks and Recurrent Neural Networks.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Dissertations / Theses on the topic "Artificial Neural Networks and Recurrent Neural Networks"

1. Kolen, John F. "Exploring the computational capabilities of recurrent neural networks." The Ohio State University, 1994. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487853913100192.

2. Shao, Yuanlong. "Learning Sparse Recurrent Neural Networks in Language Modeling." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1398942373.

3. Gudjonsson, Ludvik. "Comparison of two methods for evolving recurrent artificial neural networks for." Thesis, University of Skövde, 1998. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-155.

Abstract:
In this dissertation a comparison of two evolutionary methods for evolving artificial neural networks (ANNs) for robot control is made. The methods compared are SANE with enforced sub-population and delta-coding, and marker-based encoding. In an attempt to speed up evolution, marker-based encoding is extended with delta-coding. The task selected for the comparison is the hunter-prey task, which requires the robot controller to possess some form of memory, as the prey can move out of sensor range. Incremental evolution is used to evolve the complex behaviour required to handle this task successfully. The comparison…
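
The delta-coding mentioned in the abstract restarts the evolutionary search around the best individual found so far and evolves small perturbation ("delta") vectors rather than full genomes. Below is a minimal, hypothetical sketch of that idea; the `fitness` function is a stand-in, since the thesis's actual objective is a hunter-prey controller score:

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(weights):
    # Stand-in objective (hypothetical); in the thesis this would be the
    # hunter-prey score of the recurrent controller these weights define.
    return -np.sum((weights - 0.5) ** 2)

best = rng.standard_normal(16)      # weight vector of some evolved network
for phase in range(5):              # delta-coding restart phases
    # Evolve small deltas around the current best instead of full genomes.
    deltas = 0.1 * rng.standard_normal((30, best.size))
    winner = max(deltas, key=lambda d: fitness(best + d))
    if fitness(best + winner) > fitness(best):
        best = best + winner        # accept the best perturbation
print(fitness(best))                # climbs toward 0 as the search converges
```
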
4. Parfitt, Shan Helen. "Explorations in anaphora resolution in artificial neural networks: implications for nativism." Thesis, Imperial College London, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.267247.

5. Napoli, Christian. "A-I: Artificial intelligence." Doctoral thesis, Università degli studi di Catania, 2016. http://hdl.handle.net/20.500.11769/490996.

Abstract:
In this thesis we proposed new neural architectures and information-theory approaches. By means of wavelet analysis, neural networks, and our own creations, namely the wavelet recurrent neural networks and the radial basis probabilistic neural networks, we tried to better understand, model, and cope with human behavior itself. The first idea was to model the workers of a crowdsourcing project as nodes on a cloud-computing system, but we also hope to have exceeded the limits of such a definition. We hope to have opened a door on new possibilities to model the behavior of socially i…
6. Kramer, Gregory Robert. "An analysis of neutral drift's effect on the evolution of a CTRNN locomotion controller with noisy fitness evaluation." Wright State University / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=wright1182196651.

7. Rallabandi, Pavan Kumar. "Processing hidden Markov models using recurrent neural networks for biological applications." Thesis, University of the Western Cape, 2013. http://hdl.handle.net/11394/4525.

Abstract:
In this thesis, we present a novel hybrid architecture combining two of the most popular sequence recognition models: Recurrent Neural Networks (RNNs) and Hidden Markov Models (HMMs). Though sequence recognition problems can potentially be modelled by well-trained HMMs, HMMs alone cannot provide a reasonable solution to complicated recognition problems. In contrast, the ability of RNNs to recognize complex sequences is known to be exceptionally good. It should be noted that in the past, methods for applying HMMs into RNNs have be…
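
The abstract does not spell out the coupling, but a common way to build such a hybrid is to let a trained network supply the per-state emission scores that a standard HMM decoder would otherwise read from a fixed emission table. A minimal sketch under that assumption (all matrices below are hypothetical toy values):

```python
import numpy as np

def viterbi(log_trans, log_emit):
    """Viterbi decoding where log_emit[t, s] could come from a recurrent
    network instead of a fixed HMM emission table (the hybrid idea).
    Assumes a uniform initial state distribution."""
    T, S = log_emit.shape
    score = log_emit[0].copy()
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_trans      # S x S: prev state -> next
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + log_emit[t]
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):              # backtrack the best path
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy example: 2 hidden states over 4 time steps; emission scores stand in
# for what a trained network might output (hypothetical values).
log_trans = np.log(np.array([[0.8, 0.2], [0.3, 0.7]]))
log_emit = np.log(np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]))
print(viterbi(log_trans, log_emit))            # -> [0, 0, 1, 1]
```
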
8. Salihoglu, Utku. "Toward a brain-like memory with recurrent neural networks." Doctoral thesis, Université Libre de Bruxelles, 2009. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210221.

Abstract:
For the last twenty years, several assumptions have been expressed in the fields of information processing, neurophysiology, and cognitive science. First, neural networks and their dynamical behaviors, described in terms of attractors, are the natural way adopted by the brain to encode information. Any information item to be stored in the neural network should be coded in some way or another in one of the dynamical attractors of the brain, and retrieved by stimulating the network to trap its dynamics in the desired item's basin of attraction. The second view shared by neural network researchers is to base…
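
This attractor view of memory goes back to Hopfield-style associative networks, where each stored pattern is a fixed point of the dynamics and recall means letting the state fall into the nearest basin of attraction. A minimal sketch of that classical mechanism (not the thesis's own architecture):

```python
import numpy as np

def hopfield_train(patterns):
    """Hebbian storage: each ±1 pattern becomes a fixed-point attractor."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0.0)       # no self-connections
    return W

def hopfield_recall(W, state, steps=20):
    """Iterate the dynamics until they settle into a basin of attraction."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1      # break ties deterministically
    return state

pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = hopfield_train(pattern[None, :])
noisy = pattern.copy()
noisy[0] *= -1                     # corrupt one bit of the stored item
print(np.array_equal(hopfield_recall(W, noisy), pattern))  # True: recalled
```
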
9. Yang, Jidong. "Road crack condition performance modeling using recurrent Markov chains and artificial neural networks." [Tampa, Fla.]: University of South Florida, 2004. http://purl.fcla.edu/fcla/etd/SFE0000567.

10. Willmott, Devin. "Recurrent Neural Networks and Their Applications to RNA Secondary Structure Inference." UKnowledge, 2018. https://uknowledge.uky.edu/math_etds/58.

Abstract:
Recurrent neural networks (RNNs) are state-of-the-art sequential machine learning tools, but have difficulty learning sequences with long-range dependencies due to the exponential growth or decay of gradients backpropagated through the RNN. Some methods overcome this problem by modifying the standard RNN architecture to force the recurrent weight matrix W to remain orthogonal throughout training. The first half of this thesis presents a novel orthogonal RNN architecture that enforces the orthogonality of W by parametrizing it with a skew-symmetric matrix via the Cayley transform. We present rules for…
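
The Cayley transform named in the abstract maps any skew-symmetric matrix A to an orthogonal matrix, for instance W = (I + A)^{-1}(I - A), so optimizing over A keeps W exactly orthogonal and gradients backpropagated through W neither explode nor vanish. A minimal numerical check of the parametrization (a sketch only; the thesis's actual training rules are not reproduced here):

```python
import numpy as np

def cayley_orthogonal(A):
    """Cayley transform: W = (I + A)^{-1} (I - A) is orthogonal
    whenever A is skew-symmetric (A.T == -A)."""
    I = np.eye(A.shape[0])
    return np.linalg.solve(I + A, I - A)

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = (M - M.T) / 2                            # make a skew-symmetric matrix
W = cayley_orthogonal(A)
print(np.allclose(W.T @ W, np.eye(4)))       # True: W.T W = I
print(np.allclose(np.abs(np.linalg.eigvals(W)), 1.0))  # all |eigenvalues| = 1
```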