To see the other types of publications on this topic, follow the link: Neural networks (Computer science).

Dissertations / Theses on the topic 'Neural networks (Computer science)'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Neural networks (Computer science).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Landassuri, Moreno Victor Manuel. "Evolution of modular neural networks." Thesis, University of Birmingham, 2012. http://etheses.bham.ac.uk//id/eprint/3243/.

Full text
Abstract:
It is well known that the human brain is highly modular, having a structural and functional organization that allows the different regions of the brain to be reused for different cognitive processes. So far, this has not been fully addressed by artificial systems, and a better understanding of when and how modules emerge is required, with a broad framework indicating how modules could be reused within neural networks. This thesis provides a deep investigation of module formation, module communication (interaction) and module reuse during evolution for a variety of classification and prediction…
APA, Harvard, Vancouver, ISO, and other styles
2

Sloan, Cooper Stokes. "Neural bus networks." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/119711.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (pages 65-68). Bus schedules are unreliable, leaving passengers waiting and increasing commute times. This problem can be solved by modeling the traffic network and delivering predicted arrival times to passengers. Research att…
APA, Harvard, Vancouver, ISO, and other styles
3

Khan, Altaf Hamid. "Feedforward neural networks with constrained weights." Thesis, University of Warwick, 1996. http://wrap.warwick.ac.uk/4332/.

Full text
Abstract:
The conventional multilayer feedforward network with continuous weights is expensive to implement in digital hardware. Two new types of networks are proposed which lend themselves to cost-effective implementations in hardware and have a fast forward-pass capability. These two differ from the conventional model in having extra constraints on their weights: the first allows its weights to take integer values in the range [-3,3] only, whereas the second restricts its synapses to the set {-1,0,1} while allowing unrestricted offsets. The benefits of the first configuration are in having weights w… (An illustrative sketch of these weight constraints follows this entry.)
APA, Harvard, Vancouver, ISO, and other styles
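The weight constraints described in this abstract are easy to illustrate. The sketch below is not Khan's implementation; it simply assumes a projection step (rounding and clipping a continuous "shadow" weight matrix after each update), which is one common way to realise such constraints.

```python
# Minimal sketch, assuming projection-by-rounding; not the thesis's training scheme.
import numpy as np

def project_integer_weights(w, low=-3, high=3):
    """Constrain weights to integers in [low, high] (first network type)."""
    return np.clip(np.round(w), low, high)

def project_ternary_weights(w):
    """Constrain synaptic weights to {-1, 0, 1} (second network type);
    biases/offsets are left unconstrained, as in the abstract."""
    return np.clip(np.round(w), -1, 1)

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))      # 5 samples, 8 inputs
w = rng.normal(size=(8, 4))      # continuous "shadow" weights
b = rng.normal(size=4)           # unrestricted offsets

h_int = np.tanh(x @ project_integer_weights(w) + b)
h_ter = np.tanh(x @ project_ternary_weights(w) + b)
```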
4

Zaghloul, Waleed A. Lee Sang M. "Text mining using neural networks." Lincoln, Neb. : University of Nebraska-Lincoln, 2005. http://0-www.unl.edu.library.unl.edu/libr/Dissertations/2005/Zaghloul.pdf.

Full text
Abstract:
Thesis (Ph.D.)--University of Nebraska-Lincoln, 2005. Title from title screen (sites viewed on Oct. 18, 2005). PDF text: 100 p. : col. ill. Includes bibliographical references (p. 95-100 of dissertation).
APA, Harvard, Vancouver, ISO, and other styles
5

Hadjifaradji, Saeed. "Learning algorithms for restricted neural networks." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0016/NQ48102.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Cheung, Ka Kit. "Neural networks for optimization." HKBU Institutional Repository, 2001. http://repository.hkbu.edu.hk/etd_ra/291.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Ahamed, Woakil Uddin. "Quantum recurrent neural networks for filtering." Thesis, University of Hull, 2009. http://hydra.hull.ac.uk/resources/hull:2411.

Full text
Abstract:
The essence of stochastic filtering is to compute the time-varying probability density function (pdf) for the measurements of the observed system. In this thesis, a filter is designed based on the principles of quantum mechanics, where the Schrödinger wave equation (SWE) plays the key part. This equation is transformed to fit into the neural network architecture. Each neuron in the network mediates a spatio-temporal field with a unified quantum activation function that aggregates the pdf information of the observed signals. The activation function is the result of the solution of the SWE. The incorpor…
APA, Harvard, Vancouver, ISO, and other styles
8

Williams, Bryn V. "Evolutionary neural networks : models and applications." Thesis, Aston University, 1995. http://publications.aston.ac.uk/10635/.

Full text
Abstract:
The scaling problems which afflict attempts to optimise neural networks (NNs) with genetic algorithms (GAs) are disclosed. A novel GA-NN hybrid is introduced, based on the bumptree, a little-used connectionist model. As well as being computationally efficient, the bumptree is shown to be more amenable to genetic coding than other NN models. A hierarchical genetic coding scheme is developed for the bumptree and shown to have low redundancy, as well as being complete and closed with respect to the search space. When applied to optimising bumptree architectures for classification problems the GA…
APA, Harvard, Vancouver, ISO, and other styles
9

De, Jongh Albert. "Neural network ensembles." Thesis, Stellenbosch : Stellenbosch University, 2004. http://hdl.handle.net/10019.1/50035.

Full text
Abstract:
Thesis (MSc)--Stellenbosch University, 2004. ENGLISH ABSTRACT: It is possible to improve on the accuracy of a single neural network by using an ensemble of diverse and accurate networks. This thesis explores diversity in ensembles and looks at the underlying theory and mechanisms employed to generate and combine ensemble members. Bagging and boosting are studied in detail and I explain their success in terms of well-known theoretical instruments. An empirical evaluation of their performance is conducted and I compare them to a single classifier and to each other in terms of accuracy… (A minimal bagging sketch follows this entry.)
APA, Harvard, Vancouver, ISO, and other styles
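Bagging, mentioned in the abstract above, can be sketched in a few lines: train members on bootstrap resamples and combine them by majority vote. This is a generic illustration rather than the thesis's procedure; the synthetic dataset, network sizes, and the use of scikit-learn's MLPClassifier are assumptions made for the example.

```python
# Minimal bagging sketch under the stated assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=400, n_features=10, random_state=0)
rng = np.random.default_rng(0)

members = []
for seed in range(7):                                 # 7 ensemble members
    idx = rng.integers(0, len(X), size=len(X))        # bootstrap resample
    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=seed)
    members.append(net.fit(X[idx], y[idx]))

votes = np.stack([m.predict(X) for m in members])     # shape (7, n_samples)
majority = (votes.mean(axis=0) > 0.5).astype(int)     # binary majority vote
print("ensemble training accuracy:", (majority == y).mean())
```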
10

Lee, Ji Young Ph D. Massachusetts Institute of Technology. "Information extraction with neural networks." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/111905.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017. Cataloged from PDF version of thesis. Includes bibliographical references (pages 85-97). Electronic health records (EHRs) have been widely adopted, and are a gold mine for clinical research. However, EHRs, especially their text components, remain largely unexplored due to the fact that they must be de-identified prior to any medical investigation. Existing systems for de-identification rely on manual rules or features, which are time-consuming to develop and fine-tun…
APA, Harvard, Vancouver, ISO, and other styles
11

Zeng, Brandon. "Towards understanding residual neural networks." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/123067.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019. Cataloged from PDF version of thesis. Includes bibliographical references (page 37). Residual networks (ResNets) are now a prominent architecture in the field of deep learning. However, an explanation for their success remains elusive. The original view is that residual connections allow for the training of deeper networks, but it is not clear that added layers are always useful, or even how they are used. In this work, we find that residual connections distribute l… (A minimal sketch of a residual block follows this entry.)
APA, Harvard, Vancouver, ISO, and other styles
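A residual connection, as referred to in this abstract, simply adds a block's input to a learned residual, y = x + F(x). The sketch below is a generic illustration (the shapes, initialisation, and two-layer residual function are assumptions), not the experimental setup of the thesis.

```python
# Minimal residual-block forward pass under the stated assumptions.
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def residual_block(x, w1, w2):
    """Two-layer residual function F(x) = relu(x W1) W2, added back to x."""
    return x + relu(x @ w1) @ w2

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 64))
w1 = rng.normal(scale=0.1, size=(64, 64))
w2 = rng.normal(scale=0.1, size=(64, 64))
y = residual_block(x, w1, w2)   # same shape as x, so blocks can be stacked
```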
12

Sarda, Srikant 1977. "Neural networks and neurophysiological signals." Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/9806.

Full text
Abstract:
Thesis (S.B. and M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1999. Includes bibliographical references (p. 45). The purpose of this thesis project is to develop, implement, and validate a neural network which will classify compound muscle action potentials (CMAPs). The two classes of signals are "viable" and "non-viable." This classification system will be used as part of a quality assurance mechanism on the NC-stat nerve conduction monitoring system. The results show that standard backpropagation neural networks provide excepti…
APA, Harvard, Vancouver, ISO, and other styles
13

Nareshkumar, Nithyalakshmi. "Simultaneous versus Successive Learning in Neural Networks." Miami University / OhioLINK, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=miami1134068959.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Amin, Muhamad Kamal M. "Multiple self-organised spiking neural networks." Thesis, Available from the University of Aberdeen Library and Historic Collections Digital Resources. Online version available for University members only until Feb. 1, 2014, 2009. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?application=DIGITOOL-3&owner=resourcediscovery&custom_att_2=simple_viewer&pid=26029.

Full text
Abstract:
Thesis (Ph.D.)--Aberdeen University, 2009. With: Clustering with self-organised spiking neural network / Muhamad K. Amin ... et al. Joint 4th International Conference on Soft Computing and Intelligent Systems (SCIS) and 9th International Symposium on Advanced Intelligent Systems (SIS), Sept. 17-21, 2008, Nagoya, Japan. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
15

McMichael, Lonny D. (Lonny Dean). "A Neural Network Configuration Compiler Based on the Adaptrode Neuronal Model." Thesis, University of North Texas, 1992. https://digital.library.unt.edu/ark:/67531/metadc501018/.

Full text
Abstract:
A useful compiler has been designed that takes a high level neural network specification and constructs a low level configuration file explicitly specifying all network parameters and connections. The neural network model for which this compiler was designed is the adaptrode neuronal model, and the configuration file created can be used by the Adnet simulation engine to perform network experiments. The specification language is very flexible and provides a general framework from which almost any network wiring configuration may be created. While the compiler was created for the specialized ada…
APA, Harvard, Vancouver, ISO, and other styles
16

Yang, Horng-Chang. "Multiresolution neural networks for image edge detection and restoration." Thesis, University of Warwick, 1994. http://wrap.warwick.ac.uk/66740/.

Full text
Abstract:
One of the methods for building an automatic visual system is to borrow the properties of the human visual system (HVS). Artificial neural networks are based on this doctrine and they have been applied to image processing and computer vision. This work focused on the plausibility of using a class of Hopfield neural networks for edge detection and image restoration. To this end, a quadratic energy minimization framework is presented. Central to this framework are relaxation operations, which can be implemented using the class of Hopfield neural networks. The role of the uncertainty principle in… (A minimal sketch of Hopfield energy relaxation follows this entry.)
APA, Harvard, Vancouver, ISO, and other styles
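The quadratic energy minimisation mentioned in this abstract can be illustrated with a generic Hopfield relaxation. The sketch below is not the thesis's edge-detection or restoration formulation; the random symmetric weights and bias terms are placeholders for illustration.

```python
# Minimal Hopfield relaxation sketch under the stated assumptions.
import numpy as np

def energy(x, W, b):
    """Quadratic energy E(x) = -1/2 x^T W x - b^T x."""
    return -0.5 * x @ W @ x - b @ x

def relax(x, W, b, sweeps=20):
    """Asynchronous sign updates; for symmetric W with zero diagonal,
    this never increases the energy."""
    x = x.copy()
    for _ in range(sweeps):
        for i in range(len(x)):
            x[i] = 1.0 if W[i] @ x + b[i] >= 0 else -1.0
    return x

rng = np.random.default_rng(1)
W = rng.normal(size=(16, 16)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)
b = rng.normal(size=16)
x0 = rng.choice([-1.0, 1.0], size=16)
x_star = relax(x0, W, b)
print(energy(x0, W, b), "->", energy(x_star, W, b))
```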
17

Polhill, John Gareth. "Guaranteeing generalisation in neural networks." Thesis, University of St Andrews, 1995. http://hdl.handle.net/10023/12878.

Full text
Abstract:
Neural networks need to be able to guarantee their intrinsic generalisation abilities if they are to be used reliably. Mitchell's concept and version spaces technique is able to guarantee generalisation in the symbolic concept-learning environment in which it is implemented. Generalisation, according to Mitchell, is guaranteed when there is no alternative concept that is consistent with all the examples presented so far, except the current concept, given the bias of the user. A form of bidirectional convergence is used by Mitchell to recognise when the no-alternative situation has been reached…
APA, Harvard, Vancouver, ISO, and other styles
18

Salama, Rameri. "On evolving modular neural networks." University of Western Australia. Dept. of Computer Science, 2000. http://theses.library.uwa.edu.au/adt-WU2003.0011.

Full text
Abstract:
The basis of this thesis is the presumption that while neural networks are useful structures that can be used to model complex, highly non-linear systems, current methods of training the neural networks are inadequate in some problem domains. Genetic algorithms have been used to optimise both the weights and architectures of neural networks, but these approaches do not treat the neural network in a sensible manner. In this thesis, I define the basis of computation within a neural network as a single neuron and its associated input connections. Sets of these neurons, stored in a matrix…
APA, Harvard, Vancouver, ISO, and other styles
19

Bhattacharya, Dipankar. "Neural networks for signal processing." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1996. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq21924.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Tipping, Michael E. "Topographic mappings and feed-forward neural networks." Thesis, Aston University, 1996. http://publications.aston.ac.uk/672/.

Full text
Abstract:
This thesis is a study of the generation of topographic mappings - dimension reducing transformations of data that preserve some element of geometric structure - with feed-forward neural networks. As an alternative to established methods, a transformational variant of Sammon's method is proposed, where the projection is effected by a radial basis function neural network. This approach is related to the statistical field of multidimensional scaling, and from that the concept of a 'subjective metric' is defined, which permits the exploitation of additional prior knowledge concerning the data in… (A sketch of the Sammon stress objective follows this entry.)
APA, Harvard, Vancouver, ISO, and other styles
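The Sammon stress that such a transformational mapping would minimise can be written compactly. The sketch below shows only the standard objective (mismatch between pairwise distances before and after projection); it is not the RBF-network training procedure developed in the thesis, and the crude 2-D "projection" is purely illustrative.

```python
# Sammon stress sketch under the stated assumptions.
import numpy as np
from scipy.spatial.distance import pdist

def sammon_stress(X_high, X_low, eps=1e-12):
    """E = (1 / sum_ij D_ij) * sum_ij (D_ij - d_ij)^2 / D_ij, over pairs i<j."""
    D = pdist(X_high) + eps   # distances in the original space
    d = pdist(X_low)          # distances in the projected space
    return np.sum((D - d) ** 2 / D) / np.sum(D)

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 10))   # high-dimensional data
Y = X[:, :2]                    # a crude 2-D "projection" for illustration
print(sammon_stress(X, Y))
```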
21

Rountree, Nathan. "Initialising neural networks with prior knowledge." University of Otago. Department of Computer Science, 2007. http://adt.otago.ac.nz./public/adt-NZDU20070510.135442.

Full text
Abstract:
This thesis explores the relationship between two classification models: decision trees and multilayer perceptrons. Decision trees carve up databases into box-shaped regions, and make predictions based on the majority class in each box. They are quick to build and relatively easy to interpret. Multilayer perceptrons (MLPs) are often more accurate than decision trees, because they are able to use soft, curved, arbitrarily oriented decision boundaries. Unfortunately MLPs typically require a great deal of effort to determine a good number and arrangement of neural units, and then require many p…
APA, Harvard, Vancouver, ISO, and other styles
22

Shah, Jagesh V. (Jagesh Vijaykumar). "Learning dynamics in feedforward neural networks." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/36541.

Full text
Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1995. Includes bibliographical references (leaves 108-115). By Jagesh V. Shah. M.S.
APA, Harvard, Vancouver, ISO, and other styles
23

Mars, Risha R. "Organic LEDs for optoelectronic neural networks." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/77537.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012. Cataloged from PDF version of thesis. Includes bibliographical references (p. 79-81). In this thesis, I investigate the characteristics of Organic Light Emitting Diodes (OLEDs) and assess their suitability for use in the Compact Optoelectronic Integrated Neural (COIN) coprocessor. The COIN coprocessor, a prototype artificial neural network implemented in hardware, seeks to implement neural network algorithms in native optoelectronic hardware in order to do parallel type…
APA, Harvard, Vancouver, ISO, and other styles
24

Doshi, Anuja. "Aircraft position prediction using neural networks." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/33300.

Full text
Abstract:
Thesis (M. Eng. and S.B.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005. Includes bibliographical references (leaf 64). The Federal Aviation Administration (FAA) has been investigating early warning accident prevention systems in an effort to prevent runway collisions. One system in place is the Airport Movement Area Safety System (AMASS), developed under contract with the FAA. AMASS uses a linear prediction system to predict the position of an aircraft 5 to 30 seconds in the future. The system sounds an alarm to warn air traffic contr…
APA, Harvard, Vancouver, ISO, and other styles
25

Gu, Youyang. "Food adulteration detection using neural networks." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/106015.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (pages 99-100). In food safety and regulation, there is a need for an automated system to be able to make predictions on which adulterants (unauthorized substances in food) are likely to appear in which food products. For exampl…
APA, Harvard, Vancouver, ISO, and other styles
26

Mehta, Haripriya(Haripriya P. ). "Secure inference of quantized neural networks." Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/127663.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May 2020. Cataloged from the official PDF of thesis. Includes bibliographical references (pages 63-65). Running image recognition algorithms on medical datasets raises several privacy concerns. Hospitals may not have access to an image recognition model that a third party may have developed, and medical images are HIPAA protected and thus cannot leave hospital servers. However, with secure neural network inference, hospitals can send encrypted medical images as input to…
APA, Harvard, Vancouver, ISO, and other styles
27

Srivastava, Sanjana. "On foveation of deep neural networks." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/123134.

Full text
Abstract:
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (pages 61-63). The human ability to recognize objects is impaired when the object is not shown in full. "Minimal images" are the smallest regions of an image that remain recognizable for humans. [26] show that a slight modificatio…
APA, Harvard, Vancouver, ISO, and other styles
28

Behnke, Sven. "Hierarchical neural networks for image interpretation." Berlin [u.a.]: Springer, 2003. http://www.loc.gov/catdir/enhancements/fy0813/2003059597-d.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Whyte, William John. "Statistical mechanics of neural networks." Thesis, University of Oxford, 1995. http://ora.ox.ac.uk/objects/uuid:e17f9b27-58ac-41ad-8722-cfab75139d9a.

Full text
Abstract:
We investigate five different problems in the field of the statistical mechanics of neural networks. The first three problems involve attractor neural networks that optimise particular cost functions for storage of static memories as attractors of the neural dynamics. We study the effects of replica symmetry breaking (RSB) and attempt to find algorithms that will produce the optimal network if error-free storage is impossible. For the Gardner-Derrida network we show that full RSB is necessary for an exact solution everywhere above saturation. We also show that, no matter what the cost function…
APA, Harvard, Vancouver, ISO, and other styles
30

Adamu, Abdullahi S. "An empirical study towards efficient learning in artificial neural networks by neuronal diversity." Thesis, University of Nottingham, 2016. http://eprints.nottingham.ac.uk/33799/.

Full text
Abstract:
Artificial Neural Networks (ANNs) are biologically inspired algorithms, and it is natural that biology continues to inspire research in artificial neural networks. From the recent breakthrough of deep learning to the wake-sleep training routine, all draw inspiration from a common source: biology. The transfer functions of artificial neural networks play the important role of forming the decision boundaries necessary for learning. However, there has been relatively little research on transfer function optimization compared to other aspects of neural network optimization. In this work, neuronal dive…
APA, Harvard, Vancouver, ISO, and other styles
31

Mohr, Sheila Jean. "Temporal EKG signal classification using neural networks." Master's thesis, This resource online, 1991. http://scholar.lib.vt.edu/theses/available/etd-02022010-020115/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Treadgold, Nicholas K. Computer Science & Engineering, Faculty of Engineering, UNSW. "Constructive neural networks : generalisation, convergence and architectures." Awarded by: University of New South Wales, School of Computer Science and Engineering, 1999. http://handle.unsw.edu.au/1959.4/17615.

Full text
Abstract:
Feedforward neural networks trained via supervised learning have proven to be successful in the field of pattern recognition. The most important feature of a pattern recognition technique is its ability to successfully classify future data. This is known as generalisation. A more practical aspect of pattern recognition methods is how quickly they can be trained and how reliably a good solution is found. Feedforward neural networks have been shown to provide good generalisation on a variety of problems. A number of training techniques also exist that provide fast convergence. Two problems oft…
APA, Harvard, Vancouver, ISO, and other styles
33

Tavanaei, Amirhossein. "Spiking Neural Networks and Sparse Deep Learning." Thesis, University of Louisiana at Lafayette, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=10807940.

Full text
Abstract:
This document proposes new methods for training multi-layer and deep spiking neural networks (SNNs), specifically, spiking convolutional neural networks (CNNs). Training a multi-layer spiking network poses difficulties because the output spikes do not have derivatives and the commonly used backpropagation method for non-spiking networks is not easily applied. Our methods use novel versions of the brain-like, local learning rule named spike-timing-dependent plasticity (STDP) that incorporates supervised and unsupervised components. Our method starts with conventional learning methods and co… (A sketch of the standard pairwise STDP rule follows this entry.)
APA, Harvard, Vancouver, ISO, and other styles
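The pairwise STDP rule mentioned in this abstract is commonly written as an exponential window around the pre/post spike-time difference. The sketch below uses textbook constants and the standard rule; it is not the modified, supervised/unsupervised STDP variant proposed in the thesis.

```python
# Standard pairwise STDP sketch under the stated assumptions.
import numpy as np

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for one pre/post spike pair.
    delta_t = t_post - t_pre (ms): positive -> potentiation, negative -> depression."""
    if delta_t >= 0:
        return a_plus * np.exp(-delta_t / tau_plus)
    return -a_minus * np.exp(delta_t / tau_minus)

for dt in (-40, -10, 5, 30):
    print(dt, round(stdp_dw(dt), 5))
```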
34

Czuchry, Andrew J. Jr. "Toward a formalism for the automation of neural network construction and processing control." Diss., Georgia Institute of Technology, 1993. http://hdl.handle.net/1853/9199.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Bragansa, John. "On the performance issues of the bidirectional associative memory." Thesis, Georgia Institute of Technology, 1993. http://hdl.handle.net/1853/17809.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Post, David L. "Network Management: Assessing Internet Network-Element Fault Status Using Neural Networks." Ohio : Ohio University, 2008. http://www.ohiolink.edu/etd/view.cgi?ohiou1220632155.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Morphet, Steven Brian Işık Can. "Modeling neural networks via linguistically interpretable fuzzy inference systems." Related electronic resource: Current Research at SU : database of SU dissertations, recent titles available full text, 2004. http://wwwlib.umi.com/cr/syr/main.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Ngom, Alioune. "Synthesis of multiple-valued logic functions by neural networks." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/NQ36787.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Rivest, François. "Knowledge transfer in neural networks : knowledge-based cascade-correlation." Thesis, McGill University, 2002. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=29470.

Full text
Abstract:
Most neural network learning algorithms cannot use knowledge other than what is provided in the training data. Initialized using random weights, they cannot use prior knowledge such as knowledge stored in previously trained networks. This manuscript thesis addresses this problem. It contains a literature review of the relevant static and constructive neural network learning algorithms and of the recent research on transfer of knowledge across neural networks. Manuscript 1 describes a new algorithm, named knowledge-based cascade-correlation (KBCC), which extends the cascade-correlation learning…
APA, Harvard, Vancouver, ISO, and other styles
40

Künzle, Philippe. "Building topological maps for robot navigation using neural networks." Thesis, McGill University, 2005. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=82266.

Full text
Abstract:
Robots carrying out tasks in an unknown environment often need to build a map in order to be able to navigate. One approach is to create a detailed map of the environment containing the position of obstacles. But this option can use a large amount of memory, especially if the environment is large. Another approach, closer to how people build a mental map, is the topological map. A topological map contains only places that are easy to recognize (landmarks) and links them together. In this thesis, we explore the issue of creating a topological map from range data. A robot in a simulated en…
APA, Harvard, Vancouver, ISO, and other styles
41

Yang, Xiao. "Memristor based neural networks : feasibility, theories and approaches." Thesis, University of Kent, 2014. https://kar.kent.ac.uk/49041/.

Full text
Abstract:
Memristor-based neural networks refer to the utilisation of memristors, the newly emerged nanoscale devices, in building neural networks. The memristor was first postulated by Leon Chua in 1971 as the fourth fundamental passive circuit element and experimentally validated by HP Labs in 2008. Memristors, short for memory-resistors, have a peculiar memory effect which distinguishes them from resistors. By applying a bias voltage across it, the resistance of a memristor, namely memristance, is changed. In addition, the memristance is retained when the power supply is removed, which demonstra… (An illustrative memristor simulation sketch follows this entry.)
APA, Harvard, Vancouver, ISO, and other styles
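The memory effect described in this abstract can be illustrated with the linear ion-drift memristor model associated with the 2008 HP Labs device: the state variable integrates the applied current, and the resistance depends on that state. The parameter values and the Euler integration below are assumptions for illustration, not results or models from the thesis.

```python
# Linear ion-drift memristor sketch under the stated assumptions.
import numpy as np

R_ON, R_OFF = 100.0, 16e3        # on/off resistances (ohms)
D = 10e-9                        # device thickness (m)
MU_V = 1e-14                     # dopant mobility (m^2 s^-1 V^-1)

def simulate(v, dt=1e-6, x0=0.1):
    """Euler-integrate the normalised state x = w/D under a voltage waveform v."""
    x, mem = x0, []
    for vt in v:
        m = R_ON * x + R_OFF * (1 - x)                      # instantaneous memristance
        i = vt / m                                          # current through the device
        x = np.clip(x + dt * MU_V * R_ON / D**2 * i, 0.0, 1.0)
        mem.append(m)
    return np.array(mem)

t = np.arange(0, 0.02, 1e-6)
m = simulate(np.sin(2 * np.pi * 50 * t))   # 50 Hz sinusoidal drive
print(m.min(), m.max())                    # resistance varies with voltage history
```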
42

Wang, Fengzhen. "Neural networks for data fusion." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp02/NQ30179.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Horvitz, Richard P. "Symbol Grounding Using Neural Networks." University of Cincinnati / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1337887977.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Turner, Joe. "Application of artificial neural networks in pharmacokinetics /." Connect to full text, 2003. http://setis.library.usyd.edu.au/adt/public_html/adt-NU/public/adt-NU20031007.090937/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Miller, Paul Ian. "Recurrent neural networks and adaptive motor control." Thesis, University of Stirling, 1997. http://hdl.handle.net/1893/21520.

Full text
Abstract:
This thesis is concerned with the use of neural networks for motor control tasks. The main goal of the thesis is to investigate ways in which the biological notions of motor programs and Central Pattern Generators (CPGs) may be implemented in a neural network framework. Biological CPGs can be seen as components within a larger control scheme, which is basically modular in design. In this thesis, these ideas are investigated through the use of modular recurrent networks, which are used in a variety of control tasks. The first experimental chapter deals with learning in recurrent networks, and i…
APA, Harvard, Vancouver, ISO, and other styles
46

Chen, Francis Xinghang. "Modeling human vision using feedforward neural networks." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/112824.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (pages 81-86). In this thesis, we discuss the implementation, characterization, and evaluation of a new computational model for human vision. Our goal is to understand the mechanisms enabling invariant perception under scaling,…
APA, Harvard, Vancouver, ISO, and other styles
47

Dernoncourt, Franck. "Sequential short-text classification with neural networks." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/111880.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017. Cataloged from PDF version of thesis. Includes bibliographical references (pages 69-79). Medical practice too often fails to incorporate recent medical advances. The two main reasons are that over 25 million scholarly medical articles have been published, and medical practitioners do not have the time to perform literature reviews. Systematic reviews aim at summarizing published medical evidence, but writing them requires tremendous human effort. In this thesis, we…
APA, Harvard, Vancouver, ISO, and other styles
48

Zhang, Jeffrey M. Eng Massachusetts Institute of Technology. "Enhancing adversarial robustness of deep neural networks." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122994.

Full text
Abstract:
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (pages 57-58). Logit-based regularization and pretrain-then-tune are two approaches that have recently been shown to enhance adversarial robustness of machine learning models. In the realm of regularization, Zhang et al. (2019) pr…
APA, Harvard, Vancouver, ISO, and other styles
49

Miglani, Vivek N. "Comparing learned representations of deep neural networks." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/123048.

Full text
Abstract:
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (pages 63-64). In recent years, a variety of deep neural network architectures have obtained substantial accuracy improvements in tasks such as image classification, speech recognition, and machine translation, yet little is known…
APA, Harvard, Vancouver, ISO, and other styles
50

Trinh, Loc Quang. "Greedy layerwise training of convolutional neural networks." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/123128.

Full text
Abstract:
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (pages 61-63). Layerwise training presents an alternative approach to end-to-end back-propagation for training deep convolutional neural networks. Although previous work was unsuccessful in demonstrating the viability of layerwise…
APA, Harvard, Vancouver, ISO, and other styles