Academic literature on the topic 'Neural computer'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Neural computer.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Dissertations / Theses on the topic "Neural computer"

1. Somers, Harriet. "A neural computer." Thesis, University of York, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.362021.

2. Churcher, Stephen. "VLSI neural networks for computer vision." Thesis, University of Edinburgh, 1993. http://hdl.handle.net/1842/13397.

Abstract:
Recent years have seen the rise to prominence of a powerful new computational paradigm: the so-called artificial neural network. Loosely based on the microstructure of the central nervous system, neural networks are massively parallel arrangements of simple processing elements (neurons) which communicate with each other through variable-strength connections (synapses). The simplicity of such a description belies the complexity of calculations which neural networks are able to perform. Allied to this, the emergent properties of noise resistance, fault tolerance, and large data bandwidths (all arising from the parallel architecture) mean that neural networks, when appropriately implemented, represent a powerful tool for solving many problems which require the processing of real-world data. A computer vision task (the classification of regions in images of segmented natural scenes) is presented as a problem in which large volumes of data need to be processed quickly and accurately and, in certain circumstances, disambiguated. Of the classifiers tried, the neural network (a multi-layer perceptron) was found to provide the best overall solution to the task of distinguishing between regions which were 'roads' and those which were 'not roads'. In order that best use might be made of the parallel processing abilities of neural networks, a variety of special-purpose hardware implementations are discussed, before two different analogue VLSI designs are presented, complete with characterisation and test results. The latter of these chips (the EPSILON device) is used as the basis for a practical neuro-computing system, and the results of experimentation with different applications are presented. Comparisons with computer simulations demonstrate the accuracy of the chips and their ability to support learning algorithms, thereby proving the viability of pulsed analogue VLSI techniques for the implementation of artificial neural networks.
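To make the classification setup concrete, here is a minimal sketch of a multi-layer perceptron trained by backpropagation for a two-class region-labelling task ('road' vs. 'not road'). The feature count, hidden-layer size, and synthetic data are illustrative assumptions, not details taken from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each image region is summarised by 8 features
# (e.g. colour and texture statistics); label 1 = 'road', 0 = 'not road'.
X = rng.normal(size=(200, 8))                     # placeholder features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)   # placeholder labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 6 units, a typically small MLP for the period.
W1 = rng.normal(scale=0.5, size=(8, 6)); b1 = np.zeros(6)
W2 = rng.normal(scale=0.5, size=(6, 1)); b2 = np.zeros(1)

lr = 0.5
for epoch in range(500):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2).ravel()
    # Backward pass: gradient of the mean cross-entropy loss.
    d_out = (p - y)[:, None] / len(y)
    d_hid = (d_out @ W2.T) * h * (1.0 - h)
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_hid); b1 -= lr * d_hid.sum(axis=0)

print("training accuracy:", ((p > 0.5) == y).mean())
```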
3. Khan, Altaf Hamid. "Feedforward neural networks with constrained weights." Thesis, University of Warwick, 1996. http://wrap.warwick.ac.uk/4332/.

Abstract:
The conventional multilayer feedforward network with continuous weights is expensive to implement in digital hardware. Two new types of network are proposed which lend themselves to cost-effective implementations in hardware and have a fast forward-pass capability. The two differ from the conventional model in having extra constraints on their weights: the first allows its weights to take integer values in the range [-3, 3] only, whereas the second restricts its synapses to the set {-1, 0, 1} while allowing unrestricted offsets. The benefits of the first configuration are weights which are only 3 bits deep and a multiplication operation requiring at most one shift, one add, and one sign-change instruction. The advantages of the second are 1-bit synapses and a multiplication operation consisting of a single sign-change instruction. The procedure proposed for training these networks starts like the conventional error backpropagation procedure, but becomes more and more discretised in its behaviour as the network gets closer to an error minimum. Based mainly on steepest descent, it also has a perturbation mechanism to avoid getting trapped in local minima, and a novel mechanism for rounding off 'near integers'. It incorporates weight elimination implicitly, which simplifies the choice of the start-up network configuration for training. It is shown that the integer-weight network, although lacking the universal approximation capability, can implement learning tasks, especially classification tasks, to acceptable accuracies. A new theoretical result is presented which shows that the multiplier-free network is a universal approximator over the space of continuous functions of one variable. In light of experimental results, it is conjectured that the same is true for functions of many variables. Decision and error surfaces are used to explore the discrete-weight approximation of continuous-weight networks using discretisation schemes other than integer weights. The results suggest that, provided a suitable discretisation interval is chosen, a discrete-weight network can be found which performs as well as a continuous-weight network, but that it may require more hidden neurons than its conventional counterpart. Experiments are performed to compare the generalisation performance of the new networks with that of the conventional one using three very different benchmarks: the MONK's benchmark, a set of artificial tasks designed to compare the capabilities of learning algorithms; the 'onset of diabetes mellitus' prediction data set, a realistic set with very noisy attributes; and the handwritten numeral recognition database, a realistic but very structured data set. The results indicate that the new networks, despite having strong constraints on their weights, have generalisation performance similar to that of their conventional counterparts.
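The hardware-cost claims translate directly into code. Below is a sketch of the two constrained 'multiplications', written to respect the operation counts quoted in the abstract (at most one shift, one add, and one sign change for integer weights in [-3, 3]; a single sign change for ternary synapses). The decompositions are my own illustration, not circuitry from the thesis:

```python
def mul_int_weight(x: int, w: int) -> int:
    """Multiply an activation x by an integer weight w in [-3, 3]
    using at most one shift, one add, and one sign change."""
    assert -3 <= w <= 3
    magnitude = abs(w)
    if magnitude == 0:
        product = 0
    elif magnitude == 1:
        product = x               # no arithmetic needed
    elif magnitude == 2:
        product = x << 1          # one shift
    else:                         # magnitude == 3
        product = (x << 1) + x    # one shift plus one add
    return -product if w < 0 else product   # at most one sign change

def mul_ternary_synapse(x: int, w: int) -> int:
    """Multiplier-free network: synapse w is in {-1, 0, 1}, so
    'multiplication' is at most a single sign change."""
    assert w in (-1, 0, 1)
    return 0 if w == 0 else (x if w == 1 else -x)

print(mul_int_weight(5, -3))       # -15
print(mul_ternary_synapse(5, -1))  # -5
```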
4. Kulakov, Anton. "Multiprocessing neural network simulator." Thesis, University of Southampton, 2013. https://eprints.soton.ac.uk/348420/.

Abstract:
Over the last few years tremendous progress has been made in neuroscience by employing simulation tools for investigating neural network behaviour. Many simulators have been created over the last few decades, and their number and feature sets continually grow owing to persistent interest from researchers and engineers. This work presents simulation software able to simulate a large-scale neural network. Based on a highly abstract integrate-and-fire neuron model, a clock-driven sequential simulator has been developed in C++. The program is able to associate input patterns with output patterns. Its novel, biologically plausible learning mechanism uses long-term potentiation and long-term depression to change the strength of the connections between neurons based on a global binary feedback signal. The sequential model was later extended to a multi-processor system, which executes the described learning algorithm using an event-driven technique on a parallel distributed framework, simulating a neural network asynchronously. This allows the simulation to handle larger-scale neural networks while remaining immune to processor failure and communication problems. The main benefit of the resulting multi-processor neural network simulator is the ability to simulate large-scale neural networks using highly parallel distributed computing. For that reason the simulator's design incorporates an efficient weight-adjusting algorithm and an efficient mechanism for asynchronous local communication between processors.
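As a rough illustration of the style of model described, here is a clock-driven sketch of integrate-and-fire units whose synapses are strengthened (LTP) or weakened (LTD) under a single global binary feedback signal. All constants, sizes, and the exact update rule are assumptions for illustration, not the thesis's algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

N_IN, N_OUT = 16, 4
THRESHOLD = 1.0
w = rng.uniform(0.0, 0.2, size=(N_IN, N_OUT))  # synaptic strengths
v = np.zeros(N_OUT)                            # accumulated 'charge'

def step(spikes_in, reward, lr=0.01):
    """One clock tick: integrate incoming spikes, fire, then apply
    LTP or LTD gated by the global binary feedback."""
    global v, w
    v += spikes_in @ w                # integrate
    fired = v >= THRESHOLD            # fire where threshold is crossed
    v[fired] = 0.0                    # reset the units that fired
    # Strengthen co-active input/output pairs on positive feedback
    # (LTP); weaken them on negative feedback (LTD).
    coactive = np.outer(spikes_in, fired.astype(float))
    w += lr * coactive if reward else -lr * coactive
    np.clip(w, 0.0, 1.0, out=w)
    return fired

spikes = (rng.random(N_IN) < 0.3).astype(float)
print(step(spikes, reward=True))
```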
5. Durrant, Simon. "Negative correlation in neural systems." Thesis, University of Sussex, 2010. http://sro.sussex.ac.uk/id/eprint/2387/.

Abstract:
In our attempt to understand neural systems, it is useful to identify statistical principles that may be beneficial in neural information processing, outline how these principles may work in theory, and demonstrate the benefits through computational modelling and simulation. Negative correlation is one such principle, and is the subject of this work. The main body of the work falls into three parts. The first part demonstrates the space-filling and accelerated central-limit-convergence benefits of negative correlation, both generally and in the specific neural context of V1 receptive fields. I outline two new algorithms combining traditional ICA with a correlation objective function: correlated component analysis seeks components with a given correlation matrix, while correlated basis analysis seeks basis functions with a given correlation matrix. The benefits of recovering components and basis functions with negative correlations are shown. The second part looks at the functional role of negative correlation for integrate-and-fire neurons in the context of suprathreshold stochastic resonance (SSR), for neurons receiving Poisson inputs modelled by a diffusion approximation. I show how the SSR effect can be seen in networks of spiking neurons, and further show how correlation can be used to control the noise level, and that optimal information transmission occurs for negatively correlated inputs when parameters take biophysically plausible values. The final part examines the question of how negative correlation may be implemented in the context of small networks of spiking neurons. Networks of integrate-and-fire neurons with and without lateral inhibitory connections are tested; the networks with inhibitory connections are found to perform better and show negatively correlated firing patterns. This result is extended to more biophysically detailed neuron and synapse models, highlighting the robust nature of the mechanism. Finally, the mechanism is explained as a threshold-unit approximation to non-threshold maximum-likelihood signal/noise decomposition.
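The accelerated central-limit-convergence benefit is easy to demonstrate numerically: the summed input a downstream neuron receives has markedly lower variance when its inputs are negatively correlated. A minimal sketch (the correlation value and population size are arbitrary assumptions; rho must stay above -1/(n-1) for the correlation matrix to remain positive definite):

```python
import numpy as np

rng = np.random.default_rng(2)
n, trials = 8, 100_000

# Unit-variance inputs with uniform pairwise correlation rho.
rho = -0.1
C = np.full((n, n), rho) + (1.0 - rho) * np.eye(n)
L = np.linalg.cholesky(C)

indep = rng.standard_normal((trials, n))
negcorr = indep @ L.T          # samples with covariance C

# Variance of the summed input to a downstream unit:
# independent inputs give n = 8; negatively correlated inputs give
# n * (1 + (n - 1) * rho) = 2.4.
print("independent sum variance:", indep.sum(axis=1).var())
print("neg. corr. sum variance :", negcorr.sum(axis=1).var())
```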
6. Baker, Thomas Edward. "Implementation limits for artificial neural networks." Full text open access at:, 1990. http://content.ohsu.edu/u?/etd,268.

7. Lam, Yiu Man. "Self-organized cortical map formation by guiding connections." View abstract or full-text, 2004. http://library.ust.hk/cgi/db/thesis.pl?ELEC%202005%20LAM.

8. Adamu, Abdullahi S. "An empirical study towards efficient learning in artificial neural networks by neuronal diversity." Thesis, University of Nottingham, 2016. http://eprints.nottingham.ac.uk/33799/.

Abstract:
Artificial neural networks (ANNs) are biologically inspired algorithms, and it is natural that biology continues to inspire research on them: from the recent breakthrough of deep learning to the wake-sleep training routine, all draw on the same source. The transfer functions of artificial neural networks play the important role of forming the decision boundaries necessary for learning. However, there has been relatively little research on transfer function optimisation compared to other aspects of neural network optimisation. In this work, neuronal diversity, a property found in biological neural networks, is explored as a potentially promising method of transfer function optimisation. This work shows how neural diversity can improve generalisation, in the context of the literature on the bias-variance decomposition and meta-learning. It then demonstrates that neural diversity, represented in the form of transfer function diversity, can yield diverse and accurate computational strategies that can be used as ensembles with competitive results, without supplementing it with other diversity-maintenance schemes that tend to be computationally expensive. This work also presents neural network meta-features, described as problem signatures, sampled from models with diverse transfer functions for problem characterisation. These were shown to meet the basic properties desired of any meta-feature, i.e. consistency for a given problem and discrimination between different problems. Furthermore, these meta-features were also used to study the underlying computational strategies adopted by the neural network models, which led to the discovery of the strong discriminatory property of the evolved transfer functions. The culmination of this study is the co-evolution of neurally diverse neurons with their weights and topology for efficient learning. The approach achieves significant generalisation ability, as demonstrated by an average MSE of 0.30 on 22 different benchmarks with minimal resources (i.e. two hidden units). These are precisely the properties associated with neural diversity, showing that efficiency and increased computational capacity can be replicated with transfer function diversity in artificial neural networks.
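A minimal sketch of the central idea, transfer-function diversity used as an ensemble: several small networks that differ only in their hidden-layer transfer function are combined by averaging. The particular function pool, the two-hidden-unit size (echoing the abstract), and the untrained random weights are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# A pool of diverse hidden-layer transfer functions.
TRANSFER_FUNCS = {
    "tanh":     np.tanh,
    "gaussian": lambda z: np.exp(-z ** 2),
    "sine":     np.sin,
}

def make_net(n_in, n_hidden, f):
    """Tiny one-hidden-layer regressor using transfer function f.
    (Weights are random here; training is omitted for brevity.)"""
    W1 = rng.normal(size=(n_in, n_hidden))
    w2 = rng.normal(size=n_hidden)
    return lambda X: f(X @ W1) @ w2

X = rng.normal(size=(5, 4))
nets = [make_net(4, 2, f) for f in TRANSFER_FUNCS.values()]

# Ensemble prediction: average the diverse members' outputs.
print(np.mean([net(X) for net in nets], axis=0))
```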
9. McMichael, Lonny D. (Lonny Dean). "A Neural Network Configuration Compiler Based on the Adaptrode Neuronal Model." Thesis, University of North Texas, 1992. https://digital.library.unt.edu/ark:/67531/metadc501018/.

Abstract:
A useful compiler has been designed that takes a high-level neural network specification and constructs a low-level configuration file explicitly specifying all network parameters and connections. The neural network model for which this compiler was designed is the adaptrode neuronal model, and the configuration file created can be used by the Adnet simulation engine to perform network experiments. The specification language is very flexible and provides a general framework from which almost any network wiring configuration may be created. While the compiler was created for the specialized adaptrode model, the wiring specification algorithms could also be used to specify the connections in other types of networks.
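The abstract does not give the Adnet file format, but the general idea of such a compiler, expanding a compact specification into an explicit list of every connection, can be sketched as follows (the spec and output formats here are invented for illustration):

```python
# Hypothetical high-level spec: three fully connected layers.
spec = {"layers": [3, 4, 2], "connect": "full"}

def compile_config(spec):
    """Expand a layer-size spec into explicit (from, to, weight) triples."""
    layers = spec["layers"]
    # Assign a global index to every neuron, layer by layer.
    offsets = [sum(layers[:i]) for i in range(len(layers))]
    config = []
    for l in range(len(layers) - 1):
        for i in range(layers[l]):             # presynaptic neuron
            for j in range(layers[l + 1]):     # postsynaptic neuron
                config.append((offsets[l] + i, offsets[l + 1] + j, 0.0))
    return config

for connection in compile_config(spec)[:5]:
    print(connection)    # e.g. (0, 3, 0.0): neuron 0 feeds neuron 3
```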
10. Yang, Horng-Chang. "Multiresolution neural networks for image edge detection and restoration." Thesis, University of Warwick, 1994. http://wrap.warwick.ac.uk/66740/.

Abstract:
One of the methods for building an automatic visual system is to borrow the properties of the human visual system (HVS). Artificial neural networks are based on this doctrine, and they have been applied to image processing and computer vision. This work focuses on the plausibility of using a class of Hopfield neural networks for edge detection and image restoration. To this end, a quadratic energy-minimisation framework is presented. Central to this framework are relaxation operations, which can be implemented using the class of Hopfield neural networks. The role of the uncertainty principle in vision is described, which imposes a limit on simultaneous localisation in both class and position space. It is shown how a multiresolution approach allows a trade-off between position and class resolution and ensures both robustness in noise and efficiency of computation. As edge detection and image restoration are ill-posed, some a priori knowledge is needed to regularise these problems. A multiresolution network is proposed to tackle the uncertainty problem and the regularisation of these ill-posed image processing problems. For edge detection, orientation information is used to construct a compatibility function for the strength of the links of the proposed Hopfield neural network. Edge detection results are presented for a number of synthetic and natural images, which show that the iterative network gives robust results at low signal-to-noise ratios (0 dB) and is at least as good as many previous methods at capturing complex region shapes. For restoration, mean square error is used as the quadratic energy function of the Hopfield neural network, and the results of the edge detection are used for adaptive restoration. Also shown are the results of restoration using the proposed iterative network framework.
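At the heart of the framework is relaxation on a quadratic energy. Below is a minimal sketch of asynchronous Hopfield relaxation on binary units, where each update can only lower the energy E(s) = -1/2 s^T W s - b^T s. The random symmetric weights stand in for the orientation-based compatibility function, which is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20

# Symmetric weights with zero diagonal: the standard Hopfield
# conditions guaranteeing that asynchronous updates never raise E.
A = rng.normal(size=(n, n))
W = (A + A.T) / 2.0
np.fill_diagonal(W, 0.0)
b = rng.normal(size=n)

s = rng.choice([0.0, 1.0], size=n)   # binary unit states

def energy(s):
    return -0.5 * (s @ W @ s) - b @ s

# Asynchronous relaxation: set each unit to the state that locally
# minimises the quadratic energy, sweeping in random order.
for sweep in range(5):
    for i in rng.permutation(n):
        s[i] = 1.0 if (W[i] @ s + b[i]) > 0.0 else 0.0
    print(f"sweep {sweep}: E = {energy(s):.3f}")
```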