
Dissertations / Theses on the topic 'Counter Propagation Neural Networks'


Consult the top 50 dissertations / theses for your research on the topic 'Counter Propagation Neural Networks.'


1

Kane, Andrew. "An instruction systolic array architecture for multiple neural network types." Thesis, Loughborough University, 1998. https://dspace.lboro.ac.uk/2134/16031.

Full text
Abstract:
Modern electronic systems, especially sensor and imaging systems, are beginning to incorporate their own neural network subsystems. In order for these neural systems to learn in real-time they must be implemented using VLSI technology, with as much of the learning processes incorporated on-chip as is possible. The majority of current VLSI implementations literally implement a series of neural processing cells, which can be connected together in an arbitrary fashion. Many do not perform the entire neural learning process on-chip, instead relying on other external systems to carry out part of the computation requirements of the algorithm. The work presented here utilises two dimensional instruction systolic arrays in an attempt to define a general neural architecture which is closer to the biological basis of neural networks - it is the synapses themselves, rather than the neurons, that have dedicated processing units. A unified architecture is described which can be programmed at the microcode level in order to facilitate the processing of multiple neural network types. An essential part of neural network processing is the neuron activation function, which can range from a sequential algorithm to a discrete mathematical expression. The architecture presented can easily carry out the sequential functions, and introduces a fast method of mathematical approximation for the more complex functions. This can be evaluated on-chip, thus implementing the entire neural process within a single system. VHDL circuit descriptions for the chip have been generated, and the systolic processing algorithms and associated microcode instruction set for three different neural paradigms have been designed. A software simulator of the architecture has been written, giving results for several common applications in the field.
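The abstract's point about on-chip evaluation of activation functions can be illustrated with the kind of piecewise-linear approximation commonly used in hardware designs. This is a generic sketch of the idea, not the approximation scheme developed in the thesis:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def pwl_sigmoid(x):
    """Three-segment piecewise-linear sigmoid (the 'hard sigmoid').

    Needs only a comparison, a shift (slope 1/4) and an add, which is
    why approximations of this shape are popular in VLSI designs."""
    if x <= -2.0:
        return 0.0
    if x >= 2.0:
        return 1.0
    return 0.5 + x / 4.0

# Worst-case error of the crude three-segment version over [-8, 8]
max_err = max(abs(sigmoid(t / 10.0) - pwl_sigmoid(t / 10.0))
              for t in range(-80, 81))
```

The worst-case error of this crude version is about 0.12, near the saturation knees at x = ±2; practical hardware schemes add more segments or a small lookup table to tighten it.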
APA, Harvard, Vancouver, ISO, and other styles
2

Malalur, Paresh (Paresh G.). "Interpretable neural networks via alignment and distribution propagation." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122686.

Full text
Abstract:
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Thesis: Ph.D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (pages 145-150).

In this thesis, we aim to develop methodologies to better understand and improve the performance of Deep Neural Networks in various settings where data is limited or missing. Unlike data-rich tasks, where neural networks have achieved human-level performance, other problems are naturally data-limited; on these, models have fallen short of human-level performance and there is abundant room for improvement. We focus on three types of problems where data is limited: one-shot learning and open-set recognition in the one-shot setting, unsupervised learning, and classification with missing data. The first setting of limited data that we tackle is when there are only a few examples per object type. During object classification, an attention mechanism can be used to highlight the area of the image that the model focuses on, thus offering a narrow view into the mechanism of classification. We expand on this idea by forcing the method to explicitly align images to be classified to reference images representing the classes. The mechanism of alignment is learned and therefore does not require that the reference objects are anything like those being classified. Beyond explanation, our exemplar-based cross-alignment method enables classification with only a single example per category (one-shot) or in the absence of any labels about new classes (open-set).

While one-shot and open-set recognition operate in cases where complete data is available for a few examples, the unsupervised and missing-data settings focus on cases where the labels are missing or where only partial input is available, respectively. Variational Auto-Encoders are a popular unsupervised learning model which learns how to map the input distribution into a simple latent distribution. We introduce a mechanism of approximate propagation of Gaussian densities through neural networks, using the Hellinger distance metric to find the best approximation, and demonstrate how to use this framework to improve the latent code efficiency of Variational Auto-Encoders. Expanding on this idea further, we introduce a novel method to learn the mapping between the input space and latent space which further improves the efficiency of the latent code by overcoming the variational bound. The final limited-data setting we explore is when the input data is incomplete or very noisy. Neural networks are inherently feed-forward, and hence inference methods developed for probabilistic models cannot be applied directly. We introduce two different methods to handle missing data. We first introduce a simple feed-forward model that redefines the linear operator as an ensemble to reweight the activations when portions of its receptive field are missing. We then use some of the insights gained to develop deep networks that propagate distributions of activations instead of point activations, allowing us to use message-passing methods to compensate for missing data while maintaining the feed-forward style approach when data is not missing.

by Paresh Malalur. Ph.D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science.
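The density-propagation idea in this abstract is easiest to see on a linear layer, where Gaussian moments propagate exactly; the Hellinger-based approximation described in the thesis is what handles the nonlinearities. A minimal sketch of the exact linear step (an illustration, not the thesis's implementation):

```python
def propagate_gaussian_linear(mean, var, W, b):
    """Propagate a Gaussian with independent components through
    y = W x + b.  The output mean is W @ mean + b; under the
    independence assumption the output variance is (W ** 2) @ var."""
    out_mean = [sum(w * m for w, m in zip(row, mean)) + bi
                for row, bi in zip(W, b)]
    out_var = [sum(w * w * v for w, v in zip(row, var))
               for row in W]
    return out_mean, out_var
```

A nonlinearity such as a ReLU or sigmoid breaks Gaussianity, which is where a best-fit approximation under a divergence (Hellinger distance, in the thesis) comes in.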
3

Fernando, Thudugala Mudalige K. G. "Hydrological applications of MLP neural networks with back-propagation." Thesis, Hong Kong : University of Hong Kong, 2002. http://sunzi.lib.hku.hk/hkuto/record.jsp?B25085517.

Full text
4

Bendelac, Shiri. "Enhanced Neural Network Training Using Selective Backpropagation and Forward Propagation." Thesis, Virginia Tech, 2018. http://hdl.handle.net/10919/83714.

Full text
Abstract:
Neural networks are making headlines every day as the tool of the future, powering artificial intelligence programs and supporting technologies never seen before. However, the training of neural networks can take days or even weeks for bigger networks, and requires the use of supercomputers and GPUs in academia and industry in order to achieve state-of-the-art results. This thesis discusses employing selective measures to determine when to backpropagate and forward propagate in order to reduce training time while maintaining classification performance. This thesis tests these new algorithms on the MNIST and CASIA datasets, and achieves successful results with both algorithms on the two datasets. The selective backpropagation algorithm shows a reduction of up to 93.3% of backpropagations completed, and the selective forward propagation algorithm shows a reduction of up to 72.90% in forward propagations and backpropagations completed, compared to baseline runs of always forward propagating and backpropagating. This work also discusses employing the selective backpropagation algorithm on a modified dataset with disproportionate under-representation of some classes compared to others.

Master of Science
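The core idea of selective backpropagation — skip the gradient step for examples the network already handles well — can be sketched on a toy one-parameter model. The threshold `tau`, learning rate, and data below are illustrative choices, not the settings used in the thesis:

```python
def train_selective(data, epochs=20, lr=0.1, tau=1e-3):
    """Toy 1-D linear model y_hat = w * x trained with selective
    backpropagation: the forward pass always runs, but the gradient
    step is skipped whenever an example's loss is already below the
    threshold tau."""
    w = 0.0
    skipped = total = 0
    for _ in range(epochs):
        for x, y in data:
            total += 1
            err = w * x - y
            loss = 0.5 * err * err
            if loss < tau:        # already well fit: skip backprop
                skipped += 1
                continue
            w -= lr * err * x     # gradient step for this example
    return w, skipped, total

data = [(1.0, 2.0), (2.0, 4.0)]   # consistent with y = 2 * x
w, skipped, total = train_selective(data)
```

As training converges, more and more examples fall under the threshold, so an increasing fraction of backward passes is saved while the fit is preserved.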
5

Wu, Ing-Chyuan. "Neural networks for financial markets analyses and options valuation /." free to MU campus, to others for purchase, 2002. http://wwwlib.umi.com/cr/mo/fullcit?p3074457.

Full text
6

Breutel, Stephan Werner. "Analysing the behaviour of neural networks." Thesis, Queensland University of Technology, 2004. https://eprints.qut.edu.au/15943/1/Stephan_Breutel_Thesis.pdf.

Full text
Abstract:
A new method is developed to determine a set of informative and refined interface assertions satisfied by functions that are represented by feed-forward neural networks. Neural networks have often been criticized for their low degree of comprehensibility. It is difficult to have confidence in software components if they have no clear and valid interface description. Precise and understandable interface assertions for a neural network based software component are required for safety-critical applications and for the integration into larger software systems. The interface assertions we are considering are of the form "if the input x of the neural network is in a region α of the input space, then the output f(x) of the neural network will be in the region β of the output space" and vice versa. We are interested in computing refined interface assertions, which can be viewed as the computation of the strongest pre- and postconditions a feed-forward neural network fulfills. Unions of polyhedra (polyhedra are the generalization of convex polygons in higher-dimensional spaces) are well suited for describing arbitrary regions of higher-dimensional vector spaces. Additionally, polyhedra are closed under affine transformations. Given a feed-forward neural network, our method produces an annotated neural network, where each layer is annotated with a set of valid linear inequality predicates. The main challenges for the computation of these assertions are to compute the solution of a non-linear optimization problem and the projection of a polyhedron onto a lower-dimensional subspace.
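The pre/postcondition computation described here can be illustrated in its simplest special case: boxes (products of intervals) instead of general polyhedra. Propagating a box through an affine layer and a ReLU is a standard sound bound, though looser than the polyhedra the thesis uses; this is a hedged sketch, not the thesis's algorithm:

```python
def interval_affine(lo, hi, W, b):
    """Tight bounding box of the image of [lo, hi] under y = W x + b,
    computed per output coordinate by sign-splitting the weights."""
    out_lo, out_hi = [], []
    for row, bi in zip(W, b):
        l = h = bi
        for w, lj, hj in zip(row, lo, hi):
            if w >= 0:
                l += w * lj
                h += w * hj
            else:
                l += w * hj
                h += w * lj
        out_lo.append(l)
        out_hi.append(h)
    return out_lo, out_hi

def interval_relu(lo, hi):
    """ReLU is monotone, so the box is simply clipped at zero."""
    return [max(0.0, v) for v in lo], [max(0.0, v) for v in hi]
```

An assertion of the form "if x is in α then f(x) is in β" then reduces to checking that the propagated output box is contained in β; polyhedra give the same check with much tighter regions.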
7

Breutel, Stephan Werner. "Analysing the behaviour of neural networks." Queensland University of Technology, 2004. http://eprints.qut.edu.au/15943/.

Full text
Abstract:
A new method is developed to determine a set of informative and refined interface assertions satisfied by functions that are represented by feed-forward neural networks. Neural networks have often been criticized for their low degree of comprehensibility. It is difficult to have confidence in software components if they have no clear and valid interface description. Precise and understandable interface assertions for a neural network based software component are required for safety-critical applications and for the integration into larger software systems. The interface assertions we are considering are of the form "if the input x of the neural network is in a region α of the input space, then the output f(x) of the neural network will be in the region β of the output space" and vice versa. We are interested in computing refined interface assertions, which can be viewed as the computation of the strongest pre- and postconditions a feed-forward neural network fulfills. Unions of polyhedra (polyhedra are the generalization of convex polygons in higher-dimensional spaces) are well suited for describing arbitrary regions of higher-dimensional vector spaces. Additionally, polyhedra are closed under affine transformations. Given a feed-forward neural network, our method produces an annotated neural network, where each layer is annotated with a set of valid linear inequality predicates. The main challenges for the computation of these assertions are to compute the solution of a non-linear optimization problem and the projection of a polyhedron onto a lower-dimensional subspace.
8

Teo, Chin Hock. "Back-propagation neural networks in adaptive control of unknown nonlinear systems." Thesis, Monterey, California. Naval Postgraduate School, 1991. http://hdl.handle.net/10945/26898.

Full text
Abstract:
Approved for public release; distribution is unlimited.

The objective of this research is to develop a Back-propagation Neural Network (BNN) to control certain classes of unknown nonlinear systems and explore the network's capabilities. The structure of the Direct Model Reference Adaptive Controller (DMRAC) for Linear Time Invariant (LTI) systems with unknown parameters is first analyzed. This structure is then extended using a BNN for adaptive control of unknown nonlinear systems. The specific structure of the BNN DMRAC is developed for control of four general classes of nonlinear systems modeled in discrete time. Experiments are conducted by placing a representative system from each class under the BNN's control. The conditions under which the BNN DMRAC can successfully control these systems are investigated. The design and training of the BNN are also studied. The results of the experiments show that the BNN DMRAC works for the representative systems considered, while the conventional least-squares estimator DMRAC fails. Based on analysis and experimental findings, some general conditions required to ensure that this technique works are postulated and discussed. General guidelines used to achieve the stability of the BNN learning process and good learning convergence are also discussed. To establish this as a general and significant control technique, further research is required to obtain analytically the conditions for stability of the controlled system, and to develop more specific rules and guidelines in the BNN design and training.
9

Cakarcan, Alpay. "Back-propagation neural networks in adaptive control of unknown nonlinear systems." Thesis, Monterey, California. Naval Postgraduate School, 1994. http://hdl.handle.net/10945/30830.

Full text
Abstract:
The objective of this thesis research is to develop a Back-Propagation Neural Network (BNN) to control certain classes of unknown nonlinear systems and explore the network's capabilities. The structure of the Direct Model Reference Adaptive Controller (DMRAC) for Linear Time Invariant (LTI) systems with unknown parameters is first analyzed and then extended to nonlinear systems by using a BNN. Nonminimum phase systems, both linear and nonlinear, have also been considered. The analysis of the experiments shows that the BNN DMRAC gives satisfactory results for the representative nonlinear systems considered, while the conventional least-squares estimator DMRAC fails. Based on the analysis and experimental findings, some general conditions are shown to be required to ensure that this technique is satisfactory. These conditions are presented and discussed. It has been found that further research needs to be done for the nonminimum phase case in order to guarantee stability and tracking. Also, to establish this as a more general and significant control technique, further research is required to develop more specific rules and guidelines for the BNN design and training.
10

Kane, Abdoul. "Activity propagation in two-dimensional neuronal networks." Columbus, Ohio : Ohio State University, 2005. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1133461090.

Full text
11

Chen, Peng. "Analysis of contribution rates and prediction based on back propagation neural networks." Thesis, University of Macau, 2017. http://umaclib3.umac.mo/record=b3691340.

Full text
12

Riera, Alexis J. "Predicting permeability and flow capacity distribution with back-propagation artificial neural networks." Morgantown, W. Va. : [West Virginia University Libraries], 2000. http://etd.wvu.edu/templates/showETD.cfm?recnum=1309.

Full text
Abstract:
Thesis (M.S.)--West Virginia University, 2000. Title from document title page. Document formatted into pages; contains xii, 86 p. : ill. (some col.), maps. Includes abstract. Includes bibliographical references (p. 61-63).
13

Rose, Stephen Matthew. "Online training of a neural network controller by improved reinforcement back-propagation." Thesis, Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/19177.

Full text
14

Jorgenson, Stephen Craig. "Modelling of electromagnetic propagation in the marine boundary layer using artificial neural networks." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape9/PQDD_0035/MQ65857.pdf.

Full text
15

Jorgenson, Stephen Craig. "Modelling of electromagnetic propagation in the marine boundary layer using artificial neural networks." Ottawa : National Library of Canada = Bibliothèque nationale du Canada, 2002. http://www.nlc-bnc.ca/obj/s4/f2/dsk1/tape9/PQDD%5F0035/MQ65857.pdf.

Full text
16

Tiezzi, Matteo. "Local Propagation in Neural Network Learning by Architectural Constraints." Doctoral thesis, Università di Siena, 2021. http://hdl.handle.net/11365/1133797.

Full text
Abstract:
A crucial role for the success of the Artificial Neural Networks (ANN) processing scheme has been played by the feed-forward propagation of signals. The input patterns undergo a series of stacked parametrized transformations, which foster deep feature extraction and an increasing representational power. Each artificial neural network layer aggregates information from its incoming connections, projects it to another space, and immediately propagates it to the next layer. Since its introduction in the '80s, BackPropagation (BP) has been considered the "de facto" algorithm for training neural nets. The weights associated with the connections between the network layers are updated by the backward pass, which is a straightforward application of the chain rule for the computation of derivatives in a composition of functions. This computation requires storing all the intermediate values of the process. Moreover, it implies the use of non-local information, since the activity of one neuron has the ability to affect all the subsequent units up to the last output layer. However, learning in the human brain can be considered a continuous, life-long and gradual process in which neuron activations fire leveraging local information, both in space (e.g., neighboring neurons) and time (e.g., previous states). Following this principle, this thesis is inspired by the idea of decoupling the computational scheme behind the standard processing of ANNs, in order to decompose its overall structure into local components. Such local parts are put into communication leveraging the unifying notion of "constraint". In particular, a set of additional variables is added to the learning problem, in order to store the information on the status of the constrained neural units. Therefore, it is possible to describe the computations performed by the network itself, guiding the evolution of these auxiliary variables via constraints.
This choice allows us to set up an optimization procedure that is "local", i.e., it does not require (1) querying the whole network, (2) accomplishing the diffusion of the information, or (3) buffering data streamed over time in order to be able to compute gradients. The thesis investigates three different learning settings that are instances of the aforementioned scheme: (1) constraints among layers in feed-forward neural networks, (2) constraints among the states of neighboring nodes in Graph Neural Networks, and (3) constraints among predictions over time.
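The decoupling described above can be sketched on a tiny two-layer linear model: the hidden activation becomes an explicit variable z, the layer equation z = w1*x becomes a soft constraint, and every update uses only locally available quantities. The model size, penalty weight, and learning rate below are illustrative assumptions, not the thesis's formulation:

```python
def train_local(x, y, steps=2000, lr=0.05, lam=1.0):
    """Gradient descent on F = (w2*z - y)**2/2 + lam*(z - w1*x)**2/2.

    z is an auxiliary variable standing for the hidden activation;
    w1 sees only the constraint residual and w2 only the output
    error, so no gradient is back-propagated through the whole
    composition of layers."""
    w1 = w2 = 0.5
    z = 0.0
    for _ in range(steps):
        e_out = w2 * z - y          # local to the output layer
        e_con = z - w1 * x          # local to the constraint
        w1_new = w1 + lr * lam * e_con * x
        w2_new = w2 - lr * e_out * z
        z_new = z - lr * (e_out * w2 + lam * e_con)
        w1, w2, z = w1_new, w2_new, z_new
    return w1, w2, z
```

At convergence both residuals vanish, so the auxiliary variable agrees with the layer it replaces and the composed network fits the target.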
17

Tien, Fang-Chih. "Using neural networks for three-dimensional measurement in stereo vision systems /." free to MU campus, to others for purchase, 1996. http://wwwlib.umi.com/cr/mo/fullcit?p9720552.

Full text
18

ZOPPO, GIANLUCA. "Dynamic Neural Networks and Brain-inspired Computing." Doctoral thesis, Politecnico di Torino, 2022. http://hdl.handle.net/11583/2971123.

Full text
19

Ahmed, Shamsuddin. "Development of self-adaptive back propagation and derivative free training algorithms in artificial neural networks." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2000. https://ro.ecu.edu.au/theses/1539.

Full text
Abstract:
Three new iterative, dynamically self-adaptive, derivative-free and training-parameter-free artificial neural network (ANN) training algorithms are developed. They are defined as the self-adaptive back propagation, multi-directional and restart ANN training algorithms. The descent direction in self-adaptive back propagation training is determined implicitly by a central difference approximation scheme, which chooses its step size according to the convergence behavior of the error function. This approach trains an ANN when the gradient information of the corresponding error function is not readily available. The self-adaptive variable learning rates per epoch are determined dynamically using a constrained interpolation search. As a result, an appropriate descent of the error function is achieved. The multi-directional training algorithm is self-adaptive and derivative free. It orients an initial search vector in a descent location at the early stage of training. Individual learning rates and a momentum term for all the ANN weights are determined optimally. The search directions are derived from rectilinear and Euclidean paths, which explore stiff ridges and valleys of the error surface to improve training. The restart training algorithm is derivative free. It redefines a degenerated simplex at a re-scale phase. This multi-parameter training algorithm updates ANN weights simultaneously instead of individually. The descent directions are derived from the centroid of a simplex along a reflection point opposite to the worst vertex. The algorithm is robust and has the ability to improve local search. These ANN training methods are appropriate when there is discontinuity in the corresponding ANN error function, or when the Hessian matrix is ill-conditioned or singular. The convergence properties of the algorithms are proved where possible. All the training algorithms successfully train exclusive OR (XOR), parity, character-recognition and forecasting problems.
The simulation results with XOR, parity and character-recognition problems suggest that all the training algorithms improve significantly over the standard back propagation algorithm in average number of epochs, function evaluations and terminal function values. The multivariate ANN calibration problem, as a regression model with a small data set, is relatively difficult to train. In forecasting problems, an ANN is trained to extrapolate the data in a validation period. The extrapolation results are compared with the actual data. The trained ANN performs better than the statistical regression method in mean absolute deviation, mean squared error and relative percentage error. The restart training algorithm succeeds in training a problem where other training algorithms face difficulty. It is shown that a seasonal time series problem possesses a Hessian matrix that has a high condition number. Convergence difficulties as well as slow training are therefore not atypical. The research exploits the geometry of the error surface to identify self-adaptive optimized learning rates and momentum terms. Consequently, the algorithms converge with a high success rate. These attributes brand the training algorithms as self-adaptive, automatic, parameter free, efficient and easy to use.
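The central-difference scheme mentioned in this abstract replaces analytic gradients with error-function evaluations. A generic sketch of that building block follows; the thesis additionally adapts the step size to the convergence behaviour, which this minimal version omits:

```python
def central_difference_grad(f, w, h=1e-5):
    """Approximate the gradient of f at w with central differences,
    so training needs only error-function evaluations and no
    analytic derivatives (i.e., it is derivative-free)."""
    grad = []
    for i in range(len(w)):
        w_plus = list(w)
        w_plus[i] += h
        w_minus = list(w)
        w_minus[i] -= h
        grad.append((f(w_plus) - f(w_minus)) / (2.0 * h))
    return grad
```

The cost is two function evaluations per parameter, which is exactly why the thesis's other two algorithms explore simplex- and search-direction-based alternatives.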
20

Rimer, Michael Edwin. "Improving Neural Network Classification Training." Diss., 2007. http://contentdm.lib.byu.edu/ETD/image/etd2094.pdf.

Full text
21

Fernando, Baminahennadige Rasitha Dilanjana Xavier. "Low Power, Dense Circuit Architectures and System Designs for Neural Networks using Emerging Memristors." University of Dayton / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1625595485590874.

Full text
22

Diesmann, Markus. "Conditions for stable propagation of synchronous spiking in cortical neural networks: single neuron dynamics and network properties." [S.l.] : [s.n.], 2002. http://deposit.ddb.de/cgi-bin/dokserv?idn=968772781.

Full text
23

Gedda, Emil, and Kalle Lindqvist. "Genetically modelled Artificial Neural Networks for Optical Character Recognition : An evaluation of chromosome encodings." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4504.

Full text
Abstract:
Context. Custom solutions to optical character recognition problems are able to reach higher recognition rates than a generic solution through their ability to exploit the limitations in the problem domain. Such solutions can be generated with genetic algorithms. This thesis evaluates two different chromosome encodings on an optical character recognition problem with a limited problem domain. Objectives. The main objective of this study is to compare two different chromosome encodings used in a genetic algorithm generating neural networks for an optical character recognition problem, to evaluate both the impact on the evolution of the network and the networks produced. Methods. A systematic literature review was conducted to find genetic chromosome encodings previously used on similar problems. One well-documented chromosome encoding was found. We implemented the found chromosome encoding, called binary, as well as a modified version, called weighted binary, which is intended to reduce the risk of bad mutations. Both chromosome encodings were evaluated on an optical character recognition problem with a limited problem domain. The experiment was run with two different population sizes, ten and fifty. A baseline for what to consider a good solution on the problem was acquired by implementing a template matching classifier on the same dataset. Template matching was chosen since it is used in existing solutions to the same problem. Results. Both encodings were able to reach good results compared to the baseline. The weighted binary encoding was able to reduce the problem with bad mutations which occurred in the binary encoding. However, it also had a negative impact on the ability to find the best networks. The weighted binary encoding was more prone to inbreeding with a small population than the binary encoding.
The best network generated using the binary encoding had a 99.65% recognition rate, while the best network generated by the weighted binary encoding had a 99.55% recognition rate. Conclusions. We conclude that it is possible to generate many good solutions for an optical character problem with a limited problem domain. Even though it is possible to reduce the risk of bad mutations in a genetic algorithm generating neural networks used for optical character recognition by designing the chromosome encoding, it may be more harmful than not doing it.
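A plain binary chromosome of the kind this thesis evaluates maps each network weight to a fixed-point bit string. The bit width and weight range below are illustrative assumptions, and the weighted variant's mutation handling is not modelled here:

```python
def encode(weights, bits=8, w_min=-1.0, w_max=1.0):
    """Encode each weight as a fixed-point binary string,
    concatenated into one chromosome (a plain 'binary' encoding)."""
    levels = (1 << bits) - 1
    chrom = ""
    for w in weights:
        q = round((w - w_min) / (w_max - w_min) * levels)
        chrom += format(q, "0{}b".format(bits))
    return chrom

def decode(chrom, bits=8, w_min=-1.0, w_max=1.0):
    """Recover the weight list from the chromosome (up to the
    quantization error of the fixed-point representation)."""
    levels = (1 << bits) - 1
    return [w_min + int(chrom[i:i + bits], 2) / levels * (w_max - w_min)
            for i in range(0, len(chrom), bits)]
```

Mutation then flips individual bits of the chromosome; the observation motivating the weighted variant is that a flip in a high-order bit position perturbs the decoded weight far more than one in a low-order position.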
24

Fugate, Earl L. "NONLINEAR SYSTEM MODELING UTILIZING NEURAL NETWORKS: AN APPLICATION TO THE DOUBLE SIDED ARC WELDING PROCESS." Lexington, Ky. : [University of Kentucky Libraries], 2005. http://lib.uky.edu/ETD/ukyelen2005t00307/etd.pdf.

Full text
Abstract:
Thesis (M.S.)--University of Kentucky, 2005. Title from document title page (viewed on November 8, 2005). Document formatted into pages; contains vi, 64 p. : ill. Includes abstract and vita. Includes bibliographical references (p. 61-63).
25

Östlin, Erik. "On Radio Wave Propagation Measurements and Modelling for Cellular Mobile Radio Networks." Doctoral thesis, Karlskrona : Blekinge Institute of Technology, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-00443.

Full text
Abstract:
To support the continuously increasing number of mobile telephone users around the world, mobile communication systems have become more advanced and sophisticated in their designs. As a result of the great success with the second generation mobile radio networks, deployment of the third and development of fourth generations, the demand for higher data rates to support available services, such as internet connection, video telephony and personal navigation systems, is ever growing. To be able to meet the requirements regarding bandwidth and number of users, enhancements of existing systems and introductions of conceptually new technologies and techniques have been researched and developed. Although new proposed technologies in theory provide increased network capacity, the backbone of a successful roll-out of a mobile telephone system is inevitably the planning of the network’s cellular structure. Hence, the fundamental aspect to a reliable cellular planning is the knowledge about the physical radio channel for wide sets of different propagation scenarios. Therefore, to study radio wave propagation in typical Australian environments, the Commonwealth Scientific and Industrial Research Organisation (CSIRO) and the Australian Telecommunications Cooperative Research Centre (ATcrc) in collaboration developed a cellular code division multiple access (CDMA) pilot scanner. The pilot scanner measurement equipment enables for radio wave propagation measurements in available commercial CDMA mobile radio networks, which in Australia are usually deployed for extensive rural areas. Over time, the collected measurement data has been used to characterise many different types of mobile radio environments and some of the results are presented in this thesis. The thesis is divided into an introduction section and four parts based on peer-reviewed international research publications. 
The introduction section presents the reader with some relevant background on channel and propagation modelling. Also, the CDMA scanner measurement system that was developed in parallel with the research results underpinning this thesis is presented. The first part presents work on the evaluation and development of the different revisions of the Recommendation ITU-R P.1546 point-to-area radio wave propagation prediction model. In particular, the modified application of the terrain clearance angle (TCA) and the calculation method of the effective antenna height are scrutinized. In the second part, the correlation between the small-scale fading characteristics, described by the Ricean K-factor, and the vegetation density in the vicinity of the mobile receiving antenna is investigated. The third part presents an artificial neural network (ANN) based technique incorporated to predict path loss in rural macrocell environments. Obtained results, such as prediction accuracy and training time, are presented for different-sized ANNs and different training approaches. Finally, the fourth part proposes an extension of the path loss ANN enabling the model to also predict small-scale fading characteristics.
APA, Harvard, Vancouver, ISO, and other styles
26

Sahasrabudhe, Mandar. "Neural network applications in fluid dynamics." Thesis, Mississippi State : Mississippi State University, 2002. http://library.msstate.edu/etd/show.asp?etd=etd-08112002-221615.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Halabian, Faezeh. "An Enhanced Learning for Restricted Hopfield Networks." Thesis, Université d'Ottawa / University of Ottawa, 2021. http://hdl.handle.net/10393/42271.

Full text
Abstract:
This research investigates the development of a training method for the Restricted Hopfield Network (RHN), a subcategory of Hopfield Networks. Hopfield Networks are recurrent neural networks proposed in 1982 by John Hopfield. They are useful for different applications such as pattern restoration, pattern completion/generalization, and pattern association. In this study, we propose an enhanced training method for the RHN which not only improves the convergence of the training sub-routine, but is also shown to enhance the learning capability of the network. In particular, after describing the architecture and components of the model, we propose a modified variant of SPSA which, in conjunction with back-propagation over time, results in a training algorithm with enhanced convergence for the RHN. The trained network is also shown to achieve better memory recall in the presence of noisy/distorted input. We perform several experiments, using various datasets, to verify the convergence of the training sub-routine, evaluate the impact of different parameters of the model, and compare the performance of the trained RHN in recreating distorted input patterns with that of conventional RBMs, Hopfield networks and other training methods.
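Plain SPSA, the base algorithm on which the abstract's modified variant builds, estimates a gradient from just two loss evaluations per step regardless of parameter dimension, by perturbing all coordinates at once with random signs. A minimal sketch on a toy quadratic (gains and schedules are standard textbook choices, not the thesis's settings):

```python
import numpy as np

def spsa_minimize(loss, theta, a=0.1, c=0.1, n_iter=500, seed=0):
    """Plain SPSA: simultaneous +-1 perturbation of every coordinate,
    two-point gradient estimate, decaying gain schedules."""
    rng = np.random.default_rng(seed)
    theta = theta.astype(float)
    for k in range(1, n_iter + 1):
        ak, ck = a / k**0.602, c / k**0.101       # standard gain decay
        delta = rng.choice([-1.0, 1.0], size=theta.shape)
        # One gradient estimate from only two loss evaluations
        g_hat = (loss(theta + ck * delta) - loss(theta - ck * delta)) / (2 * ck) / delta
        theta -= ak * g_hat
    return theta

quadratic = lambda t: float(((t - np.array([1.0, -2.0]))**2).sum())
theta_star = spsa_minimize(quadratic, np.zeros(2))
```

On this quadratic the iterate settles near the minimum at (1, -2); the thesis's contribution lies in how the variant is combined with back-propagation over time, which is not reproduced here.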
APA, Harvard, Vancouver, ISO, and other styles
28

Lin, Yu Chu. "E-government website performance evaluation based on BP neural network." Thesis, University of Macau, 2017. http://umaclib3.umac.mo/record=b3691489.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Civelek, Ferda N. (Ferda Nur). "Temporal Connectionist Expert Systems Using a Temporal Backpropagation Algorithm." Thesis, University of North Texas, 1993. https://digital.library.unt.edu/ark:/67531/metadc278824/.

Full text
Abstract:
Representing time has been considered a general problem for artificial intelligence research for many years. More recently, the question of representing time has become increasingly important in representing human decision-making processes through connectionist expert systems. Because most human behaviors unfold over time, any attempt to represent expert performance without considering its temporal nature can often lead to incorrect results. A temporal feedforward neural network model that can be applied to a number of neural network application areas, including connectionist expert systems, has been introduced. The neural network model has a multi-layer structure, i.e. the number of layers is not limited. Also, the model has the flexibility of defining output nodes in any layer. This is especially important for connectionist expert system applications. A temporal backpropagation algorithm which supports the model has been developed. The model, along with the temporal backpropagation algorithm, makes it extremely practical to define any artificial neural network application. Also, an approach that can be followed to decrease the memory space used by the weight matrix has been introduced. The algorithm was tested using a medical connectionist expert system to show how best to describe not only a disease but also its entire course. The system was first trained using a pattern that was encoded from the expert system knowledge base rules. A series of experiments was then carried out using the temporal model and the temporal backpropagation algorithm. The first series of experiments was done to determine whether the training process worked as predicted. In the second series of experiments, the weight matrix in the trained system was defined as a function of time intervals before presenting the system with the learned patterns. The results of the two experiments indicate that both approaches produce correct results.
The only difference between the two results was that compressing the weight matrix required more training epochs to produce correct results. To measure the correctness of the results, the squared error was summed over all patterns to obtain a total sum of squares.
APA, Harvard, Vancouver, ISO, and other styles
30

Tapkin, Serkan. "A Recommended Neural Trip Distributon Model." Phd thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/3/663807/index.pdf.

Full text
Abstract:
This dissertation aims to develop an approach for the trip distribution element, one of the important phases of four-step travel demand modelling. The trip distribution problem using back-propagation artificial neural networks has been researched in a limited number of studies, and one critically evaluated study concluded that artificial neural networks underperform when compared to the traditional models. The underperformance of back-propagation artificial neural networks appears to be due to thresholding the linearly combined inputs from the input layer in the hidden layer as well as thresholding the linearly combined outputs from the hidden layer in the output layer. In the proposed neural trip distribution model, the linearly combined outputs from the hidden layer are not thresholded in the output layer. Thus, in this approach, linearly combined inputs are activated in the hidden layer as in most neural networks, while the neuron in the output layer is used as a summation unit, in contrast to other neural networks. When this neural trip distribution model is compared with various approaches such as modular, gravity and back-propagation neural models, it has been found that reliable trip distribution predictions are obtained.
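The architectural idea described above, activated (thresholded) hidden units feeding a pure summation output neuron with no activation, can be sketched as a tiny NumPy network. The data, sizes and learning rate below are made up for illustration; only the linear-output design comes from the abstract:

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Toy regression data: 2 inputs -> 1 target
X = rng.uniform(-1, 1, size=(64, 2))
y = 0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.3

W1, b1 = rng.normal(0, 0.5, (2, 8)), np.zeros(8)
w2 = rng.normal(0, 0.5, 8)           # output neuron: plain summation, no activation

lr, losses = 0.5, []
for _ in range(300):
    h = sigmoid(X @ W1 + b1)          # hidden layer IS activated
    pred = h @ w2                     # output is a summation unit only
    err = pred - y
    losses.append(float((err**2).mean()))
    # Backprop: the output "derivative" is 1 because no threshold is applied there
    grad_w2 = h.T @ err / len(X)
    grad_h = np.outer(err, w2) * h * (1 - h)
    W1 -= lr * (X.T @ grad_h / len(X))
    b1 -= lr * grad_h.mean(0)
    w2 -= lr * grad_w2
```

The only change versus a standard two-layer regressor is the absent output nonlinearity, which is exactly the modification the abstract argues for.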
APA, Harvard, Vancouver, ISO, and other styles
31

Xu, Jin. "Machine Learning – Based Dynamic Response Prediction of High – Speed Railway Bridges." Thesis, KTH, Bro- och stålbyggnad, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-278538.

Full text
Abstract:
Targeting heavier freight and higher passenger train speeds have become the strategic directions of railway development during the past decades, significantly increasing interest in railway networks. Among the different components of a railway network, bridges constitute a major portion, imposing considerable construction and maintenance costs. On the other hand, heavier axle loads and higher train speeds may cause resonance in bridges, which consequently limits operational train speeds and lines. Therefore, satisfying new expectations requires conducting a large number of dynamic assessments/analyses on bridges, especially existing ones. Evidently, such assessments need detailed information, expert engineers and considerable computational costs. In order to save computational effort and decrease the amount of expertise required in preliminary evaluation of dynamic responses, predictive models using artificial neural networks (ANN) are proposed in this study. In this regard, a previously developed closed-form solution method (based on solving a series of moving forces) was adopted to calculate the dynamic responses (maximum deck deflection and maximum vertical deck acceleration) of randomly generated bridges. Basic variables in the generation of random bridges were extracted both from the literature and from the geometrical properties of existing bridges in Sweden. Different ANN architectures, including numbers of inputs and neurons, were considered to train the most accurate and computationally cost-effective model. The most efficient model was then selected by comparing performance using the absolute error (Err), the Root Mean Square Error (RMSE) and the coefficient of determination (R2). The obtained results revealed that the ANN model can acceptably predict the dynamic responses. The proposed model presents an Err of about 11.1% and 9.9% for the prediction of maximum acceleration and maximum deflection, respectively.
Furthermore, its R2 for the maximum acceleration and maximum deflection predictions equals 0.982 and 0.998, respectively, and its RMSE is 0.309 and 1.51E-04 for the maximum acceleration and maximum deflection predictions, respectively. Finally, sensitivity analyses were conducted to evaluate the importance of each input variable on the outcomes. It was noted that the span length of the bridge and the speed of the train are the most influential parameters.
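The three scores used above are straightforward to compute. A small helper is sketched below; note that "Err" is interpreted here as a mean absolute percentage error, which is an assumption about the abstract's wording, not a definition taken from the thesis:

```python
import numpy as np

def scores(y_true, y_pred):
    """Err (mean absolute percentage error, %), RMSE and R^2."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = np.mean(np.abs(y_pred - y_true) / np.abs(y_true)) * 100
    rmse = np.sqrt(np.mean((y_pred - y_true)**2))
    ss_res = np.sum((y_true - y_pred)**2)
    ss_tot = np.sum((y_true - y_true.mean())**2)
    r2 = 1.0 - ss_res / ss_tot
    return err, rmse, r2

err, rmse, r2 = scores([1, 2, 3, 4], [1.1, 1.9, 3.2, 3.8])
```

For the toy arrays above the helper gives Err ≈ 6.67%, RMSE ≈ 0.158 and R2 = 0.98.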
APA, Harvard, Vancouver, ISO, and other styles
32

Oronsaye, Samuel Iyen Jeffrey. "Updating the ionospheric propagation factor, M(3000)F2, global model using the neural network technique and relevant geophysical input parameters." Thesis, Rhodes University, 2013. http://hdl.handle.net/10962/d1001609.

Full text
Abstract:
This thesis presents an update to the ionospheric propagation factor, M(3000)F2, global empirical model developed by Oyeyemi et al. (2007) (NNO). An additional aim of this research was to produce the updated model in a form that could be used within the International Reference Ionosphere (IRI) global model without adding to the complexity of the IRI. M(3000)F2 is the highest frequency at which a radio signal can be received over a distance of 3000 km after reflection in the ionosphere. The study employed the artificial neural network (ANN) technique using relevant geophysical input parameters which are known to influence the M(3000)F2 parameter. Ionosonde data from 135 ionospheric stations globally, including a number of equatorial stations, were available for this work. M(3000)F2 hourly values from 1976 to 2008, spanning all periods of low and high solar activity were used for model development and verification. A preliminary investigation was first carried out using a relatively small dataset to determine the appropriate input parameters for global M(3000)F2 parameter modelling. Inputs representing diurnal variation, seasonal variation, solar variation, modified dip latitude, longitude and latitude were found to be the optimum parameters for modelling the diurnal and seasonal variations of the M(3000)F2 parameter both on a temporal and spatial basis. The outcome of the preliminary study was applied to the overall dataset to develop a comprehensive ANN M(3000)F2 model which displays a remarkable improvement over the NNO model as well as the IRI version. The model shows 7.11% and 3.85% improvement over the NNO model as well as 13.04% and 10.05% over the IRI M(3000)F2 model, around high and low solar activity periods respectively. A comparison of the diurnal structure of the ANN and the IRI predicted values reveal that the ANN model is more effective in representing the diurnal structure of the M(3000)F2 values than the IRI M(3000)F2 model. 
The capability of the ANN model to reproduce the seasonal variation pattern of the M(3000)F2 values at 00h00UT, 06h00UT, 12h00UT, and 18h00UT more appropriately than the IRI version is illustrated in this work. A significant result obtained in this study is the ability of the ANN model to improve the post-sunset predicted values of the M(3000)F2 parameter, which are known to be problematic for the IRI M(3000)F2 model in the low-latitude and equatorial regions. The final M(3000)F2 model provides an improved equatorial prediction and a simplified input space that allows for easy incorporation into the IRI model.
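Inputs representing diurnal and seasonal variation, as in the model above, are commonly presented to an ANN as sine/cosine pairs so that hour 23 and hour 0 sit next to each other on the unit circle. This is one plausible encoding sketch; the thesis's exact input representation may differ:

```python
import numpy as np

def cyclic_features(hour, day_of_year):
    """Encode time-of-day and season as points on the unit circle,
    so that the wrap-around (23h -> 0h, Dec 31 -> Jan 1) is continuous."""
    h = 2 * np.pi * hour / 24.0
    d = 2 * np.pi * day_of_year / 365.25
    return np.array([np.sin(h), np.cos(h), np.sin(d), np.cos(d)])
```

With this encoding, midnight and "hour 24" map to the same point, and 23h is far closer to 0h than 12h is, which a raw 0-23 integer input would not express.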
APA, Harvard, Vancouver, ISO, and other styles
33

Traver, Michael L. "In-cylinder combustion-based virtual emissions sensing." Morgantown, W. Va. : [West Virginia University Libraries], 1999. http://etd.wvu.edu/templates/showETD.cfm?recnum=459.

Full text
Abstract:
Thesis (Ph. D.)--West Virginia University, 1999. Title from document title page. Document formatted into pages; contains x, 144 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 81-84).
APA, Harvard, Vancouver, ISO, and other styles
34

Rosenlew, Matilda, and Timas Ljungdahl. "Using Layer-wise Relevance Propagation and Sensitivity Analysis Heatmaps to understand the Classification of an Image produced by a Neural Network." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-252702.

Full text
Abstract:
Neural networks are regarded as state of the art within many areas of machine learning; however, due to their growing complexity and size, questions about their trustworthiness and interpretability have been raised. Thus, neural networks are often considered a "black box". This has led to the emergence of evaluation methods that try to decipher these complex networks. Two of these methods, layer-wise relevance propagation (LRP) and sensitivity analysis (SA), are used to generate heatmaps, which highlight pixels in the input image that have an impact on the classification. In this report, the aim is to perform a usability analysis by evaluating and comparing these methods to see how they can be used to understand a particular classification. The method used in this report is to iteratively distort image regions that were highlighted as important by the two heatmapping methods. This led to the findings that distorting essential features of an image according to the LRP heatmaps leads to a decrease in classification score, while distorting inessential features of an image according to the combination of SA and LRP heatmaps leads to an increase in classification score. The results corresponded well with the theory behind the heatmapping methods and led to the conclusion that a combination of the two evaluation methods is advocated for fully understanding a particular classification.
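Of the two methods compared above, sensitivity analysis is the simpler: the heatmap is just the absolute gradient of the class score with respect to each input pixel. A finite-difference sketch on a tiny fixed network is shown below; the network, weights and "image" are made up for illustration only:

```python
import numpy as np

rng = np.random.default_rng(2)
W1, W2 = rng.normal(size=(9, 6)), rng.normal(size=6)
W1[4, :] = 0.0                        # the centre "pixel" feeds nothing

def score(x):
    """Tiny two-layer network producing a scalar class score."""
    return float(np.tanh(x @ W1) @ W2)

def sa_heatmap(x, eps=1e-5):
    """Sensitivity analysis: |d score / d pixel| via central differences."""
    grads = np.empty_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        grads[i] = (score(x + e) - score(x - e)) / (2 * eps)
    return np.abs(grads)

x = rng.normal(size=9)                # a 3x3 "image", flattened
heat = sa_heatmap(x)
```

Because the centre pixel is wired to nothing, its sensitivity is exactly zero; LRP, by contrast, redistributes the score itself layer by layer and requires access to the network internals, which is why the report treats the two heatmaps as complementary.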
APA, Harvard, Vancouver, ISO, and other styles
35

Scarborough, David J. (David James). "An Evaluation of Backpropagation Neural Network Modeling as an Alternative Methodology for Criterion Validation of Employee Selection Testing." Thesis, University of North Texas, 1995. https://digital.library.unt.edu/ark:/67531/metadc277752/.

Full text
Abstract:
Employee selection research identifies and makes use of associations between individual differences, such as those measured by psychological testing, and individual differences in job performance. Artificial neural networks are computer simulations of biological nerve systems that can be used to model unspecified relationships between sets of numbers. Thirty-five neural networks were trained to estimate normalized annual revenue produced by telephone sales agents based on personality and biographic predictors using concurrent validation data (N=1085). Accuracy of the neural estimates was compared to OLS regression and a proprietary nonlinear model used by the participating company to select agents.
APA, Harvard, Vancouver, ISO, and other styles
36

Fernandez, Cesar Aaron Moya. "Two alternative inversion techniques for the determination of seismic site response and propagation-path velocity structure : spectral inversion with reference events and neural networks." 京都大学 (Kyoto University), 2004. http://hdl.handle.net/2433/147831.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Clayton, Arnshea. "The Relative Importance of Input Encoding and Learning Methodology on Protein Secondary Structure Prediction." Digital Archive @ GSU, 2006. http://digitalarchive.gsu.edu/cs_theses/19.

Full text
Abstract:
In this thesis the relative importance of input encoding and learning algorithm on protein secondary structure prediction is explored. A novel input encoding, based on multidimensional scaling applied to a recently published amino acid substitution matrix, is developed and shown to be superior to an arbitrary input encoding. Both decimal valued and binary input encodings are compared. Two neural network learning algorithms, Resilient Propagation and Learning Vector Quantization, which have not previously been applied to the problem of protein secondary structure prediction, are examined. Input encoding is shown to have a greater impact on prediction accuracy than learning methodology with a binary input encoding providing the highest training and test set prediction accuracy.
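The input encoding above applies multidimensional scaling to an amino acid substitution matrix. Classical MDS itself turns a distance matrix into low-dimensional coordinates by double-centering and eigendecomposition; a generic sketch on toy distances (not the actual substitution matrix) is:

```python
import numpy as np

def classical_mds(D, k):
    """Classical (Torgerson) MDS: distance matrix -> k-dimensional coordinates."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D**2) @ J                    # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(B)               # ascending eigenvalues
    order = np.argsort(vals)[::-1][:k]           # take the k largest
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

# Points on a line at 0, 1 and 3: their pairwise distances
D = np.array([[0., 1., 3.],
              [1., 0., 2.],
              [3., 2., 0.]])
coords = classical_mds(D, 1)
```

For genuinely one-dimensional data like this, the 1-D embedding reproduces every pairwise distance exactly; for a substitution matrix, the chosen dimensionality trades faithfulness against input size, which is the design question the thesis explores.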
APA, Harvard, Vancouver, ISO, and other styles
38

Gómez, Cerdà Vicenç. "Algorithms and complex phenomena in networks: Neural ensembles, statistical, interference and online communities." Doctoral thesis, Universitat Pompeu Fabra, 2008. http://hdl.handle.net/10803/7548.

Full text
Abstract:
This thesis is about algorithms and complex phenomena in networks.
In the first part we study a network model of stochastic spiking neurons. We propose a modelling technique based on a mesoscopic description level and show the presence of a phase transition around a critical coupling strength. We derive a local synaptic plasticity rule which drives the network towards the critical point.
We then deal with approximate inference in probabilistic networks. We develop an algorithm which corrects the belief propagation solution for loopy graphs based on a loop series expansion. By adding correction terms, one for each "generalized loop" in the network, the exact result is recovered. We introduce and analyze numerically a particular way of truncating the series.
Finally, we analyze the social interaction of an Internet community by characterizing the structure of the network of users, their discussion threads and the temporal patterns of reaction times to a new post.
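On tree-structured graphs the belief propagation referred to above is already exact; the loop corrections of the thesis matter only once the graph has cycles. A minimal sum-product sketch on a three-node binary chain, checked against brute-force enumeration (the potentials are made up for illustration):

```python
import numpy as np

# Pairwise MRF on a chain x0 - x1 - x2, binary states
psi01 = np.array([[1.0, 0.5],
                  [0.5, 2.0]])
psi12 = np.array([[1.5, 1.0],
                  [0.2, 1.0]])

# Sum-product messages into the middle node
m0_to_1 = psi01.sum(axis=0)      # sum over x0 of psi01[x0, x1]
m2_to_1 = psi12.sum(axis=1)      # sum over x2 of psi12[x1, x2]
belief1 = m0_to_1 * m2_to_1
belief1 /= belief1.sum()

# Brute force: enumerate all 8 joint states and marginalize onto x1
p = np.zeros(2)
for x0 in range(2):
    for x1 in range(2):
        for x2 in range(2):
            p[x1] += psi01[x0, x1] * psi12[x1, x2]
p /= p.sum()
```

The two marginals coincide here because the chain has no loops; on a loopy graph the message fixed point is only approximate, which is what the loop series expansion corrects.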
APA, Harvard, Vancouver, ISO, and other styles
39

Cheng, Martin Chun-Sheng. "Dynamical Near Optimal Training for Interval Type-2 Fuzzy Neural Network (T2FNN) with Genetic Algorithm." Griffith University. School of Microelectronic Engineering, 2003. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20030722.172812.

Full text
Abstract:
A type-2 fuzzy logic system (FLS) cascaded with a neural network, called a type-2 fuzzy neural network (T2FNN), is presented in this work to handle uncertainty with dynamical optimal learning. A T2FNN consists of a type-2 fuzzy linguistic process as the antecedent part and a two-layer interval neural network as the consequent part. A general T2FNN is computationally intensive due to the complexity of the type-2 to type-1 reduction. Therefore the interval T2FNN is adopted here to simplify the computational process. The dynamical optimal training algorithm for the two-layer consequent part of the interval T2FNN is first developed. The stable and optimal left and right learning rates for the interval neural network, in the sense of maximum error reduction, can be derived for each iteration of the training process (back-propagation). It can also be shown that the two learning rates cannot both be negative. Further, due to variation of the initial MF parameters, i.e. the spread level of the uncertain means or deviations of the interval Gaussian MFs, the performance of the back-propagation training process may be affected. To achieve better overall performance, a genetic algorithm (GA) is designed to search for a better-fit spread rate for the uncertain means and near-optimal learning for the antecedent part. Several examples are fully illustrated. Excellent results are obtained for truck backing-up control and the identification of a nonlinear system, which yield more improved performance than those using a type-1 FNN.
APA, Harvard, Vancouver, ISO, and other styles
40

Cheng, Martin Chun-Sheng. "Dynamical Near Optimal Training for Interval Type-2 Fuzzy Neural Network (T2FNN) with Genetic Algorithm." Thesis, Griffith University, 2003. http://hdl.handle.net/10072/366350.

Full text
Abstract:
A type-2 fuzzy logic system (FLS) cascaded with a neural network, called a type-2 fuzzy neural network (T2FNN), is presented in this work to handle uncertainty with dynamical optimal learning. A T2FNN consists of a type-2 fuzzy linguistic process as the antecedent part and a two-layer interval neural network as the consequent part. A general T2FNN is computationally intensive due to the complexity of the type-2 to type-1 reduction. Therefore the interval T2FNN is adopted here to simplify the computational process. The dynamical optimal training algorithm for the two-layer consequent part of the interval T2FNN is first developed. The stable and optimal left and right learning rates for the interval neural network, in the sense of maximum error reduction, can be derived for each iteration of the training process (back-propagation). It can also be shown that the two learning rates cannot both be negative. Further, due to variation of the initial MF parameters, i.e. the spread level of the uncertain means or deviations of the interval Gaussian MFs, the performance of the back-propagation training process may be affected. To achieve better overall performance, a genetic algorithm (GA) is designed to search for a better-fit spread rate for the uncertain means and near-optimal learning for the antecedent part. Several examples are fully illustrated. Excellent results are obtained for truck backing-up control and the identification of a nonlinear system, which yield more improved performance than those using a type-1 FNN. Thesis (Masters), Master of Philosophy (MPhil), School of Microelectronic Engineering.
APA, Harvard, Vancouver, ISO, and other styles
41

Manoharan, Madhu. "Evaluation of a neural network for formulating a semi-empirical variable kernel BRDF model." Master's thesis, Mississippi State : Mississippi State University, 2005. http://library.msstate.edu/content/templates/?a=72.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Fu, Ruijun. "Empirical RF Propagation Modeling of Human Body Motions for Activity Classification." Digital WPI, 2012. https://digitalcommons.wpi.edu/etd-theses/1130.

Full text
Abstract:
"Many current and future medical devices are wearable, using the human body as a conduit for wireless communication, which implies that the human body serves as a crucial part of the transmission medium in body area networks (BANs). Implantable medical devices such as pacemakers and cardiac defibrillators are designed to provide patients with timely monitoring and treatment. Endoscopy capsules, pH monitors and blood pressure sensors are used as clinical diagnostic tools to detect physiological abnormalities and replace traditional wired medical devices. Body-mounted sensors need to be investigated for use in providing a ubiquitous monitoring environment. In order to better design these medical devices, it is important to understand the propagation characteristics of channels for in-body and on-body wireless communication in BANs. The IEEE 802.15.6 Task Group 6 is officially working on the standardization of Body Area Networks, including channel modeling and communication protocol design. This thesis is focused on the propagation characteristics of human body movements. Specifically, standing, walking and jogging motions are measured, evaluated and analyzed using an empirical approach. Using a network analyzer, probabilistic models are derived for the communication links in the medical implant communication service (MICS) band, the industrial scientific medical (ISM) band and the ultra-wideband (UWB) band. Statistical distributions of the received signal strength and second order statistics are presented to evaluate the link quality and outage performance for on-body to on-body communications at different antenna separations. The Normal distribution, Gamma distribution, Rayleigh distribution, Weibull distribution, Nakagami-m distribution, and Lognormal distribution are considered as potential models to describe the observed variation of received signal strength.
Doppler spread in the frequency domain and coherence time in the time domain from temporal variations are analyzed to characterize the stability of the channels induced by human body movements. The shape of the Doppler spread spectrum is also investigated to describe the relationship of power and frequency in the frequency domain. All these channel characteristics could be used in the design of communication protocols in BANs, as well as providing features to classify different human body activities. Realistic data extracted from built-in sensors in smart devices were used to assist in modeling and classification of human body movements along with the RF sensors. Variance, energy and frequency-domain entropy of the data collected from accelerometer and orientation sensors are pre-processed as features to be used in machine learning algorithms. Activity classifiers with Backpropagation Network, Probabilistic Neural Network, k-Nearest Neighbor algorithm and Support Vector Machine are discussed and evaluated as means to discriminate human body motions. The detection accuracy can be improved with both RF and inertial sensors."
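The three inertial-sensor features named above (variance, energy, frequency-domain entropy) can be computed per signal window before classification. A hypothetical sketch, with sampling rate and window length chosen only for illustration:

```python
import numpy as np

def window_features(x):
    """Variance, average energy per sample, and normalised spectral
    (Shannon) entropy of one signal window."""
    var = float(x.var())
    energy = float((x**2).mean())
    psd = np.abs(np.fft.rfft(x - x.mean()))**2
    psd = psd / psd.sum()
    entropy = float(-(psd * np.log2(psd + 1e-12)).sum())
    return var, energy, entropy

rng = np.random.default_rng(3)
t = np.arange(256) / 50.0                      # 50 Hz "accelerometer" window
# Periodic motion (e.g. walking cadence) vs irregular motion
_, _, ent_periodic = window_features(np.sin(2 * np.pi * 2.0 * t))
_, _, ent_noise = window_features(rng.normal(size=256))
```

Periodic motion concentrates power in few spectral bins and so has much lower spectral entropy than irregular motion, which is what makes the feature discriminative for activity classification.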
APA, Harvard, Vancouver, ISO, and other styles
43

Derras, Boumédiène. "Estimation des mouvements sismiques et de leur variabilité par approche neuronale : Apport à la compréhension des effets de la source, de propagation et de site." Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAU013/document.

Full text
Abstract:
This thesis is devoted to an in-depth analysis of the ability of "artificial neural networks" (ANN) to achieve reliable ground motion predictions. A first important aspect concerns the derivation of "GMPEs" (ground motion prediction equations) with an ANN approach, and the comparison of their performance with that of "classical" GMPEs derived from empirical regressions with pre-established, more or less complex, functional forms. To perform such a comparison involving the two "between-event" and "within-event" components of the random variability, we adapt the algorithm of the "random effects model" to the neural approach. This approach is tested on various real and synthetic datasets: the database compiled from European, Mediterranean and Middle Eastern events (RESORCE: Reference database for Seismic grOund-motion pRediction in Europe), the NGA-West 2 database (Next Generation Attenuation West 2, developed in the USA), and the Japanese database derived from the KiK-net accelerometer network. In addition, a comprehensive set of synthetic data is derived with a stochastic simulation approach. The ground motion parameters considered are those most used in earthquake engineering (PGA, PGV, response spectra and also, in some cases, local amplification functions).
Such completely "data-driven" neural models inform us about the respective, and possibly coupled, influences of the amplitude decay with distance, the magnitude scaling effects, and the site conditions, with a particular focus on the detection of non-linearities in site response. Another important aspect is the use of ANNs to test the relevance of different site proxies, through their ability to reduce the random variability of ground motion predictions. The ANN approach allows such site proxies to be used either individually or combined, and their respective impact on the various characteristics of ground motion to be investigated. The same section also includes an investigation of the links between the non-linear aspects of the site response and the different site proxies. Finally, the third section focuses on a few source-related effects: analysis of the influence of the "style of faulting" on ground motion and, indirectly, the dependence between magnitude and seismic stress drop.
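The random-effects split mentioned above separates the total residuals into a between-event term (one offset per earthquake) and a within-event remainder. A deliberately simplified moment sketch of that split is shown below; it is not the actual random-effects algorithm used in GMPE regression, which estimates the variance components iteratively by maximum likelihood:

```python
import numpy as np

def between_within(residuals, event_ids):
    """Naive split of residuals into between-event offsets (event means)
    and within-event remainders; returns (tau, phi)."""
    residuals = np.asarray(residuals, float)
    event_ids = np.asarray(event_ids)
    event_terms = {e: residuals[event_ids == e].mean()
                   for e in np.unique(event_ids)}
    between = np.array([event_terms[e] for e in event_ids])
    within = residuals - between
    tau = float(np.std(list(event_terms.values())))   # between-event variability
    phi = float(np.std(within))                       # within-event variability
    return tau, phi

# Two events with offsets +1 and -1 and no within-event scatter
tau, phi = between_within([1.0, 1.0, -1.0, -1.0], ["A", "A", "B", "B"])
```

On this constructed example all the variability is between events (tau = 1, phi = 0); real GMPE residuals carry both components, and the thesis adapts their joint estimation to the neural setting.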
APA, Harvard, Vancouver, ISO, and other styles
44

Pereira, Ariston Leite. "Aplicação de redes neurais artificiais na modelagem de canais de radiopropagação para o Sistema Brasileiro de TV Digital." reponame:Repositório Institucional da UFABC, 2017.

Find full text
Abstract:
Advisor: Prof. Dr. Ivan Roberto Santana Casella<br>Dissertation (master's) - Universidade Federal do ABC, Graduate Program in Electrical Engineering, 2017.<br>With the switch-off of analogue TV signal transmissions and the growth of new digital TV signal installations throughout the national territory over the coming years, there is a need for a more in-depth knowledge of the characteristics of the propagation channels, enabling these new systems to be deployed in a more optimized and efficient way. The propagation models proposed for the Brazilian Digital TV System follow national and international recommendations based on the large-scale propagation models proposed in the scientific literature. However, in some situations, these models do not accurately characterize the propagation of the electromagnetic wave between the transmitter and the receiver, due to propagation phenomena and interference that degrade the signal. Therefore, three Artificial Neural Network techniques were applied in this project as function approximators: Multilayer Perceptron, Radial Basis Function Networks, and Generalized Regression Neural Networks, trained with data collected from a field survey of the open digital TV channels in the city of São Paulo. After the training phase, and using suitable optimization methods to reduce overfitting, the best Artificial Neural Network configurations were analyzed, with the output results best suited to representing the propagation channel for the digital TV system, and generalized results were produced for different distances, frequencies and heights.
Finally, a statistical analysis was performed comparing the output values of the Artificial Neural Networks with practical values from the field survey and the theoretical results calculated with the classical propagation models of the scientific literature, indicating that Artificial Neural Network techniques can be used for propagation channel prediction with relatively efficient results.<br>With the switch-off of the analogue TV signal transmissions and the new digital TV signal installations throughout the national territory for the next years, there is a need for a more in-depth knowledge of the characteristics of the propagation channels, enabling the deployment of these new systems in a more optimized and efficient way. The propagation models proposed for the Brazilian Digital TV System follow national and international recommendations based on the large-scale propagation models proposed in the scientific literature. However, in some situations, these models do not accurately characterize the propagation of the electromagnetic wave in the communication between the transmitter and the receiver, due to propagation phenomena and interferences that degrade the signal. Thus, we applied in this project three Artificial Neural Network techniques as function approximators: Multilayer Perceptron, Radial Basis Function Networks and Generalized Regression Neural Networks, trained with data collected from a field survey of open digital TV channels in the city of São Paulo. After the training phase, and using appropriate optimization methods to reduce overfitting, the best configurations of Artificial Neural Networks were analyzed, with better output results to represent the propagation channel for the digital TV system, and generalized results for different distances, frequencies and heights of the profiles were generated. 
Finally, a statistical analysis was performed comparing the output values of the Artificial Neural Networks with practical values of the field survey and the theoretical results calculated through the classical propagation models of the scientific literature, signaling that the use of Artificial Neural Network techniques is possible in the prediction of the propagation channel with relatively efficient results.
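Of the three approximator families named in the abstract, the Generalized Regression Neural Network is the simplest to sketch: its prediction is a kernel-weighted average of the training targets. The following one-dimensional version is purely illustrative; the bandwidth `sigma` and the single-input setting are assumptions, not details from the dissertation.

```python
import math

def grnn_predict(x_train, y_train, x, sigma=1.0):
    """Generalized Regression Neural Network recall for a scalar input:
    each training sample votes with a Gaussian weight centred on it,
    and the output is the weight-normalized average of the targets."""
    weights = [math.exp(-((x - xi) ** 2) / (2 * sigma ** 2)) for xi in x_train]
    return sum(w * y for w, y in zip(weights, y_train)) / sum(weights)
```

With path-loss measurements as `(distance, attenuation)` pairs, such a network interpolates smoothly between surveyed points, which is one way the generalized results for intermediate distances mentioned above could be produced.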
APA, Harvard, Vancouver, ISO, and other styles
45

Shen, Wen-chih, and 沈文智. "Apply Counter-propagation Neural Networks to Audio Watermarking and Hashing for Copy Detection." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/71767645029820662391.

Full text
Abstract:
Master's<br>National Yunlin University of Science and Technology<br>Graduate School of Computer Science and Information Engineering<br>94<br>In recent years, how to protect the intellectual property rights (IPR) of digital media has become a very important issue. Nowadays, there are two popular approaches to protecting intellectual property rights: watermarking and hashing. In this thesis, we propose applying Counter-propagation Neural Networks (CPN) to audio watermarking and hashing for copy detection. The low-frequency component of the DWT is robust, and the CPN has capabilities of memorization and fault tolerance; the watermark is memorized in the synapses of the CPN, so most attacks cannot degrade the quality of the extracted watermark. Audio hashing does not embed information into the host audio; instead, it condenses compact information from the audio content, which is then used to judge whether the audio has been copied. Unlike CPN-based watermarking, no synchronization code is embedded into the audio for CPN-based hashing. Experimental results show that the proposed CPN-based audio watermarking and hashing have the capabilities of robustness and authenticity.
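The recall phase of a counter-propagation network, through which the watermark bits are "memorized in the synapses", can be sketched as a winner-take-all lookup. This is a minimal illustration only; the DWT feature extraction and the training of the Kohonen/Grossberg weights described in the thesis are omitted.

```python
def cpn_recall(kohonen_weights, grossberg_weights, x):
    """Counter-propagation network recall: the Kohonen layer picks the
    prototype nearest to input x (winner-take-all), and the Grossberg
    layer emits the value memorized for that winner."""
    dists = [sum((xi - wi) ** 2 for xi, wi in zip(x, w))
             for w in kohonen_weights]
    winner = min(range(len(dists)), key=dists.__getitem__)
    return grossberg_weights[winner]
```

The fault tolerance claimed in the abstract shows up here directly: a perturbed feature vector still falls in the basin of the same winning prototype, so the same watermark bit is recalled.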
APA, Harvard, Vancouver, ISO, and other styles
46

Liu, I.-Chun, and 劉怡君. "Using Back-Propagation Neural Networks to Predict Costs Resulting from Adjacent Damages of Construction - A Case Study in the Hsinchu County." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/84378n.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

"Radio propagation modeling by neural networks." 1996. http://library.cuhk.edu.hk/record=b6073058.

Full text
Abstract:
by Qin Zhou.<br>Thesis (Ph.D.)--Chinese University of Hong Kong, 1996.<br>Includes bibliographical references (p. 196-205).<br>Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.<br>Mode of access: World Wide Web.
APA, Harvard, Vancouver, ISO, and other styles
48

"Dynamic construction of back-propagation artificial neural networks." Chinese University of Hong Kong, 1991. http://library.cuhk.edu.hk/record=b5886957.

Full text
Abstract:
by Korris Fu-lai Chung.<br>Thesis (M.Phil.) -- Chinese University of Hong Kong, 1991.<br>Bibliography: leaves R-1 - R-5.<br>LIST OF FIGURES --- p.vi<br>LIST OF TABLES --- p.viii<br>Chapter 1 --- INTRODUCTION<br>Chapter 1.1 --- Recent Resurgence of Artificial Neural Networks --- p.1-1<br>Chapter 1.2 --- A Design Problem in Applying Back-Propagation Networks --- p.1-4<br>Chapter 1.3 --- Related Works --- p.1-6<br>Chapter 1.4 --- Objective of the Research --- p.1-8<br>Chapter 1.5 --- Thesis Organization --- p.1-9<br>Chapter 2 --- MULTILAYER FEEDFORWARD NETWORKS (MFNs) AND BACK-PROPAGATION (BP) LEARNING ALGORITHM<br>Chapter 2.1 --- Introduction --- p.2-1<br>Chapter 2.2 --- From Perceptrons to MFNs --- p.2-2<br>Chapter 2.3 --- From Delta Rule to BP Algorithm --- p.2-6<br>Chapter 2.4 --- A Variant of BP Algorithm --- p.2-12<br>Chapter 3 --- INTERPRETATIONS AND PROPERTIES OF BP NETWORKS<br>Chapter 3.1 --- Introduction --- p.3-1<br>Chapter 3.2 --- A Pattern Classification View on BP Networks --- p.3-2<br>Chapter 3.2.1 --- Pattern Space Interpretation of BP Networks --- p.3-2<br>Chapter 3.2.2 --- Weight Space Interpretation of BP Networks --- p.3-3<br>Chapter 3.3 --- Local Minimum --- p.3-5<br>Chapter 3.4 --- Generalization --- p.3-6<br>Chapter 4 --- GROWTH OF BP NETWORKS<br>Chapter 4.1 --- Introduction --- p.4-1<br>Chapter 4.2 --- Problem Formulation --- p.4-1<br>Chapter 4.3 --- Learning an Additional Pattern --- p.4-2<br>Chapter 4.4 --- A Progressive Training Algorithm --- p.4-4<br>Chapter 4.5 --- Experimental Results and Performance Analysis --- p.4-7<br>Chapter 4.6 --- Concluding Remarks --- p.4-16<br>Chapter 5 --- PRUNING OF BP NETWORKS<br>Chapter 5.1 --- Introduction --- p.5-1<br>Chapter 5.2 --- Characteristics of Hidden Nodes in Oversized Networks --- p.5-2<br>Chapter 5.2.1 --- Observations from an Empirical Study --- p.5-2<br>Chapter 5.2.2 --- Four Categories of Excessive Nodes --- p.5-3<br>Chapter 5.2.3 --- Why are they excessive?
--- p.5-6<br>Chapter 5.3 --- Pruning of Excessive Nodes --- p.5-9<br>Chapter 5.4 --- Experimental Results and Performance Analysis --- p.5-13<br>Chapter 5.5 --- Concluding Remarks --- p.5-19<br>Chapter 6 --- DYNAMIC CONSTRUCTION OF BP NETWORKS<br>Chapter 6.1 --- A Hybrid Approach --- p.6-1<br>Chapter 6.2 --- Experimental Results and Performance Analysis --- p.6-2<br>Chapter 6.3 --- Concluding Remarks --- p.6-7<br>Chapter 7 --- CONCLUSIONS --- p.7-1<br>Chapter 7.1 --- Contributions --- p.7-1<br>Chapter 7.2 --- Limitations and Suggestions for Further Research --- p.7-2<br>REFERENCES --- p.R-1<br>APPENDIX<br>Chapter A.1 --- A Handwriting Numeral Recognition Experiment: Feature Extraction Technique and Sampling Process --- p.A-1<br>Chapter A.2 --- Determining the distance d = δ²/2r in Lemma 1 --- p.A-2
APA, Harvard, Vancouver, ISO, and other styles
49

Chen, Chang-Chieh, and 陳昌捷. "Stock Index Prediction Using Back Propagation Neural Networks." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/47651467208536124110.

Full text
Abstract:
Master's<br>National Ilan University<br>In-service Master Program in Multimedia Networking Communications and e-Learning<br>103<br>The main purpose of this study is to construct a Back-Propagation Neural Network (BPNN) model in MATLAB for predicting the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX). Data ranging from 2014/01/02 to 2014/07/31 are selected; the duration is 7 months, with a total of 140 records. The weighted index and technical analysis indicators are screened as input parameters using the Pearson correlation coefficient: indicators whose r values exceed 0.7 are selected, giving a total of 17 input parameters. The input parameters are divided into three groups, with r values of 0.7, 0.8, and 0.9, respectively. Finally, the Mean Absolute Percentage Error (MAPE) is used to evaluate the accuracy of the models. The results show that the MAPE of the prediction for the index closing point is 0.6315%. In the short-term (8 days) prediction, the accuracy is up to 87.5%. The accuracy of the short-term trend prediction for the weighted index is 71.42%.
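The two measures used in the study for input screening and model evaluation, the Pearson correlation coefficient and MAPE, can be sketched directly. These are generic textbook definitions, not code from the thesis.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series;
    used here to screen candidate indicators against the target index."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def mape(actual, predicted):
    """Mean Absolute Percentage Error, in percent."""
    return 100.0 * sum(abs((a - p) / a)
                       for a, p in zip(actual, predicted)) / len(actual)
```

An indicator would pass the screening step above when `pearson_r(indicator, index) > 0.7`, and the trained BPNN would then be scored by `mape` on held-out days.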
APA, Harvard, Vancouver, ISO, and other styles
50

Lai, Liang-Bin, and 賴良賓. "Data Mining by Query-Based Back-Propagation Neural Networks." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/65290993455760105933.

Full text
Abstract:
Master's<br>National Central University<br>Institute of Information Management<br>91<br>The central focus of data mining in enterprises is to gain insight into large collections of data in order to make good predictions and right decisions. Neural networks have been applied to a wide variety of problem domains, such as steering motor vehicles, recognizing genes in uncharacterized DNA sequences, scheduling payloads for the space shuttle, and predicting exchange rates. Advantages of neural networks include a high tolerance to noisy data as well as the ability to classify patterns on which they have not been trained. Neural networks have been successfully applied to a wide range of supervised and unsupervised learning problems. However, when they are applied to data mining, there are two fundamental considerations - the comprehensibility of the learned models and the time required to induce models from large data sets. For the first problem, many approaches have been proposed for extracting rules from trained neural networks. In this thesis, we focus on the second problem. We introduce a query-based learning algorithm to improve the performance of neural networks in data mining. Results show that the proposed algorithm can significantly reduce the training set cardinality. Our future work is to apply this learning procedure to other data mining schemes.
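The abstract does not spell out the thesis' query criterion, so the following sketch uses a common uncertainty-sampling heuristic as a stand-in: from a pool of candidate training samples, keep only those the current network is least certain about (predicted probability closest to 0.5), which is one way the training set cardinality could be reduced.

```python
def select_queries(probs, k):
    """Query-based selection (uncertainty-sampling flavour, an assumed
    criterion): given the current model's predicted class probabilities
    for a candidate pool, return the indices of the k samples closest
    to the decision boundary at 0.5."""
    order = sorted(range(len(probs)), key=lambda i: abs(probs[i] - 0.5))
    return order[:k]
```

In a training loop, the network would be retrained only on the selected subset each round, trading a little accuracy for a much smaller training set.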
APA, Harvard, Vancouver, ISO, and other styles