Academic literature on the topic 'Counter Propagation Neural Networks'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Counter Propagation Neural Networks.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Counter Propagation Neural Networks"

1

Drgan, Viktor, Katja Venko, Janja Sluga, and Marjana Novič. "Merging Counter-Propagation and Back-Propagation Algorithms: Overcoming the Limitations of Counter-Propagation Neural Network Models." International Journal of Molecular Sciences 25, no. 8 (2024): 4156. http://dx.doi.org/10.3390/ijms25084156.

Full text
Abstract:
Artificial neural networks (ANNs) are nowadays applied as the most efficient methods in the majority of machine learning approaches, including data-driven modeling for assessment of the toxicity of chemicals. We developed a combined neural network methodology that can be used in the scope of new approach methodologies (NAMs) assessing chemical or drug toxicity. Here, we present QSAR models for predicting the physical and biochemical properties of molecules of three different datasets: aqueous solubility, acute fish toxicity toward fathead minnow, and bio-concentration factors. A novel neural network modeling method is developed by combining two neural network algorithms, namely, the counter-propagation modeling strategy (CP-ANN) with the back-propagation-of-errors algorithm (BPE-ANN). The advantage is a short training time, robustness, and good interpretability through the initial CP-ANN part, while the extension with BPE-ANN improves the precision of predictions in the range between minimal and maximal property values of the training data, regardless of the number of neurons in both neural networks, either CP-ANN or BPE-ANN.
APA, Harvard, Vancouver, ISO, and other styles
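Since counter-propagation networks are the common thread of this list, a minimal sketch may help readers new to the topic. The following NumPy implementation of a counter-propagation network (a competitive Kohonen layer feeding a Grossberg output layer) is illustrative only: neighborhood updates are omitted, and the class name, parameters, and toy dataset are our own, not taken from the cited paper.

```python
import numpy as np

class CounterPropagationNet:
    """Minimal counter-propagation network: a Kohonen (competitive)
    layer followed by a Grossberg output layer. Sketch only."""

    def __init__(self, n_inputs, n_neurons, n_outputs, seed=0):
        rng = np.random.default_rng(seed)
        self.kohonen = rng.normal(size=(n_neurons, n_inputs))  # input weights
        self.grossberg = np.zeros((n_neurons, n_outputs))      # output weights

    def winner(self, x):
        # Competitive step: the neuron closest to x wins.
        return np.argmin(np.linalg.norm(self.kohonen - x, axis=1))

    def fit(self, X, Y, epochs=50, lr0=0.5):
        for epoch in range(epochs):
            lr = lr0 * (1 - epoch / epochs)  # decaying learning rate
            for x, y in zip(X, Y):
                w = self.winner(x)
                # Move the winner's input weights toward the sample...
                self.kohonen[w] += lr * (x - self.kohonen[w])
                # ...and its output weights toward the target.
                self.grossberg[w] += lr * (y - self.grossberg[w])

    def predict(self, x):
        return self.grossberg[self.winner(x)]

# Toy usage: learn y = x1 + x2 on random data.
rng = np.random.default_rng(1)
X = rng.uniform(size=(200, 2))
Y = X.sum(axis=1, keepdims=True)
net = CounterPropagationNet(n_inputs=2, n_neurons=25, n_outputs=1)
net.fit(X, Y)
print(net.predict(np.array([0.3, 0.4])))  # roughly [0.7]
```

The short training time and interpretability noted in the abstract follow from this structure: each Kohonen neuron is a prototype whose stored output can be read off directly.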
2

Kuzmanovski, Igor, and Marjana Novič. "Counter-propagation neural networks in Matlab." Chemometrics and Intelligent Laboratory Systems 90, no. 1 (2008): 84–91. http://dx.doi.org/10.1016/j.chemolab.2007.07.003.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Vracko, Marjan, Denise Mills, and Subhash C. Basak. "Structure-mutagenicity modelling using counter propagation neural networks." Environmental Toxicology and Pharmacology 16, no. 1-2 (2004): 25–36. http://dx.doi.org/10.1016/j.etap.2003.09.004.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Zeinali, Yasha, and Brett Story. "Structural Impairment Detection Using Deep Counter Propagation Neural Networks." Procedia Engineering 145 (2016): 868–75. http://dx.doi.org/10.1016/j.proeng.2016.04.113.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Hou, Xuan. "Research on Hyperspectral Data Classification Based on Quantum Counter Propagation Neural Network." Advanced Materials Research 546-547 (July 2012): 1377–81. http://dx.doi.org/10.4028/www.scientific.net/amr.546-547.1377.

Full text
Abstract:
This paper proposes a model and learning algorithm for the Quantum Counter Propagation Neural Network and applies it to hyperspectral data classification. On the one hand, quantum theory is introduced into the structure and training process of the Counter Propagation Neural Network to improve the structure and capacity of the classical network and to enhance its learning and generalization ability. On the other hand, a new topological structure and training algorithm for the Quantum Counter Propagation Neural Network are established by drawing directly on the ideas, concepts, and principles of quantum theory. Hyperspectral data classification experiments were completed in three ways, and the results show that the Quantum Counter Propagation Neural Network is superior to traditional classification methods.
APA, Harvard, Vancouver, ISO, and other styles
6

Thai, Khac-Minh, and Gerhard F. Ecker. "Classification Models for hERG Inhibitors by Counter-Propagation Neural Networks." Chemical Biology & Drug Design 72, no. 4 (2008): 279–89. http://dx.doi.org/10.1111/j.1747-0285.2008.00705.x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Zupan, Jure, Marjana Novič, and Johann Gasteiger. "Neural networks with counter-propagation learning strategy used for modelling." Chemometrics and Intelligent Laboratory Systems 27, no. 2 (1995): 175–87. http://dx.doi.org/10.1016/0169-7439(95)80022-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Chang, Chuan-Yu, Hung-Jen Wang, and Wen-Chih Shen. "Copyright-proving scheme for audio with counter-propagation neural networks." Digital Signal Processing 20, no. 4 (2010): 1087–101. http://dx.doi.org/10.1016/j.dsp.2009.12.001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Drgan, Viktor, and Benjamin Bajželj. "Application of Supervised SOM Algorithms in Predicting the Hepatotoxic Potential of Drugs." International Journal of Molecular Sciences 22, no. 9 (2021): 4443. http://dx.doi.org/10.3390/ijms22094443.

Full text
Abstract:
The hepatotoxic potential of drugs is one of the main reasons why a number of drugs never reach the market or have to be withdrawn from the market. Therefore, the evaluation of the hepatotoxic potential of drugs is an important part of the drug development process. The aim of this work was to evaluate the relative abilities of different supervised self-organizing algorithms in classifying the hepatotoxic potential of drugs. Two modifications of standard counter-propagation training algorithms were proposed to achieve good separation of clusters on the self-organizing map. A series of optimizations were performed using genetic algorithm to select models developed with counter-propagation neural networks, X-Y fused networks, and the two newly proposed algorithms. The cluster separations achieved by the different algorithms were evaluated using a simple measure presented in this paper. Both proposed algorithms showed a better formation of clusters compared to the standard counter-propagation algorithm. The X-Y fused neural network confirmed its high ability to form well-separated clusters. Nevertheless, one of the proposed algorithms came close to its clustering results, which also resulted in a similar number of selected models.
APA, Harvard, Vancouver, ISO, and other styles
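The X-Y fused networks compared in the abstract above pick a single winning neuron from distances measured in both the input (X) and target (Y) spaces. A schematic sketch of that winner selection, under one common linearly weighted formulation, follows; the function name and the weighting schedule are assumptions, not code from the paper.

```python
import numpy as np

def xy_fused_winner(x, y, w_x, w_y, alpha):
    """Winner selection in an X-Y fused network: a weighted sum of
    distances in the X and Y spaces chooses one neuron for both maps.
    alpha is typically decayed from 1 toward 0 during training, so the
    map organizes first by inputs and later by targets."""
    d_x = np.linalg.norm(w_x - x, axis=1)  # distances in input space
    d_y = np.linalg.norm(w_y - y, axis=1)  # distances in target space
    return np.argmin(alpha * d_x + (1.0 - alpha) * d_y)
```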
10

Sygnowski, Wojciech. "Counter‐propagation neural network for image compression." Optical Engineering 35, no. 8 (1996): 2214. http://dx.doi.org/10.1117/1.600828.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Counter Propagation Neural Networks"

1

Kane, Andrew. "An instruction systolic array architecture for multiple neural network types." Thesis, Loughborough University, 1998. https://dspace.lboro.ac.uk/2134/16031.

Full text
Abstract:
Modern electronic systems, especially sensor and imaging systems, are beginning to incorporate their own neural network subsystems. In order for these neural systems to learn in real-time they must be implemented using VLSI technology, with as much of the learning processes incorporated on-chip as is possible. The majority of current VLSI implementations literally implement a series of neural processing cells, which can be connected together in an arbitrary fashion. Many do not perform the entire neural learning process on-chip, instead relying on other external systems to carry out part of the computation requirements of the algorithm. The work presented here utilises two dimensional instruction systolic arrays in an attempt to define a general neural architecture which is closer to the biological basis of neural networks - it is the synapses themselves, rather than the neurons, that have dedicated processing units. A unified architecture is described which can be programmed at the microcode level in order to facilitate the processing of multiple neural network types. An essential part of neural network processing is the neuron activation function, which can range from a sequential algorithm to a discrete mathematical expression. The architecture presented can easily carry out the sequential functions, and introduces a fast method of mathematical approximation for the more complex functions. This can be evaluated on-chip, thus implementing the entire neural process within a single system. VHDL circuit descriptions for the chip have been generated, and the systolic processing algorithms and associated microcode instruction set for three different neural paradigms have been designed. A software simulator of the architecture has been written, giving results for several common applications in the field.
APA, Harvard, Vancouver, ISO, and other styles
2

Malalur, Paresh (Paresh G.). "Interpretable neural networks via alignment and distribution propagation." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122686.

Full text
Abstract:
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Thesis: Ph.D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (pages 145-150). In this thesis, we aim to develop methodologies to better understand and improve the performance of Deep Neural Networks in various settings where data is limited or missing. Unlike data-rich tasks where neural networks have achieved human-level performance, other problems are naturally data limited where these models have fallen short of human level performance and where there is abundant room for improvement. We focus on three types of problems where data is limited - one-shot learning and open-set recognition in the one-shot setting, unsupervised learning, and classification with missing data. The first setting of limited data that we tackle is when there are only few examples per object type. During object classification, an attention mechanism can be used to highlight the area of the image that the model focuses on, thus offering a narrow view into the mechanism of classification. We expand on this idea by forcing the method to explicitly align images to be classified to reference images representing the classes. The mechanism of alignment is learned and therefore does not require that the reference objects are anything like those being classified. Beyond explanation, our exemplar based cross-alignment method enables classification with only a single example per category (one-shot) or in the absence of any labels about new classes (open-set). While one-shot and open-set recognition operate in cases where complete data is available for few examples, the unsupervised and missing data settings focus on cases where the labels are missing or where only partial input is available, correspondingly. Variational Auto-encoders are a popular unsupervised learning model which learns how to map the input distribution into a simple latent distribution. We introduce a mechanism of approximate propagation of Gaussian densities through neural networks using the Hellinger distance metric to find the best approximation and demonstrate how to use this framework to improve the latent code efficiency of Variational Auto-Encoders. Expanding on this idea further, we introduce a novel method to learn the mapping between the input space and latent space which further improves the efficiency of the latent code by overcoming the variational bound. The final limited data setting we explore is when the input data is incomplete or very noisy. Neural Networks are inherently feed-forward and hence inference methods developed for probabilistic models cannot be applied directly. We introduce two different methods to handle missing data. We first introduce a simple feed-forward model that redefines the linear operator as an ensemble to reweight the activations when portions of its receptive field are missing. We then use some of the insights gained to develop deep networks that propagate distributions of activations instead of point activations, allowing us to use message passing methods to compensate for missing data while maintaining the feed-forward style approach when data is not missing.
APA, Harvard, Vancouver, ISO, and other styles
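A core ingredient of the thesis above is propagating Gaussian densities through network layers. As a rough illustration, the sketch below pushes an independent (diagonal) Gaussian through an affine layer and a ReLU using standard moment matching; the thesis instead selects the approximation by Hellinger distance, so this is a simplified stand-in with invented names.

```python
import numpy as np
from scipy.stats import norm

def propagate_affine(mu, var, W, b):
    """Propagate an independent Gaussian through y = W x + b.
    Exact for the mean and for the per-output variances (diagonal kept)."""
    return W @ mu + b, (W ** 2) @ var

def propagate_relu(mu, var):
    """Moment-matched Gaussian approximation of ReLU(N(mu, var))."""
    sigma = np.sqrt(var)
    a = mu / sigma
    mean = mu * norm.cdf(a) + sigma * norm.pdf(a)
    second = (mu ** 2 + var) * norm.cdf(a) + mu * sigma * norm.pdf(a)
    return mean, np.maximum(second - mean ** 2, 1e-12)

# Example: a standard 2-D Gaussian through one layer.
W, b = np.array([[1.0, -1.0]]), np.array([0.0])
m, v = propagate_relu(*propagate_affine(np.zeros(2), np.ones(2), W, b))
print(m, v)
```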
3

Fernando, Thudugala Mudalige K. G. "Hydrological applications of MLP neural networks with back-propagation." Thesis, Hong Kong : University of Hong Kong, 2002. http://sunzi.lib.hku.hk/hkuto/record.jsp?B25085517.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Bendelac, Shiri. "Enhanced Neural Network Training Using Selective Backpropagation and Forward Propagation." Thesis, Virginia Tech, 2018. http://hdl.handle.net/10919/83714.

Full text
Abstract:
Neural networks are making headlines every day as the tool of the future, powering artificial intelligence programs and supporting technologies never seen before. However, the training of neural networks can take days or even weeks for bigger networks, and requires the use of supercomputers and GPUs in academia and industry in order to achieve state of the art results. This thesis discusses employing selective measures to determine when to backpropagate and forward propagate in order to reduce training time while maintaining classification performance. This thesis tests these new algorithms on the MNIST and CASIA datasets, and achieves successful results with both algorithms on the two datasets. The selective backpropagation algorithm shows a reduction of up to 93.3% of backpropagations completed, and the selective forward propagation algorithm shows a reduction of up to 72.90% in forward propagations and backpropagations completed compared to baseline runs of always forward propagating and backpropagating. This work also discusses employing the selective backpropagation algorithm on a modified dataset with disproportional under-representation of some classes compared to others.
APA, Harvard, Vancouver, ISO, and other styles
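The selective backpropagation idea summarized above, skipping the backward pass for samples the network already handles, fits in a few lines. Below is a hedged PyTorch sketch of one training step using a per-sample loss threshold; the thesis's actual selection criteria and hyperparameters may differ.

```python
import torch
import torch.nn as nn

def selective_backprop_step(model, optimizer, x_batch, y_batch,
                            loss_threshold=0.1):
    """Backpropagate only samples whose loss exceeds a threshold;
    the rest are forward-only. Returns how many samples were used."""
    criterion = nn.CrossEntropyLoss(reduction="none")
    losses = criterion(model(x_batch), y_batch)  # per-sample losses
    mask = losses > loss_threshold               # still-informative samples
    if mask.any():
        optimizer.zero_grad()
        losses[mask].mean().backward()           # selective backward pass
        optimizer.step()
    return int(mask.sum())

# Usage sketch (hypothetical model and data):
# model = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU(),
#                       nn.Linear(128, 10))
# opt = torch.optim.SGD(model.parameters(), lr=0.1)
# n_used = selective_backprop_step(model, opt, images, labels)
```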
5

Wu, Ing-Chyuan. "Neural networks for financial markets analyses and options valuation /." free to MU campus, to others for purchase, 2002. http://wwwlib.umi.com/cr/mo/fullcit?p3074457.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Breutel, Stephan Werner. "Analysing the behaviour of neural networks." Thesis, Queensland University of Technology, 2004. https://eprints.qut.edu.au/15943/1/Stephan_Breutel_Thesis.pdf.

Full text
Abstract:
A new method is developed to determine a set of informative and refined interface assertions satisfied by functions that are represented by feed-forward neural networks. Neural networks have often been criticized for their low degree of comprehensibility. It is difficult to have confidence in software components if they have no clear and valid interface description. Precise and understandable interface assertions for a neural network based software component are required for safety critical applications and for the integration into larger software systems. The interface assertions we are considering are of the form "if the input x of the neural network is in a region α of the input space then the output f(x) of the neural network will be in the region β of the output space" and vice versa. We are interested in computing refined interface assertions, which can be viewed as the computation of the strongest pre- and postconditions a feed-forward neural network fulfills. Unions of polyhedra (polyhedra are the generalization of convex polygons in higher dimensional spaces) are well suited for describing arbitrary regions of higher dimensional vector spaces. Additionally, polyhedra are closed under affine transformations. Given a feed-forward neural network, our method produces an annotated neural network, where each layer is annotated with a set of valid linear inequality predicates. The main challenges for the computation of these assertions are computing the solution of a non-linear optimization problem and projecting a polyhedron onto a lower-dimensional subspace.
APA, Harvard, Vancouver, ISO, and other styles
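The thesis above computes pre- and postconditions as unions of polyhedra. A far simpler relative of that computation, axis-aligned box (interval) propagation through an affine layer and a ReLU, conveys the layer-by-layer idea and is sketched below; it is not the polyhedral method itself, and the example weights are invented.

```python
import numpy as np

def affine_interval(lo, hi, W, b):
    """Propagate the box [lo, hi] through y = W x + b exactly."""
    center, radius = (lo + hi) / 2, (hi - lo) / 2
    c = W @ center + b
    r = np.abs(W) @ radius
    return c - r, c + r

def relu_interval(lo, hi):
    """ReLU is monotone, so it maps boxes to boxes."""
    return np.maximum(lo, 0), np.maximum(hi, 0)

# If the input region is the unit box, bound the layer's output region.
W, b = np.array([[2.0, -1.0], [0.5, 0.5]]), np.array([0.0, -1.0])
lo, hi = relu_interval(*affine_interval(np.zeros(2), np.ones(2), W, b))
print(lo, hi)
```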
7

Teo, Chin Hock. "Back-propagation neural networks in adaptive control of unknown nonlinear systems." Thesis, Monterey, California. Naval Postgraduate School, 1991. http://hdl.handle.net/10945/26898.

Full text
Abstract:
Approved for public release; distribution is unlimited. The objective of this research is to develop a Back-propagation Neural Network (BNN) to control certain classes of unknown nonlinear systems and explore the network's capabilities. The structure of the Direct Model Reference Adaptive Controller (DMRAC) for Linear Time Invariant (LTI) systems with unknown parameters is first analyzed. This structure is then extended using a BNN for adaptive control of unknown nonlinear systems. The specific structure of the BNN DMRAC is developed for control of four general classes of nonlinear systems modeled in discrete time. Experiments are conducted by placing a representative system from each class under the BNN's control. The conditions under which the BNN DMRAC can successfully control these systems are investigated. The design and training of the BNN are also studied. The results of the experiments show that the BNN DMRAC works for the representative systems considered, while the conventional least-squares estimator DMRAC fails. Based on analysis and experimental findings, some general conditions required to ensure that this technique works are postulated and discussed. General guidelines used to achieve the stability of the BNN learning process and good learning convergence are also discussed. To establish this as a general and significant control technique, further research is required to obtain analytically the conditions for stability of the controlled system, and to develop more specific rules and guidelines for the BNN design and training.
APA, Harvard, Vancouver, ISO, and other styles
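The adaptive control loop described above can be caricatured in simulation. The sketch below trains a small neural controller so that a known, differentiable toy plant tracks a first-order reference model. This deliberately sidesteps the thesis's central difficulty, namely that the plant is unknown, so treat it only as an illustration of the model-reference training loop; all functions and constants are invented.

```python
import torch
import torch.nn as nn

def plant(y, u):
    # Toy nonlinear plant, made differentiable here for illustration only.
    return 0.8 * torch.sin(y) + u

def ref_model(ym, r):
    # Stable reference model the closed loop should imitate.
    return 0.6 * ym + 0.4 * r

controller = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 1))
opt = torch.optim.Adam(controller.parameters(), lr=1e-2)

y, ym = torch.zeros(1), torch.zeros(1)
for step in range(2000):
    r = torch.rand(1) * 2 - 1                # random reference command
    u = controller(torch.cat([y, r]))        # control action from (y, r)
    y_next = plant(y, u)                     # plant response
    ym_next = ref_model(ym, r)               # desired response
    loss = (y_next - ym_next).pow(2).mean()  # tracking error
    opt.zero_grad()
    loss.backward()
    opt.step()
    y, ym = y_next.detach(), ym_next.detach()
print(float(loss))                           # small once tracking succeeds
```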
8

Cakarcan, Alpay. "Back-propagation neural networks in adaptive control of unknown nonlinear systems." Thesis, Monterey, California. Naval Postgraduate School, 1994. http://hdl.handle.net/10945/30830.

Full text
Abstract:
The objective of this thesis research is to develop a Back-Propagation Neural Network (BNN) to control certain classes of unknown nonlinear systems and explore the network's capabilities. The structure of the Direct Model Reference Adaptive Controller (DMRAC) for Linear Time Invariant (LTI) systems with unknown parameters is first analyzed and then extended to nonlinear systems using a BNN. Nonminimum phase systems, both linear and nonlinear, have also been considered. The analysis of the experiments shows that the BNN DMRAC gives satisfactory results for the representative nonlinear systems considered, while the conventional least-squares estimator DMRAC fails. Based on the analysis and experimental findings, some general conditions are shown to be required to ensure that this technique is satisfactory. These conditions are presented and discussed. It has been found that further research needs to be done for the nonminimum phase case in order to guarantee stability and tracking. Also, to establish this as a more general and significant control technique, further research is required to develop more specific rules and guidelines for the BNN design and training.
APA, Harvard, Vancouver, ISO, and other styles
9

Kane, Abdoul. "Activity propagation in two-dimensional neuronal networks." Columbus, Ohio : Ohio State University, 2005. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1133461090.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Counter Propagation Neural Networks"

1

Narendra, Kumpati S. Back propagation in dynamical systems containing neural networks. Yale University Center for Systems Science, 1989.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Teo, Chin Hock. Back-propagation neural networks in adaptive control of unknown nonlinear systems. Naval Postgraduate School, 1991.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Phatak, Anil, Gano Chatterji, and Ames Research Center, eds. Scene segmentation of natural images using texture measures and back-propagation. National Aeronautics and Space Administration, Ames Research Center, 1993.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Walker, James L. Back propagation neural networks for predicting ultimate strengths of unidirectional graphite/epoxy tensile specimens. National Aeronautics and Space Administration, 1993.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Sundararajan, N., and Shou King Foo, eds. Parallel implementations of backpropagation neural networks on transputers: A study of training set parallelism. World Scientific, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Dayhoff, Judith E. Neural network architectures: An introduction. Van Nostrand Reinhold, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Boden, Margaret A. 4. Artificial neural networks. Oxford University Press, 2018. http://dx.doi.org/10.1093/actrade/9780199602919.003.0004.

Full text
Abstract:
Artificial neural networks (ANNs) are made up of many interconnected units, each one capable of computing only one thing. ANNs have myriad applications, from playing the stock market and monitoring currency fluctuations to recognizing speech or faces. ANNs are parallel-processing virtual machines implemented on classical computers. They are intriguing partly because they are very different from the virtual machines of symbolic AI. Sequential instructions are replaced by massive parallelism, top-down control by bottom-up processing, and logic by probability. ‘Artificial neural networks’ considers the wider implications of ANNs and discusses parallel distributed processing (PDP), learning in neural networks, back-propagation, deep learning, and hybrid systems.
APA, Harvard, Vancouver, ISO, and other styles
8

Christodoulou, Christos, and Michael Georgiopoulos. Applications of Neural Networks in Electromagnetics (Artech House Antennas and Propagation Library). Artech House Publishers, 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Intelligent information retrieval using an inductive learning algorithm and a back-propagation neural network. University Microfilms International, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Sangeetha, V., and S. Kevin Andrews. Introduction to Artificial Intelligence and Neural Networks. Magestic Technology Solutions (P) Ltd, Chennai, Tamil Nadu, India, 2023. http://dx.doi.org/10.47716/mts/978-93-92090-24-0.

Full text
Abstract:
Artificial Intelligence (AI) has emerged as a defining force in the current era, shaping the contours of technology and deeply permeating our everyday lives. From autonomous vehicles to predictive analytics and personalized recommendations, AI continues to revolutionize various facets of human existence, progressively becoming the invisible hand guiding our decisions. Simultaneously, its growing influence necessitates the need for a nuanced understanding of AI, thereby providing the impetus for this book, “Introduction to Artificial Intelligence and Neural Networks.” This book aims to equip its readers with a comprehensive understanding of AI and its subsets, machine learning and deep learning, with a particular emphasis on neural networks. It is designed for novices venturing into the field, as well as experienced learners who desire to solidify their knowledge base or delve deeper into advanced topics. In Chapter 1, we provide a thorough introduction to the world of AI, exploring its definition, historical trajectory, and categories. We delve into the applications of AI, and underscore the ethical implications associated with its proliferation. Chapter 2 introduces machine learning, elucidating its types and basic algorithms. We examine the practical applications of machine learning and delve into challenges such as overfitting, underfitting, and model validation. Deep learning and neural networks, an integral part of AI, form the crux of Chapter 3. We provide a lucid introduction to deep learning, describe the structure of neural networks, and explore forward and backward propagation. This chapter also delves into the specifics of Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). In Chapter 4, we outline the steps to train neural networks, including data preprocessing, cost functions, gradient descent, and various optimizers. We also delve into regularization techniques and methods for evaluating a neural network model. Chapter 5 focuses on specialized topics in neural networks such as autoencoders, Generative Adversarial Networks (GANs), Long Short-Term Memory Networks (LSTMs), and Neural Architecture Search (NAS). In Chapter 6, we illustrate the practical applications of neural networks, examining their role in computer vision, natural language processing, predictive analytics, autonomous vehicles, and the healthcare industry. Chapter 7 gazes into the future of AI and neural networks. It discusses the current challenges in these fields, emerging trends, and future ethical considerations. It also examines the potential impacts of AI and neural networks on society. Finally, Chapter 8 concludes the book with a recap of key learnings, implications for readers, and resources for further study. This book aims not only to provide a robust theoretical foundation but also to kindle a sense of curiosity and excitement about the endless possibilities AI and neural networks offer.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Counter Propagation Neural Networks"

1

da Silva, Ivan Nunes, Danilo Hernane Spatti, Rogerio Andrade Flauzino, Luisa Helena Bartocci Liboni, and Silas Franco dos Reis Alves. "LVQ and Counter-Propagation Networks." In Artificial Neural Networks. Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-43162-8_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Vijendran, Anna Saro. "Super-Resolution Reconstruction of Infrared Images Adopting Counter Propagation Neural Networks." In Intelligent Systems. Apple Academic Press, 2019. http://dx.doi.org/10.1201/9780429265020-10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

El Haimoudi, Khatir, Ikram Issati, and Ali Daanoun. "The Particularities of the Counter Propagation Neural Network Application in Pattern Recognition Tasks." In Lecture Notes in Networks and Systems. Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-69137-4_42.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Anitha, J., and D. Jude Hemanth. "A Weighted Counter Propagation Neural Network for Abnormal Retinal Image Classification." In Lecture Notes in Electrical Engineering. Springer India, 2014. http://dx.doi.org/10.1007/978-81-322-1817-3_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Kaveh, Ali. "Comparative Study of Backpropagation and Improved Counter-Propagation Neural Nets in Structural Analysis and Optimization." In Applications of Artificial Neural Networks and Machine Learning in Civil Engineering. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-66051-1_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Vivekanandan, N., K. Rajeswari, Sanjay P. Salve, and N. V. Yuvraj Kanna. "Precision Smoke Detection System with Counter Propagation Neural Network and Electronic Olfactory." In Lecture Notes in Mechanical Engineering. Springer Nature Singapore, 2024. https://doi.org/10.1007/978-981-97-7535-4_19.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Fjodorova, N., M. Novic, S. Zuperl, and K. Venko. "Counter-Propagation Artificial Neural Network Models for Prediction of Carcinogenicity of Non-congeneric Chemicals for Regulatory Uses." In Challenges and Advances in Computational Chemistry and Physics. Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-56850-8_14.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Müller, Berndt, Joachim Reinhardt, and Michael T. Strickland. "BTT: Back-Propagation Through Time." In Neural Networks. Springer Berlin Heidelberg, 1995. http://dx.doi.org/10.1007/978-3-642-57760-4_28.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Chauvin, Yves. "Generalization performance of overtrained back-propagation networks." In Neural Networks. Springer Berlin Heidelberg, 1990. http://dx.doi.org/10.1007/3-540-52255-7_26.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Müller, Berndt, and Joachim Reinhardt. "PERBOOL: Learning Boolean Functions with Back-Propagation." In Neural Networks. Springer Berlin Heidelberg, 1990. http://dx.doi.org/10.1007/978-3-642-97239-3_22.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Counter Propagation Neural Networks"

1

Olagunju, Kazeem Michael, Adeleye Samuel Falohun, Alice Oluwafunke Oke, and Marion O. Adebiyi. "Neural Gas Counter Propagation Neural Network (NG CPNN): A Novel Model for Face Recognition." In 2024 International Conference on Science, Engineering and Business for Driving Sustainable Development Goals (SEB4SDG). IEEE, 2024. http://dx.doi.org/10.1109/seb4sdg60871.2024.10630426.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Bhattacharya, Arghya, and V. Premanand. "Parallel Swarm Propagation for Neural Networks." In 2024 International Symposium on Parallel Computing and Distributed Systems (PCDS). IEEE, 2024. http://dx.doi.org/10.1109/pcds61776.2024.10743321.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Tsoi. "Counter propagation network: an interpretation." In International Joint Conference on Neural Networks. IEEE, 1989. http://dx.doi.org/10.1109/ijcnn.1989.118352.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Woods. "Back and counter propagation aberrations." In Proceedings of 1993 IEEE International Conference on Neural Networks (ICNN '93). IEEE, 1988. http://dx.doi.org/10.1109/icnn.1988.23881.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Kovacs, L., and G. Terstyanszky. "Boundary region sensitive classification for the counter-propagation neural network." In Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks. IJCNN 2000. Neural Computing: New Challenges and Perspectives for the New Millennium. IEEE, 2000. http://dx.doi.org/10.1109/ijcnn.2000.857819.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Kaden, Marika, Ronny Schubert, Mehrdad Mohannazadeh Bakhtiari, Lucas Schwarz, and Thomas Villmann. "The LVQ-based Counter Propagation Network -- an Interpretable Information Bottleneck Approach." In ESANN 2021 - European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning. Ciaco - i6doc.com, 2021. http://dx.doi.org/10.14428/esann/2021.es2021-88.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Wu, Guohua, and Xiaodong Zhou. "A Fast Audio Digital Watermark Method Based on Counter-Propagation Neural Networks." In 2008 International Conference on Computer Science and Software Engineering. IEEE, 2008. http://dx.doi.org/10.1109/csse.2008.675.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Roberts, Peter, and Rodney Walker. "Application of a Counter Propagation Neural Network for Star Identification." In AIAA Guidance, Navigation, and Control Conference and Exhibit. American Institute of Aeronautics and Astronautics, 2005. http://dx.doi.org/10.2514/6.2005-6469.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Anitha, J., C. Kezi Selva Vijila, and D. Jude Hemanth. "An enhanced Counter Propagation Neural Network for abnormal retinal image classification." In 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC). IEEE, 2009. http://dx.doi.org/10.1109/nabic.2009.5393591.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Raviram, P., R. S. D. Wahidabanu, and Purushothaman Srinivasan. "Concurrency Control in CAD With KBMS Using Counter Propagation Neural Network." In 2009 IEEE International Advance Computing Conference (IACC 2009). IEEE, 2009. http://dx.doi.org/10.1109/iadcc.2009.4809244.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Counter Propagation Neural Networks"

1

Engel, Bernard, Yael Edan, James Simon, Hanoch Pasternak, and Shimon Edelman. Neural Networks for Quality Sorting of Agricultural Produce. United States Department of Agriculture, 1996. http://dx.doi.org/10.32747/1996.7613033.bard.

Full text
Abstract:
The objectives of this project were to develop procedures and models, based on neural networks, for quality sorting of agricultural produce. Two research teams, one in Purdue University and the other in Israel, coordinated their research efforts on different aspects of each objective utilizing both melons and tomatoes as case studies. At Purdue: An expert system was developed to measure variances in human grading. Data were acquired from eight sensors: vision, two firmness sensors (destructive and nondestructive), chlorophyll from fluorescence, color sensor, electronic sniffer for odor detection, refractometer and a scale (mass). Data were analyzed and provided input for five classification models. Chlorophyll from fluorescence was found to give the best estimation for ripeness stage while the combination of machine vision and firmness from impact performed best for quality sorting. A new algorithm was developed to estimate and minimize training size for supervised classification. A new criterion was established to choose a training set such that a recurrent auto-associative memory neural network is stabilized. Moreover, this method provides for rapid and accurate updating of the classifier over growing seasons, production environments and cultivars. Different classification approaches (parametric and non-parametric) for grading were examined. Statistical methods were found to be as accurate as neural networks in grading. Classification models by voting did not enhance the classification significantly. A hybrid model that incorporated heuristic rules and either a numerical classifier or neural network was found to be superior in classification accuracy with half the required processing of solely the numerical classifier or neural network. In Israel: A multi-sensing approach utilizing non-destructive sensors was developed. Shape, color, stem identification, surface defects and bruises were measured using a color image processing system. Flavor parameters (sugar, acidity, volatiles) and ripeness were measured using a near-infrared system and an electronic sniffer. Mechanical properties were measured using three sensors: drop impact, resonance frequency and cyclic deformation. Classification algorithms for quality sorting of fruit based on multi-sensory data were developed and implemented. The algorithms included a dynamic artificial neural network, a back propagation neural network and multiple linear regression. Results indicated that classification based on multiple sensors may be applied in real-time sorting and can improve overall classification. Advanced image processing algorithms were developed for shape determination, bruise and stem identification and general color and color homogeneity. An unsupervised method was developed to extract necessary vision features. The primary advantage of the algorithms developed is their ability to learn to determine the visual quality of almost any fruit or vegetable with no need for specific modification and no a-priori knowledge. Moreover, since there is no assumption as to the type of blemish to be characterized, the algorithm is capable of distinguishing between stems and bruises. This enables sorting of fruit without knowing the fruits' orientation. A new algorithm for on-line clustering of data was developed. The algorithm's adaptability is designed to overcome some of the difficulties encountered when incrementally clustering sparse data and preserves information even with memory constraints.
Large quantities of data (many images) of high dimensionality (due to multiple sensors) and new information arriving incrementally (a function of the temporal dynamics of any natural process) can now be processed. Furthermore, since the learning is done on-line, it can be implemented in real-time. The methodology developed was tested to determine external quality of tomatoes based on visual information. An improved model for color sorting which is stable and does not require recalibration for each season was developed for color determination. Excellent classification results were obtained for both color and firmness classification. Results indicated that maturity classification can be obtained using a drop-impact and a vision sensor in order to predict the storability and marketing of harvested fruits. In conclusion: We have been able to define quantitatively the critical parameters in the quality sorting and grading of both fresh market cantaloupes and tomatoes. We have been able to accomplish this using nondestructive measurements and in a manner consistent with expert human grading and in accordance with market acceptance. This research constructed and used large databases of both commodities, for comparative evaluation and optimization of expert system, statistical and/or neural network models. The models developed in this research were successfully tested, and should be applicable to a wide range of other fruits and vegetables. These findings are valuable for the development of on-line grading and sorting of agricultural produce through the incorporation of multiple measurement inputs that rapidly define quality in an automated manner, and in a manner consistent with the human graders and inspectors.
APA, Harvard, Vancouver, ISO, and other styles
2

Hart, Carl R., D. Keith Wilson, Chris L. Pettit, and Edward T. Nykaza. Machine-Learning of Long-Range Sound Propagation Through Simulated Atmospheric Turbulence. U.S. Army Engineer Research and Development Center, 2021. http://dx.doi.org/10.21079/11681/41182.

Full text
Abstract:
Conventional numerical methods can capture the inherent variability of long-range outdoor sound propagation. However, computational memory and time requirements are high. In contrast, machine-learning models provide very fast predictions. This comes by learning from experimental observations or surrogate data. Yet, it is unknown what type of surrogate data is most suitable for machine-learning. This study used a Crank-Nicholson parabolic equation (CNPE) for generating the surrogate data. The CNPE input data were sampled by the Latin hypercube technique. Two separate datasets comprised 5000 samples of model input. The first dataset consisted of transmission loss (TL) fields for single realizations of turbulence. The second dataset consisted of average TL fields for 64 realizations of turbulence. Three machine-learning algorithms were applied to each dataset, namely, ensemble decision trees, neural networks, and cluster-weighted models. Observational data come from a long-range (out to 8 km) sound propagation experiment. In comparison to the experimental observations, regression predictions have 5–7 dB in median absolute error. Surrogate data quality depends on an accurate characterization of refractive and scattering conditions. Predictions obtained through a single realization of turbulence agree better with the experimental observations.
APA, Harvard, Vancouver, ISO, and other styles
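The surrogate-data pipeline above hinges on Latin hypercube sampling of the CNPE inputs. A small sketch of that sampling step with SciPy follows; the parameter names and ranges are invented placeholders, not the study's actual inputs.

```python
import numpy as np
from scipy.stats import qmc

# Hypothetical ranges for three propagation-model inputs
# (e.g., sound-speed gradient, turbulence strength, source height).
lower = np.array([-0.1, 0.0, 1.0])
upper = np.array([0.1, 5.0, 10.0])

sampler = qmc.LatinHypercube(d=3, seed=0)
unit = sampler.random(n=5000)            # 5000 points in the unit cube
inputs = qmc.scale(unit, lower, upper)   # rescale to physical ranges
# Each row would then drive one CNPE run to produce a TL field.
print(inputs.shape)                      # (5000, 3)
```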
3

Kirichek, Galina, Vladyslav Harkusha, Artur Timenko, and Nataliia Kulykovska. System for detecting network anomalies using a hybrid of an uncontrolled and controlled neural network. [s.n.], 2020. http://dx.doi.org/10.31812/123456789/3743.

Full text
Abstract:
This article presents a method for detecting attacks and anomalies by training on normal and attack packets, respectively. The detection method combines an unsupervised and a supervised neural network. In the unsupervised network, attacks are classified into finer categories according to their features using a self-organizing map. To manage the clusters, a neural network based on the back-propagation method is used. We use PyBrain as the main framework for designing, developing, and training the perceptron. This framework offers a sufficient number of solutions and algorithms for training, designing, and testing various types of neural networks. The software architecture is presented using a procedural-object approach. Because there is no need to save intermediate results of the program (after training, the entire perceptron is stored in a file), all learning progress is stored in ordinary files on the hard disk.
APA, Harvard, Vancouver, ISO, and other styles
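The hybrid design described above, an unsupervised map that coarsely clusters traffic followed by a supervised back-propagation classifier, can be sketched as follows. This toy version uses a plain competitive layer (a SOM without neighborhood updates) plus scikit-learn's MLPClassifier on synthetic data, rather than the article's PyBrain setup; all features and names are invented.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_competitive_layer(X, n_units=16, epochs=20, lr=0.5, seed=0):
    """Hard competitive learning: prototypes move toward the samples
    they win, grouping traffic into coarse clusters."""
    rng = np.random.default_rng(seed)
    w = X[rng.choice(len(X), n_units, replace=False)].copy()
    for e in range(epochs):
        eta = lr * (1 - e / epochs)
        for x in X:
            win = np.argmin(np.linalg.norm(w - x, axis=1))
            w[win] += eta * (x - w[win])
    return w

def cluster_ids(X, w):
    return np.array([np.argmin(np.linalg.norm(w - x, axis=1)) for x in X])

# Synthetic "traffic" features: 0 = normal, 1 = attack.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (300, 8)), rng.normal(3, 1, (300, 8))])
y = np.array([0] * 300 + [1] * 300)

w = train_competitive_layer(X)
# Supervised stage: a back-propagation MLP sees the raw features plus
# the unsupervised cluster id, echoing the hybrid design above.
features = np.hstack([X, cluster_ids(X, w)[:, None]])
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(features, y)
print(clf.score(features, y))
```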