Academic literature on the topic 'Multi-layer neural networks'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Multi-layer neural networks.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Multi-layer neural networks"

1

Yen, Gary, and Haiming Lu. "Hierarchical Genetic Algorithm for Near-Optimal Feedforward Neural Network Design." International Journal of Neural Systems 12, no. 01 (2002): 31–43. http://dx.doi.org/10.1142/s0129065702001023.

Full text
Abstract:
In this paper, we propose a genetic algorithm based design procedure for a multi-layer feed-forward neural network. A hierarchical genetic algorithm is used to evolve both the neural network's topology and weighting parameters. Compared with traditional genetic algorithm based designs for neural networks, the hierarchical approach addresses several deficiencies, including a feasibility check highlighted in the literature. A multi-objective cost function is used herein to optimize the performance and topology of the evolved neural network simultaneously. In the prediction of the Mackey–Glass chaotic time series, the networks designed by the proposed approach prove to be competitive with, or even superior to, traditional learning algorithms for multi-layer perceptron networks and radial-basis function networks. Based upon the chosen cost function, a linear weight combination decision-making approach has been applied to derive an approximated Pareto-optimal solution set. Therefore, designing a set of neural networks can be considered as solving a two-objective optimization problem.
APA, Harvard, Vancouver, ISO, and other styles
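The abstract above pairs topology search with weight evolution under a scalarized two-objective cost. The sketch below is not the authors' hierarchical GA; it is a minimal illustration, under stated assumptions, of the general idea: each genome encodes both a hidden-layer size and a weight vector for a one-hidden-layer network, and the cost trades prediction error against network size. All task data, rates, and population settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression task standing in for a time-series prediction problem.
X = np.linspace(-1.0, 1.0, 64).reshape(-1, 1)
y = np.sin(3.0 * X).ravel()

MAX_HIDDEN = 8  # upper bound on the evolvable hidden-layer size

def decode(genome):
    """genome = (n_hidden, flat weight vector sized for MAX_HIDDEN units)."""
    n_hidden, w = genome
    W1 = w[0:MAX_HIDDEN][:n_hidden]                   # input -> hidden weights
    b1 = w[MAX_HIDDEN:2 * MAX_HIDDEN][:n_hidden]      # hidden biases
    W2 = w[2 * MAX_HIDDEN:3 * MAX_HIDDEN][:n_hidden]  # hidden -> output weights
    b2 = w[-1]                                        # output bias
    return W1, b1, W2, b2

def predict(genome, X):
    W1, b1, W2, b2 = decode(genome)
    h = np.tanh(X * W1 + b1)   # hidden layer; X is (n, 1), broadcasts over units
    return h @ W2 + b2

def cost(genome):
    mse = np.mean((predict(genome, X) - y) ** 2)
    return mse + 0.01 * genome[0]   # scalarized two objectives: error + size penalty

def random_genome():
    return (int(rng.integers(1, MAX_HIDDEN + 1)),
            rng.normal(0.0, 1.0, 3 * MAX_HIDDEN + 1))

pop = [random_genome() for _ in range(40)]
for generation in range(200):
    pop.sort(key=cost)
    elite = pop[:10]                        # keep the best designs
    children = []
    for _ in range(30):
        i, j = rng.integers(0, len(elite), 2)
        (h1, w1), (h2, w2) = elite[i], elite[j]
        mask = rng.random(w1.size) < 0.5    # uniform crossover on weights
        child_w = np.where(mask, w1, w2) + rng.normal(0.0, 0.1, w1.size)
        child_h = h1 if rng.random() < 0.5 else h2
        if rng.random() < 0.1:              # occasional topology mutation
            child_h = int(np.clip(child_h + rng.integers(-1, 2), 1, MAX_HIDDEN))
        children.append((child_h, child_w))
    pop = elite + children

best = min(pop, key=cost)
print("hidden units:", best[0], " cost:", round(cost(best), 4))
```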
2

Ban, Jung-Chao, and Chih-Hung Chang. "The layer effect on multi-layer cellular neural networks." Applied Mathematics Letters 26, no. 7 (2013): 706–9. http://dx.doi.org/10.1016/j.aml.2013.01.013.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Scherer, Magdalena. "Multi-layer neural networks for sales forecasting." Journal of Applied Mathematics and Computational Mechanics 17, no. 1 (2018): 61–68. http://dx.doi.org/10.17512/jamcm.2018.1.06.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Ban, Jung-Chao, and Chih-Hung Chang. "Hausdorff Dimension of Multi-Layer Neural Networks." Advances in Pure Mathematics 03, no. 09 (2013): 9–14. http://dx.doi.org/10.4236/apm.2013.39a1002.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Ban, Jung-Chao, and Chih-Hung Chang. "Diamond in multi-layer cellular neural networks." Applied Mathematics and Computation 222 (October 2013): 1–12. http://dx.doi.org/10.1016/j.amc.2013.07.010.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Subotin, M., W. Marsh, J. McMichael, J. J. Fung, and I. Dvorchik. "Performance of Multi-Layer Feedforward Neural Networks to Predict Liver Transplantation Outcome." Methods of Information in Medicine 35, no. 01 (1996): 12–18. http://dx.doi.org/10.1055/s-0038-1634637.

Full text
Abstract:
A novel multisolutional clustering and quantization (MCQ) algorithm has been developed that provides a flexible way to preprocess data. It was tested whether it would impact the neural network's performance favorably and whether the employment of the proposed algorithm would enable neural networks to handle missing data. This was assessed by comparing the performance of neural networks using a well-documented data set to predict outcome following liver transplantation. This new approach to data preprocessing leads to a statistically significant improvement in network performance when compared to simple linear scaling. The obtained results also showed that coding missing data as zeroes, in combination with the MCQ algorithm, leads to a significant improvement in neural network performance on a data set containing missing values in 59.4% of cases, when compared to replacement of missing values with either series means or medians.
APA, Harvard, Vancouver, ISO, and other styles
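The comparison the abstract reports (coding missing values as zeroes versus replacing them with series means) is easy to make concrete. The snippet below is an illustrative sketch only; the MCQ preprocessing algorithm itself is not reproduced, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(5.0, 2.0, (200, 4))
X[rng.random(X.shape) < 0.3] = np.nan   # ~30% of entries missing at random

X_zero = np.nan_to_num(X, nan=0.0)      # treatment 1: missing coded as zeroes

col_means = np.nanmean(X, axis=0)       # treatment 2: column-wise series means
X_mean = np.where(np.isnan(X), col_means, X)

print(X_zero[:2])
print(X_mean[:2])
```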
7

Redwan, Renas M. "Neural networks and Sigmoid Activation Function in Multi-Layer Networks." Qubahan Academic Journal 1, no. 2 (2020): 29–43. http://dx.doi.org/10.48161/qaj.v1n2a11.

Full text
Abstract:
Back-propagation neural networks are known for computing problems that cannot easily be computed otherwise (such as training on or analysing huge datasets) in artificial neural networks. The main idea of this paper is to implement the XOR logic gate with ANNs, using back propagation of errors and the sigmoid activation function, so that the neural network maps a non-linear threshold gate. The network classifies binary inputs, passing them through a hidden layer for computation; after the errors are computed, the weights and thetas are adjusted according to the errors. The sigmoid activation function is sig(x) = 1/(1 + e^(-x)) and its derivative is Dsig(x) = sig(x)(1 - sig(x)); both sig(x) and Dsig(x) take values between 0 and 1.
APA, Harvard, Vancouver, ISO, and other styles
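A minimal, self-contained rendering of the setup the abstract describes: a small sigmoid network trained by backpropagation of errors to realize the XOR gate. The 2-3-1 layer sizes, learning rate, and iteration count are illustrative choices, not taken from the paper.

```python
import numpy as np

def sig(x):
    return 1.0 / (1.0 + np.exp(-x))   # sigmoid: values between 0 and 1

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # binary inputs
t = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

rng = np.random.default_rng(0)
W1, b1 = rng.normal(0, 1, (2, 3)), np.zeros(3)   # input -> hidden
W2, b2 = rng.normal(0, 1, (3, 1)), np.zeros(1)   # hidden -> output
lr = 1.0

for _ in range(10_000):
    h = sig(X @ W1 + b1)              # hidden-layer activations
    o = sig(h @ W2 + b2)              # network output
    # backpropagation of errors, using Dsig(x) = sig(x) * (1 - sig(x))
    d_o = (o - t) * o * (1 - o)
    d_h = (d_o @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_o; b2 -= lr * d_o.sum(axis=0)
    W1 -= lr * X.T @ d_h; b1 -= lr * d_h.sum(axis=0)

print(np.round(o.ravel(), 2))        # typically converges toward [0, 1, 1, 0]
```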
8

Koyuncu, Hakan. "Determination of positioning accuracies by using fingerprint localisation and artificial neural networks." Thermal Science 23, Suppl. 1 (2019): 99–111. http://dx.doi.org/10.2298/tsci180912334k.

Full text
Abstract:
The fingerprint localisation technique is an effective positioning technique for determining object locations indoors from radio signal strength values. The technique is subject to large positioning errors under challenging environmental conditions. In this paper, a fingerprint localisation technique is first deployed using the classical k-nearest-neighbour method to determine unknown object locations. Additionally, several artificial neural networks are employed on the fingerprint data, such as a single-layer feed-forward neural network, a multi-layer feed-forward neural network, a multi-layer back-propagation neural network, a general regression neural network, and a deep neural network, to determine the same unknown object locations. The fingerprint database is built from received-signal-strength-indicator measurement signatures across the grid locations. The construction of the different neural networks and how they were adapted to the fingerprint data are described. Their results are compared with the classical k-nearest-neighbour method, and the deep neural network was found to be the technique providing the best positioning accuracy.
APA, Harvard, Vancouver, ISO, and other styles
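The classical k-nearest-neighbour fingerprint step that the paper uses as its baseline can be sketched in a few lines: the positions of the k reference fingerprints closest to the measured signature in signal space are averaged. The synthetic path-loss model, grid, and access-point layout below are assumptions for illustration, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(2)

# Reference grid of known positions (the fingerprint survey points).
grid_positions = np.array([[x, y] for x in range(10) for y in range(10)], float)

# Fake RSSI signatures from 4 access points via a log-distance path-loss model.
aps = np.array([[0, 0], [0, 9], [9, 0], [9, 9]], float)
def rssi(p):
    d = np.linalg.norm(aps - p, axis=1) + 1.0
    return -40 - 20 * np.log10(d)

fingerprints = np.array([rssi(p) for p in grid_positions])

def locate(measured, k=3):
    """Average the positions of the k nearest fingerprints in RSSI space."""
    dist = np.linalg.norm(fingerprints - measured, axis=1)
    nearest = np.argsort(dist)[:k]
    return grid_positions[nearest].mean(axis=0)

true_pos = np.array([4.3, 6.7])
print(locate(rssi(true_pos) + rng.normal(0, 1, 4)))   # estimate near [4.3, 6.7]
```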
9

Liang, Lixin, Ning Li, Yihong Li, and Lin Lin. "Neural Networks with the Correlative Layer for Multi-label Classification." Journal of Physics: Conference Series 2425, no. 1 (2023): 012034. http://dx.doi.org/10.1088/1742-6596/2425/1/012034.

Full text
Abstract:
Multi-label classification is a very significant but challenging task. Correlation between labels often exists, so recent works have paid much attention to using the correlation between labels to improve the classification performance. However, how to effectively learn the correlation is still a problem. In this paper, a general framework, i.e., the neural network with the correlative layer (CLNN), is proposed, where the correlative layer is used to express the dependencies between labels. Different from existing work, CLNN first trains a neural network without the correlative layer to obtain rough classification results and then trains the whole neural network to adaptively adjust all the weights, including those of the correlative layer. Thus CLNN can learn both positive/negative and strong/weak relationships between labels. We test CLNN with three typical neural networks, and experimental results show that a neural network can achieve better performance by adding the correlative layer, which demonstrates that the CLNN framework is very effective.
APA, Harvard, Vancouver, ISO, and other styles
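A schematic reading of the two-stage CLNN idea, not the authors' code: first train a base multi-label network alone to get rough results, then append a correlative layer, here a learnable label-by-label matrix initialised to the identity, and fine-tune all weights jointly. The data, model sizes, and learning rates are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, L = 500, 10, 4                     # samples, features, labels
X = rng.normal(size=(n, d))
Y = (X @ rng.normal(size=(d, L)) + rng.normal(0, 0.5, (n, L)) > 0).astype(float)

sig = lambda z: 1.0 / (1.0 + np.exp(-z))

# Stage 1: train the base multi-label classifier alone (rough results).
W = np.zeros((d, L))
for _ in range(500):
    P = sig(X @ W)
    W -= 0.1 / n * (X.T @ (P - Y))       # cross-entropy gradient step

# Stage 2: append the correlative layer C (L x L, identity-initialised,
# so off-diagonal entries learn positive/negative label dependencies)
# and fine-tune W and C jointly.
C = np.eye(L)
for _ in range(500):
    Z = X @ W
    P = sig(Z @ C)
    G = (P - Y) / n                      # dLoss/d(Z @ C) for cross-entropy
    C -= 0.1 * (Z.T @ G)
    W -= 0.1 * (X.T @ (G @ C.T))

print("label-wise accuracy:", np.mean((P > 0.5) == Y))
```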
10

Mali, H. Hakem Alameady. "Classifying Poisonous and Edible Mushrooms in the Agaricus and Lepiota Family Using Multilayer Perception." International Journal of Engineering Sciences & Research Technology 6, no. 1 (2017): 154–64. https://doi.org/10.5281/zenodo.233441.

Full text
Abstract:
Classification is one of the applications of feed-forward Artificial Neural Networks (ANNs). Classification can map data to predefined classes or groups. It is referred to as supervised learning, because the classes are determined before the data are examined. The Multi-Layer Perceptron is a supervised neural network model that is used to train and test data to build a model. In this experiment, a Multi-Layer Perceptron is used to train on the data set and produce a model for classification prediction. After preparing the mushroom data for training, 8124 dataset instances were used for training. The software used to mine the data in this project is Neural Connection Version 2.0. This report explains classification, the Multi-Layer Perceptron, back propagation, and the mushroom data, and details the mining activity performed on the selected datasets to determine whether a mushroom's attributes indicate it is edible or poisonous.
APA, Harvard, Vancouver, ISO, and other styles
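The experiment the abstract describes is a standard supervised classification run; a rough modern equivalent might use scikit-learn's MLPClassifier in place of Neural Connection 2.0. The stand-in below keeps the example self-contained with synthetic data; the real UCI mushroom set has 8124 instances with 22 categorical attributes that would first need one-hot encoding.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in matching the mushroom data's scale: 8124 rows, 22 features.
X, y = make_classification(n_samples=8124, n_features=22, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# One hidden layer, as in a classic multi-layer perceptron setup.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print("edible/poisonous accuracy:", clf.score(X_te, y_te))
```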
More sources

Dissertations / Theses on the topic "Multi-layer neural networks"

1

Cairns, Graham Andrew. "Learning with analogue VLSI multi-layer perceptrons." Thesis, University of Oxford, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.296901.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ahmed, Zulfiqar. "An hybrid architecture for multi-layer feed-forward neural networks." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape8/PQDD_0009/MQ52500.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Tombs, Jonathan Noel. "Multi-layer neural networks and their implementation in analogue VLSI." Thesis, University of Oxford, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.334293.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Zheng, Gonghui. "Design and evaluation of a multi-output-layer perceptron." Thesis, University of Ulster, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.338195.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Setyawati, Bina R. "Multi-layer feed forward neural networks for foreign exchange time series forecasting." Morgantown, W. Va. : [West Virginia University Libraries], 2005. https://eidr.wvu.edu/etd/documentdata.eTD?documentid=4180.

Full text
Abstract:
Thesis (Ph. D.)--West Virginia University, 2005. Title from document title page. Document formatted into pages; contains xii, 185 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 140-146).
APA, Harvard, Vancouver, ISO, and other styles
6

Bulbuller, Gokhan. "Recognition of in-ear microphone speech data using multi-layer neural networks." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2006. http://library.nps.navy.mil/uhtbin/hyperion/06Mar%5FBulbuller.pdf.

Full text
Abstract:
Thesis (M.S. in Electrical Engineering)--Naval Postgraduate School, March 2006. Thesis Advisor(s): Monique P. Fargues, Ravi Vaidyanathan. Includes bibliographical references (p. 159-162). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
7

Wang, Hao. "A new scheme for training ReLU-based multi-layer feedforward neural networks." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-217384.

Full text
Abstract:
A new scheme for training Rectified Linear Unit (ReLU) based feedforward neural networks is examined in this thesis. The project starts with the row-by-row updating strategy designed for Single-hidden Layer Feedforward neural Networks (SLFNs). This strategy exploits the properties held by ReLUs and optimizes each row in the input weight matrix individually, under the common optimization scheme. Then the Direct Updating Strategy (DUS), which has two different versions: Vector-Based Method (VBM) and Matrix-Based Method (MBM), is proposed to optimize the input weight matrix as a whole. Finally, DUS is extended to Multi-hidden Layer Feedforward neural Networks (MLFNs). Since the extension, for general ReLU-based MLFNs, faces an initialization dilemma, a special-structure MLFN is presented. Verification experiments are conducted on six benchmark multi-class classification datasets. The results confirm that the MBM algorithm for SLFNs improves the performance of neural networks compared to its competitor, the regularized extreme learning machine. For most datasets involved, MLFNs with the proposed special structure perform better when adding extra hidden layers.
APA, Harvard, Vancouver, ISO, and other styles
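The thesis benchmarks against the regularized extreme learning machine, in which random input weights are fixed and only the output weights are solved in closed form. The sketch below shows that competitor (with ReLU hidden units), not the thesis's DUS/VBM/MBM schemes; all sizes and the regularization constant are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, H, C = 300, 8, 64, 3                # samples, features, hidden units, classes
X = rng.normal(size=(n, d))
y = rng.integers(0, C, n)
Y = np.eye(C)[y]                          # one-hot targets

W_in = rng.normal(size=(d, H))            # fixed random input weights
b = rng.normal(size=H)
A = np.maximum(X @ W_in + b, 0.0)         # ReLU hidden activations

lam = 1e-2                                # ridge regularization strength
# Output weights by regularized least squares (the ELM training step).
W_out = np.linalg.solve(A.T @ A + lam * np.eye(H), A.T @ Y)

pred = np.argmax(A @ W_out, axis=1)
print("train accuracy:", np.mean(pred == y))
```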
8

Penny, William Douglas. "The storage, training and generalization properties of multi-layer logical neural networks." Thesis, Brunel University, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.331996.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

McGarry, Kenneth J. "Rule extraction and knowledge transfer from radial basis function neural networks." Thesis, University of Sunderland, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.391744.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Krasniewicz, Jan A. "The application and analysis of genetic algorithms to discover topological free parameters in multi-layer perceptions." Thesis, Birmingham City University, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.367474.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Multi-layer neural networks"

1

Shepherd, Adrian J. Second-order methods for neural networks: Fast and reliable training methods for multi-layer perceptrons. Springer, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Penny, William Douglas. The storage, training and generalization properties of multi-layer logical neural networks. Brunel University, 1993.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Dissertation: Autonomous Construction of Multi Layer Perceptron Neural Networks. Storming Media, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Shepherd, Adrian J. Second-Order Methods for Neural Networks: Fast and Reliable Training Methods for Multi-Layer Perceptrons. Springer, 2014.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Shepherd, Adrian J. Second-Order Methods for Neural Networks: Fast and Reliable Training Methods for Multi-Layer Perceptrons. Springer London, Limited, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Multi-layer neural networks"

1

Davalo, Eric, and Patrick Naïm. "Multi-layer Neural Networks." In Neural Networks. Macmillan Education UK, 1991. http://dx.doi.org/10.1007/978-1-349-12312-4_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Shepherd, Adrian J. "Multi-Layer Perceptron Training." In Second-Order Methods for Neural Networks. Springer London, 1997. http://dx.doi.org/10.1007/978-1-4471-0953-2_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Lee, Tsu-Chang. "Multi-Layer Feed-Forward Networks." In Structure Level Adaptation for Artificial Neural Networks. Springer US, 1991. http://dx.doi.org/10.1007/978-1-4615-3954-4_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Widrow, Bernard, and Edward P. Katz. "Backpropagation for Multi-layer Neural Networks." In Springer Series on Bio- and Neurosystems. Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-80939-2_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Yang, Shuzhong, Siwei Luo, and Jianyu Li. "Building Multi-layer Small World Neural Network." In Advances in Neural Networks - ISNN 2006. Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11759966_102.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Eleuteri, Antonio, Roberto Tagliaferri, and Leopoldo Milano. "Divergence Projections for Variable Selection in Multi-layer Perceptron Networks." In Neural Nets. Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-45216-4_32.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Suresh, Sundaram, Narasimhan Sundararajan, and Ramasamy Savitha. "Fully Complex-valued Multi Layer Perceptron Networks." In Supervised Learning with Complex-valued Neural Networks. Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-29491-4_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Pérez-Miñana, Elena, Peter Ross, and John Hallam. "Multi-layer perceptron design using Delaunay triangulations." In Fuzzy Logic, Neural Networks, and Evolutionary Computation. Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/3-540-61988-7_22.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Barber, David, Peter Sollich, and David Saad. "Finite Size Effects in On-Line Learning of Multi-Layer Neural Networks." In Mathematics of Neural Networks. Springer US, 1997. http://dx.doi.org/10.1007/978-1-4615-6099-9_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Sollich, Peter. "Query Learning for Maximum Information Gain in a Multi-Layer Neural Network." In Mathematics of Neural Networks. Springer US, 1997. http://dx.doi.org/10.1007/978-1-4615-6099-9_59.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Multi-layer neural networks"

1

Zou, Hanyi, Huifang Xu, Qingchao Kong, Yilin Cao, and Wenji Mao. "Generating Relevant Article Comments via Variational Multi-Layer Fusion." In 2024 International Joint Conference on Neural Networks (IJCNN). IEEE, 2024. http://dx.doi.org/10.1109/ijcnn60899.2024.10650383.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Sreenivasan, Prashanth, and Yogesh H. Kulkarni. "Computing Midcurve with Multi-Layer and Convolutional Neural Networks." In 2025 International Conference on Computational, Communication and Information Technology (ICCCIT). IEEE, 2025. https://doi.org/10.1109/icccit62592.2025.10928152.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Ruiz, José Luis López, Ángeles Verdejo Espinosa, and Macarena Espinilla Estévez. "Federated Learning Methodology for Indoor Location Systems in Multi-Layer Architecture." In 2024 International Joint Conference on Neural Networks (IJCNN). IEEE, 2024. http://dx.doi.org/10.1109/ijcnn60899.2024.10651450.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Han, Chaoqi, Bingcai Chen, Chanjuan Liu, et al. "SlideMLP: A Pure Multi-layer Perceptrons Method For Medical Image Segmentation." In 2024 International Joint Conference on Neural Networks (IJCNN). IEEE, 2024. http://dx.doi.org/10.1109/ijcnn60899.2024.10651002.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Liu, Ziao, Xuyang Li, Lin Zhao, Zhongquan Gao, and Yu Zheng. "Industrial Surface Defect Detection via Multi-scale Mask Cross-layer Fusion Network." In 2024 International Joint Conference on Neural Networks (IJCNN). IEEE, 2024. http://dx.doi.org/10.1109/ijcnn60899.2024.10651547.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Du, Ruizhong, Yidan Li, Mingyue Li, Jinjia Peng, Yuting Zhu, and Caixia Ma. "Multi-attribute Semantic Adversarial Attack Based on Cross-layer Interpolation for Face Recognition." In 2024 International Joint Conference on Neural Networks (IJCNN). IEEE, 2024. http://dx.doi.org/10.1109/ijcnn60899.2024.10650828.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Fujino, Kazushi, Keiki Takadama, and Hiroyuki Sato. "Multi-layer Cortical Learning Algorithm for Forecasting Time-series Data with Smoothly Changing Variation Patterns." In 2024 International Joint Conference on Neural Networks (IJCNN). IEEE, 2024. http://dx.doi.org/10.1109/ijcnn60899.2024.10650432.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Sun, Mengyuan. "Music Melody Extraction Algorithm Based on Multi-Layer Perceptron Neural Network." In 2024 4th International Conference on Mobile Networks and Wireless Communications (ICMNWC). IEEE, 2024. https://doi.org/10.1109/icmnwc63764.2024.10871981.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Zhang, Pei, Lihua Zhou, Lizhen Wang, Hongmei Chen, and Qing Xiao. "Multi-masks and Bi-spaces Reconstruction based Single-Layer Auto-encoder for Heterogeneous Graph Representation Learning." In 2024 International Joint Conference on Neural Networks (IJCNN). IEEE, 2024. http://dx.doi.org/10.1109/ijcnn60899.2024.10650586.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Motato, Eliot, and Clark Radcliffe. "Recursive Assembly of Multi-Layer Perceptron Neural Networks." In ASME 2014 Dynamic Systems and Control Conference. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/dscc2014-5997.

Full text
Abstract:
The objective of this paper is to present a methodology for modularly connecting Multi-Layer Perceptron (MLP) neural network models describing static port-based physical behavior. The MLPs considered in this work are characterized by a standard format with a single hidden layer of sigmoidal activation functions. Since every port is defined by an input-output pair, the number of outputs of the proposed neural network format is equal to the number of its inputs. This work extends the Model Assembly Method (MAM), used to connect transfer-function models and Volterra models, to multi-layer perceptron neural networks.
APA, Harvard, Vancouver, ISO, and other styles
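The "standard format" the abstract names, a single sigmoidal hidden layer with as many outputs as inputs, is what makes port-to-port chaining possible. The sketch below shows such a module and plain composition of two modules; it is not the paper's Model Assembly Method, and all names and sizes are illustrative.

```python
import numpy as np

class PortMLP:
    """Single-hidden-layer MLP whose output count equals its input count."""
    def __init__(self, n_ports, n_hidden, rng):
        self.W1 = rng.normal(0, 1, (n_ports, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 1, (n_hidden, n_ports))
        self.b2 = np.zeros(n_ports)

    def __call__(self, u):
        h = 1 / (1 + np.exp(-(u @ self.W1 + self.b1)))   # sigmoid hidden layer
        return h @ self.W2 + self.b2                      # one output per port

rng = np.random.default_rng(5)
m1, m2 = PortMLP(3, 16, rng), PortMLP(3, 16, rng)
u = rng.normal(size=3)
print(m2(m1(u)))   # chained modules: the outputs of m1 feed the ports of m2
```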