
Journal articles on the topic 'Single layer perceptron'

Consult the top 50 journal articles for your research on the topic 'Single layer perceptron.'


1

Shynk, J. J. "Performance surfaces of a single-layer perceptron." IEEE Transactions on Neural Networks 1, no. 3 (1990): 268–74. http://dx.doi.org/10.1109/72.80252.

2

Agung Riansa, Dimas, Widodo, and Bambang Prasetya Adhi. "Pengenalan Tanda Tangan Menggunakan Algoritma Single Layer Perceptron." PINTER : Jurnal Pendidikan Teknik Informatika dan Komputer 3, no. 1 (2019): 1–6. http://dx.doi.org/10.21009/pinter.3.1.1.

Abstract:
A signature is a handwritten mark used to authenticate a document or letter. The presence of a signature in a document indicates that the signatory knows and agrees to the entire content of the document. This makes signatures a target for forgery by irresponsible parties. A signature's authenticity can be verified manually or by computer using an artificial neural network (ANN). The perceptron is one ANN algorithm that can recognize signatures accurately. The perceptron algorithm is a supervised-learning algorithm that can classify linearly separable input into particular classes. The researchers used signatures from 5 officials of the Faculty of Engineering, Universitas Negeri Jakarta; for each official there are 15 genuine signatures and 15 forged signatures, giving 150 signatures in total to serve as test data and training data. K-fold cross-validation was used to obtain a valid estimate of the accuracy of the perceptron algorithm. Signature recognition using the perceptron algorithm, with accuracy measured by k-fold cross-validation, achieved an average accuracy of 78.667%. [Abstract translated from Indonesian.]
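The perceptron learning rule the abstract describes can be sketched in a few lines. This is only an illustrative example: the AND toy data below is hypothetical and stands in for the signature feature vectors used in the paper, and the k-fold evaluation step is omitted.

```python
# Minimal sketch of the classic Rosenblatt perceptron update:
# w += lr * (y - y_hat) * x, which converges on linearly separable data.

def train_perceptron(X, y, lr=1.0, epochs=20):
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            y_hat = 1 if sum(wj * xj for wj, xj in zip(w, xi)) + b > 0 else 0
            err = yi - y_hat                     # 0 when the prediction is right
            w = [wj + lr * err * xj for wj, xj in zip(w, xi)]
            b += lr * err
    return w, b

def predict(w, b, xi):
    return 1 if sum(wj * xj for wj, xj in zip(w, xi)) + b > 0 else 0

# Hypothetical toy data: the linearly separable AND problem.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]
w, b = train_perceptron(X, y)
print([predict(w, b, xi) for xi in X])  # prints [0, 0, 0, 1]
```

In the paper's setting each `xi` would be a feature vector extracted from a signature image, and accuracy would be averaged over the k cross-validation folds.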
3

Hu, Yi-Chung. "Bankruptcy prediction using ELECTRE-based single-layer perceptron." Neurocomputing 72, no. 13-15 (2009): 3150–57. http://dx.doi.org/10.1016/j.neucom.2009.03.002.

4

Takagi, Shiro, Yuki Yoshida, and Masato Okada. "Impact of Layer Normalization on Single-Layer Perceptron — Statistical Mechanical Analysis." Journal of the Physical Society of Japan 88, no. 7 (2019): 074003. http://dx.doi.org/10.7566/jpsj.88.074003.

5

Hu, Yi-Chung, Jen-Hung Wang, and Chia-Ying Chang. "Flow-based grey single-layer perceptron with fuzzy integral." Neurocomputing 91 (August 2012): 86–89. http://dx.doi.org/10.1016/j.neucom.2012.02.022.

6

Uykan, Z. "Clustering-based algorithms for single-hidden-layer sigmoid perceptron." IEEE Transactions on Neural Networks 14, no. 3 (2003): 708–15. http://dx.doi.org/10.1109/tnn.2003.813532.

7

Forsström, J. J., K. Irjala, G. Selén, M. Nyström, and P. Eiuund. "Using data preprocessing and single layer perceptron to analyze laboratory data." Scandinavian Journal of Clinical and Laboratory Investigation 55, sup222 (1995): 75–81. http://dx.doi.org/10.3109/00365519509088453.

8

Saad, D. "Capacity of the single-layer perceptron and minimal trajectory training algorithms." Journal of Physics A: Mathematical and General 26, no. 15 (1993): 3757–73. http://dx.doi.org/10.1088/0305-4470/26/15/025.

9

Hu, Yi-Chung. "A single-layer perceptron with PROMETHEE methods using novel preference indices." Neurocomputing 73, no. 16-18 (2010): 2920–27. http://dx.doi.org/10.1016/j.neucom.2010.08.002.

10

Hu, Yi-Chung. "Nonadditive similarity-based single-layer perceptron for multi-criteria collaborative filtering." Neurocomputing 129 (April 2014): 306–14. http://dx.doi.org/10.1016/j.neucom.2013.09.027.

11

Ashenayi, Kaveh, James Vogh, Heng-Ming Tai, Mohammad Sayeh, and Mohammad Mostafavi. "Single-Layer Perceptron Capable Of Classifying 2N+1 Distinct Input Patterns." International Journal of Modelling and Simulation 10, no. 4 (1990): 124–28. http://dx.doi.org/10.1080/02286203.1990.11760106.

12

Shang, C., C. F. N. Cowan, and M. J. J. Holt. "Log-likelihood adaptive algorithm in single-layer perceptron based channel equalisation." Electronics Letters 31, no. 22 (1995): 1900–1902. http://dx.doi.org/10.1049/el:19951323.

13

Shynk, John J., and Neil J. Bershad. "Stationary points of a single-layer perceptron for nonseparable data models." Neural Networks 6, no. 2 (1993): 189–202. http://dx.doi.org/10.1016/0893-6080(93)90016-p.

14

Zhang, Xiaoyu, Xiaofeng Chen, Jianfeng Wang, Zhihui Zhan, and Jin Li. "Verifiable privacy-preserving single-layer perceptron training scheme in cloud computing." Soft Computing 22, no. 23 (2018): 7719–32. http://dx.doi.org/10.1007/s00500-018-3233-7.

15

Raudys, Šarūnas. "Evolution and generalization of a single neurone: I. Single-layer perceptron as seven statistical classifiers." Neural Networks 11, no. 2 (1998): 283–96. http://dx.doi.org/10.1016/s0893-6080(97)00135-4.

16

Wang, Jingjing, Xiaoyu Zhang, Xiaoling Tao, and Jianfeng Wang. "EPSLP: Efficient and privacy-preserving single-layer perceptron learning in cloud computing." Journal of High Speed Networks 24, no. 3 (2018): 259–79. http://dx.doi.org/10.3233/jhs-180594.

17

Chen, Fangyue, Wenhui Tang, and Guanrong Chen. "Single-layer perceptron and dynamic neuron implementing linearly non-separable Boolean functions." International Journal of Circuit Theory and Applications 37, no. 3 (2009): 433–51. http://dx.doi.org/10.1002/cta.481.

18

Chang, Jyh‐Yeong, and Jia‐Lin Chen. "Applying fuzzy logic in the modified single‐layer perceptron image segmentation network." Journal of the Chinese Institute of Engineers 23, no. 2 (2000): 197–210. http://dx.doi.org/10.1080/02533839.2000.9670538.

19

Hu, Yi-Chung. "Classification performance evaluation of single-layer perceptron with Choquet integral-based TOPSIS." Applied Intelligence 29, no. 3 (2007): 204–15. http://dx.doi.org/10.1007/s10489-007-0086-7.

20

BLATT, MARCELO, EYTAN DOMANY, and IDO KANTER. "ON THE EQUIVALENCE OF TWO-LAYERED PERCEPTRONS WITH BINARY NEURONS." International Journal of Neural Systems 06, no. 03 (1995): 225–31. http://dx.doi.org/10.1142/s0129065795000160.

Abstract:
We consider two-layered perceptrons consisting of N binary input units, K binary hidden units and one binary output unit, in the limit N≫K≥1. We prove that the weights of a regular irreducible network are uniquely determined by its input-output map up to some obvious global symmetries. A network is regular if its K weight vectors from the input layer to the K hidden units are linearly independent. A (single-layered) perceptron is said to be irreducible if its output depends on every one of its input units; and a two-layered perceptron is irreducible if the K+1 perceptrons that constitute such a network are irreducible. By global symmetries we mean, for instance, permuting the labels of the hidden units. Hence, two irreducible regular two-layered perceptrons that implement the same Boolean function must have the same number of hidden units, and must be composed of equivalent perceptrons.
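The "global symmetry" the abstract mentions (permuting hidden-unit labels leaves the input-output map unchanged) can be checked directly on a tiny binary network. The weights below are hypothetical, chosen only to illustrate the symmetry, not taken from the paper.

```python
from itertools import product

def step(s):
    # Binary (sign) neuron.
    return 1 if s > 0 else -1

def two_layer(x, W, v, theta):
    """Two-layered perceptron: K binary hidden units, one binary output."""
    h = [step(sum(wi * xi for wi, xi in zip(w, x))) for w in W]
    return step(sum(vi * hi for vi, hi in zip(v, h)) - theta)

# Hypothetical K=2 network over N=3 binary inputs.
W = [(1, -2, 1), (2, 1, -1)]   # input-to-hidden weight vectors
v = (1, 1)                      # hidden-to-output weights
theta = 0.5

# Permute the hidden-unit labels: swap the rows of W together with v.
W_perm = [W[1], W[0]]
v_perm = (v[1], v[0])

# The permuted network implements the same Boolean function.
same = all(
    two_layer(x, W, v, theta) == two_layer(x, W_perm, v_perm, theta)
    for x in product((-1, 1), repeat=3)
)
print(same)  # prints True
```

The paper's uniqueness result goes in the other direction: for regular irreducible networks, these relabelings (and similar sign symmetries) are the *only* ways two weight settings can give the same input-output map.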
21

Lotfi, Ehsan, and M. R. Akbarzadeh-T. "A Novel Single Neuron Perceptron with Universal Approximation and XOR Computation Properties." Computational Intelligence and Neuroscience 2014 (2014): 1–6. http://dx.doi.org/10.1155/2014/746376.

Abstract:
We propose a biologically motivated brain-inspired single neuron perceptron (SNP) with universal approximation and XOR computation properties. This computational model extends the input pattern and is based on the excitatory and inhibitory learning rules inspired by neural connections in the human brain's nervous system. The resulting SNP architecture can be trained by supervised excitatory and inhibitory online learning rules. The main features of the proposed single-layer perceptron are its universal approximation property and low computational complexity. The method is tested on 6 UCI (University of California, Irvine) pattern recognition and classification datasets. Various comparisons with a multilayer perceptron (MLP) trained with the gradient descent backpropagation (GDBP) learning algorithm indicate the superiority of the approach in terms of higher accuracy, lower time and space complexity, and faster training. Hence, we believe the proposed approach can be generally applicable to various problems such as pattern recognition and classification.
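The key obstacle the abstract addresses is that a plain single-layer perceptron cannot compute XOR, while extending the input pattern can make the problem linearly separable. The sketch below uses a simple product feature x1*x2 as the extension; this is an illustrative stand-in, not the paper's excitatory/inhibitory rule.

```python
# Compare a plain perceptron on XOR with one trained on an extended input.

def train(X, y, epochs=50):
    # Rosenblatt perceptron with zero initialization and unit learning rate.
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            err = yi - (1 if sum(a * c for a, c in zip(w, xi)) + b > 0 else 0)
            w = [a + err * c for a, c in zip(w, xi)]
            b += err
    return w, b

def accuracy(w, b, X, y):
    hits = sum(
        (1 if sum(a * c for a, c in zip(w, xi)) + b > 0 else 0) == yi
        for xi, yi in zip(X, y)
    )
    return hits / len(y)

xor_X = [(0, 0), (0, 1), (1, 0), (1, 1)]
xor_y = [0, 1, 1, 0]

# Plain inputs: XOR is not linearly separable, so accuracy stays below 1.0.
plain = accuracy(*train(xor_X, xor_y), xor_X, xor_y)

# Extended inputs: appending x1 * x2 makes XOR linearly separable.
ext_X = [xi + (xi[0] * xi[1],) for xi in xor_X]
extended = accuracy(*train(ext_X, xor_y), ext_X, xor_y)
print(plain, extended)  # extended reaches 1.0
```

Any input extension that restores linear separability works here; the paper's contribution is a biologically motivated way of doing so with a single neuron.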
22

CHEN, HSIAO-CHI, and YI-CHUNG HU. "SINGLE-LAYER PERCEPTRON WITH NON-ADDITIVE PREFERENCE INDICES AND ITS APPLICATION TO BANKRUPTCY PREDICTION." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 19, no. 05 (2011): 843–61. http://dx.doi.org/10.1142/s021848851100726x.

Abstract:
Preference Ranking Organization METHods for Enrichment Evaluations (PROMETHEE), based on outranking relation theory, are used extensively in Multi-Criteria Decision Aid (MCDA). In PROMETHEE, an overall preference index based on weighted average aggregation represents the intensity of preference for one pattern over another pattern and can be measured by a given preference function. Unfortunately, as the criteria making up the patterns are not always independent, the assumption of additivity among single-criterion preference indices may not be reasonable. This paper develops a novel PROMETHEE-based perceptron using nonadditive preference indices for ordinal sorting problems. The applicability of the proposed non-additive PROMETHEE-based single-layer perceptron (SLP) to bankruptcy prediction is examined by using a sample of 53 publicly traded, Taiwanese firms that encountered financial failure between 2000 and 2008. The proposed model performs well compared to PROMETHEE with additive preference indices and other additive PROMETHEE-based classification approaches.
23

Yoshida, Yuki, Ryo Karakida, Masato Okada, and Shun-ichi Amari. "Statistical Mechanical Analysis of Online Learning with Weight Normalization in Single Layer Perceptron." Journal of the Physical Society of Japan 86, no. 4 (2017): 044002. http://dx.doi.org/10.7566/jpsj.86.044002.

24

Khalid, Muhammad, and Jawar Singh. "Memristive Crossbar Circuits-Based Combinational Logic Classification Using Single Layer Perceptron Learning Rule." Journal of Nanoelectronics and Optoelectronics 12, no. 1 (2017): 47–58. http://dx.doi.org/10.1166/jno.2017.1963.

25

Coetzee, Frans M., and Virginia L. Stonick. "Topology and Geometry of Single Hidden Layer Network, Least Squares Weight Solutions." Neural Computation 7, no. 4 (1995): 672–705. http://dx.doi.org/10.1162/neco.1995.7.4.672.

Abstract:
In this paper the topological and geometric properties of the weight solutions for multilayer perceptron (MLP) networks under the MSE error criterion are characterized. The characterization is obtained by analyzing a homotopy from linear to nonlinear networks in which the hidden node function is slowly transformed from a linear to the final sigmoidal nonlinearity. Two different geometric perspectives for this optimization process are developed. The generic topology of the nonlinear MLP weight solutions is described and related to the geometric interpretations, error surfaces, and homotopy paths, both analytically and using carefully constructed examples. These results illustrate that although the natural homotopy provides a practically valuable heuristic for training, it suffers from a number of theoretical and practical difficulties. The linear system is a bifurcation point of the homotopy equations, and solution paths are therefore generically discontinuous. Bifurcations and infinite solutions further occur for data sets that are not of measure zero. These results weaken the guarantees on global convergence and exhaustive behavior normally associated with homotopy methods. However, the analyses presented provide a clear understanding of the relationship between linear and nonlinear perceptron networks, and thus a firm foundation for development of more powerful training methods. The geometric perspectives and generic topological results describing the nature of the solutions are further generally applicable to network analysis and algorithm evaluation.
26

Yahanov, P. O., and I. V. Redko. "Perceptron Classifier of Thermal Comfort" [in Ukrainian]. Bulletin of the Kyiv National University of Technologies and Design. Technical Science Series 128, no. 6 (2019): 29–38. http://dx.doi.org/10.30857/1813-6796.2018.6.3.

Abstract:
The paper develops algorithmic control of automated systems for human thermal comfort, establishing optimal thermal comfort using the classification and computational capabilities of the simplest single-layer neural network, the perceptron.
27

Kůrková, Věra, and Paul C. Kainen. "Functionally Equivalent Feedforward Neural Networks." Neural Computation 6, no. 3 (1994): 543–58. http://dx.doi.org/10.1162/neco.1994.6.3.543.

Abstract:
For a feedforward perceptron type architecture with a single hidden layer but with a quite general activation function, we characterize the relation between pairs of weight vectors determining networks with the same input-output function.
28

Bar, Nirjhar, and Sudip Kumar Das. "Modeling of Gas Holdup and Pressure Drop Using ANN for Gas-Non-Newtonian Liquid Flow in Vertical Pipe." Advanced Materials Research 917 (June 2014): 244–56. http://dx.doi.org/10.4028/www.scientific.net/amr.917.244.

Abstract:
This paper compares the performance of three different Multilayer Perceptron training algorithms, namely Backpropagation, Scaled Conjugate Gradient, and Levenberg-Marquardt, for predicting the gas holdup and frictional pressure drop across a vertical pipe for gas-non-Newtonian liquid flow, using our earlier experimental data. The Multilayer Perceptron consists of a single hidden layer. Four different transfer functions were used in the hidden layer. All three algorithms were able to predict the gas holdup and frictional pressure drop across the vertical pipe. Statistical analysis using the Chi-square test (χ2) confirms that the Backpropagation training algorithm gives the best predictability in both cases.
29

Marcialis, Gian Luca, and Fabio Roli. "Fusion of multiple fingerprint matchers by single-layer perceptron with class-separation loss function." Pattern Recognition Letters 26, no. 12 (2005): 1830–39. http://dx.doi.org/10.1016/j.patrec.2005.03.004.

30

Mualfah, D., Y. Fatma, and R. A. Ramadhan. "Anti-forensics: the image asymmetry key and single layer perceptron for digital data security." Journal of Physics: Conference Series 1517 (April 2020): 012106. http://dx.doi.org/10.1088/1742-6596/1517/1/012106.

31

Daqi, Gao, Li Chunxia, and Yang Yunfan. "Task decomposition and modular single-hidden-layer perceptron classifiers for multi-class learning problems." Pattern Recognition 40, no. 8 (2007): 2226–36. http://dx.doi.org/10.1016/j.patcog.2007.01.002.

32

Hu, Yi-Chung, and Jung-Fa Tsai. "Evaluating classification performances of single-layer perceptron with a Choquet fuzzy integral-based neuron." Expert Systems with Applications 36, no. 2 (2009): 1793–800. http://dx.doi.org/10.1016/j.eswa.2007.12.006.

33

NG, WING W. Y., DANIEL S. YEUNG, and ERIC C. C. TSANG. "THE LOCALIZED GENERALIZATION ERROR MODEL FOR SINGLE LAYER PERCEPTRON NEURAL NETWORK AND SIGMOID SUPPORT VECTOR MACHINE." International Journal of Pattern Recognition and Artificial Intelligence 22, no. 01 (2008): 121–35. http://dx.doi.org/10.1142/s0218001408006168.

Abstract:
We previously developed the localized generalization error model for supervised learning with minimization of the Mean Square Error. In this work, we extend the error model to the Single Layer Perceptron Neural Network (SLPNN) and the Support Vector Machine (SVM) with sigmoid kernel function. For a trained SLPNN or SVM and a given training dataset, the proposed error model provides an upper bound on the error for unseen samples that are similar to the training samples. As the major component of the localized generalization error model, the stochastic sensitivity measure formula for the perceptron neural network derived in this work relaxes the assumptions of previous works that all inputs share the same distribution and that each sample is perturbed only once. This makes the sensitivity measure applicable to pattern classification problems. The stochastic sensitivity measure of the SVM with sigmoid kernel is also derived in this work as a component of the localized generalization error model. At the end of this paper, we discuss the advantages of the proposed error bound over existing error bounds.
34

Dehuri, Satchidananda, and Sung-Bae Cho. "Learning Fuzzy Network Using Sequence Bound Global Particle Swarm Optimizer." International Journal of Fuzzy System Applications 2, no. 1 (2012): 54–70. http://dx.doi.org/10.4018/ijfsa.2012010104.

Abstract:
This paper proposes an algorithm for classification by learning a fuzzy network with a sequence-bound global particle swarm optimizer. The aim of this work is twofold: first, to provide an explicit mapping of input features from the original domain to a fuzzy domain with multiple fuzzy sets; second, to present the novel sequence-bound global particle swarm optimizer for evolving an optimal set of connection weights between the hidden layer and the output layer of the fuzzy network. The novel sequence-bound global particle swarm optimizer can solve the problem of premature convergence when learning a fuzzy network plagued with many local optima. Unlike a multi-layer perceptron with many hidden layers, it has only a single hidden layer. The output layer of this network contains one neuron. This network advocates a simple and understandable architecture for classification. The experimental studies show that the classification accuracy of the proposed algorithm is promising and superior to alternatives such as the multi-layer perceptron and the radial basis function network.
35

Bershad, N. J., and J. J. Shynk. "Performance analysis of a converged single-layer perceptron for nonseparable data models with bias terms." IEEE Transactions on Signal Processing 42, no. 1 (1994): 175–88. http://dx.doi.org/10.1109/78.258132.

36

Hu, Yi-Chung. "Nonadditive grey single-layer perceptron with Choquet integral for pattern classification problems using genetic algorithms." Neurocomputing 72, no. 1-3 (2008): 331–40. http://dx.doi.org/10.1016/j.neucom.2008.01.008.

37

Hu, Yi-Chung, and Hsiao-Chi Chen. "Integrating multicriteria PROMETHEE II method into a single-layer perceptron for two-class pattern classification." Neural Computing and Applications 20, no. 8 (2010): 1263–71. http://dx.doi.org/10.1007/s00521-010-0424-2.

38

He, Xin, and Yushi Chen. "Modifications of the Multi-Layer Perceptron for Hyperspectral Image Classification." Remote Sensing 13, no. 17 (2021): 3547. http://dx.doi.org/10.3390/rs13173547.

Abstract:
Recently, many convolutional neural network (CNN)-based methods have been proposed to tackle the classification task of hyperspectral images (HSI). In fact, CNN has become the de-facto standard for HSI classification. It seems that traditional neural networks such as the multi-layer perceptron (MLP) are not competitive for HSI classification. However, in this study, we try to prove that the MLP can achieve good classification performance on HSI if it is properly designed and improved. The proposed Modified-MLP for HSI classification contains two special parts: spectral–spatial feature mapping and spectral–spatial information mixing. Specifically, for spectral–spatial feature mapping, each input sample of HSI is divided into a sequence of 3D patches of fixed length, and a linear layer then maps the 3D patches to spectral–spatial features. For spectral–spatial information mixing, all the spectral–spatial features within a single sample are fed into a purely MLP architecture to model the spectral–spatial information across patches for the subsequent HSI classification. Furthermore, to obtain abundant spectral–spatial information at different scales, Multiscale-MLP is proposed to aggregate neighboring patches with multiscale shapes. In addition, Soft-MLP is proposed to further enhance classification performance by applying a soft split operation, which flexibly captures the global relations of patches at different positions in the input HSI sample. Finally, label smoothing is introduced to mitigate the overfitting problem in Soft-MLP (Soft-MLP-L), which greatly improves the classification performance of the MLP-based method. The proposed Modified-MLP, Multiscale-MLP, Soft-MLP, and Soft-MLP-L are tested on three widely used hyperspectral datasets. The proposed Soft-MLP-L leads to the highest OA, outperforming CNN by 5.76%, 2.55%, and 2.5% on the Salinas, Pavia, and Indian Pines datasets, respectively. The obtained results reveal that the proposed models provide competitive results compared to the state-of-the-art methods, which shows that MLP-based methods are still competitive for HSI classification.
39

Sudhakara, M., and M. Janaki Meena. "Multi-scale fusion for underwater image enhancement using multi-layer perceptron." IAES International Journal of Artificial Intelligence (IJ-AI) 10, no. 2 (2021): 389. http://dx.doi.org/10.11591/ijai.v10.i2.pp389-397.

Abstract:
Underwater image enhancement (UIE) is an imperative computer vision task with many applications and different strategies proposed in recent years. Underwater images suffer severely degraded quality from a mixture of noise, wavelength dependency, and light attenuation. This paper presents an effective strategy to improve the quality of degraded underwater images. Existing dehazing methods in the literature based on the dark channel prior use two separate phases for evaluating the transmission map (i.e., transmission estimation and transmission refinement). Accurate restoration is not possible with these methods, and they take more computational time. The proposed three-step method is an imaging approach that does not need particular hardware or underwater conditions. First, we utilize the multi-layer perceptron (MLP) to comprehensively evaluate transmission maps by base channel, followed by contrast enhancement. Furthermore, a gamma-adjusted version of the MLP-recovered image is derived. Finally, a multi-scale fusion method is applied to the two attained images. A standardized weight is computed for the two images with three different weights in the fusion process. The quantitative results show that our approach gives significantly better results, with differences of 0.536, 2.185, and 1.272 for the PCQI, UCIQE, and UIQM metrics, respectively, on a single underwater image benchmark dataset. The qualitative results are also better compared with state-of-the-art techniques.
40

Wang, Jidong, Zhilin Xu, and Yanbo Che. "Power Quality Disturbance Classification Based on DWT and Multilayer Perceptron Extreme Learning Machine." Applied Sciences 9, no. 11 (2019): 2315. http://dx.doi.org/10.3390/app9112315.

Abstract:
In order to effectively identify complex power quality disturbances, a power quality disturbance classification method based on empirical wavelet transform and a multi-layer perceptron extreme learning machine (ELM) is proposed. The model uses the discrete wavelet transform (DWT) multi-resolution method to extract classification features. Combined with hierarchical ELM (H-ELM) characteristics, the particle swarm optimization (PSO) single-object feature selection method is used to select the optimal feature set. The hidden layer of the H-ELM classifier in the model is trained by forward training. Once the previous layer is established, the weight of the current layer can be fixed without fine-tuning. Therefore, the training speed can be accelerated, the recognition accuracy is almost independent of the parameter adjustment, and the model has strong robustness. In order to solve the problem of data imbalance in the actual power system, a data enhancement method is proposed to reduce the impact of data imbalance and enhance the generalization performance of the network. The simulation results showed that the proposed method can identify 16 disturbances efficiently and accurately under different noise conditions, and the robustness of the proposed method is verified by the measured data.
41

Claywell, Randall, Laszlo Nadai, Imre Felde, Sina Ardabili, and Amirhosein Mosavi. "Adaptive Neuro-Fuzzy Inference System and a Multilayer Perceptron Model Trained with Grey Wolf Optimizer for Predicting Solar Diffuse Fraction." Entropy 22, no. 11 (2020): 1192. http://dx.doi.org/10.3390/e22111192.

Abstract:
The accurate prediction of the solar diffuse fraction (DF), sometimes called the diffuse ratio, is an important topic for solar energy research. In the present study, the current state of diffuse irradiance research is discussed and then three robust machine learning (ML) models are examined using a large dataset (almost eight years) of hourly readings from Almeria, Spain. The ML models used herein are a hybrid adaptive network-based fuzzy inference system (ANFIS), a single multi-layer perceptron (MLP) and a hybrid multi-layer perceptron grey wolf optimizer (MLP-GWO). These models were evaluated for their predictive precision using various solar and DF irradiance data from Spain. The results were then evaluated using frequently used evaluation criteria: the mean absolute error (MAE), mean error (ME) and the root mean square error (RMSE). The results showed that the MLP-GWO model, followed by the ANFIS model, provided a higher performance in both the training and the testing procedures.
42

KINOUCHI, OSAME, and MARCELO H. R. TRAGTENBERG. "MODELING NEURONS BY SIMPLE MAPS." International Journal of Bifurcation and Chaos 06, no. 12a (1996): 2343–60. http://dx.doi.org/10.1142/s0218127496001508.

Abstract:
We introduce a simple generalization of graded response formal neurons which presents very complex behavior. Phase diagrams in full parameter space are given, showing regions with fixed points, periodic, quasiperiodic and chaotic behavior. These diagrams also represent the possible time series learnable by the simplest feed-forward network, a two input single-layer perceptron. This simple formal neuron (‘dynamical perceptron’) behaves as an excitable element with characteristics very similar to those appearing in more complicated neuron models like FitzHugh-Nagumo and Hodgkin-Huxley systems: natural threshold for action potentials, dampened subthreshold oscillations, rebound response, repetitive firing under constant input, nerve blocking effect etc. We also introduce an ‘adaptive dynamical perceptron’ as a simple model of a bursting neuron of Rose-Hindmarsh type. We show that networks of such elements are interesting models which lie at the interface of neural networks, coupled map lattices, excitable media and self-organized criticality studies.
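The kind of model the abstract describes, a graded-response neuron iterated as a map, can be sketched generically. The two-variable tanh map and the parameters K, T, I below are hypothetical illustrations in that spirit, not the exact map from the paper.

```python
import math

# Generic two-variable graded-response map: the neuron's next state is a
# saturating function of its current state, a feedback variable, and an
# external input. Parameters K (feedback), T (gain), I (input) are made up.

def iterate(steps, K=0.6, T=0.35, I=0.0, x0=0.1, y0=0.0):
    x, y = x0, y0
    traj = []
    for _ in range(steps):
        # y tracks the previous state, providing the delayed feedback term.
        x, y = math.tanh((x - K * y + I) / T), x
        traj.append(x)
    return traj

traj = iterate(1000)
# tanh keeps the state bounded in (-1, 1) regardless of the parameters.
print(min(traj), max(traj))
```

Sweeping K, T, and I in a map of this form is how phase diagrams like those in the paper are produced: different parameter regions yield fixed points, oscillations, or chaos.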
43

Kang, Youngshin, Nakwan Kim, Byoung-Soo Kim, and Min-Jea Tahk. "Autopilot design for tilt-rotor unmanned aerial vehicle with nacelle mounted wing extension using single hidden layer perceptron neural network." Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering 231, no. 11 (2016): 1979–92. http://dx.doi.org/10.1177/0954410016664926.

Abstract:
Single hidden layer perceptron neural network controllers combined with dynamic inversion are applied to the tilt-rotor unmanned aerial vehicle and its variant model with the nacelle mounted wing extension. The bandwidths of the inner loop and outer loop of the controller are designed using the timescale separation approach, which uses the combined analysis of the two loops. The bandwidth of each loop is selected to be close to each other using a combination of the pseudo-control-hedging and the pole-placement method. Similar to the previous studies on sigma-pi neural network, the dynamic inversion at hover conditions of the original tilt-rotor model is used as a baseline for both aircraft, and the compatible solution to the Lyapunov equation is suggested. The single hidden layer perceptron neural network minimizes the error of the inversion model through the back-propagation adaptation. The waypoint guidance is applied to the outermost loop of the neural network controller for autonomous flight which includes vertical take-off and landing as well as nacelle conversion. The simulation results under the two wind conditions for the tilt-rotor aircraft and its variant are presented. The south and north-west wind directions are simulated in order to compare with the results from the existing sigma-pi neural network, and the estimation results of the wind are presented.
APA, Harvard, Vancouver, ISO, and other styles
44

Shynk, J. J., and N. J. Bershad. "Steady-state analysis of a single-layer perceptron based on a system identification model with bias terms." IEEE Transactions on Circuits and Systems 38, no. 9 (1991): 1030–42. http://dx.doi.org/10.1109/31.83874.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

BUSYGIN, Alexander N., Andrey N. BOBYLEV, Alexey A. GUBIN, Alexander D. PISAREV, and Sergey Yu UDOVICHENKO. "NUMERICAL SIMULATION AND EXPERIMENTAL STUDY OF A HARDWARE PULSE NEURAL NETWORK WITH MEMRISTOR SYNAPSES." Tyumen State University Herald. Physical and Mathematical Modeling. Oil, Gas, Energy 7, no. 2 (2021): 223–35. http://dx.doi.org/10.21684/2411-7978-2021-7-2-223-235.

Full text
Abstract:
This article presents the results of a numerical simulation and an experimental study of the electrical circuit of a hardware spiking perceptron based on a memristor-diode crossbar. That has required developing and manufacturing a measuring bench, the electrical circuit of which consists of a hardware perceptron circuit and an input peripheral electrical circuit to implement the activation functions of the neurons and ensure the operation of the memory matrix in a spiking mode. The authors have performed a study of the operation of the hardware spiking neural network with memristor synapses in the form of a memory matrix in the mode of a single-layer perceptron. The perceptron can be considered as the first layer of a biomorphic neural network that performs primary processing of incoming information in a biomorphic neuroprocessor. The obtained experimental and simulation learning curves show the expected increase in the proportion of correct classifications with an increase in the number of training epochs. The authors demonstrate the generation of a new association during retraining caused by the presence of new input information. Comparison of the results of modeling and an experiment on training a small neural network with a small crossbar will allow creating adequate models of hardware neural networks with a large memristor-diode crossbar. The arrival of new unknown information at the input of the hardware spiking neural network can be associated with the generation of new associations in the biomorphic neuroprocessor. With further improvement of the neural network, this information will be comprehended and, therefore, will allow the transition from weak to strong artificial intelligence.
APA, Harvard, Vancouver, ISO, and other styles
46

ELIZONDO, DAVID A., ROBERT MORRIS, TIM WATSON, and BENJAMIN N. PASSOW. "CONSTRUCTIVE RECURSIVE DETERMINISTIC PERCEPTRON NEURAL NETWORKS WITH GENETIC ALGORITHMS." International Journal of Pattern Recognition and Artificial Intelligence 27, no. 06 (2013): 1350019. http://dx.doi.org/10.1142/s0218001413500195.

Full text
Abstract:
The recursive deterministic perceptron (RDP) is a generalization of the single layer perceptron neural network. This neural network can separate, in a deterministic manner, any classification problem (linearly separable or not). It relies on the principle that in any nonlinearly separable (NLS) two-class classification problem, a linearly separable (LS) subset of one or more points belonging to one of the two classes can always be found. Small network topologies can be obtained when the LS subsets are of maximum cardinality. This is referred to as the problem of maximum separability and has been proven to be NP-Complete. Evolutionary computing techniques are applied to handle this problem in a more efficient way than the standard approaches in terms of complexity. These techniques enhance the RDP training in terms of speed of convergence and level of generalization. They provide an alternative to tackle large classification problems which is otherwise not feasible with the algorithmic versions of the RDP training methods.
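The building block the RDP generalizes is the classic single-layer perceptron learning rule, which converges on any linearly separable set. A minimal sketch on a toy separable problem (the data, learning rate, and epoch count are illustrative, not from the paper):

```python
def train_perceptron(data, epochs=100, lr=0.1):
    """Classic single-layer perceptron learning rule.
    data: list of ((x1, x2), label) pairs with label in {-1, +1}.
    Stops early once an epoch produces no misclassifications."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for (x1, x2), y in data:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else -1
            if pred != y:
                # Nudge the separating hyperplane toward the mistake.
                w[0] += lr * y * x1
                w[1] += lr * y * x2
                b += lr * y
                errors += 1
        if errors == 0:
            break
    return w, b

# AND-like toy set: linearly separable, so convergence is guaranteed.
data = [((0, 0), -1), ((0, 1), -1), ((1, 0), -1), ((1, 1), 1)]
w, b = train_perceptron(data)
```

On an NLS problem this loop never reaches zero errors; the RDP's answer, per the abstract, is to peel off maximal LS subsets and stack such units recursively.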
APA, Harvard, Vancouver, ISO, and other styles
47

Cavuoti, S., C. Tortora, M. Brescia, et al. "Cooperative photometric redshift estimation." Proceedings of the International Astronomical Union 12, S325 (2016): 166–72. http://dx.doi.org/10.1017/s1743921317001296.

Full text
Abstract:
In modern galaxy surveys, photometric redshifts play a central role in a broad range of studies, from gravitational lensing and dark matter distribution to galaxy evolution. Using a dataset of ~25,000 galaxies from the second data release of the Kilo Degree Survey (KiDS), we obtain photometric redshifts with five different methods: (i) Random forest, (ii) Multi Layer Perceptron with Quasi Newton Algorithm, (iii) Multi Layer Perceptron with an optimization network based on the Levenberg-Marquardt learning rule, (iv) the Bayesian Photometric Redshift model (or BPZ) and (v) a classical SED template fitting procedure (Le Phare). We show how SED fitting techniques can provide useful information on the galaxy spectral type, which can be used to improve the capability of machine learning methods by constraining systematic errors and reducing the occurrence of catastrophic outliers. We use such classification to train specialized regression estimators, demonstrating that such a hybrid approach, involving SED fitting and machine learning in a single collaborative framework, is capable of improving the overall prediction accuracy of photometric redshifts.
APA, Harvard, Vancouver, ISO, and other styles
48

Phumrattanaprapin, Khanittha, and Punyaphol Horata. "Extended Hierarchical Extreme Learning Machine with Multilayer Perceptron." ECTI Transactions on Computer and Information Technology (ECTI-CIT) 10, no. 2 (2017): 196–204. http://dx.doi.org/10.37936/ecti-cit.2016102.68266.

Full text
Abstract:
The Deep Learning approach provides high classification performance, especially on image classification problems. However, a shortcoming of the traditional Deep Learning method is the large time scale of training. The hierarchical extreme learning machine (H-ELM) framework was based on the hierarchical learning architecture of the multilayer perceptron to address this problem. H-ELM is composed of two parts; the first entails unsupervised multilayer encoding, and the second is the supervised feature classification. H-ELM can give a higher accuracy rate than the traditional ELM. However, there still remains room to enhance its classification performance. This paper therefore proposes a new method termed the extended hierarchical extreme learning machine (EH-ELM), which extends the number of layers in the supervised portion of the H-ELM from a single layer to multiple layers. To evaluate the performance of the EH-ELM, various classification datasets were studied and compared with the H-ELM and the multilayer ELM, as well as various state-of-the-art deep architecture methods. The experimental results show that the EH-ELM improved the accuracy rates over most other methods.
APA, Harvard, Vancouver, ISO, and other styles
49

Amellas, Yousra, Outman El Bakkali, Abdelouahed Djebli, and Adil Echchelh. "Short-term wind speed prediction based on MLP and NARX network models." Indonesian Journal of Electrical Engineering and Computer Science 18, no. 1 (2020): 150. http://dx.doi.org/10.11591/ijeecs.v18.i1.pp150-157.

Full text
Abstract:
The article aims to predict wind speed using two artificial neural network models. The first model is a multilayer perceptron (MLP) trained by the back-propagation algorithm, and the second is a recurrent neural network trained as a NARX model. The two models share the same network structure: an input layer with four inputs (wind speed, pressure, temperature, and humidity), an intermediate layer of 20 neurons with an activation function, and a single output layer characterized by wind speed and a linear function. NARX shows the best results, with a regression coefficient R = 0.984 and RMSE = 0.314.
APA, Harvard, Vancouver, ISO, and other styles
50

Al-Saif, Adel M., Mahmoud Abdel-Sattar, Abdulwahed M. Aboukarima, and Dalia H. Eshra. "Application of a multilayer perceptron artificial neural network for identification of peach cultivars based on physical characteristics." PeerJ 9 (June 17, 2021): e11529. http://dx.doi.org/10.7717/peerj.11529.

Full text
Abstract:
In the fresh fruit industry, identification of fruit cultivars and fruit quality is of vital importance. In the current study, nine peach cultivars (Dixon, Early Grande, Flordaprince, Flordastar, Flordaglo, Florda 834, TropicSnow, Desertred, and Swelling) were evaluated for differences in skin color, firmness, and size. Additionally, a multilayer perceptron (MLP) artificial neural network was applied for identification of the cultivars according to these attributes. The MLP was trained with an input layer including six input nodes, a single hidden layer with six hidden nodes, and an output layer with nine output nodes. A hyperbolic tangent activation function was used in the hidden layer, and the cross-entropy error was used because the softmax activation function was applied to the output layer. Results showed that the cross-entropy error was 0.165. The peach identification process was significantly affected by the following variables in order of contribution (normalized importance): polar diameter (100%), L∗ (89.0%), b∗ (88.0%), a∗ (78.5%), firmness (71.3%), and cross diameter (37.5%). The MLP was found to be a viable method of peach cultivar identification and classification because few identifying attributes were required and an overall classification accuracy of 100% was achieved in the testing phase. Measurements and quantitative discrimination of peach properties are provided in this research; these data may help enhance the processing efficiency and quality of processed peaches.
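The forward pass of the architecture described (six inputs, six tanh hidden nodes, nine softmax outputs) can be sketched as follows; the random weights are purely illustrative stand-ins, not the trained network from the study:

```python
import math
import random

def softmax(z):
    """Numerically stable softmax over a list of logits."""
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def mlp_forward(x, W1, b1, W2, b2):
    """One forward pass: tanh hidden layer, softmax output layer
    (cross-entropy loss would pair with softmax during training)."""
    h = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + bj)
         for row, bj in zip(W1, b1)]
    logits = [sum(wi * hi for wi, hi in zip(row, h)) + bj
              for row, bj in zip(W2, b2)]
    return softmax(logits)

random.seed(0)
n_in, n_hid, n_out = 6, 6, 9  # layer sizes from the abstract
W1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hid)]
b1 = [0.0] * n_hid
W2 = [[random.uniform(-1, 1) for _ in range(n_hid)] for _ in range(n_out)]
b2 = [0.0] * n_out

# One (hypothetical) normalized feature vector of six physical attributes.
probs = mlp_forward([0.5] * n_in, W1, b1, W2, b2)
```

The predicted cultivar would be `probs.index(max(probs))`; the softmax output is a probability distribution over the nine cultivar classes.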
APA, Harvard, Vancouver, ISO, and other styles