Academic literature on the topic 'Kernelized multivariate Fisher discriminant'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Kernelized multivariate Fisher discriminant.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Kernelized multivariate Fisher discriminant"

1

Nakkiran, Arunadevi, and Vidyaa Thulasiraman. "Elastic net feature selected multivariate discriminant mapreduce classification." Indonesian Journal of Electrical Engineering and Computer Science 26, no. 1 (2022): 587–96. https://doi.org/10.11591/ijeecs.v26.i1.pp587-596.

Abstract:
Analyzing big stream data and other valuable information is a significant task. Several conventional methods have been designed to analyze big stream data, but scheduling accuracy and time complexity remain significant issues. To resolve them, an elastic-net kernelized multivariate discriminant MapReduce classification (EKMDMC) is introduced, whose novelty lies in elastic-net regularization-based feature selection and a kernelized multivariate Fisher discriminant MapReduce classifier. Initially, the EKMDMC technique performs feature selection to improve prediction accuracy using the elastic-net regularization method, which selects relevant features such as central processing unit (CPU) time, memory, bandwidth, and energy based on a regression function. After the relevant features are selected, the kernelized multivariate Fisher discriminant MapReduce classifier is used to schedule the tasks to optimize the processing unit. A kernel function is used to measure the similarity between stream data tasks and the means of the available classes. Experimental evaluation shows that the proposed EKMDMC technique provides better performance in terms of resource-aware predictive scheduling efficiency, false positive rate, scheduling time, and memory consumption.
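The kernelized multivariate Fisher discriminant that gives this topic its name projects data into a kernel-induced feature space and separates classes along the direction that maximizes between-class scatter relative to within-class scatter. As rough orientation only, here is a minimal NumPy sketch of a binary kernel Fisher discriminant; the RBF kernel, the regularization constant `reg`, and the nearest-projected-mean decision rule are illustrative assumptions, not the procedure of any paper listed here.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    # Pairwise squared Euclidean distances mapped through exp(-gamma * d^2)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_kfd(X, y, gamma=0.5, reg=1e-3):
    """Binary kernel Fisher discriminant: solve (N + reg*I) alpha = M1 - M0."""
    K = rbf_kernel(X, X, gamma)
    n = len(y)
    M, N = [], np.zeros((n, n))
    for c in (0, 1):
        Kc = K[:, y == c]                          # kernel columns of class c
        nc = Kc.shape[1]
        M.append(Kc.mean(axis=1))                  # class mean in dual coordinates
        N += Kc @ (np.eye(nc) - np.full((nc, nc), 1.0 / nc)) @ Kc.T
    alpha = np.linalg.solve(N + reg * np.eye(n), M[1] - M[0])
    mu = [float(alpha @ M[c]) for c in (0, 1)]     # projected class means
    return alpha, mu

def predict_kfd(X_train, alpha, mu, X_new, gamma=0.5):
    z = rbf_kernel(X_new, X_train, gamma) @ alpha  # 1-D projections
    return (np.abs(z - mu[1]) < np.abs(z - mu[0])).astype(int)
```

The regularization term is what keeps the within-class scatter matrix N invertible; without it the dual problem is singular for any finite sample.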
3

Saeroni, Amanah, Memi Nor Hayati, and Rito Goejantoro. "KLASIFIKASI TINGKAT KELANCARAN NASABAH DALAM MEMBAYAR PREMI DENGAN MENGGUNAKAN METODE K-NEAREST NEIGHBOR DAN ANALISIS DISKRIMINAN FISHER (Studi kasus: Data Nasabah PT. Prudential Life Samarinda Tahun 2019)." Jurnal Statistika Universitas Muhammadiyah Semarang 8, no. 2 (2020): 88–94. http://dx.doi.org/10.26714/jsunimus.8.2.2020.88-94.

Abstract:
Classification is a technique for building a model from data whose group membership is already known; the resulting model is then used to classify new objects. The K-Nearest Neighbor (K-NN) algorithm classifies a new object based on its K nearest neighbors. Fisher discriminant analysis is a multivariate technique for separating objects into different groups by forming a discriminant function that allocates new objects to groups. This research aims to classify customers' premium payment status using the K-NN method and Fisher discriminant analysis, and to compare the classification accuracy of the two methods. The data used are the insurance customer data of PT. Prudential Life Samarinda in 2019, with current or non-current premium payment status and four independent variables: age, duration of premium payment, income, and premium payment amount. The comparison shows that the K-NN method has a higher level of accuracy than Fisher discriminant analysis for classifying insurance customers' premium payment status: the misclassification rate measured by the APER (Apparent Error Rate) is 15% for the K-NN method and 30% for Fisher discriminant analysis.
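This entry's comparison rests on two standard classifiers and the apparent error rate. A hedged plain-NumPy sketch of the three ingredients (binary labels, a majority vote for K-NN, a midpoint threshold for the Fisher rule — simplifications, not the paper's exact setup):

```python
import numpy as np

def aper(y_true, y_pred):
    """Apparent Error Rate: fraction of (training) objects misclassified."""
    return float(np.mean(np.asarray(y_true) != np.asarray(y_pred)))

def knn_predict(X_train, y_train, X, k=3):
    d2 = ((X[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    nn = np.argsort(d2, axis=1)[:, :k]                    # k nearest neighbours
    return (y_train[nn].mean(axis=1) > 0.5).astype(int)   # majority vote

def fisher_predict(X_train, y_train, X):
    m0, m1 = X_train[y_train == 0].mean(0), X_train[y_train == 1].mean(0)
    Sw = np.cov(X_train[y_train == 0].T) + np.cov(X_train[y_train == 1].T)
    w = np.linalg.solve(Sw, m1 - m0)   # Fisher direction S_w^{-1}(m1 - m0)
    cut = w @ (m0 + m1) / 2            # threshold at midpoint of projected means
    return (X @ w > cut).astype(int)
```

Note that APER is computed on the same data used to fit the model, which is why it tends to be optimistic relative to held-out error.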
4

Liu, Xingye, Jingye Li, Xiaohong Chen, Lin Zhou, and Kangkang Guo. "Bayesian discriminant analysis of lithofacies integrate the Fisher transformation and the kernel function estimation." Interpretation 5, no. 2 (2017): SE1–SE10. http://dx.doi.org/10.1190/int-2016-0025.1.

Abstract:
The accurate identification of lithofacies is indispensable for reservoir parameter prediction. In recent years, the application of multivariate statistical methods has gained increasing attention in petroleum geology. For lithofacies identification, the commonly used multivariate statistical methods are discriminant analysis and cluster analysis. Fisher and Bayesian discriminant analyses are two different discriminant analysis methods, each with intrinsic advantages and disadvantages. Considering the discriminant efficiency of different methods, the calculation cost, the difficulty of determining the parameters, and the ability to capture the statistical characteristics of the data, we put forward a new method that incorporates seismic information to classify reservoir lithologies and pore fluids. This method integrates the advantages of Fisher discrimination, the kernel function, and Bayesian discrimination. First, we analyze the training data and search for a projection direction. The data are then transformed along this direction by the Fisher transformation; different kinds of facies can be distinguished more efficiently from the transformed data than from the primitive data. Subsequently, the kernel function is used to estimate the conditional probability density function of the transformed variable, and a classifier is constructed based on Bayesian theory. The pending data are then input to the classifier, and the class whose posterior probability is maximal is extracted as the predicted result at each grid node. A posterior probability distribution of the predicted lithofacies is acquired as well, from which interpreters can evaluate the uncertainty of the results. The ultimate goal of this study is to provide a novel and efficient lithofacies discrimination method. Tests on model and field data indicate that our method obtains more accurate identification results with less uncertainty than conventional Fisher and Bayesian approaches.
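The pipeline this entry describes — Fisher transformation, then kernel density estimation of the transformed variable, then a Bayesian classifier — can be sketched in a few lines. Everything below (binary classes, a 1-D Gaussian KDE with a fixed bandwidth `h`) is an illustrative simplification, not the authors' implementation:

```python
import numpy as np

def fisher_direction(X, y):
    m0, m1 = X[y == 0].mean(0), X[y == 1].mean(0)
    Sw = np.cov(X[y == 0].T) + np.cov(X[y == 1].T)   # pooled within-class scatter
    w = np.linalg.solve(Sw, m1 - m0)
    return w / np.linalg.norm(w)

def kde(z_train, z, h=0.3):
    """Gaussian kernel estimate of a 1-D density, evaluated at points z."""
    u = (z[:, None] - z_train[None, :]) / h
    return np.exp(-0.5 * u ** 2).mean(axis=1) / (h * np.sqrt(2.0 * np.pi))

def bayes_kde_classify(X_train, y, X_new, h=0.3):
    w = fisher_direction(X_train, y)
    z_new = X_new @ w
    # Posterior is proportional to the class prior times the
    # class-conditional density of the Fisher-transformed variable
    post = np.stack([np.mean(y == c) * kde(X_train[y == c] @ w, z_new, h)
                     for c in (0, 1)])
    return post.argmax(axis=0)
```

Normalizing the posterior row-wise would give the per-node posterior probabilities from which the paper's uncertainty evaluation proceeds.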
5

Imran, Sajida, and Young-Bae Ko. "A Novel Indoor Positioning System Using Kernel Local Discriminant Analysis in Internet-of-Things." Wireless Communications and Mobile Computing 2018 (2018): 1–9. http://dx.doi.org/10.1155/2018/2976751.

Abstract:
WLAN-based localization is a key technique for indoor location-based services (LBS). The indoor environment is complex, however: received signal strength (RSS) is highly uncertain, multimodal, and nonlinear, and traditional location estimation methods fail to provide fair estimation accuracy in such an environment. We propose a novel indoor positioning system that performs nonlinear discriminative feature extraction on RSS using kernel local Fisher discriminant analysis (KLFDA). KLFDA extracts location features in a well-preserved kernelized space, in which nonlinear RSS features are characterized effectively. Along with handling the nonlinearity, KLFDA also copes well with the multimodality in the RSS data: by performing KLFDA, the discriminating information contained in the RSS is reorganized and maximally extracted. Prior to feature extraction, we performed outlier detection on the RSS data to remove any anomalies present. Experimental results show that the proposed approach obtains higher positioning accuracy by extracting maximally discriminative location features and discarding outlying information present in the RSS data.
6

Fan, Bing Chen. "Application of Progressively Statistical Discriminant Models." Applied Mechanics and Materials 55-57 (May 2011): 1922–25. http://dx.doi.org/10.4028/www.scientific.net/amm.55-57.1922.

Abstract:
Discriminant analysis is an important branch of multivariate statistical analysis and plays an important part in pattern classification, data mining, machine learning, and related fields. In this paper, based on the principle of progressively statistical discriminant analysis under the Fisher rule, a progressively statistical discriminant model is set up. The authors analyzed data on the occurrence of the second generation of the corn borer over 21 years, from 1985 to 2006 (except 1990), at Linyi, Shandong Province, and set up a three-grade recognition pattern. Testing on the pest data showed fitting rates of 95.24%, 92.31%, and 100%, respectively, and a satisfactory forecast accuracy.
7

Chen, Jing, and Caixia Gao. "Sparse Linear Discriminant Analysis Based on lq Regularization." Frontiers of Chinese Pure Mathematics 1, no. 2 (2023): 31–38. http://dx.doi.org/10.48014/fcpm.20230529001.

Abstract:
Linear discriminant analysis plays an important role in feature extraction, data dimensionality reduction, and classification. With the progress of science and technology, the data to be processed are becoming increasingly large. In high-dimensional situations, however, linear discriminant analysis faces two problems: the projected data lack interpretability, since they are linear combinations of all p features, and the within-class covariance matrix is singular. There are three different formulations of linear discriminant analysis: the multivariate Gaussian model, the Fisher discrimination problem, and the optimal scoring problem. To solve these two problems, this article establishes a model for the kth discriminant component. It first transforms the original Fisher discriminant model by replacing the within-class covariance matrix with a diagonal estimate of the within-class variance, which overcomes the singularity of the matrix, and projects onto an orthogonal projection space to remove the orthogonality constraints; an lq-norm regularization term is then added to enhance interpretability for the purposes of dimensionality reduction and classification. Finally, an iterative algorithm for solving the model and a convergence analysis are given, and it is proved that, for any initial value, the sequence generated by the algorithm is descending and converges to a local minimum of the problem.
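The two fixes this entry applies — a diagonal estimate replacing the singular within-class covariance, and an lq penalty for interpretability — can be illustrated for the q = 1 case by soft-thresholding a diagonal-S_w Fisher direction. This toy sketch (binary classes, a fixed threshold `lam`) is a simplification of the paper's kth-discriminant-component model:

```python
import numpy as np

def sparse_fisher_direction(X, y, lam=0.5):
    """Fisher direction with a diagonal within-class variance estimate,
    shrunk toward sparsity by l1 (soft-threshold) regularization."""
    m0, m1 = X[y == 0].mean(0), X[y == 1].mean(0)
    d = X[y == 0].var(0) + X[y == 1].var(0)    # diagonal estimate of S_w
    w = (m1 - m0) / d                          # avoids inverting a singular S_w
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)   # soft threshold
```

Features whose between-class mean difference is small relative to their within-class variance receive exactly zero weight, which is where the interpretability comes from.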
8

Ле Ван Хыонг, Нгуен Нгок Киенг, Нгуен Данг Хой, and Данг Хунг Куонг. "Applying Multivariate Statistical Methods for Predicting Pinus Forest Fire Danger at Bidoup-Nui Ba National Park." Труды Карадагской научной станции им. Т.И. Вяземского - природного заповедника РАН, no. 1 (13) (April 21, 2021): 45–53. http://dx.doi.org/10.21072/eco.2021.13.05.

Abstract:
The paper presents the results of applying multivariate statistical methods (CCA: canonical correlation analysis; DFA: discriminant function analysis) to determine the canonical correlation between a set of variables {T, H, m1, K} and a set of variables {Pc, Tc} (T: temperature; H: relative humidity; m1: mass of dry fuels; K: burning coefficient, K = m1/M, with M the total mass of fire fuels; Pc: percentage of burned fuels; Tc: burning time), and, through the results of the discriminant function analysis, to set up models for predicting forest fire danger at Bidoup-Nui Ba National Park. Using research data collected in November, December, January, February, and March of the period 2015-2017 from 340 sampling plots (each 2 m x 2 m) at Bidoup-Nui Ba National Park, we carried out data processing in Excel (calculations) and Statgraphics (multivariate statistical methods: CCA and DFA). Three results emerged from our analysis: (i) the canonical correlation between the set of variables {T, H, m1, K} and the set of variables {Pc, Tc} is highly significant (R = 0.675581, P = 3.17×10^-58 << 0.05), so the set of variables {T, H, m1, K} can be used in models predicting forest fire danger; (ii) the coefficients of the standardized and unstandardized canonical discriminant functions (SCDF and UCDF) and the Fisher classification function (FCF) were determined; (iii) two models for predicting forest fire danger were set up (a Mahalanobis distance model and a Fisher classification function model).
9

Li, Hanqi, Mingxing Jia, and Zhizhong Mao. "Dynamic Feature Extraction-Based Quadratic Discriminant Analysis for Industrial Process Fault Classification and Diagnosis." Entropy 25, no. 12 (2023): 1664. http://dx.doi.org/10.3390/e25121664.

Abstract:
This paper introduces a novel method for enhancing fault classification and diagnosis in dynamic nonlinear processes. The method focuses on dynamic feature extraction within multivariate time series data and utilizes dynamic reconstruction errors to augment the feature set. A fault classification procedure is then developed, using the weighted maximum scatter difference (WMSD) dimensionality reduction criterion and quadratic discriminant analysis (QDA) classifier. This method addresses the challenge of high-dimensional, sample-limited fault classification, offering early diagnosis capabilities for online samples with smaller amplitudes than the training set. Validation is conducted using a cold rolling mill simulation model, with performance compared to classical methods like linear discriminant analysis (LDA) and kernel Fisher discriminant analysis (KFD). The results demonstrate the superiority of the proposed method for reliable industrial process monitoring and fault diagnosis.
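Quadratic discriminant analysis, the classifier stage of this entry, fits one Gaussian per class and scores new points by log-likelihood plus log-prior. A bare-bones sketch (omitting the paper's dynamic feature extraction and WMSD dimensionality-reduction steps, which are specific to it):

```python
import numpy as np

def fit_qda(X, y):
    """Fit one Gaussian (mean, covariance, prior) per class."""
    return {c: (X[y == c].mean(0), np.cov(X[y == c].T), float(np.mean(y == c)))
            for c in np.unique(y)}

def qda_predict(params, X):
    classes = list(params)
    scores = []
    for c in classes:
        mu, S, prior = params[c]
        Si, d = np.linalg.inv(S), X - mu
        # log N(x; mu, S) + log prior, dropping the shared 2*pi constant
        scores.append(-0.5 * np.einsum('ij,jk,ik->i', d, Si, d)
                      - 0.5 * np.log(np.linalg.det(S)) + np.log(prior))
    return np.array(classes)[np.argmax(scores, axis=0)]
```

Unlike linear discriminant analysis, each class keeps its own covariance matrix, which is what makes the decision boundary quadratic.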
10

Ainurrochmah, Alifta, Memi Nor Hayati, and Andi M. Ade Satriya. "Alifta Ainurrochmah, Perbandinga." Jurnal Aplikasi Statistika & Komputasi Statistik 11, no. 2 (2020): 37. http://dx.doi.org/10.34123/jurnalasks.v11i2.156.

Abstract:
Classification is a technique for building a model from data whose group membership is already known; the model formed is then used to classify new objects. Fisher discriminant analysis is a multivariate technique for separating objects into different groups. Naive Bayes is a classification technique based on probability and Bayes' theorem, with an assumption of independence. This research aims to compare the classification accuracy of Fisher's discriminant analysis and the naive Bayes method on insurance customers' premium payment status. The data used four independent variables: income, age, premium payment period, and premium payment amount. The misclassification results measured by the APER (Apparent Error Rate) indicate that the naive Bayes method, with an APER of 15.38%, is more accurate than Fisher's discriminant analysis, with an APER of 46.15%, on the insurance customers' premium payment status.