Journal articles on the topic 'Linear classifier'

Consult the top 50 journal articles for your research on the topic 'Linear classifier.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1. Chen, Songcan, and Xubing Yang. "Alternative linear discriminant classifier." Pattern Recognition 37, no. 7 (July 2004): 1545–47. http://dx.doi.org/10.1016/j.patcog.2003.11.008.

2. Barlach, Flemming. "A linear classifier design approach." Pattern Recognition 24, no. 9 (January 1991): 871–77. http://dx.doi.org/10.1016/0031-3203(91)90006-q.

3. Gyamfi, Kojo Sarfo, James Brusey, Andrew Hunt, and Elena Gaura. "Linear classifier design under heteroscedasticity in Linear Discriminant Analysis." Expert Systems with Applications 79 (August 2017): 44–52. http://dx.doi.org/10.1016/j.eswa.2017.02.039.

4. Ellis, Steven P. "When a Constant Classifier is as Good as Any Linear Classifier." Communications in Statistics - Theory and Methods 40, no. 21 (November 2011): 3800–3811. http://dx.doi.org/10.1080/03610926.2010.498650.

5. Pascadi, Manuela A., and Mihai V. Pascadi. "Non‐linear Trainable Classifier in IRd." Kybernetes 22, no. 1 (January 1993): 13–21. http://dx.doi.org/10.1108/eb005953.

6. Zhu, Changming, Xiang Ji, Chao Chen, Rigui Zhou, Lai Wei, and Xiafen Zhang. "Improved linear classifier model with Nyström." PLOS ONE 13, no. 11 (November 5, 2018): e0206798. http://dx.doi.org/10.1371/journal.pone.0206798.

7. Li, Yujian, Bo Liu, Xinwu Yang, Yaozong Fu, and Houjun Li. "Multiconlitron: A General Piecewise Linear Classifier." IEEE Transactions on Neural Networks 22, no. 2 (February 2011): 276–89. http://dx.doi.org/10.1109/tnn.2010.2094624.

8. Bertò, Giulia, Daniel Bullock, Pietro Astolfi, Soichi Hayashi, Luca Zigiotto, Luciano Annicchiarico, Francesco Corsini, et al. "Classifyber, a robust streamline-based linear classifier for white matter bundle segmentation." NeuroImage 224 (January 2021): 117402. http://dx.doi.org/10.1016/j.neuroimage.2020.117402.

9. Kundu, Anirban, Guanxiong Xu, and Chunlin Ji. "Structural Analysis of Cloud Classifier." International Journal of Cloud Applications and Computing 4, no. 1 (January 2014): 63–75. http://dx.doi.org/10.4018/ijcac.2014010106.

Abstract:
In this paper, a structural analysis of the Cloud classifier is discussed to draw a clear distinction between linear and non-linear Cloud structures. The Cloud manager is responsible for managing different activities within the Cloud using distinct fields. A Cloud path protocol is defined to disseminate information from one node to another in an efficient way. The broadcasting complexities of linear and non-linear Clouds are also projected in this paper. Mathematical expressions are established to define different performance factors of the Cloud network. In practical scenarios, a non-linear Cloud structure is more relevant than a linear one.
10. Huang, Kai-Yi, and P. W. Mausel. "Spatial post-processing of spectrally classified video images by a piecewise linear classifier." International Journal of Remote Sensing 14, no. 13 (September 1993): 2563–74. http://dx.doi.org/10.1080/01431169308904293.

11. Vigneron, Vincent, and Hichem Maaref. "M-ary Rank Classifier Combination: A Binary Linear Programming Problem." Entropy 21, no. 5 (April 26, 2019): 440. http://dx.doi.org/10.3390/e21050440.

Abstract:
The goal of classifier combination can be briefly stated as combining the decisions of individual classifiers to obtain a better classifier. In this paper, we propose a method based on the combination of weak rank classifiers because rankings contain more information than unique choices for a many-class problem. The problem of combining the decisions of more than one classifier with raw outputs in the form of candidate class rankings is considered and formulated as a general discrete optimization problem with an objective function based on the distance between the data and the consensus decision. This formulation uses certain performance statistics about the joint behavior of the ensemble of classifiers. Assuming that each classifier produces a ranking list of classes, an initial approach leads to a binary linear programming problem with a simple and global optimum solution. The consensus function can be considered as a mapping from a set of individual rankings to a combined ranking, leading to the most relevant decision. We also propose an information measure that quantifies the degree of consensus between the classifiers to assess the strength of the combination rule that is used. It is easy to implement and does not require any training. The main conclusion is that the classification rate is strongly improved by combining rank classifiers globally. The proposed algorithm is tested on real cytology image data to detect cervical cancer.
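
The consensus formulation in this abstract can be illustrated with a small sketch. The code below is not the authors' exact binary LP; it assumes the Spearman footrule distance, under which the minimum-distance consensus ranking reduces to a linear assignment problem, a binary LP whose relaxation has an integral (hence global) optimum.

```python
# Hedged sketch: consensus of class rankings via linear assignment.
# Not the paper's exact formulation; assumes Spearman footrule distance.
import numpy as np
from scipy.optimize import linear_sum_assignment

def consensus_ranking(rankings):
    """rankings[k, j] = rank position classifier k gives class j (0 = best).
    Returns consensus[j] = consensus rank position of class j."""
    n_classes = rankings.shape[1]
    positions = np.arange(n_classes)
    # cost[j, p] = total disagreement if class j is placed at position p
    cost = np.abs(rankings[:, :, None] - positions[None, None, :]).sum(axis=0)
    class_idx, pos = linear_sum_assignment(cost)  # binary LP, integral optimum
    consensus = np.empty(n_classes, dtype=int)
    consensus[class_idx] = pos
    return consensus

# Three classifiers rank four classes; class 0 wins the consensus here.
R = np.array([[0, 1, 2, 3],
              [1, 0, 2, 3],
              [0, 2, 1, 3]])
print(consensus_ranking(R))  # -> [0 1 2 3]
```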
12. Prathibha, P. H., and C. P. Chandran. "Classification Mining SNPs from Leukaemia Cancer Dataset Using Linear Classifier with ACO." Bonfring International Journal of Data Mining 6, no. 2 (April 30, 2016): 10–15. http://dx.doi.org/10.9756/bijdm.8134.

13. Lee, Chulhee, and Seongyoun Woo. "Linear classifier design in the weight space." Pattern Recognition 88 (April 2019): 210–22. http://dx.doi.org/10.1016/j.patcog.2018.11.024.

14. Trajdos, Pawel, and Robert Burduk. "Linear classifier combination via multiple potential functions." Pattern Recognition 111 (March 2021): 107681. http://dx.doi.org/10.1016/j.patcog.2020.107681.

15. Zhurbenko, N. G. "Linear Classifier and Projection Onto a Polytope." Cybernetics and Systems Analysis 56, no. 3 (May 2020): 485–91. http://dx.doi.org/10.1007/s10559-020-00264-3.

16. Zhang, Zhan, Dongling Zhang, and Yingjie Tian. "Kernel-based multiple criteria linear programming classifier." Procedia Computer Science 1, no. 1 (May 2010): 2407–15. http://dx.doi.org/10.1016/j.procs.2010.04.271.

17. Tsay, Jyh-Jong, and Jing-Doo Wang. "Improving linear classifier for Chinese text categorization." Information Processing & Management 40, no. 2 (March 2004): 223–37. http://dx.doi.org/10.1016/s0306-4573(02)00089-4.

18. Kuncheva, Ludmila I., and Juan J. Rodriguez. "Classifier Ensembles with a Random Linear Oracle." IEEE Transactions on Knowledge and Data Engineering 19, no. 4 (April 2007): 500–508. http://dx.doi.org/10.1109/tkde.2007.1016.

19. Jung, Georg, and Manfred Opper. "Selection of examples for a linear classifier." Journal of Physics A: Mathematical and General 29, no. 7 (April 7, 1996): 1367–80. http://dx.doi.org/10.1088/0305-4470/29/7/010.

20. Feng, Qingxiang, Xingjie Zhu, and Jeng-Shyang Pan. "Global linear regression coefficient classifier for recognition." Optik 126, no. 21 (November 2015): 3234–39. http://dx.doi.org/10.1016/j.ijleo.2015.07.116.

21. Czarnecki, Wojciech Marian, and Jacek Tabor. "Multithreshold Entropy Linear Classifier: Theory and applications." Expert Systems with Applications 42, no. 13 (August 2015): 5591–606. http://dx.doi.org/10.1016/j.eswa.2015.03.007.

22. Tsai, Du-Ming. "Object recognition by a linear weight classifier." Pattern Recognition Letters 16, no. 6 (June 1995): 591–600. http://dx.doi.org/10.1016/0167-8655(95)00003-y.

23. Tsai, Du-Ming, and Ming-fong Chen. "Object recognition by a linear weight classifier." Pattern Recognition Letters 16, no. 6 (June 1995): 591–600. http://dx.doi.org/10.1016/0167-8655(95)80005-e.

24. Sklansky, Jack, and Mark Vriesenga. "Genetic Selection and Neural Modeling of Piecewise-Linear Classifiers." International Journal of Pattern Recognition and Artificial Intelligence 10, no. 05 (August 1996): 587–612. http://dx.doi.org/10.1142/s0218001496000360.

Abstract:
Piecewise-linear mathematical structures form a convenient and important framework for implementing trainable and adaptive pattern classifiers. Neural networks and genetic algorithms offer additional approaches with important benefits for the design of such classifiers. In this paper we show how neural modeling and genetic selection can be applied to piecewise-linear structures to optimize both the topology and the parameter values of the network forming the classifier. Such a classifier will tend to have a low error rate and high robustness. We describe applications of these techniques to an adaptive detector of abnormal tissue in mammograms and a detector of straight lines and edges in noisy aerial images.
25. Shah, Kulin, and Naresh Manwani. "Sparse Reject Option Classifier Using Successive Linear Programming." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4870–77. http://dx.doi.org/10.1609/aaai.v33i01.33014870.

Abstract:
In this paper, we propose an approach for learning sparse reject option classifiers using the double ramp loss L_dr. We use DC programming to find the risk minimizer. The algorithm solves a sequence of linear programs to learn the reject option classifier. We show that the loss L_dr is Fisher consistent and that the excess risk of the loss L_d is upper bounded by the excess risk of L_dr. We derive generalization error bounds for the proposed approach and show its effectiveness through experiments on several real-world datasets. The proposed approach not only performs comparably to the state of the art; it also successfully learns sparse classifiers.
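
As a rough illustration of the loss family named here (the paper's exact parameterization may differ), a double ramp loss can be pieced together from two clipped hinge segments: a confident mistake costs about 1, a prediction inside the rejection band costs about d, and the loss is a difference of convex functions, which is what makes DC programming applicable.

```python
# Illustrative double ramp loss for classification with a reject option.
# Assumed parameterization, not necessarily the paper's exact L_dr.
import numpy as np

def ramp(z):
    """Clipped hinge: 1 for z <= 0, linear on (0, 1), 0 for z >= 1."""
    return np.clip(1.0 - z, 0.0, 1.0)

def double_ramp_loss(margin, rho=0.5, d=0.3, s=0.5):
    """margin = y * f(x); rho = half-width of the reject band;
    d = cost of rejecting; s = slope scale of the ramps.
    ~0 for confident correct predictions, ~d inside the band, ~1 for mistakes."""
    return d * ramp((margin - rho) / s) + (1.0 - d) * ramp((margin + rho) / s)

# The decision rule that goes with it: reject when |f(x)| <= rho.
print(double_ramp_loss(np.array([2.0, 0.0, -2.0])))  # -> [0.  0.3 1. ]
```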
26. Hrebik, Radek, and Jaromir Kukal. "Context Out Classifier." MENDEL 24, no. 1 (June 1, 2018): 101–6. http://dx.doi.org/10.13164/mendel.2018.1.101.

Abstract:
A novel context-out learning approach is discussed as a way of using simple classifiers as the background of a hidden-class system. There are two ways to perform the final classification. Having many hidden classes, we can build their unions by solving a binary optimization task; the resulting system has the best possible sensitivity over all output classes. The other way is to perform a second-level linear classification as a referential approach. The presented techniques are demonstrated on the traditional iris flower task.
27. Kim, Moon-Hwan, Young-Hoon Joo, and Jin-Bae Park. "TS Fuzzy Classifier Using A Linear Matrix Inequality." Journal of Korean Institute of Intelligent Systems 14, no. 1 (February 1, 2004): 46–51. http://dx.doi.org/10.5391/jkiis.2004.14.1.046.

28. Zhang, Zhiwang, Guangxia Gao, Jun Yue, and Yong Shi. "Sparse feature kernel multi-criteria linear programming classifier." Neurocomputing 305 (August 2018): 104–15. http://dx.doi.org/10.1016/j.neucom.2018.04.054.

29. Tharwat, Alaa. "Linear vs. quadratic discriminant analysis classifier: a tutorial." International Journal of Applied Pattern Recognition 3, no. 2 (2016): 145. http://dx.doi.org/10.1504/ijapr.2016.079050.

30. Wang, Chin-Liang, Che-Ho Wei, and Sin-Horng Chen. "Improved systolic array for linear discriminant function classifier." Electronics Letters 22, no. 2 (1986): 85. http://dx.doi.org/10.1049/el:19860058.

31. Bozinovski, Stevo. "A representation theorem for linear pattern classifier training." IEEE Transactions on Systems, Man, and Cybernetics SMC-15, no. 1 (January 1985): 159–61. http://dx.doi.org/10.1109/tsmc.1985.6313405.

32. Wang, Xianji, Xueyi Ye, Bin Li, Xin Li, and Zhenquan Zhuang. "Asymboost-based Fisher linear classifier for face recognition." Journal of Electronics (China) 25, no. 3 (May 2008): 352–57. http://dx.doi.org/10.1007/s11767-006-0213-3.

33. Ye, Qiaolin, Chunxia Zhao, Haofeng Zhang, and Ning Ye. "Distance difference and linear programming nonparallel plane classifier." Expert Systems with Applications 38, no. 8 (August 2011): 9425–33. http://dx.doi.org/10.1016/j.eswa.2011.01.131.

34. Tyagi, Kanishka, and Michael Manry. "Multi-step Training of a Generalized Linear Classifier." Neural Processing Letters 50, no. 2 (September 29, 2018): 1341–60. http://dx.doi.org/10.1007/s11063-018-9915-4.

35. Wu, Qiang, and Ding-Xuan Zhou. "SVM Soft Margin Classifiers: Linear Programming versus Quadratic Programming." Neural Computation 17, no. 5 (May 1, 2005): 1160–87. http://dx.doi.org/10.1162/0899766053491896.

Abstract:
Support vector machine (SVM) soft margin classifiers are important learning algorithms for classification problems. They can be stated as convex optimization problems and are suitable for a large data setting. Linear programming SVM classifiers are especially efficient for very large size samples. But little is known about their convergence, compared with the well-understood quadratic programming SVM classifier. In this article, we point out the difficulty and provide an error analysis. Our analysis shows that the convergence behavior of the linear programming SVM is almost the same as that of the quadratic programming SVM. This is implemented by setting a stepping-stone between the linear programming SVM and the classical 1-norm soft margin classifier. An upper bound for the misclassification error is presented for general probability distributions. Explicit learning rates are derived for deterministic and weakly separable distributions, and for distributions satisfying some Tsybakov noise condition.
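
The linear programming SVM discussed in this abstract is, in its simplest form, the classical 1-norm soft margin classifier. A minimal sketch of that LP using scipy follows; the variable splitting w = u - v is a standard device, and the toy data are illustrative, not from the paper.

```python
# Hedged sketch of the 1-norm soft-margin (linear programming) SVM.
import numpy as np
from scipy.optimize import linprog

def lp_svm(X, y, C=1.0):
    """min ||w||_1 + C * sum(xi)  s.t.  y_i (w . x_i + b) >= 1 - xi_i, xi >= 0.
    Variables stacked as [u (d), v (d), b (1), xi (n)] with w = u - v."""
    n, d = X.shape
    c = np.concatenate([np.ones(2 * d), [0.0], C * np.ones(n)])
    Yx = y[:, None] * X
    # y_i (w . x_i + b) + xi_i >= 1, rewritten as A_ub @ z <= -1
    A_ub = np.hstack([-Yx, Yx, -y[:, None], -np.eye(n)])
    b_ub = -np.ones(n)
    bounds = [(0, None)] * (2 * d) + [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:d] - res.x[d:2 * d], res.x[2 * d]  # w, b

# Toy usage on two separable Gaussian clusters.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (20, 2)), rng.normal(2, 1, (20, 2))])
y = np.array([-1.0] * 20 + [1.0] * 20)
w, b = lp_svm(X, y)
print((np.sign(X @ w + b) == y).mean())  # training accuracy, ~1.0
```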
36. Famouri, Mahmoud, Mohammad Taheri, and Zohreh Azimifar. "Fast Linear SVM Validation Based on Early Stopping in Iterative Learning." International Journal of Pattern Recognition and Artificial Intelligence 29, no. 08 (November 22, 2015): 1551013. http://dx.doi.org/10.1142/s0218001415510131.

Abstract:
Classification is an important field in machine learning and pattern recognition. Amongst various types of classifiers, such as nearest neighbor, neural network and Bayesian classifiers, the support vector machine (SVM) is known as a very powerful classifier. One of the advantages of the SVM in comparison with the other methods is its efficient and adjustable generalization capability. The performance of an SVM classifier depends on its parameters, especially the regularization parameter C, which is usually selected by cross-validation. Despite its generalization ability, the SVM suffers from some limitations, such as its considerably slow training phase. Cross-validation is a very time-consuming part of the training phase, because for any candidate value of the parameter C, the entire process of training and validating must be repeated completely. In this paper, we propose a novel approach for early stopping of the SVM learning algorithm. The proposed early stopping works by integrating the validation part into the optimization part of the SVM training, without losing any generality or degrading the performance of the classifier. Moreover, this method can be used in conjunction with other available accelerator methods, since there is no dependency between our proposed method and those accelerators, so no redundancy arises. Our method was tested and verified on various UCI repository datasets, and the results indicate that it speeds up the learning phase of the SVM without losing any generality or affecting the final model of the classifier.
37. Tama, Bayu Adhi, and Sunghoon Lim. "A Comparative Performance Evaluation of Classification Algorithms for Clinical Decision Support Systems." Mathematics 8, no. 10 (October 16, 2020): 1814. http://dx.doi.org/10.3390/math8101814.

Abstract:
Classification algorithms are widely taken into account for clinical decision support systems. However, it is not always straightforward to understand the behavior of such algorithms on a multiple disease prediction task. When a new classifier is introduced, we will in most cases ask ourselves whether the classifier performs well on a particular clinical dataset or not. The decision to use a classifier mostly relies on the type of data and classification task, so it is often made arbitrarily. In this study, a comparative evaluation of a wide array of classifiers from six different families, i.e., tree, ensemble, neural, probability, discriminant, and rule-based classifiers, is carried out. A number of real-world, publicly available datasets covering different diseases are taken into account in the experiment in order to demonstrate the generalizability of the classifiers in multiple disease prediction. A total of 25 classifiers, 14 datasets, and three different resampling techniques are explored. This study reveals that the classifier most likely to be the best performer is the conditional inference tree forest (cforest), followed by linear discriminant analysis, the generalized linear model, random forest, and the Gaussian process classifier. This work contributes to the existing literature with a thorough benchmark of classification algorithms for multiple disease prediction.
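
For readers who want to reproduce the flavor of such a benchmark in Python, a rough sketch follows. cforest itself is an R implementation (from the party/partykit packages), so a plain random forest stands in for it, and a single scikit-learn dataset stands in for the clinical datasets; both substitutions are assumptions for illustration.

```python
# Hedged sketch of a cross-validated, multi-family classifier benchmark.
from sklearn.datasets import load_breast_cancer
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)  # one clinical dataset as a stand-in
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
models = {
    "LDA": LinearDiscriminantAnalysis(),
    "GLM (logistic)": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "Random forest": RandomForestClassifier(n_estimators=200, random_state=1),
    "Gaussian process": make_pipeline(StandardScaler(), GaussianProcessClassifier(random_state=1)),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
    print(f"{name}: AUC = {scores.mean():.3f} +/- {scores.std():.3f}")
```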
38. Yao, Wei Feng, and Xiao Bao Jia. "An Improved SVM Based on Feature Extension and Feature Selection." Applied Mechanics and Materials 552 (June 2014): 128–32. http://dx.doi.org/10.4028/www.scientific.net/amm.552.128.

Abstract:
The Support Vector Machine (SVM) implicitly maps samples from a lower-dimensional feature space to a higher-dimensional space, and designs a non-linear classifier by optimizing a linear classifier in the higher-dimensional space. This paper proposes an improved SVM method based on feature extension and feature selection. The method explicitly maps the samples to a higher-dimensional feature space, performs feature selection in that space, and finally designs a linear classifier with the selected feature set. The paper illustrates why this technique improves generalization ability. Experimental results on benchmark datasets show that the improved SVM greatly decreases the error rate compared with other classifiers, which demonstrates the feasibility of the proposed SVM.
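
A minimal scikit-learn sketch of the pipeline this abstract describes: explicit mapping to a higher-dimensional space, feature selection there, then a linear classifier. The particular extension (degree-2 polynomial features) and selection method (ANOVA F-score) are assumptions for illustration, not necessarily the paper's choices.

```python
# Hedged sketch: explicit feature extension -> selection -> linear classifier.
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.svm import LinearSVC

X, y = load_breast_cancer(return_X_y=True)
pipe = make_pipeline(
    PolynomialFeatures(degree=2, include_bias=False),  # explicit feature extension
    StandardScaler(),
    SelectKBest(f_classif, k=50),   # feature selection in the extended space
    LinearSVC(C=1.0, max_iter=10000),  # linear classifier on the selected set
)
print(cross_val_score(pipe, X, y, cv=5).mean())
```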
39. Chen, Jun Ying, Jing Chen, and Zeng Xi Feng. "Shape Classification Using Multiple Classifiers with Different Feature Sets." Advanced Materials Research 368-373 (October 2011): 1583–87. http://dx.doi.org/10.4028/www.scientific.net/amr.368-373.1583.

Abstract:
In this paper, a new shape classification method using multiple classifiers based on different feature sets is proposed. Different feature sets are derived from the shapes using different extraction methods; feature extraction is implemented in two ways, Fourier descriptors and Zernike moments. The multiple classifiers comprise a normal-densities-based linear classifier, a k-nearest neighbor classifier, a feed-forward neural network, and a radial basis function neural network classifier. Each classifier is trained on the two feature sets separately to produce two classification results. The final classification result is a combined response of the individual classifiers using six different classifier combination rules, and the results were compared with those derived from multiple classifiers based on the same feature sets and from individual classifiers. In this study we examined the different classification tasks on the Kimia dataset. For these tasks the best combination strategy was the product rule, giving an average recognition rate of 95.83%.
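
The product rule that wins in this study is easy to demonstrate. The sketch below trains two different classifiers on two disjoint feature views (stand-ins for the Fourier descriptor and Zernike moment sets; the digits data likewise stands in for Kimia shapes) and multiplies their class posteriors.

```python
# Hedged sketch of product-rule fusion over two feature sets.
from sklearn.datasets import load_digits
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
# Two disjoint column views as stand-ins for the two descriptor sets.
viewA, viewB = slice(0, 32), slice(32, 64)
clfA = LinearDiscriminantAnalysis().fit(X_tr[:, viewA], y_tr)
clfB = KNeighborsClassifier(5).fit(X_tr[:, viewB], y_tr)
# Product rule: multiply class posteriors elementwise, then take argmax.
proba = clfA.predict_proba(X_te[:, viewA]) * clfB.predict_proba(X_te[:, viewB])
print((proba.argmax(axis=1) == y_te).mean())  # fused test accuracy
```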
40. Jiang, Hongbo, and Yumin Chen. "Neighborhood Granule Classifiers." Applied Sciences 8, no. 12 (December 17, 2018): 2646. http://dx.doi.org/10.3390/app8122646.

Abstract:
Classifiers are divided into linear and nonlinear classifiers. The linear classifiers are built on a basis of some hyper planes. The nonlinear classifiers are mainly neural networks. In this paper, we propose a novel neighborhood granule classifier based on a concept of granular structure and neighborhood granules of datasets. By introducing a neighborhood rough set model, the condition features and decision features of classification systems are respectively granulated to form some condition neighborhood granules and decision neighborhood granules. These neighborhood granules are sets; thus, their calculations are intersection and union operations of sets. A condition neighborhood granule and a decision neighborhood granule form a granular rule, and the collection of granular rules constitutes a granular rule library. Furthermore, we propose two kinds of distance and similarity metrics to measure granules, which are used for the searching and matching of granules. Thus, we design a granule classifier by the similarity metric. Finally, we use the granule classifier proposed in this paper for a classification test with UCI datasets. The theoretical analysis and experiments show that the proposed granule classifier achieves a better classification performance under an appropriate neighborhood granulation parameter.
41. Roh, Seok-Beom, Eun-Jin Hwang, and Tae-Chon Ahn. "Design of Pattern Classification Rule based on Local Linear Discriminant Analysis Classifier by using Differential Evolutionary Algorithm." Journal of Korean Institute of Intelligent Systems 22, no. 1 (February 25, 2012): 81–86. http://dx.doi.org/10.5391/jkiis.2012.22.1.81.

42. Ren, Yanni, Weite Li, and Jinglu Hu. "A semisupervised classifier based on piecewise linear regression model using gated linear network." IEEJ Transactions on Electrical and Electronic Engineering 15, no. 7 (May 11, 2020): 1048–56. http://dx.doi.org/10.1002/tee.23149.

43. Xu, Haitao, Liya Fan, and Xizhan Gao. "TBSTM: A Novel and Fast Nonlinear Classification Method for Image Data." International Journal of Pattern Recognition and Artificial Intelligence 29, no. 08 (November 22, 2015): 1551012. http://dx.doi.org/10.1142/s021800141551012x.

Abstract:
A new classifier for image data classification, named the linear twin bounded support tensor machine (linear TBSTM), is proposed by adding regularization terms to the objective functions, which realizes structural risk minimization and avoids the singularity of matrices. To date, nonlinear classifiers based on STM for image data classification have rarely been seen. To remedy this limitation, a new matrix kernel function is introduced, based on which the nonlinear version of TBSTM is studied with a detailed theoretical derivation, and a nonlinear classifier called nonlinear TBSTM is suggested. To examine the effectiveness of the proposed classifiers, a series of comparative experiments with three linear classifiers, STM, TSTM and PSTM, is performed on 15 binary image classification problems taken from the ORL, YALE and AR datasets. Experimental results show that the proposed classifiers are effective and efficient.
44. Egbo, I., M. Egbo, and S. I. Onyeagu. "Performance of Robust Linear Classifier with Multivariate Binary Variables." Journal of Mathematics Research 7, no. 4 (November 3, 2015): 104. http://dx.doi.org/10.5539/jmr.v7n4p104.

Abstract:
This paper focuses on robust classification procedures in two-group discriminant analysis with multivariate binary variables. A data set based on the normal distribution is generated using the R statistical software, version 2.15.3. Using Bartlett's approximation to chi-square, the data set was found to be homogeneous and was subjected to five linear classifiers, namely: the maximum likelihood discriminant function, Fisher's linear discriminant function, the likelihood ratio function, the full multinomial function, and the nearest neighbour function rule. To judge the performance of these procedures, the apparent error rates for each procedure are obtained for different sample sizes. The results obtained ranked the procedures as follows: Fisher's linear discriminant function, maximum likelihood, full multinomial, likelihood function and nearest neighbour function.
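
A rough sketch of the comparison protocol, under stated assumptions: two groups of multivariate binary data are simulated, and a few scikit-learn stand-ins for the rules named above are scored by their apparent (resubstitution) error rate. Bernoulli naive Bayes stands in for the likelihood-based rules; none of these are the paper's exact implementations.

```python
# Hedged sketch: apparent error rates of linear rules on binary data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import BernoulliNB  # stand-in for likelihood-based rules
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(42)
n = 200
# Two groups with different Bernoulli success probabilities per variable.
X = np.vstack([rng.binomial(1, [0.2, 0.3, 0.7, 0.6], (n, 4)),
               rng.binomial(1, [0.6, 0.7, 0.3, 0.4], (n, 4))])
y = np.repeat([0, 1], n)
for name, clf in [("Fisher LDA", LinearDiscriminantAnalysis()),
                  ("Bernoulli NB", BernoulliNB()),
                  ("1-NN", KNeighborsClassifier(1))]:
    apparent_error = 1.0 - clf.fit(X, y).score(X, y)  # resubstitution error
    print(f"{name}: apparent error = {apparent_error:.3f}")
```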
45. Tzelepis, Christos, Vasileios Mezaris, and Ioannis Patras. "Linear Maximum Margin Classifier for Learning from Uncertain Data." IEEE Transactions on Pattern Analysis and Machine Intelligence 40, no. 12 (December 1, 2018): 2948–62. http://dx.doi.org/10.1109/tpami.2017.2772235.

46. Ma, A. J., P. C. Yuen, and Jian-Huang Lai. "Linear Dependency Modeling for Classifier Fusion and Feature Combination." IEEE Transactions on Pattern Analysis and Machine Intelligence 35, no. 5 (May 2013): 1135–48. http://dx.doi.org/10.1109/tpami.2012.198.

47. Klass, F. "Unidirectional-flow systolic array for linear discriminant function classifier." Electronics Letters 26, no. 20 (1990): 1702. http://dx.doi.org/10.1049/el:19901087.

48. Wang, C. L., C. H. Wei, and S. H. Chen. "Erratum: Improved systolic array for linear discriminant function classifier." Electronics Letters 22, no. 9 (1986): 504. http://dx.doi.org/10.1049/el:19860342.

49. Marques de Sá, J. P., and C. Abreu-Lima. "A new ECG classifier based on linear prediction techniques." Computers and Biomedical Research 19, no. 3 (June 1986): 213–23. http://dx.doi.org/10.1016/0010-4809(86)90017-0.

50. Dasari, Sridhar, and I. V. Murali Krishna. "Combined Classifier for Face Recognition using Legendre Moments." Computer Engineering and Applications Journal 1, no. 2 (December 29, 2012): 107–18. http://dx.doi.org/10.18495/comengapp.v1i2.12.

Abstract:
In this paper, a new combined face recognition method based on Legendre moments with Linear Discriminant Analysis and a Probabilistic Neural Network is proposed. Legendre moments are orthogonal and scale invariant, hence they are suitable for representing the features of face images. The proposed face recognition method consists of three steps: i) feature extraction using Legendre moments, ii) dimensionality reduction using Linear Discriminant Analysis (LDA), and iii) classification using a Probabilistic Neural Network (PNN). Linear Discriminant Analysis searches for the directions of maximum discrimination between classes in addition to reducing dimensionality. The combination of Legendre moments and Linear Discriminant Analysis is used to improve the capability of Linear Discriminant Analysis when few sample images are available. The Probabilistic Neural Network gives fast and accurate classification of face images. Evaluation was performed on two face databases: the first of 400 face images from the Olivetti Research Laboratories (ORL) face database, and the second of thirteen students. The proposed method gives fast and better recognition rates when compared to other classifiers.
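
A compact sketch of the three-step pipeline on the ORL faces (available in scikit-learn as the Olivetti faces). Raw pixels replace Legendre moments here for brevity, and the PNN is a simple Parzen-window implementation; both are assumptions rather than the paper's exact components.

```python
# Hedged sketch: (i) features (raw pixels stand in for Legendre moments),
# (ii) LDA dimensionality reduction, (iii) PNN (Parzen-window) classification.
import numpy as np
from sklearn.datasets import fetch_olivetti_faces  # the ORL face database
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

class PNN:
    """Probabilistic neural network: one Gaussian kernel per training sample;
    a class's score is the mean kernel response over that class's samples."""
    def __init__(self, sigma=2.0):
        self.sigma = sigma
    def fit(self, X, y):
        self.X, self.y, self.classes = X, y, np.unique(y)
        return self
    def predict(self, X):
        d2 = ((X[:, None, :] - self.X[None, :, :]) ** 2).sum(axis=-1)
        k = np.exp(-d2 / (2.0 * self.sigma ** 2))
        scores = np.stack([k[:, self.y == c].mean(axis=1) for c in self.classes], axis=1)
        return self.classes[scores.argmax(axis=1)]

X, y = fetch_olivetti_faces(return_X_y=True)  # 400 images, 40 subjects
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
lda = LinearDiscriminantAnalysis(n_components=39).fit(X_tr, y_tr)  # 40 classes -> 39 dims
pnn = PNN(sigma=2.0).fit(lda.transform(X_tr), y_tr)
print((pnn.predict(lda.transform(X_te)) == y_te).mean())  # test accuracy
```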