
Journal articles on the topic 'SVM (Support Vector Machine)'



Consult the top 50 journal articles for your research on the topic 'SVM (Support Vector Machine).'




1

Kurita, Takio. "Support Vector Machine and Generalization." Journal of Advanced Computational Intelligence and Intelligent Informatics 8, no. 2 (2004): 84–92. http://dx.doi.org/10.20965/jaciii.2004.p0084.

Abstract:
The support vector machine (SVM) has been extended to build nonlinear classifiers using the kernel trick. As a learning model, it offers among the best recognition performance of the many methods currently known, because it is designed to perform well on unseen data. This paper reviews how to enhance generalization in learning classifiers, centering on the SVM.
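A minimal sketch of the kernel trick the abstract refers to: an RBF-kernel SVM learns a nonlinear decision boundary on data that no linear separator fits, and held-out accuracy gauges generalization. The dataset and hyperparameters are illustrative choices, not the paper's.

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# A nonlinearly separable toy dataset
X, y = make_moons(n_samples=400, noise=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# The kernel trick: an implicit nonlinear feature map via the RBF kernel
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X_tr, y_tr)
test_acc = clf.score(X_te, y_te)  # accuracy on unseen data gauges generalization
```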
2

Shanmugapriya, P., and Y. Venkataramani. "Analysis of Speaker Verification System Using Support Vector Machine." JOURNAL OF ADVANCES IN CHEMISTRY 13, no. 10 (2017): 6531–42. http://dx.doi.org/10.24297/jac.v13i10.5839.

Abstract:
The integration of the GMM supervector and the Support Vector Machine (SVM) has become one of the most popular strategies in text-independent speaker verification. This paper describes the application of the Fuzzy Support Vector Machine (FSVM) to the classification of speakers using GMM supervectors. Supervectors are formed by stacking the mean vectors of GMMs adapted from the UBM using maximum a posteriori (MAP) estimation. GMM supervectors characterize a speaker's acoustic characteristics and are used to develop a speaker-dependent fuzzy SVM model. Introducing fuzzy theory into the support vector machine yields better classification accuracy and requires fewer support vectors. Experiments were conducted on the 2001 NIST speaker recognition evaluation corpus. The performance of the GMM-FSVM-based speaker verification system is compared with conventional GMM-UBM and GMM-SVM systems. Experimental results indicate that the fuzzy SVM-based speaker verification system with GMM supervectors achieves better performance than the GMM-UBM system.
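A hedged sketch of supervector construction as described above: stack the mean vectors of a GMM fitted to an utterance's feature frames. A real system MAP-adapts a universal background model rather than fitting a GMM from scratch, and the random frames below are a stand-in for MFCC features.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
frames = rng.normal(size=(500, 13))          # stand-in for 13-dim MFCC frames
gmm = GaussianMixture(n_components=8, random_state=0).fit(frames)

# Stack the 8 component means (13 dims each) into one 104-dim supervector
supervector = gmm.means_.reshape(-1)
```

The supervector then serves as a fixed-length input to an SVM (or FSVM) classifier.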
3

Cuong, Nguyen The, and Huynh The Phung. "WEIGHTED STRUCTURAL SUPPORT VECTOR MACHINE." Journal of Computer Science and Cybernetics 37, no. 1 (2021): 43–56. http://dx.doi.org/10.15625/1813-9663/37/1/15396.

Abstract:
In binary classification problems, the two classes of data differ from each other, and the problem becomes more complicated when the clusters within each class also follow different trends. Traditional algorithms such as the Support Vector Machine (SVM) or the Twin Support Vector Machine (TWSVM) cannot sufficiently exploit structural information at cluster granularity, which limits their capability to model data trends. The Structural Twin Support Vector Machine (S-TWSVM) exploits structural information at cluster granularity to learn a representative hyperplane, so its modeling capability is better than that of TWSVM. However, on datasets where each class consists of clusters with different trends, S-TWSVM's modeling capability remains restricted, and its training time is not improved over TWSVM. This paper proposes a new Weighted Structural Support Vector Machine (WS-SVM) for binary classification problems with a class-vs-clusters strategy. Experimental results show that WS-SVM can describe the tendency of the distribution of cluster information. Furthermore, both theory and experiment show that the training time of WS-SVM is significantly improved compared to S-TWSVM.
4

Besrour, Amine, and Riadh Ksantini. "Incremental Subclass Support Vector Machine." International Journal on Artificial Intelligence Tools 28, no. 07 (2019): 1950020. http://dx.doi.org/10.1142/s0218213019500209.

Abstract:
The Support Vector Machine (SVM) is a very competitive linear classifier based on a convex optimization problem, in which the support vectors fully describe the decision boundary. Hence, the SVM is sensitive to data spread: it does not take into account the existence of class subclasses, nor does it minimize data dispersion to improve classification performance. The Kernel Subclass SVM (KSSVM) was therefore proposed to handle multimodal data and minimize data dispersion. Nevertheless, KSSVM has difficulty classifying sequentially obtained data and handling large-scale datasets, since it is based on batch learning. For this reason, we propose a novel incremental KSSVM (iKSSVM) that handles dynamic and large data properly. The iKSSVM is still based on a convex optimization problem and incrementally minimizes data dispersion within and between data subclasses, in order to improve discriminative power and classification performance. An extensive comparative evaluation of the iKSSVM against batch KSSVM, as well as other contemporary incremental classifiers, on real-world datasets clearly shows its superiority in terms of classification accuracy.
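The paper's iKSSVM additionally models subclasses, which this sketch does not attempt; it only illustrates the incremental (chunk-by-chunk) training setting the abstract contrasts with batch learning, using SGDClassifier with hinge loss as a common stand-in for an online linear SVM.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

clf = SGDClassifier(loss="hinge", random_state=0)  # hinge loss ~ linear SVM
for start in range(0, len(X), 100):                # data arrives in 100-sample chunks
    clf.partial_fit(X[start:start + 100], y[start:start + 100],
                    classes=np.array([0, 1]))      # classes declared for online updates
acc = clf.score(X, y)
```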
5

Rizal, Reyhan Achmad, Imron Sanjaya Girsang, and Sidik Apriyadi Prasetiyo. "Klasifikasi Wajah Menggunakan Support Vector Machine (SVM)." REMIK (Riset dan E-Jurnal Manajemen Informatika Komputer) 3, no. 2 (2019): 1. http://dx.doi.org/10.33395/remik.v3i2.10080.

Abstract:
Face classification is a technique for distinguishing the characteristic facial patterns of a person. A face classification system is an application that enables a machine to recognize a person's face from facial images that have been trained and stored in the machine's database. Face classification can be performed in various ways, one of which is the support vector machine (SVM) method. In this study, samples were taken at subject tilt angles of (-90°, -70°, -45°, -25°, -5°) and (+90°, +70°, +45°, +25°, +5°) with an image size of 640x480. The face classification system in this study was built using the support vector machine (SVM) method and the Matlab programming language. The study achieved a true-detection rate of 90% and a false-detection rate of 10% on a sample of 200 subjects.
Keywords: face classification, tilt angle, SVM
6

Pan, Yuqing, Wenpeng Zhai, Wei Gao, and Xiangjun Shen. "If-SVM: Iterative factoring support vector machine." Multimedia Tools and Applications 79, no. 35-36 (2020): 25441–61. http://dx.doi.org/10.1007/s11042-020-09179-9.

7

Panja, Rupan, and Nikhil R. Pal. "MS-SVM: Minimally Spanned Support Vector Machine." Applied Soft Computing 64 (March 2018): 356–65. http://dx.doi.org/10.1016/j.asoc.2017.12.017.

8

Stawska, Zofia. "SUPPORT VECTOR MACHINE IN GENDER RECOGNITION." Information System in Management 6, no. 4 (2017): 318–29. http://dx.doi.org/10.22630/isim.2017.6.4.6.

Abstract:
In the paper, Support Vector Machine (SVM) methods are discussed. The SVM algorithm is a very strong classification tool. Its capability in gender recognition in comparison with the other methods is presented here. Different sets of face features derived from the frontal facial image such as eye corners, nostrils, mouth corners etc. are taken into account. The efficiency of different sets of facial features in gender recognition using SVM method is examined.
9

Han, Henry, and Xiaoqian Jiang. "Overcome Support Vector Machine Diagnosis Overfitting." Cancer Informatics 13s1 (January 2014): CIN.S13875. http://dx.doi.org/10.4137/cin.s13875.

Abstract:
Support vector machines (SVMs) are widely employed in the molecular diagnosis of disease for their efficiency and robustness. However, no previous research has analyzed their overfitting in disease diagnosis based on high-dimensional omics data, which is essential to avoid deceptive diagnostic results and enhance clinical decision making. In this work, we comprehensively investigate this problem from both theoretical and practical standpoints to unveil the special characteristics of SVM overfitting. We found that disease diagnosis under an SVM classifier inevitably encounters overfitting under a Gaussian kernel because of the large data variations generated by high-throughput profiling technologies. Furthermore, we propose a novel sparse-coding kernel approach to overcome SVM overfitting in disease diagnosis. Unlike traditional ad hoc parameter-tuning approaches, it not only robustly conquers the overfitting problem but also achieves good diagnostic accuracy. To our knowledge, it is the first rigorous method proposed to overcome SVM overfitting. Finally, we propose a novel biomarker discovery algorithm, Gene-Switch-Marker (GSM), which captures meaningful biomarkers by taking advantage of SVM overfitting on single genes.
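A small demonstration of the Gaussian-kernel overfitting the abstract describes: an overly narrow kernel (large gamma) memorises the training set and generalises poorly on higher-dimensional data. The dataset and gamma value are illustrative, not the paper's.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# A moderately high-dimensional classification problem
X, y = make_classification(n_samples=200, n_features=50, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# With a very large gamma the RBF kernel matrix is nearly the identity:
# every training point becomes its own support vector (memorisation)
narrow = SVC(kernel="rbf", gamma=10.0).fit(X_tr, y_tr)
train_gap = narrow.score(X_tr, y_tr) - narrow.score(X_te, y_te)  # large gap = overfitting
```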
10

Tian, Yingjie, Yong Shi, and Xiaohui Liu. "RECENT ADVANCES ON SUPPORT VECTOR MACHINES RESEARCH." Technological and Economic Development of Economy 18, no. 1 (2012): 5–33. http://dx.doi.org/10.3846/20294913.2012.661205.

Abstract:
Support vector machines (SVMs), with their roots in Statistical Learning Theory (SLT) and optimization methods, have become powerful tools for problem solution in machine learning. SVMs reduce most machine learning problems to optimization problems, and optimization lies at the heart of SVMs. Many SVM algorithms involve solving not only convex problems, such as linear programming, quadratic programming, second-order cone programming, and semi-definite programming, but also non-convex and more general optimization problems, such as integer programming, semi-infinite programming, bi-level programming, and so on. The purpose of this paper is to understand SVM from the optimization point of view and review several representative optimization models in SVMs and their applications in economics, in order to promote research interest in both optimization-based SVM theory and economic applications. This paper starts by summarizing and explaining the nature of SVMs. It then discusses optimization models for SVM along three major themes. First, least squares SVM, twin SVM, AUC-maximizing SVM, and fuzzy SVM are discussed for standard problems. Second, the support vector ordinal machine, semisupervised SVM, Universum SVM, robust SVM, knowledge-based SVM, and multi-instance SVM are presented for nonstandard problems. Third, we explore other important issues such as the lp-norm SVM for feature selection, LOO-SVM based on minimizing the LOO error bound, probabilistic outputs for SVM, and rule extraction from SVM. Finally, several applications of SVMs to financial forecasting, bankruptcy prediction, and credit risk analysis are introduced.
11

Liu, Yangwei, Hu Ding, Ziyun Huang, and Jinhui Xu. "Distributed and Robust Support Vector Machine." International Journal of Computational Geometry & Applications 30, no. 03n04 (2020): 213–33. http://dx.doi.org/10.1142/s0218195920500107.

Abstract:
In this paper, we consider the distributed version of Support Vector Machine (SVM) under the coordinator model, where all input data (i.e., points in [Formula: see text] space) of SVM are arbitrarily distributed among [Formula: see text] nodes in some network with a coordinator which can communicate with all nodes. We investigate two variants of this problem, with and without outliers. For distributed SVM without outliers, we prove a lower bound on the communication complexity and give a distributed [Formula: see text]-approximation algorithm to reach this lower bound, where [Formula: see text] is a user specified small constant. For distributed SVM with outliers, we present a [Formula: see text]-approximation algorithm to explicitly remove the influence of outliers. Our algorithm is based on a deterministic distributed top [Formula: see text] selection algorithm with communication complexity of [Formula: see text] in the coordinator model.
12

Wang, Jian Guo, Liang Wu Cheng, Wen Xing Zhang, and Bo Qin. "A Modified Incremental Support Vector Machine for Regression." Applied Mechanics and Materials 135-136 (October 2011): 63–69. http://dx.doi.org/10.4028/www.scientific.net/amm.135-136.63.

Abstract:
The support vector machine (SVM) has been shown to exhibit superior predictive power compared to traditional approaches in many studies, such as mechanical equipment monitoring and diagnosis. However, SVM training is very costly in time and memory due to the enormous amount of training data and the quadratic programming problem involved. To improve SVM training speed and accuracy, we propose a modified incremental support vector machine (MISVM) for regression problems in this paper. The main idea is to use the distance from the margin vectors that violate the Karush-Kuhn-Tucker (KKT) condition to the final decision hyperplane to evaluate the importance of each margin vector: margin vectors whose distance is below a specified value are preserved, while the others are eliminated. The original SVs and the remaining margin vectors are then used to train a new SVM. The proposed MISVM not only eliminates unimportant samples such as noise samples but also preserves the important ones. The effectiveness of the proposed MISVM is demonstrated on two UCI datasets, and these experiments also show that it is competitive with previously published methods.
13

Chen, Xiao Lin, Yan Jiang, Min Jie Chen, Yong Yu, Hong Ping Nie, and Min Li. "A Dynamic Cost Sensitive Support Vector Machine." Advanced Materials Research 424-425 (January 2012): 1342–46. http://dx.doi.org/10.4028/www.scientific.net/amr.424-425.1342.

Abstract:
Many cost-sensitive support vector machine methods are used to handle imbalanced datasets, but the results obtained often fall short of expectations. This paper proposes a promising method, named ADC-SVM, which uses a genetic algorithm to dynamically search for the optimal misclassification cost when building a cost-sensitive support vector machine. We empirically compare ADC-SVM with SVM and cost-sensitive SVM on 8 realistic imbalanced two-class datasets from UCI. The experimental results show that ADC-SVM outperforms the other two methods on all the imbalanced datasets.
14

Abe, Shigeo. "Minimal Complexity Support Vector Machines for Pattern Classification." Computers 9, no. 4 (2020): 88. http://dx.doi.org/10.3390/computers9040088.

Abstract:
Minimal complexity machines (MCMs) minimize the VC (Vapnik-Chervonenkis) dimension to obtain high generalization ability. However, because the regularization term is not included in the objective function, the solution is not unique. In this paper, to solve this problem, we discuss fusing the MCM with the standard support vector machine (L1 SVM). This is realized by minimizing the maximum margin in the L1 SVM. We call the machine the minimum complexity L1 SVM (ML1 SVM). The associated dual problem has twice the number of dual variables, and the ML1 SVM is trained by alternately optimizing the dual variables associated with the regularization term and those associated with the VC dimension. We compare the ML1 SVM with other types of SVMs, including the L1 SVM, on several benchmark datasets and show that the ML1 SVM performs better than or comparably to the L1 SVM.
15

Cao, Jian, Shi Yu Sun, and Xiu Sheng Duan. "Optimal Boundary SVM Incremental Learning Algorithm." Applied Mechanics and Materials 347-350 (August 2013): 2957–62. http://dx.doi.org/10.4028/www.scientific.net/amm.347-350.2957.

Abstract:
Support vectors (SVs) cannot be selected completely during SVM incremental learning, so the incremental learning process cannot be sustained. To solve this problem, this article proposes an optimal-boundary SVM incremental learning algorithm. Based on an in-depth analysis of the trend of the classification surface, and making use of the KKT conditions, boundary vectors that include the support vectors are selected to participate in SVM incremental learning. Experiments show that the algorithm completely covers the support vectors and produces results identical to the classic support vector machine, while also saving considerable time. It therefore provides the conditions for future large-sample classification and sustainable incremental learning.
16

Shi, Xu Chao, Qi Xia Liu, and Xiu Juan Lv. "Application of SVM in Predicting the Strength of Cement Stabilized Soil." Applied Mechanics and Materials 160 (March 2012): 313–17. http://dx.doi.org/10.4028/www.scientific.net/amm.160.313.

Abstract:
The Support Vector Machine is a powerful machine learning technique based on statistical learning theory. This paper investigates the potential of a support vector machine regression approach to model the strength of cement-stabilized soil from test data. A Support Vector Machine model is proposed to predict the compressive strength of cement-stabilized soil, and the effect of the choice of kernel function on the model is also analyzed. The results show that the Support Vector Machine is more precise in measuring the strength of cement-stabilized soil than traditional methods. The method has the advantages of a simple structure, excellent learning capability, and good application prospects, and it provides a novel way of measuring the strength of cement-stabilized soil.
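A hedged sketch of SVM regression for strength prediction of the kind described: the three input columns and the target relation below are synthetic stand-ins for the paper's test data, not real measurements.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(120, 3))   # e.g. cement ratio, curing time, water content (invented)
y = 10 * X[:, 0] + 5 * np.sqrt(X[:, 1]) + rng.normal(0, 0.3, 120)  # synthetic strength

# Scaling matters for RBF kernels; epsilon sets the insensitive tube width
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X[:100], y[:100])
r2 = model.score(X[100:], y[100:])     # R^2 on held-out samples
```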
17

Wei, Li Wei, Qiang Xiao, Ying Zhang, and Xiong Fei Ji. "Credit Risk Evaluation Using a New Classification Model: L1-LS-SVM." Applied Mechanics and Materials 321-324 (June 2013): 1917–20. http://dx.doi.org/10.4028/www.scientific.net/amm.321-324.1917.

Abstract:
The least squares support vector machine (LS-SVM) has the outstanding advantage of lower computational complexity than standard support vector machines. Its shortcomings are the loss of sparseness and robustness, which usually result in slow testing speed and poor generalization performance. In this paper, a least squares support vector machine with an L1 penalty (L1-LS-SVM) is proposed to address these shortcomings. An objective function based on minimizing the 1-norm is chosen to obtain a sparse and robust solution over the whole feasible region, following the idea of basis pursuit (BP). Several UCI datasets are used to demonstrate the effectiveness of this model. The experimental results show that L1-LS-SVM obtains a small number of support vectors and improves the generalization ability of LS-SVM.
18

Zhang, Ling. "Speech Recognization Based on Support Vector Machine." Advanced Materials Research 433-440 (January 2012): 7516–21. http://dx.doi.org/10.4028/www.scientific.net/amr.433-440.7516.

Abstract:
To address the local-minimum problem of neural networks used for speech recognition, this paper employs the support vector machine (SVM) to recognize speech signals with four different components. First, the SVM is used to perform speech recognition; the results are then compared with those obtained by a BP neural network. The comparison shows that the SVM effectively avoids the local minima that affect neural networks and offers accurate and fast classification, indicating that the SVM is feasible for speech signal recognition.
19

GENOV, ROMAN, SHANTANU CHAKRABARTTY, and GERT CAUWENBERGHS. "SILICON SUPPORT VECTOR MACHINE WITH ON-LINE LEARNING." International Journal of Pattern Recognition and Artificial Intelligence 17, no. 03 (2003): 385–404. http://dx.doi.org/10.1142/s0218001403002472.

Abstract:
Training of support vector machines (SVMs) amounts to solving a quadratic programming problem over the training data. We present a simple on-line SVM training algorithm of complexity approximately linear in the number of training vectors, and linear in the number of support vectors. The algorithm implements an on-line variant of sequential minimum optimization (SMO) that avoids the need for adjusting select pairs of training coefficients by adjusting the bias term along with the coefficient of the currently presented training vector. The coefficient assignment is a function of the margin returned by the SVM classifier prior to assignment, subject to inequality constraints. The training scheme lends efficiently to dedicated SVM hardware for real-time pattern recognition, implemented using resources already provided for run-time operation. Performance gains are illustrated using the Kerneltron, a massively parallel mixed-signal VLSI processor for kernel-based real-time video recognition.
20

Xia, Xiao-Lei, Weidong Jiao, Kang Li, and George Irwin. "A Novel Sparse Least Squares Support Vector Machines." Mathematical Problems in Engineering 2013 (2013): 1–10. http://dx.doi.org/10.1155/2013/602341.

Abstract:
The solution of a Least Squares Support Vector Machine (LS-SVM) suffers from the problem of nonsparseness. The Forward Least Squares Approximation (FLSA) is a greedy approximation algorithm with a least-squares loss function. This paper proposes a new Support Vector Machine for which the FLSA is the training algorithm—the Forward Least Squares Approximation SVM (FLSA-SVM). A major novelty of this new FLSA-SVM is that the number of support vectors is the regularization parameter for tuning the tradeoff between the generalization ability and the training cost. The FLSA-SVMs can also detect the linear dependencies in vectors of the input Gramian matrix. These attributes together contribute to its extreme sparseness. Experiments on benchmark datasets are presented which show that, compared to various SVM algorithms, the FLSA-SVM is extremely compact, while maintaining a competitive generalization ability.
21

Yeh, Jih Pin. "Detecting Edge Using Support Vector Machine." Advanced Materials Research 588-589 (November 2012): 974–77. http://dx.doi.org/10.4028/www.scientific.net/amr.588-589.974.

Abstract:
Edge detection is used in many image-processing applications and is currently a crucial technique in the field. Various methods exist for edge detection. Here we show that edge detection can be achieved using a Support Vector Machine (SVM) with supervised learning, where a Laplacian edge detector serves as the instructor of the SVM. This research shows that any classical method can be used to train an SVM as an edge detector.
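A sketch of the teacher-student idea above: a classical Laplacian detector labels the pixels of a synthetic image, and an SVM is trained on 3x3 intensity patches to reproduce those edge labels. The image, threshold, and patch size are arbitrary illustrative choices.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0                      # a bright square: its border is the edge
img += rng.normal(0, 0.02, img.shape)      # mild sensor noise

# Laplacian "instructor": 4-neighbour discrete Laplacian, thresholded to labels
lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
       + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
labels = (np.abs(lap) > 0.5).astype(int)

# 3x3 patch of intensities as the feature vector for each interior pixel
feats = [img[i - 1:i + 2, j - 1:j + 2].ravel()
         for i in range(1, 31) for j in range(1, 31)]
ys = [labels[i, j] for i in range(1, 31) for j in range(1, 31)]
X, y = np.asarray(feats), np.asarray(ys)

clf = SVC(kernel="rbf").fit(X, y)
acc = clf.score(X, y)   # agreement with the Laplacian teacher on the training pixels
```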
22

Fujiwara, Shuhei, Akiko Takeda, and Takafumi Kanamori. "DC Algorithm for Extended Robust Support Vector Machine." Neural Computation 29, no. 5 (2017): 1406–38. http://dx.doi.org/10.1162/neco_a_00958.

Abstract:
Nonconvex variants of support vector machines (SVMs) have been developed for various purposes. For example, robust SVMs attain robustness to outliers by using a nonconvex loss function, while extended [Formula: see text]-SVM (E[Formula: see text]-SVM) extends the range of the hyperparameter by introducing a nonconvex constraint. Here, we consider an extended robust support vector machine (ER-SVM), a robust variant of E[Formula: see text]-SVM. ER-SVM combines two types of nonconvexity from robust SVMs and E[Formula: see text]-SVM. Because of the two nonconvexities, the existing algorithm we proposed needs to be divided into two parts depending on whether the hyperparameter value is in the extended range or not. The algorithm also heuristically solves the nonconvex problem in the extended range. In this letter, we propose a new, efficient algorithm for ER-SVM. The algorithm deals with two types of nonconvexity while never entailing more computations than either E[Formula: see text]-SVM or robust SVM, and it finds a critical point of ER-SVM. Furthermore, we show that ER-SVM includes the existing robust SVMs as special cases. Numerical experiments confirm the effectiveness of integrating the two nonconvexities.
23

Santoso, Murtiyanto, Raymond Sutjiadi, and Resmana Lim. "Indonesian Stock Prediction using Support Vector Machine (SVM)." MATEC Web of Conferences 164 (2018): 01031. http://dx.doi.org/10.1051/matecconf/201816401031.

Abstract:
This project is part of an effort to develop software that provides prediction services based on artificial intelligence (machine learning) for the money-market community. The prediction method used at this early stage combines a Gaussian Mixture Model and a Support Vector Machine, implemented in Python. The system predicts the price of Astra International stock (stock code: ASII.JK) using data taken over a 17-year period from January 2000 to September 2017. Part of the data (80%) was used for training/modeling and the remainder (20%) for testing. The integrated Gaussian Mixture Model and Support Vector Machine system was tested to predict the ASII.JK stock market one day ahead, and the model was compared with the market cumulative return. The results show that the Gaussian Mixture Model-Support Vector Machine stock prediction model offers a significant improvement over the compared models, achieving a Sharpe ratio of 3.22.
24

Jumeilah, Fithri Selva. "Penerapan Support Vector Machine (SVM) untuk Pengkategorian Penelitian." Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) 1, no. 1 (2017): 19–25. http://dx.doi.org/10.29207/resti.v1i1.11.

Abstract:
Research at every college continues to grow, and it is stored in both softcopy and hardcopy. Research should be categorized in order to make searching easier for people who need references. Categorizing research requires a text-mining method, one of which is the Support Vector Machine (SVM). To recognize the characteristics of each category, secondary data consisting of a collection of research abstracts is used. The data is pre-processed in several stages: case folding converts all letters into lowercase, stop-word removal discards very common words, tokenizing discards punctuation, and stemming finds root words by removing prefixes and suffixes. The preprocessed data is then converted into numerical form in the term-weighting stage, which weights the contribution of each word. The term-weighting results provide the training and test data. Training is done by providing input in the form of text data whose class or category is known; the Support Vector Machine algorithm then transforms the input data into a rule, function, or knowledge model that can be used in the prediction process. The study finds that the categorization produced by the SVM is very good, as proven by test results with an accuracy of 90%.
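A hedged sketch of the pipeline described (case folding, stop-word removal, tokenizing, term weighting, then an SVM). TfidfVectorizer covers the first three steps plus the weighting; the stemming step is omitted here, and the tiny corpus is invented purely for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Invented mini-corpus with two categories
docs = ["support vector machines for text classification",
        "kernel methods and margins in machine learning",
        "growing rice and wheat on irrigated farmland",
        "crop rotation improves soil on the farm"]
labels = ["ml", "ml", "agri", "agri"]

model = make_pipeline(
    # case folding, tokenizing, stop-word removal, and tf-idf term weighting
    TfidfVectorizer(lowercase=True, stop_words="english"),
    LinearSVC())
model.fit(docs, labels)
pred = model.predict(["kernel margins in machine learning"])[0]
```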
25

Ratna S, Dwi, Budi Setyono, and Tyara Herdha. "Bullet Image Classification using Support Vector Machine (SVM)." Journal of Physics: Conference Series 693 (February 2016): 012009. http://dx.doi.org/10.1088/1742-6596/693/1/012009.

26

Qu, Xi Long, Mi An Dai, and Zhen Hui Li. "Key Problem in Support Vector Machine Model." Applied Mechanics and Materials 34-35 (October 2010): 1351–54. http://dx.doi.org/10.4028/www.scientific.net/amm.34-35.1351.

Abstract:
This study identifies the development direction of SVM; its subject matter concerns the most crucial and fundamental problems in SVM. Achieving the targets of this paper would promote the further application of SVM and has important theoretical value. In addition, this study is foundational work for kernel analysis methods, and its results can be applied directly to pattern recognition based on kernel analysis methods (such as Kernel Principal Component Analysis and the Kernel Fisher method), so the research results of this paper have good generalization value.
27

Knebel, Tilman, Sepp Hochreiter, and Klaus Obermayer. "An SMO Algorithm for the Potential Support Vector Machine." Neural Computation 20, no. 1 (2008): 271–87. http://dx.doi.org/10.1162/neco.2008.20.1.271.

Abstract:
We describe a fast sequential minimal optimization (SMO) procedure for solving the dual optimization problem of the recently proposed potential support vector machine (P-SVM). The new SMO consists of a sequence of iteration steps in which the Lagrangian is optimized with respect to either one (single SMO) or two (dual SMO) of the Lagrange multipliers while keeping the other variables fixed. An efficient selection procedure for Lagrange multipliers is given, and two heuristics for improving the SMO procedure are described: block optimization and annealing of the regularization parameter ε. A comparison of the variants shows that the dual SMO, including block optimization and annealing, performs efficiently in terms of computation time. In contrast to standard support vector machines (SVMs), the P-SVM is applicable to arbitrary dyadic data sets, but benchmarks are provided against libSVM's ε-SVR and C-SVC implementations for problems that are also solvable by standard SVM methods. For those problems, computation time of the P-SVM is comparable to or somewhat higher than the standard SVM. The number of support vectors found by the P-SVM is usually much smaller for the same generalization performance.
28

Giustolisi, Orazio. "Using a multi-objective genetic algorithm for SVM construction." Journal of Hydroinformatics 8, no. 2 (2006): 125–39. http://dx.doi.org/10.2166/hydro.2006.016b.

Abstract:
Support Vector Machines are kernel machines useful for classification and regression problems. In this paper, they are used for non-linear regression of environmental data. From a structural point of view, Support Vector Machines are particular Artificial Neural Networks, and their training paradigm has some positive implications: the original training approach helps overcome the curse of dimensionality and overly strict assumptions about the statistics of the errors in the data. Support Vector Machines and Radial Basis Function Regularised Networks are presented within a common structural framework for non-linear regression in order to emphasise the training strategy for support vector machines and to better explain the multi-objective approach to their construction. A support vector machine's performance depends on the kernel parameter, input selection, and the optimal dimension of the ε-tube. These are used as decision variables for an evolutionary strategy based on a Genetic Algorithm, whose objective functions are the number of support vectors (for the capacity of the machine) and the fitness on a validation subset (for the model's accuracy in mapping the underlying physical phenomena). The strategy is tested on a case study in groundwater modelling, based on time series of past measured rainfalls and levels, for level prediction at variable time horizons.
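The paper tunes the kernel parameter, inputs, and ε-tube with a multi-objective genetic algorithm; as the simplest stand-in, this sketch tunes the same kinds of quantities for an SVR by cross-validated grid search on synthetic data (a single-objective substitute, named plainly as such).

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(150, 2))
y = np.sin(3 * X[:, 0]) + rng.normal(0, 0.1, 150)   # synthetic environmental signal

search = GridSearchCV(SVR(kernel="rbf"),
                      {"C": [1, 10, 100],
                       "gamma": [0.1, 1.0, 10.0],    # kernel parameter
                       "epsilon": [0.01, 0.1, 0.5]}, # epsilon-tube width
                      cv=3)
search.fit(X, y)
best = search.best_params_   # the configuration a GA would instead evolve toward
```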
APA, Harvard, Vancouver, ISO, and other styles
29

Rafidah, Ali, and Yacob Suhaila. "Modeling River Stream Flow Using Support Vector Machine." Applied Mechanics and Materials 315 (April 2013): 602–5. http://dx.doi.org/10.4028/www.scientific.net/amm.315.602.

Full text
Abstract:
The Support Vector Machine (SVM), a tool from the Artificial Intelligence (AI) field, has been successfully applied to a wide variety of problems, especially river stream flow forecasting. In this paper, SVM is proposed for river stream flow forecasting. To assess the effectiveness of SVM, we used monthly mean stream flow records from the Pahang River at Lubok Paku, Pahang. The performance of the SVM model is compared with the statistical Autoregressive Integrated Moving Average (ARIMA) model, and the results show that the SVM model performs better than the ARIMA model in forecasting stream flow of the Pahang River.
APA, Harvard, Vancouver, ISO, and other styles
30

Bensaoucha, Saddam, Youcef Brik, Sandrine Moreau, Sid Ahmed Bessedik, and Aissa Ameur. "Induction machine stator short-circuit fault detection using support vector machine." COMPEL - The international journal for computation and mathematics in electrical and electronic engineering 40, no. 3 (2021): 373–89. http://dx.doi.org/10.1108/compel-06-2020-0208.

Full text
Abstract:
Purpose This paper provides an effective study to detect and locate inter-turn short-circuit (ITSC) faults in a three-phase induction motor (IM) using the support vector machine (SVM). The characteristics extracted from the analysis of the phase shifts between the stator currents and their corresponding voltages are used as inputs to train the SVM. The latter automatically decides on the IM state: either a healthy motor or a short-circuit fault on one of its three phases. Design/methodology/approach To evaluate the performance of the SVM, three supervised machine learning algorithms, namely multi-layer perceptron neural networks (MLPNN), radial basis function neural networks (RBFNN) and the extreme learning machine (ELM), are used along with the SVM in this study. Thus, all classifiers (SVM, MLPNN, RBFNN and ELM) are tested and the results compared on the same data set. Findings The obtained results show that the SVM outperforms MLPNN, RBFNN and ELM in diagnosing the health status of the IM. In particular, the SVM provides excellent performance because it is able to detect a fault of two short-circuited turns (early detection) when the IM is operating under a low load. Originality/value The originality of this work lies in using the SVM algorithm, with the phase shifts between the stator currents and their voltages as inputs, to detect and locate the ITSC fault.
APA, Harvard, Vancouver, ISO, and other styles
31

ZHENG, SHENG, YUQIU SUN, JINWEN TIAN, and JAIN LIU. "MAPPED LEAST SQUARES SUPPORT VECTOR MACHINE REGRESSION." International Journal of Pattern Recognition and Artificial Intelligence 19, no. 03 (2005): 459–75. http://dx.doi.org/10.1142/s0218001405004058.

Full text
Abstract:
This paper describes a novel version of regression SVM (Support Vector Machines) that is based on the least-squares error. We show that the solution of this optimization problem can be obtained easily once the inverse of a certain matrix is computed. This matrix depends only on the input vectors, not on the labels. Thus, if many learning problems with the same set of input vectors but different sets of labels have to be solved, it makes sense to compute the inverse of the matrix just once and then use it for computing all subsequent models. The computational complexity of training a regression SVM can be reduced to O(N²), just a matrix multiplication operation, and is thus probably faster than known SVM training algorithms whose O(N²) work involves loops. We describe applications from image processing, where the input points are usually of the form {(x0 + dx, y0 + dy) : |dx| < m, |dy| < n}, and every such set of points can be translated to the same set {(dx, dy) : |dx| < m, |dy| < n} by subtracting (x0, y0) from all the vectors. The experimental results demonstrate that the proposed approach is faster than processing each learning problem separately.
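The central observation, that the matrix to be inverted depends only on the inputs, can be sketched with a stripped-down least-squares kernel regression in plain NumPy (a simplification of the paper's formulation: the bias term is omitted, and the kernel and ridge constants are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(60, 2))

def rbf_kernel(A, B, gamma=2.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# The regularized kernel matrix depends only on the inputs X,
# so its inverse can be computed once ...
K = rbf_kernel(X, X)
K_inv = np.linalg.inv(K + 0.01 * np.eye(len(X)))

# ... and reused for any number of label vectors on the same inputs.
y1 = np.sin(3 * X[:, 0])
y2 = X[:, 0] * X[:, 1]
alpha1, alpha2 = K_inv @ y1, K_inv @ y2

fit1 = K @ alpha1  # predictions on the training inputs
fit2 = K @ alpha2
```

Each additional label set costs only a matrix-vector product, which is the amortization the abstract describes.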
APA, Harvard, Vancouver, ISO, and other styles
32

Shi, Haifa, Xinbin Zhao, Ling Zhen, and Ling Jing. "Twin Bounded Support Tensor Machine for Classification." International Journal of Pattern Recognition and Artificial Intelligence 30, no. 01 (2015): 1650002. http://dx.doi.org/10.1142/s0218001416500026.

Full text
Abstract:
Traditional vector-based classifiers, such as the support vector machine (SVM) and the twin support vector machine (TSVM), cannot handle tensor data directly and may not utilize the data information effectively. In this paper, we propose a novel classifier based on tensor data, called the twin bounded support tensor machine (TBSTM), which is an extension of the twin bounded support vector machine (TBSVM). Similar to TBSVM, TBSTM obtains two hyperplanes by solving two quadratic programming problems (QPPs). The computational complexity of each QPP is smaller than that of the support tensor machine (STM). TBSTM not only retains the advantages of TBSVM, but also has its own superior characteristics: (1) it makes full use of the structural information of the data; (2) it has acceptable or better classification accuracy compared to STM, TBSVM and SVM; (3) its computational cost is generally less than STM's; (4) it can deal with large data sets that TBSVM cannot easily handle, especially small-sample-size (S3) problems; (5) it adopts the alternating successive over-relaxation (ASOR) iteration method to solve the optimization problems, which accelerates training. Finally, we demonstrate its effectiveness and superiority through experiments on vector and tensor data.
APA, Harvard, Vancouver, ISO, and other styles
33

Li, Hong Mei, Lin Gen Yang, and Li Hua Zou. "The Research Based on GA-SVM Feature Selection Algorithm." Advanced Materials Research 532-533 (June 2012): 1497–502. http://dx.doi.org/10.4028/www.scientific.net/amr.532-533.1497.

Full text
Abstract:
To obtain a feature subset that yields a higher classification accuracy rate, a method based on genetic algorithms (GA) and support vector machine (SVM) feature selection is proposed. First, the ReliefF algorithm provides a priori information to the GA, and the parameters of the support vector machine are mixed into the genetic encoding; the genetic algorithm then finds the optimal combination of feature subset and support vector machine parameters. Finally, experimental results show that the proposed algorithm achieves a higher classification accuracy rate with a smaller feature subset.
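The wrapper idea can be sketched as a toy genetic search over feature bitmasks with cross-validated SVM accuracy as fitness (assuming scikit-learn; this sketch omits the ReliefF prior and the SVM-parameter genes described in the abstract):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X, y = make_classification(n_samples=200, n_features=15, n_informative=4,
                           n_redundant=0, random_state=2)

def fitness(mask):
    if not mask.any():
        return 0.0
    return cross_val_score(SVC(kernel="rbf"), X[:, mask], y, cv=3).mean()

# A toy GA: bitmask chromosomes, truncation selection,
# uniform crossover and bit-flip mutation.
pop = rng.random((12, X.shape[1])) < 0.5
for generation in range(10):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[::-1][:6]]      # keep the best half
    children = []
    for _ in range(6):
        a, b = parents[rng.integers(6, size=2)]
        child = np.where(rng.random(X.shape[1]) < 0.5, a, b)   # crossover
        child ^= rng.random(X.shape[1]) < 0.05                 # mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
```

Elitism (keeping the parents) guarantees the best mask found so far survives each generation.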
APA, Harvard, Vancouver, ISO, and other styles
34

Díaz-Vico, David, Jesús Prada, Adil Omari, and José Dorronsoro. "Deep support vector neural networks." Integrated Computer-Aided Engineering 27, no. 4 (2020): 389–402. http://dx.doi.org/10.3233/ica-200635.

Full text
Abstract:
Kernel-based Support Vector Machines (SVM), one of the most popular machine learning models, usually achieve top performances in two-class classification and regression problems. However, their training cost is at least quadratic in sample size, making them unsuitable for large-sample problems. Deep Neural Networks (DNNs), by contrast, with a cost linear in sample size, are able to solve big data problems relatively easily. In this work we propose to combine the advanced representations that DNNs achieve in their last hidden layers with the hinge and ϵ-insensitive losses used in two-class SVM classification and regression. We can thus obtain much better scalability while achieving performances comparable to those of SVMs. Moreover, we also show that the resulting Deep SVM models are competitive with standard DNNs in two-class classification problems and have an edge in regression ones.
APA, Harvard, Vancouver, ISO, and other styles
35

Lu, Yumao, and Vwani Roychowdhury. "Parallel randomized sampling for support vector machine (SVM) and support vector regression (SVR)." Knowledge and Information Systems 14, no. 2 (2007): 233–47. http://dx.doi.org/10.1007/s10115-007-0082-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Zhang, Zhengxie, Shuguo Pan, Chengfa Gao, Tao Zhao, and Wang Gao. "Support Vector Machine for Regional Ionospheric Delay Modeling." Sensors 19, no. 13 (2019): 2947. http://dx.doi.org/10.3390/s19132947.

Full text
Abstract:
The distribution of total electron content (TEC) in the ionosphere is irregular and complex, and it is hard to model accurately. The polynomial (POLY) model is used extensively for regional ionosphere modeling in two-dimensional space. However, in active periods of the ionosphere, the POLY model struggles to reflect the distribution and variation of TEC. To address this limitation of the regional POLY model, this paper proposes a new ionosphere modeling method combining the support vector machine (SVM) regression model with the POLY model. Firstly, the POLY model is established using observations of regional continuously operating reference stations (CORS). Then the SVM regression model is trained to compensate for the POLY model's error, and the TEC SVM-P model is obtained by combining the POLY and the SVM. The fitting accuracies of the models are verified with root mean square errors (RMSEs) and static single-frequency precise point positioning (PPP) experiments. The results show that the RMSE of the SVM-P is 0.980 TECU (TEC units), an improvement of 17.3% compared with the POLY model (1.185 TECU). Using SVM-P models, the positioning accuracies of single-frequency PPP are improved by over 40% compared with those using POLY models. The SVM-P is also compared with the back-propagation neural network combined with POLY (BPNN-P), and its performance is also better (1.070 TECU for BPNN-P).
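The two-stage idea, a polynomial trend model plus an SVM trained on its residuals, can be sketched on synthetic data (assuming scikit-learn; the series and all parameters are illustrative stand-ins, not the paper's TEC data):

```python
import numpy as np
from sklearn.svm import SVR

t = np.linspace(0, 4, 300)
tec = 2 + 0.5 * t + 0.8 * np.sin(4 * np.pi * t)   # synthetic stand-in series

# Stage 1: a low-order polynomial captures the smooth trend ...
coeffs = np.polyfit(t, tec, deg=2)
poly_fit = np.polyval(coeffs, t)

# Stage 2: ... and an SVR model is trained on the polynomial's residuals.
residual = tec - poly_fit
svr = SVR(kernel="rbf", C=10.0, gamma=50.0, epsilon=0.01)
svr.fit(t.reshape(-1, 1), residual)
combined = poly_fit + svr.predict(t.reshape(-1, 1))

rmse_poly = np.sqrt(np.mean((tec - poly_fit) ** 2))
rmse_comb = np.sqrt(np.mean((tec - combined) ** 2))
```

The combined model keeps the polynomial's smooth structure while the SVR absorbs the variation the polynomial misses, mirroring the SVM-P construction.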
APA, Harvard, Vancouver, ISO, and other styles
37

Chazar, Chalifa, and Bagus Erawan. "Machine Learning Diagnosis Kanker Payudara Menggunakan Algoritma Support Vector Machine." INFORMASI (Jurnal Informatika dan Sistem Informasi) 12, no. 1 (2020): 67–80. http://dx.doi.org/10.37424/informasi.v12i1.48.

Full text
Abstract:
Breast cancer is the second leading cause of death in women. The disease is difficult to detect in its early phase, and most patients only learn of their condition after it has entered a certain phase, when it is severe and difficult to cure. One form of examination used to diagnose breast cancer is a biopsy. A biopsy is an examination technique performed by taking fluid from the breast using Fine Needle Aspiration (FNA); the FNA biopsy sample is then examined in a laboratory to obtain a diagnosis. Obtaining accurate results from the biopsy process takes a long time. Machine Learning (ML) can be used to search for and discover unique patterns in a data set. The Support Vector Machine (SVM) algorithm was chosen because it can classify values into particular classes and has higher accuracy than other algorithms. This study aims to build an ML application that can diagnose breast cancer using the SVM algorithm, finding data patterns in a set of historical data to produce an accurate diagnosis. The results of this study show that the SVM algorithm can be used to find a data pattern in a set of historical data, producing predictions that determine whether living breast cancer cells are malignant or benign.
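The classification task described here can be sketched with scikit-learn's bundled WDBC dataset, which is likewise derived from FNA biopsy measurements (a minimal sketch, not the authors' application):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# The classic WDBC dataset is built from FNA biopsy measurements,
# matching the diagnostic setting described in the abstract.
X, y = load_breast_cancer(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# Standardizing before the RBF kernel matters: the raw features span
# very different scales.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(Xtr, ytr)
acc = clf.score(Xte, yte)
```

On this dataset a scaled RBF SVM reaches high held-out accuracy, consistent with the abstract's claim about SVM accuracy on biopsy-derived features.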
APA, Harvard, Vancouver, ISO, and other styles
38

Li, Ting, Yunong Yang, Yonghui Wang, Chao Chen, and Jinbao Yao. "Traffic fatalities prediction based on support vector machine." Archives of Transport 39, no. 3 (2016): 21–30. http://dx.doi.org/10.5604/08669546.1225447.

Full text
Abstract:
To effectively predict traffic fatalities and promote the sound development of transportation, a prediction model of traffic fatalities is established based on the support vector machine (SVM). As the prediction accuracy of SVM largely depends on the selection of its parameters, Particle Swarm Optimization (PSO) is introduced to find the optimal parameters. In this paper, a small sample of nonlinear data is used to predict traffic accident fatalities. Traffic accident statistics of China from 1981 to 2012 are chosen as experimental data. The input variables for prediction are highway mileage, vehicle number and population size, while the output variable is traffic fatalities. To verify the validity of the proposed method, a back-propagation neural network (BPNN) prediction model and a plain SVM prediction model are also used to predict traffic fatalities. The results show that, compared with the BPNN and SVM models, the PSO-SVM model has higher prediction precision and smaller errors, and can forecast traffic fatalities more effectively. The method of using particle swarm optimization for SVM parameter optimization is feasible and effective. In addition, it overcomes the problem of "over-learning" in the neural network training process.
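The PSO-for-SVM-parameters idea can be sketched as a bare-bones particle swarm searching (log10 C, log10 gamma) space, with cross-validated accuracy as the objective (assuming scikit-learn; synthetic data stands in for the accident statistics, and all PSO constants are illustrative):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X, y = make_classification(n_samples=200, n_features=8, random_state=4)

def score(pos):
    C, gamma = 10.0 ** pos          # positions live in log10 space
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

# A bare-bones PSO over (log10 C, log10 gamma).
n, dims = 8, 2
pos = rng.uniform(-2, 2, (n, dims))
vel = np.zeros((n, dims))
pbest, pbest_val = pos.copy(), np.array([score(p) for p in pos])
gbest = pbest[pbest_val.argmax()]

for it in range(10):
    r1, r2 = rng.random((2, n, dims))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -3, 3)
    vals = np.array([score(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()]
```

Searching in log space keeps the swarm's steps meaningful across the orders of magnitude that C and gamma typically span.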
APA, Harvard, Vancouver, ISO, and other styles
39

Shabri, Ani, and Mohd Fahmi Abdul Hamid. "Wavelet-support vector machine for forecasting palm oil price." Malaysian Journal of Fundamental and Applied Sciences 15, no. 3 (2019): 398–406. http://dx.doi.org/10.11113/mjfas.v15n3.1149.

Full text
Abstract:
This study examines the feasibility of applying a Wavelet-Support Vector Machine (W-SVM) model to forecasting the palm oil price. The conjunction method W-SVM is obtained by integrating the discrete wavelet transform (DWT) with the support vector machine (SVM). In the W-SVM model, the wavelet transform is used to decompose the data series into two parts: an approximation series and detail series. These decomposed series are then used as inputs to the SVM model to forecast the palm oil price. This study also applies partial-correlation-based input variable selection as a preprocessing step to determine the best inputs to the model. The performance of the W-SVM model is then compared with the classical SVM model and an artificial neural network (ANN) model. The empirical results show that the addition of the wavelet technique in the W-SVM model enhances the forecasting performance of the classical SVM, and that W-SVM performs better than ANN.
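The decomposition-then-forecast pipeline can be sketched with a one-level Haar transform in place of the paper's DWT (assuming scikit-learn; the series, lag count and SVR parameters are illustrative):

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(5)
t = np.arange(512)
price = (np.sin(2 * np.pi * t / 64) + 0.3 * np.sin(2 * np.pi * t / 8)
         + rng.normal(0, 0.05, t.size))          # synthetic "price" series

# One-level Haar transform: a smooth approximation and a detail series.
approx = (price[0::2] + price[1::2]) / np.sqrt(2)
detail = (price[0::2] - price[1::2]) / np.sqrt(2)

# Lagged values of both sub-series predict the next approximation value.
lags = 4
Xf = np.column_stack(
    [approx[i:len(approx) - lags + i] for i in range(lags)]
    + [detail[i:len(detail) - lags + i] for i in range(lags)])
yf = approx[lags:]

split = 200
svr = SVR(kernel="rbf", C=10.0).fit(Xf[:split], yf[:split])
pred = svr.predict(Xf[split:])
rmse = np.sqrt(np.mean((pred - yf[split:]) ** 2))
```

Feeding the SVM separated smooth and detail components, rather than the raw series, is the core of the W-SVM construction.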
APA, Harvard, Vancouver, ISO, and other styles
40

Nti, Isaac Kofi, Adebayo Felix Adekoya, and Benjamin Asubam Weyori. "Efficient Stock-Market Prediction Using Ensemble Support Vector Machine." Open Computer Science 10, no. 1 (2020): 153–63. http://dx.doi.org/10.1515/comp-2020-0199.

Full text
Abstract:
Predicting stock price remains an important subject of discussion among financial analysts and researchers. The advancement of technologies such as artificial intelligence and machine learning techniques has paved the way for better and more accurate prediction of stock prices in recent years. Of late, Support Vector Machines (SVM) have earned popularity among Machine Learning (ML) algorithms used for predicting stock prices. However, a high percentage of studies in algorithmic investment based on SVM have overlooked the overfitting nature of SVM when the input dataset has high noise and high dimension. Therefore, this study proposes a novel homogeneous ensemble classifier called GASVM, based on a support vector machine enhanced with a Genetic Algorithm (GA) for feature selection and SVM kernel parameter optimisation, for predicting the stock market. The GA is introduced to achieve simultaneous optimisation of the diverse design factors of the SVM. Experiments carried out with over eleven (11) years of stock data from the Ghana Stock Exchange (GSE) yielded compelling results. The outcome shows that the proposed model (GASVM) outperformed other classical ML algorithms (Decision Tree (DT), Random Forest (RF) and Neural Network (NN)) in predicting a 10-day-ahead stock price movement, with a prediction accuracy of 93.7% compared with 82.3% (RF), 75.3% (DT), and 80.1% (NN). It can therefore be deduced that the proposed GASVM technique offers a practical approach to feature selection and parameter optimisation of the different design features of the SVM, removing the need for labour-intensive manual parameter tuning.
APA, Harvard, Vancouver, ISO, and other styles
41

Akinyelu, Andronicus A., and Aderemi O. Adewumi. "Improved Instance Selection Methods for Support Vector Machine Speed Optimization." Security and Communication Networks 2017 (2017): 1–11. http://dx.doi.org/10.1155/2017/6790975.

Full text
Abstract:
The support vector machine (SVM) is one of the top picks for pattern recognition and classification tasks. It has been used successfully to classify linearly separable and nonlinearly separable data with high accuracy. However, in terms of classification speed, SVMs are outperformed by many machine learning algorithms, especially when massive datasets are involved. SVM classification speed scales linearly with the number of support vectors, and support vectors increase with dataset size. Hence, SVM classification speed can be greatly improved if the SVM is trained on a reduced dataset. Instance selection techniques are among the most effective techniques for minimizing SVM training time. In this study, two instance selection techniques suitable for identifying relevant training instances are proposed. The techniques are evaluated on a dataset containing 4000 emails and the results obtained are compared with other existing techniques. The results reveal an excellent improvement in SVM classification speed.
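One simple instance-selection heuristic in this spirit, keeping only points whose neighbourhood contains the opposite class, can be sketched as follows (assuming scikit-learn; this is a generic proxy for boundary instances, not the two techniques proposed in the paper):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=10, random_state=6)
Xtr, ytr, Xte, yte = X[:800], y[:800], X[800:], y[800:]

# Keep only instances with at least one opposite-class point among their
# k nearest neighbours -- a rough proxy for "near the decision boundary".
k = 10
nn = NearestNeighbors(n_neighbors=k + 1).fit(Xtr)
_, idx = nn.kneighbors(Xtr)            # idx[:, 0] is the point itself
boundary = np.array([(ytr[i[1:]] != ytr[i[0]]).any() for i in idx])

full = SVC().fit(Xtr, ytr)
reduced = SVC().fit(Xtr[boundary], ytr[boundary])
acc_full = full.score(Xte, yte)
acc_reduced = reduced.score(Xte, yte)
```

Because the SVM decision function depends only on points near the margin, discarding deep-interior instances usually costs little accuracy while shrinking training time and the support set.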
APA, Harvard, Vancouver, ISO, and other styles
42

Shi, Zhi Biao, Quan Gang Song, and Ming Zhao Ma. "Diagnosis for Vibration Fault of Steam Turbine Based on Modified Particle Swarm Optimization Support Vector Machine." Applied Mechanics and Materials 128-129 (October 2011): 113–16. http://dx.doi.org/10.4028/www.scientific.net/amm.128-129.113.

Full text
Abstract:
Due to the influence of human factors and the slow convergence of the particle swarm optimization (PSO) algorithm during parameter selection for the support vector machine (SVM), this paper proposes a modified particle swarm optimization support vector machine (MPSO-SVM). A steam turbine vibration fault diagnosis model was established and failure data were used in fault diagnosis. The results of the application show that the model can automatically optimize the related parameters of the support vector machine and achieve the global optimal solution. The MPSO-SVM strategy is feasible and effective compared with the traditional particle swarm optimization support vector machine (PSO-SVM) and the genetic algorithm support vector machine (GA-SVM).
APA, Harvard, Vancouver, ISO, and other styles
43

ZHANG, LI, WEI-DA ZHOU, TIAN-TIAN SU, and LI-CHENG JIAO. "DECISION TREE SUPPORT VECTOR MACHINE." International Journal on Artificial Intelligence Tools 16, no. 01 (2007): 1–15. http://dx.doi.org/10.1142/s0218213007003163.

Full text
Abstract:
A new multi-class classifier, the decision tree SVM (DTSVM), which is a binary decision tree with a very simple structure, is presented in this paper. In DTSVM, a multi-class classification problem is decomposed into a series of binary classification problems. Here, the binary decision tree is generated using a kernel clustering algorithm, and each non-leaf node represents one binary classification problem. Compared with other multi-class classification methods based on binary SVMs, DTSVM has a smaller scale and lower complexity, needs fewer support vectors, and has faster test speed. The final simulation results confirm the feasibility and validity of DTSVM.
APA, Harvard, Vancouver, ISO, and other styles
44

Widiastuti, Nelly Indriani, Ednawati Rainarli, and Kania Evita Dewi. "Peringkasan dan Support Vector Machine pada Klasifikasi Dokumen." JURNAL INFOTEL 9, no. 4 (2017): 416. http://dx.doi.org/10.20895/infotel.v9i4.312.

Full text
Abstract:
Classification is the process of grouping objects that have the same features or characteristics into several classes. Automatic document classification uses the frequencies of words that appear in the training data as features. A large number of documents causes the number of words appearing as features to increase. Therefore, summarization is chosen to reduce the number of words used in classification. The classification uses the multiclass Support Vector Machine (SVM) method, which is considered to have a good reputation in classification. This research tests the effect of summarization as feature selection on document classification. The summaries reduce the text by 50%. The results show that summarization did not affect the classification accuracy of documents using SVM, but it did improve the accuracy of the Simple Logistic classifier. The classification testing also shows that the accuracy of Naïve Bayes Multinomial (NBM) is better than that of SVM.
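The words-as-features classification step can be sketched as a standard TF-IDF plus linear SVM pipeline (assuming scikit-learn; the toy corpus is invented for illustration and no summarization step is shown):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

docs = ["the stock market rallied on strong earnings",
        "shares fell as the market slumped",
        "the team won the championship game",
        "a late goal decided the final match",
        "investors sold bonds amid rate fears",
        "the striker scored twice in the derby"]
labels = ["finance", "finance", "sport", "sport", "finance", "sport"]

# TF-IDF turns word frequencies into features; LinearSVC classifies them.
clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(docs, labels)
pred = clf.predict(["the market opened higher for investors"])
```

Summarization, as studied in the paper, would shrink the documents before vectorization, reducing the vocabulary the SVM has to weigh.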
APA, Harvard, Vancouver, ISO, and other styles
45

TSAI, YIHJIA, and JIH PIN YEH. "SIMPLIFICATION OF SUPPORT VECTOR SOLUTIONS USING AN ARTIFICIAL BEE COLONY ALGORITHM." International Journal of Pattern Recognition and Artificial Intelligence 26, no. 08 (2012): 1250020. http://dx.doi.org/10.1142/s0218001412500206.

Full text
Abstract:
Support vector machines (SVMs) are a relatively recent machine learning technique. One problem with SVMs is that they are considerably slower in the test phase because of the large number of support vectors, which limits their practical use. To address this problem, we propose an artificial bee colony (ABC) algorithm to search for an optimal subset of the set of support vectors obtained through the training of the SVM, such that the original discriminant function is best approximated. Experimental results show that the proposed ABC algorithm outperforms other compared methods in terms of classification accuracy when the solution is reduced to the same size.
APA, Harvard, Vancouver, ISO, and other styles
46

Huang, Kaizhu, Danian Zheng, Irwin King, and Michael R. Lyu. "Arbitrary Norm Support Vector Machines." Neural Computation 21, no. 2 (2009): 560–82. http://dx.doi.org/10.1162/neco.2008.12-07-667.

Full text
Abstract:
Support vector machines (SVM) are state-of-the-art classifiers. Typically the L2-norm or L1-norm is adopted as a regularization term in SVMs, while other norm-based SVMs, for example the L0-norm SVM or even the L∞-norm SVM, are rarely seen in the literature. The major reason is that the L0-norm describes a discontinuous and nonconvex term, leading to a combinatorially NP-hard optimization problem. In this letter, motivated by Bayesian learning, we propose a novel framework that can implement arbitrary norm-based SVMs in polynomial time. One significant feature of this framework is that only a sequence of sequential minimal optimization problems needs to be solved, thus making it practical in many real applications. The proposed framework is important in the sense that Bayesian priors can be efficiently plugged into most learning methods without knowing their explicit form. Hence, this builds a connection between Bayesian learning and the kernel machines. We derive the theoretical framework, demonstrate how our approach works on the L0-norm SVM as a typical example, and perform a series of experiments to validate its advantages. Experimental results on nine benchmark data sets are very encouraging. The implemented L0-norm is competitive with or even better than the standard L2-norm SVM in terms of accuracy, but with a reduced number of support vectors (9.46% of the number on average). When compared with another sparse model, the relevance vector machine, our proposed algorithm also demonstrates better sparsity, with a training speed over seven times faster.
APA, Harvard, Vancouver, ISO, and other styles
47

Yue, Yan. "A Multi-Classified Method of Support Vector Machine (SVM) Based on Entropy." Applied Mechanics and Materials 241-244 (December 2012): 1629–32. http://dx.doi.org/10.4028/www.scientific.net/amm.241-244.1629.

Full text
Abstract:
This study proposes combining standard SVM classification with information entropy to increase the SVM classification rate and reduce the computational load of SVM testing. The algorithm uses information entropy theory to pre-process the samples' attributes and can eliminate attributes that have little impact on the data classification by introducing a reduction coefficient, thereby reducing the number of support vectors. The results show that this algorithm can reduce the number of support vectors in the classification process and improve the recognition rate on larger sample sets compared with standard SVM and DAGSVM.
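The attribute-pretreatment idea can be sketched with mutual information standing in for the paper's entropy measure, and a threshold playing the role of the reduction coefficient (assuming scikit-learn; all values are illustrative):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=20, n_informative=4,
                           n_redundant=0, random_state=7)

# Rank attributes by mutual information with the label and keep only those
# above a fraction of the maximum (the "reduction coefficient" idea).
mi = mutual_info_classif(X, y, random_state=7)
keep = mi > 0.2 * mi.max()

acc_all = cross_val_score(SVC(), X, y, cv=5).mean()
acc_kept = cross_val_score(SVC(), X[:, keep], y, cv=5).mean()
```

Dropping low-information attributes shrinks the input space the kernel operates on, which tends to reduce the support-vector count without hurting accuracy.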
APA, Harvard, Vancouver, ISO, and other styles
48

Ridouh, Abdelhakim, Daoud Boutana, and Salah Bourennane. "EEG Signals Classification Using Support Vector Machine." Advanced Science, Engineering and Medicine 12, no. 2 (2020): 215–24. http://dx.doi.org/10.1166/asem.2020.2490.

Full text
Abstract:
This paper addresses the classification of real-life healthy and epileptic EEG signals. Our proposed method is based on the discrete wavelet transform (DWT) and the Support Vector Machine (SVM). A five-level wavelet decomposition is applied to each EEG signal, yielding five spectral sub-bands corresponding to the five rhythms (delta, theta, alpha, beta and gamma). After extracting features from each sub-band (energy, standard deviation, and entropy), a moving average (MA) is applied to the resulting feature vectors, which are then used as inputs to the SVM for training and testing. The method is tested on two EEG datasets, normal and epileptic, both with and without the MA, to compare results. Three measures, sensitivity, specificity, and accuracy, are evaluated to assess the performance of the methods.
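The per-sub-band feature extraction (energy, standard deviation, entropy) followed by a moving average can be sketched in plain NumPy on a stand-in signal (a one-dimensional Haar decomposition replaces the paper's unspecified wavelet):

```python
import numpy as np

rng = np.random.default_rng(8)
signal = rng.normal(size=1024)            # stand-in for one EEG channel

def haar_level(x):
    return ((x[0::2] + x[1::2]) / np.sqrt(2),
            (x[0::2] - x[1::2]) / np.sqrt(2))

# Five-level decomposition: collect the detail sub-band at each level and
# keep the final approximation as the slowest rhythm.
subbands = []
approx = signal
for _ in range(5):
    approx, detail = haar_level(approx)
    subbands.append(detail)
subbands.append(approx)

def features(band):
    p = band ** 2
    p = p / p.sum()
    energy = (band ** 2).sum()
    entropy = -(p * np.log(p + 1e-12)).sum()  # entropy of normalized power
    return [energy, band.std(), entropy]

fv = np.array([features(b) for b in subbands]).ravel()

# Moving average over the feature vector, as applied before the SVM stage.
w = 3
smoothed = np.convolve(fv, np.ones(w) / w, mode="valid")
```

The smoothed vector would then be fed to an SVM classifier exactly as the abstract describes.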
APA, Harvard, Vancouver, ISO, and other styles
49

Wu, Qing, and Wenqing Wang. "Piecewise-Smooth Support Vector Machine for Classification." Mathematical Problems in Engineering 2013 (2013): 1–7. http://dx.doi.org/10.1155/2013/135149.

Full text
Abstract:
The support vector machine (SVM) has been applied very successfully in a variety of classification systems. We attempt to solve the primal programming problems of SVM by converting them into smooth unconstrained minimization problems. In this paper, a new twice continuously differentiable piecewise-smooth function is proposed to approximate the plus function, yielding a piecewise-smooth support vector machine (PWSSVM). The novel method can efficiently handle large-scale and high-dimensional problems. Theoretical analysis demonstrates its advantages in efficiency and precision over other smooth functions. PWSSVM is solved using the fast Newton-Armijo algorithm. Experimental results are given to show the training speed and classification performance of our approach.
APA, Harvard, Vancouver, ISO, and other styles
50

Tanaskuli, M., Ali N. Ahmed, Nuratiah Zaini, et al. "Ozone prediction based on support vector machine." Indonesian Journal of Electrical Engineering and Computer Science 17, no. 3 (2020): 1461. http://dx.doi.org/10.11591/ijeecs.v17.i3.pp1461-1466.

Full text
Abstract:
The prediction of tropospheric ozone concentrations is very important due to the negative effects of ozone on human health, the atmosphere and vegetation. Ozone prediction is an intricate procedure, and most conventional models cannot provide accurate predictions. Machine learning techniques have been widely used as an effective tool for prediction. This study investigates the implementation of the Support Vector Machine (SVM) to predict ozone concentrations. The results show that the SVM is capable of predicting ozone concentrations with an acceptable level of accuracy. A sensitivity analysis has been conducted to identify the most influential parameters of the proposed model.
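The prediction-plus-sensitivity-analysis workflow can be sketched with permutation-based sensitivity on synthetic data (assuming scikit-learn; the predictor names and the data-generating relation are invented for illustration, since the study's actual inputs are not listed here):

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(9)
n = 400
temp = rng.uniform(15, 35, n)      # hypothetical predictors
nox = rng.uniform(0, 1, n)
wind = rng.uniform(0, 10, n)
ozone = 0.08 * temp + 0.5 * nox + rng.normal(0, 0.02, n)  # wind: no effect

X = np.column_stack([temp, nox, wind])
X = (X - X.mean(0)) / X.std(0)      # standardize before the RBF kernel
model = SVR(C=10.0, epsilon=0.01).fit(X, ozone)

# Crude sensitivity analysis: permute one input at a time and measure how
# much the in-sample fit degrades.
base = np.mean((model.predict(X) - ozone) ** 2)
sens = []
for j in range(3):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    sens.append(np.mean((model.predict(Xp) - ozone) ** 2) - base)
```

The largest degradation points to the most influential input, which is the kind of ranking the abstract's sensitivity analysis produces.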
APA, Harvard, Vancouver, ISO, and other styles