Academic literature on the topic 'Support Vector Machine Regression'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference papers, and other scholarly sources on the topic 'Support Vector Machine Regression.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Support Vector Machine Regression"

1

Guo, Hu-Sheng, and Wen-Jian Wang. "Dynamical Granular Support Vector Regression Machine." Journal of Software 24, no. 11 (January 3, 2014): 2535–47. http://dx.doi.org/10.3724/sp.j.1001.2013.04472.

2

Sun, Shaochao, and Dao Huang. "Flatheaded Support Vector Machine for Regression." Advanced Science Letters 19, no. 8 (August 1, 2013): 2293–99. http://dx.doi.org/10.1166/asl.2013.4907.

3

Wang, Jian Guo, Liang Wu Cheng, Wen Xing Zhang, and Bo Qin. "A Modified Incremental Support Vector Machine for Regression." Applied Mechanics and Materials 135-136 (October 2011): 63–69. http://dx.doi.org/10.4028/www.scientific.net/amm.135-136.63.

Abstract:
Support vector machines (SVMs) have been shown to exhibit superior predictive power compared to traditional approaches in many studies, such as mechanical equipment monitoring and diagnosis. However, SVM training is very costly in terms of time and memory consumption due to the enormous amount of training data and the quadratic programming problem. In order to improve SVM training speed and accuracy, we propose a modified incremental support vector machine (MISVM) for regression problems in this paper. The main concept is to use the distance from the margin vectors that violate the Karush-Kuhn-Tucker (KKT) conditions to the final decision hyperplane to evaluate the importance of each margin vector; margin vectors whose distance is below a specified value are preserved, and the others are eliminated. Then the original SVs and the remaining margin vectors are used to train a new SVM. The proposed MISVM can not only eliminate unimportant samples such as noise samples, but also preserve the important ones. The effectiveness of the proposed MISVM is demonstrated on two UCI data sets. These experiments also show that the proposed MISVM is competitive with previously published methods.
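The keep-or-discard rule described in this abstract can be illustrated with scikit-learn's `SVR`. This is a simplified sketch of the idea, not the authors' MISVM implementation; the distance threshold and all model parameters are hypothetical choices:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(0, 0.1, size=200)

eps = 0.1
svr = SVR(kernel="rbf", C=10.0, epsilon=eps).fit(X, y)

# A new batch of samples arrives.
X_new = rng.uniform(-3, 3, size=(100, 1))
y_new = np.sin(X_new).ravel() + rng.normal(0, 0.1, size=100)

# Samples falling outside the epsilon-tube violate the KKT conditions
# of the current model; the residual measures how far outside they lie.
residual = np.abs(svr.predict(X_new) - y_new)
violators = residual > eps

# Keep only violators close to the tube; points far away are treated
# as noise and discarded (the 0.3 threshold is a hypothetical value).
keep = violators & (residual < eps + 0.3)

# Retrain on the old support vectors plus the retained new samples.
X_train = np.vstack([X[svr.support_], X_new[keep]])
y_train = np.concatenate([y[svr.support_], y_new[keep]])
svr2 = SVR(kernel="rbf", C=10.0, epsilon=eps).fit(X_train, y_train)
```

The retraining set stays much smaller than the pooled data, which is the source of the speed-up the abstract claims.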
4

Zheng, Sheng, Yuqiu Sun, Jinwen Tian, and Jain Liu. "Mapped Least Squares Support Vector Machine Regression." International Journal of Pattern Recognition and Artificial Intelligence 19, no. 03 (May 2005): 459–75. http://dx.doi.org/10.1142/s0218001405004058.

Abstract:
This paper describes a novel version of regression SVM (Support Vector Machines) that is based on the least-squares error. We show that the solution of this optimization problem can be obtained easily once the inverse of a certain matrix is computed. This matrix, however, depends only on the input vectors, not on the labels. Thus, if many learning problems with the same set of input vectors but different sets of labels have to be solved, it makes sense to compute the inverse of the matrix just once and then use it for computing all subsequent models. The computational complexity of training a regression SVM can be reduced to O(N^2), just a matrix multiplication operation, and is thus probably faster than known SVM training algorithms that have O(N^2) work within loops. We describe applications from image processing, where the input points are usually of the form {(x0 + dx, y0 + dy) : |dx| < m, |dy| < n}, and every such set of points can be translated to the same set {(dx, dy) : |dx| < m, |dy| < n} by subtracting (x0, y0) from all the vectors. The experimental results demonstrate that the proposed approach is faster than those processing each learning problem separately.
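The label-independence this abstract exploits is easy to demonstrate in a least-squares kernel formulation. The sketch below uses a plain kernel ridge form of an LS-SVM (the bias term and the exact mapped formulation of the paper are omitted; all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(80, 2))

def rbf_kernel(A, B, gamma=0.5):
    # Pairwise squared distances, then the Gaussian kernel.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Least-squares SVM in kernel ridge form: alpha = (K + I/C)^{-1} y.
# The matrix depends only on the inputs, so its inverse is computed
# once and reused for any number of label vectors.
C = 10.0
K = rbf_kernel(X, X)
M_inv = np.linalg.inv(K + np.eye(len(X)) / C)

y1 = np.sin(X[:, 0])        # first learning problem
y2 = X[:, 0] * X[:, 1]      # second problem, same inputs
alpha1 = M_inv @ y1
alpha2 = M_inv @ y2

X_test = rng.normal(size=(5, 2))
K_test = rbf_kernel(X_test, X)
pred1 = K_test @ alpha1
pred2 = K_test @ alpha2
```

After the one-time inversion, each additional label set costs only a matrix-vector product, which is the O(N^2) saving the abstract refers to.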
5

Khemchandani, Reshma, Keshav Goyal, and Suresh Chandra. "TWSVR: Regression via Twin Support Vector Machine." Neural Networks 74 (February 2016): 14–21. http://dx.doi.org/10.1016/j.neunet.2015.10.007.

6

Arjmandzadeh, Ameneh, Sohrab Effati, and Mohammad Zamirian. "Interval Support Vector Machine In Regression Analysis." Journal of Mathematics and Computer Science 02, no. 03 (April 15, 2011): 565–71. http://dx.doi.org/10.22436/jmcs.02.03.19.

7

熊, 令纯. "Five Understandings on Support Vector Machine Regression." Hans Journal of Data Mining 09, no. 02 (2019): 52–59. http://dx.doi.org/10.12677/hjdm.2019.92007.

8

Rastogi (nee Khemchandani), Reshma, Pritam Anand, and Suresh Chandra. "-norm Twin Support Vector Machine-based Regression." Optimization 66, no. 11 (August 21, 2017): 1895–911. http://dx.doi.org/10.1080/02331934.2017.1364739.

9

Seok, Kyungha, Changha Hwang, and Daehyeon Cho. "PREDICTION INTERVALS FOR SUPPORT VECTOR MACHINE REGRESSION." Communications in Statistics - Theory and Methods 31, no. 10 (January 12, 2002): 1887–98. http://dx.doi.org/10.1081/sta-120014918.

10

Xu, Qifa, Jinxiu Zhang, Cuixia Jiang, Xue Huang, and Yaoyao He. "Weighted quantile regression via support vector machine." Expert Systems with Applications 42, no. 13 (August 2015): 5441–51. http://dx.doi.org/10.1016/j.eswa.2015.03.003.


Dissertations / Theses on the topic "Support Vector Machine Regression"

1

Lee, Keun Joo. "Geometric Tolerancing of Cylindricity Utilizing Support Vector Regression." Scholarly Repository, 2009. http://scholarlyrepository.miami.edu/oa_theses/233.

Abstract:
In an age where quick turnaround time and high-speed manufacturing methods are becoming more important, quality assurance is a consistent bottleneck in production. With the development of cheap and fast computer hardware, it has become viable to use machine vision for the collection of data points from a machined part. The generation of these large sets of sample points has created a need for a comprehensive algorithm that can provide accurate results while being computationally efficient. Current established methods are least-squares (LSQ) and non-linear programming (NLP). The LSQ method is often deemed too inaccurate and is prone to providing bad results, while the NLP method is computationally taxing. A novel method of using support vector regression (SVR) to solve the NP-hard problem of cylindricity of machined parts is proposed. This method was evaluated against LSQ and NLP in both accuracy and CPU processing time. An open-source, user-modifiable programming package was developed to test the model. Analysis of test results shows the novel SVR algorithm to be a viable alternative in exploring different methods of cylindricity in real-world manufacturing.
2

Wu, Zhili. "Regularization methods for support vector machines." HKBU Institutional Repository, 2008. http://repository.hkbu.edu.hk/etd_ra/912.

3

Shah, Rohan Shiloh. "Support vector machines for classification and regression." Thesis, McGill University, 2007. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=100247.

Abstract:
In the last decade Support Vector Machines (SVMs) have emerged as an important learning technique for solving classification and regression problems in various fields, most notably in computational biology, finance and text categorization. This is due in part to built-in mechanisms to ensure good generalization which leads to accurate prediction, the use of kernel functions to model non-linear distributions, the ability to train relatively quickly on large data sets using novel mathematical optimization techniques and most significantly the possibility of theoretical analysis using computational learning theory. In this thesis, we discuss the theoretical basis and computational approaches to Support Vector Machines.
4

OLIVEIRA, A. B. "Modelo de Predição para análise comparativa de Técnicas Neuro-Fuzzy e de Regressão." Universidade Federal do Espírito Santo, 2010. http://repositorio.ufes.br/handle/10/4218.

Abstract:
The prediction models implemented by machine learning algorithms, which emerged as a research line of computational intelligence, result from research and empirical investigation of real-world data. In this context, these models are extracted to compare two major machine learning techniques, neuro-fuzzy networks and regression, applied with the aim of estimating a product quality parameter in an industrial environment under continuous processing. Heuristically, these prediction models are applied and compared in the same simulation environment in order to measure their levels of adequacy and their performance and generalization power over the empirical data that make up this scenario (an industrial mining environment).
5

Wågberg, Max. "Att förutspå Sveriges bistånd : En jämförelse mellan Support Vector Regression och ARIMA." Thesis, Mittuniversitetet, Institutionen för informationssystem och –teknologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-36479.

Abstract:
In recent years, the use of machine learning has increased significantly. Its uses range from making everyday life easier with voice-guided smart devices, to image recognition, to predicting the stock market. Predicting economic values has long been possible by using methods other than machine learning, such as statistical algorithms. These algorithms and machine learning models use time series, which are sets of data points observed over a given time interval, in order to predict data points beyond the original time series. But which of these methods gives the best results? The overall purpose of this project is to predict Sweden's aid curve using the machine learning model Support Vector Regression and the classic statistical algorithm autoregressive integrated moving average, abbreviated ARIMA. The time series used in the prediction are annual summaries of Sweden's total aid to the world from openaid.se from 1998 up to 2019. SVR and ARIMA are implemented in Python with the help of the Scikit-learn and Statsmodels libraries. The results from SVR and ARIMA are measured by comparing the original values with their predicted values, while the accuracy is measured in Root Mean Square Error and presented in the results chapter. The results show that SVR with the RBF kernel is the algorithm that provides the best results for the data series. All predictions beyond the time series are then visually presented on an openaid prototype page using D3.js.
6

Uslan, Volkan. "Support vector machine-based fuzzy systems for quantitative prediction of peptide binding affinity." Thesis, De Montfort University, 2015. http://hdl.handle.net/2086/11170.

Abstract:
Reliable prediction of binding affinity of peptides is one of the most challenging but important complex modelling problems in the post-genome era due to the diversity and functionality of the peptides discovered. Generally, peptide binding prediction models are commonly used to find out whether a binding exists between a certain peptide(s) and a major histocompatibility complex (MHC) molecule(s). Recent research efforts have been focused on quantifying the binding predictions. The objective of this thesis is to develop reliable real-value predictive models through the use of fuzzy systems. A non-linear system is proposed with the aid of support vector-based regression to improve the fuzzy system and applied to the real-value prediction of degree of peptide binding. This research study introduced two novel methods to improve structure and parameter identification of fuzzy systems. First, the support vector-based regression is used to identify initial parameter values of the consequent part of type-1 and interval type-2 fuzzy systems. Second, an overlapping clustering concept is used to derive interval-valued parameters of the premise part of the type-2 fuzzy system. Publicly available peptide binding affinity data sets obtained from the literature are used in the experimental studies of this thesis. First, the proposed models are blind-validated using the peptide binding affinity data sets obtained from a modelling competition. In that competition, almost an equal number of peptide sequences in the training and testing data sets (89, 76, 133 and 133 peptides for the training and 88, 76, 133 and 47 peptides for the testing) are provided to the participants. Each peptide in the data sets was represented by 643 bio-chemical descriptors assigned to each amino acid. Second, the proposed models are cross-validated using mouse class I MHC alleles (H2-Db, H2-Kb and H2-Kk). H2-Db, H2-Kb, and H2-Kk consist of 65 nona-peptides, 62 octa-peptides, and 154 octa-peptides, respectively. Compared to the previously published results in the literature, the support vector-based type-1 and support vector-based interval type-2 fuzzy models yield an improvement in the prediction accuracy. The quantitative predictive performances have been improved by as much as 33.6% for the first group of data sets and 1.32% for the second group of data sets. The proposed models not only improved the performance of the fuzzy system (which used support vector-based regression), but the support vector-based regression benefited from the fuzzy concept as well. The results obtained here set the platform for the presented models to be considered for other application domains in computational and/or systems biology. Apart from improving the prediction accuracy, this research study has also identified specific features which play a key role(s) in making reliable peptide binding affinity predictions. The amino acid features "Polarity", "Positive charge", "Hydrophobicity coefficient", and "Zimm-Bragg parameter" are considered as highly discriminating features in the peptide binding affinity data sets. This information can be valuable in the design of peptides with strong binding affinity to a MHC I molecule(s). This information may also be useful when designing drugs and vaccines.
7

Lee, Ho-Jin. "Functional data analysis: classification and regression." Texas A&M University, 2004. http://hdl.handle.net/1969.1/2805.

Abstract:
Functional data refer to data which consist of observed functions or curves evaluated at a finite subset of some interval. In this dissertation, we discuss statistical analysis, especially classification and regression, when data are available in function form. Due to the nature of functional data, one considers function spaces in presenting such data, and each functional observation is viewed as a realization generated by a random mechanism in those spaces. The classification procedure in this dissertation is based on dimension reduction techniques of the spaces. One commonly used method is Functional Principal Component Analysis (Functional PCA), in which eigendecomposition of the covariance function is employed to find the directions of highest variability of the data in the function space. The reduced space of functions spanned by a few eigenfunctions is thought of as a space containing most of the features of the functional data. We also propose a functional regression model for scalar responses. Infinite dimensionality of the spaces for a predictor causes many problems, one of which is that there are infinitely many solutions. The space of the parameter function is restricted to Sobolev-Hilbert spaces, and the so-called ε-insensitive loss function is utilized. As a robust technique of function estimation, we present a way to find a function that has at most ε deviation from the observed values and at the same time is as smooth as possible.
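The ε-insensitive loss mentioned in this abstract, which ignores deviations of at most ε and penalizes larger ones linearly, can be written in a few lines (ε = 0.1 here is an arbitrary illustrative choice):

```python
import numpy as np

def eps_insensitive_loss(y_true, y_pred, eps=0.1):
    # Zero penalty inside the eps-tube, linear penalty outside it.
    return np.maximum(np.abs(y_true - y_pred) - eps, 0.0)

y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.05, 2.5, 2.0])
loss = eps_insensitive_loss(y_true, y_pred, eps=0.1)
# deviations 0.05, 0.5, 1.0 -> losses 0.0, 0.4, 0.9
```

Because small deviations incur no penalty at all, minimizing this loss yields the "at most ε deviation" fits the abstract describes, and it is the same loss underlying ε-SVR.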
8

Hechter, Trudie. "A comparison of support vector machines and traditional techniques for statistical regression and classification." Thesis, Stellenbosch : Stellenbosch University, 2004. http://hdl.handle.net/10019.1/49810.

Abstract:
Thesis (MComm)--Stellenbosch University, 2004.
Since its introduction in Boser et al. (1992), the support vector machine has become a popular tool in a variety of machine learning applications. More recently, the support vector machine has also been receiving increasing attention in the statistical community as a tool for classification and regression. In this thesis support vector machines are compared to more traditional techniques for statistical classification and regression. The techniques are applied to data from a life assurance environment for a binary classification problem and a regression problem. In the classification case the problem is the prediction of policy lapses using a variety of input variables, while in the regression case the goal is to estimate the income of clients from these variables. The performance of the support vector machine is compared to that of discriminant analysis and classification trees in the case of classification, and to that of multiple linear regression and regression trees in regression, and it is found that support vector machines generally perform well compared to the traditional techniques.
9

Thorén, Daniel. "Radar based tank level measurement using machine learning : Agricultural machines." Thesis, Linköpings universitet, Programvara och system, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-176259.

Abstract:
Agriculture is becoming more dependent on computerized solutions to make the farmer's job easier. The big step that many companies are working towards is fully autonomous vehicles that work the fields. To that end, the equipment fitted to said vehicles must also adapt and become autonomous. Making this equipment autonomous takes many incremental steps, one of which is developing an accurate and reliable tank level measurement system. In this thesis, a system for tank level measurement in a seed planting machine is evaluated. Traditional systems use load cells to measure the weight of the tank; however, these types of systems are expensive to build and cumbersome to repair. They also add a lot of weight to the equipment, which increases the fuel consumption of the tractor. Thus, this thesis investigates the use of radar sensors together with a number of machine learning algorithms. Fourteen radar sensors are fitted to a tank at different positions, data is collected, and a preprocessing method is developed. Then, the data is used to test the following machine learning algorithms: Bagged Regression Trees (BG), Random Forest Regression (RF), Boosted Regression Trees (BRT), Linear Regression (LR), Linear Support Vector Machine (L-SVM), and Multi-Layer Perceptron Regressor (MLPR). The model with the best 5-fold cross-validation scores was Random Forest, closely followed by Boosted Regression Trees. A robustness test, using 5 previously unseen scenarios, revealed that the Boosted Regression Trees model was the most robust. The radar position analysis showed that 6 sensors together with the MLPR model gave the best RMSE scores. In conclusion, the models performed well on this type of system, which shows that they might be a competitive alternative to load cell based systems.
10

Persson, Karl. "Predicting movie ratings : A comparative study on random forests and support vector machines." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-11119.

Abstract:
The aim of this work is to evaluate the prediction performance of random forests in comparison to support vector machines, for predicting the numerical user ratings of a movie using pre-release attributes such as its cast, directors, budget and movie genres. In order to answer this question an experiment was conducted on predicting the overall user rating of 3376 Hollywood movies, using data from the well-established movie database IMDb. The prediction performance of the two algorithms was assessed and compared over three commonly used performance and error metrics, as well as evaluated by means of significance testing in order to further investigate whether or not any significant differences could be identified. The results indicate some differences between the two algorithms, with consistently better performance from random forests in comparison to support vector machines over all of the performance metrics, as well as significantly better results for two out of three metrics. Although a slight difference has been indicated by the results, one should also note that both algorithms show great similarities in terms of their prediction performance, making it hard to draw any general conclusions on which algorithm yields the most accurate movie predictions.

Books on the topic "Support Vector Machine Regression"

1

Drezet, P. Directly optimized support vector machines for classification and regression. Sheffield: University of Sheffield, Dept. of Automatic Control and Systems Engineering, 1998.

2

Christmann, Andreas, ed. Support vector machines. New York: Springer, 2008.

3

[Name missing]. Least squares support vector machines. Singapore: World Scientific, 2002.

4

Hamel, Lutz. Knowledge discovery with support vector machines. Hoboken, N.J: John Wiley & Sons, 2009.

5

Boyle, Brandon H. Support vector machines: Data analysis, machine learning, and applications. Hauppauge, N.Y: Nova Science Publishers, 2011.

6

Support vector machines for pattern classification. 2nd ed. London: Springer, 2010.

7

Suykens, Johan A. K., Marco Signoretto, and Andreas Argyriou, eds. Regularization, optimization, kernels, and support vector machines. Boca Raton: Taylor & Francis, 2014.

8

Joachims, Thorsten. Learning to classify text using support vector machines. Boston: Kluwer Academic Publishers, 2002.

9

Ertekin, Şeyda. Algorithms for efficient learning systems: Online and active learning approaches. Saarbrücken: VDM Verlag Dr. Müller, 2009.

10

Smola, Alexander J., ed. Learning with kernels: Support vector machines, regularization, optimization, and beyond. Cambridge, Mass: MIT Press, 2002.


Book chapters on the topic "Support Vector Machine Regression"

1

Awad, Mariette, and Rahul Khanna. "Support Vector Regression." In Efficient Learning Machines, 67–80. Berkeley, CA: Apress, 2015. http://dx.doi.org/10.1007/978-1-4302-5990-9_4.

2

Schleif, Frank-Michael. "Indefinite Support Vector Regression." In Artificial Neural Networks and Machine Learning – ICANN 2017, 313–21. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-68612-7_36.

3

Jayadeva, Reshma Khemchandani, and Suresh Chandra. "TWSVR: Twin Support Vector Machine Based Regression." In Twin Support Vector Machines, 63–101. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-46186-1_4.

4

Berk, Richard A. "Support Vector Machines." In Statistical Learning from a Regression Perspective, 339–59. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-40189-4_7.

5

Berk, Richard A. "Support Vector Machines." In Statistical Learning from a Regression Perspective, 291–310. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-44048-4_7.

6

Ullrich, Katrin, Michael Kamp, Thomas Gärtner, Martin Vogt, and Stefan Wrobel. "Co-Regularised Support Vector Regression." In Machine Learning and Knowledge Discovery in Databases, 338–54. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-71246-8_21.

7

Martin, Mario. "On-Line Support Vector Machine Regression." In Lecture Notes in Computer Science, 282–94. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-36755-1_24.

8

Christmann, Andreas. "Regression depth and support vector machine." In DIMACS Series in Discrete Mathematics and Theoretical Computer Science, 71–85. Providence, Rhode Island: American Mathematical Society, 2006. http://dx.doi.org/10.1090/dimacs/072/06.

9

Díaz-Vico, David, Jesús Prada, Adil Omari, and José R. Dorronsoro. "Deep Support Vector Classification and Regression." In From Bioinspired Systems and Biomedical Applications to Machine Learning, 33–43. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-19651-6_4.

10

Kenesei, Tamás, and János Abonyi. "Interpretability of Support Vector Machines." In Interpretability of Computational Intelligence-Based Regression Models, 49–60. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-21942-4_4.


Conference papers on the topic "Support Vector Machine Regression"

1

Khemchandani, Reshma, Keshav Goyal, and Suresh Chandra. "Twin Support Vector Machine based Regression." In 2015 Eighth International Conference on Advances in Pattern Recognition (ICAPR). IEEE, 2015. http://dx.doi.org/10.1109/icapr.2015.7050651.

2

Xue, Zhenxia, and Wanli Liu. "A fuzzy rough support vector regression machine." In 2012 9th International Conference on Fuzzy Systems and Knowledge Discovery (FSKD). IEEE, 2012. http://dx.doi.org/10.1109/fskd.2012.6234232.

3

Shen, Jin-Dong. "New smooth support vector machine for regression." In 2012 International Conference on Machine Learning and Cybernetics (ICMLC). IEEE, 2012. http://dx.doi.org/10.1109/icmlc.2012.6358931.

4

Hao, Pei-Yi. "Possibilistic regression analysis by support vector machine." In 2011 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE). IEEE, 2011. http://dx.doi.org/10.1109/fuzzy.2011.6007433.

5

Fu, Guanghui, and Guanghua Hu. "Total Least Square Support Vector Machine for Regression." In 2008 International Conference on Intelligent Computation Technology and Automation (ICICTA). IEEE, 2008. http://dx.doi.org/10.1109/icicta.2008.134.

6

Dilmen, Erdem, and Selami Beyhan. "Deep recurrent support vector machine for online regression." In 2017 International Artificial Intelligence and Data Processing Symposium (IDAP). IEEE, 2017. http://dx.doi.org/10.1109/idap.2017.8090243.

7

Zhang, Hong, and Yongmei Lei. "BSP-based support vector regression machine parallel framework." In 2013 IEEE/ACIS 12th International Conference on Computer and Information Science (ICIS). IEEE, 2013. http://dx.doi.org/10.1109/icis.2013.6607862.

8. Tamang, Amrita, and Samiksha Shukla. "Water Demand Prediction Using Support Vector Machine Regression." In 2019 International Conference on Data Science and Communication (IconDSC). IEEE, 2019. http://dx.doi.org/10.1109/icondsc.2019.8816969.
9. Stoean, Ruxandra, D. Dumitrescu, Mike Preuss, and Catalin Stoean. "Evolutionary Support Vector Regression Machines." In 2006 Eighth International Symposium on Symbolic and Numeric Algorithms for Scientific Computing. IEEE, 2006. http://dx.doi.org/10.1109/synasc.2006.39.
10. Fu, Kun, You-Hua Wang, Yong-Feng Dong, Xiang-Dan Hou, Xue-Qin Shen, and Wei-Li Yan. "Support vector regression method for boundary value problems." In Proceedings of 2005 International Conference on Machine Learning and Cybernetics. IEEE, 2005. http://dx.doi.org/10.1109/icmlc.2005.1527692.

Reports on the topic "Support Vector Machine Regression"

1. Puttanapong, Nattapong, Arturo M. Martinez Jr, Mildred Addawe, Joseph Bulan, Ron Lester Durante, and Marymell Martillan. Predicting Poverty Using Geospatial Data in Thailand. Asian Development Bank, December 2020. http://dx.doi.org/10.22617/wps200434-2.
Abstract:
This study examines an alternative approach in estimating poverty by investigating whether readily available geospatial data can accurately predict the spatial distribution of poverty in Thailand. It also compares the predictive performance of various econometric and machine learning methods such as generalized least squares, neural network, random forest, and support vector regression. Results suggest that intensity of night lights and other variables that approximate population density are highly associated with the proportion of population living in poverty. The random forest technique yielded the highest level of prediction accuracy among the methods considered, perhaps due to its capability to fit complex association structures even with small and medium-sized datasets.
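The model comparison this abstract describes can be sketched with scikit-learn. The snippet below is illustrative only, using synthetic data in place of the study's geospatial predictors; it is not the report's code:

```python
# Illustrative only: a minimal comparison of support vector regression and
# a random forest, in the spirit of the model comparison the report describes.
# Synthetic data stands in for the study's geospatial predictors.
from sklearn.compose import TransformedTargetRegressor
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

X, y = make_regression(n_samples=500, n_features=8, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    # SVR is scale-sensitive, so both features and targets are standardized.
    "svr": TransformedTargetRegressor(
        regressor=make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0)),
        transformer=StandardScaler(),
    ),
    "random_forest": RandomForestRegressor(n_estimators=200, random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, round(r2_score(y_test, model.predict(X_test)), 3))
```

On real tabular data, held-out R² (or a cross-validated score) would be the basis for the kind of ranking the study reports.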
2. Gertz, E. M., and J. D. Griffin. Support vector machine classifiers for large data sets. Office of Scientific and Technical Information (OSTI), January 2006. http://dx.doi.org/10.2172/881587.
3. Alali, Ali. Application of Support Vector Machine in Predicting the Market's Monthly Trend Direction. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.1495.
4. Arun, Ramaiah, and Shanmugasundaram Singaravelan. Classification of Brain Tumour in Magnetic Resonance Images Using Hybrid Kernel Based Support Vector Machine. "Prof. Marin Drinov" Publishing House of Bulgarian Academy of Sciences, October 2019. http://dx.doi.org/10.7546/crabs.2019.10.12.
5. Liu, Y. Support vector machine for the prediction of future trend of Athabasca River (Alberta) flow rate. Natural Resources Canada/ESS/Scientific and Technical Publishing Services, 2017. http://dx.doi.org/10.4095/299739.
6. Qi, Yuan. Learning Algorithms for Audio and Video Processing: Independent Component Analysis and Support Vector Machine Based Approaches. Fort Belvoir, VA: Defense Technical Information Center, August 2000. http://dx.doi.org/10.21236/ada458739.
7. Luo, Yuzhou, Rui Wang, Zhongwei Jiang, and Xiqing Zuo. Assessment of the Effect of Health Monitoring System on the Sleep Quality by Using Support Vector Machine. "Prof. Marin Drinov" Publishing House of Bulgarian Academy of Sciences, January 2018. http://dx.doi.org/10.7546/crabs.2018.01.16.
