Dissertations / Theses on the topic 'Support vector regression'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Support vector regression.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Shah, Rohan Shiloh. "Support vector machines for classification and regression." Thesis, McGill University, 2007. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=100247.
Lee, Keun Joo. "Geometric Tolerancing of Cylindricity Utilizing Support Vector Regression." Scholarly Repository, 2009. http://scholarlyrepository.miami.edu/oa_theses/233.
Nayeri, Negin. "Option strategies using hybrid Support Vector Regression - ARIMA." Thesis, KTH, Matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-275719.
This thesis evaluates the use of machine learning in option strategies, focusing on the S&P 500 Index. The first part tests the forecasting power of the Support Vector Regression (SVR) method for realized volatility computed over a 20-day window; forecasts are made one month (20 trading days) ahead. The second part builds an ARIMA model that forecasts the next value of the time series formed by the differences between the obtained forecasts and the true values, producing a hybrid SVR-ARIMA model: the new model combines the realized-volatility value from the SVR with the error forecast obtained from the ARIMA model. Finally, the two methods, SVR and hybrid SVR-ARIMA, are compared, and the better-performing model is used in two option strategies. The results show the promising forecasting ability of the SVR method, which for this data set achieved an accuracy of 67% for realized volatility. The ARIMA model also successfully forecasts the next point in the time series, and the hybrid SVR-ARIMA model outperforms the plain SVR model on this data set. It can be argued that the success of these methods may be due to the data set covering the years 2010-2018, so that the highly volatile period of the 2008 financial crisis is excluded; this calls into question the models' forecasting ability in higher-volatility markets. Nevertheless, the hybrid SVR-ARIMA model applied in the two option strategies yields average returns of 0.37% and 1.68%. Note that the transaction costs of trading options and the premiums paid when buying options are not included in these returns, since such costs vary by trading venue. This thesis was written in collaboration with Crescit Asset Management in Stockholm.
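The realized-volatility input used in this entry's first stage can be sketched in NumPy (illustrative code, not from the thesis; the toy return series is invented):

```python
import numpy as np

def realized_volatility(returns, window=20):
    """Rolling realized volatility: the square root of the sum of
    squared daily returns over each `window`-day span."""
    returns = np.asarray(returns, dtype=float)
    n = len(returns) - window + 1
    return np.array([np.sqrt(np.sum(returns[i:i + window] ** 2))
                     for i in range(n)])

# Toy series: constant 1% daily returns over 40 trading days.
rets = np.full(40, 0.01)
rv = realized_volatility(rets)
print(len(rv), round(rv[0], 4))  # 21 values; sqrt(20 * 0.0001) ≈ 0.0447
```

Each value summarizes the previous 20 trading days; the thesis feeds such values to an SVR to forecast one month ahead.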
Ericson, Johan. "Lastprediktering : Med Neuralt Nätverk och Support Vector Regression." Thesis, Karlstads universitet, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-73371.
Wu, Zhili. "Regularization methods for support vector machines." HKBU Institutional Repository, 2008. http://repository.hkbu.edu.hk/etd_ra/912.
Full textYaohao, Peng. "Support Vector Regression aplicado à previsão de taxas de câmbio." reponame:Repositório Institucional da UnB, 2016. http://repositorio.unb.br/handle/10482/23270.
This paper aims to forecast the spot exchange rate of 15 currency pairs by applying a machine learning algorithm – Support Vector Regression – based on a fundamentalist model composed of 13 explanatory variables. The predictions were estimated by applying 9 different Kernel functions extracted from the scientific literature, resulting in a total of 135 models verified. The predictions were compared to the Random Walk benchmark and evaluated for the directional accuracy of the exchange rate predictions and for the error metrics RMSE (root mean square error) and MAE (mean absolute error). The statistical significance of the explanatory power gained by the SVR models with respect to the Random Walk was checked by applying White's (2000) Reality Check Test. The results show that the SVR models achieved satisfactory predictive performance relative to the benchmark, with several of the proposed models showing strongly statistically significant predictive superiority. On the other hand, several Kernel functions commonly used in the scientific literature failed to outperform the Random Walk, indicating a possible gap in the state of the art of machine learning methods applied to exchange rate forecasting. Finally, the paper discusses the implications of the obtained results for the future development of related research agendas.
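The evaluation metrics named in this abstract (RMSE, MAE and directional accuracy) can be sketched in a few lines of NumPy; the toy arrays below are invented for illustration:

```python
import numpy as np

def forecast_metrics(y_true, y_pred, y_prev):
    """RMSE, MAE, and directional accuracy: the fraction of forecasts
    that move in the same direction (relative to the previous
    observation) as the actual series."""
    y_true, y_pred, y_prev = (np.asarray(a, float)
                              for a in (y_true, y_pred, y_prev))
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    mae = np.mean(np.abs(y_true - y_pred))
    hit = np.mean(np.sign(y_true - y_prev) == np.sign(y_pred - y_prev))
    return rmse, mae, hit

y_prev = np.array([1.0, 1.0, 1.0, 1.0])    # last observed rates
y_true = np.array([1.2, 0.9, 1.1, 0.8])    # realized rates
y_pred = np.array([1.1, 0.95, 0.9, 0.85])  # model forecasts
rmse, mae, hit = forecast_metrics(y_true, y_pred, y_prev)
print(round(mae, 3), hit)  # 0.1 0.75
```

A benchmark such as the Random Walk is evaluated the same way, using the previous observation itself as the forecast.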
Beltrami, Monica. "Precificação de opções sobre ações por modelos de Support Vector Regression." Repositório Institucional da UFPR, 2012. http://hdl.handle.net/1884/27334.
Wise, John Nathaniel. "Optimization of a low speed wind turbine using support vector regression." Thesis, Stellenbosch : University of Stellenbosch, 2009. http://hdl.handle.net/10019.1/2737.
Numerical design optimization provides a powerful tool that assists designers in improving their products. Design optimization automatically modifies important design parameters to obtain the best product that satisfies all the design requirements. This thesis explores the use of Support Vector Regression (SVR) and demonstrates its usefulness in the numerical optimization of a low-speed wind turbine for the power coefficient, Cp. The optimization design problem is the three-dimensional optimization of a wind turbine blade by making use of four two-dimensional radial stations. The candidate airfoils at these stations are selected from the 4-digit NACA range. A metamodel of the lift and drag coefficients of the NACA 4-digit series is created with SVR by using training points evaluated with the XFOIL software. These SVR approximations are used in conjunction with Blade Element Momentum theory to calculate and optimize the Cp value for the entire blade. The high accuracy attained with the SVR metamodels makes them a viable alternative to using XFOIL directly, with the advantages of being faster and easier to couple with the optimizer. The technique developed allows the optimization procedure the freedom to select profiles, angles of attack and chord lengths from the 4-digit NACA series to find an optimal Cp value. Because every radial blade station uses a NACA 4-digit profile, the same lift and drag metamodels are used for each station. This technique also makes it simple to evaluate the entire blade as one set of design variables. The thesis contains a detailed description of the design and optimization problem, the implementation of the SVR algorithm, the creation of the lift and drag metamodels with SVR and an alternative methodology, the BEM theory and a summary of the results.
Hasanov, Ilgar <1996>. "A Comparison between Support Vector Machines and Logistic Regression for Classification." Master's Degree Thesis, Università Ca' Foscari Venezia, 2022. http://hdl.handle.net/10579/20753.
Shen, Judong. "Fusing support vector machines and soft computing for pattern recognition and regression /." Search for this dissertation online, 2005. http://wwwlib.umi.com/cr/ksu/main.
Full textWågberg, Max. "Att förutspå Sveriges bistånd : En jämförelse mellan Support Vector Regression och ARIMA." Thesis, Mittuniversitetet, Institutionen för informationssystem och –teknologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-36479.
In recent years the use of machine learning has increased markedly. Its applications range from making everyday life easier with voice-controlled smart devices to image recognition and predicting stock values. Predicting economic values has long been possible with methods other than machine learning, such as statistical algorithms. These algorithms and machine-learning models use time series, collections of data points observed at constant intervals over a given period, to predict data points beyond the original series. But which of these methods gives the best results? The overall aim of the project is to forecast Sweden's foreign-aid curve using the machine-learning model Support Vector Regression and the classical statistical algorithm autoregressive integrated moving average, abbreviated ARIMA. The time series used for the prediction consists of annual aid totals from openaid.se from 1998 to 2019. SVR and ARIMA are implemented in Python using the Scikit-learn and Statsmodels libraries. The results from SVR and ARIMA are measured by comparing the original values with their predicted values, while accuracy is measured by root mean square error and presented in the results chapter. The results show that SVR with the RBF kernel is the algorithm that gives the best test results for this data series. All predictions beyond the time series are then presented visually on an openaid prototype page using D3.js.
OLIVEIRA, A. B. "Modelo de Predição para análise comparativa de Técnicas Neuro-Fuzzy e de Regressão." Universidade Federal do Espírito Santo, 2010. http://repositorio.ufes.br/handle/10/4218.
The prediction models implemented by machine learning algorithms, a research line of computational intelligence, result from research and empirical investigation of real-world data. In this context, these models are extracted in order to compare two major machine-learning techniques, neuro-fuzzy networks and regression, applied to estimate a product quality parameter in an industrial environment under a continuous process. Heuristically, these prediction models are applied and compared in the same simulation environment in order to measure their goodness of fit, their performance, and their generalization over the empirical data composing this scenario (an industrial mining environment).
Tan, Edward S. "Hyper-wideband OFDM system." Thesis, Georgia Institute of Technology, 2016. http://hdl.handle.net/1853/55056.
Aguilar-Martinez, Silvestre. "Forecasts of tropical Pacific sea surface temperatures by neural networks and support vector regression." Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/7531.
Choy, Kin-yee, and 蔡建怡. "On modelling using radial basis function networks with structure determined by support vector regression." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B29329619.
Hechter, Trudie. "A comparison of support vector machines and traditional techniques for statistical regression and classification." Thesis, Stellenbosch : Stellenbosch University, 2004. http://hdl.handle.net/10019.1/49810.
ENGLISH ABSTRACT: Since its introduction in Boser et al. (1992), the support vector machine has become a popular tool in a variety of machine learning applications. More recently, the support vector machine has also been receiving increasing attention in the statistical community as a tool for classification and regression. In this thesis support vector machines are compared to more traditional techniques for statistical classification and regression. The techniques are applied to data from a life assurance environment for a binary classification problem and a regression problem. In the classification case the problem is the prediction of policy lapses using a variety of input variables, while in the regression case the goal is to estimate the income of clients from these variables. The performance of the support vector machine is compared to that of discriminant analysis and classification trees in the case of classification, and to that of multiple linear regression and regression trees in regression, and it is found that support vector machines generally perform well compared to the traditional techniques.
Dougherty, Andrew W. "Intelligent Design of Metal Oxide Gas Sensor Arrays Using Reciprocal Kernel Support Vector Regression." The Ohio State University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=osu1285045610.
Yang, Chih-cheng, and 楊智程. "Parameter learning and support vector reduction in support vector regression." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/11288181706062580666.
National Sun Yat-sen University
Institute of Electrical Engineering
94
The selection and learning of kernel functions is a very important but rarely studied problem in the field of support vector learning, even though the kernel function of a support vector regression has great influence on its performance. The kernel function projects the dataset from the original data space into the feature space, so problems that cannot be solved in low dimensions may become solvable in a higher dimension through this transform. This thesis makes two main contributions. First, we introduce the gradient descent method to the learning of kernel functions. Using gradient descent, we derive learning rules for the parameters that determine the shape and distribution of the kernel functions, and thus obtain better kernel functions by training their parameters with respect to the risk minimization principle. Second, in order to reduce the number of support vectors, we use the orthogonal least squares method. By choosing the representative support vectors, we may remove the less important support vectors from the support vector regression model. The experimental results show that our approach derives better kernel functions than others, has better generalization ability, and effectively reduces the number of support vectors.
Su, Wei-Han, and 蘇韋瀚. "GDOP approximation using support vector regression." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/15792859065089543216.
Shu-Te University
Institute of Information Management
96
In recent years the Global Positioning System (GPS) has been used extensively in various fields, and positioning accuracy is the key to using it successfully. Geometric Dilution of Precision (GDOP) is an indicator of how well the constellation of GPS satellites is arranged geometrically, and it can be viewed as a multiplicative factor that magnifies ranging errors. GDOP can be calculated by solving the positioning equations, but solving such positioning matrices involves complicated matrix transformations, which is a time- and power-consuming task. This paper presents a support vector regression (SVR) approach for finding regression models that approximate GDOP without the matrix computations, so the processing cost of GPS positioning with low GDOP can be reduced. The experimental results show that the proposed method performs well.
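For context, the quantity the SVR model is trained to approximate can be sketched directly (this is the standard GDOP formula, not the thesis's code; the satellite coordinates are invented toy values):

```python
import numpy as np

def gdop(sat_positions, receiver):
    """GDOP from satellite/receiver geometry: each row of the design
    matrix A is a unit line-of-sight vector plus a clock-bias column,
    and GDOP = sqrt(trace((A^T A)^-1)).  This matrix inversion is the
    costly step a regression model can sidestep."""
    diffs = sat_positions - receiver
    ranges = np.linalg.norm(diffs, axis=1, keepdims=True)
    A = np.hstack([diffs / ranges, np.ones((len(diffs), 1))])
    return float(np.sqrt(np.trace(np.linalg.inv(A.T @ A))))

# Hypothetical constellation: four low-elevation satellites plus one
# at zenith (arbitrary units).
sats = np.array([[10.0, 0.0, 10.0], [-10.0, 0.0, 10.0],
                 [0.0, 10.0, 10.0], [0.0, -10.0, 10.0],
                 [0.0, 0.0, 10.0]])
g = gdop(sats, np.zeros(3))
```

An SVR model would be trained on geometry features as inputs and values like `g` as targets, replacing the inversion at run time.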
Yang, Chan-Yun, and 游明揚. "Robust Vicinity-Emphasis Support Vector Regression." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/78797174821559362554.
National Taipei University
Industrial R&D Master's Program in Electrical Engineering and Computer Science
102
Standard support vector machine regression uses the well-known ε-insensitive loss function. The basic idea is to ignore errors as long as they are smaller than ε; in other words, points that stay inside the ε-tube do not become support vectors. By the sparseness principle, however, the final decision function is determined entirely by the support vectors. This thesis considers two weaknesses of that scheme: the exemption of the insensitive region can make the regression result non-robust, and outliers lying far from the tube still contribute to the decision function. The purpose of this thesis is to modify standard support vector machine regression on these two points in order to obtain a robust support vector regression. A different loss function is introduced: the regression is reconstructed with a ramp loss function via the concave-convex procedure, and the proposed loss function is verified by simulation. Compared with standard support vector machine regression, the proposed method has better accuracy, efficiency and stability.
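The contrast between the two loss functions can be sketched numerically (an illustration of the general idea; the values of ε and the ramp ceiling s are arbitrary, not taken from the thesis):

```python
import numpy as np

def eps_insensitive(r, eps=0.1):
    """Standard epsilon-insensitive loss: zero inside the tube,
    growing linearly outside it, so a distant outlier contributes
    a large amount."""
    return np.maximum(np.abs(r) - eps, 0.0)

def ramp_loss(r, eps=0.1, s=1.0):
    """Ramp (truncated) loss: identical near the tube, but capped
    at s so far-away outliers cannot dominate the fit."""
    return np.minimum(eps_insensitive(r, eps), s)

residuals = np.array([0.05, 0.5, 10.0])
print(eps_insensitive(residuals))  # [0.  0.4 9.9]
print(ramp_loss(residuals))        # [0.  0.4 1. ]
```

The cap is what makes the ramp loss non-convex, which is why the concave-convex procedure is needed to optimize it.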
Ho, Chia-Hua, and 何家華. "Large-scale Linear Support Vector Regression." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/90643485073745704762.
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
100
Support vector regression (SVR) and support vector classification (SVC) are popular learning techniques, but their use with kernels is often time consuming. Recently, linear SVC without kernels has been shown to give competitive accuracy for some applications, but enjoys much faster training/testing. However, few studies have focused on linear SVR. In this thesis, we extend state-of-the-art training methods for linear SVC to linear SVR. We show that the extension is straightforward for some methods, but is not trivial for some others. Our experiments demonstrate that for some problems, the proposed linear-SVR training methods can very efficiently produce models that are as good as kernel SVR.
Chan, Yi-Chao, and 詹壹詔. "Estimation of Parameters in Support Vector Regression." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/42400554890167802923.
National Sun Yat-sen University
Institute of Electrical Engineering
94
The selection and modification of kernel functions is a very important problem in the field of support vector learning, since the kernel function of a support vector machine has great influence on its performance. The kernel function projects the dataset from the original data space into the feature space, so problems that cannot be solved in low dimensions may become solvable in a higher dimension through this transform. In this thesis, we adopt the FCM clustering algorithm to group data patterns into clusters, and then use a statistical approach to calculate the standard deviation of each pattern with respect to the other patterns in the same cluster. We can therefore properly estimate the distribution of data patterns and assign a suitable standard deviation to each pattern; this standard deviation plays the role of the width parameter of a radial basis function. The original data patterns, together with the width assigned to each pattern, are then used for support vector learning. Experimental results show that our approach derives better kernel functions than other methods and achieves better learning and generalization abilities.
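The width-assignment step can be sketched as follows (a simplified stand-in that uses crisp cluster labels in place of FCM memberships; the data and labels are invented):

```python
import numpy as np

def per_pattern_widths(X, labels):
    """For each pattern, compute the root-mean-square distance to the
    other patterns in its cluster; this serves as that pattern's
    individual RBF kernel width."""
    widths = np.empty(len(X))
    for i in range(len(X)):
        same = np.where(labels == labels[i])[0]
        same = same[same != i]                      # exclude the pattern itself
        d2 = np.sum((X[same] - X[i]) ** 2, axis=1)  # squared distances
        widths[i] = np.sqrt(d2.mean())
    return widths

# Two well-separated toy clusters in the plane.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0],
              [10.0, 10.0], [11.0, 10.0], [10.0, 11.0]])
labels = np.array([0, 0, 0, 1, 1, 1])
w = per_pattern_widths(X, labels)
```

Patterns in tight clusters get small widths and patterns in spread-out clusters get large ones, matching the local data distribution.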
Liang, Yao-Yun, and 梁耀云. "Analysis for fuzzy support vector regression model." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/05349418207038746097.
Tamkang University
Master's Program, Graduate Institute of Management Sciences
98
In recent years, SVMs have been introduced for multivariate fuzzy linear and nonlinear regression models with efficient solutions. However, solving a fuzzy support vector regression model whose parameters are all fuzzy numbers remains complicated. In order to cope with this problem, we adopt the fuzzy possibilistic mean method proposed by Carlsson & Fullér (2001), which makes the fuzzy support vector regression model easier to solve. Depending on which parameters are fuzzy numbers, there are six kinds of models. Finally, in the data analysis, RMSE shows that the forecast values of the proposed models fit very well. Clearly, the proposed fuzzy support vector regression model can be applied to forecasting with better forecasting performance.
Hsu, Wei-Cheng, and 許維城. "Forecasting Realized Volatility with Support Vector Regression." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/42566041775905716863.
Ojemakinde, Bukola Titilayo. "Support vector regression for non-stationary time series." 2006. http://etd.utk.edu/2006/OjemakindeBukola.pdf.
Pontil, Massimiliano, Ryan Rifkin, and Theodoros Evgeniou. "From Regression to Classification in Support Vector Machines." 1998. http://hdl.handle.net/1721.1/7258.
Pao-Tsun, Lin, and 林保村. "Support Vector Regression: Systematic Design and Performance Analysis." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/62786616763960393790.
National Taiwan University of Science and Technology
Department of Electrical Engineering
89
Support Vector Regression (SVR), based on statistical learning, is a useful tool for nonlinear regression problems. The SVR method deals with data in a high-dimensional space by using linear quadratic programming techniques, so the regression result has optimal properties. However, if the parameters are not properly selected, overfitting and/or underfitting may occur. Two parameters are considered in this research: σ, the width of the Gaussian kernels, and ε, the tolerance zone in the cost function. We adopted concepts from sampling theory, treating the Gaussian kernel as a filter, to deal with the parameter σ. The idea is to analyze the frequency spectrum of the training data and to select a cut-off frequency that includes 90% of the power in the spectrum; the corresponding σ can then be obtained through the sampling theorem. In our simulations, good performance is observed when the selected frequency is near the cut-off frequency. The other parameter, ε, involves a tradeoff between the number of support vectors and the RMSE. By introducing the confidence interval concept, a suitable selection of ε can be obtained. The idea is to use a norm of the residuals to estimate the noise distribution of the training data; when ε is chosen to cover the 90% confidence interval of this noise, simulations demonstrate superior performance in various examples. With this systematic design, proper values of σ and ε can be obtained, and the resulting system performs well in all aspects.
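The ε-selection idea, choosing the tube width so that it covers 90% of the estimated noise, can be sketched as follows (illustrative only; the synthetic Gaussian noise stands in for the residual estimate obtained from real training data):

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for the noise distribution estimated from training residuals.
noise = rng.normal(0.0, 0.1, size=100_000)

# Choose epsilon so the insensitive tube covers 90% of the noise
# magnitudes, mirroring the 90% confidence-interval criterion.
eps = float(np.quantile(np.abs(noise), 0.90))
# For Gaussian noise with sigma = 0.1 this is close to 1.645 * 0.1.
```

A larger ε admits more noise into the tube and yields fewer support vectors at the price of a coarser fit, which is exactly the tradeoff the abstract describes.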
Yu-Di Chen and 陳煜棣. "Toward a Monotonicity Constrained Support Vector Regression Model." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/3dn457.
National Cheng Kung University
Institute of Information Management
102
Machine learning techniques are widely used for the analysis and extraction of knowledge. Data mining, a machine learning tool for knowledge discovery in databases (KDD), can automatically or semi-automatically analyze large quantities of data. In recent years, the support vector machine (SVM), a state-of-the-art learning method based on statistical learning theory, has been a focus of machine learning research due to its excellent ability. Support vector regression (SVR) is the most common form of SVM when the output is continuous. Instead of minimizing the observed training error, SVR attempts to minimize a bound on the generalization error, so as to achieve good generalized performance. SVR has been applied in various fields: time series and financial (noisy and risky) prediction, approximation of complex engineering analyses, convex quadratic programming and choices of loss functions, etc. However, in many real-world problems, incorporating prior knowledge into SVR can improve the quality of models that are only data-driven, and close the wide gap between academic and business goals. We can observe monotonic relationships between the output value and some attributes, and it has been shown that incorporating monotonicity constraints can reduce errors. In this study, we propose a knowledge-oriented support vector regression model with monotonicity constraints, exploiting the knowledge of experts to retrieve monotonic rules from datasets, after which we construct monotonicity constraints to implement the proposed regression model. Experiments conducted on function prediction and real-world data sets show that the proposed method, which is not only data-driven but also domain-knowledge-oriented, can help correct the loss of monotonicity introduced during data collection, and performs better than traditional methods.
Su, Tzung-Hui, and 蘇宗輝. "Hybrid Robust Support Vector Machine Regression with Outliers." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/66925773018667043323.
National Ilan University
Master's Program, Department of Electrical Engineering
95
In this study, a hybrid robust support vector machine for regression (HRSVMR) is proposed to deal with training data sets that contain outliers. The proposed approach uses a two-stage strategy. In stage I, data preprocessing, a support vector machine for regression (SVMR) is used to filter out the outliers in the training data set. Because the outliers are removed, concepts from robust statistics are not needed to reduce their effect. The reduced training data set, that is, the training set with outliers excluded, is then used directly in stage II to train a non-robust least squares support vector machine for regression (LS-SVMR) or non-robust support vector regression networks (SVRNs). Consequently, the learning mechanism of the proposed approach is much simpler than that of the robust support vector regression networks (RSVRNs) approach or the weighted LS-SVMR approach. Simulation results show that when outliers are present, the proposed approach with non-robust LS-SVMR is superior to the weighted LS-SVMR approach, and the proposed approach with non-robust SVRNs is superior to the RSVRNs approach.
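The two-stage idea can be sketched with ordinary least squares standing in for the SVMR filter of stage I (an illustrative sketch under that substitution, not the thesis's algorithm; the data are synthetic):

```python
import numpy as np

# Stage I (sketch): fit a rough model and flag points with large
# residuals as outliers.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 0.5 + rng.normal(0.0, 0.05, 50)
y[10] += 5.0                           # inject one gross outlier

A = np.vstack([x, np.ones_like(x)]).T  # design matrix [x, 1]
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ coef
keep = np.abs(resid) < 3 * resid.std() # outliers fail this test

# Stage II: retrain a (non-robust) model on the reduced data set.
coef2, *_ = np.linalg.lstsq(A[keep], y[keep], rcond=None)
```

Once the outliers are gone, the second-stage learner needs no robust weighting, which is the simplification the abstract claims.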
Guo, Zhi-Heng, and 郭致亨. "Intuitionistic Fuzzy C-Least Squares Support Vector Regression." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/42862931861349067169.
Lunghwa University of Science and Technology
Master's Program, Department of Information Management
103
The fuzzy c-means clustering method (FCM) has been widely used in many different settings. This study develops a novel intuitionistic fuzzy c-least-squares support vector regression clustering method (Novel intuitionistic fuzzy c-least-squares support vector regression, IFC-LSSVR) and applies it to analyze customer data from an e-learning platform and a wheat-seed data set. The analysis proceeds in two stages, using Sammon mapping to transform the dimensionality of the data. In the first stage, the data are transformed by Sammon mapping to reduce their complexity; the second stage applies IFC-LSSVR together with PSO. Finally, in the data analysis, the first stage illustrates the Sammon mapping, and the second stage compares the proposed IFC-LSSVR with two other methods (KM, FCM). The results of this study show that the proposed IFC-LSSVR performs better than the compared methods.
Yan, Jhih-Han, and 嚴智瀚. "Support vector regression based residual control charts with EWMA." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/75767686622400600129.
National Yunlin University of Science and Technology
Master's Program, Graduate Institute of Industrial Engineering and Management
100
Support vector regression is a regression technique based on the support vector machine; its prediction error is small compared to the least squares method, so it has better predictive ability. Regression residual control charts have generally plugged a traditional regression model into the chart, but in recent years researchers have found that support vector regression outperforms traditional regression in residual control chart applications. However, conventional regression residual control charts use three-standard-deviation limits and are applied to linear data types; they are lacking for nonlinear data types and are ineffective at monitoring small shifts. This study therefore applies the EWMA control chart to nonlinear data types and combines it with support vector regression to monitor process shifts with a regression control chart. Using simulated data, the EWMA control chart based on support vector regression is compared with the EWMA control chart based on traditional regression for different shift sizes, as measured by the average run length. The results show that support vector regression applied to both linear and nonlinear data is superior to traditional regression in prediction accuracy, and that of the two EWMA control charts, the one based on support vector regression has better monitoring capability than the one based on traditional regression.
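The EWMA-of-residuals statistic described above can be sketched as follows (a generic EWMA implementation with the standard time-varying control limits, not the thesis's code; the residual series is invented):

```python
import numpy as np

def ewma_chart(residuals, lam=0.2, L=3.0):
    """EWMA statistic z_i = lam*r_i + (1-lam)*z_{i-1} over regression
    residuals, with control limits
    L * sigma * sqrt(lam/(2-lam) * (1 - (1-lam)^(2i)))."""
    r = np.asarray(residuals, float)
    sigma = r.std(ddof=1)
    z = np.empty_like(r)
    prev = 0.0
    for i, ri in enumerate(r):
        prev = lam * ri + (1.0 - lam) * prev
        z[i] = prev
    idx = np.arange(1, len(r) + 1)
    limits = L * sigma * np.sqrt(lam / (2.0 - lam)
                                 * (1.0 - (1.0 - lam) ** (2 * idx)))
    return z, limits

# Toy residual series whose last point reflects a process shift.
resid = np.array([0.1, -0.1, 0.05, -0.05, 0.8])
z, limits = ewma_chart(resid)
```

Because each EWMA value accumulates a fraction of every past residual, small sustained shifts push the statistic across the limits sooner than a plain three-sigma Shewhart chart would detect them.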
Chen, Wei-Lun, and 陳緯倫. "Ordinal regression and applications: A support vector machine approach." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/wxcgvn.
National Chung Hsing University
Department of Applied Mathematics
100
The analysis of ordinal data has received increasing attention in many research areas such as medicine and image retrieval. In this study, we use a paired-sample approach to classify ordinal data with the support vector machine (SVM) in combination with the cumulative logit model (CLM), and we explain how to determine the parameters of the ordinal regression model by this method. Sequential minimal optimization (SMO), the algorithm most commonly used to optimize the SVM, and an improved version of it used in this study are discussed. Finally, we apply the method to artificial data, real medical data and image data to evaluate its classification performance.
Pontil, Massimiliano, Sayan Mukherjee, and Federico Girosi. "On the Noise Model of Support Vector Machine Regression." 1998. http://hdl.handle.net/1721.1/7259.
Full text"Image representation, processing and analysis by support vector regression." 2001. http://library.cuhk.edu.hk/record=b5890679.
Full text Thesis (M.Phil.)--Chinese University of Hong Kong, 2001.
Includes bibliographical references (leaves 380-383).
Text in English; abstracts in English and Chinese.
Chow Kai Tik = Zhi yuan shi liang hui gui fa zhi ying xiang biao shi shi ji qi ying xiang chu li yu fen xi / Zhou Qidi.
Abstract in English
Abstract in Chinese
Acknowledgement
Content
List of figures
Chapter Chapter 1 --- Introduction --- p.1-11
Chapter 1.1 --- Introduction --- p.2
Chapter 1.2 --- Road Map --- p.9
Chapter Chapter 2 --- Review of Support Vector Machine --- p.12-124
Chapter 2.1 --- Structural Risk Minimization (SRM) --- p.13
Chapter 2.1.1 --- Introduction
Chapter 2.1.2 --- Structural Risk Minimization
Chapter 2.2 --- Review of Support Vector Machine --- p.21
Chapter 2.2.1 --- Review of Support Vector Classification
Chapter 2.2.2 --- Review of Support Vector Regression
Chapter 2.2.3 --- Review of Support Vector Clustering
Chapter 2.2.4 --- Summary of Support Vector Machines
Chapter 2.3 --- Implementation of Support Vector Machines --- p.60
Chapter 2.3.1 --- Kernel Adatron for Support Vector Classification (KA-SVC)
Chapter 2.3.2 --- Kernel Adatron for Support Vector Regression (KA-SVR)
Chapter 2.3.3 --- Sequential Minimal Optimization for Support Vector Classification (SMO-SVC)
Chapter 2.3.4 --- Sequential Minimal Optimization for Support Vector Regression (SMO-SVR)
Chapter 2.3.5 --- Lagrangian Support Vector Classification (LSVC)
Chapter 2.3.6 --- Lagrangian Support Vector Regression (LSVR)
Chapter 2.4 --- Applications of Support Vector Machines --- p.117
Chapter 2.4.1 --- Applications of Support Vector Classification
Chapter 2.4.2 --- Applications of Support Vector Regression
Chapter Chapter 3 --- Image Representation by Support Vector Regression --- p.125-183
Chapter 3.1 --- Introduction of SVR Representation --- p.116
Chapter 3.1.1 --- Image Representation by SVR
Chapter 3.1.2 --- Implicit Smoothing of SVR representation
Chapter 3.1.3 --- "Different Insensitivity, C value, Kernel and Kernel Parameters"
Chapter 3.2 --- Variation on Encoding Method [Training Process] --- p.154
Chapter 3.2.1 --- Training SVR with Missing Data
Chapter 3.2.2 --- Training SVR with Image Blocks
Chapter 3.2.3 --- Training SVR with Other Variations
Chapter 3.3 --- Variation on Decoding Method [Testing or Reconstruction Process] --- p.171
Chapter 3.3.1 --- Reconstruction with Different Portion of Support Vectors
Chapter 3.3.2 --- Reconstruction with Different Support Vector Locations and Lagrange Multiplier Values
Chapter 3.3.3 --- Reconstruction with Different Kernels
Chapter 3.4 --- Feature Extraction --- p.177
Chapter 3.4.1 --- Features on Simple Shape
Chapter 3.4.2 --- Invariant of Support Vector Features
Chapter Chapter 4 --- Mathematical and Physical Properties of SVR Representation --- p.184-243
Chapter 4.1 --- Introduction of RBF Kernel --- p.185
Chapter 4.2 --- Mathematical Properties: Integral Properties --- p.187
Chapter 4.2.1 --- Integration of an SVR Image
Chapter 4.2.2 --- Fourier Transform of SVR Image (Hankel Transform of Kernel)
Chapter 4.2.3 --- Cross Correlation between SVR Images
Chapter 4.2.4 --- Convolution of SVR Images
Chapter 4.3 --- Mathematical Properties: Differential Properties --- p.219
Chapter 4.3.1 --- Review of Differential Geometry
Chapter 4.3.2 --- Gradient of SVR Image
Chapter 4.3.3 --- Laplacian of SVR Image
Chapter 4.4 --- Physical Properties --- p.228
Chapter 4.4.1 --- Transformation between Reconstructed Image and Lagrange Multipliers
Chapter 4.4.2 --- Relation between Original Image and SVR Approximation
Chapter 4.5 --- Appendix --- p.234
Chapter 4.5.1 --- Hankel Transform for Common Functions
Chapter 4.5.2 --- Hankel Transform for RBF
Chapter 4.5.3 --- Integration of Gaussian
Chapter 4.5.4 --- Chain Rules for Differential Geometry
Chapter 4.5.5 --- Derivation of Gradient of RBF
Chapter 4.5.6 --- Derivation of Laplacian of RBF
Chapter Chapter 5 --- Image Processing in SVR Representation --- p.244-293
Chapter 5.1 --- Introduction --- p.245
Chapter 5.2 --- Geometric Transformation --- p.241
Chapter 5.2.1 --- "Brightness, Contrast and Image Addition"
Chapter 5.2.2 --- Interpolation or Resampling
Chapter 5.2.3 --- Translation and Rotation
Chapter 5.2.4 --- Affine Transformation
Chapter 5.2.5 --- Transformation with Given Optical Flow
Chapter 5.2.6 --- A Brief Summary
Chapter 5.3 --- SVR Image Filtering --- p.261
Chapter 5.3.1 --- Discrete Filtering in SVR Representation
Chapter 5.3.2 --- Continuous Filtering in SVR Representation
Chapter Chapter 6 --- Image Analysis in SVR Representation --- p.294-370
Chapter 6.1 --- Contour Extraction --- p.295
Chapter 6.1.1 --- Contour Tracing by Equi-potential Line [using Gradient]
Chapter 6.1.2 --- Contour Smoothing and Contour Feature Extraction
Chapter 6.2 --- Registration --- p.304
Chapter 6.2.1 --- Registration using Cross Correlation
Chapter 6.2.2 --- Registration using Phase Correlation [Phase Shift in Fourier Transform]
Chapter 6.2.3 --- Analysis of the Two Methods for Registration in SVR Domain
Chapter 6.3 --- Segmentation --- p.347
Chapter 6.3.1 --- Segmentation by Contour Tracing
Chapter 6.3.2 --- Segmentation by Thresholding on Smoothed or Sharpened SVR Image
Chapter 6.3.3 --- Segmentation by Thresholding on SVR Approximation
Chapter 6.4 --- Appendix --- p.368
Chapter Chapter 7 --- Conclusion --- p.371-379
Chapter 7.1 --- Conclusion and contribution --- p.372
Chapter 7.2 --- Future work --- p.378
Reference --- p.380-383
LAI, CHUNG-YEN, and 賴忠彥. "Prediction of Stock Price using Support Vector Interval Regression." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/btv4y8.
Full text National Kaohsiung University of Applied Sciences
Master's Program, Graduate Institute of Information Management
105
In recent years, investment and financial management have become part of people's lives. Among the many available investment vehicles, stocks remain the main instrument for most investors. But technical analysis of the stock market has grown increasingly complex and the number of technical indicators keeps rising rapidly, so investors facing this situation find it difficult to make a rational analysis. This study therefore uses technical indicators, including the stochastic oscillator, Williams %R, the Relative Strength Index, and others, to predict stock prices on the Taiwan stock market. Support vector regression is used to establish a model that predicts stock price movements, and, following an earlier study's redefinition of the insensitive interval, it is extended to a support vector regression machine with a parametric insensitive model. Taking into account the halving of the day-trading transaction tax on April 28, 2017, this study further adds the concept of interval regression to the former method, yielding a support vector interval regression machine with a parametric insensitive model. We found that the proposed parametric-insensitive support vector interval regression performed better than the other two methods, indicating that the proposed approach can help investors profit from stock investment.
Fang, Hsin-En, and 方信恩. "Support Vector Regression Applied to Estimating Product Form Images." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/37529342064185916008.
Full text National Cheng Kung University
Department of Industrial Design (Master's and Doctoral Program)
96
In this study, a recently developed learning machine theory called support vector regression (SVR) is introduced. It is intended to improve the estimation of product form image values using a computer system; kernel functions are applied to the SVR to increase its ability to resolve nonlinear problems. Two types of samples with different data structures, derived from virtual 3-D models, are taken as training data for two case studies that evaluate the performance of the SVR models. In case study I, 32 virtual 3-D models of actual hairdryers, constructed with different modeling procedures and features, serve as samples. To use these models as training input, a process is provided to unify the data structure: each model is reconstructed as a single UV lofting surface, and points are arrayed along the surface to represent the entire 3-D model. The linguistic image word 'streamline' is then applied to estimate the power of the SVR. In case study II, 51 3-D models generated by feature-based morphing are prepared as systematically structured data for testing the SVR. To construct the optimal regression model, a technique called two-step cross-validation is used to select the optimal parameter combinations for the SVR model. Finally, the excellent training and predictive power of the SVR models with different kernel functions shows the advantages of the SVR approach, and the characteristics of the SVR are explained using the two types of samples.
Wei, Chih-Yuan, and 魏志原. "Sequential Minimal Optimization Algorithm For Robust Support Vector Regression." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/79312955586930977629.
Full text National Taipei University
Master's Program in Electrical Engineering and Information Industry R&D
102
Based on statistical learning theory, the support vector machine is notable for its low model complexity and high generalization ability, and it is highly applicable to both pattern recognition and function approximation. The quadratic programming problem in its original formulation has a computational complexity of O(n²), which becomes prohibitive as the number of training instances grows. The sequential minimal optimization (SMO) algorithm subdivides this one large optimization into a series of small two-instance subproblems, effectively reducing the computation required for the quadratic programming and reaching the optimal solution rapidly. With some improvements, this study extends SMO for SVM classification to SVM regression; the development benefits applications in function approximation.
Liao, Bo-Jyun, and 廖勃鈞. "Application of Support Vector Regression for Physiological Relaxation Assessment." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/01274326398562378450.
Full text National Yunlin University of Science and Technology
Master's Program, Department of Computer Science and Information Engineering
101
Nowadays, in pursuit of a better quality of life, people constantly compete with one another and consequently suffer from various stresses. Many reports indicate that ninety percent of physiological problems are related to mental factors. Although researchers have proposed various stress indicators with high recognition rates, how to evaluate the stress status of a subject remains an open problem. This thesis therefore proposes a relaxation assessment system based on physiological signals. A specially designed emotional relaxation experiment was performed to collect subjects' physiological signals with four biosensors: electrocardiogram (ECG), galvanic skin response (GSR), blood volume pulse (BVP), and pulse. A Go/No-go test was used to help participants concentrate on the experiment, and mathematical tests were performed to induce stress. Participants filled out a pre-test questionnaire to measure their emotional state, underwent two stress-relaxation techniques, relaxing music and a mindfulness meditation body scan, and then filled out a post-test questionnaire. Support vector regression (SVR) was used to train the regression curve of emotional calm; from these curves, a quantitative calm assessment of the subject can be provided. To demonstrate how representative the obtained emotional calm curves are, the testing curves were compared with three negative emotional curves and the obtained calm curve. Experimental results showed that 89.1% of the testing emotional calm curves could be correctly recognized.
鄭健毅. "Apply support vector regression for electronic industries stock prediction." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/60982683198848794338.
Full text Minghsin University of Science and Technology
Graduate Institute of Industrial Engineering and Management
99
Stock market prediction has been widely discussed in recent years. Investors are often lost in the ups and downs of stock prices, and many theories and methods have been applied to the prediction problem. Because a large number of political, economic, human, and other factors affect the stock market, creating an accurate model to predict stock prices is not easy. The aim of this research is to find the best stock prediction model using support vector regression (SVR) and a back-propagation (BP) neural network. The author first builds prediction models with the BP neural network and SVR, then employs rough set theory (RST) to select the most significant factors, and evaluates prediction performance for both the full and the reduced models. Finally, BP and SVR are compared in terms of MSE, MAD, and MAPE. The companies studied include TSMC, AU Optronics, MediaTek, HTC, and Foxconn Technology Group; twelve daily technical indicators were computed for each from July 1, 2007 to June 30, 2010. According to the performance measures, the BP neural network performs slightly worse than SVR for both the full and reduced models. Although the performance may not meet the research objectives, the results do reveal the most significant factors in the prediction model. Keywords: Support Vector Regression (SVR), Rough set theory, BP, Feature selection, Technical indicator
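The three error measures used in the comparison above (MSE, MAD, MAPE) can be written out directly; the price series below is purely illustrative, not data from the thesis.

```python
def mse(actual, pred):
    """Mean squared error."""
    return sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual)

def mad(actual, pred):
    """Mean absolute deviation."""
    return sum(abs(a - p) for a, p in zip(actual, pred)) / len(actual)

def mape(actual, pred):
    """Mean absolute percentage error (actual values must be non-zero)."""
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, pred)) / len(actual)

prices = [100.0, 102.0, 101.0, 105.0]     # hypothetical closing prices
forecast = [101.0, 101.0, 103.0, 104.0]   # hypothetical model output
# mse = (1 + 1 + 4 + 1) / 4 = 1.75 ; mad = (1 + 1 + 2 + 1) / 4 = 1.25
```

MAPE is scale-free, which is why studies like this one report it alongside MSE and MAD when comparing models across stocks with different price levels.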
CHEN, MING-ZHE, and 陳銘哲. "Using Support Vector Regression for Chiller Analysis and Prediction." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/5m7429.
Full text National Taipei University of Technology
Department of Energy and Refrigerating Air-Conditioning Engineering
105
Research shows that air conditioning accounts for 43% of building energy consumption, and chillers account for 60% of the air-conditioning load, so keeping chillers running at high efficiency is a key concern. Our team used the ASHRAE standard short-term measurement procedure to obtain chiller data, and found two deficiencies in the process: (1) COP values differ greatly between measurement cycles, and (2) when data measured in different seasons are pooled into a single regression analysis, the measurement uncertainty can be very large. The research methods in this study are divided into three parts. (1) Chiller performance is examined by combining the ASHRAE chiller performance equation with regression analysis and the support vector machine; when the support vector machine is used to predict chiller performance across seasons, the mean absolute percentage error is 7.96%, well below the 15.41% threshold of ASHRAE Guideline 14. (2) Through cross-built models and performance prediction, performance data from 7 chillers are used to produce 450 pieces of analysis information. (3) Finally, backward elimination in regression analysis identifies 7 important factors that affect the error of support vector regression in predicting chiller performance; the most influential factor is the absolute difference between the average temperature difference in the testing data and that in the training data.
Chen, Kuan-Yu, and 陳寬裕. "An Evolutionary Support Vector Regression in Tourism Demand Forecasting." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/bpqjrc.
Full text Chang Jung Christian University
Graduate Institute of Business Administration
95
The tourism industry, which benefits the transportation, accommodation, catering, entertainment, and retailing sectors, has boomed in the past few decades, and the 20th century witnessed a steady increase in tourism all over the world. Each country wants to forecast its international visitors and tourism receipts in order to choose an appropriate strategy for its economic well-being, so a reliable forecast is needed and plays a major role in tourism planning. The support vector machine (SVM) was first applied to pattern recognition problems; with Vapnik's introduction of the ε-insensitive loss function, it was extended to nonlinear regression estimation, a technique called support vector regression (SVR). This study applies SVR to construct a prediction model of tourism demand. Because an effective SVR model requires careful parameter settings, the study proposes a new model called GA-SVR that searches for the optimal SVR parameters with real-valued genetic algorithms and uses them to construct the SVR model. In forecasting tourism demand, the SVR model proves reliable and is a good prediction technique; its generalization performance is even more accurate than that of neural networks. Moreover, to test the importance of parameter selection and understand the features of the SVR model, a sensitivity analysis is performed; it demonstrates that incorrectly selected parameters put the model at risk of over-fitting or under-fitting.
LU, HUI-FEN, and 呂蕙芬. "A Kernel-based Feature Selection Method for Support Vector Regression." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/57avjc.
Full text National Taichung University of Education
In-service Master's Program, Graduate Institute of Educational Information and Measurement
105
A large body of research literature shows that data characteristics affect forecasting results and even overall system performance. With the rapid development of science and technology, datasets in many fields can contain tens of thousands of features while the number of available training samples is far smaller. The advantages of feature selection include easier interpretation, reduced computation and data storage requirements, and improved prediction accuracy through dimension reduction. An appropriate feature selection method can shorten sample training time and improve prediction accuracy; when data are mapped into a high-dimensional space, selecting the most informative features reduces computation time and yields the feature subset that minimizes the within-group prediction error, which is an important goal of this study. This study therefore develops a kernel-based feature selection method suited to support vector regression, an important research topic in feature selection. Using datasets from the UCI Machine Learning Repository and semantic-space lexical data from primary and secondary schools, the proposed method is validated against the forward feature selection, backward feature selection, and kernel-function feature selection methods. The results show that support vector regression combined with the proposed kernel-based feature selection method yields a smaller error rate, and that support vector regression is superior to linear and nonlinear regression in data prediction: it not only reduces the data dimension effectively but also obtains smaller prediction errors. General datasets contain a wide variety of features.
Through feature selection, repetitive features and non-influential noise can be eliminated, effectively reducing the feature-vector dimension and removing the less influential features; if such dimension reduction is widely applied across fields, prediction performance will be greatly enhanced.
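The backward feature selection baseline compared above greedily drops, one at a time, the feature whose removal most reduces a validation error, stopping when no removal helps. A generic sketch; the scoring function here is a toy stand-in, not the thesis's SVR-based criterion.

```python
def backward_select(features, score):
    """Greedy backward elimination: starting from all features, repeatedly
    drop the single feature whose removal lowers the error returned by
    score(subset), until no removal helps."""
    current = list(features)
    best = score(frozenset(current))
    improved = True
    while improved and len(current) > 1:
        improved = False
        for f in list(current):
            trial = [g for g in current if g != f]
            s = score(frozenset(trial))
            if s < best:
                best, current, improved = s, trial, True
                break
    return current, best

# Toy error model: 'a' and 'b' are useful features, 'noise' adds 0.5 error.
def toy_score(subset):
    err = 1.0
    err -= 0.4 if 'a' in subset else 0.0
    err -= 0.3 if 'b' in subset else 0.0
    err += 0.5 if 'noise' in subset else 0.0
    return err

kept, err = backward_select(['a', 'b', 'noise'], toy_score)
```

In the thesis's setting, `score` would be the cross-validated SVR prediction error on the candidate feature subset, which is what makes kernel-based shortcuts to evaluating subsets worthwhile.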
Yang, Chun-Yu, and 楊竣宇. "Modeling Technology of Robust Interval Support Vector Interval Regression Networks." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/23576084553547875386.
Full text National Ilan University
Master's Program, Department of Electrical Engineering
99
In real-world applications, people often handle symbolic interval-valued data, which may also contain outliers. To overcome these problems, we replace the Euclidean distance in the Gaussian kernel with the Hausdorff distance between intervals; the resulting method is called support vector regression with interval input data (SVRI2). It provides the best parameters for interval support vector interval regression networks (ISVIRNs), generating an initial structure and training them effectively. In this study, we further insert a robust method into ISVIRNs, yielding robust interval support vector interval regression networks (RISVIRNs): a condition determines whether an interval datum is an outlier, and if so, the datum is pulled back to the center line of the initial structure; otherwise nothing is done. Simulation results with outliers show that the proposed RISVIRNs provide satisfactory results and predict accurately.
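For closed intervals on the real line, the Hausdorff distance used above reduces to the larger of the endpoint differences, so substituting it into a Gaussian kernel is a one-line change. A sketch; the kernel width `gamma` is a hypothetical choice.

```python
import math

def interval_hausdorff(a, b):
    """Hausdorff distance between closed intervals a=(lo, hi), b=(lo, hi):
    max of the lower-endpoint and upper-endpoint differences."""
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]))

def interval_rbf_kernel(a, b, gamma=0.5):
    """Gaussian kernel with the Euclidean distance replaced by the interval
    Hausdorff distance, in the spirit of SVR with interval input data."""
    return math.exp(-gamma * interval_hausdorff(a, b) ** 2)

d = interval_hausdorff((1.0, 3.0), (2.0, 5.0))   # max(|1-2|, |3-5|) = 2
```

Because the kernel stays positive and equals 1 only when the two intervals coincide, it behaves like an ordinary RBF kernel while accepting interval inputs directly.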
Hsu, Wen-Yang, and 許文揚. "Applying Support Vector Regression to the Prediction of Typhoon-Rainfall." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/29756524001113685616.
Full text Da-Yeh University
Department of Industrial Engineering and Technology Management
94
Typhoons occur frequently in Taiwan and bring severe natural disasters. According to Central Weather Bureau statistics, about thirteen typhoons occur on average each year, concentrated between June and November and most frequently in August and September, when the southwest airstream prevails. The rainfall brought by typhoons and the southwest airstream is substantial and causes great damage. To help guard against such damage, this thesis applies the support vector regression variant of the support vector machine to predict rainfall. The input factors are the typhoon's route, its position, maximum air pressure, maximum wind velocity near the typhoon center, and the radius of the storm; the output factor is rainfall. The results examine the ability to predict rainfall according to the typhoon's route and each subregion's rainfall under that route.
Chou, Yu-Che, and 周祐徹. "Revising Traditional Technological Prediction Model by Support Vector Regression Tools." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/82866748048600355430.
Full text Da-Yeh University
Department of Industrial Engineering and Technology Management
94
With social development and scientific and technological progress, technology has become an essential element: science and technology, the economy, and transnational competition all rely on it, so technology forecasting is attracting growing attention. Examining the patenting behavior of enterprises reveals how many resources they invest in a technology area; patents can also be used to predict the development trend of a technology and to measure enterprise competitiveness, which is key to sustained development. This research searches a patent database for OLED-technology patents, feeds the patent data into support vector regression (SVR) tools, and carries out short-term forecasts. The short-term SVR forecasts are compared with the short-term forecasts of the traditional Logistic model; the better short-term result is then fed into the Logistic model to simulate the trend, which is compared with the trend simulated by the traditional Logistic model. This proves that SVR tools can revise the traditional Logistic model and raise its forecasting ability. Finally, based on the revised trend, the research offers objective opinions as a reference for better policy making.
HSU, CHIA-CHU, and 許家駒. "Embedded Support Vector Regression on CMAC and CMAC-GBF Techniques." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/78584507823749143918.
Full text National Ilan University
Master's Program, Department of Electrical Engineering
95
In this thesis, we integrate the techniques of the cerebellar model articulation controller (CMAC) and support vector regression (SVR) to develop a more efficient scheme. CMAC has some attractive features, the most important of which are its extremely fast learning capability and a special architecture that makes effective digital hardware implementation possible. SVR, on the other hand, is a method for function approximation and regression estimation based on statistical learning theory, with robust properties against noise. We propose SVR-based CMAC (CMAC-GBF) systems that combine SVR with CMAC (CMAC-GBF). Simulation results show that the proposed structure achieves high accuracy and robustness against noise, and the experimental results demonstrate that the SVR-based CMAC (CMAC-GBF) systems outperform the original CMAC (CMAC-GBF) systems.
Lin, Yang Shun, and 楊舜麟. "Using Optimization Algorithms to Select Parameters of Support Vector Regression." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/00606759842868945892.
Full text Da-Yeh University
Department of Industrial Engineering and Technology Management
94
Organized approaches to selecting support vector regression (SVR) parameters are lacking, yet the choice of SVR parameters strongly affects prediction accuracy. This research applies four algorithms, namely the ant colony system, tabu search, the immune algorithm, and particle swarm optimization, to choose SVR parameters; for each algorithm, the forecasting error serves as the objective function. In addition, many real-world examples are used to demonstrate the performance of the proposed models.
Purnamasari, Ragil, and 李桔萍. "Improving Accuracy of Preliminary Cost Estimation Using Support Vector Regression." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/51605211122639264614.
Full text National Central University
Graduate Institute of Construction Management
104
Preliminary cost estimation is an important stage of construction projects: it is during this stage that a contractor or owner determines whether the project is feasible. A typical preliminary cost estimate for a building construction project in Indonesia may take weeks and carry an error rate varying from -12.97% to +26.80%, and previous studies concluded that 74% of cost overruns are caused by underestimation. The research objectives, therefore, are (1) to determine the factors that influence cost estimation in Indonesia and (2) to develop a Support Vector Regression (SVR) model in an attempt to improve accuracy and reduce work-hours for preliminary cost estimation. A literature review identified the 14 factors that most influence cost estimation for all types of construction projects around the world. With these factors as the model bases, data collection randomly gathered 104 building cases in Indonesia containing valid information for the proposed model. The SVR model with the radial basis function kernel was established after data trimming, analysis, and normalization. The model was then evaluated and implemented using 5-fold cross-validation and yielded an average accuracy of 95.79% for preliminary cost estimation of building construction projects, an improvement of 8.71% over estimates from the original data. The time spent on such a preliminary cost estimate has been significantly reduced from weeks by human estimators to less than one second by the model. The SVR model is thus efficient in both accuracy and time saved.
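The 5-fold cross-validation used to evaluate the model above can be sketched generically. A trivial mean-of-training-set predictor stands in for the SVR here, which is purely an assumption for illustration; the cost figures are invented.

```python
def k_fold_indices(n, k=5):
    """Split range(n) into k contiguous folds of near-equal size."""
    base, rem = divmod(n, k)
    folds, start = [], 0
    for i in range(k):
        size = base + (1 if i < rem else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def cross_validate(y, k=5):
    """Per-fold MAPE of a mean-of-training-data predictor (SVR stand-in):
    each fold is held out once while the model 'fits' on the rest."""
    folds = k_fold_indices(len(y), k)
    scores = []
    for test in folds:
        held_out = set(test)
        train = [y[i] for i in range(len(y)) if i not in held_out]
        pred = sum(train) / len(train)            # fit on the other k-1 folds
        mape = 100.0 * sum(abs((y[i] - pred) / y[i]) for i in test) / len(test)
        scores.append(mape)
    return scores

costs = [120.0, 135.0, 128.0, 150.0, 110.0, 142.0, 125.0, 138.0, 131.0, 119.0]
fold_errors = cross_validate(costs, k=5)
```

Averaging the per-fold errors gives an estimate of out-of-sample accuracy like the 95.79% figure the abstract reports, without ever scoring a case the model was trained on.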
Chen, Hsi–An, and 陳璽安. "Applying the Support Vector Regression to the Missing Value Problems." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/02368330022191248233.
Full text Huafan University
Master's Program, Department of Information Management
98
Data mining is now widespread among enterprises, but data migrated from paperwork to electronic systems may contain missing values caused by human error or out-of-date information. Usually such records are deleted, or the missing values are filled with the average, zero, or the mode; this is workable only when few values are missing, and otherwise affects the accuracy of the data and ultimately fails to provide reliable information to the user. This thesis uses open datasets for testing: values are removed at random from the datasets, and then the average value, zero, a back-propagation network (BPN), and support vector regression (SVR) are used to backfill them. Finally, a regression tree is used to analyze and compare the results. The results show that, among the methods, the values predicted by SVR have the smallest average error from the original values.
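The contrast above between naive fills and a learned model can be shown on a toy column. A simple least-squares line stands in for the SVR imputer, which is a deliberate simplification; the data are invented.

```python
def mean_impute(xs):
    """Replace None entries with the mean of the observed values."""
    obs = [x for x in xs if x is not None]
    m = sum(obs) / len(obs)
    return [m if x is None else x for x in xs]

def linear_impute(xs, ys):
    """Fill missing ys from a least-squares line fitted on the observed
    (x, y) pairs -- a stand-in for the SVR imputer used in the thesis."""
    pairs = [(x, y) for x, y in zip(xs, ys) if y is not None]
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    sxy = sum((x - mx) * (y - my) for x, y in pairs)
    sxx = sum((x - mx) ** 2 for x, _ in pairs)
    b = sxy / sxx                     # slope
    a = my - b * mx                   # intercept
    return [a + b * x if y is None else y for x, y in zip(xs, ys)]

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.0, 4.0, 6.0, None, 10.0]       # true relation y = 2x; y[3] is missing
filled = linear_impute(x, y)          # regression recovers 8.0
mean_filled = mean_impute(y)          # mean fill gives 5.5, far off
```

A model-based fill exploits the relationship between columns, which is exactly why the thesis finds SVR imputation closer to the original values than mean or zero fills.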
Tsung-HsuanLai and 賴宗煊. "Image Completion Using Prediction Concept Based on Support Vector Regression." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/39707980562772998744.
Full text National Cheng Kung University
Institute of Manufacturing Information and Systems (Master's and Doctoral Program)
100
Image completion is a widely used technique that automatically removes objects from, or repairs damaged portions of, an image. However, information about the original image is often lacking during structure reconstruction, and as a result images with complex structures are difficult to restore. This study proposes an SVR-oriented Image Completion (SVR-IC) method, the goal of which is to predict the original structure of unknown areas and then repair, or make appropriate adjustments to, the structure and texture of the damaged area. The experimental results show that SVR-IC produced images of good quality that were superior to those of other methods, demonstrating that integrating structure prediction into image completion can effectively enhance the quality of the restored image.