Dissertations / Theses on the topic 'Support vector machine. Interval. Kernel'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 39 dissertations / theses for your research on the topic 'Support vector machine. Interval. Kernel.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Takahashi, Adriana. "Máquina de vetores-suporte intervalar." Universidade Federal do Rio Grande do Norte, 2012. http://repositorio.ufrn.br:8080/jspui/handle/123456789/15225.
The Support Vector Machine (SVM) has attracted increasing attention in the machine learning area, particularly for classification and pattern recognition. However, in some cases it is not easy to determine accurately the class to which a given pattern belongs. This thesis involves the construction of an interval pattern classifier using the SVM in association with interval theory, in order to model the separation of a pattern set into distinct classes with precision, aiming to obtain an optimized separation capable of treating the imprecision contained in the initial data and generated during computational processing. The SVM is a linear machine. In order to allow it to solve real-world problems (which are usually nonlinear), it is necessary to treat the pattern set, known as the input set, transforming it from a nonlinear to a linear problem; kernel machines are responsible for this mapping. To create the interval extension of the SVM, for both linear and nonlinear problems, it was necessary to define an interval kernel and to extend Mercer's theorem (which characterizes a kernel function) to interval functions.
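For intuition about what an interval kernel can look like, the following is a minimal sketch (Python/NumPy; the function name and the construction are illustrative only, not the definition or the Mercer-type result developed in the thesis). Each feature is an interval [lo, hi], and the RBF kernel value is enclosed between bounds obtained from the smallest and largest possible per-dimension distances.

    import numpy as np

    def interval_rbf(u_lo, u_hi, v_lo, v_hi, gamma=1.0):
        """Enclose the RBF kernel value for interval-valued patterns u and v.

        Each argument is a 1-D array of per-feature lower/upper bounds.
        Returns (k_min, k_max), an interval containing the kernel value for
        every point selection inside the input intervals.
        """
        # Smallest per-dimension distance: 0 if intervals overlap, else the gap.
        gap = np.maximum(0.0, np.maximum(u_lo - v_hi, v_lo - u_hi))
        # Largest per-dimension distance: farthest pair of endpoints.
        spread = np.maximum(np.abs(u_hi - v_lo), np.abs(v_hi - u_lo))
        d2_min, d2_max = np.sum(gap ** 2), np.sum(spread ** 2)
        # exp(-gamma * d^2) decreases in d^2, so the bounds swap.
        return np.exp(-gamma * d2_max), np.exp(-gamma * d2_min)

    # Two 2-dimensional interval patterns.
    k_lo, k_hi = interval_rbf(np.array([0.0, 1.0]), np.array([0.5, 1.5]),
                              np.array([0.4, 2.0]), np.array([0.6, 2.5]))
    print(k_lo, k_hi)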
Tsang, Wai-Hung. "Scaling up support vector machines /." View abstract or full-text, 2007. http://library.ust.hk/cgi/db/thesis.pl?CSED%202007%20TSANG.
Shilton, Alistair. "Design and training of support vector machines." Connect to thesis, 2006. http://repository.unimelb.edu.au/10187/443.
Nguyen, Van Toi. "Visual interpretation of hand postures for human-machine interaction." Thesis, La Rochelle, 2015. http://www.theses.fr/2015LAROS035/document.
Nowadays, people want to interact with machines more naturally. One of the most powerful communication channels is hand gesture. The vision-based approach has attracted many researchers because it does not require any extra device. One of the key problems to be solved is hand posture recognition in RGB images, because it can be used directly or integrated into multi-cue hand gesture recognition. The main challenges of this problem are illumination differences, cluttered backgrounds, background changes, high intra-class variation, and high inter-class similarity. This thesis proposes a hand posture recognition system consisting of two phases: hand detection and hand posture recognition. In the hand detection step, we employ the Viola-Jones detector with the proposed concept of Internal Haar-like features. The proposed hand detection works in real time on frames captured in real, complex environments and avoids unexpected background effects. The proposed detector outperforms the original Viola-Jones detector using traditional Haar-like features. In the hand posture recognition step, we propose a new hand representation based on a good generic descriptor, the kernel descriptor (KDES). When applying KDES to hand posture recognition, we propose three improvements to make it more robust: adaptive patches, normalization of gradient orientations within patches, and a hand pyramid structure. The improvements make KDES invariant to scale change, make the patch-level features invariant to rotation, and make the final hand representation suitable to hand structure. Based on these improvements, the proposed method obtains better results than the original KDES and a state-of-the-art method.
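As a rough, generic illustration of the two-stage structure described above (detection, then description, then classification), the sketch below classifies already-cropped hand images with a standard histogram-of-oriented-gradients descriptor and a kernel SVM. The HOG descriptor and the random arrays are stand-ins for the thesis's Internal Haar-like detection and KDES representation; all parameter values are illustrative.

    import numpy as np
    from skimage.feature import hog          # generic descriptor, stand-in for KDES
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    crops = rng.random((40, 64, 64))         # dummy grayscale "hand crops"
    labels = np.repeat(np.arange(4), 10)     # four posture classes

    # Describe each crop with gradient-orientation features.
    features = np.array([hog(c, orientations=9, pixels_per_cell=(8, 8),
                             cells_per_block=(2, 2)) for c in crops])

    # Kernel SVM on top of the descriptors.
    clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(features, labels)
    print(clf.predict(features[:5]))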
Karode, Andrew. "Support vector machine classification of network streams using a spectrum kernel encoding." Winston-Salem, NC : Wake Forest University, 2008. http://dspace.zsr.wfu.edu/jspui/handle/10339/38157.
Title from electronic thesis title page. Thesis advisor: William H. Turkett Jr. Includes bibliographical references (p. 61-65).
Duman, Asli. "Multiple Criteria Sorting Methods Based On Support Vector Machines." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612863/index.pdf.
Westin, Emil. "Authorship classification using the Vector Space Model and kernel methods." Thesis, Uppsala universitet, Statistiska institutionen, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-412897.
Luo, Tong. "Scaling up support vector machines with application to plankton recognition." [Tampa, Fla.] : University of South Florida, 2005. http://purl.fcla.edu/fcla/etd/SFE0001154.
Pilkington, Nicholas Charles Victor. "Hyperparameter optimisation for multiple kernels." Thesis, University of Cambridge, 2014. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.648763.
Wang, Zhuang. "Budgeted Online Kernel Classifiers for Large Scale Learning." Diss., Temple University Libraries, 2010. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/89554.
Ph.D.
In an environment where new large-scale problems are emerging in various disciplines and pervasive computing applications are becoming more common, there is an urgent need for machine learning algorithms that can process increasing amounts of data using comparatively smaller computing resources in a computationally efficient way. Previous research has resulted in many successful learning algorithms that scale linearly or even sub-linearly with sample size and dimension, both in runtime and in space. However, linear or even sub-linear space scaling is often not sufficient, because it implies an unbounded growth in memory with sample size. This clearly opens another challenge: how to learn from large, or practically infinite, data sets or data streams using memory-limited resources. Online learning is an important learning scenario in which a potentially unlimited sequence of training examples is presented one example at a time and can only be seen in a single pass. This is opposed to offline learning, where the whole collection of training examples is at hand. The objective is to learn an accurate prediction model from the training stream. Upon repetitively receiving fresh examples from the stream, online learning algorithms typically attempt to update the existing model without retraining. The invention of the Support Vector Machine (SVM) attracted a lot of interest in adapting kernel methods for both offline and online learning. Typical online learning for kernel classifiers consists of observing a stream of training examples and including them as prototypes when specified conditions are met. However, such a procedure can result in an unbounded growth in the number of prototypes. In addition to the danger of exceeding the physical memory, this also implies an unlimited growth in both update and prediction time. To address this issue, in my dissertation I propose a series of kernel-based budgeted online algorithms, which have constant space and constant update and prediction time. This is achieved by maintaining a fixed number of prototypes under the memory budget. Most of the previous work on budgeted online algorithms focuses on the kernel perceptron. In the first part of the thesis, I review and discuss these existing algorithms and then propose a kernel perceptron algorithm which removes the prototype with the minimal impact on classification accuracy to maintain the budget. This is achieved by dual use of cached prototypes for both model representation and validation. In the second part, I propose a family of budgeted online algorithms based on the Passive-Aggressive (PA) style. The budget maintenance is achieved by introducing an additional constraint into the original PA optimization problem. A closed-form solution is derived for the budget maintenance and model update. In the third part, I propose a budgeted online SVM algorithm. The proposed algorithm guarantees that the optimal SVM solution is maintained on all the prototype examples at any time. To maximize accuracy, prototypes are constructed to approximate the data distribution near the decision boundary. In the fourth part, I propose a family of budgeted online algorithms for multi-class classification. The proposed algorithms are based on the recently proposed SVM training algorithm Pegasos. I prove that the gap between the budgeted Pegasos and the optimal SVM solution directly depends on the average model degradation due to budget maintenance. Following this analysis, I studied greedy multi-class budget maintenance methods based on removal, projection, and merging of SVs. In each of these four parts, the proposed algorithms were experimentally evaluated against state-of-the-art competitors. The results show that the proposed budgeted online algorithms outperform the competing algorithms and achieve accuracy comparable to non-budgeted counterparts while being extremely computationally efficient.
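To make the budget idea concrete, here is a toy sketch (Python/NumPy) of a budgeted kernel perceptron: mistakes add prototypes, and once the budget is exceeded the oldest prototype is discarded. Dropping the oldest prototype is the simplest possible maintenance rule and only stands in for the accuracy-impact, projection and merging strategies studied in the dissertation; all names and constants are illustrative.

    import numpy as np

    def rbf(x, z, gamma=0.5):
        return np.exp(-gamma * np.sum((x - z) ** 2))

    def budgeted_kernel_perceptron(X, y, budget=10, gamma=0.5):
        """Single-pass online kernel perceptron keeping at most `budget` prototypes."""
        protos, alphas = [], []                  # support set and its +/-1 weights
        for x, t in zip(X, y):
            score = sum(a * rbf(x, p, gamma) for a, p in zip(alphas, protos))
            if t * score <= 0:                   # mistake: add a prototype
                protos.append(x)
                alphas.append(float(t))
                if len(protos) > budget:         # budget maintenance
                    protos.pop(0)                # drop the oldest prototype
                    alphas.pop(0)
        return protos, alphas

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
    y = np.array([-1] * 100 + [1] * 100)
    protos, alphas = budgeted_kernel_perceptron(X, y)
    print(len(protos), "prototypes kept")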
Temple University--Theses
Zhang, Hang. "Distributed Support Vector Machine With Graphics Processing Units." ScholarWorks@UNO, 2009. http://scholarworks.uno.edu/td/991.
Full textZwald, Laurent. "Performances statistiques d'algorithmes d'apprentissage : "Kernel projection machine" et analyse en composantes principales à noyau." Paris 11, 2005. https://tel.archives-ouvertes.fr/tel-00012011.
This thesis takes place within the framework of statistical learning. It brings contributions to the machine learning community using modern statistical techniques based on progress in the study of empirical processes. The first part investigates the statistical properties of Kernel Principal Component Analysis (KPCA). The behavior of the reconstruction error is studied from a non-asymptotic point of view, and concentration inequalities for the eigenvalues of the kernel matrix are provided. All these results correspond to fast convergence rates. Non-asymptotic results concerning the eigenspaces of KPCA themselves are also provided. A new classification algorithm is designed in the second part: the Kernel Projection Machine (KPM). It is inspired by the Support Vector Machine (SVM). Moreover, it highlights that the selection of a vector space by a dimensionality reduction method such as KPCA provides a suitable regularization. The choice of the vector space involved in the KPM is guided by statistical studies of model selection using penalized minimization of the empirical loss. This regularization procedure is intimately connected with the finite-dimensional projections studied in the statistical work of Birgé and Massart. The performances of the KPM and the SVM are then compared on some data sets. Each topic tackled in this thesis raises new questions.
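As a small numerical illustration of the quantity studied in the first part (the empirical reconstruction error of KPCA, not the concentration bounds themselves), the sketch below centres an RBF kernel matrix on toy data and reports the spectral mass discarded when only d components are kept. Data and parameters are illustrative.

    import numpy as np

    def kpca_reconstruction_error(X, d, gamma=1.0):
        """Empirical KPCA reconstruction error after keeping d components."""
        n = X.shape[0]
        sq = np.sum(X ** 2, axis=1)
        K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))  # RBF Gram matrix
        H = np.eye(n) - np.ones((n, n)) / n                             # centring matrix
        eig = np.sort(np.linalg.eigvalsh(H @ K @ H))[::-1] / n          # empirical spectrum
        return eig[d:].sum()                                            # discarded mass

    X = np.random.default_rng(0).normal(size=(200, 3))
    for d in (1, 5, 20):
        print(d, kpca_reconstruction_error(X, d))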
Vishwanathan, S. V. N. "Kernel Methods Fast Algorithms and real life applications." Thesis, Indian Institute of Science, 2003. http://hdl.handle.net/2005/49.
Full textGilani, Syed Hassan. "Road Sign Recognition based onInvariant Features using SupportVector Machine." Thesis, Högskolan Dalarna, Datateknik, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:du-2760.
Full textChen, Xiaoyi. "Transfer Learning with Kernel Methods." Thesis, Troyes, 2018. http://www.theses.fr/2018TROY0005.
Transfer learning aims to take advantage of source data to help the learning task on related but different target data. This thesis contributes to homogeneous transductive transfer learning, where no labeled target data are available. We progressively relax the constraint on the conditional probability of labels required by covariate shift, under which aligning the marginal probabilities of the source and target observations renders source and target similar. Thus, firstly, a maximum likelihood based approach is proposed. Secondly, the SVM is adapted to transfer learning with an extra MMD-like constraint, where the Maximum Mean Discrepancy (MMD) measures this similarity. Thirdly, KPCA is used to align the data in an RKHS by minimizing the MMD. We further develop the KPCA-based approach so that a linear transformation in the input space is enough for a good and robust alignment in the RKHS. Experimentally, our proposed approaches are very promising.
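Since the Maximum Mean Discrepancy is the similarity measure at the heart of these approaches, a minimal sketch of its biased empirical estimate with an RBF kernel may help; variable names and parameters are illustrative.

    import numpy as np

    def rbf_gram(A, B, gamma=1.0):
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-gamma * d2)

    def mmd2(source, target, gamma=1.0):
        """Biased empirical estimate of the squared Maximum Mean Discrepancy."""
        kss = rbf_gram(source, source, gamma).mean()
        ktt = rbf_gram(target, target, gamma).mean()
        kst = rbf_gram(source, target, gamma).mean()
        return kss + ktt - 2 * kst

    rng = np.random.default_rng(0)
    src = rng.normal(0.0, 1.0, (300, 2))
    tgt = rng.normal(0.5, 1.0, (300, 2))    # shifted marginal distribution
    print(mmd2(src, tgt))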
Garg, Aditie. "Designing Reactive Power Control Rules for Smart Inverters using Machine Learning." Thesis, Virginia Tech, 2018. http://hdl.handle.net/10919/83558.
Full textMaster of Science
Linton, Thomas. "Forecasting hourly electricity consumption for sets of households using machine learning algorithms." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-186592.
To address inefficiency, waste, and the negative consequences of electricity production, companies and government agencies want to see behavioural change among household consumers. To create behavioural change, consumers need better feedback on their electricity consumption. The current feedback in a monthly or quarterly bill gives the consumer almost no useful information about how their behaviour relates to their consumption. Smart meters are now ubiquitous in developed countries and can provide a wealth of information about household consumption, but this data is mainly used as a basis for billing and not as a tool to help consumers reduce their consumption. One component required to deliver innovative feedback mechanisms is the ability to forecast electricity consumption at the household scale. The work presented in this thesis is an evaluation of the accuracy of a selection of kernel-based machine learning methods for forecasting the aggregate consumption of sets of households of different sizes. The work in this thesis shows that k-Nearest Neighbour Regression and Gaussian Process Regression are the most accurate methods within the constraints of the problem. In addition to accuracy, an evaluation is made of the advantages, disadvantages, and performance of each machine learning method.
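For orientation, here is a minimal sketch of the kind of hourly forecaster evaluated in the thesis, using scikit-learn's k-nearest-neighbour regressor on lagged consumption values; the synthetic load series and every parameter choice are illustrative only.

    import numpy as np
    from sklearn.neighbors import KNeighborsRegressor

    rng = np.random.default_rng(0)
    hours = np.arange(24 * 90)                       # 90 days of hourly data
    load = 1.0 + 0.5 * np.sin(2 * np.pi * hours / 24) + 0.1 * rng.normal(size=hours.size)

    lags = 24                                        # features: the previous 24 hours
    X = np.array([load[t - lags:t] for t in range(lags, load.size)])
    y = load[lags:]

    split = int(0.8 * len(y))
    model = KNeighborsRegressor(n_neighbors=10).fit(X[:split], y[:split])
    pred = model.predict(X[split:])
    print("MAE:", np.mean(np.abs(pred - y[split:])))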
Kingravi, Hassan. "Reduced-set models for improving the training and execution speed of kernel methods." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/51799.
Full textDíaz, Jorge Luis Guevara. "Modelos de aprendizado supervisionado usando métodos kernel, conjuntos fuzzy e medidas de probabilidade." Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-03122015-155546/.
This thesis proposes a methodology based on kernel methods, probability measures and fuzzy sets, to analyze datasets whose individual observations are themselves sets of points instead of individual points. Fuzzy sets and probability measures are used to model the observations, and kernel methods to analyze the data. Fuzzy sets are used when an observation contains imprecise, vague or linguistic values, whereas probability measures are used when an observation is given as a set of multidimensional points in a D-dimensional Euclidean space. Using this methodology, it is possible to address a wide range of machine learning problems for such datasets. In particular, this work presents data description models for observations modeled by probability measures. These description models are applied to the group anomaly detection task. This work also proposes a new class of kernels, the kernels on fuzzy sets, which are reproducing kernels able to map fuzzy sets to a geometric feature space. These kernels are similarity measures between fuzzy sets. We present everything from basic definitions to applications of these kernels in machine learning problems such as supervised classification and a kernel two-sample test. Potential applications of these kernels include machine learning and pattern recognition tasks over fuzzy data, and computational tasks requiring the estimation of a similarity measure between fuzzy sets.
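One simple example of a kernel between whole observations, when each observation is a set of points treated as an empirical probability measure, is the mean-map kernel: the average pairwise kernel value between the two sets. The sketch below illustrates only that general idea, not the fuzzy-set kernels defined in the thesis; names and data are illustrative.

    import numpy as np

    def mean_map_kernel(A, B, gamma=1.0):
        """Kernel between two sets of points: the mean of pairwise RBF values."""
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-gamma * d2).mean()

    rng = np.random.default_rng(0)
    group1 = rng.normal(0, 1, (50, 3))    # one observation = a set of 50 points
    group2 = rng.normal(0, 1, (60, 3))    # similar group
    group3 = rng.normal(3, 1, (50, 3))    # anomalous group, far from the others
    print(mean_map_kernel(group1, group2), mean_map_kernel(group1, group3))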
Franchi, Gianni. "Machine learning spatial appliquée aux images multivariées et multimodales." Thesis, Paris Sciences et Lettres (ComUE), 2016. http://www.theses.fr/2016PSLEM071/document.
This thesis focuses on multivariate spatial statistics and machine learning applied to hyperspectral and multimodal images in remote sensing and scanning electron microscopy (SEM). The following topics are considered. Fusion of images: SEM allows us to acquire images from a given sample using different modalities. The purpose of these studies is to analyze the benefit of information fusion for improving multimodal SEM image acquisition. We have modeled and implemented various information-fusion techniques for images, based in particular on spatial regression theory, and they have been assessed on various datasets. Spatial classification of multivariate image pixels: we have proposed a novel approach for pixel classification in multi/hyper-spectral images. The aim of this technique is to represent and efficiently describe the spatial/spectral features of multivariate images. These multi-scale deep descriptors aim at representing the content of the image while considering invariances related to the texture and to its geometric transformations. Spatial dimensionality reduction: we have developed a technique to extract a feature space using morphological principal component analysis. Indeed, in order to take into account the spatial and structural information, we used mathematical morphology operators.
Ashrafi, Parivash. "Predicting the absorption rate of chemicals through mammalian skin using machine learning algorithms." Thesis, University of Hertfordshire, 2016. http://hdl.handle.net/2299/17310.
Full textAbo, Al Ahad George, and Abbas Salami. "Machine Learning for Market Prediction : Soft Margin Classifiers for Predicting the Sign of Return on Financial Assets." Thesis, Linköpings universitet, Produktionsekonomi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-151459.
Full textWehmann, Adam. "A Spatial-Temporal Contextual Kernel Method for Generating High-Quality Land-Cover Time Series." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1398866264.
Full textSangnier, Maxime. "Outils d'apprentissage automatique pour la reconnaissance de signaux temporels." Rouen, 2015. http://www.theses.fr/2015ROUES064.
The work presented here tackles two different subjects within the wide field of building numerical systems to recognize temporal signals, mainly from limited observations. The first is automatic feature extraction. For this purpose, we present a column generation algorithm which is able to jointly learn a discriminative Time-Frequency (TF) transform, cast as a filter bank, with a support vector machine. This algorithm extends the state of the art on multiple kernel learning by non-linearly combining an infinite number of kernels. The second direction of research is the way to handle the temporal nature of the signals. While our first contribution pointed out the importance of correctly choosing the time resolution to get a discriminative TF representation, the role of time is clearly highlighted in early recognition of signals. Our second contribution lies in this field and introduces a methodological framework for early detection of a special event in a time series, that is, detecting an event before it ends. This framework builds upon multiple instance learning and similarity spaces by fitting them to the particular case of temporal sequences. Furthermore, our early detector comes with an efficient learning algorithm and theoretical guarantees on its generalization ability. Our two contributions have been empirically evaluated on brain-computer interface signals, soundscapes and human action movies.
Louradour, Jérôme. "Noyaux de séquences pour la vérification du locuteur par machines à vecteurs de support." Toulouse 3, 2007. http://www.theses.fr/2007TOU30004.
This thesis is focused on the application of Support Vector Machines (SVM) to automatic text-independent speaker verification. This speech processing task consists in determining whether a speech utterance was pronounced or not by a target speaker, without any constraint on the speech content. In order to apply a kernel method such as the SVM to this binary classification of variable-length sequences, an appropriate approach is to use kernels that can handle sequences, rather than acoustic vectors within sequences. As explained in the thesis report, both theoretical and practical reasons justify the effort of searching for such kernels. The present study concentrates on exploring several aspects of kernels for sequences, and on applying them to a very large database speaker verification problem under realistic recording conditions. After reviewing emerging methods for designing sequence kernels and presenting them in a unified framework, we propose a new family of such kernels: the Feature Space Normalized Sequence (FSNS) kernels. These kernels are a generalization of the GLDS kernel, which is now well known for its efficiency in speaker verification. A theoretical and algorithmic study of FSNS kernels is carried out. In particular, several forms are introduced and justified, and a sparse greedy matrix approximation method is used to suggest an efficient and suitable implementation of FSNS kernels for speaker verification.
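For intuition about kernels that act on whole variable-length sequences rather than on single acoustic vectors, here is a tiny sketch in the spirit of the GLDS kernel mentioned above: each frame is expanded with a degree-2 polynomial map, the expansions are averaged over the utterance, and the kernel is the dot product of the averaged vectors. It deliberately omits the feature-space normalisation that gives the FSNS kernels their name; everything here is illustrative.

    import numpy as np
    from itertools import combinations_with_replacement

    def poly2_expand(frame):
        """Degree-2 monomial expansion of one acoustic frame."""
        terms = [1.0, *frame]
        terms += [frame[i] * frame[j]
                  for i, j in combinations_with_replacement(range(len(frame)), 2)]
        return np.array(terms)

    def sequence_kernel(seq_a, seq_b):
        """Dot product of per-sequence averages of expanded frames."""
        phi_a = np.mean([poly2_expand(f) for f in seq_a], axis=0)
        phi_b = np.mean([poly2_expand(f) for f in seq_b], axis=0)
        return float(phi_a @ phi_b)

    rng = np.random.default_rng(0)
    utterance1 = rng.normal(size=(120, 5))   # 120 frames of 5-dimensional features
    utterance2 = rng.normal(size=(80, 5))    # an utterance of a different length
    print(sequence_kernel(utterance1, utterance2))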
Behúň, Kamil. "Příznaky z videa pro klasifikaci." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2013. http://www.nusl.cz/ntk/nusl-236367.
Full textAbdallah, Fahed. "Noyaux reproduisants et critères de contraste pour l'élaboration de détecteurs à structure imposée." Troyes, 2004. http://www.theses.fr/2004TROY0002.
In this thesis, we consider statistical learning machines which try to infer rules from a given set of observations in order to make correct predictions on unseen examples. Building upon the theory of reproducing kernels, we develop a generalized linear detector in transformed spaces of high dimension, without explicitly performing any computation in these spaces. The method is based on the optimization of the second-order criterion best suited to the problem to solve. In fact, theoretical results show that second-order criteria are able, under some mild conditions, to guarantee the best solution in the sense of classical detection theories. Achieving good generalisation performance with a receiver requires matching its complexity to the amount of available training data. This problem, known as the curse of dimensionality, has been studied theoretically by Vapnik and Chervonenkis. In this dissertation, we propose complexity control procedures in order to improve the performance of these receivers when few training data are available. Simulation results on real and synthetic data clearly show the competitiveness of our approach compared with other state-of-the-art kernel methods such as Support Vector Machines.
Bhadra, Sahely. "Learning Robust Support Vector Machine Classifiers With Uncertain Observations." Thesis, 2012. http://etd.iisc.ernet.in/handle/2005/2475.
Full textWang, Chea-Wei, and 王麒瑋. "Index of Kernel Functions for Support Vector Machine." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/43487331964499753219.
Full text國立成功大學
工業管理科學系碩博士班
91
A Support Vector Machine (SVM) is a novel type of learning machine based on the statistical learning framework. It has become an increasingly popular tool for machine learning tasks such as classification, regression or novelty detection. To increase the learning accuracy of an SVM, the kernel plays an important role. This research aims at finding an index of kernels for support vector machines. The simulation data used are generated from Dirichlet and normal distributions. A real experiment for choosing kernel functions is also provided.
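A minimal sketch of the kind of comparison such an index supports, cross-validating a few standard kernels with scikit-learn on synthetic data (the dataset and settings are illustrative, not the Dirichlet/normal simulation design of the thesis):

    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=300, n_features=10, random_state=0)
    for kernel in ("linear", "poly", "rbf", "sigmoid"):
        acc = cross_val_score(SVC(kernel=kernel), X, y, cv=5).mean()
        print(f"{kernel:8s} mean CV accuracy: {acc:.3f}")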
Huang, Ruei-Yao, and 黃瑞堯. "Video and Image Applications Based on Kernel Support Vector Machine (SVM)." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/67090107280042729427.
Full text國立成功大學
電機工程學系碩博士班
98
We use a classification method based on the kernel support vector machine (kernel SVM), which can be applied to various types of data. We use the kernel SVM to extract video highlights of sports and to classify textile grades. Unlike the original classification method, we optimize the parameters and the features with a genetic algorithm. The kernel SVM is composed of a training mode and an analysis mode. In the training mode, we adopt the kernel SVM to train the classification function. In the analysis mode, we use the classification function to generate the classification result. We use video and audio features without predefining any highlight rules for the events. The precision of highlight extraction by the kernel SVM reaches about 81%, while that of textile grade classification is approximately 83%. The experimental results show the proposed method can extract video highlights of sports, and it can also be applied to textile grade classification.
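The thesis tunes parameters and features with a genetic algorithm; the sketch below uses a plain grid search over (C, gamma) as a much simpler stand-in, just to show where such tuning sits between the training mode and the analysis mode. The synthetic features stand in for the audio/visual shot features used in the thesis.

    from sklearn.datasets import make_classification
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=400, n_features=12, random_state=0)

    search = GridSearchCV(SVC(kernel="rbf"),
                          {"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1.0]},
                          cv=5)
    search.fit(X, y)                       # "training mode": learn the classification function
    print(search.best_params_, search.best_score_)
    print(search.predict(X[:5]))           # "analysis mode": classify new samples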
"Image representation, processing and analysis by support vector regression." 2001. http://library.cuhk.edu.hk/record=b5890679.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2001.
Includes bibliographical references (leaves 380-383).
Text in English; abstracts in English and Chinese.
Chow Kai Tik = Zhi yuan shi liang hui gui fa zhi ying xiang biao shi shi ji qi ying xiang chu li yu fen xi / Zhou Qidi.
Abstract in English
Abstract in Chinese
Acknowledgement
Content
List of figures
Chapter Chapter 1 --- Introduction --- p.1-11
Chapter 1.1 --- Introduction --- p.2
Chapter 1.2 --- Road Map --- p.9
Chapter Chapter 2 --- Review of Support Vector Machine --- p.12-124
Chapter 2.1 --- Structural Risk Minimization (SRM) --- p.13
Chapter 2.1.1 --- Introduction
Chapter 2.1.2 --- Structural Risk Minimization
Chapter 2.2 --- Review of Support Vector Machine --- p.21
Chapter 2.2.1 --- Review of Support Vector Classification
Chapter 2.2.2 --- Review of Support Vector Regression
Chapter 2.2.3 --- Review of Support Vector Clustering
Chapter 2.2.4 --- Summary of Support Vector Machines
Chapter 2.3 --- Implementation of Support Vector Machines --- p.60
Chapter 2.3.1 --- Kernel Adatron for Support Vector Classification (KA-SVC)
Chapter 2.3.2 --- Kernel Adatron for Support Vector Regression (KA-SVR)
Chapter 2.3.3 --- Sequential Minimal Optimization for Support Vector Classification (SMO-SVC)
Chapter 2.3.4 --- Sequential Minimal Optimization for Support Vector Regression (SMO-SVR)
Chapter 2.3.5 --- Lagrangian Support Vector Classification (LSVC)
Chapter 2.3.6 --- Lagrangian Support Vector Regression (LSVR)
Chapter 2.4 --- Applications of Support Vector Machines --- p.117
Chapter 2.4.1 --- Applications of Support Vector Classification
Chapter 2.4.2 --- Applications of Support Vector Regression
Chapter Chapter 3 --- Image Representation by Support Vector Regression --- p.125-183
Chapter 3.1 --- Introduction of SVR Representation --- p.116
Chapter 3.1.1 --- Image Representation by SVR
Chapter 3.1.2 --- Implicit Smoothing of SVR representation
Chapter 3.1.3 --- Different Insensitivity, C value, Kernel and Kernel Parameters
Chapter 3.2 --- Variation on Encoding Method [Training Process] --- p.154
Chapter 3.2.1 --- Training SVR with Missing Data
Chapter 3.2.2 --- Training SVR with Image Blocks
Chapter 3.2.3 --- Training SVR with Other Variations
Chapter 3.3 --- Variation on Decoding Method [Testing or Reconstruction Process] --- p.171
Chapter 3.3.1 --- Reconstruction with Different Portion of Support Vectors
Chapter 3.3.2 --- Reconstruction with Different Support Vector Locations and Lagrange Multiplier Values
Chapter 3.3.3 --- Reconstruction with Different Kernels
Chapter 3.4 --- Feature Extraction --- p.177
Chapter 3.4.1 --- Features on Simple Shape
Chapter 3.4.2 --- Invariant of Support Vector Features
Chapter Chapter 4 --- Mathematical and Physical Properties of SVR Representation --- p.184-243
Chapter 4.1 --- Introduction of RBF Kernel --- p.185
Chapter 4.2 --- Mathematical Properties: Integral Properties --- p.187
Chapter 4.2.1 --- Integration of an SVR Image
Chapter 4.2.2 --- Fourier Transform of SVR Image (Hankel Transform of Kernel)
Chapter 4.2.3 --- Cross Correlation between SVR Images
Chapter 4.2.4 --- Convolution of SVR Images
Chapter 4.3 --- Mathematical Properties: Differential Properties --- p.219
Chapter 4.3.1 --- Review of Differential Geometry
Chapter 4.3.2 --- Gradient of SVR Image
Chapter 4.3.3 --- Laplacian of SVR Image
Chapter 4.4 --- Physical Properties --- p.228
Chapter 4.4.1 --- Transformation between Reconstructed Image and Lagrange Multipliers
Chapter 4.4.2 --- Relation between Original Image and SVR Approximation
Chapter 4.5 --- Appendix --- p.234
Chapter 4.5.1 --- Hankel Transform for Common Functions
Chapter 4.5.2 --- Hankel Transform for RBF
Chapter 4.5.3 --- Integration of Gaussian
Chapter 4.5.4 --- Chain Rules for Differential Geometry
Chapter 4.5.5 --- Derivation of Gradient of RBF
Chapter 4.5.6 --- Derivation of Laplacian of RBF
Chapter Chapter 5 --- Image Processing in SVR Representation --- p.244-293
Chapter 5.1 --- Introduction --- p.245
Chapter 5.2 --- Geometric Transformation --- p.241
Chapter 5.2.1 --- Brightness, Contrast and Image Addition
Chapter 5.2.2 --- Interpolation or Resampling
Chapter 5.2.3 --- Translation and Rotation
Chapter 5.2.4 --- Affine Transformation
Chapter 5.2.5 --- Transformation with Given Optical Flow
Chapter 5.2.6 --- A Brief Summary
Chapter 5.3 --- SVR Image Filtering --- p.261
Chapter 5.3.1 --- Discrete Filtering in SVR Representation
Chapter 5.3.2 --- Continuous Filtering in SVR Representation
Chapter Chapter 6 --- Image Analysis in SVR Representation --- p.294-370
Chapter 6.1 --- Contour Extraction --- p.295
Chapter 6.1.1 --- Contour Tracing by Equi-potential Line [using Gradient]
Chapter 6.1.2 --- Contour Smoothing and Contour Feature Extraction
Chapter 6.2 --- Registration --- p.304
Chapter 6.2.1 --- Registration using Cross Correlation
Chapter 6.2.2 --- Registration using Phase Correlation [Phase Shift in Fourier Transform]
Chapter 6.2.3 --- Analysis of the Two Methods for Registration in SVR Domain
Chapter 6.3 --- Segmentation --- p.347
Chapter 6.3.1 --- Segmentation by Contour Tracing
Chapter 6.3.2 --- Segmentation by Thresholding on Smoothed or Sharpened SVR Image
Chapter 6.3.3 --- Segmentation by Thresholding on SVR Approximation
Chapter 6.4 --- Appendix --- p.368
Chapter Chapter 7 --- Conclusion --- p.371-379
Chapter 7.1 --- Conclusion and contribution --- p.372
Chapter 7.2 --- Future work --- p.378
Reference --- p.380-383
Asharaf, S. "Efficient Kernel Methods For Large Scale Classification." Thesis, 2007. http://hdl.handle.net/2005/1076.
Full textMiao, Chuxiong. "A support vector machine model for pipe crack size classification." Master's thesis, 2009. http://hdl.handle.net/10048/400.
Title from pdf file main screen (viewed on July 16, 2009). "A thesis submitted to the Faculty of Graduate Studies and Research in partial fulfillment of the requirements for the degree of Master of Science, Department of Mechanical Engineering, University of Alberta." Includes bibliographical references.
"Sparse learning under regularization framework." Thesis, 2011. http://library.cuhk.edu.hk/record=b6075111.
The first part of this thesis develops a novel online learning framework to solve group lasso and multi-task feature selection. To the best of our knowledge, the proposed online learning framework is the first for the corresponding models. The main advantages of the online learning algorithms are that (1) they can work on applications where training data appear sequentially, so the training procedure can be started at any time; and (2) they can handle data of any size with any number of features. The efficiency of the algorithms is attained because we derive closed-form solutions to update the weights of the corresponding models (see the sketch after this abstract). At each iteration, the online learning algorithms need only O(d) time complexity and memory cost for group lasso, and O(d × Q) for multi-task feature selection, where d is the number of dimensions and Q is the number of tasks. Moreover, we provide a theoretical analysis of the average regret of the online learning algorithms, which also guarantees their convergence rate. In addition, we extend the online learning framework to solve several related models which yield sparser solutions.
The second part of this thesis addresses a general scenario of semi-supervised learning for the binary classification problem, where the unlabeled data may be a mixture of data relevant and irrelevant to the target binary classification task. Without specifying the relatedness in the unlabeled data, we develop a novel maximum margin classifier, named the tri-class support vector machine (3C-SVM), to seek an inductive rule that can separate these data into three categories: -1, +1, or 0. This is achieved by adopting a novel min loss function and following the maximum entropy principle. For the implementation, we approximate the problem and solve it by a standard concave-convex procedure (CCCP). The approach is very efficient and makes it possible to handle large-scale datasets.
The third part of this thesis focuses on multiple kernel learning (MKL) and addresses the insufficiency of the L1-MKL and Lp-MKL models. We propose a generalized MKL (GMKL) model by introducing an elastic-net-type constraint on the kernel weights. More specifically, it is an MKL model with a constraint on a linear combination of the L1-norm and the squared L2-norm of the kernel weights, used to seek the optimal kernel combination weights. Therefore, previous MKL problems based on the L1-norm or the L2-norm constraints can be regarded as special cases. Moreover, our GMKL enjoys a favorable sparsity property in the solution and also facilitates the grouping effect. In addition, the optimization of our GMKL is a convex optimization problem, where a local solution is the globally optimal solution. We further derive the level method to solve the optimization problem efficiently.
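For a flavour of what a closed-form per-iteration update for group lasso can look like (as promised in the first part above), here is a minimal FOBOS-style sketch: a gradient step on the squared loss followed by group soft-thresholding. It illustrates the O(d) per-step cost but is not the specific algorithm or regret analysis derived in the thesis; all names and constants are illustrative.

    import numpy as np

    def online_group_lasso_step(w, x, y, groups, eta=0.1, lam=0.05):
        """One online update: squared-loss gradient step, then group shrinkage."""
        grad = (w @ x - y) * x                    # gradient of 0.5 * (w.x - y)^2
        w = w - eta * grad
        for g in groups:                          # closed-form group soft-threshold
            norm = np.linalg.norm(w[g])
            w[g] = 0.0 if norm == 0 else max(0.0, 1 - eta * lam / norm) * w[g]
        return w

    rng = np.random.default_rng(0)
    groups = [np.array([0, 1, 2]), np.array([3, 4, 5])]
    w_true = np.array([1.0, -2.0, 0.5, 0.0, 0.0, 0.0])   # second group is irrelevant
    w = np.zeros(6)
    for _ in range(2000):                                 # simulated data stream
        x = rng.normal(size=6)
        y = w_true @ x + 0.01 * rng.normal()
        w = online_group_lasso_step(w, x, y, groups)
    print(np.round(w, 2))                                 # second group shrunk towards zero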
Yang, Haiqin.
Advisers: Kuo Chin Irwin King; Michael Rung Tsong Lyu.
Source: Dissertation Abstracts International, Volume: 73-04, Section: B, page: .
Thesis (Ph.D.)--Chinese University of Hong Kong, 2011.
Includes bibliographical references (leaves 152-173).
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [201-] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Abstract also in Chinese.
Baek, Seung Hyun. "Kernel-Based Data Mining Approach with Variable Selection for Nonlinear High-Dimensional Data." 2010. http://trace.tennessee.edu/utk_graddiss/676.
Full textSentelle, Christopher. "Practical Implementations of the Active Set Method for Support Vector Machine Training with Semi-definite Kernels." Doctoral diss., 2014. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/6178.
Full textPh.D.
Doctorate
Electrical Engineering and Computer Science
Engineering and Computer Science
Electrical Engineering
Phoungphol, Piyaphol. "A Classification Framework for Imbalanced Data." 2013. http://scholarworks.gsu.edu/cs_diss/78.
Full textEvgeniou, Theodoros, and Massimiliano Pontil. "A Note on the Generalization Performance of Kernel Classifiers with Margin." 2000. http://hdl.handle.net/1721.1/7169.
Full textJuozenaite, Ineta. "Application of machine learning techniques for solving real world business problems : the case study - target marketing of insurance policies." Master's thesis, 2018. http://hdl.handle.net/10362/32410.
The concept of machine learning has been around for decades, but now it is becoming more and more popular, not only in business but everywhere else as well. This is because of the increased amount of data, cheaper data storage, and more powerful and affordable computational processing. The complexity of the business environment leads companies to use data-driven decision making to work more efficiently. The most common machine learning methods, such as Logistic Regression, Decision Tree, Artificial Neural Network and Support Vector Machine, are reviewed in this work together with their applications. The insurance industry has one of the most competitive business environments and, as a result, the use of machine learning techniques is growing in this industry. In this work, the above-mentioned machine learning methods are used to build a predictive model for a target marketing campaign of caravan insurance policies to achieve greater profitability. Information Gain and Chi-squared metrics, stepwise regression, the R package "Boruta", Spearman correlation analysis, distribution graphs by target variable, as well as basic statistics of all variables, are used for feature selection. To solve this real-world business problem, the best final predictive model is a Multilayer Perceptron with the backpropagation learning algorithm, one hidden layer and 12 hidden neurons.
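A minimal sketch of the final model described above, a scikit-learn multilayer perceptron with one hidden layer of 12 neurons trained by backpropagation; the synthetic, imbalanced data merely stands in for the prepared caravan-insurance features, and all other settings are illustrative.

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # Stand-in for the target-marketing dataset (binary target: buys the policy or not).
    X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9, 0.1],
                               random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    model = MLPClassifier(hidden_layer_sizes=(12,), max_iter=1000, random_state=0)
    model.fit(X_tr, y_tr)
    print("held-out accuracy:", model.score(X_te, y_te))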