Academic literature on the topic 'Graph regularization'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Graph regularization.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Graph regularization"

1

Yang, Han, Kaili Ma, and James Cheng. "Rethinking Graph Regularization for Graph Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 5 (May 18, 2021): 4573–81. http://dx.doi.org/10.1609/aaai.v35i5.16586.

Abstract:
The graph Laplacian regularization term is usually used in semi-supervised representation learning to provide graph structure information for a model f(X). However, with the recent popularity of graph neural networks (GNNs), directly encoding the graph structure A into a model, i.e., f(A, X), has become the more common approach. We show that graph Laplacian regularization brings little-to-no benefit to existing GNNs, and propose a simple but non-trivial variant of graph Laplacian regularization, called Propagation-regularization (P-reg), to boost the performance of existing GNN models. We provide formal analyses to show that P-reg not only infuses extra information (not captured by traditional graph Laplacian regularization) into GNNs, but also has capacity equivalent to an infinite-depth graph convolutional network. We demonstrate that P-reg can effectively boost the performance of existing GNN models on both node-level and graph-level tasks across many different datasets.
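For reference, the graph Laplacian regularization term the abstract contrasts with is the classical smoothness penalty tr(F^T L F) = 1/2 * sum_ij A_ij * ||f_i - f_j||^2. A minimal numpy sketch of that baseline term (illustrative only; this is not the paper's P-reg):

```python
import numpy as np

def laplacian_regularizer(A, F):
    """Classical graph Laplacian smoothness penalty tr(F^T L F).

    A : (n, n) symmetric adjacency matrix of the graph
    F : (n, d) model outputs f(X), one row per node
    """
    L = np.diag(A.sum(axis=1)) - A      # unnormalized Laplacian L = D - A
    return float(np.trace(F.T @ L @ F))

# Tiny example: one edge between nodes 0 and 1, with outputs 1 and 3.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
F = np.array([[1.0],
              [3.0]])
print(laplacian_regularizer(A, F))  # 0.5 * (1*(1-3)^2 + 1*(3-1)^2) = 4.0
```

Penalizing this term pushes connected nodes toward similar outputs, which is exactly the structural information modern GNNs instead encode directly through f(A, X).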
2

Dal Col, Alcebiades, and Fabiano Petronetto. "Graph regularization multidimensional projection." Pattern Recognition 129 (September 2022): 108690. http://dx.doi.org/10.1016/j.patcog.2022.108690.

3

Chen, Binghui, Pengyu Li, Zhaoyi Yan, Biao Wang, and Lei Zhang. "Deep Metric Learning with Graph Consistency." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 2 (May 18, 2021): 982–90. http://dx.doi.org/10.1609/aaai.v35i2.16182.

Abstract:
Deep Metric Learning (DML) has become increasingly attractive and is widely applied in many computer vision tasks, in which a discriminative embedding is required such that image features belonging to the same class are gathered together and those belonging to different classes are pushed apart. Most existing works learn this discriminative embedding by either devising powerful pair-based loss functions or hard-sample mining strategies. In this paper, however, we start from another perspective and propose a Deep Consistent Graph Metric Learning (CGML) framework to enhance the discrimination of the learned embedding. It is mainly achieved by rethinking the conventional distance constraints as a graph regularization and then introducing a Graph Consistency regularization term, which optimizes the feature distribution from a global graph perspective. Inspired by the characteristics of our defined 'Discriminative Graph', which regards DML from another novel perspective, the Graph Consistency regularization term encourages sub-graphs randomly sampled from the training set to be consistent. We show that CGML indeed serves as an efficient technique for learning a discriminative embedding and is applicable to various popular metric objectives, e.g., Triplet, N-Pair and Binomial losses. This paper empirically and experimentally demonstrates the effectiveness of our graph regularization idea, achieving competitive results on the popular CUB, CARS, Stanford Online Products and In-Shop datasets.
4

Huang, Xiayuan, Xiangli Nie, and Hong Qiao. "PolSAR Image Feature Extraction via Co-Regularized Graph Embedding." Remote Sensing 12, no. 11 (May 28, 2020): 1738. http://dx.doi.org/10.3390/rs12111738.

Abstract:
Dimensionality reduction (DR) methods based on graph embedding are widely used for feature extraction. For these methods, the weighted graph plays a vital role in the process of DR because it characterizes the structure of the data. Moreover, the similarity measurement is a crucial factor for constructing a weighted graph. The Wishart distance between covariance matrices and the Euclidean distance between polarimetric features are two important similarity measurements for polarimetric synthetic aperture radar (PolSAR) image classification. To obtain satisfactory PolSAR image classification performance, this paper proposes a co-regularized graph embedding (CRGE) method for PolSAR image feature extraction that combines the two distances. First, two weighted graphs are constructed based on the two distances to represent the local structure of the data. Specifically, neighbouring samples are sought within a local patch to decrease computation cost and exploit spatial information. Next, the DR model is constructed based on the two weighted graphs and co-regularization. The co-regularization aims to minimize the dissimilarity of the low-dimensional features corresponding to the two weighted graphs. We employ two types of co-regularization and propose the corresponding algorithms. Ultimately, the obtained low-dimensional features are used for PolSAR image classification. Experiments on three PolSAR datasets show that co-regularized graph embedding can enhance the performance of PolSAR image classification.
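Both weighted graphs in this pipeline are neighborhood graphs built from a pairwise-distance matrix (Wishart or Euclidean). A generic numpy sketch of that shared construction step, with Gaussian edge weights (function name, kernel choice and parameters are illustrative; the paper additionally restricts neighbors to a local spatial patch, which is omitted here):

```python
import numpy as np

def gaussian_knn_graph(D, k, sigma=1.0):
    """Weighted k-nearest-neighbor graph from a pairwise-distance matrix.

    D : (n, n) symmetric distance matrix (e.g. Wishart distances between
        covariance matrices, or Euclidean distances between feature vectors)
    Keeps each sample's k nearest neighbors, weights the edges with a
    Gaussian kernel, and symmetrizes the result.
    """
    n = D.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(D[i])[1:k + 1]   # skip index i itself (distance 0)
        W[i, nbrs] = np.exp(-D[i, nbrs] ** 2 / (2.0 * sigma ** 2))
    return np.maximum(W, W.T)              # symmetrize by taking the max

# Three collinear samples: each connects only to its nearest neighbor.
D = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 1.0],
              [2.0, 1.0, 0.0]])
W = gaussian_knn_graph(D, k=1)
print(W[0, 1] > 0.0, W[0, 2] == 0.0)  # True True
```

Running this once per distance measure yields the two graphs whose low-dimensional embeddings the co-regularization then couples.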
5

Liu, Fei, Sounak Chakraborty, Fan Li, Yan Liu, and Aurelie C. Lozano. "Bayesian Regularization via Graph Laplacian." Bayesian Analysis 9, no. 2 (June 2014): 449–74. http://dx.doi.org/10.1214/14-ba860.

6

Bo, Deyu, Binbin Hu, Xiao Wang, Zhiqiang Zhang, Chuan Shi, and Jun Zhou. "Regularizing Graph Neural Networks via Consistency-Diversity Graph Augmentations." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 4 (June 28, 2022): 3913–21. http://dx.doi.org/10.1609/aaai.v36i4.20307.

Abstract:
Despite the remarkable performance of graph neural networks (GNNs) in semi-supervised learning, they are criticized for not making full use of unlabeled data and for over-fitting. Recently, graph data augmentation, used to improve both the accuracy and generalization of GNNs, has received considerable attention. However, one fundamental question is how to evaluate the quality of graph augmentations in principle. In this paper, we propose two metrics, Consistency and Diversity, from the aspects of augmentation correctness and generalization. Moreover, we discover that existing augmentations fall into a dilemma between these two metrics. Can we find a graph augmentation satisfying both consistency and diversity? A well-informed answer can help us understand the mechanism behind graph augmentation and improve the performance of GNNs. To tackle this challenge, we analyze two representative semi-supervised learning algorithms: label propagation (LP) and consistency regularization (CR). We find that LP utilizes the prior knowledge of graphs to improve consistency, and CR adopts variable augmentations to promote diversity. Based on this discovery, we treat neighbors as augmentations to capture the prior knowledge embodying the homophily assumption, which promises high consistency of augmentations. To further promote diversity, we randomly replace the immediate neighbors of each node with its remote neighbors. After that, a neighbor-constrained regularization is proposed to enforce the predictions of the augmented neighbors to be consistent with each other. Extensive experiments on five real-world graphs validate the superiority of our method in improving the accuracy and generalization of GNNs.
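Read one way, the neighbor-constrained regularization described above penalizes disagreement among the predictions of a node's augmented (possibly remote) neighbors. A toy numpy sketch of such a consistency penalty (the function and its exact form are illustrative assumptions, not the paper's loss):

```python
import numpy as np

def neighbor_consistency_loss(P, neighborhoods):
    """Mean squared deviation of neighbor predictions from their consensus.

    P             : (n, c) matrix of per-node class predictions (e.g. softmax)
    neighborhoods : list of index lists; neighborhoods[i] holds the nodes
                    used as augmentations of node i
    """
    total, count = 0.0, 0
    for ids in neighborhoods:
        Q = P[ids]                    # predictions of the augmented neighbors
        center = Q.mean(axis=0)       # consensus prediction
        total += ((Q - center) ** 2).sum()
        count += len(ids)
    return total / count

P = np.array([[0.9, 0.1],
              [0.9, 0.1],
              [0.2, 0.8]])
# Agreeing neighbors incur zero loss; disagreeing ones are penalized.
print(neighbor_consistency_loss(P, [[0, 1]]))        # 0.0
print(neighbor_consistency_loss(P, [[0, 2]]) > 0.0)  # True
```

Swapping immediate neighbors for remote ones changes which rows of P land in each neighborhood, which is how the method trades consistency against diversity.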
7

Le, Tuan M. V., and Hady W. Lauw. "Semantic Visualization with Neighborhood Graph Regularization." Journal of Artificial Intelligence Research 55 (April 28, 2016): 1091–133. http://dx.doi.org/10.1613/jair.4983.

Abstract:
Visualization of high-dimensional data, such as text documents, is useful to map out the similarities among various data points. In the high-dimensional space, documents are commonly represented as bags of words, with dimensionality equal to the vocabulary size. Classical approaches to document visualization directly reduce this into visualizable two or three dimensions. Recent approaches consider an intermediate representation in topic space, between word space and visualization space, which preserves the semantics by topic modeling. While aiming for a good fit between the model parameters and the observed data, previous approaches have not considered the local consistency among data instances. We consider the problem of semantic visualization by jointly modeling topics and visualization on the intrinsic document manifold, modeled using a neighborhood graph. Each document has both a topic distribution and visualization coordinate. Specifically, we propose an unsupervised probabilistic model, called SEMAFORE, which aims to preserve the manifold in the lower-dimensional spaces through a neighborhood regularization framework designed for the semantic visualization task. To validate the efficacy of SEMAFORE, our comprehensive experiments on a number of real-life text datasets of news articles and Web pages show that the proposed methods outperform the state-of-the-art baselines on objective evaluation metrics.
8

Long, Mingsheng, Jianmin Wang, Guiguang Ding, Dou Shen, and Qiang Yang. "Transfer Learning with Graph Co-Regularization." Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (September 20, 2021): 1033–39. http://dx.doi.org/10.1609/aaai.v26i1.8290.

Abstract:
Transfer learning proves to be effective for leveraging labeled data in the source domain to build an accurate classifier in the target domain. The basic assumption behind transfer learning is that the involved domains share some common latent factors. Previous methods usually explore these latent factors by optimizing two separate objective functions, i.e., either maximizing the empirical likelihood, or preserving the geometric structure. Actually, these two objective functions are complementary to each other and optimizing them simultaneously can make the solution smoother and further improve the accuracy of the final model. In this paper, we propose a novel approach called Graph co-regularized Transfer Learning (GTL) for this purpose, which integrates the two objective functions seamlessly into one unified optimization problem. Thereafter, we present an iterative algorithm for the optimization problem with rigorous analysis on convergence and complexity. Our empirical study on two open data sets validates that GTL can consistently improve the classification accuracy compared to the state-of-the-art transfer learning methods.
9

Long, Mingsheng, Jianmin Wang, Guiguang Ding, Dou Shen, and Qiang Yang. "Transfer Learning with Graph Co-Regularization." IEEE Transactions on Knowledge and Data Engineering 26, no. 7 (July 2014): 1805–18. http://dx.doi.org/10.1109/tkde.2013.97.

10

Lezoray, Olivier, Abderrahim Elmoataz, and Sébastien Bougleux. "Graph regularization for color image processing." Computer Vision and Image Understanding 107, no. 1-2 (July 2007): 38–55. http://dx.doi.org/10.1016/j.cviu.2006.11.015.


Dissertations / Theses on the topic "Graph regularization"

1

Yekollu, Srikar. "Graph Based Regularization of Large Covariance Matrices." The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1237243768.

2

Gkirtzou, Aikaterini. "Sparsity regularization and graph-based representation in medical imaging." PhD thesis, Ecole Centrale Paris, 2013. http://tel.archives-ouvertes.fr/tel-00960163.

Abstract:
Medical images depict anatomy or function. Their high dimensionality and non-linear nature make their analysis a challenging problem. In this thesis, we address medical image analysis from the viewpoint of statistical learning theory. First, we examine regularization methods for analyzing MRI data. In this direction, we introduce a novel regularization method, the k-support regularized Support Vector Machine. This algorithm extends the ℓ1-regularized SVM to a mixed norm of the ℓ1 and ℓ2 norms. We evaluate our algorithm on a neuromuscular disease classification task. Second, we approach the problem of graph representation and comparison for analyzing medical images. Graphs are a technique to represent data with inherent structure. Despite significant progress in graph kernels, existing graph kernels focus on either unlabeled or discretely labeled graphs, while efficient and expressive representation and comparison of graphs with continuous high-dimensional vector labels remains an open research problem. We introduce a novel method, the pyramid quantized Weisfeiler-Lehman graph representation, to tackle the graph comparison problem for continuous vector-labeled graphs. Our algorithm considers statistics of subtree patterns based on the Weisfeiler-Lehman algorithm and uses a pyramid quantization strategy to determine a logarithmic number of discrete labelings. We evaluate our algorithm on two different tasks with real datasets. Overall, as graphs are fundamental mathematical objects and regularization methods are used to control ill-posed problems, both proposed algorithms are potentially applicable to a wide range of domains.
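The Weisfeiler-Lehman machinery underlying the second contribution iteratively relabels each node by compressing its own label together with the sorted multiset of its neighbors' labels. A minimal sketch for discretely labeled graphs (the thesis's pyramid quantization step for continuous vector labels is not reproduced here):

```python
def wl_relabel(adj, labels, iterations=1):
    """One-dimensional Weisfeiler-Lehman relabeling.

    adj    : dict mapping each node to an iterable of its neighbors
    labels : dict mapping each node to a hashable initial label
    Returns the node labels after the given number of WL iterations,
    compressed into integers via a shared signature dictionary.
    """
    labels = dict(labels)
    compress = {}
    for _ in range(iterations):
        new = {}
        for v in adj:
            signature = (labels[v], tuple(sorted(labels[u] for u in adj[v])))
            new[v] = compress.setdefault(signature, len(compress))
        labels = new
    return labels

# Path graph a-b-c with identical starting labels: after one iteration the
# two endpoints share a label while the middle node gets a different one.
adj = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b']}
out = wl_relabel(adj, {'a': 0, 'b': 0, 'c': 0})
print(out['a'] == out['c'], out['a'] != out['b'])  # True True
```

Histograms of these compressed labels across iterations give the subtree-pattern statistics that WL-style graph kernels compare.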
3

Sousa, Celso Andre Rodrigues de. "Constrained graph-based semi-supervised learning with higher order regularization." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-08122017-102557/.

Abstract:
Graph-based semi-supervised learning (SSL) algorithms have been widely studied in the last few years. Most of these algorithms were designed from unconstrained optimization problems using a Laplacian regularizer term as a smoothness functional, in an attempt to reflect the intrinsic geometric structure of the data's marginal distribution. Although a number of recent research papers still focus on unconstrained methods for graph-based SSL, a recent statistical analysis showed that many of these algorithms may be unstable on transductive regression. Therefore, we focus on providing new constrained methods for graph-based SSL. We begin by analyzing the regularization framework of existing unconstrained methods. Then, we incorporate two normalization constraints into the optimization problems of three of these methods. We show that the proposed optimization problems have closed-form solutions. By generalizing one of these constraints to any distribution, we provide generalized methods for constrained graph-based SSL. The proposed methods have a more flexible regularization framework than the corresponding unconstrained methods. More precisely, our methods can deal with any graph Laplacian and use higher-order regularization, which is effective on general SSL tasks. To show the effectiveness of the proposed methods, we provide comprehensive experimental analyses. Specifically, our experiments are subdivided into two parts. In the first part, we evaluate existing graph-based SSL algorithms on time series data to find their weaknesses. In the second part, we evaluate the proposed constrained methods against six state-of-the-art graph-based SSL algorithms on benchmark data sets. Since the widely used best-case analysis may hide useful information concerning the SSL algorithms' performance with respect to parameter selection, we used recently proposed empirical evaluation models to evaluate our results. Our results show that our methods outperform the competing methods on most parameter settings and graph construction methods. However, we found a few experimental settings in which our methods showed poor performance. To facilitate the reproduction of our results, the source codes, data sets, and experimental results are freely available.
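For context, the unconstrained Laplacian-regularized baseline that such constrained methods build on admits a familiar closed-form solution, F = (I + alpha*L)^(-1) Y. A small numpy sketch of that baseline (one common variant among several; this is not the thesis's constrained formulation):

```python
import numpy as np

def laplacian_ssl(A, Y, alpha=1.0):
    """Closed-form Laplacian-regularized label inference.

    Minimizes ||F - Y||_F^2 + alpha * tr(F^T L F), whose solution is
    F = (I + alpha * L)^{-1} Y.
    A : (n, n) symmetric adjacency; Y : (n, c) labels (zero rows = unlabeled).
    """
    n = A.shape[0]
    L = np.diag(A.sum(axis=1)) - A     # unnormalized graph Laplacian
    return np.linalg.solve(np.eye(n) + alpha * L, Y)

# Path graph 0-1-2; endpoints carry opposite labels, the middle is unlabeled.
A = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
Y = np.array([[1.0, 0.0],
              [0.0, 0.0],
              [0.0, 1.0]])
F = laplacian_ssl(A, Y)
# By symmetry, the unlabeled middle node scores both classes equally.
print(np.isclose(F[1, 0], F[1, 1]))  # True
```

The constrained methods of the thesis add normalization constraints on F to this kind of objective, trading the simple linear solve for a more stable estimator.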
4

Gao, Xi. "Graph-based Regularization in Machine Learning: Discovering Driver Modules in Biological Networks." VCU Scholars Compass, 2015. http://scholarscompass.vcu.edu/etd/3942.

Abstract:
Curiosity of human nature drives us to explore the origins of what makes each of us different. From ancient legends and mythology, Mendel's law, Punnett square to modern genetic research, we carry on this old but eternal question. Thanks to technological revolution, today's scientists try to answer this question using easily measurable gene expression and other profiling data. However, the exploration can easily get lost in the data of growing volume, dimension, noise and complexity. This dissertation is aimed at developing new machine learning methods that take data from different classes as input, augment them with knowledge of feature relationships, and train classification models that serve two goals: 1) class prediction for previously unseen samples; 2) knowledge discovery of the underlying causes of class differences. Application of our methods in genetic studies can help scientist take advantage of existing biological networks, generate diagnosis with higher accuracy, and discover the driver networks behind the differences. We proposed three new graph-based regularization algorithms. Graph Connectivity Constrained AdaBoost algorithm combines a connectivity module, a deletion function, and a model retraining procedure with the AdaBoost classifier. Graph-regularized Linear Programming Support Vector Machine integrates penalty term based on submodular graph cut function into linear classifier's objective function. Proximal Graph LogisticBoost adds lasso and graph-based penalties into logistic risk function of an ensemble classifier. Results of tests of our models on simulated biological datasets show that the proposed methods are able to produce accurate, sparse classifiers, and can help discover true genetic differences between phenotypes.
5

Lyons, Corey Francis. "The Γ0 Graph of a p-Regular Partition." University of Akron / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=akron1271082086.

6

Zeng, Jianfeng. "Time Series Forecasting using Temporal Regularized Matrix Factorization and Its Application to Traffic Speed Datasets." University of Cincinnati / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1617109307510099.

7

Kilinc, Ismail Ozsel. "Graph-based Latent Embedding, Annotation and Representation Learning in Neural Networks for Semi-supervised and Unsupervised Settings." Scholar Commons, 2017. https://scholarcommons.usf.edu/etd/7415.

Abstract:
Machine learning has been immensely successful in supervised learning with outstanding examples in major industrial applications such as voice and image recognition. Following these developments, the most recent research has now begun to focus primarily on algorithms which can exploit very large sets of unlabeled examples to reduce the amount of manually labeled data required for existing models to perform well. In this dissertation, we propose graph-based latent embedding/annotation/representation learning techniques in neural networks tailored for semi-supervised and unsupervised learning problems. Specifically, we propose a novel regularization technique called Graph-based Activity Regularization (GAR) and a novel output layer modification called Auto-clustering Output Layer (ACOL) which can be used separately or collaboratively to develop scalable and efficient learning frameworks for semi-supervised and unsupervised settings. First, singularly using the GAR technique, we develop a framework providing an effective and scalable graph-based solution for semi-supervised settings in which there exists a large number of observations but a small subset with ground-truth labels. The proposed approach is natural for the classification framework on neural networks as it requires no additional task calculating the reconstruction error (as in autoencoder based methods) or implementing zero-sum game mechanism (as in adversarial training based methods). We demonstrate that GAR effectively and accurately propagates the available labels to unlabeled examples. Our results show comparable performance with state-of-the-art generative approaches for this setting using an easier-to-train framework. Second, we explore a different type of semi-supervised setting where a coarse level of labeling is available for all the observations but the model has to learn a fine, deeper level of latent annotations for each one. 
Problems in this setting are likely to be encountered in many domains such as text categorization, protein function prediction, and image classification, as well as in exploratory scientific studies such as medical and genomics research. We consider this setting as simultaneously performing supervised classification (per the available coarse labels) and unsupervised clustering (within each of the coarse labels), and propose a novel framework combining GAR with ACOL, which enables the network to perform concurrent classification and clustering. We demonstrate how the coarse label supervision impacts performance and how the classification task actually helps propagate useful clustering information between sub-classes. Comparative tests on the most popular image datasets rigorously demonstrate the effectiveness and competitiveness of the proposed approach. The third and final setup builds on the prior framework to unlock fully unsupervised learning, where we propose to substitute real, yet unavailable, parent-class information with pseudo class labels. In this novel unsupervised clustering approach, the network can exploit hidden information indirectly introduced through a pseudo classification objective. We train an ACOL network through this pseudo supervision together with an unsupervised objective based on GAR, and ultimately obtain a k-means-friendly latent representation. Furthermore, we demonstrate how the chosen transformation type impacts performance and helps propagate the latent information that is useful in revealing unknown clusters. Our results show state-of-the-art performance for unsupervised clustering tasks on the MNIST, SVHN and USPS datasets, with the highest accuracies reported to date in the literature.
8

Zapién, Arreola Karina. "Algorithme de chemin de régularisation pour l'apprentissage statistique." Thesis, Rouen, INSA, 2009. http://www.theses.fr/2009ISAM0001/document.

Abstract:
The selection of a proper model is an essential task in statistical learning. In general, for a given learning task, several classes of models are considered, each corresponding to a different degree of "complexity". In this situation, the model selection procedure becomes a search for the optimal complexity, allowing us to estimate a model that assures good generalization. This model selection problem can be summarized as the calculation of one or more hyperparameters defining the model complexity, in contrast to the parameters that specify a model within the chosen complexity class. The usual approach to determining these hyperparameters is a "grid search": given a set of possible values, the generalization error of the best model is estimated for each of these values. This thesis focuses on an alternative approach consisting in calculating the complete set of possible solutions for all hyperparameter values. This is what is called the regularization path. It can be shown that for the problems we are interested in, parametric quadratic programs (PQPs), the corresponding regularization path is piecewise linear. Moreover, its calculation is no more complex than calculating a single PQP solution. This thesis is organized in three parts. The first introduces the general setting of a learning problem under the Support Vector Machine (SVM) framework, together with the theory and algorithms that allow us to find a solution. The second part deals with supervised learning problems for classification and ranking within the SVM framework. It is shown that the regularization path of these problems is piecewise linear, and alternative proofs to that of Rosset [Ross 07b] are given via the subdifferential. These results lead to the corresponding algorithms to solve the mentioned supervised problems. The third part deals with semi-supervised and then unsupervised learning problems. For semi-supervised learning, a sparsity constraint is introduced along with the corresponding regularization path algorithm. Graph-based dimensionality reduction methods are used for unsupervised learning problems. Our main contribution is a novel algorithm that chooses the number of nearest neighbors in an adaptive and appropriate way, in contrast to classical approaches based on a fixed number of neighbors.
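The piecewise-linear structure exploited here is easiest to see in the orthonormal-design lasso, where the whole regularization path is given in closed form by soft-thresholding (an analogy only: the thesis treats SVM-type parametric quadratic programs, not the lasso):

```python
import numpy as np

def lasso_path_orthonormal(b, lambdas):
    """Exact lasso regularization path for an orthonormal design X^T X = I.

    The solution decouples into coordinate-wise soft-thresholding of the
    least-squares coefficients b = X^T y:
        beta_j(lam) = sign(b_j) * max(|b_j| - lam, 0),
    which is piecewise linear in lam -- so the full path costs little more
    than a single fit, the phenomenon the thesis generalizes to PQPs.
    """
    lam = np.asarray(lambdas, dtype=float)[:, None]          # (m, 1)
    return np.sign(b) * np.maximum(np.abs(b) - lam, 0.0)     # (m, p)

b = np.array([3.0, -1.0, 0.5])
path = lasso_path_orthonormal(b, [0.0, 0.5, 2.0])
print(np.allclose(path[0], b))                # lam=0 recovers least squares
print(np.allclose(path[2], [1.0, 0.0, 0.0]))  # lam=2 keeps only the strongest
```

Between the breakpoints where a coefficient enters or leaves the active set, every coordinate moves linearly in lambda, which is exactly what makes tracking the entire path tractable.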
9

Hafiene, Yosra. "Continuum limits of evolution and variational problems on graphs." Thesis, Normandie, 2018. http://www.theses.fr/2018NORMC254/document.

Abstract:
The non-local p-Laplacian operator, the associated evolution equation and variational regularization, governed by a given kernel, have applications in various areas of science and engineering. In particular, they are modern tools for massive data processing (including signals, images, geometry), and machine learning tasks such as classification. In practice, however, these models are implemented in discrete form (in space and time, or in space for variational regularization) as a numerical approximation to a continuous problem, where the kernel is replaced by an adjacency matrix of a graph. Yet, few results on the consistency of these discretization are available. In particular it is largely open to determine when do the solutions of either the evolution equation or the variational problem of graph-based tasks converge (in an appropriate sense), as the number of vertices increases, to a well-defined object in the continuum setting, and if yes, at which rate. In this manuscript, we lay the foundations to address these questions.Combining tools from graph theory, convex analysis, nonlinear semigroup theory and evolution equa- tions, we give a rigorous interpretation to the continuous limit of the discrete nonlocal p-Laplacian evolution and variational problems on graphs. More specifically, we consider a sequence of (determin- istic) graphs converging to a so-called limit object known as the graphon. If the continuous p-Laplacian evolution and variational problems are properly discretized on this graph sequence, we prove that the solutions of the sequence of discrete problems converge to the solution of the continuous problem governed by the graphon, as the number of graph vertices grows to infinity. Along the way, we provide a consistency/error bounds. In turn, this allows to establish the convergence rates for different graph models. In particular, we highlight the role of the graphon geometry/regularity. 
For random graph se- quences, using sharp deviation inequalities, we deliver nonasymptotic convergence rates in probability and exhibit the different regimes depending on p, the regularity of the graphon and the initial data
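As a rough illustration of the discretization described in this abstract, the sketch below implements a nonlocal p-Laplacian on a weighted graph and evolves it with an explicit Euler scheme. The weight matrix, time step, and value of p are illustrative assumptions, not the numerical scheme analyzed in the thesis.

```python
import numpy as np

def nonlocal_p_laplacian(W, u, p=2.0):
    """Discrete nonlocal p-Laplacian on a weighted graph:
    (Delta_p u)_i = sum_j W[i, j] * |u_j - u_i|^(p-2) * (u_j - u_i)."""
    diff = u[None, :] - u[:, None]              # diff[i, j] = u_j - u_i
    return (W * np.abs(diff) ** (p - 2) * diff).sum(axis=1)

def evolve(W, u0, p=2.0, dt=0.1, steps=50):
    """Explicit-Euler discretization in time of du/dt = Delta_p u."""
    u = u0.astype(float).copy()
    for _ in range(steps):
        u = u + dt * nonlocal_p_laplacian(W, u, p)
    return u

# Two nodes joined by a unit-weight edge: the evolution is plain graph
# diffusion for p = 2 and relaxes both values toward their common mean.
W = np.array([[0.0, 1.0], [1.0, 0.0]])
u = evolve(W, np.array([0.0, 1.0]))
```

For a symmetric W the update conserves the mean of u, so the two-node example converges to 0.5 on both nodes; replacing W by samples of a graphon kernel on a finer and finer vertex grid mimics the continuum-limit setup the thesis studies.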
APA, Harvard, Vancouver, ISO, and other styles
10

Richard, Émile. "Regularization methods for prediction in dynamic graphs and e-marketing applications." PhD thesis, École normale supérieure de Cachan - ENS Cachan, 2012. http://tel.archives-ouvertes.fr/tel-00906066.

Full text
Abstract:
Predicting connections among objects, based either on a noisy observation or on a sequence of observations, is a problem of interest for numerous applications, such as recommender systems for e-commerce and social networks, as well as in systems biology for inferring interaction patterns among proteins. This work formulates the graph prediction problem, in both dynamic and static scenarios, as a regularization problem. In the static scenario, we encode the mixture of two different kinds of structural assumptions in a convex penalty involving the L1 norm and the trace norm. In the dynamic setting, we assume that certain graph features, such as the node degrees, follow a vector autoregressive model, and we propose to use this information to improve the accuracy of prediction. The solutions of the optimization problems are studied from both an algorithmic and a statistical point of view. Empirical evidence on synthetic and real data is presented, showing the benefit of the suggested methods.
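The static-scenario penalty combining the L1 norm and the trace norm, together with the proximal operators typically used to optimize such penalties, can be sketched as follows. The helper names and parameter choices are generic illustrations, not code from the thesis.

```python
import numpy as np

def soft_threshold(X, t):
    """Proximal operator of t * ||X||_1 (entrywise soft-thresholding),
    which promotes sparsity in the predicted adjacency matrix."""
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def svd_shrink(X, t):
    """Proximal operator of t * ||X||_* (singular-value shrinkage),
    which promotes low rank."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt

def penalty(S, lam1, lam2):
    """Mixed convex penalty lam1 * ||S||_1 + lam2 * ||S||_* from the abstract."""
    return lam1 * np.abs(S).sum() + lam2 * np.linalg.svd(S, compute_uv=False).sum()
```

In a proximal-splitting scheme, alternating these two operators against a data-fit gradient step yields estimates that are simultaneously sparse and low-rank, matching the two structural assumptions the abstract mixes.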
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Graph regularization"

1

Dai, Xin-Yu, Chuan Cheng, Shujian Huang, and Jiajun Chen. "Sentiment Classification with Graph Sparsity Regularization." In Computational Linguistics and Intelligent Text Processing, 140–51. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-18117-2_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Tong, Alexander, David van Dijk, Jay S. Stanley III, Matthew Amodio, Kristina Yim, Rebecca Muhle, James Noonan, Guy Wolf, and Smita Krishnaswamy. "Interpretable Neuron Structuring with Graph Spectral Regularization." In Lecture Notes in Computer Science, 509–21. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-44584-3_40.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Zhang, Fan, and Edwin R. Hancock. "Riemannian Graph Diffusion for DT-MRI Regularization." In Medical Image Computing and Computer-Assisted Intervention – MICCAI 2006, 234–42. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11866763_29.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Hein, Matthias. "Uniform Convergence of Adaptive Graph-Based Regularization." In Learning Theory, 50–64. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11776420_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Zheng, Haixia, and Horace H. S. Ip. "Graph-Based Label Propagation with Dissimilarity Regularization." In Lecture Notes in Computer Science, 47–58. Cham: Springer International Publishing, 2013. http://dx.doi.org/10.1007/978-3-319-03731-8_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Candemir, Sema, and Yusuf Sinan Akgül. "Adaptive Regularization Parameter for Graph Cut Segmentation." In Lecture Notes in Computer Science, 117–26. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-13772-3_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Bougleux, Sébastien, and Abderrahim Elmoataz. "Image Smoothing and Segmentation by Graph Regularization." In Advances in Visual Computing, 745–52. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11595755_95.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Tam, Zhi-Rui, Yi-Lun Wu, and Hong-Han Shuai. "Improving Entity Disambiguation Using Knowledge Graph Regularization." In Advances in Knowledge Discovery and Data Mining, 341–53. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-05933-9_27.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Marques, Manuel D. P. Monteiro. "Regularization and Graph Approximation of a Discontinuous Evolution." In Differential Inclusions in Nonsmooth Mechanical Problems, 27–44. Basel: Birkhäuser Basel, 1993. http://dx.doi.org/10.1007/978-3-0348-7614-8_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Minervini, Pasquale, Claudia d’Amato, Nicola Fanizzi, and Floriana Esposito. "Graph-Based Regularization for Transductive Class-Membership Prediction." In Uncertainty Reasoning for the Semantic Web III, 202–18. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-13413-0_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Graph regularization"

1

Luo, Xin, Ye Yuan, and Di Wu. "Adaptive Regularization-Incorporated Latent Factor Analysis." In 2020 IEEE International Conference on Knowledge Graph (ICKG). IEEE, 2020. http://dx.doi.org/10.1109/icbk50248.2020.00074.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Sacca, Claudio, Michelangelo Diligenti, and Marco Gori. "Graph and Manifold Co-regularization." In 2013 12th International Conference on Machine Learning and Applications (ICMLA). IEEE, 2013. http://dx.doi.org/10.1109/icmla.2013.58.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Rey, Samuel, and Antonio G. Marques. "Robust Graph-Filter Identification with Graph Denoising Regularization." In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2021. http://dx.doi.org/10.1109/icassp39728.2021.9414909.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Zhang, F., and E. R. Hancock. "Tensor MRI Regularization via Graph Diffusion." In British Machine Vision Conference 2006. British Machine Vision Association, 2006. http://dx.doi.org/10.5244/c.20.61.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Kheradmand, Amin, and Peyman Milanfar. "Motion deblurring with graph Laplacian regularization." In IS&T/SPIE Electronic Imaging, edited by Nitin Sampat, Radka Tezaur, and Dietmar Wüller. SPIE, 2015. http://dx.doi.org/10.1117/12.2084585.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Yang, Maosheng, Mario Coutino, Elvin Isufi, and Geert Leus. "Node Varying Regularization for Graph Signals." In 2020 28th European Signal Processing Conference (EUSIPCO). IEEE, 2021. http://dx.doi.org/10.23919/eusipco47968.2020.9287807.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Zhang, Qiang, and Zhenjiang Miao. "Subspace Clustering via Sparse Graph Regularization." In 2017 4th IAPR Asian Conference on Pattern Recognition (ACPR). IEEE, 2017. http://dx.doi.org/10.1109/acpr.2017.94.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Tsuda, Koji. "Entire regularization paths for graph data." In the 24th international conference. New York, New York, USA: ACM Press, 2007. http://dx.doi.org/10.1145/1273496.1273612.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Yu, Tianshu, and Ruisheng Wang. "Graph matching with low-rank regularization." In 2016 IEEE Winter Conference on Applications of Computer Vision (WACV). IEEE, 2016. http://dx.doi.org/10.1109/wacv.2016.7477730.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Xue, Jiaqi, and Bin Zhang. "Adaptive Projected Clustering with Graph Regularization." In 2022 26th International Conference on Pattern Recognition (ICPR). IEEE, 2022. http://dx.doi.org/10.1109/icpr56361.2022.9956370.

Full text
APA, Harvard, Vancouver, ISO, and other styles