Academic literature on the topic 'Graph regularization'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Graph regularization.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Graph regularization"
Yang, Han, Kaili Ma, and James Cheng. "Rethinking Graph Regularization for Graph Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 5 (May 18, 2021): 4573–81. http://dx.doi.org/10.1609/aaai.v35i5.16586.
Dal Col, Alcebiades, and Fabiano Petronetto. "Graph regularization multidimensional projection." Pattern Recognition 129 (September 2022): 108690. http://dx.doi.org/10.1016/j.patcog.2022.108690.
Chen, Binghui, Pengyu Li, Zhaoyi Yan, Biao Wang, and Lei Zhang. "Deep Metric Learning with Graph Consistency." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 2 (May 18, 2021): 982–90. http://dx.doi.org/10.1609/aaai.v35i2.16182.
Huang, Xiayuan, Xiangli Nie, and Hong Qiao. "PolSAR Image Feature Extraction via Co-Regularized Graph Embedding." Remote Sensing 12, no. 11 (May 28, 2020): 1738. http://dx.doi.org/10.3390/rs12111738.
Liu, Fei, Sounak Chakraborty, Fan Li, Yan Liu, and Aurelie C. Lozano. "Bayesian Regularization via Graph Laplacian." Bayesian Analysis 9, no. 2 (June 2014): 449–74. http://dx.doi.org/10.1214/14-ba860.
Bo, Deyu, Binbin Hu, Xiao Wang, Zhiqiang Zhang, Chuan Shi, and Jun Zhou. "Regularizing Graph Neural Networks via Consistency-Diversity Graph Augmentations." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 4 (June 28, 2022): 3913–21. http://dx.doi.org/10.1609/aaai.v36i4.20307.
Le, Tuan M. V., and Hady W. Lauw. "Semantic Visualization with Neighborhood Graph Regularization." Journal of Artificial Intelligence Research 55 (April 28, 2016): 1091–133. http://dx.doi.org/10.1613/jair.4983.
Long, Mingsheng, Jianmin Wang, Guiguang Ding, Dou Shen, and Qiang Yang. "Transfer Learning with Graph Co-Regularization." Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (September 20, 2021): 1033–39. http://dx.doi.org/10.1609/aaai.v26i1.8290.
Long, Mingsheng, Jianmin Wang, Guiguang Ding, Dou Shen, and Qiang Yang. "Transfer Learning with Graph Co-Regularization." IEEE Transactions on Knowledge and Data Engineering 26, no. 7 (July 2014): 1805–18. http://dx.doi.org/10.1109/tkde.2013.97.
Lezoray, Olivier, Abderrahim Elmoataz, and Sébastien Bougleux. "Graph regularization for color image processing." Computer Vision and Image Understanding 107, no. 1-2 (July 2007): 38–55. http://dx.doi.org/10.1016/j.cviu.2006.11.015.
Dissertations / Theses on the topic "Graph regularization"
Yekollu, Srikar. "Graph Based Regularization of Large Covariance Matrices." The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1237243768.
Gkirtzou, Aikaterini. "Sparsity regularization and graph-based representation in medical imaging." PhD thesis, Ecole Centrale Paris, 2013. http://tel.archives-ouvertes.fr/tel-00960163.
Sousa, Celso Andre Rodrigues de. "Constrained graph-based semi-supervised learning with higher order regularization." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-08122017-102557/.
Full textAlgoritmos de aprendizado semissupervisionado baseado em grafos foram amplamente estudados nos últimos anos. A maioria desses algoritmos foi projetada a partir de problemas de otimização sem restrições usando um termo regularizador Laplaciano como funcional de suavidade numa tentativa de refletir a estrutura geométrica intrínsica da distribuição marginal dos dados. Apesar de vários artigos científicos recentes continuarem focando em métodos sem restrição para aprendizado semissupervisionado em grafos, uma análise estatística recente mostrou que muitos desses algoritmos podem ser instáveis em regressão transdutiva. Logo, nós focamos em propor novos métodos com restrições para aprendizado semissupervisionado em grafos. Nós começamos analisando o framework de regularização de métodos sem restrições existentes. Então, nós incorporamos duas restrições de normalização no problema de otimização de três desses métodos. Mostramos que os problemas de otimização propostos possuem solução de forma fechada. Ao generalizar uma dessas restrições para qualquer distribuição, provemos métodos generalizados para aprendizado semissupervisionado restrito baseado em grafos. Os métodos propostos possuem um framework de regularização mais flexível que os métodos sem restrições correspondentes. Mais precisamente, nossos métodos podem lidar com qualquer Laplaciano em grafos e usar regularização de ordem elevada, a qual é efetiva em tarefas de aprendizado semissupervisionado em geral. Para mostrar a efetividade dos métodos propostos, nós provemos análises experimentais robustas. Especificamente, nossos experimentos são subdivididos em duas partes. Na primeira parte, avaliamos algoritmos de aprendizado semissupervisionado em grafos existentes em dados de séries temporais para encontrar possíveis fraquezas desses métodos. Na segunda parte, avaliamos os métodos restritos propostos contra seis algoritmos de aprendizado semissupervisionado baseado em grafos do estado da arte em conjuntos de dados benchmark. Como a amplamente usada análise de melhor caso pode esconder informações relevantes sobre o desempenho dos algoritmos de aprendizado semissupervisionado com respeito à seleção de parâmetros, nós usamos modelos de avaliação empírica recentemente propostos para avaliar os nossos resultados. Nossos resultados mostram que os nossos métodos superam os demais métodos na maioria das configurações de parâmetro e métodos de construção de grafos. Entretanto, encontramos algumas configurações experimentais nas quais nossos métodos mostraram baixo desempenho. Para facilitar a reprodução dos nossos resultados, os códigos fonte, conjuntos de dados e resultados experimentais estão disponíveis gratuitamente.
Gao, Xi. "Graph-based Regularization in Machine Learning: Discovering Driver Modules in Biological Networks." VCU Scholars Compass, 2015. http://scholarscompass.vcu.edu/etd/3942.
Lyons, Corey Francis. "The Γ0 Graph of a p-Regular Partition." University of Akron / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=akron1271082086.
Zeng, Jianfeng. "Time Series Forecasting using Temporal Regularized Matrix Factorization and Its Application to Traffic Speed Datasets." University of Cincinnati / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1617109307510099.
Kilinc, Ismail Ozsel. "Graph-based Latent Embedding, Annotation and Representation Learning in Neural Networks for Semi-supervised and Unsupervised Settings." Scholar Commons, 2017. https://scholarcommons.usf.edu/etd/7415.
Full textZapién, Arreola Karina. "Algorithme de chemin de régularisation pour l'apprentissage statistique." Thesis, Rouen, INSA, 2009. http://www.theses.fr/2009ISAM0001/document.
Full textThe selection of a proper model is an essential task in statistical learning. In general, for a given learning task, a set of parameters has to be chosen, each parameter corresponds to a different degree of “complexity”. In this situation, the model selection procedure becomes a search for the optimal “complexity”, allowing us to estimate a model that assures a good generalization. This model selection problem can be summarized as the calculation of one or more hyperparameters defining the model complexity in contrast to the parameters that allow to specify a model in the chosen complexity class. The usual approach to determine these parameters is to use a “grid search”. Given a set of possible values, the generalization error for the best model is estimated for each of these values. This thesis is focused in an alternative approach consisting in calculating the complete set of possible solution for all hyperparameter values. This is what is called the regularization path. It can be shown that for the problems we are interested in, parametric quadratic programming (PQP), the corresponding regularization path is piece wise linear. Moreover, its calculation is no more complex than calculating a single PQP solution. This thesis is organized in three chapters, the first one introduces the general setting of a learning problem under the Support Vector Machines’ (SVM) framework together with the theory and algorithms that allow us to find a solution. The second part deals with supervised learning problems for classification and ranking using the SVM framework. It is shown that the regularization path of these problems is piecewise linear and alternative proofs to the one of Rosset [Ross 07b] are given via the subdifferential. These results lead to the corresponding algorithms to solve the mentioned supervised problems. The third part deals with semi-supervised learning problems followed by unsupervised learning problems. For the semi-supervised learning a sparsity constraint is introduced along with the corresponding regularization path algorithm. Graph-based dimensionality reduction methods are used for unsupervised learning problems. Our main contribution is a novel algorithm that allows to choose the number of nearest neighbors in an adaptive and appropriate way contrary to classical approaches based on a fix number of neighbors
Hafiene, Yosra. "Continuum limits of evolution and variational problems on graphs." Thesis, Normandie, 2018. http://www.theses.fr/2018NORMC254/document.
Full textThe non-local p-Laplacian operator, the associated evolution equation and variational regularization, governed by a given kernel, have applications in various areas of science and engineering. In particular, they are modern tools for massive data processing (including signals, images, geometry), and machine learning tasks such as classification. In practice, however, these models are implemented in discrete form (in space and time, or in space for variational regularization) as a numerical approximation to a continuous problem, where the kernel is replaced by an adjacency matrix of a graph. Yet, few results on the consistency of these discretization are available. In particular it is largely open to determine when do the solutions of either the evolution equation or the variational problem of graph-based tasks converge (in an appropriate sense), as the number of vertices increases, to a well-defined object in the continuum setting, and if yes, at which rate. In this manuscript, we lay the foundations to address these questions.Combining tools from graph theory, convex analysis, nonlinear semigroup theory and evolution equa- tions, we give a rigorous interpretation to the continuous limit of the discrete nonlocal p-Laplacian evolution and variational problems on graphs. More specifically, we consider a sequence of (determin- istic) graphs converging to a so-called limit object known as the graphon. If the continuous p-Laplacian evolution and variational problems are properly discretized on this graph sequence, we prove that the solutions of the sequence of discrete problems converge to the solution of the continuous problem governed by the graphon, as the number of graph vertices grows to infinity. Along the way, we provide a consistency/error bounds. In turn, this allows to establish the convergence rates for different graph models. In particular, we highlight the role of the graphon geometry/regularity. For random graph se- quences, using sharp deviation inequalities, we deliver nonasymptotic convergence rates in probability and exhibit the different regimes depending on p, the regularity of the graphon and the initial data
Richard, Émile. "Regularization methods for prediction in dynamic graphs and e-marketing applications." PhD thesis, École normale supérieure de Cachan - ENS Cachan, 2012. http://tel.archives-ouvertes.fr/tel-00906066.
Book chapters on the topic "Graph regularization"
Dai, Xin-Yu, Chuan Cheng, Shujian Huang, and Jiajun Chen. "Sentiment Classification with Graph Sparsity Regularization." In Computational Linguistics and Intelligent Text Processing, 140–51. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-18117-2_11.
Tong, Alexander, David van Dijk, Jay S. Stanley III, Matthew Amodio, Kristina Yim, Rebecca Muhle, James Noonan, Guy Wolf, and Smita Krishnaswamy. "Interpretable Neuron Structuring with Graph Spectral Regularization." In Lecture Notes in Computer Science, 509–21. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-44584-3_40.
Zhang, Fan, and Edwin R. Hancock. "Riemannian Graph Diffusion for DT-MRI Regularization." In Medical Image Computing and Computer-Assisted Intervention – MICCAI 2006, 234–42. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11866763_29.
Hein, Matthias. "Uniform Convergence of Adaptive Graph-Based Regularization." In Learning Theory, 50–64. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11776420_7.
Zheng, Haixia, and Horace H. S. Ip. "Graph-Based Label Propagation with Dissimilarity Regularization." In Lecture Notes in Computer Science, 47–58. Cham: Springer International Publishing, 2013. http://dx.doi.org/10.1007/978-3-319-03731-8_5.
Candemir, Sema, and Yusuf Sinan Akgül. "Adaptive Regularization Parameter for Graph Cut Segmentation." In Lecture Notes in Computer Science, 117–26. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-13772-3_13.
Bougleux, Sébastien, and Abderrahim Elmoataz. "Image Smoothing and Segmentation by Graph Regularization." In Advances in Visual Computing, 745–52. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11595755_95.
Tam, Zhi-Rui, Yi-Lun Wu, and Hong-Han Shuai. "Improving Entity Disambiguation Using Knowledge Graph Regularization." In Advances in Knowledge Discovery and Data Mining, 341–53. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-05933-9_27.
Marques, Manuel D. P. Monteiro. "Regularization and Graph Approximation of a Discontinuous Evolution." In Differential Inclusions in Nonsmooth Mechanical Problems, 27–44. Basel: Birkhäuser Basel, 1993. http://dx.doi.org/10.1007/978-3-0348-7614-8_2.
Minervini, Pasquale, Claudia d'Amato, Nicola Fanizzi, and Floriana Esposito. "Graph-Based Regularization for Transductive Class-Membership Prediction." In Uncertainty Reasoning for the Semantic Web III, 202–18. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-13413-0_11.
Conference papers on the topic "Graph regularization"
Luo, Xin, Ye Yuan, and Di Wu. "Adaptive Regularization-Incorporated Latent Factor Analysis." In 2020 IEEE International Conference on Knowledge Graph (ICKG). IEEE, 2020. http://dx.doi.org/10.1109/icbk50248.2020.00074.
Sacca, Claudio, Michelangelo Diligenti, and Marco Gori. "Graph and Manifold Co-regularization." In 2013 12th International Conference on Machine Learning and Applications (ICMLA). IEEE, 2013. http://dx.doi.org/10.1109/icmla.2013.58.
Rey, Samuel, and Antonio G. Marques. "Robust Graph-Filter Identification with Graph Denoising Regularization." In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2021. http://dx.doi.org/10.1109/icassp39728.2021.9414909.
Zhang, F., and E. R. Hancock. "Tensor MRI Regularization via Graph Diffusion." In British Machine Vision Conference 2006. British Machine Vision Association, 2006. http://dx.doi.org/10.5244/c.20.61.
Kheradmand, Amin, and Peyman Milanfar. "Motion deblurring with graph Laplacian regularization." In IS&T/SPIE Electronic Imaging, edited by Nitin Sampat, Radka Tezaur, and Dietmar Wüller. SPIE, 2015. http://dx.doi.org/10.1117/12.2084585.
Yang, Maosheng, Mario Coutino, Elvin Isufi, and Geert Leus. "Node Varying Regularization for Graph Signals." In 2020 28th European Signal Processing Conference (EUSIPCO). IEEE, 2021. http://dx.doi.org/10.23919/eusipco47968.2020.9287807.
Zhang, Qiang, and Zhenjiang Miao. "Subspace Clustering via Sparse Graph Regularization." In 2017 4th IAPR Asian Conference on Pattern Recognition (ACPR). IEEE, 2017. http://dx.doi.org/10.1109/acpr.2017.94.
Tsuda, Koji. "Entire regularization paths for graph data." In Proceedings of the 24th International Conference on Machine Learning (ICML). New York, New York, USA: ACM Press, 2007. http://dx.doi.org/10.1145/1273496.1273612.
Yu, Tianshu, and Ruisheng Wang. "Graph matching with low-rank regularization." In 2016 IEEE Winter Conference on Applications of Computer Vision (WACV). IEEE, 2016. http://dx.doi.org/10.1109/wacv.2016.7477730.
Xue, Jiaqi, and Bin Zhang. "Adaptive Projected Clustering with Graph Regularization." In 2022 26th International Conference on Pattern Recognition (ICPR). IEEE, 2022. http://dx.doi.org/10.1109/icpr56361.2022.9956370.