Academic literature on the topic 'Kurdyka-Lojasiewicz'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Kurdyka-Lojasiewicz.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Kurdyka-Lojasiewicz"

1

Tran, Phuong Minh, and Nhan Thanh Nguyen. "On the Convergence of Bounded Solutions of Non Homogeneous Gradient-like Systems." Journal of Advanced Engineering and Computation 1, no. 1 (2017): 61. http://dx.doi.org/10.25073/jaec.201711.50.

Abstract:
We study the long-time behavior of bounded solutions of a non-homogeneous gradient-like system that admits a strict Lyapunov function. More precisely, we show that any bounded solution of the gradient-like system converges to an accumulation point as time goes to infinity under some mild hypotheses. As in the homogeneous case, the key assumptions for this system are the angle condition and the Kurdyka-Lojasiewicz inequality. The convergence result is proved under an L1-condition on the perturbation term. Moreover, if the Lyapunov function satisfies a Lojasiewicz inequality, then the ra…
2

Hu, Zhenlong, and Zhongming Chen. "Proximal Alternating Linearized Minimization Algorithm for Sparse Tensor Train Decomposition." Statistics, Optimization & Information Computing 13, no. 5 (2025): 2119–35. https://doi.org/10.19139/soic-2310-5070-2440.

Abstract:
We address the sparse tensor train (TT) decomposition problem by incorporating an L1-norm regularization term. To improve numerical stability, orthogonality constraints are imposed on the problem. The tensor is expressed in the TT format, and the proximal alternating linearized minimization (PALM) algorithm is employed to solve the problem. Furthermore, we verify that the objective function qualifies as a Kurdyka-Lojasiewicz (KL) function and provide a convergence analysis. Numerical experiments on synthetic data and real data also demonstrate the efficiency of the proposed algorithm.
3

Papa Quiroz, Erik A., and Jose L. Huaman Ñaupa. "Método del Punto Proximal Inexacto Usando Cuasi-Distancias para Optimización de Funciones KL." Pesquimat 25, no. 1 (2022): 22–35. http://dx.doi.org/10.15381/pesquimat.v25i1.23144.

Abstract:
We introduce an inexact proximal point algorithm using quasi-distances to solve a minimization problem in Euclidean space. The algorithm is motivated by the proximal method introduced by Attouch et al. [1], but here we consider quasi-distances instead of the Euclidean distance, functions that satisfy the Kurdyka-Lojasiewicz inequality, and vector errors in the critical-point residual of the regularized proximal subproblems. Under some additional assumptions, we obtain global convergence of the sequence generated by the algorithm…
4

Luo, Zhijun, Zhibin Zhu, and Benxin Zhang. "A LogTVSCAD Nonconvex Regularization Model for Image Deblurring in the Presence of Impulse Noise." Discrete Dynamics in Nature and Society 2021 (October 26, 2021): 1–19. http://dx.doi.org/10.1155/2021/3289477.

Abstract:
This paper proposes a nonconvex model (called LogTVSCAD) for deblurring images with impulse noise, using the log-function penalty as the regularizer and adopting the smoothly clipped absolute deviation (SCAD) function as the data-fitting term. The proposed nonconvex model can effectively overcome the poor performance of the classical TVL1 model for high-level impulse noise. A difference of convex functions algorithm (DCA) is proposed to solve the nonconvex model. For the model subproblem, we consider the alternating direction method of multipliers (ADMM) algorithm to solve it. The global…
5

Bento, G. C., and A. Soubeyran. "A Generalized Inexact Proximal Point Method for Nonsmooth Functions that Satisfies Kurdyka Lojasiewicz Inequality." Set-Valued and Variational Analysis 23, no. 3 (2015): 501–17. http://dx.doi.org/10.1007/s11228-015-0319-6.

6

Bonettini, Silvia, Danilo Pezzi, Marco Prato, and Simone Rebegoldi. "On an iteratively reweighted linesearch based algorithm for nonconvex composite optimization." Inverse Problems, April 4, 2023. http://dx.doi.org/10.1088/1361-6420/acca43.

Abstract:
In this paper we propose a new algorithm for solving a class of nonsmooth nonconvex problems, obtained by combining the iteratively reweighted scheme with a finite number of forward–backward iterations based on a linesearch procedure. The new method overcomes some limitations of linesearch forward–backward methods, since it can also be applied to minimize functions containing terms that are both nonsmooth and nonconvex. Moreover, the combined scheme can take advantage of acceleration techniques consisting of suitable selection rules for the algorithm parameters. We develop th…
7

Yuan, Ganzhao, and Bernard Ghanem. "A Proximal Alternating Direction Method for Semi-Definite Rank Minimization." Proceedings of the AAAI Conference on Artificial Intelligence 30, no. 1 (2016). http://dx.doi.org/10.1609/aaai.v30i1.10228.

Abstract:
Semi-definite rank minimization problems model a wide range of applications in both the signal processing and machine learning fields. This class of problems is NP-hard in general. In this paper, we propose a proximal Alternating Direction Method (ADM) for the well-known semi-definite rank regularized minimization problem. Specifically, we first reformulate this NP-hard problem as an equivalent biconvex MPEC (Mathematical Program with Equilibrium Constraints), and then solve it using proximal ADM, which involves solving a sequence of structured convex semi-definite subproblems to find a desirable s…
8

Bourkhissi, Lahcen El, and Ion Necoara. "Complexity of linearized quadratic penalty for optimization with nonlinear equality constraints." Journal of Global Optimization, December 2, 2024. https://doi.org/10.1007/s10898-024-01456-3.

Abstract:
In this paper we consider a nonconvex optimization problem with nonlinear equality constraints. We assume that both the objective function and the functional constraints are locally smooth. To solve this problem, we propose a linearized quadratic penalty method: we linearize the objective function and the functional constraints in the penalty formulation at the current iterate and add a quadratic regularization, thus yielding a subproblem that is easy to solve and whose solution is the next iterate. Under a new adaptive regularization parameter choice, we provide convergenc…
9

Dong, Guozhi, Michael Hintermüller, and Clemens Sirotenko. "Dictionary Learning Based Regularization in Quantitative MRI: A Nested Alternating Optimization Framework." Inverse Problems, July 14, 2025. https://doi.org/10.1088/1361-6420/adef74.

Abstract:
In this article we propose a novel regularization method for a class of nonlinear inverse problems that is inspired by an application in quantitative magnetic resonance imaging (MRI). The latter is a special instance of a general dynamical image reconstruction technique. The radio-frequency pulse sequence in MRI gives rise to a time-discrete, physics-based mathematical model which acts as a side constraint in our inverse problem. As regularization we employ dictionary learning, a method that has proven effective in classical MRI. For computing a solution of the resulting non…

Dissertations / Theses on the topic "Kurdyka-Lojasiewicz"

1

Nguyen, Trong Phong. "Inégalités de Kurdyka-Lojasiewicz et convexité : algorithmes et applications." Thesis, Toulouse 1, 2017. http://www.theses.fr/2017TOU10022/document.

Abstract:
This thesis deals with first-order descent methods for minimization problems. It comprises three parts. In the first part, we give an overview of error bounds and lay the first bricks of a unifying concept. Indeed, we show the central role of the Lojasiewicz gradient inequality by relating it to error bounds. In the second part, using the Kurdyka-Lojasiewicz (KL) inequality, we provide a new tool for computing the complexity of first-order descent methods for convex minimization…
2

Assunção Filho, Pedro Bonfim de. "Um algoritmo proximal com quase-distância." Universidade Federal de Goiás, 2015. http://repositorio.bc.ufg.br/tede/handle/tede/4521.

3

Sousa Júnior, Valdinês Leite de. "Sobre a convergência de métodos de descida em otimização não-suave: aplicações à ciência comportamental." Universidade Federal de Goiás, 2017. http://repositorio.bc.ufg.br/tede/handle/tede/6864.

4

Bensaid, Bilel. "Analyse et développement de nouveaux optimiseurs en Machine Learning." Electronic Thesis or Diss., Bordeaux, 2024. http://www.theses.fr/2024BORD0218.

Abstract:
In recent years, artificial intelligence (AI) has faced two major challenges (among others), namely explainability and frugality, in a context where AI is being integrated into critical or embedded systems and where resources are growing scarce. The challenge is all the greater as the proposed models appear to be black boxes, given the staggering number of hyperparameters that must be tuned (a true craft) to make them work. Among these parameters, the optimizer and its associated settings play a critical role in the successful…
5

Xu, Yangyang. "Block Coordinate Descent for Regularized Multi-convex Optimization." Thesis, 2013. http://hdl.handle.net/1911/72066.

Abstract:
This thesis considers regularized block multi-convex optimization, where the feasible set and objective function are generally non-convex but convex in each block of variables. I review some of its interesting examples and propose a generalized block coordinate descent (BCD) method. The generalized BCD uses three different block-update schemes. Based on the property of one block subproblem, one can freely choose one of the three schemes to update the corresponding block of variables. Appropriate choices of block-update schemes can often speed up the algorithm and greatly save computing time.