Academic literature on the topic "Convex minimization"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles

Browse the thematic lists of articles, books, theses, conference papers, and other academic sources on the topic "Convex minimization".

Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Convex minimization"

1

Li, Duan, Zhi-You Wu, Heung-Wing Joseph Lee, Xin-Min Yang and Lian-Sheng Zhang. "Hidden Convex Minimization". Journal of Global Optimization 31, no. 2 (February 2005): 211–33. http://dx.doi.org/10.1007/s10898-004-5697-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Mayeli, Azita. "Non-convex Optimization via Strongly Convex Majorization-minimization". Canadian Mathematical Bulletin 63, no. 4 (December 10, 2019): 726–37. http://dx.doi.org/10.4153/s0008439519000730.

Full text
Abstract
In this paper, we introduce a class of nonsmooth nonconvex optimization problems, and we propose to use a local iterative minimization-majorization (MM) algorithm to find an optimal solution for the optimization problem. The cost functions in our optimization problems are an extension of convex functions with MC separable penalty, which were previously introduced by Ivan Selesnick. These functions are not convex; therefore, convex optimization methods cannot be applied here to prove the existence of an optimal minimum point for these functions. For our purpose, we use convex analysis tools to first construct a class of convex majorizers, which approximate the value of the non-convex cost function locally, then use the MM algorithm to prove the existence of a local minimum. The convergence of the algorithm is guaranteed when the iterative points $x^{(k)}$ are obtained in a ball centred at $x^{(k-1)}$ with small radius. We prove that the algorithm converges to a stationary point (local minimum) of the cost function when the surrogates are strongly convex.
APA, Harvard, Vancouver, ISO, and other styles
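
As a quick illustration of the majorization-minimization idea described in the abstract above, here is a minimal Python sketch. It uses a quadratic majorizer built from a Lipschitz constant of the gradient, which is an illustrative assumption, not the MC-penalty surrogates of the paper:

import numpy as np

def mm_minimize(grad_f, L, x0, iters=500):
    # Majorization-minimization with the quadratic surrogate
    #   g(x | x_k) = f(x_k) + <grad_f(x_k), x - x_k> + (L/2) * ||x - x_k||^2,
    # which majorizes f whenever grad_f is L-Lipschitz. Minimizing the
    # surrogate exactly reduces to a gradient step of length 1/L.
    x = x0
    for _ in range(iters):
        x = x - grad_f(x) / L
    return x

# Example: f(x) = ||Ax - b||^2 has gradient 2 A^T (Ax - b) and L = 2 ||A||_2^2.
A = np.array([[1.0, 0.0], [1.0, 1.0]])
b = np.array([1.0, 2.0])
grad = lambda x: 2.0 * A.T @ (A @ x - b)
L = 2.0 * np.linalg.norm(A, 2) ** 2
print(mm_minimize(grad, L, np.zeros(2)))  # approaches the least-squares solution [1, 1]
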
3

Scarpa, Luca and Ulisse Stefanelli. "Stochastic PDEs via convex minimization". Communications in Partial Differential Equations 46, no. 1 (October 14, 2020): 66–97. http://dx.doi.org/10.1080/03605302.2020.1831017.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Thach, P. T. "Convex minimization under Lipschitz constraints". Journal of Optimization Theory and Applications 64, no. 3 (March 1990): 595–614. http://dx.doi.org/10.1007/bf00939426.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Mifflin, Robert and Claudia Sagastizábal. "A VU-algorithm for convex minimization". Mathematical Programming 104, no. 2-3 (July 14, 2005): 583–608. http://dx.doi.org/10.1007/s10107-005-0630-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Shioura, Akiyoshi. "Minimization of an M-convex function". Discrete Applied Mathematics 84, no. 1-3 (May 1998): 215–20. http://dx.doi.org/10.1016/s0166-218x(97)00140-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

O'Hara, John G., Paranjothi Pillay and Hong-Kun Xu. "Iterative Approaches to Convex Minimization Problems". Numerical Functional Analysis and Optimization 25, no. 5-6 (January 2004): 531–46. http://dx.doi.org/10.1081/nfa-200041707.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Ye, Qiaolin, Chunxia Zhao, Ning Ye and Xiaobo Chen. "Localized twin SVM via convex minimization". Neurocomputing 74, no. 4 (January 2011): 580–87. http://dx.doi.org/10.1016/j.neucom.2010.09.015.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Akagi, Goro and Ulisse Stefanelli. "Doubly Nonlinear Equations as Convex Minimization". SIAM Journal on Mathematical Analysis 46, no. 3 (January 2014): 1922–45. http://dx.doi.org/10.1137/13091909x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Stefanov, Stefan M. "Convex separable minimization with box constraints". PAMM 7, no. 1 (December 2007): 2060045–46. http://dx.doi.org/10.1002/pamm.200700535.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Theses on the topic "Convex minimization"

1

Nedić, Angelia. "Subgradient methods for convex minimization". Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/16843.

Full text
Abstract
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2002.
Includes bibliographical references (p. 169-174).
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Many optimization problems arising in various applications require minimization of an objective cost function that is convex but not differentiable. Such a minimization arises, for example, in model construction, system identification, neural networks, pattern classification, and various assignment, scheduling, and allocation problems. To solve convex but not differentiable problems, we have to employ special methods that can work in the absence of differentiability, while taking advantage of convexity and possibly other special structures that our minimization problem may possess. In this thesis, we propose and analyze some new methods that can solve convex (not necessarily differentiable) problems. In particular, we consider two classes of methods: incremental and variable metric.
by Angelia Nedić.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
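
To make the class of methods in the thesis title concrete, here is a minimal subgradient-method sketch for a nonsmooth convex objective. The diminishing step rule and the l1 example are generic illustrations, not the incremental or variable-metric variants the thesis actually develops:

import numpy as np

def subgradient_method(f, subgrad, x0, iters=2000):
    # Classic subgradient method with diminishing steps 1/sqrt(k+1).
    # f(x_k) need not decrease monotonically, so we track the best iterate.
    x, best_x, best_f = x0, x0, f(x0)
    for k in range(iters):
        x = x - subgrad(x) / np.sqrt(k + 1.0)
        if f(x) < best_f:
            best_x, best_f = x, f(x)
    return best_x

# Example: f(x) = ||x - c||_1 is convex but not differentiable;
# sign(x - c) is a valid subgradient, and the minimizer is x = c.
c = np.array([1.0, -2.0, 3.0])
f = lambda x: np.abs(x - c).sum()
g = lambda x: np.sign(x - c)
print(subgradient_method(f, g, np.zeros(3)))
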
2

Apidopoulos, Vasileios. "Inertial Gradient-Descent algorithms for convex minimization". Thesis, Bordeaux, 2019. http://www.theses.fr/2019BORD0175/document.

Full text
Abstract
This thesis focuses on the study of inertial methods for solving composite convex minimization problems. Since the early works of Polyak and Nesterov, inertial methods have become very popular, thanks to their acceleration effects. Here, we study a family of Nesterov-type inertial proximal-gradient algorithms with a particular over-relaxation sequence. We give a unified presentation of the different convergence properties of this family of algorithms, depending on the over-relaxation parameter. In addition, we address this issue in the case of a smooth function with additional geometrical structure, such as the growth (or Łojasiewicz) condition. We show that by combining the growth condition with a flatness-type condition on the geometry of the minimizing function, we are able to obtain some new convergence rates. Our analysis follows a continuous-to-discrete trail, passing from continuous-time dynamical systems to discrete schemes. In particular, the family of inertial algorithms that interests us can be identified as a finite-difference scheme of a differential equation/inclusion. This approach provides a useful guideline, which permits transposing the different results and their proofs from the continuous system to the discrete one. This opens the way for new possible inertial schemes derived from the same dynamical system.
APA, Harvard, Vancouver, ISO, and other styles
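
For context, a Nesterov-type inertial proximal-gradient iteration of the kind studied in this thesis has the shape sketched below. The lasso objective and the k/(k+3) over-relaxation sequence are assumptions chosen for illustration:

import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def inertial_proximal_gradient(grad_f, prox_g, L, x0, iters=300):
    # x_{k+1} = prox_{g/L}(y_k - grad_f(y_k)/L)
    # y_{k+1} = x_{k+1} + (k/(k+3)) * (x_{k+1} - x_k)
    x_prev, y = x0, x0
    for k in range(iters):
        x = prox_g(y - grad_f(y) / L, 1.0 / L)
        y = x + (k / (k + 3.0)) * (x - x_prev)
        x_prev = x
    return x_prev

# Example: lasso, minimize 0.5 * ||Ax - b||^2 + lam * ||x||_1.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
lam = 0.5
L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of A^T (Ax - b)
grad = lambda x: A.T @ (A @ x - b)
prox = lambda v, t: soft_threshold(v, lam * t)
print(inertial_proximal_gradient(grad, prox, L, np.zeros(5)))
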
3

Gräser, Carsten [author]. "Convex minimization and phase field models / Carsten Gräser". Berlin: Freie Universität Berlin, 2011. http://d-nb.info/1026174848/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

El Gheche, Mireille. "Proximal methods for convex minimization of Phi-divergences : application to computer vision". Thesis, Paris Est, 2014. http://www.theses.fr/2014PEST1018/document.

Full text
Abstract
Convex optimization aims at searching for the minimum of a convex function over a convex set. While the theory of convex optimization has been largely explored for about a century, several related developments have stimulated a new interest in the topic. The first one is the emergence of efficient optimization algorithms, such as proximal methods, which allow one to easily solve large-size nonsmooth convex problems in a parallel manner. The second development is the discovery of the fact that convex optimization problems are more ubiquitous in practice than was thought previously. In this thesis, we address two different problems within the framework of convex optimization. The first one is an application to computer stereo vision, where the goal is to recover the depth information of a scene from a pair of images taken from the left and right positions. The second one is the proposition of new mathematical tools to deal with convex optimization problems involving information measures, where the objective is to minimize the divergence between two statistical objects such as random variables or probability distributions. We propose a convex approach to address the problem of dense disparity estimation under varying illumination conditions. A convex energy function is derived for jointly estimating the disparity and the illumination variation. The resulting problem is tackled in a set theoretic framework and solved using proximal tools. It is worth emphasizing the ability of this method to process multicomponent images under illumination variation. The conducted experiments indicate that this approach can effectively deal with local illumination changes and yields better results compared with existing methods. We then extend the previous approach to the problem of multi-view disparity estimation. Rather than estimating a single depth map, we estimate a sequence of disparity maps, one for each input image. We address this problem by adopting a discrete reformulation that can be efficiently solved through a convex relaxation. This approach offers the advantage of handling both convex and nonconvex similarity measures within the same framework. We have shown that the additional complexity required by the application of our method to the multi-view case is small with respect to the stereo case. Finally, we have proposed a novel approach to handle a broad class of statistical distances, called $\varphi$-divergences, within the framework of proximal algorithms. In particular, we have developed the expressions of the proximity operators of several $\varphi$-divergences, such as Kullback-Leibler, Jeffreys-Kullback, Hellinger, Chi-square, I$_{\alpha}$, and Rényi divergences. This allows proximal algorithms to deal with problems involving such divergences, thus overcoming the limitations of current state-of-the-art approaches for similar problems. The proposed approach is validated in two different contexts. The first is an application to image restoration that illustrates how to employ divergences as a regularization term, while the second is an application to image registration that employs divergences as a data fidelity term.
APA, Harvard, Vancouver, ISO, and other styles
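
The central tool in the last part of this thesis is the proximity operator, prox_{γφ}(y) = argmin_x φ(x) + ||x - y||^2/(2γ). As a hedged illustration of the concept only (not the paper's φ-divergence formulas), the negative logarithm, a building block of KL-type terms, has a simple closed-form prox:

import numpy as np

def prox_neg_log(y, gamma):
    # Proximity operator of phi(x) = -log(x): the minimizer of
    # -log(x) + (x - y)^2 / (2 * gamma) over x > 0. Setting the derivative
    # to zero gives x^2 - y*x - gamma = 0; the positive root is the answer.
    return 0.5 * (y + np.sqrt(y * y + 4.0 * gamma))

y = np.array([-1.0, 0.0, 2.0])
print(prox_neg_log(y, 1.0))  # elementwise, and always strictly positive
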
5

Doto, James William. "Conditional uniform convexity in Orlicz spaces and minimization problems". Thesis, Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/27352.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Croxton, Keely L., Bernard Gendron and Thomas L. Magnanti. "A Comparison of Mixed-Integer Programming Models for Non-Convex Piecewise Linear Cost Minimization Problems". Massachusetts Institute of Technology, Operations Research Center, 2002. http://hdl.handle.net/1721.1/5233.

Full text
Abstract
We study a generic minimization problem with separable non-convex piecewise linear costs, showing that the linear programming (LP) relaxation of three textbook mixed integer programming formulations each approximates the cost function by its lower convex envelope. We also show a relationship between this result and classical Lagrangian duality theory.
APA, Harvard, Vancouver, ISO, and other styles
7

He, Niao. "Saddle point techniques in convex composite and error-in-measurement optimization". Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/54400.

Full text
Abstract
This dissertation aims to develop efficient algorithms with improved scalability and stability properties for large-scale optimization and optimization under uncertainty, and to bridge some of the gaps between modern optimization theories and recent applications emerging in the Big Data environment. To this end, the dissertation is dedicated to two important subjects -- i) Large-scale Convex Composite Optimization and ii) Error-in-Measurement Optimization. In spite of the different natures of these two topics, the common denominator, to be presented, lies in their accommodation of the systematic use of saddle point techniques for mathematical modeling and numerical processing. The main body can be split into three parts. In the first part, we consider a broad class of variational inequalities with composite structures, allowing us to cover the saddle point/variational analogies of the classical convex composite minimization (i.e. summation of a smooth convex function and a simple nonsmooth convex function). We develop novel composite versions of the state-of-the-art Mirror Descent and Mirror Prox algorithms aimed at solving such types of problems. We demonstrate that the algorithms inherit the favorable efficiency estimates of their prototypes when solving structured variational inequalities. Moreover, we develop several variants of the composite Mirror Prox algorithm along with their corresponding complexity bounds, allowing the algorithm to handle the case of an imprecise prox mapping as well as the case when the operator is represented by an unbiased stochastic oracle. In the second part, we investigate four general types of large-scale convex composite optimization problems, including (a) multi-term composite minimization, (b) linearly constrained composite minimization, (c) norm-regularized nonsmooth minimization, and (d) maximum likelihood Poisson imaging. We demonstrate that the composite Mirror Prox, when integrated with saddle point techniques and other algorithmic tools, can solve all these optimization problems with the best rates of convergence known so far. Our main related contributions are as follows. First, regarding problems of type (a), we develop an optimal algorithm by integrating the composite Mirror Prox with a saddle point reformulation based on exact penalty. Second, regarding problems of type (b), we develop a novel algorithm reducing the problem to solving a "small series" of saddle point subproblems and achieving an optimal, up to log factors, complexity bound. Third, regarding problems of type (c), we develop a Semi-Proximal Mirror-Prox algorithm by leveraging the saddle point representation and linear minimization over the problem's domain, attaining optimality both in the number of calls to the first-order oracle representing the objective and in the number of calls to the linear minimization oracle representing the problem's domain. Last, regarding problem (d), we show that the composite Mirror Prox, when applied to the saddle point reformulation, circumvents the difficulty with the non-Lipschitz continuity of the objective and exhibits a better convergence rate than the typical rate for nonsmooth optimization. We conduct extensive numerical experiments and illustrate the practical potential of our algorithms in a wide spectrum of applications in machine learning and image processing.
In the third part, we examine error-in-measurement optimization, referring to decision-making problems with data subject to measurement errors; such problems arise naturally in a number of important applications, such as privacy learning, signal processing, and portfolio selection. Due to the postulated observation scheme and specific structure of the problem, straightforward application of standard stochastic optimization techniques such as Stochastic Approximation (SA) and Sample Average Approximation (SAA) is out of the question. Our goal is to develop computationally efficient and, hopefully, not too conservative data-driven techniques applicable to a broad scope of problems and allowing for theoretical performance guarantees. We present two such approaches -- one depending on a fully algorithmic calculus of saddle point representations of convex-concave functions and the other depending on a general approximation scheme of convex stochastic programming. Both approaches allow us to convert the problem of interest into a form amenable to SA or SAA. The latter developments are primarily focused on two important applications -- affine signal processing and indirect support vector machines.
APA, Harvard, Vancouver, ISO, and other styles
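
As background for the Mirror Descent / Mirror Prox machinery used throughout this dissertation, here is the basic entropic mirror-descent step on the probability simplex. The linear objective and fixed step size are illustrative assumptions; the dissertation's composite Mirror Prox adds an extragradient step and composite terms:

import numpy as np

def entropic_mirror_descent(grad, x0, step=0.1, iters=500):
    # Mirror descent with the entropy distance-generating function:
    # a multiplicative (exponentiated-gradient) update, then renormalize
    # so that the iterate stays on the probability simplex.
    x = x0
    for _ in range(iters):
        x = x * np.exp(-step * grad(x))
        x = x / x.sum()
    return x

# Example: minimize <c, x> over the simplex; the optimum concentrates
# all mass on the smallest coordinate of c.
c = np.array([0.3, 0.1, 0.7])
print(entropic_mirror_descent(lambda x: c, np.ones(3) / 3.0))
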
8

Ghebremariam, Samuel. "Energy Production Cost and PAR Minimization in Multi-Source Power Networks". University of Akron / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=akron1336517757.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Sinha, Arunesh. "Audit Games". Research Showcase @ CMU, 2014. http://repository.cmu.edu/dissertations/487.

Full text
Abstract
Modern organizations (e.g., hospitals, banks, social networks, search engines) hold large volumes of personal information, and rely heavily on auditing for enforcement of privacy policies. These audit mechanisms combine automated methods with human input to detect and punish violators. Since human audit resources are limited, and often not sufficient to investigate all potential violations, current state-of-the-art audit tools provide heuristics to guide human effort. However, numerous reports of privacy breaches caused by malicious insiders bring to question the effectiveness of these audit mechanisms. Our thesis is that effective audit resource allocation and punishment levels can be efficiently computed by modeling the audit process as a game between a rational auditor and a rational or worst-case auditee. We present several results in support of the thesis. In the worst-case adversary setting, we design a game model taking into account organizational cost of auditing and loss from violations. We propose the notion of low regret as a desired audit property and provide a regret minimizing audit algorithm that outputs an optimal audit resource allocation strategy. The algorithm improves upon prior regret bounds in the partial information setting. In the rational adversary setting, we enable punishments by the auditor, and model the adversary's utility as a trade-off between the benefit from violations and loss due to punishment when detected. Our Stackelberg game model generalizes an existing deployed security game model with punishment parameters. It applies to natural auditing settings with multiple auditors where each auditor is restricted to audit a subset of the potential violations. We provide novel polynomial time algorithms to approximate the non-convex optimization problem used to compute the Stackelberg equilibrium. The algorithms output optimal audit resource allocation strategy and punishment levels. We also provide a method to reduce the optimization problem size, achieving up to 5x speedup for realistic instances of the audit problem, and for the related security game instances.
APA, Harvard, Vancouver, ISO, and other styles
10

Caillaud, Corentin. "Asymptotical estimates for some algorithms for data and image processing : a study of the Sinkhorn algorithm and a numerical analysis of total variation minimization". Thesis, Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAX023.

Full text
Abstract
This thesis deals with discrete convex optimization problems and investigates estimates of their convergence rates. It is divided into two independent parts. The first part addresses the convergence rate of the Sinkhorn algorithm and of some of its variants. This algorithm appears in the context of Optimal Transportation (OT) through entropic regularization. Its iterations, and the ones of the Sinkhorn-like variants, are written as componentwise products of nonnegative vectors and matrices. We propose a new approach to analyze them, based on simple convex inequalities and leading to the linear convergence rate that is observed in practice. We extend this result to a particular type of variants of the algorithm that we call 1D balanced Sinkhorn-like algorithms. In addition, we present some numerical techniques dealing with the convergence towards zero of the regularizing parameter of the OT problems. Lastly, we conduct the complete analysis of the convergence rate in dimension 2. In the second part, we establish error estimates for two discretizations of the total variation (TV) in the Rudin-Osher-Fatemi (ROF) model. This image denoising problem, which is solved by computing the proximal operator of the total variation, enjoys isotropy properties ensuring the preservation of sharp discontinuities in the denoised images in every direction. When the problem is discretized on a square mesh of size h and one uses a standard discrete total variation -- the so-called isotropic TV -- this property is lost. We show that in a particular direction the error in the energy is of order h^{2/3}, which is relatively large with respect to what one can expect with better discretizations. Our proof relies on the analysis of an equivalent 1D denoising problem and of the perturbed TV it involves. The second discrete total variation we consider mimics the definition of the continuous total variation, replacing the usual dual fields by discrete Raviart-Thomas fields. Doing so, we recover an isotropic behavior of the discrete ROF model. Finally, we prove an O(h) error estimate for this variant under standard hypotheses.
APA, Harvard, Vancouver, ISO, and other styles
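
The Sinkhorn iterations analyzed in the first part of this thesis are componentwise scalings, exactly as the abstract says. This minimal sketch, with a generic cost matrix and regularization parameter assumed for illustration, shows the standard form:

import numpy as np

def sinkhorn(C, mu, nu, eps=0.1, iters=200):
    # Alternately rescale K = exp(-C/eps) so that the transport plan
    # diag(u) @ K @ diag(v) has row marginals mu and column marginals nu.
    K = np.exp(-C / eps)
    u = np.ones_like(mu)
    for _ in range(iters):
        v = nu / (K.T @ u)
        u = mu / (K @ v)
    return u[:, None] * K * v[None, :]

mu = np.array([0.5, 0.5])
nu = np.array([0.25, 0.75])
C = np.array([[0.0, 1.0], [1.0, 0.0]])
P = sinkhorn(C, mu, nu)
print(P.sum(axis=1), P.sum(axis=0))  # marginals match mu and nu
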
More sources

Books on the topic "Convex minimization"

1

Hiriart-Urruty, Jean-Baptiste. Convex analysis and minimization algorithms. Berlin: Springer-Verlag, 1993.

Search full text
APA, Harvard, Vancouver, ISO, and other styles
2

Hiriart-Urruty, Jean-Baptiste. Convex analysis and minimization algorithms. Berlin: Springer-Verlag, 1993.

Search full text
APA, Harvard, Vancouver, ISO, and other styles
3

Hiriart-Urruty, Jean-Baptiste. Convex analysis and minimization algorithms. 2nd ed. Berlin: Springer-Verlag, 1996.

Search full text
APA, Harvard, Vancouver, ISO, and other styles
4

Hiriart-Urruty, Jean-Baptiste. Convex analysis and minimization algorithms. Berlin: Springer-Verlag, 1993.

Search full text
APA, Harvard, Vancouver, ISO, and other styles
5

Hiriart-Urruty, Jean-Baptiste and Claude Lemaréchal. Convex Analysis and Minimization Algorithms I. Berlin, Heidelberg: Springer Berlin Heidelberg, 1993. http://dx.doi.org/10.1007/978-3-662-02796-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Hiriart-Urruty, Jean-Baptiste and Claude Lemaréchal. Convex Analysis and Minimization Algorithms II. Berlin, Heidelberg: Springer Berlin Heidelberg, 1993. http://dx.doi.org/10.1007/978-3-662-06409-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Convex Analysis And Minimization Algorithms. Springer, 2010.

Search full text
APA, Harvard, Vancouver, ISO, and other styles
8

Lemaréchal, Claude and Jean-Baptiste Hiriart-Urruty. Convex Analysis and Minimization Algorithms I: Fundamentals. Springer, 2011.

Search full text
APA, Harvard, Vancouver, ISO, and other styles
9

Lemaréchal, Claude and Jean-Baptiste Hiriart-Urruty. Convex Analysis and Minimization Algorithms I: Fundamentals (Grundlehren der mathematischen Wissenschaften Book 305). Springer, 2011.

Search full text
APA, Harvard, Vancouver, ISO, and other styles
10

Lemaréchal, Claude and Jean-Baptiste Hiriart-Urruty. Convex Analysis and Minimization Algorithms: Part 2: Advanced Theory and Bundle Methods (Grundlehren der mathematischen Wissenschaften). Springer, 2001.

Search full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Convex minimization"

1

Lange, Kenneth. "Convex Minimization Algorithms". In Springer Texts in Statistics, 415–44. New York, NY: Springer New York, 2012. http://dx.doi.org/10.1007/978-1-4614-5838-8_16.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Bauschke, Heinz H. and Patrick L. Combettes. "Convex Minimization Problems". In CMS Books in Mathematics, 189–201. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-48311-5_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Troutman, John L. "Minimization of Convex Functions". In Variational Calculus and Optimal Control, 53–96. New York, NY: Springer New York, 1996. http://dx.doi.org/10.1007/978-1-4612-0737-5_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Zaslavski, Alexander J. "Minimization of Quasiconvex Functions". In Convex Optimization with Computational Errors, 287–93. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-37822-6_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Zaslavski, Alexander J. "Minimization of Sharp Weakly Convex Functions". In Convex Optimization with Computational Errors, 295–320. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-37822-6_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Falb, Peter. "Minimization of Functionals: Uniformly Convex Spaces". In Direct Methods in Control Problems, 31–39. New York, NY: Springer New York, 2019. http://dx.doi.org/10.1007/978-0-8176-4723-0_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Li, S. Z., Y. H. Huang, J. S. Fu and K. L. Chan. "Edge-preserving smoothing by convex minimization". In Computer Vision — ACCV'98, 746–53. Berlin, Heidelberg: Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/3-540-63930-6_190.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Souza de Cursi, Eduardo, Rubens Sampaio and Piotr Breitkopf. "Minimization of a Non-Convex Function". In Modeling and Convexity, 61–68. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2013. http://dx.doi.org/10.1002/9781118622438.ch4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Kiwiel, K. C. "Descent Methods for Nonsmooth Convex Constrained Minimization". In Nondifferentiable Optimization: Motivations and Applications, 203–14. Berlin, Heidelberg: Springer Berlin Heidelberg, 1985. http://dx.doi.org/10.1007/978-3-662-12603-5_19.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Tuzikov, Alexander V. and Stanislav A. Sheynin. "Minkowski Sum Volume Minimization for Convex Polyhedra". In Mathematical Morphology and its Applications to Image and Signal Processing, 33–40. Boston, MA: Springer US, 2002. http://dx.doi.org/10.1007/0-306-47025-x_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Convex minimization"

1

Dvijotham, Krishnamurthy, Emanuel Todorov and Maryam Fazel. "Convex control design via covariance minimization". In 2013 51st Annual Allerton Conference on Communication, Control, and Computing (Allerton). IEEE, 2013. http://dx.doi.org/10.1109/allerton.2013.6736510.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Margellos, Kostas, Alessandro Falsone, Simone Garatti and Maria Prandini. "Proximal minimization based distributed convex optimization". In 2016 American Control Conference (ACC). IEEE, 2016. http://dx.doi.org/10.1109/acc.2016.7525287.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Souiai, Mohamed, Martin R. Oswald, Youngwook Kee, Junmo Kim, Marc Pollefeys and Daniel Cremers. "Entropy Minimization for Convex Relaxation Approaches". In 2015 IEEE International Conference on Computer Vision (ICCV). IEEE, 2015. http://dx.doi.org/10.1109/iccv.2015.207.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Tran-Dinh, Quoc, Yen-Huan Li and Volkan Cevher. "Barrier smoothing for nonsmooth convex minimization". In ICASSP 2014 - 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2014. http://dx.doi.org/10.1109/icassp.2014.6853848.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Rothvoss, Thomas. "Constructive Discrepancy Minimization for Convex Sets". In 2014 IEEE 55th Annual Symposium on Foundations of Computer Science (FOCS). IEEE, 2014. http://dx.doi.org/10.1109/focs.2014.23.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Combettes, Patrick L. and Jean-Christophe Pesquet. "Split convex minimization algorithm for signal recovery". In ICASSP 2009 - 2009 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2009. http://dx.doi.org/10.1109/icassp.2009.4959676.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Slavakis, Konstantinos. "Stochastic Composite Convex Minimization with Affine Constraints". En 2018 52nd Asilomar Conference on Signals, Systems, and Computers. IEEE, 2018. http://dx.doi.org/10.1109/acssc.2018.8645298.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Zhang, Hu, Pan Zhou, Yi Yang and Jiashi Feng. "Generalized Majorization-Minimization for Non-Convex Optimization". In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/591.

Full text
Abstract
Majorization-Minimization (MM) algorithms optimize an objective function by iteratively minimizing its majorizing surrogate and offer an attractively fast convergence rate for convex problems. However, their convergence behaviors for non-convex problems remain unclear. In this paper, we generalize the MM surrogate function from strictly upper bounding the objective to bounding the objective in expectation. With this generalized surrogate conception, we develop a new optimization algorithm, termed SPI-MM, that leverages the recently proposed SPIDER for more efficient non-convex optimization. We prove that for finite-sum problems, the SPI-MM algorithm converges to a stationary point within deterministic and lower stochastic gradient complexity. To the best of our knowledge, this work gives the first non-asymptotic convergence analysis for MM-alike algorithms in general non-convex optimization. Extensive empirical studies on non-convex logistic regression and sparse PCA demonstrate the advantageous efficiency of the proposed algorithm and validate our theoretical results.
APA, Harvard, Vancouver, ISO, and other styles
9

Bliek, Laurens, Michel Verhaegen and Sander Wahls. "Online function minimization with convex random relu expansions". In 2017 IEEE 27th International Workshop on Machine Learning for Signal Processing (MLSP). IEEE, 2017. http://dx.doi.org/10.1109/mlsp.2017.8168109.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Yuan, Gonglin, Zengxin Wei and Guangjun Zhu. "A spectral gradient algorithm for nonsmooth convex minimization". In 2012 4th Electronic System-Integration Technology Conference (ESTC). IEEE, 2012. http://dx.doi.org/10.1109/estc.2012.6485724.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Convex minimization"

1

Giles, Daniel. The Majorization Minimization Principle and Some Applications in Convex Optimization. Portland State University Library, January 2015. http://dx.doi.org/10.15760/honors.175.

Full text
APA, Harvard, Vancouver, ISO, and other styles