
Dissertations / Theses on the topic 'Convex minimization'

Consult the top 35 dissertations / theses for your research on the topic 'Convex minimization.'

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Nedić, Angelia. "Subgradient methods for convex minimization." Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/16843.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2002.
Includes bibliographical references (p. 169-174).
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Many optimization problems arising in various applications require minimization of an objective cost function that is convex but not differentiable. Such a minimization arises, for example, in model construction, system identification, neural networks, pattern classification, and various assignment, scheduling, and allocation problems. To solve convex but not differentiable problems, we have to employ special methods that can work in the absence of differentiability, while taking advantage of convexity and possibly other special structures that our minimization problem may possess. In this thesis, we propose and analyze some new methods that can solve convex (not necessarily differentiable) problems. In particular, we consider two classes of methods: incremental and variable metric.
by Angelia Nedić.
Ph.D.
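As a baseline illustration of the classical scheme this thesis builds on (not its incremental or variable-metric proposals), a plain subgradient iteration might look like the following Python sketch; the ℓ1 objective and the diminishing step size are assumptions chosen for the example.

    import numpy as np

    def subgradient_method(A, b, steps=500):
        # Minimize the nonsmooth convex f(x) = ||Ax - b||_1 with the
        # diminishing step size 1/sqrt(k+1). The best iterate is tracked,
        # since subgradient methods are not descent methods.
        x = np.zeros(A.shape[1])
        best_x, best_f = x.copy(), np.inf
        for k in range(steps):
            g = A.T @ np.sign(A @ x - b)      # a subgradient of f at x
            x = x - g / np.sqrt(k + 1)
            f = np.abs(A @ x - b).sum()
            if f < best_f:
                best_x, best_f = x.copy(), f
        return best_x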
APA, Harvard, Vancouver, ISO, and other styles
2

Apidopoulos, Vasileios. "Inertial Gradient-Descent algorithms for convex minimization." Thesis, Bordeaux, 2019. http://www.theses.fr/2019BORD0175/document.

Full text
Abstract:
This thesis focuses on the study of inertial methods for solving composite convex minimization problems. Since the early works of Polyak and Nesterov, inertial methods have become very popular thanks to their acceleration effects. Here, we study a family of Nesterov-type inertial proximal-gradient algorithms with a particular over-relaxation sequence. We give a unified presentation of the different convergence properties of this family of algorithms, depending on the over-relaxation parameter. In addition, we study these properties in the case of smooth functions satisfying additional geometrical assumptions, such as the growth (or Łojasiewicz) condition. We show that by combining the growth condition with a flatness-type condition on the geometry of the minimizing function, we are able to obtain some new convergence rates. Our analysis follows a continuous-to-discrete trail, passing from continuous-time dynamical systems to discrete schemes. In particular, the family of inertial algorithms that interests us can be identified as a finite-difference scheme of a differential equation/inclusion. This approach provides a useful guideline that permits transposing the different results and their proofs from the continuous system to the discrete one. This opens the way for new possible inertial schemes derived from the same dynamical system.
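To make the family of schemes concrete, here is a hedged Python sketch of a Nesterov-type inertial proximal-gradient iteration on a lasso objective; the objective, the inertial sequence (k - 1)/(k + alpha) and the value alpha = 3 are illustrative assumptions, not the exact setting of the thesis.

    import numpy as np

    def inertial_proximal_gradient(A, b, lam, alpha=3.0, steps=300):
        # min 0.5*||Ax - b||^2 + lam*||x||_1 via a forward-backward step
        # applied at an inertially extrapolated point.
        L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the smooth part
        soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
        x = x_prev = np.zeros(A.shape[1])
        for k in range(1, steps + 1):
            y = x + (k - 1.0) / (k + alpha) * (x - x_prev)  # inertial extrapolation
            x_prev, x = x, soft(y - A.T @ (A @ y - b) / L, lam / L)  # proximal step
        return x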
APA, Harvard, Vancouver, ISO, and other styles
3

Gräser, Carsten [Verfasser]. "Convex minimization and phase field models / Carsten Gräser." Berlin : Freie Universität Berlin, 2011. http://d-nb.info/1026174848/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

El Gheche, Mireille. "Proximal methods for convex minimization of Phi-divergences : application to computer vision." Thesis, Paris Est, 2014. http://www.theses.fr/2014PEST1018/document.

Full text
Abstract:
Convex optimization aims at searching for the minimum of a convex function over a convex set. While the theory of convex optimization has been largely explored for about a century, several related developments have stimulated a new interest in the topic. The first one is the emergence of efficient optimization algorithms, such as proximal methods, which allow one to easily solve large-size nonsmooth convex problems in a parallel manner. The second development is the discovery of the fact that convex optimization problems are more ubiquitous in practice than was thought previously. In this thesis, we address two different problems within the framework of convex optimization. The first one is an application to computer stereo vision, where the goal is to recover the depth information of a scene from a pair of images taken from the left and right positions. The second one is the proposition of new mathematical tools to deal with convex optimization problems involving information measures, where the objective is to minimize the divergence between two statistical objects such as random variables or probability distributions. We propose a convex approach to address the problem of dense disparity estimation under varying illumination conditions. A convex energy function is derived for jointly estimating the disparity and the illumination variation. The resulting problem is tackled in a set theoretic framework and solved using proximal tools. It is worth emphasizing the ability of this method to process multicomponent images under illumination variation. The conducted experiments indicate that this approach can effectively deal with the local illumination changes and yields better results compared with existing methods. We then extend the previous approach to the problem of multi-view disparity estimation. Rather than estimating a single depth map, we estimate a sequence of disparity maps, one for each input image. We address this problem by adopting a discrete reformulation that can be efficiently solved through a convex relaxation. This approach offers the advantage of handling both convex and nonconvex similarity measures within the same framework. We have shown that the additional complexity required by the application of our method to the multi-view case is small with respect to the stereo case. Finally, we have proposed a novel approach to handle a broad class of statistical distances, called $\varphi$-divergences, within the framework of proximal algorithms. In particular, we have developed the expression of the proximity operators of several $\varphi$-divergences, such as the Kullback-Leibler, Jeffreys-Kullback, Hellinger, chi-square, I$_{\alpha}$, and Rényi divergences. This allows proximal algorithms to deal with problems involving such divergences, thus overcoming the limitations of current state-of-the-art approaches for similar problems. The proposed approach is validated in two different contexts. The first is an application to image restoration that illustrates how to employ divergences as a regularization term, while the second is an application to image registration that employs divergences as a data fidelity term.
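To give a flavor of such derivations, the sketch below evaluates the proximity operator of the negative-entropy integrand u -> u*log(u), one building block of the Kullback-Leibler divergence; it is an illustrative special case under stated assumptions, not the thesis's general Phi-divergence calculus.

    import numpy as np
    from scipy.special import lambertw

    def prox_entropy(x, gamma=1.0):
        # prox_{gamma f}(x) for f(u) = u*log(u) on u > 0: the optimality
        # condition u + gamma*log(u) = x - gamma is solved in closed form
        # by the Lambert W function.
        z = np.exp((x - gamma) / gamma) / gamma
        return gamma * np.real(lambertw(z))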
APA, Harvard, Vancouver, ISO, and other styles
5

Doto, James William. "Conditional uniform convexity in Orlicz spaces and minimization problems." Thesis, Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/27352.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Croxton, Keely L., Bernard Gendron, and Thomas L. Magnanti. "A Comparison of Mixed-Integer Programming Models for Non-Convex Piecewise Linear Cost Minimization Problems." Massachusetts Institute of Technology, Operations Research Center, 2002. http://hdl.handle.net/1721.1/5233.

Full text
Abstract:
We study a generic minimization problem with separable non-convex piecewise linear costs, showing that the linear programming (LP) relaxation of three textbook mixed integer programming formulations each approximates the cost function by its lower convex envelope. We also show a relationship between this result and classical Lagrangian duality theory.
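Since the result revolves around the lower convex envelope, a small sketch of how that envelope can be read off the breakpoints of a separable piecewise linear cost may help; the breakpoint representation is an assumption made for illustration.

    def lower_convex_envelope(points):
        # points: breakpoints (x, cost) of a piecewise linear function,
        # sorted by x. Keep only the breakpoints on the lower convex hull,
        # i.e. the function that the LP relaxations end up modeling.
        hull = []
        for x, y in points:
            while len(hull) >= 2:
                (x1, y1), (x2, y2) = hull[-2], hull[-1]
                if (x2 - x1) * (y - y1) - (x - x1) * (y2 - y1) < 0:
                    hull.pop()       # the middle breakpoint lies above the chord
                else:
                    break
            hull.append((x, y))
        return hull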
APA, Harvard, Vancouver, ISO, and other styles
7

He, Niao. "Saddle point techniques in convex composite and error-in-measurement optimization." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/54400.

Full text
Abstract:
This dissertation aims to develop efficient algorithms with improved scalability and stability properties for large-scale optimization and optimization under uncertainty, and to bridge some of the gaps between modern optimization theories and recent applications emerging in the Big Data environment. To this end, the dissertation is dedicated to two important subjects -- i) Large-scale Convex Composite Optimization and ii) Error-in-Measurement Optimization. In spite of the different natures of these two topics, the common denominator, to be presented, lies in their accommodation for systematic use of saddle point techniques for mathematical modeling and numerical processing. The main body can be split into three parts. In the first part, we consider a broad class of variational inequalities with composite structures, allowing us to cover the saddle point/variational analogies of classical convex composite minimization (i.e. the sum of a smooth convex function and a simple nonsmooth convex function). We develop novel composite versions of the state-of-the-art Mirror Descent and Mirror Prox algorithms aimed at solving this type of problem. We demonstrate that the algorithms inherit the favorable efficiency estimates of their prototypes when solving structured variational inequalities. Moreover, we develop several variants of the composite Mirror Prox algorithm along with their corresponding complexity bounds, allowing the algorithm to handle the case of imprecise prox mapping as well as the case when the operator is represented by an unbiased stochastic oracle. In the second part, we investigate four general types of large-scale convex composite optimization problems, including (a) multi-term composite minimization, (b) linearly constrained composite minimization, (c) norm-regularized nonsmooth minimization, and (d) maximum likelihood Poisson imaging. We demonstrate that the composite Mirror Prox, when integrated with saddle point techniques and other algorithmic tools, can solve all these optimization problems with the best rates of convergence known so far. Our main related contributions are as follows. Firstly, regarding problems of type (a), we develop an optimal algorithm by integrating the composite Mirror Prox with a saddle point reformulation based on exact penalty. Secondly, regarding problems of type (b), we develop a novel algorithm reducing the problem to solving a "small series" of saddle point subproblems and achieving an optimal, up to log factors, complexity bound. Thirdly, regarding problems of type (c), we develop a Semi-Proximal Mirror-Prox algorithm by leveraging the saddle point representation and linear minimization over the problem's domain, and attain optimality both in the number of calls to the first-order oracle representing the objective and in the number of calls to the linear minimization oracle representing the problem's domain. Lastly, regarding problems of type (d), we show that the composite Mirror Prox, when applied to the saddle point reformulation, circumvents the difficulty with the non-Lipschitz continuity of the objective and exhibits a better convergence rate than the typical rate for nonsmooth optimization. We conduct extensive numerical experiments and illustrate the practical potential of our algorithms in a wide spectrum of applications in machine learning and image processing.
In the third part, we examine error-in-measurement optimization, referring to decision-making problems with data subject to measurement errors; such problems arise naturally in a number of important applications, such as privacy learning, signal processing, and portfolio selection. Due to the postulated observation scheme and the specific structure of the problem, straightforward application of standard stochastic optimization techniques such as Stochastic Approximation (SA) and Sample Average Approximation (SAA) is out of the question. Our goal is to develop computationally efficient and, hopefully, not too conservative data-driven techniques applicable to a broad scope of problems and allowing for theoretical performance guarantees. We present two such approaches -- one depending on a fully algorithmic calculus of saddle point representations of convex-concave functions and the other depending on a general approximation scheme of convex stochastic programming. Both approaches allow us to convert the problem of interest to a form amenable to SA or SAA. The latter developments are primarily focused on two important applications -- affine signal processing and indirect support vector machines.
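For readers unfamiliar with the workhorse behind these results, the following is a minimal Euclidean instance of the Mirror Prox (extragradient) scheme; the bilinear saddle-point problem, the ball constraints and the step size are assumptions made for the sketch.

    import numpy as np

    def extragradient(A, b, steps=2000):
        # min_x max_y <Ax - b, y> over Euclidean unit balls, a saddle-point
        # reformulation of minimizing ||Ax - b||_2 over the ball: an
        # extrapolation step, then a correction step at the midpoint.
        proj = lambda v: v / max(1.0, np.linalg.norm(v))
        eta = 1.0 / np.linalg.norm(A, 2)     # step ~ 1/L of the monotone operator
        x, y = np.zeros(A.shape[1]), np.zeros(A.shape[0])
        for _ in range(steps):
            xh = proj(x - eta * (A.T @ y))            # extrapolation step
            yh = proj(y + eta * (A @ x - b))
            x = proj(x - eta * (A.T @ yh))            # correction step
            y = proj(y + eta * (A @ xh - b))
        return x, y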
APA, Harvard, Vancouver, ISO, and other styles
8

Ghebremariam, Samuel. "Energy Production Cost and PAR Minimization in Multi-Source Power Networks." University of Akron / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=akron1336517757.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Sinha, Arunesh. "Audit Games." Research Showcase @ CMU, 2014. http://repository.cmu.edu/dissertations/487.

Full text
Abstract:
Modern organizations (e.g., hospitals, banks, social networks, search engines) hold large volumes of personal information, and rely heavily on auditing for enforcement of privacy policies. These audit mechanisms combine automated methods with human input to detect and punish violators. Since human audit resources are limited, and often not sufficient to investigate all potential violations, current state-of-the-art audit tools provide heuristics to guide human effort. However, numerous reports of privacy breaches caused by malicious insiders call into question the effectiveness of these audit mechanisms. Our thesis is that effective audit resource allocation and punishment levels can be efficiently computed by modeling the audit process as a game between a rational auditor and a rational or worst-case auditee. We present several results in support of the thesis. In the worst-case adversary setting, we design a game model taking into account the organizational cost of auditing and the loss from violations. We propose the notion of low regret as a desired audit property and provide a regret-minimizing audit algorithm that outputs an optimal audit resource allocation strategy. The algorithm improves upon prior regret bounds in the partial information setting. In the rational adversary setting, we enable punishments by the auditor, and model the adversary's utility as a trade-off between the benefit from violations and the loss due to punishment when detected. Our Stackelberg game model generalizes an existing deployed security game model with punishment parameters. It applies to natural auditing settings with multiple auditors where each auditor is restricted to auditing a subset of the potential violations. We provide novel polynomial-time algorithms to approximate the non-convex optimization problem used to compute the Stackelberg equilibrium. The algorithms output an optimal audit resource allocation strategy and punishment levels. We also provide a method to reduce the optimization problem size, achieving up to 5x speedup for realistic instances of the audit problem, and for the related security game instances.
APA, Harvard, Vancouver, ISO, and other styles
10

Caillaud, Corentin. "Asymptotical estimates for some algorithms for data and image processing : a study of the Sinkhorn algorithm and a numerical analysis of total variation minimization." Thesis, Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAX023.

Full text
Abstract:
This thesis deals with discrete convex optimization problems and investigates estimates of their convergence rates. It is divided into two independent parts. The first part addresses the convergence rate of the Sinkhorn algorithm and of some of its variants. This algorithm appears in the context of Optimal Transportation (OT) through entropic regularization. Its iterations, and the ones of the Sinkhorn-like variants, are written as componentwise products of nonnegative vectors and matrices. We propose a new approach to analyze them, based on simple convex inequalities and leading to the linear convergence rate that is observed in practice. We extend this result to a particular type of variants of the algorithm that we call 1D balanced Sinkhorn-like algorithms. In addition, we present some numerical techniques dealing with the convergence towards zero of the regularizing parameter of the OT problems. Lastly, we conduct the complete analysis of the convergence rate in dimension 2. In the second part, we establish error estimates for two discretizations of the total variation (TV) in the Rudin-Osher-Fatemi (ROF) model. This image denoising problem, which is solved by computing the proximal operator of the total variation, enjoys isotropy properties ensuring the preservation of sharp discontinuities in the denoised images in every direction. When the problem is discretized on a square mesh of size h and one uses a standard discrete total variation -- the so-called isotropic TV -- this property is lost. We show that in a particular direction the error in the energy is of order h^{2/3}, which is relatively large with respect to what one can expect from better discretizations. Our proof relies on the analysis of an equivalent 1D denoising problem and of the perturbed TV it involves. The second discrete total variation we consider mimics the definition of the continuous total variation, replacing the usual dual fields by discrete Raviart-Thomas fields. Doing so, we recover an isotropic behavior of the discrete ROF model. Finally, we prove an O(h) error estimate for this variant under standard hypotheses.
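For context, the iteration whose convergence rate is analyzed here takes only a few lines; this sketch assumes an entropic regularization eps and a fixed iteration count in place of the stopping rules a careful implementation would use.

    import numpy as np

    def sinkhorn(C, a, b, eps=0.05, iters=500):
        # Entropy-regularized optimal transport between histograms a and b
        # with cost matrix C: alternate componentwise scalings of
        # K = exp(-C/eps) so that the plan matches both marginals.
        K = np.exp(-C / eps)
        u = np.ones_like(a)
        for _ in range(iters):
            v = b / (K.T @ u)     # match the column marginal b
            u = a / (K @ v)       # match the row marginal a
        return u[:, None] * K * v[None, :]   # the transport plan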
APA, Harvard, Vancouver, ISO, and other styles
11

Repetti, Audrey. "Algorithmes d'optimisation en grande dimension : applications à la résolution de problèmes inverses." Thesis, Paris Est, 2015. http://www.theses.fr/2015PESC1032/document.

Full text
Abstract:
An efficient approach for solving an inverse problem is to define the recovered signal/image as a minimizer of a penalized criterion, which is often split into a sum of simpler functions composed with linear operators. In the situations of practical interest, these functions may be neither convex nor smooth. In addition, large-scale optimization problems often have to be faced. This thesis is devoted to the design of new methods to solve such difficult minimization problems, while paying attention to computational issues and theoretical convergence properties. A first idea to build fast minimization algorithms is to make use of a preconditioning strategy by adapting, at each iteration, the underlying metric. We incorporate this technique into the forward-backward algorithm and provide an automatic method for choosing the preconditioning matrices, based on a majorization-minimization principle. The convergence proofs rely on the Kurdyka-Łojasiewicz inequality. A second strategy consists of splitting the involved data into different blocks of reduced dimension. This approach allows us to control the number of operations performed at each iteration of the algorithms, as well as the required memory. For this purpose, block alternating methods are developed in the context of both non-convex and convex optimization problems. In the non-convex case, a block alternating version of the preconditioned forward-backward algorithm is proposed, where the blocks are updated according to an acyclic deterministic rule. When additional convexity assumptions can be made, various alternating proximal primal-dual algorithms are obtained by using an arbitrary random sweeping rule. The theoretical analysis of these stochastic convex optimization algorithms is grounded in the theory of monotone operators. A key ingredient in the solution of high-dimensional optimization problems lies in the possibility of performing some of the computation steps in a parallel manner. This parallelization is made possible in the proposed block alternating primal-dual methods, where the primal variables, as well as the dual ones, can be updated in a quite flexible way. As an offspring of these results, new distributed algorithms are derived, where the computations are spread over a set of agents connected through a general hypergraph topology. Finally, our methodological contributions are validated on a number of applications in signal and image processing. First, we focus on optimization problems involving non-convex criteria, in particular image restoration when the original image is corrupted with a signal-dependent Gaussian noise, spectral unmixing, phase reconstruction in tomography, and blind deconvolution in seismic sparse signal reconstruction. Then, we address convex minimization problems arising in the context of 3D mesh denoising and in query optimization for database management.
APA, Harvard, Vancouver, ISO, and other styles
12

Camargo, Fernando Taietti. "Estudo comparativo de passos espectrais e buscas lineares não monótonas." Universidade de São Paulo, 2008. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-16062008-211538/.

Full text
Abstract:
The Spectral Gradient method, introduced by Barzilai and Borwein and analyzed by Raydan for unconstrained minimization, is a simple method whose performance is comparable to traditional methods such as conjugate gradients. Since the introduction of the method, as well as its extension to minimization over convex sets, various combinations of different spectral steplengths and different nonmonotone line searches have been introduced. From the numerical results presented in many studies it is not possible to infer whether there are significant differences in the performance of the various methods. It is also unclear whether the nonmonotone line searches are relevant as a tool in themselves or whether, in fact, they are useful only to allow the method to be as similar as possible to the original method of Barzilai and Borwein. The objective of this study is to compare the different methods recently introduced as combinations of different nonmonotone line searches and different spectral steplengths in order to find the best combination and, from there, to evaluate the numerical performance of the method.
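A hedged sketch of one such combination, the BB1 spectral step with the nonmonotone line search of Grippo, Lampariello and Lucidi, is given below for unconstrained minimization; the memory length, tolerances and safeguards are illustrative choices.

    import numpy as np

    def spectral_gradient(f, grad, x0, M=10, steps=200):
        # Barzilai-Borwein step with a nonmonotone (GLL) line search: a
        # trial point is accepted when f falls below the maximum of the
        # last M function values rather than below the current value.
        x, g, alpha, fvals = x0, grad(x0), 1.0, [f(x0)]
        for _ in range(steps):
            t, fmax = 1.0, max(fvals[-M:])
            while (t > 1e-12 and
                   f(x - t * alpha * g) > fmax - 1e-4 * t * alpha * (g @ g)):
                t *= 0.5                         # backtrack
            x_new = x - t * alpha * g
            g_new = grad(x_new)
            s, y = x_new - x, g_new - g
            alpha = (s @ s) / (s @ y) if s @ y > 1e-12 else 1.0  # BB1 step
            x, g = x_new, g_new
            fvals.append(f(x))
        return x

    # Example: x = spectral_gradient(lambda v: v @ v, lambda v: 2 * v, np.ones(5))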
APA, Harvard, Vancouver, ISO, and other styles
13

Couprie, Camille. "Graph-based variational optimization and applications in computer vision." Phd thesis, Université Paris-Est, 2011. http://tel.archives-ouvertes.fr/tel-00666878.

Full text
Abstract:
Many computer vision applications such as image filtering, segmentation and stereovision can be formulated as optimization problems. Recently, discrete, convex, globally optimal methods have received a lot of attention. Many graph-based methods suffer from metrication artefacts: segmented contours are blocky in areas where contour information is lacking. In the first part of this work, we develop a discrete yet isotropic energy minimization formulation for the continuous maximum flow problem that prevents metrication errors. This new convex formulation leads us to a provably globally optimal solution. The employed interior point method can optimize the problem faster than the existing continuous methods. The energy formulation is then adapted and extended to multi-label problems, and shows improvements over existing methods. Fast parallel proximal optimization tools have been tested and adapted for the optimization of this problem. In the second part of this work, we introduce a framework that generalizes several state-of-the-art graph-based segmentation algorithms, namely graph cuts, random walker, shortest paths, and watershed. This generalization allowed us to exhibit a new case, for which we developed a globally optimal optimization method, named "power watershed". Our proposed power watershed algorithm computes a unique global solution to multi-labeling problems, and is very fast. We further generalize and extend the framework to applications beyond image segmentation, for example image filtering optimizing an L0 norm energy, stereovision, and fast and smooth surface reconstruction from a noisy cloud of 3D points.
APA, Harvard, Vancouver, ISO, and other styles
14

Bot, Radu Ioan, Ernö Robert Csetnek, and Erika Nagy. "Solving systems of monotone inclusions via primal-dual splitting techniques." Universitätsbibliothek Chemnitz, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-108172.

Full text
Abstract:
In this paper we propose an algorithm for solving systems of coupled monotone inclusions in Hilbert spaces. The operators arising in each of the inclusions of the system are processed in each iteration separately: the single-valued ones are evaluated explicitly (forward steps), while the set-valued ones are processed via their resolvents (backward steps). In addition, most of the steps in the iterative scheme can be executed simultaneously, making the method applicable to a variety of convex minimization problems. The numerical performance of the proposed splitting algorithm is emphasized through applications in average consensus on colored networks and image classification via support vector machines.
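The forward/backward vocabulary can be illustrated on the simplest inclusion 0 ∈ A(x) + B(x) with B single-valued; the sketch below is a plain forward-backward iteration under assumed cocoercivity of B, not the paper's primal-dual splitting for coupled systems.

    import numpy as np

    def forward_backward(B, resolvent_A, x0, gamma=0.5, steps=200):
        # Forward (explicit) evaluation of the single-valued operator B,
        # then the resolvent of the set-valued A as the backward step.
        x = x0
        for _ in range(steps):
            x = resolvent_A(x - gamma * B(x), gamma)
        return x

    # Example: 0 in d|x| + (x - 5), whose solution is x = 4; the resolvent
    # of the subdifferential of |.| is soft thresholding.
    soft = lambda v, g: np.sign(v) * np.maximum(np.abs(v) - g, 0.0)
    x_star = forward_backward(lambda x: x - 5.0, soft, np.zeros(1))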
APA, Harvard, Vancouver, ISO, and other styles
15

Santos, Luiz Gustavo de Moura dos. "Métodos de busca em coordenada." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-04012018-162035/.

Full text
Abstract:
Real-world problems in areas such as machine learning are known for their huge numbers of decision variables (> 10^6) and data volume. For such problems, working with second-order derivatives is prohibitive. These problems have properties that benefit the application of coordinate descent/minimization methods. This kind of method is characterized by the change of a single, or a small number of, decision variables at each iteration. In the literature, the commonly found description of this type of method is based on the cyclic change of variables. Recent papers have shown that randomized versions of the method have better convergence properties. These versions are based on changing a single variable chosen randomly at each iteration, according to a fixed, but not necessarily uniform, distribution. In this work we present some theoretical aspects of such methods, but we focus on the practical aspects of implementation and on the experimental comparison of variants of coordinate descent applied to different problems with real applications.
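A minimal randomized variant of the kind compared in this work might look as follows for least squares; the uniform sampling law and the exact coordinate-wise minimization are assumptions of the sketch.

    import numpy as np

    def random_coordinate_descent(A, b, steps=5000, seed=0):
        # min 0.5*||Ax - b||^2: at each iteration one randomly drawn
        # coordinate is updated by exact minimization along that axis,
        # while the residual r = Ax - b is maintained incrementally.
        rng = np.random.default_rng(seed)
        col_sq = (A ** 2).sum(axis=0)
        x, r = np.zeros(A.shape[1]), -b.astype(float)
        for _ in range(steps):
            j = rng.integers(A.shape[1])         # uniform; other laws are possible
            delta = -(A[:, j] @ r) / col_sq[j]   # exact one-dimensional minimization
            x[j] += delta
            r += delta * A[:, j]
        return x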
APA, Harvard, Vancouver, ISO, and other styles
16

Cruz, Cavalcanti Yanna. "Factor analysis of dynamic PET images." Thesis, Toulouse, INPT, 2018. http://www.theses.fr/2018INPT0078/document.

Full text
Abstract:
Thanks to its ability to evaluate metabolic functions in tissues from the temporal evolution of a previously injected radiotracer, dynamic positron emission tomography (PET) has become a ubiquitous analysis tool to quantify biological processes. Several quantification techniques from the PET imaging literature require a previous estimation of global time-activity curves (TACs) (herein called "factors") representing the concentration of tracer in a reference tissue or blood over time. To this end, factor analysis has often appeared as an unsupervised learning solution for the extraction of factors and their respective fractions in each voxel. Inspired by the hyperspectral unmixing literature, this manuscript addresses two main drawbacks of general factor analysis techniques applied to dynamic PET. The first one is the assumption that the elementary response of each tissue to tracer distribution is spatially homogeneous. Even though this homogeneity assumption has proven its effectiveness in several factor analysis studies, it may not always provide a sufficient description of the underlying data, in particular when abnormalities are present. To tackle this limitation, the models herein proposed introduce an additional degree of freedom to the factors related to specific binding. To this end, a spatially-variant perturbation affects a nominal and common TAC representative of the high-uptake tissue. This variation is spatially indexed and constrained with a dictionary that is either previously learned or explicitly modelled with convolutional nonlinearities affecting non-specific binding tissues. The second drawback is related to the noise distribution in PET images. Even though the positron decay process can be described by a Poisson distribution, the actual noise in reconstructed PET images is not expected to be simply described by Poisson or Gaussian distributions. Therefore, we propose to consider a popular and quite general loss function, called the $\beta$-divergence, that is able to generalize conventional loss functions such as the least-squares distance and the Kullback-Leibler and Itakura-Saito divergences, respectively corresponding to Gaussian, Poisson and Gamma distributions. This loss function is applied to three factor analysis models in order to evaluate its impact on dynamic PET images with different reconstruction characteristics.
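For reference, the loss in question has a simple closed form; the sketch below reproduces the standard definition with its classical limit cases, independently of the factor-analysis models it is plugged into (positive inputs are assumed).

    import numpy as np

    def beta_divergence(x, y, beta):
        # Elementwise d_beta(x|y), summed: beta = 0 gives Itakura-Saito,
        # beta = 1 gives Kullback-Leibler, and beta = 2 gives half the
        # squared Euclidean distance.
        x, y = np.asarray(x, float), np.asarray(y, float)
        if beta == 0:
            return np.sum(x / y - np.log(x / y) - 1.0)
        if beta == 1:
            return np.sum(x * np.log(x / y) - x + y)
        return np.sum((x ** beta + (beta - 1) * y ** beta
                       - beta * x * y ** (beta - 1)) / (beta * (beta - 1)))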
APA, Harvard, Vancouver, ISO, and other styles
17

Vigano, Nicola Roberto. "Full-field X-ray orientation imaging using convex optimization and a discrete representation of six-dimensional position - orientation space." Thesis, Lyon, INSA, 2015. http://www.theses.fr/2015ISAL0095/document.

Full text
Abstract:
This Ph.D. thesis is about the development and formalization of a six-dimensional tomography method for the reconstruction of local orientation in poly-crystalline materials. This method is based on a technique known as diffraction contrast tomography (DCT), mainly used in synchrotrons, with a monochromatic and parallel high-energy X-ray beam. DCT has existed for over a decade now, but it was always employed to analyze undeformed or nearly undeformed materials, described by "grains" with a certain average orientation. Because an orientation can be parametrized by the use of only three numbers, the local orientation in the grains is modelled by a six-dimensional space X6 = R3 ⊗ O3, that is, the outer product between a three-dimensional real-space and another three-dimensional orientation-space. This means that for each point of the real-space, there could be a full three-dimensional orientation-space, which however in practice is restricted to a smaller region of interest called "local orientation-space". The reconstruction problem is then formulated as a global minimisation problem, where the reconstruction of a single grain is the solution that minimizes a functional. There can be different choices for the functionals to use, and they depend on the type of reconstructions one is looking for, and on the type of a priori knowledge available. All the functionals used include a data fidelity term which ensures that the reconstruction is consistent with the measured diffraction data, and then an additional regularization term is added, like the l1-norm minimization of the solution vector, which tries to limit the number of orientations per real-space voxel, or a Total Variation operator over the sum of the orientation part of the six-dimensional voxels, in order to enforce the homogeneity of the grain volume. When first published, the results on synthetic data from the third chapter highlighted some key features of the proposed framework, and showed that it was in principle possible to extend DCT to the reconstruction of moderately deformed materials, but it was unclear whether it could work in practice. The following chapters instead confirm that the proposed framework is viable for reconstructing moderately deformed materials, and that in conjunction with other techniques, it could also overcome the limitations imposed by the grain indexing, and be applied to more challenging textured materials.
APA, Harvard, Vancouver, ISO, and other styles
18

Balmand, Samuel. "Quelques contributions à l'estimation de grandes matrices de précision." Thesis, Paris Est, 2016. http://www.theses.fr/2016PESC1024/document.

Full text
Abstract:
Under the Gaussian assumption, the relationship between conditional independence and sparsity allows one to justify the construction of estimators of the inverse of the covariance matrix -- also called the precision matrix -- from regularized approaches. This thesis, originally motivated by the problem of image classification, aims at developing a method to estimate the precision matrix in high dimension, that is, when the sample size $n$ is small compared to the dimension $p$ of the model. Our approach relies basically on the connection of the precision matrix to the linear regression model. It consists of estimating the precision matrix in two steps. The off-diagonal elements are first estimated by solving $p$ minimization problems of the $\ell_1$-penalized square-root least-squares type. The diagonal entries are then obtained from the result of the previous step, by residual analysis or likelihood maximization. These various estimators of the diagonal entries are compared in terms of estimation risk. Moreover, we propose a new estimator, designed to take into account the possible contamination of the data by outliers, thanks to the addition of an $\ell_2/\ell_1$ mixed-norm regularization term. The nonasymptotic analysis of the consistency of our estimator points out the relevance of our method.
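The two-step recipe can be sketched with an ordinary lasso standing in for the ℓ1-penalized square-root least-squares of the thesis (which scikit-learn does not provide), so the following is only a structural illustration of the nodewise-regression idea.

    import numpy as np
    from sklearn.linear_model import Lasso

    def precision_by_nodewise_regression(X, lam=0.1):
        # Step 1: regress each variable on all the others with an l1 penalty.
        # Step 2: recover the diagonal from the residual variances and the
        # off-diagonal entries from the regression coefficients.
        n, p = X.shape
        Theta = np.zeros((p, p))
        for j in range(p):
            others = np.delete(np.arange(p), j)
            fit = Lasso(alpha=lam, fit_intercept=False).fit(X[:, others], X[:, j])
            resid = X[:, j] - X[:, others] @ fit.coef_
            Theta[j, j] = n / (resid @ resid)            # 1 / residual variance
            Theta[j, others] = -Theta[j, j] * fit.coef_  # off-diagonal row
        return Theta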
APA, Harvard, Vancouver, ISO, and other styles
19

Popov, Petr. "Nouvelles méthodes de calcul pour la prédiction des interactions protéine-protéine au niveau structural." Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GRENM005/document.

Full text
Abstract:
Molecular docking is a method that predicts the orientation of one molecule with respect to another one when forming a complex. The first computational method of molecular docking was applied to find new candidates against HIV-1 protease in 1990. Since then, the use of docking pipelines has become a standard practice in drug discovery. Typically, a docking protocol comprises different phases. Exhaustive sampling of the binding site upon rigid-body approximation of the docking subunits is required. Clustering algorithms are used to group similar binding candidates. Refinement methods are applied to take into account the flexibility of the molecular complex and to eliminate possible docking artefacts. Finally, scoring algorithms are employed to select the best binding candidates. The current thesis presents novel algorithms for docking protocols that facilitate structure prediction of protein complexes, which belong to one of the most important target classes in structure-based drug design. First, DockTrina, a new algorithm to predict conformations of triangular protein trimers (i.e. trimers with pair-wise contacts between all three pairs of proteins), is presented. The method takes as input pair-wise contact predictions from a rigid-body docking program. It then scans and scores all possible combinations of pairs of monomers using a very fast root mean square deviation (RMSD) test. Being fast and efficient, DockTrina outperforms state-of-the-art computational methods dedicated to predicting the structure of protein oligomers on the collected benchmark of protein trimers. Second, RigidRMSD, a C++ library that computes, in constant time, RMSDs between molecular poses corresponding to rigid-body transformations, is presented. The library is practically useful for clustering docking poses, resulting in a ten-times speed-up compared to standard RMSD-based clustering algorithms. Third, KSENIA, a novel knowledge-based scoring function for protein-protein interactions, is developed. The problem of scoring function reconstruction is formulated and solved as a convex optimization problem. As a result, KSENIA is a smooth function and, thus, is suitable for the gradient-based refinement of molecular structures. Remarkably, it is shown that native interfaces of protein complexes provide sufficient information to reconstruct a well-discriminative scoring function. Fourth, CARBON, a new algorithm for the rigid-body refinement of docking candidates, is proposed. The rigid-body optimization problem is viewed as the calculation of quasi-static trajectories of rigid bodies influenced by the energy function. To circumvent the typical problem of incorrect step sizes for rotation and translation movements of molecular complexes, the concept of controlled advancement is introduced. CARBON works well both in combination with a classical force field and with a knowledge-based scoring function. CARBON is also suitable for the refinement of molecular complexes with moderate and large steric clashes between their subunits. Finally, a novel method to evaluate the prediction capability of scoring functions is introduced. It allows one to rigorously assess the performance of the scoring function of interest on benchmarks of molecular complexes. The method manipulates the score distributions rather than the scores of particular conformations, which makes it advantageous compared to the standard hit-rate criteria. The methods described in the thesis are tested and validated on various protein-protein benchmarks.
The implemented algorithms have been successfully used in the CAPRI contest for the structure prediction of protein-protein complexes. The developed methodology can easily be adapted to the recognition of other types of molecular interactions, involving ligands, polysaccharides, RNAs, etc. The C++ implementations of the presented algorithms will be made available as SAMSON Elements for the SAMSON software platform at http://www.samson-connect.net or at http://nano-d.inrialpes.fr/software
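As an illustration of the constant-time RMSD computation mentioned in the abstract, here is a minimal Python sketch (not the actual RigidRMSD C++ library; the class name and interface are invented for illustration). For a fixed point set, the centroid and the second-moment matrix are precomputed once, after which the RMSD between any two rigid-body poses follows in O(1) from the rotation matrices and translation vectors alone.

```python
import numpy as np

class ConstantTimeRMSD:
    """Constant-time RMSD between rigid-body poses of one reference point set.

    Precomputes the centroid and the second-moment matrix in O(N);
    every subsequent RMSD query costs O(1), independent of N.
    """

    def __init__(self, coords):
        X = np.asarray(coords, dtype=float)   # (N, 3) reference positions
        self.n = len(X)
        self.mean = X.mean(axis=0)            # centroid
        self.C = X.T @ X                      # 3x3 second-moment matrix

    def rmsd(self, R1, t1, R2, t2):
        """RMSD between the poses (R1, t1) and (R2, t2) of the reference set."""
        A = np.asarray(R1) - np.asarray(R2)   # difference of the two rotations
        b = np.asarray(t1) - np.asarray(t2)   # difference of the two translations
        # sum_i ||A x_i + b||^2, expanded via the precomputed moments:
        s = (np.trace(A.T @ A @ self.C)
             + 2.0 * self.n * b @ (A @ self.mean)
             + self.n * b @ b)
        return np.sqrt(max(s, 0.0) / self.n)
```

A brute-force check, np.sqrt(np.mean(np.sum((X @ R1.T + t1 - X @ R2.T - t2)**2, axis=1))), agrees with the constant-time value for any pair of poses.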
APA, Harvard, Vancouver, ISO, and other styles
20

Hägglund, Andreas, and Moa Källgren. "Impact of Engine Dynamics on Optimal Energy Management Strategies for Hybrid Electric Vehicles." Thesis, Linköpings universitet, Fordonssystem, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-148890.

Full text
Abstract:
In recent years, rules and regulations regarding the fuel consumption of vehicles and the amount of emissions they produce have become stricter. This has led the automotive industry to develop more advanced solutions for propelling vehicles in order to meet the legal requirements. The hybrid electric vehicle is one such solution and is becoming increasingly popular in the automotive industry. It consists of an electric driveline combined with a conventional powertrain, propelled by either a diesel or a petrol engine. Having two power sources creates the possibility of choosing when and how each of them is used to propel the vehicle. The strategy that decides this is referred to as an energy management strategy. Today, most energy management strategies only try to reduce fuel consumption, using models that describe the steady-state behaviour of the engine. In other words, no reduction of emissions is achieved, and all transient behaviour is considered negligible. In this thesis, an energy management strategy incorporating engine dynamics to reduce both fuel consumption and nitrogen oxide emissions has been designed. First, models that describe how fuel consumption and nitrogen oxide emissions behave during transient engine operation are developed. Then, an energy management strategy is developed, consisting of a model predictive controller that combines the equivalent consumption minimization strategy with convex optimization. The results indicate that by considering engine dynamics in the energy management strategy, both fuel consumption and nitrogen oxide emissions can be reduced. Furthermore, it is shown that the major reduction in fuel consumption and nitrogen oxide emissions is achieved already for short prediction horizons.
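Since the abstract combines the equivalent consumption minimization strategy (ECMS) with convex optimization inside a model predictive controller, a single-step toy ECMS split may help fix ideas. This is a hedged Python sketch with made-up engine numbers (an affine, Willans-type fuel model); the thesis's controller additionally models transient engine dynamics over a prediction horizon.

```python
import numpy as np

def ecms_step(P_dem, s=2.5, P_batt_max=20e3, grid=201):
    """One ECMS decision: split the demanded power between engine and battery
    by minimizing fuel power plus s times battery power (equivalent consumption).

    The affine fuel model below is a made-up Willans-type approximation;
    a real controller would use measured engine maps and battery dynamics.
    """
    P_batt = np.linspace(-P_batt_max, P_batt_max, grid)  # candidate battery powers [W]
    P_eng = np.clip(P_dem - P_batt, 0.0, None)           # engine covers the remainder
    fuel_power = 2.8 * P_eng + 5e3 * (P_eng > 0)         # hypothetical fuel-power model
    cost = fuel_power + s * P_batt                       # equivalent consumption
    i = int(np.argmin(cost))
    return P_batt[i], P_eng[i]

print(ecms_step(30e3))  # e.g. 30 kW demanded at the wheels
```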
APA, Harvard, Vancouver, ISO, and other styles
21

Asif, Muhammad Salman. "Primal dual pursuit: a homotopy based algorithm for the Dantzig selector." Thesis, Atlanta, Ga.: Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/24693.

Full text
Abstract:
Thesis (M. S.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2009.
Committee Chair: Romberg, Justin; Committee Member: McClellan, James; Committee Member: Mersereau, Russell
APA, Harvard, Vancouver, ISO, and other styles
22

Pennanen, H. (Harri). "Coordinated beamforming in cellular and cognitive radio networks." Doctoral thesis, Oulun yliopisto, 2015. http://urn.fi/urn:isbn:9789526208978.

Full text
Abstract:
This thesis focuses on the design of coordinated downlink beamforming techniques for wireless multi-cell multi-user multi-antenna systems. In particular, cellular and cognitive radio networks are considered. In general, coordinated beamforming schemes aim to improve system performance, especially at the cell-edge area, by controlling inter-cell interference. In this work, special emphasis is put on practical coordinated beamforming designs that can be implemented in a decentralized manner by relying on local channel state information (CSI) and low-rate backhaul signaling. The network design objective is the sum power minimization (SPMin) of base stations (BSs) while providing the guaranteed minimum rate for each user. Decentralized coordinated beamforming techniques are developed for cellular multi-user multiple-input single-output (MISO) systems. The proposed iterative algorithms are based on classical primal and dual decomposition methods. The SPMin problem is decomposed into two optimization levels, i.e., BS-specific subproblems for the beamforming design and a network-wide master problem for the inter-cell interference coordination. After the acquisition of local CSI, each BS can independently compute its transmit beamformers by solving the subproblem via standard convex optimization techniques. Interference coordination is managed by solving the master problem via a traditional subgradient method that requires scalar information exchange between the BSs. The algorithms make it possible to satisfy the user-specific rate constraints at every iteration. Hence, delay and signaling overhead can be reduced by limiting the number of performed iterations. In this respect, the proposed algorithms are applicable to practical implementations, unlike most of the existing decentralized approaches. The numerical results demonstrate that the algorithms provide significant performance gains over zero-forcing beamforming strategies. Coordinated beamforming is also studied in cellular multi-user multiple-input multiple-output (MIMO) systems. The corresponding non-convex SPMin problem is divided into transmit and receive beamforming optimization steps that are alternately solved via a successive convex approximation method and the linear minimum mean square error criterion, respectively, until the desired level of convergence is attained. In addition to the centralized design, two decentralized primal decomposition-based algorithms are proposed, wherein the transmit and receive beamforming designs are facilitated by a combination of pilot and backhaul signaling. The results show that the proposed MIMO algorithms notably outperform the MISO ones. Finally, cellular coordinated beamforming strategies are extended to multi-user MISO cognitive radio systems, where primary and secondary networks share the same spectrum. Here, network optimization is performed for the secondary system with additional interference constraints imposed for the primary users. Decentralized algorithms are proposed based on primal decomposition and an alternating direction method of multipliers.
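The decentralized flavour of such schemes can be illustrated with a much simpler classical relative: the Foschini-Miljanic power-control iteration, in which each transmitter rescales its power using only its own measured SINR and, whenever the targets are feasible, converges to the minimum-power solution. This is a deliberately swapped-in textbook method with hypothetical channel gains, not the primal/dual decomposition machinery of the thesis.

```python
import numpy as np

# Toy two-link model: G[i, j] is the channel gain from transmitter j to user i.
G = np.array([[1.0, 0.3],
              [0.2, 0.8]])          # hypothetical gains
gamma = np.array([2.0, 2.0])        # per-user SINR targets
sigma2 = 0.1                        # receiver noise power

def sinr(p):
    signal = np.diag(G) * p
    interference = G @ p - signal
    return signal / (interference + sigma2)

def foschini_miljanic(p0, iters=100):
    """Each transmitter scales its power by (target SINR) / (measured SINR),
    using only local measurements; converges whenever the targets are feasible."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        p = gamma / sinr(p) * p
    return p

p_star = foschini_miljanic([0.1, 0.1])
print(p_star, sinr(p_star))         # the SINRs approach the targets
```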
APA, Harvard, Vancouver, ISO, and other styles
23

Artina, Marco [Verfasser], Massimo [Akademischer Betreuer] Fornasier, Karl [Akademischer Betreuer] Kunisch, and Antonin [Akademischer Betreuer] Chambolle. "Lagrangian Methods for Constrained Non-Convex Minimizations and Applications in Fracture Mechanics / Marco Artina. Betreuer: Massimo Fornasier. Gutachter: Karl Kunisch ; Antonin Chambolle ; Massimo Fornasier." München : Universitätsbibliothek der TU München, 2015. http://d-nb.info/1088724981/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Jalalzai, Khalid. "Regularization of inverse problems in image processing." Phd thesis, Ecole Polytechnique X, 2012. http://pastel.archives-ouvertes.fr/pastel-00787790.

Full text
Abstract:
Inverse problems consist in recovering data that have been transformed or perturbed. Being ill-posed, they require regularization. In image processing, total variation as a regularization tool has the advantage of preserving discontinuities while creating smooth regions, results established in this thesis in a continuous setting and for general energies. In addition, we propose and study a variant of the total variation. We establish a dual formulation that allows us to prove that this variant coincides with the total variation on sets of finite perimeter. In recent years, non-local methods exploiting the self-similarities in images have been particularly successful. We adapt this approach to the spectrum-completion problem for general inverse problems. The last part is devoted to the algorithmic aspects inherent in the optimization of the convex energies considered. We study the convergence and the complexity of a recent family of so-called primal-dual algorithms.
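As a concrete member of the primal-dual family mentioned at the end of the abstract, here is a hedged numpy sketch of the Chambolle-Pock iteration applied to the classical Rudin-Osher-Fatemi total-variation denoising model, min_u TV(u) + (lam/2)||u - f||^2. The discretization, step sizes, and parameter values are standard textbook choices, not taken from the thesis.

```python
import numpy as np

def grad(u):
    """Forward differences with Neumann boundary (last row/column zero)."""
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    """Discrete divergence, the negative adjoint of grad."""
    dx = np.zeros_like(px)
    dy = np.zeros_like(py)
    dx[0, :] = px[0, :]
    dx[1:-1, :] = px[1:-1, :] - px[:-2, :]
    dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]
    dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]
    dy[:, -1] = -py[:, -2]
    return dx + dy

def tv_denoise(f, lam=8.0, iters=200):
    """Chambolle-Pock primal-dual iteration for min_u TV(u) + lam/2 ||u - f||^2."""
    tau = sigma = 0.99 / np.sqrt(8.0)     # step sizes: tau*sigma*||grad||^2 < 1
    u = f.copy()
    u_bar = f.copy()
    px = np.zeros_like(f)
    py = np.zeros_like(f)
    for _ in range(iters):
        gx, gy = grad(u_bar)
        px, py = px + sigma * gx, py + sigma * gy
        norm = np.maximum(1.0, np.sqrt(px ** 2 + py ** 2))
        px, py = px / norm, py / norm     # project the dual onto the unit ball
        u_old = u
        u = (u + tau * div(px, py) + tau * lam * f) / (1.0 + tau * lam)
        u_bar = 2.0 * u - u_old           # over-relaxation step
    return u

# usage: u = tv_denoise(noisy_image.astype(float))
```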
APA, Harvard, Vancouver, ISO, and other styles
25

Traoré, Abraham. "Contribution à la décomposition de données multimodales avec des applications en apprentisage de dictionnaires et la décomposition de tenseurs de grande taille." Thesis, Normandie, 2019. http://www.theses.fr/2019NORMR068/document.

Full text
Abstract:
In this work, we are interested in special mathematical tools called tensors, which are multidimensional arrays defined on tensor products of vector spaces, each of which has its own coordinate system; the number of spaces involved in this product is generally referred to as the order of the tensor. The interest in these tools stems from empirical work (in a range of applications encompassing both classification and regression) showing the superiority of tensor processing over matrix decomposition techniques. In this thesis, we focus on a specific tensor model named the Tucker decomposition and establish new approaches for several tasks: dictionary learning, online dictionary learning, large-scale processing, and the decomposition of a tensor evolving with respect to each of its modes. New theoretical results on convergence and convergence rate are established, and the efficiency of the different algorithms, which are based either on alternating minimization or on coordinate gradient descent, is demonstrated on real-world problems.
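To make the alternating-minimization idea concrete, below is a hedged numpy sketch of a standard HOOI-style alternating scheme for the Tucker decomposition: an HOSVD initialization followed by cyclic refitting of each factor matrix while the others are held fixed (each update is solved in closed form via an SVD). It is a generic textbook procedure, not the thesis's dictionary-learning or large-scale algorithms.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move the given axis first and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_mult(T, M, mode):
    """Mode-n product T x_n M for a matrix M of shape (r, T.shape[mode])."""
    return np.moveaxis(np.tensordot(M, T, axes=(1, mode)), 0, mode)

def tucker_hooi(T, ranks, iters=20):
    """Alternating minimization (HOOI) for a Tucker decomposition."""
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]            # HOSVD initialization
    for _ in range(iters):
        for m in range(T.ndim):
            Y = T
            for k in range(T.ndim):
                if k != m:
                    Y = mode_mult(Y, factors[k].T, k)   # project the other modes
            factors[m] = np.linalg.svd(unfold(Y, m),
                                       full_matrices=False)[0][:, :ranks[m]]
    core = T
    for k in range(T.ndim):
        core = mode_mult(core, factors[k].T, k)
    return core, factors

# Example: a random 10 x 12 x 14 tensor with multilinear ranks (3, 4, 5).
core, factors = tucker_hooi(np.random.rand(10, 12, 14), (3, 4, 5))
```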
APA, Harvard, Vancouver, ISO, and other styles
26

Nguyen, Hao Thanh. "Greedy Strategies for Convex Minimization." Thesis, 2013. http://hdl.handle.net/1969.1/151372.

Full text
Abstract:
We have investigated two greedy strategies for finding an approximation to the minimum of a convex function E defined on a Hilbert space H. We have proved convergence rates for a modification of the orthogonal matching pursuit and for its weak version under suitable conditions on the objective function E. These conditions involve the behavior of the moduli of smoothness and the modulus of uniform convexity of E.
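A hedged toy sketch of this greedy strategy: for the particular smooth convex choice E(x) = 0.5||Ax - b||^2, the orthogonal-matching-pursuit-style step selects the dictionary element best aligned with the current gradient E'(x) and then re-minimizes E exactly over the span of all selected elements (a least-squares solve for this E). The dictionary, data, and step count are illustrative; the thesis treats general convex E on a Hilbert space.

```python
import numpy as np

def greedy_min(A, b, D, steps=5):
    """OMP-style greedy minimization of E(x) = 0.5 * ||A x - b||^2.

    D holds dictionary elements as columns; at each step we pick the element
    with the largest |<E'(x), d_j>| and minimize E exactly over the span of
    the selected elements (a least-squares problem for this quadratic E).
    """
    x = np.zeros(A.shape[1])
    selected = []
    for _ in range(steps):
        g = A.T @ (A @ x - b)                  # gradient E'(x)
        j = int(np.argmax(np.abs(D.T @ g)))    # greedy selection step
        if j not in selected:
            selected.append(j)
        S = D[:, selected]
        c, *_ = np.linalg.lstsq(A @ S, b, rcond=None)
        x = S @ c                              # minimizer of E over span(S)
    return x, selected

# Tiny example with a random dictionary (illustrative data only).
rng = np.random.default_rng(0)
A, b, D = rng.normal(size=(30, 20)), rng.normal(size=30), rng.normal(size=(20, 40))
x, sel = greedy_min(A, b, D)
```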
APA, Harvard, Vancouver, ISO, and other styles
27

"A decomposition algorithm for convex differentiable minimization." Laboratory for Information and Decision Systems, Massachusetts Institute of Technology], 1989. http://hdl.handle.net/1721.1/3120.

Full text
Abstract:
by Paul Tseng.
Cover title.
Includes bibliographical references.
Partially supported by the U.S. Army Research Office (Center for Intelligent Control Systems), DAAL03-86-K-0171, and by the National Science Foundation, NSF-ECS-8519058.
APA, Harvard, Vancouver, ISO, and other styles
28

"Partial proximal minimization algorithms for convex programming." Massachusetts Institute of Technology, Laboratory for Information and Decision Systems], 1993. http://hdl.handle.net/1721.1/3313.

Full text
Abstract:
by Dimitri P. Bertsekas and Paul Tseng.
Caption title.
Includes bibliographical references (p. 25-27).
Supported by the National Science Foundation (DDM-8903385, CCR-9103804) and by the Army Research Office (ARO DAAL03-92-G-0115).
APA, Harvard, Vancouver, ISO, and other styles
29

Netrapalli, Praneeth Kumar. "Provable alternating minimization for non-convex learning problems." Thesis, 2014. http://hdl.handle.net/2152/25931.

Full text
Abstract:
Alternating minimization (AltMin) is a generic term for a widely popular approach in non-convex learning: often, it is possible to partition the variables into two (or more) sets so that the problem is convex/tractable in one set when the other is held fixed (and vice versa). This allows for alternating between optimally updating one set of variables and then the other. AltMin methods typically do not have associated global consistency guarantees, even though they are empirically observed to perform better than methods (e.g., those based on convex optimization) that do have guarantees. In this thesis, we obtain rigorous performance guarantees for AltMin in three statistical learning settings: low-rank matrix completion, phase retrieval, and learning sparsely-used dictionaries. The overarching theme behind our results consists of two parts: (i) devising new initialization procedures (as opposed to initializing randomly, as is typical), and (ii) establishing exponential local convergence from this initialization. Our work shows that the pursuit of statistical guarantees can yield algorithmic improvements (initialization in our case) that perform better in practice.
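For the matrix-completion setting, the alternating-minimization template can be sketched in a few lines of numpy: a spectral initialization from the zero-filled observations, then alternating row-wise least-squares updates of the two factors over the observed entries. Dimensions and iteration counts are illustrative, and the analyzed variants in this line of work differ in details such as sample splitting across iterations.

```python
import numpy as np

def altmin_complete(M_obs, mask, r, iters=25):
    """Alternating minimization for rank-r matrix completion.

    mask[i, j] = 1 where M_obs[i, j] is observed.  Fixing U, each row of V
    solves a small least-squares problem on the observed entries (a convex
    subproblem), and vice versa.
    """
    m, n = M_obs.shape
    U = np.linalg.svd(mask * M_obs, full_matrices=False)[0][:, :r]  # spectral init
    V = np.zeros((n, r))
    for _ in range(iters):
        for j in range(n):                          # update the rows of V
            rows = mask[:, j].astype(bool)
            if rows.any():
                V[j], *_ = np.linalg.lstsq(U[rows], M_obs[rows, j], rcond=None)
        for i in range(m):                          # update the rows of U
            cols = mask[i, :].astype(bool)
            if cols.any():
                U[i], *_ = np.linalg.lstsq(V[cols], M_obs[i, cols], rcond=None)
    return U @ V.T
```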
APA, Harvard, Vancouver, ISO, and other styles
30

"On the convergence of the coordinate descent method for convex differentiable minimization." Laboratory for Information and Decision Systems, Massachusetts Institute of Technology], 1989. http://hdl.handle.net/1721.1/3164.

Full text
Abstract:
by Zhi-Quan Luo and Paul Tseng.
Cover title.
Includes bibliographical references (p. 31-34).
Research partially supported by the U.S. Army Research Office (Center for Intelligent Control Systems), DAAL03-86-K-0171; by the National Science Foundation, NSF-ECS-8519058; and by the Science and Engineering Research Board of McMaster University.
APA, Harvard, Vancouver, ISO, and other styles
31

"On the linear convergence of descent methods for convex essentially smooth minimization." Massachusetts Institute of Technology, Laboratory for Information and Decision Systems, 1990. http://hdl.handle.net/1721.1/3206.

Full text
Abstract:
by Zhi-Quan Luo, Paul Tseng.
Cover title.
Includes bibliographical references (p. 27-31).
Research supported by the U.S. Army Research Office (DAAL03-86-K-0171), by the National Science Foundation (NSF-DDM-8903385), and by a grant from the Science and Engineering Research Board of McMaster University.
APA, Harvard, Vancouver, ISO, and other styles
32

Ames, Brendan. "Convex relaxation for the planted clique, biclique, and clustering problems." Thesis, 2011. http://hdl.handle.net/10012/6113.

Full text
Abstract:
A clique of a graph G is a set of pairwise adjacent nodes of G. Similarly, a biclique (U, V) of a bipartite graph G is a pair of disjoint, independent vertex sets such that each node in U is adjacent to every node in V in G. We consider the problems of identifying the maximum clique of a graph, known as the maximum clique problem, and identifying the biclique (U, V) of a bipartite graph that maximizes the product |U| · |V|, known as the maximum edge biclique problem. We show that finding a clique or biclique of a given size in a graph is equivalent to finding a rank-one matrix satisfying a particular set of linear constraints. These problems can be formulated as rank minimization problems and relaxed to convex programming by replacing rank with its convex envelope, the nuclear norm. Both problems are NP-hard, yet we show that our relaxation is exact in the case that the input graph contains a large clique or biclique plus additional nodes and edges. For each problem, we provide two analyses of when our relaxation is exact. In the first, the diversionary edges are added deterministically by an adversary. In the second, each potential edge is added to the graph independently at random with fixed probability p. In the random case, our bounds match the earlier bounds of Alon, Krivelevich, and Sudakov, as well as Feige and Krauthgamer for the maximum clique problem. We extend these results and techniques to the k-disjoint-clique problem. The maximum node k-disjoint-clique problem is to find a set of k disjoint cliques of a given input graph containing the maximum number of nodes. Given input graph G and nonnegative edge weights w, the maximum mean weight k-disjoint-clique problem seeks to identify the set of k disjoint cliques of G that maximizes the sum of the average weights of the edges, with respect to w, of the complete subgraphs of G induced by the cliques. These problems may be considered as a way to pose the clustering problem. In clustering, one wants to partition a given data set so that the data items in each partition or cluster are similar and the items in different clusters are dissimilar. For the graph G such that the set of nodes represents a given data set and any two nodes are adjacent if and only if the corresponding items are similar, clustering the data into k disjoint clusters is equivalent to partitioning G into k disjoint cliques. Similarly, given a complete graph with nodes corresponding to a given data set and edge weights indicating similarity between each pair of items, the data may be clustered by solving the maximum mean weight k-disjoint-clique problem. We show that both instances of the k-disjoint-clique problem can be formulated as rank-constrained optimization problems and relaxed to semidefinite programs using the nuclear norm relaxation of rank. We also show that when the input instance corresponds to a collection of k disjoint planted cliques plus additional edges and nodes, this semidefinite relaxation is exact for both problems. We provide theoretical bounds that guarantee exactness of our relaxation and provide empirical examples of successful applications of our algorithm to synthetic data sets, as well as data sets from clustering applications.
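To illustrate the relaxation pattern described above on the planted-clique case, here is a hedged sketch using the cvxpy modeling library: the 0/1 indicator matrix of a k-clique is rank one with entry sum k^2 and is supported on the clique's edges, so one minimizes the nuclear norm subject to those linear constraints. The exact constraint set used in the thesis differs in details; this is a schematic instance, and recovery is only guaranteed under conditions such as those discussed in the abstract.

```python
import cvxpy as cp
import numpy as np

def planted_clique_relax(adj, k):
    """Nuclear-norm relaxation for locating a k-clique (schematic sketch).

    adj: symmetric 0/1 adjacency matrix with zero diagonal.
    The indicator matrix of a k-clique is rank one, nonnegative, has entry
    sum k^2, and vanishes off the clique; rank is replaced by its convex
    envelope, the nuclear norm.
    """
    n = adj.shape[0]
    off_support = 1 - adj - np.eye(n)          # 1 exactly on the non-edges (i != j)
    X = cp.Variable((n, n), symmetric=True)
    constraints = [cp.sum(X) == k * k,
                   X >= 0, X <= 1,
                   cp.multiply(X, off_support) == 0]
    prob = cp.Problem(cp.Minimize(cp.normNuc(X)), constraints)
    prob.solve()
    return X.value
```

Under the planted model, thresholding the entries of the returned matrix reads off the clique.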
APA, Harvard, Vancouver, ISO, and other styles
33

Chen, Yao-Jen, and 陳銚壬. "Global optimality conditions for non-convex minimization problems based on L-subgradient and Lagrange duality theory." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/16060317204572593698.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Department of Mathematics, Master's and Doctoral Program in Applied Mathematics
97
In this thesis, we study global optimality conditions for non-convex minimization problems. Using extended notions of the gradient, lower-bound functions, and Farkas' lemma, we obtain extended KKT conditions that characterize non-convex, non-differentiable functions. The extension is based on the concepts of the L-subdifferential, the S-property, and the solvability property introduced by Jeyakumar, Rubinov, and Wu, but we focus on the geometric connections and interpretation of these abstract properties. The concepts are applied in particular to non-convex quadratic minimization problems, and we compare them, by means of figures, with the canonical duality theory introduced by Fang, Gao, Sheu, and Wu.
APA, Harvard, Vancouver, ISO, and other styles
34

Padakandla, Arun. "Interference Management For Vector Gaussian Multiple Access Channels." Thesis, 2008. http://hdl.handle.net/2005/702.

Full text
Abstract:
In this thesis, we consider a vector Gaussian multiple access channel (MAC) with users demanding reliable communication at specific (Shannon-theoretic) rates. The objective is to assign vectors and powers to these users such that their rate requirements are met and the sum of received powers is minimized. We identify this power minimization problem as an instance of a separable convex optimization problem with linear ascending constraints. Under an ordering condition on the slopes of the objective functions at the origin, an algorithm that determines the optimum point in a finite number of steps is described. This provides a complete characterization of the minimum sum power for the vector Gaussian multiple access channel. Furthermore, we prove a strong duality between the above sum power minimization problem and the problem of sum rate maximization under power constraints. We then propose finite-step algorithms to explicitly identify an assignment of vectors and powers that solves the above power minimization and sum rate maximization problems. The distinguishing feature of the proposed algorithms is the size of the output vector sets; in particular, we prove an upper bound on the size of the vector sets that is independent of the number of users. Finally, we restrict the vectors to an orthonormal set. The goal is then to identify an assignment of vectors (from the orthonormal set) to users such that the user rate requirements are met with minimum sum power. This is a combinatorial optimization problem, and we study the complexity of its decision version. Our results indicate that when the dimensionality of the vector set is part of the input, the decision version is NP-complete.
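The problem class at the heart of this abstract, separable convex minimization under linear ascending constraints, is easy to state; the hedged cvxpy sketch below writes down a toy instance with quadratic costs and made-up thresholds and solves it with a generic solver, rather than with the finite-step algorithm developed in the thesis.

```python
import cvxpy as cp
import numpy as np

# Toy instance: minimize a separable convex cost subject to ascending
# partial-sum constraints sum_{i <= l} x_i >= alpha_l (hypothetical data).
n = 4
alpha = np.array([1.0, 2.5, 4.0, 6.0])        # nondecreasing thresholds
w = np.array([1.0, 2.0, 3.0, 4.0])            # per-coordinate cost weights

x = cp.Variable(n, nonneg=True)
objective = cp.Minimize(cp.sum(cp.multiply(w, cp.square(x))))  # separable, convex
constraints = [cp.sum(x[: l + 1]) >= alpha[l] for l in range(n)]

prob = cp.Problem(objective, constraints)
prob.solve()
print(x.value, prob.value)
```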
APA, Harvard, Vancouver, ISO, and other styles
35

Hoang, Thai Duy. "Fourier and Variational Based Approaches for Fingerprint Segmentation." Doctoral thesis, 2015. http://hdl.handle.net/11858/00-1735-0000-0022-5FEF-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
