Theses on the topic "Convex minimization"
Create an accurate citation in APA, MLA, Chicago, Harvard and other styles
Consult the top 35 theses for your research on the topic "Convex minimization".
Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Explore theses on a wide variety of disciplines and organize your bibliography correctly.
Nedić, Angelia. "Subgradient methods for convex minimization". Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/16843.
Includes bibliographical references (p. 169-174).
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Many optimization problems arising in various applications require minimization of an objective cost function that is convex but not differentiable. Such a minimization arises, for example, in model construction, system identification, neural networks, pattern classification, and various assignment, scheduling, and allocation problems. To solve convex but not differentiable problems, we have to employ special methods that can work in the absence of differentiability, while taking advantage of convexity and possibly other special structures that our minimization problem may possess. In this thesis, we propose and analyze some new methods that can solve convex (not necessarily differentiable) problems. In particular, we consider two classes of methods: incremental and variable metric.
by Angelia Nedić.
Ph.D.
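The Nedić abstract above is about subgradient methods for nondifferentiable convex minimization. For orientation only, here is a minimal sketch of a plain projected subgradient step with a diminishing step size; it is not the incremental or variable-metric variants the thesis develops, and the objective, step-size rule, and feasible set are illustrative choices.

```python
import numpy as np

def projected_subgradient(f, subgrad, project, x0, iters=2000):
    """Plain projected subgradient method with diminishing steps 1/(k+1).

    f        -- convex (possibly nondifferentiable) objective, used to track the best iterate
    subgrad  -- returns one subgradient of f at x
    project  -- Euclidean projection onto the feasible convex set
    """
    x = np.asarray(x0, dtype=float)
    x_best, f_best = x.copy(), f(x)
    for k in range(iters):
        x = project(x - subgrad(x) / (k + 1.0))   # not a descent method, so...
        if f(x) < f_best:                          # ...remember the best point seen
            x_best, f_best = x.copy(), f(x)
    return x_best, f_best

# Toy example: minimize the l1-norm over the box [-1, 2]^5 (minimum 0 at the origin).
f = lambda x: np.abs(x).sum()
subgrad = lambda x: np.sign(x)                     # a valid subgradient of the l1-norm
project = lambda x: np.clip(x, -1.0, 2.0)
x_star, f_star = projected_subgradient(f, subgrad, project, x0=np.full(5, 2.0))
print(f_star)                                      # close to 0
```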
Apidopoulos, Vasileios. "Inertial Gradient-Descent algorithms for convex minimization". Thesis, Bordeaux, 2019. http://www.theses.fr/2019BORD0175/document.
This thesis focuses on the study of inertial methods for solving composite convex minimization problems. Since the early works of Polyak and Nesterov, inertial methods have become very popular thanks to their acceleration effects. Here, we study a family of Nesterov-type inertial proximal-gradient algorithms with a particular over-relaxation sequence. We give a unified presentation of the different convergence properties of this family of algorithms, depending on the over-relaxation parameter. In addition, we address this issue in the case of a smooth function with additional geometrical structure, such as the growth (or Łojasiewicz) condition. We show that by combining the growth condition with a flatness-type condition on the geometry of the function to minimize, we are able to obtain some new convergence rates. Our analysis follows a continuous-to-discrete trail, passing from continuous-time dynamical systems to discrete schemes. In particular, the family of inertial algorithms that interests us can be identified as a finite-difference scheme of a differential equation/inclusion. This approach provides a useful guideline, which permits transposing the different results and their proofs from the continuous system to the discrete one. This opens the way for new possible inertial schemes derived from the same dynamical system.
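For reference, a generic Nesterov-type inertial proximal-gradient step for min_x f(x) + g(x), with f smooth and g convex, takes the form below; the specific over-relaxation sequence (α_k) analysed in the thesis is not reproduced here.

```latex
\begin{aligned}
y_k &= x_k + \alpha_k\,(x_k - x_{k-1}) && \text{(inertial extrapolation)}\\
x_{k+1} &= \operatorname{prox}_{\gamma g}\bigl(y_k - \gamma \nabla f(y_k)\bigr) && \text{(forward-backward step),}
\end{aligned}
\qquad
\operatorname{prox}_{\gamma g}(z) = \arg\min_{u}\Bigl\{ g(u) + \tfrac{1}{2\gamma}\|u - z\|^{2} \Bigr\}.
```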
Gräser, Carsten [Verfasser]. "Convex minimization and phase field models / Carsten Gräser". Berlin : Freie Universität Berlin, 2011. http://d-nb.info/1026174848/34.
El Gheche, Mireille. "Proximal methods for convex minimization of Phi-divergences : application to computer vision". Thesis, Paris Est, 2014. http://www.theses.fr/2014PEST1018/document.
Convex optimization aims at searching for the minimum of a convex function over a convex set. While the theory of convex optimization has been largely explored for about a century, several related developments have stimulated a new interest in the topic. The first one is the emergence of efficient optimization algorithms, such as proximal methods, which allow one to easily solve large-size nonsmooth convex problems in a parallel manner. The second development is the discovery of the fact that convex optimization problems are more ubiquitous in practice than was previously thought. In this thesis, we address two different problems within the framework of convex optimization. The first one is an application to computer stereo vision, where the goal is to recover the depth information of a scene from a pair of images taken from the left and right positions. The second one is the proposition of new mathematical tools to deal with convex optimization problems involving information measures, where the objective is to minimize the divergence between two statistical objects such as random variables or probability distributions. We propose a convex approach to address the problem of dense disparity estimation under varying illumination conditions. A convex energy function is derived for jointly estimating the disparity and the illumination variation. The resulting problem is tackled in a set-theoretic framework and solved using proximal tools. It is worth emphasizing the ability of this method to process multicomponent images under illumination variation. The conducted experiments indicate that this approach can effectively deal with local illumination changes and yields better results compared with existing methods. We then extend the previous approach to the problem of multi-view disparity estimation. Rather than estimating a single depth map, we estimate a sequence of disparity maps, one for each input image. We address this problem by adopting a discrete reformulation that can be efficiently solved through a convex relaxation. This approach offers the advantage of handling both convex and nonconvex similarity measures within the same framework. We have shown that the additional complexity required by the application of our method to the multi-view case is small with respect to the stereo case. Finally, we have proposed a novel approach to handle a broad class of statistical distances, called $\varphi$-divergences, within the framework of proximal algorithms. In particular, we have developed the expression of the proximity operators of several $\varphi$-divergences, such as the Kullback-Leibler, Jeffreys-Kullback, Hellinger, Chi-Square, I$_{\alpha}$, and Rényi divergences. This allows proximal algorithms to deal with problems involving such divergences, thus overcoming the limitations of current state-of-the-art approaches for similar problems. The proposed approach is validated in two different contexts. The first is an application to image restoration that illustrates how to employ divergences as a regularization term, while the second is an application to image registration that employs divergences as a data fidelity term.
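Two objects recur in this abstract; as a reminder of the standard definitions (not the thesis's specific derivations), the proximity operator of a proper convex function f and the φ-divergence between discrete distributions p and q are

```latex
\operatorname{prox}_{f}(x) = \arg\min_{u}\Bigl\{ f(u) + \tfrac{1}{2}\|u - x\|^{2} \Bigr\},
\qquad
D_{\varphi}(p \,\|\, q) = \sum_{i} q_i\, \varphi\!\left(\frac{p_i}{q_i}\right),
```

with φ convex and φ(1) = 0; for instance, φ(t) = t log t - t + 1 gives the (generalized) Kullback-Leibler divergence.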
Doto, James William. "Conditional uniform convexity in Orlicz spaces and minimization problems". Thesis, Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/27352.
Croxton, Keely L., Bernard Gendron and Thomas L. Magnanti. "A Comparison of Mixed-Integer Programming Models for Non-Convex Piecewise Linear Cost Minimization Problems". Massachusetts Institute of Technology, Operations Research Center, 2002. http://hdl.handle.net/1721.1/5233.
Texto completoHe, Niao. "Saddle point techniques in convex composite and error-in-measurement optimization". Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/54400.
Ghebremariam, Samuel. "Energy Production Cost and PAR Minimization in Multi-Source Power Networks". University of Akron / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=akron1336517757.
Texto completoSinha, Arunesh. "Audit Games". Research Showcase @ CMU, 2014. http://repository.cmu.edu/dissertations/487.
Texto completoCaillaud, Corentin. "Asymptotical estimates for some algorithms for data and image processing : a study of the Sinkhorn algorithm and a numerical analysis of total variation minimization". Thesis, Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAX023.
This thesis deals with discrete optimization problems and investigates estimates of their convergence rates. It is divided into two independent parts. The first part addresses the convergence rate of the Sinkhorn algorithm and of some of its variants. This algorithm appears in the context of Optimal Transportation (OT) through entropic regularization. Its iterations, and the ones of the Sinkhorn-like variants, are written as componentwise products of nonnegative vectors and matrices. We propose a new approach to analyze them, based on simple convex inequalities and leading to the linear convergence rate that is observed in practice. We extend this result to a particular type of variants of the algorithm that we call 1D balanced Sinkhorn-like algorithms. In addition, we present some numerical techniques dealing with the convergence towards zero of the regularizing parameter of the OT problems. Lastly, we conduct the complete analysis of the convergence rate in dimension 2. In the second part, we establish error estimates for two discretizations of the total variation (TV) in the Rudin-Osher-Fatemi (ROF) model. This image denoising problem, which is solved by computing the proximal operator of the total variation, enjoys isotropy properties ensuring the preservation of sharp discontinuities in the denoised images in every direction. When the problem is discretized on a square mesh of size h and one uses a standard discrete total variation -- the so-called isotropic TV -- this property is lost. We show that in a particular direction the error in the energy is of order h^{2/3}, which is relatively large with respect to what one can expect with better discretizations. Our proof relies on the analysis of an equivalent 1D denoising problem and of the perturbed TV it involves. The second discrete total variation we consider mimics the definition of the continuous total variation, replacing the usual dual fields by discrete Raviart-Thomas fields. Doing so, we recover an isotropic behavior of the discrete ROF model. Finally, we prove an O(h) error estimate for this variant under standard hypotheses.
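The Sinkhorn iterations analysed in the first part are, in their basic entropic-OT form, componentwise vector scalings against a fixed kernel matrix. Below is a minimal NumPy sketch of that basic form only; the cost, regularisation level, and iteration count are illustrative, and the 1D balanced variants studied in the thesis are not covered.

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.05, iters=500):
    """Basic Sinkhorn iterations for entropy-regularised optimal transport.

    a, b -- source/target probability vectors (nonnegative, summing to 1)
    C    -- cost matrix of shape (len(a), len(b))
    eps  -- entropic regularisation parameter
    Returns the transport plan P = diag(u) K diag(v) with K = exp(-C/eps).
    """
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)          # componentwise update of the right scaling
        u = a / (K @ v)            # componentwise update of the left scaling
    return u[:, None] * K * v[None, :]

# Toy example: transport between two discrete distributions on a 1D grid.
x = np.linspace(0.0, 1.0, 50)
C = (x[:, None] - x[None, :]) ** 2           # squared-distance cost
a = np.full(50, 1.0 / 50)
b = np.exp(-(x - 0.7) ** 2 / 0.01); b /= b.sum()
P = sinkhorn(a, b, C)
print(P.sum(), np.abs(P.sum(axis=1) - a).max())   # ~1 and ~0 (marginal constraint)
```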
Repetti, Audrey. "Algorithmes d'optimisation en grande dimension : applications à la résolution de problèmes inverses". Thesis, Paris Est, 2015. http://www.theses.fr/2015PESC1032/document.
An efficient approach for solving an inverse problem is to define the recovered signal/image as a minimizer of a penalized criterion, which is often split into a sum of simpler functions composed with linear operators. In the situations of practical interest, these functions may be neither convex nor smooth. In addition, large-scale optimization problems often have to be faced. This thesis is devoted to the design of new methods to solve such difficult minimization problems, while paying attention to computational issues and theoretical convergence properties. A first idea to build fast minimization algorithms is to make use of a preconditioning strategy by adapting, at each iteration, the underlying metric. We incorporate this technique in the forward-backward algorithm and provide an automatic method for choosing the preconditioning matrices, based on a majorization-minimization principle. The convergence proofs rely on the Kurdyka-Łojasiewicz inequality. A second strategy consists of splitting the involved data into different blocks of reduced dimension. This approach allows us to control the number of operations performed at each iteration of the algorithms, as well as the required memory. For this purpose, block alternating methods are developed in the context of both non-convex and convex optimization problems. In the non-convex case, a block alternating version of the preconditioned forward-backward algorithm is proposed, where the blocks are updated according to an acyclic deterministic rule. When additional convexity assumptions can be made, various alternating proximal primal-dual algorithms are obtained by using an arbitrary random sweeping rule. The theoretical analysis of these stochastic convex optimization algorithms is grounded in the theory of monotone operators. A key ingredient in the solution of high-dimensional optimization problems lies in the possibility of performing some of the computation steps in a parallel manner. This parallelization is made possible in the proposed block alternating primal-dual methods, where the primal variables, as well as the dual ones, can be updated in a quite flexible way. As an offspring of these results, new distributed algorithms are derived, where the computations are spread over a set of agents connected through a general hypergraph topology. Finally, our methodological contributions are validated on a number of applications in signal and image processing. First, we focus on optimization problems involving non-convex criteria, in particular image restoration when the original image is corrupted with a signal-dependent Gaussian noise, spectral unmixing, phase reconstruction in tomography, and blind deconvolution in seismic sparse signal reconstruction. Then, we address convex minimization problems arising in the context of 3D mesh denoising and in query optimization for database management.
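The preconditioning strategy described above can be summarised schematically as a forward-backward step taken in a variable metric: with A_k a symmetric positive-definite matrix chosen at iteration k (in the thesis via a majorization-minimization rule), the update reads

```latex
x_{k+1} = \operatorname{prox}^{A_k}_{\gamma_k g}\bigl(x_k - \gamma_k A_k^{-1}\nabla f(x_k)\bigr),
\qquad
\operatorname{prox}^{A}_{g}(z) = \arg\min_{u}\Bigl\{ g(u) + \tfrac{1}{2}\|u - z\|_{A}^{2} \Bigr\},
\quad \|w\|_A^2 = \langle A w, w\rangle .
```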
Camargo, Fernando Taietti. "Estudo comparativo de passos espectrais e buscas lineares não monótonas". Universidade de São Paulo, 2008. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-16062008-211538/.
The Spectral Gradient method, introduced by Barzilai and Borwein and analyzed by Raydan for unconstrained minimization, is a simple method whose performance is comparable to that of traditional methods, such as conjugate gradients. Since the introduction of the method, as well as its extension to minimization over convex sets, various combinations of different spectral steplengths and different nonmonotone line searches have been proposed. From the numerical results presented in many studies, it is not possible to infer whether there are significant differences in the performance of the various methods. It is also unclear whether nonmonotone line searches are relevant as a tool in themselves or whether, in fact, they are useful only to keep the method as similar as possible to the original method of Barzilai and Borwein. The objective of this study is to compare the methods recently introduced as different combinations of nonmonotone line searches and spectral steplengths, to find the best combination and, from there, to evaluate the numerical performance of the method.
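As background for the comparison described above, the sketch below implements a gradient method with the two classical Barzilai-Borwein (spectral) step lengths, without any nonmonotone line search or safeguarding (which the compared methods do include); the quadratic test problem is an illustrative choice.

```python
import numpy as np

def spectral_gradient(grad, x0, iters=200, bb_variant=1, alpha0=1.0):
    """Gradient descent with Barzilai-Borwein (spectral) step lengths.

    bb_variant=1 uses alpha = (s.s)/(s.y); bb_variant=2 uses alpha = (s.y)/(y.y),
    where s = x_k - x_{k-1} and y = grad(x_k) - grad(x_{k-1}).
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    alpha = alpha0
    for _ in range(iters):
        x_new = x - alpha * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        if s @ y > 0:                          # keep the spectral step positive
            alpha = (s @ s) / (s @ y) if bb_variant == 1 else (s @ y) / (y @ y)
        x, g = x_new, g_new
    return x

# Toy example: strictly convex quadratic f(x) = 0.5 x'Ax - b'x with A diagonal.
A = np.diag(np.arange(1.0, 21.0))              # eigenvalues 1..20
b = np.ones(20)
x = spectral_gradient(lambda z: A @ z - b, np.zeros(20))
print(np.linalg.norm(A @ x - b))               # small residual
```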
Couprie, Camille. "Graph-based variational optimization and applications in computer vision". Phd thesis, Université Paris-Est, 2011. http://tel.archives-ouvertes.fr/tel-00666878.
Bot, Radu Ioan, Ernö Robert Csetnek and Erika Nagy. "Solving systems of monotone inclusions via primal-dual splitting techniques". Universitätsbibliothek Chemnitz, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-108172.
Texto completoSantos, Luiz Gustavo de Moura dos. "Métodos de busca em coordenada". Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-04012018-162035/.
Real-world problems in areas such as machine learning are known for their huge number of decision variables (> 10^6) and data volume. For such problems, working with second-order derivatives is prohibitive. These problems have properties that benefit the application of coordinate descent/minimization methods. This kind of method is defined by the change of a single decision variable, or a small number of them, at each iteration. In the literature, the commonly found description of this type of method is based on the cyclic change of variables. Recent papers have shown that randomized versions of the method have better convergence properties. Such versions change a single variable, chosen at random at each iteration according to a fixed, but not necessarily uniform, distribution. In this work we present some theoretical aspects of such methods, but we focus on practical aspects.
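Below is a minimal sketch of the randomized scheme described above, for a convex quadratic, with the coordinate drawn at each iteration from a fixed, non-uniform (Lipschitz-proportional) distribution; this illustrates the general scheme, not the specific methods studied in the thesis.

```python
import numpy as np

def random_coordinate_descent(A, b, iters=20000, seed=0):
    """Randomized coordinate descent for f(x) = 0.5 x'Ax - b'x, A symmetric positive definite.

    Coordinate i is drawn with probability proportional to its coordinate-wise
    Lipschitz constant L_i = A[i, i], and updated by exact minimization along e_i.
    """
    rng = np.random.default_rng(seed)
    n = len(b)
    L = np.diag(A).copy()
    probs = L / L.sum()                      # fixed, non-uniform sampling distribution
    x = np.zeros(n)
    g = A @ x - b                            # maintain the full gradient incrementally
    for _ in range(iters):
        i = rng.choice(n, p=probs)
        delta = -g[i] / L[i]                 # exact line minimization along coordinate i
        x[i] += delta
        g += delta * A[:, i]                 # rank-one gradient update, O(n) per iteration
    return x

# Toy example.
rng = np.random.default_rng(1)
M = rng.standard_normal((30, 30))
A = M @ M.T / 30 + np.eye(30)
b = rng.standard_normal(30)
x = random_coordinate_descent(A, b)
print(np.linalg.norm(A @ x - b))             # small residual
```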
Cruz Cavalcanti, Yanna. "Factor analysis of dynamic PET images". Thesis, Toulouse, INPT, 2018. http://www.theses.fr/2018INPT0078/document.
Thanks to its ability to evaluate metabolic functions in tissues from the temporal evolution of a previously injected radiotracer, dynamic positron emission tomography (PET) has become a ubiquitous analysis tool to quantify biological processes. Several quantification techniques from the PET imaging literature require a previous estimation of global time-activity curves (TACs) (herein called "factors") representing the concentration of tracer in a reference tissue or blood over time. To this end, factor analysis has often appeared as an unsupervised learning solution for the extraction of factors and their respective fractions in each voxel. Inspired by the hyperspectral unmixing literature, this manuscript addresses two main drawbacks of general factor analysis techniques applied to dynamic PET. The first one is the assumption that the elementary response of each tissue to tracer distribution is spatially homogeneous. Even though this homogeneity assumption has proven its effectiveness in several factor analysis studies, it may not always provide a sufficient description of the underlying data, in particular when abnormalities are present. To tackle this limitation, the models herein proposed introduce an additional degree of freedom to the factors related to specific binding. To this end, a spatially-variant perturbation affects a nominal and common TAC representative of the high-uptake tissue. This variation is spatially indexed and constrained with a dictionary that is either previously learned or explicitly modelled with convolutional nonlinearities affecting non-specific binding tissues. The second drawback is related to the noise distribution in PET images. Even though the positron decay process can be described by a Poisson distribution, the actual noise in reconstructed PET images is not expected to be simply described by Poisson or Gaussian distributions. Therefore, we propose to consider a popular and quite general loss function, called the $\beta$-divergence, that is able to generalize conventional loss functions such as the least-squares distance, Kullback-Leibler and Itakura-Saito divergences, respectively corresponding to Gaussian, Poisson and Gamma distributions. This loss function is applied to three factor analysis models in order to evaluate its impact on dynamic PET images with different reconstruction characteristics.
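The β-divergence mentioned above is the standard family from the factorization literature; for scalars x, y > 0 it reads as below, recovering the squared Euclidean distance at β = 2 and the Kullback-Leibler and Itakura-Saito divergences in the limits β → 1 and β → 0.

```latex
d_{\beta}(x \mid y) =
\begin{cases}
\dfrac{1}{\beta(\beta-1)}\bigl(x^{\beta} + (\beta-1)\,y^{\beta} - \beta\, x\, y^{\beta-1}\bigr), & \beta \in \mathbb{R}\setminus\{0,1\},\\[2ex]
x \log\dfrac{x}{y} - x + y, & \beta = 1 \quad \text{(Kullback-Leibler)},\\[1ex]
\dfrac{x}{y} - \log\dfrac{x}{y} - 1, & \beta = 0 \quad \text{(Itakura-Saito)}.
\end{cases}
```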
Vigano, Nicola Roberto. "Full-field X-ray orientation imaging using convex optimization and a discrete representation of six-dimensional position - orientation space". Thesis, Lyon, INSA, 2015. http://www.theses.fr/2015ISAL0095/document.
This Ph.D. thesis is about the development and formalization of a six-dimensional tomography method for the reconstruction of local orientation in poly-crystalline materials. This method is based on a technique known as diffraction contrast tomography (DCT), mainly used at synchrotrons, with a monochromatic and parallel high-energy X-ray beam. DCT has existed for over a decade now, but it was always employed to analyze undeformed or nearly undeformed materials, described by "grains" with a certain average orientation. Because an orientation can be parametrized by the use of only three numbers, the local orientation in the grains is modelled by a six-dimensional space X6 = R3 ⊗ O3, that is, the outer product between a three-dimensional real-space and another three-dimensional orientation-space. This means that for each point of the real-space, there could be a full three-dimensional orientation-space, which however in practice is restricted to a smaller region of interest called "local orientation-space". The reconstruction problem is then formulated as a global minimisation problem, where the reconstruction of a single grain is the solution that minimizes a functional. There can be different choices for the functionals to use, and they depend on the type of reconstruction one is looking for and on the type of a priori knowledge available. All the functionals used include a data fidelity term which ensures that the reconstruction is consistent with the measured diffraction data, and an additional regularization term is then added, like the l1-norm minimization of the solution vector, which tries to limit the number of orientations per real-space voxel, or a Total Variation operator over the sum of the orientation part of the six-dimensional voxels, in order to enforce the homogeneity of the grain volume. When first published, the results on synthetic data from the third chapter highlighted some key features of the proposed framework and showed that it was in principle possible to extend DCT to the reconstruction of moderately deformed materials, but it was unclear whether it could work in practice. The following chapters instead confirm that the proposed framework is viable for reconstructing moderately deformed materials and that, in conjunction with other techniques, it could also overcome the limitations imposed by the grain indexing and be applied to more challenging textured materials.
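Schematically (and only schematically; the operators and weights in the thesis are specific to the six-dimensional DCT geometry), the grain-reconstruction functionals described above combine a diffraction data-fidelity term with l1 and total-variation regularizers:

```latex
\min_{x}\; \tfrac{1}{2}\,\|A x - b\|_2^{2} \;+\; \lambda\,\|x\|_1 \;+\; \mu\,\mathrm{TV}(x_{\mathrm{o}}),
```

where A is the forward projection operator, b the measured diffraction data, and x_o the orientation part of the six-dimensional voxels over which the total variation is taken.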
Balmand, Samuel. "Quelques contributions à l'estimation de grandes matrices de précision". Thesis, Paris Est, 2016. http://www.theses.fr/2016PESC1024/document.
Under the Gaussian assumption, the relationship between conditional independence and sparsity allows one to justify the construction of estimators of the inverse of the covariance matrix -- also called the precision matrix -- from regularized approaches. This thesis, originally motivated by the problem of image classification, aims at developing a method to estimate the precision matrix in high dimension, that is, when the sample size $n$ is small compared to the dimension $p$ of the model. Our approach relies basically on the connection of the precision matrix to the linear regression model. It consists of estimating the precision matrix in two steps. The off-diagonal elements are first estimated by solving $p$ minimization problems of the $\ell_1$-penalized square-root of least-squares type. The diagonal entries are then obtained from the result of the previous step, by residual analysis of likelihood maximization. These various estimators of the diagonal entries are compared in terms of estimation risk. Moreover, we propose a new estimator, designed to account for the possible contamination of data by outliers, thanks to the addition of an $\ell_2/\ell_1$ mixed-norm regularization term. The nonasymptotic analysis of the consistency of our estimator points out the relevance of our method.
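The off-diagonal step described above can be written, schematically, as p square-root-of-least-squares problems with an l1 penalty: for each variable j, with X_j the j-th column of the n x p data matrix and X_{-j} the remaining columns,

```latex
\widehat{\beta}^{(j)} \in \arg\min_{\beta \in \mathbb{R}^{p-1}}
\;\frac{1}{\sqrt{n}}\,\bigl\|X_j - X_{-j}\,\beta\bigr\|_2 \;+\; \lambda\,\|\beta\|_1 ,
```

and the diagonal entries are then recovered from the residuals of these regressions, as the abstract indicates.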
Popov, Petr. "Nouvelles méthodes de calcul pour la prédiction des interactions protéine-protéine au niveau structural". Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GRENM005/document.
Molecular docking is a method that predicts the orientation of one molecule with respect to another one when forming a complex. The first computational method of molecular docking was applied to find new candidates against HIV-1 protease in 1990. Since then, the use of docking pipelines has become a standard practice in drug discovery. Typically, a docking protocol comprises different phases. Exhaustive sampling of the binding site upon rigid-body approximation of the docking subunits is required. Clustering algorithms are used to group similar binding candidates. Refinement methods are applied to take into account the flexibility of the molecular complex and to eliminate possible docking artefacts. Finally, scoring algorithms are employed to select the best binding candidates. The current thesis presents novel algorithms for docking protocols that facilitate structure prediction of protein complexes, which belong to one of the most important target classes in structure-based drug design. First, DockTrina, a new algorithm to predict conformations of triangular protein trimers (i.e. trimers with pair-wise contacts between all three pairs of proteins), is presented. The method takes as input pair-wise contact predictions from a rigid-body docking program. It then scans and scores all possible combinations of pairs of monomers using a very fast root mean square deviation (RMSD) test. Being fast and efficient, DockTrina outperforms state-of-the-art computational methods dedicated to predicting the structure of protein oligomers on the collected benchmark of protein trimers. Second, RigidRMSD, a C++ library that computes, in constant time, RMSDs between molecular poses corresponding to rigid-body transformations, is presented. The library is practically useful for clustering docking poses, resulting in a tenfold speed-up compared to standard RMSD-based clustering algorithms. Third, KSENIA, a novel knowledge-based scoring function for protein-protein interactions, is developed. The problem of scoring function reconstruction is formulated and solved as a convex optimization problem. As a result, KSENIA is a smooth function and, thus, is suitable for the gradient-based refinement of molecular structures. Remarkably, it is shown that native interfaces of protein complexes provide sufficient information to reconstruct a well-discriminative scoring function. Fourth, CARBON, a new algorithm for the rigid-body refinement of docking candidates, is proposed. The rigid-body optimization problem is viewed as the calculation of quasi-static trajectories of rigid bodies influenced by the energy function. To circumvent the typical problem of incorrect step sizes for rotation and translation movements of molecular complexes, the concept of controlled advancement is introduced. CARBON works well both in combination with a classical force field and with a knowledge-based scoring function. CARBON is also suitable for the refinement of molecular complexes with moderate and large steric clashes between their subunits. Finally, a novel method to evaluate the prediction capability of scoring functions is introduced. It allows one to rigorously assess the performance of the scoring function of interest on benchmarks of molecular complexes. The method manipulates the score distributions rather than the scores of particular conformations, which makes it advantageous compared to the standard hit-rate criteria. The methods described in the thesis are tested and validated on various protein-protein benchmarks.
The implemented algorithms are successfully used in the CAPRI contest for structure prediction of protein-protein complexes. The developed methodology can be easily adapted to the recognition of other types of molecular interactions, involving ligands, polysaccharides, RNAs, etc. The C++ versions of the presented algorithms will be made available as SAMSON Elements for the SAMSON software platform at http://www.samson-connect.net or at http://nano-d.inrialpes.fr/software
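The constant-time RMSD computation mentioned for RigidRMSD rests on a simple identity: once a second-order moment matrix of the centred atom coordinates is precomputed, the RMSD between two rigid-body poses depends only on the two transformations. The NumPy sketch below illustrates that identity; it is not the RigidRMSD library or its API, and the random poses are purely illustrative.

```python
import numpy as np

def rmsd_bruteforce(X, R1, t1, R2, t2):
    """RMSD between two rigid-body poses of the same point set X (N x 3), cost O(N)."""
    D = (X @ R1.T + t1) - (X @ R2.T + t2)
    return np.sqrt((D ** 2).sum() / len(X))

def rmsd_constant_time(C, R1, t1, R2, t2):
    """Same RMSD from the precomputed moment matrix C = (1/N) * X.T @ X of
    *centred* coordinates X; cost independent of the number of points."""
    dR, dt = R1 - R2, t1 - t2
    return np.sqrt(np.trace(dR @ C @ dR.T) + dt @ dt)

def random_rotation(rng):
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    return Q * np.sign(np.linalg.det(Q))          # proper rotation, det = +1

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 3))
X -= X.mean(axis=0)                               # centring removes the cross term
C = X.T @ X / len(X)
R1, R2 = random_rotation(rng), random_rotation(rng)
t1, t2 = rng.standard_normal(3), rng.standard_normal(3)
print(rmsd_bruteforce(X, R1, t1, R2, t2), rmsd_constant_time(C, R1, t1, R2, t2))  # equal
```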
Hägglund, Andreas and Moa Källgren. "Impact of Engine Dynamics on Optimal Energy Management Strategies for Hybrid Electric Vehicles". Thesis, Linköpings universitet, Fordonssystem, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-148890.
Asif, Muhammad Salman. "Primal dual pursuit: a homotopy based algorithm for the Dantzig selector". Thesis, Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/24693.
Texto completoCommittee Chair: Romberg, Justin; Committee Member: McClellan, James; Committee Member: Mersereau, Russell
Pennanen, H. (Harri). "Coordinated beamforming in cellular and cognitive radio networks". Doctoral thesis, Oulun yliopisto, 2015. http://urn.fi/urn:isbn:9789526208978.
Abstract: This doctoral thesis focuses on the design of coordinated beamforming techniques for wireless multi-cell multi-antenna systems, particularly cellular and cognitive radio networks. Coordinated beamforming techniques aim to improve network performance by controlling inter-cell interference, especially at the cell-edge areas. Particular emphasis is placed on the design of practical coordinated beamforming techniques that can be implemented in a decentralized manner, relying on local channel state information and information exchange between base stations. The network design objective is to minimize the total transmit power of the base stations while guaranteeing each user a given minimum data rate. Decentralized coordinated beamforming techniques are developed for multiple-input single-output (MISO) cellular networks. The base stations are assumed to be equipped with multiple transmit antennas, whereas each terminal has a single receive antenna. The proposed iterative algorithms are based on the classical primal and dual decompositions. The transmit power minimization problem is decomposed into two optimization levels: base-station-specific subproblems for beamforming and a network-level master problem for inter-cell interference management. After acquiring local channel state information, each base station independently computes its transmit beamformers by solving its subproblem using standard convex optimization techniques. Inter-cell interference is controlled by solving the master problem with a conventional subgradient method, which requires information exchange between base stations. The proposed algorithms guarantee the per-user rate targets at every iteration, which makes it possible to reduce delay and to control the information exchange between base stations. For this reason, the proposed algorithms are suited to practical implementations, unlike most previously proposed decentralized algorithms. Numerical results show that the proposed algorithms provide significant network performance improvements over earlier zero-forcing methods. Coordinated beamforming is also studied in multiple-input multiple-output (MIMO) cellular networks, where both the base stations and the terminals are equipped with multiple antennas. In such a network, the transmit power minimization problem is non-convex. The optimization problem is split into transmit and receive beamforming designs, which are alternated until the algorithm converges. The transmit beamforming problem is solved via successive convex approximations, while receive beamforming is performed by minimizing the sum of squared errors. In addition to a centralized algorithm, two decentralized algorithms based on primal decomposition are developed. The decentralized implementation is facilitated by pilot signalling and information exchange between base stations. Numerical results show that the MIMO techniques significantly outperform the MISO techniques. Finally, coordinated beamforming is considered in cognitive radio networks, where the primary and secondary systems share the same frequency band. Transmit power optimization is carried out for the secondary network while minimizing the interference caused to the primary users. Two decentralized algorithms are developed, one based on primal decomposition and the other on the alternating direction method of multipliers.
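The per-user rate (equivalently, SINR) constrained power-minimization problem underlying this work can be stated, in a schematic single-cell MISO form (the multicell coupling and the decompositions are what the thesis actually develops), as

```latex
\min_{\{\mathbf{w}_k\}} \;\sum_{k=1}^{K} \|\mathbf{w}_k\|_2^2
\quad \text{s.t.} \quad
\frac{|\mathbf{h}_k^{\mathsf{H}} \mathbf{w}_k|^2}
     {\sum_{j \neq k} |\mathbf{h}_k^{\mathsf{H}} \mathbf{w}_j|^2 + \sigma_k^2}
\;\ge\; \gamma_k , \qquad k = 1,\dots,K,
```

where w_k is the beamformer for user k, h_k its channel, σ_k² the noise power, and γ_k the SINR target implied by the minimum rate; each constraint can be rewritten as a second-order cone constraint, which is what makes the per-base-station subproblems convex.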
Artina, Marco [Verfasser], Massimo [Akademischer Betreuer] Fornasier, Karl [Akademischer Betreuer] Kunisch and Antonin [Akademischer Betreuer] Chambolle. "Lagrangian Methods for Constrained Non-Convex Minimizations and Applications in Fracture Mechanics / Marco Artina. Betreuer: Massimo Fornasier. Gutachter: Karl Kunisch ; Antonin Chambolle ; Massimo Fornasier". München : Universitätsbibliothek der TU München, 2015. http://d-nb.info/1088724981/34.
Jalalzai, Khalid. "Regularization of inverse problems in image processing". Phd thesis, Ecole Polytechnique X, 2012. http://pastel.archives-ouvertes.fr/pastel-00787790.
Texto completoTraoré, Abraham. "Contribution à la décomposition de données multimodales avec des applications en apprentisage de dictionnaires et la décomposition de tenseurs de grande taille". Thesis, Normandie, 2019. http://www.theses.fr/2019NORMR068/document.
In this work, we are interested in special mathematical tools called tensors, which are multidimensional arrays defined on the tensor product of vector spaces, each of which has its own coordinate system; the number of spaces involved in this product is generally referred to as the order. The interest in these tools stems from empirical work (for a range of applications encompassing both classification and regression) that proves the superiority of tensor processing with respect to matrix decomposition techniques. In this thesis, we focused on a specific tensor model, named Tucker, and established new approaches for miscellaneous tasks such as dictionary learning, online dictionary learning, large-scale processing, as well as the decomposition of a tensor evolving with respect to each of its modes. New theoretical results are established, and the efficiency of the different algorithms, which are based either on alternating minimization or on coordinate gradient descent, is proven on real-world problems.
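For reference, the third-order Tucker model around which these contributions are built writes a tensor X as a small core G multiplied by a factor matrix along each mode:

```latex
\mathcal{X} \;\approx\; \mathcal{G} \times_1 A^{(1)} \times_2 A^{(2)} \times_3 A^{(3)},
\qquad
x_{ijk} \;\approx\; \sum_{p=1}^{r_1}\sum_{q=1}^{r_2}\sum_{s=1}^{r_3}
g_{pqs}\, a^{(1)}_{ip}\, a^{(2)}_{jq}\, a^{(3)}_{ks},
```

with ×_n the mode-n product and (r_1, r_2, r_3) the multilinear ranks.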
Nguyen, Hao Thanh. "Greedy Strategies for Convex Minimization". Thesis, 2013. http://hdl.handle.net/1969.1/151372.
Texto completo"A decomposition algorithm for convex differentiable minimization". Laboratory for Information and Decision Systems, Massachusetts Institute of Technology], 1989. http://hdl.handle.net/1721.1/3120.
Cover title.
Includes bibliographical references.
Partially supported by the U.S. Army Research Office (Center for Intelligent Control Systems), DAAL03-86-K-0171. Partially supported by the National Science Foundation, NSF-ECS-8519058.
"Partial proximal minimization algorithms for convex programming". Massachusetts Institute of Technology, Laboratory for Information and Decision Systems], 1993. http://hdl.handle.net/1721.1/3313.
Caption title.
Includes bibliographical references (p. 25-27).
Supported by the National Science Foundation, DDM-8903385 and CCR-9103804. Supported by the Army Research Office, ARO DAAL03-92-G-0115.
Netrapalli, Praneeth Kumar. "Provable alternating minimization for non-convex learning problems". Thesis, 2014. http://hdl.handle.net/2152/25931.
Texto completotext
"On the convergence of the coordinate descent method for convex differentiable minimization". Laboratory for Information and Decision Systems, Massachusetts Institute of Technology], 1989. http://hdl.handle.net/1721.1/3164.
Cover title.
Includes bibliographical references (p. 31-34).
Research partially supported by the U.S. Army Research Office (Center for Intelligent Control Systems), DAAL03-86-K-0171. Research partially supported by the National Science Foundation, NSF-ECS-8519058. Research partially supported by the Science and Engineering Research Board of McMaster University.
"On the linear convergence of descent methods for convex essentially smooth minimization". Massachusetts Institute of Technology, Laboratory for Information and Decision Systems, 1990. http://hdl.handle.net/1721.1/3206.
Cover title.
Includes bibliographical references (p. 27-31).
Research supported by the U.S. Army Research Office, DAAL03-86-K-0171. Research supported by the National Science Foundation, NSF-DDM-8903385. Research supported by a grant from the Science and Engineering Research Board of McMaster University.
Ames, Brendan. "Convex relaxation for the planted clique, biclique, and clustering problems". Thesis, 2011. http://hdl.handle.net/10012/6113.
Texto completoChen, Yao-Jen y 陳銚壬. "Global optimality conditions for non-convex minimization problems based on L-subgradient and Lagrange duality theory". Thesis, 2009. http://ndltd.ncl.edu.tw/handle/16060317204572593698.
Texto completo國立成功大學
數學系應用數學碩博士班
97
In this thesis, we study the global optimality conditions for non-convex minimization problems. With the extended concept of gradient, lower bound function, and Farkas’ lemma, we have the extended KKT conditions for characterizing the non-convex, non-differentiable function. The extension is based on the concept of L-subdifferential, S-property, and solvability property, introduced by Jeyakumar, Rubinov, and Wu, but we have focused on the geometrical connections and interpretation of those abstract properties. The concepts are particularly applied for non-convex quadratic minimization problems and we have made comparisons with the canonical duality theory, introduced by Fang, Gao, Sheu, and Wu, by figures.
Padakandla, Arun. "Interference Management For Vector Gaussian Multiple Access Channels". Thesis, 2008. http://hdl.handle.net/2005/702.
Hoang, Thai Duy. "Fourier and Variational Based Approaches for Fingerprint Segmentation". Doctoral thesis, 2015. http://hdl.handle.net/11858/00-1735-0000-0022-5FEF-2.