Dissertations / Theses on the topic 'Positive-definite matrices'

Consult the top 18 dissertations / theses for your research on the topic 'Positive-definite matrices.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Heyfron, Peter. "Positive functions defined on Hermitian positive semi-definite matrices." Thesis, Imperial College London, 1990. http://hdl.handle.net/10044/1/46339.

2

Ho, Man-Kiu, and 何文翹. "Iterative methods for non-hermitian positive semi-definite systems." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B30289403.

3

Cavers, Ian Alfred. "Tiebreaking the minimum degree algorithm for ordering sparse symmetric positive definite matrices." Thesis, University of British Columbia, 1987. http://hdl.handle.net/2429/27855.

Abstract:
The minimum degree algorithm is known as an effective scheme for identifying a fill-reduced ordering for symmetric, positive definite, sparse linear systems to be solved using a Cholesky factorization. Although the original algorithm has been enhanced to improve the efficiency of its implementation, ties between minimum degree elimination candidates are still broken arbitrarily. For many systems, the fill levels of orderings produced by the minimum degree algorithm are very sensitive to the precise manner in which these ties are resolved. This thesis introduces several tiebreaking enhancements of the minimum degree algorithm. Emphasis is placed upon a tiebreaking strategy based upon the deficiency of minimum degree elimination candidates, which can consistently identify low-fill orderings for a wide spectrum of test problems. All tiebreaking strategies are fully integrated into implementations of the minimum degree algorithm based upon a quotient graph model, including indistinguishable sets represented by uneliminated supernodes. The resulting programs are tested on a wide variety of sparse systems in order to investigate the performance of the algorithm enhanced by the tiebreaking strategies and the quality of the orderings they produce.
Faculty of Science
Department of Computer Science
Graduate
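As an informal illustration of the tiebreaking idea described in the abstract above, the following sketch runs a bare-bones minimum degree ordering on a plain adjacency-set graph and breaks degree ties by the candidates' deficiency (the number of fill edges their elimination would create). It is a toy model only; the thesis works with a quotient graph representation and supernodes, which are not reproduced here, and the function names and example graph are invented for this illustration.

```python
# Minimal sketch of minimum degree ordering with deficiency-based tiebreaking,
# on a plain adjacency-set graph (not the quotient-graph implementation with
# supernodes described in the thesis).

def deficiency(adj, v):
    """Number of fill edges created if v were eliminated now."""
    nbrs = list(adj[v])
    fill = 0
    for i in range(len(nbrs)):
        for j in range(i + 1, len(nbrs)):
            if nbrs[j] not in adj[nbrs[i]]:
                fill += 1
    return fill

def min_degree_order(adj):
    """adj: dict {vertex: set(neighbours)}; returns an elimination order."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}   # work on a copy
    order = []
    while adj:
        dmin = min(len(nbrs) for nbrs in adj.values())
        ties = [v for v in adj if len(adj[v]) == dmin]
        v = min(ties, key=lambda u: deficiency(adj, u))   # tiebreak on deficiency
        nbrs = adj.pop(v)                  # eliminate v: its neighbours become a clique
        for u in nbrs:
            adj[u].discard(v)
            adj[u] |= (nbrs - {u})
        order.append(v)
    return order

# Example: ordering the sparsity graph of a small SPD matrix pattern
graph = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 4}, 3: {1}, 4: {2}}
print(min_degree_order(graph))
```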
4

Birk, Sebastian [Verfasser]. "Deflated Shifted Block Krylov Subspace Methods for Hermitian Positive Definite Matrices / Sebastian Birk." Wuppertal : Universitätsbibliothek Wuppertal, 2015. http://d-nb.info/1073127559/34.

5

Woodgate, K. G. "Optimization over positive semi-definite symmetric matrices with application to Quasi-Newton algorithms." Thesis, Imperial College London, 1987. http://hdl.handle.net/10044/1/46914.

6

Tsai, Wenyu Julie. "Neural computation of the eigenvectors of a symmetric positive definite matrix." CSUSB ScholarWorks, 1996. https://scholarworks.lib.csusb.edu/etd-project/1242.

7

Nader, Rafic. "A study concerning the positive semi-definite property for similarity matrices and for doubly stochastic matrices with some applications." Thesis, Normandie, 2019. http://www.theses.fr/2019NORMC210.

Abstract:
Matrix theory has shown its importance through its wide range of applications in fields such as statistics, machine learning, economics and signal processing. This thesis concerns three main axes related to two fundamental objects of study in matrix theory that arise naturally in many applications: positive semi-definite matrices and doubly stochastic matrices. One concept which stems naturally from machine learning and is related to the positive semi-definite property is that of similarity matrices. In fact, similarity matrices that are positive semi-definite are of particular importance because of their ability to define metric distances. This thesis explores this desirable structure for a list of similarity matrices found in the literature. Moreover, we present new results concerning the strictly positive definite and the three positive semi-definite properties of particular similarity matrices. A detailed discussion of the many applications of all these properties in various fields is also given. On the other hand, an interesting research field in matrix analysis involves the study of roots of stochastic matrices, which is important in Markov chain models in finance and healthcare. We extend the analysis of this problem to positive semi-definite doubly stochastic matrices. Our contributions include some geometrical properties of the set of all positive semi-definite doubly stochastic matrices of order n having a doubly stochastic pth root for a given integer p. We also present methods for finding classes of positive semi-definite doubly stochastic matrices that have doubly stochastic pth roots for all p, by making use of the theory of M-matrices and of the symmetric doubly stochastic inverse eigenvalue problem (SDIEP), which is also of independent interest. In the context of the SDIEP, which is the problem of characterising those lists of real numbers that are realisable as the spectrum of some symmetric doubly stochastic matrix, we present some new results along this line. In particular, we propose to use a recursive method of constructing doubly stochastic matrices from smaller matrices with known spectra to obtain new independent sufficient conditions for the SDIEP. Finally, we focus our attention on the realisability, by a symmetric doubly stochastic matrix, of normalised Suleimanova spectra, a normalised variant of the spectra introduced by Suleimanova. In particular, we prove that such spectra are not always realisable for odd orders, and we construct three families of sufficient conditions that refine previously known sufficient conditions for the SDIEP in this particular case.
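As a small illustration of the positive semi-definite property of similarity matrices discussed above, the sketch below builds a Gaussian (RBF) similarity matrix, one standard similarity that is known to be positive semi-definite, and checks its eigenvalues numerically. It does not reproduce the particular similarity measures or the doubly stochastic constructions studied in the thesis; the data and bandwidth are arbitrary.

```python
# Quick numerical check of the positive semi-definite property for one standard
# similarity matrix, the Gaussian (RBF) similarity
#     S_ij = exp(-||x_i - x_j||^2 / (2*sigma^2)).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3))                      # six sample points in R^3
sigma = 1.0

D2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)   # pairwise squared distances
S = np.exp(-D2 / (2 * sigma ** 2))                           # similarity matrix

print(np.linalg.eigvalsh(S).min())   # >= 0 up to round-off: S is positive semi-definite
```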
8

陳志輝 and Chi-fai Alan Bryan Chan. "Some aspects of generalized numerical ranges and numerical radii associated with positive semi-definite functions." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1993. http://hub.hku.hk/bib/B31232954.

9

Chan, Chi-fai Alan Bryan. "Some aspects of generalized numerical ranges and numerical radii associated with positive semi-definite functions /." [Hong Kong : University of Hong Kong], 1993. http://sunzi.lib.hku.hk/hkuto/record.jsp?B13525256.

10

Bajracharya, Neeraj. "Level Curves of the Angle Function of a Positive Definite Symmetric Matrix." Thesis, University of North Texas, 2009. https://digital.library.unt.edu/ark:/67531/metadc28376/.

Abstract:
Given a real N by N matrix A, write p(A) for the maximum angle by which A rotates any unit vector. Suppose that A and B are positive definite symmetric (PDS) N by N matrices. Then their Jordan product {A, B} := AB + BA is also symmetric, but not necessarily positive definite. If p(A) + p(B) is obtuse, then there exists a special orthogonal matrix S such that {A, SBS^(-1)} is indefinite. Of course, if A and B commute, then {A, B} is positive definite. Our work grows from the following question: if A and B are commuting positive definite symmetric matrices such that p(A) + p(B) is obtuse, what is the minimal p(S) such that {A, SBS^(-1)} is indefinite? In this dissertation we will describe the level curves of the angle function mapping a unit vector x to the angle between x and Ax for a 3 by 3 PDS matrix A, and discuss their interaction with those of a second such matrix.
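The quantities in the abstract above can be explored numerically. The sketch below estimates p(A), the largest angle by which a positive definite symmetric matrix rotates a unit vector, by brute-force sampling, and forms the Jordan product {A, B} = AB + BA for a commuting pair. The matrices and the sampling approach are chosen purely for illustration and are not taken from the dissertation.

```python
# Numeric sketch: the angle between x and Ax, its maximum p(A) over unit
# vectors (estimated by Monte Carlo sampling), and the Jordan product
# {A, B} = AB + BA.  The matrices and sample size are arbitrary choices.
import numpy as np

def p(A, n_samples=200_000, seed=0):
    """Monte Carlo estimate of p(A): the largest angle by which A rotates a unit vector."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n_samples, A.shape[0]))
    X /= np.linalg.norm(X, axis=1, keepdims=True)     # random unit vectors
    AX = X @ A.T
    cos = np.sum(X * AX, axis=1) / np.linalg.norm(AX, axis=1)
    return np.arccos(np.clip(cos, -1.0, 1.0)).max()

A = np.diag([1.0, 4.0, 9.0])          # a 3x3 positive definite symmetric matrix
B = np.diag([1.0, 1.0, 16.0])         # a second one, commuting with A
print("p(A) + p(B) ~", p(A) + p(B))   # roughly 2.0 rad for these matrices, i.e. obtuse

jordan = A @ B + B @ A                # {A, B}; A and B commute, so it is positive definite
print("eigenvalues of {A, B}:", np.linalg.eigvalsh(jordan))
```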
11

Simpson, Daniel Peter. "Krylov subspace methods for approximating functions of symmetric positive definite matrices with applications to applied statistics and anomalous diffusion." Queensland University of Technology, 2008. http://eprints.qut.edu.au/29751/.

Abstract:
Matrix function approximation is a current focus of worldwide interest and finds application in a variety of areas of applied mathematics and statistics. In this thesis we focus on the approximation of A^(-α/2) b, where A ∈ R^(n×n) is a large, sparse symmetric positive definite matrix and b ∈ R^n is a vector. In particular, we will focus on matrix function techniques for sampling from Gaussian Markov random fields in applied statistics and the solution of fractional-in-space partial differential equations. Gaussian Markov random fields (GMRFs) are multivariate normal random variables characterised by a sparse precision (inverse covariance) matrix. GMRFs are popular models in computational spatial statistics as the sparse structure can be exploited, typically through the use of the sparse Cholesky decomposition, to construct fast sampling methods. It is well known, however, that for sufficiently large problems, iterative methods for solving linear systems outperform direct methods. Fractional-in-space partial differential equations arise in models of processes undergoing anomalous diffusion. Unfortunately, as the fractional Laplacian is a non-local operator, numerical methods based on the direct discretisation of these equations typically require the solution of dense linear systems, which is impractical for fine discretisations. In this thesis, novel applications of Krylov subspace approximations to matrix functions for both of these problems are investigated. Matrix functions arise when sampling from a GMRF by noting that the Cholesky decomposition A = LL^T is, essentially, a 'square root' of the precision matrix A. Therefore, we can replace the usual sampling method, which forms x = L^(-T) z, with x = A^(-1/2) z, where z is a vector of independent and identically distributed standard normal random variables. Similarly, the matrix transfer technique can be used to build solutions to the fractional Poisson equation of the form u = A^(-α/2) b, where A is the finite difference approximation to the Laplacian. Hence both applications require the approximation of f(A)b, where f(t) = t^(-α/2) and A is sparse. In this thesis we will compare the Lanczos approximation, the shift-and-invert Lanczos approximation, the extended Krylov subspace method, rational approximations and the restarted Lanczos approximation for approximating matrix functions of this form. A number of new and novel results are presented in this thesis. Firstly, we prove the convergence of the matrix transfer technique for the solution of the fractional Poisson equation and we give conditions by which the finite difference discretisation can be replaced by other methods for discretising the Laplacian. We then investigate a number of methods for approximating matrix functions of the form A^(-α/2) b and investigate stopping criteria for these methods. In particular, we derive a new method for restarting the Lanczos approximation to f(A)b. We then apply these techniques to the problem of sampling from a GMRF and construct a full suite of methods for sampling conditioned on linear constraints and approximating the likelihood. Finally, we consider the problem of sampling from a generalised Matern random field, which combines our techniques for solving fractional-in-space partial differential equations with our method for sampling from GMRFs.
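A minimal dense-arithmetic sketch of the basic building block described above, the Lanczos approximation of f(A)b with f(t) = t^(-1/2), is given below. It uses no reorthogonalisation, restarting or stopping criterion, so it only illustrates the idea, not the methods developed in the thesis; the test matrix, the step count m and the helper name are assumptions made for the example.

```python
# Lanczos approximation of f(A) b with f(t) = t**(-0.5), the kernel of sampling
# x = A^(-1/2) z from a GMRF with sparse precision matrix A.  Dense arithmetic,
# no reorthogonalisation or restarting: an illustration only.
import numpy as np

def lanczos_f_times_b(A, b, m, f):
    """Approximate f(A) @ b with an m-step Lanczos process."""
    n = len(b)
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m)
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        beta[j] = np.linalg.norm(w)
        if j + 1 < m:
            V[:, j + 1] = w / beta[j]
    # T is the tridiagonal Rayleigh quotient V^T A V; apply f through its eigenpairs
    T = np.diag(alpha) + np.diag(beta[:m - 1], 1) + np.diag(beta[:m - 1], -1)
    evals, evecs = np.linalg.eigh(T)
    fT_e1 = evecs @ (f(evals) * evecs[0, :])        # f(T) applied to e_1
    return np.linalg.norm(b) * (V @ fT_e1)

# Toy SPD precision matrix (shifted 1-D Laplacian) and a standard normal vector
n = 200
A = 2.1 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
z = np.random.default_rng(1).normal(size=n)

x = lanczos_f_times_b(A, z, m=60, f=lambda t: 1.0 / np.sqrt(t))

# Compare against a dense reference computed by eigendecomposition
w, Q = np.linalg.eigh(A)
x_exact = Q @ ((1.0 / np.sqrt(w)) * (Q.T @ z))
print("relative error:", np.linalg.norm(x - x_exact) / np.linalg.norm(x_exact))
```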
12

Chen, Chia-Liang, and 陳家樑. "SOME INEQUALITIES FOR POSITIVE DEFINITE MATRICES." Thesis, 1995. http://ndltd.ncl.edu.tw/handle/00819699627310628085.

13

Huang, De. "Positive Definite Matrices: Compression, Decomposition, Eigensolver, and Concentration." Thesis, 2020. https://thesis.library.caltech.edu/13715/8/Huang_De_2020.pdf.

Abstract:

For many decades, the study of positive-definite (PD) matrices has been one of the most popular subjects across a wide range of scientific research. Many successful models involving PD matrices have been proposed and developed in the fields of mathematics, physics, biology, etc., leading to a celebrated richness of theories and algorithms. In this thesis, we draw our attention to a general class of PD matrices that can be decomposed as the sum of a sequence of positive-semidefinite matrices. For this class of PD matrices, we will develop theories and algorithms on operator compression, multilevel decomposition, eigenpair computation, and spectrum concentration. We divide these contents into three main parts.

In the first part, we propose an adaptive fast solver for the preceding class of PD matrices, which includes the well-known graph Laplacians. We achieve this by establishing an adaptive operator compression scheme and a multiresolution matrix factorization algorithm, which have nearly optimal performance in both complexity and well-posedness. To develop our methods, we introduce a novel notion of energy decomposition for PD matrices and two important local measurement quantities, which provide theoretical guarantees and computational guidance for the construction of an appropriate partition and a nested adaptive basis.

In the second part, we propose a new iterative method to hierarchically compute a relatively large number of leftmost eigenpairs of a sparse PD matrix under the multiresolution matrix compression framework. We exploit the well-conditioned property of each decomposition component by integrating the multiresolution framework into the Implicitly Restarted Lanczos method. We achieve this combination by proposing an extension-refinement iterative scheme, in which the intrinsic idea is to decompose the target spectrum into several segments such that the corresponding eigenproblem in each segment is well-conditioned.

In the third part, we derive concentration inequalities on partial sums of eigenvalues of random PD matrices by introducing the notion of k-trace. For this purpose, we establish a generalized Lieb's concavity theorem, which extends the original Lieb's concavity theorem from the normal trace to k-traces. Our argument employs a variety of matrix techniques and concepts, including exterior algebra, mixed discriminant, and operator interpolation.
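As a small illustration of the class of matrices considered in this thesis, the sketch below writes a (shifted) graph Laplacian as a sum of rank-one positive-semidefinite edge contributions. The compression, eigensolver and concentration machinery itself is not sketched; the graph and the shift are arbitrary choices for the example.

```python
# Toy illustration: a graph Laplacian, plus a small diagonal shift to make it
# strictly positive definite, written as a sum of rank-one PSD edge terms.
import numpy as np

n = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]      # a small graph on 4 vertices

terms = []
for i, j in edges:
    e = np.zeros(n)
    e[i], e[j] = 1.0, -1.0
    terms.append(np.outer(e, e))                      # (e_i - e_j)(e_i - e_j)^T: PSD, rank one

L = sum(terms) + 1e-2 * np.eye(n)                     # PD matrix expressed as a sum of PSD pieces
print(np.linalg.eigvalsh(L))                          # all eigenvalues strictly positive
```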

14

Wu, Bo-Jun, and 吳博雋. "GPU-based Cholesky decomposition for symmetric positive definite matrices." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/99593048031102219679.

Abstract:
Master's degree
Tamkang University
Department of Information Management, Master's Program
Academic year 100
This work aims to apply the recently developed graphics processing unit (GPU) to enhance the performance of a specific matrix operation. When solving the linear least squares problem, it is often necessary to compute the inverse of a covariance matrix. As the covariance matrix is a symmetric positive definite matrix, the Cholesky decomposition can be used, which is about twice as fast as the LU decomposition of a general matrix. In recent years, with advances in technology, a graphics card can accommodate hundreds of cores, compared with the 8 or 16 cores of a typical CPU. Therefore a trend has emerged of using the graphics card as a general-purpose graphics processing unit (GPGPU) for parallel computation. There are already many works on parallel matrix operations in the literature. This work focuses on the Cholesky decomposition commonly used in computing the inverse of a symmetric positive definite matrix. First, several open-source GPU-based Cholesky decomposition programs on the Internet were located, analyzed, and evaluated. Then several strategies for performance improvement were proposed and tested. In experiments, compared to the CPU version using the Intel Math Kernel Library (MKL), the proposed GPU improvement strategy achieves a speedup of 3.5x on the Cholesky decomposition of a square matrix of dimension 10,000.
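For reference, the sketch below shows the CPU-side factor-once-then-reuse pattern described in the abstract, using SciPy's Cholesky routines on a small synthetic covariance matrix. It is only a baseline for what a GPU kernel would accelerate, not the GPU code evaluated in the thesis; sizes and data are arbitrary.

```python
# Factor a symmetric positive definite covariance matrix once with Cholesky,
# then reuse the factor for solves (or an explicit inverse).
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 50))
C = X.T @ X / X.shape[0] + 1e-6 * np.eye(50)   # SPD covariance estimate

factor = cho_factor(C)                         # ~n^3/3 flops, about half the cost of LU
b = rng.normal(size=50)
x = cho_solve(factor, b)                       # solve C x = b by reusing the factor
C_inv = cho_solve(factor, np.eye(50))          # explicit inverse, if one is really needed

print(np.allclose(C @ x, b))                   # True
```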
15

Lin, Shihhua, and 林世華. "Information Metric and Geometric Mean of Positive Definite Matrices." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/63531465884689548214.

Abstract:
Master's degree
Tunghai University
Department of Applied Mathematics
Academic year 101
The geometric mean of two positive definite matrices was first given by Pusz and Woronowicz in 1975. It has many of the properties of the geometric mean of two positive numbers. In 2004, Ando, Li and Mathias listed ten properties that a geometric mean of m matrices should satisfy and gave a definition of the geometric mean of m matrices, via an iteration, that satisfies these ten properties. For the geometric mean of two positive definite matrices, there is an interesting relationship between the matrix geometric mean and the information metric: if the set of all positive definite matrices is regarded as a Riemannian manifold with the information metric, then the geometric mean of two positive definite matrices is the midpoint of the geodesic connecting the two matrices. In this thesis, we present two different proofs of this relationship, one by variation and one by the exponential map, and we verify which properties hold for the geometric mean of two matrices. In the case m = 3, we introduce a completely elementary proof of the convergence of the iteration, and then give proofs of most of these ten properties.
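For reference, the two-matrix geometric mean mentioned above is A # B = A^(1/2) (A^(-1/2) B A^(-1/2))^(1/2) A^(1/2), the midpoint of the geodesic joining A and B under the information metric. The sketch below computes it with a symmetric square root and checks the Riccati characterisation G A^(-1) G = B; the test matrices are random, and the Ando-Li-Mathias iteration for m >= 3 matrices is not reproduced.

```python
# Two-matrix geometric mean A # B = A^(1/2) (A^(-1/2) B A^(-1/2))^(1/2) A^(1/2).
import numpy as np

def sym_sqrt(S):
    """Symmetric square root of a symmetric positive definite matrix."""
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.sqrt(w)) @ V.T

def geometric_mean(A, B):
    As = sym_sqrt(A)
    Ais = np.linalg.inv(As)
    return As @ sym_sqrt(Ais @ B @ Ais) @ As

rng = np.random.default_rng(0)
M, N = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
A = M @ M.T + np.eye(4)                        # two random symmetric positive definite matrices
B = N @ N.T + np.eye(4)

G = geometric_mean(A, B)
# Riccati characterisation: G is the unique positive definite solution of G A^(-1) G = B
print(np.allclose(G @ np.linalg.inv(A) @ G, B))
```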
16

Khoury, Maroun Clive. "Products of diagonalizable matrices." Diss., 2009. http://hdl.handle.net/10500/787.

Abstract:
Chapter 1 reviews better-known factorization theorems of a square matrix. For example, a square matrix over a field can be expressed as a product of two symmetric matrices; thus square matrices over real numbers can be factorized into two diagonalizable matrices. Factorizing matrices over complex numbers into Hermitian matrices is discussed. The chapter concludes with theorems that enable one to prescribe the eigenvalues of the factors of a square matrix, with some degree of freedom. Chapter 2 proves that a square matrix over arbitrary fields (with one exception) can be expressed as a product of two diagonalizable matrices. The next two chapters consider decomposition of singular matrices into Idempotent matrices, and of nonsingular matrices into Involutions. Chapter 5 studies factorization of a complex matrix into Positive-(semi)definite matrices, emphasizing the least number of such factors required.
Mathematical Sciences
M.Sc. (MATHEMATICS)
17

Khoury, Maroun Clive. "Products of diagonalizable matrices." Diss., 2002. http://hdl.handle.net/10500/17081.

Abstract:
Chapter 1 reviews better-known factorization theorems of a square matrix. For example, a square matrix over a field can be expressed as a product of two symmetric matrices; thus square matrices over real numbers can be factorized into two diagonalizable matrices. Factorizing matrices over complex numbers into Hermitian matrices is discussed. The chapter concludes with theorems that enable one to prescribe the eigenvalues of the factors of a square matrix, with some degree of freedom. Chapter 2 proves that a square matrix over arbitrary fields (with one exception) can be expressed as a product of two diagonalizable matrices. The next two chapters consider decomposition of singular matrices into Idempotent matrices, and of nonsingular matrices into Involutions. Chapter 5 studies factorization of a complex matrix into Positive-(semi)definite matrices, emphasizing the least number of such factors required.
Mathematical Sciences
M. Sc. (Mathematics)
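A hand-picked instance of the Chapter 1 factorisation reviewed in the abstract above: the sketch below writes a non-symmetric rotation matrix as a product of two symmetric (hence diagonalizable) real matrices and verifies it numerically. The particular matrices are chosen for illustration and do not come from the dissertation.

```python
# The non-symmetric matrix A (a rotation by -90 degrees) as a product of two
# symmetric, hence diagonalizable, real matrices.
import numpy as np

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])          # not symmetric

S1 = np.array([[0.0, 1.0],
               [1.0, 0.0]])          # symmetric
S2 = np.array([[-1.0, 0.0],
               [0.0, 1.0]])          # symmetric

print(np.allclose(A, S1 @ S2))                             # True: A = S1 * S2
print(np.linalg.eigvalsh(S1), np.linalg.eigvalsh(S2))      # real spectra: both diagonalizable
```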
18

Gaoseb, Frans Otto. "Spectral factorization of matrices." Diss., 2020. http://hdl.handle.net/10500/26844.

Abstract:
This dissertation analyzes and compares current research on the spectral factorization of non-singular and singular matrices. We show that a non-singular non-scalar matrix A can be written as a product A = BC where the eigenvalues of B and C are arbitrarily prescribed subject to the condition that the product of the eigenvalues of B and C must be equal to the determinant of A. Further, B and C can be simultaneously triangularised as a lower and upper triangular matrix respectively. Singular matrices will be factorized in terms of nilpotent matrices, over an arbitrary or complex field, in order to present an integrated and detailed report on the current state of research in this area. Applications related to unipotent, positive-definite, commutator, involutory and Hermitian factorization are studied for non-singular matrices, while applications related to positive-semidefinite matrices are investigated for singular matrices. We will consider the theorems found in Sourour [24] and Laffey [17] to show that a non-singular non-scalar matrix can be factorized spectrally. The same two articles will be used to show applications to unipotent, positive-definite and commutator factorization. Applications related to Hermitian factorization will be considered in [26]. Laffey [18] shows that a non-singular matrix A with det A = ±1 is a product of four involutions with certain conditions on the arbitrary field. To aid with this conclusion, a thorough study is made of Hoffman [13], who shows that an invertible linear transformation T of a finite dimensional vector space over a field is a product of two involutions if and only if T is similar to T^(-1). Sourour shows in [24] that if A is an n × n matrix over an arbitrary field containing at least n + 2 elements and if det A = ±1, then A is the product of at most four involutions. We will review the work of Wu [29] and show that a singular matrix A of order n ≥ 2 over the complex field can be expressed as a product of two nilpotent matrices, where the rank of each of the factors is the same as that of A, except when A is a 2 × 2 nilpotent matrix of rank one. Nilpotent factorization of singular matrices over an arbitrary field will also be investigated. Laffey [17] shows that the result of Wu, which he established over the complex field, is also valid over an arbitrary field by making use of a special matrix factorization involving similarity to an LU factorization. His proof is based on an application of Fitting's Lemma to express, up to similarity, a singular matrix as a direct sum of a non-singular and a nilpotent matrix, and then to write the non-singular component as a product of a lower and upper triangular matrix using a matrix factorization theorem of Sourour [24]. The main theorem by Sourour and Tang [26] will be investigated to highlight the necessary and sufficient conditions for a singular matrix to be written as a product of two matrices with prescribed eigenvalues. This result is used to prove applications related to positive-semidefinite matrices for singular matrices.
National Research Foundation of South Africa
Mathematical Sciences
M Sc. (Mathematics)
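One elementary special case of the involution factorizations surveyed above can be checked directly: a plane rotation (determinant 1) is the product of two reflections, each of which is an involution. The sketch below verifies this numerically for one angle; the choice of example is ours, and the general four-involution results of Laffey and Sourour are not reproduced.

```python
# Elementary check: a plane rotation (det = 1) is a product of two reflections,
# each an involution (F @ F = I).  The angle is chosen only for illustration.
import numpy as np

def rotation(theta):
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def reflection(phi):
    """Reflection across the line making angle phi with the x-axis."""
    return np.array([[np.cos(2 * phi),  np.sin(2 * phi)],
                     [np.sin(2 * phi), -np.cos(2 * phi)]])

theta = 0.7
F1 = reflection(theta / 2)     # R(theta) = F1 @ F2, with F2 the reflection across the x-axis
F2 = reflection(0.0)

print(np.allclose(F1 @ F1, np.eye(2)), np.allclose(F2 @ F2, np.eye(2)))  # both involutions
print(np.allclose(F1 @ F2, rotation(theta)))                             # product is the rotation
```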