
Dissertations / Theses on the topic 'Matrix algebra'

Consult the top 50 dissertations / theses for your research on the topic 'Matrix algebra.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Rife, Susan A. "Matrix algebra." Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1996. http://handle.dtic.mil/100.2/ADA316035.

2

Delatorre, Anthony R., and William K. Cooke. "Matrix algebra." Thesis, Monterey, California. Naval Postgraduate School, 1998. http://hdl.handle.net/10945/8658.

Abstract:
Approved for public release; distribution is unlimited. This thesis is designed to act as an instructor's supplement for refresher matrix algebra courses at the Naval Postgraduate School (NPS). The need for a beginning matrix algebra supplement is driven by the unique circumstances of most NPS students. Most military students attend NPS several years after receiving their undergraduate degrees. This supplement, unlike most college textbooks, bridges the gap between the student's educational lay-off and the rigors of mathematically oriented degrees such as applied mathematics, operations research and engineering. By reviewing the fundamental concepts of vectors and matrices, and performing basic operations with them, the student quickly develops the background needed in NPS's demanding curricula. This supplement focuses on matrix and vector operations, linear transformations, systems of linear equations, and computational techniques for solving systems of linear equations. The goal is to enhance current matrix algebra textbooks and help the beginning student build a foundation for higher-level engineering and mathematics based courses.
3

Rubensson, Emanuel H. "Matrix Algebra for Quantum Chemistry." Doctoral thesis, Stockholm : Bioteknologi, Kungliga Tekniska högskolan, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-9447.

4

Esslamzadeh, Gholam Hossein. "Banach algebra structure and amenability of a class of matrix algebras with applications." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape15/PQDD_0002/NQ29033.pdf.

5

Truong, Kevin. "Statistics of eigenvectors in non-invariant random matrix ensembles." Thesis, University of Nottingham, 2018. http://eprints.nottingham.ac.uk/50928/.

Abstract:
In this thesis we begin by presenting an introduction to random matrices, their different classes, and their applications in quantum mechanics, before studying the characteristics of the eigenvectors of a particular random matrix model. The focus of this work is on one of the oldest and most well-known symmetry classes of random matrices, the Gaussian unitary ensemble. We look at how the different possible deformations of the Gaussian unitary ensemble could have an impact on the nature of the eigenvectors, and support our results with numerical simulations to confirm their validity. We begin exploring the structure of the eigenvectors by employing the supersymmetry technique, a method for studying eigenvectors of complex quantum systems. In particular, we can analyse the moments of the eigenvectors, a quantity used in the classification of eigenvectors, in different random matrix models. Eigenvectors can be extended, localised or critical, and the scaling of the moments of the eigenvectors with matrix size N is used to determine the exact type. This enables one to study the transition of the eigenvectors from extended to localised, and the intermediate stages. We consider different classes of random matrices, such as random matrices with an external source and structured random matrices. In particular, we study the Rosenzweig-Porter model, generalising our previous results from a deterministic potential to a random one, and study the impact of such an alteration on the model.
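As a rough illustration of the moment diagnostic mentioned in this abstract (my own numpy sketch, not code from the thesis), the second eigenvector moment, also known as the inverse participation ratio, can be estimated for GUE matrices of growing size N; for extended states it decays roughly like 1/N, while for localised states it stays of order one.

    import numpy as np

    def gue(n, rng):
        # Draw an n x n matrix from the Gaussian unitary ensemble.
        a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
        return (a + a.conj().T) / 2

    rng = np.random.default_rng(0)
    for n in (100, 200, 400):
        _, vecs = np.linalg.eigh(gue(n, rng))
        # Second moment (inverse participation ratio), averaged over all eigenvectors.
        ipr = np.mean(np.sum(np.abs(vecs) ** 4, axis=0))
        print(n, ipr)   # decreases roughly like 1/n for the extended GUE eigenvectors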
6

Ammar, Gregory, Christian Mehl, and Volker Mehrmann. "Schur-Like Forms for Matrix Lie Groups, Lie Algebras and Jordan Algebras." Universitätsbibliothek Chemnitz, 2005. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200501032.

Abstract:
We describe canonical forms for elements of a classical Lie group of matrices under similarity transformations in the group. Matrices in the associated Lie algebra and Jordan algebra of matrices inherit related forms under these similarity transformations. In general, one cannot achieve diagonal or Schur form, but the form that can be achieved displays the eigenvalues of the matrix. We also discuss matrices in intersections of these classes and their Schur-like forms. Such multistructured matrices arise in applications from quantum physics and quantum chemistry.
7

Boito, Paola. "Structured matrix based methods for approximate polynomial GCD." Doctoral thesis, Scuola Normale Superiore, 2008. http://hdl.handle.net/11384/85672.

8

Wilkerson, Owen Tanner. "Fast, Sparse Matrix Factorization and Matrix Algebra via Random Sampling for Integral Equation Formulations in Electromagnetics." UKnowledge, 2019. https://uknowledge.uky.edu/ece_etds/147.

Abstract:
Many systems designed by electrical & computer engineers rely on electromagnetic (EM) signals to transmit, receive, and extract either information or energy. In many cases, these systems are large and complex. Their accurate, cost-effective design requires high-fidelity computer modeling of the underlying EM field/material interaction problem in order to find a design with acceptable system performance. This modeling is accomplished by projecting the governing Maxwell equations onto finite dimensional subspaces, which results in a large matrix equation representation (Zx = b) of the EM problem. In the case of integral equation-based formulations of EM problems, the M-by-N system matrix, Z, is generally dense. For this reason, when treating large problems, it is necessary to use compression methods to store and manipulate Z. One such sparse representation is provided by so-called H^2 matrices. At low-to-moderate frequencies, H^2 matrices provide a controllably accurate data-sparse representation of Z. The scale at which problems in EM are considered "large" is continuously being redefined to be larger. This growth of problem scale is not only happening in EM, but across all other sub-fields of computational science as well. The pursuit of increasingly large problems is unwavering in all these sub-fields, and this drive has long outpaced the rate of advancements in processing and storage capabilities in computing. This has caused computational science communities to face the computational limitations of standard linear algebraic methods that have been relied upon for decades to run quickly and efficiently on modern computing hardware. This common set of algorithms can only produce reliable results quickly and efficiently for small to mid-sized matrices that fit into the memory of the host computer. Therefore, the drive to pursue larger problems has even begun to outpace the reasonable capabilities of these common numerical algorithms; the deterministic numerical linear algebra algorithms that have gotten matrix computation this far have proven to be inadequate for many problems of current interest. This has computational science communities focusing on improvements in their mathematical and software approaches in order to push further advancement. Randomized numerical linear algebra (RandNLA) is an emerging area that both academia and industry believe to be a strong candidate for overcoming the limitations faced when solving massive and computationally expensive problems. This thesis presents results of recent work that uses a random sampling method (RSM) to implement algebraic operations involving multiple H^2 matrices. Significantly, this work is done in a manner that is non-invasive to an existing H^2 code base for filling and factoring H^2 matrices. The work presented thus expands the existing code's capabilities with minimal impact on existing (and well-tested) applications. In addition to this work with randomized H^2 algebra, improvements in sparse factorization methods for the compressed H^2 data structure are also presented. The reported developments in filling and factoring H^2 data structures assist in, and allow for, the further pursuit of large and complex problems in computational EM (CEM) within simulation code bases that utilize the H^2 data structure.
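The random sampling method above is specific to the H^2 code base of the thesis; as a generic point of reference only, a basic RandNLA building block, the randomized range finder for a low-rank factorization (an assumption on my part, in the spirit of Halko, Martinsson and Tropp), looks like this in numpy:

    import numpy as np

    def randomized_low_rank(a, rank, oversample=10, rng=None):
        # Sketch A with a Gaussian test matrix, orthonormalise the sample,
        # and project: A is approximated by Q @ B with rank + oversample columns in Q.
        rng = np.random.default_rng() if rng is None else rng
        omega = rng.normal(size=(a.shape[1], rank + oversample))
        q, _ = np.linalg.qr(a @ omega)
        return q, q.T @ a

    rng = np.random.default_rng(1)
    a = rng.normal(size=(500, 30)) @ rng.normal(size=(30, 400))   # exactly rank 30
    q, b = randomized_low_rank(a, rank=30, rng=rng)
    print(np.linalg.norm(a - q @ b) / np.linalg.norm(a))          # tiny relative error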
9

Wilding, David. "Linear algebra over semirings." Thesis, University of Manchester, 2015. https://www.research.manchester.ac.uk/portal/en/theses/linear-algebra-over-semirings(1dfe7143-9341-4dd1-a0d1-ab976628442d).html.

Abstract:
Motivated by results of linear algebra over fields, rings and tropical semirings, we present a systematic way to understand the behaviour of matrices with entries in an arbitrary semiring. We focus on three closely related problems concerning the row and column spaces of matrices. This allows us to isolate and extract common properties that hold for different reasons over different semirings, yet also lets us identify which features of linear algebra are specific to particular types of semiring. For instance, the row and column spaces of a matrix over a field are isomorphic to each other's duals, as well as to each other, but over a tropical semiring only the first of these properties holds in general (this in itself is a surprising fact). Instead of being isomorphic, the row space and column space of a tropical matrix are anti-isomorphic in a certain order-theoretic and algebraic sense. The first problem is to describe the kernels of the row and column spaces of a given matrix. These equivalence relations generalise the orthogonal complement of a set of vectors, and the nature of their equivalence classes is entirely dependent upon the kind of semiring in question. The second, Hahn-Banach-type, problem is to decide which linear functionals on row and column spaces of matrices have a linear extension. If they all do, the underlying semiring is called exact, and in this case the row and column spaces of any matrix are isomorphic to each other's duals. The final problem is to explain the connection between the row space and column space of each matrix. Our notion of a conjugation on a semiring accounts for the different possibilities in a unified manner, as it guarantees the existence of bijections between row and column spaces and lets us focus on the peculiarities of those bijections. Our main original contribution is the systematic approach described above, but along the way we establish several new results about exactness of semirings. We give sufficient conditions for a subsemiring of an exact semiring to inherit exactness, and we apply these conditions to show that exactness transfers to finite group semirings. We also show that every Boolean ring is exact. This result is interesting because it allows us to construct a ring which is exact (also known as FP-injective) but not self-injective. Finally, we consider exactness for residuated lattices, showing that every involutive residuated lattice is exact. We end by showing that the residuated lattice of subsets of a finite monoid is exact if and only if the monoid is a group.
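For readers unfamiliar with the tropical setting mentioned above, here is a small numpy illustration (mine, not the thesis's) of matrix multiplication over the max-plus semiring, where 'addition' is max and 'multiplication' is +; row and column spaces are then the max-plus combinations of rows and columns.

    import numpy as np

    def maxplus_matmul(a, b):
        # (A (x) B)[i, j] = max_k (A[i, k] + B[k, j]) over the max-plus semiring.
        return np.max(a[:, :, None] + b[None, :, :], axis=1)

    a = np.array([[0.0, 3.0], [-1.0, 2.0]])
    b = np.array([[1.0, 0.0], [2.0, 4.0]])
    print(maxplus_matmul(a, b))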
10

Tadanki, Sasidhar. "Multiple resonant multiconductor transmission line resonator design using circulant block matrix algebra." Digital WPI, 2018. https://digitalcommons.wpi.edu/etd-dissertations/249.

Abstract:
The purpose of this dissertation is to provide a theoretical model to design RF coils using multiconductor transmission line (MTL) structures for MRI applications. In this research, an MTL structure is represented as a multiport network using its port admittance matrix. Resonant conditions and closed-form solutions for different port resonant modes are calculated by solving the eigenvalue problem of the port admittance matrix using block matrix algebra. A mathematical proof to show that the solution of the characteristic equation of the port admittance matrix is equivalent to solving the source side input impedance is presented. The proof is derived by writing the transmission chain parameter matrix of an MTL structure, and mathematically manipulating the chain parameter matrix to produce a solution to the characteristic equation of the port admittance matrix. A port admittance matrix can take one of three forms depending on the type of MTL structure: a circulant matrix, a circulant block (CB) matrix, or a block circulant circulant block (BCCB) matrix. A circulant matrix can be diagonalized by a simple Fourier matrix, and a BCCB matrix can be diagonalized by using matrices formed from Kronecker products of Fourier matrices. For a CB matrix, instead of diagonalizing to compute the eigenvalues, a powerful technique called the "reduced dimension method" can be used. In the reduced dimension method, the eigenvalues of a circulant block matrix are computed as a set of the eigenvalues of matrices of reduced dimension. The required reduced dimension matrices are created using a combination of the polynomial representor of a circulant matrix and a permutation matrix. A detailed mathematical formulation of the reduced dimension method is presented in this thesis. With the application of the reduced dimension method to a 2n+1 MTL structure, the computation of eigenvalues of a 4n x 4n port admittance matrix is simplified to the computation of eigenvalues of 2n matrices of size 2 x 2. In addition to reduced computations, the model also facilitates analytical formulations for coil resonant conditions. To demonstrate the effectiveness of the proposed methods (2n port model and reduced dimension method), a two-step approach was adopted. First, a standard published RF coil was analyzed using the proposed models. The obtained resonant conditions were then compared with the published values and verified by full-wave numerical simulations. Second, two new dual-tuned coils, a surface coil design using the 2n port model and a volume coil design using the reduced dimension method, were proposed, constructed, and bench tested. Their validation was carried out by employing 3D EM simulations as well as undertaking MR imaging on clinical scanners. Imaging experiments were conducted on phantoms, and the investigations indicate that the RF coils achieve good performance characteristics and a high signal-to-noise ratio in the regions of interest.
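The diagonalisation fact used above can be checked numerically; the following SciPy snippet (illustrative only, not from the dissertation) verifies that a circulant matrix is diagonalised by the unitary Fourier matrix and that its eigenvalues are the DFT of its first column.

    import numpy as np
    from scipy.linalg import circulant, dft

    col = np.array([4.0, 1.0, 0.0, 1.0])
    c = circulant(col)                    # circulant matrix with first column col
    f = dft(len(col), scale='sqrtn')      # unitary DFT matrix
    d = f @ c @ f.conj().T                # F C F^H should be diagonal
    print(np.allclose(d, np.diag(np.diag(d))))        # True
    print(np.allclose(np.diag(d), np.fft.fft(col)))   # eigenvalues = DFT of first column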
11

Gardiner, Eric. "The design of non-orthogonal experiments with a factorial treatment structure." Thesis, University of Reading, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.280904.

12

Veras, Richard Michael. "A Systematic Approach for Obtaining Performance on Matrix-Like Operations." Research Showcase @ CMU, 2017. http://repository.cmu.edu/dissertations/1011.

Abstract:
Scientific Computation plays a critical role in the scientific process because it allows us to ask complex queries and test predictions that would otherwise be infeasible to perform experimentally. Because of its power, Scientific Computing has helped drive advances in many fields, ranging from Engineering and Physics to Biology and Sociology to Economics and Drug Development, and even to Machine Learning and Artificial Intelligence. Common among these domains is the desire for timely computational results, thus a considerable amount of human expert effort is spent towards obtaining performance for these scientific codes. However, this is no easy task because each of these domains presents its own unique set of challenges to software developers, such as domain specific operations, structurally complex data and ever-growing datasets. Compounding these problems is the myriad of constantly changing, complex and unique hardware platforms that an expert must target. Unfortunately, an expert is typically forced to reproduce their effort across multiple problem domains and hardware platforms. In this thesis, we demonstrate the automatic generation of expert-level high-performance scientific codes for Dense Linear Algebra (DLA), Structured Mesh (Stencil), Sparse Linear Algebra and Graph Analytics. In particular, this thesis seeks to address the issue of obtaining performance on many complex platforms for a certain class of matrix-like operations that span across many scientific, engineering and social fields. We do this by automating a method used for obtaining high performance in DLA and extending it to structured, sparse and scale-free domains. We argue that it is the use of the underlying structure found in the data from these domains that enables this process. Thus, obtaining performance for most operations does not occur in isolation of the data being operated on, but instead depends significantly on the structure of the data.
13

Lundholm, Douglas. "Zero-energy states in supersymmetric matrix models." Doctoral thesis, KTH, Matematik (Avd.), 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-12846.

Abstract:
The work of this Ph.D. thesis in mathematics concerns the problem of determining existence, uniqueness, and structure of zero-energy states in supersymmetric matrix models, which arise from a quantum mechanical description of the physics of relativistic membranes, reduced Yang-Mills gauge theory, and of nonperturbative features of string theory and M-theory, respectively. Several new approaches to this problem are introduced and considered in the course of seven scientific papers, including: construction by recursive methods (Papers A and D), deformations and alternative models (Papers B and C), averaging with respect to symmetries (Paper E), and weighted supersymmetry and index theory (Papers F and G). The mathematical tools used and developed for these approaches include Clifford algebras and associated representation theory, structure of supersymmetric quantum mechanics, as well as spectral theory of (matrix-) Schrödinger operators.
14

Stothers, Andrew James. "On the complexity of matrix multiplication." Thesis, University of Edinburgh, 2010. http://hdl.handle.net/1842/4734.

Abstract:
The evaluation of the product of two matrices can be very computationally expensive. The multiplication of two n × n matrices, using the “default” algorithm, can take O(n^3) field operations in the underlying field k. It is therefore desirable to find algorithms to reduce the “cost” of multiplying two matrices together. If multiplication of two n × n matrices can be obtained in O(n^α) operations, the least upper bound for α is called the exponent of matrix multiplication and is denoted by ω. A bound of ω < 3 was found in 1968 by Strassen with his algorithm. He found that multiplication of two 2 × 2 matrices could be obtained in 7 multiplications in the underlying field k, as opposed to the 8 required previously. Using recursion, we are able to show that ω ≤ log_2 7 < 2.8074, which is better than the value of 3 we had previously. In chapter 1, we look at various techniques that have been found for reducing ω. These include Pan's Trilinear Aggregation, Bini's Border Rank and Schönhage's Asymptotic Sum Inequality. In chapter 2, we look in detail at the current best estimate of ω found by Coppersmith and Winograd. We also propose a different method of evaluating the “value” of trilinear forms. Chapters 3 and 4 build on the work of Coppersmith and Winograd and examine how cubing and raising to the fourth power of Coppersmith and Winograd's “complicated” algorithm affect the value of ω, if at all. Finally, in chapter 5, we look at the group-theoretic context proposed by Cohn and Umans, and see how we can derive some of Coppersmith and Winograd's values using this method, as well as showing how working in this context can perhaps be more conducive to showing ω = 2.
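The recursion behind the bound ω ≤ log_2 7 can be made concrete; the following Python sketch (illustrative, not the thesis's code) applies Strassen's seven-product 2 × 2 scheme recursively to matrices whose order is a power of two, falling back to ordinary multiplication below a cutoff.

    import numpy as np

    def strassen(a, b, cutoff=64):
        # Strassen multiplication for n x n matrices, n a power of two.
        n = a.shape[0]
        if n <= cutoff:
            return a @ b
        h = n // 2
        a11, a12, a21, a22 = a[:h, :h], a[:h, h:], a[h:, :h], a[h:, h:]
        b11, b12, b21, b22 = b[:h, :h], b[:h, h:], b[h:, :h], b[h:, h:]
        m1 = strassen(a11 + a22, b11 + b22, cutoff)   # seven recursive products
        m2 = strassen(a21 + a22, b11, cutoff)
        m3 = strassen(a11, b12 - b22, cutoff)
        m4 = strassen(a22, b21 - b11, cutoff)
        m5 = strassen(a11 + a12, b22, cutoff)
        m6 = strassen(a21 - a11, b11 + b12, cutoff)
        m7 = strassen(a12 - a22, b21 + b22, cutoff)
        c = np.empty_like(a)
        c[:h, :h] = m1 + m4 - m5 + m7
        c[:h, h:] = m3 + m5
        c[h:, :h] = m2 + m4
        c[h:, h:] = m1 - m2 + m3 + m6
        return c

    rng = np.random.default_rng(0)
    x, y = rng.normal(size=(256, 256)), rng.normal(size=(256, 256))
    print(np.allclose(strassen(x, y), x @ y))   # True

Solving the recurrence T(n) = 7 T(n/2) + O(n^2) gives T(n) = O(n^{log_2 7}).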
15

Kannan, Ramaseshan. "Numerical linear algebra problems in structural analysis." Thesis, University of Manchester, 2014. https://www.research.manchester.ac.uk/portal/en/theses/numerical-linear-algebra-problems-in-structural-analysis(7df0f708-fc12-4807-a1f5-215960d9c4d4).html.

Abstract:
A range of numerical linear algebra problems that arise in finite element-based structural analysis are considered. These problems were encountered when implementing the finite element method in the software package Oasys GSA. We present novel solutions to these problems in the form of a new method for error detection, algorithms with superior numerical efficiency, and algorithms with scalable performance on parallel computers. The solutions and their corresponding software implementations have been integrated into GSA's program code, and we present results that demonstrate the use of these implementations by engineers to solve real-world structural analysis problems.
16

Murfitt, Louise. "Discrete event dynamic systems in max-algebra : realisation and related combinatorial problems." Thesis, University of Birmingham, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.368451.

17

Phillips, Adam. "GPU Accelerated Approach to Numerical Linear Algebra and Matrix Analysis with CFD Applications." Honors in the Major Thesis, University of Central Florida, 2014. http://digital.library.ucf.edu/cdm/ref/collection/ETH/id/1635.

Abstract:
A GPU accelerated approach to numerical linear algebra and matrix analysis with CFD applications is presented. The work's objectives are to (1) develop stable and efficient algorithms utilizing multiple NVIDIA GPUs with CUDA to accelerate common matrix computations, (2) optimize these algorithms through CPU/GPU memory allocation, GPU kernel development, CPU/GPU communication, data transfer and bandwidth control to (3) develop parallel CFD applications for Navier-Stokes and Lattice Boltzmann analysis methods. Special consideration is given to performing the linear algebra algorithms on certain matrix types (banded, dense, diagonal, sparse, symmetric and triangular). Benchmarks are performed for all analyses, with baseline CPU times being determined to find speed-up factors and measure the computational capability of the GPU accelerated algorithms. The GPU implemented algorithms used in this work, along with the optimization techniques performed, are measured against preexisting work and test matrices available in the NIST Matrix Market. CFD analysis strengthens the assessment of this work by providing a direct engineering application that benefits from matrix optimization techniques and accelerated algorithms. Overall, this work develops optimizations for selected linear algebra and matrix computations performed with modern GPU architectures and CUDA, applied directly to mathematical and engineering applications through CFD analysis.
18

Song, Zixu. "Software engineering abstractions for a numerical linear algebra library." Thesis, University of Manchester, 2012. https://www.research.manchester.ac.uk/portal/en/theses/software-engineering-abstractions-for-a-numerical-linear-algebra-library(68304a9b-56db-404b-8ffb-4613f5102c1a).html.

Abstract:
This thesis aims at building a numerical linear algebra library with appropriate software engineering abstractions. Three areas of knowledge, namely Numerical Linear Algebra (NLA), Software Engineering and Compiler Optimisation Techniques, are involved. Numerical simulation is widely used in a large number of distinct disciplines to help scientists understand and discover the world. The solutions to frequently occurring numerical problems have been implemented in subroutines, which were then grouped together to form libraries for ease of use. The design, implementation and maintenance of an NLA library require a great deal of work, which is why the other two topics, namely software engineering and compiler optimisation techniques, come into play. Generally speaking, both try to divide the system into smaller and controllable concerns, and allow the programmer to deal with fewer concerns at one time. Band matrix operation, as a new level of abstraction, is proposed for simplifying library implementation and enhancing extensibility for future functionality upgrades. Iteration Space Partitioning (ISP) is applied in order to make the performance of this generalised implementation for band matrices comparable to that of the specialised implementations for dense and triangular matrices. The optimisation of ISP can be either programmed using the pointcut-advice model of Aspect-Oriented Programming, or integrated as part of a compiler. This naturally leads to a comparison of these two different techniques for resolving one fundamental problem. The thesis shows that software engineering properties of a library, such as modularity and extensibility, can be improved by the use of the appropriate level of abstraction, while performance is either not sacrificed at all, or at least the loss of performance is limited. In other words, the perceived trade-off between the use of high-level abstraction and fast execution is made less significant than previously assumed.
19

Zhang, Yun. "LARGE-SCALE MICROARRAY DATA ANALYSIS USING GPU- ACCELERATED LINEAR ALGEBRA LIBRARIES." OpenSIUC, 2012. https://opensiuc.lib.siu.edu/theses/878.

Abstract:
The biological datasets produced as a result of high-throughput genomic research, such as microarrays, contain vast amounts of knowledge about entire genomes and their expression associations. Gene clustering from such data is a challenging task due to the huge data size and the high complexity of the algorithms, as well as the visualization needs. Most of the existing analysis methods for genome-wide gene expression profiles are sequential programs using greedy algorithms and require subjective human decisions. Recently, Zhu et al. proposed a parallel random matrix theory (RMT) based approach for generating transcriptional networks, which is much more resistant to high levels of noise in the data [9] and requires no human intervention. Nowadays, GPUs are designed to be used efficiently for general purpose computing [1] and are vastly superior to CPUs [6] in terms of threading performance. Our kernel functions running on the GPU utilize functions from both the Compute Unified Basic Linear Algebra Subroutines (CUBLAS) library and the Compute Unified Linear Algebra (CULA) library, which implements the Linear Algebra Package (LAPACK). Our experimental results show that the GPU program can achieve an average speed-up of 2-3 times for some simulated datasets.
20

Vasireddy, Jhansi Lakshmi. "Applications of Linear Algebra to Information Retrieval." Digital Archive @ GSU, 2009. http://digitalarchive.gsu.edu/math_theses/71.

Abstract:
Some of the theory of nonnegative matrices is first presented. The Perron-Frobenius theorem is highlighted. Some of the important linear algebraic methods of information retrieval are surveyed. Latent Semantic Indexing (LSI), which uses the singular value decomposition, is discussed. The Hyper-Text Induced Topic Search (HITS) algorithm is next considered; here the power method for finding dominant eigenvectors is employed. Through the use of a theorem by Sinkhorn and Knopp, a modified HITS method is developed. Lastly, the PageRank algorithm is discussed. Numerical examples and MATLAB programs are also provided.
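The power method mentioned above is short enough to show in full; this numpy sketch (illustrative, not the thesis's MATLAB programs) computes HITS authority and hub scores as dominant eigenvectors of A^T A and A A^T for an adjacency matrix A.

    import numpy as np

    def power_method(m, iters=200):
        # Dominant eigenvector of a nonnegative matrix by repeated multiplication.
        v = np.ones(m.shape[0])
        for _ in range(iters):
            v = m @ v
            v /= np.linalg.norm(v)
        return v

    # Tiny web graph: adjacency[i, j] = 1 if page i links to page j.
    adjacency = np.array([[0, 1, 1, 0],
                          [0, 0, 1, 0],
                          [1, 0, 0, 1],
                          [0, 0, 1, 0]], dtype=float)
    authority = power_method(adjacency.T @ adjacency)   # HITS authority scores
    hub = power_method(adjacency @ adjacency.T)         # HITS hub scores
    print(authority, hub)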
21

Perlepes, Serafim Theodore. "Neural computation of all eigenpairs of a matrix with real eigenvalues." CSUSB ScholarWorks, 1999. https://scholarworks.lib.csusb.edu/etd-project/1525.

22

Patlola, Phanindher R. "Efficient Evaluation of Makespan for a Manufacturing System Using Max-Plus Algebra." Ohio University / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1304980385.

23

Imaev, Aleksey A. "Hierarchical Modeling of Manufacturing Systems Using Max-Plus Algebra." Ohio University / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1257871858.

24

Shank, Stephen David. "Low-rank solution methods for large-scale linear matrix equations." Diss., Temple University Libraries, 2014. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/273331.

Abstract:
We consider low-rank solution methods for certain classes of large-scale linear matrix equations. Our aim is to adapt existing low-rank solution methods based on standard, extended and rational Krylov subspaces to solve equations which may be viewed as extensions of the classical Lyapunov and Sylvester equations. The first class of matrix equations that we consider are constrained Sylvester equations, which essentially consist of Sylvester's equation along with a constraint on the solution matrix. These therefore constitute a system of matrix equations. The second are generalized Lyapunov equations, which are Lyapunov equations with additional terms. Such equations arise as computational bottlenecks in model order reduction.
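For orientation only (the thesis targets large-scale equations where such dense solvers do not apply), small instances of the Sylvester and Lyapunov equations can be solved directly with SciPy:

    import numpy as np
    from scipy.linalg import solve_sylvester, solve_continuous_lyapunov

    rng = np.random.default_rng(0)
    a = rng.normal(size=(4, 4))
    b = rng.normal(size=(3, 3))
    q = rng.normal(size=(4, 3))
    x = solve_sylvester(a, b, q)                    # solves A X + X B = Q
    print(np.allclose(a @ x + x @ b, q))            # True

    c = rng.normal(size=(4, 4))
    y = solve_continuous_lyapunov(a, -c @ c.T)      # solves A Y + Y A^T = -C C^T
    print(np.allclose(a @ y + y @ a.T, -c @ c.T))   # True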
25

Gnedin, Wassilij [Verfasser], Igor [Akademischer Betreuer] Burban, and Peter [Akademischer Betreuer] Littelmann. "Tame matrix problems in Lie theory and commutative algebra / Wassilij Gnedin. Gutachter: Igor Burban ; Peter Littelmann." Köln : Universitäts- und Stadtbibliothek Köln, 2015. http://d-nb.info/109842736X/34.

26

Tappenden, Rachael Elizabeth Helen. "Development & Implementation of Algorithms for Fast Image Reconstruction." Thesis, University of Canterbury. Mathematics and Statistics, 2011. http://hdl.handle.net/10092/5998.

Abstract:
Signal and image processing is important in a wide range of areas, including medical and astronomical imaging, and speech and acoustic signal processing. There is often a need for the reconstruction of these objects to be very fast, as they have some cost (perhaps a monetary cost, although often it is a time cost) attached to them. This work considers the development of algorithms that allow these signals and images to be reconstructed quickly and without perceptual quality loss. The main problem considered here is that of reducing the amount of time needed for images to be reconstructed, by decreasing the amount of data necessary for a high quality image to be produced. In addressing this problem two basic ideas are considered. The first is a subset selection problem where the aim is to extract a subset of data, of a predetermined size, from a much larger data set. To do this we first need some metric with which to measure how `good' (or how close to `best') a data subset is. Then, using this metric, we seek an algorithm that selects an appropriate data subset from which an accurate image can be reconstructed. Current algorithms use a criterion based upon the trace of a matrix. In this work we derive a simpler criterion based upon the determinant of a matrix. We construct two new algorithms based upon this new criterion and provide numerical results to demonstrate their accuracy and efficiency. A row exchange strategy is also described, which takes a given subset and performs interchanges to improve the quality of the selected subset. The second idea is, given a reduced set of data, how can we quickly reconstruct an accurate signal or image? Compressed sensing provides a mathematical framework that explains that if a signal or image is known to be sparse relative to some basis, then it may be accurately reconstructed from a reduced set of data measurements. The reconstruction process can be posed as a convex optimization problem. We introduce an algorithm that aims to solve the corresponding problem and accurately reconstruct the desired signal or image. The algorithm is based upon the Barzilai-Borwein algorithm and tailored specifically to the compressed sensing framework. Numerical experiments show that the algorithm is competitive with currently used algorithms. Following the success of compressed sensing for sparse signal reconstruction, we consider whether it is possible to reconstruct other signals with certain structures from reduced data sets. Specifically, signals that are a combination of a piecewise constant part and a sparse component are considered. A reconstruction process for signals of this type is detailed and numerical results are presented.
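As a hint of what the Barzilai-Borwein iteration mentioned above looks like, here is a generic BB gradient iteration for the smooth least-squares part of the problem (a minimal sketch of mine; the thesis's algorithm is tailored to the compressed sensing objective, including the sparsity term).

    import numpy as np

    def bb_gradient_descent(a, b, iters=100):
        # Minimise 0.5 * ||Ax - b||^2 with Barzilai-Borwein step sizes.
        x = np.zeros(a.shape[1])
        g = a.T @ (a @ x - b)
        step = 1e-3
        for _ in range(iters):
            x_new = x - step * g
            g_new = a.T @ (a @ x_new - b)
            s, y = x_new - x, g_new - g
            if s @ y == 0:                # converged (or stalled): stop
                return x_new
            step = (s @ s) / (s @ y)      # BB1 step size
            x, g = x_new, g_new
        return x

    rng = np.random.default_rng(0)
    a = rng.normal(size=(50, 20))
    b = a @ rng.normal(size=20)
    print(np.linalg.norm(a @ bb_gradient_descent(a, b) - b))   # close to zero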
27

Peebles, John Lee Thompson Jr. "Hypergraph Capacity with Applications to Matrix Multiplication." Scholarship @ Claremont, 2013. http://scholarship.claremont.edu/hmc_theses/48.

Abstract:
The capacity of a directed hypergraph is a particular numerical quantity associated with a hypergraph. It is of interest because of certain important connections to longstanding conjectures in theoretical computer science related to fast matrix multiplication and perfect hashing as well as various longstanding conjectures in extremal combinatorics. We give an overview of the concept of the capacity of a hypergraph and survey a few basic results regarding this quantity. Furthermore, we discuss the Lovász number of an undirected graph, which is known to upper bound the capacity of the graph (and in practice appears to be the best such general purpose bound). We then elaborate on some attempted generalizations/modifications of the Lovász number to undirected hypergraphs that we have tried. It is not currently known whether these attempted generalizations/modifications upper bound the capacity of arbitrary hypergraphs. An important method for proving lower bounds on hypergraph capacity is to exhibit a large independent set in a strong power of the hypergraph. We examine methods for this and show a barrier to attempts to usefully generalize certain of these methods to hypergraphs. We then look at cap sets: independent sets in powers of a certain hypergraph. We examine certain structural properties of them with the hope of finding ones that allow us to prove upper bounds on their size. Finally, we consider two interesting generalizations of capacity and use one of them to formulate several conjectures about connections between cap sets and sunflower-free sets.
28

Lamas, Daviña Alejandro. "Dense and sparse parallel linear algebra algorithms on graphics processing units." Doctoral thesis, Universitat Politècnica de València, 2018. http://hdl.handle.net/10251/112425.

Abstract:
One line of development followed in the field of supercomputing is the use of special-purpose processors to speed up certain types of computations. In this thesis we study the use of graphics processing units as computer accelerators and apply it to the field of linear algebra. In particular, we work with the SLEPc library to solve large scale eigenvalue problems, and to apply matrix functions in scientific applications. SLEPc is a parallel library based on the MPI standard and is developed with the premise of being scalable, i.e. to allow solving larger problems by increasing the processing units. We address the linear eigenvalue problem, Ax = lambda x in its standard form, using iterative techniques, in particular Krylov methods, with which we calculate a small portion of the eigenvalue spectrum. This type of algorithm is based on generating a subspace of reduced size (m) in which to project the large dimension problem (n), with m << n. Once the problem has been projected, it is solved by direct methods, which provide us with approximations of the eigenvalues of the initial problem we wanted to solve. The operations used in the expansion of the subspace vary depending on whether the desired eigenvalues are from the exterior or from the interior of the spectrum. In the case of searching for exterior eigenvalues, the expansion is done by matrix-vector multiplications. We do this on the GPU, either by using libraries or by creating functions that take advantage of the structure of the matrix. In the case of eigenvalues from the interior of the spectrum, the expansion requires solving linear systems of equations. In this thesis we implemented several algorithms to solve linear systems of equations for the specific case of matrices with a block-tridiagonal structure, which run on the GPU. In the computation of matrix functions we have to distinguish between the direct application of a matrix function, f(A), and the action of a matrix function on a vector, f(A)b. The first case involves a dense computation that limits the size of the problem. The second allows us to work with large sparse matrices, and to solve it we also make use of Krylov methods. The expansion of the subspace is done by matrix-vector multiplication, and we use GPUs in the same way as when solving eigenvalues. In this case the projected problem starts with size m, but it is increased by m on each restart of the method. The solution of the projected problem is obtained by directly applying a matrix function. We have implemented several algorithms to compute the square root and the exponential matrix functions, in which the use of GPUs allows us to speed up the computation.
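SLEPc itself is a parallel C/MPI library; purely for illustration of the two problem types described above (a few exterior eigenvalues of a large sparse matrix, and the action of a matrix function on a vector), the corresponding single-node Krylov-based routines in SciPy look like this (my sketch, not the thesis's GPU implementation):

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import eigsh, expm_multiply

    n = 2000
    a = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csr')   # 1-D Laplacian

    # A few exterior (largest) eigenvalues via a Krylov method.
    vals = eigsh(a, k=4, which='LA', return_eigenvectors=False)
    print(vals)

    # Action of a matrix function on a vector, f(A)b with f = exp, without forming exp(A).
    b = np.ones(n)
    y = expm_multiply(-a, b)
    print(np.linalg.norm(y))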
29

Tiger, Norkvist Axel. "Morphisms of real calculi from a geometric and algebraic perspective." Licentiate thesis, Linköpings universitet, Algebra, geometri och diskret matematik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-175740.

Abstract:
Noncommutative geometry has over the past four decades grown into a rich field of study. Novel ideas and concepts are rapidly being developed, and a notable application of the theory outside of pure mathematics is quantum theory. This thesis focuses on a derivation-based approach to noncommutative geometry using the framework of real calculi, which is a rather direct approach to the subject. Due to their direct nature, real calculi are useful when studying classical concepts in Riemannian geometry and how they may be generalized to a noncommutative setting. This thesis aims to shed light on algebraic aspects of real calculi by introducing a concept of morphisms of real calculi, which enables the study of real calculi on a structural level. In particular, real calculi over matrix algebras are discussed both from an algebraic and a geometric perspective. Morphisms are also interpreted geometrically, giving a way to develop a noncommutative theory of embeddings. As an example, the noncommutative torus is minimally embedded into the noncommutative 3-sphere.
30

Cheng, Howard. "Algorithms for Normal Forms for Matrices of Polynomials and Ore Polynomials." Thesis, University of Waterloo, 2003. http://hdl.handle.net/10012/1088.

Abstract:
In this thesis we study algorithms for computing normal forms for matrices of Ore polynomials while controlling coefficient growth. By formulating row reduction as a linear algebra problem, we obtain a fraction-free algorithm for row reduction for matrices of Ore polynomials. The algorithm allows us to compute the rank and a basis of the left nullspace of the input matrix. When the input is restricted to matrices of shift polynomials and ordinary polynomials, we obtain fraction-free algorithms for computing row-reduced forms and weak Popov forms. These algorithms can be used to compute a greatest common right divisor and a least common left multiple of such matrices. Our fraction-free row reduction algorithm can be viewed as a generalization of subresultant algorithms. The linear algebra formulation allows us to obtain bounds on the size of the intermediate results and to analyze the complexity of our algorithms. We then make use of the fraction-free algorithm as a basis to formulate modular algorithms for computing a row-reduced form, a weak Popov form, and the Popov form of a polynomial matrix. By examining the linear algebra formulation, we develop criteria for detecting unlucky homomorphisms and determining the number of homomorphic images required.
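As a scalar analogue of the fraction-free idea (and of the subresultant connection mentioned above), the classical Bareiss one-step fraction-free elimination over the integers keeps every intermediate entry integral because the previous pivot divides exactly; this Python sketch is illustrative only and is not the thesis's algorithm for Ore polynomial matrices.

    def bareiss_determinant(m):
        # One-step fraction-free (Bareiss) elimination on an integer matrix.
        a = [row[:] for row in m]
        n = len(a)
        prev_pivot = 1
        for k in range(n - 1):
            if a[k][k] == 0:
                raise ValueError("zero pivot; pivoting is omitted in this sketch")
            for i in range(k + 1, n):
                for j in range(k + 1, n):
                    # Exact integer division: prev_pivot always divides the numerator.
                    a[i][j] = (a[i][j] * a[k][k] - a[i][k] * a[k][j]) // prev_pivot
                a[i][k] = 0
            prev_pivot = a[k][k]
        return a[n - 1][n - 1]   # equals the determinant

    print(bareiss_determinant([[2, 3, 1], [4, 1, -1], [0, 5, 2]]))   # 10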
31

Floderová, Hana. "Geometrické struktury založené na kvaternionech." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2010. http://www.nusl.cz/ntk/nusl-229021.

Abstract:
A pair (V, G) is called a geometric structure, where V is a vector space and G is a subgroup of GL(V) given by a set of transmission matrices. In this thesis we classify structures that are based on properties of quaternions. Geometric structures based on quaternions are called triple structures. Triple structures are four structures with properties similar to those of quaternions. Quaternions are generated from the real numbers and three complex units, and we write them in the form a+bi+cj+dk.
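For reference (not taken from the thesis), the product of two quaternions written in the form a+bi+cj+dk follows from the relations i^2 = j^2 = k^2 = ijk = -1:

    def quaternion_multiply(p, q):
        # p and q are tuples (a, b, c, d) representing a + b*i + c*j + d*k.
        a1, b1, c1, d1 = p
        a2, b2, c2, d2 = q
        return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
                a1*b2 + b1*a2 + c1*d2 - d1*c2,
                a1*c2 - b1*d2 + c1*a2 + d1*b2,
                a1*d2 + b1*c2 - c1*b2 + d1*a2)

    i, j = (0, 1, 0, 0), (0, 0, 1, 0)
    print(quaternion_multiply(i, j))   # (0, 0, 0, 1), i.e. k
    print(quaternion_multiply(j, i))   # (0, 0, 0, -1): multiplication is not commutative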
32

Reis, Júlio César dos 1979. "Graduações e identidades graduadas para álgebras de matrizes." [s.n.], 2012. http://repositorio.unicamp.br/jspui/handle/REPOSIP/306363.

Abstract:
Advisor: Plamen Emilov Kochloukov. Doctoral thesis, Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica. In this PhD thesis we give bases of the graded polynomial identities of... Note: the complete abstract is available with the full electronic document.
33

Herrero, Zaragoza Jose Ramón. "A framework for efficient execution of matrix computations." Doctoral thesis, Universitat Politècnica de Catalunya, 2006. http://hdl.handle.net/10803/5991.

Abstract:
Matrix computations lie at the heart of most scientific computational tasks. The solution of linear systems of equations is a very frequent operation in many fields in science, engineering, surveying, physics and others. Other matrix operations occur frequently in many other fields such as pattern recognition and classification, or multimedia applications. Therefore, it is important to perform matrix operations efficiently. The work in this thesis focuses on the efficient execution on commodity processors of matrix operations which arise frequently in different fields.
We study some important operations which appear in the solution of real world problems: some sparse and dense linear algebra codes and a classification algorithm. In particular, we focus our attention on the efficient execution of the following operations: sparse Cholesky factorization; dense matrix multiplication; dense Cholesky factorization; and Nearest Neighbor classification.
A lot of research has been conducted on the efficient parallelization of numerical algorithms. However, the efficiency of a parallel algorithm depends ultimately on the performance obtained from the computations performed on each node. The work presented in this thesis focuses on the sequential execution on a single processor.
There exist a number of data structures for sparse computations which can be used in order to avoid the storage of and computation on zero elements. We work with a hierarchical data structure known as hypermatrix. A matrix is subdivided recursively an arbitrary number of times. Several pointer matrices are used to store the location of submatrices at each level. The last level consists of data submatrices which are dealt with as dense submatrices. When the block size of these dense submatrices is small, the number of zeros can be greatly reduced. However, the performance obtained from BLAS3 routines drops heavily. Consequently, there is a trade-off in the size of data submatrices used for a sparse Cholesky factorization with the hypermatrix scheme. Our goal is that of reducing the overhead introduced by the unnecessary operation on zeros when a hypermatrix data structure is used to produce a sparse Cholesky factorization. In this work we study several techniques for reducing such overhead in order to obtain high performance.
One of our goals is the creation of codes which work efficiently on different platforms when operating on dense matrices. To obtain high performance, the resources offered by the CPU must be properly utilized. At the same time, the memory hierarchy must be exploited to tolerate increasing memory latencies. To achieve the former, we produce inner kernels which use the CPU very efficiently. To achieve the latter, we investigate nonlinear data layouts. Such data formats can contribute to the effective use of the memory system.
The use of highly optimized inner kernels is of paramount importance for obtaining efficient numerical algorithms. Often, such kernels are created by hand. However, we want to create efficient inner kernels for a variety of processors using a general approach and avoiding hand-made codification in assembly language. In this work, we present an alternative way to produce efficient kernels automatically, based on a set of simple codes written in a high level language, which can be parameterized at compilation time. The advantage of our method lies in the ability to generate very efficient inner kernels by means of a good compiler. Working on regular codes for small matrices, most of the compilers we used on different platforms created very efficient inner kernels for matrix multiplication. Using the resulting kernels we have been able to produce high performance sparse and dense linear algebra codes on a variety of platforms.
In this work we also show that techniques used in linear algebra codes can be useful in other fields. We present the work we have done in the optimization of Nearest Neighbor classification, focusing on the speed of the classification process.
Tuning several codes for different problems and machines can become a heavy and unbearable task. For this reason we have developed an environment for the development and automatic benchmarking of codes, which is presented in this thesis.
As a practical result of this work, we have been able to create efficient codes for several matrix operations on a variety of platforms. Our codes are highly competitive with other state-of-the-art codes for some problems.
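A plain blocked right-looking Cholesky factorization (a numpy/SciPy sketch of mine, not the thesis's hypermatrix code) shows where the submatrix-size trade-off discussed above enters: the block size b plays the role of the dense data submatrix size.

    import numpy as np
    from scipy.linalg import solve_triangular

    def blocked_cholesky(a, b=64):
        # Right-looking blocked Cholesky: returns lower-triangular L with A = L L^T.
        a = a.copy()
        n = a.shape[0]
        for k in range(0, n, b):
            e = min(k + b, n)
            a[k:e, k:e] = np.linalg.cholesky(a[k:e, k:e])   # factor the diagonal block
            if e < n:
                # Panel update: A21 <- A21 * L11^{-T}
                a[e:, k:e] = solve_triangular(a[k:e, k:e], a[e:, k:e].T, lower=True).T
                # Trailing update: A22 <- A22 - A21 * A21^T
                a[e:, e:] -= a[e:, k:e] @ a[e:, k:e].T
        return np.tril(a)

    rng = np.random.default_rng(0)
    m = rng.normal(size=(200, 200))
    spd = m @ m.T + 200 * np.eye(200)
    l = blocked_cholesky(spd, b=32)
    print(np.allclose(l @ l.T, spd))   # True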
34

Ouzký, Karel. "Zlomkový simplexový algoritmus ve VBA." Master's thesis, Vysoká škola ekonomická v Praze, 2009. http://www.nusl.cz/ntk/nusl-17368.

Abstract:
The basic idea of the fractional simplex algorithm rests on matrix arithmetic and on the matrix representation of the simplex tableau used in the revised simplex method. My aim is to explain the theoretical basis on which this algorithm works and to provide an implementation in Visual Basic for Applications within MS Excel 2007. The main benefit I see is that the algorithm can solve a specific class of mathematical problems with exact arithmetic, without the need to use decimal numbers.
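The exact-arithmetic point can be illustrated in a few lines of Python (the thesis itself uses VBA in MS Excel): a simplex-style pivot carried out with fractions.Fraction never rounds to decimal numbers.

    from fractions import Fraction

    def pivot(tableau, r, c):
        # Gauss-Jordan pivot on entry (r, c), performed exactly with rationals.
        t = [[Fraction(x) for x in row] for row in tableau]
        p = t[r][c]
        t[r] = [x / p for x in t[r]]
        for i, row in enumerate(t):
            if i != r:
                t[i] = [x - row[c] * y for x, y in zip(row, t[r])]
        return t

    tableau = [[1, 2, 1, 0, 10],
               [3, 1, 0, 1, 15]]
    for row in pivot(tableau, 0, 1):
        print(row)   # every entry stays an exact fraction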
35

Kernert, David. "Density-Aware Linear Algebra in a Column-Oriented In-Memory Database System." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-210043.

Abstract:
Linear algebra operations appear in nearly every application in advanced analytics, machine learning, and various science domains. Until today, many data analysts and scientists tend to use statistics software packages or hand-crafted solutions for their analysis. In the era of data deluge, however, the external statistics packages and custom analysis programs that often run on single workstations are incapable of keeping up with the vast increase in data volume and size. In particular, there is an increasing demand from scientists for large scale data manipulation, orchestration, and advanced data management capabilities. These are among the key features of a mature relational database management system (DBMS). With the rise of main memory database systems, it has now become feasible to also consider applications that build on linear algebra. This thesis presents a deep integration of linear algebra functionality into an in-memory column-oriented database system. In particular, this work shows that it has become feasible to execute linear algebra queries on large data sets directly in a DBMS-integrated engine (LAPEG), without the need of transferring data and being restricted by hard disc latencies. From various application examples that are cited in this work, we deduce a number of requirements that are relevant for a database system that includes linear algebra functionality. Beside the deep integration of matrices and numerical algorithms, these include optimization of expressions, transparent matrix handling, scalability and data-parallelism, and data manipulation capabilities. These requirements are addressed by our linear algebra engine. In particular, the core contributions of this thesis are: firstly, we show that the columnar storage layer of an in-memory DBMS yields an easy adoption of efficient sparse matrix data types and algorithms. Furthermore, we show that the execution of linear algebra expressions significantly benefits from different techniques that are inspired by database technology. In a novel way, we implemented several of these optimization strategies in LAPEG's optimizer (SpMachO), which uses an advanced density estimation method (SpProdest) to predict the matrix density of intermediate results. Moreover, we present an adaptive matrix data type AT Matrix to obviate the need for scientists to select appropriate matrix representations. The tiled substructure of AT Matrix is exploited by our matrix multiplication to saturate the different sockets of a multicore main-memory platform, reaching a speed-up of up to 6x compared to alternative approaches. Finally, a major part of this thesis is devoted to the topic of data manipulation, where we propose a matrix manipulation API and present different mutable matrix types to enable fast insertions and deletions. We conclude that our linear algebra engine is well-suited to process dynamic, large matrix workloads in an optimized way. In particular, the DBMS-integrated LAPEG fills the linear algebra gap, and makes columnar in-memory DBMSs attractive as an efficient, scalable ad-hoc analysis platform for scientists.
APA, Harvard, Vancouver, ISO, and other styles
36

Nguyen, Hong Diep. "Efficient algorithms for verified scientific computing : Numerical linear algebra using interval arithmetic." Phd thesis, Ecole normale supérieure de lyon - ENS LYON, 2011. http://tel.archives-ouvertes.fr/tel-00680352.

Full text
Abstract:
Interval arithmetic is a means to compute verified results. However, a naive use of interval arithmetic does not provide accurate enclosures of the exact results, and interval computations can be time-consuming. We propose several accurate algorithms and efficient implementations for verified linear algebra using interval arithmetic. Two fundamental problems are addressed: the multiplication of interval matrices and the verification of a floating-point solution of a linear system. For the first problem, we propose two algorithms that offer new trade-offs between speed and accuracy. For the second problem, the verification of the solution of a linear system, our contributions are twofold. First, we introduce a relaxation technique that drastically reduces the execution time of the algorithm. Second, we propose to use extended precision for a few well-chosen parts of the computations, to gain accuracy without losing much in terms of execution time.
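To make the first problem concrete, the following sketch shows one classical way to enclose the product of two interval matrices given in midpoint-radius form; it deliberately ignores directed rounding, which is precisely the additional effect the thesis analyses, and it is not code from the thesis.

```python
import numpy as np

def interval_matmul(A_mid, A_rad, B_mid, B_rad):
    """Midpoint-radius enclosure of the product of the interval matrices
    [A_mid +/- A_rad] and [B_mid +/- B_rad].  Floating-point rounding is
    ignored here; bounding it as well is part of the thesis's analysis."""
    C_mid = A_mid @ B_mid
    C_rad = (np.abs(A_mid) @ B_rad
             + A_rad @ np.abs(B_mid)
             + A_rad @ B_rad)
    return C_mid, C_rad
```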
APA, Harvard, Vancouver, ISO, and other styles
37

Britto, Marta Aparecida Ferreira de Oliveira. "Matrizes: propostas de aplicação no ensino médio." Universidade Federal de Juiz de Fora, 2014. https://repositorio.ufjf.br/jspui/handle/ufjf/786.

Full text
Abstract:
In this work we discuss some applications of matrices that we believe can be introduced in basic education, in order to give students a view of the usefulness of mathematics in the real world and to make its teaching more dynamic and attractive. The applications covered are cryptography, Markov chains, graphs, transformations of the plane, and linear systems. We observed that this topic is usually treated very timidly in high school textbooks and that activities involving these applications rarely appear. The subject is, however, very broad and rich, and can be related to numerous areas of human knowledge, such as administration, economics, biology, computer science, and physics, making it a useful tool for interdisciplinary activities. We found that it is possible to explore the concept of a matrix, its representation, operations, properties, and definitions through contextualized problems. Throughout this work we collect suggestions for activities found in articles, dissertations, and Linear Algebra textbooks.
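As one hedged example of the kind of classroom activity the dissertation has in mind (the specific cipher and key below are illustrative, not taken from the text), matrix multiplication modulo 26 gives a simple Hill-style encryption exercise:

```python
import numpy as np

# Encode pairs of letters (A=0, ..., Z=25) with an invertible key matrix
# modulo 26 -- a classroom-sized Hill cipher.
KEY = np.array([[3, 3],
                [2, 5]])          # det = 9, which is invertible mod 26

def encrypt_pair(p):
    return (KEY @ p) % 26

plain = np.array([7, 8])          # "HI"
cipher = encrypt_pair(plain)
print(cipher)                     # [19  2] -> "TC"
```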
APA, Harvard, Vancouver, ISO, and other styles
38

Saak, Jens. "Efficient Numerical Solution of Large Scale Algebraic Matrix Equations in PDE Control and Model Order Reduction." Doctoral thesis, Universitätsbibliothek Chemnitz, 2009. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-200901642.

Full text
Abstract:
Matrix Lyapunov and Riccati equations are an important tool in mathematical systems theory. They are the key ingredients in balancing-based model order reduction techniques and in linear-quadratic regulator problems. For small and moderately sized problems these equations are solved by techniques of at least cubic complexity, which prohibits their use in large-scale applications. Around the year 2000, solvers for large-scale problems were introduced. The basic idea is to compute a low-rank decomposition of the square, dense solution matrix and thereby reduce the memory and computational complexity of the algorithms. In this thesis, efficiency-enhancing techniques for the low-rank alternating directions implicit (ADI) iteration for large-scale matrix equations are introduced and discussed, and their applicability to real-world systems is demonstrated. The thesis is structured in seven central chapters. After the introduction, chapter 2 introduces the basic concepts and notation needed as fundamental tools for the remainder of the thesis. The next chapter introduces a collection of test examples, ranging from easily scalable academic test systems to badly conditioned technical applications, which are used to demonstrate the features of the solvers. Chapters four and five describe the basic solvers and the modifications made to render them applicable to an even larger class of problems. The following two chapters treat the application of the solvers in the context of model order reduction and linear-quadratic optimal control of PDEs. The final chapter presents the extensive numerical tests undertaken with the solvers proposed in the preceding chapters. Some conclusions and an appendix complete the thesis.
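For orientation, a commonly cited form of the low-rank ADI iteration for the Lyapunov equation is sketched below; it is one standard formulation of the family of methods the thesis enhances, with shift parameters p_i assumed to have negative real part, and not a transcription of the thesis's algorithms.

```latex
% Lyapunov equation and a common form of the low-rank ADI (LR-ADI) step,
% with shift parameters p_i assumed to have negative real part.
\begin{align*}
  A X + X A^{H} + B B^{H} &= 0, \qquad X \approx Z_k Z_k^{H},\\
  V_1 &= \sqrt{-2\operatorname{Re}(p_1)}\,(A + p_1 I)^{-1} B,\\
  V_i &= \sqrt{\tfrac{\operatorname{Re}(p_i)}{\operatorname{Re}(p_{i-1})}}
         \Bigl( I - (p_i + \overline{p_{i-1}})\,(A + p_i I)^{-1} \Bigr) V_{i-1},\\
  Z_k &= [\, V_1 \;\; V_2 \;\; \cdots \;\; V_k \,].
\end{align*}
```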
APA, Harvard, Vancouver, ISO, and other styles
39

Bergami, Giacomo. "Hypergraph Mining for Social Networks." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2014. http://amslaurea.unibo.it/7106/.

Full text
Abstract:
Nowadays more and more data is collected, in ever larger amounts, so the need to study it both efficiently and profitably is growing: we want to obtain new and significant information that was not known before the analysis. Many graph mining algorithms have been developed, but an algebra that systematically defines how to generalize such operations is still missing. In order to propel the development of such an automatic analysis, we propose, for the first time to the best of our knowledge, some primitive operators that may be the prelude to a systematic definition of a hypergraph algebra.
APA, Harvard, Vancouver, ISO, and other styles
40

Karlsson, Andréas. "Algorithm Adaptation and Optimization of a Novel DSP Vector Co-processor." Thesis, Linköping University, Computer Engineering, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-57427.

Full text
Abstract:
The Division of Computer Engineering at Linköping University is currently researching the possibility of creating a highly parallel DSP platform that can keep up with the computational needs of upcoming standards for various applications, at low cost and low power consumption. The architecture is called ePUMA and combines a general RISC DSP master processor with eight SIMD co-processors on a single chip. The master processor acts as the main processor for general tasks and execution control, while the co-processors accelerate compute-intensive and parallel DSP kernels. This thesis investigates the performance potential of the co-processors by implementing matrix algebra kernels for QR decomposition, LU decomposition, matrix determinant, and matrix inverse that run on a single co-processor. The kernels are then evaluated to find possible problems with the co-processors' microarchitecture and to suggest solutions. The evaluation shows that the performance potential is very good, but a few problems have been identified that cause significant overhead in the kernels. Pipeline mismatches, which occur because different instructions have different pipeline lengths, cause pipeline hazards, and the current solution to this does not allow effective use of the pipeline. In some cases the single-port memories cause bottlenecks, but the thesis suggests that the situation could be greatly improved by using buffered memory write-back. Also, the lack of register forwarding makes kernels with many data dependencies run unnecessarily slowly.
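As a point of reference for the kind of kernel benchmarked here, the sketch below gives a plain modified Gram-Schmidt QR factorization in Python; it only illustrates the algorithm itself and is not ePUMA kernel code.

```python
import numpy as np

def qr_mgs(A):
    """Modified Gram-Schmidt QR -- a reference version of the kind of
    matrix kernel evaluated on the co-processor (not ePUMA code)."""
    A = A.astype(float)
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for k in range(n):
        R[k, k] = np.linalg.norm(A[:, k])
        Q[:, k] = A[:, k] / R[k, k]
        for j in range(k + 1, n):
            R[k, j] = Q[:, k] @ A[:, j]
            A[:, j] -= R[k, j] * Q[:, k]
    return Q, R

A = np.array([[1.0, 1.0], [0.0, 1.0], [1.0, 0.0]])
Q, R = qr_mgs(A)
print(np.allclose(Q @ R, A))  # True
```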
APA, Harvard, Vancouver, ISO, and other styles
41

Åkerling, Erik, and Jimmy Jerenfelt. "Analys och framtagning av algoritm för rodermätning." Thesis, Örebro universitet, Institutionen för naturvetenskap och teknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-23459.

Full text
Abstract:
This work is an investigation aimed at locating sources of error and making improvements to a piece of test equipment that measures rudder angles on the rear end of a robot. The report contains an overview of the previous method and the sources of error found when testing it. The investigation also challenges many of the assumptions made when the previous method was devised, in order to either confirm or refute them. This is done by means of mathematical models that simulate different parts of the method. Each part of the report consists of a description of the section, followed by the errors discovered when the method was tested in the models. The proposed new method is subjected to the same tests as the previous one in order to identify the differences. The conclusions drawn from each part of the work are given in the results.
APA, Harvard, Vancouver, ISO, and other styles
42

Theveny, Philippe. "Numerical Quality and High Performance In Interval Linear Algebra on Multi-Core Processors." Thesis, Lyon, École normale supérieure, 2014. http://www.theses.fr/2014ENSL0941/document.

Full text
Abstract:
This work aims at determining suitable scopes for several algorithms for interval matrix multiplication and their implementations. First, we quantify the numerical quality. Former error analyses of interval matrix products establish bounds on the radius overestimation while neglecting the roundoff error. We discuss several possible measures of the approximation error between two intervals, then bound the roundoff error and compare this bound experimentally with the global error distribution on several random data sets. This approach highlights the relative importance of the method and roundoff errors depending on the value and homogeneity of the relative accuracies of the inputs, on the matrix dimension, and on the working precision. It also leads to a new algorithm that is cheaper yet as accurate as previous ones under well-identified conditions. Second, we exploit the parallelism of the operations. Previous implementations rely on calls to BLAS routines on floating-point matrices. We show that this may lead to wrong interval results and also restricts the scalability of the performance as the core count increases. To overcome these problems, we implement a blocked version with OpenMP threads executing block kernels that use vector instructions. The timings on a machine with four octo-core processors show that the costs of interval and floating-point matrix products of the same dimension are of the same order of magnitude, and that the blocked implementation scales better than the one based on multiple BLAS calls.
APA, Harvard, Vancouver, ISO, and other styles
43

Julius, Hayden. "Nonstandard solutions of linear preserver problems." Kent State University / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=kent1626101272174819.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Upadrasta, Bharat. "Boolean factor analysis a review of a novel method of matrix decomposition and neural network Boolean factor analysis /." Diss., Online access via UMI:, 2009.

Find full text
Abstract:
Thesis (M.S.)--State University of New York at Binghamton, Thomas J. Watson School of Engineering and Applied Science, Department of Systems Science and Industrial Engineering, 2009. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
45

Shen, Chong. "Topic Analysis of Tweets on the European Refugee Crisis Using Non-negative Matrix Factorization." Scholarship @ Claremont, 2016. http://scholarship.claremont.edu/cmc_theses/1388.

Full text
Abstract:
The ongoing European Refugee Crisis has been one of the most popular trending topics on Twitter for the past eight months. This paper applies topic modeling to large collections of tweets to discover the hidden patterns within these social media discussions. In particular, we perform topic analysis by solving Non-negative Matrix Factorization (NMF) as an inexact alternating least squares problem. We accelerate the computation using techniques including tweet sampling and augmented NMF, compare NMF results for different ranks, and visualize the outputs through topic representations and frequency plots. We observe that supportive sentiments maintained a strong presence, while negative sentiments such as safety concerns emerged over time.
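A minimal sketch of NMF solved as an inexact alternating least squares problem is given below; the update scheme, initialization, and parameter names are illustrative assumptions, not the implementation used in the thesis.

```python
import numpy as np

def nmf_als(V, rank, iters=200, eps=1e-9):
    """Inexact alternating least squares for V ~ W @ H with W, H >= 0.
    Each subproblem is solved by unconstrained least squares and the
    result is clipped to be nonnegative (with a small floor eps)."""
    m, n = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(iters):
        H = np.clip(np.linalg.lstsq(W, V, rcond=None)[0], eps, None)
        W = np.clip(np.linalg.lstsq(H.T, V.T, rcond=None)[0].T, eps, None)
    return W, H
```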
APA, Harvard, Vancouver, ISO, and other styles
46

Fernández, Victor Leandro. "Fibrilação de logicas na hierarquia de Leibniz." [s.n.], 2005. http://repositorio.unicamp.br/jspui/handle/REPOSIP/280595.

Full text
Abstract:
Advisor: Marcelo Esteban Coniglio. Doctoral thesis - Universidade Estadual de Campinas, Instituto de Filosofia e Ciências Humanas. In this thesis we investigate, from an abstract standpoint, a process of combining logics known as fibring of logics. In particular, we study the transference by fibring of certain properties intrinsic to propositional logics: protoalgebraicity, equivalentiality, and algebraizability. These notions belong to the "Leibniz hierarchy", a fundamental concept of so-called Abstract Algebraic Logic, which classifies logics according to their degree of algebraizability. We study whether, given two logics enjoying one of these properties, their fibring still has that property. To distinguish the different techniques of fibring found in the literature, we analyze two methods of fibring logics: categorial fibring (or C-fibring) and fibring in D. Gabbay's sense (G-fibring). We also study a variant of G-fibring known as fusion of logics. We give conditions under which the C-fibring of protoalgebraic logics is again protoalgebraic, and proceed similarly with the other properties of the Leibniz hierarchy. For G-fibring and fusion we obtain analogous results, which together give an overview of the relation between Abstract Algebraic Logic and combinations of logics.
APA, Harvard, Vancouver, ISO, and other styles
47

Luz, B. R. M. "Raiz quadrada de matrizes de ordem 2x2." Universidade Federal de Goiás, 2014. http://repositorio.bc.ufg.br/tede/handle/tede/4085.

Full text
Abstract:
Mathematics is an essential subject today, with a wide variety of applications, but many mathematical definitions depend on prerequisites. With this in mind, this work presents a method for computing square roots of matrices of order 2. The definitions are organized progressively and build on notions assumed known, such as matrix multiplication, determinants, and matrix diagonalization.
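Besides diagonalization, a 2x2 matrix also admits a closed-form square root via the Cayley-Hamilton theorem; the sketch below illustrates this complementary route (it is not taken from the dissertation) and is valid whenever the denominator is nonzero.

```python
import numpy as np

def sqrtm_2x2(A):
    """One square root of a 2x2 matrix via Cayley-Hamilton:
    R = (A + sqrt(det A) I) / sqrt(tr A + 2 sqrt(det A)),
    valid whenever the denominator is nonzero."""
    s = np.lib.scimath.sqrt(np.linalg.det(A))
    t = np.lib.scimath.sqrt(np.trace(A) + 2 * s)
    return (A + s * np.eye(2)) / t

A = np.array([[33.0, 24.0], [48.0, 57.0]])
R = sqrtm_2x2(A)               # [[5, 2], [4, 7]]
print(np.allclose(R @ R, A))   # True
```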
APA, Harvard, Vancouver, ISO, and other styles
48

Fasi, Massimiliano. "Weighted geometric mean of large-scale matrices: numerical analysis and algorithms." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2015. http://amslaurea.unibo.it/8274/.

Full text
Abstract:
Computing the weighted geometric mean of large sparse matrices is an operation that tends to become rapidly intractable as the size of the matrices involved grows. However, if we are not interested in the matrix function itself but only in its product with a vector, the problem becomes simpler, and there is a chance to solve it even when the matrix mean itself would be impossible to compute. Our interest is motivated by the fact that this calculation has practical applications related to the preconditioning of some operators arising in the domain decomposition of elliptic problems. In this thesis, we explore how such a computation can be performed efficiently. First, we exploit the properties of the weighted geometric mean and find several equivalent ways to express it through real powers of a matrix. We then focus on matrix powers and examine how well-known techniques can be adapted to the solution of the problem at hand. In particular, we consider two broad families of approaches for the computation of f(A) v, namely quadrature formulae and Krylov subspace methods, and generalize them to the pencil case f(A\B) v. Finally, we provide an extensive experimental evaluation of the proposed algorithms and assess how convergence speed and execution time are influenced by certain characteristics of the input matrices. Our results suggest that a few elements have some bearing on the performance and that, although there is no best choice in general, knowing the conditioning and the sparsity of the arguments beforehand can considerably help in choosing the best strategy to tackle the problem.
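For small matrices the weighted geometric mean can be formed directly, which is useful as a reference when testing methods that only compute its action on a vector; the dense sketch below (not code from the thesis) assumes symmetric positive definite A and B.

```python
import numpy as np

def spd_power(M, t):
    """M**t for a symmetric positive definite M via eigendecomposition."""
    w, U = np.linalg.eigh(M)
    return (U * w**t) @ U.T

def weighted_geometric_mean(A, B, t):
    """Dense reference for A #_t B = A^(1/2) (A^(-1/2) B A^(-1/2))^t A^(1/2).
    Only feasible for small matrices -- the thesis is precisely about
    computing (A #_t B) v without ever forming this matrix."""
    A_half = spd_power(A, 0.5)
    A_ihalf = spd_power(A, -0.5)
    return A_half @ spd_power(A_ihalf @ B @ A_ihalf, t) @ A_half
```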
APA, Harvard, Vancouver, ISO, and other styles
49

Castelain, Jean-Marie. "Application de la méthode hypercomplexe aux modélisations géométriques et différentielles des robots constitués d'une chaîne cinématique simple." Valenciennes, 1986. https://ged.uphf.fr/nuxeo/site/esupversions/bc4218bf-1a8a-40bb-b433-53644d00e666.

Full text
Abstract:
The purpose of this thesis is to apply the hypercomplex method to the kinematic study of an important class of robots. After specifying the general algebraic structure of hypercomplex systems and the elements of hypercomplex analysis needed for this purpose, a study of the motions of points and oriented lines is developed on the basis of a particular hypercomplex system: the biquaternions. In the second part of the dissertation, this study is applied to the geometric analysis of elementary, open kinematic chains. The effectiveness of the hypercomplex method is demonstrated by applying the proposed models to the control of industrial robots.
APA, Harvard, Vancouver, ISO, and other styles
50

OLIVEIRA, Marciel Medeiros de. "Identidades de álgebras de matrizes e Teorema de Amitsur-Levitzki." Universidade Federal de Campina Grande, 2010. http://dspace.sti.ufcg.edu.br:8080/jspui/handle/riufcg/1239.

Full text
Abstract:
In this work we study the polynomial identities of the matrix algebra Mn(K), where K is a field. We first present Rosset's and Swan's proofs of the Amitsur-Levitzki theorem. We then study the identities of Mn(K) of degree 2n+1 for n > 2 (assuming char K = 0), and close with Chang's answer to a question raised by Formanek on the minimality of a positive integer m such that the double Capelli polynomial Dm is an identity of Mn(K).
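For readers meeting the statement for the first time, the Amitsur-Levitzki theorem asserts that the standard polynomial of degree 2n vanishes under every substitution of n x n matrices, and that Mn(K) satisfies no polynomial identity of smaller degree:

```latex
% Standard polynomial of degree 2n and the Amitsur--Levitzki identity
\[
  S_{2n}(x_1,\dots,x_{2n})
    = \sum_{\sigma \in \operatorname{Sym}(2n)}
      \operatorname{sgn}(\sigma)\, x_{\sigma(1)} x_{\sigma(2)} \cdots x_{\sigma(2n)},
  \qquad
  S_{2n}(A_1,\dots,A_{2n}) = 0
  \quad \text{for all } A_1,\dots,A_{2n} \in M_n(K).
\]
```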
APA, Harvard, Vancouver, ISO, and other styles