To see the other types of publications on this topic, follow the link: Algebras, Linear Algorithms.

Journal articles on the topic 'Algebras, Linear Algorithms'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'Algebras, Linear Algorithms.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Chyzak, F., A. Quadrat, and D. Robertz. "Effective algorithms for parametrizing linear control systems over Ore algebras." Applicable Algebra in Engineering, Communication and Computing 16, no. 5 (November 2005): 319–76. http://dx.doi.org/10.1007/s00200-005-0188-6.

2

Faybusovich, Leonid. "Linear systems in Jordan algebras and primal-dual interior-point algorithms." Journal of Computational and Applied Mathematics 86, no. 1 (November 1997): 149–75. http://dx.doi.org/10.1016/s0377-0427(97)00153-2.

3

Sayadi Shahraki, Marzieh, Maryam Zangiabadi, and Hossein Mansouri. "A wide neighborhood predictor–corrector infeasible-interior-point method for Cartesian P∗(κ)-LCP over symmetric cones." Asian-European Journal of Mathematics 09, no. 03 (August 2, 2016): 1650049. http://dx.doi.org/10.1142/s1793557116500492.

Abstract:
In this paper, we present a predictor–corrector infeasible-interior-point method based on a new wide neighborhood of the central path for the linear complementarity problem over symmetric cones (SCLCP) with the Cartesian P∗(κ)-property. The convergence of the algorithm is proved for a commutative class of search directions. Moreover, using the theory of Euclidean Jordan algebras and some elegant tools, the iteration bound improves the earlier complexity of these kinds of algorithms for Cartesian P∗(κ)-SCLCPs.
4

Chyzak, F., A. Quadrat, and D. Robertz. "Linear control systems over Ore algebras: Effective algorithms for the computation of parametrizations." IFAC Proceedings Volumes 36, no. 19 (September 2003): 147–54. http://dx.doi.org/10.1016/s1474-6670(17)33317-7.

5

Jamaludin, Irma Wani, and Norhaliza Abdul Wahab. "Recursive Subspace Identification Algorithm using the Propagator Based Method." Indonesian Journal of Electrical Engineering and Computer Science 6, no. 1 (April 1, 2017): 172. http://dx.doi.org/10.11591/ijeecs.v6.i1.pp172-179.

Abstract:
Subspace model identification (SMI) is an effective method for identifying dynamic state-space linear multivariable systems, and the model can be obtained directly from input and output data. Subspace identification is based on algorithms from numerical linear algebra, namely the QR decomposition and the singular value decomposition (SVD). In industrial applications, it is essential to have online recursive subspace algorithms for model identification, where the parameters can vary in time. However, because of the computational complexity of the SVD involved in the algorithm, classical SMI algorithms are not suitable for online application. Hence, alternative algorithms are needed in order to apply subspace identification recursively. In this paper, a recursive subspace identification algorithm based on the propagator method, which avoids the SVD computation, is proposed. Output from the Numerical Subspace State Space System Identification (N4SID) and Multivariable Output Error State Space (MOESP) methods is also included.
6

JAMIOŁKOWSKI, ANDRZEJ. "ON APPLICATIONS OF PI-ALGEBRAS IN THE ANALYSIS OF QUANTUM CHANNELS." International Journal of Quantum Information 10, no. 08 (December 2012): 1241007. http://dx.doi.org/10.1142/s0219749912410079.

Abstract:
In this paper, we discuss some constructive procedures which can be used in characterizations of linear transformations which preserve the set of states of a fixed quantum system. Our methods are based on analyzing an explicit form of a linear positive map in its Kraus representation. In particular, we discuss the so-called partial commutativity of operators and its applications to investigation of decoherence-free subspaces. These subspaces can also be considered as a special class of quantum error correcting codes. Using the concept of standard polynomials and Amitsur–Levitzki theorem and other ideas from the so-called polynomial identity algebras (PI-algebras) we discuss some effective algorithms for analyzing properties of quantum channels.
7

Bluman, G. W., and S. Kumei. "Symmetry-based algorithms to relate partial differential equations: I. Local symmetries." European Journal of Applied Mathematics 1, no. 3 (September 1990): 189–216. http://dx.doi.org/10.1017/s0956792500000176.

Abstract:
Simple and systematic algorithms for relating differential equations are given. They are based on comparing the local symmetries admitted by the equations. Comparisons of the infinitesimal generators and their Lie algebras of given and target equations lead to necessary conditions for the existence of mappings which relate them. Necessary and sufficient conditions are presented for the existence of invertible mappings from a given nonlinear system of partial differential equations to some linear system of equations with examples including the hodograph and Legendre transformations, and the linearizations of a nonlinear telegraph equation, a nonlinear diffusion equation, and nonlinear fluid flow equations. Necessary and sufficient conditions are also given for the existence of an invertible point transformation which maps a linear partial differential equation with variable coefficients to a linear equation with constant coefficients. Other types of mappings are also considered including the Miura transformation and the invertible mapping which relates the cylindrical KdV and the KdV equations.
8

Chen, Chin Chun, Yuan Horng Lin, Jeng Ming Yih, and Sue Fen Huang. "Construct Knowledge Structure of Linear Algebra." Advanced Materials Research 211-212 (February 2011): 793–97. http://dx.doi.org/10.4028/www.scientific.net/amr.211-212.793.

Abstract:
We apply interpretive structural modeling to construct a knowledge structure of linear algebra. A new fuzzy clustering algorithm, an improved fuzzy c-means algorithm based on the Mahalanobis distance, has better performance than the standard fuzzy c-means algorithm. Each cluster of data can easily describe the features of its knowledge structure individually. The results show that there are six clusters, and each cluster has its own cognitive characteristics. The methodology can make knowledge management in the classroom more feasible.
9

Demmel, James, Ioana Dumitriu, Olga Holtz, and Plamen Koev. "Accurate and efficient expression evaluation and linear algebra." Acta Numerica 17 (April 25, 2008): 87–145. http://dx.doi.org/10.1017/s0962492906350015.

Abstract:
We survey and unify recent results on the existence of accurate algorithms for evaluating multivariate polynomials, and more generally for accurate numerical linear algebra with structured matrices. By ‘accurate’ we mean that the computed answer has relative error less than 1, i.e., has some correct leading digits. We also address efficiency, by which we mean algorithms that run in polynomial time in the size of the input. Our results will depend strongly on the model of arithmetic: most of our results will use the so-called traditional model (TM), where the computed result of op(a, b), a binary operation like a+b, is given by op(a, b) * (1+δ) where all we know is that |δ| ≤ ε ≪ 1. Here ε is a constant also known as machine epsilon. We will see a common reason for the following disparate problems to permit accurate and efficient algorithms using only the four basic arithmetic operations: finding the eigenvalues of a suitably discretized scalar elliptic PDE, finding eigenvalues of arbitrary products, inverses, or Schur complements of totally non-negative matrices (such as Cauchy and Vandermonde), and evaluating the Motzkin polynomial. Furthermore, in all these cases the high accuracy is ‘deserved’, i.e., the answer is determined much more accurately by the data than the conventional condition number would suggest. In contrast, we will see that evaluating even the simple polynomial x + y + z accurately is impossible in the TM, using only the basic arithmetic operations. We give a set of necessary and sufficient conditions to decide whether a high accuracy algorithm exists in the TM, and describe progress toward a decision procedure that will take any problem and provide either a high-accuracy algorithm or a proof that none exists. When no accurate algorithm exists in the TM, it is natural to extend the set of available accurate operations by a library of additional operations, such as x + y + z, dot products, or indeed any enumerable set which could then be used to build further accurate algorithms. We show how our accurate algorithms and decision procedure for finding them extend to this case. Finally, we address other models of arithmetic, and the relationship between (im)possibility in the TM and (in)efficient algorithms operating on numbers represented as bit strings.
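As a quick illustration of why even x + y + z resists accurate evaluation in the traditional model, the following snippet (our own toy example, not taken from the paper) shows catastrophic cancellation wiping out every correct digit in IEEE double precision:

```python
# Exact value of x + y + z is 1, but both evaluation orders return 0.0,
# because 1e16 + 1 and 1 - 1e16 are already rounded in double precision.
x, y, z = 1e16, 1.0, -1e16
print((x + y) + z)   # 0.0
print(x + (y + z))   # 0.0
```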
10

Fabregat-Traver, Diego, and Paolo Bientinesi. "Application-tailored linear algebra algorithms." International Journal of High Performance Computing Applications 27, no. 4 (July 18, 2013): 426–39. http://dx.doi.org/10.1177/1094342013494428.

11

Liu, Hsiang Chuan, Yen Kuei Yu, Jeng Ming Yih, and Chin Chun Chen. "Identifying the Mastery Concepts in Linear Algebra by Using FCM-CM Algorithm." Applied Mechanics and Materials 44-47 (December 2010): 3897–901. http://dx.doi.org/10.4028/www.scientific.net/amm.44-47.3897.

Abstract:
Euclidean distance function based fuzzy clustering algorithms can only be used to detect spherical structural clusters. The Gustafson-Kessel (GK) and Gath-Geva (GG) clustering algorithms were developed to detect non-spherical structural clusters by employing the Mahalanobis distance in the objective function; however, both of them need to add some constraints on the Mahalanobis distance. In this paper, the authors' improved fuzzy c-means algorithm based on a common Mahalanobis distance (FCM-CM) is used to identify the mastery concepts in linear algebra and to compare its performance with four other partition algorithms: FCM-M, GG, GK, and FCM. The result shows that FCM-CM has better performance than the others.
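To make the distinction concrete, here is a minimal sketch (not the FCM-CM algorithm itself) of the Mahalanobis distance that these clustering variants substitute for the Euclidean metric; it shows how an elongated cluster treats two points at equal Euclidean distance very differently:

```python
import numpy as np

def mahalanobis(x, mean, cov):
    """Mahalanobis distance of point x from a distribution with the given mean and covariance."""
    d = x - mean
    return float(np.sqrt(d @ np.linalg.solve(cov, d)))

rng = np.random.default_rng(0)
pts = rng.normal(size=(1000, 2)) * np.array([3.0, 0.3])   # elongated, non-spherical cluster
mean, cov = pts.mean(axis=0), np.cov(pts, rowvar=False)

p = np.array([3.0, 0.0])   # along the long axis of the cluster
q = np.array([0.0, 3.0])   # same Euclidean distance from the centre, but along the short axis
print(mahalanobis(p, mean, cov), mahalanobis(q, mean, cov))  # q is roughly ten times farther
```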
12

Ballard, G., E. Carson, J. Demmel, M. Hoemmen, N. Knight, and O. Schwartz. "Communication lower bounds and optimal algorithms for numerical linear algebra." Acta Numerica 23 (May 2014): 1–155. http://dx.doi.org/10.1017/s0962492914000038.

Abstract:
The traditional metric for the efficiency of a numerical algorithm has been the number of arithmetic operations it performs. Technological trends have long been reducing the time to perform an arithmetic operation, so it is no longer the bottleneck in many algorithms; rather, communication, or moving data, is the bottleneck. This motivates us to seek algorithms that move as little data as possible, either between levels of a memory hierarchy or between parallel processors over a network. In this paper we summarize recent progress in three aspects of this problem. First we describe lower bounds on communication. Some of these generalize known lower bounds for dense classical (O(n³)) matrix multiplication to all direct methods of linear algebra, to sequential and parallel algorithms, and to dense and sparse matrices. We also present lower bounds for Strassen-like algorithms, and for iterative methods, in particular Krylov subspace methods applied to sparse matrices. Second, we compare these lower bounds to widely used versions of these algorithms, and note that these widely used algorithms usually communicate asymptotically more than is necessary. Third, we identify or invent new algorithms for most linear algebra problems that do attain these lower bounds, and demonstrate large speed-ups in theory and practice.
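For orientation (our own restatement, not a quotation from the survey), the best-known special case of such bounds says that classical dense n × n matrix multiplication on a machine with a fast memory of size M must move on the order of n³/√M words between fast and slow memory:

```latex
W \;=\; \Omega\!\left(\frac{n^{3}}{\sqrt{M}}\right) \quad \text{words moved}
```

Algorithms whose data movement matches such a bound, for example suitably blocked matrix multiplication, are called communication-optimal.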
13

Demmel, James W., Michael T. Heath, and Henk A. van der Vorst. "Parallel numerical linear algebra." Acta Numerica 2 (January 1993): 111–97. http://dx.doi.org/10.1017/s096249290000235x.

Abstract:
We survey general techniques and open problems in numerical linear algebra on parallel architectures. We first discuss basic principles of parallel processing, describing the costs of basic operations on parallel machines, including general principles for constructing efficient algorithms. We illustrate these principles using current architectures and software systems, and by showing how one would implement matrix multiplication. Then, we present direct and iterative algorithms for solving linear systems of equations, linear least squares problems, the symmetric eigenvalue problem, the nonsymmetric eigenvalue problem, and the singular value decomposition. We consider dense, band and sparse matrices.
14

Dongarra, Jack J., and Victor Eijkhout. "Numerical linear algebra algorithms and software." Journal of Computational and Applied Mathematics 123, no. 1-2 (November 2000): 489–514. http://dx.doi.org/10.1016/s0377-0427(00)00400-3.

15

Chu, Moody T. "Linear algebra algorithms as dynamical systems." Acta Numerica 17 (April 25, 2008): 1–86. http://dx.doi.org/10.1017/s0962492906340019.

Abstract:
Any logical procedure that is used to reason or to infer either deductively or inductively, so as to draw conclusions or make decisions, can be called, in a broad sense, a realization process. A realization process usually assumes the recursive form that one state develops into another state by following a certain specific rule. Such an action is generally formalized as a dynamical system. In mathematics, especially for existence questions, a realization process often appears in the form of an iterative procedure or a differential equation. For years researchers have taken great effort to describe, analyse, and modify realization processes for various applications. The thrust in this exposition is to exploit the notion of dynamical systems as a special realization process for problems arising from the field of linear algebra. Several differential equations whose solutions evolve in submanifolds of matrices are cast in fairly general frameworks, of which special cases have been found to afford unified and fundamental insights into the structure and behaviour of existing discrete methods and, now and then, suggest new and improved numerical methods. In some cases, there are remarkable connections between smooth flows and discrete numerical algorithms. In other cases, the flow approach seems advantageous in tackling very difficult open problems. Various aspects of the recent development and application in this direction are discussed in this paper.
16

Kannan, Ravindran, and Santosh Vempala. "Randomized algorithms in numerical linear algebra." Acta Numerica 26 (May 1, 2017): 95–135. http://dx.doi.org/10.1017/s0962492917000058.

Abstract:
This survey provides an introduction to the use of randomization in the design of fast algorithms for numerical linear algebra. These algorithms typically examine only a subset of the input to solve basic problems approximately, including matrix multiplication, regression and low-rank approximation. The survey describes the key ideas and gives complete proofs of the main results in the field. A central unifying idea is sampling the columns (or rows) of a matrix according to their squared lengths.
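The length-squared sampling idea mentioned at the end of the abstract fits in a few lines; the following is a generic illustration (our own, with arbitrary sizes and sample counts) of an unbiased Monte Carlo estimate of a matrix product built from columns sampled in proportion to their squared norms:

```python
import numpy as np

def approx_matmul(A, B, s, seed=0):
    """Estimate A @ B from s sampled outer products, drawing column k of A
    (and row k of B) with probability proportional to ||A[:, k]||^2."""
    rng = np.random.default_rng(seed)
    p = np.sum(A * A, axis=0)
    p = p / p.sum()
    idx = rng.choice(A.shape[1], size=s, p=p)
    # Dividing each sampled outer product by s * p[k] makes the estimator unbiased.
    return sum(np.outer(A[:, k], B[k, :]) / (s * p[k]) for k in idx)

rng = np.random.default_rng(1)
A, B = rng.normal(size=(100, 200)), rng.normal(size=(200, 50))
est = approx_matmul(A, B, s=500)
print(np.linalg.norm(est - A @ B) / np.linalg.norm(A @ B))  # modest relative error
```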
17

Andersen, B. S., F. Gustavson, A. Karaivanov, J. Wasniewski, and P. Y. Yalamov. "LAWRA – Linear Algebra with Recursive Algorithms." Mathematical Modelling and Analysis 4, no. 1 (December 15, 1999): 7–17. http://dx.doi.org/10.3846/13926292.1999.9637105.

18

Agarwal, R. C., F. G. Gustavson, and M. Zubair. "Improving performance of linear algebra algorithms for dense matrices, using algorithmic prefetch." IBM Journal of Research and Development 38, no. 3 (May 1994): 265–75. http://dx.doi.org/10.1147/rd.383.0265.

19

Barrowclough, Oliver J. D., and Tor Dokken. "Approximate Implicitization Using Linear Algebra." Journal of Applied Mathematics 2012 (2012): 1–25. http://dx.doi.org/10.1155/2012/293746.

Abstract:
We consider a family of algorithms for approximate implicitization of rational parametric curves and surfaces. The main approximation tool in all of the approaches is the singular value decomposition, and they are therefore well suited to floating-point implementation in computer-aided geometric design (CAGD) systems. We unify the approaches under the names of commonly known polynomial basis functions and consider various theoretical and practical aspects of the algorithms. We offer new methods for a least squares approach to approximate implicitization using orthogonal polynomials, which tend to be faster and more numerically stable than some existing algorithms. We propose several simple propositions relating the properties of the polynomial bases to their implicit approximation properties.
20

Martinsson, Per-Gunnar, and Joel A. Tropp. "Randomized numerical linear algebra: Foundations and algorithms." Acta Numerica 29 (May 2020): 403–572. http://dx.doi.org/10.1017/s0962492920000021.

Abstract:
This survey describes probabilistic algorithms for linear algebraic computations, such as factorizing matrices and solving linear systems. It focuses on techniques that have a proven track record for real-world problems. The paper treats both the theoretical foundations of the subject and practical computational issues. Topics include norm estimation, matrix approximation by sampling, structured and unstructured random embeddings, linear regression problems, low-rank approximation, subspace iteration and Krylov methods, error estimation and adaptivity, interpolatory and CUR factorizations, Nyström approximation of positive semidefinite matrices, single-view (‘streaming’) algorithms, full rank-revealing factorizations, solvers for linear systems, and approximation of kernel matrices that arise in machine learning and in scientific computing.
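One workhorse scheme in this area, the randomized range finder for low-rank approximation, fits in a few lines; this is a minimal sketch with a Gaussian test matrix and a small oversampling parameter (both choices are ours, for illustration):

```python
import numpy as np

def randomized_low_rank(A, k, oversample=10, seed=0):
    """Rank-k approximation A ~ U @ diag(s) @ Vt via a randomized range finder."""
    rng = np.random.default_rng(seed)
    G = rng.normal(size=(A.shape[1], k + oversample))  # random test matrix
    Q, _ = np.linalg.qr(A @ G)                         # orthonormal basis for the sampled range
    U_b, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U_b)[:, :k], s[:k], Vt[:k]

rng = np.random.default_rng(2)
A = rng.normal(size=(500, 40)) @ rng.normal(size=(40, 300))   # exactly rank 40
U, s, Vt = randomized_low_rank(A, k=40)
print(np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))   # tiny relative error
```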
21

Gallivan, K. A., R. J. Plemmons, and A. H. Sameh. "Parallel Algorithms for Dense Linear Algebra Computations." SIAM Review 32, no. 1 (March 1990): 54–135. http://dx.doi.org/10.1137/1032002.

22

Demmel, J., J. Dongarra, V. Eijkhout, E. Fuentes, A. Petitet, R. Vuduc, R. C. Whaley, and K. Yelick. "Self-Adapting Linear Algebra Algorithms and Software." Proceedings of the IEEE 93, no. 2 (February 2005): 293–312. http://dx.doi.org/10.1109/jproc.2004.840848.

23

Angelaccio, M., and M. Colajanni. "Unifying and optimizing parallel linear algebra algorithms." IEEE Transactions on Parallel and Distributed Systems 4, no. 12 (1993): 1382–97. http://dx.doi.org/10.1109/71.250119.

24

Wang, Rui, Yue Wang, Yanping Li, Wenming Cao, and Yi Yan. "Geometric Algebra-Based ESPRIT Algorithm for DOA Estimation." Sensors 21, no. 17 (September 3, 2021): 5933. http://dx.doi.org/10.3390/s21175933.

Abstract:
Direction-of-arrival (DOA) estimation plays an important role in array signal processing, and the Estimating Signal Parameter via Rotational Invariance Techniques (ESPRIT) algorithm is one of the typical super resolution algorithms for direction finding in an electromagnetic vector-sensor (EMVS) array; however, existing ESPRIT algorithms treat the output of the EMVS array either as a “long vector”, which will inevitably lead to loss of the orthogonality of the signal components, or a quaternion matrix, which may result in some missing information. In this paper, we propose a novel ESPRIT algorithm based on Geometric Algebra (GA-ESPRIT) to estimate 2D-DOA with double parallel uniform linear arrays. The algorithm combines GA with the principle of ESPRIT, which models the multi-dimensional signals in a holistic way, and then the direction angles can be calculated by different GA matrix operations to keep the correlations among multiple components of the EMVS. Experimental results demonstrate that the proposed GA-ESPRIT algorithm is robust to model errors and achieves less time complexity and smaller memory requirements.
25

Howle, Victoria E., Robert C. Kirby, Kevin Long, Brian Brennan, and Kimberly Kennedy. "Playa: High-Performance Programmable Linear Algebra." Scientific Programming 20, no. 3 (2012): 257–73. http://dx.doi.org/10.1155/2012/606215.

Abstract:
This paper introduces Playa, a high-level user interface layer for composing algorithms for complex multiphysics problems out of objects from other Trilinos packages. Among other features, Playa provides very high-performance overloaded operators implemented through an expression template mechanism. In this paper, we give an overview of the central Playa objects from a user's perspective, show application to a sequence of increasingly complex solver algorithms, provide timing results for Playa's overloaded operators and other functions, and briefly survey some of the implementation issues involved.
26

Eaves, B. Curtis, and Uriel G. Rothblum. "Linear Problems and Linear Algorithms." Journal of Symbolic Computation 20, no. 2 (August 1995): 207–14. http://dx.doi.org/10.1006/jsco.1995.1047.

27

von Sohsten de Medeiros, Airton. "Elementary Linear Algebra and the Division Algorithm." College Mathematics Journal 33, no. 1 (January 2002): 51. http://dx.doi.org/10.2307/1558982.

28

Edelman, A., S. Heller, and S. Lennart Johnsson. "Index transformation algorithms in a linear algebra framework." IEEE Transactions on Parallel and Distributed Systems 5, no. 12 (1994): 1302–9. http://dx.doi.org/10.1109/71.334903.

29

O'Leary, Dianne P., and A. Yeremin. "The linear algebra of block quasi-Newton algorithms." Linear Algebra and its Applications 212-213 (November 1994): 153–68. http://dx.doi.org/10.1016/0024-3795(94)90401-4.

30

Pennestrì, E., and R. Stefanelli. "Linear algebra and numerical algorithms using dual numbers." Multibody System Dynamics 18, no. 3 (August 3, 2007): 323–44. http://dx.doi.org/10.1007/s11044-007-9088-9.

31

Bientinesi, Paolo, John A. Gunnels, Margaret E. Myers, Enrique S. Quintana-Ortí, and Robert A. van de Geijn. "The science of deriving dense linear algebra algorithms." ACM Transactions on Mathematical Software 31, no. 1 (March 2005): 1–26. http://dx.doi.org/10.1145/1055531.1055532.

32

Chen, Chin Chun, Yuan Horng Lin, Jeng Ming Yih, and Shu Yi Juan. "Construct Concept Structure for Linear Algebra Based on Cognition Diagnosis and Clustering with Mahalanobis Distances." Advanced Materials Research 211-212 (February 2011): 756–60. http://dx.doi.org/10.4028/www.scientific.net/amr.211-212.756.

Abstract:
Euclidean distance function based fuzzy clustering algorithms can only be used to detect spherical structural clusters. The purpose of this study is to use an improved fuzzy c-means algorithm based on the Mahalanobis distance to identify the concept structure of linear algebra. In addition, concept structure analysis (CSA) can provide individualized knowledge structures. The CSA algorithm is the major methodology; it is based on the fuzzy logic model of perception (FLMP) and interpretive structural modeling (ISM). CSA can display an individualized knowledge structure and clearly represent hierarchies and linkages among concepts for each examinee. Each cluster of data can easily describe the features of its knowledge structure. The results show that there are five clusters, and each cluster has its own cognitive characteristics. In this study, the authors provide empirical data on linear algebra concepts from university students. To sum up, the methodology can make knowledge management in the classroom more feasible. Finally, the results show that the algorithm based on the Mahalanobis distance has better performance than the fuzzy c-means algorithm.
33

Eldén, Lars. "Numerical linear algebra in data mining." Acta Numerica 15 (May 2006): 327–84. http://dx.doi.org/10.1017/s0962492906240017.

Abstract:
Ideas and algorithms from numerical linear algebra are important in several areas of data mining. We give an overview of linear algebra methods in text mining (information retrieval), pattern recognition (classification of handwritten digits), and PageRank computations for web search engines. The emphasis is on rank reduction as a method of extracting information from a data matrix, low-rank approximation of matrices using the singular value decomposition and clustering, and on eigenvalue methods for network analysis.
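As a small worked example of the eigenvalue-based network analysis the abstract mentions, here is a toy PageRank computed by power iteration (our own illustration of the standard formulation, not code from the survey):

```python
import numpy as np

def pagerank(adj, alpha=0.85, tol=1e-12):
    """PageRank scores for a small link graph; adj[i, j] = 1 if page i links to page j."""
    n = adj.shape[0]
    out_deg = adj.sum(axis=1, keepdims=True)
    # Row-stochastic transition matrix; pages without out-links jump uniformly.
    P = np.where(out_deg > 0, adj / np.maximum(out_deg, 1), 1.0 / n)
    x = np.full(n, 1.0 / n)
    while True:
        x_next = alpha * (x @ P) + (1 - alpha) / n
        if np.abs(x_next - x).sum() < tol:
            return x_next
        x = x_next

links = np.array([[0, 1, 1],
                  [0, 0, 1],
                  [1, 0, 0]], dtype=float)
print(pagerank(links))   # importance scores, summing to 1
```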
34

Hoffmann, Walter, and Kitty Potma. "Implementing linear algebra algorithms on a Meiko computing surface." Applied Numerical Mathematics 8, no. 2 (September 1991): 127–48. http://dx.doi.org/10.1016/0168-9274(91)90047-4.

35

Dimov, I., V. Alexandrov, and A. Karaivanova. "Parallel resolvent Monte Carlo algorithms for linear algebra problems." Mathematics and Computers in Simulation 55, no. 1-3 (February 2001): 25–35. http://dx.doi.org/10.1016/s0378-4754(00)00243-3.

36

Zhang, Yongzhe, Ariful Azad, and Aydın Buluç. "Parallel algorithms for finding connected components using linear algebra." Journal of Parallel and Distributed Computing 144 (October 2020): 14–27. http://dx.doi.org/10.1016/j.jpdc.2020.04.009.

37

Khuri-Makdisi, Kamal. "Linear algebra algorithms for divisors on an algebraic curve." Mathematics of Computation 73, no. 245 (July 7, 2003): 333–57. http://dx.doi.org/10.1090/s0025-5718-03-01567-9.

38

Krüger, Jens, and Rüdiger Westermann. "Linear algebra operators for GPU implementation of numerical algorithms." ACM Transactions on Graphics 22, no. 3 (July 2003): 908–16. http://dx.doi.org/10.1145/882262.882363.

39

Dekker, T. J., W. Hoffmann, and P. P. M. De Rijk. "Algorithms for solving numerical linear algebra problems on supercomputers." Future Generation Computer Systems 4, no. 4 (March 1989): 255–63. http://dx.doi.org/10.1016/0167-739x(89)90001-0.

40

Shi, Xiaofei, and David P. Woodruff. "Sublinear Time Numerical Linear Algebra for Structured Matrices." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4918–25. http://dx.doi.org/10.1609/aaai.v33i01.33014918.

Abstract:
We show how to solve a number of problems in numerical linear algebra, such as least squares regression, ℓp-regression for any p ≥ 1, low rank approximation, and kernel regression, in time T(A)·poly(log(nd)), where for a given input matrix A ∈ R^{n×d}, T(A) is the time needed to compute A·y for an arbitrary vector y ∈ R^d. Since T(A) ≤ O(nnz(A)), where nnz(A) denotes the number of non-zero entries of A, the time is no worse, up to polylogarithmic factors, than all of the recent advances for such problems that run in input-sparsity time. However, for many applications, T(A) can be much smaller than nnz(A), yielding significantly sublinear time algorithms. For example, in the overconstrained (1+ε)-approximate polynomial interpolation problem, A is a Vandermonde matrix and T(A) = O(n log n); in this case our running time is n·poly(log n) + poly(d/ε) and we recover the results of Avron, Sindhwani, and Woodruff (2013) as a special case. For overconstrained autoregression, which is a common problem arising in dynamical systems, T(A) = O(n log n), and we immediately obtain n·poly(log n) + poly(d/ε) time. For kernel autoregression, we significantly improve the running time of prior algorithms for general kernels. For the important case of autoregression with the polynomial kernel and arbitrary target vector b ∈ R^n, we obtain even faster algorithms. Our algorithms show that, perhaps surprisingly, most of these optimization problems do not require much more time than that of a polylogarithmic number of matrix-vector multiplications.
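The Vandermonde case is easy to visualize: a Vandermonde matrix-vector product is just multipoint evaluation of a polynomial, which is why fast evaluation algorithms make T(A) far smaller than nnz(A) there. The snippet below (our own illustration; NumPy's polyval is not the fast algorithm, it only demonstrates the equivalence) checks this identity:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=6)                                # evaluation nodes
c = rng.normal(size=4)                                # polynomial coefficients c_0..c_3
V = np.vander(x, N=len(c), increasing=True)           # V[i, j] = x[i]**j
# (V @ c)[i] = sum_j c_j * x_i**j, i.e. the polynomial with coefficients c evaluated at x_i.
print(np.allclose(V @ c, np.polynomial.polynomial.polyval(x, c)))   # True
```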
41

Abdelfattah, A., H. Anzt, J. Dongarra, M. Gates, A. Haidar, J. Kurzak, P. Luszczek, S. Tomov, I. Yamazaki, and A. YarKhan. "Linear algebra software for large-scale accelerated multicore computing." Acta Numerica 25 (May 1, 2016): 1–160. http://dx.doi.org/10.1017/s0962492916000015.

Abstract:
Many crucial scientific computing applications, ranging from national security to medical advances, rely on high-performance linear algebra algorithms and technologies, underscoring their importance and broad impact. Here we present the state-of-the-art design and implementation practices for the acceleration of the predominant linear algebra algorithms on large-scale accelerated multicore systems. Examples are given with fundamental dense linear algebra algorithms – from the LU, QR, Cholesky, and LDL^T factorizations needed for solving linear systems of equations, to eigenvalue and singular value decomposition (SVD) problems. The implementations presented are readily available via the open-source PLASMA and MAGMA libraries, which represent the next generation modernization of the popular LAPACK library for accelerated multicore systems. To generate the extreme level of parallelism needed for the efficient use of these systems, algorithms of interest are redesigned and then split into well-chosen computational tasks. The task execution is scheduled over the computational components of a hybrid system of multicore CPUs with GPU accelerators and/or Xeon Phi coprocessors, using either static scheduling or light-weight runtime systems. The use of light-weight runtime systems keeps scheduling overheads low, similar to static scheduling, while enabling the expression of parallelism through sequential-like code. This simplifies the development effort and allows exploration of the unique strengths of the various hardware components. Finally, we emphasize the development of innovative linear algebra algorithms using three technologies – mixed precision arithmetic, batched operations, and asynchronous iterations – that are currently of high interest for accelerated multicore systems.
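To give a feel for the 'well-chosen computational tasks' mentioned above, here is a sequential sketch (our own, not PLASMA/MAGMA code) of a tiled Cholesky factorization; each block operation (POTRF, TRSM, GEMM/SYRK) is one task that a runtime system could schedule across cores and accelerators once its dependencies are satisfied:

```python
import numpy as np

def tiled_cholesky(A, nb):
    """Right-looking tiled Cholesky: returns lower-triangular L with L @ L.T == A."""
    A = A.copy()
    n = A.shape[0]
    for k in range(0, n, nb):
        ke = min(k + nb, n)
        A[k:ke, k:ke] = np.linalg.cholesky(A[k:ke, k:ke])                        # POTRF task
        for i in range(ke, n, nb):
            ie = min(i + nb, n)
            A[i:ie, k:ke] = np.linalg.solve(A[k:ke, k:ke], A[i:ie, k:ke].T).T    # TRSM task
        for i in range(ke, n, nb):
            ie = min(i + nb, n)
            for j in range(ke, ie, nb):
                je = min(j + nb, n)
                A[i:ie, j:je] -= A[i:ie, k:ke] @ A[j:je, k:ke].T                 # GEMM/SYRK task
    return np.tril(A)

rng = np.random.default_rng(0)
B = rng.normal(size=(64, 64))
A = B @ B.T + 64 * np.eye(64)          # symmetric positive definite test matrix
L = tiled_cholesky(A, nb=16)
print(np.allclose(L @ L.T, A))         # True
```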
42

Zhang, Hui. "The Application of Linear Algebra Algorithm in the Production of Linear Matrix Inequalities." Applied Mechanics and Materials 192 (July 2012): 406–11. http://dx.doi.org/10.4028/www.scientific.net/amm.192.406.

Abstract:
This paper discusses the theory and a symbolic implementation of the algorithm and points to another potential application in systems and control. For example, for problems whose LMI data have special structure, factorizations may make the LMI more compact. One can even imagine using the algorithm to look for opportunities to automatically eliminate variables from an LMI, simplifying the problem before it is solved. We describe the theory, show that the algorithm can be used to factor polynomial matrices in non-commuting variables, and apply it to transforming systems and control problems into linear matrix inequalities.
43

Fieker, Claus, and Willem A. de Graaf. "Finding Integral Linear Dependencies of Algebraic Numbers and Algebraic Lie Algebras." LMS Journal of Computation and Mathematics 10 (2007): 271–87. http://dx.doi.org/10.1112/s1461157000001406.

Abstract:
We give an algorithm for finding the module of linear dependencies of the roots of a monic integral polynomial. Using this, we describe an algorithm for constructing the algebraic hull of a given matrix Lie algebra in characteristic zero.
44

Karthigai Selvam, S., and S. Selvam. "Image Compression Techniques Using Linear Algebra with SVD Algorithm." Asian Journal of Engineering and Applied Technology 10, no. 1 (May 5, 2021): 22–28. http://dx.doi.org/10.51983/ajeat-2021.10.1.2724.

Abstract:
Nowadays, data are increasingly produced as multimedia such as images, graphics, audio and video. Multimedia data require a huge amount of storage capacity and transmission bandwidth, so data compression is used to reduce redundancy and allow more data to be stored. This paper addresses the shortcomings of lossy image compression. The proposed method is based on an SVD power method that overcomes the drawbacks of the Python SVD function. Our experimental results show the superiority of the proposed compression method over the Python SVD function and various other compression techniques. In addition, the proposed method provides different degrees of error flexibility, which yields lower execution time and better image compression.
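For readers who want the underlying linear algebra, a generic rank-k truncated SVD compressor (not the paper's power-method variant) looks like this; a rank-k factorization stores only k(m + n + 1) numbers instead of m·n:

```python
import numpy as np

def svd_compress(img, k):
    """Best rank-k approximation (in the least-squares sense) of a grayscale image."""
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]

img = np.random.default_rng(0).random((256, 256))    # stand-in for a grayscale image
approx = svd_compress(img, k=32)
ratio = 32 * (256 + 256 + 1) / img.size               # fraction of the original storage
err = np.linalg.norm(img - approx) / np.linalg.norm(img)
print(f"storage ratio {ratio:.2f}, relative error {err:.3f}")
```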
45

Flegar, Goran, Hartwig Anzt, Terry Cojean, and Enrique S. Quintana-Ortí. "Adaptive Precision Block-Jacobi for High Performance Preconditioning in the Ginkgo Linear Algebra Software." ACM Transactions on Mathematical Software 47, no. 2 (April 2021): 1–28. http://dx.doi.org/10.1145/3441850.

Abstract:
The use of mixed precision in numerical algorithms is a promising strategy for accelerating scientific applications. In particular, the adoption of specialized hardware and data formats for low-precision arithmetic in high-end GPUs (graphics processing units) has motivated numerous efforts aiming at carefully reducing the working precision in order to speed up the computations. For algorithms whose performance is bound by the memory bandwidth, the idea of compressing its data before (and after) memory accesses has received considerable attention. One idea is to store an approximate operator–like a preconditioner–in lower than working precision hopefully without impacting the algorithm output. We realize the first high-performance implementation of an adaptive precision block-Jacobi preconditioner which selects the precision format used to store the preconditioner data on-the-fly, taking into account the numerical properties of the individual preconditioner blocks. We implement the adaptive block-Jacobi preconditioner as production-ready functionality in the Ginkgo linear algebra library, considering not only the precision formats that are part of the IEEE standard, but also customized formats which optimize the length of the exponent and significand to the characteristics of the preconditioner blocks. Experiments run on a state-of-the-art GPU accelerator show that our implementation offers attractive runtime savings.
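The core idea can be sketched in a few lines of NumPy (a toy illustration only; Ginkgo's production code uses a more refined per-block criterion and custom GPU-side formats): invert each diagonal block and keep it in a lower-precision format when the block is well conditioned enough for that precision not to matter.

```python
import numpy as np

def adaptive_block_jacobi(A, nb, cond_cutoff=1e4):
    """Block-Jacobi preconditioner whose inverted diagonal blocks are stored in
    float32 when well conditioned and float64 otherwise; returns an apply-function."""
    blocks = []
    for start in range(0, A.shape[0], nb):
        sl = slice(start, min(start + nb, A.shape[0]))
        block = A[sl, sl]
        dtype = np.float32 if np.linalg.cond(block) < cond_cutoff else np.float64
        blocks.append((sl, np.linalg.inv(block).astype(dtype)))
    def apply(v):
        out = np.empty_like(v, dtype=np.float64)
        for sl, inv in blocks:
            out[sl] = inv @ v[sl]
        return out
    return apply

rng = np.random.default_rng(0)
A = np.diag(np.linspace(1.0, 100.0, 64)) + 0.01 * rng.random((64, 64))
M_inv = adaptive_block_jacobi(A, nb=8)
print(M_inv(np.ones(64))[:4])   # preconditioned vector, first few entries
```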
46

Rump, Siegfried M. "Computable backward error bounds for basic algorithms in linear algebra." Nonlinear Theory and Its Applications, IEICE 6, no. 3 (2015): 360–63. http://dx.doi.org/10.1587/nolta.6.360.

47

Bini, Dario, Marilena Mitrouli, Marc Van Barel, and Joab Winkler. "Structured Numerical Linear and Multilinear Algebra: Analysis, Algorithms and Applications." Linear Algebra and its Applications 502 (August 2016): 1–4. http://dx.doi.org/10.1016/j.laa.2016.03.042.

48

Patel, Apoorva, and Anjani Priyadarsini. "Efficient quantum algorithms for state measurement and linear algebra applications." International Journal of Quantum Information 16, no. 06 (September 2018): 1850048. http://dx.doi.org/10.1142/s021974991850048x.

Abstract:
We present an algorithm for measurement of [Formula: see text]-local operators in a quantum state, which scales logarithmically both in the system size and the output accuracy. The key ingredients of the algorithm are a digital representation of the quantum state, and a decomposition of the measurement operator in a basis of operators with known discrete spectra. We then show how this algorithm can be combined with (a) Hamiltonian evolution to make quantum simulations efficient, (b) the Newton–Raphson method based solution of matrix inverse to efficiently solve linear simultaneous equations, and (c) Chebyshev expansion of matrix exponentials to efficiently evaluate thermal expectation values. The general strategy may be useful in solving many other linear algebra problems efficiently.
49

Ho, Kenneth. "FLAM: Fast Linear Algebra in MATLAB - Algorithms for Hierarchical Matrices." Journal of Open Source Software 5, no. 51 (July 4, 2020): 1906. http://dx.doi.org/10.21105/joss.01906.

50

Cheng, Howard, and George Labahn. "Applying linear algebra routines to modular Ore polynomial matrix algorithms." ACM Communications in Computer Algebra 43, no. 3/4 (June 24, 2010): 78–79. http://dx.doi.org/10.1145/1823931.1823937.
