Journal articles on the topic 'LU factorization'

Consult the top 50 journal articles for your research on the topic 'LU factorization.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Ng, Wei Shean, and Wei Wen Tan. "Some properties of various types of matrix factorization." ITM Web of Conferences 36 (2021): 03003. http://dx.doi.org/10.1051/itmconf/20213603003.

Full text
Abstract:
Matrix factorizations or matrix decompositions are methods that represent a matrix as a product of two or more matrices. There are various types of matrix factorizations such as LU factorization, Cholesky factorization, singular value decomposition etc. Matrix factorization is widely used in pattern recognition, image denoising, data clustering etc. Motivated by these applications, some properties and applications of various types of matrix factorizations are studied. One of the purposes of matrix factorization is to ease the computation. Thus, comparisons in terms of computation time of various matrix factorizations in different areas are carried out.
APA, Harvard, Vancouver, ISO, and other styles
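As a concrete, if simplistic, illustration of the kind of computation-time comparison described in entry 1 above, the following Python sketch (my own example, not code from the article; the test matrix, its size, and the SciPy routines are assumptions) factorizes one symmetric positive definite matrix with LU, Cholesky, and SVD and reports the wall-clock time of each.

import time
import numpy as np
from scipy.linalg import lu, cholesky, svd

rng = np.random.default_rng(0)
n = 500
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)                    # symmetric positive definite test matrix

for name, factorize in [
    ("LU", lambda M: lu(M)),                   # returns P, L, U with A = P @ L @ U
    ("Cholesky", lambda M: cholesky(M, lower=True)),   # A = L @ L.T
    ("SVD", lambda M: svd(M)),                 # A = U @ diag(s) @ Vt
]:
    t0 = time.perf_counter()
    factorize(A)
    print(f"{name:9s}: {time.perf_counter() - t0:.4f} s")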
2

Grünbaum, F. Alberto, and Manuel D. de la Iglesia. "Stochastic LU factorizations, Darboux transformations and urn models." Journal of Applied Probability 55, no. 3 (2018): 862–86. http://dx.doi.org/10.1017/jpr.2018.55.

Full text
Abstract:
We consider upper‒lower (UL) (and lower‒upper (LU)) factorizations of the one-step transition probability matrix of a random walk with the state space of nonnegative integers, with the condition that both upper and lower triangular matrices in the factorization are also stochastic matrices. We provide conditions on the free parameter of the UL factorization in terms of certain continued fractions such that this stochastic factorization is possible. By inverting the order of the factors (also known as a Darboux transformation) we obtain a new family of random walks where it is possible to state the spectral measures in terms of a Geronimus transformation. We repeat this for the LU factorization but without a free parameter. Finally, we apply our results in two examples: the random walk with constant transition probabilities, and the random walk generated by the Jacobi orthogonal polynomials. In both situations we obtain urn models associated with all the random walks in question.
APA, Harvard, Vancouver, ISO, and other styles
3

Ogita, Takeshi. "Accurate Matrix Factorization: Inverse LU and Inverse QR Factorizations." SIAM Journal on Matrix Analysis and Applications 31, no. 5 (2010): 2477–97. http://dx.doi.org/10.1137/090754376.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Qian, Guoyou, and Jingya Lu. "LU-Factorizations of Symmetric Matrices with Applications." Asian-European Journal of Mathematics 03, no. 01 (2010): 133–43. http://dx.doi.org/10.1142/s179355711000009x.

Full text
Abstract:
In this paper, we describe explicitly the LU-factorization of a symmetric matrix of order n with n ≤ 7 when each of its ordered principal minors is nonzero. By using this result and some other related results on non-singularity previously given by Smith, Beslin, Hong, Lee and Ligh in the literature, we establish several theorems concerning LU-factorizations of power GCD matrices, power LCM matrices, reciprocal power GCD matrices and reciprocal power LCM matrices.
APA, Harvard, Vancouver, ISO, and other styles
5

Babarinsa, Olayiwola, Azfi Zaidi Mohammad Sofi, Mohd Asrul Hery Ibrahim, and Hailiz Kamarulhaili. "Optimized Cramer’s Rule in WZ Factorization and Applications." European Journal of Pure and Applied Mathematics 13, no. 4 (2020): 1035–54. http://dx.doi.org/10.29020/nybg.ejpam.v13i4.3818.

Full text
Abstract:
In this paper, WZ factorization is optimized with a proposed Cramer's rule and compared with the classical Cramer's rule to solve the linear systems of the factorization technique. The matrix norms and performance time of WZ factorization together with LU factorization are analyzed using sparse matrices in MATLAB on AMD and Intel processors to deduce that the optimized Cramer's rule in the factorization algorithm yields more accurate results than LU factorization and conventional WZ factorization. In all, the matrix group and Schur complement for every Z-system (2×2 block triangular matrices from the Z-matrix) are established.
APA, Harvard, Vancouver, ISO, and other styles
6

Wu, Chi-Ye, and Ting-Zhu Huang. "Perturbation Theory for the LU and QR Factorizations." ANZIAM Journal 49, no. 4 (2008): 451–61. http://dx.doi.org/10.1017/s1446181108000138.

Full text
Abstract:
In this paper we derive perturbation theorems for the LU and QR factors. Moreover, bounds for κ_L(A)/κ′_L(A) and κ_U(A)/κ′_U(A) are given for the LU factorization of a nonsingular matrix. By applying pivoting strategies in the LU factorization, estimates for κ_L(PAQ)/κ′_L(PAQ) and κ_U(PAQ)/κ′_U(PAQ) are also obtained.
APA, Harvard, Vancouver, ISO, and other styles
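To give a rough feel for the sensitivity questions studied in entry 6 above, here is a small numerical experiment of my own (it does not compute the paper's κ_L and κ_U bounds): perturb a random matrix slightly and observe how much its LU factors move, assuming the pivot sequence does not change.

import numpy as np
from scipy.linalg import lu

rng = np.random.default_rng(1)
n = 100
A = rng.standard_normal((n, n))
E = 1e-8 * rng.standard_normal((n, n))         # small perturbation of A

P1, L1, U1 = lu(A)
P2, L2, U2 = lu(A + E)                         # assumes the pivot sequence is unchanged

print("relative perturbation:", np.linalg.norm(E) / np.linalg.norm(A))
print("relative change in L: ", np.linalg.norm(L2 - L1) / np.linalg.norm(L1))
print("relative change in U: ", np.linalg.norm(U2 - U1) / np.linalg.norm(U1))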
7

Iakymchuk, Roman, Stef Graillat, David Defour, and Enrique S. Quintana-Ortí. "Hierarchical approach for deriving a reproducible unblocked LU factorization." International Journal of High Performance Computing Applications 33, no. 5 (2019): 791–803. http://dx.doi.org/10.1177/1094342019832968.

Full text
Abstract:
We propose a reproducible variant of the unblocked LU factorization for graphics processor units (GPUs). For this purpose, we build upon Level-1/2 BLAS kernels that deliver correctly-rounded and reproducible results for the dot (inner) product, vector scaling, and the matrix-vector product. In addition, we draw a strategy to enhance the accuracy of the triangular solve via iterative refinement. Following a bottom-up approach, we finally construct a reproducible unblocked implementation of the LU factorization for GPUs, which accommodates partial pivoting for stability and can be eventually integrated in a high performance and stable algorithm for the (blocked) LU factorization.
APA, Harvard, Vancouver, ISO, and other styles
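For reference, a plain CPU version of the unblocked, right-looking LU factorization with partial pivoting discussed in entry 7 reads as follows; this is a generic NumPy sketch of the textbook algorithm, not the authors' reproducible GPU implementation.

import numpy as np

def lu_unblocked(A):
    """Return P, L, U with P @ A = L @ U, computed one column at a time."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    piv = np.arange(n)
    for k in range(n - 1):
        # Partial pivoting: move the largest entry of column k onto the diagonal.
        p = k + np.argmax(np.abs(A[k:, k]))
        if p != k:
            A[[k, p]] = A[[p, k]]
            piv[[k, p]] = piv[[p, k]]
        # Multipliers below the pivot, then a rank-1 update of the trailing block.
        A[k + 1:, k] /= A[k, k]
        A[k + 1:, k + 1:] -= np.outer(A[k + 1:, k], A[k, k + 1:])
    L = np.tril(A, -1) + np.eye(n)
    U = np.triu(A)
    P = np.eye(n)[piv]                         # row permutation as a matrix
    return P, L, U

A = np.random.default_rng(2).standard_normal((6, 6))
P, L, U = lu_unblocked(A)
print(np.allclose(P @ A, L @ U))               # should print True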
8

Almeida, C. G., and S. A. E. Remigio. "Sufficient Conditions for Existence of the LU Factorization of Toeplitz Symmetric Tridiagonal Matrices." Trends in Computational and Applied Mathematics 24, no. 1 (2023): 177–90. http://dx.doi.org/10.5540/tcam.2022.024.01.00177.

Full text
Abstract:
The characterization of inverses of symmetric tridiagonal and block tridiagonal matrices and the development of algorithms for finding the inverse of any general non-singular tridiagonal matrix are subjects that have been studied by many authors. The results of this research usually depend on the existence of the LU factorization of a non-singular matrix A, such that A = LU. Besides, the conditions that ensure the non-singularity of A and its LU factorization are not promptly obtained. We therefore present in this work two extremely simple sufficient conditions for the existence of the LU factorization of a Toeplitz symmetric tridiagonal matrix A. We take into consideration the roots of the modified Chebyshev polynomial, and we also present an analysis based on the parameters of Crout's method.
APA, Harvard, Vancouver, ISO, and other styles
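A minimal sketch of the Crout-type recursion for the Toeplitz symmetric tridiagonal case treated in entry 8, written under my own assumptions about notation (diagonal a, off-diagonal b): the factorization exists as long as every parameter α_i produced by the recursion stays nonzero.

import numpy as np

def crout_toeplitz_tridiag(a, b, n):
    """Return L, U with A = L @ U for the n x n matrix tridiag(b, a, b)."""
    alpha = np.empty(n)                        # diagonal of L
    gamma = np.empty(n - 1)                    # superdiagonal of the unit upper factor U
    alpha[0] = a
    for i in range(1, n):
        if alpha[i - 1] == 0.0:
            raise ZeroDivisionError("LU factorization without pivoting breaks down")
        gamma[i - 1] = b / alpha[i - 1]
        alpha[i] = a - b * gamma[i - 1]
    L = np.diag(alpha) + np.diag(np.full(n - 1, b), -1)
    U = np.eye(n) + np.diag(gamma, 1)
    return L, U

n, a, b = 8, 4.0, 1.0                          # a strictly diagonally dominant example
A = np.diag(np.full(n, a)) + np.diag(np.full(n - 1, b), 1) + np.diag(np.full(n - 1, b), -1)
L, U = crout_toeplitz_tridiag(a, b, n)
print(np.allclose(A, L @ U))                   # should print True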
9

Al-Ayyoub, Abdel-Elah, and Khaled Day. "Fast LU Factorization on the Hyperstar Interconnection Network." Journal of Interconnection Networks 03, no. 03n04 (2002): 231–43. http://dx.doi.org/10.1142/s0219265902000641.

Full text
Abstract:
The hyperstar network has been recently proposed as an attractive product network that outperforms many popular topologies in various respects. In this paper we explore additional capabilities for the hyperstar network through an efficient parallel algorithm for solving the LU factorization problem on this network. The proposed parallel algorithm uses O(n) communication time on a hyperstar formed by the cross-product of two n-star graphs. This communication time improves the best known result for the hypercube-based LU factorization by a factor of log(n), and improves the best known result for the mesh-based LU factorization by a factor of (n - 1)!.
APA, Harvard, Vancouver, ISO, and other styles
10

Amestoy, Patrick R., and Chiara Puglisi. "An Unsymmetrized Multifrontal LU Factorization." SIAM Journal on Matrix Analysis and Applications 24, no. 2 (2002): 553–69. http://dx.doi.org/10.1137/s0895479800375370.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Neta, Beny, and Heng-Ming Tai. "LU factorization on parallel computers." Computers & Mathematics with Applications 11, no. 6 (1985): 573–79. http://dx.doi.org/10.1016/0898-1221(85)90039-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Rahman, Abdel Radi Abdel Rahman Abdel Gadir Abdel, and Shady Seed El Okuer. "Divide-and-Conquer Strategy with LU Factorization for Inverting Symmetric Positive Definite Matrices." Applied Sciences Research Periodicals 3, no. 01 (2025): 17–25. https://doi.org/10.63002/asrp.301.727.

Full text
Abstract:
We studied the solution of a system of equations Ax = b with a singular or nearly singular, symmetric positive definite coefficient matrix A. Our algorithm is based on the Divide-and-Conquer strategy, leading to a Divide-and-Conquer Algorithm (D&C algorithm) combined with an LU Factorization algorithm. The LU Factorization was used to convert the matrix into a product of the form LU, where L is a lower triangular matrix and U is an upper triangular matrix. The algorithm was implemented in MATLAB and simulated as a user-subroutine. The user-subroutine takes into account MATLAB features for reducing the round-off error, especially for sensitive systems. Numerical examples were given for a non-singular matrix and for an ill-conditioned matrix. The effect of round-off error was analyzed. We compared results with previous ones, where LU factorization is used.
APA, Harvard, Vancouver, ISO, and other styles
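The basic LU-based solve and inverse that entries 12 and 15 build on can be illustrated with SciPy (this is only a generic sketch, not the authors' divide-and-conquer subroutine; the test matrix is an assumption):

import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(3)
n = 6
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)                    # symmetric positive definite test matrix
b = rng.standard_normal(n)

lu_piv = lu_factor(A)                          # factor once ...
x = lu_solve(lu_piv, b)                        # ... reuse for any right-hand side
A_inv = lu_solve(lu_piv, np.eye(n))            # inverse as the solution of A X = I

print(np.allclose(A @ x, b))                   # should print True
print(np.allclose(A @ A_inv, np.eye(n)))       # should print True

Factoring once and reusing the factors for many right-hand sides (here, the columns of the identity) is the main practical payoff of working with an explicit LU factorization.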
13

Chen, Jile, and Peimin Zhu. "An Alternate GPU-Accelerated Algorithm for Very Large Sparse LU Factorization." Mathematics 11, no. 14 (2023): 3149. http://dx.doi.org/10.3390/math11143149.

Full text
Abstract:
The LU factorization of very large sparse matrices requires a significant amount of computing resources, including memory and broadband communication. A hybrid MPI + OpenMP + CUDA algorithm named SuperLU3D can efficiently compute the LU factorization with GPU acceleration. However, this algorithm faces difficulties when dealing with very large sparse matrices with limited GPU resources. Factorizing very large matrices involves a vast amount of nonblocking communication between processes, often leading to a break in the SuperLU3D calculation due to the overflow of cluster communication buffers. In this paper, we present an improved GPU-accelerated algorithm named SuperLU3D_Alternate for the LU factorization of very large sparse matrices with fewer GPU resources. The basic idea is “divide and conquer”, which means dividing a very large matrix into multiple submatrices, performing LU factorization on each submatrix, and then assembling the factorized results of all submatrices into two complete matrices L and U. In detail, according to the number of available GPUs, a very large matrix is first divided into multiple submatrices using the elimination tree. Then, the LU factorization of each submatrix is alternately computed with limited GPU resources, and its intermediate LU factors from GPUs are saved to the host memory or hard disk. Finally, after finishing the LU factorization of all submatrices, these factorized submatrices are assembled into a complete lower triangular matrix L and a complete upper triangular matrix U, respectively. The SuperLU3D_Alternate algorithm is suitable for hybrid CPU/GPU cluster systems, especially for a subset of nodes without GPUs. To accommodate different hardware resources in various clusters, we designed the algorithm to run in the following three cases: sufficient memory for GPU nodes, insufficient memory for GPU nodes, and insufficient memory for the entire cluster. The results from LU factorization tests on different matrices in various cases show that the larger the matrix is, the more efficient this algorithm is under the same GPU memory consumption. In our numerical experiments, SuperLU3D_Alternate achieves speeds of up to 8× that of SuperLU3D (CPU only) and 2.5× that of SuperLU3D (CPU + GPU) on the hybrid cluster with six Tesla V100S GPUs. Furthermore, when the matrix is too big to be handled by SuperLU3D, SuperLU3D_Alternate can still utilize the cluster’s host memory or hard disk to solve it. By reducing the amount of data exchanged, so as not to exceed the buffer limit of the cluster’s MPI nonblocking communication, our algorithm enhances the stability of the program.
APA, Harvard, Vancouver, ISO, and other styles
14

Vigon, Vincent. "LU-Factorization Versus Wiener-Hopf Factorization for Markov Chains." Acta Applicandae Mathematicae 128, no. 1 (2013): 1–37. http://dx.doi.org/10.1007/s10440-013-9799-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Rahman, Abdel Radi Abdel Rahman Abdel Gadir Abdel, Shady Seed El Okuer, and Musa Adam Abdullah. "Inverting Symmetric Positive Definite Matrices using Divide and Conquer Mathematical Technique with LU Factorization." European Journal of Mathematics and Statistics 6, no. 1 (2025): 8–14. https://doi.org/10.24018/ejmath.2025.6.1.387.

Full text
Abstract:
We looked at the solution of a system of equations Ax = b whose symmetric positive definite coefficient matrix A is singular or nearly singular. Our technique, which is based on the Divide-and-Conquer strategy, combines the LU Factorization algorithm with the Divide-and-Conquer technique (D&C algorithm). The matrix was transformed into a product of the type LU using the LU Factorization, where L is a lower triangular matrix and U is an upper triangular matrix. MATLAB was used to implement the algorithm and simulate it as a user-subroutine. In order to reduce the round-off error, particularly for sensitive systems, the user-subroutine takes into account MATLAB characteristics. Numerical examples were given for a non-singular matrix and for an ill-conditioned matrix. The impact of round-off error was analyzed. We contrasted the results with those from earlier studies that employed LU factorization.
APA, Harvard, Vancouver, ISO, and other styles
16

Voon, Chen Huey, Tang Ker Shin, and Ng Wei Shean. "Chinese Character Recognition Using Non-negative Matrix Factorization." Jurnal Kejuruteraan 36, no. 2 (2024): 653–60. http://dx.doi.org/10.17576/jkukm-2024-36(2)-24.

Full text
Abstract:
Non-negative matrix factorization (NMF) was introduced by Paatero and Tapper in 1994 and it was a general way of reducing the dimension of the matrix with non-negative entries. Non-negative matrix factorization is very useful in many data analysis applications such as character recognition, text mining, and others. This paper aims to study the application in Chinese character recognition using non-negative matrix factorization. Python was used to carry out the LU factorization and non-negative matrix factorization of a Chinese character in Boolean Matrix. Preliminary analysis confirmed that the data size of and and are chosen for the NMF of the Boolean matrix. In this project, one hundred printed Chinese characters were selected, and all the Chinese characters can be categorized into ten categories according to the number of strokes , for . The Euclidean distance between the Boolean matrix of a Chinese character and the matrix after both LU factorization and NMF is calculated for further analysis. Paired t-test confirmed that the factorization of Chinese characters in the Boolean matrix using NMF is better than the LU factorization. Finally, ten handwritten Chinese characters were selected to test whether the program is able to identify the handwritten and the printed Chinese characters. Experimental results showed that 70% of the characters can be recognized via the least Euclidean distance obtained. NMF is suitable to be applied in Chinese character recognition since it can reduce the dimension of the image and the error between the original Boolean matrix and after NMF is less than 5%.
APA, Harvard, Vancouver, ISO, and other styles
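As a toy illustration of the NMF reconstruction-error measurement described in entry 16 (my own stand-in data and rank, using scikit-learn rather than the authors' Python code):

import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(4)
X = (rng.random((8, 8)) > 0.5).astype(float)   # stand-in Boolean "character" matrix

model = NMF(n_components=4, init="nndsvda", max_iter=1000, random_state=0)
W = model.fit_transform(X)                     # X is approximated by W @ H
H = model.components_

err = np.linalg.norm(X - W @ H)                # Euclidean (Frobenius) distance
print("reconstruction error:", err)
print("relative error:", err / np.linalg.norm(X))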
17

Shahzadeh Fazeli, Seyed Abolfazl, Azam Ghodratnama, Azam Sadeghian, and Seyed Mehdi Karbassi. "Incomplete LU Factorization on Projection Method." Cumhuriyet Science Journal 37, no. 3 (2016): 164. http://dx.doi.org/10.17776/csj.18615.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Chow, Edmond, and Aftab Patel. "Fine-Grained Parallel Incomplete LU Factorization." SIAM Journal on Scientific Computing 37, no. 2 (2015): C169–C193. http://dx.doi.org/10.1137/140968896.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Čiegis, Raimondas, and Vadimas Starikovičius. "A parallel algorithm of LU factorization." Lietuvos matematikos rinkinys, no. II (December 14, 1998): 384–89. https://doi.org/10.15388/lmd.1998.37942.

Full text
Abstract:
This paper discusses issues in the design of a parallel LU factorization algorithm for distributed memory parallel computers. A new cyclic distribution method is proposed for heterogeneous parallel computers, which include virtual parallel computers. The efficiency of the algorithm is investigated and results of computational experiments are given.
APA, Harvard, Vancouver, ISO, and other styles
20

Quintana-Ortí, Enrique S., and Robert A. Van De Geijn. "Updating an LU Factorization with Pivoting." ACM Transactions on Mathematical Software 35, no. 2 (2008): 1–16. http://dx.doi.org/10.1145/1377612.1377615.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

D’Azevedo, E., and J. C. Hill. "Parallel LU Factorization on GPU Cluster." Procedia Computer Science 9 (2012): 67–75. http://dx.doi.org/10.1016/j.procs.2012.04.008.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Jia, Yulu, Piotr Luszczek, and Jack Dongarra. "Multi-GPU Implementation of LU Factorization." Procedia Computer Science 9 (2012): 106–15. http://dx.doi.org/10.1016/j.procs.2012.04.012.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Ahac, Alan A., John J. Buoni, and D. D. Olesky. "Stable LU factorization of H-matrices." Linear Algebra and its Applications 99 (February 1988): 97–110. http://dx.doi.org/10.1016/0024-3795(88)90127-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Dongarra, Jack, Victor Eijkhout, and Piotr Łuszczek. "Recursive Approach in Sparse Matrix LU Factorization." Scientific Programming 9, no. 1 (2001): 51–60. http://dx.doi.org/10.1155/2001/569670.

Full text
Abstract:
This paper describes a recursive method for the LU factorization of sparse matrices. The recursive formulation of common linear algebra codes has been proven very successful in dense matrix computations. An extension of the recursive technique for sparse matrices is presented. Performance results given here show that the recursive approach may perform comparably to leading software packages for sparse matrix factorization in terms of execution time, memory usage, and error estimates of the solution.
APA, Harvard, Vancouver, ISO, and other styles
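The recursive idea in entry 24 can be sketched for the dense case as follows (no pivoting and no sparsity handling, so this is only a simplified illustration of the recursion, not the paper's sparse algorithm): factor the leading block, solve two triangular systems for the off-diagonal blocks, and recurse on the Schur complement.

import numpy as np
from scipy.linalg import solve_triangular

def recursive_lu(A, blocksize=2):
    """Return L, U with A = L @ U via recursive 2 x 2 block partitioning."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    if n <= blocksize:                         # base case: small unblocked LU
        L, U = np.eye(n), A.copy()
        for k in range(n - 1):
            L[k + 1:, k] = U[k + 1:, k] / U[k, k]
            U[k + 1:, k:] -= np.outer(L[k + 1:, k], U[k, k:])
        return L, U
    n1 = n // 2
    A11, A12 = A[:n1, :n1], A[:n1, n1:]
    A21, A22 = A[n1:, :n1], A[n1:, n1:]
    L11, U11 = recursive_lu(A11, blocksize)
    U12 = solve_triangular(L11, A12, lower=True)                  # L11 @ U12 = A12
    L21 = solve_triangular(U11, A21.T, trans='T', lower=False).T  # L21 @ U11 = A21
    L22, U22 = recursive_lu(A22 - L21 @ U12, blocksize)           # Schur complement
    L = np.block([[L11, np.zeros((n1, n - n1))], [L21, L22]])
    U = np.block([[U11, U12], [np.zeros((n - n1, n1)), U22]])
    return L, U

A = np.random.default_rng(5).standard_normal((7, 7)) + 10 * np.eye(7)  # keep pivots away from zero
L, U = recursive_lu(A)
print(np.allclose(A, L @ U))                   # should print True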
25

Almeida, César Guilherme de, and Santos Alberto Enriquez Remigio. "Sobre matrizes pentadiagonais não estritamente diagonais dominantes." REMAT: Revista Eletrônica da Matemática 10, no. 2 (2024): e3007. http://dx.doi.org/10.35819/remat2024v10i2id7012.

Full text
Abstract:
Based on Crout's method, we will present, in this work, new non-singularity criteria and sufficient conditions for the existence of the LU factorization for non-strictly diagonally dominant pentadiagonal matrices. Crout's method is a recursive process of n stages that obtains the factorization A = LU of a pentadiagonal matrix of order n. In this recursive process of obtaining both the lower triangular matrix L and the upper triangular matrix U, the parameters α_i, 1 ≤ i ≤ n, must be non-zero to ensure that det(A) ≠ 0 and A = LU. Crout's recursive method is replaced by the analysis of sufficient conditions that can be verified simultaneously with low computational cost.
APA, Harvard, Vancouver, ISO, and other styles
26

Vishwas, B. C., Abhishek Gadia, and Mainak Chaudhuri. "Implementing a Parallel Matrix Factorization Library on the Cell Broadband Engine." Scientific Programming 17, no. 1-2 (2009): 3–29. http://dx.doi.org/10.1155/2009/710321.

Full text
Abstract:
Matrix factorization (or often called decomposition) is a frequently used kernel in a large number of applications ranging from linear solvers to data clustering and machine learning. The central contribution of this paper is a thorough performance study of four popular matrix factorization techniques, namely, LU, Cholesky, QR and SVD on the STI Cell broadband engine. The paper explores algorithmic as well as implementation challenges related to the Cell chip-multiprocessor and explains how we achieve near-linear speedup on most of the factorization techniques for a range of matrix sizes. For each of the factorization routines, we identify the bottleneck kernels and explain how we have attempted to resolve the bottleneck and to what extent we have been successful. Our implementations, for the largest data sets that we use, running on a two-node 3.2 GHz Cell BladeCenter (exercising a total of sixteen SPEs), on average, deliver 203.9, 284.6, 81.5, 243.9 and 54.0 GFLOPS for dense LU, dense Cholesky, sparse Cholesky, QR and SVD, respectively. The implementations achieve speedup of 11.2, 12.8, 10.6, 13.0 and 6.2, respectively for dense LU, dense Cholesky, sparse Cholesky, QR and SVD, when running on sixteen SPEs. We discuss the interesting interactions that result from parallelization of the factorization routines on a two-node non-uniform memory access (NUMA) Cell Blade cluster.
APA, Harvard, Vancouver, ISO, and other styles
27

Hook, James, and Françoise Tisseur. "Incomplete LU Preconditioner Based on Max-Plus Approximation of LU Factorization." SIAM Journal on Matrix Analysis and Applications 38, no. 4 (2017): 1160–89. http://dx.doi.org/10.1137/16m1094579.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Andrews, Kevin T., Philip W. Smith, and Joseph D. Ward. "LU-Factorization of Operators on l₁." Proceedings of the American Mathematical Society 98, no. 2 (1986): 247. http://dx.doi.org/10.2307/2045692.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Neuman, C. P. "On the LU factorization of Hessenberg matrices." IEEE Transactions on Systems, Man, and Cybernetics 19, no. 1 (1989): 139–40. http://dx.doi.org/10.1109/21.24544.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Mohamed, A. Gaber, Geoffrey C. Fox, and Gregor von Laszewski. "Blocked LU Factorization on a Multiprocessor Computer." Computer-Aided Civil and Infrastructure Engineering 8, no. 1 (1993): 45–56. http://dx.doi.org/10.1111/j.1467-8667.1993.tb00191.x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Bekakos, M. P., and D. J. Evans. "Systolic LU-factorization “dequeues” for tridiagonal systems." International Journal of Computer Mathematics 25, no. 3-4 (1988): 299–320. http://dx.doi.org/10.1080/00207168808803675.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Hoffmann, Walter. "The Gauss-Huard algorithm and LU factorization." Linear Algebra and its Applications 275-276 (May 1998): 281–86. http://dx.doi.org/10.1016/s0024-3795(97)10021-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Shen, Yun-Qiu, and Tjalling J. Ypma. "Solving Separable Nonlinear Equations Using LU Factorization." ISRN Mathematical Analysis 2013 (June 24, 2013): 1–5. http://dx.doi.org/10.1155/2013/258072.

Full text
Abstract:
Separable nonlinear equations have the form where the matrix and the vector are continuously differentiable functions of and . We assume that and has full rank. We present a numerical method to compute the solution for fully determined systems () and compatible overdetermined systems (). Our method reduces the original system to a smaller system of equations in alone. The iterative process to solve the smaller system only requires the LU factorization of one matrix per step, and the convergence is quadratic. Once has been obtained, is computed by direct solution of a linear system. Details of the numerical implementation are provided and several examples are presented.
APA, Harvard, Vancouver, ISO, and other styles
34

Grigori, Laura, James W. Demmel, and Hua Xiang. "CALU: A Communication Optimal LU Factorization Algorithm." SIAM Journal on Matrix Analysis and Applications 32, no. 4 (2011): 1317–50. http://dx.doi.org/10.1137/100788926.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Chang, Xiao-Wen, and Christopher C. Paige. "On the sensitivity of the LU factorization." BIT Numerical Mathematics 38, no. 3 (1998): 486–501. http://dx.doi.org/10.1007/bf02510255.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Melkemi, Lamine, and Faouzia Rajeh. "Block LU-factorization of confluent Vandermonde matrices." Applied Mathematics Letters 23, no. 7 (2010): 747–50. http://dx.doi.org/10.1016/j.aml.2010.03.001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Li, Shengguo, Ming Gu, and Lizhi Cheng. "Fast structured LU factorization for nonsymmetric matrices." Numerische Mathematik 127, no. 1 (2013): 35–55. http://dx.doi.org/10.1007/s00211-013-0582-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Saad, Yousef. "ILUT: A dual threshold incomplete LU factorization." Numerical Linear Algebra with Applications 1, no. 4 (1994): 387–402. http://dx.doi.org/10.1002/nla.1680010405.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Malard, J., and C. C. Paige. "Data Replication in Dense Matrix Factorization." Parallel Processing Letters 03, no. 04 (1993): 419–30. http://dx.doi.org/10.1142/s0129626493000459.

Full text
Abstract:
Gossiping is proposed as the preferred communication primitive for replicating pivot data in dense matrix factorization on message-passing multicomputers. Performance gains are demonstrated on a hypercube for LU factorization algorithms based on gossiping as opposed to broadcasting. This finding has consequences for the design of numerical software libraries.
APA, Harvard, Vancouver, ISO, and other styles
40

Gonzalez, Patricia, Jose C. Cabaleiro, and Tomas F. Pena. "Parallel Incomplete LU Factorization as a Preconditioner for Krylov Subspace Methods." Parallel Processing Letters 09, no. 04 (1999): 467–74. http://dx.doi.org/10.1142/s0129626499000438.

Full text
Abstract:
In this paper we describe a new method for the ILU(0) factorization of sparse systems in distributed memory multiprocessor architectures. This method uses a symbolic reordering technique, so the final system can be grouped in blocks where the rows are independent and the factorization of these entries can be carried out in parallel. The parallel ILU(0) factorization has been tested on the Cray T3E multicomputer using the MPI communication library. The performance was analysed using matrices from the Harwell–Boeing collection.
APA, Harvard, Vancouver, ISO, and other styles
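A serial illustration of the preconditioning setup in entry 40, using SciPy's threshold-based incomplete LU (spilu) as a stand-in for the paper's parallel ILU(0) and a 1-D Poisson-like matrix as a stand-in for a Harwell–Boeing matrix:

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spilu, gmres, LinearOperator

n = 200
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")  # 1-D Poisson-like matrix
b = np.ones(n)

ilu = spilu(A, drop_tol=1e-4, fill_factor=10)          # incomplete LU factors of A
M = LinearOperator(A.shape, matvec=ilu.solve)          # action of the preconditioner

x, info = gmres(A, b, M=M)                             # preconditioned Krylov solve
print("converged:", info == 0)
print("residual norm:", np.linalg.norm(b - A @ x))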
41

Finta, Béla. "Existence and Uniqueness of the Infinite Matrix Factorization LU." Acta Marisiensis. Seria Technologica 16, no. 1 (2019): 31–33. http://dx.doi.org/10.2478/amset-2019-0006.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Ahmed, Delbrin H. "Parallelize and Analysis LU Factorization and Quadrant Interlocking Factorization Algorithm in OpenMP." Journal of Duhok University 20, no. 1 (2017): 46–53. http://dx.doi.org/10.26682/sjuod.2018.20.1.5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Pryce, John D., and Emmanuel M. Tadjouddine. "Fast Automatic Differentiation Jacobians by Compact LU Factorization." SIAM Journal on Scientific Computing 30, no. 4 (2008): 1659–77. http://dx.doi.org/10.1137/050644847.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Putcha, Mohan S. "Big cells and LU factorization in reductive monoids." Proceedings of the American Mathematical Society 130, no. 12 (2002): 3507–13. http://dx.doi.org/10.1090/s0002-9939-02-06515-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Geist, George A., and Charles H. Romine. "LU Factorization Algorithms on Distributed-Memory Multiprocessor Architectures." SIAM Journal on Scientific and Statistical Computing 9, no. 4 (1988): 639–49. http://dx.doi.org/10.1137/0909042.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Wu, Rongteng, and Xiaohong Xie. "Two-Stage Column Block Parallel LU Factorization Algorithm." IEEE Access 8 (2020): 2645–55. http://dx.doi.org/10.1109/access.2019.2962355.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Caruso, Xavier. "Random matrices over a DVR and LU factorization." Journal of Symbolic Computation 71 (November 2015): 98–123. http://dx.doi.org/10.1016/j.jsc.2014.12.001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Coleman, Evan, and Masha Sosonkina. "Self-stabilizing fine-grained parallel incomplete LU factorization." Sustainable Computing: Informatics and Systems 19 (September 2018): 291–304. http://dx.doi.org/10.1016/j.suscom.2018.01.003.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Bueno, M. I., and C. R. Johnson. "Minimum deviation, quasi-LU factorization of nonsingular matrices." Linear Algebra and its Applications 427, no. 1 (2007): 99–118. http://dx.doi.org/10.1016/j.laa.2007.06.023.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Johnson, Charles R., D. Dale Olesky, and P. van den Driessche. "Sign determinancy in LU factorization of P-matrices." Linear Algebra and its Applications 217 (March 1995): 155–66. http://dx.doi.org/10.1016/0024-3795(94)00061-h.

Full text
APA, Harvard, Vancouver, ISO, and other styles