Academic literature on the topic 'LU factorization'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'LU factorization.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "LU factorization"

1

Ng, Wei Shean, and Wei Wen Tan. "Some properties of various types of matrix factorization." ITM Web of Conferences 36 (2021): 03003. http://dx.doi.org/10.1051/itmconf/20213603003.

Full text
Abstract:
Matrix factorizations or matrix decompositions are methods that represent a matrix as a product of two or more matrices. There are various types of matrix factorizations, such as LU factorization, Cholesky factorization, singular value decomposition, etc. Matrix factorization is widely used in pattern recognition, image denoising, data clustering, etc. Motivated by these applications, some properties and applications of various types of matrix factorizations are studied. One of the purposes of matrix factorization is to ease computation. Thus, comparisons in terms of computation time of various matrix factorizations in different areas are carried out.
APA, Harvard, Vancouver, ISO, and other styles
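The side-by-side comparison this abstract describes can be sketched with SciPy's stock factorization routines (an illustrative example only, not code from the paper; the test matrix is invented):

```python
import numpy as np
from scipy.linalg import lu, cholesky, svd

# Factor the same symmetric positive definite matrix three ways and
# confirm that each product reconstructs it.
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)        # symmetric positive definite

P, L, U = lu(A)                       # A = P @ L @ U
C = cholesky(A, lower=True)           # A = C @ C.T
W, s, Vt = svd(A)                     # A = W @ diag(s) @ Vt

for name, R in [("LU", P @ L @ U),
                ("Cholesky", C @ C.T),
                ("SVD", W @ np.diag(s) @ Vt)]:
    assert np.allclose(R, A), name
```

Timing each call (e.g. with `timeit`) on matrices of growing order reproduces the kind of computation-time comparison the paper reports: Cholesky is cheapest when it applies, LU next, SVD by far the most expensive.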
2

Grünbaum, F. Alberto, and Manuel D. de la Iglesia. "Stochastic LU factorizations, Darboux transformations and urn models." Journal of Applied Probability 55, no. 3 (2018): 862–86. http://dx.doi.org/10.1017/jpr.2018.55.

Full text
Abstract:
We consider upper‒lower (UL) (and lower‒upper (LU)) factorizations of the one-step transition probability matrix of a random walk with the state space of nonnegative integers, with the condition that both upper and lower triangular matrices in the factorization are also stochastic matrices. We provide conditions on the free parameter of the UL factorization in terms of certain continued fractions such that this stochastic factorization is possible. By inverting the order of the factors (also known as a Darboux transformation) we obtain a new family of random walks where it is possible to state the spectral measures in terms of a Geronimus transformation. We repeat this for the LU factorization but without a free parameter. Finally, we apply our results in two examples; the random walk with constant transition probabilities, and the random walk generated by the Jacobi orthogonal polynomials. In both situations we obtain urn models associated with all the random walks in question.
APA, Harvard, Vancouver, ISO, and other styles
3

Ogita, Takeshi. "Accurate Matrix Factorization: Inverse LU and Inverse QR Factorizations." SIAM Journal on Matrix Analysis and Applications 31, no. 5 (2010): 2477–97. http://dx.doi.org/10.1137/090754376.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Qian, Guoyou, and Jingya Lu. "LU-FACTORIZATIONS OF SYMMETRIC MATRICES WITH APPLICATIONS." Asian-European Journal of Mathematics 03, no. 01 (2010): 133–43. http://dx.doi.org/10.1142/s179355711000009x.

Full text
Abstract:
In this paper, we describe explicitly the LU-factorization of a symmetric matrix of order n with n ≤ 7 when each of its ordered principal minors is nonzero. By using this result and some other related results on non-singularity previously given by Smith, Beslin, Hong, Lee and Ligh in the literature, we establish several theorems concerning LU-factorizations of power GCD matrices, power LCM matrices, reciprocal power GCD matrices and reciprocal power LCM matrices.
APA, Harvard, Vancouver, ISO, and other styles
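The existence condition the abstract relies on, that elimination without pivoting succeeds exactly when the leading principal minors are nonzero, can be sketched with a Doolittle-style factorization (an illustrative example; the matrix is invented, not one of the paper's GCD/LCM matrices):

```python
import numpy as np

def lu_nopivot(A):
    """Doolittle LU without pivoting; valid when every leading
    principal minor of A is nonzero (illustrative sketch)."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    L, U = np.eye(n), np.zeros((n, n))
    for k in range(n):
        # row k of U, then the multipliers below the pivot
        U[k, k:] = A[k, k:] - L[k, :k] @ U[:k, k:]
        L[k+1:, k] = (A[k+1:, k] - L[k+1:, :k] @ U[:k, k]) / U[k, k]
    return L, U

# Symmetric matrix whose leading principal minors are all nonzero:
A = np.array([[4., 2., 2.],
              [2., 5., 3.],
              [2., 3., 6.]])
L, U = lu_nopivot(A)
assert np.allclose(L @ U, A)
for k in range(1, 4):   # the minors that guarantee existence
    assert abs(np.linalg.det(A[:k, :k])) > 1e-12
```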
5

Babarinsa, Olayiwola, Azfi Zaidi Mohammad Sofi, Mohd Asrul Hery Ibrahim, and Hailiz Kamarulhaili. "Optimized Cramer’s Rule in WZ Factorization and Applications." European Journal of Pure and Applied Mathematics 13, no. 4 (2020): 1035–54. http://dx.doi.org/10.29020/nybg.ejpam.v13i4.3818.

Full text
Abstract:
In this paper, WZ factorization is optimized with a proposed Cramer's rule and compared with the classical Cramer's rule to solve the linear systems of the factorization technique. The matrix norms and performance time of WZ factorization, together with LU factorization, are analyzed using sparse matrices in MATLAB on AMD and Intel processors, to deduce that the optimized Cramer's rule in the factorization algorithm yields more accurate results than LU factorization and conventional WZ factorization. In all, the matrix group and Schur complement for every Z-system (2×2 block triangular matrices from the Z-matrix) are established.
APA, Harvard, Vancouver, ISO, and other styles
6

WU, CHI-YE, and TING-ZHU HUANG. "PERTURBATION THEORY FOR THE LU AND QR FACTORIZATIONS." ANZIAM Journal 49, no. 4 (2008): 451–61. http://dx.doi.org/10.1017/s1446181108000138.

Full text
Abstract:
In this paper we derive perturbation theorems for the LU and QR factors. Moreover, bounds for κL(A)/κL′(A) and κU(A)/κU′(A) are given for the LU factorization of a nonsingular matrix. By applying pivoting strategies in the LU factorization, estimates for κL(PAQ)/κL′(PAQ) and κU(PAQ)/κU′(PAQ) are also obtained.
APA, Harvard, Vancouver, ISO, and other styles
7

Iakymchuk, Roman, Stef Graillat, David Defour, and Enrique S. Quintana-Ortí. "Hierarchical approach for deriving a reproducible unblocked LU factorization." International Journal of High Performance Computing Applications 33, no. 5 (2019): 791–803. http://dx.doi.org/10.1177/1094342019832968.

Full text
Abstract:
We propose a reproducible variant of the unblocked LU factorization for graphics processor units (GPUs). For this purpose, we build upon Level-1/2 BLAS kernels that deliver correctly-rounded and reproducible results for the dot (inner) product, vector scaling, and the matrix-vector product. In addition, we draw a strategy to enhance the accuracy of the triangular solve via iterative refinement. Following a bottom-up approach, we finally construct a reproducible unblocked implementation of the LU factorization for GPUs, which accommodates partial pivoting for stability and can be eventually integrated in a high performance and stable algorithm for the (blocked) LU factorization.
APA, Harvard, Vancouver, ISO, and other styles
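The unblocked, partially pivoted factorization that the paper makes reproducible can be sketched in plain NumPy; this omits the correctly-rounded BLAS kernels and iterative refinement that are the paper's actual contribution (the example matrix is invented):

```python
import numpy as np

def lu_partial_pivot(A):
    """Unblocked right-looking LU with partial pivoting (sketch).
    Returns P, L, U with P @ A = L @ U."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    perm = np.arange(n)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))    # pivot row for stability
        if p != k:
            A[[k, p]] = A[[p, k]]
            perm[[k, p]] = perm[[p, k]]
        A[k+1:, k] /= A[k, k]                   # column scaling (multipliers)
        A[k+1:, k+1:] -= np.outer(A[k+1:, k], A[k, k+1:])  # rank-1 update
    L = np.tril(A, -1) + np.eye(n)
    U = np.triu(A)
    P = np.eye(n)[perm]
    return P, L, U

B = np.array([[0., 2., 1.],
              [1., 1., 1.],
              [2., 1., 0.]])
P, L, U = lu_partial_pivot(B)
assert np.allclose(P @ B, L @ U)
```

The rank-1 trailing update and the column scaling are exactly the Level-1/2 BLAS operations (dot product, vector scaling, matrix-vector product) whose reproducible variants the paper builds on.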
8

Almeida, C. G., and S. A. E. Remigio. "Sufficient Conditions for Existence of the LU Factorization of Toeplitz Symmetric Tridiagonal Matrices." Trends in Computational and Applied Mathematics 24, no. 1 (2023): 177–90. http://dx.doi.org/10.5540/tcam.2022.024.01.00177.

Full text
Abstract:
The characterization of inverses of symmetric tridiagonal and block tridiagonal matrices, and the development of algorithms for finding the inverse of any general non-singular tridiagonal matrix, are subjects that have been studied by many authors. The results of this research usually depend on the existence of the LU factorization of a non-singular matrix A, such that A = LU. Moreover, the conditions that ensure the nonsingularity of A and its LU factorization are not readily obtained. We therefore present in this work two extremely simple sufficient conditions for the existence of the LU factorization of a Toeplitz symmetric tridiagonal matrix A. We take into consideration the roots of the modified Chebyshev polynomial, and we also present an analysis based on the parameters of Crout's method.
APA, Harvard, Vancouver, ISO, and other styles
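For a Toeplitz symmetric tridiagonal matrix with diagonal a and off-diagonals b, the pivots of Crout-style elimination satisfy the recurrence d1 = a, dk = a − b²/d(k−1), and the LU factorization exists when no dk vanishes. A minimal sketch of this standard recurrence (an illustrative reading, not the paper's exact sufficient conditions):

```python
import numpy as np

def toeplitz_tridiag_lu(a, b, n):
    """LU of the n x n Toeplitz symmetric tridiagonal matrix with
    diagonal a and off-diagonals b, via the pivot recurrence
    d1 = a, dk = a - b**2 / d_{k-1}. Returns None if a pivot vanishes."""
    d = np.empty(n)
    d[0] = a
    for k in range(1, n):
        if d[k-1] == 0:
            return None
        d[k] = a - b * b / d[k-1]
    L = np.eye(n) + np.diag(b / d[:-1], -1)     # unit lower bidiagonal
    U = np.diag(d) + np.diag(np.full(n - 1, b), 1)  # upper bidiagonal
    return L, U

a, b, n = 4.0, 1.0, 6
A = (np.diag(np.full(n, a)) + np.diag(np.full(n - 1, b), 1)
     + np.diag(np.full(n - 1, b), -1))
L, U = toeplitz_tridiag_lu(a, b, n)
assert np.allclose(L @ U, A)
```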
9

AL-AYYOUB, ABDEL-ELAH, and KHALED DAY. "FAST LU FACTORIZATION ON THE HYPERSTAR INTERCONNECTION NETWORK." Journal of Interconnection Networks 03, no. 03n04 (2002): 231–43. http://dx.doi.org/10.1142/s0219265902000641.

Full text
Abstract:
The hyperstar network has been recently proposed as an attractive product network that outperforms many popular topologies in various respects. In this paper we explore additional capabilities for the hyperstar network through an efficient parallel algorithm for solving the LU factorization problem on this network. The proposed parallel algorithm uses O(n) communication time on a hyperstar formed by the cross-product of two n-star graphs. This communication time improves the best known result for the hypercube-based LU factorization by a factor of log(n), and improves the best known result for the mesh-based LU factorization by a factor of (n - 1)!.
APA, Harvard, Vancouver, ISO, and other styles
10

Amestoy, Patrick R., and Chiara Puglisi. "An Unsymmetrized Multifrontal LU Factorization." SIAM Journal on Matrix Analysis and Applications 24, no. 2 (2002): 553–69. http://dx.doi.org/10.1137/s0895479800375370.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "LU factorization"

1

Syed, Akber. "A Hardware Interpreter for Sparse Matrix LU Factorization." University of Cincinnati / OhioLINK, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1024934521.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Schenk, Olaf. "Scalable parallel sparse LU factorization methods on shared memory multiprocessors /." Zürich, 2000. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=13515.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

THIYAGARAJAN, SANJEEV. "REDUCING MEMORY SPACE FOR COMPLETELY UNROLLED LU FACTORIZATION OF SPARSE MATRICES." University of Cincinnati / OhioLINK, 2001. http://rave.ohiolink.edu/etdc/view?acc_num=ucin990556295.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Somers, Gregory W. "Acceleration of Block-Aware Matrix Factorization on Heterogeneous Platforms." Thesis, Université d'Ottawa / University of Ottawa, 2016. http://hdl.handle.net/10393/35128.

Full text
Abstract:
Block-structured matrices arise in several contexts in circuit simulation problems. These matrices typically inherit the pattern of sparsity from the circuit connectivity. However, they are also characterized by dense spots or blocks. Direct factorization of those matrices has emerged as an attractive approach if the host memory is sufficiently large to store the block-structured matrix. The approach proposed in this thesis aims to accelerate the direct factorization of general block-structured matrices by leveraging the power of multiple OpenCL accelerators such as Graphical Processing Units (GPUs). The proposed approach utilizes the notion of a Directed Acyclic Graph representing the matrix in order to schedule its factorization on multiple accelerators. This thesis also describes memory management techniques that enable handling large matrices while minimizing the amount of memory transfer over the PCIe bus between the host CPU and the attached devices. The results demonstrate that by using two GPUs the proposed approach can achieve a nearly optimal speedup when compared to a single GPU platform.
APA, Harvard, Vancouver, ISO, and other styles
5

Netzer, Gilbert. "Efficient LU Factorization for Texas Instruments Keystone Architecture Digital Signal Processors." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-170445.

Full text
Abstract:
The energy consumption of large-scale high-performance computing (HPC) systems has become one of the foremost concerns of both data-center operators and computer manufacturers. This has renewed interest in alternative computer architectures that could offer substantially better energy efficiency. Yet the well-optimized implementations of typical HPC benchmarks needed to evaluate the potential of these architectures, which are novel to the HPC industry, are often not available. The LU factorization benchmark implementation presented in this work aims to provide such a high-quality tool for the HPC industry-standard high-performance LINPACK benchmark (HPL) on the eight-core Texas Instruments TMS320C6678 digital signal processor (DSP). The presented implementation performs the LU factorization at up to 30.9 GF/s at a 1.25 GHz core clock frequency, using all eight DSP cores of the System-on-Chip (SoC). This is 77% of the attainable peak double-precision floating-point performance of the DSP, a level of efficiency comparable to that expected on traditional x86-based processor architectures. A detailed performance analysis shows that this is largely due to the optimized implementation of the embedded generalized matrix-matrix multiplication (GEMM). For this operation, the on-chip direct memory access (DMA) engines were used to transfer the necessary data from the external DDR3 memory to the core-private and shared scratchpad memories, which allowed the data transfers to be overlapped with computations on the DSP cores. The computations were in turn optimized using software pipelining techniques and were partly implemented in assembly language. With these optimizations, the performance of the matrix multiplication reached up to 95% of the attainable peak. A detailed description of these two key optimization techniques and their application to the LU factorization is included.
Using a specially instrumented Advantech TMDXEVM6678L evaluation module, described in detail in related work, the SoC's energy efficiency was measured at up to 2.92 GF/J while executing the presented benchmark. Results from the verification of the benchmark execution using standard HPL correctness checks, and an uncertainty analysis of the experimentally gathered data, are also presented.
APA, Harvard, Vancouver, ISO, and other styles
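The blocked, GEMM-dominated structure such HPL-style implementations exploit can be sketched in NumPy, here without the partial pivoting that HPL and the thesis implementation use (an illustrative sketch; the test matrix is invented and made SPD so the unpivoted factorization exists):

```python
import numpy as np
from scipy.linalg import solve_triangular

def blocked_lu_nopivot(A, nb=2):
    """Right-looking blocked LU without pivoting. The trailing-submatrix
    update is a GEMM, which dominates the flop count for large n."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    for k in range(0, n, nb):
        e = min(k + nb, n)
        # unblocked LU of the diagonal block
        for j in range(k, e):
            A[j+1:e, j] /= A[j, j]
            A[j+1:e, j+1:e] -= np.outer(A[j+1:e, j], A[j, j+1:e])
        if e < n:
            L11 = np.tril(A[k:e, k:e], -1) + np.eye(e - k)
            U11 = np.triu(A[k:e, k:e])
            # triangular solves: U12 = L11^-1 A12, L21 = A21 U11^-1
            A[k:e, e:] = solve_triangular(L11, A[k:e, e:],
                                          lower=True, unit_diagonal=True)
            A[e:, k:e] = solve_triangular(U11.T, A[e:, k:e].T, lower=True).T
            # GEMM: rank-nb update of the trailing submatrix
            A[e:, e:] -= A[e:, k:e] @ A[k:e, e:]
    return np.tril(A, -1) + np.eye(n), np.triu(A)

rng = np.random.default_rng(1)
M = rng.standard_normal((6, 6))
A = M @ M.T + 6 * np.eye(6)       # SPD, so unpivoted LU exists
L, U = blocked_lu_nopivot(A, nb=2)
assert np.allclose(L @ U, A)
```

On real hardware the `A[e:, e:] -= A[e:, k:e] @ A[k:e, e:]` line is where an optimized GEMM, DMA double-buffering, and software pipelining pay off, as the thesis abstract describes.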
6

Cantane, Daniela Renata. "Contribuição da atualização da decomposição LU no metodo Simplex." [s.n.], 2009. http://repositorio.unicamp.br/jspui/handle/REPOSIP/260212.

Full text
Abstract:
Advisors: Aurelio Ribeiro Leite de Oliveira, Christiano Lyra Filho. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação. Finding efficient solutions of linear systems is fundamental in linear programming, and the first method to succeed on this class of problems was the Simplex method. With the objective of developing efficient alternatives for its implementation, techniques for updating the LU factorization of the simplex basis are developed in this thesis to improve the solution of the linear systems arising in the Simplex method, using a static reordering of the matrix columns. A simulation of the Simplex method is implemented, carrying out the changes of basis obtained from MINOS and verifying their sparsity. Only the factored columns actually modified by the change of basis are processed, to obtain an efficient LU factorization update. The matrix columns are reordered according to three strategies: minimum degree, block triangular form, and the Björck strategy. Sparse factorizations are thus obtained for any basis without computational effort to determine the column order, since the reordering of the matrix is static and the basis columns follow this ordering. The block triangular form achieved the best results for the larger problems tested, compared with the minimum degree method and the Björck strategy. Computational results for Netlib problems show the robustness and good computational performance of this approach, since no periodic refactorizations of the basis are needed, as in traditional updating methods. The proposed method reduced the number of nonzero entries of the basis with respect to MINOS. This approach was applied to cutting stock problems, and the proposed LU factorization update reduced the computational time for solving these problems with respect to GLPK.
APA, Harvard, Vancouver, ISO, and other styles
7

Herrmann, Julien. "Memory-aware Algorithms and Scheduling Techniques for Matrix Computattions." Thesis, Lyon, École normale supérieure, 2015. http://www.theses.fr/2015ENSL1043/document.

Full text
Abstract:
Throughout this thesis, we have designed memory-aware algorithms and scheduling techniques suited for modern memory architectures, with special interest in improving the performance of matrix computations on multiple levels. At a high level, we have introduced new numerical algorithms for solving linear systems on large distributed platforms, including the possibility of intelligently alternating steps of LU factorization (faster) and QR factorization (numerically more stable but more than twice as costly) when solving a dense linear system. Most of the time, these linear solvers rely on runtime systems to handle resource allocation and data management. We also focused on improving the dynamic schedulers embedded in these runtime systems by adding static information to their decision process, computed over the whole task graph of the Cholesky factorization while accounting for the heterogeneity of the architecture, and we proposed new memory-aware dynamic heuristics to schedule workflows that could be implemented in such runtime systems. Altogether, we have dealt with multiple state-of-the-art factorization algorithms used to solve linear systems, such as the LU, QR and Cholesky factorizations. We targeted different platforms ranging from multicore processors to distributed-memory clusters, and worked with several reference runtime systems tailored for these architectures, such as PaRSEC and StarPU. On the theoretical side, we took special care to model convoluted hierarchical memory architectures, classified the problems that arise when dealing with these storage platforms, and designed many efficient polynomial-time heuristics for general problems that had previously been shown NP-complete. Finally, we designed optimal algorithms for scheduling an automatic-differentiation graph on a platform with two types of memory: one free but limited, the other costly but unlimited.
APA, Harvard, Vancouver, ISO, and other styles
8

Assis, Carmencita Ferreira Silva. "Sistemas lineares: métodos de eliminação de Gauss e fatoração LU." Universidade Federal de Goiás, 2014. http://repositorio.bc.ufg.br/tede/handle/tede/4490.

Full text
Abstract:
This work aims to present techniques for solving systems of linear equations in their traditional formulation, exploring the references commonly used in courses on linear algebra and numerical computation and focusing on the direct methods of Gauss elimination and LU factorization. Problems established in the literature are solved in order to illustrate the operation and application of such methods in real problems, thus highlighting the possibility of introducing them in high school. The contents were treated and presented so as to exemplify the diversity of areas involving linear systems, such as engineering, economics and biology, showing the gains that can be achieved by students if they have contact with these methods as early as possible. At the end, we suggest the use of computational resources in mathematics classes, since reducing the time spent on algebraic manipulation will allow the teacher to deepen the concepts and to address larger systems that broaden the resolution perspective, besides motivating the student in the learning process.
APA, Harvard, Vancouver, ISO, and other styles
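The contrast such course material draws, solving a system by elimination versus factoring once and reusing the factors for new right-hand sides, can be sketched with SciPy (a classroom-scale example; the system is invented):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# 2x + y - z = 8; -3x - y + 2z = -11; -2x + y + 2z = -3
A = np.array([[2., 1., -1.],
              [-3., -1., 2.],
              [-2., 1., 2.]])
b = np.array([8., -11., -3.])

x_ge = np.linalg.solve(A, b)     # one-shot, elimination-based solve
lu, piv = lu_factor(A)           # factor once ...
x_lu = lu_solve((lu, piv), b)    # ... then solve cheaply per right-hand side

assert np.allclose(x_ge, x_lu)
assert np.allclose(A @ x_lu, b)
```

The factorization pays off when the same coefficient matrix must be solved against many right-hand sides: the O(n³) work happens once, each subsequent solve costs only O(n²).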
9

Pathanjali, Nandini. "Pipelined IEEE-754 Double Precision Floating Point Arithmetic Operators on Virtex FPGA’s." University of Cincinnati / OhioLINK, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1017085297.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Lee, Eun-Joo. "Accurate and Robust Preconditioning Techniques for Solving General Sparse Linear Systems." UKnowledge, 2008. http://uknowledge.uky.edu/gradschool_diss/650.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "LU factorization"

1

Demmel, James W. Block LU factorization. Research Institute for Advanced Computer Science, NASA Ames Research Center, 1992.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Obayashi, Shigeru. Navier-Stokes simulation of wind-tunnel flow using LU-ADI factorization algorithm. National Aeronautics and Space Administration, Ames Research Center, 1988.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

D, Simon Horst, Tang Wei-Pai, and Research Institute for Advanced Computer Science (U.S.), eds. Spectral ordering techniques for incomplete LU preconditioners for CG methods. Research Institute for Advanced Computer Science, NASA Ames Research Center, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Zürich, Eidgenössische Technische Hochschule, ed. Scalable parallel sparse LU factorization methods on shared memory multiprocessors. 2000.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Scalable parallel sparse LU factorization methods on shared memory multiprocessors. Hartung-Gorre, 2000.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

National Aeronautics and Space Administration (NASA) Staff. Navier-Stokes Simulation of Wind-Tunnel Flow Using Lu-Adi Factorization Algorithm. Independently Published, 2018.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Hadfield, Steven Michael. On the LU factorization of sequences of identically structured sparse matrices within a distributed memory environment. 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "LU factorization"

1

Dongarra, Jack, Piotr Luszczek, Paul Feautrier, et al. "LU Factorization." In Encyclopedia of Parallel Computing. Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-09766-4_2029.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Scott, Jennifer, and Miroslav Tůma. "Sparse LU Factorizations." In Nečas Center Series. Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-25820-6_6.

Full text
Abstract:
This chapter considers the LU factorization of a general nonsymmetric nonsingular sparse matrix A. In practice, numerical pivoting for stability and/or ordering of A to limit fill-in in the factors is often needed, and the computed factorization is then of a permuted matrix PAQ. Pivoting is discussed in Chapter 7 and ordering algorithms in Chapter 8.
APA, Harvard, Vancouver, ISO, and other styles
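The permuted factorization of PAQ that the chapter describes is what sparse direct solvers such as SuperLU compute, choosing a column ordering to limit fill-in and pivoting rows for stability. A minimal sketch via SciPy's `splu` wrapper (the matrix is invented):

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

# A small sparse tridiagonal system; splu factors a row/column-permuted
# version of it and exposes the permutations and triangular factors.
A = csc_matrix(np.array([[4., 1., 0., 0.],
                         [1., 4., 1., 0.],
                         [0., 1., 4., 1.],
                         [0., 0., 1., 4.]]))
fac = splu(A)                 # sparse LU of a permuted matrix
x = fac.solve(np.ones(4))     # triangular solves with the factors
assert np.allclose(A @ x, np.ones(4))
```

`fac.perm_r` and `fac.perm_c` give the row and column permutations, and `fac.L`/`fac.U` the sparse triangular factors, mirroring the P and Q of the chapter's PAQ formulation.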
3

Tinetti, Fernando G., and Armando E. De Giusti. "Broadcast-Based Parallel LU Factorization." In Euro-Par 2005 Parallel Processing. Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11549468_94.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

PAN, Ping-Qi. "Face Method with LU Factorization." In Linear Programming Computation. Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-19-0147-8_23.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Stromberg, Marc. "LU Factorization of Any Matrix." In Trends in Mathematics. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-49716-3_14.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

PAN, Ping-Qi. "Dual Face Method with LU Factorization." In Linear Programming Computation. Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-19-0147-8_24.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Oppe, Thomas C., and David R. Kincaid. "Parallel LU-factorization algorithms for dense matrices." In Lecture Notes in Computer Science. Springer Berlin Heidelberg, 1988. http://dx.doi.org/10.1007/3-540-18991-2_34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Margaritis, K., and D. J. Evans. "Parallel systolic LU factorization for simplex updates." In Lecture Notes in Computer Science. Springer Berlin Heidelberg, 1988. http://dx.doi.org/10.1007/3-540-18991-2_46.

9

Donfack, Simplice, Laura Grigori, and Amal Khabou. "Avoiding Communication through a Multilevel LU Factorization." In Euro-Par 2012 Parallel Processing. Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-32820-6_55.

10

Asenjo, R., and E. L. Zapata. "Sparse LU factorization on the Cray T3D." In High-Performance Computing and Networking. Springer Berlin Heidelberg, 1995. http://dx.doi.org/10.1007/bfb0046701.


Conference papers on the topic "LU factorization"

1

Freire, Manuel, Ernesto Dufrechou, and Pablo Ezzatti. "A synchronization-free incomplete LU factorization for GPUs with level-set analysis." In 2025 33rd Euromicro International Conference on Parallel, Distributed, and Network-Based Processing (PDP). IEEE, 2025. https://doi.org/10.1109/pdp66500.2025.00037.

2

Monawwar, Haya, Ahmad Ali, Hantao Cui, Jonathan Maack, and Min Xiong. "Impact of Reordering on the LU Factorization Performance of Bordered Block-Diagonal Sparse Matrix." In 2024 56th North American Power Symposium (NAPS). IEEE, 2024. http://dx.doi.org/10.1109/naps61145.2024.10741847.

3

Dumas, Jean-Guillaume, Joris van der Hoeven, Clément Pernet, and Daniel S. Roche. "LU Factorization with Errors." In ISSAC '19: International Symposium on Symbolic and Algebraic Computation. ACM, 2019. http://dx.doi.org/10.1145/3326229.3326244.

4

Badawy, Mohammad Osama, Yasser Y. Hanafy, and Ramy Eltarras. "LU factorization using multithreaded system." In 2012 22nd International Conference on Computer Theory and Applications (ICCTA). IEEE, 2012. http://dx.doi.org/10.1109/iccta.2012.6523540.

5

Nguyen, Patrick, Luca Rigazio, Christian Wellekens, and Jean-Claude Junqua. "LU factorization for feature transformation." In 7th International Conference on Spoken Language Processing (ICSLP 2002). ISCA, 2002. http://dx.doi.org/10.21437/icslp.2002-56.

6

Iakymchuk, Roman, Enrique S. Quintana-Orti, Erwin Laure, and Stef Graillat. "Towards Reproducible Blocked LU Factorization." In 2017 IEEE International Parallel and Distributed Processing Symposium: Workshops (IPDPSW). IEEE, 2017. http://dx.doi.org/10.1109/ipdpsw.2017.94.

7

Agullo, Emmanuel, Cedric Augonnet, Jack Dongarra, et al. "LU factorization for accelerator-based systems." In 2011 9th IEEE/ACS International Conference on Computer Systems and Applications (AICCSA). IEEE, 2011. http://dx.doi.org/10.1109/aiccsa.2011.6126599.

8

Lindquist, Neil, Mark Gates, Piotr Luszczek, and Jack Dongarra. "Threshold Pivoting for Dense LU Factorization." In 2022 IEEE/ACM Workshop on Latest Advances in Scalable Algorithms for Large-Scale Heterogeneous Systems (ScalAH). IEEE, 2022. http://dx.doi.org/10.1109/scalah56622.2022.00010.

9

Kikuchi, Hisakazu, Junghyeun Hwang, Shogo Muramatsu, and Jaeho Shin. "Reversible component transforms by the LU factorization." In 2010 Picture Coding Symposium (PCS). IEEE, 2010. http://dx.doi.org/10.1109/pcs.2010.5702475.

10

Shen, Kai, Xiangmin Jiao, and Tao Yang. "Elimination forest guided 2D sparse LU factorization." In Proceedings of the Tenth Annual ACM Symposium on Parallel Algorithms and Architectures (SPAA '98). ACM Press, 1998. http://dx.doi.org/10.1145/277651.277658.


Reports on the topic "LU factorization"

1

Amestoy, Patrick R., and Chiara Puglisi. An unsymmetrized multifrontal LU factorization. Office of Scientific and Technical Information (OSTI), 2000. http://dx.doi.org/10.2172/776628.

2

Kurzak, Jakub, Piotr Luszczek, Mathieu Faverge, and Jack Dongarra. LU Factorization with Partial Pivoting for a Multi-CPU, Multi-GPU Shared Memory System. Office of Scientific and Technical Information (OSTI), 2012. http://dx.doi.org/10.2172/1173291.

3

D'Azevedo, E. F., and J. J. Dongarra. The design and implementation of the parallel out-of-core ScaLAPACK LU, QR and Cholesky factorization routines. Office of Scientific and Technical Information (OSTI), 1997. http://dx.doi.org/10.2172/296722.
