Academic literature on the topic 'Loop parallelization'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Loop parallelization.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Loop parallelization"

1. Aiken, A., and A. Nicolau. "Optimal loop parallelization." ACM SIGPLAN Notices 23, no. 7 (1988): 308–17. http://dx.doi.org/10.1145/960116.54021.

2. Kumar, M. "Automatic Loop Parallelization." Computer Journal 40, no. 6 (1997): 301. http://dx.doi.org/10.1093/comjnl/40.6.301.

3. Griebl, Martin, and Christian Lengauer. "On the Space-Time Mapping of WHILE-Loops." Parallel Processing Letters 4, no. 3 (1994): 221–32. http://dx.doi.org/10.1142/s0129626494000223.
Abstract: A WHILE-loop can be viewed as a FOR-loop with a dynamic upper bound. The computational model of polytopes is useful for the automatic parallelization of FOR-loops. We investigate its potential for the parallelization of WHILE-loops.

4. Gasperoni, F., U. Schwiegelshohn, and K. Ebcioğlu. "On optimal loop parallelization." ACM SIGMICRO Newsletter 20, no. 3 (1989): 141–47. http://dx.doi.org/10.1145/75395.75411.

5. Anderson, Richard J., and Barbara B. Simons. "A Fast Heuristic for Loop Parallelization." Parallel Processing Letters 4, no. 3 (1994): 281–99. http://dx.doi.org/10.1142/s0129626494000272.
Abstract: We present a fast loop parallelization heuristic that assigns separate invocations of a loop to different processors. If the loop contains data dependences between iterations, later iterations can be delayed while awaiting a result computed in an earlier iteration. In this paper we study a scheduling problem, called the Delay Problem, that approximates the problem of minimizing the delay in the start time of loops with loop-carried dependences. Our major result is a fast, O(n log² n)-time algorithm for the case where the precedence constraints are a forest of in-trees or a forest of out-trees.

6. Ciaca, Monica-Iuliana, Loredana Mocean, Alexandru Vancea, and Mihai Avornicului. "Optimal Parallelization of Loop Structures." International Journal of Computers & Technology 15, no. 7 (2016): 6907–13. http://dx.doi.org/10.24297/ijct.v15i7.3974.
Abstract: This paper is intended as a follow-up to the authors' previous articles. On the one hand, it concludes with a theorem that gives a definitive answer to one very important research direction; on the other hand, it opens new research directions in the field of automatic parallelization of loop structures.

7. Barnett, Michael, and Christian Lengauer. "Unimodularity and the Parallelization of Loops." Parallel Processing Letters 2, no. 2–3 (1992): 273–81. http://dx.doi.org/10.1142/s0129626492000416.
Abstract: The parallelization of loops can be made formal by basing it on an algebraic theory of loop transformations. In this theory, the concept of unimodularity arises. We discuss the pros and cons of insisting on unimodularity.

8. Darte, Alain, Georges-André Silber, and Frédéric Vivien. "Combining Retiming and Scheduling Techniques for Loop Parallelization and Loop Tiling." Parallel Processing Letters 7, no. 4 (1997): 379–92. http://dx.doi.org/10.1142/s0129626497000383.
Abstract: Tiling is a technique used for exploiting medium-grain parallelism in nested loops. It relies on a first step that detects sets of permutable nested loops. All algorithms developed so far treat the statements of the loop body as a single block; in other words, they cannot take advantage of the structure of dependences between different statements. In this paper, we overcome this limitation by showing how the structure of the reduced dependence graph can be taken into account to detect more permutable loops. Our method combines graph retiming and graph scheduling techniques …

9. Oancea, Cosmin E., and Lawrence Rauchwerger. "Logical inference techniques for loop parallelization." ACM SIGPLAN Notices 47, no. 6 (2012): 509–20. http://dx.doi.org/10.1145/2345156.2254124.

10. Größlinger, Armin, Martin Griebl, and Christian Lengauer. "Quantifier elimination in automatic loop parallelization." Journal of Symbolic Computation 41, no. 11 (2006): 1206–21. http://dx.doi.org/10.1016/j.jsc.2005.09.012.
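Loop tiling, the technique named in the Darte, Silber, and Vivien entry above, splits a nested loop's iteration space into fixed-size blocks so that each block can become a medium-grain unit of parallel work. As an illustration only (the function name and its structure are our own, not taken from any cited paper), here is a minimal sketch showing that tiling merely permutes the iteration order of the nest:

```python
def tile_iteration_order(n, b):
    """Return the (i, j) visit order of an n x n loop nest tiled with b x b tiles."""
    order = []
    for ti in range(0, n, b):                      # loop over tiles
        for tj in range(0, n, b):
            for i in range(ti, min(ti + b, n)):    # loop within one tile
                for j in range(tj, min(tj + b, n)):
                    order.append((i, j))
    return order

# The tiled order is a permutation of the original iteration space, so any
# nest whose iterations are independent computes the same result tile by tile.
assert sorted(tile_iteration_order(5, 2)) == [(i, j) for i in range(5) for j in range(5)]
```

Because each b × b tile touches a contiguous block of iterations, tiles can be handed to different processors (or kept local for cache reuse) once the enclosing loops are shown to be permutable.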

Dissertations / Theses on the topic "Loop parallelization"

1. Wottrich, Rodolfo Guilherme. "Loop parallelization in the cloud using OpenMP and MapReduce." Master's thesis, Universidade Estadual de Campinas, Instituto de Computação, 2014. http://repositorio.unicamp.br/jspui/handle/REPOSIP/275500.
Abstract (translated from Portuguese): Advisors: Guido Costa Souza de Araújo and Rodolfo Jardim de Azevedo. The search for parallelism has always been an important goal in the design of computing systems, driven mainly by the constant interest in reducing application execution times. Parallel programming is a field …

2. Zhang, Chenggang (张呈刚). "Run-time loop parallelization with efficient dependency checking on GPU-accelerated platforms." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2011. http://hub.hku.hk/bib/B47167658.
Abstract: General-Purpose computing on Graphics Processing Units (GPGPU) has attracted a lot of attention recently. Exciting results have been reported in using GPUs to accelerate applications in various domains such as scientific simulations, data mining, bioinformatics, and computational finance. However, up to now GPUs can only accelerate data-parallel loops with statically analyzable parallelism. Loops with dynamic parallelism (e.g., with array accesses through subscripted subscripts), an important pattern in many general-purpose applications, cannot be parallelized on GPUs using existing technologies …

3. Han, Guodong (韩国栋). "Profile-guided loop parallelization and co-scheduling on GPU-based heterogeneous many-core architectures." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2013. http://hub.hku.hk/bib/B50534257.
Abstract: GPU-based heterogeneous architectures (e.g., Tianhe-1A, Nebulae), composed of multi-core CPUs and GPUs, have seen increasing adoption and are becoming the norm in supercomputing because they are cost-effective and power-efficient. However, programming such heterogeneous architectures still requires significant effort from application developers using sophisticated GPU programming languages such as CUDA and OpenCL. Although some automatic parallelization tools based on static analysis can ease the programming effort, this approach can only parallelize loops that are 100% free of inter-iteration dependences …

4. Hartono, Albert. "Tools for Performance Optimizations and Tuning of Affine Loop Nests." The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1259685041.

5. Sukumaran-Rajam, Aravind. "Beyond the realm of the polyhedral model: combining speculative program parallelization with polyhedral compilation." Thesis, Strasbourg, 2015. http://www.theses.fr/2015STRAD040/document.
Abstract (translated from French): In this thesis, we present our contributions to Apollo (Automatic speculative POLyhedral Loop Optimizer), an automatic compiler that combines speculative parallelization with the polyhedral model in order to optimize codes on the fly. By performing partial instrumentation during execution and interpolating the results, Apollo is able to build a speculative polyhedral model dynamically. This speculative model is then passed to Pluto, a static polyhedral scheduler. Apollo then selects one of the optimization skeletons …

6. Dang, Francis Hoai Dinh. "Speculative parallelization of partially parallel loops." College Station, Tex.: Texas A&M University, 2007. http://hdl.handle.net/1969.1/ETD-TAMU-1271.

7. Trifunovic, Konrad. "Efficient search-based strategies for polyhedral compilation: algorithms and experience in a production compiler." PhD thesis, Université Paris Sud - Paris XI, 2011. http://tel.archives-ouvertes.fr/tel-00661334.
Abstract: To exploit the performance advantages of current multicore and heterogeneous architectures, compilers must perform increasingly complex program transformations. The search space of possible program optimizations is huge and unstructured. Selecting the best transformation and predicting its potential performance benefit is a major problem in today's optimizing compilers. A promising approach to handling program optimizations is to focus on automatic loop optimizations expressed in the polyhedral model. The current approaches …

8. Cohen, Albert. "Analyse et transformation de programmes: du modèle polyédrique aux langages formels" [Program analysis and transformation: from the polyhedral model to formal languages]. PhD thesis, Université de Versailles-Saint Quentin en Yvelines, 1999. http://tel.archives-ouvertes.fr/tel-00550829.
Abstract (translated from French): Today's microprocessors and parallel architectures pose new challenges to compilation techniques. In the presence of parallelism, optimizations become too specific and complex to be left to the programmer. Automatic parallelization techniques go beyond the traditional scope of numerical applications and address new program models, such as non-affine loop nests, recursive calls, and dynamic data structures. Precise analyses are at the heart of parallelism detection; they gather …

9. Feng, Shuangtong. "Efficient Parallelization of 2D Ising Spin Systems." Thesis, Virginia Tech, 2001. http://hdl.handle.net/10919/36263.
Abstract: The problem of efficiently parallelizing 2D Ising spin systems requires realistic algorithmic design and implementation based on an understanding of issues from computer science and statistical physics. In this work, we not only consider fundamental parallel computing issues but also ensure that the major constraints and criteria of 2D Ising spin systems are incorporated into our study. This realism in both parallel computation and statistical physics has rarely been reflected in previous research on this problem. In this thesis, we designed and implemented a variety of parallel algorithms …

10. Ravishankar, Mahesh. "Automatic Parallelization of Loops with Data Dependent Control Flow and Array Access Patterns." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1400085733.
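Several of the theses above deal with loops whose parallelism cannot be proved statically, e.g. writes through subscripted subscripts like a[idx[i]]. A common run-time strategy (of the inspector/executor flavor; the function below is our own simplified illustration, not the algorithm of any cited thesis) is to inspect the actual subscript values before deciding whether the loop may run as a parallel DOALL:

```python
def can_run_parallel(idx):
    """Inspector for a loop of the form:  for i: a[idx[i]] += f(i).
    If no two iterations write the same element, there are no
    cross-iteration conflicts and the loop can run fully parallel."""
    return len(set(idx)) == len(idx)

# All writes land on distinct elements: safe to parallelize.
assert can_run_parallel([0, 1, 2, 3])
# Iterations 1 and 2 both write a[1]: a loop-carried dependence exists.
assert not can_run_parallel([0, 1, 1, 3])
```

Real systems (including the GPU-based ones described above) amortize or parallelize this check itself and fall back to sequential or speculative execution when a conflict is detected.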

Books on the topic "Loop parallelization"

1. Banerjee, Utpal. Loop Parallelization. Kluwer Academic, 1994.

2. Banerjee, Utpal. Loop Parallelization. Springer US, 1994. http://dx.doi.org/10.1007/978-1-4757-5676-0.

3. Tanase, Alexandru-Petru, Frank Hannig, and Jürgen Teich. Symbolic Parallelization of Nested Loop Programs. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-73909-0.

4. Saltz, Joel H. Run-Time Parallelization and Scheduling of Loops. ICASE, 1988.

5. Banerjee, Utpal. Loop Parallelization. Springer, 2010.

6. Teich, Jürgen, Alexandru-Petru Tanase, and Frank Hannig. Symbolic Parallelization of Nested Loop Programs. Springer, 2018.

7. Tanase, Alexandru-Petru. Symbolic Parallelization of Nested Loop Programs. Springer, 2018.

8. Run-Time Parallelization and Scheduling of Loops. Institute for Computer Applications in Science and Engineering, NASA Langley Research Center, 1988.

9. Mirchandaney, Ravi, Doug Baxter, and the Institute for Computer Applications in Science and Engineering, eds. Run-Time Parallelization and Scheduling of Loops. Institute for Computer Applications in Science and Engineering, NASA Langley Research Center, 1988.

10. Run-Time Parallelization and Scheduling of Loops. Institute for Computer Applications in Science and Engineering, NASA Langley Research Center, 1990.

Book chapters on the topic "Loop parallelization"

1. Banerjee, Utpal. "Loop Permutations." In Loop Parallelization. Springer US, 1994. http://dx.doi.org/10.1007/978-1-4757-5676-0_2.

2. Banerjee, Utpal. "Background." In Loop Parallelization. Springer US, 1994. http://dx.doi.org/10.1007/978-1-4757-5676-0_1.

3. Banerjee, Utpal. "Unimodular Transformations." In Loop Parallelization. Springer US, 1994. http://dx.doi.org/10.1007/978-1-4757-5676-0_3.

4. Banerjee, Utpal. "Remainder Transformations." In Loop Parallelization. Springer US, 1994. http://dx.doi.org/10.1007/978-1-4757-5676-0_4.

5. Banerjee, Utpal. "Program Partitioning." In Loop Parallelization. Springer US, 1994. http://dx.doi.org/10.1007/978-1-4757-5676-0_5.

6. Wonnacott, David, Barbara Chapman, James LaGrone, et al. "Optimistic Loop Parallelization." In Encyclopedia of Parallel Computing. Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-09766-4_2142.

7. Dongarra, Jack, Piotr Luszczek, Paul Feautrier, et al. "Loop Nest Parallelization." In Encyclopedia of Parallel Computing. Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-09766-4_228.

8. Leasure, Bruce, David J. Kuck, Sergei Gorlatch, et al. "Parallelization, Loop Nest." In Encyclopedia of Parallel Computing. Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-09766-4_2502.

9. Darte, Alain, Yves Robert, and Frédéric Vivien. "Loop Parallelization Algorithms." In Compiler Optimizations for Scalable Parallel Systems. Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-45403-9_5.

10. Tanase, Alexandru-Petru, Frank Hannig, and Jürgen Teich. "Symbolic Parallelization." In Symbolic Parallelization of Nested Loop Programs. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-73909-0_3.
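Several works listed above (Banerjee's "Unimodular Transformations" chapter and the Barnett–Lengauer article in the journal section) rest on the same idea: a loop transformation such as interchange or skewing is represented as an integer matrix with determinant ±1, so it maps the iteration space bijectively onto a new one. A minimal sketch of this idea (the helper name is ours, and this is a 2-D special case, not the general theory):

```python
def apply_unimodular(points, m):
    """Map each 2-D iteration vector (i, j) through the integer matrix m."""
    return [(m[0][0] * i + m[0][1] * j, m[1][0] * i + m[1][1] * j)
            for (i, j) in points]

# Loop interchange corresponds to the unimodular matrix [[0, 1], [1, 0]].
interchange = [[0, 1], [1, 0]]
pts = [(i, j) for i in range(3) for j in range(4)]
mapped = apply_unimodular(pts, interchange)

# |det| = 1, so the transformed nest visits exactly the original integer
# points, just in a different order: (i, j) -> (j, i).
assert sorted(mapped) == sorted((j, i) for i in range(3) for j in range(4))

# Loop skewing is unimodular too: [[1, 0], [1, 1]] sends (i, j) to (i, i + j).
assert apply_unimodular([(1, 2)], [[1, 0], [1, 1]]) == [(1, 3)]
```

Because the mapping is a bijection on integer points, the transformed loop nest executes each original iteration exactly once, which is what makes unimodular transformations legal reorderings whenever they respect the dependences.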

Conference papers on the topic "Loop parallelization"

1. Aiken, A., and A. Nicolau. "Optimal loop parallelization." In the ACM SIGPLAN 1988 conference. ACM Press, 1988. http://dx.doi.org/10.1145/53990.54021.

2. Gasperoni, F., U. Schwiegelshohn, and K. Ebcioğlu. "On optimal loop parallelization." In the 22nd annual workshop. ACM Press, 1989. http://dx.doi.org/10.1145/75362.75411.

3. Dutta, Sudakshina, Dipankar Sarkar, Arvind Rawat, and Kulwant Singh. "Validation of Loop Parallelization and Loop Vectorization Transformations." In 11th International Conference on Evaluation of Novel Software Approaches to Software Engineering. SCITEPRESS - Science and Technology Publications, 2016. http://dx.doi.org/10.5220/0005869501950202.

4. Oancea, Cosmin E., and Lawrence Rauchwerger. "Logical inference techniques for loop parallelization." In the 33rd ACM SIGPLAN conference. ACM Press, 2012. http://dx.doi.org/10.1145/2254064.2254124.

5. Lucas, Divino Cesar S., and Guido Araujo. "The Batched DOACROSS loop parallelization algorithm." In 2015 International Conference on High Performance Computing & Simulation (HPCS). IEEE, 2015. http://dx.doi.org/10.1109/hpcsim.2015.7237079.

6. Vasiladiotis, Christos, Roberto Castaneda Lozano, Murray Cole, and Bjorn Franke. "Loop Parallelization using Dynamic Commutativity Analysis." In 2021 IEEE/ACM International Symposium on Code Generation and Optimization (CGO). IEEE, 2021. http://dx.doi.org/10.1109/cgo51591.2021.9370319.

7. Chamski. "Nested loop sequences: towards efficient loop structures in automatic parallelization." In Proceedings of the Twenty-Seventh Annual Hawaii International Conference on System Sciences. IEEE Comput. Soc. Press, 1994. http://dx.doi.org/10.1109/hicss.1994.323283.

8. Pereda, Alexis, David R. C. Hill, Claude Mazel, and Bruno Bachelet. "Static Loop Parallelization Decision Using Template Metaprogramming." In 2018 International Conference on High Performance Computing & Simulation (HPCS). IEEE, 2018. http://dx.doi.org/10.1109/hpcs.2018.00159.

9. Chemeris, Alexander, Julia Gorunova, and Dmitry Lazorenko. "Loop nests parallelization for digital system synthesis." In 2013 11th East-West Design and Test Symposium (EWDTS). IEEE, 2013. http://dx.doi.org/10.1109/ewdts.2013.6673180.

10. Shao, Shengjia, Shouyi Yin, Leibo Liu, and Shaojun Wei. "Map-reduce inspired loop parallelization on CGRA." In 2014 IEEE International Symposium on Circuits and Systems (ISCAS). IEEE, 2014. http://dx.doi.org/10.1109/iscas.2014.6865364.
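DOACROSS parallelization, the subject of the Lucas and Araujo paper above, pipelines a loop whose iterations carry a dependence: iteration i may begin as soon as iteration i-1 has produced the value it needs, rather than waiting for i-1 to finish entirely. The sketch below (our own illustration with invented parameter names, not the batched algorithm from the paper) computes the earliest start times under that model:

```python
def doacross_start_times(n, iter_time, post_delay):
    """Earliest start time of each of n iterations when iteration i may
    begin post_delay time units after iteration i-1 starts (the point at
    which the cross-iteration value is posted), instead of waiting the
    full iter_time for i-1 to complete."""
    starts = [0] * n
    for i in range(1, n):
        starts[i] = starts[i - 1] + post_delay
    return starts

# With 10-unit iterations whose dependent value is posted after 3 units,
# four iterations finish at time 9 + 10 = 19 instead of the 40 units a
# fully sequential schedule would need.
s = doacross_start_times(4, 10, 3)
assert s == [0, 3, 6, 9]
assert s[-1] + 10 < 4 * 10
```

When post_delay equals iter_time the schedule degenerates to sequential execution; the smaller the delay relative to the iteration body, the more the iterations overlap, which is exactly the trade-off DOACROSS scheduling exploits.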

Reports on the topic "Loop parallelization"

1. Symes, William W., and Michel Kern. Loop Level Parallelization of a Seismic Inversion Code. Defense Technical Information Center, 1993. http://dx.doi.org/10.21236/ada444976.

2. Baxter, Doug, Joel Saltz, Martin Schultz, and Stan Eisenstat. Preconditioned Krylov Solvers and Methods for Runtime Loop Parallelization. Defense Technical Information Center, 1988. http://dx.doi.org/10.21236/ada206388.