To see the other types of publications on this topic, follow the link: Loop parallelization.

Dissertations / Theses on the topic 'Loop parallelization'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 19 dissertations / theses for your research on the topic 'Loop parallelization.'

Next to every source in the list of references there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Wottrich, Rodolfo Guilherme. "Loop parallelization in the cloud using OpenMP and MapReduce." [s.n.], 2014. http://repositorio.unicamp.br/jspui/handle/REPOSIP/275500.

Full text
Abstract:
Advisors: Guido Costa Souza de Araújo, Rodolfo Jardim de Azevedo. Master's dissertation, Universidade Estadual de Campinas, Instituto de Computação, 2014. The search for parallelism has always been an important goal in the design of computing systems, driven mainly by the constant interest in reducing application execution times. Parallel programming is an area
2

Zhang, Chenggang, and 张呈刚. "Run-time loop parallelization with efficient dependency checking on GPU-accelerated platforms." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2011. http://hub.hku.hk/bib/B47167658.

Full text
Abstract:
General-Purpose computing on Graphics Processing Units (GPGPU) has attracted a lot of attention recently. Exciting results have been reported in using GPUs to accelerate applications in various domains such as scientific simulations, data mining, bio-informatics and computational finance. However, up to now GPUs have only been able to accelerate data-parallel loops with statically analyzable parallelism. Loops with dynamic parallelism (e.g., with array accesses through subscripted subscripts), an important pattern in many general-purpose applications, cannot be parallelized on GPUs using existing technolog
3

Han, Guodong, and 韩国栋. "Profile-guided loop parallelization and co-scheduling on GPU-based heterogeneous many-core architectures." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2013. http://hub.hku.hk/bib/B50534257.

Full text
Abstract:
GPU-based heterogeneous architectures (e.g., Tianhe-1A, Nebulae), composed of multi-core CPUs and GPUs, have drawn increasing adoption and are becoming the norm in supercomputing, as they are cost-effective and power-efficient. However, programming such heterogeneous architectures still requires significant effort from application developers using sophisticated GPU programming languages such as CUDA and OpenCL. Although some automatic parallelization tools based on static analysis could ease the programming effort, this approach can only parallelize loops 100% free of inter-iteration dep
4

Hartono, Albert. "Tools for Performance Optimizations and Tuning of Affine Loop Nests." The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1259685041.

Full text
5

Sukumaran, Rajam Aravind. "Beyond the realm of the polyhedral model : combining speculative program parallelization with polyhedral compilation." Thesis, Strasbourg, 2015. http://www.theses.fr/2015STRAD040/document.

Full text
Abstract:
In this thesis, we present our contributions to Apollo (Automatic speculative POLyhedral Loop Optimizer), an automatic compiler combining speculative parallelization and the polyhedral model in order to optimize codes on the fly. By performing partial instrumentation during execution and submitting it to interpolation, Apollo is able to build a speculative polyhedral model dynamically. This speculative model is then passed to Pluto, a static polyhedral scheduler. Apollo then selects one of the optimization skeletons
6

Dang, Francis Hoai Dinh. "Speculative parallelization of partially parallel loops." College Station, Tex.: Texas A&M University, 2007. http://hdl.handle.net/1969.1/ETD-TAMU-1271.

Full text
7

Trifunovic, Konrad. "Efficient search-based strategies for polyhedral compilation : algorithms and experience in a production compiler." PhD thesis, Université Paris Sud - Paris XI, 2011. http://tel.archives-ouvertes.fr/tel-00661334.

Full text
Abstract:
In order to exploit the performance advantages of current multicore and heterogeneous architectures, compilers are required to perform more and more complex program transformations. The search space of possible program optimizations is huge and unstructured. Selecting the best transformation and predicting its potential performance benefit is a major problem in today's optimizing compilers. A promising approach to handling program optimization is to focus on automatic loop optimizations expressed in the polyhedral model. The current approaches for o
8

Cohen, Albert. "Analyse et transformation de programmes: du modèle polyédrique aux langages formels." PhD thesis, Université de Versailles-Saint Quentin en Yvelines, 1999. http://tel.archives-ouvertes.fr/tel-00550829.

Full text
Abstract:
Today's microprocessors and parallel architectures pose new challenges to compilation techniques. In the presence of parallelism, optimizations become too specific and complex to be left to the programmer. Automatic parallelization techniques reach beyond the traditional scope of numerical applications and address new program models, such as non-affine loop nests, recursive calls, and dynamic data structures. Precise analyses are at the heart of parallelism detection; they gather
9

Feng, Shuangtong. "Efficient Parallelization of 2D Ising Spin Systems." Thesis, Virginia Tech, 2001. http://hdl.handle.net/10919/36263.

Full text
Abstract:
The problem of efficient parallelization of 2D Ising spin systems requires realistic algorithmic design and implementation based on an understanding of issues from computer science and statistical physics. In this work, we not only consider fundamental parallel computing issues but also ensure that the major constraints and criteria of 2D Ising spin systems are incorporated into our study. This realism in both parallel computation and statistical physics has rarely been reflected in previous research for this problem. In this thesis, we designed and implemented a variety of parallel algorit
10

Ravishankar, Mahesh. "Automatic Parallelization of Loops with Data Dependent Control Flow and Array Access Patterns." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1400085733.

Full text
11

Jimborean, Alexandra. "Adapting the polytope model for dynamic and speculative parallelization." PhD thesis, Université de Strasbourg, 2012. http://tel.archives-ouvertes.fr/tel-00733850.

Full text
Abstract:
In this thesis, we present a Thread-Level Speculation (TLS) framework whose main feature is to speculatively parallelize a sequential loop nest in various ways, to maximize performance. We perform code transformations by applying the polyhedral model that we adapted for speculative and runtime code parallelization. For this purpose, we designed a parallel code pattern which is patched by our runtime system according to the profiling information collected on some execution samples. We show on several benchmarks that our framework yields good performance on codes which could not be handled effic
12

Carrascal, Manzanares Carlos. "Parallélisation d’un code éléments finis spectraux. Application au contrôle non destructif par ultrasons." Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS586.

Full text
Abstract:
The subject of this thesis is to study various avenues for optimizing the computation time of the high-order spectral finite element method (SFEM). The goal is to improve performance on readily accessible architectures, namely SIMD multicore processors and graphics processors. Since the computational kernels are bound by memory accesses (a sign of low arithmetic intensity), most of the optimizations presented aim at reducing and accelerating memory accesses. An improved indexing of matrices and vectors, a combination of
13

Liu, Li, and 劉立. "A Basis Theory for Loop Parallelization." Thesis, 1995. http://ndltd.ncl.edu.tw/handle/71089128535342433703.

Full text
Abstract:
PhD dissertation, National Taiwan University, Graduate Institute of Computer Science and Information Engineering, 1995 (ROC year 84). Parallelism extraction, iteration partitioning, data partitioning, and scheduling are the most important issues in parallelizing compilers. Parallelism extraction finds parallelizable computations. Iteration partitioning concerns maximizing the number of independent partitions. Data partitioning tries to group the data used by dependent iterations so as to reduce communication. Loop scheduling synchronizes the dependent iterations. Since all these
14

Chen, Jiun-An, and 陳俊安. "On the Synchronization Problem in Loop Parallelization." Thesis, 1996. http://ndltd.ncl.edu.tw/handle/24330563448510709213.

Full text
15

CHEN, YUEH CHIH, and 陳約志. "A Loop and Array Parallelization Technique on Distributed-memory Multiprocessor System." Thesis, 1996. http://ndltd.ncl.edu.tw/handle/89121136639528379970.

Full text
Abstract:
Master's thesis, National Sun Yat-sen University, Department of Computer Science and Engineering, 1995 (ROC year 84). In this thesis we discuss how to parallelize a double loop and distribute the arrays it uses so that the generated parallel program can be executed on a one-dimensional distributed-memory multiprocessor system and achieve good speedup. On a distributed-memory multiprocessor system, a parallel program must avoid as much synchronization and communication between nodes as possible; otherwise we don't have to parallelize it because of the worse
16

Wu, Chi-Fan, and 巫濟帆. "A Run-Time Loop Parallelization Technique on Shared-Memory Multiprocessor Systems." Thesis, 2000. http://ndltd.ncl.edu.tw/handle/69474130464895861085.

Full text
Abstract:
Master's thesis, National Sun Yat-sen University, Department of Electrical Engineering, 1999 (ROC year 88). High-performance computing power is important for today's advanced scientific calculations. A multiprocessor system obtains its high performance from the fact that some computations can proceed in parallel. A parallelizing compiler can take a sequential program as input and automatically translate it into parallel form for the target multiprocessor system. But for loops with arrays of irregular, nonlinear, or dynamic access patterns, no current parallelizing compiler can determine whether data dependences exist at compile time. Thus a
17

Kao, Shih-Hung, and 高世宏. "DOACROSS Loops Parallelization for Parallelizing Compilers." Thesis, 1995. http://ndltd.ncl.edu.tw/handle/02486619045673065283.

Full text
Abstract:
Master's thesis, National Chiao Tung University, Department of Computer and Information Science, 1994 (ROC year 83). Loop-level parallelism is the most common resource exploited by parallelizing compilers. Most existing parallelizing compilers support only DOALL loop parallelization. However, DOACROSS loops, which are ignored by most current parallelizing compilers, contain plentiful parallelism. In this thesis, a DOACROSS loop parallelization model is proposed. The parallelization of DOACROSS loops is divided into two parts
18

Jeyaraman, Thulasiraman. "Run-time parallelization of irregular DOACROSS loops." 1996. http://hdl.handle.net/1993/19211.

Full text
19

Chen, Chiao-Wu, and 陳昭宇. "A Study and Analysis on Parallelization Techniques for Non- uniform Data Dependence Nested Loops." Thesis, 1996. http://ndltd.ncl.edu.tw/handle/18584318364235163038.

Full text
Abstract:
Master's thesis, National Chiao Tung University, Department of Computer Science and Information Engineering, 1995 (ROC year 84). The purpose of this thesis is to design a loop parallelization technique for non-uniform data dependence loops. Most past loop parallelization techniques focus on the problem of uniform dependence loops. However, when the loop has non-uniform dependences, these methods either fail or are inefficient [Dart 94][Lamp 74][Dhol 92][Berg 87]. According to Zhiyu's survey of the relation between array subscripts and data dependence [Shen 89], there a