Dissertations / Theses on the topic 'Parallelismo'

Consult the top 50 dissertations / theses for your research on the topic 'Parallelismo.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Pasini, Jacopo. "Algebre di Lie e gruppi risolubili e nilpotenti: un parallelismo." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amslaurea.unibo.it/10079/.

Full text
Abstract:
A thesis in algebra proposing a parallel study of solvability and nilpotency in groups and in Lie algebras. Lie algebras are described first, to provide the preliminary knowledge required about this algebraic structure; they are then compared with groups precisely with respect to solvability and nilpotency.
APA, Harvard, Vancouver, ISO, and other styles
2

Congiu, Elena <1992>. "La retorica dell’immigrazione oggi: un parallelismo tra Italia e Francia." Master's Degree Thesis, Università Ca' Foscari Venezia, 2018. http://hdl.handle.net/10579/13388.

Full text
Abstract:
The reflection starts from the question of whether a policy of closing the borders, that is, the sovereigntist denial of hospitality, can be justified as necessary to safeguard state security. In attempting to answer this question, the thesis retraces the cornerstones of the history of international migration, giving particular prominence to the exodus, and then focuses on the voluntary and forced migratory movements that have affected Europe from the dawn of modernity to the period after the Second World War. There follows an analysis of the contribution made by the philosophical debate to the reading of the migratory phenomenon and to the elaboration of concrete responses to it. At the end of the introductory first chapter, the focus narrows again to the peculiarities of the migratory phenomenon in Italy and in France respectively. The second chapter consists of a study of articles taken from the official websites of two conservative dailies that are comparable in target audience and circulation: the Italian Il Giornale and the French Le Figaro. The articles examined are the results of a search by keywords which, each forming a pair with the common denominator immigration, refer in turn to the three types of “security” identified by Foucault. Finally, the third chapter analyses the public speeches of two leading exponents of right-wing extremism in Italy and France, Matteo Salvini and Marine Le Pen, highlighting the crucial role played by the topos of the immigration emergency, and of the countermeasures it supposedly requires, in their respective electoral campaigns.
3

Zanirati, Matteo. "Implementazione di algoritmi di data mining in architetture ad elevato parallelismo." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2013. http://amslaurea.unibo.it/5398/.

Full text
4

Hushi, Bajram. "Sviluppo di un modulo di visualizzazione, memorizzazione e analisi dati in real-time con elevato parallelismo." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2015. http://amslaurea.unibo.it/9257/.

Full text
Abstract:
Design and implementation of the visualisation, storage, and analysis modules of a software system for real-time data acquisition from devices produced by Elements s.r.l. The thesis presents all the phases of analysis, design, implementation, and testing of the developed modules.
5

Antonacci, Antonello. "Sviluppo e analisi di un algoritmo parallelo di riconoscimento di oggetti in immagini." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amslaurea.unibo.it/10508/.

Full text
Abstract:
Sequential and parallel implementation of the Evolution Constructed Features algorithm for object recognition in images. The results obtained by the algorithm are analysed in terms of precision and recall, and the execution times of the two versions are compared in order to evaluate the actual gains obtained through parallelisation.
6

Romagnoli, Simone. "Analisi del linguaggio x10 per architetture parallele: il caso di studio dell’algoritmo Gift Wrapping." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/21642/.

Full text
Abstract:
The aim of this thesis is the study of X10, an object-oriented programming language whose constructs extend sequential languages such as Java and C++ to enable parallel computation. To draw practical conclusions, the thesis analyses the algorithm for computing the convex hull, one of the fundamental algorithms of computational geometry, and presents a parallel version of it implemented in X10.
7

Casadei, Lorenzo. "Gravità teleparallela." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2014. http://amslaurea.unibo.it/6842/.

Full text
Abstract:
This work presents the equivalence between Einstein's general relativity and a little-known theory called Teleparallel Gravity. Although they may seem different, they are two equivalent ways of viewing the universe: the first with curved spacetime, curvature, and geodesic trajectories; the second with flat space and a curvature that behaves as a force. Fundamental to these theories are elements of differential and tensor geometry, such as metric tensors, Riemann tensors, and covariant derivatives, alongside the physical concepts of tetrads, Lorentz connections, and inertial and non-inertial frames.
8

Mokkedem, Abdelillah. "Verification et raffinement de programmes parallèles dans une logique temporelle compositionnelle : application au langage SDL." Vandoeuvre-les-Nancy, INPL, 1994. http://www.theses.fr/1994INPL051N.

Full text
Abstract:
The context in which linear temporal logic (TL) has proved an effective tool is that of a priori verification, which requires complete prior knowledge of the program to be verified. The first part of our work falls within this framework and consists in designing an interactive method for proving invariance and eventuality (liveness) properties of parallel programs. This method, based on a theory called CROCOS, is applied to the SDL language. Temporal logic quickly loses its advantages, however, as soon as one turns to a posteriori verification, where a parallel program is verified progressively throughout its development cycle. The work presented in the second part of this thesis concerns the study of a more refined temporal logic whose semantics must satisfy the full-abstraction property. The new temporal logic thus established prevents details that are too atomic for a given level of abstraction from appearing in the temporal semantics of parallel programs. Beyond compositional verification and the systematic development of parallel programs, program refinement turns out to be rigorously axiomatised within the new temporal logic.
9

Durand, Irène. "Un Modèle d'interprétation répartie pour une architecture multiprocesseur Prolog." Toulouse 3, 1986. http://www.theses.fr/1986TOU30141.

Full text
Abstract:
The first chapter presents the main existing models for the parallel interpretation of Prolog. In Chapter II we propose an interpretation model that accounts for both the AND- and OR-parallelism of Prolog. It relies on R. Kowalski's connection graph: each arc corresponds to a possible resolvent, resolution is driven by messages, and new resolvents are built and executed in parallel. The unification process between environments, which governs the creation of new arcs, is detailed in Chapter III. Chapter IV describes the distributed interpreter and the simulator of the multiprocessor structure that validated the model, and presents the first extensions and optimisations. Finally, Chapter V presents a new version of the interpreter that dynamically detects the independence of literals within a resolvent; it further optimises the search space and achieves a higher degree of AND-parallelism thanks to the ability to execute independent sub-resolvents simultaneously.
10

Ioannou, Nikolas. "Complementing user-level coarse-grain parallelism with implicit speculative parallelism." Thesis, University of Edinburgh, 2012. http://hdl.handle.net/1842/7900.

Full text
Abstract:
Multi-core and many-core systems are the norm in contemporary processor technology and are expected to remain so for the foreseeable future. Parallel programming is, thus, here to stay and programmers have to endorse it if they are to exploit such systems for their applications. Programs using parallel programming primitives like PThreads or OpenMP often exploit coarse-grain parallelism, because it offers a good trade-off between programming effort versus performance gain. Some parallel applications show limited or no scaling beyond a number of cores. Given the abundant number of cores expected in future many-cores, several cores would remain idle in such cases while execution performance stagnates. This thesis proposes using cores that do not contribute to performance improvement for running implicit fine-grain speculative threads. In particular, we present a many-core architecture and protocols that allow applications with coarse-grain explicit parallelism to further exploit implicit speculative parallelism within each thread. We show that complementing parallel programs with implicit speculative mechanisms offers significant performance improvements for a large and diverse set of parallel benchmarks. Implicit speculative parallelism frees the programmer from the additional effort to explicitly partition the work into finer and properly synchronized tasks. Our results show that, for a many-core comprising 128 cores supporting implicit speculative parallelism in clusters of 2 or 4 cores, performance improves on top of the highest scalability point by 44% on average for the 4-core cluster and by 31% on average for the 2-core cluster. We also show that this approach often leads to better performance and energy efficiency compared to existing alternatives such as Core Fusion and Turbo Boost. Moreover, we present a dynamic mechanism to choose the number of explicit and implicit threads, which performs within 6% of the static oracle selection of threads. 
To improve energy efficiency processors allow for Dynamic Voltage and Frequency Scaling (DVFS), which enables changing their performance and power consumption on-the-fly. We evaluate the amenability of the proposed explicit plus implicit threads scheme to traditional power management techniques for multithreaded applications and identify room for improvement. We thus augment prior schemes and introduce a novel multithreaded power management scheme that accounts for implicit threads and aims to minimize the Energy Delay² product (ED²). Our scheme comprises two components: a “local” component that tries to adapt to the different program phases on a per explicit thread basis, taking into account implicit thread behavior, and a “global” component that augments the local components with information regarding inter-thread synchronization. Experimental results show a reduction of ED² of 8% compared to having no power management, with an average reduction in power of 15% that comes at a minimal loss of performance of less than 3% on average.
11

Goubault, Éric. "Geometrie du parallelisme." Palaiseau, Ecole polytechnique, 1995. http://www.theses.fr/1995EPXX0046.

Full text
Abstract:
This thesis is devoted to the study of a geometric model of parallel and distributed systems: higher-dimensional automata (HDA). The geometry appears essentially in the form of cubical complexes and double complexes of modules that formalise the flow of time over shapes of possibly high dimension. We first show that HDA can be used to give semantics, in a categorical and SOS style, to a number of parallel languages. This model proves more expressive than a good number of other operational models of true concurrency, in that it can distinguish the different levels of parallelism and of mutual exclusion, and can thus describe the strategies for allocating actions to processors, that is, more generally, the scheduling properties of actions. The tools of algebraic topology and homological algebra are well suited to computing or characterising such properties: certain homology groups enumerate branchings and confluences, others the mutual exclusions in every dimension, and a homotopy theory makes it possible to isolate the essential schedules of a system. We then give some applications by deriving a protocol-verification algorithm and a parallelisation algorithm. We relate this theory to the problem of presenting monoids by canonical rewriting systems, and to questions of the solvability of protocols on fault-tolerant distributed machines. We also treat serialisability properties for parallel databases, and the impossibility of implementing semaphores in languages without synchronisation or atomicity. Lastly, the model is generalised, first towards simplicial techniques and then towards the techniques of differential geometry, so as to be able to describe real-time parallel systems.
12

González, Vélez Horacio. "Adaptive structured parallelism." Thesis, University of Edinburgh, 2008. http://hdl.handle.net/1842/29122.

Full text
Abstract:
Algorithmic skeletons abstract commonly-used patterns of parallel computation, communication, and interaction. Parallel programs are expressed by interweaving parameterised skeletons analogously to the way in which structured sequential programs are developed, using well-defined constructs. Skeletons provide top-down design composition and control inheritance throughout the program structure. Based on the algorithmic skeleton concept, structured parallelism provides a high-level parallel programming technique which allows the conceptual description of parallel programs whilst fostering platform independence and algorithm abstraction. By decoupling the algorithm specification from machine-dependent structural considerations, structured parallelism allows programmers to code programs regardless of how the computation and communications will be executed in the system platform. Meanwhile, large non-dedicated multiprocessing systems have long posed a challenge to known distributed systems programming techniques as a result of the inherent heterogeneity and dynamism of their resources. Scant research has been devoted to the use of structural information provided by skeletons in adaptively improving program performance, based on resource utilisation. This thesis presents a methodology to improve skeletal parallel programming in heterogeneous distributed systems by introducing adaptivity through resource awareness. As we hypothesise that a skeletal program should be able to adapt to the dynamic resource conditions over time using its structural forecasting information, we have developed ASPara: Adaptive Structured Parallelism. ASPara is a generic methodology to incorporate structural information at compilation into a parallel program, which will help it to adapt at execution. 
By means of the skeleton API, ASPara instruments a skeletal program with a series of pragmatic rules, which depend on particular performance thresholds based on the nature of the skeleton, the computation/communication ratio of the program, and the availability of resources in the system. Every rule essentially determines the scheduling for the given skeleton. ASPara comprises four phases: programming, compilation, calibration, and execution. We illustrate the feasibility of this approach and its associated performance improvements using independent case studies based on two algorithmic skeletons, the task farm and the pipeline, programmed in C and MPI and executed in a non-dedicated heterogeneous bi-cluster system.
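As a point of reference for the task farm skeleton named in this abstract, here is a minimal sketch in Python (the thesis itself works in C and MPI; the `task_farm` name and the thread-based pool are illustrative assumptions, not code from the work):

```python
from concurrent.futures import ThreadPoolExecutor

def task_farm(worker, tasks, n_workers=4):
    # Task-farm skeleton: a "farmer" hands independent tasks to a pool of
    # workers and collects results in task order; the caller supplies only
    # the worker function, never any coordination code.
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(worker, tasks))

# Usage: the structure (the farm) is fixed, the computation is a parameter.
squares = task_farm(lambda x: x * x, range(6))  # → [0, 1, 4, 9, 16, 25]
```

This separation of fixed coordination structure from a pluggable computation is the sense in which skeletons decouple the algorithm specification from machine-dependent concerns.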
13

Calderon, Jose Manuel. "Improving implicit parallelism." Thesis, University of York, 2015. http://etheses.whiterose.ac.uk/13147/.

Full text
Abstract:
We propose a new technique for exploiting the inherent parallelism in lazy functional programs. Known as implicit parallelism, the goal of writing a sequential program and having the compiler improve its performance by determining what can be executed in parallel has been studied for many years. Our technique abandons the idea that a compiler should accomplish this feat in ‘one shot’ with static analysis and instead allow the compiler to improve upon the static analysis using iterative feedback. We demonstrate that iterative feedback can be relatively simple when the source language is a lazy purely functional programming language. We present three main contributions to the field: the automatic derivation of parallel strategies from a demand on a structure, and two new methods of feedback-directed auto-parallelisation. The first method treats the runtime of the program as a black box and uses the ‘wall-clock’ time as a fitness function to guide a heuristic search on bitstrings representing the parallel setting of the program. The second feedback approach is profile directed. This allows the compiler to use profile data that is gathered by the runtime system as the program executes. This allows the compiler to determine which threads are not worth the overhead of creating them. Our results show that the use of feedback-directed compilation can be a good source of refinement for the static analysis techniques that struggle to account for the cost of a computation. This lifts the burden of ‘is this parallelism worthwhile?’ away from the static phase of compilation and to the runtime, which is better equipped to answer the question.
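The first feedback method above, wall-clock time as a black-box fitness guiding a search over bitstrings, can be sketched generically. The following Python mock-up illustrates only the search idea, not the thesis's actual tooling; every name in it is invented for illustration:

```python
import random
import time

def wall_clock(run, bits):
    # Black-box fitness: time one run of the program under a given
    # parallel setting; a lower wall-clock time is a better fitness.
    start = time.perf_counter()
    run(bits)
    return time.perf_counter() - start

def search_settings(n_bits, run, iters=40):
    # Hill-climb over bitstrings, where bit i stands for enabling
    # parallelism at program point i; keep a single-bit flip only
    # when it reduces the measured wall-clock time.
    best = [0] * n_bits
    best_t = wall_clock(run, best)
    for _ in range(iters):
        cand = best[:]
        cand[random.randrange(n_bits)] ^= 1
        t = wall_clock(run, cand)
        if t < best_t:
            best, best_t = cand, t
    return best
```

With a `run` whose runtime drops when a profitable point is parallelised, the search converges on switching that bit on, without the compiler ever modelling the cost of the computation statically.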
14

Xu, Hua. "Scheduling with flexible parallelism." Diss., Georgia Institute of Technology, 1992. http://hdl.handle.net/1853/23388.

Full text
15

Åberg, Ludvig. "Parallelism within queue application." Thesis, Mittuniversitetet, Avdelningen för informationssystem och -teknologi, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-31575.

Full text
Abstract:
The aim of this thesis was to modify an existing order queue application that was unable to execute the orders in a queue in parallel, which in turn could lead to a poor user experience due to increased queue delay. The thesis proposes two queue structures that allow parallel execution within a queue; one of the two is selected for implementation in the modified order queue application. The implementation was carried out in Java EE and used frameworks such as JPQL. Some parts of the order queue application had to be modified to handle the new queue structure: new attributes that define the dependencies of the orders are used to find a suitable parent for each order in the queue. The queue structure was visualised, making it possible to watch execution in real time, and a test server was implemented to test the queue structure. This resulted in a working prototype able to handle dependencies and parallel orders. The modified order queue application was performance-measured and compared to the original order queue application. The measurements showed that the modified application performed better in terms of execution time below a certain number of queues. Future work includes optimising the methods and queries in the implementation to increase performance, and handling parallelism within the orders.
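To make the idea concrete, here is a minimal, purely illustrative sketch of dependency-aware parallel execution within a queue, in Python rather than the thesis's Java EE; `run_queue` and the dependency encoding are assumptions for illustration, not the thesis's design:

```python
from concurrent.futures import ThreadPoolExecutor

def run_queue(orders, deps, execute, n_workers=4):
    # Execute queued orders in parallel while honouring dependencies:
    # an order may start only after every order in deps[order] has
    # finished; orders with satisfied dependencies run concurrently.
    done, results = set(), {}
    pending = list(orders)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        while pending:
            ready = [o for o in pending if deps.get(o, set()) <= done]
            if not ready:
                raise ValueError("cyclic or unsatisfiable dependencies")
            # One "wave" of independent orders executes in parallel.
            for order, result in zip(ready, pool.map(execute, ready)):
                results[order] = result
            done.update(ready)
            pending = [o for o in pending if o not in done]
    return results
```

For example, `run_queue(["a", "b", "c"], {"c": {"a", "b"}}, str.upper)` runs "a" and "b" concurrently and only then "c", mirroring a queue in which an order waits for its parent orders.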
16

Chronaki, Catherine Eleftherios. "Parallelism in declarative languages /." Online version of thesis, 1990. http://hdl.handle.net/1850/10793.

Full text
17

Rehfuss, Paul Stephen. "Parallelism in contextual processing /." Full text open access at:, 1999. http://content.ohsu.edu/u?/etd,272.

Full text
18

MORIN, REMI. "Categories de modeles du parallelisme." Paris 11, 1999. http://www.theses.fr/1999PA112032.

Full text
Abstract:
The first part of this dissertation studies representations of parallel systems consisting of an automaton together with an independence relation indicating which actions may be executed simultaneously by different processes. These parallel automata can represent safe Petri nets and Zielonka's asynchronous automata. By analogy with the net-synthesis problem, we ask which parallel automata actually correspond to a system of communicating automata. We solve this problem for various classes of systems, such as products of automata à la Arnold-Nivat or asynchronous automata with concurrent reads. We then focus in particular on parallel automata that model a system of agents communicating either by message passing or through shared memories. In the second part, we study two models for describing the executions of non-safe Petri nets. On the one hand, we introduce a notion of local trace that generalises classical Mazurkiewicz traces and is suited to the phenomena of local concurrency and auto-concurrency. On the other hand, we characterise the expressive power of local event structures; this model, introduced by Hoogers, Kleijn and Thiagarajan, extends prime event structures with binary conflict, and allows the notion of unfolding, introduced by Nielsen, Plotkin and Winskel and used by McMillan and later Esparza to establish automatic verification techniques, to be extended to general Petri nets. We then develop a very general method for translating local traces into local event structures; this leads to an exhaustive study of the semantically correct interpretations of the notion of event, as well as to multiple variations of the notion of unfolding for Petri nets.
19

Piquer, José. "Parallelisme et distribution en lisp." Palaiseau, Ecole polytechnique, 1991. http://www.theses.fr/1991EPXX0001.

Full text
Abstract:
This thesis presents a new approach to parallelism and distribution in Lisp, starting from a distributed-machine model without shared memory. The shared-memory semantics of Lisp is preserved by using remote pointers and coherence protocols; the whole is called TransPive. An implementation of TransPive on Lisp for Transputers lets tasks share data independently of their placement, and side effects are allowed. A new distributed garbage-collection algorithm, called indirect reference counting, is also presented; this algorithm is very simple and supports the migration of objects.
20

Mussi, Philippe. "Modeles quantitatifs pour le parallelisme." Paris 5, 1990. http://www.theses.fr/1990PA05S009.

Full text
Abstract:
This thesis addresses various problems related to the performance evaluation of parallel systems. After an introduction presenting parallel architectures and distributed systems, the problems raised by their modelling, and the possible approaches to evaluating their performance, we describe a parallel interpreter of arithmetic expressions and, using stochastic grammars, evaluate various performance indices for this system. We then study two different policies for the parallel execution of tree-structured programs: under the assumption of independent, exponentially distributed execution times for each node, we compute the distribution of the total execution time of the tree. Next, we describe and optimise the parallelisation of particle-physics algorithms. The algorithms studied were implemented on the SPMD-architecture machine OPSILA using a domain-decomposition parallelisation method, and we use queueing-network analysis techniques to optimise this decomposition. Finally, we describe a system for simulating queueing networks on a network of Transputers. The simulation of a network is decomposed into as many Occam processes as there are queues; each process is then mapped onto the physical processor network through a message-routing system that overcomes the limitations induced by the rendezvous communications imposed by the Occam language. Various synchronisation policies for the elementary simulators (Time Warp, strong synchronisation) are implemented on this system in order to compare their performance.
21

Nagy, Marius. "Parallelism in real-time computation." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2002. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp05/MQ65642.pdf.

Full text
22

Crooke, David. "Practical structured parallelism using BMF." Thesis, University of Edinburgh, 1998. http://hdl.handle.net/1842/746.

Full text
Abstract:
This thesis concerns the use of the Bird-Meertens Formalism as a mechanism to control parallelism in an imperative programming language. One of the main reasons for the failure of parallelism to enter mainstream computing is the difficulty of developing software and the lack of the portability and performance predictability enjoyed by sequential systems. A key objective should be to minimise costs by abstracting much of the complexity away from the programmer. Criteria for a suitable parallel programming paradigm to meet this goal are defined. The Bird-Meertens Formalism, which has in the past been shown to be a suitable vehicle for expressing parallel algorithms, is used as the basis for a proposed imperative parallel programming paradigm which meets these criteria. A programming language is proposed which is an example of this paradigm, based on the BMF Theory of Lists and the sequential language C. A concurrent operational semantics is outlined, with the emphasis on its use as a practical tool for increasing confidence in program correctness, rather than on full and rigorous formality. A prototype implementation of a subset of this language for a distributed memory, massively parallel computer is produced in the form of a C subroutine library. Although not offering realistic absolute performance, it permits measurements of scalability and relative performance to be undertaken. A case study is undertaken which implements a simple but realistic algorithm in the language, and considers how well the criteria outlined at the start of the project are met. The prototype library implementation is used for performance measurements. A range of further possibilities is examined, in particular ways in which the paradigm language may be extended, and the possibility of using alternative BMF-like type theories. Pragmatic considerations for achieving performance in a production implementation are discussed.
23

Hsieh, Wilson Cheng-Yi. "Extracting parallelism from sequential programs." Thesis, Massachusetts Institute of Technology, 1988. http://hdl.handle.net/1721.1/14752.

Full text
24

Harrison, Rachel. "Pure functional languages and parallelism." Thesis, University of Southampton, 1991. https://eprints.soton.ac.uk/253169/.

Full text
25

Gaska, Benjamin James, and Benjamin James Gaska. "ParForPy: Loop Parallelism in Python." Thesis, The University of Arizona, 2017. http://hdl.handle.net/10150/625320.

Full text
Abstract:
Scientists are trending towards usage of high-level programming languages such as Python. The convenience of these languages often comes at a performance cost, and as the amount of data being processed increases this can make using them unfeasible. Parallelism is a means to achieve better performance, but many users are unaware of it or find it difficult to work with. This thesis presents ParForPy, a means of loop parallelisation that simplifies the usage of parallelism in Python. Discussion is included for determining when parallelism matches well with the problem. Results are given that indicate that ParForPy is both capable of improving program execution time and perceived to be a simpler construct to understand than other techniques for parallelism in Python.
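ParForPy itself is not reproduced here; as an illustration of the loop-parallelism pattern it targets, the Python standard library alone can turn a loop of independent iterations into a parallel map (the `body` workload is hypothetical):

```python
from multiprocessing.pool import ThreadPool

def body(i):
    # Stand-in for an independent loop iteration; in real use this would
    # be the expensive per-iteration work of a scientific script.
    return i * i

# The sequential loop ...
sequential = [body(i) for i in range(8)]

# ... and its parallel counterpart: map the loop body over the index
# space, which is what a parallel-for construct automates for the user.
with ThreadPool(4) as pool:
    parallel = pool.map(body, range(8))

assert parallel == sequential
```

A thread pool keeps the sketch self-contained; for CPU-bound work a process pool would be the realistic choice, since Python threads do not run bytecode in parallel.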
26

Zaitzow, Michael (Michael D.). "Biblical parallelism in 2 Samuel." Carleton University dissertation (Religion), Ottawa, 1994.

Find full text
27

Chandramohan, Kiran. "Mapping parallelism to heterogeneous processors." Thesis, University of Edinburgh, 2016. http://hdl.handle.net/1842/22028.

Full text
Abstract:
Most embedded devices are based on heterogeneous Multiprocessor System on Chips (MPSoCs). These contain a variety of processors like CPUs, micro-controllers, DSPs, GPUs and specialised accelerators. The heterogeneity of these systems helps in achieving good performance and energy efficiency but makes programming inherently difficult. There is no single programming language or runtime to program such platforms. This thesis makes three contributions to these problems. First, it presents a framework that allows code in Single Program Multiple Data (SPMD) form to be mapped to a heterogeneous platform. The mapping space is explored, and it is shown that the best mapping depends on the metric used. Next, a compiler framework is presented which bridges the gap between the high-level programming model of OpenMP and the heterogeneous resources of MPSoCs. It takes OpenMP programs and generates code which runs on all processors. It delivers programming ease while exploiting heterogeneous resources. Finally, a compiler-based approach to runtime power management for heterogeneous cores is presented. Given an externally provided budget, the approach generates heterogeneous, partitioned code that attempts to give the best performance within that budget.
APA, Harvard, Vancouver, ISO, and other styles
28

CHATAIGNIER, PHILIPPE. "Simulations graphiques pour l'enseignement du parallelisme." Paris 6, 1994. http://www.theses.fr/1994PA066084.

Full text
Abstract:
This thesis presents several graphical animations and simulations designed and built to help students better understand the concepts of parallelism by visualizing the phenomena. The first chapter describes work done in neighbouring fields: program and algorithm animation, debugging and performance analysis of parallel programs, and simulation. The second chapter presents the FORCE and COLOS projects, within which this work was carried out, and describes the tools used. The third chapter describes two animations that visualize the concepts of rendezvous and semaphore. The last two chapters present simulations that visualize parallel algorithms on different types of architectures: first, simulations of parallel machines (a shared-memory multiprocessor and a network of processors) written in Smalltalk/V; then a system for animating distributed algorithms on a ring network and a simulation of a shared-memory multiprocessor, both built using X Window and Motif. An example of the pedagogical use of each of these simulations is also presented.
APA, Harvard, Vancouver, ISO, and other styles
29

Puzenat, Didier. "Parallelisme et modularite des modeles connexionnistes." Lyon, École normale supérieure (sciences), 1997. http://www.theses.fr/1997ENSL0061.

Full text
Abstract:
Connectionism makes it possible to solve problems by simulating neural networks. Implementing an artificial neural network imposes heavy computations and motivates the use of the most powerful machines: distributed-memory MIMD computers. This thesis focuses on the parallelization of an incremental classifier. This connectionist model discriminates a data set into classes, with cells added as needed. A first parallelization distributes the network's input space among the processors. The parallelized classifier follows the sequential behaviour exactly, and the maximum speedup grows with the dimension of the input space. Several so-called modular parallelizations are also proposed, parallelizing the learning rather than the network. The parallel behaviour of the model differs from the sequential behaviour, but classification performance is maintained. Specializing the modules makes it possible to reach an optimal speedup. The development of an asynchronous version makes all of this work independent of the target machine by using a virtual machine. Beyond engineering applications, connectionism makes it possible to model and understand living systems within the framework of cognitive science. The classifier is notably used to simulate a repetition-priming phenomenon. Moreover, the experience accumulated with the classifier is used to analyse the results of olfactory experiments; this work validates a hypothesis made by neuropsychologists about the olfactory perceptual system. The thesis ends with the prospect of modelling human memorization processes by associating incremental classifiers with an associative memory. This new, highly modular system opens a very broad field of application.
APA, Harvard, Vancouver, ISO, and other styles
30

Fung, Wilson Wai Lun. "GPU computing architecture for irregular parallelism." Thesis, University of British Columbia, 2015. http://hdl.handle.net/2429/51932.

Full text
Abstract:
Many applications with regular parallelism have been shown to benefit from using Graphics Processing Units (GPUs). However, employing GPUs for applications with irregular parallelism tends to be a risky process, involving significant effort from the programmer and an uncertain amount of performance/efficiency benefit. One known challenge in developing GPU applications with irregular parallelism is the underutilization of SIMD hardware in GPUs due to the application’s irregular control flow behavior, known as branch divergence. Another major development effort is to expose the available parallelism in the application as 1000s of concurrent threads without introducing data races or deadlocks. The GPU software developers may need to spend significant effort verifying the data synchronization mechanisms used in their applications. Despite various research studies indicating the potential benefits, the risks involved may discourage software developers from employing GPUs for this class of applications. This dissertation aims to reduce the burden on GPU software developers with two major enhancements to GPU architectures. First, thread block compaction (TBC) is a microarchitecture innovation that reduces the performance penalty caused by branch divergence in GPU applications. Our evaluations show that TBC provides an average speedup of 22% over a baseline per-warp, stack-based reconvergence mechanism on a set of GPU applications that suffer significantly from branch divergence. Second, Kilo TM is a cost-effective, energy-efficient solution for supporting transactional memory (TM) on GPUs. With TM, programmers can use transactions instead of fine-grained locks to create deadlock-free, maintainable, yet aggressively-parallelized code. In our evaluations, Kilo TM achieves 192X speedup over coarse-grained locking and captures 66% of the performance of fine-grained locking with 34% energy overhead.
Applied Science, Faculty of
Electrical and Computer Engineering, Department of
Graduate
APA, Harvard, Vancouver, ISO, and other styles
31

Alahmadi, Marwan Ibrahim. "Optimizing data parallelism in applicative languages." Diss., Georgia Institute of Technology, 1990. http://hdl.handle.net/1853/8457.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Kilpatrick, P. L. "An application of parallelism to compilation." Thesis, Queen's University Belfast, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.373538.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Daniel, John W. H. "Exploiting application parallelism in production systems." Thesis, Cranfield University, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.279737.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Jouret, Guido Karel. "Exploiting data-parallelism in functional languages." Thesis, Imperial College London, 1991. http://hdl.handle.net/10044/1/46852.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Abeydeera, Maleen Hasanka (Weeraratna Patabendige Maleen Hasanka). "Optimizing throughput architectures for speculative parallelism." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/111930.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 57-62).
Throughput-oriented architectures, like GPUs, use a large number of simple cores and rely on application-level parallelism, using multithreading to keep the cores busy. These architectures work well when parallelism is plentiful but work poorly when it's not. Therefore, it is important to combine these techniques with other hardware support for parallelizing challenging applications. Recent work has shown that speculative parallelism is plentiful for a large class of applications that have traditionally been hard to parallelize. However, adding hardware support for speculative parallelism to a throughput-oriented system leads to a severe pathology: aborted work consumes scarce resources and hurts the throughput of useful work. This thesis develops a technique to optimize throughput-oriented architectures for speculative parallelism: tasks should be prioritized according to how speculative they are. This focuses resources on work that is more likely to commit, reducing aborts and using speculation resources more efficiently. We identify two on-chip resources where this prioritization is most likely to help, the core pipeline and the memory controller. First, this thesis presents speculation-aware multithreading (SAM), a simple policy that modifies a multithreaded processor pipeline to prioritize instructions from less speculative tasks. Second, we modify the on-chip memory controller to prioritize requests issued by tasks that are earlier in the conflict resolution order. We evaluate SAM on systems with up to 64 SMT cores. With SAM, 8-threaded in-order cores outperform single-threaded cores by 2.41x on average, while a speculation-oblivious policy yields a 1.91x speedup. SAM also reduces wasted work by 43%. Unlike at the core, we find little performance benefit from prioritizing requests at the memory controller.
The reason is that speculative execution works as a very effective prefetching mechanism, and most requests, even those from tasks that are ultimately aborted, do end up being useful.
by Weeraratna Patabendige Maleen Hasanka Abeydeera.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
36

Siapas, Athanassios G. "Criticality and parallelism in combinatorial optimization." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/11009.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1996.
Includes bibliographical references (p. 60-63).
by Athanassios G. Siapas.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
37

Perez, Ruben (Ruben M. ). "Speculative parallelism in Intel Cilk Plus." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/77032.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 37).
Certain algorithms can be effectively parallelized at the cost of performing some redundant work. One example is searching an unordered tree graph for a particular node. Each subtree can be searched in parallel by a separate thread. Once a single thread is successful, however, the work of the others is unneeded and should be ended. This type of computation is known as speculative parallelism. Typically, an abort command is provided in the programming language to provide this functionality, but some languages do not. This thesis shows how support for the abort command can be provided as a user-level library. A parallel version of the alpha-beta search algorithm demonstrates its effectiveness.
by Ruben Perez.
M.Eng.
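The pattern the abstract describes, ending sibling searches once one succeeds, can be illustrated in Python with a shared flag standing in for an abort construct (a simplified analog, not the Cilk Plus library the thesis presents):

```python
import threading

def parallel_search(chunks, target):
    """Search each chunk in its own thread; the first thread to find
    the target sets the event, and the others abandon their work."""
    found = threading.Event()
    hits = []

    def worker(chunk):
        for item in chunk:
            if found.is_set():      # a sibling succeeded: abort this search
                return
            if item == target:
                hits.append(item)
                found.set()         # tell siblings their work is unneeded

    threads = [threading.Thread(target=worker, args=(c,)) for c in chunks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return hits[0] if hits else None
```

The redundant work is bounded by how quickly losing threads observe the flag, which is exactly the trade-off speculative parallelism accepts.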
APA, Harvard, Vancouver, ISO, and other styles
38

McFarland, Daniel James. "Exploiting Malleable Parallelism on Multicore Systems." Thesis, Virginia Tech, 2011. http://hdl.handle.net/10919/33819.

Full text
Abstract:
As shared memory platforms continue to grow in core counts, the need for context-aware scheduling continues to grow. Context-aware scheduling takes into account characteristics of the system and application performance when making decisions. Traditionally applications run on a static thread count that is set at compile-time or at job start time, without taking into account the dynamic context of the system, other running applications, and potentially the performance of the application itself. However, many shared memory applications can easily be converted to malleable applications, that is, applications that can run with an arbitrary number of threads and can change thread counts during execution. Many new and intelligent scheduling decisions can be made when applications become context-aware, including expanding to fill an empty system or shrinking to accommodate a more parallelizable job. This thesis describes a prototype system called Resizing for Shared Memory (RSM), which supports OpenMP applications on shared memory platforms. RSM includes a main daemon that records performance information and makes resizing decisions as well as a communication library to allow applications to contact the RSM daemon and enact resizing decisions. Experimental results show that RSM can improve turn-around time and system utilization even using very simple heuristics and resizing policies.
Master of Science
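A malleable application, in the abstract's sense, can run a different thread count in each phase. A minimal sketch using Python's concurrent.futures, where the per-phase counts stand in for decisions a context-aware daemon such as RSM would make (not RSM's actual protocol):

```python
from concurrent.futures import ThreadPoolExecutor

def run_malleable(phases):
    """phases: list of (tasks, thread_count) pairs. Each phase runs its
    zero-argument tasks on a pool sized by the scheduler's decision."""
    results = []
    for tasks, threads in phases:
        with ThreadPoolExecutor(max_workers=threads) as pool:
            results.append(list(pool.map(lambda task: task(), tasks)))
    return results

if __name__ == "__main__":
    phases = [([lambda: 1, lambda: 2], 2),  # expand to fill an idle system
              ([lambda: 3], 1)]             # shrink for a contended one
    print(run_malleable(phases))
```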
APA, Harvard, Vancouver, ISO, and other styles
39

Rodriguez, Villamizar Gustavo Enrique. "A Graphical Representation of Exposed Parallelism." BYU ScholarsArchive, 2017. https://scholarsarchive.byu.edu/etd/6467.

Full text
Abstract:
Modern-day microprocessors are measured in part by their parallel performance. Parallelizing sequential programs is a complex task, requiring data dependence analysis of the program constructs. Researchers in the field of parallel optimization are working on shifting the optimization effort from the programmer to the compiler. The goal of this work is for the compiler to visually expose the parallel characteristics of the program to researchers as well as programmers for a better understanding of the parallel properties of their programs. In order to do that we developed Exposed Parallelism Visualization (EPV), a statically-generated graphical tool that builds a parallel task graph of source code after it has been converted to the LLVM compiler framework's Intermediate Representation (IR). The goal is for this visual representation of IR to provide new insights about the parallel properties of the program without having to execute the program. This will help researchers and programmers to understand if and where parallelism exists in the program at compile time. With this understanding, researchers will be able to more easily develop compiler algorithms that identify parallelism and improve program performance, and programmers will easily identify parallelizable sections of code that can be executed in multiple cores or accelerators such as GPUs or FPGAs. To the best of our knowledge, EPV is the first static visualization tool made for the identification of parallelism.
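The core data structure the abstract describes, a task graph whose edges are data dependences that forbid parallel execution, can be sketched in toy form (EPV itself operates on LLVM IR; the read/write-set encoding below is only illustrative, and anti-dependences are omitted for brevity):

```python
def build_task_graph(stmts):
    """stmts: list of (name, reads, writes) triples in program order.
    Add an edge i -> j when a later statement j reads or rewrites a
    location statement i writes, so i must finish before j starts;
    statements with no path between them may run in parallel."""
    edges = set()
    for i, (name_i, _, writes_i) in enumerate(stmts):
        for name_j, reads_j, writes_j in stmts[i + 1:]:
            if writes_i & (reads_j | writes_j):
                edges.add((name_i, name_j))
    return edges
```

For example, if `s1` writes `a`, `s2` reads `a`, and `s3` touches only `c`, the graph has a single edge from `s1` to `s2`, exposing `s3` as independently parallelizable.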
APA, Harvard, Vancouver, ISO, and other styles
40

Chrisman, Dan Alvin Jr. "Limits to parallelism in scientific computing." W&M ScholarWorks, 1999. https://scholarworks.wm.edu/etd/1539623947.

Full text
Abstract:
The goal of our research is to decrease the execution time of scientific computing applications. We exploit the application's inherent parallelism to achieve this goal. This exploitation is expensive, as we analyze sequential applications and port them to parallel computers. Many scientific computing problems appear to have considerable exploitable parallelism; however, upon implementing a parallel solution on a parallel computer, limits to the parallelism are encountered. Unfortunately, many of these limits are characteristic of a specific parallel computer. This thesis explores these limits. We study the feasibility of exploiting the inherent parallelism of four NASA scientific computing applications. We use simple models to predict each application's degree of parallelism at several levels of granularity. From this analysis, we conclude that it is infeasible to exploit the inherent parallelism of two of the four applications. The interprocessor communication of one application is too expensive relative to its computation cost. The input and output costs of the other application are too expensive relative to its computation cost. We exploit the parallelism of the remaining two applications and measure their performance on an Intel iPSC/2 parallel computer. We parallelize an Optimal Control Boundary Value Problem. This guidance control problem determines an optimal trajectory of a boat in a river. We parallelize the Carbon Dioxide Slicing technique, which is a macrophysical cloud property retrieval algorithm. This technique computes the height at the top of a cloud using cloud imager measurements. We consider the feasibility of exploiting its massive parallelism on a MasPar MP-2 parallel computer. We conclude that many limits to parallelism are surmountable while other limits are inescapable. From these limits, we elucidate some fundamental issues that must be considered when porting similar problems to yet-to-be-designed computers. We conclude that the technological improvements to reduce the isolation of computational units free a programmer from many of the programmer's current concerns about the granularity of the work. We also conclude that the technological improvements to relax the regimented guidance of the computational units allow a programmer to exploit the inherent heterogeneous parallelism of many applications.
APA, Harvard, Vancouver, ISO, and other styles
41

Shah, Bankim. "Exploiting and/or Parallelism in Prolog." PDXScholar, 1991. https://pdxscholar.library.pdx.edu/open_access_etds/4223.

Full text
Abstract:
Logic programming languages have generated increasing interest over the last few years. Logic programming languages like Prolog are being explored for different applications. Prolog is inherently parallel. Attempts are being made to utilize this inherent parallelism. There are two kinds of parallelism present in Prolog, OR parallelism and AND parallelism. OR parallelism is relatively easy to exploit while AND parallelism poses interesting issues. One of the main issues is dependencies between literals. It is very important to use the AND parallelism available in the language structure as not exploiting it would result in a substantial loss of parallelism. Any system trying to make use of either or both kinds of parallelism would need to have the capability of performing faster unification, as it affects the overall execution time greatly. A new architecture design is presented in this thesis that exploits both kinds of parallelism. The architecture efficiently implements some of the key concepts in Conery's approach to parallel execution [5]. The architecture has a memory hierarchy that uses associative memory. Associative memories are useful for faster lookup and response and hence their use results in quick response time. Along with the use of a memory hierarchy, execution algorithms and rules for ordering of literals are presented. The rules for ordering of literals are helpful in determining the order of execution. The analysis of response time is done for different configurations of the architecture, from sequential execution with one processor to multiple processing units having multiple processors. A benchmark program, "query," is used for obtaining results, and the map coloring problem is also solved on different configurations and results are compared. To obtain results the goals and subgoals are assigned to different processors by creating a tree. These assignments and transferring of goals are simulated by hand. 
The total time includes the time needed for moving goals back and forth from one processor to another. The total time is calculated in number of cycles with some assumptions about memory response time, communication time, number of messages that can be sent on the bus at a particular instant, etc. The results obtained show that the architecture efficiently exploits the AND parallelism and OR parallelism available in Prolog. The total time needed for different configurations is then compared and conclusions are drawn.
APA, Harvard, Vancouver, ISO, and other styles
42

Mourid, Btisam. "Filtre à décimation parallelisé /." Montréal : École de technologie supérieure, 2003. http://wwwlib.umi.com/cr/etsmtl/fullcit?pMQ85306.

Full text
Abstract:
Thesis (M.Ing.)--École de technologie supérieure, Montréal, 2003.
"Dissertation presented to the École de technologie supérieure in partial fulfilment of the requirements for the master's degree in electrical engineering". Bibliography: leaves [85]-86. Also available in electronic form.
APA, Harvard, Vancouver, ISO, and other styles
43

Mourid, Btisam. "Filtre à décimation parallelisé." Mémoire, École de technologie supérieure, 2003. http://espace.etsmtl.ca/761/1/MOURID_Btisam.pdf.

Full text
Abstract:
Many radio applications increasingly use high sampling frequencies that make it possible to digitize RF signals directly. Unfortunately, the speed of currently available digital signal processing units is limited and does not allow high clock frequencies to be reached. The main objective of this thesis is precisely to study and develop a parallelized decimation-filter architecture capable of operating at a high sampling frequency on the order of 1 GHz. This architecture must be programmable with available technologies such as FPGAs, and must also be efficient with acceptable complexity. Two techniques were studied. The first uses IIR digital filters. The second uses Cascaded Integrator-Comb (CIC) filters, for which several structures were analysed and evaluated.
APA, Harvard, Vancouver, ISO, and other styles
44

Rozoy, Brigitte. "Un modele de parallelisme : le monoide distribue." Caen, 1987. http://www.theses.fr/1987CAEN2039.

Full text
Abstract:
Establishing equivalences between the mathematical models of asynchronous parallelism leads us to create a new model, called the distributed monoid, which subsumes the various existing models. The problems studied are recognizability in this monoid and distributed termination in asynchronous distributed networks.
APA, Harvard, Vancouver, ISO, and other styles
45

Vivien, Frédéric. "Detection de parallelisme dans les boucles imbriquees." Lyon, École normale supérieure (sciences), 1997. http://www.theses.fr/1997ENSL0077.

Full text
Abstract:
Many computer users would like to be able to run their sequential programs (written for execution on conventional computers) on parallel computers. It has thus become necessary to know how to parallelize sequential programs into programs executable on parallel machines. This parallelization must be automatic, since it is most often aimed at ordinary users. Before initiating any transformation of the original program, the parallelism it implicitly contains must be detected and quantified, which requires knowledge of the dependences between the different computations. Later, the computations must be reordered to make the discovered parallelism explicit. This thesis deals with the automatic detection of implicit parallelism and the search for schedules that make it explicit, for particular program structures: nests of loops. Our work mainly aimed at understanding the existing parallelism-detection techniques, their strengths and their limitations. On the one hand, we studied the main algorithms predating this work. On the other hand, starting from the theoretical model provided by systems of uniform recurrence equations, we proposed an optimal parallelization algorithm for polyhedral reduced dependence graphs, an approximate representation of dependences that generalizes the two classical modes of approximation. We compared this new algorithm with the classical algorithms and obtained a classification of the main algorithms. Since the detection and expression of parallelism is only one of the many components of automatic parallelization, we also studied the interactions between the mapping and scheduling problems, and between the problems of parallelism detection and the elimination of false dependences.
APA, Harvard, Vancouver, ISO, and other styles
46

Santos, Luiz Cláudio Villar dos. "Exploiting instruction-level parallelism a constructive approach /." Eindhoven : Technische Universiteit Eindhoven, 1998. http://catalog.hathitrust.org/api/volumes/oclc/40847445.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Hara, Yuko, Hiroyuki Tomiyama, Shinya Honda, Hiroaki Takada, and Katsuya Ishii. "Behavioral Partitioning with Exploiting Function-Level Parallelism." IEEE, 2008. http://hdl.handle.net/2237/12102.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Eriksson, Mikael. "Linné : an object oriented language with parallelism." Licentiate thesis, Luleå tekniska universitet, Datavetenskap, 1990. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-26317.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Shamaileh, Sana Fadi Adel. "The translation of parallelism in political speeches." Thesis, University of Salford, 2011. http://usir.salford.ac.uk/26907/.

Full text
Abstract:
The Translation of Parallelism in Political Speeches. The core focus of this research centres on the rhetorical device of parallelism, which is frequently used in Arabic, particularly in the context of political discourse. The aim of this study is to investigate the way parallelism is dealt with when translated from Arabic into English in terms of its function, patterns and frequency, and whether it manifests an impact on ST recipients. The research draws on Critical Discourse Analysis (CDA) in the investigation of the context of the research, which centres on political discourse and argumentative text typology. CDA engages with ideology (belonging to a group or a country, the need for safety and food, the fear of invasion and outsiders), which in turn is involved in political speeches. Furthermore, CDA highlights significant issues such as power and legitimacy, which are at the core of political discourse. The researcher has opted for two tools to approach the rhetorical device at hand: the first is contrastive stylistics, where parallelism is investigated in both Arabic and English political speeches as a stylistic device; the second is translation studies, applied in the analytical part of this study, which investigates parallelism in original speeches in Arabic and their English translations. The findings of the study show that parallelism is an effective rhetorical device which occurs with high frequency in Arabic political speeches. The results also indicate that English uses parallelism, but less frequently than Arabic, and relies more on other rhetorical features such as listing three elements, using contrasting elements and manipulating the use of pronouns, among others. The study concludes that parallelism plays a significant role in Arabic political speeches and creates a greater impact on recipients.
Despite the fact that parallelism occurs less frequently in English political speeches, it has been noticed that it is highly used in the English translations, in contrast to what was hypothesised; this may be due to the nature of the corpus, which is delivered by a monarch and reflects legitimacy, power and highly controlled language.
APA, Harvard, Vancouver, ISO, and other styles
50

Krishnamoorthy, Sriram. "Optimizing locality and parallelism through program reorganization." Columbus, Ohio : Ohio State University, 2008. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1197913392.

Full text
APA, Harvard, Vancouver, ISO, and other styles