Academic literature on the topic 'Implicit parallelism'


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Implicit parallelism.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Implicit parallelism"

1

Calderón Trilla, José Manuel, and Colin Runciman. "Improving implicit parallelism." ACM SIGPLAN Notices 50, no. 12 (2016): 153–64. http://dx.doi.org/10.1145/2887747.2804308.

2

Harris, Tim, and Satnam Singh. "Feedback directed implicit parallelism." ACM SIGPLAN Notices 42, no. 9 (2007): 251–64. http://dx.doi.org/10.1145/1291220.1291192.

3

Vose, Michael D., and Alden H. Wright. "Form Invariance and Implicit Parallelism." Evolutionary Computation 9, no. 3 (2001): 355–70. http://dx.doi.org/10.1162/106365601750406037.

Abstract:
Holland's schema theorem (an inequality) may be viewed as an attempt to understand genetic search in terms of a coarse graining of the state space. Stephens and Waelbroeck developed that perspective, sharpening the schema theorem to an equality. Of particular interest is a “form invariance” of their equations; the form is unchanged by the degree of coarse graining. This paper establishes a similar form invariance for the more general model of Vose et al. and uses the attendant machinery as a springboard for an interpretation and discussion of implicit parallelism.
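As a generic illustration of the counting argument behind the term (not drawn from this paper): in a genetic algorithm over length-l bitstrings, a single string is an instance of 2^l schemata, so evaluating one individual implicitly samples every schema it matches. A minimal sketch:

```python
from itertools import product

def matched_schemata(s: str):
    """All schemata (strings over {0,1,*}) that the bitstring s is an instance of."""
    # At each position a schema either fixes the bit of s or leaves it free ('*').
    return ["".join(choice) for choice in product(*[(b, "*") for b in s])]

schemata = matched_schemata("101")
print(len(schemata))          # a string of length l matches 2**l schemata, here 8
print("1*1" in schemata)      # True: '101' is an instance of the schema 1*1
```

This is the sense in which one fitness evaluation carries information about exponentially many coarse-grained regions of the search space at once.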
4

Bertoni, Alberto, and Marco Dorigo. "Implicit parallelism in genetic algorithms." Artificial Intelligence 61, no. 2 (1993): 307–14. http://dx.doi.org/10.1016/0004-3702(93)90071-i.

5

Vose, Michael D., and Alden H. Wright. "Erratum: Form Invariance and Implicit Parallelism." Evolutionary Computation 9, no. 4 (2001): 525. http://dx.doi.org/10.1162/10636560152642896.

6

Ovatman, Tolga, Thomas Weigert, and Feza Buzluca. "Exploring implicit parallelism in class diagrams." Journal of Systems and Software 84, no. 5 (2011): 821–34. http://dx.doi.org/10.1016/j.jss.2011.01.005.

7

Alexandrov, Alexander, Asterios Katsifodimos, Georgi Krastev, and Volker Markl. "Implicit Parallelism through Deep Language Embedding." ACM SIGMOD Record 45, no. 1 (2016): 51–58. http://dx.doi.org/10.1145/2949741.2949754.

8

Bik, Aart J. C., and Dennis B. Gannon. "Automatically exploiting implicit parallelism in Java." Concurrency: Practice and Experience 9, no. 6 (1997): 579–619. http://dx.doi.org/10.1002/(sici)1096-9128(199706)9:6<579::aid-cpe309>3.0.co;2-g.

9

Bic, L., and M. Almouhamed. "The EM-4 under Implicit Parallelism." Journal of Parallel and Distributed Computing 19, no. 3 (1993): 255–61. http://dx.doi.org/10.1006/jpdc.1993.1109.

10

Senghor, Abdourahmane. "Barracuda, an Open Source Framework for Parallelizing Divide and Conquer Algorithm." International Journal on Cybernetics & Informatics 12, no. 2 (2023): 63–75. http://dx.doi.org/10.5121/ijci.2023.120206.

Abstract:
This paper presents Barracuda, a newly created open-source framework that aims to parallelize Java divide-and-conquer applications. The framework exploits implicit for-loop parallelism in the dividing and merging operations, making it a mixture of parallel for-loop and task parallelism. It targets shared-memory multiprocessors and hybrid distributed shared-memory architectures. We highlight the effectiveness of the framework and focus on the performance gain and programming effort obtained by using it. Barracuda is aimed at the general public as well as at various application domains. In terms of performance, it comes very close to the Fork/Join framework while allowing end users to focus only on refactoring their code, and experts to improve the framework itself.
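Barracuda's own API is not shown in the abstract; as a hypothetical illustration of the fork/join-style divide-and-conquer pattern such frameworks parallelize, here is a merge sort using only Python's standard library (the names and structure are ours, not the framework's):

```python
from concurrent.futures import ThreadPoolExecutor

def merge(left, right):
    # Sequential merge phase of the divide-and-conquer.
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

def parallel_mergesort(xs, pool, depth=2):
    # "Fork" the two halves as tasks while depth allows, then "join" on their results;
    # below the depth cutoff, fall back to plain sequential recursion.
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    if depth > 0:
        f1 = pool.submit(parallel_mergesort, xs[:mid], pool, depth - 1)
        f2 = pool.submit(parallel_mergesort, xs[mid:], pool, depth - 1)
        return merge(f1.result(), f2.result())
    return merge(parallel_mergesort(xs[:mid], pool, 0),
                 parallel_mergesort(xs[mid:], pool, 0))

with ThreadPoolExecutor(max_workers=4) as pool:
    print(parallel_mergesort([5, 3, 8, 1, 9, 2], pool))  # [1, 2, 3, 5, 8, 9]
```

The depth cutoff mirrors the usual fork/join trade-off: forking every recursive call would drown the useful work in task-creation overhead.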

Dissertations / Theses on the topic "Implicit parallelism"

1

Calderón Trilla, José Manuel. "Improving implicit parallelism." Thesis, University of York, 2015. http://etheses.whiterose.ac.uk/13147/.

Abstract:
We propose a new technique for exploiting the inherent parallelism in lazy functional programs. Known as implicit parallelism, the goal of writing a sequential program and having the compiler improve its performance by determining what can be executed in parallel has been studied for many years. Our technique abandons the idea that a compiler should accomplish this feat in ‘one shot’ with static analysis and instead allows the compiler to improve upon the static analysis using iterative feedback. We demonstrate that iterative feedback can be relatively simple when the source language is a lazy purely functional programming language. We present three main contributions to the field: the automatic derivation of parallel strategies from a demand on a structure, and two new methods of feedback-directed auto-parallelisation. The first method treats the runtime of the program as a black box and uses the ‘wall-clock’ time as a fitness function to guide a heuristic search on bitstrings representing the parallel setting of the program. The second feedback approach is profile directed. This allows the compiler to use profile data that is gathered by the runtime system as the program executes, and hence to determine which threads are not worth the overhead of creating them. Our results show that the use of feedback-directed compilation can be a good source of refinement for the static analysis techniques that struggle to account for the cost of a computation. This lifts the burden of ‘is this parallelism worthwhile?’ away from the static phase of compilation and to the runtime, which is better equipped to answer the question.
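The first, black-box feedback method can be caricatured in a few lines of Python: each bit of the search string switches one candidate parallelisation site on or off, and a hill-climbing search keeps any single-bit flip that does not slow the measured runtime. The cost model below is an invented stand-in for real wall-clock measurement, so only some sites are worth their thread-creation overhead:

```python
import random

# Toy stand-in for 'wall-clock time': each enabled par site saves some work
# but pays a fixed thread-creation overhead. These numbers are hypothetical.
BENEFIT = [9, 1, 7, 0, 5]   # per-site gain from parallelising that site
OVERHEAD = 3                # cost of sparking a thread at any site

def runtime(setting):
    # Lower is better; plays the role of the measured program runtime.
    return 100 - sum(b - OVERHEAD for b, on in zip(BENEFIT, setting) if on)

def hill_climb(n_sites, iters=200, seed=0):
    rng = random.Random(seed)
    best = [rng.random() < 0.5 for _ in range(n_sites)]
    for _ in range(iters):
        cand = best[:]
        i = rng.randrange(n_sites)
        cand[i] = not cand[i]              # flip one par site on/off
        if runtime(cand) <= runtime(best):
            best = cand                    # keep the flip if it was no slower
    return best

best = hill_climb(len(BENEFIT))
print(best)  # only sites whose benefit exceeds the overhead end up enabled
```

The point of the thesis's approach survives even in this caricature: the search never needs a cost model of the program, only the ability to run it and time it.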
2

Ioannou, Nikolas. "Complementing user-level coarse-grain parallelism with implicit speculative parallelism." Thesis, University of Edinburgh, 2012. http://hdl.handle.net/1842/7900.

Abstract:
Multi-core and many-core systems are the norm in contemporary processor technology and are expected to remain so for the foreseeable future. Parallel programming is, thus, here to stay and programmers have to endorse it if they are to exploit such systems for their applications. Programs using parallel programming primitives like PThreads or OpenMP often exploit coarse-grain parallelism, because it offers a good trade-off between programming effort versus performance gain. Some parallel applications show limited or no scaling beyond a number of cores. Given the abundant number of cores expected in future many-cores, several cores would remain idle in such cases while execution performance stagnates. This thesis proposes using cores that do not contribute to performance improvement for running implicit fine-grain speculative threads. In particular, we present a many-core architecture and protocols that allow applications with coarse-grain explicit parallelism to further exploit implicit speculative parallelism within each thread. We show that complementing parallel programs with implicit speculative mechanisms offers significant performance improvements for a large and diverse set of parallel benchmarks. Implicit speculative parallelism frees the programmer from the additional effort to explicitly partition the work into finer and properly synchronized tasks. Our results show that, for a many-core comprising 128 cores supporting implicit speculative parallelism in clusters of 2 or 4 cores, performance improves on top of the highest scalability point by 44% on average for the 4-core cluster and by 31% on average for the 2-core cluster. We also show that this approach often leads to better performance and energy efficiency compared to existing alternatives such as Core Fusion and Turbo Boost. Moreover, we present a dynamic mechanism to choose the number of explicit and implicit threads, which performs within 6% of the static oracle selection of threads. 
To improve energy efficiency, processors allow for Dynamic Voltage and Frequency Scaling (DVFS), which enables changing their performance and power consumption on-the-fly. We evaluate the amenability of the proposed explicit plus implicit threads scheme to traditional power management techniques for multithreaded applications and identify room for improvement. We thus augment prior schemes and introduce a novel multithreaded power management scheme that accounts for implicit threads and aims to minimize the Energy-Delay² product (ED²). Our scheme comprises two components: a “local” component that tries to adapt to the different program phases on a per explicit thread basis, taking into account implicit thread behavior, and a “global” component that augments the local components with information regarding inter-thread synchronization. Experimental results show a reduction of ED² of 8% compared to having no power management, with an average reduction in power of 15% that comes at a minimal loss of performance of less than 3% on average.
3

Austin, Todd Michael. "Exploiting implicit parallelism in SPARC instruction execution /." Online version of thesis, 1990. http://hdl.handle.net/1850/11007.

4

Rann, David. "The effective use of implicit parallelism through the use of an object-oriented programming language." Thesis, Aston University, 1996. http://publications.aston.ac.uk/10596/.

Abstract:
Model of Concurrency. The concurrency model is designed to be a straightforward target for mapping sequential programs onto, thus making them parallel. It aids the compilation process by providing a high level of abstraction, including a useful model of parallel behaviour which enables easy incorporation of message interchange, locking, and synchronization of objects. Further, the model is sufficient such that a compiler can and has been practically built. Model of Compilation. The compilation-model's structure is based upon an object-oriented view of grammar descriptions and capitalises on both a recursive-descent style of processing and abstract syntax trees to perform the parsing. A composite-object view with an attribute grammar style of processing is used to extract sufficient semantic information for the parallelisation (i.e. code-generation) phase. Programming Principles. The set of principles presented are based upon information hiding, sharing and containment of objects and the dividing up of methods on the basis of a command/query division. When followed, the level of potential parallelism within the presented concurrency model is maximised. Further, these principles naturally arise from good programming practice. Summary. In summary this thesis shows that it is possible to compile well-written programs, written in a subset of Eiffel, into parallel programs without any syntactic additions or semantic alterations to Eiffel: i.e. no parallel primitives are added, and the parallel program is modelled to execute with equivalent semantics to the sequential version. If the programming principles are followed, a parallelised program achieves the maximum level of potential parallelisation within the concurrency model.
5

Sirvent, Pardell Raül. "GRID superscalar: a programming model for the Grid." Doctoral thesis, Universitat Politècnica de Catalunya, 2009. http://hdl.handle.net/10803/6015.

Abstract:
During recent years, the Grid has emerged as a new platform for distributed computing. Grid technology makes it possible to join resources from different administrative domains into a virtual supercomputer. Many research groups have dedicated their efforts to developing a set of basic services to offer a Grid middleware: a layer that enables the use of the Grid. However, using these services is not an easy task for many end users, all the more so if their expertise is not related to computer science. This has a negative influence on the adoption of Grid technology by the scientific community, which sees it as a powerful technology but one that is very difficult to exploit. To ease the use of the Grid, an extra layer is needed that hides its complexity and allows users to program or port their applications easily.
There have been many proposals of programming tools for the Grid. In this thesis we give an overview of some of them, observing that both Grid-aware and Grid-unaware environments exist (programmed with or without specifying details of the Grid, respectively). Moreover, very few existing tools can exploit the implicit parallelism of an application; in the majority of them, the user must define the parallelism explicitly. Another important feature we consider is whether a tool is based on widely used programming languages (such as C++ or Java), which makes adoption easier for end users.
In this thesis, our main objective has been to create a programming model for the Grid based on sequential programming and well-known imperative programming languages, able to exploit the implicit parallelism of applications and to speed them up by using Grid resources concurrently. Moreover, because the Grid is distributed, heterogeneous, and dynamic in nature, and because the number of resources that form a Grid can be very large, the probability that an error arises during an application's execution is high. Thus, another of our objectives has been to deal automatically with any type of error that may arise during execution, whether related to the application or to the Grid.
GRID superscalar (GRIDSs), the main contribution of this thesis, is a programming model that achieves these objectives by providing a very small and simple interface together with a runtime able to execute the provided code in parallel on the Grid. Our programming interface allows a user to write a Grid-unaware application sequentially, in well-known and popular imperative languages (such as C/C++, Java, Perl, or shell script), thereby taking an important step towards helping end users adopt Grid technology.
We have applied our knowledge of computer architecture and microprocessor design to the GRIDSs runtime. As in a superscalar processor, the GRIDSs runtime performs a data-dependence analysis between the tasks that form an application and applies renaming techniques to increase its parallelism. GRIDSs automatically generates, from the user's main code, a graph describing the data dependencies in the application. We also present real use cases of the programming model in the fields of computational chemistry and bioinformatics, which demonstrate that our objectives have been achieved.
Finally, we have studied the application of several fault detection and treatment techniques: checkpointing, task retry, and task replication. Our proposal is to provide an environment able to deal with all types of failures, transparently for the user whenever possible. The main advantage of implementing these mechanisms at the programming-model level is that application-level knowledge can be exploited to dynamically create a fault-tolerance strategy for each application, avoiding overhead in error-free environments.
6

Coullon, Hélène. "Modélisation et implémentation de parallélisme implicite pour les simulations scientifiques basées sur des maillages." Thesis, Orléans, 2014. http://www.theses.fr/2014ORLE2029/document.

Abstract:
Parallel scientific computing is an expanding domain of computer science that both increases the speed of long computations and offers a way to handle larger or more accurate problems. It thus pushes scientific computation further, yielding more relevant results because they are more precise, and allowing larger problems to be studied than before. In the particular field of scientific numerical simulation, solving partial differential equations (PDEs) is an especially heavy computation and a perfect candidate for parallelization. While the hardware resources for parallel computing are increasingly present and available to scientists, their use and parallel programming in general remain hard to democratize, and most scientists are unable to exploit these machines. As a result, high-level parallel programming models, development tools, and even parallel programming languages have been proposed to simplify their use. However, in this field of so-called "implicit parallelism", it is difficult to find the ideal level of abstraction for scientists while also reducing the programming effort.
This thesis first proposes a model for building implicit-parallelism solutions for numerical simulations and PDE solving. The model is called "Structured Implicit Parallelism for scientific SIMulations" (SIPSim) and offers a view at the crossroads of several kinds of abstraction, attempting to preserve the advantages of each. A first implementation of the model, a C++ library called SkelGIS, is proposed for two-dimensional Cartesian meshes. SkelGIS, and thus the implementation of the model, is then extended to numerical simulations on networks (enabling simulations that couple several physical phenomena). The performance of both implementations is evaluated and analysed on real, complex application cases, demonstrating that good performance can be obtained by implementing the SIPSim model.
7

Borel, Christian. "Simulation d'ecoulements supersoniques tridimensionnels instationnaires de fluide parfait par une methode implicite multidomaine parallelisee." Paris 6, 1992. http://www.theses.fr/1992PA066648.

Abstract:
The purpose of this thesis is the study, implementation, and deployment of a numerical simulation method for unsteady flows of a compressible inviscid fluid around three-dimensional vehicles in arbitrary motion. Taking as a starting point the Aerolog computation code, developed by Matra Défense for solving steady-state problems around complex geometries with an explicit multi-domain method, the extension of the numerical method to the unsteady case was studied and implemented: modelling in a relative frame of reference, second-order discretization of the source terms arising from the frame motion, and the addition to the scheme of an implicit residual-smoothing phase that substantially reduces computation costs while preserving the scheme's order of accuracy. A simplified Schwarz-type matching method was validated and used to solve the implicit multi-domain phase. With the aim of making computation costs compatible with industrial use, particular attention was paid to optimizing the code for vector and parallel machines (a macro-tasking approach based on the multi-domain structure of the solution algorithm). The implicit unsteady method was validated on numerous two- and three-dimensional test cases, and an industrial-scale computation (a finned missile in autorotation) confirmed its value for analysing unsteady phenomena, as well as the actual performance of the code on vector and parallel machines.
8

Do, Duc Minh Quan. "Scalable factorization model to discover implicit and explicit similarities across domains." Thesis, 2018. http://hdl.handle.net/10453/133197.

Abstract:
University of Technology Sydney. Faculty of Engineering and Information Technology.
E-commerce businesses increasingly depend on recommendation systems to introduce personalized services and products to their target customers. Achieving accurate recommendations requires a sufficient understanding of user preferences and item characteristics. Given the current innovations on the Web, coupled datasets are abundantly available across domains. An analysis of these datasets can provide broader knowledge to understand the underlying relationship between users and items. This thorough understanding results in more collaborative filtering power and leads to higher recommendation accuracy. However, how to use this knowledge effectively for recommendation is still a challenging problem. In this research, we propose to exploit both explicit and implicit similarities extracted from latent factors across domains with matrix tri-factorization. On the coupled dimensions, the common parts of the coupled factors are shared across domains, while their domain-specific parts are preserved. We show that such a configuration of both common and domain-specific parts benefits cross-domain recommendations significantly. Moreover, on the non-coupled dimensions, we propose using the middle factor of the tri-factorization to match closely related clusters across datasets and align the matched ones to transfer cross-domain implicit similarities, further improving the recommendation. Furthermore, when dealing with data coupled from different sources, the scalability of the analytical method is another significant concern. We design a distributed factorization model that can scale up as the observed data across domains increases. Our data parallelism, based on Apache Spark, enables the model to have the smallest communication cost. Also, the model is equipped with an optimized solver that converges faster.
We demonstrate that these key features stabilize our model’s performance when the data grows. Validated on real-world datasets, our developed model outperforms the existing algorithms regarding recommendation accuracy and scalability. These empirical results illustrate the potential of our research in exploiting both explicit and implicit similarities across domains for improving recommendation performance.
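The factor layout this abstract describes can be sketched structurally in plain Python with synthetic data. The sketch below illustrates only the shared versus domain-specific split of the coupled (user) factor in a tri-factorization X ≈ U S V, not the actual fitting algorithm or the Spark implementation; all names and sizes are invented:

```python
import random

def matmul(A, B):
    # Plain-Python matrix product (rows of A times columns of B).
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def rand_matrix(rows, cols, rng):
    return [[rng.random() for _ in range(cols)] for _ in range(rows)]

rng = random.Random(0)
n_users, k_common, k_spec = 6, 2, 1

# Shared user traits, plus one domain-specific column per domain (say, books vs. movies).
U_common = rand_matrix(n_users, k_common, rng)
U_books = [row + spec for row, spec in zip(U_common, rand_matrix(n_users, k_spec, rng))]
U_movies = [row + spec for row, spec in zip(U_common, rand_matrix(n_users, k_spec, rng))]

# Middle factor S matches user clusters to item clusters; V maps clusters to items.
S_books, V_books = rand_matrix(k_common + k_spec, 4, rng), rand_matrix(4, 8, rng)
S_movies, V_movies = rand_matrix(k_common + k_spec, 4, rng), rand_matrix(4, 5, rng)

X_books = matmul(matmul(U_books, S_books), V_books)      # 6x8 predicted ratings, domain 1
X_movies = matmul(matmul(U_movies, S_movies), V_movies)  # 6x5 predicted ratings, domain 2

print(len(X_books), len(X_books[0]))  # 6 8
# The coupled user factors agree exactly on their common (shared) columns:
print(all(rb[:k_common] == rm[:k_common] for rb, rm in zip(U_books, U_movies)))  # True
```

The shared columns are what carry explicit similarities between domains, while the middle factors S provide the handle for aligning clusters on the non-coupled dimensions.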

Books on the topic "Implicit parallelism"

1

Rawlings, Hunter. Writing History Implicitly through Refined Structuring. Edited by Sara Forsdyke, Edith Foster, and Ryan Balot. Oxford University Press, 2017. http://dx.doi.org/10.1093/oxfordhb/9780199340385.013.4.

Abstract:
Thucydides self-consciously composed his history for an elite audience of reader-listeners who would pay close attention to his work, not simply hear parts of it recited once. On the surface, he organized it tightly according to rigidly heeded principles: strict focus on war and political decision-making; rigorous ordering of time and space by consecutive summers and winters, and by theaters of action. Behind these overt structures Thucydides imposed a number of implicit designs, which lead perceptive readers to see and appreciate recurring patterns in history, particularly in political leaders’ decision making and in the morale of their cities. Structural parallelisms, juxtapositions, and the ordering of the accounts, for instance, are important Thucydidean means of making readers engage with his history and with their own; verbal linkages also provoke readers to note the ironies, paradoxes, and incongruities of the events.
2

Clarke, Katherine. Depth and Resonance. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198820437.003.0004.

Abstract:
In this chapter Herodotus’ world is explored as a resonant landscape in three main ways. First, through the human emotions of admiration and wonder generated by Herodotus both in his authorial voice and through characters in the narrative in response to both natural and man-made marvels. Here the multiple focalizations bring complexity through their range of responses. Secondly, depth is brought by the dimension of time, as mythological associations of the landscape are revealed, particularly by the progress of the Persian army through locations famous from myth and epic. Finally, additional resonance is brought by Herodotus’ implicit or explicit drawing of geographical parallels, in which different parts of his world reflect their associations on each other.
3

Geva, Sharon. Inner Speech and Mental Imagery. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198796640.003.0005.

Abstract:
Inner speech has been investigated using neuroscientific techniques since the beginning of the twentieth century. One of the most important findings is that inner and overt speech differ in many respects, not only in the absence or presence of articulatory movements. In addition, studies implicate the involvement of various brain regions in the production and processing of inner speech, including areas involved in phonology and semantics, as well as auditory and motor processing. By looking at parallels between inner speech and other domains of imagery, studies explore two major questions: Are there common types of representations that underlie all types of mental imagery? And is there a neural substrate for imagery, above and beyond modality? While these questions cannot yet be fully answered, studies of the neuroscience of imagery are bringing us a step closer to a better understanding of inner speech.
APA, Harvard, Vancouver, ISO, and other styles
4

Ashcroft, E. A., A. A. Faustini, R. Jaggannathan, and W. W. Wadge. Multidimensional Programming. Oxford University Press, 1995. http://dx.doi.org/10.1093/oso/9780195075977.001.0001.

Full text
Abstract:
This book describes Lucid, a powerful language for multidimensional declarative programming. Lucid has evolved considerably in the past ten years. The main catalyst for this metamorphosis was the discovery that Lucid is based on intensional logic, a logic commonly used in studying natural languages. Intensionality, and more specifically indexicality, has enabled Lucid to implicitly express multidimensional objects that change, a fundamental capability with several consequences which are explored in this book. The authors cover a broad range of topics, from foundations to applications, and from implementations to implications. The role of intensional logic in Lucid, as well as its consequences for programming in general, is discussed. The syntax and mathematical semantics of the language are given, and its ability to serve as a formal system for transformation and verification is presented. The use of Lucid in both multidimensional applications programming and software systems construction (such as a parallel programming system and a visual programming system) is described. A novel model of multidimensional computation, eduction, is described along with its serendipitous practical benefits for harnessing parallelism and tolerating faults. As the only volume that reflects the advances of the past decade, this work will be of great interest to researchers and advanced students involved with declarative language systems and programming.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Implicit parallelism"

1

Wright, Alden H., Michael D. Vose, and Jonathan E. Rowe. "Implicit Parallelism." In Genetic and Evolutionary Computation — GECCO 2003. Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/3-540-45110-2_22.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Sargeant, John. "Implicit parallelism: The united functions and objects approach." In Lecture Notes in Computer Science. Springer Berlin Heidelberg, 1993. http://dx.doi.org/10.1007/3-540-56891-3_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Sargeant, John, Chris Kirkham, and Ian Watson. "Exploiting Implicit Parallelism in Functional Programs with SLAM." In Implementation of Functional Languages. Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-45361-x_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Kim, Seon Wook, and Rudolf Eigenmann. "The Structure of a Compiler for Explicit and Implicit Parallelism." In Languages and Compilers for Parallel Computing. Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/3-540-35767-x_22.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Nikhil, Rishiyur S. "New Languages with Implicit Parallelism and Heap Storage are Needed for Parallel Programming." In Opportunities and Constraints of Parallel Computing. Springer US, 1989. http://dx.doi.org/10.1007/978-1-4613-9668-0_25.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Lin, Yi. "A User-Interaction Parallel Networks Structure for Cold-Start Recommendation." In Proceeding of 2021 International Conference on Wireless Communications, Networking and Applications. Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-2456-9_63.

Full text
Abstract:
The goal of a recommendation system is to recommend products to users who may like them. The collaborative filtering algorithms commonly used in recommendation systems need explicit or implicit feedback data, but new users leave no behavioral data on products, which leads to the cold-start problem. This paper proposes a parallel network structure based on user interaction, which extracts features from user-interaction information, social-media information, and comment information and forms a matrix. A graph neural network is introduced to extract high-level embedded correlation features, and the role of the parallelism is to further reduce computing cost. Experiments on standard data sets show that this method achieves a degree of improvement in the NDCG and HR metrics compared to the baseline.
APA, Harvard, Vancouver, ISO, and other styles
7

Sánchez Nieto, M. Teresa. "Chapter 4. “Ich bekomme es erklärt”." In Studies in Corpus Linguistics. John Benjamins Publishing Company, 2023. http://dx.doi.org/10.1075/scl.113.04san.

Full text
Abstract:
Spanish lacks a construction that parallels the German dative passive, which presents the process from the recipient perspective while often leaving the agent implicit. The main aim of this chapter is to elucidate to what extent the recipient perspective is maintained, and which voice resources (genus verbi) and translation techniques are involved in the translation of German passive constructions between German and Spanish. The evidence for the study is taken from the Parallel Corpus of German and Spanish (PaGeS). When translating sentences which include the bekommen/kriegen variants of the dative passive, translators into Spanish do not maintain the recipient perspective but opt mainly for the agent perspective in about 40% of the examples under scrutiny. In these cases, simplification comes into play as a translation technique.
APA, Harvard, Vancouver, ISO, and other styles
8

Grefenstette, John J. "Conditions for Implicit Parallelism." In Foundations of Genetic Algorithms. Elsevier, 1991. http://dx.doi.org/10.1016/b978-0-08-050684-5.50019-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

"Implicit Parallelism and Representations." In Evolutionary Computation. IEEE, 2009. http://dx.doi.org/10.1109/9780470544600.ch20.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

"Tradeoffs explicit and implicit parallelism." In Logic Programming. The MIT Press, 1995. http://dx.doi.org/10.7551/mitpress/4301.003.0062.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Implicit parallelism"

1

D, Sudharson, Diwakaran M, Senthil Kumar V, Sukirtha S, Afsheen Zaahrah A, and Karthick A. "Parallelism in Cloud Architecture using Implicit Partition Clustering Framework." In 2024 4th International Conference on Soft Computing for Security Applications (ICSCSA). IEEE, 2024. https://doi.org/10.1109/icscsa64454.2024.00050.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Trilla, José Manuel Calderón, and Colin Runciman. "Improving implicit parallelism." In ICFP'15: 20th ACM SIGPLAN International Conference on Functional Programming. ACM, 2015. http://dx.doi.org/10.1145/2804302.2804308.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Harris, Tim, and Satnam Singh. "Feedback directed implicit parallelism." In the 2007 ACM SIGPLAN international conference. ACM Press, 2007. http://dx.doi.org/10.1145/1291151.1291192.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

von Praun, Christoph, Luis Ceze, and Calin Caşcaval. "Implicit parallelism with ordered transactions." In the 12th ACM SIGPLAN symposium. ACM Press, 2007. http://dx.doi.org/10.1145/1229428.1229443.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Ioannou, Nikolas, and Marcelo Cintra. "Complementing user-level coarse-grain parallelism with implicit speculative parallelism." In the 44th Annual IEEE/ACM International Symposium. ACM Press, 2011. http://dx.doi.org/10.1145/2155620.2155654.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Bic, Lubomir, and Mayez Al-Mouhamed. "The EM-4 under implicit parallelism." In the 7th international conference. ACM Press, 1993. http://dx.doi.org/10.1145/165939.165946.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Alexandrov, Alexander, Andreas Kunft, Asterios Katsifodimos, et al. "Implicit Parallelism through Deep Language Embedding." In SIGMOD/PODS'15: International Conference on Management of Data. ACM, 2015. http://dx.doi.org/10.1145/2723372.2750543.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Khasanov, Robert, Andrés Goens, and Jeronimo Castrillon. "Implicit Data-Parallelism in Kahn Process Networks." In the 9th Workshop and 7th Workshop. ACM Press, 2018. http://dx.doi.org/10.1145/3183767.3183790.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Bauer, Michael, Wonchan Lee, Elliott Slaughter, et al. "Scaling implicit parallelism via dynamic control replication." In PPoPP '21: 26th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming. ACM, 2021. http://dx.doi.org/10.1145/3437801.3441587.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Imam, Shams, Vivek Sarkar, David Leibs, and Peter B. Kessler. "Exploiting Implicit Parallelism in Dynamic Array Programming Languages." In PLDI '14: ACM SIGPLAN Conference on Programming Language Design and Implementation. ACM, 2014. http://dx.doi.org/10.1145/2627373.2627374.

Full text
APA, Harvard, Vancouver, ISO, and other styles