Academic literature on the topic 'Parallel Programming Frameworks'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Parallel Programming Frameworks.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Parallel Programming Frameworks"

1

Kang, Sol Ji, Sang Yeon Lee, and Keon Myung Lee. "Performance Comparison of OpenMP, MPI, and MapReduce in Practical Problems." Advances in Multimedia 2015 (2015): 1–9. http://dx.doi.org/10.1155/2015/575687.

Abstract:
With problem size and complexity increasing, several parallel and distributed programming models and frameworks have been developed to efficiently handle such problems. This paper briefly reviews the parallel computing models and describes three widely recognized parallel programming frameworks: OpenMP, MPI, and MapReduce. OpenMP is the de facto standard for parallel programming on shared memory systems. MPI is the de facto industry standard for distributed memory systems. MapReduce framework has become the de facto standard for large scale data-intensive applications. Qualitative pros and cons of each framework are known, but quantitative performance indexes help get a good picture of which framework to use for the applications. As benchmark problems to compare those frameworks, two problems are chosen: all-pairs-shortest-path problem and data join problem. This paper presents the parallel programs for the problems implemented on the three frameworks, respectively. It shows the experiment results on a cluster of computers. It also discusses which is the right tool for the jobs by analyzing the characteristics and performance of the paradigms.
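To give a concrete flavor of the shared-memory style compared in this paper, here is a minimal sketch (not the authors' code) of one of its two benchmarks, all-pairs shortest paths: a Floyd–Warshall kernel whose inner loops are parallelized with a single OpenMP directive. Compile with -fopenmp.

```cpp
#include <vector>
#include <algorithm>

// Floyd-Warshall all-pairs shortest paths. For a fixed k, the updates
// to different rows i are independent, so the i-loop parallelizes safely.
void floyd_warshall(std::vector<std::vector<double>>& d) {
    const int n = static_cast<int>(d.size());
    for (int k = 0; k < n; ++k) {
        #pragma omp parallel for
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j)
                d[i][j] = std::min(d[i][j], d[i][k] + d[k][j]);
    }
}
```

An MPI version of the same benchmark would instead partition the rows across processes and broadcast row k each iteration, which is exactly the kind of programming-effort difference the paper quantifies.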
2

Dobre, Ciprian, and Fatos Xhafa. "Parallel Programming Paradigms and Frameworks in Big Data Era." International Journal of Parallel Programming 42, no. 5 (September 1, 2013): 710–38. http://dx.doi.org/10.1007/s10766-013-0272-7.

3

Aldinucci, Marco. "eskimo: Experimenting with Skeletons in the Shared Address Model." Parallel Processing Letters 13, no. 3 (September 2003): 449–60. http://dx.doi.org/10.1142/s0129626403001410.

Abstract:
We discuss the lack of expressivity in some skeleton-based parallel programming frameworks. The problem is further exacerbated when approaching irregular problems and dealing with dynamic data structures. Shared memory programming has been argued to have substantial ease of programming advantages for this class of problems. We present the eskimo library, which represents an attempt to merge the two programming models by introducing skeletons in a shared memory framework.
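To make the merged idea concrete: a skeleton packages a recurring parallel structure behind an ordinary call, and in a shared address space it can operate on the data in place, with no copying or serialization. The sketch below is purely illustrative (it is not the eskimo API): a map skeleton over a shared container using C++ threads.

```cpp
#include <thread>
#include <vector>
#include <algorithm>
#include <cstddef>

// A toy "map" skeleton: apply f to every element of a shared vector,
// splitting the index range across nthreads workers. Because memory is
// shared, each worker mutates its slice of `data` directly.
template <typename T, typename F>
void map_skeleton(std::vector<T>& data, F f, unsigned nthreads = 4) {
    std::vector<std::thread> workers;
    const std::size_t chunk = (data.size() + nthreads - 1) / nthreads;
    for (unsigned t = 0; t < nthreads; ++t) {
        workers.emplace_back([&, t] {
            const std::size_t lo = t * chunk;
            const std::size_t hi = std::min(data.size(), lo + chunk);
            for (std::size_t i = lo; i < hi; ++i)
                data[i] = f(data[i]);  // in place: no data distribution step
        });
    }
    for (auto& w : workers) w.join();
}
```

Irregular and pointer-based structures are where this shared-address formulation pays off, since a message-passing skeleton would first have to marshal them.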
4

DeLozier, Christian, and James Shey. "Using Visual Programming Games to Study Novice Programmers." International Journal of Serious Games 10, no. 2 (June 7, 2023): 115–36. http://dx.doi.org/10.17083/ijsg.v10i2.577.

Abstract:
Enabling programmers to write correct and efficient parallel code remains an important challenge, and the prevalence of on-chip accelerators exacerbates this challenge. Novice programmers, especially those in disciplines outside of Computer Science and Computer Engineering, need to be able to write code that exploits parallelism and heterogeneity, but the frameworks for writing parallel and heterogeneous programs expect expert knowledge and experience. More effort must be put into understanding how novice programmers solve parallel problems. Unfortunately, novice programmers are difficult to study because they are, by definition, novices. We have designed a visual programming language and game-based framework for studying how novice programmers solve parallel problems. This tool was used to conduct an initial study on 95 undergraduate students with little to no prior programming experience. 71% of all volunteer participants completed the study in 48 minutes on average. This study demonstrated that novice programmers could solve parallel problems, and this framework can be used to conduct more thorough studies of how novice programmers approach parallel code.
5

Löff, Júnior, Dalvan Griebler, Gabriele Mencagli, Gabriell Araujo, Massimo Torquati, Marco Danelutto, and Luiz Gustavo Fernandes. "The NAS Parallel Benchmarks for evaluating C++ parallel programming frameworks on shared-memory architectures." Future Generation Computer Systems 125 (December 2021): 743–57. http://dx.doi.org/10.1016/j.future.2021.07.021.

6

Zhao, Yuxuan, Qi Sun, Zhuolun He, Yang Bai, and Bei Yu. "AutoGraph: Optimizing DNN Computation Graph for Parallel GPU Kernel Execution." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (June 26, 2023): 11354–62. http://dx.doi.org/10.1609/aaai.v37i9.26343.

Abstract:
Deep learning frameworks optimize the computation graphs and intra-operator computations to boost the inference performance on GPUs, while inter-operator parallelism is usually ignored. In this paper, a unified framework, AutoGraph, is proposed to obtain highly optimized computation graphs in favor of parallel executions of GPU kernels. A novel dynamic programming algorithm, combined with backtracking search, is adopted to explore the optimal graph optimization solution, with the fast performance estimation from the mixed critical path cost. Accurate runtime information based on GPU Multi-Stream launched with CUDA Graph is utilized to determine the convergence of the optimization. Experimental results demonstrate that our method achieves up to 3.47x speedup over existing graph optimization methods. Moreover, AutoGraph outperforms state-of-the-art parallel kernel launch frameworks by up to 1.26x.
7

González-Vélez, Horacio, and Mario Leyton. "A survey of algorithmic skeleton frameworks: high-level structured parallel programming enablers." Software: Practice and Experience 40, no. 12 (November 2010): 1135–60. http://dx.doi.org/10.1002/spe.1026.

8

Zhao, Yongwei, Yunji Chen, and Zhiwei Xu. "Fractal Parallel Computing." Intelligent Computing 2022 (September 5, 2022): 1–10. http://dx.doi.org/10.34133/2022/9797623.

Abstract:
As machine learning (ML) becomes the prominent technology for many emerging problems, dedicated ML computers are being developed at a variety of scales, from clouds to edge devices. However, the heterogeneous, parallel, and multilayer characteristics of conventional ML computers concentrate the cost of development on the software stack, namely, ML frameworks, compute libraries, and compilers, which limits the productivity of new ML computers. Fractal von Neumann architecture (FvNA) is proposed to address the programming productivity issue for ML computers. FvNA is scale-invariant to program, thus making the development of a family of scaled ML computers as easy as a single node. In this study, we generalize FvNA to the field of general-purpose parallel computing. We model FvNA as an abstract parallel computer, referred to as the fractal parallel machine (FPM), to demonstrate several representative general-purpose tasks that are efficiently programmable. FPM limits the entropy of programming by applying constraints on the control pattern of the parallel computing systems. However, FPM is still general-purpose and cost-optimal. We settle some preliminary results showing that FPM is as powerful as many fundamental parallel computing models such as BSP and alternating Turing machine. Therefore, FvNA is also generally applicable to various fields other than ML.
9

del Rio Astorga, David, Manuel F. Dolz, Luis Miguel Sánchez, J. Daniel García, Marco Danelutto, and Massimo Torquati. "Finding parallel patterns through static analysis in C++ applications." International Journal of High Performance Computing Applications 32, no. 6 (March 9, 2017): 779–88. http://dx.doi.org/10.1177/1094342017695639.

Abstract:
Since the ‘free lunch’ of processor performance is over, parallelism has become the new trend in hardware and architecture design. However, parallel resources deployed in data centers are underused in many cases, given that sequential programming is still deeply rooted in current software development. To address this problem, new methodologies and techniques for parallel programming have been progressively developed. For instance, parallel frameworks, offering programming patterns, allow expressing concurrency in applications to better exploit parallel hardware. Nevertheless, a large portion of production software, from a broad range of scientific and industrial areas, is still developed sequentially. Considering that these software modules contain thousands, or even millions, of lines of code, an extremely large amount of effort is needed to identify parallel regions. To pave the way in this area, this paper presents Parallel Pattern Analyzer Tool, a software component that aids the discovery and annotation of parallel patterns in source codes. This tool simplifies the transformation of sequential source code to parallel. Specifically, we provide support for identifying Map, Farm, and Pipeline parallel patterns and evaluate the quality of the detection for a set of different C++ applications.
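For illustration (this is not the tool's actual output), the kind of loop the Parallel Pattern Analyzer Tool would flag as a Map pattern has independent iterations, each touching only its own element; once identified, it can be rewritten with a parallel construct, for example a standard C++17 execution policy:

```cpp
#include <algorithm>
#include <execution>
#include <vector>

void scale(std::vector<double>& v, double a) {
    // Sequential form (each iteration reads/writes only v[i] -> Map):
    //   for (std::size_t i = 0; i < v.size(); ++i) v[i] *= a;
    // Parallel rewrite after the pattern is recognized:
    std::for_each(std::execution::par_unseq, v.begin(), v.end(),
                  [a](double& x) { x *= a; });
}
```

Farm and Pipeline detections follow the same principle but look for independent task streams and chained stages, respectively.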
10

Fan, Wenfei, Tao He, Longbin Lai, Xue Li, Yong Li, Zhao Li, Zhengping Qian, et al. "GraphScope." Proceedings of the VLDB Endowment 14, no. 12 (July 2021): 2879–92. http://dx.doi.org/10.14778/3476311.3476369.

Abstract:
GraphScope is a system and a set of language extensions that enable a new programming interface for large-scale distributed graph computing. It generalizes previous graph processing frameworks (e.g., Pregel, GraphX) and distributed graph databases (e.g., JanusGraph, Neptune) in two important ways: by exposing a unified programming interface to a wide variety of graph computations such as graph traversal, pattern matching, iterative algorithms and graph neural networks within a high-level programming language; and by supporting the seamless integration of a highly optimized graph engine in a general purpose data-parallel computing system. A GraphScope program is a sequential program composed of declarative data-parallel operators, and can be written using standard Python development tools. The system automatically handles the parallelization and distributed execution of programs on a cluster of machines. It outperforms current state-of-the-art systems by enabling a separate optimization (or family of optimizations) for each graph operation in one carefully designed coherent framework. We describe the design and implementation of GraphScope and evaluate system performance using several real-world applications.

Dissertations / Theses on the topic "Parallel Programming Frameworks"

1

Podobas, Artur. "Performance-driven exploration using Task-based Parallel Programming Frameworks." Licentiate thesis, KTH, Programvaruteknik och Datorsystem, SCS, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-122569.

2

Ali, Akhtar. "Comparative study of parallel programming models for multicore computing." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-94296.

Abstract:
Shared memory multi-core processor technology has seen a drastic development, with faster and increasing numbers of processors per chip. This new architecture challenges computer programmers to write code that scales over these many cores to exploit the full computational power of these machines. Shared-memory parallel programming paradigms such as OpenMP and Intel Threading Building Blocks (TBB) are two recognized models that offer a higher level of abstraction, shield programmers from low-level details of thread management, and scale computation over all available resources. At the same time, the need for high-performance, power-efficient computing is compelling developers to exploit GPGPU computing due to the GPU's massive computational power and comparatively faster multi-core growth. This trend leads to systems with heterogeneous architectures containing multicore CPUs and one or more programmable accelerators such as programmable GPUs. Different programming models exist for these architectures, and code written for one architecture is often not portable to another. OpenCL is a relatively new industry-standard framework, defined by the Khronos group, which addresses the portability issue. It offers a portable interface to exploit the computational power of a heterogeneous set of processors such as CPUs, GPUs, DSP processors and other accelerators. In this work, we evaluate the effectiveness of OpenCL for programming multi-core CPUs in a comparative case study with two CPU-specific stable frameworks, OpenMP and Intel TBB, for five benchmark applications, namely matrix multiply, LU decomposition, image convolution, Pi value approximation and image histogram generation. The evaluation includes a performance comparison of the three frameworks and a study of the relative effects of applying compiler optimizations on performance numbers. OpenCL performance on two vendor-dependent platforms, Intel and AMD, is also evaluated. Then the same OpenCL code is ported to a modern GPU, and its code correctness and performance portability are investigated. Finally, the usability experience of coding with the three multi-core frameworks is presented.
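As a flavor of one of the thesis's benchmark kernels, here is a minimal matrix-multiply sketch (our illustration, not the thesis code, and assuming oneTBB is installed) using Intel TBB's parallel_for over the row range; the OpenMP variant would replace the construct with a single pragma on the outer loop.

```cpp
#include <tbb/parallel_for.h>
#include <tbb/blocked_range.h>
#include <vector>
#include <cstddef>

// C = A * B for n x n row-major matrices; C is assumed zero-initialized.
// TBB splits the row range [0, n) into chunks and schedules them on a
// worker pool, which is the load-balancing the thesis benchmarks.
void matmul(const std::vector<double>& A, const std::vector<double>& B,
            std::vector<double>& C, std::size_t n) {
    tbb::parallel_for(tbb::blocked_range<std::size_t>(0, n),
        [&](const tbb::blocked_range<std::size_t>& rows) {
            for (std::size_t i = rows.begin(); i != rows.end(); ++i)
                for (std::size_t k = 0; k < n; ++k)
                    for (std::size_t j = 0; j < n; ++j)
                        C[i * n + j] += A[i * n + k] * B[k * n + j];
        });
}
```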
3

Chavez, Daniel. "Parallelizing Map Projection of Raster Data on Multi-core CPU and GPU Parallel Programming Frameworks." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-190883.

Abstract:
Map projections lie at the core of geographic information systems, and numerous projections are in use today. Reprojection between different map projections recurs in a geographic information system and can be parallelized with multi-core CPUs and GPUs. This thesis implements a parallel analytic reprojection algorithm for raster data in C/C++ with the parallel programming frameworks Pthreads, C++11 STL threads, OpenMP, Intel TBB, CUDA and OpenCL. The thesis compares the execution times of the different implementations on small, medium and large raster data sets, where OpenMP had the best speedups of 6, 6.2 and 5.5, respectively. Meanwhile, the GPU implementations were 293% faster than the fastest CPU implementations, where profiling shows that the CPU implementations spend most of their time in trigonometry functions. The results show that the reprojection algorithm is well suited for the GPU, while OpenMP and Intel TBB are the fastest of the CPU frameworks.
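A compressed sketch of the idea follows, with simplified projection math standing in for the thesis's real projections (the names and formulas here are our illustration): each output pixel is independent work, so the row loop parallelizes directly with OpenMP, and the trigonometric calls are exactly the CPU hotspot the profiling identified.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>
#include <algorithm>

constexpr double kPi = 3.141592653589793;

// Illustrative reprojection: equirectangular source -> Mercator-like
// target, nearest-neighbour resampling. src and dst are w*h row-major.
void reproject(const std::vector<float>& src, std::vector<float>& dst,
               std::size_t w, std::size_t h) {
    #pragma omp parallel for
    for (std::ptrdiff_t y = 0; y < static_cast<std::ptrdiff_t>(h); ++y) {
        for (std::size_t x = 0; x < w; ++x) {
            double lat = 90.0 - (double(y) / h) * 180.0;  // degrees
            lat = std::max(-85.0, std::min(85.0, lat));   // clamp poles
            // Mercator northing; std::log/std::tan dominate CPU time.
            double m = std::log(std::tan(kPi / 4.0 + lat * kPi / 360.0));
            std::size_t sy = std::size_t((1.0 - m / kPi) * 0.5 * h);
            if (sy < h)
                dst[std::size_t(y) * w + x] = src[sy * w + x];
        }
    }
}
```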
4

Sonoda, Eloiza Helena. "OOPS - Object-Oriented Parallel System. Um framework de classes para a programação científica paralela." Universidade de São Paulo, 2006. http://www.teses.usp.br/teses/disponiveis/76/76132/tde-14022007-101855/.

Abstract:
This work describes the design and development of the OOPS (Object-Oriented Parallel System) class framework, a tool that uses object orientation to support programming of concurrent scientific applications for parallel execution. OOPS provides high-level abstractions to avoid the application programmer's involvement with many parallel implementation details. For performance considerations, some parallel aspects such as decomposition and data distribution are not completely hidden from the application programmer. To achieve its intents, OOPS encapsulates some programming techniques frequently used for parallel systems. Virtual processors are organized in groups, over which topologies that provide communication between the processors can be constructed; distributed containers have their elements distributed across the processors of a topology, and parallel components use these containers for their work. The use of the classes supplied by OOPS simplifies the implementation of parallel applications without incurring pronounced overhead: OOPS is a thin layer over the message passing interface used for its implementation.
5

Torbey, Sami. "Towards a framework for intuitive programming of cellular automata." Thesis, Kingston, Ont. : [s.n.], 2007. http://hdl.handle.net/1974/929.

6

Hamdan, Mohammad M. "A combinational framework for parallel programming using algorithmic skeletons." Thesis, Heriot-Watt University, 2000. http://hdl.handle.net/10399/567.

7

Moraes, Sergio A. S. "A distributed processing framework with application to graphics." Thesis, University of Sussex, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.387338.

8

Cuello, Rosandra. "Providing Support for the Movidius Myriad1 Platform in the SkePU Skeleton Programming Framework." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-111844.

Abstract:
The Movidius Myriad1 Platform is a multicore embedded platform primed to offer high performance and power efficiency for computer vision applications in mobile devices. The challenges of programming multicore environments are well known, and skeleton programming offers a high-level programming alternative for parallel computing, intended to hide the complexities of the system from the programmer. The SkePU Skeleton Programming Framework includes backend implementations for CPU and GPU systems, and it has the capacity to support more platforms by extending its backend implementations. With this master thesis project we aim to extend the SkePU Skeleton Programming Framework to provide support for execution on the Movidius Myriad1 embedded platform. Our SkePU backend for Myriad1 consists of a set of macros and functions to compose the different elements of a Myriad1 application, data communication structures to exchange data between the host system and Myriad1, and a helper script and auxiliary files to generate a Myriad1 application. Evaluation and testing demonstrate that our backend is usable; however, further optimizations are needed to obtain performance good enough for real-life applications, particularly when it comes to data communication. As part of this project, we have outlined some improvements that could be applied to obtain better overall performance in the future, addressing the issues found with the methods of data communication.
9

Ernstsson, August. "Designing a Modern Skeleton Programming Framework for Parallel and Heterogeneous Systems." Licentiate thesis, Linköpings universitet, Programvara och system, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-170194.

Abstract:
Today's society is increasingly software-driven and dependent on powerful computer technology. Therefore it is important that advancements in the low-level processor hardware are made available for exploitation by a growing number of programmers of differing skill level. However, as we are approaching the end of Moore's law, hardware designers are finding new and increasingly complex ways to increase the accessible processor performance. It is getting more and more difficult to effectively target these processing resources without expert knowledge in parallelization, heterogeneous computation, communication, synchronization, and so on. To ensure that the software side can keep up, advanced programming environments and frameworks are needed to bridge the widening gap between hardware and software. One such example is the pattern-centric skeleton programming model and in particular the SkePU project. The work presented in this thesis first redesigns the SkePU framework based on modern C++ variadic template metaprogramming and state-of-the-art compiler technology. It then explores new ways to improve performance: by providing new patterns, improving the data access locality of existing ones, and using both static and dynamic knowledge about program flow. The work combines novel ideas with practical evaluation of the approach on several applications. The advancements also include the first skeleton API that allows variadic skeletons, new data containers, and finally an approach to make skeleton programming more customizable without compromising universal portability.

Additional research funders: EU H2020 project EXA2PRO (801015); SeRC.
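The "variadic skeletons" mentioned in the abstract can be illustrated with a toy Map skeleton (a sketch of the general idea, not SkePU's real API): C++ variadic templates let a single implementation accept any number of input containers, with the arity checked at compile time.

```cpp
#include <vector>
#include <cstddef>

// A toy variadic Map skeleton: the user function f is applied
// element-wise over any number of equally sized input vectors. A real
// framework would dispatch this loop to OpenMP/CUDA/OpenCL backends.
template <typename F>
class Map {
    F f;
public:
    explicit Map(F func) : f(func) {}

    template <typename Out, typename... In>
    void operator()(std::vector<Out>& out, const std::vector<In>&... in) {
        for (std::size_t i = 0; i < out.size(); ++i)
            out[i] = f(in[i]...);  // pack expansion = variadic arity
    }
};

int main() {
    std::vector<float> a{1, 2, 3}, b{4, 5, 6}, c(3);
    Map add([](float x, float y) { return x + y; });
    add(c, a, b);  // c = {5, 7, 9}
}
```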

10

Manasievski, Milan. "Asynchronous and parallel programming in .NET framework 4 and 4.5 using C#." Master's thesis, Česká zemědělská univerzita v Praze, 2015. http://www.nusl.cz/ntk/nusl-258694.

Abstract:
In this diploma thesis the author elaborates on asynchronous and parallel programming in the .NET Framework, versions 4 and 4.5. The aim of the thesis is to provide better insight into the task programming model that Microsoft introduced, and to compare different applications in terms of speed and the lines of code used to write them, using simple statistics. Drawing on the literature gathered, the author explains the best ways to achieve parallelism in applications, describes the design patterns used, and provides code snippets that help the reader gain a better overall understanding of the Task Parallel Library and the benefits it offers over older methods and sequential programming.

Books on the topic "Parallel Programming Frameworks"

1

Microsoft Corporation, ed. Parallel programming with Microsoft .NET: Design patterns for decomposition and coordination on multicore architectures. [S.l.]: Microsoft, 2010.

2

LINQ to objects using C# 4.0: Using and extending LINQ to objects and parallel LINQ (PLINQ). Upper Saddle River, NJ: Addison-Wesley, 2010.

3

Boudreau, Joseph F., and Eric S. Swanson. Parallel computing. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198708636.003.0009.

Abstract:
This chapter describes various approaches to concurrency, or "parallel programming". An overview of high performance computing is followed with a review of Flynn's taxonomy of parallel computing. Three methods for implementing parallel code using the frameworks provided by MPI, OpenMP, and C++ threads are presented. The use of the C++ constructs mutex and future to resolve issues of synchronization are discussed. All methods are illustrated with an embarrassingly parallel application to a Monte Carlo integral and common pitfalls are presented. The chapter closes with a discussion and example of the utility of forking processes and the use of C++ sockets and their application in a client/server environment.
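In the spirit of the chapter's embarrassingly parallel example (our sketch, not the book's code): a Monte Carlo estimate of pi where each worker returns its partial count through a std::future, so the join step is just collecting results.

```cpp
#include <future>
#include <random>
#include <vector>
#include <iostream>

// Count random points in the unit square that fall inside the circle.
long hits(long samples, unsigned seed) {
    std::mt19937 gen(seed);  // per-task generator: no shared state
    std::uniform_real_distribution<double> u(0.0, 1.0);
    long h = 0;
    for (long i = 0; i < samples; ++i) {
        double x = u(gen), y = u(gen);
        if (x * x + y * y <= 1.0) ++h;
    }
    return h;
}

int main() {
    const long N = 10'000'000;
    const unsigned T = 4;
    std::vector<std::future<long>> parts;
    for (unsigned t = 0; t < T; ++t)  // fork: one async task per worker
        parts.push_back(std::async(std::launch::async, hits, N / T, 1234 + t));
    long total = 0;
    for (auto& p : parts) total += p.get();  // futures synchronize here
    std::cout << 4.0 * total / N << '\n';    // ~3.14159
}
```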
4

Hillar, Gastón C. Professional Parallel Programming with C#: Master Parallel Extensions with .NET 4. John Wiley & Sons, 2010.

5

Professional Parallel Programming with C#: Master Parallel Extensions with .NET 4. John Wiley, 2010.

6

Blewett, Richard, Andrew Clymer, and Rock Solid Knowledge Ltd. Pro Asynchronous Programming with .NET. Springer, 2014.

7

Pro Asynchronous Programming with .NET. Apress, 2013.

8

Shasha, Dennis E., and Jessica P. Chang. Storing Clocked Programs Inside DNA: A Simplifying Framework for Nanocomputing. Morgan & Claypool Publishers, 2011.


Book chapters on the topic "Parallel Programming Frameworks"

1

Niculescu, Virginia, Adrian Sterca, and Frédéric Loulergue. "Reflections on the Design of Parallel Programming Frameworks." In Communications in Computer and Information Science, 154–81. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-70006-5_7.

2

Reinders, James, Ben Ashbaugh, James Brodman, Michael Kinsner, John Pennycook, and Xinmin Tian. "Common Parallel Patterns." In Data Parallel C++, 323–52. Berkeley, CA: Apress, 2020. http://dx.doi.org/10.1007/978-1-4842-5574-2_14.

Abstract:
When we are at our best as programmers, we recognize patterns in our work and apply techniques that are time proven to be the best solution. Parallel programming is no different, and it would be a serious mistake not to study the patterns that have proven to be useful in this space. Consider the MapReduce frameworks adopted for Big Data applications; their success stems largely from being based on two simple yet effective parallel patterns—map and reduce.
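The two patterns also compose directly in standard C++; a small sketch (ours, not the book's) using the parallel transform_reduce algorithm, where the transform step is the map and the summation is the reduce:

```cpp
#include <execution>
#include <numeric>
#include <functional>
#include <vector>

// Dot product as map (element-wise multiply) + reduce (sum), run with
// the parallel execution policy.
double dot(const std::vector<double>& a, const std::vector<double>& b) {
    return std::transform_reduce(std::execution::par,
                                 a.begin(), a.end(), b.begin(),
                                 0.0,                  // reduce identity
                                 std::plus<>{},        // reduce: +
                                 std::multiplies<>{}); // map: *
}
```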
3

Bocchino, Robert L., and Vikram S. Adve. "Types, Regions, and Effects for Safe Programming with Object-Oriented Parallel Frameworks." In Lecture Notes in Computer Science, 306–32. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-22655-7_15.

4

Troelsen, Andrew, and Philip Japikse. "Multithreaded, Parallel, and Async Programming." In C# 6.0 and the .NET 4.6 Framework, 695–747. Berkeley, CA: Apress, 2015. http://dx.doi.org/10.1007/978-1-4842-1332-2_19.

5

Troelsen, Andrew. "Multithreaded, Parallel, and Async Programming." In Pro C# 5.0 and the .NET 4.5 Framework, 697–751. Berkeley, CA: Apress, 2012. http://dx.doi.org/10.1007/978-1-4302-4234-5_19.

6

Dongarra, Jack, Piotr Luszczek, Felix Wolf, Jesper Larsson Träff, Patrice Quinton, Hermann Hellwagner, Martin Fränzle, et al. "SWARM: A Parallel Programming Framework for Multicore Processors." In Encyclopedia of Parallel Computing, 1966–71. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-09766-4_52.

7

Launay, Pascale, and Jean-Louis Pazat. "A framework for parallel programming in Java." In High-Performance Computing and Networking, 628–37. Berlin, Heidelberg: Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/bfb0037190.

8

Burgess, D. A., P. I. Crumpton, and M. B. Giles. "A Parallel Framework for Unstructured Grid Solvers." In Programming Environments for Massively Parallel Distributed Systems, 97–106. Basel: Birkhäuser Basel, 1994. http://dx.doi.org/10.1007/978-3-0348-8534-8_10.

9

Sato, Shigeyuki, and Hideya Iwasaki. "A Skeletal Parallel Framework with Fusion Optimizer for GPGPU Programming." In Programming Languages and Systems, 79–94. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-10672-9_8.

10

Steinhöfel, Dominic. "Ever Change a Running System: Structured Software Reengineering Using Automatically Proven-Correct Transformation Rules." In Ernst Denert Award for Software Engineering 2020, 197–226. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-83128-8_10.

Abstract:
Legacy systems are business-critical software systems whose failure can have a significant impact on the business. Yet, their maintenance and adaptation to changed requirements consume a considerable amount of the total software development costs. Frequently, domain experts and developers involved in the original development are not available anymore, making it difficult to adapt a legacy system without introducing bugs or unwanted behavior. This results in a dilemma: businesses are reluctant to change a working system, while at the same time struggling with its high maintenance costs. We propose the concept of Structured Software Reengineering, replacing the ad hoc forward engineering part of a reengineering process with the application of behavior-preserving, proven-correct transformations improving nonfunctional program properties. Such transformations preserve valuable business logic while improving properties such as maintainability, performance, or portability to new platforms. Manually encoding and proving such transformations for industrial programming languages, for example, in interactive proof assistants, is a major challenge requiring deep expert knowledge. Existing frameworks for automatically proving transformation rules have limited expressiveness and are restricted to particular target applications such as compilation or peephole optimizations. We present Abstract Execution, a specification and verification framework for statement-based program transformation rules on Java programs building on symbolic execution. Abstract Execution supports universal quantification over statements or expressions and addresses properties about the (big-step) behavior of programs. Since this class of properties is useful for a plethora of applications, Abstract Execution bridges the gap between expressiveness and automation. In many cases, fully automatic proofs are possible. We explain REFINITY, a workbench for modeling and proving statement-level Java transformation rules, and discuss our applications of Abstract Execution to code refactoring, cost analysis of program transformations, and transformations reshaping programs for the application of parallel design patterns.

Conference papers on the topic "Parallel Programming Frameworks"

1

Gu, Ruidong, and Michela Becchi. "A comparative study of parallel programming frameworks for distributed GPU applications." In CF '19: Computing Frontiers Conference. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3310273.3323071.

2

Fu, Zhouwang, Tao Song, Zhengwei Qi, and Haibing Guan. "Efficient shuffle management with SCache for DAG computing frameworks." In PPoPP '18: 23nd ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3178487.3178510.

3

Arguello, M., R. Gacitua, J. Osborne, S. Peters, P. Ekin, and P. Sawyer. "Skeletons and Semantic Web Descriptions to Integrate Parallel Programming into Ontology Learning Frameworks." In 2009 11th International Conference on Computer Modelling and Simulation. IEEE, 2009. http://dx.doi.org/10.1109/uksim.2009.47.

4

Wilkinson, Barry, and Clayton Ferner. "The Suzaku Pattern Programming Framework." In 2016 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW). IEEE, 2016. http://dx.doi.org/10.1109/ipdpsw.2016.107.

5

Xu, Wenhao, Yongwei Wu, Wei Xue, Wusheng Zhang, Ye Yuan, and Kai Zhang. "Horde: A parallel programming framework for clusters." In 2009 1st IEEE Symposium on Web Society (SWS). IEEE, 2009. http://dx.doi.org/10.1109/sws.2009.5271793.

6

Lasserre, Alice, Raymond Namyst, and Pierre-Andre Wacrenier. "EASYPAP: a Framework for Learning Parallel Programming." In 2020 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW). IEEE, 2020. http://dx.doi.org/10.1109/ipdpsw50202.2020.00059.

7

Gannouni, Sofien. "A Gamma-calculus GPU-based parallel programming framework." In 2015 2nd World Symposium on Web Applications and Networking (WSWAN). IEEE, 2015. http://dx.doi.org/10.1109/wswan.2015.7210299.

8

Chen, Sikan, Minglu Li, and Feng He. "GridPPI: A Lightweight Grid-Enabled Parallel Programming Framework." In 2006 IEEE Asia-Pacific Conference on Services Computing (APSCC'06). IEEE, 2006. http://dx.doi.org/10.1109/apscc.2006.63.

9

Bader, David A., Varun Kanade, and Kamesh Madduri. "SWARM: A Parallel Programming Framework for Multicore Processors." In 2007 IEEE International Parallel and Distributed Processing Symposium. IEEE, 2007. http://dx.doi.org/10.1109/ipdps.2007.370681.

10

Lin, Shang-Chieh, and Yarsun Hsu. "A Runtime Framework for GPGPU." In 2014 Sixth International Symposium on Parallel Architectures, Algorithms and Programming (PAAP). IEEE, 2014. http://dx.doi.org/10.1109/paap.2014.18.

