Academic literature on the topic 'Parallel programming techniques'

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Parallel programming techniques.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Parallel programming techniques"

1

Peláez, Ignacio, Francisco Almeida, and Daniel González. "High Level Parallel Skeletons for Dynamic Programming." Parallel Processing Letters 18, no. 01 (March 2008): 133–47. http://dx.doi.org/10.1142/s0129626408003272.

Abstract:
Dynamic Programming is an important problem-solving technique used for solving a wide variety of optimization problems. Dynamic Programming programs are commonly designed as individual applications and software tools are usually tailored to specific classes of recurrences and methodologies. That contrasts with some other algorithmic techniques where a single generic program may solve all the instances. We have developed a general skeleton tool providing support for a wide range of dynamic programming methodologies on different parallel architectures. Genericity, flexibility and efficiency are basic issues of the design strategy. Parallelism is supplied to the user in a transparent manner through a common sequential interface. A set of test problems representative of different classes of Dynamic Programming formulations has been used to validate our skeleton on an IBM-SP.
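The skeleton approach described above can be illustrated with a small sketch: the user writes only a sequential-looking cell recurrence, and the skeleton supplies the thread-based parallelism behind it. The interface below is invented for illustration and is not the authors' actual tool.

```python
# Illustrative dynamic-programming "skeleton": the user supplies a sequential
# cell recurrence; the skeleton evaluates each row's cells in parallel, since
# every cell of row i depends only on row i-1.
from concurrent.futures import ThreadPoolExecutor

def dp_skeleton(n_rows, row_len, cell, workers=4):
    """Evaluate a row-by-row DP table; cells within a row run in parallel."""
    prev = None
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for i in range(n_rows):
            row = list(pool.map(lambda j: cell(i, j, prev), range(row_len)))
            prev = row
    return prev

# Example recurrence: Pascal's triangle, C(i, j) = C(i-1, j-1) + C(i-1, j).
def binom_cell(i, j, prev):
    if j == 0 or j == i:
        return 1
    if j > i:
        return 0
    return prev[j - 1] + prev[j]

final_row = dp_skeleton(n_rows=11, row_len=11, cell=binom_cell)
# final_row holds the binomial coefficients C(10, 0) .. C(10, 10)
```

The same driver works for any recurrence with this row-to-row dependence shape, which is the genericity the abstract emphasises.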
2

Mou, Xin Gang, Guo Hua Wei, and Xiao Zhou. "Parallel Programming and Optimization Based on TMS320C6678." Applied Mechanics and Materials 615 (August 2014): 259–64. http://dx.doi.org/10.4028/www.scientific.net/amm.615.259.

Abstract:
The development of multi-core processors has provided a good solution for applications that require real-time processing and a large number of calculations. However, simply exposing parallelism in software is not enough to make full use of the hardware's performance. This paper studies parallel programming and optimization techniques on TMS320C6678 multicore digital signal processors. We first illustrate an implementation of a selected parallel image convolution algorithm using OpenMP. Then several optimization techniques, such as compiler intrinsics, cache tuning, and DMA, are used to further enhance application performance and achieve a good execution time according to the test results.
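The row-wise data decomposition behind an OpenMP-style parallel image convolution can be sketched in Python. This is a hypothetical stand-in for the paper's C6678/OpenMP implementation; it shows the structure of the decomposition, not real DSP performance.

```python
# Rows of the output image are independent, so they can be farmed out to a
# pool of workers -- the same decomposition an OpenMP
# '#pragma omp parallel for' over rows would express in C.
from concurrent.futures import ThreadPoolExecutor

def convolve_row(image, kernel, r):
    """3x3 convolution of one output row, with zero-padded borders."""
    h, w = len(image), len(image[0])
    out = []
    for c in range(w):
        acc = 0.0
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if 0 <= rr < h and 0 <= cc < w:
                    acc += image[rr][cc] * kernel[dr + 1][dc + 1]
        out.append(acc)
    return out

def convolve_parallel(image, kernel, workers=4):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda r: convolve_row(image, kernel, r),
                             range(len(image))))

identity = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
img = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
# With the identity kernel the output equals the input.
```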
3

Graham, John R. "Integrating parallel programming techniques into traditional computer science curricula." ACM SIGCSE Bulletin 39, no. 4 (December 2007): 75–78. http://dx.doi.org/10.1145/1345375.1345419.

4

Ibarra, David, and Josep Arnal. "Parallel Programming Techniques Applied to Water Pump Scheduling Problems." Journal of Water Resources Planning and Management 140, no. 7 (July 2014): 06014002. http://dx.doi.org/10.1061/(asce)wr.1943-5452.0000439.

5

Alghamdi, Ahmed Mohammed, Fathy Elbouraey Eassa, Maher Ali Khemakhem, Abdullah Saad AL-Malaise AL-Ghamdi, Ahmed S. Alfakeeh, Abdullah S. Alshahrani, and Ala A. Alarood. "Parallel Hybrid Testing Techniques for the Dual-Programming Models-Based Programs." Symmetry 12, no. 9 (September 20, 2020): 1555. http://dx.doi.org/10.3390/sym12091555.

Abstract:
The importance of high-performance computing is increasing, and Exascale systems will be feasible in a few years. These systems can be achieved by enhancing the hardware’s ability as well as the parallelism in the application by integrating more than one programming model. One of the dual-programming model combinations is Message Passing Interface (MPI) + OpenACC, which has several features including increased system parallelism, support for different platforms with more performance, better productivity, and less programming effort. Several testing tools target parallel applications built by using programming models, but more effort is needed, especially for high-level Graphics Processing Unit (GPU)-related programming models. Owing to the integration of different programming models, errors will be more frequent and unpredictable. Testing techniques are required to detect these errors, especially runtime errors resulting from the integration of MPI and OpenACC; studying their behavior is also important, especially some OpenACC runtime errors that cannot be detected by any compiler. In this paper, we enhance the capabilities of ACC_TEST to test the programs built by using the dual-programming models MPI + OpenACC and detect their related errors. Our tool integrated both static and dynamic testing techniques to create ACC_TEST and allowed us to benefit from the advantages of both techniques reducing overheads, enhancing system execution time, and covering a wide range of errors. Finally, ACC_TEST is a parallel testing tool that creates testing threads based on the number of application threads for detecting runtime errors.
6

García-Blas, Javier, and Christopher Brown. "High-level programming for heterogeneous and hierarchical parallel systems." International Journal of High Performance Computing Applications 32, no. 6 (November 2018): 804–6. http://dx.doi.org/10.1177/1094342018807840.

Abstract:
High-Level Heterogeneous and Hierarchical Parallel Systems (HLPGPU) aims to bring together researchers and practitioners to present new results and ongoing work on those aspects of high-level programming relevant, or specific to general-purpose computing on graphics processing units (GPGPUs) and new architectures. The 2016 HLPGPU symposium was an event co-located with the HiPEAC conference in Prague, Czech Republic. HLPGPU is targeted at high-level parallel techniques, including programming models, libraries and languages, algorithmic skeletons, refactoring tools and techniques for parallel patterns, tools and systems to aid parallel programming, heterogeneous computing, timing analysis and statistical performance models.
7

Perri, Simona, Francesco Ricca, and Marco Sirianni. "Parallel instantiation of ASP programs: techniques and experiments." Theory and Practice of Logic Programming 13, no. 2 (January 25, 2012): 253–78. http://dx.doi.org/10.1017/s1471068411000652.

Abstract:
Answer-Set Programming (ASP) is a powerful logic-based programming language, which is enjoying increasing interest within the scientific community and (very recently) in industry. The evaluation of Answer-Set Programs is traditionally carried out in two steps. In the first step, an input program P undergoes the so-called instantiation (or grounding) process, which produces a program P′ semantically equivalent to P, but not containing any variable; in turn, P′ is evaluated by using a backtracking search algorithm in the second step. It is well known that instantiation is important for the efficiency of the whole evaluation, might become a bottleneck in common situations, is crucial in several real-world applications, and is particularly relevant when huge input data have to be dealt with. At the time of this writing, the available instantiator modules are not able to exploit satisfactorily the latest hardware, featuring multi-core/multi-processor Symmetric MultiProcessing technologies. This paper presents some parallel instantiation techniques, including load-balancing and granularity control heuristics, which allow for the effective exploitation of the processing power offered by modern Symmetric MultiProcessing machines. This is confirmed by an extensive experimental analysis reported herein.
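The load-balancing and granularity-control idea can be sketched as a shared work queue whose items are pre-packed into chunks of roughly equal estimated cost, so no worker is handed one oversized task. Everything here (the chunking rule, the cost estimates, the interface) is invented for illustration and is not taken from the paper's instantiator.

```python
# Dynamic load balancing with a simple granularity-control heuristic:
# pack work items into chunks of roughly equal estimated cost, then let
# worker threads pull chunks from a shared queue until it is empty.
import queue
import threading

def make_chunks(costs, target):
    """Pack item indices into chunks whose estimated total cost is ~target."""
    chunks, cur, cur_cost = [], [], 0
    for i, c in enumerate(costs):
        cur.append(i)
        cur_cost += c
        if cur_cost >= target:
            chunks.append(cur)
            cur, cur_cost = [], 0
    if cur:
        chunks.append(cur)
    return chunks

def run_balanced(costs, work, n_threads=4):
    q = queue.Queue()
    # Aim for several chunks per thread so fast threads can steal more work.
    for ch in make_chunks(costs, target=sum(costs) / (4 * n_threads)):
        q.put(ch)
    results, lock = {}, threading.Lock()

    def worker():
        while True:
            try:
                ch = q.get_nowait()
            except queue.Empty:
                return
            for i in ch:
                r = work(i)
                with lock:
                    results[i] = r

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

squares = run_balanced([5, 1, 1, 1, 8, 2, 2], lambda i: i * i)
```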
8

Sathya, S., R. Hema, and M. Amala. "Parallel Techniques for Linear Programming Problems Using Multiprogramming and RSM." International Journal of Engineering Trends and Technology 13, no. 5 (July 25, 2014): 200–203. http://dx.doi.org/10.14445/22315381/ijett-v13p242.

9

Skinner, Gregg, and Rudolf Eigenmann. "Parallel Performance of a Combustion Chemistry Simulation." Scientific Programming 4, no. 3 (1995): 127–39. http://dx.doi.org/10.1155/1995/342723.

Abstract:
We used a description of a combustion simulation's mathematical and computational methods to develop a version for parallel execution. The result was a reasonable performance improvement on small numbers of processors. We applied several important programming techniques, which we describe, in optimizing the application. This work has implications for programming languages, compiler design, and software engineering.
10

El-Neweihi, Emad, Frank Proschan, and Jayaram Sethuraman. "Optimal allocation of components in parallel–series and series–parallel systems." Journal of Applied Probability 23, no. 3 (September 1986): 770–77. http://dx.doi.org/10.2307/3214014.

Abstract:
This paper shows how majorization and Schur-convex functions can be used to solve the problem of optimal allocation of components to parallel-series and series-parallel systems to maximize the reliability of the system. For parallel-series systems the optimal allocation is completely described and depends only on the ordering of component reliabilities. For series-parallel systems, we describe a partial ordering among allocations that can lead to the optimal allocation. Finally, we describe how these problems can be cast as integer linear programming problems and thus the results obtained in this paper show that when some linear integer programming problems are recast in a different way and the techniques of Schur functions are used, complete solutions can be obtained in some instances and better insight in others.
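The allocation problem this abstract describes can be made concrete with a tiny brute-force search: components are split into two parallel banks wired in series, and we look for the split that maximizes system reliability. Exhaustive search stands in here for the paper's majorization argument; the reliability values are made up.

```python
# Brute-force allocation of components to two parallel banks in series.
from itertools import combinations

def bank_reliability(ps):
    """A parallel bank fails only if every one of its components fails."""
    q = 1.0
    for p in ps:
        q *= (1.0 - p)
    return 1.0 - q

def best_split(ps, k):
    """Best assignment of k components to bank 1 (the rest go to bank 2)."""
    best, best_r = None, -1.0
    idx = range(len(ps))
    for chosen in combinations(idx, k):
        rest = [i for i in idx if i not in chosen]
        r = (bank_reliability([ps[i] for i in chosen])
             * bank_reliability([ps[i] for i in rest]))
        if r > best_r:
            best, best_r = chosen, r
    return best, best_r

ps = [0.9, 0.8, 0.6, 0.5]
split, r = best_split(ps, 2)
```

For these reliabilities the best split pairs the strongest component with the weakest, balancing the two banks, which is the kind of ordering-based structure the paper characterizes.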

Dissertations / Theses on the topic "Parallel programming techniques"

1

Pereira, Marcio Machado. "Scheduling and serialization techniques for transactional memories." [s.n.], 2015. http://repositorio.unicamp.br/jspui/handle/REPOSIP/275547.

Abstract:
Advisors: Guido Costa Souza de Araújo, José Nelson Amaral. Doctoral thesis (Doutorado em Ciência da Computação), Universidade Estadual de Campinas, Instituto de Computação, 2015.
In the last few years, Transactional Memories (TMs) have been shown to be a parallel programming model that can effectively combine performance improvement with ease of programming. Moreover, the recent introduction of (H)TM-based ISA extensions by major microprocessor manufacturers also seems to endorse TM as a programming model for today's parallel applications. One of the central issues in designing Software TM (STM) systems is to identify mechanisms or heuristics that can minimize contention arising from conflicting transactions. Although a number of mechanisms have been proposed to tackle contention, such techniques have a limited scope, because conflict is avoided by either interrupting or serializing transaction execution, thus considerably impacting performance. This work explores a complementary approach to boost the performance of STM through the use of schedulers. A TM scheduler is a software component that decides when a particular transaction should be executed. Its effectiveness is very sensitive to the accuracy of the metrics used to predict transaction behaviour, particularly in high-contention scenarios. This work proposes a new Dynamic Transaction Scheduler (DTS) to select the transaction to execute next, based on a new policy that rewards success and an improved metric that measures the amount of effective work performed by a transaction. Hardware TMs (HTMs) are an interesting mechanism to implement TM as they integrate the support for transactions at the lowest, most efficient, architectural level. On the other hand, for some applications, HTMs can have their performance hindered by the lack of scalability and by limitations in cache store capacity. This work presents an extensive performance study of the implementation of HTM in the Haswell generation of Intel x86 core processors. It evaluates the strengths and weaknesses of this new architecture by exploring several dimensions in the space of TM application characteristics. This detailed performance study provides insights on the constraints imposed by Intel's Transactional Synchronization Extensions (TSX) and introduces a simple but efficient serialization policy for guaranteeing forward progress on top of Intel's best-effort HTM, which was critical to achieving performance.
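The "reward for success" policy can be illustrated with a toy scheduler that runs next the pending transaction with the highest committed work per attempt. The metric and interface here are invented for the sketch, not taken from the thesis.

```python
# Toy "reward for success" transaction scheduling policy: transactions that
# have historically committed more useful work per attempt are favoured.
def reward(stats):
    """Score a transaction by useful work committed per attempt."""
    return stats["committed_work"] / max(1, stats["attempts"])

def pick_next(pending):
    """Choose the pending transaction with the highest reward; ties go to
    the lowest transaction id, for determinism."""
    return min(pending, key=lambda tx: (-reward(pending[tx]), tx))

pending = {
    1: {"committed_work": 40, "attempts": 10},  # reward 4.0
    2: {"committed_work": 90, "attempts": 10},  # reward 9.0
    3: {"committed_work": 50, "attempts": 25},  # reward 2.0
}
# pick_next(pending) selects transaction 2, the best worker per attempt.
```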
2

Hind, Alan. "Parallel simulation techniques for telecommunication network modelling." Thesis, Durham University, 1994. http://etheses.dur.ac.uk/5520/.

Abstract:
In this thesis, we consider the application of parallel simulation to the performance modelling of telecommunication networks. A largely automated approach was first explored, using a parallelizing compiler to speed up the simulation of simple models of circuit-switched networks. This yielded reasonable results for relatively little effort compared with other approaches. However, more complex simulation models of packet- and cell-based telecommunication networks, requiring the use of discrete event techniques, need an alternative approach. A critical review of parallel discrete event simulation indicated that a distributed-model-components approach using conservative or optimistic synchronization would be worth exploring. Experiments were therefore conducted using simulation models of queuing networks and Asynchronous Transfer Mode (ATM) networks to explore the potential speed-up possible with this approach. Specifically, it is shown that these techniques can be used successfully to speed up the execution of useful telecommunication network simulations. A detailed investigation demonstrated that conservative synchronization performs very well for applications with good lookahead properties and sufficient message traffic density and, given such properties, will significantly outperform optimistic synchronization. Optimistic synchronization, however, gives reasonable speed-up for models with a wider range of such properties and can be optimized for speed-up and memory usage at run time. Thus, it is confirmed as being more generally applicable, particularly as model development is somewhat easier than for conservative synchronization. This has to be balanced against the more difficult task of developing and debugging an optimistic synchronization kernel and the application models.
3

Lu, Kang Hsin. "Modelling of saturated traffic flow using highly parallel systems." Thesis, University of Sheffield, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.245726.

4

Papadopoulos, George Angelos. "Parallel implementation of concurrent logic languages using graph rewriting techniques." Thesis, University of East Anglia, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.329340.

5

Nautiyal, Sunil Datt. "Parallel computing techniques for investigating three dimensional collapse of a masonry arch." Thesis, University of Cambridge, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.320031.

6

Webb, Craig Jonathan. "Parallel computation techniques for virtual acoustics and physical modelling synthesis." Thesis, University of Edinburgh, 2014. http://hdl.handle.net/1842/15779.

Abstract:
The numerical simulation of large-scale virtual acoustics and physical modelling synthesis is a computationally expensive process. Time stepping methods, such as finite difference time domain, can be used to simulate wave behaviour in models of three-dimensional room acoustics and virtual instruments. In the absence of any form of simplifying assumptions, and at high audio sample rates, this can lead to simulations that require many hours of computation on a standard Central Processing Unit (CPU). In recent years the video game industry has driven the development of Graphics Processing Units (GPUs) that are now capable of multi-teraflop performance using highly parallel architectures. Whilst these devices are primarily designed for graphics calculations, they can also be used for general purpose computing. This thesis explores the use of such hardware to accelerate simulations of three-dimensional acoustic wave propagation, and embedded systems that create physical models for the synthesis of sound. Test case simulations of virtual acoustics are used to compare the performance of workstation CPUs to that of Nvidia’s Tesla GPU hardware. Using representative multicore CPU benchmarks, such simulations can be accelerated in the order of 5X for single precision and 3X for double precision floating-point arithmetic. Optimisation strategies are examined for maximising GPU performance when using single devices, as well as for multiple device codes that can compute simulations using billions of grid points. This allows the simulation of room models of several thousand cubic metres at audio rates such as 44.1kHz, all within a useable time scale. The performance of alternative finite difference schemes is explored, as well as strategies for the efficient implementation of boundary conditions. Creating physical models of acoustic instruments requires embedded systems that often rely on sparse linear algebra operations. 
The performance efficiency of various sparse matrix storage formats is detailed in terms of the fundamental operations that are required to compute complex models, with an optimised storage system achieving substantial performance gains over more generalised formats. An integrated instrument model of the timpani drum is used to demonstrate the performance gains that are possible using the optimisation strategies developed through this thesis.
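The explicit time stepping at the core of such simulations can be shown with a minimal 1-D finite-difference scheme for the wave equation. This is an illustration only; the thesis works with 3-D room models at audio sample rates on GPUs, where every grid point's update is an independent, GPU-friendly stencil.

```python
# Minimal 1-D finite-difference time-domain (FDTD) update for the wave
# equation, with fixed (u = 0) boundaries and a centred initial displacement.
# Each interior point's update reads only its neighbours from the previous
# two time levels, which is why the scheme parallelises so well.
def fdtd_1d(n, steps, courant=0.5):
    """u_next[i] = 2u[i] - u_prev[i] + C^2 (u[i+1] - 2u[i] + u[i-1])."""
    u_prev = [0.0] * n
    u = [0.0] * n
    u_prev[n // 2] = u[n // 2] = 1.0  # initial 'pluck', zero initial velocity
    c2 = courant * courant
    for _ in range(steps):
        u_next = [0.0] * n            # boundaries stay clamped at zero
        for i in range(1, n - 1):
            u_next[i] = (2.0 * u[i] - u_prev[i]
                         + c2 * (u[i + 1] - 2.0 * u[i] + u[i - 1]))
        u_prev, u = u, u_next
    return u

state = fdtd_1d(n=101, steps=50)
# The pulse splits into two symmetric waves travelling outward.
```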
7

Bayne, Ethan. "Accelerating digital forensic searching through GPGPU parallel processing techniques." Thesis, Abertay University, 2017. https://rke.abertay.ac.uk/en/studentTheses/702de12a-e10b-4daa-8baf-c2c57a501240.

Abstract:
Background: String searching within a large corpus of data is a critical component of digital forensic (DF) analysis techniques such as file carving. The continuing increase in capacity of consumer storage devices requires similar improvements to the performance of string searching techniques employed by DF tools used to analyse forensic data. As string searching is a trivially parallelisable problem, general-purpose graphics processing unit (GPGPU) approaches are a natural fit. Currently, only some of the research in GPGPU programming has been transferred to the field of DF, and that work used a closed-source GPGPU framework, the Compute Unified Device Architecture (CUDA). Findings from these earlier studies were that the local storage devices from which forensic data are read present an insurmountable performance bottleneck. Aim: This research hypothesises that modern storage devices no longer present a performance bottleneck to the processing techniques currently used in the field, and proposes that an open-standards GPGPU framework, the Open Computing Language (OpenCL), would be better suited to accelerate file carving, with wider compatibility across an array of modern GPGPU hardware. This research further hypothesises that a modern multi-string searching algorithm may be better adapted to fulfil the requirements of DF investigation. Methods: This research presents a review of existing research and tools used to perform file carving and acknowledges related work within the field. To test the hypothesis, parallel file carving software was created using C# and OpenCL, employing both a traditional string searching algorithm and a modern multi-string searching algorithm to conduct an analysis of forensic data. A set of case studies that demonstrate and evaluate potential benefits of adopting various methods of string searching on forensic data is given. The research concludes with a final case study that evaluates file-carving performance with the best-performing string searching solution and compares the result with an existing file carving tool, Foremost. Results: The results establish that the parallelised OpenCL and Parallel Failureless Aho-Corasick (PFAC) algorithm solution delivers significantly greater processing improvements from the use of single and multiple GPUs on modern hardware. In comparison to CPU approaches, GPGPU processing models minimised the time required to search for larger numbers of patterns. Employing PFAC also delivered significant performance increases over the Boyer-Moore (BM) algorithm. The method used to read data from storage devices was also seen to have a significant effect on the time required to perform string searching and file carving. Conclusions: Empirical testing shows that the proposed string searching method is more efficient than the widely adopted Boyer-Moore algorithms when applied to string searching and file carving. The developed OpenCL GPGPU processing framework was found to be more efficient than CPU counterparts when searching for larger numbers of patterns within data. This research also refutes claims that file carving is solely limited by the performance of the storage device, and presents compelling evidence that performance is bound by the combination of storage-device performance and the processing technique employed.
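The multi-string matching underlying PFAC is the Aho-Corasick automaton; a compact serial Python sketch is shown below. PFAC itself removes the failure transitions and assigns one GPU thread per starting position, which this sketch does not attempt to reproduce.

```python
# Serial Aho-Corasick multi-pattern search: build a trie over the patterns,
# add failure links by BFS, then scan the text once, reporting every match
# as (start_index, pattern).
from collections import deque

def build_automaton(patterns):
    goto, out, fail = [{}], [set()], [0]
    for p in patterns:                 # trie construction
        s = 0
        for ch in p:
            if ch not in goto[s]:
                goto.append({})
                out.append(set())
                fail.append(0)
                goto[s][ch] = len(goto) - 1
            s = goto[s][ch]
        out[s].add(p)
    dq = deque(goto[0].values())       # BFS to set failure links
    while dq:
        s = dq.popleft()
        for ch, t in goto[s].items():
            dq.append(t)
            f = fail[s]
            while f and ch not in goto[f]:
                f = fail[f]
            cand = goto[f].get(ch, 0)
            fail[t] = cand if cand != t else 0
            out[t] |= out[fail[t]]     # inherit matches ending here
    return goto, out, fail

def search(text, patterns):
    goto, out, fail = build_automaton(patterns)
    s, hits = 0, []
    for i, ch in enumerate(text):
        while s and ch not in goto[s]:
            s = fail[s]
        s = goto[s].get(ch, 0)
        for p in out[s]:
            hits.append((i - len(p) + 1, p))
    return sorted(hits)
```

For example, `search("ushers", ["he", "she", "his", "hers"])` finds "she", "he", and "hers" in a single pass over the text.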
8

Srivastava, Rohit Kumar. "Modeling Performance of Tensor Transpose using Regression Techniques." The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1524080824154753.

9

Titos, Gil José Rubén. "Hardware Techniques for High-Performance Transactional Memory in Many-Core Chip Multiprocessors." Doctoral thesis, Universidad de Murcia, 2011. http://hdl.handle.net/10803/51473.

Abstract:
This thesis investigates the efficient hardware implementation of transactional memory (HTM) systems on a scalable chip multiprocessor (CMP), identifying aspects that limit performance and proposing techniques that remedy these pathologies. The contributions of the thesis are several complementary HTM designs that achieve robust performance and avoid pathological behaviour by introducing flexibility and adaptability, with hardly any increase in the complexity of the overall system. The dissertation considers both eager-policy HTM systems and those designed under the lazy approach, and addresses the performance overheads inherent to each policy. Perhaps the most relevant contribution of this thesis is ZEBRA, a hybrid-policy HTM system that adapts its behaviour to the dynamic characteristics of the workload.
This thesis focuses on the hardware mechanisms that provide optimistic concurrency control with guarantees of atomicity and isolation, with the intent of achieving high-performance across a variety of workloads, at a reasonable cost in terms of design complexity. This thesis identifies key inefficiencies that impact the performance of several hardware implementations of TM, and proposes mechanisms to overcome such limitations. In this dissertation we consider both eager and lazy approaches to HTM system design, and address important sources of overhead that are inherent to each policy. This thesis presents a hybrid-policy, adaptable HTM system that combines the advantages of both eager and lazy approaches in a low complexity design. Furthermore, this thesis investigates the overheads of the simpler, fixed-policy HTM designs that leverage a distributed directory-based coherence protocol to detect data races over a scalable interconnect, and develops solutions that address some performance degrading factors.
10

Protze, Joachim [Verfasser]. "Modular Techniques and Interfaces for Data Race Detection in Multi-Paradigm Parallel Programming / Joachim Protze." Berlin : epubli, 2021. http://d-nb.info/1239488076/34.


Books on the topic "Parallel programming techniques"

1

Taylor, Stephen. Parallel logic programming techniques. Englewood Cliffs, N.J: Prentice Hall, 1989.

2

Parallel logic programming techniques. Englewood Cliffs, N.J: Prentice Hall, 1989.

3

Allen, C. Michael, ed. Parallel programming: Techniques and applications using networked workstations and parallel computers. Upper Saddle River, N.J: Prentice Hall, 1999.

4

Allen, C. Michael, ed. Parallel programming: Techniques and applications using networked workstations and parallel computers. 2nd ed. Upper Saddle River, NJ: Pearson/Prentice Hall, 2005.

5

Grove, D. A. Performance modelling techniques for parallel supercomputing applications. Hauppauge NY: Nova Science Publishers, 2009.

6

Concurrent programming: Fundamental techniques for real-time and parallel software design. Chichester: Wiley, 1989.

7

Harris, Tim J. A survey of PRAM simulation techniques. Edinburgh: University of Edinburgh, Dept. of Computer Science, 1994.

8

Banks, H. Thomas. Optimal feedback control of infinite dimensional parabolic evolution systems: Approximation techniques. Hampton, Va: National Aeronautics and Space Administration, Langley Research Center, 1989.

9

Rünger, Gudula, and SpringerLink (Online service), eds. Parallel Programming: For Multicore and Cluster Systems. 2nd ed. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013.

10

Alvarez-Mesa, Mauricio, Chi Ching Chi, Arnaldo Azevedo, Cor Meenderinck, Alex Ramirez, and SpringerLink (Online service), eds. Scalable Parallel Programming Applied to H.264/AVC Decoding. New York, NY: Springer New York, 2012.


Book chapters on the topic "Parallel programming techniques"

1

Loyens, L. D. J. C. "Parallel programming techniques for linear algebra." In Parallel Computing 1988, 32–43. Berlin, Heidelberg: Springer Berlin Heidelberg, 1989. http://dx.doi.org/10.1007/3-540-51604-2_3.

2

Malinowski, K., and J. Sadecki. "Dynamic Programming: A Parallel Implementation." In Parallel Processing Techniques for Simulation, 161–70. Boston, MA: Springer US, 1986. http://dx.doi.org/10.1007/978-1-4684-5218-1_12.

3

Eppstein, David, and Zvi Galil. "Parallel algorithmic techniques for combinatorial computation." In Automata, Languages and Programming, 304–18. Berlin, Heidelberg: Springer Berlin Heidelberg, 1989. http://dx.doi.org/10.1007/bfb0035768.

4

Müller, Matthias. "Some Simple OpenMP Optimization Techniques." In OpenMP Shared Memory Parallel Programming, 31–39. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-44587-0_4.

5

Puccetti, Armand. "Extending the Techniques to Parallel Programs." In The Programming and Proof System ATES, 200–257. Berlin, Heidelberg: Springer Berlin Heidelberg, 1991. http://dx.doi.org/10.1007/978-3-642-84542-0_7.

6

Li, Songyuan, Hong Shen, and Yingpeng Sang. "A Survey of Privacy-Preserving Techniques on Trajectory Data." In Parallel Architectures, Algorithms and Programming, 461–76. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-2767-8_41.

7

Folino, Gianluigi, Clara Pizzuti, and Giandomenico Spezzano. "Ensemble Techniques for Parallel Genetic Programming Based Classifiers." In Lecture Notes in Computer Science, 59–69. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/3-540-36599-0_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Fontaine, Daniel, Laurent Michel, and Pascal Van Hentenryck. "Parallel Composition of Scheduling Solvers." In Integration of AI and OR Techniques in Constraint Programming, 159–69. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-33954-2_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Moisan, Thierry, Claude-Guy Quimper, and Jonathan Gaudreault. "Parallel Depth-Bounded Discrepancy Search." In Integration of AI and OR Techniques in Constraint Programming, 377–93. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-07046-9_27.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Bergman, David, Andre A. Cire, Ashish Sabharwal, Horst Samulowitz, Vijay Saraswat, and Willem-Jan van Hoeve. "Parallel Combinatorial Optimization with Decision Diagrams." In Integration of AI and OR Techniques in Constraint Programming, 351–67. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-07046-9_25.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Parallel programming techniques"

1. Steele, G. L. "Parallel programming and parallel abstractions in Fortress." In 14th International Conference on Parallel Architectures and Compilation Techniques (PACT'05). IEEE, 2005. http://dx.doi.org/10.1109/pact.2005.34.

2. Cascaval, Calin. "Parallel programming for mobile computing." In 22nd International Conference on Parallel Architectures and Compilation Techniques (PACT). IEEE, 2013. http://dx.doi.org/10.1109/pact.2013.6618790.

3. Roy, Indranil, Ankit Srivastava, and Srinivas Aluru. "Programming Techniques for the Automata Processor." In 2016 45th International Conference on Parallel Processing (ICPP). IEEE, 2016. http://dx.doi.org/10.1109/icpp.2016.30.

4. Singh, Satnam. "New parallel programming techniques for hardware design." In 2007 IFIP International Conference on Very Large Scale Integration. IEEE, 2007. http://dx.doi.org/10.1109/vlsisoc.2007.4402491.

5. Gregg, David. "Session details: Parallelization and parallel programming II." In PACT '10: International Conference on Parallel Architectures and Compilation Techniques. New York, NY, USA: ACM, 2010. http://dx.doi.org/10.1145/3254478.

6. Eigenmann, Rudolf. "Session details: Parallelization and parallel programming I." In PACT '10: International Conference on Parallel Architectures and Compilation Techniques. New York, NY, USA: ACM, 2010. http://dx.doi.org/10.1145/3254473.

7. Oppacher, Yandu, Franz Oppacher, and Dwight Deugo. "Creating Objects Using Genetic Programming Techniques." In 2009 10th ACIS International Conference on Software Engineering, Artificial Intelligences, Networking and Parallel/Distributed Computing. IEEE, 2009. http://dx.doi.org/10.1109/snpd.2009.82.

8. Han, Zhijie, and Miaoxin Xu. "Machine Learning Techniques in Storm." In 2015 Seventh International Symposium on Parallel Architectures, Algorithms and Programming (PAAP). IEEE, 2015. http://dx.doi.org/10.1109/paap.2015.35.

9. Baek, Woongki, Chi Cao Minh, Martin Trautmann, Christos Kozyrakis, and Kunle Olukotun. "The OpenTM Transactional Application Programming Interface." In 16th International Conference on Parallel Architecture and Compilation Techniques (PACT 2007). IEEE, 2007. http://dx.doi.org/10.1109/pact.2007.4336227.

10. Ghosh, Sayan, and Barbara Chapman. "Programming Strategies for GPUs and their Power Consumption." In 2011 International Conference on Parallel Architectures and Compilation Techniques (PACT). IEEE, 2011. http://dx.doi.org/10.1109/pact.2011.51.

Reports on the topic "Parallel programming techniques"

1. Beguelin, Adam, and Gary Nutt. Visual Parallel Programming with Determinacy: A Language Specification, an Analysis Technique, and a Programming Tool. Fort Belvoir, VA: Defense Technical Information Center, June 1993. http://dx.doi.org/10.21236/ada267560.