Academic literature on the topic 'Shared-memory parallel programming'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Shared-memory parallel programming.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "Shared-memory parallel programming"
Beck, B. "Shared-memory parallel programming in C++." IEEE Software 7, no. 4 (July 1990): 38–48. http://dx.doi.org/10.1109/52.56449.
Bonetta, Daniele, Luca Salucci, Stefan Marr, and Walter Binder. "GEMs: shared-memory parallel programming for Node.js." ACM SIGPLAN Notices 51, no. 10 (December 5, 2016): 531–47. http://dx.doi.org/10.1145/3022671.2984039.
Deshpande, Ashish, and Martin Schultz. "Efficient Parallel Programming with Linda." Scientific Programming 1, no. 2 (1992): 177–83. http://dx.doi.org/10.1155/1992/829092.
Quammen, Cory. "Introduction to programming shared-memory and distributed-memory parallel computers." XRDS: Crossroads, The ACM Magazine for Students 8, no. 3 (April 2002): 16–22. http://dx.doi.org/10.1145/567162.567167.
Quammen, Cory. "Introduction to programming shared-memory and distributed-memory parallel computers." XRDS: Crossroads, The ACM Magazine for Students 12, no. 1 (October 2005): 2. http://dx.doi.org/10.1145/1144382.1144384.
Keane, J. A., A. J. Grant, and M. Q. Xu. "Comparing distributed memory and virtual shared memory parallel programming models." Future Generation Computer Systems 11, no. 2 (March 1995): 233–43. http://dx.doi.org/10.1016/0167-739x(94)00065-m.
Redondo, J. L., I. García, and P. M. Ortigosa. "Parallel evolutionary algorithms based on shared memory programming approaches." Journal of Supercomputing 58, no. 2 (December 18, 2009): 270–79. http://dx.doi.org/10.1007/s11227-009-0374-6.
Di Martino, Beniamino, Sergio Briguglio, Gregorio Vlad, and Giuliana Fogaccia. "Workload Decomposition Strategies for Shared Memory Parallel Systems with OpenMP." Scientific Programming 9, no. 2-3 (2001): 109–22. http://dx.doi.org/10.1155/2001/891073.
Alaghband, Gita, and Harry F. Jordan. "Overview of the Force Scientific Parallel Language." Scientific Programming 3, no. 1 (1994): 33–47. http://dx.doi.org/10.1155/1994/632497.
Warren, Karen H. "PDDP, A Data Parallel Programming Model." Scientific Programming 5, no. 4 (1996): 319–27. http://dx.doi.org/10.1155/1996/857815.
Dissertations / Theses on the topic "Shared-memory parallel programming"
Ravela, Srikar Chowdary. "Comparison of Shared memory based parallel programming models." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3384.
From this study it is clear that the Pthreads threading model is dominant in speedup, achieving the highest speedups for two of the three dwarfs. The tasking models, on the other hand, are dominant in development time and in reducing the number of errors: they show strong speedup growth for applications without communication, but less growth in self-relative speedup for applications that involve communication. The tasking models degrade on communication-based problems because task-based models are designed to execute tasks in parallel without interruption or preemption during their computation; introducing communication violates this design and therefore costs performance. The directive model, OpenMP, is moderate in both aspects and stands between these models. In general, the directive and tasking models offer better speedup than the other models for task-based problems built on the divide-and-conquer strategy. For data parallelism their speedup growth is low (i.e. they are less scalable for data-parallel applications), yet their execution times remain comparable to those of the threading models. Development times for data-parallel applications are also considerably low, because these models ease development by requiring fewer functional routines to parallelize an application. This thesis compares shared-memory parallel programming models in terms of speedup, and the results can serve as a guide for programmers developing applications under these models. We suggest two extensions of this work: one from the developer's perspective, and one as a cross-referential study of the parallel programming models. The former can be carried out by having a different programmer repeat a similar study and comparing the two studies; the latter by including multiple data points in the same programming model, or by studying a different set of parallel programming models.
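The development-time gap the abstract describes between the explicit threading model and the directive model is easy to see side by side. Below is a minimal sketch, our own illustration rather than code from the thesis (array size, thread count, and file name are arbitrary assumptions), that sums an array first with Pthreads and then with a single OpenMP reduction directive:

    /* sum.c -- build with, e.g.: gcc -O2 -fopenmp sum.c -o sum -lpthread */
    #include <pthread.h>
    #include <stdio.h>

    #define N 1000000
    #define NTHREADS 4

    static double data[N];
    static double partial[NTHREADS];

    /* Pthreads: each worker sums its own contiguous slice by hand. */
    static void *worker(void *arg) {
        long id = (long)arg;
        long lo = id * (N / NTHREADS);
        long hi = (id == NTHREADS - 1) ? N : lo + N / NTHREADS;
        double s = 0.0;
        for (long i = lo; i < hi; i++) s += data[i];
        partial[id] = s;
        return NULL;
    }

    int main(void) {
        for (long i = 0; i < N; i++) data[i] = 1.0;

        /* Explicit threading model: create, join, and combine manually. */
        pthread_t t[NTHREADS];
        for (long id = 0; id < NTHREADS; id++)
            pthread_create(&t[id], NULL, worker, (void *)id);
        double sum_pthreads = 0.0;
        for (long id = 0; id < NTHREADS; id++) {
            pthread_join(t[id], NULL);
            sum_pthreads += partial[id];
        }

        /* Directive model: one pragma expresses the same computation. */
        double sum_omp = 0.0;
        #pragma omp parallel for reduction(+:sum_omp) num_threads(NTHREADS)
        for (long i = 0; i < N; i++) sum_omp += data[i];

        printf("pthreads: %.0f  openmp: %.0f\n", sum_pthreads, sum_omp);
        return 0;
    }

Both variants print the same total; the contrast is that the directive version needs no explicit thread creation, work partitioning, or joining, which is the ease-of-development effect the study measures.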
Schneider, Scott. "Shared Memory Abstractions for Heterogeneous Multicore Processors." Diss., Virginia Tech, 2010. http://hdl.handle.net/10919/30240.
Stoker, Michael Allan. "The exploitation of parallelism on shared memory multiprocessors." Thesis, University of Newcastle upon Tyne, 1990. http://hdl.handle.net/10443/2000.
Full textKarlbom, David. "A Performance Evaluation of MPI Shared Memory Programming." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-188676.
In this thesis we investigate the Message Passing Interface (MPI) support for shared memory programming on modern hardware architectures with multiple Non-Uniform Memory Access (NUMA) domains. We evaluate performance through two case studies: matrix-matrix multiplication and Conway's Game of Life. We compare the performance of MPI shared memory, in terms of execution time and memory consumption, against OpenMP and MPI point-to-point communication, also known as MPI two-sided. We perform strong scaling tests for both case studies. We observe that MPI two-sided is 21% faster than MPI shared memory and 18% faster than OpenMP for matrix-matrix multiplication on 32 processors. For the same test data, MPI shared memory has 45% lower memory consumption than MPI two-sided. For Conway's Game of Life, MPI two-sided is 10% faster than MPI shared memory and 82% faster than the OpenMP implementation on 32 processors. We also found that if the virtual memory is not mapped to a specific NUMA domain, execution time increases by up to 64% when 32 processors are used. We conclude that MPI shared memory is useful for intra-node communication on modern hardware architectures with multiple NUMA domains.
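The "MPI shared memory" variant evaluated above refers to MPI-3 shared-memory windows. As a hedged sketch of that mechanism, our own illustration rather than code from the thesis, the following allocates a window among the ranks of one node and lets each rank read a neighbour's slot through an ordinary pointer instead of sending a message:

    /* shm.c -- build with an MPI-3 installation, e.g.: mpicc -O2 shm.c -o shm */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        /* Only ranks on the same node can share memory, so split them off. */
        MPI_Comm node;
        MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                            MPI_INFO_NULL, &node);
        int rank, size;
        MPI_Comm_rank(node, &rank);
        MPI_Comm_size(node, &size);

        /* Each rank contributes one int slot to a node-local shared window. */
        int *mine;
        MPI_Win win;
        MPI_Win_allocate_shared(sizeof(int), sizeof(int), MPI_INFO_NULL,
                                node, &mine, &win);
        *mine = rank * rank;

        /* One valid synchronization choice; a barrier plus MPI_Win_sync is common too. */
        MPI_Win_fence(0, win);

        /* Read the neighbour's slot directly through a pointer: no message passing. */
        MPI_Aint sz;
        int disp;
        int *theirs;
        MPI_Win_shared_query(win, (rank + 1) % size, &sz, &disp, &theirs);
        printf("rank %d sees neighbour value %d\n", rank, *theirs);

        MPI_Win_free(&win);
        MPI_Comm_free(&node);
        MPI_Finalize();
        return 0;
    }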
Atukorala, G. S. "Porting a distributed operating system to a shared memory parallel computer." Thesis, University of Bath, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.256756.
Almas, Luís Pedro Parreira Galito Pimenta. "DSM-PM2 adequacy for distributed constraint programming." Master's thesis, Universidade de Évora, 2007. http://hdl.handle.net/10174/16454.
Full textCordeiro, Silvio Ricardo. "Code profiling and optimization in transactional memory systems." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2014. http://hdl.handle.net/10183/97866.
Transactional Memory has shown itself to be a promising paradigm for the implementation of shared-memory concurrent applications that eschew a lock-based model of data synchronization. Rather than conditioning exclusive access on the value of a lock that is shared across concurrent threads, Transactional Memory attempts to execute critical sections optimistically, rolling back the modifications in the event of a data access conflict. However, while the lock-based approach has acquired a significant body of debugging, profiling and automated optimization tools (as one of the oldest and most researched synchronization techniques), the field of Transactional Memory is still comparably recent, and programmers are usually tasked with an unguided manual tuning of their transactional applications when facing efficiency problems. We propose a system in which code profiling in a simulated hardware implementation of Transactional Memory is used to characterize a transactional application, which forms the basis for the automated tuning of the underlying speculative system for the efficient execution of that particular application. We also propose a profile-guided approach to the scheduling of threads in a software-based implementation of Transactional Memory, using collected data to predict the likelihood of conflicts and determine what thread to schedule based on this prediction. We present the results achieved under both designs.
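As context for the abstract, here is a minimal sketch of the transactional style being profiled, written against GCC's -fgnu-tm language extension; this toolchain choice is our assumption, since the dissertation itself targets a simulated hardware TM and a software TM implementation:

    /* account.c -- build with, e.g.: gcc -O2 -fgnu-tm -pthread account.c -o account */
    #include <pthread.h>
    #include <stdio.h>

    static long balance = 0;

    static void *deposit(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            /* No lock object: the block executes speculatively, and a
             * conflicting access from the other thread aborts and retries it. */
            __transaction_atomic {
                balance += 1;
            }
        }
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, deposit, NULL);
        pthread_create(&b, NULL, deposit, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("balance = %ld (expect 200000)\n", balance);
        return 0;
    }

The profiling problem the dissertation addresses starts exactly here: how often such blocks conflict, and therefore how the underlying speculative system should be tuned, is invisible in the source.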
Farooq, Mohammad Habibur Rahman & Qaisar. "Performance Prediction of Parallel Programs in a Linux Environment." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-1143.
Tillenius, Martin. "Scientific Computing on Multicore Architectures." Doctoral thesis, Uppsala universitet, Avdelningen för beräkningsvetenskap, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-221241.
Bokhari, Saniyah S. "Parallel Solution of the Subset-sum Problem: An Empirical Study." The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1305898281.
Books on the topic "Shared-memory parallel programming"
Mueller, Matthias S., Barbara M. Chapman, Bronis R. de Supinski, Allen D. Malony, and Michael Voss, eds. OpenMP Shared Memory Parallel Programming. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-68555-5.
Voss, Michael J., ed. OpenMP Shared Memory Parallel Programming. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/3-540-45009-2.
Eigenmann, Rudolf, and Michael J. Voss, eds. OpenMP Shared Memory Parallel Programming. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-44587-0.
Chapman, Barbara. Using OpenMP: Portable shared memory parallel programming. Cambridge, Mass: The MIT Press, 2008.
Chapman, Barbara. Using OpenMP: Portable shared memory parallel programming. Cambridge, MA: The MIT Press, 2006.
Chapman, Barbara M., ed. Shared Memory Parallel Programming with OpenMP. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/b105895.
Full textScalable parallel sparse LU factorization methods on shared memory multiprocessors. Konstanz: Hartung-Gorre, 2000.
Chung, Ki-Sung. A parallel, virtual shared memory implementation of the architecture-independent programming language UNITY. Manchester: University of Manchester, 1995.
Voss, Michael J., ed. OpenMP shared memory parallel programming: International Workshop on OpenMP Applications and Tools, WOMPAT 2003, Toronto, Canada, June 26-27, 2003: proceedings. New York: Springer, 2003.
Musgrave, Jeffrey L. Shared direct memory access on the Explorer II-LX. [Washington, DC]: National Aeronautics and Space Administration, 1990.
Book chapters on the topic "Shared-memory parallel programming"
Hoeflinger, Jay P., and Bronis R. de Supinski. "The OpenMP Memory Model." In OpenMP Shared Memory Parallel Programming, 167–77. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-68555-5_14.
Jamieson, Peter, and Angelos Bilas. "CableS: Thread Control and Memory System Extensions for Shared Virtual Memory Clusters." In OpenMP Shared Memory Parallel Programming, 170–84. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-44587-0_15.
Aslot, Vishal, Max Domeika, Rudolf Eigenmann, Greg Gaertner, Wesley B. Jones, and Bodo Parady. "SPEComp: A New Benchmark Suite for Measuring Parallel Computer Performance." In OpenMP Shared Memory Parallel Programming, 1–10. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-44587-0_1.
Nikolopoulos, Dimitrios S., and Eduard Ayguadé. "A Study of Implicit Data Distribution Methods for OpenMP Using the SPEC Benchmarks." In OpenMP Shared Memory Parallel Programming, 115–29. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-44587-0_11.
Sato, Mitsuhisa, Motonari Hirano, Yoshio Tanaka, and Satoshi Sekiguchi. "OmniRPC: A Grid RPC Facility for Cluster and Global Computing in OpenMP." In OpenMP Shared Memory Parallel Programming, 130–36. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-44587-0_12.
Gonzalez, M., E. Ayguadé, X. Martorell, and J. Labarta. "Defining and Supporting Pipelined Executions in OpenMP." In OpenMP Shared Memory Parallel Programming, 155–69. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-44587-0_14.
Min, Seung Jai, Seon Wook Kim, Michael Voss, Sang Ik Lee, and Rudolf Eigenmann. "Portable Compilers for OpenMP." In OpenMP Shared Memory Parallel Programming, 11–19. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-44587-0_2.
Kusano, Kazuhiro, Mitsuhisa Sato, Takeo Hosomi, and Yoshiki Seo. "The Omni OpenMP Compiler on the Distributed Shared Memory of Cenju-4." In OpenMP Shared Memory Parallel Programming, 20–30. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-44587-0_3.
Müller, Matthias. "Some Simple OpenMP Optimization Techniques." In OpenMP Shared Memory Parallel Programming, 31–39. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-44587-0_4.
Caubet, Jordi, Judit Gimenez, Jesus Labarta, Luiz DeRose, and Jeffrey Vetter. "A Dynamic Tracing Mechanism for Performance Analysis of OpenMP Applications." In OpenMP Shared Memory Parallel Programming, 53–67. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-44587-0_6.
Conference papers on the topic "Shared-memory parallel programming"
Bonetta, Daniele, Luca Salucci, Stefan Marr, and Walter Binder. "GEMs: shared-memory parallel programming for Node.js." In SPLASH '16: Conference on Systems, Programming, Languages, and Applications: Software for Humanity. New York, NY, USA: ACM, 2016. http://dx.doi.org/10.1145/2983990.2984039.
Zhang, Yu, and Wei Hu. "Exploring Deterministic Shared Memory Programming Model." In 2012 13th International Conference on Parallel and Distributed Computing Applications and Technologies (PDCAT). IEEE, 2012. http://dx.doi.org/10.1109/pdcat.2012.74.
Shibu, P. S., Atul, Balamati Choudhury, and Raveendranath U. Nair. "Shared Memory Architecture based Parallel Programming for RCS Estimation." In 2018 International Conference on Applied Electromagnetics, Signal Processing and Communication (AESPC). IEEE, 2018. http://dx.doi.org/10.1109/aespc44649.2018.9033194.
Senghor, Abdourahmane, and Karim Konate. "A Java Hybrid Compiler for Shared Memory Parallel Programming." In 2012 13th International Conference on Parallel and Distributed Computing Applications and Technologies (PDCAT). IEEE, 2012. http://dx.doi.org/10.1109/pdcat.2012.21.
Ohno, Kazuhiko, Dai Michiura, Masaki Matsumoto, Takahiro Sasaki, and Toshio Kondo. "A GPGPU Programming Framework based on a Shared-Memory Model." In Parallel and Distributed Computing and Systems. Calgary, AB, Canada: ACTAPRESS, 2012. http://dx.doi.org/10.2316/p.2012.757-097.
Ohno, Kazuhiko, Dai Michiura, Masaki Matsumoto, Takahiro Sasaki, and Toshio Kondo. "A GPGPU Programming Framework based on a Shared-Memory Model." In Parallel and Distributed Computing and Systems. Calgary, AB, Canada: ACTAPRESS, 2011. http://dx.doi.org/10.2316/p.2011.757-097.
Chapman, B. "Scalable Shared Memory Parallel Programming: Will One Size Fit All?" In 14th Euromicro International Conference on Parallel, Distributed, and Network-Based Processing (PDP'06). IEEE, 2006. http://dx.doi.org/10.1109/pdp.2006.64.
Karantasis, Konstantinos I., and Eleftherios D. Polychronopoulos. "Programming GPU Clusters with Shared Memory Abstraction in Software." In 2011 19th Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP). IEEE, 2011. http://dx.doi.org/10.1109/pdp.2011.91.
Bättig, Martin, and Thomas R. Gross. "Synchronized-by-Default Concurrency for Shared-Memory Systems." In PPoPP '17: 22nd ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3018743.3018747.
Hayashi, Koby, Grey Ballard, Yujie Jiang, and Michael J. Tobia. "Shared-memory parallelization of MTTKRP for dense tensors." In PPoPP '18: 23nd ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3178487.3178522.
Reports on the topic "Shared-memory parallel programming"
Goudy, Susan Phelps, Jonathan Leighton Brown, Zhaofang Wen, Michael Allen Heroux, and Shan Shan Huang. BEC: a virtual shared memory parallel programming environment. Office of Scientific and Technical Information (OSTI), January 2006. http://dx.doi.org/10.2172/882923.