Academic literature on the topic 'Message-Passing Interface (MPI)'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Message-Passing Interface (MPI).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Message-Passing Interface (MPI)"

1

Li, W. J., and J. J. Tsay. "Checkpointing message passing interface (MPI) parallel programs." Computer Standards & Interfaces 20, no. 6-7 (March 1999): 425. http://dx.doi.org/10.1016/s0920-5489(99)90841-3.

2

AlDhubhani, Raed, Fathy Eassa, and Faisal Saeed. "Exascale Message Passing Interface based Program Deadlock Detection." International Journal of Electrical and Computer Engineering (IJECE) 6, no. 2 (April 1, 2016): 887–894. http://dx.doi.org/10.11591/ijece.v6i2.9575.

Abstract:
Deadlock detection is one of the main issues of software testing in High Performance Computing (HPC) and, in the near future, in exascale computing. Developing and testing programs for machines with millions of cores is not an easy task. An HPC program consists of thousands (or millions) of parallel processes which need to communicate with each other at runtime. Message Passing Interface (MPI) is a standard library which provides this communication capability, and it is frequently used in HPC. Exascale programs are expected to be developed using the MPI standard library. For parallel programs, deadlock is one of the expected problems. In this paper, we discuss deadlock detection for exascale MPI-based programs, where scalability and efficiency are critical issues. The proposed method detects and flags, in a scalable and efficient manner, the processes and communication operations with the potential to cause deadlocks. MPI benchmark programs were used to test the proposed method.
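
The blocking-send cycle such detectors flag can be reproduced in a few lines. The following is a minimal illustrative sketch, not the paper's method: two ranks each issue a blocking MPI_Send before posting a receive, so once the message is too large for eager delivery, both block forever.

```c
/* Classic head-to-head deadlock: with 2 ranks, both block in MPI_Send
 * once the message exceeds the eager threshold, so neither reaches
 * MPI_Recv. Build with mpicc; run with: mpirun -np 2 ./a.out */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank;
    const int N = 1 << 22;   /* large enough to force the rendezvous protocol */
    double *buf = malloc(N * sizeof(double));

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    int peer = 1 - rank;     /* assumes exactly two ranks */

    MPI_Send(buf, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD);   /* both block here */
    MPI_Recv(buf, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    free(buf);
    MPI_Finalize();
    return 0;
}
```

Replacing the pair with a single MPI_Sendrecv, or posting MPI_Irecv before the send, breaks the cycle; a detector in the spirit of the paper would flag the two send operations as a potential wait-for cycle.
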
3

Skjellum, Anthony, Ewing Lusk, and William Gropp. "Early Applications in the Message-Passing Interface (MPI)." International Journal of Supercomputer Applications and High Performance Computing 9, no. 2 (June 1995): 79–94. http://dx.doi.org/10.1177/109434209500900202.

4

Donegan, Brendan J., Daniel C. Doolan, and Sabin Tabirca. "Mobile Message Passing using a Scatternet Framework." International Journal of Computers Communications & Control 3, no. 1 (March 1, 2008): 51. http://dx.doi.org/10.15837/ijccc.2008.1.2374.

Abstract:
The Mobile Message Passing Interface is a library which implements MPI functionality on Bluetooth-enabled mobile phones. It provides many of the functions available in MPI, including point-to-point and global communication. The main restriction of the library is that it was designed to work over Bluetooth piconets. Piconet-based networks allow a maximum of eight devices to be connected simultaneously, which limits the library's usefulness for parallel computing. A solution to this problem is presented that provides the same functionality as the original Mobile MPI library but is implemented over a Bluetooth scatternet. A scatternet may be defined as a number of piconets interconnected by common node(s). An outline of the scatternet design is explained and its major components discussed.
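
For reference, the point-to-point and global (collective) operations that Mobile MPI mirrors look like this in standard C MPI; the sketch below is illustrative and unrelated to the Bluetooth implementation.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, value = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) value = 42;

    /* Global communication: every rank receives rank 0's value. */
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);

    /* Point-to-point: rank 0 sends to rank 1, if it exists. */
    if (rank == 0 && size > 1)
        MPI_Send(&value, 1, MPI_INT, 1, 99, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(&value, 1, MPI_INT, 0, 99, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    printf("rank %d of %d has value %d\n", rank, size, value);
    MPI_Finalize();
    return 0;
}
```
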
5

Hempel, Rolf, and Falk Zimmermann. "Automatic Migration from PARMACS to MPI in Parallel Fortran Applications." Scientific Programming 7, no. 1 (1999): 39–46. http://dx.doi.org/10.1155/1999/890514.

Abstract:
The PARMACS message passing interface has been in widespread use by application projects, especially in Europe. With the new MPI standard for message passing, many projects face the problem of replacing PARMACS with MPI. An automatic translation tool has been developed which replaces all PARMACS 6.0 calls in an application program with their corresponding MPI calls. In this paper we describe the mapping of the PARMACS programming model onto MPI. We then present some implementation details of the converter tool.
6

Skjellum, Anthony, Arkady Kanevsky, Yoginder S. Dandass, Jerrell Watts, Steve Paavola, Dennis Cottel, Greg Henley, L. Shane Hebert, Zhenqian Cui, and Anna Rounbehler. "The Real-Time Message Passing Interface Standard (MPI/RT-1.1)." Concurrency and Computation: Practice and Experience 16, S1 (2004): S1–S322. http://dx.doi.org/10.1002/cpe.744.

7

Gravvanis, George A., and Konstantinos M. Giannoutakis. "Parallel Preconditioned Conjugate Gradient Square Method Based on Normalized Approximate Inverses." Scientific Programming 13, no. 2 (2005): 79–91. http://dx.doi.org/10.1155/2005/508607.

Abstract:
A new class of normalized explicit approximate inverse matrix techniques, based on normalized approximate factorization procedures, is introduced for solving sparse linear systems resulting from the finite difference discretization of partial differential equations in three space variables. A new parallel normalized explicit preconditioned conjugate gradient square method, used in conjunction with normalized approximate inverse matrix techniques for efficiently solving sparse linear systems on distributed memory systems with the Message Passing Interface (MPI) communication library, is also presented, along with theoretical estimates on speedups and efficiency. The implementation and performance on a distributed memory MIMD machine using MPI are also investigated. Applications to characteristic initial/boundary value problems in three dimensions are discussed and numerical results are given.
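
The distributed-memory structure of such a solver rests on a simple MPI idiom: each process holds a slice of every vector, and the inner products appearing in each conjugate gradient (square) iteration are completed with MPI_Allreduce. A minimal sketch of that kernel, assuming a block-row distribution (not taken from the paper):

```c
#include <mpi.h>

/* Global inner product over distributed vector slices: each process
 * reduces its local part, then all processes combine the partial sums. */
double dot(const double *x, const double *y, int nlocal, MPI_Comm comm)
{
    double local = 0.0, global = 0.0;
    for (int i = 0; i < nlocal; i++)
        local += x[i] * y[i];
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, comm);
    return global;
}
```

In a CG-type iteration this reduction is the main synchronization point, which is why speedup estimates of the kind the paper derives hinge on the ratio of local floating-point work to global communication.
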
8

Abramov, Sergey, Vladimir Roganov, Valeriy Osipov, and German Matveev. "Implementation of the LAMMPS package using T-system with an Open Architecture." Informatics and Automation 20, no. 4 (August 11, 2021): 971–99. http://dx.doi.org/10.15622/ia.20.4.8.

Abstract:
Supercomputer applications are usually implemented in the C, C++, and Fortran programming languages using different versions of the Message Passing Interface library. The "T-system" project (OpenTS) studies the issues of automatic dynamic parallelization of programs. In practical terms, the implementation of applications in a mixed (hybrid) style is relevant, where one part of the application is written in the automatic dynamic parallelization paradigm and does not use any primitives of the MPI library, while the other part is written using the Message Passing Interface library. In this case, a library that is part of the T-system, called DMPI (Dynamic Message Passing Interface), is used. It is therefore necessary to evaluate the effectiveness of the MPI implementation available in the T-system, and the purpose of this work is to examine it. In a classic MPI application, 0% of the code is implemented using automatic dynamic parallelization and 100% of the code is implemented as a regular Message Passing Interface program. For comparative analysis, the code is first executed on the standard Message Passing Interface for which it was originally written, and then executed using the DMPI library taken from the developed T-system. By comparing the effectiveness of the two approaches, the performance losses and the prospects for using a hybrid programming style are evaluated. Experimental studies on different types of computational problems confirmed that the efficiency losses are negligible. This allowed us to formulate the direction of further work on the T-system and the most promising options for building hybrid applications. Thus, this article presents the results of comparative tests of the LAMMPS application using OpenMPI and using OpenTS DMPI. The test results confirm the effectiveness of the DMPI implementation in the OpenTS parallel programming environment.
9

Protopopov, Boris V., and Anthony Skjellum. "A Multithreaded Message Passing Interface (MPI) Architecture: Performance and Program Issues." Journal of Parallel and Distributed Computing 61, no. 4 (April 2001): 449–66. http://dx.doi.org/10.1006/jpdc.2000.1674.


Dissertations / Theses on the topic "Message-Passing Interface (MPI)"

1

Träff, Jesper. "Aspects of the efficient implementation of the message passing interface (MPI)." Aachen: Shaker, 2009. http://d-nb.info/994501803/04.

2

Katti, Amogh. "Epidemic failure detection and consensus for message passing interface (MPI)." Thesis, University of Reading, 2016. http://centaur.reading.ac.uk/69932/.

3

Rattanapoka, Choopan. "P2P-MPI: A fault-tolerant Message Passing Interface Implementation for Grids." PhD thesis, Université Louis Pasteur - Strasbourg I, 2008. http://tel.archives-ouvertes.fr/tel-00724132.

Abstract:
This thesis demonstrates the feasibility of a middleware for computing grids that takes into account the dynamic nature of this type of platform and the requirements of message-passing parallel programs. To this end, we argue for an architecture that is as distributed as possible: we adopt a peer-to-peer infrastructure for organizing resources, which notably eases resource discovery, and we rely on distributed failure detectors to handle fault tolerance. The dynamic nature of this kind of environment is also a problem for the execution model underlying MPI, since the failure of a single process causes the whole application to stop. The contribution of P2P-MPI in this area is fault tolerance through replication. We believe replication is the approach best suited to a peer-to-peer architecture, as the classic techniques based on checkpoint-and-restart require one or more checkpoint servers. Moreover, replication is completely transparent to the user, in line with the goal of ease of use we set ourselves. We believe that keeping the environment very simple to use, and entirely manageable by a single user, is one of the factors that can increase the number of resources available on the grid. Finally, the major contribution of P2P-MPI is the proposed communication library, which is an implementation of MPJ (MPI adapted to Java) that integrates process replication. This particular aspect of our work argues for close collaboration between the middleware, which knows the state of the grid (e.g., failure detection), and the communication layer, which can adapt its behavior based on that knowledge.
4

Ramesh, Srinivasan. "MPI Performance Engineering with the MPI Tools Information Interface." Thesis, University of Oregon, 2018. http://hdl.handle.net/1794/23779.

Abstract:
The desire for high performance on scalable parallel systems is increasing the complexity and the need to tune MPI implementations. The MPI Tools Information Interface (MPI_T) introduced in the MPI 3.0 standard provides an opportunity for performance tools and external software to introspect and understand MPI runtime behavior at a deeper level to detect scalability issues. The interface also provides a mechanism to fine-tune the performance of the MPI library dynamically at runtime. This thesis describes the motivation, design, and challenges involved in developing an MPI performance engineering infrastructure using MPI_T for two performance toolkits: the TAU Performance System and Caliper. I validate the design of the infrastructure for TAU by developing optimizations for production and synthetic applications. I show that the MPI_T runtime introspection mechanism in Caliper enables a meaningful analysis of performance data. This thesis includes previously published co-authored material.
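
The MPI_T calls the thesis builds on are part of the MPI 3.0 standard and can be exercised directly. A small sketch that lists the performance variables an MPI library exports; which variables appear (if any) is entirely implementation-defined:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, num_pvar;

    /* MPI_T has its own lifecycle and may be used even before MPI_Init. */
    MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);
    MPI_T_pvar_get_num(&num_pvar);

    for (int i = 0; i < num_pvar; i++) {
        char name[256], desc[1024];
        int name_len = sizeof name, desc_len = sizeof desc;
        int verbosity, var_class, bind, readonly, continuous, atomic;
        MPI_Datatype datatype;
        MPI_T_enum enumtype;

        MPI_T_pvar_get_info(i, name, &name_len, &verbosity, &var_class,
                            &datatype, &enumtype, desc, &desc_len,
                            &bind, &readonly, &continuous, &atomic);
        printf("pvar %3d: %s\n", i, name);
    }

    MPI_T_finalize();
    return 0;
}
```

A tool such as TAU or Caliper goes further: it creates sessions and handles (MPI_T_pvar_session_create, MPI_T_pvar_handle_alloc) and reads the counters while the application runs.
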
5

Poole, Jeffrey Hyatt. "Implementation of a Hardware-Optimized MPI Library for the SCMP Multiprocessor." Thesis, Virginia Tech, 2001. http://hdl.handle.net/10919/10064.

Abstract:
As time progresses, computer architects continue to create faster and more complex microprocessors using techniques such as out-of-order execution, branch prediction, dynamic scheduling, and predication. While these techniques enable greater performance, they also increase the complexity and silicon area of the design, which lengthens development and testing times. The shrinking feature sizes associated with newer technology increase wire resistance and signal propagation delays, further complicating large designs. One potential solution is the Single-Chip Message-Passing (SCMP) Parallel Computer, developed at Virginia Tech. SCMP makes use of an architecture where a number of simple processors are tiled across a single chip and connected by a fast interconnection network. The system is designed to take advantage of thread-level parallelism and to keep wire traces short in preparation for even smaller integrated circuit feature sizes. This thesis presents the implementation of the MPI (Message-Passing Interface) communications library on top of SCMP's hardware communication support. Emphasis is placed on the specific needs of this system with regard to MPI. For example, MPI is designed to operate between heterogeneous systems; however, in the SCMP environment such support is unnecessary and wastes resources. The SCMP network is also designed such that messages can be sent with very low latency, but with cooperative multitasking it is difficult to assure a timely response to messages. Finally, the low-level network primitives have no support for send operations that occur before the receiver is prepared, yet that functionality is necessary for MPI support.
Master of Science
6

Strand, Christian. "A Java Founded LOIS-framework and the Message Passing Interface?: An Exploratory Case Study." Thesis, Växjö University, School of Mathematics and Systems Engineering, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:vxu:diva-916.

Abstract:
In this thesis project we have successfully added an MPI extension layer to the LOIS framework. The framework defines an infrastructure for executing and connecting continuous stream processing applications. The MPI extension provides the same amount of stream-based data as the framework's original transport. We assert that an MPI-2 compatible implementation can be a candidate to extend the given framework with an adaptive and flexible communication sub-system. Adaptability is required since the communication subsystem has to be resilient to changes, whether due to optimizations or system requirements.
7

Träff, Jesper Larsson. "Aspects of the efficient Implementation of the Message Passing Interface (MPI)." Aachen: Shaker, 2009. http://d-nb.info/115651794X/34.

8

Holmes, Daniel John. "McMPI: a managed-code message passing interface library for high performance communication in C#." Thesis, University of Edinburgh, 2012. http://hdl.handle.net/1842/7732.

Abstract:
This work endeavours to achieve technology transfer between established best-practice in academic high-performance computing and current techniques in commercial high-productivity computing. It shows that a credible high-performance message-passing communication library, with semantics and syntax following the Message-Passing Interface (MPI) Standard, can be built in pure C# (one of the .Net suite of computer languages). Message-passing has been the dominant paradigm in high-performance parallel programming of distributed-memory computer architectures for three decades. The MPI Standard originally distilled architecture-independent and language-agnostic ideas from existing specialised communication libraries and has since been enhanced and extended. Object-oriented languages can increase programmer productivity, for example by allowing complexity to be managed through encapsulation. Both the C# computer language and the .Net common language runtime (CLR) were originally developed by Microsoft Corporation but have since been standardised by the European Computer Manufacturers Association (ECMA) and the International Standards Organisation (ISO), which facilitates portability of source-code and compiled binary programs to a variety of operating systems and hardware. Combining these two open and mature technologies enables mainstream programmers to write tightly-coupled parallel programs in a popular standardised object-oriented language that is portable to most modern operating systems and hardware architectures. This work also establishes that a thread-to-thread delivery option increases shared-memory communication performance between MPI ranks on the same node. This suggests that the thread-as-rank threading model should be explicitly specified in future versions of the MPI Standard and then added to existing MPI libraries for use by thread-safe parallel codes. This work also ascertains that the C# socket object suffers from undesirable characteristics that are critical to communication performance and proposes ways of improving the implementation of this object.
9

Chen, Zhezhe. "System Support for Improving the Reliability of MPI Applications and Libraries." The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1375880144.

10

Radcliffe, Nicholas Ryan. "Adjusting Process Count on Demand for Petascale Global Optimization." Thesis, Virginia Tech, 2011. http://hdl.handle.net/10919/36349.

Abstract:
There are many challenges that need to be met before efficient and reliable computation at the petascale is possible. Many scientific and engineering codes running at the petascale are likely to be memory intensive, which makes thrashing a serious problem for many petascale applications. One way to overcome this challenge is to use a dynamic number of processes, so that the total amount of memory available for the computation can be increased on demand. This thesis describes modifications made to the massively parallel global optimization code pVTdirect in order to allow for a dynamic number of processes. In particular, the modified version of the code monitors memory use and spawns new processes if the amount of available memory is determined to be insufficient. The primary design challenges are discussed, and performance results are presented and analyzed.
Master of Science
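
The on-demand growth this thesis describes relies on MPI's dynamic process management. A hedged sketch of the core mechanism follows; the worker binary name and process count are purely illustrative. MPI_Comm_spawn launches additional processes and returns an intercommunicator, and MPI_Intercomm_merge folds old and new processes into one communicator over which data can be redistributed.

```c
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Comm children, everyone;
    int errcodes[4];

    MPI_Init(&argc, &argv);

    /* Spawn 4 extra copies of a (hypothetical) worker binary. */
    MPI_Comm_spawn("./worker", MPI_ARGV_NULL, 4, MPI_INFO_NULL,
                   0, MPI_COMM_WORLD, &children, errcodes);

    /* Merge parents and children; 0 orders the parents first. */
    MPI_Intercomm_merge(children, 0, &everyone);

    /* ...redistribute memory-heavy state over 'everyone' and continue... */

    MPI_Finalize();
    return 0;
}
```

The spawned processes obtain the intercommunicator back to their parents with MPI_Comm_get_parent and join the same merge.
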

Books on the topic "Message-Passing Interface (MPI)"

1

Adamo, Jean-Marc. Multi-Threaded Object-Oriented MPI-Based Message Passing Interface. Boston, MA: Springer US, 1998. http://dx.doi.org/10.1007/978-1-4615-5761-6.

2

Gropp, William. Using MPI: Portable parallel programming with the message-passing interface. Cambridge, Mass: MIT Press, 1994.

3

Gropp, William, Ewing Lusk, and Rajeev Thakur. Using MPI-2: Advanced features of the message-passing interface. Cambridge, Mass: MIT Press, 1999.

4

Using MPI-2: Advanced Features of the Message Passing Interface. Cambridge, MA, USA: The MIT Press, 1999.

5

Gropp, William, Ewing Lusk, and Anthony Skjellum. Using MPI: Portable parallel programming with the message-passing interface. 2nd ed. Cambridge, Mass: MIT Press, 1999.

6

Multi-threaded object-oriented MPI-based message passing interface: The ARCH library. Boston, Mass: Kluwer Academic Publishers, 1998.

7

Adamo, Jean-Marc. Multi-Threaded Object-Oriented MPI-Based Message Passing Interface: The ARCH Library. Boston, MA: Springer US, 1998.

8

Keller, Rainer. Recent Advances in the Message Passing Interface: 17th European MPI Users’ Group Meeting, EuroMPI 2010, Stuttgart, Germany, September 12-15, 2010. Proceedings. Berlin, Heidelberg: Springer-Verlag Berlin Heidelberg, 2010.

9

Benkner, Siegfried, and Jack J. Dongarra, eds. Recent Advances in the Message Passing Interface: 19th European MPI Users’ Group Meeting, EuroMPI 2012, Vienna, Austria, September 23-26, 2012. Proceedings. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.

10

Danalis, Anthony, Dimitrios S. Nikolopoulos, and Jack J. Dongarra, eds. Recent Advances in the Message Passing Interface: 18th European MPI Users’ Group Meeting, EuroMPI 2011, Santorini, Greece, September 18-21, 2011. Proceedings. Berlin, Heidelberg: Springer-Verlag GmbH Berlin Heidelberg, 2011.


Book chapters on the topic "Message-Passing Interface (MPI)"

1

Chivers, Ian, and Jane Sleightholme. "MPI—Message Passing Interface." In Introduction to Programming with Fortran, 465–87. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-17701-4_30.

2

Chivers, Ian, and Jane Sleightholme. "MPI - Message Passing Interface." In Introduction to Programming with Fortran, 581–604. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-75502-1_32.

3

Padua, David, Amol Ghoting, John A. Gunnels, Mark S. Squillante, José Meseguer, James H. Cownie, Duncan Roweth, et al. "Message Passing Interface (MPI)." In Encyclopedia of Parallel Computing, 1116. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-09766-4_2085.

4

Padua, David, Amol Ghoting, John A. Gunnels, Mark S. Squillante, José Meseguer, James H. Cownie, Duncan Roweth, et al. "MPI (Message Passing Interface)." In Encyclopedia of Parallel Computing, 1184–90. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-09766-4_222.

5

Agapito, Giuseppe. "Message Passing Interface (MPI)." In Encyclopedia of Systems Biology, 1226. New York, NY: Springer New York, 2013. http://dx.doi.org/10.1007/978-1-4419-9863-7_1604.

6

Chivers, Ian, and Jane Sleightholme. "MPI – Message Passing Interface." In Introduction to Programming with Fortran, 419–45. London: Springer London, 2011. http://dx.doi.org/10.1007/978-0-85729-233-9_28.

7

Gropp, William, Ewing Lusk, and Rajeev Thakur. "Advanced MPI Including New MPI-3 Features." In Recent Advances in the Message Passing Interface, 14. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33518-1_5.

8

Clarke, Lyndon, Ian Glendinning, and Rolf Hempel. "The MPI Message Passing Interface Standard." In Programming Environments for Massively Parallel Distributed Systems, 213–18. Basel: Birkhäuser Basel, 1994. http://dx.doi.org/10.1007/978-3-0348-8534-8_21.

9

Bosilca, George. "Will MPI Remain Relevant?" In Recent Advances in the Message Passing Interface, 8. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-24449-0_3.

10

Nitsche, Thomas, and Wolfram Webers. "Functional message passing with OPAL-MPI." In Recent Advances in Parallel Virtual Machine and Message Passing Interface, 281–88. Berlin, Heidelberg: Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/bfb0056586.


Conference papers on the topic "Message-Passing Interface (MPI)"

1

Karim, Sharmila, Zurni Omar, and Haslinda Ibrahim. "Efficient parallel algorithm for listing permutation with Message Passing Interface (MPI)." In 2015 International Symposium on Mathematical Sciences and Computing Research (iSMSC). IEEE, 2015. http://dx.doi.org/10.1109/ismsc.2015.7594077.

2

Bruck, Jehoshua, Danny Dolev, Ching-Tien Ho, Marcel-Cătălin Roşu, and Ray Strong. "Efficient message passing interface (MPI) for parallel computing on clusters of workstations." In the seventh annual ACM symposium. New York, New York, USA: ACM Press, 1995. http://dx.doi.org/10.1145/215399.215421.

3

Junior, Augusto Mendes Gomes, Fernando Ryoji Kakugawa, Calebe de Paula Bianchini, and Francisco Isidro Massetto. "A thread-safe communication mechanism for message-passing interface based on MPI Standard." In 2009 Joint Conferences on Pervasive Computing (JCPC 2009). IEEE, 2009. http://dx.doi.org/10.1109/jcpc.2009.5420193.

4

Chou, Yu-Cheng, and Harry H. Cheng. "Interpretive MPI for Parallel Computing." In ASME 2008 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2008. http://dx.doi.org/10.1115/detc2008-49996.

Abstract:
Message Passing Interface (MPI) is a standardized library specification designed for message-passing parallel programming on large-scale distributed systems. A number of MPI libraries have been implemented to allow users to develop portable programs using the scientific programming languages Fortran, C, and C++. Ch is an embeddable C/C++ interpreter that provides an interpretive environment for C/C++ based scripts and programs. Combining Ch with any MPI C/C++ library provides the functionality for rapid development of MPI C/C++ programs without compilation. In this article, the method of interfacing Ch scripts with MPI C implementations is introduced, using the MPICH2 C library as an example. The MPICH2-based Ch MPI package provides users with the ability to interpretively run MPI C programs based on the MPICH2 C library. Running MPI programs through the MPICH2-based Ch MPI package across heterogeneous platforms consisting of Linux and Windows machines is illustrated. Comparisons of the bandwidth, latency, and parallel computation speedup between C MPI, Ch MPI, and MPI for Python in an Ethernet-based environment comprising identical Linux machines are presented. A Web-based example is given to demonstrate the use of Ch and MPICH2 in C-based CGI scripting to facilitate the development of Web-based applications for parallel computing.
5

Suzuki, Masaaki, Hiroshi Okuda, and Genki Yagawa. "Large-Scale Biomolecular Dynamics Using SMP Clusters." In 12th International Conference on Nuclear Engineering. ASMEDC, 2004. http://dx.doi.org/10.1115/icone12-49573.

Abstract:
The authors have applied the Message Passing Interface (MPI) / OpenMP hybrid parallel programming model to the molecular dynamics (MD) method for simulating a protein structure on a symmetric multiprocessor (SMP) cluster architecture. On that architecture, the hybrid parallel programming model, which uses a message passing library such as MPI for inter-SMP-node communication and loop directives such as OpenMP for intra-SMP-node parallelization, can be expected to be the most effective one. In this study, the parallel performance of the hybrid style has been compared with that of the conventional flat parallel programming style, which uses only MPI, both with and without the fast multipole method (FMM) for computing long-distance interactions. The computing environment used here is the Hitachi SR8000/MPP at the University of Tokyo. The results are as follows. Without FMM, the parallel efficiency using 16 SMP nodes (128 PEs) is 90% with the hybrid style and 75% with the flat-MPI style, for an MD simulation with 33,402 atoms. With FMM, the parallel efficiency using 16 SMP nodes (128 PEs) is 60% with the hybrid style and 48% with the flat-MPI style, for an MD simulation with 117,649 atoms.
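
The hybrid pattern benchmarked above combines one MPI rank per SMP node with OpenMP threads inside each node. A minimal sketch of that structure, where the loop body merely stands in for the force evaluation and thread counts come from the environment:

```c
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank;

    /* FUNNELED: only the master thread of each rank makes MPI calls. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local = 0.0;
    #pragma omp parallel for reduction(+:local)   /* intra-node parallelism */
    for (int i = 0; i < 1000000; i++)
        local += 1.0 / (1.0 + i);                 /* stand-in for force terms */

    double total;
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("total = %f\n", total);

    MPI_Finalize();
    return 0;
}
```

Run with one rank per node and OMP_NUM_THREADS set to the cores per node; the flat-MPI baseline would instead start one rank per core.
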
6

Rosso, Pedro Henrique Di Francia, and Emilio Francesquini. "Improved Failure Detection and Propagation Mechanisms for MPI." In Escola Regional de Alto Desempenho de São Paulo. Sociedade Brasileira de Computação - SBC, 2021. http://dx.doi.org/10.5753/eradsp.2021.16702.

Abstract:
The Message Passing Interface (MPI) standard is largely used in High-Performance Computing (HPC) systems. Such systems employ a large number of computing nodes. Thus, Fault Tolerance (FT) is a concern since a large number of nodes leads to more frequent failures. Two essential components of FT are Failure Detection (FD) and Failure Propagation (FP). This paper proposes improvements to existing FD and FP mechanisms to provide more portability, scalability, and low overhead. Results show that the methods proposed can achieve better or at least similar results to existing methods while providing portability to any MPI standard-compliant distribution.
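
As context for failure detection in MPI: the standard's portable hook is the error handler. By default communicators use MPI_ERRORS_ARE_FATAL, so a failure aborts the job; switching to MPI_ERRORS_RETURN lets a fault-tolerance layer observe the error and react. This sketch shows only that standard hook, not the detection and propagation mechanisms the paper proposes:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Ring exchange; with MPI_ERRORS_RETURN a dead neighbor surfaces
     * as an error code instead of aborting the whole job. */
    int out = rank, in = -1;
    int rc = MPI_Sendrecv(&out, 1, MPI_INT, (rank + 1) % size, 0,
                          &in, 1, MPI_INT, (rank + size - 1) % size, 0,
                          MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    if (rc != MPI_SUCCESS) {
        char msg[MPI_MAX_ERROR_STRING]; int len;
        MPI_Error_string(rc, msg, &len);
        fprintf(stderr, "rank %d: exchange failed: %s\n", rank, msg);
        /* a failure-propagation layer would now notify surviving ranks */
    }

    MPI_Finalize();
    return 0;
}
```
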
7

Herrera, Stiw, Weber Ribeiro, Thiago Teixeira, André Carneiro, Frederico Cabral, Márcio Borges, and Carla Osthoff. "Avaliação de Desempenho no Supercomputador SDumont de uma Estratégia de Decomposição de Domínio usando as Funcionalidades de Mapeamento Topológico do MPI para um Método Numérico de Escoamento de Fluidos" [Performance evaluation on the SDumont supercomputer of a domain decomposition strategy using MPI's topological mapping features for a numerical fluid flow method]. In VI Escola Regional de Alto Desempenho do Rio de Janeiro. Sociedade Brasileira de Computação - SBC, 2020. http://dx.doi.org/10.5753/eradrj.2020.14513.

Abstract:
Oil and gas simulations need new high-performance computing techniques to deal with the large amount of data allocation and the high computational cost of the numerical method. The domain decomposition technique (domain division technique) was applied to a three-dimensional oil reservoir, where MPI (Message Passing Interface) allowed the creation of one-, two- and three-dimensional topologies, with a subdivision of the reservoir solved in each MPI process created. A performance study of these domain decomposition strategies was carried out on 20 computational nodes of the SDumont supercomputer, using a Cascade Lake architecture.
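
The topological mapping functionality referred to is MPI's Cartesian topology API. A hedged sketch of a three-dimensional decomposition; the periodicity flags and printed diagnostics are illustrative, not the authors' setup:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int size, rank, coords[3];
    int dims[3] = {0, 0, 0};      /* 0 lets MPI_Dims_create choose */
    int periods[3] = {0, 0, 0};   /* non-periodic reservoir boundaries */
    MPI_Comm cart;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    MPI_Dims_create(size, 3, dims);            /* e.g. 20 ranks -> 5 x 2 x 2 */
    MPI_Cart_create(MPI_COMM_WORLD, 3, dims, periods, 1, &cart);
    MPI_Comm_rank(cart, &rank);
    MPI_Cart_coords(cart, rank, 3, coords);

    int xlo, xhi;
    MPI_Cart_shift(cart, 0, 1, &xlo, &xhi);    /* neighbors for halo exchange */
    printf("rank %d owns subdomain (%d,%d,%d); x-neighbors %d and %d\n",
           rank, coords[0], coords[1], coords[2], xlo, xhi);

    MPI_Finalize();
    return 0;
}
```

One- or two-dimensional variants follow by reducing ndims or fixing the unused entries of dims to 1.
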
8

Flatschart, Ricardo Becht, Julio Romano Meneghini, and José Alfredo Ferrari. "Parallel Simulation of a Marine Riser Using MPI." In ASME 2004 23rd International Conference on Offshore Mechanics and Arctic Engineering. ASMEDC, 2004. http://dx.doi.org/10.1115/omae2004-51123.

Abstract:
In this paper the dynamic response of a marine riser due to vortex shedding is numerically investigated. The riser is divided in two-dimensional sections along the riser length. The Discrete Vortex Method is employed for the assessment of the hydrodynamic forces acting on these two-dimensional sections. The hydrodynamic sections are solved independently, and the coupling among the sections is taken into account by the solution of the structure in the time domain by the Finite Element Method. Parallel processing is employed to improve the performance of the method. The simulations are carried out in a cluster of Pentium IV computers running the Linux operating system. A master-slave approach via MPI — Message Passing Interface — is used to exploit the parallelism of the present code. The riser sections are equally divided among the nodes of the cluster. Each node solves the hydrodynamic sections assigned to it. The forces acting on the sections are then passed to the master processor, which is responsible for the calculation of the displacement of the whole structure. Scalability of the algorithm is shown and discussed.
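
The master-slave cycle described above, where slaves compute per-section hydrodynamic forces and the master integrates the structure, maps naturally onto a gather/broadcast pair. A simplified sketch with one riser section per rank; the solver routines are placeholders, not the authors' code:

```c
#include <mpi.h>

/* One coupled time step: gather section forces to the master (rank 0),
 * solve the global structure there, broadcast the new displacements.
 * Assumes nsec equals the number of ranks (one section per rank). */
void time_step(double *my_force, double *all_forces, double *displ,
               int nsec, MPI_Comm comm)
{
    int rank;
    MPI_Comm_rank(comm, &rank);

    /* *my_force = section_force(...);   discrete-vortex solve (stub) */

    MPI_Gather(my_force, 1, MPI_DOUBLE,
               all_forces, 1, MPI_DOUBLE, 0, comm);

    if (rank == 0) {
        /* fem_solve(all_forces, displ, nsec);   structural FEM (stub) */
    }

    MPI_Bcast(displ, nsec, MPI_DOUBLE, 0, comm);
}
```
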
9

Wang, Nenzi, and Chih-Ming Tsai. "Performance Evaluation of a Multiprocessor Cluster for a Thermohydrodynamic Lubrication Analysis." In World Tribology Congress III. ASMEDC, 2005. http://dx.doi.org/10.1115/wtc2005-63433.

Abstract:
This study presents a performance evaluation of two parallel programming paradigms, OpenMP and the message-passing interface (MPI), for thermohydrodynamic lubrication analysis. In this study, the performance of parallel computing in an MPI cluster is equivalent to that of a similarly configured single-system-image cluster. For a reasonable parallel efficiency (75%), the experimentally determined minimum execution times for the tasks to be conducted in parallel are approximately 0.5 and 5.0 seconds for OpenMP and MPI parallelism, respectively. It is noted that OpenMP programming allows parallel applications to be developed incrementally and supports fine-grain communication in a very cost-effective manner. Computer programs written in part to perform two or more tasks simultaneously may well become the computational norm in future tribological studies.
10

Niavarani-Kheirier, Anoosheh, Masoud Darbandi, and Gerry E. Schneider. "Parallelization of the Lattice Boltzmann Method in Simulating Buoyancy-Driven Convection Heat Transfer." In ASME 2004 International Mechanical Engineering Congress and Exposition. ASMEDC, 2004. http://dx.doi.org/10.1115/imece2004-61871.

Abstract:
The main objective of the current work is to utilize the Lattice Boltzmann Method (LBM) for simulating buoyancy-driven flow, considering the hybrid thermal lattice Boltzmann equation (HTLBE). After deriving the required formulations, they are validated against a wide range of Rayleigh numbers for the buoyancy-driven square cavity problem. The performance of the method is investigated on parallel machines using the Message Passing Interface (MPI) library and a domain decomposition technique to solve problems with a large number of computations. The results show that the code is highly efficient at solving large-scale problems, with excellent speedup.

Reports on the topic "Message-Passing Interface (MPI)"

1

Chapman, Ray, Phu Luong, Sung-Chan Kim, and Earl Hayter. Development of three-dimensional wetting and drying algorithm for the Geophysical Scale Transport Multi-Block Hydrodynamic Sediment and Water Quality Transport Modeling System (GSMB). Engineer Research and Development Center (U.S.), July 2021. http://dx.doi.org/10.21079/11681/41085.

Abstract:
The Environmental Laboratory (EL) and the Coastal and Hydraulics Laboratory (CHL) have jointly completed a number of large-scale hydrodynamic, sediment and water quality transport studies. EL and CHL have successfully executed these studies utilizing the Geophysical Scale Transport Modeling System (GSMB). The model framework of GSMB is composed of multiple process models as shown in Figure 1. Figure 1 shows that the United States Army Corps of Engineers (USACE) accepted wave, hydrodynamic, sediment and water quality transport models are directly and indirectly linked within the GSMB framework. The components of GSMB are the two-dimensional (2D) deep-water wave action model (WAM) (Komen et al. 1994, Jensen et al. 2012), data from meteorological model (MET) (e.g., Saha et al. 2010 - http://journals.ametsoc.org/doi/pdf/10.1175/2010BAMS3001.1), shallow water wave models (STWAVE) (Smith et al. 1999), Coastal Modeling System wave (CMS-WAVE) (Lin et al. 2008), the large-scale, unstructured two-dimensional Advanced Circulation (2D ADCIRC) hydrodynamic model (http://www.adcirc.org), and the regional scale models, Curvilinear Hydrodynamics in three dimensions-Multi-Block (CH3D-MB) (Luong and Chapman 2009), which is the multi-block (MB) version of Curvilinear Hydrodynamics in three-dimensions-Waterways Experiments Station (CH3D-WES) (Chapman et al. 1996, Chapman et al. 2009), MB CH3D-SEDZLJ sediment transport model (Hayter et al. 2012), and CE-QUAL Management - ICM water quality model (Bunch et al. 2003, Cerco and Cole 1994). Task 1 of the DOER project, “Modeling Transport in Wetting/Drying and Vegetated Regions,” is to implement and test three-dimensional (3D) wetting and drying (W/D) within GSMB. This technical note describes the methods and results of Task 1. The original W/D routines were restricted to a single vertical layer or depth-averaged simulations. In order to retain the required 3D or multi-layer capability of MB-CH3D, a multi-block version with variable block layers was developed (Chapman and Luong 2009). This approach requires a combination of grid decomposition, MB, and Message Passing Interface (MPI) communication (Snir et al. 1998). The MB single layer W/D has demonstrated itself as an effective tool in hyper-tide environments, such as Cook Inlet, Alaska (Hayter et al. 2012). The code modifications, implementation, and testing of a fully 3D W/D are described in the following sections of this technical note.