
Journal articles on the topic 'Message-Passing Interface (MPI)'


Consult the top 50 journal articles for your research on the topic 'Message-Passing Interface (MPI).'


1

Li, W. J., and J. J. Tsay. "Checkpointing message passing interface (MPI) parallel programs." Computer Standards & Interfaces 20, no. 6-7 (March 1999): 425. http://dx.doi.org/10.1016/s0920-5489(99)90841-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

AlDhubhani, Raed, Fathy Eassa, and Faisal Saeed. "Exascale Message Passing Interface based Program Deadlock Detection." International Journal of Electrical and Computer Engineering (IJECE) 6, no. 2 (April 1, 2016): 887. http://dx.doi.org/10.11591/ijece.v6i2.9575.

Abstract:
Deadlock detection is one of the main issues of software testing in High Performance Computing (HPC), and it will also be an issue in exascale computing in the near future. Developing and testing programs for machines with millions of cores is not an easy task. An HPC program consists of thousands (or millions) of parallel processes which need to communicate with each other at runtime. Message Passing Interface (MPI) is a standard library which provides this communication capability, and it is frequently used in HPC. Exascale programs are expected to be developed using the MPI standard library. For parallel programs, deadlock is one of the expected problems. In this paper, we discuss deadlock detection for exascale MPI-based programs, where scalability and efficiency are critical issues. The proposed method detects and flags, in a scalable and efficient manner, the processes and communication operations that could potentially cause deadlocks. MPI benchmark programs were used to test the proposed method.
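The failure mode targeted above is often modelled as a cycle in a wait-for graph of blocked processes. A minimal pure-Python sketch of that idea (ours, not the authors' method; the ranks and wait edges are hypothetical examples):

```python
# Illustrative sketch: model blocked MPI processes as a wait-for graph
# and flag a potential deadlock as a cycle of mutually waiting ranks.

def find_deadlock_cycle(wait_for):
    """Return one cycle of mutually waiting processes, or None.

    wait_for maps each rank to the rank whose message it is blocked on
    (e.g. a blocking MPI_Recv with no matching send).
    """
    for start in wait_for:
        seen = []
        node = start
        while node in wait_for and node not in seen:
            seen.append(node)
            node = wait_for[node]
        if node in seen:                      # walked back onto the path
            return seen[seen.index(node):]
    return None

# Rank 0 waits on 1, 1 on 2, 2 on 0: a classic send/recv cycle.
cycle = find_deadlock_cycle({0: 1, 1: 2, 2: 0})
no_cycle = find_deadlock_cycle({0: 1, 1: 2})
```

Real graph-based checkers (such as MUST, entry 24 below) build far richer dependency models, but the cycle test is the core of the idea.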
3

AlDhubhani, Raed, Fathy Eassa, and Faisal Saeed. "Exascale Message Passing Interface based Program Deadlock Detection." International Journal of Electrical and Computer Engineering (IJECE) 6, no. 2 (April 1, 2016): 887. http://dx.doi.org/10.11591/ijece.v6i2.pp887-894.

Abstract:
Deadlock detection is one of the main issues of software testing in High Performance Computing (HPC), and it will also be an issue in exascale computing in the near future. Developing and testing programs for machines with millions of cores is not an easy task. An HPC program consists of thousands (or millions) of parallel processes which need to communicate with each other at runtime. Message Passing Interface (MPI) is a standard library which provides this communication capability, and it is frequently used in HPC. Exascale programs are expected to be developed using the MPI standard library. For parallel programs, deadlock is one of the expected problems. In this paper, we discuss deadlock detection for exascale MPI-based programs, where scalability and efficiency are critical issues. The proposed method detects and flags, in a scalable and efficient manner, the processes and communication operations that could potentially cause deadlocks. MPI benchmark programs were used to test the proposed method.
4

Skjellum, Anthony, Ewing Lusk, and William Gropp. "Early Applications in the Message-Passing Interface (Mpi)." International Journal of Supercomputer Applications and High Performance Computing 9, no. 2 (June 1995): 79–94. http://dx.doi.org/10.1177/109434209500900202.

5

Donegan, Brendan J., Daniel C. Doolan, and Sabin Tabirca. "Mobile Message Passing using a Scatternet Framework." International Journal of Computers Communications & Control 3, no. 1 (March 1, 2008): 51. http://dx.doi.org/10.15837/ijccc.2008.1.2374.

Abstract:
The Mobile Message Passing Interface is a library which implements MPI functionality on Bluetooth-enabled mobile phones. It provides many of the functions available in MPI, including point-to-point and global communication. The main restriction of the library is that it was designed to work over Bluetooth piconets. Piconet-based networks allow a maximum of eight devices to be connected simultaneously, which limits the library's usefulness for parallel computing. A solution to this problem is presented that provides the same functionality as the original Mobile MPI library, but implemented over a Bluetooth scatternet. A scatternet may be defined as a number of piconets interconnected by common node(s). An outline of the scatternet design is explained and its major components are discussed.
6

Hempel, Rolf, and Falk Zimmermann. "Automatic Migration from PARMACS to MPI in Parallel Fortran Applications." Scientific Programming 7, no. 1 (1999): 39–46. http://dx.doi.org/10.1155/1999/890514.

Abstract:
The PARMACS message passing interface has been in widespread use by application projects, especially in Europe. With the new MPI standard for message passing, many projects face the problem of replacing PARMACS with MPI. An automatic translation tool has been developed which replaces all PARMACS 6.0 calls in an application program with their corresponding MPI calls. In this paper we describe the mapping of the PARMACS programming model onto MPI. We then present some implementation details of the converter tool.
7

Skjellum, Anthony, Arkady Kanevsky, Yoginder S. Dandass, Jerrell Watts, Steve Paavola, Dennis Cottel, Greg Henley, L. Shane Hebert, Zhenqian Cui, and Anna Rounbehler. "The Real-Time Message Passing Interface Standard (MPI/RT-1.1)." Concurrency and Computation: Practice and Experience 16, S1 (2004): Si—S322. http://dx.doi.org/10.1002/cpe.744.

8

Gravvanis, George A., and Konstantinos M. Giannoutakis. "Parallel Preconditioned Conjugate Gradient Square Method Based on Normalized Approximate Inverses." Scientific Programming 13, no. 2 (2005): 79–91. http://dx.doi.org/10.1155/2005/508607.

Abstract:
A new class of normalized explicit approximate inverse matrix techniques, based on normalized approximate factorization procedures, is introduced for solving sparse linear systems resulting from the finite difference discretization of partial differential equations in three space variables. A new parallel normalized explicit preconditioned conjugate gradient square method, in conjunction with normalized approximate inverse matrix techniques, for efficiently solving sparse linear systems on distributed memory systems using the Message Passing Interface (MPI) communication library is also presented, along with theoretical estimates of speedup and efficiency. The implementation and performance on a distributed memory MIMD machine using MPI are also investigated. Applications to characteristic initial/boundary value problems in three dimensions are discussed and numerical results are given.
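For context, the (unpreconditioned) conjugate gradient iteration that such methods accelerate can be sketched in a few lines of pure Python; the normalized approximate-inverse preconditioning and the MPI distribution are deliberately not shown:

```python
# Minimal unpreconditioned conjugate gradient sketch (our illustration).
# In a distributed-memory version, each inner product becomes an
# MPI_Allreduce and the matrix-vector product requires a halo exchange.

def cg(A, b, iters=50, tol=1e-12):
    """Solve A x = b for a small symmetric positive-definite matrix A."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                          # residual r = b - A x, with x = 0
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(iters):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:              # converged: residual ~ 0
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

# 2x2 SPD system [[4,1],[1,3]] x = [1,2] has solution x = [1/11, 7/11].
x = cg([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
```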
9

Abramov, Sergey, Vladimir Roganov, Valeriy Osipov, and German Matveev. "Implementation of the LAMMPS package using T-system with an Open Architecture." Informatics and Automation 20, no. 4 (August 11, 2021): 971–99. http://dx.doi.org/10.15622/ia.20.4.8.

Abstract:
Supercomputer applications are usually implemented in the C, C++, and Fortran programming languages using different versions of the Message Passing Interface library. The "T-system" project (OpenTS) studies the issues of automatic dynamic parallelization of programs. In practical terms, the implementation of applications in a mixed (hybrid) style is relevant, when one part of the application is written in the paradigm of automatic dynamic parallelization and does not use any primitives of the MPI library, while the other part is written using the MPI library. In this case, the library used is part of the T-system and is called DMPI (Dynamic Message Passing Interface). It is therefore necessary to evaluate the effectiveness of the MPI implementation available in the T-system, which is the purpose of this work. In a classic MPI application, 0% of the code is implemented using automatic dynamic parallelization and 100% of the code is written as a regular Message Passing Interface program. For comparative analysis, the code is first executed on the standard Message Passing Interface for which it was originally written, and then using the DMPI library taken from the developed T-system. By comparing the effectiveness of the two approaches, the performance losses and the prospects for using a hybrid programming style are evaluated. Experimental studies on different types of computational problems showed that the efficiency losses are negligible. This allowed us to formulate the direction of further work on the T-system and the most promising options for building hybrid applications. Thus, this article presents the results of comparative tests of the LAMMPS application using OpenMPI and using OpenTS DMPI. The test results confirm the effectiveness of the DMPI implementation in the OpenTS parallel programming environment.
10

Protopopov, Boris V., and Anthony Skjellum. "A Multithreaded Message Passing Interface (MPI) Architecture: Performance and Program Issues." Journal of Parallel and Distributed Computing 61, no. 4 (April 2001): 449–66. http://dx.doi.org/10.1006/jpdc.2000.1674.

11

STANKOVIC, NENAD, and KANG ZHANG. "VISUAL PROGRAMMING FOR MESSAGE-PASSING SYSTEMS." International Journal of Software Engineering and Knowledge Engineering 09, no. 04 (August 1999): 397–423. http://dx.doi.org/10.1142/s0218194099000231.

Abstract:
The attractiveness of visual programming stems in large part from the direct interaction with program elements as if they were real objects, since people deal better with concrete objects than with the abstract. This paper describes a new graph based software visualization tool for parallel message-passing programming named Visper that combines the levels of abstraction at which message-passing parallel programs are expressed and makes use of compositional programming. Central to the tool is the Process Communication Graph that correlates both the control and data flow graphs into a single graph formalism, without a need for complex textual annotation. The graph can express static and runtime communication and replication structures, as found in Message Passing Interface (MPI) and Parallel Virtual Machine (PVM). It also forms the basis for visualizing parallel debugging and performance.
12

Duan, Xiang Wei, Wei Chang Shen, and Jun Guo. "The MPI and OpenMP Implementation of Parallel Algorithm for Generating Mandelbrot Set." Applied Mechanics and Materials 571-572 (June 2014): 26–29. http://dx.doi.org/10.4028/www.scientific.net/amm.571-572.26.

Abstract:
This paper introduces the Mandelbrot set, the message passing interface (MPI), and shared-memory OpenMP; analyses the characteristics of algorithm design in the MPI and OpenMP environments; describes the implementation of a parallel algorithm for generating the Mandelbrot set in each environment; and reports a series of evaluations and performance tests, after which the two implementations are compared.
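The Mandelbrot computation parallelizes naturally because every pixel is independent. A small single-process sketch of the row-wise decomposition such implementations typically use (the grid bounds are hypothetical, and the MPI ranks are simulated by a plain loop):

```python
# Illustrative sketch: the escape-time kernel is embarrassingly parallel,
# so rows can be dealt out to MPI ranks (or OpenMP threads) round-robin.

def escape_time(c, max_iter=100):
    """Iterations before z = z^2 + c escapes |z| > 2 (capped at max_iter)."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return n
    return max_iter

def mandelbrot_rows(rows, cols, rank, nprocs):
    """Compute only the rows assigned to `rank` (cyclic distribution)."""
    out = {}
    for i in range(rank, rows, nprocs):       # this rank's share of rows
        out[i] = [escape_time(complex(-2.0 + 3.0 * j / cols,
                                      -1.5 + 3.0 * i / rows))
                  for j in range(cols)]
    return out

# Simulate 4 ranks and merge their partial rows, as an MPI gather would.
image = {}
for rank in range(4):
    image.update(mandelbrot_rows(16, 16, rank, 4))
```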
13

Ma, Wenpeng, Xiaodong Hu, and Xiazhen Liu. "Parallel multibody separation simulation using MPI and OpenMP with communication optimization." Journal of Algorithms & Computational Technology 13 (September 7, 2018): 174830181879706. http://dx.doi.org/10.1177/1748301818797062.

Abstract:
In this paper we investigate parallel implementations of multibody separation simulation using a hybrid of message passing interface and OpenMP. We propose a mesh block-based overset communication optimization algorithm. After presenting details of local data structures, we present our strategy for parallelizing both the overset mesh assembler and the flow solver by employing message passing interface and OpenMP. Experimental results show that the mesh block-based overset communication optimization algorithm has an advantage in real elapsed time when compared to a process-based implementation. The hybrid version shows that it is suitable for improving the load balance if a large number of CPU cores are used. We report results for a standard multibody separation case.
14

HERRMANN, CHRISTOPH A. "GENERATING MESSAGE-PASSING PROGRAMS FROM ABSTRACT SPECIFICATIONS BY PARTIAL EVALUATION." Parallel Processing Letters 15, no. 03 (September 2005): 305–20. http://dx.doi.org/10.1142/s0129626405002234.

Abstract:
This paper demonstrates how parallel programs with message passing can be generated from abstract specifications embedded in the functional language MetaOCaml. The functional style makes it possible to design parallel programs with a high degree of parameterization, so-called skeletons. Programmers who are inexperienced in parallelism can use such skeletons for simple and safe generation of parallel applications. Since MetaOCaml also has efficient imperative features and an MPI interface, the entire program can be written in one language, without the need for a language interface that restricts the set of data objects which can be exchanged. The semantics of abstract specifications is expressed by an interpreter written in MetaOCaml. A cost model is defined by abstract interpretation of the specification. Partial evaluation of the interpreter with a specification, a feature which MetaOCaml provides, yields a parallel program. The partial evaluation takes place on each MPI process directly before the execution of the application program, exploiting knowledge of the number of processes, the current process identifier, and the communication structure. Our example is the specification of a divide-and-conquer skeleton which is used to compute the multiplication of multi-digit numbers using Karatsuba's algorithm.
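Karatsuba's algorithm itself, the divide-and-conquer instance used in the paper's example, can be sketched without any staging or MPI (a plain sequential Python illustration, not the paper's MetaOCaml skeleton):

```python
# Karatsuba multiplication: three recursive half-size products instead of
# four, giving O(n^1.585) digit operations instead of the schoolbook O(n^2).

def karatsuba(x, y):
    """Multiply non-negative integers via divide and conquer."""
    if x < 10 or y < 10:                      # base case: single digit
        return x * y
    m = max(len(str(x)), len(str(y))) // 2
    base = 10 ** m
    xh, xl = divmod(x, base)                  # split into high/low halves
    yh, yl = divmod(y, base)
    low = karatsuba(xl, yl)
    high = karatsuba(xh, yh)
    mid = karatsuba(xl + xh, yl + yh) - low - high   # cross terms
    return high * base * base + mid * base + low
```

In a skeleton framework, the two (or three) recursive calls are what get distributed across processes; the combine step stays as above.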
15

LOUCA, SOULLA, NEOPHYTOS NEOPHYTOU, ADRIANOS LACHANAS, and PARASKEVAS EVRIPIDOU. "MPI-FT: PORTABLE FAULT TOLERANCE SCHEME FOR MPI." Parallel Processing Letters 10, no. 04 (December 2000): 371–82. http://dx.doi.org/10.1142/s0129626400000342.

Abstract:
In this paper, we propose the design and development of a fault tolerance and recovery scheme for the Message Passing Interface (MPI). The proposed scheme consists of a detection mechanism for detecting process failures and a recovery mechanism. Two different cases are considered, both assuming the existence of a monitoring process, the Observer, which triggers the recovery procedure in case of failure. In the first case, each process keeps a buffer with its own message traffic to be used in case of failure, while the implementor uses periodic tests for notification of failure by the Observer. The recovery function simulates all the communication of the processes with the dead one by re-sending to the replacement process all the messages destined for the dead one. In the second case, the Observer receives and stores all message traffic, and sends to the replacement all the buffered messages destined for the dead process. Solutions are provided to the dead-communicator problem caused by the death of a process. A description of the prototype developed is provided, along with the results of the experiments performed for efficiency and performance.
16

Bruck, Jehoshua, Danny Dolev, Ching-Tien Ho, Marcel-Cătălin Roşu, and Ray Strong. "Efficient Message Passing Interface (MPI) for Parallel Computing on Clusters of Workstations." Journal of Parallel and Distributed Computing 40, no. 1 (January 1997): 19–34. http://dx.doi.org/10.1006/jpdc.1996.1267.

17

Gropp, William, Ewing Lusk, Nathan Doss, and Anthony Skjellum. "A high-performance, portable implementation of the MPI message passing interface standard." Parallel Computing 22, no. 6 (September 1996): 789–828. http://dx.doi.org/10.1016/0167-8191(96)00024-5.

18

Gallardo, Esthela, Jérôme Vienne, Leonardo Fialho, Patricia Teller, and James Browne. "Employing MPI_T in MPI Advisor to optimize application performance." International Journal of High Performance Computing Applications 32, no. 6 (January 31, 2017): 882–96. http://dx.doi.org/10.1177/1094342016684005.

Abstract:
MPI_T, the MPI Tool Information Interface, was introduced in the MPI 3.0 standard with the aim of enabling the development of more effective tools to support the Message Passing Interface (MPI), a standardized and portable message-passing system that is widely used in parallel programs. Most MPI optimization tools do not yet employ MPI_T and only describe the interactions between an application and an MPI library, thus requiring that users have expert knowledge to translate this information into optimizations. In contrast, MPI Advisor, a recently developed, easy-to-use methodology and tool for MPI performance optimization, pioneered the use of information provided by MPI_T to characterize the communication behaviors of an application and identify an MPI configuration that may enhance application performance. In addition to enabling the recommendation of performance optimizations, MPI_T has the potential to enable automatic runtime application of these optimizations. Optimization of MPI configurations is important because: (1) the vast majority of parallel applications executed on high-performance computing clusters use MPI for communication among processes, (2) most users execute their programs using the cluster’s default MPI configuration, and (3) while default configurations may give adequate performance, it is well known that optimizing the MPI runtime environment can significantly improve application performance, in particular, when the way in which the application is executed and/or the application’s input changes. This paper provides an overview of MPI_T, describes how it can be used to develop more effective MPI optimization tools, and demonstrates its use within an extended version of MPI Advisor. 
In doing the latter, it presents several MPI configuration choices that can significantly impact performance, shows how information collected at runtime with MPI_T and PMPI can be used to enhance performance, and presents MPI Advisor case studies of these configuration optimizations with performance gains of up to 40%.
19

QURESHI, KALIM, and SYED SAJID HUSSAIN. "A COMPARATIVE STUDY OF PARALLELIZATION STRATEGIES FOR FRACTAL IMAGE COMPRESSION ON A CLUSTER OF WORKSTATIONS." International Journal of Computational Methods 05, no. 03 (September 2008): 463–82. http://dx.doi.org/10.1142/s0219876208001534.

Abstract:
In this paper we implement and compare the performance of the Message Passing Interface (MPI) static master-worker and three strategies of MPI task farm implementations for fractal image compression on a Beowulf cluster of workstations, namely Local Predecimation with Range Index Communication (LPRI), Global Predecimation with Range Communication (GPR) and No Predecimation with Range Index Communication (NPRI). Our results show that the MPI task farm implementations balance the load effectively among workers as compared to the MPI static master-worker implementation. The task farm strategies are compared by measuring their speedup and worker idle time cost.
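The difference between a static master-worker split and a task farm can be illustrated with a toy scheduling simulation (hypothetical task costs and worker speeds, not the paper's measurements):

```python
# Illustrative sketch: in a task farm, workers pull tasks as they finish,
# so a fast worker naturally takes more tasks; in a static split, each
# worker gets a fixed share regardless of speed, and the slowest worker
# dominates the makespan.
from collections import deque

def task_farm(task_costs, worker_speeds):
    """Busy time per worker under dynamic task pulling."""
    queue = deque(task_costs)
    busy = [0.0] * len(worker_speeds)
    while queue:
        w = min(range(len(busy)), key=lambda i: busy[i])   # next free worker
        busy[w] += queue.popleft() / worker_speeds[w]
    return busy

def static_split(task_costs, worker_speeds):
    """Busy time per worker under an even contiguous split (remainder ignored)."""
    n = len(worker_speeds)
    share = len(task_costs) // n
    return [sum(task_costs[i * share:(i + 1) * share]) / worker_speeds[i]
            for i in range(n)]

tasks = [1.0] * 40
speeds = [2.0, 1.0]                    # one worker twice as fast
dyn = task_farm(tasks, speeds)
sta = static_split(tasks, speeds)
# Makespan is the slowest worker's time; the farm finishes sooner.
```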
20

Han, Xiao Gang, Qin Lei Sun, and Jiang Wei Fan. "Parallel Dijkstra's Algorithm Based on Multi-Core and MPI." Applied Mechanics and Materials 441 (December 2013): 750–53. http://dx.doi.org/10.4028/www.scientific.net/amm.441.750.

Abstract:
Dijkstra's algorithm is a typical but low-efficiency shortest path algorithm. The parallel Dijkstra's algorithm based on the message passing interface (MPI) is efficient and easy to implement, but it is not well suited to the PC platform. This paper describes a parallel Dijkstra's algorithm which we designed and realized on a multi-core PC with the MPI software platform. The implementation is convenient, and the performance experiments show that the algorithm achieves satisfactory speedup and efficiency.
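A common way to parallelize Dijkstra's algorithm is to partition the distance array so that each rank finds a local minimum, and a global min-reduction (MPI_Allreduce with MPI_MINLOC in real MPI code) selects the next vertex to settle. A single-process sketch with the reduction simulated (the graph is a hypothetical example, not the paper's code):

```python
# Illustrative sketch of partitioned Dijkstra: each "rank" owns a slice
# of the distance array; min() over the local minima stands in for the
# global MPI reduction.
INF = float("inf")

def parallel_dijkstra(graph, source, nprocs=4):
    """graph[u] maps neighbour v to edge weight; returns distances from source."""
    n = len(graph)
    dist = [INF] * n
    dist[source] = 0.0
    visited = [False] * n
    chunk = (n + nprocs - 1) // nprocs
    for _ in range(n):
        local_mins = []
        for r in range(nprocs):                     # each rank scans its slice
            lo, hi = r * chunk, min((r + 1) * chunk, n)
            cand = [(dist[v], v) for v in range(lo, hi) if not visited[v]]
            if cand:
                local_mins.append(min(cand))
        if not local_mins:
            break
        d, u = min(local_mins)          # stands in for MPI_Allreduce/MINLOC
        if d == INF:
            break
        visited[u] = True
        for v, w in graph[u].items():   # relax the settled vertex's edges
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist

g = {0: {1: 7.0, 2: 2.0}, 1: {3: 1.0}, 2: {1: 3.0, 3: 8.0}, 3: {}}
```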
21

Fineberg, Samuel A. "Using MPI-Portable Parallel Programming with the Message-Passing Interface, by William Gropp." Scientific Programming 5, no. 3 (1996): 275–76. http://dx.doi.org/10.1155/1996/465097.

22

Tracy, Fred Thomas, Thomas C. Oppe, and Maureen K. Corcoran. "A comparison of MPI and co-array FORTRAN for large finite element variably saturated flow simulations." Scalable Computing: Practice and Experience 19, no. 4 (December 29, 2018): 423–32. http://dx.doi.org/10.12694/scpe.v19i4.1468.

Abstract:
The purpose of this research is to determine how well co-array FORTRAN (CAF) performs relative to Message Passing Interface (MPI) on unstructured mesh finite element groundwater modelling applications with large problem sizes and core counts. This research used almost 150 million nodes and 300 million 3-D prism elements. Results for both the Cray XE6 and Cray XC30 are given. A comparison of the ghost-node update algorithms with source code provided for both MPI and CAF is also presented.
23

Ayuningtyas, Astika. "Pemrosesan Paralel pada Low Pass Filtering Menggunakan Transform Cosinus di MPI (Message Passing Interface)." Conference SENATIK STT Adisutjipto Yogyakarta 2 (November 15, 2016): 115. http://dx.doi.org/10.28989/senatik.v2i0.68.

Abstract:
Parallel processing is the computation of two or more tasks simultaneously through the optimization of computer system resources; one processing model uses desktop systems. This model makes it possible to perform parallel processing across computers with different specifications. One implementation is a network-of-workstations model using MPI (Message Passing Interface). In this study, it is applied to the case of low-pass filtering (LPF), a process that retains the low-frequency data in an image. The low-pass filtering program using the cosine transform is implemented in MPI by modifying the algorithm in the process on each node (computer). The test results show that the processing speed of the parallel system is influenced by the number of nodes/processes and the number of frequency components processed. In single-process execution, the larger the process, the more time is required, and the prop value affects only the amount of high-frequency data that is filtered. In parallel processing, the more computers involved in the low-pass filter computation, the more additional time is required to perform the calculation.
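The filtering idea can be illustrated in one dimension with a plain-Python cosine transform (our sketch, not the paper's code): transform, zero the high-frequency coefficients, and invert. In the MPI version, each node would apply this to its own block of image data.

```python
import math

# Illustrative 1-D low-pass filter via the orthonormal DCT-II / DCT-III pair.

def dct(x):
    """DCT-II of a real sequence, orthonormal scaling."""
    N = len(x)
    out = []
    for k in range(N):
        s = sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
        scale = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
        out.append(scale * s)
    return out

def idct(X):
    """Inverse of the orthonormal DCT-II (i.e. DCT-III)."""
    N = len(X)
    return [sum(X[k] * (math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N))
                * math.cos(math.pi * (n + 0.5) * k / N) for k in range(N))
            for n in range(N)]

def lowpass(x, keep):
    """Keep only the `keep` lowest-frequency DCT coefficients."""
    X = dct(x)
    X[keep:] = [0.0] * (len(X) - keep)
    return idct(X)

# A constant signal has only the k = 0 coefficient, so it passes unchanged.
smooth = lowpass([1.0] * 8, keep=1)
```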
24

Hilbrich, Tobias, Joachim Protze, Martin Schulz, Bronis R. de Supinski, and Matthias S. Müller. "MPI Runtime Error Detection with MUST: Advances in Deadlock Detection." Scientific Programming 21, no. 3-4 (2013): 109–21. http://dx.doi.org/10.1155/2013/314971.

Abstract:
The widely used Message Passing Interface (MPI) is complex and rich. As a result, application developers require automated tools to avoid and to detect MPI programming errors. We present the Marmot Umpire Scalable Tool (MUST) that detects such errors with significantly increased scalability. We present improvements to our graph-based deadlock detection approach for MPI, which cover future MPI extensions. Our enhancements also check complex MPI constructs that no previous graph-based detection approach handled correctly. Finally, we present optimizations for the processing of MPI operations that reduce runtime deadlock detection overheads. Existing approaches often require O(p) analysis time per MPI operation, for p processes. We empirically observe that our improvements lead to sub-linear or better analysis time per operation for a wide range of real world applications.
25

Yin, Zhaokai, Weihong Liao, Xiaohui Lei, and Hao Wang. "Parallel Hydrological Model Parameter Uncertainty Analysis Based on Message-Passing Interface." Water 12, no. 10 (September 23, 2020): 2667. http://dx.doi.org/10.3390/w12102667.

Abstract:
Parameter uncertainty analysis is one of the hot issues in hydrology studies, and the Generalized Likelihood Uncertainty Estimation (GLUE) is one of the most widely used methods. However, the scale of the existing research is relatively small, which results from computational complexity and limited computing resources. In this study, a parallel GLUE method based on a Message-Passing Interface (MPI) was proposed and implemented on a supercomputer system. The research focused on the computational efficiency of the parallel algorithm and the parameter uncertainty of the Xinanjiang model affected by different threshold likelihood function values and sampling sizes. The results demonstrated that the parallel GLUE method showed high computational efficiency and scalability. Through the large-scale parameter uncertainty analysis, it was found that within an interval of less than 0.1%, the proportion of behavioral parameter sets and the threshold value had an exponential relationship. A large sampling scale is more likely than a small sampling scale to obtain behavioral parameter sets at high threshold values. High threshold values may derive more concentrated posterior distributions of the sensitivity parameters than low threshold values.
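The GLUE selection step that the threshold controls can be sketched as follows (a hypothetical one-parameter likelihood, not the Xinanjiang model; in the parallel version the likelihood evaluations are spread over MPI ranks):

```python
# Illustrative sketch: parameter sets whose likelihood exceeds the
# threshold are kept as "behavioral"; raising the threshold shrinks and
# concentrates the behavioral set, as the study observes.

def behavioral_sets(samples, likelihood, threshold):
    """Filter Monte Carlo parameter samples by a likelihood threshold."""
    return [p for p in samples if likelihood(p) >= threshold]

# Hypothetical triangular likelihood peaking at p = 0.5.
def like(p):
    return max(0.0, 1.0 - abs(p - 0.5) * 2.0)

samples = [i / 1000.0 for i in range(1000)]     # uniform sampling of [0, 1)
low = behavioral_sets(samples, like, 0.5)       # generous threshold
high = behavioral_sets(samples, like, 0.9)      # strict threshold
# The strict threshold keeps a smaller, more concentrated set.
```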
26

Nhita, Fhira. "Comparative Study between Parallel K-Means and Parallel K-Medoids with Message Passing Interface (MPI)." International Journal on Information and Communication Technology (IJoICT) 2, no. 2 (July 25, 2017): 27. http://dx.doi.org/10.21108/ijoict.2016.22.86.

Abstract:
Data mining analyses useful information from a dataset using techniques such as classification and clustering. Clustering is one of the most used data mining techniques today. K-Means and K-Medoids are among the clustering algorithms most often used because of their easy implementation and efficiency, and because they present good results. Besides mining important information, the time spent mining data is also a concern in the current era, considering that real-world applications produce huge volumes of data. This research analysed the results of the K-Means and K-Medoids algorithms and their time performance, using a High Performance Computing (HPC) cluster to parallelize both algorithms with the Message Passing Interface (MPI) library. The results show that the K-Means algorithm gives a smaller SSE than K-Medoids, and that the parallel algorithms using MPI give faster computation times than the sequential algorithms.
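The SSE measure used in the comparison is the sum of squared distances from each point to its nearest centroid. A one-dimensional mini K-Means sketch (hypothetical data, not the paper's; in the MPI version each rank assigns its own slice of points and partial sums are combined with a reduction):

```python
# Illustrative 1-D K-Means (Lloyd's algorithm) with the SSE quality measure.

def kmeans_1d(points, centers, iters=10):
    """Run Lloyd's algorithm on scalars; return (centers, SSE)."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:                       # assignment step
            i = min(range(len(centers)), key=lambda i: (p - centers[i]) ** 2)
            clusters[i].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]        # update step
    sse = sum(min((p - c) ** 2 for c in centers) for p in points)
    return centers, sse

pts = [1.0, 1.2, 0.8, 5.0, 5.2, 4.8]           # two obvious clusters
centers, sse = kmeans_1d(pts, [0.0, 6.0])
```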
27

Fanfakh, Ahmed Badri Muslim. "Predicting the Performance of MPI Applications over Different Grid Architectures." JOURNAL OF UNIVERSITY OF BABYLON for Pure and Applied Sciences 27, no. 1 (April 1, 2019): 468–77. http://dx.doi.org/10.29196/jubpas.v27i1.2232.

Abstract:
Nowadays, high-speed and accurate optimization algorithms are required. In most cases, researchers need a method to predict certain criteria with acceptable accuracy for later use in their algorithms. In the field of parallel computing, execution time can be considered the most important criterion. Consequently, this paper presents a new execution time prediction model for message passing interface applications executed over numerous grid scenarios. The model is able to predict the execution time of message passing applications running over any grid configuration, in terms of different numbers of nodes and their computing powers. The experiments are evaluated with the SimGrid simulator, which simulates the grid configuration scenarios. A comparison of real and predicted execution times shows good accuracy: the average error ratios between the real and predicted execution times for three benchmarks are 4.36%, 5.79%, and 6.81%.
28

Simmendinger, Christian, Roman Iakymchuk, Luis Cebamanos, Dana Akhmetova, Valeria Bartsch, Tiberiu Rotaru, Mirko Rahn, Erwin Laure, and Stefano Markidis. "Interoperability strategies for GASPI and MPI in large-scale scientific applications." International Journal of High Performance Computing Applications 33, no. 3 (November 14, 2018): 554–68. http://dx.doi.org/10.1177/1094342018808359.

Abstract:
One of the main hurdles of partitioned global address space (PGAS) approaches is the dominance of message passing interface (MPI), which as a de facto standard appears in the code basis of many applications. To take advantage of the PGAS APIs like global address space programming interface (GASPI) without a major change in the code basis, interoperability between MPI and PGAS approaches needs to be ensured. In this article, we consider an interoperable GASPI/MPI implementation for the communication/performance crucial parts of the Ludwig and iPIC3D applications. To address the discovered performance limitations, we develop a novel strategy for significantly improved performance and interoperability between both APIs by leveraging GASPI shared windows and shared notifications. First results with a corresponding implementation in the MiniGhost proxy application and the Allreduce collective operation demonstrate the viability of this approach.
29

Guo, Xiao Mei, Wei Zhao, Li Hong Zhang, and Wen Hua Yu. "Parallel Performance of MPI Based Parallel FDTD on NUMA Architecture Workstation." Advanced Materials Research 532-533 (June 2012): 1115–19. http://dx.doi.org/10.4028/www.scientific.net/amr.532-533.1115.

Abstract:
This paper introduces a parallel FDTD (Finite Difference Time Domain) algorithm based on the MPI (Message Passing Interface) parallel environment and a NUMA (Non-Uniform Memory Access) architecture workstation. The FDTD computation is carried out independently on the local meshes in each process, and data are exchanged by communication between adjacent subdomains to realize the parallel FDTD method. The results show consistency between the serial and parallel algorithms, and the computing efficiency is improved effectively.
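The subdomain exchange described above follows the standard halo (ghost-cell) pattern. A one-dimensional, single-process sketch of that pattern (our illustration; real code would use MPI_Sendrecv between neighbouring ranks):

```python
# Illustrative sketch: each "rank" owns a block of cells with one ghost
# cell on each side; neighbouring blocks swap boundary values each step.

def halo_exchange(blocks):
    """Fill each block's ghost cells from its neighbours' edge cells."""
    for r, block in enumerate(blocks):
        block[0] = blocks[r - 1][-2] if r > 0 else 0.0               # left ghost
        block[-1] = blocks[r + 1][1] if r < len(blocks) - 1 else 0.0  # right ghost
    return blocks

def step(blocks):
    """One explicit update: a simple 3-point average on interior cells."""
    halo_exchange(blocks)
    return [[b[0]] +
            [(b[i - 1] + b[i] + b[i + 1]) / 3.0 for i in range(1, len(b) - 1)] +
            [b[-1]] for b in blocks]

# Two blocks covering the serial domain [1, 2, 3, 4] with zero boundaries.
blocks = step([[0.0, 1.0, 2.0, 0.0], [0.0, 3.0, 4.0, 0.0]])
```

Because the ghost cells are refreshed before each update, the two-block result matches a serial update of the whole domain, which is exactly the consistency between serial and parallel runs the abstract reports.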
30

Zheng, Yongjun, and Philippe Marguinaud. "Simulation of the performance and scalability of message passing interface (MPI) communications of atmospheric models running on exascale supercomputers." Geoscientific Model Development 11, no. 8 (August 22, 2018): 3409–26. http://dx.doi.org/10.5194/gmd-11-3409-2018.

Full text
Abstract:
Abstract. In this study, we identify the key message passing interface (MPI) operations required in atmospheric modelling; then, we use a skeleton program and a simulation framework (based on the SST/macro simulation package) to simulate these MPI operations (transposition, halo exchange, and allreduce), with the perspective of future exascale machines in mind. The experimental results show that the choice of the collective algorithm has a great impact on the performance of communications; in particular, we find that the generalized ring-k algorithm for the alltoallv operation and the generalized recursive-k algorithm for the allreduce operation perform the best. In addition, we observe that the impacts of interconnect topologies and routing algorithms on the performance and scalability of transpositions, halo exchange, and allreduce operations are significant. However, the routing algorithm has a negligible impact on the performance of allreduce operations because of its small message size. It is impossible to infinitely grow bandwidth and reduce latency due to hardware limitations. Thus, congestion may occur and limit the continuous improvement of the performance of communications. The experiments show that the performance of communications can be improved when congestion is mitigated by a proper configuration of the topology and routing algorithm, which uniformly distributes the congestion over the interconnect network to avoid the hotspots and bottlenecks caused by congestion. It is generally believed that the transpositions seriously limit the scalability of the spectral models. The experiments show that the communication time of the transposition is larger than those of the wide halo exchange for the semi-Lagrangian method and the allreduce in the generalized conjugate residual (GCR) iterative solver for the semi-implicit method below 2×10^5 MPI processes. The transposition, whose communication time decreases quickly with an increasing number of MPI processes, demonstrates strong scalability in the case of very large grids and moderate latencies. The halo exchange, whose communication time decreases more slowly than that of the transposition, reveals weak scalability. In contrast, the allreduce, whose communication time increases with an increasing number of MPI processes, does not scale well. From this point of view, the scalability of spectral models could still be acceptable. Therefore, it seems premature to conclude that the scalability of the grid-point models is better than that of spectral models at the exascale, unless innovative methods are exploited to mitigate the scalability problem presented in the grid-point models.
APA, Harvard, Vancouver, ISO, and other styles
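The recursive-k allreduce family the abstract singles out generalizes the classic recursive-doubling algorithm (the k = 2 case). A pure-Python sketch of that base case, with the p "ranks" simulated as list entries and each round's pairwise exchanges applied against a snapshot, looks like this:

```python
# Recursive-doubling allreduce simulated over p "ranks" (p a power of two).
# In round k, rank r exchanges its partial sum with rank r XOR 2^k, so after
# log2(p) rounds every rank holds the global sum.

def allreduce_recursive_doubling(values):
    p = len(values)
    assert p & (p - 1) == 0, "sketch assumes a power-of-two rank count"
    vals = list(values)
    k = 1
    while k < p:
        # One communication round: all pairwise exchanges happen "in parallel",
        # so compute the new values from a snapshot of the old ones.
        snapshot = list(vals)
        for r in range(p):
            vals[r] = snapshot[r] + snapshot[r ^ k]
        k *= 2
    return vals
```

Each rank sends and receives log2(p) messages in total, which is why the latency term, rather than bandwidth, dominates for the small messages the abstract mentions.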
31

Misbahuddin, Syed. "1 Fault Detection and Tolerance in Cluster of Workstations using Message Passing Interface." Sir Syed Research Journal of Engineering & Technology 1, no. 1 (December 20, 2011): 4. http://dx.doi.org/10.33317/ssurj.v1i1.72.

Full text
Abstract:
A Cluster of Workstations (COW) is a network-based multi-computer system aimed to replace supercomputers. A cluster of workstations works on Divisible Load Theory (DLT), according to which a job is divided into n subtasks and delegated to n workstations in the COW architecture. To get the job completed, all subtasks must be completed; therefore, for satisfactory job completion, all workstations must be functional. However, a faulty node can suspend the overall job completion until fault avoidance and correction measures are taken. This paper presents a fault detection and fault tolerance algorithm which uses the Message Passing Interface (MPI) to identify faulty workstations and transfer the subtasks being performed by them to normally working workstations. The assigned workstations continue their original subtasks in addition to the assigned subtasks on a time-sharing basis.
APA, Harvard, Vancouver, ISO, and other styles
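The reassignment step the abstract describes, handing a failed worker's subtask to a surviving worker that then time-shares it with its own, can be sketched independently of the MPI heartbeat mechanics. The partitioning and round-robin policy below are illustrative assumptions; the paper's actual detection protocol is not reproduced here.

```python
# Divisible Load Theory sketch: a job of `total` work units is split evenly
# among n workers; subtasks of workers flagged as failed (simulated heartbeat
# booleans) are reassigned round-robin to the survivors, so the whole job is
# still covered while survivors time-share the extra load.

def partition(total, n):
    """Split `total` work units into n contiguous subtasks."""
    size = total // n
    return [(i * size, (i + 1) * size if i < n - 1 else total) for i in range(n)]

def reassign(subtasks, alive):
    """Map every subtask to a live worker; failed workers' subtasks are
    distributed round-robin among the survivors."""
    live = [i for i, ok in enumerate(alive) if ok]
    if not live:
        raise RuntimeError("no live workers")
    plan = {w: [] for w in live}
    rr = 0
    for worker, task in enumerate(subtasks):
        if alive[worker]:
            plan[worker].append(task)
        else:
            plan[live[rr % len(live)]].append(task)
            rr += 1
    return plan
```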
32

Sharma, Anuj, and Irene Moulitsas. "MPI to Coarray Fortran: Experiences with a CFD Solver for Unstructured Meshes." Scientific Programming 2017 (2017): 1–12. http://dx.doi.org/10.1155/2017/3409647.

Full text
Abstract:
High-resolution numerical methods and unstructured meshes are required in many applications of Computational Fluid Dynamics (CFD). These methods are quite computationally expensive and hence benefit from being parallelized. Message Passing Interface (MPI) has been utilized traditionally as a parallelization strategy. However, the inherent complexity of MPI contributes further to the existing complexity of the CFD scientific codes. The Partitioned Global Address Space (PGAS) parallelization paradigm was introduced in an attempt to improve the clarity of the parallel implementation. We present our experiences of converting an unstructured high-resolution compressible Navier-Stokes CFD solver from MPI to PGAS Coarray Fortran. We present the challenges, methodology, and performance measurements of our approach using Coarray Fortran. With the Cray compiler, we observe Coarray Fortran as a viable alternative to MPI. We are hopeful that Intel and open-source implementations could be utilized in the future.
APA, Harvard, Vancouver, ISO, and other styles
33

Пушкарев, К. В., and В. Д. Кошур. "A hybrid heuristic parallel method of global optimization." Numerical Methods and Programming (Vychislitel'nye Metody i Programmirovanie), no. 2 (June 30, 2015): 242–55. http://dx.doi.org/10.26089/nummet.v16r224.

Full text
Abstract:
The problem of finding the global minimum of a continuous objective function of multiple variables in a multidimensional parallelepiped is considered. A hybrid heuristic parallel method for solving complicated global optimization problems is proposed. The method is based on combining and hybridizing various methods and on multi-agent technology. It consists of new methods (for example, the method of neural network approximation of inverse coordinate mappings, which uses Generalized Regression Neural Networks (GRNN) to map the values of an objective function to coordinates) and modified classical methods (for example, the modified Hooke-Jeeves method). An implementation of the proposed method as a cross-platform (at the source code level) library written in C++ is briefly discussed. This implementation uses message passing via MPI (Message Passing Interface). The method is compared with 21 modern methods of global optimization and with a genetic algorithm, using 28 test objective functions of 50 variables.
APA, Harvard, Vancouver, ISO, and other styles
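Among the classical components the abstract names is a modified Hooke-Jeeves method. The paper's modification is not detailed here, but the textbook pattern search it starts from (exploratory moves along each axis, then a pattern move through the improved point) can be sketched as:

```python
# Textbook Hooke-Jeeves pattern search: exploratory moves along each axis,
# then a pattern move through the improved point; the step size is halved
# whenever no exploratory move helps. (The paper's modified variant is not
# reproduced here; this is the classical baseline.)

def hooke_jeeves(f, x0, step=1.0, tol=1e-6, max_iter=10_000):
    def explore(base, s):
        x = list(base)
        for i in range(len(x)):
            for d in (s, -s):
                trial = list(x)
                trial[i] += d
                if f(trial) < f(x):
                    x = trial
                    break
        return x

    base = list(x0)
    for _ in range(max_iter):
        cand = explore(base, step)
        if f(cand) < f(base):
            # Pattern move: jump ahead along the improving direction.
            pattern = [2 * c - b for c, b in zip(cand, base)]
            probe = explore(pattern, step)
            base = probe if f(probe) < f(cand) else cand
        else:
            step *= 0.5
            if step < tol:
                break
    return base
```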
34

Saldaña, Manuel, Emanuel Ramalho, and Paul Chow. "A Message-Passing Hardware/Software Cosimulation Environment for Reconfigurable Computing Systems." International Journal of Reconfigurable Computing 2009 (2009): 1–9. http://dx.doi.org/10.1155/2009/376232.

Full text
Abstract:
High-performance reconfigurable computers (HPRCs) provide a mix of standard processors and FPGAs to collectively accelerate applications. This introduces new design challenges, such as the need for portable programming models across HPRCs and system-level verification tools. To address the need for cosimulating a complete heterogeneous application using both software and hardware in an HPRC, we have created a tool called the Message-passing Simulation Framework (MSF). We have used it to simulate and develop an interface enabling an MPI-based approach to exchange data between X86 processors and hardware engines inside FPGAs. The MSF can also be used as an application development tool that enables multiple FPGAs in simulation to exchange messages amongst themselves and with X86 processors. As an example, we simulate a LINPACK benchmark hardware core using an Intel-FSB-Xilinx-FPGA platform to quickly prototype the hardware, to test the communications, and to verify the benchmark results.
APA, Harvard, Vancouver, ISO, and other styles
35

Stegailov, Vladimir, Ekaterina Dlinnova, Timur Ismagilov, Mikhail Khalilov, Nikolay Kondratyuk, Dmitry Makagon, Alexander Semenov, Alexei Simonov, Grigory Smirnov, and Alexey Timofeev. "Angara interconnect makes GPU-based Desmos supercomputer an efficient tool for molecular dynamics calculations." International Journal of High Performance Computing Applications 33, no. 3 (February 20, 2019): 507–21. http://dx.doi.org/10.1177/1094342019826667.

Full text
Abstract:
In this article, we describe the Desmos supercomputer that consists of 32 hybrid nodes connected by a low-latency high-bandwidth Angara interconnect with torus topology. This supercomputer is aimed at cost-effective classical molecular dynamics calculations. Desmos serves as a test bed for the Angara interconnect, which supports 3-D and 4-D torus network topologies, and verifies its ability to unite massively parallel programming systems, effectively speeding up message-passing interface (MPI)-based applications. We describe the Angara interconnect, presenting typical MPI benchmarks. Desmos benchmark results for GROMACS, LAMMPS, VASP and CP2K are compared with the data for other high-performance computing (HPC) systems. Also, we consider the job scheduling statistics for several months of Desmos deployment.
APA, Harvard, Vancouver, ISO, and other styles
36

Gerstenberger, Robert, Maciej Besta, and Torsten Hoefler. "Enabling Highly-Scalable Remote Memory Access Programming with MPI-3 One Sided." Scientific Programming 22, no. 2 (2014): 75–91. http://dx.doi.org/10.1155/2014/571902.

Full text
Abstract:
Modern interconnects offer remote direct memory access (RDMA) features. Yet, most applications rely on explicit message passing for communication despite its unwanted overheads. The MPI-3.0 standard defines a programming interface for exploiting RDMA networks directly; however, its scalability and practicability have to be demonstrated in practice. In this work, we develop scalable bufferless protocols that implement the MPI-3.0 specification. Our protocols support scaling to millions of cores with negligible memory consumption while providing highest performance and minimal overheads. To arm programmers, we provide a spectrum of performance models for all critical functions and demonstrate the usability of our library and models with several application studies with up to half a million processes. We show that our design is comparable to, or better than, UPC and Fortran Coarrays in terms of latency, bandwidth and message rate. We also demonstrate application performance improvements with comparable programming complexity.
APA, Harvard, Vancouver, ISO, and other styles
37

Han, Xiao Gang, Feng Wang, and Jiang Wei Fan. "The Research of PID Controller Tuning Based on Parallel Particle Swarm Optimization." Applied Mechanics and Materials 433-435 (October 2013): 583–86. http://dx.doi.org/10.4028/www.scientific.net/amm.433-435.583.

Full text
Abstract:
Particle Swarm Optimization (PSO) is a good method for tuning PID controllers, but it does not work well enough under conditions of high real-time requirements and control accuracy. This paper describes a parallel PSO algorithm for PID controller tuning. We designed the parallel algorithm and realized it with multi-core processors and the message passing interface (MPI). We developed the test system using Visual C# 2008, and the performance experiments show that the algorithm achieves satisfactory tuning accuracy, speedup, and efficiency.
APA, Harvard, Vancouver, ISO, and other styles
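The parallelization opportunity in PSO-based PID tuning is that the per-particle fitness evaluations (each a closed-loop simulation) are independent within an iteration. The sketch below farms them out to a worker pool; a simple sphere function stands in for the PID cost, and all parameters are illustrative defaults, not the paper's settings.

```python
# Parallel PSO sketch: the per-particle fitness evaluations (the expensive
# part when the fitness is a closed-loop PID simulation) are distributed to
# a worker pool each iteration, mirroring a multi-core/MPI work split.
import random
from concurrent.futures import ThreadPoolExecutor

def pso_minimize(fitness, dim, n_particles=30, iters=100, seed=1,
                 w=0.7, c1=1.5, c2=1.5, workers=4):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]

    with ThreadPoolExecutor(max_workers=workers) as pool:
        pbest_val = list(pool.map(fitness, pbest))
        gbest = pbest[min(range(n_particles), key=pbest_val.__getitem__)][:]
        for _ in range(iters):
            for i in range(n_particles):
                for d in range(dim):
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                                 + c2 * rng.random() * (gbest[d] - pos[i][d]))
                    pos[i][d] += vel[i][d]
            vals = list(pool.map(fitness, pos))   # parallel fitness evaluation
            for i, v in enumerate(vals):
                if v < pbest_val[i]:
                    pbest_val[i], pbest[i] = v, pos[i][:]
            gbest = pbest[min(range(n_particles), key=pbest_val.__getitem__)][:]
    return gbest, min(pbest_val)
```

In an MPI setting the `pool.map` line would become a scatter of positions, local evaluation, and a gather of fitness values; the rest of the loop is unchanged.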
38

Leaver, George W., Martin J. Turner, James S. Perrin, Paul M. Mummery, and Philip J. Withers. "Porting the AVS/Express scientific visualization software to Cray XT4." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 369, no. 1949 (August 28, 2011): 3398–412. http://dx.doi.org/10.1098/rsta.2011.0133.

Full text
Abstract:
Remote scientific visualization, where rendering services are provided by larger scale systems than are available on the desktop, is becoming increasingly important as dataset sizes increase beyond the capabilities of desktop workstations. Uptake of such services relies on access to suitable visualization applications and the ability to view the resulting visualization in a convenient form. We consider five rules from the e-Science community to meet these goals with the porting of a commercial visualization package to a large-scale system. The application uses message-passing interface (MPI) to distribute data among data processing and rendering processes. The use of MPI in such an interactive application is not compatible with restrictions imposed by the Cray system being considered. We present details, and performance analysis, of a new MPI proxy method that allows the application to run within the Cray environment yet still support MPI communication required by the application. Example use cases from materials science are considered.
APA, Harvard, Vancouver, ISO, and other styles
39

Liu, Keyan, Yunhua Li, and Wanxing Sheng. "The decomposition and computation method for distributed optimal power flow based on message passing interface (MPI)." International Journal of Electrical Power & Energy Systems 33, no. 5 (June 2011): 1185–93. http://dx.doi.org/10.1016/j.ijepes.2011.01.032.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Lin, Zhongchao, Yu Zhang, Shugang Jiang, Xunwang Zhao, and Jingyan Mo. "Simulation of Airborne Antenna Array Layout Problems Using Parallel Higher-Order MoM." International Journal of Antennas and Propagation 2014 (2014): 1–11. http://dx.doi.org/10.1155/2014/985367.

Full text
Abstract:
The parallel higher-order Method of Moments based on message passing interface (MPI) has been successfully used to analyze the changes in radiation patterns of a microstrip patch array antenna mounted on different positions of an airplane. The block-partitioned scheme for the large dense MoM matrix and a block-cyclic matrix distribution scheme are designed to achieve excellent load balance and high parallel efficiency. Numerical results demonstrate that the rigorous parallel Method of Moments can efficiently and accurately solve large complex electromagnetic problems with composite structures.
APA, Harvard, Vancouver, ISO, and other styles
41

Silva, Luis M., João Gabriel Silva, and Simon Chapple. "Implementation and Performance of DSMPI." Scientific Programming 6, no. 2 (1997): 201–14. http://dx.doi.org/10.1155/1997/452521.

Full text
Abstract:
Distributed shared memory has been recognized as an alternative programming model to exploit the parallelism in distributed memory systems because it provides a higher level of abstraction than simple message passing. DSM combines the simple programming model of shared memory with the scalability of distributed memory machines. This article presents DSMPI, a parallel library that runs atop MPI and provides a DSM abstraction. It provides an easy-to-use programming interface, is fully portable, and supports heterogeneity. For the sake of flexibility, it supports different coherence protocols and models of consistency. We present some performance results taken in a network of workstations and in a Cray T3D which show that DSMPI can be competitive with MPI for some applications.
APA, Harvard, Vancouver, ISO, and other styles
42

Ji, Guo Li, Y. P. Yang, Y. Lin, and Zhao Xian Xiong. "Parallel Simulation of Ceramic Grain Growth on the Platform of MPI." Key Engineering Materials 336-338 (April 2007): 2490–92. http://dx.doi.org/10.4028/www.scientific.net/kem.336-338.2490.

Full text
Abstract:
A computation method for parallel simulations of ceramic grain growth at an atomic scale in a PC cluster is proposed, by combining the Message Passing Interface (MPI) with the serial simulation of grain growth. A parallel platform is constructed for the simulation of grain growth with program modules of grain assignments, grain growth, data exchanges and boundary settlements, which are coded with Microsoft Visual C++ 6.0 and MPICH. Quantitative results show that the computing speed of parallel simulations with this platform is obviously increased compared with that of serial simulations. Such a computing mode of grain growth is in good agreement with practical situations of ceramic grain growth.
APA, Harvard, Vancouver, ISO, and other styles
43

Liu, Li, Cheng Zhang, Ruizhe Li, Bin Wang, and Guangwen Yang. "C-Coupler2: a flexible and user-friendly community coupler for model coupling and nesting." Geoscientific Model Development 11, no. 9 (August 31, 2018): 3557–86. http://dx.doi.org/10.5194/gmd-11-3557-2018.

Full text
Abstract:
Abstract. The Chinese C-Coupler (Community Coupler) family aims primarily to develop coupled models for weather forecasting and climate simulation and prediction. It is targeted to serve various coupled models with flexibility, user-friendliness, and extensive coupling functions. C-Coupler2, the latest version, includes a series of new features in addition to those of C-Coupler1 – including a common, flexible, and user-friendly coupling configuration interface that combines a set of application programming interfaces and a set of XML-formatted configuration files; the capability of coupling within one executable or the same subset of MPI (message passing interface) processes; flexible and automatic coupling procedure generation for any subset of component models; dynamic 3-D coupling that enables convenient coupling of fields on 3-D grids with time-evolving vertical coordinate values; non-blocking data transfer; facilitation for model nesting; facilitation for increment coupling; adaptive restart capability; and finally a debugging capability. C-Coupler2 is ready for use to develop various coupled or nested models. It has passed a number of test cases involving model coupling and nesting, and with various MPI process layouts between component models, and has already been used in several real coupled models.
APA, Harvard, Vancouver, ISO, and other styles
44

Fei, Guang Lei, Jian Guo Ning, and Tian Bao Ma. "Study on the Numerical Simulation of Explosion and Impact Processes Using PC Cluster System." Advanced Materials Research 433-440 (January 2012): 2892–98. http://dx.doi.org/10.4028/www.scientific.net/amr.433-440.2892.

Full text
Abstract:
Parallel computing has been applied in many fields, and a PC cluster based on the MPI (Message Passing Interface) library under the Linux operating system is a cost-effective approach to parallel computing. In this paper, the key algorithms of the parallel program for explosion and impact are presented, and techniques for resolving data dependence and realizing communication between subdomains are proposed. Tests show that the portability of the MMIC-3D parallel program is satisfactory and that, compared with a single computer, the PC cluster can improve the calculation speed and enlarge the problem scale greatly.
APA, Harvard, Vancouver, ISO, and other styles
45

Yan, Ying, Yu Zhang, Chang-Hong Liang, Hui Zhao, and D. García-Doñoro. "RCS Computation by Parallel MoM Using Higher-Order Basis Functions." International Journal of Antennas and Propagation 2012 (2012): 1–8. http://dx.doi.org/10.1155/2012/745893.

Full text
Abstract:
A Message-Passing Interface (MPI) parallel implementation of an integral equation solver that uses the Method of Moments (MoM) with higher-order basis functions has been proposed to compute the Radar Cross-Section (RCS) of various targets. The block-partitioned scheme for the large dense MoM matrix is designed to achieve excellent load balance and high parallel efficiency. Some numerical results demonstrate that higher-order basis in this parallelized scheme is more efficient than the conventional RWG method and able to efficiently analyze RCS of various electrically large platforms.
APA, Harvard, Vancouver, ISO, and other styles
46

Mahinthakumar, G., and F. Saied. "A Hybrid Mpi-Openmp Implementation of an Implicit Finite-Element Code on Parallel Architectures." International Journal of High Performance Computing Applications 16, no. 4 (November 2002): 371–93. http://dx.doi.org/10.1177/109434200201600402.

Full text
Abstract:
The hybrid MPI-OpenMP model is a natural parallel programming paradigm for emerging parallel architectures that are based on symmetric multiprocessor (SMP) clusters. This paper presents a hybrid implementation adapted for an implicit finite-element code developed for groundwater transport simulations. The original code was parallelized for distributed memory architectures using MPI (Message Passing Interface) using a domain decomposition strategy. OpenMP directives were then added to the code (a straightforward loop-level implementation) to use multiple threads within each MPI process. To improve the OpenMP performance, several loop modifications were adopted. The parallel performance results are compared for four modern parallel architectures. The results show that for most of the cases tested, the pure MPI approach outperforms the hybrid model. The exceptions to this observation were mainly due to a limitation in the MPI library implementation on one of the architectures. A general conclusion is that while the hybrid model is a promising approach for SMP cluster architectures, at the time of this writing, the payoff may not be justified for converting all existing MPI codes to hybrid codes. However, improvements in OpenMP compilers combined with potential MPI limitations in SMP nodes may make the hybrid approach more attractive for a broader set of applications in the future.
APA, Harvard, Vancouver, ISO, and other styles
47

Zounmevo, Judicael A., Dries Kimpe, Robert Ross, and Ahmad Afsahi. "Extreme-scale computing services over MPI: Experiences, observations and features proposal for next-generation message passing interface." International Journal of High Performance Computing Applications 28, no. 4 (September 10, 2014): 435–49. http://dx.doi.org/10.1177/1094342014548864.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Yang, Qian, Bing Wei, Linqian Li, and Debiao Ge. "Analysis of the Calculation of a Plasma Sheath Using the Parallel SO-DGTD Method." International Journal of Antennas and Propagation 2019 (April 21, 2019): 1–9. http://dx.doi.org/10.1155/2019/7160913.

Full text
Abstract:
The plasma sheath is known as a popular topic of computational electromagnetics, and the plasma case is more resource-intensive than the non-plasma case. In this paper, a parallel shift-operator discontinuous Galerkin time-domain method using the MPI (Message Passing Interface) library is proposed to solve the large-scale plasma problems. To demonstrate our algorithm, a plasma sheath model of the high-speed blunt cone was established based on the results of the multiphysics software, and our algorithm was used to extract the radar cross-section (RCS) versus different incident angles of the model.
APA, Harvard, Vancouver, ISO, and other styles
49

Wang, Xizhong, and Deyun Chen. "A Parallel Encryption Algorithm Based on Piecewise Linear Chaotic Map." Mathematical Problems in Engineering 2013 (2013): 1–7. http://dx.doi.org/10.1155/2013/537934.

Full text
Abstract:
We introduce a parallel chaos-based encryption algorithm for taking advantage of multicore processors. The chaotic cryptosystem is generated by the piecewise linear chaotic map (PWLCM). The parallel algorithm is designed with a master/slave communication model with the Message Passing Interface (MPI). The algorithm is suitable not only for multicore processors but also for the single-processor architecture. The experimental results show that the chaos-based cryptosystem possesses good statistical properties. The parallel algorithm provides much better performance than the serial one and would be useful for encrypting/decrypting large files or multimedia.
APA, Harvard, Vancouver, ISO, and other styles
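The PWLCM itself is a standard map; the sketch below iterates it to make an XOR keystream, split into independently-seeded blocks so a master could hand blocks to parallel slaves. The keystream-byte derivation and per-block seeding here are illustrative assumptions, not the paper's actual scheme.

```python
# Piecewise linear chaotic map (PWLCM) keystream sketch with a master/slave
# block split: each block gets its own seed, so "slaves" can generate their
# keystreams independently and in parallel. The byte derivation and
# per-block seeding are illustrative, not the paper's construction.

def pwlcm(x, p):
    """One PWLCM iteration with control parameter 0 < p < 0.5."""
    if x < p:
        return x / p
    if x <= 0.5:
        return (x - p) / (0.5 - p)
    return pwlcm(1.0 - x, p)         # the map is symmetric about 0.5

def keystream(seed, p, n, burn=100):
    x = seed
    for _ in range(burn):            # discard transient iterations
        x = pwlcm(x, p)
    out = bytearray()
    for _ in range(n):
        x = pwlcm(x, p)
        out.append(int(x * 256) & 0xFF)
    return bytes(out)

def crypt(data, seed=0.123456, p=0.3, block=16):
    """XOR each block with a PWLCM keystream; encryption == decryption.
    Blocks are independent, so a master can assign them to parallel slaves."""
    out = bytearray()
    for b in range(0, len(data), block):
        chunk = data[b:b + block]
        bseed = (seed + 0.618 * (b + 1)) % 1.0 or 0.25   # illustrative per-block seed
        ks = keystream(pwlcm(bseed, p), p, len(chunk))
        out.extend(c ^ k for c, k in zip(chunk, ks))
    return bytes(out)
```

Since XOR with the same keystream is an involution, applying `crypt` twice recovers the plaintext, which is what makes the single master/slave code path serve both encryption and decryption.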
50

NING, JIANGUO, TIANBAO MA, and GUANGLEI FEI. "MULTI-MATERIAL EULERIAN METHOD AND PARALLEL COMPUTATION FOR 3D EXPLOSION AND IMPACT PROBLEMS." International Journal of Computational Methods 11, no. 05 (October 2014): 1350079. http://dx.doi.org/10.1142/s0219876213500795.

Full text
Abstract:
We adopted an operator splitting method to solve the governing equations of explosion and impact problems, and used a fuzzy interface treatment to handle 3D multi-material interface difficulties. Then, based on the message passing interface (MPI), the data dependence and relevance between adjacent subdomains in parallel computing with the Eulerian method were studied and analyzed. Finally, we numerically simulated blast in air and a shaped charge jet using the PMMIC-3D hydrocode, and performed comparisons with empirical formulas and experiments. The results show that the cell spatial step has an important influence on computational accuracy, and the result computed with a smaller cell spatial step is close to that of the empirical formula. Therefore, it is necessary to develop parallel computing to enlarge the calculation scale at smaller cell spatial steps for explosion and impact problems.
APA, Harvard, Vancouver, ISO, and other styles