Dissertations / Theses on the topic 'Message-Passing Interface (MPI)'
Consult the top 23 dissertations / theses for your research on the topic 'Message-Passing Interface (MPI).'
Träff, Jesper. "Aspects of the efficient implementation of the message passing interface (MPI)." Aachen: Shaker, 2009. http://d-nb.info/994501803/04.
Katti, Amogh. "Epidemic failure detection and consensus for message passing interface (MPI)." Thesis, University of Reading, 2016. http://centaur.reading.ac.uk/69932/.
Rattanapoka, Choopan. "P2P-MPI : A fault-tolerant Message Passing Interface Implementation for Grids." PhD thesis, Université Louis Pasteur - Strasbourg I, 2008. http://tel.archives-ouvertes.fr/tel-00724132.
Ramesh, Srinivasan. "MPI Performance Engineering with the MPI Tools Information Interface." Thesis, University of Oregon, 2018. http://hdl.handle.net/1794/23779.
Poole, Jeffrey Hyatt. "Implementation of a Hardware-Optimized MPI Library for the SCMP Multiprocessor." Thesis, Virginia Tech, 2001. http://hdl.handle.net/10919/10064.
Master of Science
Strand, Christian. "A Java Founded LOIS-framework and the Message Passing Interface? : An Exploratory Case Study." Thesis, Växjö University, School of Mathematics and Systems Engineering, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:vxu:diva-916.
In this thesis project we successfully added an MPI extension layer to the LOIS framework. The framework defines an infrastructure for executing and connecting continuous stream-processing applications. The MPI extension delivers the same amount of stream-based data as the framework's original transport. We assert that an MPI-2-compatible implementation is a candidate for extending the given framework with an adaptive and flexible communication subsystem. Adaptability is required since the communication subsystem has to be resilient to change, whether due to optimizations or to system requirements.
Träff, Jesper Larsson [Verfasser]. "Aspects of the efficient Implementation of the Message Passing Interface (MPI) / Jesper Larsson Träff." Aachen : Shaker, 2009. http://d-nb.info/115651794X/34.
Holmes, Daniel John. "McMPI : a managed-code message passing interface library for high performance communication in C#." Thesis, University of Edinburgh, 2012. http://hdl.handle.net/1842/7732.
Chen, Zhezhe. "System Support for Improving the Reliability of MPI Applications and Libraries." The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1375880144.
Radcliffe, Nicholas Ryan. "Adjusting Process Count on Demand for Petascale Global Optimization." Thesis, Virginia Tech, 2011. http://hdl.handle.net/10919/36349.
Master of Science
Fernandes, Cláudio Antônio Costa. "Estudos de algumas ferramentas de coleta e visualização de dados e desempenho de aplicações paralelas no ambiente MPI [Studies of some tools for data collection and visualization and the performance of parallel applications in the MPI environment]." Universidade Federal do Rio Grande do Norte, 2003. http://repositorio.ufrn.br:8080/jspui/handle/123456789/15428.
Full textCoordena??o de Aperfei?oamento de Pessoal de N?vel Superior
Recent years have seen growing acceptance and adoption of parallel processing, both for high-performance scientific computing and for general-purpose applications. This acceptance has been favored mainly by the development of massively parallel processing (MPP) environments and of distributed computing. A point common to distributed systems and MPP architectures is the notion of message passing, which allows communication between processes. A message-passing environment consists basically of a communication library that, acting as an extension of programming languages such as C, C++ and Fortran, allows the development of parallel applications. In the development of parallel applications, a fundamental aspect is their performance analysis. Several metrics can be used in this analysis: execution time, efficiency in the use of the processing elements, and scalability of the application with respect to the increase in the number of processors or in the size of the problem instance. Establishing models or mechanisms that permit this analysis can be quite complicated, given the parameters and degrees of freedom involved in the implementation of a parallel application. One alternative has been the use of tools for the collection and visualization of performance data, which allow the user to identify bottlenecks and sources of inefficiency in an application. Efficient visualization requires identifying and collecting data on the execution of the application, a stage called instrumentation.
This work first presents a study of the main techniques used to collect performance data, followed by a detailed analysis of the main available tools that can be used on Beowulf-cluster parallel architectures running Linux on the x86 platform with communication libraries based on MPI - Message Passing Interface, such as LAM and MPICH. This analysis is validated on parallel applications that deal with the training of perceptron neural networks using backpropagation. The conclusions obtained show the potential and ease of use of the analyzed tools.
Seth, Umesh Kumar. "Message Passing Interface parallelization of a multi-block structured numerical solver. Application to the numerical simulation of various typical Electro-Hydro-Dynamic flows." Thesis, Poitiers, 2019. http://www.theses.fr/2019POIT2264/document.
Several intricately coupled applications of modern industry fall under the multidisciplinary domain of electrohydrodynamics (EHD), where the interactions among charged and neutral particles are studied in the context of fluid dynamics and electrostatics together. The charged particles in fluids are generated by various physical mechanisms, and they move under the influence of the external electric field and the fluid velocity. Generally, with sufficient electric force magnitudes, momentum is also transferred from the charged species to the neutral particles. This coupled system is solved with the Maxwell equations, charge transport equations and Navier-Stokes equations simulated sequentially in a common time loop. The charge transport is solved considering convection, diffusion, source terms and other relevant mechanisms for the species. Then the bulk fluid motion is simulated with the induced electric force as a source term in the Navier-Stokes equations, thus coupling the electrostatic system with the fluid. In this thesis, we numerically investigated EHD phenomena such as unipolar injection, the conduction phenomenon in weakly conducting liquids, and flow control with dielectric barrier discharge (DBD) plasma actuators. Solving such complex physical systems numerically requires high-end computing resources and parallel CFD solvers, as these large EHD models are mathematically stiff and highly time consuming due to the range of time and length scales involved. This thesis contributes towards advancing the capability of numerical simulations carried out within the EFD group at Institut Pprime by developing a high-performance parallel solver with advanced EHD models. Message Passing Interface (MPI), the most popular technology developed specifically for distributed-memory platforms, was used to parallelize our multi-block structured EHD solver.
In the first part, the parallelization of our numerical EHD solver with advanced MPI features such as Cartesian topologies and inter-communicators is undertaken. In particular, a specific strategy has been designed and detailed to account for the multi-block structured grids of the code. The parallel code has been fully validated through several benchmarks, and scalability tests carried out on up to 1200 cores of our local cluster showed excellent parallel speed-ups with our approach. A trustworthy database containing all these validation tests carried out on multiple cores is provided to assist future developments. The second part of this thesis deals with the numerical simulation of several typical EHD flows. We examined three-dimensional electroconvection induced by unipolar injection between two plane-parallel electrodes, where unsteady hexagonal cells were observed. A 3D flow phenomenon with electro-convective plumes was also studied in the blade-plane electrode configuration, considering both autonomous and non-autonomous injection laws. A conduction mechanism based on the dissociation of neutral molecules of a weakly conductive liquid was successfully simulated, and our results were validated against numerical computations undertaken with the commercial code Comsol. The physical implications of the Robin boundary condition and the Onsager effect on the charge species were highlighted for electro-conduction in a rectangular channel. Finally, flow control using a dielectric barrier discharge plasma actuator was simulated using the Suzen-Huang model. The impacts of dielectric thickness, gap between the electrodes, and frequency and waveform of the applied voltage were investigated in terms of their effect on the induced maximum ionic wind velocity and average body force.
Flow-control simulations with a backward-facing step showed that laminar flow separation could be drastically controlled by placing the actuator at the tip of the step with the two electrodes perpendicular to each other.
Čižek, Martin. "Paralelizace sledování paprsku [Parallelization of Ray Tracing]." Master's thesis, Vysoké učení technické v Brně, Fakulta informačních technologií, 2009. http://www.nusl.cz/ntk/nusl-235485.
Aji, Ashwin M. "Programming High-Performance Clusters with Heterogeneous Computing Devices." Diss., Virginia Tech, 2015. http://hdl.handle.net/10919/52366.
Ph. D.
Glück, Olivier. "Optimisations de la bibliothèque de communication MPI pour machines parallèles de type " grappe de PCs " sur une primitive d'écriture distante." Paris 6, 2002. http://www.theses.fr/2002PA066158.
Abdelkafi, Omar. "Métaheuristiques hybrides distribuées et massivement parallèles [Distributed and massively parallel hybrid metaheuristics]." Thesis, Mulhouse, 2016. http://www.theses.fr/2016MULH9578/document.
Many optimization problems specific to different industrial and academic sectors (energy, chemicals, transportation, etc.) require the development of more effective resolution methods. To meet these needs, the aim of this thesis is to develop a library of several distributed and massively parallel hybrid metaheuristics. First, we studied the traveling salesman problem and its resolution by the ant colony method to establish hybridization and parallelization techniques. Two other optimization problems were then addressed: the quadratic assignment problem (QAP) and the zeolite structure problem (ZSP). For the QAP, several variants based on an iterative tabu search with adaptive diversification have been proposed. The aim of these proposals is to study the impact of data exchange, diversification strategies and cooperation methods. Our best variant is compared with six of the leading works in the literature. For the ZSP, two new formulations of the objective function, based on reward and penalty evaluation, are proposed to evaluate the potential of the zeolite structures found. Two hybrid and parallel genetic algorithms are proposed to generate stable zeolite structures. Our algorithms have so far generated six stable topologies, three of which are not listed in the SC-JZA website or in the Atlas of Prospective Zeolite Structures.
Bedez, Mathieu. "Modélisation multi-échelles et calculs parallèles appliqués à la simulation de l'activité neuronale [Multiscale modeling and parallel computing applied to the simulation of neuronal activity]." Thesis, Mulhouse, 2015. http://www.theses.fr/2015MULH9738/document.
Computational neuroscience has helped develop mathematical and computational tools for creating, and then simulating, models representing the behavior of certain components of our brain at the cellular level. These are helpful in understanding the physical and biochemical interactions between different neurons, rather than faithfully reproducing various cognitive functions as in work on artificial intelligence. The construction of models describing the brain as a whole, using a homogenization of microscopic data, is newer, because it is necessary to take into account the geometric complexity of the various structures comprising the brain; there is therefore a long process of reconstruction to be done before the simulations can be carried out. From a mathematical point of view, the various models are described using ordinary differential equations and partial differential equations. The major problem of these simulations is that the resolution time can become very large when fine detail is required in the solutions on both temporal and spatial scales. The purpose of this study is to investigate the various models describing the electrical activity of the brain, using innovative techniques for the parallelization of computations, thereby saving time while obtaining highly accurate results. Four major themes address this issue: description of the models, explanation of the parallelization tools, applications on both macroscopic models
Zhang, Hua. "VCLUSTER: A PORTABLE VIRTUAL COMPUTING LIBRARY FOR CLUSTER COMPUTING." Doctoral diss., Orlando, Fla. : University of Central Florida, 2008. http://purl.fcla.edu/fcla/etd/CFE0002339.
Zhang, Hang. "Distributed Support Vector Machine With Graphics Processing Units." ScholarWorks@UNO, 2009. http://scholarworks.uno.edu/td/991.
Full textChu, Chia-Lin, and 朱家霖. "Distributed Finite-Element Computation Using Message Passing Interface - MPI." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/11531823016343763977.
Full text國立雲林科技大學
營建工程系碩士班
90
Static analysis is a basic computation of mechanics and a fundamental, very important task in structural analysis. With the results of static analysis, we can understand the state of a structure when it is subjected to different types of loads. Since the finite element method is generally employed for large-scale structural analysis, the main computation phases involved in a static analysis consist of the evaluation of element stiffness matrices and load vectors, the assemblage of the system stiffness matrix and load vectors, the solution of the system equilibrium equations, and the calculation of internal forces or stresses. The calculations involved are primarily matrix computations. If the data partition and the corresponding solution process are designed properly, most computational phases involved in a finite element analysis parallelize well, and the consumed computer time can be greatly reduced when parallel computation is incorporated into the analysis. This is extremely useful for analyzing large-scale structures. Since the inception of parallel computers, parallel computation has been a very popular research topic in the field of computational mechanics, and the parallelization of finite element analysis receives particular attention because of the intensive computation usually involved. The main objective of this study is to develop efficient parallel algorithms for structural static analysis on distributed computer systems using a new generation of message passing standard, MPI.
Squyres, Jeffrey M. "A component architecture for the message passing interface (MPI): the systems services interface (SSI) of LAM/MPI." 2004. http://etd.nd.edu/ETD-db/theses/submitted/etd-03312004-160652/.
Thesis directed by Andrew Lumsdaine for the Department of Computer Science and Engineering. "April 2004." Includes bibliographical references (leaves 301-312).
Zounmevo, Ayi Judicael. "Scalability-Driven Approaches to Key Aspects of the Message Passing Interface for Next Generation Supercomputing." Thesis, 2014. http://hdl.handle.net/1974/12194.
Thesis (Ph.D., Electrical & Computer Engineering), Queen's University, 2014.
Gupta, Rakhi. "One To Many And Many To Many Collective Communication Operations On Grids." Thesis, 2006. http://hdl.handle.net/2005/345.