Academic literature on the topic "MPI"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles

Consult the thematic lists of articles, books, theses, conference proceedings, and other academic sources on the topic "MPI".

Next to every source in the list of references you will find an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "MPI"

1

Taniguchi, Yusuke, Nao Suzuki, Kae Kakura, Kazunari Tanabe, Ryutaro Ito, Tadahiro Kashiwamura, Akie Fujimoto et al. "Effect of Continuous Intake of Lactobacillus salivarius WB21 on Tissues Surrounding Implants: A Double-Blind Randomized Clinical Trial". Life 14, no. 12 (November 22, 2024): 1532. http://dx.doi.org/10.3390/life14121532.

Full text
Abstract
Objective: This study aimed to improve the health of peri-implant tissues through continuous intake of Lactobacillus salivarius WB21 (LSWB21) tablets. Methods: A double-blind, randomized controlled trial was conducted with 23 maintenance patients who had generally healthy oral peri-implant tissues. Participants were divided into a test group (n = 12) receiving LSWB21 tablets and a control group (n = 11) receiving placebos. All patients took one tablet three times daily for 2 months. Evaluation measures included modified Gingival Index (mGI), modified Plaque Index (mPI), modified Bleeding Index (mBI), salivary secretory IgA, and oral symptoms assessed at baseline, 1 month, and 2 months. Results: After 2 months, significant improvements in the mGI, mPI, and mBI were observed in the test group; significant improvement in the mPI was observed in the control group. Changes in the mGI over 2 months significantly differed between the groups (p = 0.038), and multiple regression analysis confirmed the effectiveness of LSWB21 in reducing the mGI (p = 0.034). Subjective symptoms such as bad breath in the test group and tongue symptoms in the control group also significantly improved. Conclusion: Continuous intake of LSWB21 may be beneficial for stabilizing peri-implant tissue. Trial registration: UMIN000039392 (UMIN-CTR).
2

Fleming, Richard. "Reno Cardiologist Confirms FMTVDM – Opening New Opportunities for Nuclear Cardiologists". Clinical Medical Reviews and Reports 1, no. 1 (December 19, 2019): 01–04. http://dx.doi.org/10.31579/2690-8794/001.

Full text
Abstract
Background: A quantitative myocardial perfusion imaging (MPI) and oncologic - including molecular breast imaging (MBI) - utility patent (FMTVDM*) previously validated at experienced MPI and MBI centers was independently tested for clinical application at a private practice cardiologist's office in Reno, Nevada. Methods: Using FMTVDM, a private practice cardiologist independently investigated forty-four regions of interest (ROI) in 12 women with varying transitional levels of breast changes – including breast cancer. Results: Using FMTVDM, a nuclear cardiologist without prior experience in MBI was able to easily measure changes in women's breast tissue, differentiating inflammatory and cancerous breast tissue from normal tissue using the same camera used for MPI. These measured changes provided diagnostically useful information on cellular metabolism and regional blood flow (RBF) changes – the same properties which differentiate ischemic coronary artery disease (CAD) on MPI. Conclusions: Quantitative MBI using FMTVDM allows differentiation of tissue types through measurement of enhanced regional blood flow and metabolic differences. Nuclear cardiologists have previously reported cases of breast cancer while conducting MPI studies. This investigation demonstrated that nuclear cardiologists can independently conduct MBI in addition to MPI studies using the nuclear cameras they currently use for MPI.
3

Overbeek, Femke C. M. S., Jeannette A. Goudzwaard, Judy van Hemmen, Rozemarijn L. van Bruchem-Visser, Janne M. Papma, Harmke A. Polinder-Bos, and Francesco U. S. Mattace-Raso. "The Multidimensional Prognostic Index Predicts Mortality in Older Outpatients with Cognitive Decline". Journal of Clinical Medicine 11, no. 9 (April 23, 2022): 2369. http://dx.doi.org/10.3390/jcm11092369.

Full text
Abstract
Given the heterogeneity of the growing group of older outpatients with cognitive decline, it is challenging to evaluate survival rates in clinical shared decision making. The primary outcome was to determine whether the Multidimensional Prognostic Index (MPI) predicts mortality, whilst assessing the MPI distribution was considered secondary. This retrospective chart review included 311 outpatients aged ≥65 years and diagnosed with dementia or mild cognitive impairment (MCI). The MPI includes several domains of the comprehensive geriatric assessment (CGA). All characteristics, the data to calculate the risk score, and mortality data were extracted from administrative information in the database of the Alzheimer's Center and from medical records. The study population (mean age 76.8 years, 51.4% men) was divided as follows: 34.1% belonged to MPI category 1, 52.1% to MPI category 2 and 13.8% to MPI category 3. Patients with dementia had a higher mean MPI risk score than patients with MCI (0.47 vs. 0.32; p < 0.001). The HRs and corresponding 95% CIs for mortality in patients in MPI categories 2 and 3 were 1.67 (0.81–3.45) and 3.80 (1.56–9.24) compared with MPI category 1, respectively. This study shows that the MPI predicts mortality in outpatients with cognitive decline.
4

Hilbrich, Tobias, Matthias S. Müller, and Bettina Krammer. "MPI Correctness Checking for OpenMP/MPI Applications". International Journal of Parallel Programming 37, no. 3 (April 22, 2009): 277–91. http://dx.doi.org/10.1007/s10766-009-0099-4.

Full text
5

Babbar, Rohan, Matteo Ravasi, and Yuxi Hong. "PyLops-MPI - MPI Powered PyLops with mpi4py". Journal of Open Source Software 10, no. 105 (January 7, 2025): 7512. https://doi.org/10.21105/joss.07512.

Full text
6

Chiang, Y. C., and Y. T. Kiang. "Genetic analysis of mannose-6-phosphate isomerase in soybeans". Genome 30, no. 5 (October 1, 1988): 808–11. http://dx.doi.org/10.1139/g88-130.

Full text
Abstract
Five mannose-6-phosphate isomerase (EC 5.3.1.8) variants were observed electrophoretically in cultivated soybeans (Glycine max (L.) Merr.) and wild soybeans (G. soja Sieb. & Zucc.). Four of the five variants differed in the mobility of the two mannose-6-phosphate isomerase bands observed, while the fifth showed no enzyme activity. Several crosses involving different variants were made to study inheritance of the observed variants. The inheritance data showed that the five variants were allelic and controlled by a single locus (Mpi). The five alleles were as follows: Mpi-a (Rf 0.61 and 0.66); Mpi-b (Rf 0.66 and 0.7); Mpi-c (Rf 0.71 and 0.75); Mpi-d (Rf 0.76 and 0.80); and mpi. Mpi-a, Mpi-b, Mpi-c, and Mpi-d are codominant, and the null allele mpi is recessive. The Mpi-b allele is most common, while the Mpi-d and mpi alleles are rare in both the cultivated and wild soybean germ plasm from the various sources examined. Key words: Glycine max, Glycine soja, isozymes, Mpi, gel electrophoresis, allelic frequency.
7

Liu, Feilong, Claude Barthels, Spyros Blanas, Hideaki Kimura, and Garret Swart. "Beyond MPI". ACM SIGMOD Record 49, no. 4 (March 8, 2021): 12–17. http://dx.doi.org/10.1145/3456859.3456862.

Full text
Abstract
Networks with Remote Direct Memory Access (RDMA) support are becoming increasingly common. RDMA, however, offers a limited programming interface to remote memory that consists of read, write and atomic operations. With RDMA alone, completing the most basic operations on remote data structures often requires multiple round-trips over the network. Data-intensive systems strongly desire higher-level communication abstractions that support more complex interaction patterns. A natural candidate to consider is MPI, the de facto standard for developing high-performance applications in the HPC community. This paper critically evaluates the communication primitives of MPI and shows that using MPI in the context of a data processing system comes with its own set of insurmountable challenges. Based on this analysis, we propose a new communication abstraction named RDMO, or Remote Direct Memory Operation, that dispatches a short sequence of reads, writes and atomic operations to remote memory and executes them in a single round-trip.
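As an editorial aside, the round-trip argument is easy to make concrete with MPI's one-sided primitives. In the minimal C sketch below (the window layout and the remote_append name are illustrative assumptions, not from the paper), appending a value to a remote array takes two dependent network operations - an atomic fetch-and-add to reserve a slot, then a put to write the payload - exactly the kind of sequence a single RDMO would execute in one round-trip:

    /* Sketch: a remote "append" built from MPI-3 one-sided operations.
     * Assumes each rank exposes a window whose element 0 is a long
     * counter, followed by a data array (displacement unit: long). */
    #include <mpi.h>

    void remote_append(MPI_Win win, int target, long value)
    {
        long slot;
        const long one = 1;

        MPI_Win_lock(MPI_LOCK_SHARED, target, 0, win);

        /* Round-trip 1: atomically reserve a slot in the remote array. */
        MPI_Fetch_and_op(&one, &slot, MPI_LONG, target, 0, MPI_SUM, win);
        MPI_Win_flush(target, win); /* must know the slot before writing */

        /* Round-trip 2: write the payload into the reserved slot. */
        MPI_Put(&value, 1, MPI_LONG, target, 1 + slot, 1, MPI_LONG, win);

        MPI_Win_unlock(target, win);
    }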
8

"MPI-5003". Inpharma Weekly, no. 1133 (April 1998): 9. http://dx.doi.org/10.2165/00128413-199811330-00014.

Full text
9

Karwande, Amit, Xin Yuan, and David K. Lowenthal. "CC--MPI". ACM SIGPLAN Notices 38, no. 10 (October 2003): 95–106. http://dx.doi.org/10.1145/966049.781514.

Full text
10

Louca, Soulla, Neophytos Neophytou, Adrianos Lachanas, and Paraskevas Evripidou. "MPI-FT: Portable Fault Tolerance Scheme for MPI". Parallel Processing Letters 10, no. 04 (December 2000): 371–82. http://dx.doi.org/10.1142/s0129626400000342.

Full text
Abstract
In this paper, we propose the design and development of a fault tolerance and recovery scheme for the Message Passing Interface (MPI). The proposed scheme consists of a detection mechanism for detecting process failures, and a recovery mechanism. Two different cases are considered, both assuming the existence of a monitoring process, the Observer, which triggers the recovery procedure in case of failure. In the first case, each process keeps a buffer with its own message traffic to be used in case of failure, while the implementor uses periodic tests for notification of failure by the Observer. The recovery function simulates all the communication of the processes with the dead one by re-sending to the replacement process all the messages destined for the dead one. In the second case, the Observer receives and stores all message traffic, and sends to the replacement all the buffered messages destined for the dead process. Solutions are provided to the dead communicator problem caused by the death of a process. A description of the prototype developed is provided, along with the results of the experiments performed for efficiency and performance.
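To picture the first scheme's sender-side buffering: every outgoing message is logged locally before it is sent, so the traffic addressed to a failed process can later be replayed to its replacement. A minimal C sketch of that idea, with hypothetical names (FT_Send, FT_replay, msg_log) that are not part of the MPI-FT prototype:

    /* Sketch: log outgoing messages so they can be re-sent to a
     * replacement process after a failure (first MPI-FT scheme). */
    #include <mpi.h>
    #include <stdlib.h>
    #include <string.h>

    typedef struct {
        void *buf;
        int count, dest, tag;
        MPI_Datatype type;
    } logged_msg;

    static logged_msg msg_log[65536]; /* sender-side message buffer */
    static int log_len = 0;

    /* Used in place of MPI_Send: copy the payload into the log, then send. */
    int FT_Send(const void *buf, int count, MPI_Datatype type,
                int dest, int tag, MPI_Comm comm)
    {
        int size;
        MPI_Type_size(type, &size);
        logged_msg *m = &msg_log[log_len++];
        m->buf = malloc((size_t)size * count);
        memcpy(m->buf, buf, (size_t)size * count);
        m->count = count; m->dest = dest; m->tag = tag; m->type = type;
        return MPI_Send(buf, count, type, dest, tag, comm);
    }

    /* Recovery: replay everything that was addressed to the dead rank. */
    void FT_replay(int dead, int replacement, MPI_Comm comm)
    {
        for (int i = 0; i < log_len; i++)
            if (msg_log[i].dest == dead)
                MPI_Send(msg_log[i].buf, msg_log[i].count, msg_log[i].type,
                         replacement, msg_log[i].tag, comm);
    }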

Theses on the topic "MPI"

1

Kamal, Humaira. "FG-MPI: Fine-Grain MPI". Thesis, University of British Columbia, 2013. http://hdl.handle.net/2429/44668.

Full text
Abstract
The Message Passing Interface (MPI) is widely used to write sophisticated parallel applications ranging from cognitive computing to weather predictions and is almost universally adopted for High Performance Computing (HPC). Many popular MPI implementations bind MPI processes to OS processes. This runtime model has closely matched single- or multi-processor compute clusters. Since 2008, however, clusters of multicore nodes have been the predominant architecture for HPC, with the opportunity for parallelism inside one compute node. There are a number of popular parallel programming languages for multicore that use message passing. One notable difference between MPI and these languages is the granularity of the MPI processes. Processes written using MPI tend to be coarse-grained and designed to match the number of processes to the available hardware, rather than the program structure. Binding MPI processes to OS processes fails to take full advantage of the finer-grain parallelism available on today's multicore systems. Our goal was to take advantage of the type of runtime systems used by fine-grain languages and integrate that into MPI to obtain the best of both programming models: the ability to have fine-grain parallelism, while maintaining MPI's rich support for communication inside clusters. Fine-Grain MPI (FG-MPI) is a system that extends the execution model of MPI to include interleaved concurrency through integration into the MPI middleware. FG-MPI is integrated into the MPICH2 middleware, which is an open source, production-quality implementation of MPI. The FG-MPI runtime uses coroutines to implement lightweight MPI processes that are non-preemptively scheduled by its MPI-aware scheduler. The use of coroutines enables fast context-switching and low communication and synchronization overhead. FG-MPI enables expression of finer-grain function-level parallelism, which allows for flexible process mapping and scalability, and can lead to better program performance. We have demonstrated FG-MPI's ability to scale to over 100 million MPI processes on a large cluster of 6,480 cores. This is the first time any system has executed such a large number of MPI processes, and this capability will be useful in exploring scalability issues of the MPI middleware as systems move towards compute clusters with millions of processor cores.
2

Ramesh, Srinivasan. "MPI Performance Engineering with the MPI Tools Information Interface". Thesis, University of Oregon, 2018. http://hdl.handle.net/1794/23779.

Full text
Abstract
The desire for high performance on scalable parallel systems is increasing the complexity of MPI implementations and the need to tune them. The MPI Tools Information Interface (MPI_T), introduced in the MPI 3.0 standard, provides an opportunity for performance tools and external software to introspect and understand MPI runtime behavior at a deeper level to detect scalability issues. The interface also provides a mechanism to fine-tune the performance of the MPI library dynamically at runtime. This thesis describes the motivation, design, and challenges involved in developing an MPI performance engineering infrastructure using MPI_T for two performance toolkits: the TAU Performance System and Caliper. I validate the design of the infrastructure for TAU by developing optimizations for production and synthetic applications. I show that the MPI_T runtime introspection mechanism in Caliper enables a meaningful analysis of performance data. This thesis includes previously published co-authored material.
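For orientation, MPI_T is part of the MPI 3.0 standard itself and can be exercised without any toolkit. A minimal C sketch that enumerates the control variables an MPI library exposes (which variables exist is entirely implementation-specific):

    /* Sketch: list the MPI_T control variables of an MPI implementation. */
    #include <mpi.h>
    #include <stdio.h>

    int main(void)
    {
        int provided, num_cvar;
        MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);
        MPI_T_cvar_get_num(&num_cvar);

        for (int i = 0; i < num_cvar; i++) {
            char name[256], desc[1024];
            int name_len = sizeof name, desc_len = sizeof desc;
            int verbosity, bind, scope;
            MPI_Datatype dtype;
            MPI_T_enum enumtype;

            MPI_T_cvar_get_info(i, name, &name_len, &verbosity, &dtype,
                                &enumtype, desc, &desc_len, &bind, &scope);
            printf("cvar %d: %s -- %s\n", i, name, desc);
        }

        MPI_T_finalize();
        return 0;
    }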
3

Massetto, Francisco Isidro. "Hybrid MPI - uma implementação MPI para ambientes distribuídos híbridos". Universidade de São Paulo, 2007. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-08012008-100937/.

Full text
Abstract
The growing development of high-performance applications is a reality today. However, the diversity of computer architectures, including mono- and multiprocessor machines, clusters with or without a front-end node, and the variety of operating systems and MPI implementations has grown steadily. Focused on this scenario, programming libraries that allow the integration of several MPI implementations, operating systems and computer architectures are needed. This thesis introduces HyMPI, an MPI implementation aimed at integrating, in a single distributed high-performance environment, nodes with different architectures, clusters with or without a front-end machine, different operating systems and different MPI implementations. HyMPI offers a set of primitives compatible with the MPI specification, including point-to-point communication, collective operations, startup and finalization, as well as other utility functions.
4

Subotic, Vladimir. "Evaluating techniques for parallelization tuning in MPI, OmpSs and MPI/OmpSs". Doctoral thesis, Universitat Politècnica de Catalunya, 2013. http://hdl.handle.net/10803/129573.

Full text
Abstract
Parallel programming is used to partition a computational problem among multiple processing units and to define how they interact (communicate and synchronize) in order to guarantee the correct result. The performance that is achieved when executing the parallel program on a parallel architecture is usually far from optimal: computation unbalance and excessive interaction among processing units often cause lost cycles, reducing the efficiency of parallel computation. In this thesis we propose techniques oriented to better exploit parallelism in parallel applications, with emphasis on techniques that increase asynchronism. Theoretically, this type of parallelization tuning promises multiple benefits. First, it should mitigate communication and synchronization delays, thus increasing the overall performance. Furthermore, parallelization tuning should expose additional parallelism and therefore increase the scalability of execution. Finally, increased asynchronism would provide higher tolerance to slower networks and external noise. In the first part of this thesis, we study the potential for tuning MPI parallelism. More specifically, we explore automatic techniques to overlap communication and computation. We propose a speculative messaging technique that increases the overlap and requires no changes to the original MPI application. Our technique automatically identifies the application's MPI activity and reinterprets that activity using optimally placed non-blocking MPI requests. We demonstrate that this overlapping technique increases the asynchronism of MPI messages, maximizing the overlap and consequently leading to execution speedup and higher tolerance to bandwidth reduction. However, in the case of realistic scientific workloads, we show that the overlapping potential is significantly limited by the pattern by which each MPI process locally operates on MPI messages. In the second part of this thesis, we study the potential for tuning hybrid MPI/OmpSs parallelism. We try to gain a better understanding of the parallelism of hybrid MPI/OmpSs applications in order to evaluate how these applications would execute on future machines and to predict the execution bottlenecks that are likely to emerge. We explore how MPI/OmpSs applications could scale on a parallel machine with hundreds of cores per node. Furthermore, we investigate how this high parallelism within each node would reflect on the network constraints. We especially focus on identifying critical code sections in MPI/OmpSs. We devised a technique that quickly evaluates, for a given MPI/OmpSs application and the selected target machine, which code section should be optimized in order to gain the highest performance benefits. Also, this thesis studies techniques to quickly explore the potential OmpSs parallelism inherent in applications. We provide mechanisms to easily evaluate the potential parallelism of any task decomposition. Furthermore, we describe an iterative trial-and-error approach to search for a task decomposition that will expose sufficient parallelism for a given target machine. Finally, we explore the potential of automating the iterative approach by capturing the programmers' experience in an expert system that can autonomously lead the search process. Throughout the work on this thesis, we also designed development tools that can be useful to other researchers in the field. The most advanced of these tools is Tareador, a tool to help port MPI applications to the MPI/OmpSs programming model.
Tareador provides a simple interface for proposing a decomposition of a code into OmpSs tasks. Tareador dynamically calculates data dependencies among the annotated tasks and automatically estimates the potential OmpSs parallelization. Furthermore, Tareador gives additional hints on how to complete the process of porting the application to OmpSs. Tareador has already proved itself useful by being included in the academic classes on parallel programming at UPC.
5

Träff, Jesper. "Aspects of the efficient implementation of the message passing interface (MPI)". Aachen: Shaker, 2009. http://d-nb.info/994501803/04.

Full text
6

Young, Bobby Dalton. "MPI within a GPU". UKnowledge, 2009. http://uknowledge.uky.edu/gradschool_theses/614.

Full text
Abstract
GPUs offer high-performance floating-point computation at commodity prices, but their usage is hindered by programming models which expose the user to irregularities in the current shared-memory environments and require learning new interfaces and semantics. This thesis will demonstrate that the message-passing paradigm can be conceptually cleaner than the current data-parallel models for programming GPUs because it can hide the quirks of current GPU shared-memory environments, as well as GPU-specific features, behind a well-established and well-understood interface. This will be shown by demonstrating a proof-of-concept MPI implementation which provides cleaner, simpler code with a reasonable performance cost. This thesis will also demonstrate that, although there is a virtualization constraint imposed by MPI, this constraint is harmless as long as the virtualization was already chosen to be optimal in terms of a strong execution model and nearly-optimal execution time. This will be demonstrated by examining execution times with varying virtualization using a computationally-expensive micro-kernel.
7

Angadi, Raghavendra. "Best effort MPI/RT as an alternative to MPI: design and performance comparison". Master's thesis, Mississippi State University, 2002. http://library.msstate.edu/etd/show.asp?etd=etd-12032002-162333.

Full text
8

Sankarapandian, Dayala Ganesh R. Kamal Raj. "Profiling MPI Primitives in Real-time Using OSU INAM". The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1587336162238284.

Full text
9

Hoefler, Torsten. "Communication/Computation Overlap in MPI". Universitätsbibliothek Chemnitz, 2006. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200600021.

Full text
Abstract
This talk discusses optimized collective algorithms and the benefits of leveraging independent hardware entities in a pipelined manner. The resulting approach overlaps computation with communication to accomplish this. Several examples are given.
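The basic idiom behind such overlap is worth spelling out. A minimal C sketch using non-blocking point-to-point calls, where the halo-exchange setting and the compute_interior/compute_boundary split are assumed for illustration (they are not from the talk):

    /* Sketch: overlap independent computation with communication. */
    #include <mpi.h>

    void compute_interior(void);           /* work not touching the halos */
    void compute_boundary(const double *); /* work needing received data  */

    void exchange_and_compute(double *halo_in, const double *halo_out,
                              int n, int left, int right, MPI_Comm comm)
    {
        MPI_Request reqs[2];

        /* Start the communication... */
        MPI_Irecv(halo_in, n, MPI_DOUBLE, left, 0, comm, &reqs[0]);
        MPI_Isend(halo_out, n, MPI_DOUBLE, right, 0, comm, &reqs[1]);

        /* ...overlap it with work that does not need the halo data... */
        compute_interior();

        /* ...and wait only when the received data is actually required. */
        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
        compute_boundary(halo_in);
    }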
10

Chung, Ryan Ki Sing. "CMCMPI: Compose-Map-Configure MPI". Thesis, University of British Columbia, 2014. http://hdl.handle.net/2429/51185.

Full text
Abstract
In order to manage the complexities of Multiple Program, Multiple Data (MPMD) program deployment and to optimize for performance, we propose (CM)²PI, a specification and tool that employs a four-stage approach to create a separation of concerns between distinct decisions: architecture interactions, software size, resource constraints, and function. With function-level parallelism in mind, we use multi-level compositions to create a scalable architecture specification and to improve re-usability and encapsulation. We explore different ways to abstract communication away from the tight coupling of MPI ranks and placement. One of the proposed methods is flow-controlled channels, which also aim to tackle the common issues of buffer limitations and termination. The specification increases compatibility with optimization tools, which enables the automatic optimization of program run time with respect to resource constraints. Together these features simplify the development of MPMD MPI programs.

Books on the topic "MPI"

1

Duanghom, Srinuan. An Mpi dictionary. Bangkok: Indigenous Languages of Thailand Research Project, 1989.

Search full text
2

Snir, Marc, ed. MPI: The Complete Reference. Cambridge, Mass.: MIT Press, 1996.

Search full text
3

Ndụbisi, Oriaku Onyefụlụchukwu. Atụrụ ga-epu mpi--. Enugu: Generation Books, 2006.

Search full text
4

Snir, Marc, ed. MPI--The Complete Reference. 2nd ed. Cambridge, Mass.: MIT Press, 1998.

Search full text
5

Moskovskiĭ gosudarstvennyĭ universitet pechati, ed. My iz MPI: Moskovskiĭ poligraficheskiĭ institut. Moskva: MGUP, 2005.

Search full text
6

Corbett, Peter, and United States National Aeronautics and Space Administration, eds. MPI-IO: A parallel file I/O interface for MPI: [NAS technical report NAS-95-002 ...]. Washington, DC: National Aeronautics and Space Administration, 1995.

Search full text
7

Nielsen, Frank. Introduction to HPC with MPI for Data Science. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-21903-5.

Full text
8

Research Institute for Advanced Computer Science (U.S.), ed. A portable MPI-based parallel vector template library. [Moffett Field, Calif.]: Research Institute for Advanced Computer Science, NASA Ames Research Center, 1995.

Search full text

Book chapters on the topic "MPI"

1

Ross, Robert, Robert Latham, William Gropp, Ewing Lusk, and Rajeev Thakur. "Processing MPI Datatypes Outside MPI". In Recent Advances in Parallel Virtual Machine and Message Passing Interface, 42–53. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-03770-2_11.

Full text
2

Pérache, Marc, Patrick Carribault, and Hervé Jourdren. "MPC-MPI: An MPI Implementation Reducing the Overall Memory Consumption". In Recent Advances in Parallel Virtual Machine and Message Passing Interface, 94–103. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-03770-2_16.

Full text
3

Knoth, Adrian. "Open MPI". In Grid-Computing, 117–26. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-79747-0_6.

Full text
4

Huang, Chao, Orion Lawlor, and L. V. Kalé. "Adaptive MPI". In Languages and Compilers for Parallel Computing, 306–22. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-24644-2_20.

Full text
5

Padua, David, Amol Ghoting, John A. Gunnels, Mark S. Squillante, José Meseguer, James H. Cownie, Duncan Roweth et al. "MPI-IO". In Encyclopedia of Parallel Computing, 1191–99. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-09766-4_297.

Full text
6

Rabenseifner, Rolf. "MPI-GLUE: Interoperable high-performance MPI combining different vendor's MPI worlds". In Euro-Par’98 Parallel Processing, 563–69. Berlin, Heidelberg: Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/bfb0057902.

Full text
7

Gropp, William, Ewing Lusk, and Rajeev Thakur. "Advanced MPI Including New MPI-3 Features". In Recent Advances in the Message Passing Interface, 14. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33518-1_5.

Full text
8

Huse, Lars Paul, and Ole W. Saastad. "The Network Agnostic MPI – Scali MPI Connect". In Recent Advances in Parallel Virtual Machine and Message Passing Interface, 294–301. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-39924-7_42.

Full text
9

Graham, Richard L., Timothy S. Woodall, and Jeffrey M. Squyres. "Open MPI: A Flexible High Performance MPI". In Parallel Processing and Applied Mathematics, 228–39. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11752578_29.

Full text
10

Szustak, Lukasz, Roman Wyrzykowski, Kamil Halbiniak, and Pawel Bratek. "Toward Heterogeneous MPI+MPI Programming: Comparison of OpenMP and MPI Shared Memory Models". In Euro-Par 2019: Parallel Processing Workshops, 270–81. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-48340-1_21.

Full text

Conference proceedings on the topic "MPI"

1

Yang, Chen, Guifa Sun, Xiang Cai, Xiguo Xie, and Jianpeng Sun. "MPI toolkit: MPI-based performance analysis software for parallel programs". In International Conference on Algorithms, High Performance Computing and Artificial Intelligence, edited by Pavel Loskot and Liang Hu, 117. SPIE, 2024. http://dx.doi.org/10.1117/12.3051762.

Full text
2

Temuçin, Yıltan Hassan, Whit Schonbein, Scott Levy, Amirhossein Sojoodi, Ryan E. Grant, and Ahmad Afsahi. "Design and Implementation of MPI-Native GPU-Initiated MPI Partitioned Communication". In SC24-W: Workshops of the International Conference for High Performance Computing, Networking, Storage and Analysis, 436–47. IEEE, 2024. https://doi.org/10.1109/scw63240.2024.00065.

Full text
3

Zhou, Hui, Robert Latham, Ken Raffenetti, Yanfei Guo, and Rajeev Thakur. "MPI Progress For All". In SC24-W: Workshops of the International Conference for High Performance Computing, Networking, Storage and Analysis, 425–35. IEEE, 2024. https://doi.org/10.1109/scw63240.2024.00063.

Full text
4

Getov, Vladimir, Paul Gray, and Vaidy Sunderam. "MPI and Java-MPI". In the 1999 ACM/IEEE conference. New York, New York, USA: ACM Press, 1999. http://dx.doi.org/10.1145/331532.331553.

Full text
5

Green, Ronald W. "Beyond MPI---Beyond MPI". In the 2006 ACM/IEEE conference. New York, New York, USA: ACM Press, 2006. http://dx.doi.org/10.1145/1188455.1188494.

Full text
6

"MPI". In the 1993 ACM/IEEE conference. New York, New York, USA: ACM Press, 1993. http://dx.doi.org/10.1145/169627.169855.

Full text
7

Squyres, Jeff, and Brian Barrett. "Open MPI---Open MPI community meeting". In the 2006 ACM/IEEE conference. New York, New York, USA: ACM Press, 2006. http://dx.doi.org/10.1145/1188455.1188461.

Full text
8

Graham, Richard, Galen Shipman, Brian Barrett, Ralph Castain, George Bosilca, and Andrew Lumsdaine. "Open MPI: A High-Performance, Heterogeneous MPI". In 2006 IEEE International Conference on Cluster Computing. IEEE, 2006. http://dx.doi.org/10.1109/clustr.2006.311904.

Full text
9

Du, Cong, and Xian-He Sun. "MPI-Mitten: Enabling Migration Technology in MPI". In Sixth IEEE International Symposium on Cluster Computing and the Grid. IEEE, 2006. http://dx.doi.org/10.1109/ccgrid.2006.71.

Full text
10

Booth, S., and E. Mourao. "Single sided MPI implementations for SUN MPI". In ACM/IEEE SC 2000 Conference. IEEE, 2000. http://dx.doi.org/10.1109/sc.2000.10022.

Full text

Reports on the topic "MPI"

1

Han, D., and T. Jones. MPI Profiling. Office of Scientific and Technical Information (OSTI), February 2005. http://dx.doi.org/10.2172/15014654.

Full text
2

Garrett, Charles Kristopher. Distributed Computing (MPI). Office of Scientific and Technical Information (OSTI), June 2016. http://dx.doi.org/10.2172/1258356.

Full text
3

Pritchard, Howard Porter Jr, Samuel Keith Gutierrez, Nathan Hjelm, Daniel Holmes, and Ralph Castain. MPI Sessions: Second Demonstration and Evaluation of MPI Sessions Prototype. Office of Scientific and Technical Information (OSTI), September 2019. http://dx.doi.org/10.2172/1566099.

Full text
4

Pritchard, Howard. MPI Sessions - Working Group activities post MPI 4.0 standard ratification. Office of Scientific and Technical Information (OSTI), December 2022. http://dx.doi.org/10.2172/1906014.

Full text
5

Hassanzadeh, Sara, Sina Neshat, Afshin Heidari, and Masoud Moslehi. Myocardial Perfusion Imaging in the Era of COVID-19. INPLASY - International Platform of Registered Systematic Review and Meta-analysis Protocols, April 2022. http://dx.doi.org/10.37766/inplasy2022.4.0063.

Full text
Abstract
Review question / Objective: This review studies all aspects of myocardial perfusion imaging with single-photon emission computed tomography (MPI SPECT) after the COVID-19 pandemic. Condition being studied: The use of many imaging modalities has declined since the COVID-19 pandemic. Our focus in this review is to see whether the number of MPIs has been lowered and, if so, why. Furthermore, it is possible that the combination of CT attenuation correction and MPI could yield incidental findings; in this study, we will also look for these probable findings. Third, we know from previous studies that COVID might cause cardiac injuries in some people. Since MPI is a cardiovascular imaging technique, it might show those injuries. We will therefore review articles to find out what MPI findings those cardiac injuries can cause in patients with active COVID infection, long COVID, or previous COVID.
6

Loewe, W. MPI I/O Testing Results. Office of Scientific and Technical Information (OSTI), September 2007. http://dx.doi.org/10.2172/925675.

Full text
7

George, William L., John G. Hagedorn, and Judith E. Devaney. Parallel programming with interoperable MPI. Gaithersburg, MD: National Institute of Standards and Technology, 2003. http://dx.doi.org/10.6028/nist.ir.7066.

Full text
8

Pritchard, Howard, and Tom Herschberg. MPI Session: External Network Transport Implementation. Office of Scientific and Technical Information (OSTI), September 2020. http://dx.doi.org/10.2172/1669081.

Full text
9

Rao, Lakshman A., and Jon Weissman. MPI-Based Adaptive Parallel Grid Services. Fort Belvoir, VA: Defense Technical Information Center, August 2003. http://dx.doi.org/10.21236/ada439405.

Full text
10

Bronevetsky, G., A. Friedley, T. Hoefler, A. Lumsdaine, and D. Quinlan. Compiling MPI for Many-Core Systems. Office of Scientific and Technical Information (OSTI), June 2013. http://dx.doi.org/10.2172/1088441.

Full text