Academic literature on the topic 'MPI'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'MPI.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "MPI"
Taniguchi, Yusuke, Nao Suzuki, Kae Kakura, Kazunari Tanabe, Ryutaro Ito, Tadahiro Kashiwamura, Akie Fujimoto, et al. "Effect of Continuous Intake of Lactobacillus salivarius WB21 on Tissues Surrounding Implants: A Double-Blind Randomized Clinical Trial." Life 14, no. 12 (November 22, 2024): 1532. http://dx.doi.org/10.3390/life14121532.
Fleming, Richard. "Reno Cardiologist Confirms FMTVDM – Opening New Opportunities for Nuclear Cardiologists." Clinical Medical Reviews and Reports 1, no. 1 (December 19, 2019): 01–04. http://dx.doi.org/10.31579/2690-8794/001.
Overbeek, Femke C. M. S., Jeannette A. Goudzwaard, Judy van Hemmen, Rozemarijn L. van Bruchem-Visser, Janne M. Papma, Harmke A. Polinder-Bos, and Francesco U. S. Mattace-Raso. "The Multidimensional Prognostic Index Predicts Mortality in Older Outpatients with Cognitive Decline." Journal of Clinical Medicine 11, no. 9 (April 23, 2022): 2369. http://dx.doi.org/10.3390/jcm11092369.
Hilbrich, Tobias, Matthias S. Müller, and Bettina Krammer. "MPI Correctness Checking for OpenMP/MPI Applications." International Journal of Parallel Programming 37, no. 3 (April 22, 2009): 277–91. http://dx.doi.org/10.1007/s10766-009-0099-4.
Babbar, Rohan, Matteo Ravasi, and Yuxi Hong. "PyLops-MPI - MPI Powered PyLops with mpi4py." Journal of Open Source Software 10, no. 105 (January 7, 2025): 7512. https://doi.org/10.21105/joss.07512.
Chiang, Y. C., and Y. T. Kiang. "Genetic analysis of mannose-6-phosphate isomerase in soybeans." Genome 30, no. 5 (October 1, 1988): 808–11. http://dx.doi.org/10.1139/g88-130.
Liu, Feilong, Claude Barthels, Spyros Blanas, Hideaki Kimura, and Garret Swart. "Beyond MPI." ACM SIGMOD Record 49, no. 4 (March 8, 2021): 12–17. http://dx.doi.org/10.1145/3456859.3456862.
"MPI-5003." Inpharma Weekly, no. 1133 (April 1998): 9. http://dx.doi.org/10.2165/00128413-199811330-00014.
Karwande, Amit, Xin Yuan, and David K. Lowenthal. "CC--MPI." ACM SIGPLAN Notices 38, no. 10 (October 2003): 95–106. http://dx.doi.org/10.1145/966049.781514.
Louca, Soulla, Neophytos Neophytou, Adrianos Lachanas, and Paraskevas Evripidou. "MPI-FT: Portable Fault Tolerance Scheme for MPI." Parallel Processing Letters 10, no. 04 (December 2000): 371–82. http://dx.doi.org/10.1142/s0129626400000342.
Dissertations / Theses on the topic "MPI"
Kamal, Humaira. "FG-MPI: Fine-Grain MPI." Thesis, University of British Columbia, 2013. http://hdl.handle.net/2429/44668.
Ramesh, Srinivasan. "MPI Performance Engineering with the MPI Tools Information Interface." Thesis, University of Oregon, 2018. http://hdl.handle.net/1794/23779.
Massetto, Francisco Isidro. "Hybrid MPI - uma implementação MPI para ambientes distribuídos híbridos." Universidade de São Paulo, 2007. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-08012008-100937/.
The growing development of high-performance applications is a reality today. At the same time, the diversity of computer architectures, including single- and multiprocessor machines and clusters with or without a front-end node, and the variety of operating systems and MPI implementations keep increasing. In this scenario, programming libraries that allow the integration of several MPI implementations, operating systems, and computer architectures are needed. This thesis introduces HyMPI, an MPI implementation aimed at integrating, in a single distributed high-performance system, nodes with different architectures, clusters with or without a front-end machine, operating systems, and MPI implementations. HyMPI offers a set of primitives based on the MPI specification, including point-to-point communication, collective operations, startup and finalization, and some other utility functions.
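As a rough sketch of the primitive categories that abstract lists (startup and finalization, point-to-point communication, and collectives), the short C program below uses the standard MPI interface; it is an illustration only and makes no claim about HyMPI's actual API, which the record above does not show.

/* Minimal sketch of the primitive categories mentioned in the abstract,
 * written against the standard MPI C API (not HyMPI's own interface). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, token = 0;

    MPI_Init(&argc, &argv);                    /* startup */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size > 1) {
        if (rank == 0) {                       /* point-to-point */
            token = 42;
            MPI_Send(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        }
    }

    int sum = 0;                               /* collective operation */
    MPI_Allreduce(&rank, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
    printf("rank %d of %d: sum of ranks = %d\n", rank, size, sum);

    MPI_Finalize();                            /* finalization */
    return 0;
}

Built with mpicc and launched with mpirun -np 2 (or more), each rank prints the sum of all ranks once the collective completes.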
Subotic, Vladimir. "Evaluating techniques for parallelization tuning in MPI, OmpSs and MPI/OmpSs." Doctoral thesis, Universitat Politècnica de Catalunya, 2013. http://hdl.handle.net/10803/129573.
Parallel programming consists of dividing a computational problem among multiple processing units and defining how they interact (communication and synchronization) to guarantee a correct result. The performance of a parallel program is usually far from optimal: load imbalance and excessive interaction between processing units often cause lost cycles, reducing the efficiency of the parallel computation. In this thesis we propose techniques aimed at better exploiting the parallelism of parallel applications, with an emphasis on techniques that increase asynchrony. In theory, these techniques promise multiple benefits. First, they should mitigate communication and synchronization delays and therefore increase overall performance. In addition, tuning the parallelization should expose additional parallelism, increasing the scalability of the execution. Finally, increased asynchrony should provide greater tolerance to slow communication networks and external noise. In the first part of the thesis we study the potential for parallelism tuning in MPI. Specifically, we explore automatic techniques for overlapping communication with computation. We propose a speculative messaging technique that increases overlap and requires no changes to the original MPI application. Our technique automatically identifies the application's MPI activity and reinterprets it using optimally placed non-blocking MPI requests. We show that this technique maximizes overlap and, as a consequence, speeds up execution and tolerates bandwidth reductions better. Even so, for realistic scientific workloads we show that the overlap potential is significantly limited by the pattern in which each MPI process operates locally on message passing. In the second part of the thesis we explore the potential for tuning hybrid MPI/OmpSs parallelism. We seek a better understanding of the parallelism of hybrid MPI/OmpSs applications in order to evaluate how they would execute on future machines. We explore how MPI/OmpSs applications could scale on a parallel machine with hundreds of cores per node, and we investigate how this per-node parallelism would be reflected in the constraints of the communication network. In particular, we concentrate on identifying critical code sections in MPI/OmpSs. We have devised a technique that quickly evaluates, for a given MPI/OmpSs application and a selected target machine, which section of code should be optimized to obtain the largest performance gain. We also study techniques for quickly exploring the potential OmpSs parallelism inherent in applications. We provide mechanisms to easily evaluate the potential parallelism of any task decomposition, and we describe an iterative approach for finding a task decomposition that exposes sufficient parallelism on the given target machine. Finally, we explore the potential for automating this iterative approach. In the work presented in this thesis we have designed tools that may be useful to other researchers in this field. The most advanced is Tareador, a tool to help migrate applications to the MPI/OmpSs programming model.
Tareador provides a simple interface for proposing a decomposition of the code into OmpSs tasks. Tareador also dynamically computes the data dependencies among the annotated tasks and automatically estimates the potential of the OmpSs parallelization. Finally, Tareador gives additional hints on how to complete the migration to OmpSs. Tareador has already proven useful and has been included in programming courses at UPC.
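The communication/computation overlap discussed in the first part of that abstract can be pictured with ordinary non-blocking MPI calls, as in the minimal sketch below; the function name and buffers are invented for the example, and this is plain MPI usage rather than the speculative messaging or Tareador tooling the thesis describes.

/* Generic illustration of communication/computation overlap:
 * post non-blocking requests early, do independent work, wait late. */
#include <mpi.h>
#include <stddef.h>

void exchange_and_compute(double *halo_in, double *halo_out, size_t n,
                          int left, int right, double *interior, size_t m)
{
    MPI_Request reqs[2];

    /* Post the halo exchange before computing. */
    MPI_Irecv(halo_in,  (int)n, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(halo_out, (int)n, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[1]);

    /* Work that does not depend on the incoming halo overlaps the transfer. */
    for (size_t i = 0; i < m; ++i)
        interior[i] *= 2.0;

    /* Block only when the received data is actually needed. */
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
}

The point of the pattern is that the transfer progresses while the loop over the interior runs, and the wait is deferred until the received values are actually needed.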
Träff, Jesper. "Aspects of the efficient implementation of the message passing interface (MPI)." Aachen: Shaker, 2009. http://d-nb.info/994501803/04.
Young, Bobby Dalton. "MPI within a GPU." UKnowledge, 2009. http://uknowledge.uky.edu/gradschool_theses/614.
Angadi, Raghavendra. "Best effort MPI/RT as an alternative to MPI: Design and performance comparison." Master's thesis, Mississippi State: Mississippi State University, 2002. http://library.msstate.edu/etd/show.asp?etd=etd-12032002-162333.
Sankarapandian, Dayala Ganesh R. Kamal Raj. "Profiling MPI Primitives in Real-time Using OSU INAM." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1587336162238284.
Hoefler, Torsten. "Communication/Computation Overlap in MPI." Universitätsbibliothek Chemnitz, 2006. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200600021.
Chung, Ryan Ki Sing. "CMCMPI: Compose-Map-Configure MPI." Thesis, University of British Columbia, 2014. http://hdl.handle.net/2429/51185.
Books on the topic "MPI"
Duanghom, Srinuan. An Mpi dictionary. Bangkok: Indigenous Languages of Thailand Research Project, 1989.
Ndụbisi, Oriaku Onyefụlụchukwu. Atụrụ ga-epu mpi--. Enugu: Generation Books, 2006.
Snir, Marc, ed. MPI--the complete reference. 2nd ed. Cambridge, Mass.: MIT Press, 1998.
Moskovskiĭ gosudarstvennyĭ universitet pechati, ed. My iz MPI: Moskovskiĭ poligraficheskiĭ institut. Moskva: MGUP, 2005.
Corbett, Peter, and United States National Aeronautics and Space Administration, eds. MPI-IO: A parallel file I/O interface for MPI: [NAS technical report NAS-95-002 ...]. [Washington, DC]: National Aeronautics and Space Administration, 1995.
Nielsen, Frank. Introduction to HPC with MPI for Data Science. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-21903-5.
Research Institute for Advanced Computer Science (U.S.), ed. A portable MPI-based parallel vector template library. [Moffett Field, Calif.]: Research Institute for Advanced Computer Science, NASA Ames Research Center, 1995.
Book chapters on the topic "MPI"
Ross, Robert, Robert Latham, William Gropp, Ewing Lusk, and Rajeev Thakur. "Processing MPI Datatypes Outside MPI." In Recent Advances in Parallel Virtual Machine and Message Passing Interface, 42–53. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-03770-2_11.
Pérache, Marc, Patrick Carribault, and Hervé Jourdren. "MPC-MPI: An MPI Implementation Reducing the Overall Memory Consumption." In Recent Advances in Parallel Virtual Machine and Message Passing Interface, 94–103. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-03770-2_16.
Knoth, Adrian. "Open MPI." In Grid-Computing, 117–26. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-79747-0_6.
Huang, Chao, Orion Lawlor, and L. V. Kalé. "Adaptive MPI." In Languages and Compilers for Parallel Computing, 306–22. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-24644-2_20.
Padua, David, Amol Ghoting, John A. Gunnels, Mark S. Squillante, José Meseguer, James H. Cownie, Duncan Roweth, et al. "MPI-IO." In Encyclopedia of Parallel Computing, 1191–99. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-09766-4_297.
Rabenseifner, Rolf. "MPI-GLUE: Interoperable high-performance MPI combining different vendor’s MPI worlds." In Euro-Par’98 Parallel Processing, 563–69. Berlin, Heidelberg: Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/bfb0057902.
Gropp, William, Ewing Lusk, and Rajeev Thakur. "Advanced MPI Including New MPI-3 Features." In Recent Advances in the Message Passing Interface, 14. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33518-1_5.
Huse, Lars Paul, and Ole W. Saastad. "The Network Agnostic MPI – Scali MPI Connect." In Recent Advances in Parallel Virtual Machine and Message Passing Interface, 294–301. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-39924-7_42.
Graham, Richard L., Timothy S. Woodall, and Jeffrey M. Squyres. "Open MPI: A Flexible High Performance MPI." In Parallel Processing and Applied Mathematics, 228–39. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11752578_29.
Szustak, Lukasz, Roman Wyrzykowski, Kamil Halbiniak, and Pawel Bratek. "Toward Heterogeneous MPI+MPI Programming: Comparison of OpenMP and MPI Shared Memory Models." In Euro-Par 2019: Parallel Processing Workshops, 270–81. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-48340-1_21.
Conference papers on the topic "MPI"
Yang, Chen, Guifa Sun, Xiang Cai, Xiguo Xie, and Jianpeng Sun. "MPI toolkit: MPI-based performance analysis software for parallel programs." In International Conference on Algorithms, High Performance Computing and Artificial Intelligence, edited by Pavel Loskot and Liang Hu, 117. SPIE, 2024. http://dx.doi.org/10.1117/12.3051762.
Temuçin, Yıltan Hassan, Whit Schonbein, Scott Levy, Amirhossein Sojoodi, Ryan E. Grant, and Ahmad Afsahi. "Design and Implementation of MPI-Native GPU-Initiated MPI Partitioned Communication." In SC24-W: Workshops of the International Conference for High Performance Computing, Networking, Storage and Analysis, 436–47. IEEE, 2024. https://doi.org/10.1109/scw63240.2024.00065.
Zhou, Hui, Robert Latham, Ken Raffenetti, Yanfei Guo, and Rajeev Thakur. "MPI Progress For All." In SC24-W: Workshops of the International Conference for High Performance Computing, Networking, Storage and Analysis, 425–35. IEEE, 2024. https://doi.org/10.1109/scw63240.2024.00063.
Getov, Vladimir, Paul Gray, and Vaidy Sunderam. "MPI and Java-MPI." In the 1999 ACM/IEEE conference. New York, New York, USA: ACM Press, 1999. http://dx.doi.org/10.1145/331532.331553.
Green, Ronald W. "Beyond MPI---Beyond MPI." In the 2006 ACM/IEEE conference. New York, New York, USA: ACM Press, 2006. http://dx.doi.org/10.1145/1188455.1188494.
"MPI." In the 1993 ACM/IEEE conference. New York, New York, USA: ACM Press, 1993. http://dx.doi.org/10.1145/169627.169855.
Squyres, Jeff, and Brian Barrett. "Open MPI---Open MPI community meeting." In the 2006 ACM/IEEE conference. New York, New York, USA: ACM Press, 2006. http://dx.doi.org/10.1145/1188455.1188461.
Graham, Richard, Galen Shipman, Brian Barrett, Ralph Castain, George Bosilca, and Andrew Lumsdaine. "Open MPI: A High-Performance, Heterogeneous MPI." In 2006 IEEE International Conference on Cluster Computing. IEEE, 2006. http://dx.doi.org/10.1109/clustr.2006.311904.
Du, Cong, and Xian-He Sun. "MPI-Mitten: Enabling Migration Technology in MPI." In Sixth IEEE International Symposium on Cluster Computing and the Grid. IEEE, 2006. http://dx.doi.org/10.1109/ccgrid.2006.71.
Booth, S., and E. Mourao. "Single sided MPI implementations for SUN MPI." In ACM/IEEE SC 2000 Conference. IEEE, 2000. http://dx.doi.org/10.1109/sc.2000.10022.
Reports on the topic "MPI"
Han, D., and T. Jones. MPI Profiling. Office of Scientific and Technical Information (OSTI), February 2005. http://dx.doi.org/10.2172/15014654.
Garrett, Charles Kristopher. Distributed Computing (MPI). Office of Scientific and Technical Information (OSTI), June 2016. http://dx.doi.org/10.2172/1258356.
Pritchard, Howard Porter Jr, Samuel Keith Gutierrez, Nathan Hjelm, Daniel Holmes, and Ralph Castain. MPI Sessions: Second Demonstration and Evaluation of MPI Sessions Prototype. Office of Scientific and Technical Information (OSTI), September 2019. http://dx.doi.org/10.2172/1566099.
Pritchard, Howard. MPI Sessions - Working Group activities post MPI 4.0 standard ratification. Office of Scientific and Technical Information (OSTI), December 2022. http://dx.doi.org/10.2172/1906014.
Hassanzadeh, Sara, Sina Neshat, Afshin Heidari, and Masoud Moslehi. Myocardial Perfusion Imaging in the Era of COVID-19. INPLASY - International Platform of Registered Systematic Review and Meta-analysis Protocols, April 2022. http://dx.doi.org/10.37766/inplasy2022.4.0063.
Loewe, W. MPI I/O Testing Results. Office of Scientific and Technical Information (OSTI), September 2007. http://dx.doi.org/10.2172/925675.
George, William L., John G. Hagedorn, and Judith E. Devaney. Parallel programming with interoperable MPI. Gaithersburg, MD: National Institute of Standards and Technology, 2003. http://dx.doi.org/10.6028/nist.ir.7066.
Pritchard, Howard, and Tom Herschberg. MPI Session: External Network Transport Implementation. Office of Scientific and Technical Information (OSTI), September 2020. http://dx.doi.org/10.2172/1669081.
Rao, Lakshman A., and Jon Weissman. MPI-Based Adaptive Parallel Grid Services. Fort Belvoir, VA: Defense Technical Information Center, August 2003. http://dx.doi.org/10.21236/ada439405.
Bronevetsky, G., A. Friedley, T. Hoefler, A. Lumsdaine, and D. Quinlan. Compiling MPI for Many-Core Systems. Office of Scientific and Technical Information (OSTI), June 2013. http://dx.doi.org/10.2172/1088441.