Academic literature on the topic 'Massively Parallel Processing (MPP)'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Massively Parallel Processing (MPP).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Massively Parallel Processing (MPP)"

1

Willson, Ian A. "The Evolution of the Massively Parallel Processing Database in Support of Visual Analytics." Information Resources Management Journal 24, no. 4 (2011): 1–26. http://dx.doi.org/10.4018/irmj.2011100101.

Abstract:
This article explores the evolution of the Massively Parallel Processing (MPP) database, focusing on trends of particular relevance to analytics. The dramatic shift of database vendors and leading companies to utilize MPP databases and deploy an Enterprise Data Warehouse (EDW) is presented. The inherent benefits of fresher data, storage efficiency, and most importantly accessibility to analytics are explored. Published industry and vendor metrics are examined that demonstrate substantial and growing cost efficiencies from utilizing MPP databases. The author concludes by reviewing trends toward parallelizing decision support workload into the database, ranging from within database transformations to new statistical and spatial analytic capabilities provided by parallelizing these algorithms to execute directly within the MPP database. These new capabilities present an opportunity for timely and powerful enterprise analytics, providing a substantial competitive advantage to those companies able to leverage this technology to turn data into actionable information, gain valuable new insights, and automate operational decision making.
2

Ji, Yunhong, Yunpeng Chai, Xuan Zhou, Lipeng Ren, and Yajie Qin. "Smart Intra-query Fault Tolerance for Massive Parallel Processing Databases." Data Science and Engineering 5, no. 1 (2019): 65–79. http://dx.doi.org/10.1007/s41019-019-00114-z.

Abstract:
Intra-query fault tolerance has increasingly been a concern for online analytical processing, as more and more enterprises migrate data analytical systems from mainframes to commodity computers. Most massive parallel processing (MPP) databases do not support intra-query fault tolerance. They may suffer from prolonged query latency when running on unreliable commodity clusters. While SQL-on-Hadoop systems can utilize the fault tolerance support of low-level frameworks, such as MapReduce and Spark, their cost-effectiveness is not always acceptable. In this paper, we propose a smart intra-query fault tolerance (SIFT) mechanism for MPP databases. SIFT achieves fault tolerance by performing checkpointing, i.e., materializing intermediate results of selected operators. Different from existing approaches, SIFT aims at improving the query success rate within a given time. To achieve its goal, it needs to: (1) minimize query rerunning time after encountering failures and (2) introduce as little checkpointing overhead as possible. To evaluate SIFT in real-world MPP database systems, we implemented it in Greenplum. The experimental results indicate that it can improve the success rate of query processing effectively, especially when working with unreliable hardware.
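To make the trade-off concrete, the toy sketch below (not the SIFT algorithm itself; operator run times, checkpoint costs, and the failure probability are invented) scores every possible set of materialized operators by normal run time plus checkpoint overhead plus expected rerun work, which is the balance the abstract describes.

```python
# Toy illustration of the checkpointing trade-off (hypothetical numbers):
# choose which operators' intermediate results to materialize so that
# checkpoint overhead plus expected rerun work after a failure is minimized.
from itertools import combinations

run_time = [4.0, 10.0, 6.0, 8.0]   # per-operator execution times (invented)
ckpt_cost = [1.0, 3.0, 1.5, 2.0]   # cost of materializing each operator's output (invented)
p_fail = 0.1                       # assumed chance of one mid-query failure

def expected_time(checkpoints):
    """Expected completion time for a given set of checkpointed operators."""
    normal = sum(run_time) + sum(ckpt_cost[i] for i in checkpoints)
    rerun, last_ckpt = 0.0, -1
    for i in range(len(run_time)):
        # If the failure hits operator i, work restarts after the last finished checkpoint.
        rerun += sum(run_time[last_ckpt + 1:i + 1]) / len(run_time)
        if i in checkpoints:
            last_ckpt = i
    return normal + p_fail * rerun

best = min((frozenset(c)
            for r in range(len(run_time) + 1)
            for c in combinations(range(len(run_time)), r)),
           key=expected_time)
print(sorted(best), round(expected_time(best), 2))
```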
3

RACCA, R. G., Z. MENG, J. M. OZARD, and M. J. WILMUT. "EVALUATION OF MASSIVELY PARALLEL COMPUTING FOR EXHAUSTIVE AND CLUSTERED MATCHED-FIELD PROCESSING." Journal of Computational Acoustics 04, no. 02 (1996): 159–73. http://dx.doi.org/10.1142/s0218396x96000039.

Abstract:
Many computer algorithms contain an operation that accounts for a substantial portion of the total execution cost in a frequently executed loop. The use of a parallel computer to execute that operation may represent an alternative to a sheer increase in processor speed. The signal processing technique known as matched-field processing (MFP) involves performing identical and independent operations on a potentially huge set of vectors. To investigate a massively parallel approach to MFP and clustered nearest neighbors MFP, algorithms were implemented on a DECmpp 12000 massively parallel computer (from Digital Equipment and MasPar Corporation) with 8192 processors. The execution time for the MFP technique on the MasPar machine was compared with that of MFP on a serial VAX9000–210 equipped with a vector processor. The results showed that the MasPar achieved a speedup factor of at least 17 relative to the VAX9000. The speedup was 3.5 times higher than the ratio of the peak ratings of 600 MFLOPS for the MasPar versus 125 MFLOPS for the VAX9000 with vector processor. The execution speed on the parallel machine represented 64% of its peak rating. This is much better than what is commonly assumed for a parallel machine and was obtained with modest programming effort. An initial implementation of a massively parallel approach to clustered MFP on the MasPar showed a further order of magnitude increase in speed, for an overall speedup factor of 35.
4

Manea, A. M., and T. Almani. "Scalable Graphics Processing Unit–Based Multiscale Linear Solvers for Reservoir Simulation." SPE Journal 27, no. 01 (2021): 643–62. http://dx.doi.org/10.2118/203939-pa.

Abstract:
In this work, the scalability of two key multiscale solvers for the pressure equation arising from incompressible flow in heterogeneous porous media, namely the multiscale finite volume (MSFV) solver and the restriction-smoothed basis multiscale (MsRSB) solver, is investigated on the graphics processing unit (GPU) massively parallel architecture. The robustness and scalability of both solvers are compared against their corresponding carefully optimized implementation on the shared-memory multicore architecture in a structured problem setting. Although several components in MSFV and MsRSB algorithms are directly parallelizable, their scalability on the GPU architecture depends heavily on the underlying algorithmic details and data-structure design of every step, where one needs to ensure favorable control and data flow on the GPU, while extracting enough parallel work for a massively parallel environment. In addition, the type of algorithm chosen for each step greatly influences the overall robustness of the solver. Thus, we extend the work on the parallel multiscale methods of Manea et al. (2016) to map the MSFV and MsRSB special kernels to the massively parallel GPU architecture. The scalability of our optimized parallel MSFV and MsRSB GPU implementations is demonstrated using highly heterogeneous structured 3D problems derived from the SPE10 Benchmark (Christie and Blunt 2001). Those problems range in size from millions to tens of millions of cells. For both solvers, the multicore implementations are benchmarked on a shared-memory multicore architecture consisting of two packages of Intel® Cascade Lake Xeon Gold 6246 central processing unit (CPU), whereas the GPU implementations are benchmarked on a massively parallel architecture consisting of NVIDIA Volta V100 GPUs. We compare the multicore implementations to the GPU implementations for both the setup and solution stages. Finally, we compare the parallel MsRSB scalability to the scalability of MSFV on the multicore (Manea et al. 2016) and GPU architectures. To the best of our knowledge, this is the first parallel implementation and demonstration of these versatile multiscale solvers on the GPU architecture. NOTE: This paper is also published as part of the 2021 SPE Reservoir Simulation Conference Special Issue.
5

Gall, R., F. Tabaddor, D. Robbins, P. Majors, W. Sheperd, and S. Johnson. "Some Notes on the Finite Element Analysis of Tires." Tire Science and Technology 23, no. 3 (1995): 175–88. http://dx.doi.org/10.2346/1.2137503.

Abstract:
Over the past ten years, Finite Element Analysis (FEA) has been increasingly integrated into the tire design process. FEA has been used to study general tire behavior, to perform parameter studies, and to do comparative analyses. To decrease the tire development cycle, FEA is now being used as a replacement for certain tire tests. This requires the accuracy of the FEA results to be within those test limits. This paper investigates some of the known modeling techniques and their impact on accuracy. Some of the issues are the use of shell elements, assumptions for boundary conditions, and global/local analysis approaches. Finally, the use of the new generation of supercomputers, massively parallel processing (MPP) systems, is discussed.
6

Chen, Hongzhi, Changji Li, Chenguang Zheng, et al. "G-tran." Proceedings of the VLDB Endowment 15, no. 11 (2022): 2545–58. http://dx.doi.org/10.14778/3551793.3551813.

Abstract:
Graph transaction processing poses unique challenges such as random data access due to the irregularity of graph structures, low throughput and high abort rate due to the relatively large read/write sets in graph transactions. To address these challenges, we present G-Tran, a remote direct memory access (RDMA)-enabled distributed in-memory graph database with serializable and snapshot isolation support. First, we propose a graph-native data store to achieve good data locality and fast data access for transactional updates and queries. Second, G-Tran adopts a fully decentralized architecture that leverages RDMA to process distributed transactions with the massively parallel processing (MPP) model, which can achieve high performance by utilizing all computing resources. In addition, we propose a new multi-version optimistic concurrency control (MV-OCC) protocol with two optimizations to address the issue of large read/write sets in graph transactions. Extensive experiments show that G-Tran achieves competitive performance compared with other popular graph databases on benchmark workloads.
7

Zeng, Feng Sheng. "Research and Improvement of Database Storage Method." Applied Mechanics and Materials 608-609 (October 2014): 641–45. http://dx.doi.org/10.4028/www.scientific.net/amm.608-609.641.

Abstract:
This paper presents a massive data storage and parallel processing method based on the MPP architecture. It puts forward a fully persistent data storage approach driven by client requests and integrates the idea of Map/Reduce, so that data are distributed to each data node, giving the system high scalability, high availability, and high concurrency. A simulation test verifies the feasibility of the mass data storage approach by building distributed data nodes.
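As a minimal sketch of the idea summarized above (node count and records are invented, and a real MPP system distributes work across machines rather than within one process), the snippet hash-partitions records to data nodes, runs a map step on each node's local data, and reduces the partial results:

```python
# Minimal Map/Reduce-over-partitions sketch: records are hash-distributed to
# data nodes, each node counts its local records (map), and the partial
# counts are merged into a global result (reduce).
from collections import Counter, defaultdict

records = ["error", "ok", "ok", "error", "timeout", "ok"]   # invented sample data
NUM_NODES = 3

nodes = defaultdict(list)
for rec in records:
    nodes[hash(rec) % NUM_NODES].append(rec)   # distribute each record to a data node

partials = [Counter(local) for local in nodes.values()]   # map: per-node partial counts
total = sum(partials, Counter())                          # reduce: merge the partials
print(dict(total))
```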
8

Researcher. "DATA WAREHOUSING WITH AMAZON REDSHIFT: REVOLUTIONIZING BIG DATA ANALYTICS." International Journal of Computer Engineering and Technology (IJCET) 15, no. 4 (2024): 395–405. https://doi.org/10.5281/zenodo.13270530.

Abstract:
The article discusses Amazon Redshift, a cloud-based data warehouse that is changing the way big data analytics is done. It examines Redshift's architecture, main features, and benefits in detail, emphasizing columnar storage, massively parallel processing, and a distributed system design. The article describes real-world uses in business intelligence, data science, operational analytics, customer analytics, and financial analytics. It also compares Redshift with other cloud data warehouses, such as Snowflake and Google BigQuery, pointing out their pros and cons. Finally, it details how Amazon Redshift helps businesses harness the power of their data at scale, driving innovation and a competitive edge in today's data-driven business world.
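For readers who want to see the MPP and columnar ideas in practice, here is a hedged sketch (cluster endpoint, credentials, and table are hypothetical; Redshift speaks the PostgreSQL wire protocol, so psycopg2 can connect): the distribution key spreads rows across compute-node slices so scans and joins run in parallel, and the sort key lets range-restricted scans skip column blocks.

```python
import psycopg2  # Redshift is reachable over the PostgreSQL wire protocol

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",  # hypothetical endpoint
    port=5439, dbname="analytics", user="analyst", password="...")
cur = conn.cursor()
cur.execute("""
    CREATE TABLE IF NOT EXISTS page_views (
        user_id   BIGINT,
        url       VARCHAR(1024),
        viewed_at TIMESTAMP
    )
    DISTKEY (user_id)      -- rows hashed across slices by user_id for parallel scans/joins
    SORTKEY (viewed_at);   -- time-range scans skip non-matching column blocks
""")
cur.execute("SELECT user_id, COUNT(*) FROM page_views GROUP BY user_id ORDER BY 2 DESC LIMIT 10;")
print(cur.fetchall())
conn.commit()
conn.close()
```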
9

Mitchell, Rory, Eibe Frank, and Geoffrey Holmes. "GPUTreeShap: massively parallel exact calculation of SHAP scores for tree ensembles." PeerJ Computer Science 8 (April 5, 2022): e880. http://dx.doi.org/10.7717/peerj-cs.880.

Abstract:
SHapley Additive exPlanation (SHAP) values (Lundberg & Lee, 2017) provide a game theoretic interpretation of the predictions of machine learning models based on Shapley values (Shapley, 1953). While exact calculation of SHAP values is computationally intractable in general, a recursive polynomial-time algorithm called TreeShap (Lundberg et al., 2020) is available for decision tree models. However, despite its polynomial time complexity, TreeShap can become a significant bottleneck in practical machine learning pipelines when applied to large decision tree ensembles. Unfortunately, the complicated TreeShap algorithm is difficult to map to hardware accelerators such as GPUs. In this work, we present GPUTreeShap, a reformulated TreeShap algorithm suitable for massively parallel computation on graphics processing units. Our approach first preprocesses each decision tree to isolate variable sized sub-problems from the original recursive algorithm, then solves a bin packing problem, and finally maps sub-problems to single-instruction, multiple-thread (SIMT) tasks for parallel execution with specialised hardware instructions. With a single NVIDIA Tesla V100-32 GPU, we achieve speedups of up to 19× for SHAP values, and speedups of up to 340× for SHAP interaction values, over a state-of-the-art multi-core CPU implementation executed on two 20-core Xeon E5-2698 v4 2.2 GHz CPUs. We also experiment with multi-GPU computing using eight V100 GPUs, demonstrating throughput of 1.2 M rows per second—equivalent CPU-based performance is estimated to require 6850 CPU cores.
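In practice, GPUTreeShap is exposed through libraries such as XGBoost; the sketch below assumes a CUDA-enabled XGBoost 2.x build, in which GPU-backed SHAP and SHAP-interaction prediction use the GPUTreeShap code path (the data here is random and purely illustrative).

```python
import numpy as np
import xgboost as xgb

X = np.random.rand(5000, 20)          # illustrative random features
y = np.random.rand(5000)
dtrain = xgb.DMatrix(X, label=y)

# Train a small ensemble on the GPU ("device": "cuda" assumes XGBoost >= 2.0 with CUDA support).
booster = xgb.train({"tree_method": "hist", "device": "cuda", "max_depth": 6},
                    dtrain, num_boost_round=200)

# With a CUDA device selected, SHAP and SHAP-interaction prediction run on the GPU.
shap_values = booster.predict(dtrain, pred_contribs=True)           # (rows, features + 1); last column is the bias
shap_interactions = booster.predict(dtrain, pred_interactions=True)
print(shap_values.shape, shap_interactions.shape)
```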
10

Ömərova, Maya, and Taleh Əsgərov. "BÖYÜK HƏCMLI VERİLƏNLƏRİN EMALI ÜÇÜN TƏTBIQLƏR" [Applications for processing large volumes of data]. PAHTEI-Proceedings of Azerbaijan High Technical Educational Institutions 36, no. 01 (2024): 204–10. http://dx.doi.org/10.36962/pahtei36012024-204.

Abstract:
In the modern era, the volume and variety of data are growing rapidly. Big data and its processing is one of the most important problems in today's information technology field, and solving it helps to solve other problems, because in today's information age many companies, enterprises, and even governments work with large volumes of data. These data cover information of many different kinds, and their varied forms and complexity make processing algorithms difficult to develop. Big data is obtained from diverse sources. Every day, modern systems and digital technologies such as the Internet of Things (IoT) generate data stores measured in terabytes; sometimes as much as 2.5 exabytes of data are produced in a single day. In such conditions it is difficult to carry out data analytics with existing techniques, and the sheer volume of data also raises the problem of measuring and scaling it. In the field of big data analytics, the characteristics known as the 5Vs have been defined in order to understand and manage the critical dimensions effectively. To process big data, one must first study the Hadoop ecosystem, the Kafka application within it, and the MapReduce technology that serves as Hadoop's programming model. The paper then examines applications that work with distributed file systems, such as Apache Spark, MongoDB, Elasticsearch, Hive, HCatalog, HBase, MPP (Massively Parallel Processing), Pig, Mahout, NoSQL, and Cassandra. Hadoop, recognized as one of the most popular big data technologies, is investigated together with its main components and the primary and auxiliary nodes of the HDFS (distributed file system) service. Keywords: big data, 5V, analytics, Hadoop, MapReduce, Apache Spark

Dissertations / Theses on the topic "Massively Parallel Processing (MPP)"

1

Kumm, Holger Thomas. "Methodologies for the synthesis of cost-effective modular-MPC configurations for image processing applications." Thesis, Brunel University, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.296194.

2

Ervin, Brian. "Neural Spike Detection and Classification Using Massively Parallel Graphics Processing." University of Cincinnati / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1377868773.

3

Nordström, Tomas. "Designing and using massively parallel computers for artificial neural networks." Licentiate thesis, Luleå tekniska universitet, Signaler och system, 1991. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-17900.

4

Hymel, Shawn. "Massively Parallel Hidden Markov Models for Wireless Applications." Thesis, Virginia Tech, 2011. http://hdl.handle.net/10919/36017.

Abstract:
Cognitive radio is a growing field in communications which allows a radio to automatically configure its transmission or reception properties in order to reduce interference, provide better quality of service, or allow for more users in a given spectrum. Such processes require several complex features that are currently being utilized in cognitive radio. Two such features, spectrum sensing and identification, have been implemented in numerous ways; however, they generally suffer from high computational complexity. Additionally, Hidden Markov Models (HMMs) are a mathematical modeling tool widely used in various fields of engineering and science. In electrical and computer engineering, they are used in several areas, including speech recognition, handwriting recognition, artificial intelligence, queuing theory, and the modeling of fading in communication channels. The research presented in this thesis proposes a new approach to spectrum identification using a parallel implementation of Hidden Markov Models. Algorithms involving HMMs are usually implemented in the traditional serial manner, which has prohibitively long runtimes. In this work, we study their use in parallel implementations and compare our approach to traditional serial implementations. Timing and power measurements are taken and used to show that the parallel implementation can achieve well over 100× speedup in certain situations. To demonstrate the utility of this new parallel algorithm using graphics processing units (GPUs), a new method for signal identification is proposed for both serial and parallel implementations using HMMs. The method achieved high recognition at -10 dB Eb/N0. HMMs can benefit from parallel implementation in certain circumstances, specifically, in models that have many states or when multiple models are used in conjunction.
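The data parallelism the thesis exploits can be seen in miniature below: a NumPy sketch (model parameters and observations are invented) that runs the HMM forward recursion for a whole batch of observation sequences at once, the same structure a GPU implementation spreads across threads.

```python
# Batched HMM forward algorithm: score many observation sequences in parallel.
import numpy as np

A = np.array([[0.9, 0.1], [0.2, 0.8]])   # state transition matrix (invented)
B = np.array([[0.7, 0.3], [0.1, 0.9]])   # emission probabilities per state (invented)
pi = np.array([0.5, 0.5])                 # initial state distribution

obs = np.random.randint(0, 2, size=(1000, 50))   # 1000 sequences of 50 symbols each

alpha = pi * B[:, obs[:, 0]].T                   # shape (sequences, states)
for t in range(1, obs.shape[1]):
    alpha = (alpha @ A) * B[:, obs[:, t]].T      # one recursion step for every sequence at once
log_likelihood = np.log(alpha.sum(axis=1))       # one score per sequence
print(log_likelihood.shape)                      # (1000,)
# Note: production code would rescale alpha each step to avoid underflow on long sequences.
```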
5

Savaş, Süleyman. "Linear Algebra for Array Signal Processing on a Massively Parallel Dataflow Architecture." Thesis, Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-2192.

Abstract:
This thesis provides the deliberations about the implementation of Gentleman-Kung systolic array for QR decomposition using Givens Rotations within the context of radar signal processing. The systolic array of Givens Rotations is implemented and analysed using a massively parallel processor array (MPPA), Ambric Am2045. The tools that are dedicated to the MPPA are tested in terms of engineering efficiency. aDesigner, which is built on eclipse environment, is used for programming, simulating and performance analysing. aDesigner has been produced for Ambric chip family. 2 parallel matrix multiplications have been implemented to get familiar with the architecture and tools. Moreover different sized systolic arrays are implemented and compared with each other. For programming, ajava and astruct languages are provided. However floating point numbers are not supported by the provided languages. Thus fixed point arithmetic is used in systolic array implementation of Givens Rotations. Stable and precise numerical results are obtained as outputs of the algorithms. However the analysis results are not reliable because of the performance analysis tools.
6

Savaş, Süleyman. "Linear Algebra for Array Signal Processing on a Massively Parallel Dataflow Architecture." Thesis, Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-4137.

Abstract:
This thesis provides the deliberations about the implementation of Gentleman-Kung systolic array for QR decomposition using Givens Rotations within the context of radar signal processing. The systolic array of Givens Rotations is implemented and analysed using a massively parallel processor array (MPPA), Ambric Am2045. The tools that are dedicated to the MPPA are tested in terms of engineering efficiency. aDesigner, which is built on eclipse environment, is used for programming, simulating and performance analysing. aDesigner has been produced for Ambric chip family. 2 parallel matrix multiplications have been implemented to get familiar with the architecture and tools. Moreover different sized systolic arrays are implemented and compared with each other. For programming, ajava and astruct languages are provided. However floating point numbers are not supported by the provided languages. Thus fixed point arithmetic is used in systolic array implementation of Givens Rotations. Stable and precise numerical results are obtained as outputs of the algorithms. However the analysis results are not reliable because of the performance analysis tools.
7

Ediger, David. "Analyzing hybrid architectures for massively parallel graph analysis." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/47659.

Abstract:
The quantity of rich, semi-structured data generated by sensor networks, scientific simulation, business activity, and the Internet grows daily. The objective of this research is to investigate architectural requirements for emerging applications in massive graph analysis. Using emerging hybrid systems, we will map applications to architectures and close the loop between software and hardware design in this application space. Parallel algorithms and specialized machine architectures are necessary to handle the immense size and rate of change of today's graph data. To highlight the impact of this work, we describe a number of relevant application areas ranging from biology to business and cybersecurity. With several proposed architectures for massively parallel graph analysis, we investigate the interplay of hardware, algorithm, data, and programming model through real-world experiments and simulations. We demonstrate techniques for obtaining parallel scaling on multithreaded systems using graph algorithms that are orders of magnitude faster and larger than the state of the art. The outcome of this work is a proposed hybrid architecture for massive-scale analytics that leverages key aspects of data-parallel and highly multithreaded systems. In simulations, the hybrid systems incorporating a mix of multithreaded, shared memory systems and solid state disks performed up to twice as fast as either homogeneous system alone on graphs with as many as 18 trillion edges.
8

Walsh, Declan. "Design and implementation of massively parallel fine-grained processor arrays." Thesis, University of Manchester, 2015. https://www.research.manchester.ac.uk/portal/en/theses/design-and-implementation-of-massively-parallel-finegrained-processor-arrays(e0e03bd5-4feb-4d66-8d4b-0e057684e498).html.

Abstract:
This thesis investigates the use of massively parallel fine-grained processor arrays to increase computational performance. As processors move towards multi-core processing, more energy-efficient processors can be designed by increasing the number of processor cores on a single chip rather than increasing the clock frequency of a single processor. This can be done by making processor cores less complex, but increasing the number of processor cores on a chip. Using this philosophy, a processor core can be reduced in complexity, area, and speed to form a very small processor which can still perform basic arithmetic operations. Due to the small area occupation this can be multiplied and scaled to form a large scale parallel processor array to offer a significant performance. Following this design methodology, two fine-grained parallel processor arrays are designed which aim to achieve a small area occupation with each individual processor so that a larger array can be implemented over a given area. To demonstrate scalability and performance, SIMD parallel processor array is designed for implementation on an FPGA where each processor can be implemented using four ‘slices’ of a Xilinx FPGA. With such small area utilization, a large fine-grained processor can be implemented on these FPGAs. A 32 × 32 processor array is implemented and fast processing demonstrated using image processing tasks. An event-driven MIMD parallel processor array is also designed which occupies a small amount of area and can be scaled up to form much larger arrays. The event-driven approach allows the processor to enter an idle mode when no events are occurring local to the processor, reducing power consumption. The processor can switch to operational mode when events are detected. The processor core is designed with a multi-bit data path and ALU and contains its own instruction memory making the array a multi-core processor array. With area occupation of primary concern, the processor is relatively simple and connects with its four nearest direct neighbours. A small 8 × 8 prototype chip is implemented in a 65 nm CMOS technology process which can operate at a clock frequency of 80 MHz and offer a peak performance of 5.12 GOPS which can be scaled up to larger arrays. An application of the event-driven processor array is demonstrated using a simulation model of the processor. An event-driven algorithm is demonstrated to perform distributed control of distributed manipulator simulator by separating objects based on their physical properties.
9

Joseph, Rosh John. "Investigating the user-acceptability of a massively parallel computing solution for image processing workstations." Thesis, Brunel University, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.242981.

10

Chaudhari, Gunavant Dinkar. "Simulation and emulation of massively parallel processor for solving constraint satisfaction problems based on oracles." PDXScholar, 2011. https://pdxscholar.library.pdx.edu/open_access_etds/11.

Abstract:
Most of my thesis is devoted to efficient automated logic synthesis of oracle processors. These oracle processors are of interest to several modern technologies, including Scheduling and Allocation, Image Processing and Robot Vision, Computer Aided Design, Games and Puzzles, and Cellular Automata, but so far the most important practical application is to build logic circuits to solve various practical Constraint Satisfaction Problems in Intelligent Robotics. For instance, robot path planning can be reduced to Satisfiability. In short, an oracle is a circuit that takes a proposed solution on its inputs and answers yes/no to this proposition. In other words, it is a predicate or a concept-checking machine. Oracles have many applications in AI and theoretical computer science, but so far they have not been used much in hardware architectures. Systematic logic synthesis methodologies for oracle circuits have so far not been the subject of special research. It is not known how big an advantage these processors will bring when compared to parallel processing with CUDA/GPU processors or standard PC processing. My interest in this thesis is only in architectural and logic synthesis aspects and not in physical (technological) design aspects of these circuits. In the future, these circuits will be realized using reversible, nano, and other new technologies, but the interest of this thesis is not in the future realization technologies. We simply want to answer the following question: Is there any speed advantage to the new oracle-based architectures when compared with standard serial or parallel processors?

Books on the topic "Massively Parallel Processing (MPP)"

1

Schreiber, Robert, and Research Institute for Advanced Computer Science (U.S.), eds. Efficient, massively parallel Eigenvalue computation. Research Institute for Advanced Computer Science, NASA Ames Research Center, 1993.

2

Kitano, Hiroaki. Massively parallel artificial intelligence. AAAI Press, 1994.

3

Muraoka, Yōichi, and Hidehiko Tanaka, eds. The massively parallel processing system JUMP-1. Ohmsha, 1996.

4

Hoffmann, Geerd-R., and Dimitris K. Maretis, eds. The Dawn of Massively Parallel Processing in Meteorology. Springer Berlin Heidelberg, 1990. http://dx.doi.org/10.1007/978-3-642-84020-3.

5

Katz, Randy H., and United States National Aeronautics and Space Administration, eds. RAMA: A filesystem for massively parallel computers. National Aeronautics and Space Administration, 1993.

6

Fischer, James R. Frontiers of massively parallel scientific computation: Proceedings of the first symposium sponsored by the National Aeronautics and Space Administration, Washington, D.C., and the Goodyear Aerospace Corporation, Akron, Ohio, and held at NASA Goddard Space Flight Center, Greenbelt, Maryland, September 24-25, 1986. Goddard Space Flight Center, 1987.

7

Kowalik, Janusz S. Using OpenCL: Programming massively parallel computers. IOS Press, 2012.

8

Katz, Randy H., and United States National Aeronautics and Space Administration, eds. RAMA: A file system for massively parallel computers. National Aeronautics and Space Administration, 1993.

9

Rohrbach, F., and European Organization for Nuclear Research, eds. The MPPC Project final report: Massively Parallel Processing Collaboration. CERN, European Organization for Nuclear Research, 1993.

10

Fischer, James R. Report from the MPP Working Group to the NASA Associate Administrator for Space Sciences and Applications. Goddard Space Flight Center, 1987.


Book chapters on the topic "Massively Parallel Processing (MPP)"

1

Lloyd, Ashley D., and Tony Purcell. "Massively parallel processing (MPP) systems — Commercial reality or scientific curiosity?" In High-Performance Computing and Networking. Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/3-540-61142-8_658.

2

Padua, David, Amol Ghoting, John A. Gunnels, et al. "Massively Parallel Processor (MPP)." In Encyclopedia of Parallel Computing. Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-09766-4_2275.

3

MacDonald, Tom, and Zdenek Sekera. "The Cray Research MPP Fortran Programming Model." In Programming Environments for Massively Parallel Distributed Systems. Birkhäuser Basel, 1994. http://dx.doi.org/10.1007/978-3-0348-8534-8_1.

4

Spalt, Alfred, Edith Spiegl, and Thomas Meikl. "Massively parallel volume rendering." In Parallel Processing: CONPAR 94 — VAPP VI. Springer Berlin Heidelberg, 1994. http://dx.doi.org/10.1007/3-540-58430-7_35.

5

Williams, Winifred, Timothy Hoel, and Douglas Pase. "The MPP Apprentice™ Performance Tool: Delivering the Performance of the Cray T3D®." In Programming Environments for Massively Parallel Distributed Systems. Birkhäuser Basel, 1994. http://dx.doi.org/10.1007/978-3-0348-8534-8_33.

6

Chen, Ling Tony, Larry S. Davis, and Clyde P. Kruskal. "Massively Parallel Processing of Image Contours." In Visual Form. Springer US, 1992. http://dx.doi.org/10.1007/978-1-4899-0715-8_15.

7

Ogata, Kazuhiro, Hiromichi Hirata, Shigenori Ioroi, and Kokichi Futatsugi. "Experimental implementation of parallel TRAM on massively parallel computer." In Euro-Par’98 Parallel Processing. Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/bfb0057939.

8

Becher, Jonathan D., and Kent L. Beck. "Profiling on a massively parallel computer." In Parallel Processing: CONPAR 92—VAPP V. Springer Berlin Heidelberg, 1992. http://dx.doi.org/10.1007/3-540-55895-0_402.

9

Hoffmann, Rolf. "The Massively Parallel Computing Model GCA." In Euro-Par 2010 Parallel Processing Workshops. Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-21878-1_10.

10

Grothey, Andreas. "Massively Parallel Asset and Liability Management." In Euro-Par 2010 Parallel Processing Workshops. Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-21878-1_52.


Conference papers on the topic "Massively Parallel Processing (MPP)"

1

Schenfeld, Eugen. "Massively Parallel Processing with Optical Interconnections: What Can Be, Should Be and Must Not Be Done By Optics." In Optical Computing. Optica Publishing Group, 1995. http://dx.doi.org/10.1364/optcomp.1995.omb1.

Abstract:
What is wrong about Optical Computing is the implied search for “general purpose computing”. We think that such an attempt has little chance to result in a practical system for, at least, the next ten years. The main reason is the economical justification. What such an “optical computing” system may offer has to be compared with the value of the application and the alternatives (electronics). On the other hand, communication in general is an area where optics has proved to be a real blessing. Long distance communication is most economically done today using optical fibers. We think that another realistic search for good optical applications should now be done for shorter distances. A possible good direction may be the communication needs of Massively Parallel Processing (MPP) systems. In such a system, large number (10’s of thousands) of Processing Elements (PEs) are to be interconnected. A PE can be seen as made of a high-end single chip CPU available today, with memory and communication circuits. We do not view the other possible meaning of MPP, namely processing and interconnections at the single gate or device level, as practical to consider. This paper describes the views of the author from the computer architecture’s standpoint, with the hope to serve as a pointer to the “Optical Computing” community. Although much has been done in the area of optical communication technology for the past 10-20 years, and many optical network experimental systems have been proposed, it seems that optics has not yet found its expected place as the interconnection technology of choice for MPP systems. In this paper we try to suggest some possible reasons preventing the common use of optical interconnections in MPP systems, in a hope to focus attention on what really needs to be done to advance the field. We would suggest focusing on searching for a processing-less solution rather than trying to mimic the existing thinking of electronic networks. We outline several key principles essential to follow to reach realistic and economical solutions of optical interconnections for MPP systems. An example of using such principles for an MPP, free-space network is presented in [1].
2

Fasanella, Kenneth, Tae Jin Kim, David T. Neilson, and Eugen Schenfeld. "Modular Opto-Mechanical Design of Free-Space Optical Interconnect System for Massively Parallel Processing." In Optics in Computing. Optica Publishing Group, 1997. http://dx.doi.org/10.1364/oc.1997.otua.3.

Abstract:
Free-space optical interconnections can provide parallel access for many communication patterns with a large bandwidth[1, 2]. The challenge in the realization of such systems lies in finding a good solution for the alignment and packaging of the electro-optical components. We present a new modular building block that will address these challenges in a free-space optical network for massively parallel processing (MPP). Our component and system ideas offer a simple method to fabricate and build the optical network, ensuring reliable and convenient alignment and packaging. We use micro-lens arrays which are pre-aligned and packaged as modular building blocks to relay optical beams from one module to the next. We can apply our modular building block ideas to our previous system of a 64-channel free-space optical interconnection[1] and its wavelength division multiplexing (WDM) extensions[3] to realize a more reliable and easier-to-build system. In the current network architecture, as well as in the previous ones, we use a nonblocking and reconfigurable interconnection cached network (ICN)[4].
3

Jaquay, Kenneth R., and Michael J. Anderson. "Yucca Mountain Project Structural Fragility Estimates for Impact Loading of Waste Packages." In ASME 2008 International Mechanical Engineering Congress and Exposition. ASMEDC, 2008. http://dx.doi.org/10.1115/imece2008-66538.

Abstract:
A methodology is presented for estimating the ultimate structural capability (fragility) of metallic nuclear waste disposal containers (waste packages) subject to impact events. The LS-DYNA finite element analysis (FEA) computer code and massively parallel processing (MPP) is used for nonlinear, dynamic-plastic, large-distortion impact simulations. The fragility estimate for risk assessments uses strain energy concepts, a ductile-rupture damage criterion and tri-linear stress-strain curves adjusted for material cold-forming triaxiality and weldment toughness scatter. FEA examples are provided for waste package impacts on ground support structures.
4

Jaquay, Kenneth R., and Michael J. Anderson. "Yucca Mountain Project Structural Acceptance Criterion for Impact Loading of Waste Packages." In ASME 2008 International Mechanical Engineering Congress and Exposition. ASMEDC, 2008. http://dx.doi.org/10.1115/imece2008-66537.

Abstract:
A methodology is presented for evaluating the structural integrity of metallic nuclear waste disposal containers (waste packages) subject to impact events. The LS-DYNA finite element analysis (FEA) computer code and massively parallel processing (MPP) is used for nonlinear, dynamic-plastic, large-distortion impact simulations. The acceptance criterion is based on minimum-strength, bilinear, stress-strain curves and the ASME Boiler and Pressure Vessel Code primary stress intensity limits. The evaluation uses component stress classifications based on force-moment response trends from a series of reduced-modulus elastic analyses. FEA examples are provided for a waste package that is supported on an emplacement pallet (pallet) and dropped from the transfer vehicle.
5

Gilaki, Mehdi, and Ilya Avdeev. "Comparing High-Performance Computing Techniques for Modeling Structural Impact on Battery Cells." In ASME 2014 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/imece2014-39271.

Abstract:
In this study, we have investigated the feasibility of using the commercial explicit finite element code LS-DYNA on a massively parallel supercomputing cluster for accurate modeling of structural impact on battery cells. Physical and numerical lateral impact tests have been conducted on cylindrical cells using a flat rigid drop cart in a custom-built drop test apparatus. The main component of a cylindrical cell, the jellyroll, is a layered spiral structure which consists of thin layers of electrodes and separator. Two numerical approaches were considered: (1) a homogenized model of the cell and (2) a heterogeneous (full) 3-D cell model. In the first approach, the jellyroll was considered as a homogeneous material with an effective stress-strain curve obtained through experiments. In the second model, individual layers of anode, cathode, and separator were accounted for, leading to an extremely complex and computationally expensive finite element model. To overcome the limitations of desktop computers, high-performance computing (HPC) techniques on an HPC cluster were needed in order to get the results of transient simulations in a reasonable solution time. The two HPC methods compared for this model are shared-memory parallel processing (SMP) and massively parallel processing (MPP). Both the homogeneous and the heterogeneous models were considered for parallel simulations utilizing different numbers of computational nodes and cores, and the performance of these models was compared. This work brings us one step closer to accurate modeling of structural impact on the entire battery pack that consists of thousands of cells.
6

Luo, Siqiang, and Zulun Zhu. "Massively Parallel Single-Source SimRanks in O(log N) Rounds." In Thirty-Third International Joint Conference on Artificial Intelligence {IJCAI-24}. International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/249.

Abstract:
SimRank is one of the most fundamental measures that evaluate the structural similarity between two nodes in a graph and has been applied in a plethora of data mining and machine learning tasks. These tasks often involve single-source SimRank computation that evaluates the SimRank values between a source node u and all other nodes. Due to its high computation complexity, single-source SimRank computation for large graphs is notoriously challenging, and hence recent studies resort to distributed processing. To our surprise, although SimRank has been widely adopted for two decades, theoretical aspects of distributed SimRanks with provable results have rarely been studied. In this paper, we conduct a theoretical study on single-source SimRank computation in the Massive Parallel Computation (MPC) model, which is the standard theoretical framework modeling distributed systems. Existing distributed SimRank algorithms enforce either Ω(log n) communication round complexity or Ω(n) machine space for a graph of n nodes. We overcome this barrier. Particularly, given a graph of n nodes, for any query node v and constant error ϵ > 3/n, we show that using O(log² log n) rounds of communication among machines is enough to compute single-source SimRank values with at most ϵ absolute error, while each machine only needs space sub-linear in n. To the best of our knowledge, this is the first single-source SimRank algorithm in MPC that can overcome the Θ(log n) round complexity barrier with provable result accuracy.
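For orientation, the quantity being computed can be estimated sequentially with the standard random-surfer-pairs view of SimRank, s(u, v) = E[c^t], where t is the first step at which two reverse random walks from u and v meet; the sketch below (toy graph, Monte Carlo, no MPC machinery) is only meant to make the single-source target concrete, not to reflect the paper's distributed algorithm.

```python
# Monte Carlo single-source SimRank on a toy directed graph.
import random
from collections import defaultdict

edges = [(1, 0), (2, 0), (3, 1), (3, 2), (0, 3)]   # invented (src, dst) edges
in_nbrs = defaultdict(list)
for s, d in edges:
    in_nbrs[d].append(s)
nodes = {x for e in edges for x in e}

def single_source_simrank(u, c=0.6, walks=2000, max_len=10):
    """Estimate s(u, v) for every node v by sampling pairs of reverse random walks."""
    est = {}
    for v in nodes:
        if v == u:
            est[v] = 1.0
            continue
        total = 0.0
        for _ in range(walks):
            a, b = u, v
            for t in range(1, max_len + 1):
                if not in_nbrs[a] or not in_nbrs[b]:
                    break                          # a walk got stuck: no in-neighbors
                a, b = random.choice(in_nbrs[a]), random.choice(in_nbrs[b])
                if a == b:                         # walks meet at step t
                    total += c ** t
                    break
        est[v] = total / walks
    return est

print(single_source_simrank(0))
```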
7

Hsieh, Ching, Madan Vunnam, Dilip Bhalsod, and Hao Chen. "Comparative Study Using LS-DYNA ALE &amp; S-ALE Methods for under body Mine Blast Simulations." In 2024 NDIA Michigan Chapter Ground Vehicle Systems Engineering and Technology Symposium. National Defense Industrial Association, 2024. http://dx.doi.org/10.4271/2024-01-3617.

Abstract:
Full-vehicle, End-to-End Underbody Blast (UB) simulations with the LS-DYNA ALE (Arbitrary Lagrange-Eulerian) method have been common practice at the Tank Automotive Research, Development and Engineering Center (TARDEC) for the last several years to support Program Managers in the Army Acquisition and Science & Technology (S&T) Community of military ground vehicles. Although the method has been applied extensively and successfully, the demand for reducing the simulation time has been very high. Very recently a new method, Structured ALE (S-ALE), was developed in LS-DYNA by taking advantage of structured mesh to speed up the calculation time. In this paper several case studies for underbody mine blast simulations were analyzed by both ALE and S-ALE methods. The comparative results show the new method is very promising in improving the simulation time as well as the Massively Parallel Processing (MPP) scalability.
8

Shioya, Ryuji, Masao Ogino, Hiroshi Kawai, and Shinobu Yoshimura. "Advanced General-Purpose Finite Element Solid Analysis System Adventure_Solid on the Earth Simulator: Its Application to Full-Scale Analysis of Nuclear Pressure Vessel." In ASME/JSME 2004 Pressure Vessels and Piping Conference. ASMEDC, 2004. http://dx.doi.org/10.1115/pvp2004-2750.

Abstract:
We have been developing an advanced general-purpose computational mechanics system, named ADVENTURE, which is designed to be able to analyze a model of arbitrary shape with a 10–100 million degrees of freedom (DOFs) mesh, and additionally to enable parametric and non-parametric shape optimization. Domain-decomposition-based parallel algorithms are implemented in the pre-process (domain decomposition), main processes (system matrix assembling and solutions), and post-process (visualization), respectively. In particular, the hierarchical domain decomposition method with a preconditioned iterative solver (HDDM) is adopted in one of the main modules for solid analysis, named ADVENTURE_Solid. The employed preconditioner is the Balancing Domain Decomposition (BDD) type method. ADVENTURE_Solid has been successfully implemented on a single PC, PC clusters, and massively parallel processors such as the Hitachi SR8000/MPP. In this study, this solid analysis module is implemented with minor modification on the Earth Simulator consisting of 256 nodes, i.e., 2,048 vector-type processing elements with a theoretical peak performance of 16 TFLOPS (tera floating-point operations per second), and succeeded in solving an elastostatic problem of a nuclear pressure vessel model with a 100 million DOFs mesh in 8.5 minutes at 5.1 TFLOPS, which is 31.8% of the peak performance and over 80% parallel efficiency. To demonstrate a virtual mock-up test, ADVENTURE_Solid is applied to solve a precise model of the ABWR vessel subjected to two kinds of loading conditions, i.e., (1) quasi-static seismic loading and (2) hydrostatic internal pressure.
9

Wenes, G. C. "Seismic imaging in massively parallel processors (MPP) computer architectures." In 55th EAEG Meeting. European Association of Geoscientists & Engineers, 1993. http://dx.doi.org/10.3997/2214-4609.201411588.

10

Levitan, Steven P., and Donald M. Chiarulli. "Massively parallel processing." In the 46th Annual Design Automation Conference. ACM Press, 2009. http://dx.doi.org/10.1145/1629911.1630050.


Reports on the topic "Massively Parallel Processing (MPP)"

1

Li, Yao. Massively Parallel Spatial Light Modulation-Based Optical Signal Processing. Defense Technical Information Center, 1993. http://dx.doi.org/10.21236/ada264846.

2

Bosl, B., T. Keller, and J. Northrup. High Performance Parallel Processing Project (HPPPP) Advanced Materials Designs for Massively Parall. Office of Scientific and Technical Information (OSTI), 2021. http://dx.doi.org/10.2172/1776660.

3

Mailhiot, C. High performance parallel processing project (HPPPP) advanced materials designs for massively parallel environment CRADA No. TC-0824-94-I. Office of Scientific and Technical Information (OSTI), 1998. http://dx.doi.org/10.2172/764030.

4

Barrett, Terence W. Advanced Workstations Accelerated by Embedded Massively Parallel Computer Modules for Image Processing Applications. Phase 1. Defense Technical Information Center, 1995. http://dx.doi.org/10.21236/ada299734.

5

Baughcum, S., and D. Rotman. High Performance Parallel Processing (HPPP) Global Atmospheric Chemistry Models on Massively Parallel Computers Final Report CRADA No. TC-0824-94-D. Office of Scientific and Technical Information (OSTI), 2018. http://dx.doi.org/10.2172/1424674.

6

Mailhiot, C., J. Northrup, and S. Smithline. High Performance Parallel Processing Project (HPPP) Advanced Materials Designs for Massively Parallel Environment Final Report CRADA No. TC-0824-94-I. Office of Scientific and Technical Information (OSTI), 2018. http://dx.doi.org/10.2172/1426081.

7

Rotman, D. High performance parallel processing (HPPP) global atmospheric chemistry models on massively parallel computers CRADA No. TC-0824-94-D - Final CRADA. Office of Scientific and Technical Information (OSTI), 1998. http://dx.doi.org/10.2172/756375.

8

Sela, Shlomo, and Michael McClelland. Investigation of a new mechanism of desiccation-stress tolerance in Salmonella. United States Department of Agriculture, 2013. http://dx.doi.org/10.32747/2013.7598155.bard.

Abstract:
Low-moisture foods (LMF) are increasingly involved in foodborne illness. While bacteria cannot grow in LMF due to the low water content, pathogens such as Salmonella can still survive in dry foods and pose health risks to consumer. We recently found that Salmonella secretes a proteinaceous compound during desiccation, which we identified as OsmY, an osmotic stress response protein of 177 amino acids. To elucidate the role of OsmY in conferring tolerance against desiccation and other stresses in Salmonella entericaserovarTyphimurium (STm), our specific objectives were: (1) Characterize the involvement of OsmY in desiccation tolerance; (2) Perform structure-function analysis of OsmY; (3) Study OsmY expression under various growth- and environmental conditions of relevance to agriculture; (4) Examine the involvement of OsmY in response to other stresses of relevance to agriculture; and (5) Elucidate regulatory pathways involved in controlling osmY expression. We demonstrated that an osmY-mutant strain is impaired in both desiccation tolerance (DT) and in long-term persistence during cold storage (LTP). Genetic complementation and addition of a recombinantOsmY (rOsmY) restored the mutant survival back to that of the wild type (wt). To analyze the function of specific domains we have generated a recombinantOsmY (rOsmY) protein. A dose-response DT study showed that rOsmY has the highest protection at a concentration of 0.5 nM. This effect was protein- specific as a comparable amount of bovine serum albumin, an unrelated protein, had a three-time lower protection level. Further characterization of OsmY revealed that the protein has a surfactant activity and is involved in swarming motility. OsmY was shown to facilitate biofilm formation during dehydration but not during bacterial growth under optimal growth conditions. This finding suggests that expression and secretion of OsmY under stress conditions was potentially associated with facilitating biofilm production. OsmY contains two conserved BON domains. To better understand the role of the BON sites in OsmY-mediated dehydration tolerance, we have generated two additional rOsmY constructs, lacking either BON1 or BON2 sites. BON1-minus (but not BON2) protein has decreased dehydration tolerance compared to intact rOsmY, suggesting that BON1 is required for maximal OsmY-mediated activity. Addition of BON1-peptide at concentration below 0.4 µM did not affect STm survival. Interestingly, a toxic effect of BON1 peptide was observed in concentration as low as 0.4 µM. Higher concentrations resulted in complete abrogation of the rOsmY effect, supporting the notion that BON-mediated interaction is essential for rOsmY activity. We performed extensive analysis of RNA expression of STm undergoing desiccation after exponential and stationary growth, identifying all categories of genes that are differentially expressed during this process. We also performed massively in-parallel screening of all genes in which mutation caused changes in fitness during drying, identifying over 400 such genes, which are now undergoing confirmation. As expected OsmY is one of these genes. In conclusion, this is the first study to identify that OsmY protein secreted during dehydration contributes to desiccation tolerance in Salmonella by facilitating dehydration- mediated biofilm formation. Expression of OsmY also enhances swarming motility, apparently through its surfactant activity. 
The BON1 domain is required for full OsmY activity, demonstrating a potential intervention to reduce pathogen survival in food processing. Expression and fitness screens have begun to elucidate the processes of desiccation, with the potential to uncover additional specific targets for efforts to mitigate pathogen survival in desiccation.