
Journal articles on the topic 'Massive parallel computers'


Consult the top 50 journal articles for your research on the topic 'Massive parallel computers.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of each academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Taghipour, Hassan, Mahdi Rezaei, and Heydar Ali Esmaili. "Solving the 0/1 Knapsack Problem by a Biomolecular DNA Computer." Advances in Bioinformatics 2013 (February 18, 2013): 1–6. http://dx.doi.org/10.1155/2013/341419.

Abstract:
Solving some mathematical problems such as NP-complete problems by conventional silicon-based computers is problematic and takes a very long time. DNA computing is an alternative method of computing which uses DNA molecules for computing purposes. DNA computers have massive degrees of parallel processing capability. The massive parallel processing characteristic of DNA computers is of particular interest in solving NP-complete and hard combinatorial problems. NP-complete problems such as the knapsack problem and other hard combinatorial problems can be easily solved by DNA computers in a very short pe
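As background to the abstract above (an illustration only, not material from the paper): the 0/1 knapsack search space grows as 2^n candidate subsets, which is what a DNA computer explores in parallel and what the deliberately naive Python sketch below enumerates sequentially.

```python
from itertools import product

def knapsack_bruteforce(values, weights, capacity):
    """Check all 2^n item subsets and return the best feasible value.

    Each bit string is one candidate solution; a DNA computer encodes and
    tests all of them at once, while a CPU walks them one by one.
    """
    best_value, best_bits = 0, None
    for bits in product((0, 1), repeat=len(values)):
        weight = sum(w for w, b in zip(weights, bits) if b)
        value = sum(v for v, b in zip(values, bits) if b)
        if weight <= capacity and value > best_value:
            best_value, best_bits = value, bits
    return best_value, best_bits

# Toy instance: the optimum is value 10, taking the first two items.
print(knapsack_bruteforce(values=[4, 6, 3, 4], weights=[3, 4, 2, 5], capacity=7))
```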
2

Sterkenburgh, Tomas, Rolf Michael Michels, Peter Dress, and Hilmar Franke. "Explicit finite-difference simulation of optical integrated devices on massive parallel computers." Applied Optics 36, no. 6 (1997): 1191. http://dx.doi.org/10.1364/ao.36.001191.

3

Ivancova, Olga, Vladimir Korenkov, Olga Tyatyushkina, Sergey Ulyanov, and Toshio Fukuda. "Quantum supremacy in end-to-end intelligent IT. PT. III. Quantum software engineering – quantum approximate optimization algorithm on small quantum processors." System Analysis in Science and Education, no. 2 (2020) (June 30, 2020): 115–76. http://dx.doi.org/10.37005/2071-9612-2020-2-115-176.

Abstract:
Principles and methodologies of quantum algorithmic gate-based design on small quantum computers are described. The possibilities of quantum algorithmic gate simulation on classical computers are discussed. A new approach to a circuit implementation design of quantum algorithm gates for fast quantum massive parallel computing is presented. SW & HW support sophisticated smart toolkit of supercomputing accelerator of quantum algorithm simulation on small quantum programmable computer algorithm gate (that can program in SW to implement arbitrary quantum algorithms by executing any sequence of universal
4

Ji, Yunhong, Yunpeng Chai, Xuan Zhou, Lipeng Ren, and Yajie Qin. "Smart Intra-query Fault Tolerance for Massive Parallel Processing Databases." Data Science and Engineering 5, no. 1 (2019): 65–79. http://dx.doi.org/10.1007/s41019-019-00114-z.

Abstract:
Intra-query fault tolerance has increasingly been a concern for online analytical processing, as more and more enterprises migrate data analytical systems from mainframes to commodity computers. Most massive parallel processing (MPP) databases do not support intra-query fault tolerance. They may suffer from prolonged query latency when running on unreliable commodity clusters. While SQL-on-Hadoop systems can utilize the fault tolerance support of low-level frameworks, such as MapReduce and Spark, their cost-effectiveness is not always acceptable. In this paper, we propose a smart intra
5

Wang, De Wen, and Xiao Jian Liu. "Parallel Fault Diagnosis of Power Transformer Based on MapReduce and K-Means." Applied Mechanics and Materials 494-495 (February 2014): 813–16. http://dx.doi.org/10.4028/www.scientific.net/amm.494-495.813.

Abstract:
Fault diagnosis can ensure the safe and economic operation of power transformers, and data mining is the key technology of fault diagnosis for power transformers. In order to achieve fast parallel fault diagnosis for power transformers, we need to put cloud computing technology into the smart grid. We give a parallel method of K-means based on the MapReduce framework on the Hadoop distributed systems cluster to diagnose the operation state of power transformers. Finally, through transformer fault diagnosis experiments on massive DGA data, the results indicate closely linear speedup with an inc
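To make the decomposition in the abstract above concrete, here is a minimal single-machine Python sketch of one K-means iteration phrased in map/reduce terms; it only illustrates the idea and is not the authors' Hadoop implementation (the points and centroids are made-up toy values).

```python
from collections import defaultdict

def kmeans_step(points, centroids):
    """One K-means iteration phrased as map (assign) + shuffle + reduce (average)."""
    # Map phase: emit (nearest-centroid-index, point) pairs.
    pairs = []
    for p in points:
        idx = min(range(len(centroids)),
                  key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])))
        pairs.append((idx, p))

    # Shuffle phase: group the points by centroid index.
    groups = defaultdict(list)
    for idx, p in pairs:
        groups[idx].append(p)

    # Reduce phase: average each group to obtain the new centroid.
    new_centroids = list(centroids)
    for idx, grp in groups.items():
        new_centroids[idx] = tuple(sum(coord) / len(grp) for coord in zip(*grp))
    return new_centroids

points = [(1.0, 1.0), (1.2, 0.8), (8.0, 8.0), (7.8, 8.2)]
print(kmeans_step(points, [(0.0, 0.0), (10.0, 10.0)]))  # -> [(1.1, 0.9), (7.9, 8.1)]
```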
6

Jiang, Ling, Guoan Tang, Xuejun Liu, Xiaodong Song, Jianyi Yang, and Kai Liu. "Parallel contributing area calculation with granularity control on massive grid terrain datasets." Computers & Geosciences 60 (October 2013): 70–80. http://dx.doi.org/10.1016/j.cageo.2013.07.003.

7

Ellis, J. L., G. Kedem, T. C. Lyerly, et al. "The RayCasting Engine and Ray Representations: A Technical Summary." International Journal of Computational Geometry & Applications 01, no. 04 (1991): 347–80. http://dx.doi.org/10.1142/s0218195991000256.

Abstract:
Solid modeling is computationally intensive. Thus far its use in industry has been limited mainly to simple parts and simple applications, and this is not likely to change much until 'massive' computing power can be made available at an affordable cost. The RayCasting Engine is one specialized source of 'massive' computing power for solid modeling, and it is but the simplest member of a potentially large family of 'classification computers'. The RayCasting Engine (RCE) is a highly parallel, custom-VLSI computer that classifies grids of parallel lines against solids represented in CSG. The sets
8

Pothen, Alex, S. M. Ferdous, and Fredrik Manne. "Approximation algorithms in combinatorial scientific computing." Acta Numerica 28 (May 1, 2019): 541–633. http://dx.doi.org/10.1017/s0962492919000035.

Abstract:
We survey recent work on approximation algorithms for computing degree-constrained subgraphs in graphs and their applications in combinatorial scientific computing. The problems we consider include maximization versions of cardinality matching, edge-weighted matching, vertex-weighted matching and edge-weighted b-matching, and minimization versions of weighted edge cover and b-edge cover. Exact algorithms for these problems are impractical for massive graphs with several millions of edges. For each problem we discuss theoretical foundations, the design of several linear or near-linear time
9

Wohl, Peter. "Efficiency through Reduced Communication in Message Passing Simulation of Neural Networks." International Journal on Artificial Intelligence Tools 02, no. 01 (1993): 133–62. http://dx.doi.org/10.1142/s0218213093000096.

Abstract:
Neural algorithms require massive computation and very high communication bandwidth and are naturally expressed at a level of granularity finer than parallel systems can exploit efficiently. Mapping Neural Networks onto parallel computers has traditionally implied a form of clustering neurons and weights to increase the granularity. SIMD simulations may exceed a million connections per second using thousands of processors, but are often tailored to particular networks and learning algorithms. MIMD simulations required an even larger granularity to run efficiently and often trade flexibility fo
10

Cazzaniga, Paolo, Marco S. Nobile, Daniela Besozzi, Matteo Bellini, and Giancarlo Mauri. "Massive Exploration of Perturbed Conditions of the Blood Coagulation Cascade through GPU Parallelization." BioMed Research International 2014 (2014): 1–20. http://dx.doi.org/10.1155/2014/863298.

Abstract:
The introduction of general-purpose Graphics Processing Units (GPUs) is boosting scientific applications in Bioinformatics, Systems Biology, and Computational Biology. In these fields, the use of high-performance computing solutions is motivated by the need of performing large numbers of in silico analyses to study the behavior of biological systems in different conditions, which necessitate a computing power that usually overtakes the capability of standard desktop computers. In this work we present coagSODA, a CUDA-powered computational tool that was purposely developed for the analysis of a l
11

Nakano, Aiichiro, Rajiv K. Kalia, Priya Vashishta, et al. "Scalable Atomistic Simulation Algorithms for Materials Research." Scientific Programming 10, no. 4 (2002): 263–70. http://dx.doi.org/10.1155/2002/203525.

Abstract:
A suite of scalable atomistic simulation programs has been developed for materials research based on space-time multiresolution algorithms. Design and analysis of parallel algorithms are presented for molecular dynamics (MD) simulations and quantum-mechanical (QM) calculations based on the density functional theory. Performance tests have been carried out on 1,088-processor Cray T3E and 1,280-processor IBM SP3 computers. The linear-scaling algorithms have enabled 6.44-billion-atom MD and 111,000-atom QM calculations on 1,024 SP3 processors with parallel efficiency well over 90%. production-qua
12

Ivancova, Olga, Vladimir Korenkov, Olga Tyatyushkina, Sergey Ulyanov, and Toshio Fukuda. "Quantum supremacy in end-to-end intelligent IT. Pt. I: Quantum software engineering – quantum gate level applied models simulators." System Analysis in Science and Education, no. 1 (2020) (2020): 52–84. http://dx.doi.org/10.37005/2071-9612-2020-1-52-84.

Abstract:
Principles and methodologies of quantum algorithmic gate design for master course and PhD students in computer science, control engineering and intelligent robotics are described. The possibilities of quantum algorithmic gate simulation on classical computers are discussed. Applications of quantum gates of nanotechnology in intelligent quantum control are introduced. A new approach to a circuit implementation design of quantum algorithm gates for fast quantum massive parallel computing is presented. The main attention is focused on the development of a design method of fast quantum algorithm operators as superpos
13

Adamatzky, Andrew, Jörg Schnauß, and Florian Huber. "Actin droplet machine." Royal Society Open Science 6, no. 12 (2019): 191135. http://dx.doi.org/10.1098/rsos.191135.

Abstract:
The actin droplet machine is a computer model of a three-dimensional network of actin bundles developed in a droplet of a physiological solution, which implements mappings of sets of binary strings. The actin bundle network is conductive to travelling excitations, i.e. impulses. The machine is interfaced with an arbitrary selected set of k electrodes through which stimuli, binary strings of length k represented by impulses generated on the electrodes, are applied and responses are recorded. The responses are recorded in a form of impulses and then converted to binary strings. The machine’s sta
14

Fung, Larry S. K., Mohammad O. Sindi, and Ali H. Dogru. "Multiparadigm Parallel Acceleration for Reservoir Simulation." SPE Journal 19, no. 04 (2014): 716–25. http://dx.doi.org/10.2118/163591-pa.

Abstract:
With the advent of the multicore central-processing unit (CPU), today's commodity PC clusters are effectively a collection of interconnected parallel computers, each with multiple multicore CPUs and large shared random access memory (RAM), connected together by means of high-speed networks. Each computer, referred to as a compute node, is a powerful parallel computer on its own. Each compute node can be equipped further with acceleration devices such as the general-purpose graphical processing unit (GPGPU) to further speed up computational-intensive portions of the simulator. Reservoir
15

Huang, Wei, Lingkui Meng, Dongying Zhang, and Wen Zhang. "In-Memory Parallel Processing of Massive Remotely Sensed Data Using an Apache Spark on Hadoop YARN Model." IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 10, no. 1 (2017): 3–19. http://dx.doi.org/10.1109/jstars.2016.2547020.

16

Adamatzky, Andrew. "A brief history of liquid computers." Philosophical Transactions of the Royal Society B: Biological Sciences 374, no. 1774 (2019): 20180372. http://dx.doi.org/10.1098/rstb.2018.0372.

Abstract:
A substrate does not have to be solid to compute. It is possible to make a computer purely from a liquid. I demonstrate this using a variety of experimental prototypes where a liquid carries signals, actuates mechanical computing devices and hosts chemical reactions. We show hydraulic mathematical machines that compute functions based on mass transfer analogies. I discuss several prototypes of computing devices that employ fluid flows and jets. They are fluid mappers, where the fluid flow explores a geometrically constrained space to find an optimal way around, e.g. the shortest path in a maze
17

Khabarov, Nikolay, Alexey Smirnov, Juraj Balkovič, et al. "Heterogeneous Compute Clusters and Massive Environmental Simulations Based on the EPIC Model." Modelling 1, no. 2 (2020): 215–24. http://dx.doi.org/10.3390/modelling1020013.

Abstract:
In recent years, the crop growth modeling community invested immense effort into high resolution global simulations estimating inter alia the impacts of projected climate change. The demand for computing resources in this context is high and expressed in processor core-years per one global simulation, implying several crops, management systems, and a time span of several decades for a single climatic scenario. The anticipated need to model a richer set of alternative management options and crop varieties would increase the processing capacity requirements even more, raising the looming issue of c
18

Li, Yamin, Shietung Peng, and Wanming Chu. "Disjoint-Paths and Fault-Tolerant Routing on Recursive Dual-Net." International Journal of Foundations of Computer Science 22, no. 05 (2011): 1001–18. http://dx.doi.org/10.1142/s0129054111008532.

Abstract:
The recursive dual-net is a newly proposed interconnection network for massive parallel computers. The recursive dual-net is based on recursive dual-construction of a symmetric base network. A k-level dual-construction for k > 0 creates a network containing (2n0)^(2^k)/2 nodes with node-degree d0 + k, where n0 and d0 are the number of nodes and the node-degree of the base network, respectively. The recursive dual-net is node and edge symmetric and can contain huge number of nodes with small node-degree and short diameter. Disjoint-paths routing and fault-tolerant routing are fundamental and cri
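For a quick sanity check of the node-count formula quoted above, the sketch below evaluates it for a hypothetical 3-cube base network (n0 = 8 nodes of degree d0 = 3); the base network is only an example, not one taken from the paper.

```python
def rdn_size(n0: int, d0: int, k: int):
    """Nodes and node-degree of a k-level recursive dual-net:
    (2*n0)**(2**k) / 2 nodes, node-degree d0 + k (k = 0 is the base network)."""
    return (2 * n0) ** (2 ** k) // 2, d0 + k

# Base network: 3-dimensional hypercube (8 nodes, degree 3).
for k in range(4):
    nodes, degree = rdn_size(8, 3, k)
    print(f"k={k}: {nodes} nodes, degree {degree}")
# k=0: 8 nodes, degree 3; k=1: 128, degree 4; k=2: 32768, degree 5; k=3: ~2.1e9, degree 6.
```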
19

Liu, Yang, Youbo Liu, Junyong Liu, et al. "A MapReduce Based High Performance Neural Network in Enabling Fast Stability Assessment of Power Systems." Mathematical Problems in Engineering 2017 (2017): 1–12. http://dx.doi.org/10.1155/2017/4030146.

Abstract:
Transient stability assessment is playing a vital role in modern power systems. For this purpose, machine learning techniques have been widely employed to find critical conditions and recognize transient behaviors based on massive data analysis. However, an ever increasing volume of data generated from power systems poses a number of challenges to traditional machine learning techniques, which are computationally intensive running on standalone computers. This paper presents a MapReduce based high performance neural network to enable fast stability assessment of power systems. Hadoop, which is
20

Song, JinGyo, and Seog Chung Seo. "Efficient Parallel Implementation of CTR Mode of ARX-Based Block Ciphers on ARMv8 Microcontrollers." Applied Sciences 11, no. 6 (2021): 2548. http://dx.doi.org/10.3390/app11062548.

Abstract:
With the advancement of 5G mobile telecommunication, various IoT (Internet of Things) devices communicate massive amounts of data by being connected to wireless networks. Since this wireless communication is vulnerable to hackers via data leakage during communication, the transmitted data should be encrypted through block ciphers to protect the data during communication. In addition, in order to encrypt the massive amounts of data securely, it is essential to apply one of the secure modes of operation. Among them, CTR (CounTeR) mode is the most widely used in industrial applications. However, these
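As a plain-Python illustration of why CTR mode lends itself to parallel implementation (each keystream block depends only on the key, nonce, and block counter, so blocks have no cross-dependencies), here is a toy sketch; SHA-256 merely stands in for the ARX block cipher, and none of this reflects the paper's ARMv8 code.

```python
import hashlib

def keystream_block(key: bytes, nonce: bytes, counter: int) -> bytes:
    # Stand-in PRF: a real CTR implementation would invoke the block cipher here.
    return hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()

def ctr_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """XOR data with the CTR keystream (encryption and decryption are identical)."""
    block = 32  # digest size of the stand-in PRF
    out = bytearray()
    # Each counter value is independent, so this loop could be split across
    # cores or SIMD lanes with no synchronization between blocks.
    for ctr in range((len(data) + block - 1) // block):
        ks = keystream_block(key, nonce, ctr)
        chunk = data[ctr * block:(ctr + 1) * block]
        out.extend(c ^ k for c, k in zip(chunk, ks))
    return bytes(out)

msg = b"massive amounts of IoT sensor data"
ct = ctr_xor(b"k" * 16, b"n" * 8, msg)
assert ctr_xor(b"k" * 16, b"n" * 8, ct) == msg  # applying CTR twice restores the message
```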
21

Raasch, Siegfried, and Michael Schröter. "PALM - A large-eddy simulation model performing on massively parallel computers." Meteorologische Zeitschrift 10, no. 5 (2001): 363–72. http://dx.doi.org/10.1127/0941-2948/2001/0010-0363.

22

Hanke, Martin, Marlis Hochbruck, and Wilhelm Niethammer. "Experiments with Krylov subspace methods on a massively parallel computer." Applications of Mathematics 38, no. 6 (1993): 440–51. http://dx.doi.org/10.21136/am.1993.104566.

23

Erlacher, Christoph, Karl-Heinrich Anders, Piotr Jankowski, Gernot Paulus, and Thomas Blaschke. "A Framework for Cloud-Based Spatially-Explicit Uncertainty and Sensitivity Analysis in Spatial Multi-Criteria Models." ISPRS International Journal of Geo-Information 10, no. 4 (2021): 244. http://dx.doi.org/10.3390/ijgi10040244.

Abstract:
Global sensitivity analysis, like variance-based methods for massive raster datasets, is especially computationally costly and memory-intensive, limiting its applicability for commodity cluster computing. The computational effort depends mainly on the number of model runs, the spatial, spectral, and temporal resolutions, the number of criterion maps, and the model complexity. The current Spatially-Explicit Uncertainty and Sensitivity Analysis (SEUSA) approach employs a cluster-based parallel and distributed Python–Dask solution for large-scale spatial problems, which validates and quantifies t
24

Suzuki, Katsuyuki, Hideomi Ohtsubo, and Masashi Naito. "Distributed Structural Optimization Using Massive Parallel Computer." Journal of the Society of Naval Architects of Japan 1997, no. 182 (1997): 589–94. http://dx.doi.org/10.2534/jjasnaoe1968.1997.182_589.

25

Hou, Kaihua, Chengqi Cheng, Bo Chen, et al. "A Set of Integral Grid-Coding Algebraic Operations Based on GeoSOT-3D." ISPRS International Journal of Geo-Information 10, no. 7 (2021): 489. http://dx.doi.org/10.3390/ijgi10070489.

Abstract:
As the amount of collected spatial information (2D/3D) increases, the real-time processing of these massive data is among the urgent issues that need to be dealt with. Discretizing the physical earth into a digital gridded earth and assigning an integral computable code to each grid has become an effective way to accelerate real-time processing. Researchers have proposed optimization algorithms for spatial calculations in specific scenarios. However, a complete set of algorithms for real-time processing using grid coding is still lacking. To address this issue, a carefully designed, integral g
26

Kato, S., S. Murakami, Y. Utsumi, and K. Mizutani. "Application of massive parallel computer to computational wind engineering." Journal of Wind Engineering and Industrial Aerodynamics 46-47 (August 1993): 393–400. http://dx.doi.org/10.1016/0167-6105(93)90305-8.

27

Gu, Mengyang, and James O. Berger. "Parallel partial Gaussian process emulation for computer models with massive output." Annals of Applied Statistics 10, no. 3 (2016): 1317–47. http://dx.doi.org/10.1214/16-aoas934.

28

Zhang, Yu, James Z. Wang, and Jia Li. "Parallel Massive Clustering of Discrete Distributions." ACM Transactions on Multimedia Computing, Communications, and Applications 11, no. 4 (2015): 1–24. http://dx.doi.org/10.1145/2700293.

29

Navarro, Cristóbal A., Nancy Hitschfeld-Kahler, and Luis Mateu. "A Survey on Parallel Computing and its Applications in Data-Parallel Problems Using GPU Architectures." Communications in Computational Physics 15, no. 2 (2014): 285–329. http://dx.doi.org/10.4208/cicp.110113.010813a.

Abstract:
Parallel computing has become an important subject in the field of computer science and has proven to be critical when researching high performance solutions. The evolution of computer architectures (multi-core and many-core) towards a higher number of cores can only confirm that parallelism is the method of choice for speeding up an algorithm. In the last decade, the graphics processing unit, or GPU, has gained an important place in the field of high performance computing (HPC) because of its low cost and massive parallel processing power. Super-computing has become, for the first time,
30

Furnari, Mario. "Memory Systems and Massive Parallel Symbolic Computation." International Journal of High Speed Computing 05, no. 03 (1993): 307–26. http://dx.doi.org/10.1142/s0129053393000141.

31

Ishiya, Koji, and Shintaroh Ueda. "MitoSuite: a graphical tool for human mitochondrial genome profiling in massive parallel sequencing." PeerJ 5 (May 30, 2017): e3406. http://dx.doi.org/10.7717/peerj.3406.

Abstract:
Recent rapid advances in high-throughput, next-generation sequencing (NGS) technologies have promoted mitochondrial genome studies in the fields of human evolution, medical genetics, and forensic casework. However, scientists unfamiliar with computer programming often find it difficult to handle the massive volumes of data that are generated by NGS. To address this limitation, we developed MitoSuite, a user-friendly graphical tool for analysis of data from high-throughput sequencing of the human mitochondrial genome. MitoSuite generates a visual report on NGS data with simple mouse operations.
32

Qiao, Shaojie, Tianrui Li, and Jing Peng. "Parallel Sequential Pattern Mining of Massive Trajectory Data." International Journal of Computational Intelligence Systems 3, no. 3 (2010): 343. http://dx.doi.org/10.2991/ijcis.2010.3.3.10.

33

Qiao, Shaojie, Tianrui Li, Jing Peng, and Jiangtao Qiu. "Parallel Sequential Pattern Mining of Massive Trajectory Data." International Journal of Computational Intelligence Systems 3, no. 3 (2010): 343–56. http://dx.doi.org/10.1080/18756891.2010.9727705.

34

Zhao, Jin, Fan, Song, Zhou, and Jiang. "High-performance Overlay Analysis of Massive Geographic Polygons That Considers Shape Complexity in a Cloud Environment." ISPRS International Journal of Geo-Information 8, no. 7 (2019): 290. http://dx.doi.org/10.3390/ijgi8070290.

Abstract:
Overlay analysis is a common task in geographic computing that is widely used in geographic information systems, computer graphics, and computer science. With the breakthroughs in Earth observation technologies, particularly the emergence of high-resolution satellite remote-sensing technology, geographic data have demonstrated explosive growth. The overlay analysis of massive and complex geographic data has become a computationally intensive task. Distributed parallel processing in a cloud environment provides an efficient solution to this problem. The cloud computing paradigm represented by
35

Liu, Zhi Guo, Jun Yu Li, and Xiu Li Ren. "Research on the Parameter Model Simulation Based on Computer Simulation Basketball Match." Advanced Materials Research 791-793 (September 2013): 1203–7. http://dx.doi.org/10.4028/www.scientific.net/amr.791-793.1203.

Abstract:
With the development of computer hardware and software, computer virtual simulation has developed rapidly. The running process of a computer simulation platform needs to deal with massive data, and general software testing technology cannot meet the requirements. On this basis, a parallel test method for computer virtual platforms is proposed. Firstly, this paper gives a brief introduction to the process of computer simulation, and then the testing method can carry out software programming; it will realize the virtual process of serial FIFO buffer and r
36

Ballard, Dana H. "Cortical connections and parallel processing: Structure and function." Behavioral and Brain Sciences 9, no. 1 (1986): 67–90. http://dx.doi.org/10.1017/s0140525x00021555.

Abstract:
The cerebral cortex is a rich and diverse structure that is the basis of intelligent behavior. One of the deepest mysteries of the function of cortex is that neural processing times are only about one hundred times as fast as the fastest response times for complex behavior. At the very least, this would seem to indicate that the cortex does massive amounts of parallel computation. This paper explores the hypothesis that an important part of the cortex can be modeled as a connectionist computer that is especially suited for parallel problem solving. The connectionist computer uses a spec
37

Krawczyk, Henryk, Michał Nykiel, and Jerzy Proficz. "Tryton Supercomputer Capabilities for Analysis of Massive Data Streams." Polish Maritime Research 22, no. 3 (2015): 99–104. http://dx.doi.org/10.1515/pomr-2015-0062.

Abstract:
The recently deployed supercomputer Tryton, located in the Academic Computer Center of Gdansk University of Technology, provides great means for massive parallel processing. Moreover, the status of the Center as one of the main network nodes in the PIONIER network enables the fast and reliable transfer of data produced by miscellaneous devices scattered in the area of the whole country. The typical examples of such data are streams containing radio-telescope and satellite observations. Their analysis, especially with real-time constraints, can be challenging and requires the usage of
38

García-García, César, José Luis Fernández-Robles, Victor Larios-Rosillo, and Hervé Luga. "ALFIL." International Journal of Game-Based Learning 2, no. 3 (2012): 71–86. http://dx.doi.org/10.4018/ijgbl.2012070105.

Abstract:
This article presents the current development of a serious game for the simulation of massive evacuations. The purpose of this project is to promote self-protection through awareness of the procedures and different possible scenarios during the evacuation of a massive event. Sophisticated behaviors require massive computational power and it has been necessary to implement several distributed programming techniques to simulate crowds of thousands of people. Even with the current state of computer hardware, the costs of building and operating this hardware is still prohibitive; so, it's preferre
39

Ma, Youzhong, Xiaofeng Meng, and Shaoya Wang. "Parallel similarity joins on massive high-dimensional data using MapReduce." Concurrency and Computation: Practice and Experience 28, no. 1 (2015): 166–83. http://dx.doi.org/10.1002/cpe.3663.

40

Li, Deguang, and Zhanyou Cui. "A Parallel Attribute Reduction Method Based on Classification." Complexity 2021 (April 10, 2021): 1–8. http://dx.doi.org/10.1155/2021/9989471.

Abstract:
Parallel processing as a method to improve computer performance has become a development trend. Based on rough set theory and the divide-and-conquer idea of knowledge reduction, this paper proposes a classification method that supports parallel attribute reduction processing; the method makes the relative positive domain, which needs to be calculated repeatedly, independent, so the independent relative positive domain calculations can be processed in parallel; thus, attribute reduction could be handled in parallel based on this classification method. Finally, the proposed algorithm and the traditi
41

Pagès, Gilles, and Benedikt Wilbertz. "GPGPUs in computational finance: massive parallel computing for American style options." Concurrency and Computation: Practice and Experience 24, no. 8 (2011): 837–48. http://dx.doi.org/10.1002/cpe.1774.

42

Wu, Zebin, Jinping Gu, Yonglong Li, Fu Xiao, Jin Sun, and Zhihui Wei. "Distributed Parallel Endmember Extraction of Hyperspectral Data Based on Spark." Scientific Programming 2016 (2016): 1–9. http://dx.doi.org/10.1155/2016/3252148.

Abstract:
Due to the increasing dimensionality and volume of remotely sensed hyperspectral data, the development of acceleration techniques for massive hyperspectral image analysis approaches is a very important challenge. Cloud computing offers many possibilities of distributed processing of hyperspectral datasets. This paper proposes a novel distributed parallel endmember extraction method based on iterative error analysis that utilizes cloud computing principles to efficiently process massive hyperspectral data. The proposed method takes advantage of technologies including MapReduce programming model
43

S. Nobile, Marco, Paolo Cazzaniga, Daniela Besozzi, and Giancarlo Mauri. "ginSODA: massive parallel integration of stiff ODE systems on GPUs." Journal of Supercomputing 75, no. 12 (2018): 7844–56. http://dx.doi.org/10.1007/s11227-018-2549-5.

44

Wu, Dan, Ming Quan Zhou, and Rong Fang Bie. "Massive Image Treatment System Based on Cloud Computing Platform." Applied Mechanics and Materials 687-691 (November 2014): 3733–37. http://dx.doi.org/10.4028/www.scientific.net/amm.687-691.3733.

Abstract:
Massive image processing technology places high demands on the processor and memory, and it needs to adopt a high-performance processor and large-capacity memory, while single-processor or single-core processing and traditional memory cannot satisfy the needs of image processing. This paper introduces the cloud computing function into the massive image processing system. Through the cloud computing function it expands the virtual space of the system, saves computer resources and improves the efficiency of image processing. The system processor uses a multi-core DSP parallel processor, and develo
45

Hao, Wang Shen, Xin Min Dong, Jie Han, and Wen Ping Lei. "Study on Mechanical Equipment Fault Diagnosis System Based on Cloud Computing." Applied Mechanics and Materials 220-223 (November 2012): 2520–23. http://dx.doi.org/10.4028/www.scientific.net/amm.220-223.2520.

Abstract:
Generally working in severe conditions, mechanical equipment is subjected to progressive deterioration of its state. Mechanical failures account for more than 60% of breakdowns of the system. Therefore, the identification of impending mechanical faults is very important to prevent the system from running in a faulty state. Traditional parallel computing generally requires a high performance computer, while the parallel FFT algorithm based on the Hadoop MapReduce programming model can be realized on low-end machines. Combining Cloud Computing with equipment fault diagnosis tec
46

Wu, Yongwei, Weichao Guo, Jinglei Ren, Xun Zhao, and Weimin Zheng. "NO2: Speeding up Parallel Processing of Massive Compute-Intensive Tasks." IEEE Transactions on Computers 63, no. 10 (2014): 2487–99. http://dx.doi.org/10.1109/tc.2013.132.

47

Huang, Wanrong, Xiaodong Yi, Yichun Sun, Yingwen Liu, Shuai Ye, and Hengzhu Liu. "Scalable Parallel Distributed Coprocessor System for Graph Searching Problems with Massive Data." Scientific Programming 2017 (2017): 1–9. http://dx.doi.org/10.1155/2017/1496104.

Abstract:
The Internet applications, such as network searching, electronic commerce, and modern medical applications, produce and process massive data. Considerable data parallelism exists in computation processes of data-intensive applications. A traversal algorithm, breadth-first search (BFS), is fundamental in many graph processing applications and metrics when a graph grows in scale. A variety of scientific programming methods have been proposed for accelerating and parallelizing BFS because of the poor temporal and spatial locality caused by inherent irregular memory access patterns. However, new p
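To illustrate the data parallelism in BFS that the abstract above points to, here is a level-synchronous BFS sketch in plain Python (not the authors' coprocessor system); all vertices in a frontier can be expanded concurrently, which is what parallel graph engines exploit.

```python
def bfs_levels(adj, source):
    """Level-synchronous BFS: return the hop distance of every reachable vertex."""
    dist = {source: 0}
    frontier = [source]
    level = 0
    while frontier:
        level += 1
        next_frontier = []
        # In a parallel implementation, each vertex of `frontier` is expanded by a
        # different worker, with atomic updates to the visited/distance structure.
        for u in frontier:
            for v in adj.get(u, ()):
                if v not in dist:
                    dist[v] = level
                    next_frontier.append(v)
        frontier = next_frontier
    return dist

# Hypothetical toy graph: 0 -> {1, 2}, 1 -> {3}, 2 -> {3, 4}, 3 -> {5}, 4 -> {5}.
print(bfs_levels({0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5]}, source=0))
# {0: 0, 1: 1, 2: 1, 3: 2, 4: 2, 5: 3}
```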
48

Dai, Yu. "Application of Cloud Video Information Processing Technology in Alleviating the Food Safety Trust Crisis." International Journal of Pattern Recognition and Artificial Intelligence 34, no. 02 (2019): 2055007. http://dx.doi.org/10.1142/s0218001420550071.

Abstract:
In view of the current consumer trust crisis in food safety, some researchers proposed to build a reliable food safety traceability system solution. Because the food safety traceability system requires storing and processing massive video data, this paper proposes to build a reliable food safety traceability system by introducing cloud storage technology into the video surveillance system. A Hadoop platform is built to store and process the massive monitoring video data. In addition, in order to make use of the MapReduce framework for parallel computing, this paper optimizes the traditional particle
49

Mendez, Diego, David Arevalo, Diego Patino, Eduardo Gerlein, and Ricardo Quintana. "Parallel Architecture of Reconfigurable Hardware for Massive Output Active Noise Control." Parallel Processing Letters 29, no. 03 (2019): 1950014. http://dx.doi.org/10.1142/s0129626419500142.

Abstract:
Filtered-x Least Mean Squares (FxLMS) is an algorithm commonly used for Active Noise Control (ANC) systems in order to cancel undesired acoustic waves from a sound source. There is a small number of hardware designs reported in the literature, that in turn only use one reference signal, one error signal and one output control signal. In this paper, it is proposed a 3-dimensional hardware-based version of the widely used FxLMS algorithm, using one reference microphone, 18 error microphones, one output and a FIR filter of 400th order. The FxLMS algorithm was implemented in a Xil
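For readers unfamiliar with FxLMS, the following single-channel NumPy sketch shows the core filtered-x update (w <- w + mu * e * x_filtered) behind the multichannel FPGA design described above; the secondary-path model, filter length, and signals are made-up toy values, not the paper's 18-error-microphone configuration.

```python
import numpy as np

def fxlms(reference, disturbance, sec_path, n_taps=16, mu=0.01):
    """Single-channel FxLMS: adapt FIR weights w so that the control signal,
    after passing through the secondary path, cancels the disturbance."""
    w = np.zeros(n_taps)
    x_hist = np.zeros(n_taps)          # reference history for the control filter
    y_hist = np.zeros(len(sec_path))   # control outputs fed through the secondary path
    xs_hist = np.zeros(len(sec_path))  # reference history for the filtered-x signal
    xf_hist = np.zeros(n_taps)         # filtered-reference history for the update
    errors = []
    for x, d in zip(reference, disturbance):
        x_hist = np.roll(x_hist, 1); x_hist[0] = x
        y = w @ x_hist                            # anti-noise sample
        y_hist = np.roll(y_hist, 1); y_hist[0] = y
        e = d - sec_path @ y_hist                 # signal at the error microphone
        xs_hist = np.roll(xs_hist, 1); xs_hist[0] = x
        xf_hist = np.roll(xf_hist, 1); xf_hist[0] = sec_path @ xs_hist
        w = w + mu * e * xf_hist                  # FxLMS weight update
        errors.append(e)
    return w, np.asarray(errors)

rng = np.random.default_rng(0)
x = rng.standard_normal(4000)                             # reference noise
d = np.convolve(x, [0.5, -0.4, 0.2])[:len(x)]             # correlated disturbance
w, e = fxlms(x, d, sec_path=np.array([0.6, 0.3, 0.1]))    # toy secondary-path model
print(abs(e[:200]).mean(), "->", abs(e[-200:]).mean())    # residual error shrinks as w adapts
```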
50

Radojković, Petar, Vladimir Čakarević, Javier Verdú, et al. "Thread to strand binding of parallel network applications in massive multi-threaded systems." ACM SIGPLAN Notices 45, no. 5 (2010): 191–202. http://dx.doi.org/10.1145/1837853.1693480.
