Journal articles on the topic 'And Cluster Computing'

Consult the top 50 journal articles for your research on the topic 'And Cluster Computing.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1. Buyya, Rajkumar, Hai Jin, and Toni Cortes. "Cluster computing." Future Generation Computer Systems 18, no. 3 (January 2002): v–viii. http://dx.doi.org/10.1016/s0167-739x(01)00053-x.

2. Nwobodo, Ikechukwu. "Cloud Computing: A Detailed Relationship to Grid and Cluster Computing." International Journal of Future Computer and Communication 4, no. 2 (April 2015): 82–87. http://dx.doi.org/10.7763/ijfcc.2015.v4.361.

3. Rosenberg, Arnold L., and Ron C. Chiang. "Heterogeneity in Computing: Insights from a Worksharing Scheduling Problem." International Journal of Foundations of Computer Science 22, no. 06 (September 2011): 1471–93. http://dx.doi.org/10.1142/s0129054111008829.

Abstract: Heterogeneity complicates the use of multicomputer platforms. Can it also enhance their performance? How can one measure the power of a heterogeneous assemblage of computers ("cluster"), in absolute terms (how powerful is this cluster) and relative terms (which cluster is more powerful)? Is a cluster that has one super-fast computer and the rest of "average" speed more/less powerful than one all of whose computers are "moderately" fast? If you can replace just one computer in a cluster with a faster one, should you replace the fastest? the slowest? A result concerning "worksharing" in heterogeneous clusters provides a highly idealized, yet algorithmically meaningful, framework for studying such questions in a way that admits rigorous analysis and formal proof. We encounter some surprises as we answer the preceding questions (perforce, within the idealized framework). Highlights: (1) If one can replace only one computer in a cluster by a faster one, it is (almost) always most advantageous to replace the fastest one. (2) If the computers in two clusters have the same mean speed, then the cluster with the larger variance in speed is (almost) always more productive (verified analytically for small clusters and empirically for large ones). (3) Heterogeneity can actually enhance a cluster's computing power.

4. Hatcher, P., M. Reno, G. Antoniu, and L. Bouge. "Cluster Computing with Java." Computing in Science and Engineering 7, no. 2 (March 2005): 34–39. http://dx.doi.org/10.1109/mcse.2005.28.

5. Du, Ran, Jingyan Shi, Xiaowei Jiang, and Jiaheng Zou. "Cosmos: A Unified Accounting System both for the HTCondor and Slurm Clusters at IHEP." EPJ Web of Conferences 245 (2020): 07060. http://dx.doi.org/10.1051/epjconf/202024507060.

Abstract: HTCondor was adopted to manage the High Throughput Computing (HTC) cluster at IHEP in 2016. In 2017 a Slurm cluster was set up to run High Performance Computing (HPC) jobs. To provide accounting services for these two clusters, we implemented a unified accounting system named Cosmos. Multiple workloads bring different accounting requirements. Briefly speaking, there are four types of jobs to account for. First, 30 million single-core jobs run in the HTCondor cluster every year. Second, Virtual Machine (VM) jobs run in the legacy HTCondor VM cluster. Third, parallel jobs run in the Slurm cluster, and some of these jobs run on GPU worker nodes to accelerate computing. Lastly, some selected HTC jobs are migrated from the HTCondor cluster to the Slurm cluster for research purposes. To satisfy all the mentioned requirements, Cosmos is implemented with four layers: acquisition, integration, statistics and presentation. Details of the issues and solutions at each layer are presented in the paper. Cosmos has run in production for two years, and its status shows that it is a well-functioning system that meets the requirements of the HTCondor and Slurm clusters.

6. Pushkar, V. I., H. D. Kyselov, YE V. Olenovych, and O. H. Kyivskyi. "Computing cluster performance evaluation in department of the university." Electronics and Communications 15, no. 5 (March 29, 2010): 211–16. http://dx.doi.org/10.20535/2312-1807.2010.58.5.285236.

Abstract: An educational cluster based on low-speed servers was created, and its testing and overall performance evaluation were carried out. The testing results demonstrate that such a cluster can be used in the educational process for teaching parallel programming and for training cluster system administrators.

7. Tripathy, Minakshi, and C. R. Tripathy. "A Comparative Analysis of Performance of Shared Memory Cluster Computing Interconnection Systems." Journal of Computer Networks and Communications 2014 (2014): 1–9. http://dx.doi.org/10.1155/2014/128438.

Abstract: In the recent past, many types of shared memory cluster computing interconnection systems have been proposed. Each of these systems has its own advantages and limitations. With the increase in system size of cluster interconnection systems, a comparative analysis of their various performance measures becomes inevitable. The cluster architecture, load balancing, and fault tolerance are some of the important aspects that need to be addressed, and the comparison needs to be made in order to choose the best system for a particular application. In this paper, a detailed comparative study of four important and different classes of shared memory cluster architectures is made: shared memory clusters, hierarchical shared memory clusters, distributed shared memory clusters, and virtual distributed shared memory clusters. These clusters are analyzed and compared on the basis of architecture, load balancing, and fault tolerance, and the results of the comparison are reported.

8. Fowler, A. G., and K. Goyal. "Topological cluster state quantum computing." Quantum Information and Computation 9, no. 9&10 (September 2009): 721–38. http://dx.doi.org/10.26421/qic9.9-10-1.

Abstract: The quantum computing scheme described by Raussendorf et al. (2007), when viewed as a cluster state computation, features a 3-D cluster state, novel adjustable-strength error correction capable of correcting general errors through the correction of Z errors only, a threshold error rate approaching 1%, and low-overhead, arbitrarily long-range logical gates. In this work, we review the scheme in detail, framing the discussion solely in terms of the required 3-D cluster state and its stabilizers.

9. Saman, M. Y., and D. J. Evans. "Distributed computing on cluster systems." International Journal of Computer Mathematics 78, no. 3 (January 2001): 383–97. http://dx.doi.org/10.1080/00207160108805118.

10. Thiruvathukal, G. K. "Guest Editors' Introduction: Cluster Computing." Computing in Science and Engineering 7, no. 2 (March 2005): 11–13. http://dx.doi.org/10.1109/mcse.2005.33.

11. Ruchkin, V. N., M. N. Makhmudov, V. A. Romanchuk, V. A. Fulin, and B. V. Kostrov. "Cluster management of computing resources." MATEC Web of Conferences 75 (2016): 08004. http://dx.doi.org/10.1051/matecconf/20167508004.

12. Taufer, Michela, Pavan Balaji, and Satoshi Matsuoka. "Special Issue on Cluster Computing." Parallel Computing 58 (October 2016): 25–26. http://dx.doi.org/10.1016/j.parco.2016.09.001.

13. Tavangarian, D. "Special issue on cluster computing." Journal of Systems Architecture 44, no. 3-4 (January 1998): 163–68. http://dx.doi.org/10.1016/s1383-7621(97)00034-9.

14. Caballer, Miguel, Carlos de Alfonso, Fernando Alvarruiz, and Germán Moltó. "EC3: Elastic Cloud Computing Cluster." Journal of Computer and System Sciences 79, no. 8 (December 2013): 1341–51. http://dx.doi.org/10.1016/j.jcss.2013.06.005.

15. Baker, Mark, and Rajkumar Buyya. "Cluster computing: the commodity supercomputer." Software: Practice and Experience 29, no. 6 (May 1999): 551–76. http://dx.doi.org/10.1002/(sici)1097-024x(199905)29:6<551::aid-spe248>3.0.co;2-c.

16. Carrington, Walter A., and Dimitri Lisin. "Cluster computing for digital microscopy." Microscopy Research and Technique 64, no. 2 (2004): 204–13. http://dx.doi.org/10.1002/jemt.20065.

17. Malik, Kamran, Osama Khan, Tahir Mobashir, and Mansoor Sarwar. "Migratable sockets in cluster computing." Journal of Systems and Software 75, no. 1-2 (February 2005): 171–77. http://dx.doi.org/10.1016/j.jss.2004.03.023.

18. Sheng, Chong Chong, Wei Song Hu, Xin Ming Hu, and Bai Feng Wu. "StreamMAP: Automatic Task Assignment System on GPU Cluster." Advanced Materials Research 926-930 (May 2014): 2414–17. http://dx.doi.org/10.4028/www.scientific.net/amr.926-930.2414.

Abstract: GPU clusters, which use general-purpose GPUs (GPGPUs) as accelerators, are becoming more and more popular in the high performance computing area. Currently, the most widely used programming model for GPU clusters is hybrid MPI/CUDA. However, with this model programmers tend to need detailed knowledge of the hardware resources, which makes programs more complicated and less portable. In this paper, we present StreamMAP, an automatic task assignment system for GPU clusters. The main contributions of StreamMAP are: (1) it provides a powerful yet concise language extension suitable for describing the computing resource demands of cluster tasks; (2) it maintains resource information and implements automatic task assignment for the GPU cluster. Experiments show that StreamMAP provides programmability, portability, and performance gains.

19. Weber, Michael. "Workstation Clusters: One Way to Parallel Computing." International Journal of Modern Physics C 04, no. 06 (December 1993): 1307–14. http://dx.doi.org/10.1142/s0129183193001026.

Abstract: The feasibility and constraints of workstation clusters for parallel processing are investigated. Measurements of latency and bandwidth are presented to place clusters in relation to massively parallel systems, making it possible to identify the kinds of applications that are suited to running on a cluster.

20. Guo, Zhihui, Hongbin Chen, and Shichao Li. "Deep Reinforcement Learning-Based One-to-Multiple Cooperative Computing in Large-Scale Event-Driven Wireless Sensor Networks." Sensors 23, no. 6 (March 18, 2023): 3237. http://dx.doi.org/10.3390/s23063237.

Abstract: Emergency event monitoring is a hot topic in wireless sensor networks (WSNs). Benefiting from the progress of Micro-Electro-Mechanical System (MEMS) technology, it is possible to process emergency events locally by using the computing capacities of redundant nodes in large-scale WSNs. However, it is challenging to design a resource scheduling and computation offloading strategy for a large number of nodes in an event-driven dynamic environment. In this paper, focusing on cooperative computing with a large number of nodes, we propose a set of solutions, including dynamic clustering, inter-cluster task assignment, and intra-cluster one-to-multiple cooperative computing. Firstly, an equal-size K-means clustering algorithm is proposed, which activates the nodes around the event location and then divides the active nodes into several clusters. Then, through inter-cluster task assignment, every computation task of the events is alternately assigned to the cluster heads. Next, in order to make each cluster efficiently complete the computation tasks within the deadline, a Deep Deterministic Policy Gradient (DDPG)-based intra-cluster one-to-multiple cooperative computing algorithm is proposed to obtain a computation offloading strategy. Simulation studies show that the performance of the proposed algorithm is close to that of the exhaustive algorithm and better than other classical algorithms and the Deep Q Network (DQN) algorithm.

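The equal-size clustering step mentioned in the abstract above can be pictured with a small sketch: standard nearest-centroid assignment, constrained so that no cluster exceeds ceil(n/k) members. This is only an illustration of the general idea, not the authors' algorithm; the node coordinates, cluster count, and the greedy assignment order are all invented for the example.

```python
import numpy as np

def equal_size_assign(points, centroids):
    """Greedily assign each point to its nearest centroid that still has room,
    so no cluster holds more than ceil(n/k) points (illustrative only)."""
    n, k = len(points), len(centroids)
    cap = -(-n // k)  # ceiling division: maximum points per cluster
    # distance of every point to every centroid, shape (n, k)
    d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
    # assign first the points with the largest gap between best and worst centroid
    order = np.argsort(d.min(axis=1) - d.max(axis=1))
    sizes = np.zeros(k, dtype=int)
    labels = np.full(n, -1)
    for i in order:
        for c in np.argsort(d[i]):          # nearest centroid first
            if sizes[c] < cap:              # skip clusters that are already full
                labels[i] = c
                sizes[c] += 1
                break
    return labels

# toy example: 12 sensor nodes around an event location, 3 equal-size clusters
rng = np.random.default_rng(0)
nodes = rng.uniform(0, 100, size=(12, 2))
centroids = nodes[rng.choice(12, 3, replace=False)]
print(equal_size_assign(nodes, centroids))
```
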
21. Buyya, Rajkumar, and Srikumar Venugopal. "Cluster and Grid Computing: A Graduate Distributed-Computing Course." IEEE Distributed Systems Online 8, no. 12 (December 2007): 2. http://dx.doi.org/10.1109/mdso.2007.4415516.

22. Serik, Meruert, Nursaule Karelkhan, Jaroslav Kultan, and Zhandos Zulpykhar. "Setting Up and Implementation of the Parallel Computing Cluster in Higher Education." International Journal of Emerging Technologies in Learning (iJET) 14, no. 06 (March 29, 2019): 4. http://dx.doi.org/10.3991/ijet.v14i06.9736.

Abstract: In this article, we describe in detail the setting up and implementation of a parallel computing cluster for education in the Matlab environment, and how we solved the problems that arose along the way. We also present a comparative analysis of the parallel computing cluster using the example of multiplying a matrix of large dimensions by a vector: the calculations were first performed on one computer and then on the parallel computing cluster. The experiment demonstrated the effectiveness of parallel computing and the need for setting up a parallel computing cluster. We hope that the creation of a parallel computing cluster for education will help in teaching parallel computing at higher schools that do not have sufficient hardware resources. This paper presents the setting up and implementation of a parallel computing cluster for teaching and learning a parallel computing course, together with a wide variety of information sources from which instructors can choose.

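The matrix-by-vector experiment described above was carried out in the Matlab environment; as a language-neutral illustration of the same idea (splitting the matrix rows across cluster workers), here is a rough sketch using mpi4py and NumPy. The problem size and process count are arbitrary assumptions, and this is not the setup used in the paper.

```python
# Illustrative row-wise parallel matrix-vector product with mpi4py
# (not the Matlab setup from the paper). Run e.g.: mpiexec -n 4 python matvec.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n = 4000                      # assumed problem size, chosen arbitrarily
if rank == 0:
    A = np.random.rand(n, n)
    x = np.random.rand(n)
else:
    A, x = None, np.empty(n)

comm.Bcast(x, root=0)                         # every worker needs the full vector
rows = np.array_split(A, size) if rank == 0 else None
local_A = comm.scatter(rows, root=0)          # each worker gets a block of rows
local_y = local_A @ x                         # local partial result
y = comm.gather(local_y, root=0)              # collect the row blocks of the result

if rank == 0:
    y = np.concatenate(y)
    print("max error vs. serial:", np.max(np.abs(y - A @ x)))
```
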
23. Li, Tsung-Lung, and Wen-Cai Lu. "Structural and electronic characteristics of intercalated monopotassium–rubrene: Simulation on a commodity computing cluster." Journal of Theoretical and Computational Chemistry 15, no. 04 (June 2016): 1650035. http://dx.doi.org/10.1142/s0219633616500358.

Abstract: The structural and electronic characteristics of the intercalated monopotassium–rubrene (K1Rub) are studied. In the intercalated K1Rub, one of the two pairs of phenyl groups of rubrene is intercalated by potassium, whereas the other pair remains pristine. This structural feature facilitates the comparison of the electronic structures of the intercalated and pristine pairs of phenyl groups. It is found that, in contrast to potassium adsorption to rubrene, the potassium intercalation promotes the carbon [Formula: see text] orbitals of the intercalated pair of phenyls to participate in the electronic structures of HOMO. Additionally, this intercalated K1Rub is used as a testing vehicle to study the performance of a commodity computing cluster built to run the General Atomic and Molecular Electronic Structure System (GAMESS) simulation package. It is shown that, for many frequently encountered simulation tasks, the performance of the commodity computing cluster is comparable with a massive computing cluster. The high performance-cost ratio of computing clusters constructed with commodity hardware suggests a feasible alternative for research institutes to establish their computing facilities.

24. Zeng, Xiao Hui, Ming Guo, Jun Rui Liu, Wen Lang Luo, and Ji Chang Kang. "A High Performance/Price Ratio Cluster Computing Platform for Simulation Design." Advanced Materials Research 159 (December 2010): 176–79. http://dx.doi.org/10.4028/www.scientific.net/amr.159.176.

Abstract: Based on our Fibre-Channel Token-Routing (FC-TR) switch network, a cluster computing platform with a high performance/price ratio is put forward and developed for simulation design that requires large-scale parallel computation. According to the features of this platform, a cluster computing software platform is proposed and implemented. It has two parts: the cluster management and scheduling sub-platform and the cluster task-running sub-platform. The management and scheduling sub-platform is composed of the cluster management system and the task scheduling system, while the task-running sub-platform includes the FC-TR communication software and the FC-TR parallel programming environment. The FC-TR communication software provides high-bandwidth, low-latency communication services for the parallel programming environment on the upper layer, and the parallel programming environment allows a number of large-scale simulation design applications to run on our network seamlessly. The experimental results show that our cluster computing platform achieves better performance than the Scali cluster computing platform.

25. Mesheryakov, Roman, Alexander Moiseev, Anton Demin, Vadim Dorofeev, and Vasily Sorokin. "Using Parallel Computing in Queueing Network Simulation." Key Engineering Materials 685 (February 2016): 943–47. http://dx.doi.org/10.4028/www.scientific.net/kem.685.943.

Abstract: The paper is devoted to the simulation of queueing networks on high performance computer clusters. The objective is to develop a mathematical model of a queueing network and a simulation approach to modelling general network functionality, as well as to provide a software implementation on a high-performance computer cluster. The simulation is based on a discrete-event approach, object-oriented programming, and MPI technology. The queueing network simulation system was developed as an application that allows a user to simulate networks of rather free configuration. The experiments on a high performance computer cluster emphasize the high efficiency of parallel computing.

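For readers unfamiliar with the discrete-event approach mentioned in this abstract, the sketch below simulates a single M/M/1 queue with an event calendar and compares the measured mean sojourn time with the textbook value 1/(mu - lambda). It is a deliberately minimal, single-process illustration, not the authors' MPI-based queueing network simulator; the arrival and service rates are made up.

```python
import heapq, random

def simulate_mm1(lam=0.8, mu=1.0, num_customers=100_000, seed=1):
    """Minimal discrete-event simulation of an M/M/1 FIFO queue.
    Returns the average sojourn (waiting + service) time."""
    random.seed(seed)
    events = [(random.expovariate(lam), "arrival")]    # event calendar: (time, type)
    queue, busy = [], False                            # queue holds arrival times
    arrivals_done, served, total_sojourn = 0, 0, 0.0
    while served < num_customers:
        t, kind = heapq.heappop(events)
        if kind == "arrival":
            arrivals_done += 1
            queue.append(t)                            # remember this customer's arrival time
            if arrivals_done < num_customers:          # schedule the next arrival
                heapq.heappush(events, (t + random.expovariate(lam), "arrival"))
            if not busy:                               # server idle: start service now
                busy = True
                heapq.heappush(events, (t + random.expovariate(mu), "departure"))
        else:  # departure of the customer currently in service (head of the queue)
            total_sojourn += t - queue.pop(0)
            served += 1
            if queue:                                  # next customer in line starts service
                heapq.heappush(events, (t + random.expovariate(mu), "departure"))
            else:
                busy = False
    return total_sojourn / served

# For lam=0.8, mu=1.0 the textbook mean sojourn time is 1/(mu - lam) = 5.0
print(simulate_mm1())
```
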
26. Myint, Khin Nyein, Myo Hein Zaw, and Win Thanda Aung. "Parallel and Distributed Computing Using MPI on Raspberry Pi Cluster." International Journal of Future Computer and Communication 9, no. 1 (March 2020): 18–22. http://dx.doi.org/10.18178/ijfcc.2020.9.1.559.

27. Radchenko, I., O. Shekhovtsov, A. Kovalenko, and O. Sytnyk. "Formation of Clusters on Single-Board Computers in IoT Networks." Системи управління, навігації та зв’язку. Збірник наукових праць 2, no. 76 (April 30, 2024): 141–43. http://dx.doi.org/10.26906/sunz.2024.2.141.

Abstract: The article considers the problem of using single-board computers for Internet of Things technology. An analysis of current single-board computers from various countries was carried out. Single-board computers and clusters of single-board computers have found their place in the concept of edge computing, which optimizes computation by placing computing resources closer to where data is generated. The idea of a "virtual cluster" is to unite and organize disparate heterogeneous devices, first of all single-board computers, out of the available resources of the existing edge-computing infrastructure in order to carry out complex computing tasks. Such a cluster also allows the resources of the existing infrastructure to be used more efficiently, for example by activating additional data processing and storage services.

28. Liu, Liang, and Tian Yu Wo. "A Scalable Data Platform for Cloud Computing Systems." Applied Mechanics and Materials 577 (July 2014): 860–64. http://dx.doi.org/10.4028/www.scientific.net/amm.577.860.

Abstract: With cloud computing systems becoming popular, designing a scalable, highly available, and cost-effective data platform has become a hotspot. This paper proposes such a data platform built from MySQL DBMS blocks. For scalability, a three-level (system, super-cluster, cluster) architecture is applied, making it scalable to thousands of applications. For availability, we use asynchronous replication across geographically dispersed super-clusters to provide disaster recovery, synchronous replication within a cluster to perform failure recovery, and hot standby or even a process-pair mechanism for controllers to enhance fault tolerance. For resource utilization, we design a novel load balancing strategy that exploits the key property that the throughput requirement of web applications fluctuates over time. Experiments with the NLPIR dataset indicate that the system can scale to a large number of web applications and makes good use of the resources provided.

29. S., Shreyanth, and Niveditha S. "Cluster-Based Grid Computing on Wireless Network Data Transmission with Routing Analysis Protocol and Deep Learning." International Journal of Advanced Research 11, no. 06 (June 30, 2023): 517–34. http://dx.doi.org/10.21474/ijar01/17096.

Abstract: Grid computing based on clusters has emerged as a promising strategy for improving the efficacy of wireless network data transmission. This study examines the incorporation of cluster-based grid computing, routing analysis protocols, and deep learning techniques to optimize data transmission in wireless networks. The proposed method utilizes clusters to distribute computing duties and enhance resource utilization, resulting in efficient data transmission. To further improve the routing process, a novel routing analysis protocol is introduced, which dynamically adapts to network conditions and chooses the most optimal routes. In addition, deep learning algorithms are used to analyze network data patterns, allowing for intelligent data routing and resource allocation decisions. Experiment results exhibit the efficacy of the proposed method, revealing substantial enhancements in network performance metrics such as throughput, latency, and energy consumption. This research contributes to the development of cluster-based grid computing and offers valuable insights for the design of efficient wireless network data transmission systems.

30. Hamad, Faten. "An Overview of Hadoop Scheduler Algorithms." Modern Applied Science 12, no. 8 (July 26, 2018): 69. http://dx.doi.org/10.5539/mas.v12n8p69.

Abstract: Hadoop is an open-source cloud computing system used in large-scale data processing, and it has become the basic computing platform for many internet companies. With the Hadoop platform, users can develop cloud computing applications and then submit tasks to the platform. Hadoop has strong fault tolerance and can easily increase the number of cluster nodes, using linear expansion of the cluster size so that clusters can process larger datasets. However, Hadoop has some shortcomings, especially in the MapReduce scheduler exposed in actual use, which calls for more research on Hadoop scheduling algorithms. This survey provides an overview of the default Hadoop scheduler algorithms and the problems they have. It also compares five Hadoop framework scheduling algorithms in terms of the default scheduler algorithm to be enhanced, the proposed scheduler algorithm, the type of cluster applied (heterogeneous or homogeneous), the methodology, and the classification of clusters based on performance evaluation. Finally, a new algorithm based on capacity scheduling and the use of perspective resource utilization to enhance Hadoop scheduling is proposed.

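As background for the capacity-scheduling idea referred to in this abstract, the toy function below picks, among queues with pending work, the queue that is furthest below its guaranteed capacity share. The queue names and numbers are hypothetical, and this is a schematic of the general policy rather than Hadoop's actual CapacityScheduler implementation or the paper's proposed algorithm.

```python
def pick_queue(queues):
    """Pick the queue with the lowest used/guaranteed-capacity ratio,
    i.e. the most under-served queue (schematic capacity scheduling)."""
    eligible = [q for q in queues if q["pending"] > 0]
    if not eligible:
        return None
    return min(eligible, key=lambda q: q["used"] / q["capacity"])

# hypothetical cluster with three queues and their guaranteed capacity shares
queues = [
    {"name": "etl",       "capacity": 0.50, "used": 0.40, "pending": 3},
    {"name": "analytics", "capacity": 0.30, "used": 0.10, "pending": 5},
    {"name": "adhoc",     "capacity": 0.20, "used": 0.15, "pending": 0},
]
print(pick_queue(queues)["name"])   # -> "analytics" (most under its guarantee)
```
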
31. Saluja, Simran Kaur, and Tanya Tiwari. "Different Computing Technologies and Virtualization." BSSS Journal of Computer 14, no. 1 (June 30, 2023): 47–53. http://dx.doi.org/10.51767/jc1407.

Abstract: Today the industrial world faces many rapidly evolving challenges, and one of the major changes concerns technology: industries that want to stay in the market must use up-to-date technology. Technologies such as cloud computing, grid computing, and cluster computing allow access to large amounts of computing power in a virtual manner and help organizations share a large number of services in a cost-effective way. This paper covers cloud computing, grid computing, and cluster computing: what they are about, their benefits today, and their drawbacks. It also discusses the importance of virtualization in cloud, grid, and cluster computing, as well as a comparison between the three.

32. Xiong, Jie, Shen-Han Shi, and Song Zhang. "Build and Evaluate a Free Virtual Cluster on Amazon Elastic Compute Cloud for Scientific Computing." International Journal of Online Engineering (iJOE) 13, no. 08 (August 4, 2017): 121. http://dx.doi.org/10.3991/ijoe.v13i08.7373.

Abstract: Scientific computing requires a huge amount of computing resources, but not all scientific researchers have access to sufficient high-end computing systems. Currently, Amazon provides a free-tier account for cloud computing which can be used to build a virtual cluster. In order to investigate whether it is suitable for scientific computing, we first describe how to build a free virtual cluster using StarCluster on Amazon Elastic Compute Cloud (EC2). Then, we perform a comprehensive performance evaluation of the virtual cluster built this way. The results show that a free virtual cluster is easily built on Amazon EC2 and is suitable for basic scientific computing. It is especially valuable for scientific researchers who do not have any HPC system or cluster, allowing them to develop and test a prototype scientific computing system without paying anything and to move it to a higher-performance virtual cluster when necessary by choosing a more powerful instance on Amazon EC2.

33. Cui, Kuntao, Bin Lin, Wenli Sun, and Wenqiang Sun. "Learning-Based Task Offloading for Marine Fog-Cloud Computing Networks of USV Cluster." Electronics 8, no. 11 (November 5, 2019): 1287. http://dx.doi.org/10.3390/electronics8111287.

Abstract: In recent years, unmanned surface vehicles (USVs) have made important advances in civil, maritime, and military applications. With the continuous improvement of autonomy, the increasing complexity of tasks, and the emergence of various types of advanced sensors, higher requirements are imposed on the computing performance of USV clusters, especially for latency-sensitive tasks. However, during the execution of marine operations, due to the relative movement of the USV cluster nodes and the network topology of the cluster, the wireless channel states change rapidly, and the computing resources of cluster nodes may be available or unavailable at any time, which is difficult to predict accurately in advance. Therefore, we propose an optimized offloading mechanism based on classic multi-armed bandit (MAB) theory. This mechanism enables USV cluster nodes to dynamically make offloading decisions by learning the potential computing performance of their neighboring team nodes, in order to minimize the average computation task offloading delay. The optimized algorithm is named the Adaptive Upper Confidence Boundary (AUCB) algorithm, and corresponding simulations are designed to evaluate its performance. The algorithm enables the USV cluster to adapt effectively to marine vehicular fog computing networks, balancing the trade-off between exploration and exploitation. The simulation results show that the proposed algorithm can quickly converge to the optimal computation task offloading combination strategy under heavy and light input data loads.

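The AUCB algorithm in this abstract builds on the upper-confidence-bound family of multi-armed bandit methods. The sketch below shows plain UCB1 choosing which neighbouring node to offload to, using the negative observed delay as the reward; the node count and delay distributions are invented, and this is an illustration of the classic UCB idea, not the paper's AUCB.

```python
import math, random

def ucb1_offloading(true_mean_delay, rounds=5000, seed=0):
    """Plain UCB1: repeatedly pick the neighbour node with the best upper
    confidence bound on reward, where reward = -observed offloading delay."""
    random.seed(seed)
    k = len(true_mean_delay)
    counts = [0] * k
    mean_reward = [0.0] * k
    for t in range(1, rounds + 1):
        if t <= k:                     # play every arm (node) once first
            arm = t - 1
        else:
            arm = max(range(k), key=lambda a: mean_reward[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        delay = random.expovariate(1.0 / true_mean_delay[arm])   # simulated feedback
        counts[arm] += 1
        mean_reward[arm] += (-delay - mean_reward[arm]) / counts[arm]
    return counts

# three candidate neighbour nodes with mean offloading delays of 1.0, 1.5 and 3.0 s
print(ucb1_offloading([1.0, 1.5, 3.0]))   # most pulls should go to the first node
```
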
34. Kovalenko, Vadim, Anna Rodakova, Hamza Mohammed Ridha Al-Khafaji, Artem Volkov, Ammar Muthanna, and Andrey Koucheryavy. "Resource Allocation Computing Algorithm for UAV Dynamical Statements based on AI Technology." Webology 19, no. 1 (January 20, 2022): 2307–19. http://dx.doi.org/10.14704/web/v19i1/web19157.

Abstract: Unmanned aerial vehicle (UAV) networks are among the complex and relevant communication networks of 5G and 2030-era networks. Technologies for network function virtualization (NFV), containerization, and orchestration of data systems continue to develop, and NFV can be implemented not only in the data center but also on a switch or router. By analogy with this trend, flying network segments can also use computing power to solve various problems, for example by deploying a flying station controller or an internal network controller on virtual distributed capacities. Within this direction there are a number of interrelated tasks that need to be solved using the capabilities of Artificial Intelligence technologies. This paper proposes an algorithm for searching for computing resources in real time. The article defines the criteria for choosing the head node and the cluster with the highest total resources, considers the possibility of implementing the SDN controller function in the UAV cluster, outlines the main possible functions and tasks of the UAVs, and proposes a three-level architecture based on separating the functions performed by the UAVs. Simulation was carried out in Matlab to detect areas of increased load, form UAV clusters, select a head node in the clusters, and select the UAV cluster with the highest total resources for its subsequent migration to the area of increased load.

35. Luo, Siqi, Xu Chen, Zhi Zhou, Xiang Chen, and Weigang Wu. "Incentive-Aware Micro Computing Cluster Formation for Cooperative Fog Computing." IEEE Transactions on Wireless Communications 19, no. 4 (April 2020): 2643–57. http://dx.doi.org/10.1109/twc.2020.2967371.

36. Volkov, Aleksandr O. "Evaluation of Cloud Computing Cluster Performance." T-Comm 14, no. 12 (2020): 72–79. http://dx.doi.org/10.36724/2072-8735-2020-14-12-72-79.

Abstract: For cloud service providers, one of the most relevant tasks is to maintain the required quality of service (QoS) at a level acceptable to customers. This condition complicates the work of providers, since they now need not only to manage their resources but also to provide the expected level of QoS for customers. All these factors require an accurate and well-adapted mechanism for analyzing the performance of the provided service. For the reasons stated above, the development of a model and algorithms for estimating the required resources is an urgent task that plays a significant role in cloud systems performance evaluation. In cloud systems there is serious variance in the requirements for the provided resource, as well as a need to quickly process incoming requests and maintain the proper level of quality of service; all of these factors cause difficulties for cloud providers. The proposed analytical model of request processing for a cloud computing system in the Processor Sharing (PS) service mode allows us to address these problems. In this work, the flow of service requests is described by the Poisson model, which is a special case of the Engset model. The proposed model and the results of its analysis can be used to evaluate the main performance characteristics of cloud systems.

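As background for the processor-sharing discipline mentioned above: in the classical M/M/1 queue under processor sharing, the mean sojourn time takes the simple textbook form E[T] = 1/(mu - lambda) for arrival rate lambda < service rate mu. The snippet below only evaluates that standard formula for a few illustrative loads; it is not the Engset-based model analysed in the paper.

```python
def mm1_ps_mean_sojourn(arrival_rate, service_rate):
    """Mean sojourn time in an M/M/1 queue under processor sharing.
    The textbook result E[T] = 1 / (mu - lambda) requires lambda < mu."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable system: arrival rate must be below service rate")
    return 1.0 / (service_rate - arrival_rate)

# illustrative loads only; utilisation rho = lambda / mu with mu = 1
for lam in (0.5, 0.8, 0.95):
    print(f"rho={lam:.2f}  E[T]={mm1_ps_mean_sojourn(lam, 1.0):.2f}")
```
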
37. Lin, Hong, Jeremy Kemp, and Padraic Gilbert. "Computing Gamma Calculus on Computer Cluster." International Journal of Technology Diffusion 1, no. 4 (October 2010): 42–52. http://dx.doi.org/10.4018/jtd.2010100104.

Abstract: Gamma Calculus is an inherently parallel, high-level programming model which allows simple programming "molecules" to interact, creating a complex system with a minimum of coding. Gamma-calculus-modelled programs were written on top of IBM's TSpaces middleware, which is Java-based and uses a "tuple space" model for communication, similar to that in Gamma. A parser was written in C++ to translate the Gamma syntax. This was implemented on UHD's grid cluster (grid.uhd.edu), and in an effort to increase performance and scalability, existing Gamma programs are being transferred to Nvidia's CUDA architecture. General-purpose GPU computing is well suited to running Gamma programs, as GPUs excel at running the same operation on a large data set, potentially offering a large speedup.

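To give a flavour of the Gamma "chemical" programming style this abstract describes, here is a tiny sequential emulation of multiset rewriting: elements of a multiset react pairwise until no reaction applies. The example (computing the maximum of a multiset) is a classic Gamma illustration; the emulator below is only a toy and has nothing to do with the authors' TSpaces or CUDA implementations.

```python
import random

def gamma_reduce(multiset, reaction, condition):
    """Repeatedly pick two elements; if the condition holds, replace them with
    the reaction product, until no pair can react (Gamma-style rewriting)."""
    pool = list(multiset)
    while True:
        reacted = False
        random.shuffle(pool)                      # reactions happen in arbitrary order
        for i in range(len(pool)):
            for j in range(i + 1, len(pool)):
                if condition(pool[i], pool[j]):
                    product = reaction(pool[i], pool[j])
                    # remove the two reactants, add the single product
                    pool = [x for k, x in enumerate(pool) if k not in (i, j)] + [product]
                    reacted = True
                    break
            if reacted:
                break
        if not reacted:                           # stable multiset: no pair can react
            return pool

# classic Gamma example: only the maximum survives when (x, y) reacts to max(x, y)
print(gamma_reduce([7, 3, 9, 1, 4], reaction=lambda x, y: max(x, y),
                   condition=lambda x, y: True))   # -> [9]
```
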
38. Patel, R. B., and Manpreet Singh. "Cluster Computing: A Mobile Code Approach." Journal of Computer Science 2, no. 10 (October 1, 2006): 798–806. http://dx.doi.org/10.3844/jcssp.2006.798.806.

39. De Beenhouwer, Jan, Steven Staelens, Dirk Kruecker, Ludovic Ferrer, Yves D'Asseler, Ignace Lemahieu, and Fernando R. Rannou. "Cluster computing software for GATE simulations." Medical Physics 34, no. 6Part1 (May 9, 2007): 1926–33. http://dx.doi.org/10.1118/1.2731993.

40. Patnaik, L. M. "High performance cluster computing [Book Reviews]." IEEE Concurrency 8, no. 1 (January 2000): 86–87. http://dx.doi.org/10.1109/mcc.2000.824332.

41. Elkabbany, Ghada F., and Mona M. Moussa. "Accelerating video encoding using cluster computing." Multimedia Tools and Applications 79, no. 25-26 (February 18, 2020): 17427–44. http://dx.doi.org/10.1007/s11042-020-08717-9.

42. Wu, Xingfu, and Wei Li. "Performance models for scalable cluster computing." Journal of Systems Architecture 44, no. 3-4 (January 1998): 189–205. http://dx.doi.org/10.1016/s1383-7621(97)00036-2.

43. Campbell, Robert, Micah R. Shepherd, and Stephen Hambric. "Structural-acoustic optimization using cluster computing." Journal of the Acoustical Society of America 141, no. 5 (May 2017): 3514. http://dx.doi.org/10.1121/1.4987377.

44. Satoh, Ichiro. "Reusable mobile agents for cluster computing." International Journal of High Performance Computing and Networking 2, no. 2/3/4 (2004): 77. http://dx.doi.org/10.1504/ijhpcn.2004.008894.

45. Wu, Yongwei, Chen Gang, Jia Liu, Rui Fang, Xiaomeng Huang, Guangwen Yang, and Weimin Zheng. "Automatically constructing trusted cluster computing environment." Journal of Supercomputing 55, no. 1 (July 9, 2009): 51–68. http://dx.doi.org/10.1007/s11227-009-0315-4.

46. Shaikh, Muhammad Kashif, Muzammil Ahmad Khan, and Mumtaz ul Imam. "Performance Assessment of High Availability Clustered Computing using LVS-NAT." Sir Syed Research Journal of Engineering & Technology 1, no. 1 (December 20, 2011): 6. http://dx.doi.org/10.33317/ssurj.v1i1.76.

Abstract: A high availability cluster computing environment attempts to provide highly available computing services. This paper describes building and investigating a highly available computing environment that provides a solution for achieving high availability. A prototype cluster computing environment is developed in Linux to provide a single but highly available point of entry. The cluster of computers runs a web-based application to provide services to HTTP users.

47. Shaikh, Muhammad Kashif, Muzammil Ahmad Khan, and Mumtaz ul Imam. "Performance Assessment of High Availability Clustered Computing using LVS-NAT." Sir Syed University Research Journal of Engineering & Technology 1, no. 1 (December 20, 2011): 6. http://dx.doi.org/10.33317/ssurj.76.

Abstract: A high availability cluster computing environment attempts to provide highly available computing services. This paper describes building and investigating a highly available computing environment that provides a solution for achieving high availability. A prototype cluster computing environment is developed in Linux to provide a single but highly available point of entry. The cluster of computers runs a web-based application to provide services to HTTP users.

48. Regassa, Dereje, Heonyoung Yeom, and Yongseok Son. "Harvesting the Aggregate Computing Power of Commodity Computers for Supercomputing Applications." Applied Sciences 12, no. 10 (May 19, 2022): 5113. http://dx.doi.org/10.3390/app12105113.

Abstract: Distributed supercomputing is becoming common in companies and academia. Most parallel computing researchers have focused on harnessing the power of commodity processors, and even internet computers, by aggregating their computing power to solve computationally complex problems. Using flexible commodity cluster computers for supercomputing workloads, instead of a dedicated supercomputer and expensive high-performance computing (HPC) infrastructure, is cost-effective, and its scalable nature allows it to be fitted to the available organizational resources. This can benefit researchers who aim to conduct numerous repetitive calculations on small to large volumes of data and obtain valid results in a reasonable time. In this paper, we design and implement an HPC-based supercomputing facility from commodity computers at an organizational level, providing two separate implementations of cluster-based supercomputing: Hadoop- and Spark-based HPC clusters, primarily for data-intensive jobs, and Torque-based clusters for Multiple Instruction Multiple Data (MIMD) workloads. The performance of these clusters is measured through extensive experimentation. With the implementation of the message passing interface, the performance of the Spark and Torque clusters is increased by 16.6% for repetitive applications and by 73.68% for computation-intensive applications, with speedups of 1.79 and 2.47, respectively, on the HPDA cluster. We conclude that a specific application or job can be chosen to run on the implemented clusters based on its computation parameters.

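The speedup figures quoted in this abstract follow the usual strong-scaling definition S = T_serial / T_parallel, with parallel efficiency E = S / p for p workers. The snippet below simply applies that definition to made-up timings; the numbers are not from the paper.

```python
def speedup_and_efficiency(t_serial, t_parallel, workers):
    """Standard strong-scaling metrics: speedup and parallel efficiency."""
    s = t_serial / t_parallel
    return s, s / workers

# hypothetical timings for one job on 1 node vs. 4 nodes of a commodity cluster
s, e = speedup_and_efficiency(t_serial=1200.0, t_parallel=480.0, workers=4)
print(f"speedup = {s:.2f}x, efficiency = {e:.1%}")
```
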
49. Alves Filho, Sebastiao Emidio, Aquiles Medeiros Filgueira Burlamaqui, Rafael Vidal Aroca, and Luiz Marcos Garcia Goncalves. "NPi-Cluster: A Low Power Energy-Proportional Computing Cluster Architecture." IEEE Access 5 (2017): 16297–313. http://dx.doi.org/10.1109/access.2017.2728720.

50. Chen, Hong, Hu Xing Zhou, and Juan Meng. "Parallel Route Optimization Algorithm of Central Guidance." Applied Mechanics and Materials 380-384 (August 2013): 1571–75. http://dx.doi.org/10.4028/www.scientific.net/amm.380-384.1571.

Abstract: To address the problem that a central guidance system takes too long to calculate the shortest routes between all node pairs of a network, and thus cannot meet the real-time demands of central guidance, this paper presents a parallel route optimization method for central guidance based on parallel computing that considers both route optimization time and travelers' preferences. The method comprises three parts: network data storage based on an array, multi-level network decomposition that takes travelers' preferences into account, and parallel shortest-route computation with deques based on message passing. Using the actual traffic network data of Guangzhou city, the suggested method is verified on three parallel computing platforms: an ordinary PC cluster, a Lenovo server cluster, and an HP workstation cluster. The results show that, with the presented method, the three clusters finish the optimization of 21.4 million routes between 5631 nodes of the Guangzhou traffic network in 215, 189, and 177 seconds respectively, which fully meets the real-time demands of central guidance.

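One simple way to picture the workload in this last abstract is as many independent single-source shortest-path computations distributed over worker processes. The sketch below runs Dijkstra's algorithm from every node of a toy road network using Python's multiprocessing; the graph and process count are invented, and it is only an illustration, not the paper's deque- and message-passing-based method.

```python
import heapq
from multiprocessing import Pool

GRAPH = {  # toy directed road network: node -> [(neighbour, travel_time), ...]
    "A": [("B", 4), ("C", 2)],
    "B": [("C", 5), ("D", 10)],
    "C": [("E", 3)],
    "D": [("F", 11)],
    "E": [("D", 4)],
    "F": [],
}

def dijkstra(source):
    """Single-source shortest travel times from `source` to every reachable node."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry, already improved
        for v, w in GRAPH[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return source, dist

if __name__ == "__main__":
    # distribute the independent single-source problems across worker processes
    with Pool(processes=3) as pool:
        for src, dist in pool.map(dijkstra, GRAPH):
            print(src, dist)
```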