Journal articles on the topic 'Clusters and Grid Computing'

Consult the top 50 journal articles for your research on the topic 'Clusters and Grid Computing.'


1

Meddeber, Meriem, and Belabbas Yagoubi. "Dynamic Dependent Tasks Assignment for Grid Computing." International Journal of Grid and High Performance Computing 3, no. 2 (2011): 44–58. http://dx.doi.org/10.4018/jghpc.2011040104.

Abstract:
A computational grid is a widespread computing environment that provides huge computational power for large-scale distributed applications. One of the most important issues in such an environment is resource management. Task assignment, as a part of resource management, has a considerable effect on grid middleware performance. In grid computing, a task's execution time depends on the machine to which it is assigned, and task precedence constraints are represented by a directed acyclic graph. This paper proposes a hybrid assignment strategy for dependent tasks in grids that integrates static and dynamic assignment techniques. The grid is modelled as a set of clusters, each formed by a set of computing elements and a cluster manager. The main objective is a task assignment method that achieves minimum response time and reduces the transfer cost induced by task transfers while respecting the dependency constraints.
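The assignment problem this abstract describes (DAG-ordered tasks whose runtimes depend on the machine, plus inter-machine transfer costs) can be illustrated with a minimal greedy sketch. The function below is a generic earliest-finish-time heuristic over assumed inputs, not the paper's hybrid static/dynamic strategy; all names and data structures are invented for illustration.

```python
from collections import deque

def topological_order(preds):
    """Kahn's algorithm over a task -> list-of-predecessors mapping."""
    indeg = {t: len(ps) for t, ps in preds.items()}
    succ = {t: [] for t in preds}
    for t, ps in preds.items():
        for p in ps:
            succ[p].append(t)
    order = []
    queue = deque(t for t, d in indeg.items() if d == 0)
    while queue:
        t = queue.popleft()
        order.append(t)
        for s in succ[t]:
            indeg[s] -= 1
            if indeg[s] == 0:
                queue.append(s)
    return order

def assign_tasks(preds, exec_time, transfer, machines):
    """Greedy earliest-finish-time placement of DAG tasks on machines,
    charging a transfer cost when dependent tasks land on different
    machines. exec_time[t][m] is task t's runtime on machine m;
    transfer[a][b] is the data-movement cost from machine a to b."""
    finish, placed = {}, {}
    free_at = {m: 0.0 for m in machines}
    for task in topological_order(preds):
        best_eft, best_m = None, None
        for m in machines:
            # task is ready on m once every predecessor's output has arrived
            ready = max((finish[p] + transfer[placed[p]][m]
                         for p in preds[task]), default=0.0)
            start = max(ready, free_at[m])
            eft = start + exec_time[task][m]
            if best_eft is None or eft < best_eft:
                best_eft, best_m = eft, m
        finish[task], placed[task] = best_eft, best_m
        free_at[best_m] = best_eft
    return placed, finish
```

A real grid scheduler would additionally rebalance at runtime (the "dynamic" half of the paper's strategy); this sketch only fixes a static placement.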
2

S., Shreyanth, and Niveditha S. "Cluster-Based Grid Computing on Wireless Network Data Transmission with Routing Analysis Protocol and Deep Learning." International Journal of Advanced Research 11, no. 06 (2023): 517–34. http://dx.doi.org/10.21474/ijar01/17096.

Abstract:
Grid computing based on clusters has emerged as a promising strategy for improving the efficacy of wireless network data transmission. This study examines the incorporation of cluster-based grid computing, routing analysis protocols, and deep learning techniques to optimize data transmission in wireless networks. The proposed method utilizes clusters to distribute computing duties and enhance resource utilization, resulting in efficient data transmission. To further improve the routing process, a novel routing analysis protocol is introduced, which dynamically adapts to network conditions and chooses the best routes. In addition, deep learning algorithms are used to analyze network data patterns, allowing for intelligent data routing and resource allocation decisions. Experimental results demonstrate the efficacy of the proposed method, revealing substantial improvements in network performance metrics such as throughput, latency, and energy consumption. This research contributes to the development of cluster-based grid computing and offers valuable insights for the design of efficient wireless network data transmission systems.
3

Sai, Oleksandr, and Kyrylo Krasnikov. "Fuzzy Model of Electricity Control with Wireless Information Processed on GPU." Computer Systems and Information Technologies, no. 3 (September 26, 2024): 44–50. http://dx.doi.org/10.31891/csit-2024-3-6.

Abstract:
This work investigates methods of processing information transmitted over wireless networks, together with the accompanying software development. Innovative data-transmission methods, such as optical technologies, quantum data transmission and wireless transmission technologies, are discussed. In the modern understanding, the concept of distributed computing denotes the convergence of distributed processing methods, such as GRID, cloud and fog computing, with the combination of virtual cluster systems (grid clusters, cloud clusters and fog clusters) into a single information, communication and computing system. Unlike cellular modems, ZigBee technology nodes have microcontrollers with a pre-installed operating system and flash memory, which allows simple computational tasks to be solved in real time before data are sent. It is advisable to solve such tasks within a multi-agent framework, which increases the efficiency of the sensor nodes and of the entire sensor network. The advantages of multi-agent fog computing based on sensor nodes of a ZigBee-standard wireless network are revealed, and the method of multi-agent processing of sensor information and its main components are described. The outlined architecture of the distributed sensor-data processing system includes four hardware and software levels: terminal sensor nodes and controllers of measuring and automation devices that implement fuzzy calculations; coordinators, sensor-segment routers and cellular modems that collect, protect and transmit sensor data to the processing center; a data processing center that includes a cluster of servers for GRID calculations and a cloud data storage server; and client devices for access to cloud storage, computing cluster servers and distributed fog computing terminals. Indicators and forecast results can be stored on distributed sensor nodes or transmitted for accumulation in cloud storage for further extraction and intelligent processing in the GRID cluster of the data center.
4

Kolumbet, Vadim, and Olha Svynchuk. "Multiagent Methods of Management of Distributed Computing in Hybrid Clusters." Advanced Information Systems 6, no. 1 (2022): 32–36. http://dx.doi.org/10.20998/2522-9052.2022.1.05.

Abstract:
Modern information technologies include the use of server systems, virtualization technologies, communication tools for distributed computing, and software and hardware solutions for data processing and storage centers. Among the most effective such complexes for managing heterogeneous computing resources are hybrid grids: distributed computing infrastructures that combine resources of different types and provide collective, shared access to them. The article considers a multi-agent system that integrates the computation-management approach for a computational cluster grid whose nodes have a complex hybrid structure. The hybrid cluster includes computing modules that support different parallel programming technologies and differ in their computational characteristics. The novelty and practical significance of the methods and tools presented in the article lie in a significant increase in the functionality of the grid-cluster computing management system for the distribution and division of grid resources at different task levels, and in the ability to embed intelligent computation-management tools in problem-oriented applications. The use of multi-agent systems for task planning in grid systems addresses two main problems, scalability and adaptability, which the methods and techniques in use today do not sufficiently solve. The scientific task is thus to improve the effectiveness of methods and tools for managing problem-oriented distributed computing in a cluster grid system, integrated with traditional meta-schedulers and the local resource managers of grid nodes, in line with the trends toward scalability and adaptability.
5

Nwobodo, Ikechukwu. "Cloud Computing: A Detailed Relationship to Grid and Cluster Computing." International Journal of Future Computer and Communication 4, no. 2 (2015): 82–87. http://dx.doi.org/10.7763/ijfcc.2015.v4.361.

6

Woods, Christopher J., Muan Hong Ng, Steven Johnston, et al. "Grid computing and biomolecular simulation." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 363, no. 1833 (2005): 2017–35. http://dx.doi.org/10.1098/rsta.2005.1626.

Abstract:
Biomolecular computer simulations are now widely used not only in an academic setting to understand the fundamental role of molecular dynamics on biological function, but also in the industrial context to assist in drug design. In this paper, two applications of Grid computing to this area will be outlined. The first, involving the coupling of distributed computing resources to dedicated Beowulf clusters, is targeted at simulating protein conformational change using the Replica Exchange methodology. In the second, the rationale and design of a database of biomolecular simulation trajectories is described. Both applications illustrate the increasingly important role modern computational methods are playing in the life sciences.
7

Fullana Torregrosa, Esteban, Doug Benjamin, Paolo Calafiura, et al. "Grid production with the ATLAS Event Service." EPJ Web of Conferences 214 (2019): 04016. http://dx.doi.org/10.1051/epjconf/201921404016.

Abstract:
ATLAS has developed and previously presented a new computing architecture, the Event Service, that allows real-time delivery of fine-grained workloads which process dispatched events (or event ranges) and immediately stream their outputs. The principal aim was to profit from opportunistic resources such as commercial cloud, supercomputing, and volunteer computing, and otherwise unused cycles on clusters and grids. During the development and deployment phase, its utility on the grid and on conventional clusters for the exploitation of otherwise unused cycles also became apparent. Here we describe our experience commissioning the Event Service on the grid in the ATLAS production system. We study its performance compared with standard simulation production. We describe the integration with the ATLAS data management system to ensure scalability and compatibility with object stores. Finally, we outline the remaining steps towards a fully commissioned system.
8

Weiß, Andreas, Elisabeth Wendlinger, Maximilian Hecker, and Aaron Praktiknjo. "Determination, Evaluation, and Validation of Representative Low-Voltage Distribution Grid Clusters." Energies 17, no. 17 (2024): 4433. http://dx.doi.org/10.3390/en17174433.

Abstract:
Decarbonizing the mobility and heating sector involves increasing connected components in low-voltage grids. The simulation of distribution grids and the incorporation of an energy system are relevant instruments for evaluating the effects of these developments. However, grids are highly diversified, and with over 900,000 low-voltage grids in Germany, the simulation would require significant data management and computing capacity. A solution already applied in the literature is the simulation of representative grids. Here, we show the compatibility of clusters and representatives for grid topologies from the literature and further extend and validate them by applying accurate grid data. Our analysis indicates that clusters from the literature unify well across three key parameters but also reveals that the clusters still exclude a relevant amount of grids. Extension, reclassification, and validation using about 1200 real grids establish meta-clusters covering the spectrum of grids from rural to urban regions, focusing on residential to commercial supply tasks. We anticipate our assay to be a further relevant step toward typifying low-voltage distribution grids in Germany.
9

Creel, Michael, and William L. Goffe. "Multi-core CPUs, Clusters, and Grid Computing: A Tutorial." Computational Economics 32, no. 4 (2008): 353–82. http://dx.doi.org/10.1007/s10614-008-9143-5.

10

Yu, Xin, Feng Zeng, Deborah Simon Mwakapesa, et al. "DBWGIE-MR: A density-based clustering algorithm by using the weighted grid and information entropy based on MapReduce." Journal of Intelligent & Fuzzy Systems 40, no. 6 (2021): 10781–96. http://dx.doi.org/10.3233/jifs-201792.

Abstract:
The main target of this paper is to design a density-based clustering algorithm using a weighted grid and information entropy based on MapReduce, denoted DBWGIE-MR, to deal with the problems of unreasonable data gridding, low accuracy of clustering results, and low parallelization efficiency in density-based big-data clustering algorithms. The algorithm is implemented in three stages: data partitioning, local clustering, and global clustering, and for each stage we propose several strategies to improve it. In the first stage, based on the spatial distribution of data points, we propose an adaptive division strategy (ADG) to divide the grid adaptively. In the second stage, we design a weighted-grid construction strategy (NE) that strengthens the relevance between grids to improve clustering accuracy; based on the weighted grid and information entropy, we design a density calculation strategy (WGIE) to compute the density of the grid; and, to improve parallel efficiency, a core-clusters computing algorithm based on MapReduce (COMCORE-MR) is proposed to compute the core clusters of the clustering algorithm in parallel. In the third stage, based on disjoint sets, we propose a core-cluster merging algorithm (MECORE) to speed up the convergence of merged local clusters, and, based on MapReduce, a core-clusters parallel merging algorithm (MECORE-MR) to obtain the clustering results faster, which improves the core-cluster merging efficiency of the density-based clustering algorithm. We conduct experiments on four synthetic datasets. Compared with H-DBSCAN, DBSCAN-MR, and MR-VDBSCAN, the experimental results show that the DBWGIE-MR algorithm has higher stability and accuracy and takes less time in parallel clustering.
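The grid-density idea underlying algorithms of this family can be sketched in a few lines. The toy below hashes points into square cells, keeps cells above a density threshold, and merges adjacent dense cells with a disjoint-set; it is the serial skeleton of grid-based density clustering only, implementing none of DBWGIE-MR's weighting, entropy, or MapReduce machinery, and all names are illustrative.

```python
from collections import defaultdict

def grid_cluster(points, cell, min_density):
    """Toy grid-based density clustering: bucket 2-D points into cells of
    side `cell`, keep cells holding at least `min_density` points, and
    union-merge dense cells that touch (including diagonally)."""
    cells = defaultdict(list)
    for x, y in points:
        cells[(int(x // cell), int(y // cell))].append((x, y))
    dense = {c for c, pts in cells.items() if len(pts) >= min_density}

    # disjoint-set over dense cells
    parent = {c: c for c in dense}
    def find(c):
        while parent[c] != c:
            parent[c] = parent[parent[c]]  # path halving
            c = parent[c]
        return c
    def union(a, b):
        parent[find(a)] = find(b)

    for (i, j) in dense:
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                n = (i + di, j + dj)
                if n in dense:
                    union((i, j), n)

    # gather the points of each connected group of dense cells
    clusters = defaultdict(list)
    for c in dense:
        clusters[find(c)].extend(cells[c])
    return list(clusters.values())
```

In the MapReduce setting the paper targets, the cell-bucketing step is the natural map phase and the merging of per-partition clusters the reduce phase.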
11

Alnasir, Jamie. "Distributed Computing in a Pandemic." ADCAIJ: Advances in Distributed Computing and Artificial Intelligence Journal 11, no. 1 (2022): 19–43. http://dx.doi.org/10.14201/adcaij.27337.

Abstract:
The current COVID-19 global pandemic caused by the SARS-CoV-2 betacoronavirus has resulted in over a million deaths and is having a grave socio-economic impact, hence there is an urgency to find solutions to key research challenges. Much of this COVID-19 research depends on distributed computing. In this article, I review distributed architectures -- various types of clusters, grids and clouds -- that can be leveraged to perform these tasks at scale and at high throughput, with a high degree of parallelism, and that can also be used to work collaboratively. High-performance computing (HPC) clusters will be used to carry out much of this work. Several big-data processing tasks used in reducing the spread of SARS-CoV-2 require high-throughput approaches and a variety of tools, which Hadoop and Spark offer, even on commodity hardware. Extremely large-scale COVID-19 research has also utilised some of the world's fastest supercomputers, such as IBM's SUMMIT -- for ensemble-docking high-throughput screening against SARS-CoV-2 targets for drug repurposing, and for high-throughput gene analysis -- and Sentinel, an XPE-Cray based system used to explore natural products. Grid computing has facilitated the formation of the world's first exascale grid computer, which has accelerated COVID-19 research in molecular dynamics simulations of SARS-CoV-2 spike protein interactions through massively parallel computation performed with over 1 million volunteer computing devices using the Folding@home platform. Grids and clouds can both also be used for international collaboration by enabling access to important datasets and providing services that allow researchers to focus on research rather than on time-consuming data-management tasks.
12

Gagliardi, Fabrizio, Bob Jones, François Grey, Marc-Elian Bégin, and Matti Heikkurinen. "Building an infrastructure for scientific Grid computing: status and goals of the EGEE project." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 363, no. 1833 (2005): 1729–42. http://dx.doi.org/10.1098/rsta.2005.1603.

Abstract:
The state of computer and networking technology today makes the seamless sharing of computing resources on an international or even global scale conceivable. Scientific computing Grids that integrate large, geographically distributed computer clusters and data storage facilities are being developed in several major projects around the world. This article reviews the status of one of these projects, Enabling Grids for E-SciencE, describing the scientific opportunities that such a Grid can provide, while illustrating the scale and complexity of the challenge involved in establishing a scientific infrastructure of this kind.
13

Collignon, TP, and MB van Gijzen. "Fast iterative solution of large sparse linear systems on geographically separated clusters." International Journal of High Performance Computing Applications 25, no. 4 (2011): 440–50. http://dx.doi.org/10.1177/1094342010388541.

Abstract:
Parallel asynchronous iterative algorithms exhibit features that are extremely well-suited for Grid computing, such as lack of synchronization points. Unfortunately, they also suffer from slow convergence rates. In this paper we propose using asynchronous methods as a coarse-grained preconditioner in a flexible iterative method, where the preconditioner is allowed to change in each iteration step. A full implementation of the algorithm is presented using Grid middleware that allows for both synchronous and asynchronous communication. Advantages and disadvantages of the approach are discussed. Numerical experiments on heterogeneous computing hardware demonstrate the effectiveness of the proposed algorithm on Grid computers, with application to large 2D and 3D bubbly flow problems.
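As a hedged illustration of the flexible-preconditioning idea (not the paper's Grid implementation), the sketch below runs an outer Richardson iteration whose preconditioner is allowed to change at every step; a varying number of Jacobi sweeps stands in for the inexact asynchronous inner solve, and the function names are invented.

```python
def flexible_richardson(A, b, inner_sweeps, tol=1e-8, max_outer=200):
    """Flexible (variable-preconditioner) Richardson iteration on A x = b.
    Each outer step applies a *different* inexact preconditioner: here
    inner_sweeps(k) Jacobi sweeps approximately solve A z = r, mimicking
    an inner solver whose accuracy varies from iteration to iteration."""
    n = len(b)
    x = [0.0] * n
    for k in range(max_outer):
        # residual r = b - A x, checked in the infinity norm
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        if max(abs(v) for v in r) < tol:
            break
        # inexact application of M_k^{-1} to r via Jacobi sweeps on A z = r
        z = [0.0] * n
        for _ in range(inner_sweeps(k)):
            z = [(r[i] - sum(A[i][j] * z[j] for j in range(n) if j != i))
                 / A[i][i] for i in range(n)]
        x = [x[i] + z[i] for i in range(n)]
    return x
```

Because the preconditioner changes per step, the outer method must be "flexible" in exactly the sense the abstract describes; with a fixed preconditioner the inner loop would collapse to an ordinary stationary scheme.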
14

Ahn, S., J. H. Kim, T. Huh, et al. "The Embedment of a Metadata System at Grid Farms at the Belle II." Journal of the Korean Physical Society 59, no. 4 (2011): 2695–701. https://doi.org/10.3938/jkps.59.2695.

Abstract:
In order to search for new physics beyond the standard model, the next-generation B-factory experiment, Belle II, will collect a huge data sample that is a challenge for computing systems. The Belle II experiment, which should commence data collection in 2015, expects data rates 50 times greater than those of Belle. In order to handle this amount of data, we need a new data handling system based on a new computing model: a distributed computing model including grid farms, as opposed to the central computing model using clusters at the Belle experiment. We have constructed a metadata system, embedded it in the grid farms of the Belle II experiment, and tested it using those grid farms. Results show good performance in handling such a huge amount of data.
15

Kehagias, Dimitris, Michael Grivas, and Grammati Pantziou. "Using a hybrid platform for cluster, NoW and GRID computing." Facta universitatis - series: Electronics and Energetics 18, no. 2 (2005): 205–18. http://dx.doi.org/10.2298/fuee0502205k.

Abstract:
Clusters, Networks of Workstations (NoW), and grids offer a new, highly available, and cost-effective parallel computing paradigm. Their simplicity, versatility, and nearly unlimited power have made them a rapidly adopted technology; they appear to form the new way of computing. Although these platforms are based on the same principles, they differ significantly, and their characteristics need special attention. Future computer scientists, programmers, and analysts have to be well prepared, in both administrative and programming terms, to ensure a faster and smoother transition. We present the architecture of a dynamic clustering system consisting of Beowulf-class clusters and NoW, as well as our experience constructing and using such a system as an educational infrastructure for HPC.
16

Saluja, Simran Kaur, and Tanya Tiwari. "DIFFERENT COMPUTING TECHNOLOGIES AND VIRTUALIZATION." BSSS Journal of Computer 14, no. 1 (2023): 47–53. http://dx.doi.org/10.51767/jc1407.

Abstract:
In today's world, industry faces many challenges that arise rapidly, and one of the major ones concerns technology: industries that want to stay in the market must use up-to-date technology. Technologies like cloud computing, grid computing, and cluster computing allow access to large amounts of computing power in a virtual manner and help organizations share a large number of services in a cost-effective way. This paper covers cloud computing, grid computing, and cluster computing: what each is about, its benefits today, and its drawbacks. The paper also discusses the importance of virtualization in cloud, grid, and cluster computing, as well as a comparison between the three.
17

Laouni, Djafri, and Mekki Rachida. "Monitoring and Resource Management in P2P Grid-Based Web Services." Computer Engineering and Applications Journal 1, no. 2 (2012): 71–84. http://dx.doi.org/10.18495/comengapp.v1i2.9.

Abstract:
Grid computing has recently emerged as a response to the growing demand for resources (processing power, storage, etc.) exhibited by scientific applications. However, as grid sizes increase, the need for self-organization and dynamic reconfiguration is becoming more and more important. Since such properties are exhibited by P2P systems, the convergence of grid computing and P2P computing seems natural. However, using P2P systems (usually running on the Internet) on a grid infrastructure (generally available as a federation of SAN-based clusters interconnected by high-bandwidth WANs) may raise the issue of the adequacy of the P2P communication mechanisms. Among the interesting properties of P2P systems is the volatility of peers, which creates the need for integrated fault-tolerance and load-balancing services. As a solution, we propose a fault-tolerance mechanism and a load-balancing model adapted to a P2P grid model, named SGRTE (Monitoring and Resource Management, Fault Tolerance and Load Balancing).
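Handling peer volatility of the kind this abstract mentions generally starts from failure detection. The class below is a minimal heartbeat-timeout sketch of that ingredient, not the SGRTE design itself; all names and the timeout policy are invented for illustration.

```python
import time

class HeartbeatMonitor:
    """Minimal peer-volatility detector for a P2P grid: peers report
    heartbeats, and any peer silent for longer than `timeout` seconds is
    reported as failed (so its tasks can be reassigned elsewhere)."""
    def __init__(self, timeout):
        self.timeout = timeout
        self.last_seen = {}

    def heartbeat(self, peer, now=None):
        """Record a heartbeat; `now` may be injected for testing."""
        self.last_seen[peer] = time.monotonic() if now is None else now

    def failed_peers(self, now=None):
        """Peers whose last heartbeat is older than the timeout."""
        now = time.monotonic() if now is None else now
        return [p for p, t in self.last_seen.items()
                if now - t > self.timeout]
```

A production fault-tolerance service would layer task re-dispatch and state recovery on top of this detector; the sketch only covers detection.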
18

Ranjit, Rajak. "A Comparative Study: Taxonomy of High Performance Computing (HPC)." International Journal of Electrical and Computer Engineering (IJECE) 8, no. 5 (2018): 3386–91. https://doi.org/10.11591/ijece.v8i5.pp3386-3391.

Abstract:
Computer technologies have rapidly developed in both software and hardware. The complexity of software is increasing with market demand as manual systems become automated, while the cost of hardware is decreasing. High-Performance Computing (HPC) is a very demanding technology and an attractive area of computing due to the huge data processing required by many applications. The paper focuses on different applications of HPC and its types, such as cluster computing, grid computing, and cloud computing. It also studies different classifications and applications of these types of HPC, all of which are in-demand areas of computer science. The paper also presents a comparative study of grid, cloud, and cluster computing based on their benefits, drawbacks, key areas of research, characteristics, issues, and challenges.
19

Beaumont, O., A. Legrand, L. Marchal, and Y. Robert. "Scheduling Strategies for Mixed Data and Task Parallelism on Heterogeneous Clusters." Parallel Processing Letters 13, no. 02 (2003): 225–44. http://dx.doi.org/10.1142/s0129626403001252.

Abstract:
We consider the execution of a complex application on a heterogeneous "grid" computing platform. The complex application consists of a suite of identical, independent problems to be solved. In turn, each problem consists of a set of tasks. There are dependences (precedence constraints) between these tasks and these dependences are organized as a tree. A typical example is the repeated execution of the same algorithm on several distinct data samples. We use a non-oriented graph to model the grid platform, where resources have different speeds of computation and communication. We show how to determine the optimal steady-state scheduling strategy for each processor (the fraction of time spent computing and the fraction of time spent communicating with each neighbor). This result holds for a quite general framework, allowing for cycles and multiple paths in the platform graph.
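For the special case of a single-level star (one master distributing independent equal-sized tasks), the steady-state solution reduces to the bandwidth-centric rule: serve workers in order of increasing communication time until the master's one-port is saturated. The sketch below illustrates that simplified case only, not the paper's general platform graph with cycles and multiple paths; the function and parameter names are invented.

```python
def steady_state_rates(comm_time, comp_time):
    """Bandwidth-centric steady-state allocation for a master-worker star.
    comm_time[i]: master's time to send one task to worker i (one-port
    model, so these sends share the master's outgoing link);
    comp_time[i]: worker i's time to execute one task.
    Returns per-worker task rates (tasks per time unit)."""
    order = sorted(range(len(comm_time)), key=lambda i: comm_time[i])
    rates = [0.0] * len(comm_time)
    port = 1.0  # fraction of the master's outgoing port still free
    for i in order:
        # worker i absorbs at most 1/comp_time[i] tasks per unit time,
        # each consuming comm_time[i] of the master's port
        r = min(1.0 / comp_time[i], port / comm_time[i])
        rates[i] = r
        port -= r * comm_time[i]
        if port <= 1e-12:
            break  # master's port saturated; remaining workers get nothing
    return rates
```

The per-worker rates translate directly into the "fraction of time spent computing and communicating" that the abstract describes: worker i computes for rates[i] * comp_time[i] of each time unit, and the master spends rates[i] * comm_time[i] sending to it.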
20

Ashish, Adinath Vankudre. "Grid Computing: Terms and Overview." JournalNX - A Multidisciplinary Peer Reviewed Journal 3, no. 4 (2017): 148–51. https://doi.org/10.5281/zenodo.1453800.

Abstract:
Over the last few years there has been a rapid increase in computer processing power, communication, and data storage. A grid is an infrastructure comprising the integrated and collective use of computers, databases, networks, and experimental instruments managed and owned by various organizations. Grid computing is a kind of distributed computing whereby a "super and virtual computer" is built from a cluster of networked, loosely coupled computers working in concert to perform large tasks. This paper presents an introduction to grid computing, providing insight into grid components, terms, architecture, grid types, and applications of grid computing.
21

Buyya, Rajkumar, and Srikumar Venugopal. "Cluster and Grid Computing: A Graduate Distributed-Computing Course." IEEE Distributed Systems Online 8, no. 12 (2007): 2. http://dx.doi.org/10.1109/mdso.2007.4415516.

22

Barreiro Megino, Fernando Harald, Jeffrey Ryan Albert, Frank Berghaus, et al. "Using Kubernetes as an ATLAS computing site." EPJ Web of Conferences 245 (2020): 07025. http://dx.doi.org/10.1051/epjconf/202024507025.

Abstract:
In recent years containerization has revolutionized cloud environments, providing a secure, lightweight, standardized way to package and execute software. Solutions such as Kubernetes enable orchestration of containers in a cluster, including for the purpose of job scheduling. Kubernetes is becoming a de facto standard, available at all major cloud computing providers, and is gaining increased attention from some WLCG sites. In particular, CERN IT has integrated Kubernetes into their cloud infrastructure by providing an interface to instantly create Kubernetes clusters, and the University of Victoria is pursuing an infrastructure-as-code approach to deploying Kubernetes as a flexible and resilient platform for running services and delivering resources. The ATLAS experiment at the LHC has partnered with CERN IT and the University of Victoria to explore and demonstrate the feasibility of running an ATLAS computing site directly on Kubernetes, replacing all grid computing services. We have interfaced ATLAS’ workload submission engine PanDA with Kubernetes, to directly submit and monitor the status of containerized jobs. We describe the integration and deployment details, and focus on the lessons learned from running a wide variety of ATLAS production payloads on Kubernetes using clusters of several thousand cores at CERN and the Tier 2 computing site in Victoria.
23

Wu, Wenjing, and David Cameron. "Backfilling the Grid with Containerized BOINC in the ATLAS computing." EPJ Web of Conferences 214 (2019): 07015. http://dx.doi.org/10.1051/epjconf/201921407015.

Abstract:
Virtualization is a commonly used solution for exploiting opportunistic computing resources in the HEP field, as it provides the unified software and OS layer that HEP computing tasks require over heterogeneous opportunistic resources. However, virtualization always carries a performance penalty, especially for the short jobs typical of volunteer computing, where its overhead reduces the CPU efficiency of the jobs. With the wide usage of containers in HEP computing, we explore the possibility of adopting container technology in the ATLAS BOINC project, and have implemented a Native version of BOINC which uses the Singularity container, or the operating system of the host machine directly, in place of VirtualBox. In this paper we discuss: 1) the implementation and workflow of the Native version in ATLAS BOINC; 2) performance measurements of the Native version compared with the previous virtualization version; 3) the limits and shortcomings of the Native version; and 4) the practice and outcome of using the Native version, which includes backfilling ATLAS Grid Tier-2 sites and other clusters, and utilizing idle computers from the CERN computing centre.
24

Marcon, Caterina, Leonardo Carminati, David Rebatto, and Ruggero Turra. "The Platform-as-a-Service paradigm meets ATLAS: Developing an automated analysis workflow on the newly established INFN CLOUD." EPJ Web of Conferences 295 (2024): 04048. http://dx.doi.org/10.1051/epjconf/202429504048.

Abstract:
The Worldwide LHC Computing Grid (WLCG) is a large-scale collaboration which gathers computing resources from more than 170 computing centers worldwide. To fulfill the requirements of new applications and to improve the long-term sustainability of the grid middleware, newly available solutions are being investigated. Like open-source and commercial players, the HEP community has recognized the benefits of integrating cloud technologies into the legacy, grid-based workflows. In March 2021, INFN entered the field of cloud computing by establishing the INFN CLOUD infrastructure. This platform supports scientific computing, software development and training, and serves as an extension of local resources. Among the available services, virtual machines, Docker-based deployments, HTCondor (deployed on Kubernetes) or general-purpose Kubernetes clusters can be deployed. An ongoing R&D activity within the ATLAS experiment has the long-term objective of defining an operation model which is efficient, versatile and scalable in terms of costs and computing power. As part of this larger effort, this study investigates the feasibility of an automated, cloud-based data analysis workflow for the ATLAS experiment using INFN CLOUD resources. The scope of this research has been defined in a new INFN R&D project: the INfn Cloud based Atlas aNalysis faciliTy, or INCANT. The long-term objective of INCANT is to provide a cloud-based system to support data preparation and data analysis. As a first project milestone, a proof-of-concept has been developed. A Kubernetes cluster equipped with 7 nodes (in total 28 vCPUs, 56 GB of RAM and 700 GB of non-shared block storage) hosts an HTCondor cluster, federated with INFN's IAM authentication platform, running in specialized Kubernetes pods. HTCondor worker nodes have direct access to CVMFS and EOS (via XRootD) for provisioning software and data, respectively. They are also connected to an NFS shared drive which can optionally be backed by an S3-compatible 2 TB storage. Jobs are submitted to the HTCondor cluster from a satellite, Dockerized submit node which is also federated with INFN's IAM and connected to the same data and software resources. This proof-of-concept is being tested with actual analysis workflows.
APA, Harvard, Vancouver, ISO, and other styles
25

Ahuja, A. "Cluster and Grid Computing: A Detailed Comparison." International Journal of Computer Sciences and Engineering 6, no. 5 (2018): 934–40. http://dx.doi.org/10.26438/ijcse/v6i5.934940.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Bukhsh, Rasool, Nadeem Javaid, Zahoor Ali Khan, Farruh Ishmanov, Muhammad Afzal, and Zahid Wadud. "Towards Fast Response, Reduced Processing and Balanced Load in Fog-Based Data-Driven Smart Grid." Energies 11, no. 12 (2018): 3345. http://dx.doi.org/10.3390/en11123345.

Full text
Abstract:
The integration of the smart grid with the cloud computing environment promises to develop an improved energy-management system for utilities and consumers. New applications and services are being developed which generate huge numbers of requests to be processed in the cloud. Since smart grids can be operated dynamically according to consumer requests (data), they can be called Data-Driven Smart Grids. Fog computing, as an extension of cloud computing, helps to mitigate the load on cloud data centers. This paper presents a cloud–fog-based system model to reduce Response Time (RT) and Processing Time (PT). The load of requests from end devices is processed in fog data centers. The selection of potential data centers and efficient allocation of requests on Virtual Machines (VMs) optimize the RT and PT. A New Service Broker Policy (NSBP) is proposed for the selection of a potential data center. A load-balancing algorithm, a hybrid of Particle Swarm Optimization and Simulated Annealing (PSO-SA), is proposed for the efficient allocation of requests on VMs in the potential data center. In the proposed system model, Micro-Grids (MGs) are placed near the fogs for uninterrupted and cheap power supply to clusters of residential buildings. The simulation results show the supremacy of NSBP and PSO-SA over their counterparts.
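The abstract does not spell out the PSO-SA hybrid itself; as a rough illustration of the simulated-annealing half only, the sketch below anneals an assignment of requests to VMs to minimize the busiest VM's completion time. All request sizes, VM speeds, and annealing parameters here are invented for illustration and are not from the paper.

```python
import math
import random

def makespan(assign, req_sizes, vm_speeds):
    """Completion time of the busiest VM under a given assignment."""
    load = [0.0] * len(vm_speeds)
    for req, vm in zip(req_sizes, assign):
        load[vm] += req / vm_speeds[vm]
    return max(load)

def anneal_allocation(req_sizes, vm_speeds, steps=5000, t0=10.0, seed=0):
    """Simulated annealing over request->VM assignments (illustrative)."""
    rng = random.Random(seed)
    assign = [rng.randrange(len(vm_speeds)) for _ in req_sizes]
    best = list(assign)
    for step in range(steps):
        temp = t0 * (1 - step / steps) + 1e-9  # linear cooling schedule
        i = rng.randrange(len(assign))
        old_vm = assign[i]
        before = makespan(assign, req_sizes, vm_speeds)
        assign[i] = rng.randrange(len(vm_speeds))  # random reassignment move
        after = makespan(assign, req_sizes, vm_speeds)
        # Accept improvements always; accept worse moves with Boltzmann probability.
        if after > before and rng.random() >= math.exp((before - after) / temp):
            assign[i] = old_vm  # reject the move
        elif makespan(assign, req_sizes, vm_speeds) < makespan(best, req_sizes, vm_speeds):
            best = list(assign)
    return best

reqs = [8, 3, 7, 2, 5, 9, 4]   # request sizes (million instructions, made up)
vms = [2.0, 1.0, 1.5]          # VM speeds (MIPS, made up)
plan = anneal_allocation(reqs, vms)
```

The resulting `plan[i]` is the VM index chosen for request `i`; its makespan should be well below that of the naive everything-on-one-VM assignment.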
APA, Harvard, Vancouver, ISO, and other styles
27

Zhang, Kai, Wei Guo, Jian Feng, and Mei Liu. "Load Forecasting Method Based on Improved Deep Learning in Cloud Computing Environment." Scientific Programming 2021 (October 12, 2021): 1–11. http://dx.doi.org/10.1155/2021/3250732.

Full text
Abstract:
To address the low accuracy and low efficiency of most load forecasting methods, a load forecasting method based on improved deep learning in a cloud computing environment is proposed. Firstly, the preprocessed data set is divided into several data partitions with relatively balanced data volume through a spatial grid, so as to better detect abnormal data. Then, a density peak clustering algorithm based on Spark is used to detect abnormal data in each partition, and the local clusters and abnormal points are merged. Parallel processing of the data is realized by using the Spark cluster computing platform. Finally, a deep belief network is used for load classification, the classification results are input into an empirical mode decomposition-gated recurrent unit network model, and the load prediction results are obtained through learning. Based on the load data of a power grid, the experimental results demonstrate that the mean prediction error of the proposed method is basically controlled within 3% in the short term and 0.023 MW, 19.75%, and 2.76% in the long term, which is better than the comparison methods; the parallel performance is also good, indicating the feasibility of the method.
APA, Harvard, Vancouver, ISO, and other styles
28

BOUABACHE, FATIHA, THOMAS HERAULT, GILLES FEDAK, and FRANCK CAPPELLO. "HIERARCHICAL REPLICATION TECHNIQUES TO ENSURE CHECKPOINT STORAGE RELIABILITY IN GRID ENVIRONMENT." Journal of Interconnection Networks 10, no. 04 (2009): 345–64. http://dx.doi.org/10.1142/s0219265909002613.

Full text
Abstract:
An efficient and reliable fault tolerance protocol plays a key role in High Performance Computing. Rollback recovery is the most common fault tolerance technique used in High Performance Computing and especially in MPI applications. This technique relies on the reliability of the checkpoint storage. Most rollback recovery protocols assume that the checkpoint server machines are reliable. However, in a grid environment any unit can fail at any moment, including components used to connect different administrative domains. Such failures lead to the loss of a whole set of machines, including the more reliable machines used to store the checkpoints in that administrative domain. Thus it is not safe to rely on the high Mean Time Between Failures of specific machines to store the checkpoint images. This paper introduces a new coordinated checkpoint protocol, which tolerates checkpoint server failures and cluster failures, and ensures checkpoint storage reliability in a grid environment. To provide this reliability the protocol is based on a replication process. We propose new hierarchical replication strategies that exploit the locality of checkpoint images in order to minimize inter-cluster communication. We evaluate the effectiveness of our two hierarchical replication strategies through simulations against several criteria such as topology and scalability.
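The paper's actual hierarchical strategies are more elaborate; as a toy sketch of the locality idea only, the fragment below places checkpoint replicas inside the owner's cluster first (cheap intra-cluster links) while keeping one copy off-cluster, so a whole-cluster failure cannot destroy every image. The cluster layout and node names are invented.

```python
def place_replicas(owner, clusters, k):
    """Pick k replica hosts for a checkpoint produced by `owner`.

    clusters: dict mapping cluster name -> list of node names.
    Prefers nodes in the owner's own cluster, but reserves one slot
    for a remote node for failure independence.
    """
    home = next(c for c, nodes in clusters.items() if owner in nodes)
    local = [n for n in clusters[home] if n != owner]
    remote = [n for c, nodes in clusters.items() if c != home for n in nodes]
    # Fill k-1 slots locally, one slot remotely; spill to more remote
    # nodes if the home cluster is too small.
    chosen = local[: max(0, k - 1)] + remote[:1]
    return (chosen + remote[1:])[:k]

clusters = {"A": ["a1", "a2", "a3"], "B": ["b1", "b2"]}
print(place_replicas("a1", clusters, 3))  # → ['a2', 'a3', 'b1']
```

Two replicas stay on cluster A's fast local links, and one lands on cluster B, so losing either cluster outright still leaves at least one copy of the checkpoint.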
APA, Harvard, Vancouver, ISO, and other styles
29

Kovalenko, A., R. Miroshnychenko, and A. Martyntsov. "DISTRIBUTED COMPUTING SYSTEMS BASED ON THE USE OF GRID TECHNOLOGIES." Системи управління, навігації та зв’язку. Збірник наукових праць 1, no. 71 (2023): 101–3. http://dx.doi.org/10.26906/sunz.2023.1.101.

Full text
Abstract:
The article describes the use of Grid technology in telecommunication systems and the creation of Grid telecommunication systems. Network service providers are focusing on huge consolidated data centers, and particular attention is being paid to improving methods of scheduling data processing tasks. The shortcomings of modern approaches to planning are considered; these include the inability to maximize the total priority of tasks performed at individual planning stages in distributed computing systems. The developed planning procedure is described. It overcomes this shortcoming while also reducing the total processing time of tasks in the system in comparison with a planning procedure based on solving the minimum-cover problem. The pursuit of the maximum sum of priorities of the selected tasks is the main criterion for selecting tasks from the queue, which is characterized by an importance coefficient. The coefficient of preservation of importance and the coefficient of acceleration of the operation of a Grid system segment are described. The article also proposes a solution to improve the efficiency and quality of task servicing: expanding the functionality of distributed telecommunication systems through the use of computer clusters based on Grid technologies.
APA, Harvard, Vancouver, ISO, and other styles
30

Cruz-Chávez, Marco Antonio, Abelardo Rodríguez-León, Rafael Rivera-López, and Martín H. Cruz-Rosales. "A Grid-Based Genetic Approach to Solving the Vehicle Routing Problem with Time Windows." Applied Sciences 9, no. 18 (2019): 3656. http://dx.doi.org/10.3390/app9183656.

Full text
Abstract:
This paper describes a grid-based genetic algorithm approach to solving the vehicle routing problem with time windows on an experimental MiniGrid cluster. The clusters used in this approach are located in two Mexican cities (Cuernavaca and Jiutepec, Morelos) and communicate securely with each other, since they are configured as a virtual private network; using them as a single set of processors instead of isolated groups increases the computing power available to solve complex tasks. The genetic algorithm splits the population of candidate solutions into several segments, which are simultaneously mutated in each process generated by the MiniGrid. These mutated segments are used to build a new population combining the results produced by each process. In this paper, the MiniGrid configuration scheme is described, and both the communication latency and the speedup behavior are discussed. Experimental results show a reduction in information exchange across the MiniGrid clusters as well as improved behavior of the evolutionary algorithm. A statistical analysis of these results suggests that our approach performs better as a combinatorial optimization procedure compared with other methods.
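The population-splitting scheme described above can be sketched in miniature: split a population of candidate routes into segments, apply an independent mutation to each segment (as the separate MiniGrid processes would), and recombine the results. The swap-mutation operator and all parameters below are illustrative assumptions, not the authors' actual operators.

```python
import random

def mutate_segment(route, rng, rate=0.3):
    """Swap-mutation of one candidate route (illustrative operator)."""
    route = list(route)
    if len(route) > 1 and rng.random() < rate:
        i, j = rng.sample(range(len(route)), 2)
        route[i], route[j] = route[j], route[i]  # exchange two visit positions
    return route

def parallel_style_mutation(population, n_segments, seed=0):
    """Split the population into segments, mutate each independently
    (mimicking separate grid processes), then recombine."""
    rng = random.Random(seed)
    size = max(1, len(population) // n_segments)
    segments = [population[i:i + size] for i in range(0, len(population), size)]
    mutated = [[mutate_segment(ind, rng) for ind in seg] for seg in segments]
    return [ind for seg in mutated for ind in seg]

pop = [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]
new_pop = parallel_style_mutation(pop, n_segments=2)
```

Because swap mutation only reorders visits within a route, every mutated individual keeps the same set of customers as its parent, which is the invariant a VRPTW operator must preserve.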
APA, Harvard, Vancouver, ISO, and other styles
31

Yang, Chao-Tung, and Wen-Chung Shih. "On Construction of Cluster and Grid Computing Platforms for Parallel Bioinformatics Applications." International Journal of Grid and High Performance Computing 3, no. 1 (2011): 69–88. http://dx.doi.org/10.4018/jghpc.2011010104.

Full text
Abstract:
Biology databases are diverse and massive. As a result, researchers must compare each sequence with vast numbers of other sequences. Comparison, whether of structural features or protein sequences, is vital in bioinformatics. These activities require high-speed, high-performance computing power to search through and analyze large amounts of data and industrial-strength databases to perform a range of data-intensive computing functions. Grid computing and Cluster computing meet these requirements. Biological data exist in various web services that help biologists search for and extract useful information. The data formats produced are heterogeneous and powerful tools are needed to handle the complex and difficult task of integrating the data. This paper presents a review of the technologies and an approach to solve this problem using cluster and grid computing technologies. The authors implement an experimental distributed computing application for bioinformatics, consisting of basic high-performance computing environments (Grid and PC Cluster systems), multiple interfaces at user portals that provide useful graphical interfaces to enable biologists to benefit directly from the use of high-performance technology, and a translation tool for converting biology data into XML format.
APA, Harvard, Vancouver, ISO, and other styles
32

Ummah, Izzatul. "Pengembangan dan Pengujian Sistem Grid Computing Menggunakan Globus Toolkit di Universitas Telkom." Indonesian Journal on Computing (Indo-JC) 2, no. 1 (2017): 7. http://dx.doi.org/10.21108/indojc.2017.2.1.19.

Full text
Abstract:
In this research, we build a grid computing infrastructure by utilizing an existing cluster at Telkom University as back-end resources. We used the middleware Globus Toolkit 6.0 and Condor 8.4.2 in developing the grid system. We tested the performance of our grid system using parallel matrix multiplication. The results showed that our grid system achieved good performance. With the implementation of this grid system, we believe that access to high performance computing resources will become easier and the Quality of Service will also be improved.
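The benchmark mentioned above, parallel matrix multiplication, can be sketched in miniature: split A into row blocks and compute each block concurrently, the way independent jobs might be farmed out to grid worker nodes. Here a thread pool merely stands in for Condor jobs; it is not the paper's setup.

```python
from concurrent.futures import ThreadPoolExecutor

def matmul_block(rows, B):
    """Multiply a horizontal block of A by the full matrix B."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in rows]

def grid_matmul(A, B, workers=2):
    """Split A into row blocks and compute each block concurrently,
    mimicking how independent jobs could be dispatched to grid nodes."""
    step = max(1, len(A) // workers)
    blocks = [A[i:i + step] for i in range(0, len(A), step)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(lambda blk: matmul_block(blk, B), blocks)
    # map() preserves block order, so rows come back in the right sequence.
    return [row for part in parts for row in part]

A = [[1, 2], [3, 4], [5, 6]]
B = [[7, 8], [9, 10]]
print(grid_matmul(A, B))  # → [[25, 28], [57, 64], [89, 100]]
```

Because the row blocks are independent, the decomposition needs no inter-worker communication until the final gather, which is what makes the problem a natural smoke test for a grid.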
APA, Harvard, Vancouver, ISO, and other styles
33

Thaman, Jyoti, and Manpreet Singh. "Extending Dynamic Scheduling Policies in WorkflowSim by Using Variance based Approach." International Journal of Grid and High Performance Computing 8, no. 2 (2016): 76–93. http://dx.doi.org/10.4018/ijghpc.2016040105.

Full text
Abstract:
Workflow scheduling has been around for more than two decades. With growing interest in service-oriented computing architectures among researchers and corporate users, different platforms such as cluster computing, grid computing, and most recently cloud computing appeared on the computing horizon. Cloud computing has attracted a lot of interest from all types of users. It gave rise to a variety of applications and tasks with varied requirements. Heterogeneity in application requirements catalyzes the provision of customized services for task types. Representation of task characteristics and inter-task relationships through workflows has been in use since the early days of automation. Scheduling of workflows not only maintains the hierarchical relationship between the tasks but also dictates the requirement of dynamic scheduling. This paper presents variance-based extensions of a few promising dynamic scheduling policies supported by WorkflowSim. An exhaustive performance analysis presents the strengths and weaknesses of the authors' proposal.
APA, Harvard, Vancouver, ISO, and other styles
34

Dongarra, Jack J., and Bernard Tourancheau. "Clusters and Computational Grids for Scientific Computing." International Journal of High Performance Computing Applications 13, no. 3 (1999): 179. http://dx.doi.org/10.1177/109434209901300301.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Dongarra, Jack, Masaaki Shimasaki, and Bernard Tourancheau. "Clusters and computational grids for scientific computing." Parallel Computing 27, no. 11 (2001): 1401–2. http://dx.doi.org/10.1016/s0167-8191(01)00095-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Ashima, Ashima, and Vikramjit Singh. "A NOVEL APPROACH OF JOB ALLOCATION USING MULTIPLE PARAMETERS IN IN CLOUD ENVIRONMENT." INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 17, no. 1 (2018): 7103–10. http://dx.doi.org/10.24297/ijct.v17i1.7004.

Full text
Abstract:
Cloud computing is Internet ("cloud") based development and use of computer technology ("computing"). It is a style of computing in which dynamically scalable and often virtualized resources are provided as a service over the Internet. This research deals with the balancing of workload in a cloud environment. Load balancing is one of the essential factors to enhance the working performance of the cloud service provider. Grid computing utilizes distributed heterogeneous resources in order to support complicated computing problems, and can be classified into two types: computing grids and data grids. In this research work, a multi-objective load balancing algorithm has been proposed to avoid deadlocks and to provide proper utilization of all the virtual machines (VMs) while processing the requests received from users, by means of VM classification. The capacity of a virtual machine is computed based on multiple parameters such as MIPS, RAM, and bandwidth. Heterogeneous virtual machines of different MIPS and processing power in multiple data centers with different hosts have been created in a cloud simulator. The VMs are divided into two clusters using the K-Means clustering mechanism in terms of processor MIPS, memory, and bandwidth. The cloudlets are divided into two categories, high QoS and low QoS, based on instruction size: a cloudlet whose task size is greater than the threshold value enters the high-QoS category, and one whose task size is less than the threshold enters the low-QoS category. The job of the user is submitted to the datacenter broker, which first finds a suitable VM according to the requirements of the cloudlet and matches a VM depending upon its availability. Multiple parameters have been evaluated, such as waiting time, turnaround time, execution time, and processing cost.
This modified algorithm has an edge over the original approach, in which each cloudlet builds its own individual result set that is later assembled into a complete solution.
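A minimal sketch of the two classification steps the abstract describes: two-cluster K-Means over VM capacity vectors (MIPS, RAM, bandwidth) and a threshold rule for cloudlets. The capacity numbers and the threshold below are invented for illustration, not the paper's data.

```python
import random

def kmeans_2(points, iters=20, seed=1):
    """Split VM capacity vectors (MIPS, RAM, bandwidth) into 2 clusters."""
    rng = random.Random(seed)
    centers = rng.sample(points, 2)
    for _ in range(iters):
        groups = ([], [])
        for p in points:
            # Assign each VM to the nearest center (squared Euclidean distance).
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            groups[d.index(min(d))].append(p)
        # Recompute centers as the mean of each group (keep old center if empty).
        centers = [
            tuple(sum(x) / len(g) for x in zip(*g)) if g else c
            for g, c in zip(groups, centers)
        ]
    return groups

def classify_cloudlet(length, threshold):
    """High-QoS queue for large tasks, low-QoS for the rest (assumed rule)."""
    return "high" if length > threshold else "low"

vms = [(500, 2, 100), (520, 2, 110), (2000, 8, 500), (1900, 8, 450)]
groups = kmeans_2(vms)  # separates the two slow VMs from the two fast ones
```

Here the broker could then steer high-QoS cloudlets toward the high-capacity cluster and low-QoS cloudlets toward the other, which is the pairing the abstract implies.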
APA, Harvard, Vancouver, ISO, and other styles
37

Rajak, Ranjit. "A Comparative Study: Taxonomy of High Performance Computing (HPC)." International Journal of Electrical and Computer Engineering (IJECE) 8, no. 5 (2018): 3386. http://dx.doi.org/10.11591/ijece.v8i5.pp3386-3391.

Full text
Abstract:
Computer technologies have rapidly developed in both the software and hardware fields. The complexity of software is increasing as per market demand, since manual systems are being automated, while the cost of hardware is decreasing. High Performance Computing (HPC) is a very demanding technology and an attractive area of computing due to the huge data processing required in many computing applications. The paper focuses on different applications of HPC and its types, such as Cluster Computing, Grid Computing, and Cloud Computing. It also studies different classifications and applications of the above types of HPC, all of which are in-demand areas of computer science. This paper also presents a comparative study of grid, cloud, and cluster computing based on benefits, drawbacks, key areas of research, characteristics, issues, and challenges.
APA, Harvard, Vancouver, ISO, and other styles
38

Ali, Namer. "A Comparison between Cluster, Grid, and Cloud Computing." International Journal of Computer Applications 179, no. 32 (2018): 37–42. http://dx.doi.org/10.5120/ijca2018916732.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Baker, M., A. Shafi, B. Carpenter, et al. "Cluster Computing and Grid 2005 Works in Progress." IEEE Distributed Systems Online 6, no. 10 (2005): 2. http://dx.doi.org/10.1109/mdso.2005.51.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Elmroth, Erik, Mats Nylen, and Roger Oscarsson. "A user-centric cluster and grid computing portal." International Journal of Computational Science and Engineering 4, no. 2 (2009): 127. http://dx.doi.org/10.1504/ijcse.2009.027004.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Park, Jong Hyuk, Laurence T. Yang, and Jinjun Chen. "Research trends in cloud, cluster and grid computing." Cluster Computing 16, no. 3 (2012): 335–37. http://dx.doi.org/10.1007/s10586-012-0213-z.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Ould-Khaoua, Mohamed, and Geyong Min. "Performance Evaluation of Grid and Cluster Computing Systems." Journal of Supercomputing 34, no. 2 (2005): 91–92. http://dx.doi.org/10.1007/s11227-005-2334-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Matsuo, K., Y. Tanaka, L. F. G. Sarmenta, T. Nakai, and E. Bagarinao. "Enabling On-demand Real-time Functional MRI Analysis Using Grid Technology." Methods of Information in Medicine 44, no. 05 (2005): 665–73. http://dx.doi.org/10.1055/s-0038-1634023.

Full text
Abstract:
Summary Objectives: The analysis of brain imaging data such as functional MRI often requires considerable computing resources, which in most cases are not readily available in many medical imaging facilities. This lack of computing power makes it difficult for researchers and medical practitioners alike to perform on-site analysis of the generated data. This paper presents a system that is capable of analyzing functional MRI data in real time with results available within seconds after data acquisition. Methods: The system employs remote computational servers to provide the necessary computing power. System integration is accomplished by an accompanying software package, which includes fMRI analysis tools, data transfer routines, and an easy-to-use graphical user interface. The remote analysis is transparent to the user as if all computations are performed locally. Results: The use of PC clusters in the analysis of fMRI data significantly improved the performance of the system. Simulation runs fully achieved real-time performance with a total processing time of 1.089 s per image volume (64 x 64 x 30 in size), much less than the per volume acquisition time set to 3.0 s. Conclusions: The results show the feasibility of using remote computational resources to enable on-demand real-time fMRI capabilities to imaging sites. It also offers the possibility of doing more intensive analysis even if the imaging site doesn’t have the necessary computing resources.
APA, Harvard, Vancouver, ISO, and other styles
44

Holmes, Violeta, and Ibad Kureshi. "Developing High Performance Computing Resources for Teaching Cluster and Grid Computing Courses." Procedia Computer Science 51 (2015): 1714–23. http://dx.doi.org/10.1016/j.procs.2015.05.310.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Dorronsoro, Bernabé, and Sergio Nesmachnow. "Special issue on soft computing techniques in cluster and grid computing systems." Cluster Computing 17, no. 2 (2014): 153–54. http://dx.doi.org/10.1007/s10586-013-0336-x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Chakaberia, Irakli, Jerome Lauret, Michael Poat, and Jefferson Porter. "Data transfer for STAR grid jobs." Journal of Physics: Conference Series 2438, no. 1 (2023): 012022. http://dx.doi.org/10.1088/1742-6596/2438/1/012022.

Full text
Abstract:
Abstract The Solenoidal Tracker at RHIC (STAR) is a multipurpose experiment at the Relativistic Heavy Ion Collider (RHIC) with the primary goal to study the formation and properties of the quark-gluon plasma. STAR is an international collaboration of member institutions and laboratories from around the world. Yearly data-taking period produces PBytes of raw data collected by the experiment. STAR primarily uses its dedicated facility at BNL to process this data, but has routinely leveraged distributed systems, both high throughput (HTC) and high performance (HPC) computing clusters, to significantly augment the processing capacity available to the experiment. The ability to automate the efficient transfer of large data sets on reliable, scalable, and secure infrastructure is critical for any large-scale distributed processing campaign. For more than a decade, STAR computing has relied upon GridFTP with its x509-based authentication to build such data transfer systems and integrate them into its larger production workflow. The end of support by the community for both GridFTP and the x509 standard requires STAR to investigate other approaches to meet its distributed processing needs. In this study we investigate two multi-purpose data distribution systems, Globus.org and XRootD, as alternatives to GridFTP. We compare both their performance and the ease by which each service is integrated into the type of secure and automated data transfer systems STAR has previously built using GridFTP. The presented approach and study may be applicable to other distributed data processing use cases beyond STAR.
APA, Harvard, Vancouver, ISO, and other styles
47

Koltcov, Sergei, Vera Ignatenko, and Sergei Pashakhin. "Fast Tuning of Topic Models: An Application of Rényi Entropy and Renormalization Theory." Proceedings 46, no. 1 (2019): 5. http://dx.doi.org/10.3390/ecea-5-06674.

Full text
Abstract:
In practice, the critical step in building machine learning models of big data (BD), the procedure of parameter tuning with a grid search, is costly in terms of time and computing resources. Due to their size, BD are comparable to mesoscopic physical systems. Hence, methods of statistical physics can be applied to BD. The paper shows that topic modeling demonstrates self-similar behavior under the condition of a varying number of clusters. Such behavior allows the use of a renormalization technique. The combination of a renormalization procedure with the Rényi entropy approach allows fast searching for the optimal number of clusters. In this paper, the renormalization procedure is developed for the Latent Dirichlet Allocation (LDA) model with a variational Expectation-Maximization algorithm. The experiments were conducted on two document collections, in two languages, with a known number of clusters. The paper presents results for three versions of the renormalization procedure: (1) renormalization with random merging of clusters, (2) renormalization based on minimal values of Kullback–Leibler divergence, and (3) renormalization merging clusters with minimal values of Rényi entropy. The paper shows that the renormalization procedure allows finding the optimal number of topics 26 times faster than grid search without significant loss of quality.
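The entropy in question can be illustrated with the generic Rényi entropy of order q, the basic quantity behind the approach; the paper's actual free-energy-based formulation over topic models is more involved, and the distributions below are invented.

```python
import math

def renyi_entropy(probs, q):
    """Rényi entropy of order q (q != 1); tends to Shannon entropy as q -> 1."""
    s = sum(p ** q for p in probs if p > 0)
    return math.log(s) / (1 - q)

uniform = [0.25] * 4                  # four equally likely topics
peaked = [0.97, 0.01, 0.01, 0.01]     # one dominant topic

h_uniform = renyi_entropy(uniform, 2)  # = ln 4 ≈ 1.386: maximal disorder
h_peaked = renyi_entropy(peaked, 2)    # ≈ 0.06: almost fully ordered
```

A sharp drop in such an entropy as clusters are merged is the kind of signal the renormalization procedure exploits to locate a good number of topics without sweeping the whole grid.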
APA, Harvard, Vancouver, ISO, and other styles
48

Liu, Xiyu, Laisheng Xiang, and Xin Wang. "Spatial Cluster Analysis by the Adleman-Lipton DNA Computing Model and Flexible Grids." Discrete Dynamics in Nature and Society 2012 (2012): 1–32. http://dx.doi.org/10.1155/2012/894207.

Full text
Abstract:
Spatial cluster analysis is an important data-mining task. Typical techniques include CLARANS, density- and gravity-based clustering, and other algorithms based on the traditional von Neumann computing architecture. The purpose of this paper is to propose a technique for spatial cluster analysis based on DNA computing and a grid technique. We adopt the Adleman-Lipton model and then design a flexible grid algorithm. Examples are given to show the effect of the algorithm. The new clustering technique provides an alternative to traditional cluster analysis.
APA, Harvard, Vancouver, ISO, and other styles
49

Chen, Ji Lin, Nan Zhang, Chang Feng Qin, Na Na Liu, Wei Jiang Qiu, and Jian Yong Hu. "The Optimization and Implementation of Power System Parallel Computing Management Platform." Applied Mechanics and Materials 687-691 (November 2014): 3365–70. http://dx.doi.org/10.4028/www.scientific.net/amm.687-691.3365.

Full text
Abstract:
With the rapid development of computer hardware, parallel computing clusters are becoming more and more powerful. How to make full use of the parallel capabilities of a computing cluster to provide faster and more agile grid services for power system simulation analysis has become a focus of attention. This paper introduces the optimization and implementation of a power system parallel computing management platform based on online and offline power system research. The platform improves computing resource utilization, enhances the generality of the platform, standardizes the management of computing interfaces, and allows the application process to be configured by users. At present, the platform has been successfully applied to online warning research and auxiliary decision-making in the power system, and it is also applied to offline study prediction, a collaborative system for power grid operation mode calculation, and so on.
APA, Harvard, Vancouver, ISO, and other styles
50

Fotohi, Reza, and Mehdi Effatparvar. "A Cluster Based Job Scheduling Algorithm for Grid Computing." International Journal of Information Technology and Computer Science 5, no. 12 (2013): 70–77. http://dx.doi.org/10.5815/ijitcs.2013.12.09.

Full text
APA, Harvard, Vancouver, ISO, and other styles