Journal articles on the topic 'Job scheduler'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'Job scheduler.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Geetha, J., N. UdayBhaskar, P. ChennaReddy, and Neha Sniha. "Hadoop Scheduler with Deadline Constraint." International Journal on Cloud Computing: Services and Architecture (IJCCSA) 4, October (2018): 01–07. https://doi.org/10.5281/zenodo.1409986.

Full text
Abstract:
A popular programming model for running data-intensive applications on the cloud is MapReduce. In Hadoop, jobs are scheduled in FIFO order by default. Many MapReduce applications require a strict deadline, but a scheduler with deadline constraints has not been implemented in the Hadoop framework. Existing schedulers do not guarantee that a job will be completed by a specific deadline; some address the issue of deadlines but focus more on improving system utilization. We have proposed an algorithm which facilitates the user to specify a job's deadline
APA, Harvard, Vancouver, ISO, and other styles
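The deadline check described in this abstract can be sketched as a simple feasibility test; the function, its names, and the wave-based cost model below are illustrative assumptions, not the algorithm from the cited paper.

```python
import math

def min_slots_for_deadline(map_tasks, task_time, deadline, elapsed=0.0):
    """Return the minimum number of map slots needed so that all map
    tasks finish within the deadline, or None if infeasible.

    Assumes tasks are independent and identical in length (a common
    simplification; the cited paper's cost model may differ)."""
    remaining = deadline - elapsed
    if remaining <= 0 or task_time > remaining:
        return None  # even one wave of tasks cannot finish in time
    waves = math.floor(remaining / task_time)  # task waves that fit
    return math.ceil(map_tasks / waves)

# A job with 100 map tasks of 2 time units each and a deadline of 10
# units can run at most 5 waves, so it needs at least 20 slots.
```

A deadline-aware scheduler would admit the job only if this many slots can actually be reserved before the deadline.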
2

Sotskov, Yuri N., Natalja M. Matsveichuk, and Vadzim D. Hatsura. "Schedule Execution for Two-Machine Job-Shop to Minimize Makespan with Uncertain Processing Times." Mathematics 8, no. 8 (2020): 1314. http://dx.doi.org/10.3390/math8081314.

Full text
Abstract:
This study addresses a two-machine job-shop scheduling problem with fixed lower and upper bounds on the job processing times. An exact value of the job duration remains unknown until completing the job. The objective is to minimize a schedule length (makespan). It is investigated how to best execute a schedule, if the job processing time may be equal to any real number from the given (closed) interval. Scheduling decisions consist of the off-line phase and the on-line phase of scheduling. Using the fixed lower and upper bounds on the job processing times available at the off-line phase, a sche
APA, Harvard, Vancouver, ISO, and other styles
3

Raheja, Supriya. "An Intuitionistic Fuzzy Based Novel Approach to CPU Scheduler." Current Medical Imaging Formerly Current Medical Imaging Reviews 16, no. 4 (2020): 316–28. http://dx.doi.org/10.2174/1573405614666180903120708.

Full text
Abstract:
Background: The extension of CPU schedulers with fuzzy logic has been ascertained to be better because of its unique capability of handling imprecise information. However, other generalized forms of fuzzy sets can be used, which can further extend the performance of the scheduler. Objectives: This paper introduces a novel approach to design an intuitionistic fuzzy inference system for a CPU scheduler. Methods: The proposed inference system is implemented with a priority scheduler. The proposed scheduler has the ability to dynamically handle the impreciseness of both priority and estimated execution time. It also m
APA, Harvard, Vancouver, ISO, and other styles
4

Sotskov, Yuri N., Natalja M. Matsveichuk, and Vadzim D. Hatsura. "Two-Machine Job-Shop Scheduling Problem to Minimize the Makespan with Uncertain Job Durations." Algorithms 13, no. 1 (2019): 4. http://dx.doi.org/10.3390/a13010004.

Full text
Abstract:
We study two-machine shop-scheduling problems provided that lower and upper bounds on durations of n jobs are given before scheduling. An exact value of the job duration remains unknown until completing the job. The objective is to minimize the makespan (schedule length). We address the issue of how to best execute a schedule if the job duration may take any real value from the given segment. Scheduling decisions may consist of two phases: an off-line phase and an on-line phase. Using information on the lower and upper bounds for each job duration available at the off-line phase, a scheduler c
APA, Harvard, Vancouver, ISO, and other styles
5

Periyasami, Karthikeyan, Arul Xavier Viswanathan Mariammal, Iwin Thanakumar Joseph, and Velliangiri Sarveshwaran. "Combinatorial Double Auction Based Meta-scheduler for Medical Image Analysis Application in Grid Environment." Recent Advances in Computer Science and Communications 13, no. 5 (2020): 999–1007. http://dx.doi.org/10.2174/2213275911666190320161934.

Full text
Abstract:
Background: Medical image analysis applications have complex resource requirements, and scheduling them on grid resources is a complex task. It is necessary to develop a new model to improve the breast cancer screening process. The proposed novel meta-scheduler algorithm allocates the image analysis applications to the local schedulers, and a local scheduler submits the job to the grid node, which analyses the medical image and sends the generated result back to the meta-scheduler. Meta-schedulers are distinct from the local scheduler. Meta-scheduler and local scheduler have the a
APA, Harvard, Vancouver, ISO, and other styles
6

Prabowo, Sidik, and Maman Abdurohman. "Studi Perbandingan Performa Algoritma Penjadwalan untuk Real Time Data Twitter pada Hadoop." Komputika : Jurnal Sistem Komputer 9, no. 1 (2020): 43–50. http://dx.doi.org/10.34010/komputika.v9i1.2848.

Full text
Abstract:
Hadoop is an open-source, Java-based software framework. It consists of two main components: MapReduce and the Hadoop Distributed File System (HDFS). MapReduce comprises Map and Reduce, which are used for data processing, while HDFS is the place or directory where Hadoop data can be stored. Running jobs that often vary in their execution characteristics requires an appropriate job scheduler. Many job schedulers are available to choose from so as to match the characteristics of the job. The Fair Scheduler uses one of the schedul
APA, Harvard, Vancouver, ISO, and other styles
7

Subramani, K. "Partially clairvoyant scheduling for aggregate constraints." Journal of Applied Mathematics and Decision Sciences 2005, no. 4 (2005): 225–40. http://dx.doi.org/10.1155/jamds.2005.225.

Full text
Abstract:
The problem of partially clairvoyant scheduling is concerned with checking whether an ordered set of jobs, having nonconstant execution times and subject to a collection of imposed constraints, has a partially clairvoyant schedule. Variability of execution times of jobs and nontrivial relationships constraining their executions, are typical features of real-time systems. A partially clairvoyant scheduler parameterizes the schedule, in that the start time of a job in a sequence can depend upon the execution times of jobs that precede it, in the sequence. In real-time scheduling, parameterizatio
APA, Harvard, Vancouver, ISO, and other styles
8

Andrusenko, Yuliia, and Dmytro Lysytsia. "IAAS PERFORMANCE ASSESSMENT WITH SERVICE LEVELS." Системи управління, навігації та зв’язку. Збірник наукових праць 3, no. 77 (2024): 196–98. http://dx.doi.org/10.26906/sunz.2024.3.196.

Full text
Abstract:
The paper proposes a modification of the IaaS cloud model. To demonstrate the practicality and competitiveness of the method, a comprehensive performance study is conducted using simulation. Workloads based on real production runs of heterogeneous HPC systems are used to evaluate the practicality of the scheduling method. An online scheduling problem is considered. Jobs arrive one after another, and after a new job arrives, the scheduler must decide whether to reject this incoming job or schedule it on one of the machines. The problem is online, since the scheduler must solve it without inform
APA, Harvard, Vancouver, ISO, and other styles
9

Somasundaram, K., and S. Radhakrishnan. "Task Resource Allocation in Grid using Swift Scheduler." International Journal of Computers Communications & Control 4, no. 2 (2009): 158. http://dx.doi.org/10.15837/ijccc.2009.2.2423.

Full text
Abstract:
By nature, Grid computing is the combination of parallel and distributed computing, where running computationally intensive applications like sequence alignment, weather forecasting, etc., requires a proficient scheduler to solve the problems very fast. Most Grid tasks are scheduled based on First Come First Served (FCFS), FCFS with advanced reservation, Shortest Job First (SJF), etc. But these traditional algorithms consume more computational time due to the soaring waiting time of jobs in the job queue. In Grid scheduling, the resource selection is NP-complete. To triumph over
APA, Harvard, Vancouver, ISO, and other styles
10

Lee, Donghun, Hyeongwon Kang, Dongjin Lee, Jeonwoo Lee, and Kwanho Kim. "Deep Reinforcement Learning-Based Scheduler on Parallel Dedicated Machine Scheduling Problem towards Minimizing Total Tardiness." Sustainability 15, no. 4 (2023): 2920. http://dx.doi.org/10.3390/su15042920.

Full text
Abstract:
This study considers a parallel dedicated machine scheduling problem towards minimizing the total tardiness of allocated jobs on machines. In addition, this problem comes under the category of NP-hard. Unlike classical parallel machine scheduling, a job is processed by only one of the dedicated machines according to its job type defined in advance, and a machine is able to process at most one job at a time. To obtain a high-quality schedule in terms of total tardiness for the considered scheduling problem, we suggest a machine scheduler based on double deep Q-learning. In the training phase, t
APA, Harvard, Vancouver, ISO, and other styles
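Total tardiness, the objective minimized in this entry, is a standard quantity: for each job, tardiness is how far its completion time exceeds its due date, clipped at zero. A minimal illustration (the job data here is made up):

```python
def total_tardiness(jobs):
    """jobs: list of (completion_time, due_date) pairs."""
    return sum(max(0, c - d) for c, d in jobs)

# Three jobs: the first finishes 2 units late, the second is on time,
# the third finishes 3 units late -> total tardiness 5.
print(total_tardiness([(7, 5), (4, 6), (10, 7)]))  # prints 5
```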
11

Zheng, Zhishuo, Deyu Qi, Mincong Yu, et al. "Optimizing Job Coscheduling by Adaptive Deadlock-Free Scheduler." Mathematical Problems in Engineering 2018 (August 15, 2018): 1–18. http://dx.doi.org/10.1155/2018/1438792.

Full text
Abstract:
It is ubiquitous that multiple jobs coexist on the same machine, because tens or hundreds of cores are able to reside on the same chip. To run multiple jobs efficiently, the schedulers should provide flexible scheduling logic. Besides, corunning jobs may compete for the shared resources, which may lead to performance degradation. While many scheduling algorithms have been proposed for supporting different scheduling logic schemes and alleviating this contention, job coscheduling without performance degradation on the same machine remains a challenging problem. In this paper, we propose a novel
APA, Harvard, Vancouver, ISO, and other styles
12

Muhammad, Asif, and Muhammad Abdul Qadir. "MF-Storm: a maximum flow-based job scheduler for stream processing engines on computational clusters to increase throughput." PeerJ Computer Science 8 (September 26, 2022): e1077. http://dx.doi.org/10.7717/peerj-cs.1077.

Full text
Abstract:
Background A scheduling algorithm tries to schedule multiple computational tasks on a cluster of multiple computing nodes to maximize throughput with optimal utilization of computational and communicational resources. A Stream Processing Engine (SPE) is deployed to run streaming applications (computational tasks) on a computational cluster which helps execution and coordination of these applications. It is observed that there is a gap in the optimal mapping of a computational and communicational load of a streaming application on the underlying computational and communication power of the reso
APA, Harvard, Vancouver, ISO, and other styles
13

Brecht, Timothy, Xiaotie Deng, and Nian Gu. "Competitive Dynamic Multiprocessor Allocation for Parallel Applications." Parallel Processing Letters 07, no. 01 (1997): 89–100. http://dx.doi.org/10.1142/s0129626497000115.

Full text
Abstract:
We study dynamic multiprocessor allocation policies for parallel jobs, which allow the preemption and reallocation of processors to take place at any time. The objective is to minimize the completion time of the last job to finish executing (the makespan). We characterize a parallel job using two parameters: the job's parallelism, Pi, which is the number of tasks being executed in parallel by the job, and its execution time, li, when Pi processors are allocated to the job. The only information available to the scheduler is the parallelism of jobs. The job execution time is not known to the schedu
APA, Harvard, Vancouver, ISO, and other styles
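With the job model of this entry (job i runs on Pi processors for time li), a well-known lower bound on the makespan of any schedule on P processors is the larger of the longest single job and the total work divided by P. The function below is an illustrative sketch of that bound, not the authors' allocation policy:

```python
def makespan_lower_bound(jobs, P):
    """jobs: list of (Pi, li) pairs; P: total processors available.
    Lower bound = max(longest single job, total work / P)."""
    longest = max(li for _, li in jobs)          # no schedule beats the longest job
    work = sum(Pi * li for Pi, li in jobs)       # total processor-time demanded
    return max(longest, work / P)

# Two jobs (P1=4, l1=3) and (P2=2, l2=5) on P=4 processors:
# work = 12 + 10 = 22, so the bound is max(5, 22/4) = 5.5.
```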
14

Wang, Qiqi, Hongjie Zhang, Cheng Qu, Yu Shen, Xiaohui Liu, and Jing Li. "RLSchert: An HPC Job Scheduler Using Deep Reinforcement Learning and Remaining Time Prediction." Applied Sciences 11, no. 20 (2021): 9448. http://dx.doi.org/10.3390/app11209448.

Full text
Abstract:
The job scheduler plays a vital role in high-performance computing platforms. It determines the execution order of the jobs and the allocation of resources, which in turn affect the resource utilization of the entire system. As the scale and complexity of HPC continue to grow, job scheduling is becoming increasingly important and difficult. Existing studies relied on user-specified or regression techniques to give fixed runtime prediction values and used the values in static heuristic scheduling algorithms. However, these approaches require very accurate runtime predictions to produce better r
APA, Harvard, Vancouver, ISO, and other styles
15

Son, Siwoon, and Yang-Sae Moon. "Locality/Fairness-Aware Job Scheduling in Distributed Stream Processing Engines." Electronics 9, no. 11 (2020): 1857. http://dx.doi.org/10.3390/electronics9111857.

Full text
Abstract:
Distributed stream processing engines (DSPEs) deploy multiple tasks on distributed servers to process data streams in real time. Many DSPEs have provided locality-aware stream partitioning (LSP) methods to reduce network communication costs. However, an even job scheduler provided by DSPEs deploys tasks far away from each other on the distributed servers, which cannot use the LSP properly. In this paper, we propose a Locality/Fairness-aware job scheduler (L/F job scheduler) that considers locality together to solve problems of the even job scheduler that only considers fairness. First, the L/F
APA, Harvard, Vancouver, ISO, and other styles
16

Li, Yiren, Tieke Li, Pei Shen, et al. "PAS: Performance-Aware Job Scheduling for Big Data Processing Systems." Security and Communication Networks 2022 (April 12, 2022): 1–14. http://dx.doi.org/10.1155/2022/8598305.

Full text
Abstract:
Big data analytics has become increasingly vital in many modern enterprise applications such as user profiling and business process optimization. Today’s big data processing systems, such as Hadoop MapReduce, Spark, and Hive, treat big data applications as a batch of jobs for scheduling. Existing schedulers in production systems often maintain fair allocation without considering application performance and resource utilization simultaneously. It is challenging to perform job scheduling in big data systems to achieve both low turnaround time and high resource utilization due to the high complex
APA, Harvard, Vancouver, ISO, and other styles
17

Kaur, Charanjeet, and Sumanpreet Kaur. "NOVEL IMPROVED CAPACITY SCHEDULING ALGORITHM FOR HETEROGENEOUS HADOOP." INTERNATIONAL JOURNAL OF ENGINEERING SCIENCES & RESEARCH TECHNOLOGY 6, no. 6 (2017): 401–10. https://doi.org/10.5281/zenodo.814540.

Full text
Abstract:
For large-scale parallel applications, MapReduce is a widely used and important programming model. Hadoop is a popular open-source implementation of MapReduce used for developing data-based applications. MapReduce provides programming interfaces to share data in a cluster or distributed environment. As it works in a distributed environment, it should provide efficient scheduling mechanisms for efficient work capability in that environment. Locality and synchronization overhead are the main issues in
APA, Harvard, Vancouver, ISO, and other styles
18

Prabowo, Sidik. "Implementasi Algoritma Penjadwalan untuk pengelolaan Big Data dengan Hadoop." Indonesian Journal on Computing (Indo-JC) 2, no. 2 (2017): 119. http://dx.doi.org/10.21108/indojc.2017.2.2.189.

Full text
Abstract:
This paper proposes a Hadoop scheduler scheme for handling the appropriate job types to improve Hadoop performance. Matching the scheduler type to the type of job being executed can increase throughput and reduce the average job completion time. The main problem in job execution is a mismatch between the scheduler and the type of job being executed. In this paper, several Hadoop scheduler algorithms, namely the FIFO, Fair, SARS, and COSHH schedulers, were tested with several types of jobs handled in a Hadoop environment. The types of jobs tested were
APA, Harvard, Vancouver, ISO, and other styles
19

Raheja, Supriya. "An Adaptive Fuzzy-Based Two-Layered HRRN CPU Scheduler." International Journal of Fuzzy System Applications 11, no. 1 (2022): 1–20. http://dx.doi.org/10.4018/ijfsa.285557.

Full text
Abstract:
A fuzzy-based CPU scheduler has become an emerging component of an operating system, as it can handle the imprecise nature of the parameters used in the scheduler. This paper introduces an adaptive fuzzy-based Highest Response Ratio Next (HRRN) CPU scheduler, which is an extension of the conventional CPU scheduler. The proposed scheduler works in two layers: at the first layer, a Fuzzy Inference System handles the uncertainties of the parameters, and at the second layer, an adaptive scheduling algorithm schedules each task. The proposed scheduler intelligently generates the response ratio for each ready to ru
APA, Harvard, Vancouver, ISO, and other styles
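The response ratio that a conventional HRRN scheduler computes, and that the cited work handles with fuzzy inference, is the classic formula R = (waiting time + service time) / service time, with the highest ratio scheduled next. A crisp, non-fuzzy sketch for reference:

```python
def next_hrrn_job(ready, now):
    """ready: list of (name, arrival_time, service_time) tuples.
    Picks the job with the highest response ratio
    R = (waiting + service) / service at time `now`."""
    def ratio(job):
        _, arrival, service = job
        waiting = now - arrival
        return (waiting + service) / service
    return max(ready, key=ratio)[0]

# At t=10: job "a" (arrived 0, needs 8) has R = 18/8 = 2.25;
# job "b" (arrived 6, needs 2) has R = 6/2 = 3.0, so "b" runs next.
```

The ratio grows with waiting time, so long-waiting jobs eventually win over short newcomers, which is what makes HRRN starvation-free.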
20

Weckman, Gary R., Chandrasekhar V. Ganduri, and David A. Koonce. "A neural network job-shop scheduler." Journal of Intelligent Manufacturing 19, no. 2 (2008): 191–201. http://dx.doi.org/10.1007/s10845-008-0073-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Siddesh, G. M., and K. G. Srinisas. "An Adaptive Scheduler Framework for Complex Workflow Jobs on Grid Systems." International Journal of Distributed Systems and Technologies 3, no. 4 (2012): 63–79. http://dx.doi.org/10.4018/jdst.2012100106.

Full text
Abstract:
Grid computing provides sharing of geographically distributed resources among large-scale complex applications. Due to the dynamic nature of resources in grid, there is a need for highly efficient job scheduling and resource management policies. A novel Grid Resource Scheduler (GRS) is proposed to effectively utilize the available resources in grid. The proposed GRS contributes an optimal job scheduling algorithm based on a Job Rank-Backfilling policy and a resource matching algorithm based on ranking of resources with a best-fit allocation model. Performance of GRS is measured by considering a web base
APA, Harvard, Vancouver, ISO, and other styles
22

Sung, Tegg Taekyong, Jeongsoo Ha, Jeewoo Kim, Alex Yahja, Chae-Bong Sohn, and Bo Ryu. "DeepSoCS: A Neural Scheduler for Heterogeneous System-on-Chip (SoC) Resource Scheduling." Electronics 9, no. 6 (2020): 936. http://dx.doi.org/10.3390/electronics9060936.

Full text
Abstract:
In this paper, we present a novel scheduling solution for a class of System-on-Chip (SoC) systems where heterogeneous chip resources (DSP, FPGA, GPU, etc.) must be efficiently scheduled for continuously arriving hierarchical jobs with their tasks represented by a directed acyclic graph. Traditionally, heuristic algorithms have been widely used for many resource scheduling domains, and Heterogeneous Earliest Finish Time (HEFT) has been a dominating state-of-the-art technique across a broad range of heterogeneous resource scheduling domains over many years. Despite their long-standing popularity
APA, Harvard, Vancouver, ISO, and other styles
23

Zhou, Tong, Haihua Zhu, Dunbing Tang, et al. "Reinforcement learning for online optimization of job-shop scheduling in a smart manufacturing factory." Advances in Mechanical Engineering 14, no. 3 (2022): 168781322210861. http://dx.doi.org/10.1177/16878132221086120.

Full text
Abstract:
The job-shop scheduling problem (JSSP) is a complex combinatorial problem, especially in dynamic environments. Low-volume-high-mix orders contain various design specifications that bring a large number of uncertainties to manufacturing systems. Traditional scheduling methods are limited in handling diverse manufacturing resources in a dynamic environment. In recent years, artificial intelligence (AI) arouses the interests of researchers in solving dynamic scheduling problems. However, it is difficult to optimize the scheduling policies for online decision making while considering multiple obje
APA, Harvard, Vancouver, ISO, and other styles
24

Wu, Haoze, Clark Barrett, Mahmood Sharif, Nina Narodytska, and Gagandeep Singh. "Scalable verification of GNN-based job schedulers." Proceedings of the ACM on Programming Languages 6, OOPSLA2 (2022): 1036–65. http://dx.doi.org/10.1145/3563325.

Full text
Abstract:
Recently, Graph Neural Networks (GNNs) have been applied for scheduling jobs over clusters, achieving better performance than hand-crafted heuristics. Despite their impressive performance, concerns remain over whether these GNN-based job schedulers meet users’ expectations about other important properties, such as strategy-proofness, sharing incentive, and stability. In this work, we consider formal verification of GNN-based job schedulers. We address several domain-specific challenges such as networks that are deeper and specifications that are richer than those encountered when verifying ima
APA, Harvard, Vancouver, ISO, and other styles
25

Lee, Jae-Kook, Min-Woo Kwon, Do-Sik An, et al. "Improvements to Supercomputing Service Availability Based on Data Analysis." Applied Sciences 11, no. 13 (2021): 6166. http://dx.doi.org/10.3390/app11136166.

Full text
Abstract:
As the demand for high-performance computing (HPC) resources has increased in the field of computational science, an inevitable consideration is service availability in large cluster systems such as supercomputers. In particular, the factor that most affects availability in supercomputing services is the job scheduler utilized for allocating resources. Consequent to submitting user data through the job scheduler for data analysis, 25.6% of jobs failed because of program errors, scheduler errors, or I/O errors. Based on this analysis, we propose a K-hook method for scheduling to increase the su
APA, Harvard, Vancouver, ISO, and other styles
26

Chen, Rui, Wen Hua Zeng, and Kai Ji Fan. "Research on Hadoop Greedy Scheduler Based on the Fair." Applied Mechanics and Materials 145 (December 2011): 460–64. http://dx.doi.org/10.4028/www.scientific.net/amm.145.460.

Full text
Abstract:
Job scheduling technology is one of the Hadoop platform's key technologies. Its main function is to control the execution sequence of jobs and the distribution of computing resources, which directly relates to the Hadoop platform's overall performance and system resource usage. However, the existing job scheduling algorithms such as FIFO Scheduler, Fair Scheduler and Capacity Scheduler all have some defects. To overcome these defects, this paper proposed a new algorithm, Hadoop Greedy Scheduler Based on the Fair (HGSF). Firstly, the job pools are sorted by priority from high to low; pools in
APA, Harvard, Vancouver, ISO, and other styles
27

Hong, Yige, and Ziv Scully. "Performance of the Gittins Policy in the G/G/1 and G/G/k, With and Without Setup Times." ACM SIGMETRICS Performance Evaluation Review 51, no. 4 (2024): 12–13. http://dx.doi.org/10.1145/3649477.3649485.

Full text
Abstract:
We consider the classic problem of preemptively scheduling jobs in a queue to minimize mean number-in-system, or equivalently mean response time. Even in single-server queueing models, this can be a nontrivial problem whose answer depends on the information available to the scheduler. The simplest case is when the scheduler knows each job's size, for which the optimal policy is Shortest Remaining Processing Time (SRPT) [9]. In the more realistic case of scheduling with unknown or partially known job sizes, people consider the Gittins policy [1-3, 12]. Roughly speaking, Gittins assigns each job
APA, Harvard, Vancouver, ISO, and other styles
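SRPT, the known-size optimal policy this abstract references, always runs the job with the least remaining processing time, preempting whenever a shorter job arrives. A small simulation sketch (time advances in unit steps for simplicity; this is an illustration of the policy, not the paper's analysis):

```python
def srpt_mean_response(jobs):
    """jobs: list of (arrival, size). Simulates SRPT in unit time
    steps and returns the mean response time (completion - arrival)."""
    remaining = {i: size for i, (_, size) in enumerate(jobs)}
    t, response = 0, 0.0
    while remaining:
        # jobs that have arrived and are not yet finished
        active = [i for i in remaining if jobs[i][0] <= t]
        if active:
            i = min(active, key=lambda j: remaining[j])  # least remaining work
            remaining[i] -= 1
            if remaining[i] == 0:
                response += (t + 1) - jobs[i][0]
                del remaining[i]
        t += 1
    return response / len(jobs)

# Jobs (arrival, size) = (0, 3) and (1, 1): SRPT preempts the long job,
# finishing the short one at t=2 and the long one at t=4.
```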
28

Hariharan, B., and D. Paul Raj. "WBAT Job Scheduler: A Multi-Objective Approach for Job Scheduling Problem on Cloud Computing." Journal of Circuits, Systems and Computers 29, no. 06 (2019): 2050089. http://dx.doi.org/10.1142/s0218126620500899.

Full text
Abstract:
The main objective of the proposed methodology is multi-objective job scheduling using hybridization of whale and BAT optimization algorithm (WBAT) which is used to change existing solution and to adopt a new good solution based on the objective function. The scheduling function in the proposed job scheduling strategy first creates a set of jobs and cloud node to generate the population by assigning jobs to cloud node randomly and evaluate the fitness function which minimizes the makespan and maximizes the quality of jobs. Second, the function uses iterations to regenerate populations based on
APA, Harvard, Vancouver, ISO, and other styles
29

Gautam, Jyoti V., Harshadkumar B. Prajapati, Vipul K. Dabhi, and Sanjay Chaudhary. "Empirical Study of Job Scheduling Algorithms in Hadoop MapReduce." Cybernetics and Information Technologies 17, no. 1 (2017): 146–63. http://dx.doi.org/10.1515/cait-2017-0012.

Full text
Abstract:
Several job scheduling algorithms have been developed for the Hadoop MapReduce model, which vary widely in design and behavior for handling different issues such as locality of data, user share fairness, and resource awareness. This article focuses on empirically evaluating the performance of three schedulers: First In First Out (FIFO), Fair scheduler, and Capacity scheduler. To carry out the experimental evaluation, we implement our own Hadoop cluster testbed, consisting of four machines, in which one of the machines works as the master node and all four machines work as slave nodes. Th
APA, Harvard, Vancouver, ISO, and other styles
30

Banu Dawood Ali, Madhina, and Enayathullah Syed Mohamed. "VM queuing optimal scheduling in cloud using heuristic ant colony optimal based multi-objective genetic approach." Indonesian Journal of Electrical Engineering and Computer Science 29, no. 3 (2023): 1542. http://dx.doi.org/10.11591/ijeecs.v29.i3.pp1542-1550.

Full text
Abstract:
The usage of cloud-based applications is increasing tremendously. The cloud computing task distribution is a nondeterministic polynomial-time problem for which it is challenging to find the optimal solution. To solve the above-mentioned issue with a large number of users' job requests, a heuristic ant colony optimal based multi-objective genetic (HACOMOG) approach for job allocation and resource optimization is proposed. A utilization-based scheduler recognizes the task order and the optimal resources to be scheduled. The primary contribution of the proposed technique is to develop several online techniques to find soluti
APA, Harvard, Vancouver, ISO, and other styles
32

Ananda, A. L., G. Tan, and Fau Leng Fong. "Shopping for job takers [distributed scheduler agent]." IEEE Potentials 18, no. 4 (1999): 13–16. http://dx.doi.org/10.1109/45.796096.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Dagli, Cihan, and Sinchai Sittisathanchai. "Genetic neuro-scheduler for job shop scheduling." Computers & Industrial Engineering 25, no. 1-4 (1993): 267–70. http://dx.doi.org/10.1016/0360-8352(93)90272-y.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Dar-El, Ezey M., and Zvi Feuer. "SIBS — a job shop simulation-based scheduler." Computer Integrated Manufacturing Systems 5, no. 1 (1992): 15–20. http://dx.doi.org/10.1016/0951-5240(92)90014-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Bansal, Jyoti, Dr Shaveta Rani, and Dr Paramjit Singh. "TheWorkQueue with Dynamic Replication-FaultTolerant Scheduler in Desktop Grid Environment." INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 11, no. 4 (2013): 2446–51. http://dx.doi.org/10.24297/ijct.v11i4.3128.

Full text
Abstract:
Desktop Grid differs from Grid in the characteristics of its resources as well as the types of sharing. In particular, resource providers in Desktop Grid are volatile, heterogeneous, faulty, and malicious. These distinct features make it difficult for a scheduler to allocate tasks, and they deteriorate the reliability and performance of computation. Availability metrics can forecast unavailability and can provide schedulers with information about reliability, which helps them make better scheduling decisions when combined with information about speed. This paper, using these metr
APA, Harvard, Vancouver, ISO, and other styles
36

Zang, Zelin, Wanliang Wang, Yuhang Song, et al. "Hybrid Deep Neural Network Scheduler for Job-Shop Problem Based on Convolution Two-Dimensional Transformation." Computational Intelligence and Neuroscience 2019 (July 10, 2019): 1–19. http://dx.doi.org/10.1155/2019/7172842.

Full text
Abstract:
In this paper, a hybrid deep neural network scheduler (HDNNS) is proposed to solve job-shop scheduling problems (JSSPs). In order to mine the state information of schedule processing, a job-shop scheduling problem is divided into several classification-based subproblems. And a deep learning framework is used for solving these subproblems. HDNNS applies the convolution two-dimensional transformation method (CTDT) to transform irregular scheduling information into regular features so that the convolution operation of deep learning can be introduced into dealing with JSSP. The simulation experime
APA, Harvard, Vancouver, ISO, and other styles
37

Yoon, JunWeon, TaeYoung Hong, ChanYeol Park, Seo-Young Noh, and HeonChang Yu. "Log Analysis-Based Resource and Execution Time Improvement in HPC: A Case Study." Applied Sciences 10, no. 7 (2020): 2634. http://dx.doi.org/10.3390/app10072634.

Full text
Abstract:
High-performance computing (HPC) uses many distributed computing resources to solve large computational science problems through parallel computation. Such an approach can reduce overall job execution time and increase the capacity of solving large-scale and complex problems. In the supercomputer, the job scheduler, the HPC’s flagship tool, is responsible for distributing and managing the resources of large systems. In this paper, we analyze the execution log of the job scheduler for a certain period of time and propose an optimization approach to reduce the idle time of jobs. In our experimen
APA, Harvard, Vancouver, ISO, and other styles
38

Zhang, Yanhao, Congyang Wang, Xin He, Junyang Yu, Rui Zhai, and Yalin Song. "Delay-aware resource-efficient interleaved task scheduling strategy in spark." Computer Science and Information Systems, no. 00 (2025): 18. https://doi.org/10.2298/csis240831018z.

Full text
Abstract:
To address low CPU and network resource utilization in the task scheduling process of the Spark and Flink computing frameworks, this paper proposes a Delay-Aware Resource-Efficient Interleaved Task Scheduling Strategy (RPTS). This algorithm schedules parallel tasks in a pipelined fashion, effectively improving system resource utilization and shortening job completion times. Firstly, based on historical data of task completion times, we stagger the execution of tasks within the stage with the longest completion time. This helps optimize the utilization of system resources and ensu
APA, Harvard, Vancouver, ISO, and other styles
39

Sundaraj, Kasi Perumal, Madhusudhan Rao T, and Praveen Chander P G. "Multiple MapReduce Jobs in Distributed Scheduler for Big Data Applications." International Journal of Advanced Research in Computer Science and Software Engineering 7, no. 12 (2018): 34. http://dx.doi.org/10.23956/ijarcsse.v7i12.484.

Full text
Abstract:
The majority of large-scale data-intensive applications executed by data centers are based on MapReduce or its open-source implementation, Hadoop. Such applications are executed on large clusters requiring large amounts of energy, making energy costs a considerable fraction of the data center's overall costs. Therefore, minimizing the energy consumption when executing each MapReduce job is a critical concern for data centers. In this paper, we propose a framework for improving the energy efficiency of MapReduce applications while satisfying the service level agreement (SLA). We first model
APA, Harvard, Vancouver, ISO, and other styles
40

Mämmelä, Olli, Mikko Majanen, Robert Basmadjian, Hermann De Meer, André Giesler, and Willi Homberg. "Energy-aware job scheduler for high-performance computing." Computer Science - Research and Development 27, no. 4 (2011): 265–75. http://dx.doi.org/10.1007/s00450-011-0189-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Raheja, Supriya, Reena Dhadich, and Smita Rajpal. "Designing of 2-Stage CPU Scheduler Using Vague Logic." Advances in Fuzzy Systems 2014 (2014): 1–10. http://dx.doi.org/10.1155/2014/841976.

Full text
Abstract:
In an operating system, the CPU scheduler is designed so that all resources are fully utilized. With static priority scheduling, the scheduler ensures that CPU time is assigned according to the highest priority but ignores other factors, which affects performance. To improve performance, we propose a new 2-stage vague-logic-based scheduler. In the first stage, the scheduler handles the uncertainty of tasks using the proposed vague inference system (VIS). In the second stage, the scheduler uses a vague oriented priority scheduling (VOPS) algorithm to select the next process. The go
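As a hedged illustration of the weakness this abstract points out, a purely static-priority pick ignores how long a task has waited. The sketch below (process names and tuple layout are invented for illustration, not taken from the paper) shows such a priority-only selection:

```python
def select_next(ready):
    """Pick the next process by static priority alone (lower value = higher
    priority).  Waiting time is ignored, which is the behaviour the proposed
    2-stage vague scheduler aims to improve on."""
    # `ready` holds (priority, arrival_time, pid) tuples; Python's tuple
    # comparison breaks priority ties by the earlier arrival time.
    return min(ready)[2]

ready = [(3, 0, "editor"), (1, 5, "daemon"), (2, 1, "compiler")]
print(select_next(ready))  # the highest-priority process wins regardless of wait
```

Here the late-arriving "daemon" is chosen purely because of its priority value, however long the other processes have been waiting.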
APA, Harvard, Vancouver, ISO, and other styles
42

Nimdia, Kirit, and Balasubrahmanya Balakrishna. "Batch Job Active-Active Resiliency System and Method." International Journal of Computer Science and Mobile Computing 13, no. 5 (2024): 89–99. http://dx.doi.org/10.47760/ijcsmc.2024.v13i05.008.

Full text
Abstract:
This manuscript presents a robust solution for orchestrating batch job workflows in an active-active resiliency mode across multiple regions. The proposed system abstracts the complexities of job and step/task state management, provides automated and customizable retries, and ensures auditability and observability. The system maintains operational continuity and fault tolerance, leveraging lock marker files and a configurable scheduler, even during regional failures. This approach addresses the challenges of asynchronous workflow management, offering enhanced visibility, control, and efficienc
APA, Harvard, Vancouver, ISO, and other styles
43

Sneha, Sneha, and Shoney Sebastian. "Improved fair Scheduling Algorithm for Hadoop Clustering." Oriental journal of computer science and technology 10, no. 1 (2017): 194–200. http://dx.doi.org/10.13005/ojcst/10.01.26.

Full text
Abstract:
The traditional way of storing such a huge amount of data is not convenient, because processing that data in later stages is a very tedious job. So nowadays, Hadoop is used to store and process large amounts of data. Looking at the statistics of data generated in recent years, the volume has been very high over the last two years. Hadoop is a good framework to store and process data efficiently. It works like parallel processing, and there is no failure or data loss due to fault tolerance. Job scheduling is an important process in Hadoop MapReduce. Hadoop comes with three types of schedulers, namely
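The Fair Scheduler this entry improves on divides cluster slots among job pools; a minimal max-min (water-filling) sketch of that idea follows, with made-up pool names and demands, and without the real scheduler's weights and minimum shares:

```python
def fair_shares(demands, capacity):
    """Max-min fair allocation of `capacity` slots among pools, each pool
    capped at its own demand; any surplus is redistributed to the pools
    that still want more (a simplified sketch of fair scheduling)."""
    shares = {pool: 0.0 for pool in demands}
    unsatisfied = set(demands)
    left = float(capacity)
    while unsatisfied and left > 1e-9:
        per_pool = left / len(unsatisfied)  # equal split of what remains
        for pool in list(unsatisfied):
            give = min(per_pool, demands[pool] - shares[pool])
            shares[pool] += give
            left -= give
            if demands[pool] - shares[pool] <= 1e-9:
                unsatisfied.discard(pool)  # demand met; stop giving it slots
    return shares

print(fair_shares({"etl": 10, "adhoc": 2, "ml": 6}, 12))
```

With 12 slots, the small "adhoc" pool is fully satisfied with 2, and the remaining 10 slots are split evenly (5 and 5) between the two larger pools.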
APA, Harvard, Vancouver, ISO, and other styles
44

Yan, Feng, Ludmila Cherkasova, Zhuoyao Zhang, and Evgenia Smirni. "DyScale: A MapReduce Job Scheduler for Heterogeneous Multicore Processors." IEEE Transactions on Cloud Computing 5, no. 2 (2017): 317–30. http://dx.doi.org/10.1109/tcc.2015.2415772.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

SAPUTRA, WAHYUDDIN, K. Tone, and ALVIANUS DENGEN. "ANALISIS KINERJA ALGORITMA DELAY SCHEDULING PADA HADOOP TERHADAP KARAKTERISTIK RESPONS TIME UNTUK PENGIRIMAN 2 JOB YANG BERBEDA." Jurnal INSTEK (Informatika Sains dan Teknologi) 5, no. 1 (2020): 129. http://dx.doi.org/10.24252/instek.v5i1.14516.

Full text
Abstract:
Information system technology is developing very rapidly, in direct proportion to very large data growth. Such data has become very difficult to collect, store, manage, and analyze, requiring computing infrastructure and technology called parallel computing. Parallel computing is the use of several interconnected computers to process large volumes of data, here using Hadoop, a platform for processing large-scale data (big data) in a distributed fashion that can run on top of a cluster. This research uses the FIFO
APA, Harvard, Vancouver, ISO, and other styles
46

Chen, Zhongrui, Isaac Grosof, and Benjamin Berg. "Improving Multiresource Job Scheduling with Markovian Service Rate Policies." ACM SIGMETRICS Performance Evaluation Review 53, no. 1 (2025): 7–9. https://doi.org/10.1145/3744970.3727290.

Full text
Abstract:
Modern cloud computing workloads are composed of multiresource jobs that require a variety of computational resources in order to run, such as CPU cores, memory, disk space, or hardware accelerators. A server can run a set of multiresource jobs in parallel if the total resource demands of those jobs do not exceed the resource capacity of the server. Given a stream of arriving multiresource jobs, a scheduling policy must decide which jobs to run in parallel at every moment in time in order to minimize the mean response time across jobs --- the average time between when a job arrives to the syst
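The packing constraint described in this abstract — a server can co-run a set of multiresource jobs only while their summed demands fit its capacity on every resource — can be sketched as follows (the server and job figures are invented for illustration):

```python
def fits(jobs, capacity):
    """Return True iff the given multiresource jobs can run in parallel,
    i.e. their total demand stays within capacity on every resource."""
    return all(
        sum(job[resource] for job in jobs) <= capacity[resource]
        for resource in capacity
    )

server = {"cpu": 16, "mem_gb": 64}
a = {"cpu": 8, "mem_gb": 16}
b = {"cpu": 8, "mem_gb": 32}
c = {"cpu": 4, "mem_gb": 32}
print(fits([a, b], server))     # True: 16 cores, 48 GB used
print(fits([a, b, c], server))  # False: both CPU and memory would be exceeded
```

A scheduling policy then chooses, at each moment, which feasible subset of waiting jobs to run — the combinatorial choice the paper's Markovian service rate policies address.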
APA, Harvard, Vancouver, ISO, and other styles
47

S., Manishankar, and S. Sathayanarayana. "Performance evaluation and resource optimization of cloud based parallel Hadoop clusters with an intelligent scheduler." International Journal of Engineering & Technology 7, no. 4.20 (2018): 4220. http://dx.doi.org/10.14419/ijet.v7i4.13372.

Full text
Abstract:
Data generated from real-time information systems is always incremental in nature. Processing such huge incremental data at large scale requires a parallel processing system such as a Hadoop-based cluster. A major challenge in all cluster-based systems is how efficiently the system's resources can be used. The research carried out proposes a model architecture for a Hadoop cluster with additional integrated components, such as a super node that manages the cluster's computations and a mediation manager that performs performance monitoring and evaluation. The super node in the system is equi
APA, Harvard, Vancouver, ISO, and other styles
48

Vinutha, D. C., and G. T. Raju. "An Accurate and Efficient Scheduler for Hadoop MapReduce Framework." Indonesian Journal of Electrical Engineering and Computer Science 12, no. 3 (2018): 1132. http://dx.doi.org/10.11591/ijeecs.v12.i3.pp1132-1142.

Full text
Abstract:
MapReduce is the preferred computing framework used in large data analysis and processing applications. Hadoop is a widely used MapReduce framework across different communities due to its open-source nature. Cloud service providers such as Microsoft Azure HDInsight offer resources to their customers, who pay only for their use. However, a critical challenge for cloud service providers is to meet the user task service level agreement (SLA) requirement (task deadline). Currently, the onus is on the client to compute the amount of resources required to run a job on the cloud. This work presents a novel makespan mo
APA, Harvard, Vancouver, ISO, and other styles
49

D., C. Vinutha, and Raju G.T. "An Accurate and Efficient Scheduler for Hadoop MapReduce Framework." Indonesian Journal of Electrical Engineering and Computer Science 12, no. 3 (2018): 1132–42. https://doi.org/10.11591/ijeecs.v12.i3.pp1132-1142.

Full text
Abstract:
MapReduce is the preferred computing framework used in large data analysis and processing applications. Hadoop is a widely used MapReduce framework across different communities due to its open-source nature. Cloud service providers such as Microsoft Azure HDInsight offer resources to their customers, who pay only for their use. However, a critical challenge for cloud service providers is to meet the user task service level agreement (SLA) requirement (task deadline). Currently, the onus is on the client to compute the amount of resources required to run a job on the cloud. This work presents a novel makespan mo
APA, Harvard, Vancouver, ISO, and other styles
50

Huang, Ye, Amos Brocco, Michele Courant, Beat Hirsbrunner, and Pierre Kuonen. "MaGate." International Journal of Distributed Systems and Technologies 1, no. 3 (2010): 24–39. http://dx.doi.org/10.4018/jdst.2010070102.

Full text
Abstract:
This work presents the design and architecture of a decentralized grid scheduler named MaGate, which is developed within the SmartGRID project and focuses on grid scheduler interoperation. The MaGate scheduler is modularly structured and emphasizes the functionality, procedure, and policy of delegating locally unsuited jobs to appropriate remote MaGates within the same grid system. To avoid an isolated solution, web services and several existing and emerging grid standards are adopted, as well as a series of interfaces to both publish MaGate capabilities and integrate functionalities from external
APA, Harvard, Vancouver, ISO, and other styles