To see other types of publications on this topic, follow the link: Cloud workflow.

Journal articles on the topic 'Cloud workflow'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'Cloud workflow.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Lu, Pingping, Gongxuan Zhang, Zhaomeng Zhu, Xiumin Zhou, Jin Sun, and Junlong Zhou. "A Review of Cost and Makespan-Aware Workflow Scheduling in Clouds." Journal of Circuits, Systems and Computers 28, no. 06 (2019): 1930006. http://dx.doi.org/10.1142/s021812661930006x.

Full text
Abstract:
Scientific workflow is a common model to organize large scientific computations. It borrows the concept of workflow in business activities to manage the complicated processes in scientific computing automatically or semi-automatically. Workflow scheduling, which maps tasks in workflows to parallel computing resources, has been extensively studied over the years. In recent years, with the rise of cloud computing as a new large-scale distributed computing model, it is of great significance to study the workflow scheduling problem in the cloud. Compared with traditional distributed computing platforms, cloud platforms have unique characteristics such as the self-service resource management model and the pay-as-you-go billing model. Therefore, workflow scheduling in the cloud needs to be reconsidered. When scheduling workflows in clouds, the monetary cost and the makespan of the workflow executions are of concern to both the cloud service providers (CSPs) and the customers. In this paper, we study a series of cost- and time-aware workflow scheduling algorithms in cloud environments, with the aim of providing researchers with a choice of appropriate cloud workflow scheduling approaches in various scenarios. We conducted a broad review of different cloud workflow scheduling algorithms and categorized them based on their optimization objectives and constraints. Also, we discuss possible future research directions for cloud workflow scheduling.
APA, Harvard, Vancouver, ISO, and other styles
2

Jagateesan, S. M., and V. M. Pavithra. "A Survey on Bigdata Scheduling on Cloud Framework." International Journal of Engineering Sciences & Research Technology 6, no. 7 (2017): 922–27. https://doi.org/10.5281/zenodo.834614.

Full text
Abstract:
Computational science workflows have been successfully run on traditional High Performance Computing (HPC) systems like clusters and Grids for many years. Nowadays, users are interested in executing their workflow applications in the Cloud to exploit the economic and technical benefits of this new rising technology. The deployment and management of workflows over the existing heterogeneous and not yet interoperable Cloud providers is still a challenging task for workflow developers. The Pointer Gossip Content Addressable Network Montage Framework allows an automatic selection of the goal clouds, uniform access to the clouds, and workflow data management with respect to user Service Level Agreement (SLA) requirements. Consequently, a number of studies, focusing on different aspects, have emerged in the literature. This comparative review of workflow scheduling algorithms in the cloud environment provides solutions for these problems. Based on the analysis, the authors also highlight some research directions for future investigation. The reviewed results offer benefits to users by executing workflows with the expected performance and service quality at the lowest cost.
APA, Harvard, Vancouver, ISO, and other styles
3

Lakhwani, Kamlesh, Gajanand Sharma, Ramandeep Sandhu, et al. "Adaptive and Convex Optimization-Inspired Workflow Scheduling for Cloud Environment." International Journal of Cloud Applications and Computing 13, no. 1 (2023): 1–25. http://dx.doi.org/10.4018/ijcac.324809.

Full text
Abstract:
Scheduling large-scale and resource-intensive workflows in cloud infrastructure is one of the main challenges for cloud service providers (CSPs). Cloud infrastructure is more efficient when virtual machines and other resources work up to their full potential. The main factor that influences the quality of cloud services is the distribution of workflow on virtual machines (VMs). Scheduling tasks to VMs depends on the type of workflow and mechanism of resource allocation. Scientific workflows include large-scale data transfer and consume intensive resources of cloud infrastructures. Therefore, scheduling of tasks from scientific workflows on VMs requires efficient and optimized workflow scheduling techniques. This paper proposes an optimised workflow scheduling approach that aims to improve the utilization of cloud resources without increasing execution time and execution cost.
APA, Harvard, Vancouver, ISO, and other styles
4

Hanoosh, Zaid. "A Survey on Multiple Workflow Scheduling Algorithms in Cloud Environment." Al-Furat Journal of Innovations in Electronics and Computer Engineering 3, no. 1 (2024): 36–44. http://dx.doi.org/10.46649/fjiece.v3.1.4a.13.4.2024.

Full text
Abstract:
The workflow approach is a standard for representing processes and how they are carried out. With the advent of the electrical sciences, more cumbersome workflows were created with greater processing requirements. New distributed systems, such as grid systems and computing clouds, allow users to access heterogeneous resources that are geographically located at different points and execute their workflow tasks. Therefore, the simultaneous receipt and execution of several workflows is to be expected. When discussing scheduling algorithms, it is consequently necessary to consider arrangements for executing multiple workflows on a shared resource set. Improving the execution of multiple workflows can accelerate the process of obtaining results when sending processes to the cloud. In this paper, we first discuss the classification of multiple workflow scheduling algorithms, then briefly describe the scheduling algorithms for the cloud environment, and finally compare the algorithms from the surveyed papers with each other.
APA, Harvard, Vancouver, ISO, and other styles
5

Malawski, Maciej, Kamil Figiela, Marian Bubak, Ewa Deelman, and Jarek Nabrzyski. "Scheduling Multilevel Deadline-Constrained Scientific Workflows on Clouds Based on Cost Optimization." Scientific Programming 2015 (2015): 1–13. http://dx.doi.org/10.1155/2015/680271.

Full text
Abstract:
This paper presents a cost optimization model for scheduling scientific workflows on IaaS clouds such as Amazon EC2 or RackSpace. We assume multiple IaaS clouds with heterogeneous virtual machine instances, with a limited number of instances per cloud and hourly billing. Input and output data are stored on a cloud object store such as Amazon S3. Applications are scientific workflows modeled as DAGs, as in the Pegasus Workflow Management System. We assume that tasks in the workflows are grouped into levels of identical tasks. Our model is specified using mathematical programming languages (AMPL and CMPL) and allows us to minimize the cost of workflow execution under deadline constraints. We present results obtained using our model and the benchmark workflows representing real scientific applications in a variety of domains. The data used for evaluation come from synthetic workflows and from general-purpose cloud benchmarks, as well as from data measured in our own experiments with Montage, an astronomical application, executed on the Amazon EC2 cloud. We indicate how this model can be used for scenarios that require resource planning for scientific workflows and their ensembles.
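As an illustration of the kind of model the abstract describes, the following is a minimal Python sketch of deadline-constrained cost minimization for levels of identical tasks under hourly billing. The instance catalogue, speed factors, and deadline split are hypothetical; the paper itself formulates the model in AMPL/CMPL rather than code.

```python
import math

# Hypothetical instance catalogue: name -> (price per hour, speed factor)
INSTANCES = {"small": (0.10, 1.0), "medium": (0.20, 2.1), "large": (0.40, 4.3)}

def cheapest_plan_for_level(n_tasks, hours_per_task, level_deadline_h, max_vms=20):
    """Cheapest (instance type, VM count, cost) that finishes a level of
    identical tasks within its deadline share, under hourly billing."""
    best = None
    for name, (price, speed) in INSTANCES.items():
        for count in range(1, max_vms + 1):
            # tasks are spread evenly over `count` identical VMs of this type
            runtime_h = math.ceil(n_tasks / count) * (hours_per_task / speed)
            if runtime_h > level_deadline_h:
                continue
            cost = count * math.ceil(runtime_h) * price   # each VM billed by the hour
            if best is None or cost < best[2]:
                best = (name, count, cost)
    return best

# Example: three levels of identical tasks, each given a 2-hour deadline share
levels = [(40, 0.5), (10, 1.0), (40, 0.25)]   # (number of tasks, hours per task)
for n_tasks, hours_per_task in levels:
    print(cheapest_plan_for_level(n_tasks, hours_per_task, level_deadline_h=2.0))
```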
APA, Harvard, Vancouver, ISO, and other styles
6

Shang, Shi Feng, Jing He Huo, and Zeng Zhang. "QBARM: A Queue Theory-Based Adaptive Resource Usage Model." Advanced Materials Research 756-759 (September 2013): 2523–27. http://dx.doi.org/10.4028/www.scientific.net/amr.756-759.2523.

Full text
Abstract:
Workflow is becoming an increasingly important tool for business operations, scientific research and engineering. Cloud computing provides an elastic, on-demand and highly cost-efficient resource allocation model for workflow executions. During workflow execution, the load changes from time to time, and it therefore becomes an interesting problem to optimize the resource utilization of workflows in the cloud computing environment. In this paper, a workflow framework is proposed that can adaptively use cloud resources. In detail, after users specify the desired goal to achieve, the proposed workflow framework monitors the workflow execution and utilizes different pricing models to acquire cloud resources according to the change of workflow load. In this way, the cost of workflow execution is reduced.
APA, Harvard, Vancouver, ISO, and other styles
7

Wu, Lei, and Yuandou Wang. "Scheduling Multi-Workflows Over Heterogeneous Virtual Machines With a Multi-Stage Dynamic Game-Theoretic Approach." International Journal of Web Services Research 15, no. 4 (2018): 82–96. http://dx.doi.org/10.4018/ijwsr.2018100105.

Full text
Abstract:
Cloud computing, with dependable, consistent, pervasive, and inexpensive access to geographically distributed computational capabilities, is becoming an increasingly popular platform for the execution of scientific applications such as scientific workflows. Scheduling multiple workflows over cloud infrastructures and resources is well recognized to be NP-hard and thus critical to meeting various types of Quality-of-Service (QoS) requirements. In this work, the authors consider a multi-objective scientific workflow scheduling framework based on the dynamic game-theoretic model. It aims at reducing make-spans, cloud cost, while maximizing system fairness in terms of workload distribution among heterogeneous cloud virtual machines (VMs). The authors consider randomly-generated scientific workflow templates as test cases and carry out extensive real-world tests based on third-party commercial clouds. Experimental results show that their proposed framework outperforms traditional ones by achieving lower make-spans, lower cost, and better system fairness.
APA, Harvard, Vancouver, ISO, and other styles
8

Wu, Lei, Ran Ding, Zhaohong Jia, and Xuejun Li. "Cost-Effective Resource Provisioning for Real-Time Workflow in Cloud." Complexity 2020 (March 30, 2020): 1–15. http://dx.doi.org/10.1155/2020/1467274.

Full text
Abstract:
In the era of big data, mining and analysis of the enormous amount of data has been widely used to support decision-making. This complex process including huge-volume data collecting, storage, transmission, and analysis could be modeled as workflow. Meanwhile, cloud environment provides sufficient computing and storage resources for big data management and analytics. Due to the clouds providing the pay-as-you-go pricing scheme, executing a workflow in clouds should pay for the provisioned resources. Thus, cost-effective resource provisioning for workflow in clouds is still a critical challenge. Also, the responses of the complex data management process are usually required to be real-time. Therefore, deadline is the most crucial constraint for workflow execution. In order to address the challenge of cost-effective resource provisioning while meeting the real-time requirements of workflow execution, a resource provisioning strategy based on dynamic programming is proposed to achieve cost-effectiveness of workflow execution in clouds and a critical-path based workflow partition algorithm is presented to guarantee that the workflow can be completed before deadline. Our approach is evaluated by simulation experiments with real-time workflows of different sizes and different structures. The results demonstrate that our algorithm outperforms the existing classical algorithms.
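The critical-path notion used for the workflow partition can be sketched as the longest (runtime-weighted) path over a topological order of the DAG. The sketch below, with hypothetical task names and runtimes, illustrates only that step, not the paper's dynamic-programming provisioning strategy.

```python
from collections import defaultdict, deque

def critical_path(runtimes, edges):
    """Return (length, path): the runtime-weighted longest path of a workflow DAG.
    runtimes: {task: execution time}; edges: iterable of (parent, child) pairs."""
    succ, pred, indeg = defaultdict(list), defaultdict(list), defaultdict(int)
    for u, v in edges:
        succ[u].append(v)
        pred[v].append(u)
        indeg[v] += 1
    queue = deque(t for t in runtimes if indeg[t] == 0)
    finish, best_parent = {}, {}
    while queue:                                  # topological sweep
        u = queue.popleft()
        start = max((finish[p] for p in pred[u]), default=0.0)
        if pred[u]:
            best_parent[u] = max(pred[u], key=lambda p: finish[p])
        finish[u] = start + runtimes[u]
        for v in succ[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    exit_task = max(finish, key=finish.get)
    path, t = [], exit_task
    while t is not None:                          # walk back along the critical path
        path.append(t)
        t = best_parent.get(t)
    return finish[exit_task], list(reversed(path))

# Hypothetical 5-task workflow
runtimes = {"A": 2, "B": 4, "C": 3, "D": 1, "E": 2}
edges = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D"), ("D", "E")]
print(critical_path(runtimes, edges))   # (9, ['A', 'B', 'D', 'E'])
```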
APA, Harvard, Vancouver, ISO, and other styles
9

Kaur, Avinash, Pooja Gupta, Parminder Singh, and Manpreet Singh. "Data Placement Oriented Scheduling Algorithm for Scheduling Scientific Workflow in Cloud: A Budget-Aware Approach." Recent Advances in Computer Science and Communications 13, no. 5 (2020): 871–83. http://dx.doi.org/10.2174/2666255813666190925141324.

Full text
Abstract:
Background: A large number of communities and enterprises deploy numerous scientific workflow applications on cloud services. Aims: The main aim of the cloud service provider is to execute the workflows with minimal budget and makespan. Most of the existing techniques for budget and makespan are designed for traditional computing platforms and are not applicable to cloud computing platforms, which have unique resource management methods and service-based pricing strategies. Methods: In this paper, we studied the joint optimization of cost and makespan of scheduling workflows in IaaS clouds, and proposed a novel workflow scheduling scheme. Also, data placement is included in the proposed algorithm. Results: In this scheme, the DPO-HEFT (Data Placement Oriented HEFT) algorithm is developed, which closely integrates the data placement mechanism with the list scheduling heuristic HEFT. Extensive experiments using real-world and synthetic workflows demonstrate the efficacy of our scheme. Conclusion: Our scheme can achieve significantly better cost and makespan trade-off fronts with remarkably higher hypervolume and can run up to hundreds of times faster than the state-of-the-art algorithms.
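For readers unfamiliar with HEFT, the list scheduling heuristic that DPO-HEFT extends, a compact sketch follows: tasks are ranked by upward rank and then greedily placed on the VM giving the earliest finish time. The task/VM data are hypothetical, the insertion-based slot search of full HEFT is omitted, and the data-placement mechanism of DPO-HEFT itself is not reproduced.

```python
def heft(comp, comm, succ, n_vms):
    """Plain HEFT sketch.  comp[t][v] = runtime of task t on VM v,
    comm[(u, w)] = transfer time between dependent tasks u -> w,
    succ[t] = children of t.  Returns task -> (vm, start, finish)."""
    tasks = list(comp)
    pred = {t: [] for t in tasks}
    for u in tasks:
        for w in succ.get(u, []):
            pred[w].append(u)

    rank = {}
    def upward_rank(t):                     # average cost of t plus longest path to exit
        if t not in rank:
            avg = sum(comp[t]) / n_vms
            rank[t] = avg + max((comm.get((t, s), 0) + upward_rank(s)
                                 for s in succ.get(t, [])), default=0)
        return rank[t]

    vm_free = [0.0] * n_vms                 # time at which each VM becomes idle
    sched = {}
    for t in sorted(tasks, key=upward_rank, reverse=True):
        best = None
        for v in range(n_vms):
            ready = max((sched[p][2] + (0 if sched[p][0] == v else comm.get((p, t), 0))
                         for p in pred[t]), default=0.0)
            start = max(ready, vm_free[v])
            finish = start + comp[t][v]
            if best is None or finish < best[0]:
                best = (finish, v, start)
        finish, v, start = best
        sched[t] = (v, start, finish)
        vm_free[v] = finish
    return sched

# Hypothetical 4-task workflow on 2 VMs
comp = {"A": [3, 4], "B": [2, 3], "C": [4, 2], "D": [3, 3]}
succ = {"A": ["B", "C"], "B": ["D"], "C": ["D"]}
comm = {("A", "B"): 1, ("A", "C"): 1, ("B", "D"): 2, ("C", "D"): 1}
print(heft(comp, comm, succ, n_vms=2))
```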
APA, Harvard, Vancouver, ISO, and other styles
10

Akter, Sohana. "Harmonizing Compliance: Coordinating Automated Verification Processes within Cloud-based AI/ML Workflows." Journal of Artificial Intelligence General Science (JAIGS) 3, no. 1 (2024): 292–302. http://dx.doi.org/10.60087/jaigs.v3i1.121.

Full text
Abstract:
The significance of ensuring security and upholding data privacy within cloud-based workflows is widely recognized in research domains. This importance is particularly evident in contexts such as safeguarding patients' private data managed within cloud-deployed workflows, where maintaining confidentiality is paramount, alongside ensuring secure communication among involved stakeholders. In response to these imperatives, our paper presents an architecture and formal model designed to enforce security measures within cloud workflow orchestration. Central to our proposed architecture is the emphasis on continuous monitoring of cloud resources, workflow tasks, and data streams to detect and preempt anomalies in workflow orchestration processes. To accomplish this, we advocate for a multi-modal approach that integrates deep learning, one-class classification, and clustering techniques. In essence, our proposed architecture offers a comprehensive solution for enforcing security within cloud workflow orchestration, harnessing advanced methodologies like deep learning for anomaly detection and prediction. This approach is particularly pertinent in critical sectors such as healthcare, especially during unprecedented events like the COVID-19 pandemic.
APA, Harvard, Vancouver, ISO, and other styles
11

Akter, Sohana. "Harmonizing Compliance: Coordinating Automated Verification Processes within Cloud-based AI/ML Workflows." Journal of Artificial Intelligence General Science (JAIGS) 3, no. 1 (2024): 143–60. http://dx.doi.org/10.60087/jaigs.vol03.issue01.p160.

Full text
Abstract:
The significance of ensuring security and upholding data privacy within cloud-based workflows is widely recognized in research domains. This importance is particularly evident in contexts such as safeguarding patients' private data managed within cloud-deployed workflows, where maintaining confidentiality is paramount, alongside ensuring secure communication among involved stakeholders. In response to these imperatives, our paper presents an architecture and formal model designed to enforce security measures within cloud workflow orchestration. Central to our proposed architecture is the emphasis on continuous monitoring of cloud resources, workflow tasks, and data streams to detect and preempt anomalies in workflow orchestration processes. To accomplish this, we advocate for a multi-modal approach that integrates deep learning, one-class classification, and clustering techniques. In essence, our proposed architecture offers a comprehensive solution for enforcing security within cloud workflow orchestration, harnessing advanced methodologies like deep learning for anomaly detection and prediction. This approach is particularly pertinent in critical sectors such as healthcare, especially during unprecedented events like the COVID-19 pandemic.
APA, Harvard, Vancouver, ISO, and other styles
12

Bayani, Samir Vinayak, Ravish Tillu, and Jawaharbabu Jeyaraman. "Streamlining Compliance: Orchestrating Automated Checks for Cloud-based AI/ML Workflows." Journal of Knowledge Learning and Science Technology 2, no. 3 (2023): 413–35. http://dx.doi.org/10.60087/jklst.vol2.n3.p435.

Full text
Abstract:
Ensuring security and safeguarding data privacy within cloud workflows has garnered considerable attention in research circles. For instance, protecting the confidentiality of patients' private data managed within a cloud-deployed workflow is crucial, as is ensuring secure communication of such sensitive information among various stakeholders. In light of this, our paper proposes an architecture and a formal model for enforcing security within cloud workflow orchestration. The proposed architecture underscores the importance of monitoring cloud resources, workflow tasks, and data to identify and anticipate anomalies in cloud workflow orchestration. To achieve this, we advocate a multi-modal approach combining deep learning, one-class classification, and clustering techniques. In summary, our proposed architecture offers a comprehensive solution to security enforcement within cloud workflow orchestration, leveraging advanced techniques like deep learning for anomaly detection and prediction, particularly pertinent in critical domains such as healthcare during unprecedented times like the COVID-19 pandemic.
APA, Harvard, Vancouver, ISO, and other styles
13

Smanchat, Sucha, and Kanchana Viriyapant. "Scheduling Dynamic Parallel Loop Workflow in Cloud Environment." Walailak Journal of Science and Technology (WJST) 15, no. 1 (2016): 19–27. http://dx.doi.org/10.48048/wjst.2018.2267.

Full text
Abstract:
Scientific workflows have been employed to automate large scale scientific experiments by leveraging computational power provided on-demand by cloud computing platforms. Among these workflows, a parallel loop workflow is used for studying the effects of different input values of a scientific experiment. Because of its independent loop characteristic, a parallel loop workflow can be dynamically executed as parallel workflow instances to accelerate the execution. Such execution negates workflow traversal used in existing works to calculate execution time and cost during scheduling in order to maintain time and cost constraints. In this paper, we propose a novel scheduling technique that is able to handle dynamic parallel loop workflow execution through a new method for evaluating execution progress together with a workflow instance arrival control and a cloud resource adjustment mechanism. The proposed technique, which aims at maintaining a workflow deadline while reducing cost, is tested using 3 existing task scheduling heuristics as its task mapping strategies. The simulation results show that the proposed technique is practical and performs better when the time constraint is more relaxed. It also prefers task scheduling heuristics that allow for a more accurate progress evaluation.
APA, Harvard, Vancouver, ISO, and other styles
14

Kaur, Avinash, Pooja Gupta, and Manpreet Singh. "Hybrid Balanced Task Clustering Algorithm for Scientific Workflows in Cloud Computing." Scalable Computing: Practice and Experience 20, no. 2 (2019): 237–58. http://dx.doi.org/10.12694/scpe.v20i2.1515.

Full text
Abstract:
Scientific Workflow is a composition of both coarse-grained and fine-grained computational tasks displaying varying execution requirements. Large-scale data transfer is involved in scientific workflows, so efficient techniques are required to reduce the makespan of the workflow. Task clustering is an efficient technique used in such a scenario; it combines multiple tasks with shorter execution times into a single cluster to be executed on a resource. This leads to a reduction of scheduling overheads in scientific workflows and thus an improvement of performance. However, available task clustering methods cluster the tasks horizontally without considering the structure of tasks in a workflow. We propose a hybrid balanced task clustering algorithm that uses the impact factor of workflows along with the structure of the workflow. According to this technique, tasks can be considered for clustering either vertically or horizontally based on the value of the impact factor. This minimizes the system overheads and the makespan for the execution of a workflow. A simulation-based evaluation is performed on real workflows and shows that the proposed algorithm is efficient in recommending clusters. It shows an improvement of 5-10% in the makespan of a workflow depending on the type of workflow used.
APA, Harvard, Vancouver, ISO, and other styles
15

Sangani, Sangeeta, and Rudragoud Patil. "Reliable and efficient webserver management for task scheduling in edge-cloud platform." International Journal of Electrical and Computer Engineering (IJECE) 13, no. 5 (2023): 5922–31. https://doi.org/10.11591/ijece.v13i5.pp5922-5931.

Full text
Abstract:
The development in the field of cloud webserver management for the execution of workflows and meeting the quality-of-service (QoS) prerequisites in a distributed cloud environment has been a challenging task. A considerable body of work has been presented for the scheduling of workflows, including internet of things (IoT) workflows, in a heterogeneous cloud environment. Moreover, rapid developments in the field of cloud computing, such as edge-cloud computing, create new methods to schedule workflows in a heterogeneous cloud environment to process different tasks like IoT, event-driven applications, and different network applications. The current methods used for workflow scheduling have failed to provide better trade-offs to meet reliable performance with minimal delay. In this paper, a novel web server resource management framework is presented, namely the reliable and efficient webserver management (REWM) framework, for the edge-cloud environment. The experiment is conducted on complex bioinformatic workflows; the results show a significant reduction of cost and energy by the proposed REWM in comparison with the standard webserver management methodology.
APA, Harvard, Vancouver, ISO, and other styles
16

Wang, Danjing, Huifang Li, Youwei Zhang, and Baihai Zhang. "Gradient-Based Scheduler for Scientific Workflows in Cloud Computing." Journal of Advanced Computational Intelligence and Intelligent Informatics 27, no. 1 (2023): 64–73. http://dx.doi.org/10.20965/jaciii.2023.p0064.

Full text
Abstract:
It is becoming increasingly attractive to execute workflows in the cloud, as the cloud environment enables scientific applications to utilize elastic computing resources on demand. However, despite being a key to efficiently managing application execution in the cloud, traditional workflow scheduling algorithms face significant challenges in the cloud environment. The gradient-based optimizer (GBO) is a newly proposed evolutionary algorithm with a search engine based on the Newton’s method. It employs a set of vectors to search in the solution space. This study designs a gradient-based scheduler by using GBO for workflow scheduling to minimize the usage costs of workflows under given deadline constraints. Extensive experiments are conducted on well-known scientific workflows of different sizes and types using WorkflowSim. The experimental results show that the proposed scheduling algorithm outperforms five other state-of-the-art algorithms in terms of both the constraint satisfiability and cost optimization, thereby verifying its advantages in addressing workflow scheduling problems.
APA, Harvard, Vancouver, ISO, and other styles
17

Fouakeu-Tatieze, Stéphane, Vivient Corneille Kamla, Sonia Yassa, Jean-Claude Kamgang, and Marcellin Julius Antonio Nkenlifack. "Taxonomy and survey of scientific workflow scheduling in infrastructure-as-a-service cloud computing systems." Journal of Advanced Computer Science & Technology 12, no. 1 (2024): 19–32. http://dx.doi.org/10.14419/f8ka3p66.

Full text
Abstract:
Scientific workflows are groups of scientific application tasks organized in oriented graphs. These scientific workflows are characterized by a large number of tasks requiring sufficient resources for their execution. Tasks whose input data are all available can be computed simultaneously. Cloud computing is an appropriate environment for the implementation of scientific workflows. Although the cloud computing environment has unlimited resources and can run some scientific workflow tasks simultaneously, scheduling scientific workflow tasks on pay-as-you-go cloud computing resources is an NP-complete problem. This difficulty is due to constraints on the part of the cloud resource provider and on the part of the user (customer). A scheduling algorithm tries to find efficient schedules that take into account several client quality-of-service (QoS) requirements such as deadlines and budgets, as well as resource providers' profits, i.e., the minimization of energy consumption, among many others. There have been several papers recommending effective solutions to workflow scheduling problems. This paper reviews existing and more recent papers on the scheduling of scientific workflows in pay-as-you-go IaaS cloud computing environments, focusing on future directions for algorithms that can improve the optimal solution.
APA, Harvard, Vancouver, ISO, and other styles
18

Sahu, Babuli, Sangram Keshari Swain, Sudheer Mangalampalli, and Satyasis Mishra. "Multiobjective Prioritized Workflow Scheduling in Cloud Computing Using Cuckoo Search Algorithm." Applied Bionics and Biomechanics 2023 (July 7, 2023): 1–13. http://dx.doi.org/10.1155/2023/4350615.

Full text
Abstract:
Effective workflow scheduling in cloud computing is still a challenging problem, as incoming workflows to the cloud console have variable task processing capacities and dependencies because they arise from various heterogeneous resources. Ineffective scheduling of workflows to virtual resources in a cloud environment leads to violations of service level agreements and high energy consumption, which impacts the quality of service of the cloud provider. Many existing authors developed workflow scheduling algorithms addressing operational costs and makespan, but there is still room to improve the scheduling process in the cloud paradigm, as it is an NP-hard problem. Therefore, in this research, a task-prioritized multiobjective workflow scheduling algorithm was developed using the cuckoo search algorithm to precisely map incoming workflows onto corresponding virtual resources. Extensive simulations were carried out on WorkflowSim using randomly generated workflows from the simulator. For evaluating the efficacy of our proposed approach, we compared our proposed scheduling algorithm with existing approaches, i.e., Max–Min, first come first serve, minimum completion time, Min–Min, resource allocation security with efficient task scheduling in cloud computing-hybrid machine learning, and Round Robin. Our proposed approach outperforms them, minimizing energy consumption by 15% and reducing service level agreement violations by 22%.
APA, Harvard, Vancouver, ISO, and other styles
19

Chen, Jiangchuan, Jiajia Jiang, and Dan Luo. "A Predictive and Evolutionary Approach for Cost-Effective and Deadline-Constrained Workflow Scheduling Over Distributed IaaS Clouds." International Journal of Web Services Research 16, no. 3 (2019): 78–94. http://dx.doi.org/10.4018/ijwsr.2019070105.

Full text
Abstract:
Clouds provide highly elastic resource provisioning styles through which scientific workflows are allowed to acquire desired resources ahead of execution and build the required software environment on virtual machines (VMs). However, various challenges for cloud workflow, especially its optimal scheduling, are yet to be addressed. Traditional approaches mainly consider VMs to have non-fluctuating, time-invariant, stochastic, or bounded performance. This work describes workflows to be deployed and executed over distributed infrastructure-as-a-service clouds with time-varying performance of VMs and is aimed at reducing the execution cost of workflows while meeting deadline constraints. For this purpose, the authors employ time-series-based prediction approaches to capture dynamic performance fluctuations, feed an evolutionary algorithm with predicted performance information, and generate schedules in real time. A case study based on multiple randomly-generated workflow templates and third-party commercial clouds shows that their proposed approach outperforms traditional ones.
APA, Harvard, Vancouver, ISO, and other styles
20

Sangani, Sangeeta, and Rudragoud Patil. "Reliable and efficient webserver management for task scheduling in edge-cloud platform." International Journal of Electrical and Computer Engineering (IJECE) 13, no. 5 (2023): 5922. http://dx.doi.org/10.11591/ijece.v13i5.pp5922-5931.

Full text
Abstract:
<span lang="EN-US">The development in the field of cloud webserver management for the execution of the workflow and meeting the quality-of-service (QoS) prerequisites in a distributed cloud environment has been a challenging task. Though, internet of things (IoT) of work presented for the scheduling of the workflow in a heterogeneous cloud environment. Moreover, the rapid development in the field of cloud computing like edge-cloud computing creates new methods to schedule the workflow in a heterogenous cloud environment to process different tasks like IoT, event-driven applications, and different network applications. The current methods used for workflow scheduling have failed to provide better trade-offs to meet reliable performance with minimal delay. In this paper, a novel web server resource management framework is presented namely the reliable and efficient webserver management (REWM) framework for the edge-cloud environment. The experiment is conducted on complex bioinformatic workflows; the result shows the significant reduction of cost and energy by the proposed REWM in comparison with standard webserver management methodology.</span>
APA, Harvard, Vancouver, ISO, and other styles
21

Hu, Liang, Xianwei Wu, and Xilong Che. "HICA: A Hybrid Scientific Workflow Scheduling Algorithm for Symmetric Homogeneous Resource Cloud Environments." Symmetry 17, no. 2 (2025): 280. https://doi.org/10.3390/sym17020280.

Full text
Abstract:
With the increasing volume of scientific computation data and the advancement of computer performance, scientific computation is becoming more dependent on the powerful computing capabilities of cloud computing. On cloud platforms, tasks in scientific workflows are assigned to computational resources and executed according to specific strategies. Therefore, workflow scheduling has become a key factor affecting efficiency. This paper proposes a hybrid scientific workflow scheduling algorithm, HICA, to address the scheduling problem of scientific workflows in symmetric homogeneous cloud environments with optimization goals of makespan and cost. HICA combines the Imperialist Competitive Algorithm (ICA) with the HEFT algorithm, integrating HEFT into the initial population of the ICA to accelerate the convergence of the ICA. Experimental results show that the proposed hybrid approach outperforms other algorithms in real-world workflow applications. Specifically, when the workflow scale is 100, the average improvements in makespan and cost are 133.89 and 273.33, respectively; when the workflow scale is 1000, the improvements are 371.62 and 9178.98. The scheduling results for the Earth System Model parameter tuning workflow show that compared to the scenario without using a scheduling algorithm, the makespan and cost were improved by 13% and 21%, respectively.
APA, Harvard, Vancouver, ISO, and other styles
22

Saeed, Hadeel Amjed, Sufyan T. Faraj Al-Janabi, Esam Taha Yassen, and Omar A. Aldhaibani. "Survey on Secure Scientific Workflow Scheduling in Cloud Environments." Future Internet 17, no. 2 (2025): 51. https://doi.org/10.3390/fi17020051.

Full text
Abstract:
In cloud computing environments, the representation and management of data through workflows are crucial to ensuring efficient processing. This paper focuses on securing scientific workflow scheduling, which involves executing complex data-processing tasks with specific dependencies. The security of intermediate data, often transmitted between virtual machines during workflow execution, is critical for maintaining the integrity and confidentiality of scientific workflows. This review analyzes methods for securing scientific workflow scheduling in cloud environments, emphasizing the application of security principles such as confidentiality, authentication, and integrity. Various scheduling algorithms, including heuristics and metaheuristics, are examined for their effectiveness in balancing security with constraints like execution time and cost.
APA, Harvard, Vancouver, ISO, and other styles
23

Wu, Na, Decheng Zuo, and Zhan Zhang. "Dynamic Fault-Tolerant Workflow Scheduling with Hybrid Spatial-Temporal Re-Execution in Clouds." Information 10, no. 5 (2019): 169. http://dx.doi.org/10.3390/info10050169.

Full text
Abstract:
Improving reliability is one of the major concerns of scientific workflow scheduling in clouds. The ever-growing computational complexity and data size of workflows present challenges to fault-tolerant workflow scheduling. Therefore, it is essential to design a cost-effective fault-tolerant scheduling approach for large-scale workflows. In this paper, we propose a dynamic fault-tolerant workflow scheduling (DFTWS) approach with hybrid spatial and temporal re-execution schemes. First, DFTWS calculates the time attributes of tasks and identifies the critical path of workflow in advance. Then, DFTWS assigns appropriate virtual machine (VM) for each task according to the task urgency and budget quota in the phase of initial resource allocation. Finally, DFTWS performs online scheduling, which makes real-time fault-tolerant decisions based on failure type and task criticality throughout workflow execution. The proposed algorithm is evaluated on real-world workflows. Furthermore, the factors that affect the performance of DFTWS are analyzed. The experimental results demonstrate that DFTWS achieves a trade-off between high reliability and low cost objectives in cloud computing environments.
APA, Harvard, Vancouver, ISO, and other styles
24

Sriperambuduri, Vinay Kumar, and Nagaratna M. "A Hybrid Grey Wolf Optimization and Constriction Factor based PSO Algorithm for Workflow Scheduling in Cloud." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 9s (2023): 718–26. http://dx.doi.org/10.17762/ijritcc.v11i9s.7744.

Full text
Abstract:
Due to its flexibility, scalability, and cost-effectiveness, cloud computing has emerged as a popular platform for hosting various applications. However, optimizing workflow scheduling in the cloud is still a challenging problem because of the dynamic nature of cloud resources and the diversity of user requirements. In this context, Particle Swarm Optimization (PSO) and Grey Wolf Optimization (GWO) algorithms have been proposed as effective techniques for improving workflow scheduling in cloud environments. The primary objective of this work is to propose a workflow scheduling algorithm that optimizes the makespan, service cost, and load balance in the cloud. The proposed HGWOCPSO hybrid algorithm employs GWO and Constriction factor based PSO (CPSO) for workflow optimization. The algorithm is simulated on WorkflowSim, where a set of scientific workflows with varying task sizes and inter-task communication requirements is executed on a cloud platform. The simulation results show that the proposed algorithm outperforms existing algorithms in terms of makespan, service cost, and load balance. The employed GWO algorithm mitigates the problem of local optima that is inherent in the PSO algorithm.
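The constriction-factor PSO component (CPSO) rests on Clerc and Kennedy's constriction coefficient. A minimal sketch of that velocity update follows; the parameter values are the commonly used ones, and the GWO operators of the HGWOCPSO hybrid are omitted.

```python
import math
import random

c1, c2 = 2.05, 2.05
phi = c1 + c2                                    # must exceed 4 for constriction
chi = 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))   # ~0.7298

def constricted_velocity(v, x, pbest, gbest):
    """One constriction-factor PSO velocity update for a single dimension."""
    r1, r2 = random.random(), random.random()
    return chi * (v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x))

# Example: one particle, one dimension encoding a task-to-VM assignment weight
v, x, pbest, gbest = 0.1, 0.4, 0.7, 0.9
v = constricted_velocity(v, x, pbest, gbest)
x = x + v
print(round(v, 3), round(x, 3))
```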
APA, Harvard, Vancouver, ISO, and other styles
25

Assuncao, Luis, Carlos Goncalves, and Jose C. Cunha. "Autonomic Workflow Activities." International Journal of Adaptive, Resilient and Autonomic Systems 5, no. 2 (2014): 57–82. http://dx.doi.org/10.4018/ijaras.2014040104.

Full text
Abstract:
Workflows have been successfully applied to express the decomposition of complex scientific applications. This has motivated many initiatives to develop scientific workflow tools. However, the existing tools still lack adequate support for important aspects, namely decoupling the enactment engine from the workflow task specification, decentralizing the control of workflow activities, and allowing tasks to run autonomously on distributed infrastructures, for instance on Clouds. Furthermore, many workflow tools only support the execution of Directed Acyclic Graphs (DAGs) without the concept of iterations, in which activities execute for millions of iterations over long periods of time, and without support for dynamic workflow reconfiguration after a certain iteration. We present the AWARD (Autonomic Workflow Activities Reconfigurable and Dynamic) model of computation, based on the Process Networks model, where the workflow activities (AWAs) are autonomic processes with independent control that can run in parallel on distributed infrastructures, e.g., on Clouds. Each AWA executes a Task developed as a Java class that implements a generic interface, allowing end-users to code their applications without concern for low-level details. The data-driven coordination of AWA interactions is based on a shared tuple space that also supports dynamic workflow reconfiguration and monitoring of the execution of workflows. We describe how AWARD supports dynamic reconfiguration and discuss typical workflow reconfiguration scenarios. For evaluation, we describe experimental results of AWARD workflow executions in several application scenarios, mapped to a small dedicated cluster and the Amazon Elastic Compute Cloud (EC2).
APA, Harvard, Vancouver, ISO, and other styles
26

Li, Hanqi, Xianhui Liu, and Weidong Zhao. "Research on Lightweight Microservice Composition Technology in Cloud-Edge Device Scenarios." Sensors 23, no. 13 (2023): 5939. http://dx.doi.org/10.3390/s23135939.

Full text
Abstract:
In recent years, cloud-native technology has become popular among Internet companies. Microservice architecture solves the complexity problem for multiple service methods by decomposing a single application so that each service can be independently developed, independently deployed, and independently scaled. At the same time, domestic industrial Internet construction is still in its infancy, and small and medium-sized enterprises still face many problems in the process of digital transformation, such as difficult resource integration, complex control-equipment workflows, slow development and deployment processes, and a shortage of operation and maintenance personnel. The existing traditional workflow architecture is mainly aimed at the cloud scenario, consumes a lot of resources, and cannot be used in resource-limited scenarios at the edge. Moreover, traditional workflow engines are not efficient enough at transferring data and often need to rely on various storage mechanisms. In this article, a lightweight and efficient workflow architecture is proposed to overcome the shortcomings of these traditional workflows in the combined cloud-edge scenario. By orchestrating a lightweight workflow engine with a Kubernetes Operator, the architecture can significantly reduce workflow execution time and unify data flow between cloud microservices and edge devices.
APA, Harvard, Vancouver, ISO, and other styles
27

Bukhari, S. Sabahat H., and Yunni Xia. "A Novel Completion-Time-Minimization Scheduling Approach of Scientific Workflows Over Heterogeneous Cloud Computing Systems." International Journal of Web Services Research 16, no. 4 (2019): 1–20. http://dx.doi.org/10.4018/ijwsr.2019100101.

Full text
Abstract:
The cloud computing paradigm provides an ideal platform for supporting large-scale scientific-workflow-based applications over the internet. However, the scheduling and execution of scientific workflows still face various challenges such as cost and response time management, which aim at handling acquisition delays of physical servers and minimizing the overall completion time of workflows. A careful investigation into existing methods shows that most existing approaches consider static performance of physical machines (PMs) and ignore the impact of resource acquisition delays in their scheduling models. In this article, the authors present a meta-heuristic-based method for scheduling scientific workflows aimed at reducing workflow completion time through appropriately managing acquisition and transmission delays required for inter-PM communications. The authors carry out extensive case studies based on real-world commercial clouds and multiple workflow templates. Experimental results clearly show that the proposed method outperforms state-of-the-art ones such as ICPCP, CEGA, and JIT-C in terms of workflow completion time.
APA, Harvard, Vancouver, ISO, and other styles
28

Gharbi, Nada, Mārīte Kirikova, and Lotfi Bouzguenda. "Integrated Cloud-Based Services for Medical Workflow Systems." Applied Computer Systems 20, no. 1 (2016): 36–39. http://dx.doi.org/10.1515/acss-2016-0013.

Full text
Abstract:
Recent years have witnessed significant progress of workflow systems in different business areas. However, in the medical domain, the workflow systems are comparatively scarcely researched. In the medical domain, the workflows are as important as in other areas. In fact, the flow of information in the healthcare industry is even more critical than it is in other industries. Workflow can provide a new way of looking at how processes and procedures are completed in particular medical systems, and it can help improve the decision-making in these systems. Despite potential capabilities of workflow systems, medical systems still often perceive critical challenges in maintaining patient medical information that results in the difficulties in accessing patient data by different systems. In this paper, a new cloud-based service-oriented architecture is proposed. This architecture will support a medical workflow system integrated with cloud services aligned with medical standards to improve the healthcare system.
APA, Harvard, Vancouver, ISO, and other styles
29

Albtoush, Alaa, Farizah Yunus, and Noor Maizura Mohamad Noor. "Multi-priority scheduling algorithm for scientific workflows in cloud." Bulletin of Electrical Engineering and Informatics 13, no. 4 (2024): 2979–90. http://dx.doi.org/10.11591/eei.v13i4.7520.

Full text
Abstract:
The public cloud environment has emerged as a promising platform for executing scientific workflows. These executions involve leasing virtual machines (VMs) from public services for the duration of the workflow. The structure of the workflows significantly impacts the performance of any proposed scheduling approach. A task within a workflow cannot begin its execution before receiving all the required data from its preceding tasks. In this paper, we introduce a multi-priority scheduling approach for executing workflow tasks in the cloud. The key component of the proposed approach is a mechanism that logically orders and groups workflow tasks based on their data dependencies and locality. Using the proposed approach, the number of available VMs influences the number of groups (partitions) obtained. Based on the locality of each group's tasks, the priority of each group is determined to reduce the overall execution delay and improve VM utilization. As the results demonstrate, the proposed approach achieves a significant reduction in both execution costs and time in most scenarios.
APA, Harvard, Vancouver, ISO, and other styles
30

Panhwar, Muhammad Aamir. "Workflow-based Approach in Cloud Computing Environment." Journal of Advanced Research in Dynamical and Control Systems 12, no. 7 (2020): 815–22. http://dx.doi.org/10.5373/jardcs/v12i7/20202066.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Farid, Mazen, Rohaya Latip, Masnida Hussin, and Nor Asilah Wati Abdul Hamid. "A fault-intrusion-tolerant system and deadline-aware algorithm for scheduling scientific workflow in the cloud." PeerJ Computer Science 7 (November 2, 2021): e747. http://dx.doi.org/10.7717/peerj-cs.747.

Full text
Abstract:
Background: Recent technological developments have enabled the execution of more scientific solutions on cloud platforms. Cloud-based scientific workflows are subject to various risks, such as security breaches and unauthorized access to resources. By attacking side channels or virtual machines, attackers may destroy servers, causing interruption and delay or incorrect output. Although cloud-based scientific workflows are often used for vital computation-intensive tasks, their failure can come at a great cost. Methodology: To increase workflow reliability, we propose the Fault and Intrusion-tolerant Workflow Scheduling algorithm (FITSW). The proposed workflow system uses task executors consisting of many virtual machines to carry out workflow tasks. FITSW duplicates each sub-task three times, uses an intermediate data decision-making mechanism, and then employs a deadline partitioning method to determine sub-deadlines for each sub-task. This way, dynamism is achieved in task scheduling using the resource flow. The proposed technique generates or recycles task executors, keeps the workflow clean, and improves efficiency. Experiments were conducted on WorkflowSim to evaluate the effectiveness of FITSW using metrics such as task completion rate, success rate and completion time. Results: The results show that FITSW not only raises the success rate by about 12%, it also improves the task completion rate by 6.2% and minimizes the completion time by about 15.6% in comparison with the intrusion-tolerant scientific workflow (ITSW) system.
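The deadline-partitioning step mentioned in the abstract can be sketched as splitting the overall workflow deadline into per-level sub-deadlines in proportion to each level's estimated runtime. The sketch below uses hypothetical runtimes and does not reproduce FITSW's triplication or intermediate-data decision logic.

```python
def partition_deadline(level_runtimes, workflow_deadline):
    """Split a workflow deadline into per-level sub-deadlines,
    proportional to each level's estimated runtime."""
    total = sum(level_runtimes)
    sub_deadlines, elapsed = [], 0.0
    for rt in level_runtimes:
        elapsed += workflow_deadline * rt / total
        sub_deadlines.append(elapsed)
    return sub_deadlines

# Hypothetical workflow with 4 levels and a 100-minute deadline
print(partition_deadline([10, 30, 40, 20], workflow_deadline=100))
# -> [10.0, 40.0, 80.0, 100.0]
```

Per the abstract, FITSW pairs these sub-deadlines with three replicas of each sub-task and an intermediate-data decision-making mechanism.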
APA, Harvard, Vancouver, ISO, and other styles
32

Xing, Lining, Jun Li, Zhaoquan Cai, and Feng Hou. "Evolutionary Optimization of Energy Consumption and Makespan of Workflow Execution in Clouds." Mathematics 11, no. 9 (2023): 2126. http://dx.doi.org/10.3390/math11092126.

Full text
Abstract:
Making sound trade-offs between the energy consumption and the makespan of workflow execution in cloud platforms remains a significant but challenging issue. So far, some works balance workflows’ energy consumption and makespan by adopting multi-objective evolutionary algorithms, but they often regard this as a black-box problem, resulting in the low efficiency of the evolutionary search. To compensate for the shortcomings of existing works, this paper mathematically formulates the cloud workflow scheduling for an infrastructure-as-a-service (IaaS) platform as a multi-objective optimization problem. Then, this paper tailors a knowledge-driven energy- and makespan-aware workflow scheduling algorithm, namely EMWSA. Specifically, a critical task adjustment-based local search strategy is proposed to intelligently adjust some critical tasks to the same resource of their successor tasks, striving to simultaneously reduce workflows’ energy consumption and makespan. Further, an idle gap reuse strategy is proposed to search the optimal energy consumption of each non-critical task without affecting the operation of other tasks, so as to further reduce energy consumption. Finally, in the context of real-world workflows and cloud platforms, we carry out comparative experiments to verify the superiority of the proposed EMWSA by significantly outperforming 4 representative baselines on 19 out of 20 workflow instances.
APA, Harvard, Vancouver, ISO, and other styles
33

Mangalampalli, Sudheer, Kiran Sree Pokkuluri, G. Naga Satish, and K. Varada Raj Kumar. "An Effective Workflow Scheduling Algorithm in Cloud Computing Using Cat Swarm Optimization." ECS Transactions 107, no. 1 (2022): 2523–30. http://dx.doi.org/10.1149/10701.2523ecst.

Full text
Abstract:
An effective workflow scheduling approach is needed, as scheduling is a huge challenge in cloud computing when variable and heterogeneous workflows arrive at the cloud console. To handle these complex workflows and to schedule them onto appropriate virtual resources, an effective workflow scheduler is necessary. Earlier authors used various nature-inspired algorithms to solve workflow scheduling by minimizing makespan and maximizing resource utilization, but there is still room to minimize makespan and improve resource utilization further. In this paper, we calculate the priorities of all tasks arriving at the cloud console and thereby map tasks onto VMs effectively. Cat swarm optimization is used to solve the scheduling problem, and we address the parameters of makespan and resource utilization. Simulation was carried out on WorkflowSim, and we evaluated the efficacy of our algorithm by comparing it with the existing algorithms PSO and CS. From the simulation results, we observed that our algorithm shows great improvement over the existing algorithms for the mentioned parameters.
APA, Harvard, Vancouver, ISO, and other styles
34

Schovanec, Heather, Gabriel Walton, Ryan Kromer, and Adam Malsam. "Development of Improved Semi-Automated Processing Algorithms for the Creation of Rockfall Databases." Remote Sensing 13, no. 8 (2021): 1479. http://dx.doi.org/10.3390/rs13081479.

Full text
Abstract:
While terrestrial laser scanning and photogrammetry provide high quality point cloud data that can be used for rock slope monitoring, their increased use has overwhelmed current data analysis methodologies. Accordingly, point cloud processing workflows have previously been developed to automate many processes, including point cloud alignment, generation of change maps and clustering. However, for more specialized rock slope analyses (e.g., generating a rockfall database), the creation of more specialized processing routines and algorithms is necessary. The reconstruction of rockfall volumes from clusters of points and the automatic classification of those volumes are both processing steps required to automate the generation of a rockfall database. We propose a workflow that can automate all steps of the point cloud processing workflow. In this study, we detail adaptations to commonly used algorithms for rockfall monitoring use cases, such as the Multiscale Model to Model Cloud Comparison (M3C2). This workflow details the entire processing pipeline for rockfall database generation using terrestrial laser scanning.
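To make the "change maps and clustering" stage concrete, here is a hedged Python sketch: nearest-neighbour cloud-to-cloud distances between two aligned scan epochs, a change threshold, and DBSCAN clustering of the changed points into candidate rockfall events. The thresholds are hypothetical, and this is not the M3C2 algorithm itself, which measures change along local surface normals.

```python
import numpy as np
from scipy.spatial import cKDTree
from sklearn.cluster import DBSCAN

def rockfall_candidates(epoch1, epoch2, change_thresh=0.05, eps=0.5, min_pts=20):
    """epoch1, epoch2: (N, 3) arrays of *aligned* point clouds from two dates.
    Returns the changed points of epoch2 and one cluster label per changed point."""
    dist, _ = cKDTree(epoch1).query(epoch2)       # cloud-to-cloud distances
    changed = epoch2[dist > change_thresh]        # crude change map
    if len(changed) == 0:
        return changed, np.empty(0, dtype=int)
    labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(changed)
    return changed, labels                        # label -1 = noise, others = candidate events

# Toy example: epoch 2 equals epoch 1 plus one new blob of material on the slope
rng = np.random.default_rng(0)
slope = rng.uniform(0, 10, size=(5000, 3))
blob = rng.normal([5.0, 5.0, 9.0], 0.1, size=(200, 3))
pts, labels = rockfall_candidates(slope, np.vstack([slope, blob]))
print(len(pts), "changed points in", len(set(labels) - {-1}), "cluster(s)")
```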
APA, Harvard, Vancouver, ISO, and other styles
35

Hameed, Saif, Hend Marouane, Ahmed Fakhfakh, and Sinan Salih. "Improved Sine-Cosine Nomadic People Optimizer (NPO) for Large and Synthetic Extra-large Scientific Workflow Task Scheduling Optimization in Cloud Environment." Data and Metadata 4 (May 13, 2025): 1000. https://doi.org/10.56294/dm20251000.

Full text
Abstract:
Cloud computing has become an increasingly fundamental technology in recent years, influencing many different areas of the economy. It offers significant features such as greater scalability, on-demand resource allocation for varied workflows, and a pay-as-you-go pricing system. For cloud service providers, efficient and optimized scheduling is essential since it lowers resource consumption and operating expenses, and guarantees users' service level agreements. However, scheduling optimization becomes increasingly challenging due to the inherent heterogeneity of cloud resources and the growing scale of workflows. To tackle these issues, this study presents a hybrid Sine-Cosine Nomadic People Optimizer (called QNPO) aimed at optimizing multi-objective cloud task scheduling, with a special emphasis on large and extra-large scientific workflows. Sixteen synthetic extra-large heterogeneous workflow datasets were composed in this study and used to evaluate the proposed approach on a heterogeneous cloud infrastructure configured in WorkflowSim. The results indicated that QNPO consistently outperformed traditional optimization algorithms in all proposed evaluation scenarios, achieving a significant improvement in scheduling efficiency of between 30% and 60%.
APA, Harvard, Vancouver, ISO, and other styles
36

Albtoush, Alaa, Farizah Yunus, Khaled Almi’ani, and Noor Maizura Mohamad Noor. "Structure-Aware Scheduling Methods for Scientific Workflows in Cloud." Applied Sciences 13, no. 3 (2023): 1980. http://dx.doi.org/10.3390/app13031980.

Full text
Abstract:
Scientific workflows consist of numerous tasks subject to constraints on data dependency. Effective workflow scheduling is perpetually necessary to efficiently utilize the provided resources to minimize workflow execution cost and time (makespan). Accordingly, cloud computing has emerged as a promising platform for scheduling scientific workflows. In this paper, level- and hierarchy-based scheduling approaches were proposed to address the problem of scheduling scientific workflows in the cloud. In the level-based approach, tasks are partitioned into a set of isolated groups in which available virtual machines (VMs) compete to execute the groups' tasks. Accordingly, based on a utility function, a task will be assigned to the VM that will achieve the highest utility by executing this task. The hierarchy-based approach employs a look-ahead approach, in which the partitioning of the workflow tasks is performed by considering the entire structure of the workflow, with the objective of reducing the data dependency between the obtained groups. Additionally, in the hierarchy-based approach, a fair-share strategy is employed to determine the share (number of VMs) that will be assigned to each group of tasks. Dividing the available VMs based on the computational requirements of the task groups gives the hierarchy-based approach the advantage of further improving VM utilization. The results show that, on average, both approaches improve the execution time and cost by 27% compared to the benchmarked algorithms.
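The level-based idea can be sketched as follows: tasks are grouped by topological level, and within each level every available VM competes for each task through a utility function. The utility used below (negative finish time, ignoring data-transfer costs) is a hypothetical stand-in for the paper's function.

```python
from collections import defaultdict

def levels_of(succ, tasks):
    """Group DAG tasks into topological levels (level 0 = entry tasks)."""
    pred = defaultdict(list)
    for u, children in succ.items():
        for v in children:
            pred[v].append(u)
    level = {}
    def depth(t):
        if t not in level:
            level[t] = 1 + max((depth(p) for p in pred[t]), default=-1)
        return level[t]
    groups = defaultdict(list)
    for t in tasks:
        groups[depth(t)].append(t)
    return [groups[k] for k in sorted(groups)]

def schedule_by_levels(runtime, succ, n_vms):
    """Within each level, assign every task to the VM with the highest utility
    (here: the VM that yields the earliest finish time for that task)."""
    vm_free = [0.0] * n_vms
    plan = {}
    for group in levels_of(succ, list(runtime)):
        for t in group:
            v = max(range(n_vms), key=lambda v: -(vm_free[v] + runtime[t]))
            vm_free[v] += runtime[t]
            plan[t] = v
    return plan

# Hypothetical 5-task workflow on 2 VMs
runtime = {"A": 2, "B": 3, "C": 1, "D": 2, "E": 4}
succ = {"A": ["C", "D"], "B": ["D"], "D": ["E"]}
print(schedule_by_levels(runtime, succ, n_vms=2))
```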
APA, Harvard, Vancouver, ISO, and other styles
37

Sairohith, Thummarakoti, Prasad Udayagiri, and Harikrishna Reddy B. "Adaptive Orchestration for Performance Cost Optimization in Multi-Cloud Infrastructure." European Journal of Advances in Engineering and Technology 7, no. 8 (2020): 119–25. https://doi.org/10.5281/zenodo.15307951.

Full text
Abstract:
Organizations can benefit from using multiple cloud systems through multi-cloud infrastructure but must navigate high platform management difficulties. This document presents an automatic workflow management system to shift workloads among clouds while continuously optimizing performance and expense levels. Real-time performance tracking allows decision-makers to maximize provider workflow distribution according to their costs. The system utilizes an orchestration engine to link monitoring with decision-making processes and automatic provisioning, ensuring efficient operation among various cloud services.
APA, Harvard, Vancouver, ISO, and other styles
38

Chen, Peng, Yunni Xia, and Chun Yu. "A Novel Reinforcement-Learning-Based Approach to Workflow Scheduling Upon Infrastructure-as-a-Service Clouds." International Journal of Web Services Research 18, no. 1 (2021): 21–33. http://dx.doi.org/10.4018/ijwsr.2021010102.

Full text
Abstract:
Recently, the cloud computing paradigm has become increasingly popular in large-scale and complex workflow applications. The workflow scheduling problem, which refers to finding the most suitable resource for each task of the workflow to meet user-defined quality of service, attracts considerable research attention. Multi-objective optimization algorithms in workflow scheduling have many limitations (e.g., the encoding schemes in most existing heuristic-based scheduling algorithms require prior experts' knowledge), and thus they can be ineffective when scheduling workflows upon dynamic cloud infrastructures in real time. A novel reinforcement-learning-based algorithm for multi-workflow scheduling over IaaS is proposed. It aims at optimizing makespan and dwell time and seeks to achieve a unique set of correlated equilibrium solutions. The proposed algorithm is evaluated on well-known workflow templates and real-world industrial IaaS by simulation and compared to current state-of-the-art heuristic algorithms. The results show that the algorithm outperforms the compared algorithms.
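As a rough illustration of the reinforcement-learning framing, the sketch below uses plain tabular Q-learning over a set of independent tasks, with state = index of the next task, action = VM choice, and reward = negative makespan increase. The paper's algorithm targets correlated equilibria for multi-workflow scheduling and is considerably more elaborate; all values here are hypothetical.

```python
import random
from collections import defaultdict

def q_learning_schedule(runtime, n_vms, episodes=2000, alpha=0.1, gamma=0.9, eps=0.2):
    """Learn a task -> VM policy for a simple set of independent tasks."""
    tasks = list(runtime)
    Q = defaultdict(float)                        # (task_index, vm) -> value
    for _ in range(episodes):
        vm_free = [0.0] * n_vms
        for i, t in enumerate(tasks):             # state = index of next task
            if random.random() < eps:             # epsilon-greedy exploration
                a = random.randrange(n_vms)
            else:
                a = max(range(n_vms), key=lambda v: Q[(i, v)])
            old_makespan = max(vm_free)
            vm_free[a] += runtime[t]
            reward = -(max(vm_free) - old_makespan)    # penalise makespan growth
            next_best = max(Q[(i + 1, v)] for v in range(n_vms)) if i + 1 < len(tasks) else 0.0
            Q[(i, a)] += alpha * (reward + gamma * next_best - Q[(i, a)])
    # Greedy policy extracted from the learned Q-table
    return {i: max(range(n_vms), key=lambda v: Q[(i, v)]) for i in range(len(tasks))}

runtime = {"t0": 4, "t1": 2, "t2": 3, "t3": 1, "t4": 2}
print(q_learning_schedule(runtime, n_vms=2))
```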
APA, Harvard, Vancouver, ISO, and other styles
39

Gong, Chengjuan, Ranyu Yin, Tengfei Long, Weili Jiao, Guojin He, and Guizhou Wang. "Spatial–Temporal Approach and Dataset for Enhancing Cloud Detection in Sentinel-2 Imagery: A Case Study in China." Remote Sensing 16, no. 6 (2024): 973. http://dx.doi.org/10.3390/rs16060973.

Full text
Abstract:
Clouds often cause challenges in the application of optical satellite images. Masking clouds and cloud shadows is a crucial step in the image preprocessing workflow. The absence of a thermal band in products of the Sentinel-2 series complicates cloud detection. Additionally, most existing cloud detection methods provide binary results (cloud or non-cloud), which lack information on thin clouds and cloud shadows. This study used end-to-end supervised spatial–temporal deep learning (STDL) models to enhance cloud detection in Sentinel-2 imagery for China. To support this workflow, a new dataset for time-series cloud detection featuring high-quality labels for thin clouds and haze was constructed through time-series interpretation. A classification system consisting of six categories was employed to obtain more detailed results and reduce intra-class variance. Considering the balance of accuracy and computational efficiency, we constructed four STDL models based on shared-weight convolution modules and different classification modules (dense, long short-term memory (LSTM), bidirectional LSTM (Bi-LSTM), and transformer). The results indicated that spatial and temporal features were crucial for high-quality cloud detection. The STDL models with simple architectures that were trained on our dataset achieved excellent accuracy and detailed detection of clouds and cloud shadows, even though only four bands at 10 m resolution were used. The STDL models that used the Bi-LSTM and the transformer as classifiers showed similarly high overall accuracies. While the transformer classifier exhibited slightly lower accuracy than the Bi-LSTM, it offered greater computational efficiency. Comparative experiments also demonstrated that the usable data labels and cloud detection results obtained with our workflow outperformed those of the existing s2cloudless, MAJA, and CS+ methods.
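The following schematic PyTorch sketch shows one plausible shared-weight convolution plus Bi-LSTM arrangement of the kind the abstract describes; the layer sizes, the per-pixel sequence handling, and the six-class head are assumptions, not the released STDL model.

import torch
import torch.nn as nn

# Schematic sketch of a spatial-temporal classifier: the same (shared-weight)
# convolutional encoder is applied to every acquisition date, and a Bi-LSTM
# classifies each pixel's temporal sequence into 6 classes. Layer sizes are
# assumptions; this is not the released STDL model.

class SharedConvBiLSTM(nn.Module):
    def __init__(self, bands=4, hidden=64, classes=6):
        super().__init__()
        self.encoder = nn.Sequential(            # shared across time steps
            nn.Conv2d(bands, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.lstm = nn.LSTM(32, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, classes)

    def forward(self, x):                        # x: (batch, time, bands, H, W)
        b, t, c, h, w = x.shape
        feats = self.encoder(x.view(b * t, c, h, w))          # (b*t, 32, H, W)
        feats = feats.view(b, t, 32, h, w).permute(0, 3, 4, 1, 2)
        feats = feats.reshape(b * h * w, t, 32)                # pixel sequences
        out, _ = self.lstm(feats)
        logits = self.head(out[:, -1])                         # last time step
        return logits.view(b, h, w, -1)

if __name__ == "__main__":
    model = SharedConvBiLSTM()
    y = model(torch.randn(1, 5, 4, 32, 32))
    print(y.shape)                               # (1, 32, 32, 6)
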
APA, Harvard, Vancouver, ISO, and other styles
40

Feng, Yong, Ka Lun Leung, Yingkui Li, and Kwai Lam Wong. "An AI-Based Workflow for Fast Registration of UAV-Produced 3D Point Clouds." Remote Sensing 15, no. 21 (2023): 5163. http://dx.doi.org/10.3390/rs15215163.

Full text
Abstract:
The integration of structure from motion (SFM) and unmanned aerial vehicle (UAV) technologies has allowed for the generation of very high-resolution three-dimensional (3D) point cloud data (up to millimeters) to detect and monitor surface changes. However, a bottleneck still exists in accurately and rapidly registering the point clouds at different times. The existing point cloud registration algorithms, such as the Iterative Closest Point (ICP) and the Fast Global Registration (FGR) method, were mainly developed for the registration of small and static point cloud data, and do not perform well when dealing with large point cloud data with potential changes over time. In particular, registering large data is computationally expensive, and the inclusion of changing objects reduces the accuracy of the registration. In this paper, we develop an AI-based workflow to ensure high-quality registration of the point clouds generated using UAV-collected photos. We first detect stable objects from the ortho-photo produced by the same set of UAV-collected photos to segment the point clouds of these objects. Registration is then performed only on the partial data with these stable objects. The application of this workflow using the UAV data collected from three erosion plots at the East Tennessee Research and Education Center indicates that our workflow outperforms the existing algorithms in both computational speed and accuracy. This AI-based workflow significantly improves computational efficiency and avoids the impact of changing objects for the registration of large point cloud data.
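The short NumPy sketch below illustrates the underlying two-step idea (restrict registration to stable regions, then estimate a rigid transform); the rectangular stable mask and the known point correspondences are deliberate simplifications of the AI-based segmentation and registration the paper performs.

import numpy as np

# Illustrative sketch of the two-step idea: keep only points that fall inside
# stable-object regions, then estimate a rigid transform between the two epochs.
# The rectangular "stable mask" and known correspondences are simplifications;
# the paper segments stable objects from the ortho-photo before registering.

def crop_stable(points, xy_min, xy_max):
    """Keep points whose (x, y) lies inside an axis-aligned stable region."""
    inside = np.all((points[:, :2] >= xy_min) & (points[:, :2] <= xy_max), axis=1)
    return points[inside]

def rigid_transform(src, dst):
    """Kabsch: least-squares rotation R and translation t with dst ~= src @ R.T + t."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    u, _, vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(vt.T @ u.T))
    r = vt.T @ np.diag([1, 1, d]) @ u.T
    t = dst.mean(0) - src.mean(0) @ r.T
    return r, t

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    epoch1 = rng.uniform(0, 10, (1000, 3))
    shift = np.array([0.5, -0.2, 0.1])
    epoch2 = epoch1 + shift                      # simulated second survey
    stable1 = crop_stable(epoch1, [2, 2], [8, 8])
    stable2 = crop_stable(epoch2 - shift, [2, 2], [8, 8]) + shift  # same subset
    r, t = rigid_transform(stable1, stable2)
    print(np.round(t, 3))                        # ~ [0.5, -0.2, 0.1]
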
APA, Harvard, Vancouver, ISO, and other styles
41

Zhou, Jiantao, Chaoxin Sun, Weina Fu, Jing Liu, Lei Jia, and Hongyan Tan. "Modeling, Design, and Implementation of a Cloud Workflow Engine Based on Aneka." Journal of Applied Mathematics 2014 (2014): 1–9. http://dx.doi.org/10.1155/2014/512476.

Full text
Abstract:
This paper presents a Petri net-based model for cloud workflow, which plays a key role in industry. Three kinds of parallelism in cloud workflow are characterized and modeled. Based on this modeling analysis, a cloud workflow engine is designed and implemented in the Aneka cloud environment. The experimental results validate the effectiveness of our approach to the modeling, design, and implementation of cloud workflow.
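As a small illustration of how a Petri net expresses workflow parallelism, the Python sketch below fires an AND-split/AND-join net; the net and the marking representation are our own toy example, not the model defined in the paper.

# Tiny place/transition sketch of how a Petri net can express AND-split/AND-join
# (task-level) parallelism in a workflow; the net below is our own toy example,
# not the model defined in the paper.

def enabled(marking, transition):
    return all(marking.get(p, 0) >= n for p, n in transition["in"].items())

def fire(marking, transition):
    m = dict(marking)
    for p, n in transition["in"].items():
        m[p] -= n
    for p, n in transition["out"].items():
        m[p] = m.get(p, 0) + n
    return m

if __name__ == "__main__":
    # start --split--> (p1, p2 in parallel) --join--> done
    net = {
        "split": {"in": {"start": 1}, "out": {"p1": 1, "p2": 1}},
        "t1":    {"in": {"p1": 1},    "out": {"p1_done": 1}},
        "t2":    {"in": {"p2": 1},    "out": {"p2_done": 1}},
        "join":  {"in": {"p1_done": 1, "p2_done": 1}, "out": {"done": 1}},
    }
    marking = {"start": 1}
    for name in ["split", "t1", "t2", "join"]:
        assert enabled(marking, net[name])
        marking = fire(marking, net[name])
    print(marking)   # 'done' ends up with one token
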
APA, Harvard, Vancouver, ISO, and other styles
42

Pathirage, Milinda, Srinath Perera, Indika Kumara, Denis Weerasiri, and Sanjiva Weerawarana. "A Scalable Multi-Tenant Architecture for Business Process Executions." International Journal of Web Services Research 9, no. 2 (2012): 21–41. http://dx.doi.org/10.4018/jwsr.2012040102.

Full text
Abstract:
Cloud computing, as a concept, promises cost savings to end-users by letting them outsource their non-critical business functions to a third party in pay-as-you-go style. However, to enable economic pay-as-you-go services, end-users need cloud middleware that maximizes sharing and supports near-zero cost for unused applications. Multi-tenancy, which lets multiple tenants share a single application instance securely, is a key enabler for building such middleware. On the other hand, business processes capture the business logic of organizations in an abstract and reusable manner, and hence play a key role in most organizations. This paper presents the design and architecture of a scalable multi-tenant workflow engine and discusses in detail the potential use cases of such an architecture. The primary contributions of this paper are the motivation of workflow multi-tenancy and the design and implementation of a scalable multi-tenant workflow engine that enables multiple tenants to run their workflows securely within the same workflow engine instance without modifications to the workflows. Furthermore, the workflow engine supports process sharing and process variability across tenants, and the paper discusses the ramifications of these features.
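A minimal Python sketch of one way a single engine instance can isolate tenants is given below; the in-memory stores and the tenant-keyed lookup are illustrative assumptions, not the architecture of the engine described in the paper.

# Minimal sketch of one way a single engine instance can keep tenants apart:
# every definition and execution is keyed by tenant ID, so tenants share the
# process but never each other's state. Names and the in-memory store are
# illustrative assumptions, not the engine's actual design.

class MultiTenantEngine:
    def __init__(self):
        self._definitions = {}   # (tenant_id, workflow_name) -> list of steps
        self._runs = {}          # tenant_id -> list of completed runs

    def deploy(self, tenant_id, name, steps):
        self._definitions[(tenant_id, name)] = steps

    def run(self, tenant_id, name, payload):
        steps = self._definitions.get((tenant_id, name))
        if steps is None:
            raise PermissionError(f"{name!r} is not deployed for tenant {tenant_id}")
        for step in steps:           # sequential toy execution
            payload = step(payload)
        self._runs.setdefault(tenant_id, []).append((name, payload))
        return payload

if __name__ == "__main__":
    engine = MultiTenantEngine()
    engine.deploy("tenant-a", "order", [lambda x: x + 1, lambda x: x * 2])
    print(engine.run("tenant-a", "order", 10))   # 22
    try:
        engine.run("tenant-b", "order", 10)      # isolated: not visible to b
    except PermissionError as e:
        print(e)
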
APA, Harvard, Vancouver, ISO, and other styles
43

Sangeeta, P. Sangani, and F. Rodd Sunil. "Delay Aware and Performance Efficient Workflow Scheduling of Web Servers in Hybrid Cloud Computing Environment." Indian Journal of Science and Technology 15, no. 20 (2022): 965–75. https://doi.org/10.17485/IJST/v15i20.1809.

Full text
Abstract:
Background: To design an effective workflow scheduling optimization of web servers that brings good trade-offs between the workflow task delay prerequisite and the performance requirement. Methods: This study presents a delay-aware and performance-efficient energy optimization (DAPEEO) technique for workflow execution in a heterogeneous (i.e., edge-cloud) environment. The technique provides a workflow execution model that meets the application delay prerequisite and performance requirement. Findings: The model is designed to reduce energy consumption, increase throughput, and reduce computational cost and time, providing a delay-aware and performance-efficient workflow model for web servers in hybrid cloud computing. DAPEEO reduced energy consumption by 4.217%, increased throughput by 19.51%, and reduced computational cost by 62.38% compared with the existing Deadline-Constrained Cost Optimization for Hybrid Clouds (DCOH) model. Furthermore, average energy consumption showed reductions of 40.993% and 90.384% compared with the DCOH and the Self-Configuring and Self-Healing of Cloud-based Resources (RADAR) workload models, respectively. Experimental outcomes show that the DAPEEO technique achieves much superior energy efficiency, throughput, and computation-cost reduction compared with the existing workflow execution model. Novelty: Existing models fail to balance cost reduction and meeting workflow execution deadlines in a heterogeneous environment. DAPEEO, in contrast, efficiently trades off energy dissipation against meeting task deadlines at reduced cost under the edge-cloud computing model. Keywords: Cloud Computing; Edge-Cloud; DAPEEO; Energy Efficiency; Throughput; Cost
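The toy Python sketch below conveys the flavor of a delay- and energy-aware edge-versus-cloud placement rule; the MIPS, power, and network figures and the simple energy model are illustrative assumptions and do not reproduce DAPEEO.

# Toy edge-vs-cloud placement rule in the spirit of a delay- and energy-aware
# scheduler: run a task on the edge when it can still meet its deadline there
# at lower energy, otherwise fall back to the faster cloud.
# All numbers and the energy model are illustrative assumptions, not DAPEEO.

def place(task_len_mi, deadline_s, edge, cloud, net_delay_s=0.2):
    t_edge = task_len_mi / edge["mips"]
    t_cloud = net_delay_s + task_len_mi / cloud["mips"]
    e_edge = t_edge * edge["watts"]
    e_cloud = t_cloud * cloud["watts"] + net_delay_s * 5.0   # + transfer energy
    if t_edge <= deadline_s and e_edge <= e_cloud:
        return "edge", t_edge, e_edge
    if t_cloud <= deadline_s:
        return "cloud", t_cloud, e_cloud
    return "edge" if t_edge < t_cloud else "cloud", min(t_edge, t_cloud), None

if __name__ == "__main__":
    edge = {"mips": 2000, "watts": 10}
    cloud = {"mips": 10000, "watts": 40}
    print(place(4000, deadline_s=3.0, edge=edge, cloud=cloud))   # edge wins
    print(place(40000, deadline_s=5.0, edge=edge, cloud=cloud))  # cloud needed
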
APA, Harvard, Vancouver, ISO, and other styles
44

Huang, Hua, Yi Lai Zhang, and Min Zhang. "A Survey of Cloud Workflow." Advanced Materials Research 765-767 (September 2013): 1343–48. http://dx.doi.org/10.4028/www.scientific.net/amr.765-767.1343.

Full text
Abstract:
Traditionally, enterprises have approached IT modernization and back-office process enhancement in silos, introducing multiple vendors with different technologies, platforms, and operating systems, which limits process integrity and leads to dispersed responsibilities. In contrast, the cloud revolution has made it much easier for enterprises to approach business transformation holistically, leading to a quantum leap in their business and IT practices and performance. Cloud-based workflow (abbreviated as cloud workflow) technology is now regarded as an effective solution: cloud workflow helps organizations with process harmonization, optimal organization design, and change management, offering benefits beyond cost reduction, and it enables a complex application instance to be abstractly defined, flexibly configured, and automatically operated. This paper briefly introduces cloud computing and workflow technology and then presents the concepts and features of cloud workflow. It then discusses application scenarios and application cases of cloud workflow. Lastly, a possible future trend of workflow is proposed.
APA, Harvard, Vancouver, ISO, and other styles
45

Chaudhry, Nauman Riaz, Anastasia Anagnostou, and Simon J. E. Taylor. "A Workflow Architecture for Cloud-based Distributed Simulation." ACM Transactions on Modeling and Computer Simulation 32, no. 2 (2022): 1–26. http://dx.doi.org/10.1145/3503510.

Full text
Abstract:
Distributed simulation has still to be adopted significantly by the wider simulation community. Reasons for this might be that distributed simulation applications are difficult to develop and access to multiple computing resources is required. Cloud computing offers low-cost, on-demand computing resources, but developing applications that can use cloud computing can also be complex, particularly those that can run on different clouds. Cloud-based Distributed Simulation (CBDS) is potentially attractive, as it may solve the computing-resources issue while offering other cloud benefits, such as convenient network access. However, as suggested by the lack of sustainable approaches in the literature, the combination of cloud and distributed simulation may be too complex for a general approach to be developed. E-Infrastructures have emerged as large-scale distributed systems that support high-performance computing in various scientific fields, and Workflow Management Systems (WMS) have been created to simplify the use of these e-Infrastructures. There are many examples of both technologies being extended to use cloud computing. This article therefore presents our investigation into the potential of using these technologies for CBDS in the above context, and the WORkflow architecture for cLoud-based Distributed Simulation (WORLDS), our contribution to CBDS. We present an implementation of WORLDS using the CloudSME Simulation Platform, which combines the WS-PGRADE/gUSE WMS with the CloudBroker Platform as a Service. The approach is demonstrated with a case study using an agent-based distributed simulation of an Emergency Medical Service in REPAST and the Portico HLA RTI on the Amazon EC2 cloud.
APA, Harvard, Vancouver, ISO, and other styles
46

Zhou, Xiumin, Gongxuan Zhang, Tian Wang, Mingyue Zhang, Xiji Wang, and Wei Zhang. "Makespan–Cost–Reliability-Optimized Workflow Scheduling Using Evolutionary Techniques in Clouds." Journal of Circuits, Systems and Computers 29, no. 10 (2019): 2050167. http://dx.doi.org/10.1142/s0218126620501674.

Full text
Abstract:
Most popular scientific workflow systems can now support the deployment of tasks to the cloud. Executing workflows on the cloud has become a multi-objective scheduling problem, since users' needs must be met in many respects. Cost and makespan are considered the two most important objectives; besides these, there are other Quality-of-Service (QoS) parameters, including system reliability and energy consumption. Here, we focus on three objectives: cost, makespan, and system reliability. In this paper, we propose a Multi-objective Evolutionary Algorithm on the Cloud (MEAC). In the algorithm, we design novel schemes, including a problem-specific encoding and evolutionary operations such as crossover and mutation. Simulations on real-world and random workflows are conducted, and the results show that MEAC achieves, on average, about 5% higher hypervolume than other workflow scheduling algorithms.
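The Python sketch below shows the kind of task-to-VM chromosome encoding, one-point crossover, and reassignment mutation that evolutionary workflow schedulers commonly use; the single makespan fitness is a simplification of ours, whereas MEAC optimizes cost, makespan, and reliability together.

import random

# Sketch of a problem-specific encoding an evolutionary workflow scheduler can
# use: a chromosome is a list mapping task i -> VM index, with one-point
# crossover and a task-reassignment mutation. The operators and the single
# makespan fitness are our simplifications, not MEAC itself.

def crossover(a, b):
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(chrom, n_vms, rate=0.1):
    return [random.randrange(n_vms) if random.random() < rate else g for g in chrom]

def makespan(chrom, task_lens, vm_speeds):
    finish = [0.0] * len(vm_speeds)
    for task, vm in enumerate(chrom):
        finish[vm] += task_lens[task] / vm_speeds[vm]
    return max(finish)

if __name__ == "__main__":
    random.seed(1)
    tasks, speeds = [5, 3, 8, 2, 6], [1.0, 2.0]
    pop = [[random.randrange(2) for _ in tasks] for _ in range(20)]
    for _ in range(50):
        pop.sort(key=lambda c: makespan(c, tasks, speeds))
        parents = pop[:10]
        pop = parents + [mutate(crossover(*random.sample(parents, 2)), 2)
                         for _ in range(10)]
    print(pop[0], round(makespan(pop[0], tasks, speeds), 2))
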
APA, Harvard, Vancouver, ISO, and other styles
47

Sriperambuduri, Vinay Kumar, and Nagaratna M. "Comparison of Algorithms for Workflow Applications in Cloud Computing." International Journal of Engineering and Advanced Technology (IJEAT) 11, no. 1 (2021): 55–59. https://doi.org/10.35940/ijeat.A3139.1011121.

Full text
Abstract:
The cloud computing model has evolved to deliver resources on a pay-per-use basis to businesses, service providers, and end-users. Workflow scheduling has become one of the research trends in cloud computing, as many applications in scientific, business, and big data processing can be expressed in the form of a workflow. Scheduling aims to execute scientific or synthetic workloads on the cloud, utilizing the resources while meeting QoS requirements such as makespan, energy, and cost. There has been extensive research in this area on scheduling workflow applications in distributed environments and on executing background tasks in IoT, event-driven, and web applications. This paper provides a comprehensive survey and classification of workflow scheduling algorithms designed for the cloud.
APA, Harvard, Vancouver, ISO, and other styles
48

Zhang, Jingyu, Jinhui Yao, Shiping Chen, and David Levy. "Facilitating Biodefense Research with Mobile-Cloud Computing." International Journal of Systems and Service-Oriented Engineering 2, no. 3 (2011): 18–31. http://dx.doi.org/10.4018/jssoe.2011070102.

Full text
Abstract:
This paper proposes the use of a mobile-cloud, combining mobile devices and the cloud, in a biodefense and emerging infectious diseases (BEI) research application scenario. A mobile-cloud framework is developed to facilitate the use of mobile devices to collect data for, manipulate, and interact with the scientific workflows running in the cloud. In this framework, an independent trusted accountability service provides data provenance and enforces compliance among the participants of a biodefense research workflow. The authors implemented a prototype that allows researchers to use mobile devices to design and participate in biodefense workflows, evaluated its effectiveness, and conducted performance testing with example biodefense workflows.
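The minimal Python sketch below illustrates a hash-chained provenance log of the sort an accountability service could maintain; the record fields and the local in-memory list are illustrative assumptions, since the paper relies on an independent trusted accountability service.

import hashlib
import json
import time

# Minimal sketch of an accountability log for workflow provenance: each record
# embeds the hash of the previous one, so later tampering is detectable. The
# record fields are illustrative; the paper uses an independent trusted
# accountability service rather than a local list.

def append_record(log, actor, action, payload):
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"actor": actor, "action": action, "payload": payload,
            "ts": time.time(), "prev": prev_hash}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append(body)
    return body

def verify(log):
    prev = "0" * 64
    for rec in log:
        body = {k: v for k, v in rec.items() if k != "hash"}
        if rec["prev"] != prev or \
           rec["hash"] != hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest():
            return False
        prev = rec["hash"]
    return True

if __name__ == "__main__":
    log = []
    append_record(log, "mobile-client-07", "submit_sample", {"sample_id": "s-42"})
    append_record(log, "workflow-engine", "start_analysis", {"sample_id": "s-42"})
    print(verify(log))          # True
    log[0]["payload"]["sample_id"] = "s-99"
    print(verify(log))          # False: provenance chain broken
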
APA, Harvard, Vancouver, ISO, and other styles
49

Gutierrez-Garcia, J. Octavio, and Kwang Mong Sim. "Agent-based cloud workflow execution." Integrated Computer-Aided Engineering 19, no. 1 (2012): 39–56. http://dx.doi.org/10.3233/ica-2012-0387.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Singh, Rajwinder, and Pushpendra Kumar Petriya. "Workflow Scheduling in Cloud Computing." International Journal of Computer Applications 61, no. 11 (2013): 38–40. http://dx.doi.org/10.5120/9975-4804.

Full text
APA, Harvard, Vancouver, ISO, and other styles