
Journal articles on the topic 'Offloading tasks'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Offloading tasks.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Zhang, Rui, Libing Wu, Shuqin Cao, et al. "Task Offloading with Task Classification and Offloading Nodes Selection for MEC-Enabled IoV." ACM Transactions on Internet Technology 22, no. 2 (2022): 1–24. http://dx.doi.org/10.1145/3475871.

Abstract:
Mobile Edge Computing (MEC)-based task offloading in the Internet of Vehicles (IoV), which transfers computational tasks to mobile edge nodes and fixed edge nodes with available computing resources, has attracted interest in recent years. MEC-based task offloading can achieve low latency and low operational cost under the tasks' delay constraints. However, most existing research focuses on how to divide these tasks and migrate them to other devices, ignoring delay constraints and offloading node selection for different tasks. In this article, we design an MEC-enabled IoV architecture in which all vehicles and MEC servers act as offloading nodes. Mobile offloading nodes (i.e., vehicles) and fixed offloading nodes (i.e., MEC servers) cooperatively provide low-latency offloading services through roadside units. We then propose a task offloading scheme with task classification and offloading node selection (TO-TCONS), whose goal is to minimize the total execution time of tasks. In the TO-TCONS scheme, we divide task offloading into a same-region offloading mode and a cross-region offloading mode, based on the delay constraints of the tasks and the travel time of the target vehicle. Moreover, we propose a mobile offloading node selection strategy that evaluates the offloading candidates for each task based on their computing resources and transmission rates. Simulation results demonstrate that the TO-TCONS scheme is indeed capable of reducing the total task execution latency under the delay constraints in MEC-enabled IoV.
2

Zou, Jing, Zhaoxiang Yuan, Peizhe Xin, et al. "Privacy-Friendly Task Offloading for Smart Grid in 6G Satellite–Terrestrial Edge Computing Networks." Electronics 12, no. 16 (2023): 3484. http://dx.doi.org/10.3390/electronics12163484.

Abstract:
By offloading computing tasks to visible satellites for execution, the satellite edge computing architecture effectively addresses the high-delay problem that remote grids (e.g., in mountains and deserts) face when tasks are offloaded to the urban terrestrial cloud (TC). However, existing works are usually limited to offloading tasks in pure satellite networks and make offloading decisions based on predefined models. Additionally, the runtime consumed by offloading decisions is rather high. Furthermore, private information may be maliciously sniffed, since computing tasks are transmitted via vulnerable satellite networks. In this paper, we study the task-offloading problem in satellite–terrestrial edge computing networks, where tasks can be executed by a satellite or by the urban TC. A privacy leakage scenario is described, and we consider preserving privacy by sending extra random dummy tasks to confuse adversaries. Then, the offloading cost with privacy protection taken into account is modeled, and the offloading decision that minimizes this cost is formulated as a mixed-integer programming (MIP) problem. To speed up solving the MIP problem, we propose a deep reinforcement learning-based task-offloading (DRTO) algorithm, in which the offloading location and bandwidth allocation depend only on the current channel states. Simulation results show that the offloading overhead is reduced by 17.5% and 23.6% compared with pure TC computing and pure satellite edge computing (SatEC), while the runtime consumption of DRTO is reduced by at least 42.6%. The dummy tasks are shown to effectively mitigate privacy leakage during offloading.
3

Zhang, Ruipeng, Yanxiang Feng, Yikang Yang, Xiaoling Li, and Hengnian Li. "Dynamic Delay-Sensitive Observation-Data-Processing Task Offloading for Satellite Edge Computing: A Fully-Decentralized Approach." Remote Sensing 16, no. 12 (2024): 2184. http://dx.doi.org/10.3390/rs16122184.

Abstract:
Satellite edge computing (SEC) plays an increasing role in earth observation, due to its global coverage and low-latency computing service. In SEC, it is pivotal to offload diverse observation-data-processing tasks to the appropriate satellites. Nevertheless, due to the sparse intersatellite link (ISL) connections, it is hard to gather complete information from all satellites. Moreover, the dynamic arriving tasks will also influence the obtained offloading assignment. Therefore, one daunting challenge in SEC is achieving optimal offloading assignments with consideration of the dynamic delay-sensitive tasks. In this paper, we formulate task offloading in SEC with delay-sensitive tasks as a mixed-integer linear programming problem, aiming to minimize the weighted sum of deadline violations and energy consumption. Due to the limited ISLs, we propose a fully-decentralized method, called the PI-based task offloading (PITO) algorithm. The PITO operates on each satellite in parallel and only relies on local communication via ISLs. Tasks can be directly offloaded on board without depending on any central server. To further handle the dynamic arriving tasks, we propose a re-offloading mechanism based on the match-up strategy, which reduces the tasks involved and avoids unnecessary insertion attempts by pruning. Finally, extensive experiments demonstrate that PITO outperforms state-of-the-art algorithms when solving task offloading in SEC, and the proposed re-offloading mechanism is significantly more efficient than existing methods.
4

Lv, Dan, Peng Wang, Qubeijian Wang, Yu Ding, Zeyang Han, and Yadong Zhang. "Task Offloading and Resource Optimization Based on Predictive Decision Making in a VIoT System." Electronics 13, no. 12 (2024): 2332. http://dx.doi.org/10.3390/electronics13122332.

Abstract:
With the exploration of next-generation network technology, visual internet of things (VIoT) systems impose significant computational and transmission demands on mobile edge computing systems that handle large amounts of offloaded video data. Visual users offload specific tasks to cloud or edge computing platforms to meet strict real-time requirements. However, the limited scheduling and computational resources available for offloaded tasks constantly undermine the system's reliability and efficiency. This paper proposes a mechanism for task offloading and resource optimization based on predictive perception. First, we propose two LSTM-based decision-making prediction methods. In resource-constrained scenarios, we improve resource utilization by encouraging edge devices to participate in task offloading, ensuring the completion of more latency-sensitive request tasks and enabling predictive decision-making for task offloading. In resource-abundant scenarios, we propose a polynomial-time optimal mechanism for pre-emptive task-offloading decisions, solving a 0–1 knapsack problem over the offloaded tasks to better meet the demands of low-latency tasks when the system's available resources are not constrained. Finally, we provide numerical results to demonstrate the effectiveness of our scheme.
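For readers unfamiliar with the 0–1 knapsack formulation mentioned in this abstract, the sketch below shows a minimal dynamic-programming solution in Python. It is only an illustration, not the authors' mechanism: the task values (e.g., latency saved by offloading), resource demands, and the capacity budget are hypothetical placeholders.

```python
def knapsack_offload(tasks, capacity):
    """0-1 knapsack over offloading candidates.

    tasks: list of (value, demand) pairs, e.g. value = latency saved by
    offloading and demand = edge resource units required (illustrative).
    capacity: total edge resource budget (integer units).
    Returns (best_total_value, set of selected task indices).
    """
    n = len(tasks)
    dp = [0] * (capacity + 1)                 # dp[c] = best value with budget c
    taken = [[False] * (capacity + 1) for _ in range(n)]
    for i, (value, demand) in enumerate(tasks):
        for c in range(capacity, demand - 1, -1):   # descending: each task used once
            if dp[c - demand] + value > dp[c]:
                dp[c] = dp[c - demand] + value
                taken[i][c] = True
    # Backtrack to recover which tasks were offloaded.
    selected, c = set(), capacity
    for i in range(n - 1, -1, -1):
        if taken[i][c]:
            selected.add(i)
            c -= tasks[i][1]
    return dp[capacity], selected

# Toy example: three candidate tasks, edge budget of 10 resource units.
print(knapsack_offload([(6, 4), (5, 3), (7, 6)], 10))
```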
5

Li, Deng, Chengqin Yu, Ying Tan, and Jiaqi Liu. "Optimization Method of Fog Computing High Offloading Service Based on Frame of Reference." Mathematics 12, no. 5 (2024): 621. http://dx.doi.org/10.3390/math12050621.

Abstract:
The cost of offloading tasks is a crucial parameter that influences the task selection of fog nodes. Low-cost tasks can be completed quickly, while high-cost tasks are rarely chosen. Therefore, it is essential to design an effective incentive mechanism to encourage fog nodes to actively participate in high-cost offloading tasks. Current incentive mechanisms generally increase remuneration to enhance the probability of participants selecting high-cost tasks, which inevitably leads to increased platform costs. To improve the likelihood of choosing high-cost tasks, we introduce a frame of reference into fog computing offloading and design a Reference Incentive Mechanism (RIM) by incorporating reference objects. Leveraging the characteristics of the frame of reference, we set an appropriate reference task as the reference point that influences the attraction of offloading tasks to fog nodes and motivates them towards choosing high-cost tasks. Finally, simulation results demonstrate that our proposed mechanism outperforms existing algorithms in enhancing the selection probability of high-cost tasks and improving platform utility.
6

Fu, Shuang, Chenyang Ding, and Peng Jiang. "Computational Offloading of Service Workflow in Mobile Edge Computing." Information 13, no. 7 (2022): 348. http://dx.doi.org/10.3390/info13070348.

Abstract:
Mobile edge computing (MEC) sinks the functions and services of cloud computing to the edge of the network to provide users with storage and computing resources. For workflow tasks, the interdependency and sequence constraints among the tasks make the offloading strategy more complicated. To obtain the optimal offloading and scheduling scheme for workflow tasks and minimize the total energy consumption of the system, a workflow task offloading and scheduling scheme based on an improved genetic algorithm is proposed for an MEC network with multiple users and multiple virtual machines (VMs). First, the system model for offloading and scheduling workflow tasks in a multi-user, multi-VM MEC network is built. Then, the problem of determining the optimal offloading and scheduling scheme that minimizes the total energy consumption of the system while meeting the deadline constraint is formulated. To solve this problem, the improved genetic algorithm is adopted to obtain the optimal offloading and scheduling strategy. Finally, the simulation results show that the proposed scheme achieves lower energy consumption than other benchmark schemes.
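As a rough illustration of how a genetic algorithm can search over workflow offloading assignments of this kind, consider the toy sketch below. The encoding (one gene per task: 0 means run locally, otherwise a VM index) and the `energy`/`deadline_ok` callbacks are assumptions made for illustration; they stand in for, and do not reproduce, the paper's system model or improved GA.

```python
import random

def genetic_offload(num_tasks, num_vms, energy, deadline_ok,
                    pop_size=40, generations=200, mutate_p=0.1):
    """Toy GA over offloading vectors x, where x[i] in {0..num_vms} and
    0 means 'execute task i locally'. `energy(x)` and `deadline_ok(x)`
    are placeholder callbacks for the energy model and deadline check."""
    def fitness(x):
        # Penalize deadline violations heavily so feasible solutions win.
        return -energy(x) - (0 if deadline_ok(x) else 1e9)

    pop = [[random.randint(0, num_vms) for _ in range(num_tasks)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randint(1, num_tasks - 1)  # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(num_tasks):              # random mutation
                if random.random() < mutate_p:
                    child[i] = random.randint(0, num_vms)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Toy usage: local execution costs more energy than offloading in this model.
toy_energy = lambda x: sum(1.0 if v == 0 else 0.4 for v in x)
print(genetic_offload(6, 3, toy_energy, lambda x: True))
```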
7

Liu, Jun, Xiaohui Lian, and Chang Liu. "Research on Task-Oriented Computation Offloading Decision in Space-Air-Ground Integrated Network." Future Internet 13, no. 5 (2021): 128. http://dx.doi.org/10.3390/fi13050128.

Abstract:
In Space–Air–Ground Integrated Networks (SAGIN), computation offloading is a new way to improve the processing efficiency of node tasks and ease the limitations of computing and storage resources. To address the large delay and energy consumption cost of task computation offloading, which is caused by the complex and variable offloading environment and the large number of offloading tasks, a computation offloading decision scheme based on Markov decision processes and Deep Q Networks (DQN) is proposed. First, we select the optimal offloading network based on the movement characteristics of the task offloading process in the network. Then, the task offloading process is transformed into a Markov state transition process to build a model of the computation offloading decision process. Finally, delay and energy consumption weights are introduced into the DQN algorithm to update the computation offloading decision process, and the optimal low-cost offloading decision is obtained according to the task attributes. The simulation results show that, compared with the traditional Lyapunov-based offloading decision scheme and the classical Q-learning algorithm, the delay and energy consumption are reduced by 68.33% and 11.21%, respectively, under equal weights when the offloading task volume exceeds 500 Mbit. Moreover, compared with offloading only to edge nodes or only to backbone nodes of the network, the proposed mixed offloading model can satisfy more than 100 task requests with low energy consumption and low delay. The proposed computation offloading decision scheme can therefore effectively reduce the delay and energy consumption of task computation offloading in the Space–Air–Ground Integrated Network environment, and it can select the optimal offloading sites to execute the tasks according to the characteristics of the tasks themselves.
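The weighted delay/energy cost described in this abstract can be illustrated with a simplified tabular Q-learning update; the paper itself uses a DQN with function approximation, and the state encoding, weights, and numbers below are placeholders rather than the authors' model.

```python
import random
from collections import defaultdict

def weighted_cost(delay, energy, w_delay=0.5, w_energy=0.5):
    # Weighted delay/energy cost; the agent's reward is its negative.
    return w_delay * delay + w_energy * energy

def q_update(Q, state, action, delay, energy, next_state, actions,
             alpha=0.1, gamma=0.9):
    """One temporal-difference update toward minimizing the weighted cost.
    Q is a nested dict acting as a table; a DQN would replace this table
    with a neural network, but the update target has the same shape."""
    reward = -weighted_cost(delay, energy)
    best_next = max(Q[next_state][a] for a in actions)
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

# Hypothetical usage with made-up state tuples and per-step costs.
Q = defaultdict(lambda: defaultdict(float))
actions = ["local", "edge", "backbone"]
q_update(Q, state=("task", 500), action="edge", delay=0.8, energy=0.3,
         next_state=("task", 501), actions=actions)
print(dict(Q[("task", 500)]))
```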
8

Wu, Qiong, Hongmei Ge, Qiang Fan, Wei Yin, Bo Chang, and Guilu Wu. "Efficient Task Offloading for 802.11p-Based Cloud-Aware Mobile Fog Computing System in Vehicular Networks." Wireless Communications and Mobile Computing 2020 (September 9, 2020): 1–12. http://dx.doi.org/10.1155/2020/8816090.

Abstract:
Various emerging vehicular applications, such as autonomous driving and safety early warning, are used to improve traffic safety and ensure passenger comfort. Completing these applications necessitates significant computational resources to perform enormous latency-sensitive/non-latency-sensitive and computation-intensive tasks. It is hard for vehicles to satisfy the computation requirements of these applications due to the limited computational capability of the on-board computer. To solve this problem, many works have proposed efficient task offloading schemes in computing paradigms such as mobile fog computing (MFC) for the vehicular network. In the MFC, vehicles adopt the IEEE 802.11p protocol to transmit tasks. Under IEEE 802.11p, tasks can be divided into high priority and low priority according to their delay requirements. However, no existing task offloading work takes into account the different priorities of tasks transmitted by different access categories (ACs) of IEEE 802.11p. In this paper, we propose an efficient task offloading strategy to maximize the long-term expected system reward in terms of reducing the execution time of tasks. Specifically, we jointly consider the impact of the priorities of tasks transmitted by different ACs, the mobility of vehicles, and the arrival/departure of computing tasks, and then transform the offloading problem into a semi-Markov decision process (SMDP) model. Afterwards, we adopt the relative value iteration algorithm to solve the SMDP model and find the optimal task offloading strategy. Finally, we evaluate the performance of the proposed scheme by extensive experiments. Numerical results indicate that the proposed offloading strategy performs well compared to the greedy algorithm.
9

Li, Xianwei, and Baoliu Ye. "Latency-Aware Computation Offloading for 5G Networks in Edge Computing." Security and Communication Networks 2021 (September 22, 2021): 1–15. http://dx.doi.org/10.1155/2021/8800234.

Abstract:
With the development of the Internet of Things, massive computation-intensive tasks are generated by mobile devices whose limited computing and storage capacity leads to a poor quality of service. Edge computing, as an effective computing paradigm, was proposed for efficient and real-time data processing by providing computing resources at the edge of the network. The deployment of 5G promises to speed up data transmission but also further increases the number of tasks to be offloaded. However, how to transfer data or tasks to the edge servers in 5G for processing with high response efficiency remains a challenge. In this paper, a latency-aware computation offloading method for 5G networks is proposed. First, the latency and energy consumption models of edge computation offloading in 5G are defined. Then a fine-grained computation offloading method is employed to reduce the overall completion time of the tasks. The approach is further extended to solve the multiuser computation offloading problem. To verify the effectiveness of the proposed method, extensive simulation experiments are conducted. The results show that the proposed offloading method can effectively reduce the execution latency of the tasks.
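As context for the latency and energy models mentioned in this abstract, a generic textbook-style formulation of local versus offloaded execution is sketched below; the symbols are assumptions for illustration and need not match the paper's exact notation.

```latex
% Assumed symbols: d_i task input size (bits), c_i required CPU cycles,
% r_i uplink rate, f_l local CPU frequency, f_e edge CPU frequency,
% p_l local computing power, p_t transmit power.
\begin{align}
  T_i^{\mathrm{local}} &= \frac{c_i}{f_l}, &
  E_i^{\mathrm{local}} &= p_l \, \frac{c_i}{f_l}, \\
  T_i^{\mathrm{off}}   &= \frac{d_i}{r_i} + \frac{c_i}{f_e}, &
  E_i^{\mathrm{off}}   &= p_t \, \frac{d_i}{r_i}.
\end{align}
```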
10

Mohammed, Mostafa Abdulghafoor, Aya Ahkam Kamil, Raed Abdulkareem Hasan, and Nicolae Tapus. "An Effective Context Sensitive Offloading System for Mobile Cloud Environments using Support Value-based Classification." Scalable Computing: Practice and Experience 20, no. 4 (2019): 687–98. http://dx.doi.org/10.12694/scpe.v20i4.1570.

Abstract:
Mobile cloud computing (MCC) has drawn significant research attention recently due to the popularity and capability of portable devices. This paper presents an MCC offloading system based on internet offloading choices. The system conserves battery life and reduces execution time. The proposed effective context-sensitive offloading approach using support value-based classification proceeds in several steps. Initially, the context data of the input tasks are extracted through the energy consumption, cost, execution, and communication models and then stored. Then, the support value-based classification approach classifies the tasks based on the context information. This classification creates information about the tasks, and finally a decision is made at the right time to achieve better offloading. The results indicate that the presented offloading framework can choose suitable cloud resources depending on the various contexts of the mobile devices and achieve significant performance enhancement.
11

Eang, Chanthol, Seyha Ros, Seungwoo Kang, et al. "Offloading Decision and Resource Allocation in Mobile Edge Computing for Cost and Latency Efficiencies in Real-Time IoT." Electronics 13, no. 7 (2024): 1218. http://dx.doi.org/10.3390/electronics13071218.

Abstract:
Internet of Things (IoT) devices can integrate with applications requiring intensive contextual data processing, intelligent vehicle control, healthcare remote sensing, VR, data mining, traffic management, and interactive applications. However, there are computationally intensive tasks that need to be completed quickly within the time constraints of IoT devices. To address this challenge, researchers have proposed computation offloading, where computing tasks are sent to edge servers instead of being executed locally on user devices. This approach involves using edge servers located near users in cellular network base stations, an arrangement also known as Mobile Edge Computing (MEC). The goal is to offload tasks to edge servers while optimizing both latency and energy consumption. The main objective of this paper is to design an algorithm for time- and energy-optimized task-offloading decision-making in MEC environments. To this end, we develop a Lagrange Duality Resource Optimization Algorithm (LDROA) to jointly optimize the offloading decision and the resource allocation for each task, i.e., whether to execute it locally or offload it to an edge server. The LDROA technique produces superior simulation outcomes in terms of task offloading, with improved computation latency and cost compared to conventional methods such as Random Offloading, Load Balancing, and the Greedy Latency Offloading scheme.
12

Liu, Zemin, Na Zhou, Yan Wang, Jian-Tao Zhou, Haotian Zhang, and Gang Xu. "An Effective Task Offloading Method for Separable Complex Mobile Terminal Tasks." Wireless Communications and Mobile Computing 2022 (February 14, 2022): 1–16. http://dx.doi.org/10.1155/2022/3700135.

Abstract:
Due to limited energy and computing power of IoT devices, they cannot handle complex tasks. Edge computing technology effectively solves the requirements of computing power and response delay for complex tasks in devices by migrating computing power to the vicinity of IoT devices. For a separable complex task on IoT terminal, we focus on the effects of data distribution, dependencies, and offloading sequence of subtasks on its total delay when it is offloaded to edge servers. Through comprehensively considering these factors, we study the slicing and choreographing method during the offloading process of a complex task. Firstly, a task slicing method based on hierarchical clustering is presented and an improved hierarchical clustering algorithm is used to obtain the optimal solution of task partitioning. Secondly, a task choreographing method based on overlapping the longest path is presented. Finally, through the simulation experiments of complex tasks with different structures and loads, the effectiveness of our method is verified.
13

Sisodiya, Pankaj Singh, and Vijay Bhandari. "Dynamic Resource Allocation in Industrial Internet of Things (IIoT) using Machine Learning Approaches." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 10s (2023): 530–40. http://dx.doi.org/10.17762/ijritcc.v11i10s.7691.

Abstract:
In today's era of rapid smart equipment development and the Industrial Revolution, the application scenarios for Internet of Things (IoT) technology are expanding widely. The combination of IoT and industrial manufacturing systems gives rise to the Industrial IoT (IIoT). However, due to resource limitations such as computational units and battery capacity in IIoT devices (IIEs), it is crucial to execute computationally intensive tasks efficiently. The dynamic and continuous generation of tasks poses a significant challenge to managing the limited resources in the IIoT environment. This paper proposes a collaborative approach for optimal offloading and resource allocation of highly sensitive industrial IoT tasks. First, the computation-intensive IIoT tasks are transformed into a directed acyclic graph. Then, task offloading is treated as an optimization problem, taking into account the processor resource and energy consumption models of the offloading scheme. Lastly, a dynamic resource allocation approach is introduced to allocate computing resources to the edge-cloud server for the execution of computation-intensive tasks. The proposed joint offloading and scheduling (JOS) algorithm creates its DAG and prepares an offloading queue. This queue is designed using collaborative Q-learning-based reinforcement learning, and optimal resources are allocated to JOS for executing the tasks in the offloading queue; a machine learning approach is used to predict and allocate these resources. The paper compares conventional and machine learning-based resource allocation methods; the machine learning approach performs better in terms of response time, delay, and energy consumption. The results also show that energy usage increases with task size and that response time increases with the number of users. Among the algorithms compared, JOS has the lowest waiting time, followed by DQN, while Q-learning performs the worst. Based on these findings, the paper recommends adopting the machine learning approach, specifically the JOS algorithm, for joint offloading and resource allocation.
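To make the DAG-to-offloading-queue step concrete, here is a minimal topological-sort sketch (Kahn's algorithm). It only illustrates dependency-respecting queue construction; the JOS algorithm described above additionally applies collaborative Q-learning for resource allocation, which is not reproduced here.

```python
from collections import deque

def offloading_queue(num_tasks, edges):
    """Order DAG tasks into a dependency-respecting offloading queue.
    `edges` holds (u, v) pairs meaning task v needs the output of task u."""
    succ = [[] for _ in range(num_tasks)]
    indeg = [0] * num_tasks
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
    ready = deque(i for i in range(num_tasks) if indeg[i] == 0)
    queue = []
    while ready:
        u = ready.popleft()
        queue.append(u)                  # task u can now be offloaded/executed
        for v in succ[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    return queue

# Diamond-shaped toy workflow: 0 -> {1, 2} -> 3
print(offloading_queue(4, [(0, 1), (0, 2), (1, 3), (2, 3)]))
```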
14

Han, Li, Yanru Bin, Shuaijie Zhu, and Yanpei Liu. "Dynamic Selection Slicing-Based Offloading Algorithm for In-Vehicle Tasks in Mobile Edge Computing." Electronics 12, no. 12 (2023): 2708. http://dx.doi.org/10.3390/electronics12122708.

Abstract:
With the surge in tasks for in-vehicle terminals, the resulting network congestion and time delay cannot meet the service needs of users. Offloading algorithms are introduced to handle vehicular tasks, which greatly alleviates the above problems. In this paper, the dependencies of vehicular tasks are represented as directed acyclic graphs, and network slices are integrated within the edge server. The Dynamic Selection Slicing-based Offloading Algorithm for in-vehicle tasks in MEC (DSSO) is proposed. First, a computational offloading model for vehicular tasks is established based on the available resources, wireless channel state, and vehicle loading level. Second, the solution of the model is transformed into a Markov decision process, and the combination of the DQN algorithm and the Dueling Network from deep reinforcement learning is used to select appropriate slices and dynamically update the optimal offloading strategy for in-vehicle tasks within the effective interval. Finally, an experimental environment is set up to compare the DSSO algorithm with LOCAL, MINCO, and DJROM; the results show that the system energy consumption of the DSSO algorithm is reduced by 10.31%, the time latency is decreased by 22.75%, and the ratio of dropped tasks is decreased by 28.71%.
15

Wang, Yuchen, Zishan Huang, Zhongcheng Wei, and Jijun Zhao. "MADDPG-Based Offloading Strategy for Timing-Dependent Tasks in Edge Computing." Future Internet 16, no. 6 (2024): 181. http://dx.doi.org/10.3390/fi16060181.

Abstract:
With the increasing popularity of the Internet of Things (IoT), the proliferation of computation-intensive and timing-dependent applications has brought serious load pressure on terrestrial networks. To address the computing resource conflicts and long response delays caused by concurrent service requests from multiple users, this paper proposes an improved edge computing offloading scheme for timing-dependent tasks based on Multi-Agent Deep Deterministic Policy Gradient (MADDPG), which aims to shorten the offloading delay and improve resource utilization by means of resource prediction and collaboration among multiple agents. First, to coordinate global computing resources, a gated recurrent unit is utilized to predict the next computing resource requirements of the timing-dependent tasks from historical information. Second, the predicted information, the historical offloading decisions, and the current state are used as inputs, and the training process of the reinforcement learning algorithm is improved to yield a task-offloading algorithm based on MADDPG. The simulation results show that the algorithm reduces the response latency by 6.7% and improves resource utilization by 30.6% compared with the suboptimal benchmark algorithm, and that it requires nearly 500 fewer training rounds during the learning process, which effectively improves the timeliness of the offloading strategy.
16

Wang, Mingzhi, Tao Wu, Xiaochen Fan, Penghao Sun, Yuben Qu, and Panlong Yang. "TPD: Temporal and Positional Computation Offloading with Dynamic and Dependent Tasks." Wireless Communications and Mobile Computing 2021 (November 10, 2021): 1–15. http://dx.doi.org/10.1155/2021/3877285.

Abstract:
With the rapid development of wireless communication technologies and the proliferation of the urban Internet of Things (IoT), the paradigm of mobile computing has been shifting from centralized clouds to edge networks. As an enabling paradigm for computation-intensive and latency-sensitive computation tasks, mobile edge computing (MEC) can provide in-proximity computing services for resource-constrained IoT devices. Nevertheless, it remains challenging to optimize computation offloading from IoT devices to heterogeneous edge servers, considering complex intertask dependency, limited bandwidth, and dynamic networks. In this paper, we address the above challenges in MEC with TPD, that is, temporal and positional computation offloading with dynamic-dependent tasks. In particular, we investigate channel interference and intertask dependency by considering the position and moment of computation offloading simultaneously. We define a novel criterion for assessing the criticality of each task, and we identify the critical path based on a directed acyclic graph of all tasks. Furthermore, we propose an online algorithm for finding the optimal computation offloading strategy with intertask dependency and adjusting the strategy in real-time when facing dynamic tasks. Extensive simulation results show that our algorithm reduces significantly the time to complete all tasks by 30–60% in different scenarios and takes less time to adjust the offloading strategy in dynamic MEC systems.
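A common way to compute the kind of per-task criticality score described in this abstract is the longest remaining path in the task DAG; the sketch below is a generic illustration with assumed per-task cost estimates, not the paper's exact criterion.

```python
import functools

def critical_path_lengths(succ, cost):
    """Longest remaining path from each task to a sink in a DAG, a common
    criticality score. `succ[i]` lists successors of task i; `cost[i]` is
    an assumed per-task execution estimate (illustrative units)."""
    @functools.lru_cache(maxsize=None)
    def longest_from(i):
        tails = [longest_from(j) for j in succ[i]]
        return cost[i] + (max(tails) if tails else 0)
    return {i: longest_from(i) for i in range(len(cost))}

# Tasks 0..3 with a diamond dependency; a higher score means more critical.
succ = {0: (1, 2), 1: (3,), 2: (3,), 3: ()}
print(critical_path_lengths(succ, cost=[2, 4, 1, 3]))
```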
17

Hossen, Md Rajib, and Mohammad A. Islam. "Mobile Task Offloading Under Unreliable Edge Performance." ACM SIGMETRICS Performance Evaluation Review 48, no. 4 (2021): 29–32. http://dx.doi.org/10.1145/3466826.3466838.

Abstract:
Offloading resource-hungry tasks from mobile devices to an edge server has been explored recently to improve task completion time as well as save battery energy. The low-latency computing resources from edge servers are a perfect companion to realize such task offloading. However, edge servers may suffer from unreliable performance due to rapid workload variation and reliance on intermittent renewable energy. Further, batteries in mobile devices make online optimum offloading decisions challenging, since they intertwine offloading decisions across different tasks. In this paper, we propose a deep Q-learning based task offloading solution, DeepTO, for online task offloading. DeepTO learns edge server performance in a model-free manner and takes the future battery needs of the mobile device into account. Using a simulation-based evaluation, we show that DeepTO can perform close to the optimum solution that has complete future knowledge.
18

Wang, Yanyan, Lin Wang, Ruijuan Zheng, Xuhui Zhao, and Muhua Liu. "Latency-Optimal Computational Offloading Strategy for Sensitive Tasks in Smart Homes." Sensors 21, no. 7 (2021): 2347. http://dx.doi.org/10.3390/s21072347.

Abstract:
In smart homes, the computational offloading technology of edge cloud computing (ECC) can effectively deal with the large amount of computation generated by smart devices. In this paper, we propose a computational offloading strategy for minimizing delay based on the back-pressure algorithm (BMDCO) to get the offloading decision and the number of tasks that can be offloaded. Specifically, we first construct a system with multiple local smart device task queues and multiple edge processor task queues. Then, we formulate an offloading strategy to minimize the queue length of tasks in each time slot by minimizing the Lyapunov drift optimization problem, so as to realize the stability of queues and improve the offloading performance. In addition, we give a theoretical analysis on the stability of the BMDCO algorithm by deducing the upper bound of all queues in this system. The simulation results show the stability of the proposed algorithm, and demonstrate that the BMDCO algorithm is superior to other alternatives. Compared with other algorithms, this algorithm can effectively reduce the computation delay.
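The queue-differential intuition behind Lyapunov-drift and back-pressure offloading can be illustrated with the simplified one-slot decision rule below; the queue values and capacity are hypothetical, and this is not the exact BMDCO algorithm from the paper.

```python
def backpressure_offload(device_queues, edge_queues, capacity):
    """One time slot of a back-pressure-style decision: each device offloads
    to the edge processor with the largest positive queue-backlog difference,
    moving at most `capacity` tasks. Simplified illustration only."""
    decisions = {}
    for dev, q_dev in device_queues.items():
        # Shortest edge queue gives the largest backlog differential.
        edge, q_edge = min(edge_queues.items(), key=lambda kv: kv[1])
        if q_dev - q_edge > 0:
            moved = min(capacity, q_dev)
            device_queues[dev] -= moved
            edge_queues[edge] += moved
            decisions[dev] = (edge, moved)
    return decisions

# Hypothetical backlogs: device d1 is congested, d2 is not.
devices = {"d1": 8, "d2": 2}
edges = {"e1": 3, "e2": 5}
print(backpressure_offload(devices, edges, capacity=4), devices, edges)
```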
19

Choi, HeeSeok, Heonchang Yu, and EunYoung Lee. "Latency-Classification-Based Deadline-Aware Task Offloading Algorithm in Mobile Edge Computing Environments." Applied Sciences 9, no. 21 (2019): 4696. http://dx.doi.org/10.3390/app9214696.

Abstract:
In this study, we consider an edge cloud server in which a lightweight server is placed near a user device for the rapid processing and storage of large amounts of data. For the edge cloud server, we propose a latency classification algorithm based on deadlines and urgency levels (i.e., latency-sensitive and latency-tolerant). Furthermore, we design a task offloading algorithm to reduce the execution time of latency-sensitive tasks without violating deadlines. Unlike prior studies on task offloading or scheduling that have applied no deadlines or task-based deadlines, we focus on a comprehensive deadline-aware task scheduling scheme that performs task offloading by considering the real-time properties of latency-sensitive tasks. Specifically, when a task is offloaded to the edge cloud server due to a lack of resources on the user device, services could be provided without delay by offloading latency-tolerant tasks first, which are presumed to perform relatively important functions. When offloading a task, the type of the task, weight of the task, task size, estimated execution time, and offloading time are considered. By distributing and offloading latency-sensitive tasks as much as possible, the performance degradation of the system can be minimized. Based on experimental performance evaluations, we prove that our latency-based task offloading algorithm achieves a significant execution time reduction compared to previous solutions without incurring deadline violations. Unlike existing research, we applied delays with various network types in the MEC (mobile edge computing) environment for verification, and the experimental result was measured not only by the total response time but also by the cause of the task failure rate.
20

Zha, Zhiyong, Yifei Yang, Yongjun Xia, et al. "Energy-Efficient Joint Partitioning and Offloading for Delay-Sensitive CNN Inference in Edge Computing." Applied Sciences 14, no. 19 (2024): 8656. http://dx.doi.org/10.3390/app14198656.

Abstract:
With the development of deep learning foundation model technology, the types of computing tasks have become more complex, and the computing resources and memory required for these tasks have become more substantial. Since task offloading to cloud servers has long been known to have drawbacks such as high communication delay and low security, task offloading is mostly carried out on the edge servers of the Internet of Things (IoT) network. However, edge servers in IoT networks are characterized by tight resource constraints and often by dynamic data sources. How to offload deep learning foundation model services onto edge servers has therefore become a new research topic. Existing task offloading methods either cannot meet the requirements of massive CNN architectures or require substantial communication overhead, leading to significant delays and energy consumption. In this paper, we propose a parallel partitioning method based on matrix convolution that partitions large CNN inference tasks into subtasks that can be executed in parallel, meeting the constraints of edge devices with limited hardware resources. Then, we model and mathematically express the task offloading problem. For a multi-edge-server, multi-user, and multi-task edge-end system, we propose a task-offloading method that balances the tradeoff between delay and energy consumption; it adopts a greedy algorithm to optimize task-offloading decisions and terminal device transmission power to maximize the benefits of task offloading. Finally, extensive experiments verify the effectiveness of our algorithm.
21

Zhu, Chunhua, Chong Liu, Hai Zhu, and Jingtao Li. "Cloud–Fog Collaborative Computing Based Task Offloading Strategy in Internet of Vehicles." Electronics 13, no. 12 (2024): 2355. http://dx.doi.org/10.3390/electronics13122355.

Abstract:
Vehicle terminals in the mobile internet of vehicles struggle to meet the requirements of computation-intensive and delay-sensitive tasks, and vehicle mobility causes dynamic changes in vehicle-to-vehicle (V2V) communication links, which results in lower task offloading quality. To solve these problems, a new task offloading strategy based on cloud–fog collaborative computing is proposed. First, a V2V-assisted task forwarding mechanism is introduced under cloud–fog collaborative computing, and a forwarding-vehicle prediction algorithm based on environmental information is designed; then, considering the parallel computing relationship of tasks across computing nodes, a task offloading cost model is constructed with the goal of minimizing delay and energy consumption; finally, a multi-strategy improved genetic algorithm (MSI-GA) is proposed to solve the above task offloading optimization problem, which uses a chaotic sequence to initialize the population, comprehensively considers the influence factors to optimize the adaptive operator, and introduces Gaussian perturbation to enhance the local optimization ability of the algorithm. The simulation experiments show that, compared with existing strategies, the proposed task offloading strategy has a lower task offloading cost across different numbers of tasks and fog nodes; additionally, the introduced V2V auxiliary task forwarding mechanism can reduce the forwarding load of fog nodes by using cooperative vehicles to forward tasks.
22

Huang, Ruize, Xiaolan Xie, and Qiang Guo. "Multi-Queue-Based Offloading Strategy for Deep Reinforcement Learning Tasks." Electronics 13, no. 12 (2024): 2307. http://dx.doi.org/10.3390/electronics13122307.

Abstract:
With the boom in mobile internet services, computationally intensive applications such as virtual and augmented reality have emerged. Mobile edge computing (MEC) technology allows mobile devices to offload heavy computational tasks to edge servers, which are located at the edge of the network. This technique is considered an effective approach to help reduce the burden on devices and enable efficient task offloading. This paper addresses a dynamic real-time task-offloading problem within a stochastic multi-user MEC network, focusing on the long-term stability of system energy consumption and energy budget constraints. To solve this problem, a task-offloading strategy with long-term constraints is proposed, optimized through the construction of multiple queues to maintain users' long-term quality of experience (QoE). The problem is decoupled using Lyapunov theory into a single time-slot problem, modeled as a Markov decision process (MDP). A deep reinforcement learning (DRL)-based LMADDPG algorithm is introduced to solve the task-offloading decision. Finally, experiments are conducted under the constraints of a limited MEC energy budget and the need to maintain the long-term energy stability of the system. The results from simulation experiments demonstrate that the algorithm outperforms other baseline algorithms in terms of task-offloading decisions.
23

Dong, Li, Wenji He, and Haipeng Yao. "Task Offloading and Resource Allocation for Tasks with Varied Requirements in Mobile Edge Computing Networks." Electronics 12, no. 2 (2023): 366. http://dx.doi.org/10.3390/electronics12020366.

Abstract:
Edge computing enables devices with insufficient computing resources to offload their tasks to the edge for computing, to improve the service experience. Some existing work has noticed that the data size of offloaded tasks played a role in resource allocation shares but has not delved further into how the data size of an offloaded task affects resource allocation. Among offloaded tasks, those with larger data sizes often consume a larger share of system resources, potentially even monopolizing system resources if the data size is large enough. As a result, tasks with small or regular sizes lose the opportunity to be offloaded to the edge due to their limited data size. To address this issue, we introduce the concept of an emergency factor to penalize tasks with immense sizes for monopolizing system resources, while supporting tasks with small sizes to contend for system resources. The joint offloading decision and resource allocation problem is formulated as a mixed-integer nonlinear programming (MINLP) problem and further decomposed into an offloading decision subproblem and a resource allocation subproblem. Using the KKT conditions, we design a bisection search-based algorithm to find the optimal resource allocation scheme. Additionally, we propose a linear-search-based coordinate descent (CD) algorithm to identify the optimal offloading decision. Numerical results show that our proposed algorithm converges to the optimal scheme (for the minimal delay) when tasks are of regular size. Moreover, when tasks of immense, small and regular sizes coexist in the system, our scheme can exclude tasks of immense size from edge resource allocation, while still enabling tasks of small size to be offloaded.
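A bisection search of the kind described in this abstract typically tunes a Lagrange multiplier until a KKT-style allocation exactly meets the edge capacity. The closed-form allocation used in the sketch below is an assumed illustrative stationarity condition, not necessarily the one derived in the paper.

```python
import math

def bisection_allocate(weights, cycles, total_f, iters=100):
    """Bisection on the multiplier lam so that the illustrative allocation
    f_i = sqrt(w_i * c_i / lam) uses exactly the edge capacity total_f."""
    def used(lam):
        return sum(math.sqrt(w * c / lam) for w, c in zip(weights, cycles))

    lo, hi = 1e-12, 1e12          # bracket: used(lo) is huge, used(hi) is tiny
    for _ in range(iters):
        lam = (lo + hi) / 2
        if used(lam) > total_f:   # over capacity -> raise the multiplier
            lo = lam
        else:
            hi = lam
    lam = (lo + hi) / 2
    return [math.sqrt(w * c / lam) for w, c in zip(weights, cycles)]

# Hypothetical numbers: three tasks sharing a 3 GHz edge CPU budget.
alloc = bisection_allocate(weights=[1.0, 2.0, 0.5],
                           cycles=[4e8, 8e8, 2e8], total_f=3e9)
print(alloc, sum(alloc))
```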
24

Abinaya, C., and E. Ramaraj. "Offloading Scheme for Cloudlets Computation Tasks." International Journal of Computer Sciences and Engineering 6, no. 8 (2018): 228–32. http://dx.doi.org/10.26438/ijcse/v6i8.228232.

25

Yao, Bingxin, Bin Wu, Siyun Wu, Yin Ji, Danggui Chen, and Limin Liu. "An Offloading Algorithm based on Markov Decision Process in Mobile Edge Computing System." International Journal of Circuits, Systems and Signal Processing 16 (January 5, 2022): 115–21. http://dx.doi.org/10.46300/9106.2022.16.15.

Abstract:
In this paper, an offloading algorithm based on Markov Decision Process (MDP) is proposed to solve the multi-objective offloading decision problem in Mobile Edge Computing (MEC) system. The feature of the algorithm is that MDP is used to make offloading decision. The number of tasks in the task queue, the number of accessible edge clouds and Signal-Noise-Ratio (SNR) of the wireless channel are taken into account in the state space of the MDP model. The offloading delay and energy consumption are considered to define the value function of the MDP model, i.e. the objective function. To maximize the value function, Value Iteration Algorithm is used to obtain the optimal offloading policy. According to the policy, tasks of mobile terminals (MTs) are offloaded to the edge cloud or central cloud, or executed locally. The simulation results show that the proposed algorithm can effectively reduce the offloading delay and energy consumption.
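For reference, plain value iteration for an offloading MDP looks like the sketch below; the toy states, transition function, and rewards are placeholders rather than the paper's state space (task queue length, reachable edge clouds, channel SNR) or its delay/energy value function.

```python
def value_iteration(states, actions, transition, reward, gamma=0.95, tol=1e-6):
    """Plain value iteration. transition(s, a) -> list of (prob, next_state);
    reward(s, a) -> float. Both are placeholder callbacks for the MDP model."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(
                reward(s, a) + gamma * sum(p * V[s2] for p, s2 in transition(s, a))
                for a in actions
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            break
    # Extract the greedy policy from the converged value function.
    policy = {
        s: max(actions, key=lambda a: reward(s, a)
               + gamma * sum(p * V[s2] for p, s2 in transition(s, a)))
        for s in states
    }
    return V, policy

# Toy 2-state model: offloading empties the queue and costs less per step.
S, A = ["idle", "busy"], ["local", "edge"]
T = lambda s, a: [(1.0, "idle")] if a == "edge" else [(1.0, "busy")]
R = lambda s, a: -1.0 if a == "local" else -0.5
print(value_iteration(S, A, T, R))
```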
26

Valentino, Rico, Woo-Sung Jung, and Young-Bae Ko. "A Design and Simulation of the Opportunistic Computation Offloading with Learning-Based Prediction for Unmanned Aerial Vehicle (UAV) Clustering Networks." Sensors 18, no. 11 (2018): 3751. http://dx.doi.org/10.3390/s18113751.

Abstract:
Drones have recently become extremely popular, especially in military and civilian applications. Examples of drone utilization include reconnaissance, surveillance, and packet delivery. As time has passed, drones’ tasks have become larger and more complex. As a result, swarms or clusters of drones are preferred, because they offer more coverage, flexibility, and reliability. However, drone systems have limited computing power and energy resources, which means that sometimes it is difficult for drones to finish their tasks on schedule. A solution to this is required so that drone clusters can complete their work faster. One possible solution is an offloading scheme between drone clusters. In this study, we propose an opportunistic computational offloading system, which allows for a drone cluster with a high intensity task to borrow computing resources opportunistically from other nearby drone clusters. We design an artificial neural network-based response time prediction module for deciding whether it is faster to finish tasks by offloading them to other drone clusters. The offloading scheme is conducted only if the predicted offloading response time is smaller than the local computing time. Through simulation results, we show that our proposed scheme can decrease the response time of drone clusters through an opportunistic offloading process.
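The offload-only-if-predicted-faster rule from this abstract reduces to a simple comparison once a response-time predictor is available; in the sketch below the predictor is a hypothetical stand-in for the paper's artificial-neural-network module, and all numbers are illustrative.

```python
def should_offload(task_size_mb, predict_offload_time, local_rate_mb_s):
    """Offload only when the predicted remote response time beats the local
    computing time. `predict_offload_time` can be any callable returning
    seconds for a given task size (here, a stand-in for the ANN predictor)."""
    local_time = task_size_mb / local_rate_mb_s
    return predict_offload_time(task_size_mb) < local_time

# Hypothetical predictor: fixed network overhead plus a faster remote cluster.
toy_predictor = lambda size_mb: 0.8 + size_mb / 40.0
print(should_offload(20, toy_predictor, local_rate_mb_s=10))  # True: 1.3 s < 2.0 s
```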
27

Xu, Qiliang, Guo Zhang, and Jianping Wang. "Research on Cloud-Edge-End Collaborative Computing Offloading Strategy in the Internet of Vehicles Based on the M-TSA Algorithm." Sensors 23, no. 10 (2023): 4682. http://dx.doi.org/10.3390/s23104682.

Abstract:
In the Internet of Vehicles scenario, the in-vehicle terminal cannot meet the delay and energy consumption requirements of its computing tasks; introducing cloud computing and MEC is an effective way to solve this problem. However, the in-vehicle terminal has strict task processing delay requirements, uploading computing tasks to the cloud incurs high delay, and the MEC server has limited computing resources, which increases the task processing delay when there are more tasks. To solve these problems, a vehicle computing network based on cloud-edge-end collaborative computing is proposed, in which cloud servers, edge servers, service vehicles, and the task vehicles themselves can provide computing services. A model of the cloud-edge-end collaborative computing system for the Internet of Vehicles is constructed, and the computational offloading strategy problem is formulated. Then, a computational offloading strategy based on the M-TSA algorithm, combined with task prioritization and computational offloading node prediction, is proposed. Finally, comparative experiments are conducted on task instances simulating real road vehicle conditions to demonstrate the superiority of our network, where our offloading strategy significantly improves the utility of task offloading and reduces offloading delay and energy consumption.
28

Hasan, Raed A., Khalil Yasin, Parween R. Kareem, et al. "Energy-Efficient Task Offloading and Resource Allocation in Mobile Cloud Computing Using Edge-AI and Network Virtualization." KHWARIZMIA 2025 (July 4, 2025): 42–49. https://doi.org/10.70470/khwarizmia/2025/005.

Abstract:
In the emerging landscape of Mobile Cloud Computing, energy efficiency and resource optimization are vital challenges, since more and more of the tasks executed for mobile devices rely on cloud and edge resources. This paper proposes a new energy-efficient task offloading and resource allocation framework with Edge-AI-enabled network virtualization for the dynamic management of computational tasks in mobile cloud environments. The framework makes real-time task-offloading decisions by comparing the energy consumption of local execution with that of edge processing and by examining the resulting performance gains. It then grades tasks for offloading based on the energy savings and the availability of edge resources. Network virtualization optimizes edge resource use by allocating resources according to task demand, reducing latency and increasing processing efficiency. The simulation results show that our approach substantially reduces energy consumption on mobile devices while maintaining low latency and high task success rates, outperforming cloud-only offloading, static edge computing methods, and traditional dynamic programming.
29

Song, Qiao-Feng, Jun Wang, Ji-Xu Gao, and Jia-Hao Liu. "A Multi-edge Collaborative Computational Offloading Scheme Based on Game Theory." 電腦學刊 34, no. 5 (2023): 149–66. http://dx.doi.org/10.53106/199115992023103405011.

Abstract:
A common approach in existing collaborative edge computing offloading schemes is to partition tasks into independent sub-tasks and offload them to participating servers. However, in practice, these sub-tasks often have dependencies, resulting in waiting time. To address this problem, we propose a collaborative computation offloading scheme based on Stackelberg game theory and graph theory (CCOSGG). First, we introduce a task clustering method based on graph theory, which uses task reconstruction and a graph partition algorithm to cluster strongly related sub-tasks into appropriately sized clusters. Second, we use Stackelberg game theory to introduce an incentive mechanism that encourages remote edges to participate in the collaborative offloading. Finally, simulation results demonstrate that the proposed scheme can minimize latency and energy consumption at different network scales.
30

Wu, Jingyan, Jiawei Zhang, Yuming Xiao, and Yuefeng Ji. "Cooperative Offloading in D2D-Enabled Three-Tier MEC Networks for IoT." Wireless Communications and Mobile Computing 2021 (August 16, 2021): 1–13. http://dx.doi.org/10.1155/2021/9977700.

Abstract:
Mobile/multi-access edge computing (MEC) takes advantage of its proximity to end-users, which greatly reduces the transmission delay of task offloading compared to mobile cloud computing (MCC). Offloading computing tasks to edge servers with a certain amount of computing ability can also reduce the computing delay. Meanwhile, device-to-device (D2D) cooperation can help to process small-scale delay-sensitive tasks to further decrease the delay of tasks. But where to offload the computing tasks is a critical issue. In this article, we integrate MEC and D2D cooperation techniques to optimize the offloading decisions and resource allocation problem in D2D-enabled three-tier MEC networks for Internet of Things (IoT). Mobile devices (MDs), edge clouds, and central cloud data center (DC) make up these three-tier MEC networks. They cooperate with each other to finish the offloading tasks. Each task can be processed by MD itself or its neighboring MDs at device tier, by edge servers at edge tier, or by remote cloud servers at cloud tier. Under the maximum energy cost constraints, we formulate the cooperative offloading problem into a mixed-integer nonlinear problem aiming to minimize the total delay of tasks. We utilize the alternating direction method of multipliers (ADMM) to speed up the computing process. The proposed scheme decomposes the complicated problem into 3 smaller subproblems, which are solved in a parallel fashion. Finally, we compare our proposal with D2D and MEC networks in simulations. Numerical results validate that the proposed D2D-enabled MEC networks for IoT can significantly enhance the computing abilities and reduce the total delay of tasks.
31

Zheng, Huiji, Sicong Yu, and Xiaolong Cui. "GMOM: An Offloading Method of Dependent Tasks Based on Deep Reinforcement Learning." Mobile Information Systems 2022 (November 8, 2022): 1–13. http://dx.doi.org/10.1155/2022/9587040.

Abstract:
Mobile edge computing (MEC) is considered as an effective solution to delay-sensitive services, and computing offloading, the central technology in MEC, can expand the capacity of resource-constrained mobile terminals (MTs). However, because of the interdependency among applications, and the dynamically changing and complex nature of the MEC environment, offloading decision making turns out to be an NP-hard problem. In the present work, a graph mapping offloading model (GMOM) based on deep reinforcement learning (DRL) is proposed to address the offloading problem of dependent tasks in MEC. Specifically, the MT application is first modeled into a directed acyclic graph (DAG), which is called a DAG task. Then, the DAG task is transformed into a subtask sequence vector according to the predefined order of priorities to facilitate processing. Finally, the sequence vector is input into an encoding-decoding framework based on the attention mechanism to obtain the offloading strategy vector. The GMOM is trained using the advanced proximal policy optimization (PPO) algorithm to minimize the comprehensive cost function including delay and energy consumption. Experiments show that the proposed model has good decision-making performance, with verified effectiveness in convergence, delay, and energy consumption.
32

Anoop, S., and J. Amar Pratap Singh. "A LSTM Approach for Secure Energy Efficient Computational Offloading in Mobile Edge Computing." Webology 18, no. 2 (2021): 856–74. http://dx.doi.org/10.14704/web/v18i2/web18359.

Abstract:
Mobile technologies are evolving rapidly in every aspect, utilizing every available resource in the form of applications that create advancements in day-to-day life. These technological advancements overcome traditional computing methods, which increase communication delay and energy consumption for mobile devices. Mobile Edge Computing is emerging as a way to address these limitations and provide better service to end users. This paper proposes a secure and energy-efficient computational offloading scheme using LSTM. The prediction of the computational tasks is done using the LSTM algorithm. A computation offloading strategy based on task prediction, together with task migration for edge cloud scheduling based on a reinforcement learning routing algorithm, helps to optimize the edge computing offloading model. Experimental results show that our proposed Intelligent Energy Efficient Offloading Algorithm (IEEOA) can efficiently decrease total task delay and energy consumption, and brings much security to the devices due to the firewall nature of LSTM.
33

Kim, Bongjae, Joonhyouk Jang, Jinman Jung, Jungkyu Han, Junyoung Heo, and Hong Min. "A Computation Offloading Scheme for UAV-Edge Cloud Computing Environments Considering Energy Consumption Fairness." Drones 7, no. 2 (2023): 139. http://dx.doi.org/10.3390/drones7020139.

Abstract:
A heterogeneous computing environment has been widely used with UAVs, edge servers, and cloud servers operating in tandem. Various applications can be allocated and linked to the computing nodes that constitute this heterogeneous computing environment. Efficiently offloading and allocating computational tasks is essential, especially in these heterogeneous computing environments with differentials in processing power, network bandwidth, and latency. In particular, UAVs, such as drones, operate using minimal battery power. Therefore, energy consumption must be considered when offloading and allocating computational tasks. This study proposed an energy consumption fairness-aware computational offloading scheme based on a genetic algorithm (GA). The proposed method minimized the differences in energy consumption by allocating and offloading tasks evenly among drones. Based on performance evaluations, our scheme improved the efficiency of energy consumption fairness, as compared to previous approaches, such as Liu et al.’s scheme. We showed that energy consumption fairness was improved by up to 120%.
34

Gong, Bencan, and Xiaowei Jiang. "Dependent Task-Offloading Strategy Based on Deep Reinforcement Learning in Mobile Edge Computing." Wireless Communications and Mobile Computing 2023 (January 4, 2023): 1–12. http://dx.doi.org/10.1155/2023/4665067.

Abstract:
In mobile edge computing, there are usually relevant dependencies between different tasks, and traditional algorithms are inefficient in solving dependent task-offloading problems and neglect the impact of the dynamic change of the channel on the offloading strategy. To solve the offloading problem of dependent tasks in a dynamic network environment, this paper establishes the dependent task model as a directed acyclic graph. A Dependent Task-Offloading Strategy (DTOS) based on deep reinforcement learning is proposed with minimizing the weighted sum of delay and energy consumption of network services as the optimization objective. DTOS transforms the dependent task offloading into an optimal policy problem under Markov decision processes. Multiple parallel deep neural networks (DNNs) are used to generate offloading decisions, cache the optimal decisions for each round, and then optimize the DNN parameters using priority experience replay mechanism to extract valuable experiences. DTOS introduces a penalty mechanism to obtain the optimal task-offloading decisions, which is triggered if the service energy consumption or service delay exceeds the threshold. The experimental results show that the algorithm produces better offloading decisions than existing algorithms, can effectively reduce the delay and energy consumption of network services, and can self-adapt to the changing network environment.
APA, Harvard, Vancouver, ISO, and other styles
35

Rego, Paulo A. L., Fernando A. M. Trinta, Masum Z. Hasan, and Jose N. de Souza. "Enhancing Offloading Systems with Smart Decisions, Adaptive Monitoring, and Mobility Support." Wireless Communications and Mobile Computing 2019 (April 21, 2019): 1–18. http://dx.doi.org/10.1155/2019/1975312.

Full text
Abstract:
Mobile cloud computing is an approach for mobile devices with processing and storage limitations to take advantage of remote resources that assist in performing computationally intensive or data-intensive tasks. The migration of tasks or data is commonly referred to as offloading, and its proper use can bring benefits such as performance improvement or reduced power consumption on mobile devices. In this paper, we face three challenges for any offloading solution: the decision of when and where to perform offloading, the decision of which metrics must be monitored by the offloading system, and the support for user’s mobility in a hybrid environment composed of cloudlets and public cloud instances. We introduce novel approaches based on machine learning and software-defined networking techniques for handling these challenges. In addition, we present details of our offloading system and the experiments conducted to assess the proposed approaches.
APA, Harvard, Vancouver, ISO, and other styles
36

A. U., Labdo, Dhabariya A. S., Sani Z. M., and Abbayero M. A. "A Review of Task Offloading Algorithms with Deep Reinforcement Learning." British Journal of Computer, Networking and Information Technology 7, no. 3 (2024): 107–17. http://dx.doi.org/10.52589/bjcnit-ughjh8qg.

Full text
Abstract:
Enormous data generated by IoT devices are handled in processing and storage by edge computing, a paradigm that allows tasks to be processed outside host devices. Task offloading, an important aspect of edge computing, is the movement of tasks from IoT devices to an edge or cloud server, where resources and processing capabilities are abundant, for processing. This paper reviewed several task-offloading algorithms and the techniques used by each. Existing algorithms focus on latency, load, cost, energy, or delay. The deep reinforcement learning phase of a task-offloading algorithm automates and optimizes the offloading decision process by training agents and defining rewards; a latency-aware phase then selects the best offloading destination in order to significantly reduce latency.
APA, Harvard, Vancouver, ISO, and other styles
37

Han, Yuelin, and Qi Zhu. "Joint Computation Offloading and Resource Allocation for NOMA-Enabled Multitask D2D System." Wireless Communications and Mobile Computing 2022 (July 28, 2022): 1–14. http://dx.doi.org/10.1155/2022/5349571.

Full text
Abstract:
Due to the limited computing capacity of mobile devices and high network-access delay, users in areas where mobile terminals are densely distributed (e.g., schools, malls, and hospitals) experience high latency when processing multiple computation-intensive tasks. In this paper, a computation offloading scheme based on Device-to-Device (D2D) communication is proposed to deal with the problem of users having multiple tasks to process. Exploiting nonorthogonal multiple access (NOMA), a user can offload tasks to multiple nearby devices that have idle computing resources available. We aim to minimize the user's total cost, including time latency, energy consumption, and offloading charge, which is formulated as a mixed-integer nonlinear programming (MINLP) problem. We use a decomposition approach to solve the problem and propose a two-layer optimization scheme named Multitask Joint Computation Offloading and Resource Allocation (MT-JCORA). In the inner layer, the NOMA-transmission time optimization problem under a given task-offloading decision is proved to be strictly convex. In the outer layer, we design a heuristic algorithm based on a genetic algorithm (GA) to obtain the optimal task-offloading decision. Simulation results demonstrate that MT-JCORA can effectively reduce the user's total cost compared with related schemes.
APA, Harvard, Vancouver, ISO, and other styles
38

Hu, Xinyue, Xiaoke Tang, Yantao Yu, Sihai Qiu, and Shiyong Chen. "Joint Load Balancing and Offloading Optimization in Multiple Parked Vehicle-Assisted Edge Computing." Wireless Communications and Mobile Computing 2021 (November 23, 2021): 1–13. http://dx.doi.org/10.1155/2021/8943862.

Full text
Abstract:
The introduction of mobile edge computing (MEC) in vehicular networks has been a promising paradigm for improving vehicular services by offloading computation-intensive tasks to the MEC server. To avoid overloading the MEC server, the vast idle resources of parked vehicles can be utilized to effectively relieve the computational burden on the server. Furthermore, unbalanced load allocation may cause larger latency and energy consumption. To address this, previously reported works allocated workload between the MEC server and a single parked vehicle. In this paper, a multiple parked vehicle-assisted edge computing (MPVEC) paradigm is first introduced. A joint load balancing and offloading optimization problem is formulated to minimize the system cost under a delay constraint. To accomplish the offloading tasks, a multiple offloading node selection algorithm is proposed to select several appropriate PVs to collaborate with the MEC server in computing tasks. Furthermore, a workload allocation strategy based on a dynamic game is presented to optimize system performance while jointly considering the workload balance among computing nodes. Numerical results indicate that the offloading strategy in the MPVEC scheme can significantly reduce the system cost and that load balancing of the system can be achieved.
APA, Harvard, Vancouver, ISO, and other styles
39

Wu, Zhoupeng, Zongpu Jia, Xiaoyan Pang, and Shan Zhao. "Deep Reinforcement Learning-Based Task Offloading and Load Balancing for Vehicular Edge Computing." Electronics 13, no. 8 (2024): 1511. http://dx.doi.org/10.3390/electronics13081511.

Full text
Abstract:
Vehicular edge computing (VEC) effectively reduces the computational burden on vehicles by offloading tasks from resource-constrained vehicles to edge nodes. However, non-uniformly distributed vehicles offloading a large number of tasks cause load imbalance problems among edge nodes, resulting in performance degradation. In this paper, we propose a deep reinforcement learning-based decision scheme for task offloading and load balancing with the optimization objective of minimizing the system cost considering the split offloading of tasks and the load dynamics of edge nodes. First, we model the mutual interaction between mobile vehicles and Mobile Edge Computing (MEC) servers using a Markov decision process. Second, the optimal task-offloading and resource allocation decision is obtained by utilizing the twin delayed deep deterministic policy gradient algorithm (TD3), and server load balancing is achieved through edge collaboration using a server selection algorithm based on the technique for order preference by similarity to the ideal solution (TOPSIS). Finally, we have conducted extensive simulation experiments and compared the results with several other baseline schemes. The proposed scheme can more effectively reduce the system cost and increase the system resource utilization.
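The TOPSIS step used here for server selection is a standard multi-criteria ranking and can be sketched compactly. The criteria (CPU capacity, free memory, current load) and the weights below are assumptions for illustration, not values from the paper.

```python
import numpy as np

def topsis(scores, weights, benefit):
    """Rank candidate edge servers with TOPSIS.
    scores:  (n_servers, n_criteria) raw values
    weights: importance of each criterion (sums to 1)
    benefit: True where larger is better, False where smaller is better (e.g. load)."""
    norm = scores / np.linalg.norm(scores, axis=0)          # vector normalization
    v = norm * weights
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0)) # best value per criterion
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))  # worst value per criterion
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)                          # closeness, higher = better

# Illustrative servers: columns are [CPU GHz, free RAM GB, current load].
servers = np.array([[8.0, 16.0, 0.7], [6.0, 32.0, 0.3], [10.0, 8.0, 0.9]])
closeness = topsis(servers, np.array([0.4, 0.3, 0.3]), np.array([True, True, False]))
print("offload to server", int(np.argmax(closeness)))
```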
APA, Harvard, Vancouver, ISO, and other styles
40

Wei, Dawei, Ning Xi, Jianfeng Ma, and Lei He. "UAV-Assisted Privacy-Preserving Online Computation Offloading for Internet of Things." Remote Sensing 13, no. 23 (2021): 4853. http://dx.doi.org/10.3390/rs13234853.

Full text
Abstract:
Unmanned aerial vehicles (UAVs) play an increasingly important role in the Internet of Things (IoT) for remote sensing and device interconnection. Due to limited computing capacity and energy, a UAV cannot handle complex tasks alone. Recently, computation offloading has provided a promising way for UAVs to handle complex tasks via deep reinforcement learning (DRL)-based methods. However, existing DRL-based computation offloading methods merely protect usage pattern privacy and location privacy. In this paper, we consider a new privacy issue in UAV-assisted IoT, namely computation offloading preference leakage, which has not been thoroughly studied. To cope with this issue, we propose a novel privacy-preserving online computation offloading method for UAV-assisted IoT. Our method integrates a differential privacy mechanism into deep reinforcement learning (DRL), which can protect the UAV's offloading preference. We provide a formal analysis of the security and utility loss of our method. Extensive real-world experiments are conducted. The results demonstrate that, compared with baseline methods, our method can learn a cost-efficient computation offloading policy without preference leakage or a priori knowledge of the wireless channel model.
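The paper integrates differential privacy into DRL; one simple way to realize preference hiding, offered only as a hedged illustration and not as the authors' exact mechanism, is to perturb the policy's action values with Laplace noise before choosing the offloading target.

```python
import numpy as np

def dp_select_action(q_values, epsilon=1.0, sensitivity=1.0, rng=np.random.default_rng()):
    """Pick an offloading action from Q-values perturbed with Laplace noise, so
    repeated observations reveal less about the true preference ordering."""
    noisy = np.asarray(q_values, dtype=float) + rng.laplace(0.0, sensitivity / epsilon, len(q_values))
    return int(np.argmax(noisy))

# Smaller epsilon means more noise and stronger preference hiding, at some utility loss.
action = dp_select_action([2.1, 1.9, 0.5], epsilon=0.5)
print(action)
```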
APA, Harvard, Vancouver, ISO, and other styles
41

Hossain, Md Delowar, Tangina Sultana, Md Alamgir Hossain, et al. "Fuzzy Decision-Based Efficient Task Offloading Management Scheme in Multi-Tier MEC-Enabled Networks." Sensors 21, no. 4 (2021): 1484. http://dx.doi.org/10.3390/s21041484.

Full text
Abstract:
Multi-access edge computing (MEC) is a new leading technology for meeting the demands of key performance indicators (KPIs) in 5G networks. However, in a rapidly changing dynamic environment, it is hard to find the optimal target server for processing offloaded tasks because we do not know the end users’ demands in advance. Therefore, quality of service (QoS) deteriorates because of increasing task failures and long execution latency from congestion. To reduce latency and avoid task failures from resource-constrained edge servers, vertical offloading between mobile devices with local-edge collaboration or with local edge-remote cloud collaboration has been proposed in previous studies. However, these approaches ignored the nearby edge server in the same tier that has excess computing resources. Therefore, this paper introduces a fuzzy decision-based cloud-MEC collaborative task offloading management system called FTOM, which takes advantage of powerful remote cloud-computing capabilities and utilizes neighboring edge servers. The main objective of the FTOM scheme is to select the optimal target node for task offloading based on server capacity, latency sensitivity, and the network’s condition. Our proposed scheme can make dynamic decisions where local or nearby MEC servers are preferred for offloading delay-sensitive tasks, and delay-tolerant high resource-demand tasks are offloaded to a remote cloud server. Simulation results affirm that our proposed FTOM scheme significantly improves the rate of successfully executing offloaded tasks by approximately 68.5%, and reduces task completion time by 66.6%, when compared with a local edge offloading (LEO) scheme. The improved and reduced rates are 32.4% and 61.5%, respectively, when compared with a two-tier edge orchestration-based offloading (TTEO) scheme. They are 8.9% and 47.9%, respectively, when compared with a fuzzy orchestration-based load balancing (FOLB) scheme, approximately 3.2% and 49.8%, respectively, when compared with a fuzzy workload orchestration-based task offloading (WOTO) scheme, and approximately 38.6% and 55%, respectively, when compared with a fuzzy edge-orchestration based collaborative task offloading (FCTO) scheme.
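A toy sketch of a fuzzy target-selection rule in the spirit of FTOM, using simple ramp memberships over the delay budget, edge load, and cloud round-trip time. The membership breakpoints and rules are illustrative assumptions, not the paper's rule base.

```python
def ramp_down(x, lo, hi):
    """Membership that is 1 below lo, 0 above hi, and linear in between."""
    if x <= lo:
        return 1.0
    if x >= hi:
        return 0.0
    return (hi - x) / (hi - lo)

def ramp_up(x, lo, hi):
    return 1.0 - ramp_down(x, lo, hi)

def fuzzy_target(delay_budget_ms, edge_load, cloud_rtt_ms):
    """Score three offloading targets with simple fuzzy rules and pick the strongest."""
    delay_sensitive = ramp_down(delay_budget_ms, 20, 150)   # tight budget -> sensitive
    edge_busy = ramp_up(edge_load, 0.5, 0.9)                # utilization near 1 -> busy
    cloud_far = ramp_up(cloud_rtt_ms, 50, 200)              # long RTT -> far away

    scores = {
        "local_edge": min(delay_sensitive, 1 - edge_busy),        # sensitive AND edge idle
        "nearby_edge": min(delay_sensitive, edge_busy),           # sensitive AND local edge busy
        "remote_cloud": min(1 - delay_sensitive, 1 - cloud_far),  # tolerant AND cloud not too far
    }
    return max(scores, key=scores.get)

print(fuzzy_target(delay_budget_ms=40, edge_load=0.85, cloud_rtt_ms=120))  # -> nearby_edge
```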
APA, Harvard, Vancouver, ISO, and other styles
42

Malik, Usman Mahmood, Muhammad Awais Javed, Jaroslav Frnda, Jan Rozhon, and Wali Ullah Khan. "Efficient Matching-Based Parallel Task Offloading in IoT Networks." Sensors 22, no. 18 (2022): 6906. http://dx.doi.org/10.3390/s22186906.

Full text
Abstract:
Fog computing is one of the major components of future 6G networks. It can provide fast computing of different application-related tasks and improve system reliability due to better decision-making. Parallel offloading, in which a task is split into several sub-tasks and transmitted to different fog nodes for parallel computation, is a promising concept in task offloading. Parallel offloading suffers from challenges such as sub-task splitting and mapping of sub-tasks to the fog nodes. In this paper, we propose a novel many-to-one matching-based algorithm for the allocation of sub-tasks to fog nodes. We develop preference profiles for IoT nodes and fog nodes to reduce the task computation delay. We also propose a technique to address the externalities problem in the matching algorithm that is caused by the dynamic preference profiles. Furthermore, a detailed evaluation of the proposed technique is presented to show the benefits of each feature of the algorithm. Simulation results show that the proposed matching-based offloading technique outperforms other available techniques from the literature and improves task latency by 52% at high task loads.
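Many-to-one matching of sub-tasks to fog nodes is commonly solved with deferred acceptance; the sketch below shows that generic mechanism with hypothetical preference profiles and quotas, and deliberately omits the externality handling for dynamic preferences that the authors add.

```python
def many_to_one_match(task_prefs, node_prefs, quota):
    """Deferred acceptance: sub-tasks propose to fog nodes in preference order; each
    node keeps its most-preferred proposers up to its quota and rejects the rest."""
    next_choice = {t: 0 for t in task_prefs}             # next node each task will try
    accepted = {n: [] for n in node_prefs}
    free = list(task_prefs)
    while free:
        t = free.pop()
        if next_choice[t] >= len(task_prefs[t]):
            continue                                      # task exhausted its list
        n = task_prefs[t][next_choice[t]]
        next_choice[t] += 1
        accepted[n].append(t)
        accepted[n].sort(key=node_prefs[n].index)         # node ranks its proposers
        if len(accepted[n]) > quota[n]:
            free.append(accepted[n].pop())                # bump the least-preferred task
    return accepted

# Illustrative preference profiles (e.g. built from expected computation delay).
task_prefs = {"t1": ["f1", "f2"], "t2": ["f1", "f2"], "t3": ["f2", "f1"]}
node_prefs = {"f1": ["t2", "t1", "t3"], "f2": ["t1", "t3", "t2"]}
print(many_to_one_match(task_prefs, node_prefs, {"f1": 1, "f2": 2}))
```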
APA, Harvard, Vancouver, ISO, and other styles
43

Lin, Li, and Lei Zhang. "Joint Optimization of Offloading and Resource Allocation for SDN-Enabled IoV." Wireless Communications and Mobile Computing 2022 (March 4, 2022): 1–13. http://dx.doi.org/10.1155/2022/2954987.

Full text
Abstract:
With the development of various vehicle applications, such as vehicle social networking, pattern recognition, and augmented reality, diverse and complex tasks have to be executed by vehicle terminals. To extend the computing capability, the nearest roadside unit (RSU) is used to offload the tasks. Nevertheless, for intensive tasks, excessive load not only leads to poor communication links but also results in ultrahigh latency and computational delay. To overcome these problems, this paper proposes a joint optimization approach to offloading and resource allocation for the Internet of Vehicles (IoV). Specifically, the tasks assigned to vehicles in the system model are assumed to be offloaded to RSUs and executed in parallel. Moreover, a software-defined networking (SDN)-assisted routing and control protocol is introduced to divide the IoV system into two independent layers: a data transmission layer and a control layer. A joint approach that optimizes the offloading decision, offloading ratio, and resource allocation (ODRR) is proposed to minimize the average system delay while satisfying quality-of-service (QoS) requirements. Comparison with conventional offloading strategies shows that the proposed approach is optimal and effective for SDN-enabled IoV.
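For a single divisible task executed in parallel locally and on an RSU, the offloading ratio that balances the two completion times minimizes the overall delay. The closed-form sketch below uses illustrative rates (bits processed or transmitted per second) and ignores queuing and result return; it is a simplified stand-in for the joint ODRR optimization, not the paper's formulation.

```python
def best_offload_ratio(task_bits, local_rate, uplink_rate, rsu_rate):
    """Split a task so local execution and (transmit + remote execution) finish
    together, which minimizes the parallel completion time max(T_local, T_offload)."""
    def finish_times(ratio):
        t_local = (1 - ratio) * task_bits / local_rate
        t_off = ratio * task_bits / uplink_rate + ratio * task_bits / rsu_rate
        return t_local, t_off

    # Solve (1 - r)/local = r/uplink + r/rsu for r.
    ratio = (1 / local_rate) / (1 / local_rate + 1 / uplink_rate + 1 / rsu_rate)
    return ratio, max(finish_times(ratio))

ratio, delay = best_offload_ratio(task_bits=8e6, local_rate=1e6, uplink_rate=5e6, rsu_rate=10e6)
print(f"offload {ratio:.0%} of the task, finish in {delay:.2f} s")
```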
APA, Harvard, Vancouver, ISO, and other styles
44

Zhang, Ruipeng, Yikang Yang, and Hengnian Li. "A Distributed Deadlock-Free Task Offloading Algorithm for Integrated Communication–Sensing–Computing Satellites with Data-Dependent Constraints." Remote Sensing 16, no. 18 (2024): 3459. http://dx.doi.org/10.3390/rs16183459.

Full text
Abstract:
Integrated communication–sensing–computing (ICSC) satellites, which integrate edge computing servers on Earth observation satellites to process collected data directly in orbit, are attracting growing attention. Nevertheless, some monitoring tasks involve sequential sub-tasks like target observation and movement prediction, leading to data dependencies. Moreover, the limited energy supply on satellites requires the sequential execution of sub-tasks. Therefore, inappropriate assignments can cause circular waiting among satellites, resulting in deadlocks. This paper formulates task offloading in ICSC satellites with data-dependent constraints as a mixed-integer linear programming (MILP) problem, aiming to minimize service latency and energy consumption simultaneously. Given the non-centrality of ICSC satellites, we propose a distributed deadlock-free task offloading (DDFTO) algorithm. DDFTO operates in parallel on each satellite, alternating between sub-task inclusion and consensus and sub-task removal until a common offloading assignment is reached. To avoid deadlocks arising from sub-task inclusion, we introduce the deadlock-free insertion mechanism (DFIM), which strategically restricts the insertion positions of sub-tasks based on interval relationships, ensuring deadlock-free assignments. Extensive experiments demonstrate the effectiveness of DFIM in avoiding deadlocks and show that the DDFTO algorithm outperforms benchmark algorithms in achieving deadlock-free offloading assignments.
APA, Harvard, Vancouver, ISO, and other styles
45

He, Ping, Jiayue Cang, Huaying Qi, and Hui Li. "Research on Decomposition and Offloading Strategies for Complex Divisible Computing Tasks in Computing Power Networks." Symmetry 16, no. 6 (2024): 699. http://dx.doi.org/10.3390/sym16060699.

Full text
Abstract:
With the continuous emergence of intelligent network applications and complex tasks for mobile terminals, the traditional single computing model often fails to meet the growing requirements of computing and network technology, thus promoting the formation of a new computing power network architecture with three-level 'cloud, edge, and end' heterogeneous computing. For complex divisible computing tasks in the network, task decomposition and offloading help to realize distributed execution, thus reducing the overall running time and improving the utilization of fragmented resources in the network. However, the decomposition and offloading process faces several problems: task decomposition typically relies on a single method; decomposition granularity that is too large or too small increases transmission delay; and offloading must satisfy low-delay and low-energy requirements. Based on this, a decomposition and offloading scheme for complex divisible computing tasks is proposed. Firstly, the computational task is decomposed into multiple task elements based on code partitioning, and a density-peak-clustering algorithm with an improved adaptive truncation distance and clustering center (ATDCC-DPC) is proposed to cluster the task elements into subtasks based on the task elements themselves and the dependencies between them. Secondly, taking the subtasks as the offloading objects, an improved Double Deep Q-Network subtask offloading algorithm (ISO-DDQN) is proposed to find the optimal offloading scheme that minimizes delay and energy consumption. Finally, the proposed algorithms are verified by simulation experiments, and the scheme effectively reduces task delay and energy consumption and improves the service experience.
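The density-peak step can be sketched from its two standard quantities: local density rho and the distance delta to the nearest denser point. The Gaussian-kernel density, the cutoff distance, and the 2-D task-element features below are assumptions for illustration; the adaptive improvements the authors propose are omitted.

```python
import numpy as np

def density_peak_centers(points, d_c, n_centers):
    """Pick cluster centers the density-peak way: local density rho (Gaussian kernel
    with cutoff d_c) and delta, the distance to the nearest point of higher density;
    points with the largest rho * delta become centers."""
    dist = np.linalg.norm(points[:, None] - points[None, :], axis=2)
    rho = np.exp(-(dist / d_c) ** 2).sum(axis=1) - 1.0       # exclude the point itself
    delta = np.empty(len(points))
    for i in range(len(points)):
        higher = np.where(rho > rho[i])[0]
        delta[i] = dist[i].max() if higher.size == 0 else dist[i, higher].min()
    return np.argsort(rho * delta)[-n_centers:]

# Illustrative 2-D features of task elements (e.g. data size, computation need),
# forming a dense group, a sparser group, and one outlier.
pts = np.array([[1, 1], [1.2, 0.9], [0.8, 1.1], [1.1, 1.2],
                [5, 5], [5.2, 4.9], [8, 1]], dtype=float)
print(density_peak_centers(pts, d_c=1.0, n_centers=2))        # indices of the two centers
```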
APA, Harvard, Vancouver, ISO, and other styles
46

Alaa Sabeeh Salim. "Managing and Decreasing Power Consumption of Devices in a Smart City Environment." Journal of Electrical Systems 20, no. 7s (2024): 1820–32. http://dx.doi.org/10.52783/jes.3810.

Full text
Abstract:
As smart cities continue to grow and the number of connected devices increases, power consumption becomes a critical concern. By offloading computationally intensive tasks from resource-constrained devices to more powerful edge servers, energy efficiency can be significantly improved. The research proposes a framework for managing power consumption in smart cities by offloading computational tasks to edge servers. This approach, considering factors like device capabilities, network conditions, and energy profiles, can improve energy efficiency. The framework's effectiveness is evaluated through real-world data simulations and performance metrics. Results show that offloading tasks to edge servers significantly reduces power consumption, conserving energy and prolonging battery life. The framework's adaptability ensures optimal resource allocation, maximizing energy efficiency without compromising performance. This research offers practical solutions for sustainable and energy-efficient operations in smarter cities.
APA, Harvard, Vancouver, ISO, and other styles
47

Chen, Xiaoqian, Tieliang Gao, Hui Gao, Baoju Liu, Ming Chen, and Bo Wang. "A multi-stage heuristic method for service caching and task offloading to improve the cooperation between edge and cloud computing." PeerJ Computer Science 8 (June 23, 2022): e1012. http://dx.doi.org/10.7717/peerj-cs.1012.

Full text
Abstract:
Edge-cloud computing has attracted increasing attention recently due to its efficiency in providing services for not only delay-sensitive applications but also resource-intensive requests, by combining low-latency edge resources and abundant cloud resources. A carefully designed strategy of service caching and task offloading helps to improve user satisfaction and resource efficiency. Thus, in this article, we focus on the joint service caching and task offloading problem in edge-cloud computing environments to improve the cooperation between edge and cloud resources. First, we formulated the problem as a mixed-integer nonlinear program, which is proved to be NP-hard. Then, we proposed a three-stage heuristic method for solving the problem in polynomial time. In the first stage, our method tried to make full use of abundant cloud resources by pre-offloading as many tasks as possible to the cloud. In the second stage, it aimed at making full use of low-latency edge resources by offloading the remaining tasks and caching the corresponding services on edge resources. In the last stage, it focused on improving the performance of tasks offloaded to the cloud by re-offloading some tasks from cloud resources to edge resources. The performance of our method was evaluated by extensive simulated experiments. The results show that our method has up to 155%, 56.1%, and 155% better performance in user satisfaction, resource efficiency, and processing efficiency, respectively, compared with several classical and state-of-the-art task scheduling methods.
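A high-level skeleton in the spirit of the three stages described above (not the authors' algorithm): everything starts in the cloud, delay-sensitive tasks are pulled onto the edge and their services cached, and remaining cloud tasks whose services are already cached are re-offloaded while edge capacity lasts. The task fields and thresholds are hypothetical.

```python
def three_stage_schedule(tasks, edge_capacity, urgent_deadline=0.2):
    """Sketch of a three-stage placement: (1) pre-offload all tasks to the cloud,
    (2) move delay-sensitive tasks onto the edge and cache their services,
    (3) re-offload cloud tasks whose service is already cached while capacity lasts."""
    placement = {t["id"]: "cloud" for t in tasks}                          # stage 1
    cached, used = set(), 0
    for t in sorted(tasks, key=lambda t: t["deadline"]):                   # stage 2
        if t["deadline"] <= urgent_deadline and used + t["cpu"] <= edge_capacity:
            placement[t["id"]] = "edge"
            cached.add(t["service"])
            used += t["cpu"]
    for t in tasks:                                                        # stage 3
        if placement[t["id"]] == "cloud" and t["service"] in cached \
                and used + t["cpu"] <= edge_capacity:
            placement[t["id"]] = "edge"
            used += t["cpu"]
    return placement

tasks = [
    {"id": 1, "service": "detect", "cpu": 2, "deadline": 0.1},
    {"id": 2, "service": "detect", "cpu": 2, "deadline": 0.6},
    {"id": 3, "service": "encode", "cpu": 3, "deadline": 0.5},
]
print(three_stage_schedule(tasks, edge_capacity=5))
# -> task 1 (urgent) and task 2 (cached service) on the edge, task 3 stays in the cloud
```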
APA, Harvard, Vancouver, ISO, and other styles
48

Alasmari, Moteb K., Sami S. Alwakeel, and Yousef A. Alohali. "A Multi-Classifiers Based Algorithm for Energy Efficient Tasks Offloading in Fog Computing." Sensors 23, no. 16 (2023): 7209. http://dx.doi.org/10.3390/s23167209.

Full text
Abstract:
The IoT has connected a vast number of devices on a massive internet scale. With the rapid increase in devices and data, offloading tasks from IoT devices to remote Cloud data centers becomes unproductive and costly. Optimizing energy consumption in IoT devices while meeting deadlines and data constraints is challenging. Fog Computing aids efficient IoT task processing with proximity to nodes and lower service delay. Cloud task offloading occurs frequently due to Fog Computing’s limited resources compared to remote Cloud, necessitating improved techniques for accurate categorization and distribution of IoT device task offloading in a hybrid IoT, Fog, and Cloud paradigm. This article explores relevant offloading strategies in Fog Computing and proposes MCEETO, an intelligent energy-aware allocation strategy, utilizing a multi-classifier-based algorithm for efficient task offloading by selecting optimal Fog Devices (FDs) for module placement. MCEETO decision parameters include task attributes, Fog node characteristics, network latency, and bandwidth. The method is evaluated using the iFogSim simulator and compared with edge-ward and Cloud-only strategies. The proposed solution is more energy-efficient, saving around 11.36% compared to Cloud-only and approximately 9.30% compared to the edge-ward strategy. Additionally, the MCEETO algorithm achieved a 67% and 96% reduction in network usage compared to both strategies.
APA, Harvard, Vancouver, ISO, and other styles
49

Arif, Muhammad, F. Ajesh, Shermin Shamsudheen, and Muhammad Shahzad. "Secure and Energy-Efficient Computational Offloading Using LSTM in Mobile Edge Computing." Security and Communication Networks 2022 (January 7, 2022): 1–13. http://dx.doi.org/10.1155/2022/4937588.

Full text
Abstract:
The use of applications in media, gaming, entertainment, and healthcare engineering has expanded as a result of the rapid growth of mobile technologies. This technology overcomes traditional computing methods in terms of communication delay and energy consumption, thereby providing high reliability and bandwidth for devices. Mobile edge computing is evolving in various forms to provide better output, and a simple computing architecture is no longer sufficient for MEC. Therefore, this paper proposed a secure and energy-efficient computational offloading scheme using LSTM. The computational tasks are predicted using the LSTM algorithm, the computation offloading strategy for mobile devices is based on these task predictions, and task migration for the edge-cloud scheduling scheme helps to optimize the edge computing offloading model. Experiments show that the proposed architecture, which consists of an LSTM-based offloading technique and routing (LSTMOTR) algorithm, can efficiently decrease total task delay as data and subtasks grow, reduce energy consumption, and bring considerable security to the devices due to the firewall-like nature of LSTM.
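A minimal Keras sketch of the LSTM prediction step: a window of recent task-load samples is used to predict the next slot's load, which then feeds a toy offloading rule. The synthetic series, window length, and network size are assumptions, since the paper's exact features and architecture are not given here.

```python
import numpy as np
import tensorflow as tf

# Toy history of per-slot task load; real features and window length are assumptions.
load = np.sin(np.linspace(0, 20, 400)) + np.random.default_rng(0).normal(0, 0.1, 400)
WINDOW = 16
X = np.stack([load[i:i + WINDOW] for i in range(len(load) - WINDOW)])[..., None]
y = load[WINDOW:]

model = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),          # predicted load for the next slot
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

next_load = model.predict(load[-WINDOW:].reshape(1, WINDOW, 1), verbose=0)[0, 0]
offload = next_load > 0.5              # toy rule: offload when the predicted load is high
print(next_load, offload)
```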
APA, Harvard, Vancouver, ISO, and other styles
50

Peng, Biying, Taoshen Li, and Yan Chen. "DRL-Based Dependent Task Offloading Strategies with Multi-Server Collaboration in Multi-Access Edge Computing." Applied Sciences 13, no. 1 (2022): 191. http://dx.doi.org/10.3390/app13010191.

Full text
Abstract:
Many applications in Multi-access Edge Computing (MEC) consist of interdependent tasks where the output of some tasks is the input of others. Most of the existing research on computational offloading does not consider the dependency of the task and uses convex relaxation or heuristic algorithms to solve the offloading problem, which lacks adaptability and is not suitable for computational offloading in the dynamic environment of fast fading channels. Therefore, in this paper, the optimization problem is modeled as a Markov Decision Process (MDP) in multi-user and multi-server MEC environments, and the dependent tasks are represented by Directed Acyclic Graph (DAG). Combined with the Soft Actor–Critic (SAC) algorithm in Deep Reinforcement Learning (DRL) theory, an intelligent task offloading scheme is proposed. Under the condition of resource constraint, each task can be offloaded to the corresponding MEC server through centralized control, which greatly reduces the service delay and terminal energy consumption. The experimental results show that the algorithm converges quickly and stably, and its optimization effect is better than existing methods, which verifies the effectiveness of the algorithm.
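To make the DAG setting concrete, the sketch below walks tasks in topological order and greedily places each on the server with the earliest finish time given its predecessors, as a simple stand-in for the learned SAC policy. The DAG, workloads, and server speeds are hypothetical, and transmission delays between servers are ignored.

```python
from graphlib import TopologicalSorter

# Hypothetical DAG: task -> set of predecessor tasks, plus per-task workload (megacycles).
deps = {"a": set(), "b": {"a"}, "c": {"a"}, "d": {"b", "c"}}
work = {"a": 200, "b": 400, "c": 300, "d": 100}
servers = {"local": 1e3, "mec1": 4e3, "mec2": 2e3}      # available CPU cycles per ms

def greedy_dag_offload(deps, work, servers):
    """Walk the DAG in topological order and put each task on the server that finishes
    it earliest, given when its predecessors' results are ready (transmission ignored)."""
    finish, server_free, choice = {}, {s: 0.0 for s in servers}, {}
    for task in TopologicalSorter(deps).static_order():
        ready = max((finish[p] for p in deps[task]), default=0.0)
        best = min(servers, key=lambda s: max(ready, server_free[s]) + work[task] / servers[s])
        start = max(ready, server_free[best])
        finish[task] = server_free[best] = start + work[task] / servers[best]
        choice[task] = best
    return choice, max(finish.values())

print(greedy_dag_offload(deps, work, servers))          # placement and makespan in ms
```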
APA, Harvard, Vancouver, ISO, and other styles