
Journal articles on the topic 'Reinforcement learning. Production scheduling'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Reinforcement learning. Production scheduling.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Lee, Seunghoon, Yongju Cho, and Young Hoon Lee. "Injection Mold Production Sustainable Scheduling Using Deep Reinforcement Learning." Sustainability 12, no. 20 (2020): 8718. http://dx.doi.org/10.3390/su12208718.

Abstract:
In the injection mold industry, it is important for manufacturers to satisfy the delivery date for the products that customers order. The mold products are diverse, and each product has a different manufacturing process. Owing to the nature of mold, mold manufacturing is a complex and dynamic environment. To meet the delivery date of the customers, the scheduling of mold production is important and is required to be sustainable and intelligent even in the complicated system and dynamic situation. To address this, in this paper, deep reinforcement learning (RL) is proposed for injection mold pr…
2

Waschneck, Bernd, André Reichstaller, Lenz Belzner, et al. "Optimization of global production scheduling with deep reinforcement learning." Procedia CIRP 72 (2018): 1264–69. http://dx.doi.org/10.1016/j.procir.2018.03.212.

3

Hubbs, Christian D., Can Li, Nikolaos V. Sahinidis, Ignacio E. Grossmann, and John M. Wassick. "A deep reinforcement learning approach for chemical production scheduling." Computers & Chemical Engineering 141 (October 2020): 106982. http://dx.doi.org/10.1016/j.compchemeng.2020.106982.

4

Wang, Yi-Chi, and John M. Usher. "Application of reinforcement learning for agent-based production scheduling." Engineering Applications of Artificial Intelligence 18, no. 1 (2005): 73–82. http://dx.doi.org/10.1016/j.engappai.2004.08.018.

5

Guo, Fang, Yongqiang Li, Ao Liu, and Zhan Liu. "A Reinforcement Learning Method to Scheduling Problem of Steel Production Process." Journal of Physics: Conference Series 1486 (April 2020): 072035. http://dx.doi.org/10.1088/1742-6596/1486/7/072035.

6

Shi, Daming, Wenhui Fan, Yingying Xiao, Tingyu Lin, and Chi Xing. "Intelligent scheduling of discrete automated production line via deep reinforcement learning." International Journal of Production Research 58, no. 11 (2020): 3362–80. http://dx.doi.org/10.1080/00207543.2020.1717008.

7

Kardos, Csaba, Catherine Laflamme, Viola Gallina, and Wilfried Sihn. "Dynamic scheduling in a job-shop production system with reinforcement learning." Procedia CIRP 97 (2021): 104–9. http://dx.doi.org/10.1016/j.procir.2020.05.210.

8

Han, Guo, and Su. "A Reinforcement Learning Method for a Hybrid Flow-Shop Scheduling Problem." Algorithms 12, no. 11 (2019): 222. http://dx.doi.org/10.3390/a12110222.

Abstract:
The scheduling problems in mass production, manufacturing, assembly, synthesis, and transportation, as well as internet services, can partly be attributed to a hybrid flow-shop scheduling problem (HFSP). To solve the problem, a reinforcement learning (RL) method for HFSP is studied for the first time in this paper. HFSP is described and attributed to the Markov Decision Processes (MDP), for which the special states, actions, and reward function are designed. On this basis, the MDP framework is established. The Boltzmann exploration policy is adopted to trade off exploration and exploitation…
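The Boltzmann exploration policy mentioned in this abstract samples actions in proportion to the exponentiated Q-values. A minimal sketch of such a policy follows; the Q-values and temperature are illustrative placeholders, not values from the paper.

```python
import math
import random

def boltzmann_action(q_values, temperature=1.0):
    """Sample an action index with probability proportional to exp(Q / temperature).

    A high temperature explores almost uniformly; a low temperature
    exploits the best-known action, which is the exploration/exploitation
    trade-off the abstract describes.
    """
    # Subtract the max Q-value before exponentiating, for numerical stability.
    m = max(q_values)
    weights = [math.exp((q - m) / temperature) for q in q_values]
    total = sum(weights)
    probs = [w / total for w in weights]
    # Roulette-wheel sampling over the softmax distribution.
    r, cum = random.random(), 0.0
    for action, p in enumerate(probs):
        cum += p
        if r < cum:
            return action
    return len(probs) - 1
```

At a temperature of 0.01 the policy is effectively greedy; at 100 it is close to uniform, so the temperature schedule controls how exploration decays during training.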
9

Zhou, Tong, Dunbing Tang, Haihua Zhu, and Liping Wang. "Reinforcement Learning With Composite Rewards for Production Scheduling in a Smart Factory." IEEE Access 9 (2021): 752–66. http://dx.doi.org/10.1109/access.2020.3046784.

10

Zhang, Zhicong, Kaishun Hu, Shuai Li, Huiyu Huang, and Shaoyong Zhao. "Chip Attach Scheduling in Semiconductor Assembly." Journal of Industrial Engineering 2013 (March 26, 2013): 1–11. http://dx.doi.org/10.1155/2013/295604.

Abstract:
Chip attach is the bottleneck operation in semiconductor assembly. Chip attach scheduling is in nature unrelated parallel machine scheduling considering practical issues, for example, machine-job qualification, sequence-dependent setup times, initial machine status, and engineering time. The major scheduling objective is to minimize the total weighted unsatisfied Target Production Volume in the schedule horizon. To apply the Q-learning algorithm, the scheduling problem is converted into a reinforcement learning problem by constructing an elaborate system state representation, actions, and reward function…
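The Q-learning conversion this abstract describes reduces, at its core, to the standard tabular update. A minimal sketch under assumed encodings — the state and action names are hypothetical, not the paper's representation:

```python
def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).

    Q is a dict of dicts, Q[state][action] -> value. In a scheduling
    setting the state might encode machine status and the action a
    job-dispatch decision (illustrative, not the paper's design).
    """
    # Best estimated value of the successor state (0 if unvisited).
    best_next = max(Q[next_state].values()) if Q.get(next_state) else 0.0
    td_target = reward + gamma * best_next
    # Create the entry on first visit, then move it toward the TD target.
    Q.setdefault(state, {}).setdefault(action, 0.0)
    Q[state][action] += alpha * (td_target - Q[state][action])
    return Q[state][action]
```

The reward signal would be shaped so that maximizing return corresponds to minimizing the weighted unsatisfied production volume named in the abstract.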
11

Wang, Yi-Chi, and John M. Usher. "A reinforcement learning approach for developing routing policies in multi-agent production scheduling." International Journal of Advanced Manufacturing Technology 33, no. 3-4 (2006): 323–33. http://dx.doi.org/10.1007/s00170-006-0465-y.

12

Li, Zhipeng, Xiumei Wei, Xuesong Jiang, and Yewen Pang. "A Kind of Reinforcement Learning to Improve Genetic Algorithm for Multiagent Task Scheduling." Mathematical Problems in Engineering 2021 (January 12, 2021): 1–12. http://dx.doi.org/10.1155/2021/1796296.

Abstract:
It is difficult to coordinate the various processes in the process industry. We built a multiagent distributed hierarchical intelligent control model for manufacturing systems integrating multiple production units based on multiagent system technology. The model organically combines multiple intelligent agent modules and physical entities to form an intelligent control system with certain functions. The model consists of a system management agent, workshop control agent, and equipment agent. For the task assignment problem with this model, we combine reinforcement learning to improve the genetic…
13

Yang, Hongbing, Wenchao Li, and Bin Wang. "Joint optimization of preventive maintenance and production scheduling for multi-state production systems based on reinforcement learning." Reliability Engineering & System Safety 214 (October 2021): 107713. http://dx.doi.org/10.1016/j.ress.2021.107713.

14

Raju, Leo, R. S. Milton, and S. Sakthiyanandan. "Energy Optimization of Solar Micro-Grid Using Multi Agent Reinforcement Learning." Applied Mechanics and Materials 787 (August 2015): 843–47. http://dx.doi.org/10.4028/www.scientific.net/amm.787.843.

Abstract:
In this paper, two solar Photovoltaic (PV) systems are considered; one in the department with a capacity of 100 kW and the other in the hostel with a capacity of 200 kW. Each one has a battery and load. The capital cost and energy savings by conventional methods are compared, and it is proved that the energy dependency on the grid is reduced in the solar micro-grid element, operating in a distributed environment. In the smart grid framework, the grid energy consumption is further reduced by optimal scheduling of the battery, using Reinforcement Learning. Individual unit optimization is done by a model-free…
15

Kumar, Ashish, and Roussos Dimitrakopoulos. "Production scheduling in industrial mining complexes with incoming new information using tree search and deep reinforcement learning." Applied Soft Computing 110 (October 2021): 107644. http://dx.doi.org/10.1016/j.asoc.2021.107644.

16

Yin, Lvjiang, Meier Zhuang, Jing Jia, and Huan Wang. "Energy Saving in Flow-Shop Scheduling Management: An Improved Multiobjective Model Based on Grey Wolf Optimization Algorithm." Mathematical Problems in Engineering 2020 (October 13, 2020): 1–14. http://dx.doi.org/10.1155/2020/9462048.

Abstract:
Currently, energy saving is increasingly important. During the production procedure, energy saving can be achieved if the operational method and machine infrastructure are improved, but it also increases the complexity of flow-shop scheduling. As one of the data mining technologies, the Grey Wolf Optimization Algorithm is widely applied to various mathematical problems in engineering. However, due to the immaturity of this algorithm, it still has some defects. Therefore, we propose an improved multiobjective model based on the Grey Wolf Optimization Algorithm related to the Kalman filter and rei…
17

Cunha, Bruno, Ana Madureira, Benjamim Fonseca, and João Matos. "Intelligent Scheduling with Reinforcement Learning." Applied Sciences 11, no. 8 (2021): 3710. http://dx.doi.org/10.3390/app11083710.

Abstract:
In this paper, we present and discuss an innovative approach to solve Job Shop scheduling problems based on machine learning techniques. Traditionally, when choosing how to solve Job Shop scheduling problems, there are two main options: either use an efficient heuristic that provides a solution quickly, or use classic optimization approaches (e.g., metaheuristics) that take more time but will output better solutions, closer to their optimal value. In this work, we aim to create a novel architecture that incorporates reinforcement learning into scheduling systems in order to improve their overall…
18

Yang, Yanxiang, Jiang Hu, Dana Porter, Thomas Marek, Kevin Heflin, and Hongxin Kong. "Deep Reinforcement Learning-Based Irrigation Scheduling." Transactions of the ASABE 63, no. 3 (2020): 549–56. http://dx.doi.org/10.13031/trans.13633.

Abstract:
Highlights: Deep reinforcement learning-based irrigation scheduling is proposed to determine the amount of irrigation required at each time step considering soil moisture level, evapotranspiration, forecast precipitation, and crop growth stage. The proposed methodology was compared with traditional irrigation scheduling approaches and some machine learning based scheduling approaches based on simulation. Abstract: Machine learning has been widely applied in many areas, with promising results and large potential. In this article, deep reinforcement learning-based irrigation scheduling is proposed…
19

Zhang, Zhicong, Weiping Wang, Shouyan Zhong, and Kaishun Hu. "Flow Shop Scheduling with Reinforcement Learning." Asia-Pacific Journal of Operational Research 30, no. 05 (2013): 1350014. http://dx.doi.org/10.1142/s0217595913500140.

Abstract:
Reinforcement learning (RL) is a state or action value based machine learning method which solves large-scale multi-stage decision problems such as Markov Decision Process (MDP) and Semi-Markov Decision Process (SMDP) problems. We minimize the makespan of flow shop scheduling problems with an RL algorithm. We convert flow shop scheduling problems into SMDPs by constructing elaborate state features, actions and the reward function. Minimizing the accumulated reward is equivalent to minimizing the schedule objective function. We apply the on-line TD(λ) algorithm with linear gradient-descent function…
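The on-line TD(λ) update with linear function approximation that this abstract applies can be sketched as follows. The feature vectors, step size, and trace decay below are placeholders, not the paper's settings.

```python
def td_lambda_step(w, z, phi_s, phi_next, reward,
                   alpha=0.05, gamma=1.0, lam=0.8):
    """One on-line TD(lambda) step for a linear value function V(s) = w . phi(s).

    w is the weight vector and z the eligibility trace; both are
    mutated in place. Returns the TD error delta.
    """
    v_s = sum(wi * fi for wi, fi in zip(w, phi_s))
    v_next = sum(wi * fi for wi, fi in zip(w, phi_next))
    delta = reward + gamma * v_next - v_s        # TD error
    for i, fi in enumerate(phi_s):               # accumulate eligibility traces
        z[i] = gamma * lam * z[i] + fi
    for i in range(len(w)):                      # gradient-descent weight update
        w[i] += alpha * delta * z[i]
    return delta
```

In the scheduling setting, the per-step reward would be chosen so that the accumulated reward tracks the makespan objective, as the abstract notes.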
20

Hirashima, Yoichi, Kazuhiro Takeda, and Akira Inoue. "A Container Transfer Scheduling Using Reinforcement Learning." IEEJ Transactions on Industry Applications 123, no. 10 (2003): 1111–17. http://dx.doi.org/10.1541/ieejias.123.1111.

21

Joo, Minwoo, Wonwoo Jang, and Wonjun Lee. "Deep Reinforcement Learning based Multipath Packet Scheduling." Journal of KIISE 46, no. 7 (2019): 714–19. http://dx.doi.org/10.5626/jok.2019.46.7.714.

22

Martins, Miguel S. E., Joaquim L. Viegas, Tiago Coito, et al. "Reinforcement Learning for Dual-Resource Constrained Scheduling." IFAC-PapersOnLine 53, no. 2 (2020): 10810–15. http://dx.doi.org/10.1016/j.ifacol.2020.12.2866.

23

Andrade, Pedro, Catarina Silva, Bernardete Ribeiro, and Bruno F. Santos. "Aircraft Maintenance Check Scheduling Using Reinforcement Learning." Aerospace 8, no. 4 (2021): 113. http://dx.doi.org/10.3390/aerospace8040113.

Abstract:
This paper presents a Reinforcement Learning (RL) approach to optimize the long-term scheduling of maintenance for an aircraft fleet. The problem considers fleet status, maintenance capacity, and other maintenance constraints to schedule hangar checks for a specified time horizon. The checks are scheduled within an interval, and the goal is to schedule them as close as possible to their due date. In doing so, the number of checks is reduced, and the fleet availability increases. A Deep Q-learning algorithm is used to optimize the scheduling policy. The model is validated in a real scenario us…
24

Unoki, Teruhiko, and Noriaki Suetake. "Distributed Scheduling for Autonomous Vehicles by Reinforcement Learning." IEEJ Transactions on Electronics, Information and Systems 117, no. 10 (1997): 1513–20. http://dx.doi.org/10.1541/ieejeiss1987.117.10_1513.

25

Melnik, Mikhail, and Denis Nasonov. "Workflow scheduling using Neural Networks and Reinforcement Learning." Procedia Computer Science 156 (2019): 29–36. http://dx.doi.org/10.1016/j.procs.2019.08.126.

26

Aydin, M. Emin, and Ercan Öztemel. "Dynamic job-shop scheduling using reinforcement learning agents." Robotics and Autonomous Systems 33, no. 2-3 (2000): 169–78. http://dx.doi.org/10.1016/s0921-8890(00)00087-7.

27

Pandit, Mohammad Khalid, Roohie Naaz Mir, and Mohammad Ahsan Chishti. "Adaptive task scheduling in IoT using reinforcement learning." International Journal of Intelligent Computing and Cybernetics 13, no. 3 (2020): 261–82. http://dx.doi.org/10.1108/ijicc-03-2020-0021.

Abstract:
Purpose: The intelligence in the Internet of Things (IoT) can be embedded by analyzing the huge volumes of data generated by it in an ultralow latency environment. The computational latency incurred by the cloud-only solution can be significantly brought down by the fog computing layer, which offers a computing infrastructure to minimize the latency in service delivery and execution. For this purpose, a task scheduling policy based on reinforcement learning (RL) is developed that can achieve the optimal resource utilization as well as minimum time to execute tasks and significantly reduce the co…
28

Pei, Shujun, Qinggen Zhang, and Xuehui Cheng. "Workflow Scheduling using Graph Segmentation and Reinforcement Learning." International Journal of Performability Engineering 16, no. 8 (2020): 1262. http://dx.doi.org/10.23940/ijpe.20.08.p13.12621270.

29

Li, Kai, Wei Ni, Mehran Abolhasan, and Eduardo Tovar. "Reinforcement Learning for Scheduling Wireless Powered Sensor Communications." IEEE Transactions on Green Communications and Networking 3, no. 2 (2019): 264–74. http://dx.doi.org/10.1109/tgcn.2018.2879023.

30

Cui, Delong, Zhiping Peng, Wende Ke, Xiaoyu Hong, and Jinglong Zuo. "Cloud workflow scheduling algorithm based on reinforcement learning." International Journal of High Performance Computing and Networking 11, no. 3 (2018): 181. http://dx.doi.org/10.1504/ijhpcn.2018.091889.

31

Cui, Delong, Zhiping Peng, Wende Ke, Xiaoyu Hong, and Jinglong Zuo. "Cloud workflow scheduling algorithm based on reinforcement learning." International Journal of High Performance Computing and Networking 11, no. 3 (2018): 181. http://dx.doi.org/10.1504/ijhpcn.2018.10012994.

32

Yau, Kok-Lim Alvin, Kae Hsiang Kwong, and Chong Shen. "Reinforcement learning models for scheduling in wireless networks." Frontiers of Computer Science 7, no. 5 (2013): 754–66. http://dx.doi.org/10.1007/s11704-013-2291-3.

33

Rummukainen, Hannu, and Jukka K. Nurminen. "Practical Reinforcement Learning – Experiences in Lot Scheduling Application." IFAC-PapersOnLine 52, no. 13 (2019): 1415–20. http://dx.doi.org/10.1016/j.ifacol.2019.11.397.

34

Swarup, Shashank, Elhadi M. Shakshuki, and Ansar Yasar. "Task Scheduling in Cloud Using Deep Reinforcement Learning." Procedia Computer Science 184 (2021): 42–51. http://dx.doi.org/10.1016/j.procs.2021.03.016.

35

Dimitrakakis, Christos, Guangliang Li, and Nikolaos Tziortziotis. "The Reinforcement Learning Competition 2014." AI Magazine 35, no. 3 (2014): 61–65. http://dx.doi.org/10.1609/aimag.v35i3.2548.

Abstract:
Reinforcement learning is one of the most general problems in artificial intelligence. It has been used to model problems in automated experiment design, control, economics, game playing, scheduling and telecommunications. The aim of the reinforcement learning competition is to encourage the development of very general learning agents for arbitrary reinforcement learning problems and to provide a test-bed for the unbiased evaluation of algorithms.
36

Wang, Chao, Hong Bin Zhang, Jing Guo, and Ling Chen. "Reinforcement Learning Based Job Shop Scheduling with Machine Choice." Advanced Materials Research 314-316 (August 2011): 2172–76. http://dx.doi.org/10.4028/www.scientific.net/amr.314-316.2172.

Abstract:
Job shop scheduling is a key technology in modern manufacturing. Scheduling performance will decide the enterprises' core competitiveness. In this paper, improved reinforcement learning with cohesion is used in a dynamic job shop environment, easing the contradiction between premature and slow convergence. Machine choice is also considered, so dual scheduling of both jobs and machines is achieved in this system, and it obtains better results through the experiments. The utilization of equipment and the emergency handling capacity can be improved in the dynamic environment.
37

Drakaki, Maria, and Panagiotis Tzionas. "Manufacturing Scheduling Using Colored Petri Nets and Reinforcement Learning." Applied Sciences 7, no. 2 (2017): 136. http://dx.doi.org/10.3390/app7020136.

38

Kintsakis, Athanassios M., Fotis E. Psomopoulos, and Pericles A. Mitkas. "Reinforcement Learning based scheduling in a workflow management system." Engineering Applications of Artificial Intelligence 81 (May 2019): 94–106. http://dx.doi.org/10.1016/j.engappai.2019.02.013.

39

Martínez, E. C. "Solving batch process scheduling/planning tasks using reinforcement learning." Computers & Chemical Engineering 23 (June 1999): S527–S530. http://dx.doi.org/10.1016/s0098-1354(99)80130-6.

40

Peng, Bile, Gonzalo Seco-Granados, Erik Steinmetz, Markus Frohle, and Henk Wymeersch. "Decentralized Scheduling for Cooperative Localization With Deep Reinforcement Learning." IEEE Transactions on Vehicular Technology 68, no. 5 (2019): 4295–305. http://dx.doi.org/10.1109/tvt.2019.2913695.

41

Wang, Fan, Jie Gao, Mushu Li, and Lian Zhao. "Autonomous PEV Charging Scheduling Using Dyna-Q Reinforcement Learning." IEEE Transactions on Vehicular Technology 69, no. 11 (2020): 12609–20. http://dx.doi.org/10.1109/tvt.2020.3026004.

42

Zhou, Longfei, Lin Zhang, and Berthold K. P. Horn. "Deep reinforcement learning-based dynamic scheduling in smart manufacturing." Procedia CIRP 93 (2020): 383–88. http://dx.doi.org/10.1016/j.procir.2020.05.163.

43

Tong, Zhao, Zheng Xiao, Kenli Li, and Keqin Li. "Proactive scheduling in distributed computing—A reinforcement learning approach." Journal of Parallel and Distributed Computing 74, no. 7 (2014): 2662–72. http://dx.doi.org/10.1016/j.jpdc.2014.03.007.

44

Khadilkar, Harshad. "A Scalable Reinforcement Learning Algorithm for Scheduling Railway Lines." IEEE Transactions on Intelligent Transportation Systems 20, no. 2 (2019): 727–36. http://dx.doi.org/10.1109/tits.2018.2829165.

45

Kim, Byung-Gook, Yu Zhang, Mihaela van der Schaar, and Jang-Won Lee. "Dynamic Pricing and Energy Consumption Scheduling With Reinforcement Learning." IEEE Transactions on Smart Grid 7, no. 5 (2016): 2187–98. http://dx.doi.org/10.1109/tsg.2015.2495145.

46

Yang, Jun, Xinghui You, Gaoxiang Wu, Mohammad Mehedi Hassan, Ahmad Almogren, and Joze Guna. "Application of reinforcement learning in UAV cluster task scheduling." Future Generation Computer Systems 95 (June 2019): 140–48. http://dx.doi.org/10.1016/j.future.2018.11.014.

47

Wang, Bin, Fagui Liu, and Weiwei Lin. "Energy-efficient VM scheduling based on deep reinforcement learning." Future Generation Computer Systems 125 (December 2021): 616–28. http://dx.doi.org/10.1016/j.future.2021.07.023.

48

赵, 飞鸿. "A Resource Scheduling Method Based on Deep Reinforcement Learning." Computer Science and Application 11, no. 07 (2021): 2008–18. http://dx.doi.org/10.12677/csa.2021.117205.

49

Comsa, Ioan Sorin, Mehmet Aydin, Sijing Zhang, Pierre Kuonen, and Jean-Frédéric Wagen. "Multi Objective Resource Scheduling in LTE Networks Using Reinforcement Learning." International Journal of Distributed Systems and Technologies 3, no. 2 (2012): 39–57. http://dx.doi.org/10.4018/jdst.2012040103.

Abstract:
The use of an intelligent packet scheduling process is absolutely necessary in order to make radio resource usage more efficient in recent high-bit-rate demanding radio access technologies such as Long Term Evolution (LTE). The packet scheduling procedure works with various dispatching rules with different behaviors. In the literature, the scheduling disciplines are applied for the entire transmission sessions, and the scheduler performance strongly depends on the exploited discipline. The method proposed in this paper aims to discuss how a straightforward schedule can be provided within the…
50

Sheng, Shuran, Peng Chen, Zhimin Chen, Lenan Wu, and Yuxuan Yao. "Deep Reinforcement Learning-Based Task Scheduling in IoT Edge Computing." Sensors 21, no. 5 (2021): 1666. http://dx.doi.org/10.3390/s21051666.

Abstract:
Edge computing (EC) has recently emerged as a promising paradigm that supports resource-hungry Internet of Things (IoT) applications with low latency services at the network edge. However, the limited capacity of computing resources at the edge server poses great challenges for scheduling application tasks. In this paper, a task scheduling problem is studied in the EC scenario, and multiple tasks are scheduled to virtual machines (VMs) configured at the edge server by maximizing the long-term task satisfaction degree (LTSD). The problem is formulated as a Markov decision process (MDP) for which…