Journal articles on the topic "Soft Actor-Critic"
Format your source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the top 50 journal articles for your research on the topic "Soft Actor-Critic".
Next to every work in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style of your choice: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication in .pdf format and read its abstract online, whenever these are available in the metadata.
Browse journal articles across a wide range of disciplines and organize your bibliography correctly.
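As background for the entries below: Soft Actor-Critic (SAC) is an off-policy deep reinforcement learning algorithm built on maximum-entropy RL. In its standard formulation, the policy is trained to maximize expected return plus a temperature-weighted entropy bonus, which encourages exploration and robustness:

```latex
J(\pi) = \sum_{t} \mathbb{E}_{(s_t, a_t) \sim \rho_\pi}
\Big[ r(s_t, a_t) + \alpha \, \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \Big]
```

Here $\mathcal{H}$ denotes policy entropy and $\alpha$ the temperature coefficient trading off reward against exploration; many of the works listed below modify one of these components (e.g., safety-constrained, hierarchical, or multi-agent variants).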
Hyeon, Soo-Jong, Tae-Young Kang, and Chang-Kyung Ryoo. "A Path Planning for Unmanned Aerial Vehicles Using SAC (Soft Actor Critic) Algorithm." Journal of Institute of Control, Robotics and Systems 28, no. 2 (February 28, 2022): 138–45. http://dx.doi.org/10.5302/j.icros.2022.21.0220.
Ding, Feng, Guanfeng Ma, Zhikui Chen, Jing Gao, and Peng Li. "Averaged Soft Actor-Critic for Deep Reinforcement Learning." Complexity 2021 (April 1, 2021): 1–16. http://dx.doi.org/10.1155/2021/6658724.
Qin, Chenjie, Lijun Zhang, Dawei Yin, Dezhong Peng, and Yongzhong Zhuang. "Some effective tricks are used to improve Soft Actor Critic." Journal of Physics: Conference Series 2010, no. 1 (September 1, 2021): 012061. http://dx.doi.org/10.1088/1742-6596/2010/1/012061.
Yang, Qisong, Thiago D. Simão, Simon H. Tindemans, and Matthijs T. J. Spaan. "WCSAC: Worst-Case Soft Actor Critic for Safety-Constrained Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (May 18, 2021): 10639–46. http://dx.doi.org/10.1609/aaai.v35i12.17272.
Wong, Ching-Chang, Shao-Yu Chien, Hsuan-Ming Feng, and Hisasuki Aoyama. "Motion Planning for Dual-Arm Robot Based on Soft Actor-Critic." IEEE Access 9 (2021): 26871–85. http://dx.doi.org/10.1109/access.2021.3056903.
Wu, Xiongwei, Xiuhua Li, Jun Li, P. C. Ching, Victor C. M. Leung, and H. Vincent Poor. "Caching Transient Content for IoT Sensing: Multi-Agent Soft Actor-Critic." IEEE Transactions on Communications 69, no. 9 (September 2021): 5886–901. http://dx.doi.org/10.1109/tcomm.2021.3086535.
Ali, Hamid, Hammad Majeed, Imran Usman, and Khaled A. Almejalli. "Reducing Entropy Overestimation in Soft Actor Critic Using Dual Policy Network." Wireless Communications and Mobile Computing 2021 (June 10, 2021): 1–13. http://dx.doi.org/10.1155/2021/9920591.
Sola, Yoann, Gilles Le Chenadec, and Benoit Clement. "Simultaneous Control and Guidance of an AUV Based on Soft Actor–Critic." Sensors 22, no. 16 (August 14, 2022): 6072. http://dx.doi.org/10.3390/s22166072.
Yu, Xin, Yushan Sun, Xiangbin Wang, and Guocheng Zhang. "End-to-End AUV Motion Planning Method Based on Soft Actor-Critic." Sensors 21, no. 17 (September 1, 2021): 5893. http://dx.doi.org/10.3390/s21175893.
Al Younes, Younes, and Martin Barczyk. "Adaptive Nonlinear Model Predictive Horizon Using Deep Reinforcement Learning for Optimal Trajectory Planning." Drones 6, no. 11 (October 27, 2022): 323. http://dx.doi.org/10.3390/drones6110323.
刘, 雨. "Coordinated Optimization of Integrated Electricity-Heat Energy System Based on Soft Actor-Critic." Smart Grid 11, no. 2 (2021): 107–17. http://dx.doi.org/10.12677/sg.2021.112011.
Tang, Hengliang, Anqi Wang, Fei Xue, Jiaxin Yang, and Yang Cao. "A Novel Hierarchical Soft Actor-Critic Algorithm for Multi-Logistics Robots Task Allocation." IEEE Access 9 (2021): 42568–82. http://dx.doi.org/10.1109/access.2021.3062457.
Li, Tao, Wei Cui, and Naxin Cui. "Soft Actor-Critic Algorithm-Based Energy Management Strategy for Plug-In Hybrid Electric Vehicle." World Electric Vehicle Journal 13, no. 10 (October 18, 2022): 193. http://dx.doi.org/10.3390/wevj13100193.
Chen, Rusi, Haiguang Liu, Chengquan Liu, Guangzheng Yu, Xuan Yang, and Yue Zhou. "System Frequency Control Method Driven by Deep Reinforcement Learning and Customer Satisfaction for Thermostatically Controlled Load." Energies 15, no. 21 (October 24, 2022): 7866. http://dx.doi.org/10.3390/en15217866.
Chen, Shaotao, Xihe Qiu, Xiaoyu Tan, Zhijun Fang, and Yaochu Jin. "A model-based hybrid soft actor-critic deep reinforcement learning algorithm for optimal ventilator settings." Information Sciences 611 (September 2022): 47–64. http://dx.doi.org/10.1016/j.ins.2022.08.028.
Wu, Tao, Jianhui Wang, Xiaonan Lu, and Yuhua Du. "AC/DC hybrid distribution network reconfiguration with microgrid formation using multi-agent soft actor-critic." Applied Energy 307 (February 2022): 118189. http://dx.doi.org/10.1016/j.apenergy.2021.118189.
Tang, Hengliang, Anqi Wang, Fei Xue, Jiaxin Yang, and Yang Cao. "Corrections to “A Novel Hierarchical Soft Actor-Critic Algorithm for Multi-Logistics Robots Task Allocation”." IEEE Access 9 (2021): 71090. http://dx.doi.org/10.1109/access.2021.3078911.
Haklidir, Mehmet, and Hakan Temeltas. "Guided Soft Actor Critic: A Guided Deep Reinforcement Learning Approach for Partially Observable Markov Decision Processes." IEEE Access 9 (2021): 159672–83. http://dx.doi.org/10.1109/access.2021.3131772.
Zheng, Yuemin, Jin Tao, Hao Sun, Qinglin Sun, Zengqiang Chen, Matthias Dehmer, and Quan Zhou. "Load Frequency Active Disturbance Rejection Control for Multi-Source Power System Based on Soft Actor-Critic." Energies 14, no. 16 (August 6, 2021): 4804. http://dx.doi.org/10.3390/en14164804.
Xu, Dezhou, Yunduan Cui, Jiaye Ye, Suk Won Cha, Aimin Li, and Chunhua Zheng. "A soft actor-critic-based energy management strategy for electric vehicles with hybrid energy storage systems." Journal of Power Sources 524 (March 2022): 231099. http://dx.doi.org/10.1016/j.jpowsour.2022.231099.
Gamolped, Prem, Sakmongkon Chumkamon, Chanapol Piyavichyanon, Eiji Hayashi, and Abbe Mowshowitz. "Online Deep Reinforcement Learning on Assigned Weight Spaghetti Grasping in One Time using Soft Actor-Critic." Proceedings of International Conference on Artificial Life and Robotics 27 (January 20, 2022): 554–58. http://dx.doi.org/10.5954/icarob.2022.os19-1.
Prianto, Evan, MyeongSeop Kim, Jae-Han Park, Ji-Hun Bae, and Jung-Su Kim. "Path Planning for Multi-Arm Manipulators Using Deep Reinforcement Learning: Soft Actor–Critic with Hindsight Experience Replay." Sensors 20, no. 20 (October 19, 2020): 5911. http://dx.doi.org/10.3390/s20205911.
Gupta, Abhishek, Ahmed Shaharyar Khwaja, Alagan Anpalagan, Ling Guan, and Bala Venkatesh. "Policy-Gradient and Actor-Critic Based State Representation Learning for Safe Driving of Autonomous Vehicles." Sensors 20, no. 21 (October 22, 2020): 5991. http://dx.doi.org/10.3390/s20215991.
Xu, Xibao, Yushen Chen, and Chengchao Bai. "Deep Reinforcement Learning-Based Accurate Control of Planetary Soft Landing." Sensors 21, no. 23 (December 6, 2021): 8161. http://dx.doi.org/10.3390/s21238161.
Mollahasani, Shahram, Turgay Pamuklu, Rodney Wilson, and Melike Erol-Kantarci. "Energy-Aware Dynamic DU Selection and NF Relocation in O-RAN Using Actor–Critic Learning." Sensors 22, no. 13 (July 3, 2022): 5029. http://dx.doi.org/10.3390/s22135029.
Litwynenko, Karina, and Małgorzata Plechawska-Wójcik. "Analysis of the possibilities for using machine learning algorithms in the Unity environment." Journal of Computer Sciences Institute 20 (September 30, 2021): 197–204. http://dx.doi.org/10.35784/jcsi.2680.
Coraci, Davide, Silvio Brandi, Marco Savino Piscitelli, and Alfonso Capozzoli. "Online Implementation of a Soft Actor-Critic Agent to Enhance Indoor Temperature Control and Energy Efficiency in Buildings." Energies 14, no. 4 (February 14, 2021): 997. http://dx.doi.org/10.3390/en14040997.
Park, Kwan-Woo, MyeongSeop Kim, Jung-Su Kim, and Jae-Han Park. "Path Planning for Multi-Arm Manipulators Using Soft Actor-Critic Algorithm with Position Prediction of Moving Obstacles via LSTM." Applied Sciences 12, no. 19 (September 29, 2022): 9837. http://dx.doi.org/10.3390/app12199837.
Kathirgamanathan, Anjukan, Eleni Mangina, and Donal P. Finn. "Development of a Soft Actor Critic deep reinforcement learning approach for harnessing energy flexibility in a Large Office building." Energy and AI 5 (September 2021): 100101. http://dx.doi.org/10.1016/j.egyai.2021.100101.
Zhang, Bin, Weihao Hu, Di Cao, Tao Li, Zhenyuan Zhang, Zhe Chen, and Frede Blaabjerg. "Soft actor-critic–based multi-objective optimized energy conversion and management strategy for integrated energy systems with renewable energy." Energy Conversion and Management 243 (September 2021): 114381. http://dx.doi.org/10.1016/j.enconman.2021.114381.
Zheng, Yuemin, Jin Tao, Qinglin Sun, Hao Sun, Zengqiang Chen, Mingwei Sun, and Guangming Xie. "Soft Actor–Critic based active disturbance rejection path following control for unmanned surface vessel under wind and wave disturbances." Ocean Engineering 247 (March 2022): 110631. http://dx.doi.org/10.1016/j.oceaneng.2022.110631.
Zhao, Xiaohu, Hanli Jiang, Chenyang An, Ruocheng Wu, Yijun Guo, and Daquan Yang. "A Method of Multi-UAV Cooperative Task Assignment Based on Reinforcement Learning." Mobile Information Systems 2022 (August 12, 2022): 1–9. http://dx.doi.org/10.1155/2022/1147819.
Jurj, Sorin Liviu, Dominik Grundt, Tino Werner, Philipp Borchers, Karina Rothemann, and Eike Möhlmann. "Increasing the Safety of Adaptive Cruise Control Using Physics-Guided Reinforcement Learning." Energies 14, no. 22 (November 12, 2021): 7572. http://dx.doi.org/10.3390/en14227572.
Backman, Sofi, Daniel Lindmark, Kenneth Bodin, Martin Servin, Joakim Mörk, and Håkan Löfgren. "Continuous Control of an Underground Loader Using Deep Reinforcement Learning." Machines 9, no. 10 (September 27, 2021): 216. http://dx.doi.org/10.3390/machines9100216.
Phan Bui, Khoi, Giang Nguyen Truong, and Dat Nguyen Ngoc. "GCTD3: Modeling of Bipedal Locomotion by Combination of TD3 Algorithms and Graph Convolutional Network." Applied Sciences 12, no. 6 (March 14, 2022): 2948. http://dx.doi.org/10.3390/app12062948.
Qi, Qi, Wenbin Lin, Boyang Guo, Jinshan Chen, Chaoping Deng, Guodong Lin, Xin Sun, and Youjia Chen. "Augmented Lagrangian-Based Reinforcement Learning for Network Slicing in IIoT." Electronics 11, no. 20 (October 19, 2022): 3385. http://dx.doi.org/10.3390/electronics11203385.
Prianto, Evan, Jae-Han Park, Ji-Hun Bae, and Jung-Su Kim. "Deep Reinforcement Learning-Based Path Planning for Multi-Arm Manipulators with Periodically Moving Obstacles." Applied Sciences 11, no. 6 (March 14, 2021): 2587. http://dx.doi.org/10.3390/app11062587.
Wen, Wen, Yuyu Yuan, and Jincui Yang. "Reinforcement Learning for Options Trading." Applied Sciences 11, no. 23 (November 25, 2021): 11208. http://dx.doi.org/10.3390/app112311208.
Yuan, Yuyu, Wen Wen, and Jincui Yang. "Using Data Augmentation Based Reinforcement Learning for Daily Stock Trading." Electronics 9, no. 9 (August 27, 2020): 1384. http://dx.doi.org/10.3390/electronics9091384.
Tovarnov, M. S., and N. V. Bykov. "Reinforcement learning reward function in unmanned aerial vehicle control tasks." Journal of Physics: Conference Series 2308, no. 1 (July 1, 2022): 012004. http://dx.doi.org/10.1088/1742-6596/2308/1/012004.
Xu, Yuting, Chao Wang, Jiakai Liang, Keqiang Yue, Wenjun Li, Shilian Zheng, and Zhijin Zhao. "Deep Reinforcement Learning Based Decision Making for Complex Jamming Waveforms." Entropy 24, no. 10 (October 10, 2022): 1441. http://dx.doi.org/10.3390/e24101441.
Choi, Hongrok, and Sangheon Pack. "Cooperative Downloading for LEO Satellite Networks: A DRL-Based Approach." Sensors 22, no. 18 (September 10, 2022): 6853. http://dx.doi.org/10.3390/s22186853.
Zhang, Jian, and Fengge Wu. "A Novel Model-Based Reinforcement Learning Attitude Control Method for Virtual Reality Satellite." Wireless Communications and Mobile Computing 2021 (July 1, 2021): 1–11. http://dx.doi.org/10.1155/2021/7331894.
Shahid, Asad Ali, Dario Piga, Francesco Braghin, and Loris Roveda. "Continuous control actions learning and adaptation for robotic manipulation through reinforcement learning." Autonomous Robots 46, no. 3 (February 9, 2022): 483–98. http://dx.doi.org/10.1007/s10514-022-10034-z.
Yatawatta, Sarod, and Ian M. Avruch. "Deep reinforcement learning for smart calibration of radio telescopes." Monthly Notices of the Royal Astronomical Society 505, no. 2 (May 17, 2021): 2141–50. http://dx.doi.org/10.1093/mnras/stab1401.
Sun, Haoran, Tingting Fu, Yuanhuai Ling, and Chaoming He. "Adaptive Quadruped Balance Control for Dynamic Environments Using Maximum-Entropy Reinforcement Learning." Sensors 21, no. 17 (September 2, 2021): 5907. http://dx.doi.org/10.3390/s21175907.
Huang, Jianbin, Longji Huang, Meijuan Liu, He Li, Qinglin Tan, Xiaoke Ma, Jiangtao Cui, and De-Shuang Huang. "Deep Reinforcement Learning-based Trajectory Pricing on Ride-hailing Platforms." ACM Transactions on Intelligent Systems and Technology 13, no. 3 (June 30, 2022): 1–19. http://dx.doi.org/10.1145/3474841.
Xu, Haotian, Qi Fang, Cong Hu, Yue Hu, and Quanjun Yin. "MIRA: Model-Based Imagined Rollouts Augmentation for Non-Stationarity in Multi-Agent Systems." Mathematics 10, no. 17 (August 25, 2022): 3059. http://dx.doi.org/10.3390/math10173059.
Kim, MyeongSeop, Jung-Su Kim, Myoung-Su Choi, and Jae-Han Park. "Adaptive Discount Factor for Deep Reinforcement Learning in Continuing Tasks with Uncertainty." Sensors 22, no. 19 (September 25, 2022): 7266. http://dx.doi.org/10.3390/s22197266.
Singh, Arambam James, Akshat Kumar, and Hoong Chuin Lau. "Learning and Exploiting Shaped Reward Models for Large Scale Multiagent RL." Proceedings of the International Conference on Automated Planning and Scheduling 31 (May 17, 2021): 588–96. http://dx.doi.org/10.1609/icaps.v31i1.16007.