To view other types of publications on this topic, follow the link: Learning and control.

Journal articles on the topic "Learning and control"

Create a reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic "Learning and control."

Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, where these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Hewing, Lukas, Kim P. Wabersich, Marcel Menner, and Melanie N. Zeilinger. "Learning-Based Model Predictive Control: Toward Safe Learning in Control." Annual Review of Control, Robotics, and Autonomous Systems 3, no. 1 (2020): 269–96. http://dx.doi.org/10.1146/annurev-control-090419-075625.

Abstract:
Recent successes in the field of machine learning, as well as the availability of increased sensing and computational capabilities in modern control systems, have led to a growing interest in learning and data-driven control techniques. Model predictive control (MPC), as the prime methodology for constrained control, offers a significant opportunity to exploit the abundance of data in a reliable manner, particularly while taking safety constraints into account. This review aims at summarizing and categorizing previous research on learning-based MPC, i.e., the integration or combination of MPC…
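The abstract above describes combining learned models with constrained MPC. As a loose illustration only, the following Python sketch fits a least-squares residual correction to a nominal linear model and uses the corrected model inside a receding-horizon loop with input constraints. The plant, dimensions, and the use of cvxpy are all my assumptions for the demo, not the paper's algorithm.

```python
# A minimal learning-based MPC sketch (illustrative, not from the paper).
# A residual model fitted from data corrects a nominal linear model, and the
# corrected model is used in a constrained receding-horizon optimization.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m, T = 2, 1, 10                            # state dim, input dim, horizon

A_nom = np.array([[1.0, 0.1], [0.0, 1.0]])    # nominal double integrator
B_nom = np.array([[0.0], [0.1]])
A_true = A_nom + np.array([[0.0, 0.02], [0.0, -0.05]])  # unknown true plant

# "Learning" step: fit the one-step model error by least squares.
X = rng.normal(size=(200, n)); U = rng.normal(size=(200, m))
Xn = X @ A_true.T + U @ B_nom.T + 1e-3 * rng.normal(size=(200, n))
Y = Xn - (X @ A_nom.T + U @ B_nom.T)          # residuals of the nominal model
W = np.linalg.lstsq(np.hstack([X, U]), Y, rcond=None)[0].T
A_hat, B_hat = A_nom + W[:, :n], B_nom + W[:, n:]

def mpc_step(x0):
    """One receding-horizon solve with the input constraint |u| <= 1."""
    x = cp.Variable((n, T + 1)); u = cp.Variable((m, T))
    cost, cons = 0, [x[:, 0] == x0]
    for t in range(T):
        cost += cp.sum_squares(x[:, t]) + 0.1 * cp.sum_squares(u[:, t])
        cons += [x[:, t + 1] == A_hat @ x[:, t] + B_hat @ u[:, t],
                 cp.abs(u[:, t]) <= 1.0]
    cp.Problem(cp.Minimize(cost), cons).solve()
    return u.value[:, 0]

x = np.array([1.0, 0.0])
for _ in range(20):                           # closed loop on the true plant
    x = A_true @ x + B_nom @ mpc_step(x)
print("final state:", x)
```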
2

Chiuso, A., and G. Pillonetto. "System Identification: A Machine Learning Perspective." Annual Review of Control, Robotics, and Autonomous Systems 2, no. 1 (2019): 281–304. http://dx.doi.org/10.1146/annurev-control-053018-023744.

Abstract:
Estimation of functions from sparse and noisy data is a central theme in machine learning. In the last few years, many algorithms have been developed that exploit Tikhonov regularization theory and reproducing kernel Hilbert spaces. These are the so-called kernel-based methods, which include powerful approaches like regularization networks, support vector machines, and Gaussian regression. Recently, these techniques have also gained popularity in the system identification community. In both linear and nonlinear settings, kernels that incorporate information on dynamic systems, such as the smoo…
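The kernel-based estimators the abstract mentions reduce, in the simplest Gaussian-regression case, to one linear solve. The toy sketch below identifies an FIR model under a decaying prior in the spirit of stable-spline kernels; the diagonal kernel and every constant are illustrative choices of mine, not taken from the paper.

```python
# Toy kernel-based FIR identification (illustrative constants and kernel).
import numpy as np

rng = np.random.default_rng(1)
N, p = 150, 50                           # data length, FIR order
g_true = 0.8 ** np.arange(p)             # decaying true impulse response

u = rng.normal(size=N)
Phi = np.array([[u[t - k] if t - k >= 0 else 0.0 for k in range(p)]
                for t in range(N)])      # regressors built from past inputs
y = Phi @ g_true + 0.1 * rng.normal(size=N)

# Prior with variance decaying in the lag: impulse responses of stable
# systems decay, which is the information the kernel encodes.
K = np.diag(0.9 ** np.arange(p))
lam = 0.01                               # Tikhonov regularization weight

# Regularized (Gaussian-regression) estimate:
# g_hat = K Phi^T (Phi K Phi^T + lam I)^{-1} y
g_hat = K @ Phi.T @ np.linalg.solve(Phi @ K @ Phi.T + lam * np.eye(N), y)
print("estimation error:", np.linalg.norm(g_hat - g_true))
```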
3

Antsaklis, P. J. "Intelligent Learning Control." IEEE Control Systems 15, no. 3 (1995): 5–7. http://dx.doi.org/10.1109/mcs.1995.594467.

4

Ali, S. Nageeb. "Learning Self-Control." Quarterly Journal of Economics 126, no. 2 (2011): 857–93. http://dx.doi.org/10.1093/qje/qjr014.

5

Barto, Andrew G. "Reinforcement learning control." Current Opinion in Neurobiology 4, no. 6 (1994): 888–93. http://dx.doi.org/10.1016/0959-4388(94)90138-4.

6

Matsubara, Takamitsu. "Learning Control Policies by Reinforcement Learning." Journal of the Robotics Society of Japan 36, no. 9 (2018): 597–600. http://dx.doi.org/10.7210/jrsj.36.597.

7

Dang, Ngoc Trung, and Phuong Nam Dao. "Data-Driven Reinforcement Learning Control for Quadrotor Systems." International Journal of Mechanical Engineering and Robotics Research 13, no. 5 (2024): 495–501. http://dx.doi.org/10.18178/ijmerr.13.5.495-501.

Abstract:
This paper aims to solve the tracking problem and optimality effectiveness of an Unmanned Aerial Vehicle (UAV) by model-free data Reinforcement Learning (RL) algorithms in both sub-systems of attitude and position. First, a cascade UAV model structure is given to establish the control system diagram with two corresponding attitude and position control loops. Second, based on the computation of the time derivative of the Bellman function by two different methods, the combination of the Bellman function and the optimal control is adopted to maintain the control signal as time converges to infini…
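The paper's algorithm is continuous-time and model-free, which is beyond a few lines; as a much simpler stand-in, the sketch below shows the discrete-time Bellman (Riccati) recursion that the combination of Bellman function and optimal control rests on. All matrices are invented for the toy.

```python
# Bellman (Riccati) fixed-point iteration for a discrete-time LQR toy problem;
# a loose stand-in for the paper's continuous-time, model-free scheme.
import numpy as np

A = np.array([[1.0, 0.05], [0.0, 0.9]]); B = np.array([[0.0], [0.05]])
Q, R = np.diag([10.0, 1.0]), np.eye(1)

P = np.zeros((2, 2))
for _ in range(500):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)       # greedy gain from P
    P = Q + K.T @ R @ K + (A - B @ K).T @ P @ (A - B @ K)   # Bellman update
print("converged gain:", K)
```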
8

Freeman, Chris, and Ying Tan. "Iterative learning control and repetitive control." International Journal of Control 84, no. 7 (2011): 1193–95. http://dx.doi.org/10.1080/00207179.2011.596574.

9

Recht, Benjamin. "A Tour of Reinforcement Learning: The View from Continuous Control." Annual Review of Control, Robotics, and Autonomous Systems 2, no. 1 (2019): 253–79. http://dx.doi.org/10.1146/annurev-control-053018-023825.

Abstract:
This article surveys reinforcement learning from the perspective of optimization and control, with a focus on continuous control applications. It reviews the general formulation, terminology, and typical experimental implementations of reinforcement learning as well as competing solution paradigms. In order to compare the relative merits of various techniques, it presents a case study of the linear quadratic regulator (LQR) with unknown dynamics, perhaps the simplest and best-studied problem in optimal control. It also describes how merging techniques from learning theory and control can provi…
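The LQR case study mentioned in the abstract has a natural baseline, certainty equivalence: estimate (A, B) by least squares from excitation data, then solve the Riccati equation as if the estimate were exact. A sketch under invented dimensions and noise levels:

```python
# Certainty-equivalent LQR: least-squares system identification followed by a
# Riccati solve on the estimates (all constants invented for the demo).
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(2)
A = np.array([[1.01, 0.01], [0.0, 0.99]])    # true but "unknown" dynamics
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)

X, U, Xn = [], [], []
x = np.zeros(2)
for _ in range(500):                          # random-input excitation rollout
    u = rng.normal(size=1)
    xn = A @ x + B @ u + 0.01 * rng.normal(size=2)
    X.append(x); U.append(u); Xn.append(xn); x = xn

Z = np.hstack([np.array(X), np.array(U)])     # least-squares fit of [A B]
Theta = np.linalg.lstsq(Z, np.array(Xn), rcond=None)[0].T
A_hat, B_hat = Theta[:, :2], Theta[:, 2:]

P = solve_discrete_are(A_hat, B_hat, Q, R)    # plug-in Riccati solution
K = np.linalg.solve(R + B_hat.T @ P @ B_hat, B_hat.T @ P @ A_hat)
print("certainty-equivalent gain:", K)
```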
10

Ravichandar, Harish, Athanasios S. Polydoros, Sonia Chernova, and Aude Billard. "Recent Advances in Robot Learning from Demonstration." Annual Review of Control, Robotics, and Autonomous Systems 3, no. 1 (2020): 297–330. http://dx.doi.org/10.1146/annurev-control-100819-063206.

Abstract:
In the context of robotics and automation, learning from demonstration (LfD) is the paradigm in which robots acquire new skills by learning to imitate an expert. The choice of LfD over other robot learning methods is compelling when ideal behavior can be neither easily scripted (as is done in traditional robot programming) nor easily defined as an optimization problem, but can be demonstrated. While there have been multiple surveys of this field in the past, there is a need for a new one given the considerable growth in the number of publications in recent years. This review aims to provide an…
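The simplest LfD flavour such surveys cover is behavioral cloning: regress the expert's actions on the states where they were demonstrated. A tiny sketch with an invented linear expert:

```python
# Behavioral cloning in one least-squares solve (expert and data invented).
import numpy as np

rng = np.random.default_rng(4)
K_expert = np.array([[0.8, 1.2]])        # pretend expert feedback gain

states = rng.normal(size=(300, 2))       # states visited in demonstrations
actions = states @ K_expert.T + 0.01 * rng.normal(size=(300, 1))

K_clone = np.linalg.lstsq(states, actions, rcond=None)[0].T
print("recovered gain:", K_clone)        # close to K_expert
```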
11

Hu, Bin, Kaiqing Zhang, Na Li, Mehran Mesbahi, Maryam Fazel, and Tamer Başar. "Toward a Theoretical Foundation of Policy Optimization for Learning Control Policies." Annual Review of Control, Robotics, and Autonomous Systems 6, no. 1 (2023): 123–58. http://dx.doi.org/10.1146/annurev-control-042920-020021.

Abstract:
Gradient-based methods have been widely used for system design and optimization in diverse application domains. Recently, there has been a renewed interest in studying theoretical properties of these methods in the context of control and reinforcement learning. This article surveys some of the recent developments on policy optimization, a gradient-based iterative approach for feedback control synthesis that has been popularized by successes of reinforcement learning. We take an interdisciplinary perspective in our exposition that connects control theory, reinforcement learning, and large-scale…
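In its most stripped-down form, the policy optimization the abstract surveys can be run on the LQR benchmark: search directly over a static feedback gain with a two-point zeroth-order gradient estimate. Every constant below is an illustrative choice of mine:

```python
# Zeroth-order policy search over a static LQR gain (illustrative constants).
import numpy as np

rng = np.random.default_rng(3)
A = np.array([[1.0, 0.1], [0.0, 1.0]]); B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), 0.1 * np.eye(1)

def cost(K, T=50):
    """Finite-horizon cost of the policy u = -K x from a fixed start state."""
    x, c = np.array([1.0, 0.0]), 0.0
    for _ in range(T):
        u = -K @ x
        c += x @ Q @ x + u @ R @ u
        x = A @ x + B @ u
    return c

K, step, sigma = np.zeros((1, 2)), 1e-3, 0.1
for _ in range(3000):
    d = rng.normal(size=K.shape)           # random search direction
    g = (cost(K + sigma * d) - cost(K - sigma * d)) / (2 * sigma) * d
    K -= step * np.clip(g, -100.0, 100.0)  # clipped two-point gradient step
print("learned gain:", K, "cost:", cost(K))
```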
12

WU, Q. H. "Reinforcement learning control using interconnected learning automata." International Journal of Control 62, no. 1 (1995): 1–16. http://dx.doi.org/10.1080/00207179508921531.

13

You, Byungyong. "Normalized Learning Rule for Iterative Learning Control." International Journal of Control, Automation and Systems 16, no. 3 (2018): 1379–89. http://dx.doi.org/10.1007/s12555-017-0194-z.

14

Zhang, Quanqi, Chengwei Wu, Haoyu Tian, Yabin Gao, Weiran Yao, and Ligang Wu. "Safety reinforcement learning control via transfer learning." Automatica 166 (August 2024): 111714. http://dx.doi.org/10.1016/j.automatica.2024.111714.

15

Achille, Alessandro, and Stefano Soatto. "A Separation Principle for Control in the Age of Deep Learning." Annual Review of Control, Robotics, and Autonomous Systems 1, no. 1 (2018): 287–307. http://dx.doi.org/10.1146/annurev-control-060117-105140.

Abstract:
We review the problem of defining and inferring a state for a control system based on complex, high-dimensional, highly uncertain measurement streams, such as videos. Such a state, or representation, should contain all and only the information needed for control and discount nuisance variability in the data. It should also have finite complexity, ideally modulated depending on available resources. This representation is what we want to store in memory in lieu of the data, as it separates the control task from the measurement process. For the trivial case with no dynamics, a representation can…
16

Youssef, Ayman, Mohamed El Telbany, and Abdelhalim Zekry. "Reinforcement Learning for Online Maximum Power Point Tracking Control." Journal of Clean Energy Technologies 4, no. 4 (2015): 245–48. http://dx.doi.org/10.7763/jocet.2016.v4.290.

17

Amor, Heni Ben, Shuhei Ikemoto, Takashi Minato, and Hiroshi Ishiguro. "1P1-E07 Learning Android Control using Growing Neural Networks." Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) 2006 (2006): _1P1-E07_1–_1P1-E07_4. http://dx.doi.org/10.1299/jsmermd.2006._1p1-e07_1.

18

SUGANUMA, Yoshinori, and Masami ITO. "Learning Control and Knowledge." Transactions of the Society of Instrument and Control Engineers 22, no. 8 (1986): 841–48. http://dx.doi.org/10.9746/sicetr1965.22.841.

19

Marino, R., S. Scalzi, P. Tomei, and C. M. Verrelli. "Generalized PID learning control." IFAC Proceedings Volumes 44, no. 1 (2011): 3629–34. http://dx.doi.org/10.3182/20110828-6-it-1002.01606.

20

Buchli, Jonas, Freek Stulp, Evangelos Theodorou, and Stefan Schaal. "Learning variable impedance control." International Journal of Robotics Research 30, no. 7 (2011): 820–33. http://dx.doi.org/10.1177/0278364911402527.

21

Dada, Maqbool, and Richard Marcellus. "Process Control with Learning." Operations Research 42, no. 2 (1994): 323–36. http://dx.doi.org/10.1287/opre.42.2.323.

22

Huang, S. N., and S. Y. Lim. "Predictive Iterative Learning Control." Intelligent Automation & Soft Computing 9, no. 2 (2003): 103–12. http://dx.doi.org/10.1080/10798587.2000.10642847.

23

Schaal, Stefan, and Christopher G. Atkeson. "Learning Control in Robotics." IEEE Robotics & Automation Magazine 17, no. 2 (2010): 20–29. http://dx.doi.org/10.1109/mra.2010.936957.

24

Hart, Stephen, and Roderic Grupen. "Learning Generalizable Control Programs." IEEE Transactions on Autonomous Mental Development 3, no. 3 (2011): 216–31. http://dx.doi.org/10.1109/tamd.2010.2103311.

25

Gallego, Jorge, Elise Nsegbe, and Estelle Durand. "Learning in Respiratory Control." Behavior Modification 25, no. 4 (2001): 495–512. http://dx.doi.org/10.1177/0145445501254002.

26

Ekanayake, Jinendra, Aswin Chari, Claudia Craven, et al. "Learning to control ICP." Fluids and Barriers of the CNS 12, Suppl 1 (2015): P12. http://dx.doi.org/10.1186/2045-8118-12-s1-p12.

27

Evans, Glen. "Getting learning under control." Australian Educational Researcher 15, no. 1 (1988): 1–17. http://dx.doi.org/10.1007/bf03219398.

28

Wang, C., and D. J. Hill. "Learning From Neural Control." IEEE Transactions on Neural Networks 17, no. 1 (2006): 130–46. http://dx.doi.org/10.1109/tnn.2005.860843.

29

Smith, Ian. "Tuberculosis Control Learning Games." Tropical Doctor 23, no. 3 (1993): 101–3. http://dx.doi.org/10.1177/004947559302300304.

Abstract:
In teaching health workers about tuberculosis (TB) control we frequently concentrate on the technological aspects, such as diagnosis, treatment and recording. Health workers also need to understand the sociological aspects of TB control, particularly those that influence the likelihood of diagnosis and cure. Two games are presented that help health workers comprehend the reasons why TB patients often delay in presenting for diagnosis, and why they then frequently default from treatment.
30

ARIMOTO, Suguru. "Theory of Learning Control." Journal of the Society of Mechanical Engineers 93, no. 856 (1990): 180–86. http://dx.doi.org/10.1299/jsmemag.93.856_180.

31

Najafi, Esmaeil, Robert Babuska, and Gabriel A. D. Lopes. "Learning Sequential Composition Control." IEEE Transactions on Cybernetics 46, no. 11 (2016): 2559–69. http://dx.doi.org/10.1109/tcyb.2015.2481081.

32

Shi, Jia, Furong Gao, and Tie-Jun Wu. "From Two-Dimensional Linear Quadratic Optimal Control to Iterative Learning Control. Paper 2. Iterative Learning Controls for Batch Processes." Industrial & Engineering Chemistry Research 45, no. 13 (2006): 4617–28. http://dx.doi.org/10.1021/ie051298a.

33

Poot, Maurice, Jim Portegies, and Tom Oomen. "On the Role of Models in Learning Control: Actor-Critic Iterative Learning Control." IFAC-PapersOnLine 53, no. 2 (2020): 1450–55. http://dx.doi.org/10.1016/j.ifacol.2020.12.1918.

34

Pasamontes, Manuel, José Domingo Alvarez, José Luis Guzman, and Manuel Berenguel. "Learning Switching Control: A Tank Level-Control Exercise." IEEE Transactions on Education 55, no. 2 (2012): 226–32. http://dx.doi.org/10.1109/te.2011.2162239.

35

Wawrzyński, Paweł. "Control Policy with Autocorrelated Noise in Reinforcement Learning for Robotics." International Journal of Machine Learning and Computing 5, no. 2 (2015): 91–95. http://dx.doi.org/10.7763/ijmlc.2015.v5.489.

36

Genders, Wade, and Saiedeh Razavi. "Evaluating Reinforcement Learning State Representations for Adaptive Traffic Signal Control." International Journal of Traffic and Transportation Management 1, no. 1 (2019): 19–26. http://dx.doi.org/10.5383/jttm.01.01.003.

37

AZLAN, Norsinnira, and Hiroshi YAMAURA. "20204 Study of Feedback Error Learning Control for Underactuated Systems." Proceedings of Conference of Kanto Branch 2009.15 (2009): 149–50. http://dx.doi.org/10.1299/jsmekanto.2009.15.149.

38

Irfan, C. M. Althaff, Karim Ouzzane, Shusaku Nomura, and Yoshimi Fukumura. "211 AN ACCESS CONTROL SYSTEM For E-Learning MANAGEMENT SYSTEMS." Proceedings of Conference of Hokuriku-Shinetsu Branch 2010.47 (2010): 59–60. http://dx.doi.org/10.1299/jsmehs.2010.47.59.

39

白家納 and 黃崇能 (Pachara Opattrakarnkul). "以深度學習模式估測控制之駕駛輔助系統的研發." 理工研究國際期刊 12, no. 1 (2022): 015–24. http://dx.doi.org/10.53106/222344892022041201002.

Abstract:
Adaptive cruise control (ACC) systems are designed to provide longitudinal assistance to enhance safety and driving comfort by adjusting vehicle velocity to maintain a safe distance between the host vehicle and the preceding vehicle. Generally, using model predictive control (MPC) in ACC systems provides high responsiveness and lower discomfort by solving real-time constrained optimization problems but results in computational load. This paper presents an architecture of deep learning based on model predictive control in ACC systems to avoid real-time optimization problems required by…
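The general idea in the paper, learning an offline surrogate for the online MPC solve, can be caricatured in a few lines. Below, a least-squares fit on polynomial features stands in for the deep network, and a hand-made saturated ACC law stands in for the MPC; both are my assumptions, not the authors' design.

```python
# Offline imitation of an "expensive" ACC control law by a cheap surrogate
# (the stand-in law and the feature map are invented for this demo).
import numpy as np

rng = np.random.default_rng(5)

def mpc_accel(gap, rel_v):
    """Stand-in ACC law: close toward a 30 m gap and match the lead vehicle's
    speed, saturated at comfort limits of -3 and +2 m/s^2."""
    return np.clip(0.5 * (gap - 30.0) + 0.8 * rel_v, -3.0, 2.0)

# Offline: sample states, query the law, fit a surrogate on polynomial
# features (this is where the paper would train a deep network instead).
gaps = rng.uniform(5, 60, 2000); rel_vs = rng.uniform(-10, 10, 2000)
F = np.column_stack([np.ones(2000), gaps, rel_vs,
                     gaps**2, rel_vs**2, gaps * rel_vs])
w = np.linalg.lstsq(F, mpc_accel(gaps, rel_vs), rcond=None)[0]

def surrogate(gap, rel_v):
    """Online: one dot product instead of a constrained optimization."""
    f = np.array([1.0, gap, rel_v, gap**2, rel_v**2, gap * rel_v])
    return float(np.clip(f @ w, -3.0, 2.0))

print(mpc_accel(35.0, -2.0), surrogate(35.0, -2.0))
```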
40

Hein, Daniel, Steffen Limmer, and Thomas A. Runkler. "Interpretable Control by Reinforcement Learning." IFAC-PapersOnLine 53, no. 2 (2020): 8082–89. http://dx.doi.org/10.1016/j.ifacol.2020.12.2277.

41

Kurata, K. "GREENHOUSE CONTROL BY MACHINE LEARNING." Acta Horticulturae, no. 230 (September 1988): 195–200. http://dx.doi.org/10.17660/actahortic.1988.230.23.

42

Žáková, Katarína, and Richard Balogh. "Control Engineering Home Learning Kit." IFAC-PapersOnLine 55, no. 4 (2022): 310–15. http://dx.doi.org/10.1016/j.ifacol.2022.06.051.

43

Yuan, Xu, Lucian Buşoniu, and Robert Babuška. "Reinforcement Learning for Elevator Control." IFAC Proceedings Volumes 41, no. 2 (2008): 2212–17. http://dx.doi.org/10.3182/20080706-5-kr-1001.00373.

44

De Loo, I. "Management control bij action learning." Maandblad Voor Accountancy en Bedrijfseconomie 77, no. 10 (2003): 445–52. http://dx.doi.org/10.5117/mab.77.11785.

Abstract:
In contrast to the 1960s and 1970s, evaluations of action learning programmes increasingly find that, although they have led to personal growth, they no longer lead to organizational growth. This article argues that one of the main causes is that no specific role is reserved for management control systems in action learning. If, however, forms of "trial and error" and "intuitive control" were applied, it is not inconceivable that organizational growth would again be realized, provided that action l…
45

Benavent, Christophe. "CRM, LEARNING AND ORGANIZATIONAL CONTROL." JISTEM Journal of Information Systems and Technology Management 3, no. 2 (2006): 193–210. http://dx.doi.org/10.4301/s1807-17752006000200006.

46

Layne, Jeffery R., and Kevin M. Passino. "Fuzzy Model Reference Learning Control." Journal of Intelligent and Fuzzy Systems 4, no. 1 (1996): 33–47. http://dx.doi.org/10.3233/ifs-1996-4103.

47

Horowitz, Roberto. "Learning Control Applications to Mechatronics." JSME international journal. Ser. C, Dynamics, control, robotics, design and manufacturing 37, no. 3 (1994): 421–30. http://dx.doi.org/10.1299/jsmec1993.37.421.

48

Jiménez, E., and M. Rodríguez. "WEB BASED PROCESS CONTROL LEARNING." IFAC Proceedings Volumes 39, no. 6 (2006): 349–54. http://dx.doi.org/10.3182/20060621-3-es-2905.00061.

49

Holder, Tim. "Motor Control, Learning and Development." Journal of Sports Sciences 26, no. 12 (2008): 1375–76. http://dx.doi.org/10.1080/02640410802271547.

50

Suganuma, Y., and M. Ito. "Learning in movement and control." IEEE Transactions on Systems, Man, and Cybernetics 19, no. 2 (1989): 258–70. http://dx.doi.org/10.1109/21.31031.
