Journal articles on the topic 'Learning and control'

Consult the top 50 journal articles for your research on the topic 'Learning and control.'


1

Hewing, Lukas, Kim P. Wabersich, Marcel Menner, and Melanie N. Zeilinger. "Learning-Based Model Predictive Control: Toward Safe Learning in Control." Annual Review of Control, Robotics, and Autonomous Systems 3, no. 1 (2020): 269–96. http://dx.doi.org/10.1146/annurev-control-090419-075625.

Abstract:
Recent successes in the field of machine learning, as well as the availability of increased sensing and computational capabilities in modern control systems, have led to a growing interest in learning and data-driven control techniques. Model predictive control (MPC), as the prime methodology for constrained control, offers a significant opportunity to exploit the abundance of data in a reliable manner, particularly while taking safety constraints into account. This review aims at summarizing and categorizing previous research on learning-based MPC, i.e., the integration or combination of MPC
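The receding-horizon principle behind MPC can be illustrated with a minimal sketch. The scalar plant, discretized input set, horizon length, and cost weights below are illustrative assumptions, not taken from the review:

```python
import numpy as np
from itertools import product

# Illustrative scalar plant x+ = a*x + b*u with a bounded, discretized input set.
a, b = 1.2, 1.0           # open-loop unstable (|a| > 1)
U = (-1.0, 0.0, 1.0)      # admissible inputs (the constraint set)
N = 4                     # prediction horizon

def mpc_step(x):
    """Enumerate all input sequences over the horizon, pick the cheapest one
    under a quadratic cost, and apply only its first input (receding horizon)."""
    best_u, best_cost = 0.0, np.inf
    for seq in product(U, repeat=N):
        xk, cost = x, 0.0
        for u in seq:
            cost += xk ** 2 + 0.1 * u ** 2
            xk = a * xk + b * u
        cost += xk ** 2                    # terminal penalty
        if cost < best_cost:
            best_cost, best_u = cost, seq[0]
    return best_u

# Closed-loop simulation: re-solve from the measured state at every step
x = 2.5
for _ in range(15):
    x = a * x + b * mpc_step(x)
```

Re-solving the finite-horizon problem from the current state at every step is what lets constraints, here the bounded input set, be enforced explicitly, which is the property learning-based MPC tries to preserve while using data-driven models.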
2

Chiuso, A., and G. Pillonetto. "System Identification: A Machine Learning Perspective." Annual Review of Control, Robotics, and Autonomous Systems 2, no. 1 (2019): 281–304. http://dx.doi.org/10.1146/annurev-control-053018-023744.

Abstract:
Estimation of functions from sparse and noisy data is a central theme in machine learning. In the last few years, many algorithms have been developed that exploit Tikhonov regularization theory and reproducing kernel Hilbert spaces. These are the so-called kernel-based methods, which include powerful approaches like regularization networks, support vector machines, and Gaussian regression. Recently, these techniques have also gained popularity in the system identification community. In both linear and nonlinear settings, kernels that incorporate information on dynamic systems, such as the smoo
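As a toy illustration of the kernel-based estimators the authors survey, the sketch below fits noisy samples of a smooth function by Tikhonov-regularized (kernel ridge) regression with a Gaussian kernel. The data, kernel width, and regularization weight are illustrative assumptions:

```python
import numpy as np

def gaussian_kernel(X1, X2, width=1.0):
    """Gram matrix of the Gaussian (RBF) kernel between two 1-D input sets."""
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return np.exp(-d2 / (2.0 * width ** 2))

def kernel_ridge_fit(x, y, lam=0.1, width=1.0):
    """Tikhonov-regularized fit in the RKHS: coefficients c = (K + lam*I)^(-1) y."""
    K = gaussian_kernel(x, x, width)
    return np.linalg.solve(K + lam * np.eye(len(x)), y)

def kernel_ridge_predict(x_train, c, x_new, width=1.0):
    """Evaluate the fitted function f(x) = sum_i c_i k(x, x_i)."""
    return gaussian_kernel(x_new, x_train, width) @ c

# Noisy samples of a smooth function
rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 30)
y = np.sin(x) + 0.05 * rng.standard_normal(30)

c = kernel_ridge_fit(x, y, lam=0.1, width=0.7)
y_hat = kernel_ridge_predict(x, c, x)
```

In the system-identification setting discussed in the article, the same machinery is applied with kernels encoding properties of dynamic systems rather than a generic RBF.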
3

Antsaklis, P. J. "Intelligent Learning Control." IEEE Control Systems 15, no. 3 (1995): 5–7. http://dx.doi.org/10.1109/mcs.1995.594467.

4

Ali, S. Nageeb. "Learning Self-Control." Quarterly Journal of Economics 126, no. 2 (2011): 857–93. http://dx.doi.org/10.1093/qje/qjr014.

5

Barto, Andrew G. "Reinforcement learning control." Current Opinion in Neurobiology 4, no. 6 (1994): 888–93. http://dx.doi.org/10.1016/0959-4388(94)90138-4.

6

Matsubara, Takamitsu. "Learning Control Policies by Reinforcement Learning." Journal of the Robotics Society of Japan 36, no. 9 (2018): 597–600. http://dx.doi.org/10.7210/jrsj.36.597.

7

Dang, Ngoc Trung, and Phuong Nam Dao. "Data-Driven Reinforcement Learning Control for Quadrotor Systems." International Journal of Mechanical Engineering and Robotics Research 13, no. 5 (2024): 495–501. http://dx.doi.org/10.18178/ijmerr.13.5.495-501.

Abstract:
This paper aims to solve the tracking problem and optimality effectiveness of an Unmanned Aerial Vehicle (UAV) by model-free data Reinforcement Learning (RL) algorithms in both sub-systems of attitude and position. First, a cascade UAV model structure is given to establish the control system diagram with two corresponding attitude and position control loops. Second, based on the computation of the time derivative of the Bellman function by two different methods, the combination of the Bellman function and the optimal control is adopted to maintain the control signal as time converges to infini
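The Bellman recursion underlying such RL controllers can be made concrete on a toy problem. The chain MDP below is an illustrative assumption, unrelated to the quadrotor system in the paper:

```python
import numpy as np

# Toy 4-state chain MDP: action 0 moves left, action 1 moves right (both
# deterministic, saturating at the ends); reward 1 in the rightmost state.
n_states, gamma = 4, 0.9
P = np.zeros((2, n_states, n_states))          # P[a, s, s'] transition kernel
for s in range(n_states):
    P[0, s, max(s - 1, 0)] = 1.0
    P[1, s, min(s + 1, n_states - 1)] = 1.0
r = np.array([0.0, 0.0, 0.0, 1.0])             # state reward

# Value iteration: repeatedly apply the Bellman optimality operator
V = np.zeros(n_states)
for _ in range(200):
    Q = r[None, :] + gamma * (P @ V)           # Q[a, s]
    V = Q.max(axis=0)

policy = Q.argmax(axis=0)                      # greedy policy: 1 = move right
```

Model-free methods like those in the paper estimate the same Bellman quantities from trajectory data instead of from a known transition kernel `P`.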
8

Freeman, Chris, and Ying Tan. "Iterative learning control and repetitive control." International Journal of Control 84, no. 7 (2011): 1193–95. http://dx.doi.org/10.1080/00207179.2011.596574.

9

Recht, Benjamin. "A Tour of Reinforcement Learning: The View from Continuous Control." Annual Review of Control, Robotics, and Autonomous Systems 2, no. 1 (2019): 253–79. http://dx.doi.org/10.1146/annurev-control-053018-023825.

Abstract:
This article surveys reinforcement learning from the perspective of optimization and control, with a focus on continuous control applications. It reviews the general formulation, terminology, and typical experimental implementations of reinforcement learning as well as competing solution paradigms. In order to compare the relative merits of various techniques, it presents a case study of the linear quadratic regulator (LQR) with unknown dynamics, perhaps the simplest and best-studied problem in optimal control. It also describes how merging techniques from learning theory and control can provi
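The LQR baseline used in the article's case study is easy to reproduce when the dynamics are known. The sketch below iterates the discrete-time Riccati equation; the double-integrator plant and cost weights are illustrative assumptions, not taken from the article:

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Infinite-horizon discrete-time LQR gain via fixed-point iteration on the
    Riccati equation: P <- Q + A'PA - A'PB (R + B'PB)^{-1} B'PA."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ A - A.T @ P @ B @ K
    return K, P

# Double integrator with unit timestep (position, velocity)
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

K, P = dlqr(A, B, Q, R)
eigs = np.linalg.eigvals(A - B @ K)   # closed-loop poles, inside the unit circle
```

With unknown dynamics, the RL approaches the article compares must recover a gain like `K` from data, which is exactly what makes LQR a clean benchmark.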
10

Ravichandar, Harish, Athanasios S. Polydoros, Sonia Chernova, and Aude Billard. "Recent Advances in Robot Learning from Demonstration." Annual Review of Control, Robotics, and Autonomous Systems 3, no. 1 (2020): 297–330. http://dx.doi.org/10.1146/annurev-control-100819-063206.

Abstract:
In the context of robotics and automation, learning from demonstration (LfD) is the paradigm in which robots acquire new skills by learning to imitate an expert. The choice of LfD over other robot learning methods is compelling when ideal behavior can be neither easily scripted (as is done in traditional robot programming) nor easily defined as an optimization problem, but can be demonstrated. While there have been multiple surveys of this field in the past, there is a need for a new one given the considerable growth in the number of publications in recent years. This review aims to provide an
11

Hu, Bin, Kaiqing Zhang, Na Li, Mehran Mesbahi, Maryam Fazel, and Tamer Başar. "Toward a Theoretical Foundation of Policy Optimization for Learning Control Policies." Annual Review of Control, Robotics, and Autonomous Systems 6, no. 1 (2023): 123–58. http://dx.doi.org/10.1146/annurev-control-042920-020021.

Abstract:
Gradient-based methods have been widely used for system design and optimization in diverse application domains. Recently, there has been a renewed interest in studying theoretical properties of these methods in the context of control and reinforcement learning. This article surveys some of the recent developments on policy optimization, a gradient-based iterative approach for feedback control synthesis that has been popularized by successes of reinforcement learning. We take an interdisciplinary perspective in our exposition that connects control theory, reinforcement learning, and large-scale
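The simplest instance of the policy optimization the survey discusses is gradient descent directly on a linear feedback gain. The scalar LQR sketch below uses illustrative plant and cost values (not from the survey) and a finite-difference gradient in place of an analytic one:

```python
# Policy optimization on a scalar LQR problem: tune the gain k of the
# linear policy u = -k*x by gradient descent on the closed-loop cost.
a, b, r = 1.1, 1.0, 0.5          # plant x+ = a*x + b*u, input weight r

def cost(k, x0=1.0, T=200):
    """Finite-horizon rollout cost sum(x^2 + r*u^2) under the policy u = -k*x."""
    x, J = x0, 0.0
    for _ in range(T):
        u = -k * x
        J += x * x + r * u * u
        x = a * x + b * u
    return J

# Plain gradient descent with a central finite-difference gradient estimate
k, lr, eps = 0.5, 0.01, 1e-4
for _ in range(300):
    g = (cost(k + eps) - cost(k - eps)) / (2 * eps)
    k -= lr * g
```

Despite the nonconvexity of such cost landscapes in general, this run converges to the LQR-optimal gain (about 0.818 for these values); characterizing when and why that happens is the kind of question the survey addresses.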
12

Wu, Q. H. "Reinforcement learning control using interconnected learning automata." International Journal of Control 62, no. 1 (1995): 1–16. http://dx.doi.org/10.1080/00207179508921531.

13

You, Byungyong. "Normalized Learning Rule for Iterative Learning Control." International Journal of Control, Automation and Systems 16, no. 3 (2018): 1379–89. http://dx.doi.org/10.1007/s12555-017-0194-z.

14

Zhang, Quanqi, Chengwei Wu, Haoyu Tian, Yabin Gao, Weiran Yao, and Ligang Wu. "Safety reinforcement learning control via transfer learning." Automatica 166 (August 2024): 111714. http://dx.doi.org/10.1016/j.automatica.2024.111714.

15

Achille, Alessandro, and Stefano Soatto. "A Separation Principle for Control in the Age of Deep Learning." Annual Review of Control, Robotics, and Autonomous Systems 1, no. 1 (2018): 287–307. http://dx.doi.org/10.1146/annurev-control-060117-105140.

Abstract:
We review the problem of defining and inferring a state for a control system based on complex, high-dimensional, highly uncertain measurement streams, such as videos. Such a state, or representation, should contain all and only the information needed for control and discount nuisance variability in the data. It should also have finite complexity, ideally modulated depending on available resources. This representation is what we want to store in memory in lieu of the data, as it separates the control task from the measurement process. For the trivial case with no dynamics, a representation can
16

Youssef, Ayman, Mohamed El Telbany, and Abdelhalim Zekry. "Reinforcement Learning for Online Maximum Power Point Tracking Control." Journal of Clean Energy Technologies 4, no. 4 (2015): 245–48. http://dx.doi.org/10.7763/jocet.2016.v4.290.

17

Amor, Heni Ben, Shuhei Ikemoto, Takashi Minato, and Hiroshi Ishiguro. "1P1-E07 Learning Android Control using Growing Neural Networks." Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) 2006 (2006): _1P1-E07_1-_1P1-E07_4. http://dx.doi.org/10.1299/jsmermd.2006._1p1-e07_1.

18

Suganuma, Yoshinori, and Masami Ito. "Learning Control and Knowledge." Transactions of the Society of Instrument and Control Engineers 22, no. 8 (1986): 841–48. http://dx.doi.org/10.9746/sicetr1965.22.841.

19

Marino, R., S. Scalzi, P. Tomei, and C. M. Verrelli. "Generalized PID learning control." IFAC Proceedings Volumes 44, no. 1 (2011): 3629–34. http://dx.doi.org/10.3182/20110828-6-it-1002.01606.

20

Buchli, Jonas, Freek Stulp, Evangelos Theodorou, and Stefan Schaal. "Learning variable impedance control." International Journal of Robotics Research 30, no. 7 (2011): 820–33. http://dx.doi.org/10.1177/0278364911402527.

21

Dada, Maqbool, and Richard Marcellus. "Process Control with Learning." Operations Research 42, no. 2 (1994): 323–36. http://dx.doi.org/10.1287/opre.42.2.323.

22

Huang, S. N., and S. Y. Lim. "Predictive Iterative Learning Control." Intelligent Automation & Soft Computing 9, no. 2 (2003): 103–12. http://dx.doi.org/10.1080/10798587.2000.10642847.

23

Schaal, Stefan, and Christopher G. Atkeson. "Learning Control in Robotics." IEEE Robotics & Automation Magazine 17, no. 2 (2010): 20–29. http://dx.doi.org/10.1109/mra.2010.936957.

24

Hart, Stephen, and Roderic Grupen. "Learning Generalizable Control Programs." IEEE Transactions on Autonomous Mental Development 3, no. 3 (2011): 216–31. http://dx.doi.org/10.1109/tamd.2010.2103311.

25

Gallego, Jorge, Elise Nsegbe, and Estelle Durand. "Learning in Respiratory Control." Behavior Modification 25, no. 4 (2001): 495–512. http://dx.doi.org/10.1177/0145445501254002.

26

Ekanayake, Jinendra, Aswin Chari, Claudia Craven, et al. "Learning to control ICP." Fluids and Barriers of the CNS 12, Suppl 1 (2015): P12. http://dx.doi.org/10.1186/2045-8118-12-s1-p12.

27

Evans, Glen. "Getting learning under control." Australian Educational Researcher 15, no. 1 (1988): 1–17. http://dx.doi.org/10.1007/bf03219398.

28

Wang, C., and D. J. Hill. "Learning From Neural Control." IEEE Transactions on Neural Networks 17, no. 1 (2006): 130–46. http://dx.doi.org/10.1109/tnn.2005.860843.

29

Smith, Ian. "Tuberculosis Control Learning Games." Tropical Doctor 23, no. 3 (1993): 101–3. http://dx.doi.org/10.1177/004947559302300304.

Abstract:
In teaching health workers about tuberculosis (TB) control we frequently concentrate on the technological aspects, such as diagnosis, treatment and recording. Health workers also need to understand the sociological aspects of TB control, particularly those that influence the likelihood of diagnosis and cure. Two games are presented that help health workers comprehend the reasons why TB patients often delay in presenting for diagnosis, and why they then frequently default from treatment.
30

Arimoto, Suguru. "Theory of Learning Control." Journal of the Society of Mechanical Engineers 93, no. 856 (1990): 180–86. http://dx.doi.org/10.1299/jsmemag.93.856_180.

31

Najafi, Esmaeil, Robert Babuska, and Gabriel A. D. Lopes. "Learning Sequential Composition Control." IEEE Transactions on Cybernetics 46, no. 11 (2016): 2559–69. http://dx.doi.org/10.1109/tcyb.2015.2481081.

32

Shi, Jia, Furong Gao, and Tie-Jun Wu. "From Two-Dimensional Linear Quadratic Optimal Control to Iterative Learning Control. Paper 2. Iterative Learning Controls for Batch Processes." Industrial & Engineering Chemistry Research 45, no. 13 (2006): 4617–28. http://dx.doi.org/10.1021/ie051298a.

33

Poot, Maurice, Jim Portegies, and Tom Oomen. "On the Role of Models in Learning Control: Actor-Critic Iterative Learning Control." IFAC-PapersOnLine 53, no. 2 (2020): 1450–55. http://dx.doi.org/10.1016/j.ifacol.2020.12.1918.

34

Pasamontes, Manuel, José Domingo Alvarez, José Luis Guzman, and Manuel Berenguel. "Learning Switching Control: A Tank Level-Control Exercise." IEEE Transactions on Education 55, no. 2 (2012): 226–32. http://dx.doi.org/10.1109/te.2011.2162239.

35

Wawrzyński, Paweł. "Control Policy with Autocorrelated Noise in Reinforcement Learning for Robotics." International Journal of Machine Learning and Computing 5, no. 2 (2015): 91–95. http://dx.doi.org/10.7763/ijmlc.2015.v5.489.

36

Genders, Wade, and Saiedeh Razavi. "Evaluating Reinforcement Learning State Representations for Adaptive Traffic Signal Control." International Journal of Traffic and Transportation Management 1, no. 1 (2019): 19–26. http://dx.doi.org/10.5383/jttm.01.01.003.

37

Azlan, Norsinnira, and Hiroshi Yamaura. "20204 Study of Feedback Error Learning Control for Underactuated Systems." Proceedings of Conference of Kanto Branch 2009.15 (2009): 149–50. http://dx.doi.org/10.1299/jsmekanto.2009.15.149.

38

Irfan, C. M. Althaff, Karim Ouzzane, Shusaku Nomura, and Yoshimi Fukumura. "211 An Access Control System for E-Learning Management Systems." Proceedings of Conference of Hokuriku-Shinetsu Branch 2010.47 (2010): 59–60. http://dx.doi.org/10.1299/jsmehs.2010.47.59.

39

Opattrakarnkul, Pachara (白家納), and Chung-Neng Huang (黃崇能). "Development of a Driver Assistance System with Deep-Learning-Based Predictive Control" [in Chinese]. 理工研究國際期刊 12, no. 1 (2022): 15–24. http://dx.doi.org/10.53106/222344892022041201002.

Abstract:
Adaptive cruise control (ACC) systems are designed to provide longitudinal assistance to enhance safety and driving comfort by adjusting vehicle velocity to maintain a safe distance between the host vehicle and the preceding vehicle. Generally, using model predictive control (MPC) in ACC systems provides high responsiveness and lower discomfort by solving real-time constrained optimization problems but results in computational load. This paper presents an architecture of deep learning based on model predictive control in ACC systems to avoid real-time optimization problems required by
40

Hein, Daniel, Steffen Limmer, and Thomas A. Runkler. "Interpretable Control by Reinforcement Learning." IFAC-PapersOnLine 53, no. 2 (2020): 8082–89. http://dx.doi.org/10.1016/j.ifacol.2020.12.2277.

41

Kurata, K. "Greenhouse Control by Machine Learning." Acta Horticulturae, no. 230 (September 1988): 195–200. http://dx.doi.org/10.17660/actahortic.1988.230.23.

42

Žáková, Katarína, and Richard Balogh. "Control Engineering Home Learning Kit." IFAC-PapersOnLine 55, no. 4 (2022): 310–15. http://dx.doi.org/10.1016/j.ifacol.2022.06.051.

43

Yuan, Xu, Lucian Buşoniu, and Robert Babuška. "Reinforcement Learning for Elevator Control." IFAC Proceedings Volumes 41, no. 2 (2008): 2212–17. http://dx.doi.org/10.3182/20080706-5-kr-1001.00373.

44

De Loo, I. "Management control bij action learning." Maandblad Voor Accountancy en Bedrijfseconomie 77, no. 10 (2003): 445–52. http://dx.doi.org/10.5117/mab.77.11785.

Abstract:
In contrast to the 1960s and 1970s, evaluations of action learning programmes increasingly find that while they lead to personal growth, they no longer lead to organizational growth. This article argues that one of the main causes is that no specific role has been assigned to management control systems in action learning. If, however, forms of 'trial and error' and 'intuitive control' were applied, it is quite conceivable that organizational growth would again be realized, provided that action l
45

Benavent, Christophe. "CRM, Learning and Organizational Control." JISTEM Journal of Information Systems and Technology Management 3, no. 2 (2006): 193–210. http://dx.doi.org/10.4301/s1807-17752006000200006.

46

Layne, Jeffery R., and Kevin M. Passino. "Fuzzy Model Reference Learning Control." Journal of Intelligent and Fuzzy Systems 4, no. 1 (1996): 33–47. http://dx.doi.org/10.3233/ifs-1996-4103.

47

Horowitz, Roberto. "Learning Control Applications to Mechatronics." JSME international journal. Ser. C, Dynamics, control, robotics, design and manufacturing 37, no. 3 (1994): 421–30. http://dx.doi.org/10.1299/jsmec1993.37.421.

48

Jiménez, E., and M. Rodríguez. "Web-Based Process Control Learning." IFAC Proceedings Volumes 39, no. 6 (2006): 349–54. http://dx.doi.org/10.3182/20060621-3-es-2905.00061.

49

Holder, Tim. "Motor Control, Learning and Development." Journal of Sports Sciences 26, no. 12 (2008): 1375–76. http://dx.doi.org/10.1080/02640410802271547.

50

Suganuma, Y., and M. Ito. "Learning in movement and control." IEEE Transactions on Systems, Man, and Cybernetics 19, no. 2 (1989): 258–70. http://dx.doi.org/10.1109/21.31031.
