A selection of scholarly literature on the topic "Learning and control"


Consult the lists of relevant articles, books, dissertations, conference papers, and other scholarly sources on the topic "Learning and control".

Next to every entry in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a scholarly publication in .pdf format and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Learning and control"

1

Hewing, Lukas, Kim P. Wabersich, Marcel Menner, and Melanie N. Zeilinger. "Learning-Based Model Predictive Control: Toward Safe Learning in Control." Annual Review of Control, Robotics, and Autonomous Systems 3, no. 1 (2020): 269–96. http://dx.doi.org/10.1146/annurev-control-090419-075625.

Full text of the source
Abstract:
Recent successes in the field of machine learning, as well as the availability of increased sensing and computational capabilities in modern control systems, have led to a growing interest in learning and data-driven control techniques. Model predictive control (MPC), as the prime methodology for constrained control, offers a significant opportunity to exploit the abundance of data in a reliable manner, particularly while taking safety constraints into account. This review aims at summarizing and categorizing previous research on learning-based MPC, i.e., the integration or combination of MPC […]
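The "constrained control" the abstract refers to can be illustrated with a toy, horizon-one receding-horizon controller. This is a minimal sketch under simplified assumptions (scalar system, quadratic cost, input box constraint; the names `mpc_step`, `a`, `b`, `rho` are illustrative), not code from the cited article:

```python
import numpy as np

# Toy horizon-one "MPC" for the scalar system x[t+1] = a*x[t] + b*u[t]:
# minimise (a*x + b*u)**2 + rho*u**2 subject to |u| <= u_max.
# The unconstrained minimiser is clipped to the input box, which is the
# simplest way a (safety) constraint enters the control law.
def mpc_step(x, a=1.2, b=1.0, rho=0.1, u_max=1.0):
    u_unconstrained = -a * b * x / (b * b + rho)
    return float(np.clip(u_unconstrained, -u_max, u_max))

x = 3.0
for _ in range(30):                  # receding horizon: re-solve at every step
    x = 1.2 * x + 1.0 * mpc_step(x)  # the open-loop plant (a = 1.2) is unstable
```

Even with the input saturated at first, the re-solved controller drives the unstable state toward zero once the constraint becomes inactive.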
2

Chiuso, A., and G. Pillonetto. "System Identification: A Machine Learning Perspective." Annual Review of Control, Robotics, and Autonomous Systems 2, no. 1 (2019): 281–304. http://dx.doi.org/10.1146/annurev-control-053018-023744.

Full text of the source
Abstract:
Estimation of functions from sparse and noisy data is a central theme in machine learning. In the last few years, many algorithms have been developed that exploit Tikhonov regularization theory and reproducing kernel Hilbert spaces. These are the so-called kernel-based methods, which include powerful approaches like regularization networks, support vector machines, and Gaussian regression. Recently, these techniques have also gained popularity in the system identification community. In both linear and nonlinear settings, kernels that incorporate information on dynamic systems […]
3

Antsaklis, P. J. "Intelligent Learning Control." IEEE Control Systems 15, no. 3 (1995): 5–7. http://dx.doi.org/10.1109/mcs.1995.594467.

Full text of the source
4

Ali, S. Nageeb. "Learning Self-Control." Quarterly Journal of Economics 126, no. 2 (2011): 857–93. http://dx.doi.org/10.1093/qje/qjr014.

Full text of the source
5

Barto, Andrew G. "Reinforcement learning control." Current Opinion in Neurobiology 4, no. 6 (1994): 888–93. http://dx.doi.org/10.1016/0959-4388(94)90138-4.

Full text of the source
6

Matsubara, Takamitsu. "Learning Control Policies by Reinforcement Learning." Journal of the Robotics Society of Japan 36, no. 9 (2018): 597–600. http://dx.doi.org/10.7210/jrsj.36.597.

Full text of the source
7

Dang, Ngoc Trung, and Phuong Nam Dao. "Data-Driven Reinforcement Learning Control for Quadrotor Systems." International Journal of Mechanical Engineering and Robotics Research 13, no. 5 (2024): 495–501. http://dx.doi.org/10.18178/ijmerr.13.5.495-501.

Full text of the source
Abstract:
This paper aims to solve the tracking problem and optimality effectiveness of an Unmanned Aerial Vehicle (UAV) by model-free data Reinforcement Learning (RL) algorithms in both sub-systems of attitude and position. First, a cascade UAV model structure is given to establish the control system diagram with two corresponding attitude and position control loops. Second, based on the computation of the time derivative of the Bellman function by two different methods, the combination of the Bellman function and the optimal control is adopted to maintain the control signal […]
8

Freeman, Chris, and Ying Tan. "Iterative learning control and repetitive control." International Journal of Control 84, no. 7 (2011): 1193–95. http://dx.doi.org/10.1080/00207179.2011.596574.

Full text of the source
9

Recht, Benjamin. "A Tour of Reinforcement Learning: The View from Continuous Control." Annual Review of Control, Robotics, and Autonomous Systems 2, no. 1 (2019): 253–79. http://dx.doi.org/10.1146/annurev-control-053018-023825.

Full text of the source
Abstract:
This article surveys reinforcement learning from the perspective of optimization and control, with a focus on continuous control applications. It reviews the general formulation, terminology, and typical experimental implementations of reinforcement learning as well as competing solution paradigms. In order to compare the relative merits of various techniques, it presents a case study of the linear quadratic regulator (LQR) with unknown dynamics, perhaps the simplest and best-studied problem in optimal control. It also describes how merging techniques from learning theory and control […]
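The LQR baseline this abstract mentions has a compact classical solution. A minimal sketch of the standard discrete-time construction via Riccati fixed-point iteration (the helper `lqr_gain` and the double-integrator example are illustrative choices, not code from the article):

```python
import numpy as np

# Discrete-time LQR for x[t+1] = A x[t] + B u[t] with stage cost
# x'Qx + u'Ru: iterate the Riccati equation to a fixed point P, then
# the optimal state feedback is u = -K x.
def lqr_gain(A, B, Q, R, iters=500):
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

A = np.array([[1.0, 1.0], [0.0, 1.0]])   # double integrator (known dynamics)
B = np.array([[0.0], [1.0]])
K = lqr_gain(A, B, np.eye(2), np.eye(1))
```

The review's case study then asks what happens when `A` and `B` are *unknown* and must be learned from data; this sketch is the known-dynamics reference point.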
10

Ravichandar, Harish, Athanasios S. Polydoros, Sonia Chernova, and Aude Billard. "Recent Advances in Robot Learning from Demonstration." Annual Review of Control, Robotics, and Autonomous Systems 3, no. 1 (2020): 297–330. http://dx.doi.org/10.1146/annurev-control-100819-063206.

Full text of the source
Abstract:
In the context of robotics and automation, learning from demonstration (LfD) is the paradigm in which robots acquire new skills by learning to imitate an expert. The choice of LfD over other robot learning methods is compelling when ideal behavior can be neither easily scripted (as is done in traditional robot programming) nor easily defined as an optimization problem, but can be demonstrated. While there have been multiple surveys of this field in the past, there is a need for a new one given the considerable growth in the number of publications in recent years. This review aims to provide […]
More sources

Dissertations on the topic "Learning and control"

1

Stendal, Ludvig. "Learning about process control." Doctoral thesis, Norwegian University of Science and Technology, Faculty of Social Sciences and Technology Management, 2003. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-195.

Full text of the source
Abstract:
The research site has been the Södra Cell Tofte pulp mill. The main focus in this thesis is how to learn about process control. The need for research on this theme is given implicitly in the foundation and construction of the INPRO programme. Norwegian engineering education is discipline oriented, and the INPRO programme aimed at integrating the three disciplines engineering cybernetics, chemical engineering, and organisation and work life science in a single PhD programme. One goal was to produce knowledge of modern production in chemical process plants based on socio-technical thinking.
2

Townley, Tracy Yvette. "Predictive iterative learning control." Thesis, University of Exeter, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.246383.

Full text of the source
3

Munde, Gurubachan. "Adaptive iterative learning control." Thesis, University of Exeter, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.390139.

Full text of the source
4

Wallén, Johanna. "Estimation-based iterative learning control." Doctoral thesis, Linköpings universitet, Reglerteknik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-64017.

Full text of the source
Abstract:
In many applications industrial robots perform the same motion repeatedly. One way of compensating the repetitive part of the error is by using iterative learning control (ILC). The ILC algorithm makes use of the measured errors and iteratively calculates a correction signal that is applied to the system. The main topic of the thesis is to apply an ILC algorithm to a dynamic system where the controlled variable is not measured. A remedy for handling this difficulty is to use additional sensors in combination with signal processing algorithms to obtain estimates of the controlled variable […]
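The ILC mechanism summarised in this abstract (run a trial, measure the repeated error, update the feedforward signal for the next trial) can be sketched in its simplest first-order form. An illustrative toy on a static plant, not code from the thesis (the names `run_ilc`, `gamma`, `g` are assumptions):

```python
# First-order ILC on a toy static plant y = g*u with reference r:
#   u[k+1] = u[k] + gamma * e[k],   e[k] = r - y[k]
# is the error measured on trial k. With 0 < g*gamma < 2 the error
# contracts by the factor |1 - g*gamma| on every trial.
def run_ilc(trials=50, gamma=0.5, g=0.8, r=1.0):
    u, errors = 0.0, []
    for _ in range(trials):
        e = r - g * u          # execute the trial, measure the error
        errors.append(abs(e))
        u += gamma * e         # learning update for the next trial
    return errors

errors = run_ilc()
```

Here the plant gain `g` is never identified; the repetition alone drives the error to zero, which is the core ILC idea the abstract describes.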
5

Gaskett, Chris. "Q-Learning for Robot Control." The Australian National University. Research School of Information Sciences and Engineering, 2002. http://thesis.anu.edu.au./public/adt-ANU20041108.192425.

Full text of the source
Abstract:
Q-Learning is a method for solving reinforcement learning problems. Reinforcement learning problems require improvement of behaviour based on received rewards. Q-Learning has the potential to reduce robot programming effort and increase the range of robot abilities. However, most current Q-learning systems are not suitable for robotics problems: they treat continuous variables, for example speeds or positions, as discretised values. Discretisation does not allow smooth control and does not fully exploit sensed information. A practical algorithm must also cope with real-time constraints, sensing […]
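The basic tabular Q-learning update the abstract starts from can be sketched on a tiny discretised toy. Note this is deliberately the discretised setting the thesis criticises (it handles neither continuous states nor continuous actions); the corridor environment and the name `q_learn` are illustrative assumptions:

```python
import random

# Tabular Q-learning on a 5-state corridor with the goal at the right end.
# Update rule: Q(s,a) += alpha * (r + gamma * max_b Q(s',b) - Q(s,a)).
def q_learn(n=5, episodes=2000, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    actions = (-1, 1)                       # step left / step right
    q = {(s, a): 0.0 for s in range(n) for a in actions}
    for _ in range(episodes):
        s = 0
        while s != n - 1:                   # episode ends at the goal
            if rng.random() < eps:          # epsilon-greedy exploration
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda b: q[(s, b)])
            s2 = min(max(s + a, 0), n - 1)
            r = 1.0 if s2 == n - 1 else 0.0
            target = r + gamma * max(q[(s2, b)] for b in actions)
            q[(s, a)] += alpha * (target - q[(s, a)])
            s = s2
    return q

q = q_learn()
```

After training, the greedy policy moves right from every non-goal state; replacing the dictionary over discrete states with a continuous function approximator is exactly the gap the thesis addresses.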
6

Cleland, Benjamin George. "Reinforcement Learning for Racecar Control." The University of Waikato, 2006. http://hdl.handle.net/10289/2507.

Full text of the source
Abstract:
This thesis investigates the use of reinforcement learning to learn to drive a racecar in the simulated environment of the Robot Automobile Racing Simulator. Real-life race driving is known to be difficult for humans, and expert human drivers use complex sequences of actions. There are a large number of variables, some of which change stochastically and all of which may affect the outcome. This makes driving a promising domain for testing and developing Machine Learning techniques that have the potential to be robust enough to work in the real world. Therefore the principles of the algorithm […]
7

Turnham, Edward James Anthony. "Meta-learning in sensorimotor control." Thesis, University of Cambridge, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.610592.

Full text of the source
8

Jackson, Carl Patrick Thomas. "Motor learning and predictive control." Thesis, University of Nottingham, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.519400.

Full text of the source
9

Layne, Jeffery Ray. "Fuzzy model reference learning control." Connect to resource, 1992. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1159541293.

Full text of the source
10

Liu, Bai. "Reinforcement learning in network control." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122414.

Full text of the source
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2019. Cataloged from PDF version of thesis. Includes bibliographical references (pages 59-91). With the rapid growth of information technology, network systems have become increasingly complex. In particular, designing network control policies requires knowledge of underlying network dynamics, which are often unknown and need to be learned. Existing reinforcement learning methods such as Q-Learning, Actor-Critic, etc. are heuristic and do not offer performance guarantees. In contrast […]
More sources

Books on the topic "Learning and control"

1

Ahn, Hyo-Sung, YangQuan Chen, and Kevin L. Moore. Iterative Learning Control. Springer London, 2007. http://dx.doi.org/10.1007/978-1-84628-859-3.

Full text of the source
2

Bien, Zeungnam, and Jian-Xin Xu, eds. Iterative Learning Control. Springer US, 1998. http://dx.doi.org/10.1007/978-1-4615-5629-9.

Full text of the source
3

Chen, Yangquan, and Changyun Wen, eds. Iterative learning control. Springer London, 1999. http://dx.doi.org/10.1007/bfb0110114.

Full text of the source
4

Owens, David H. Iterative Learning Control. Springer London, 2016. http://dx.doi.org/10.1007/978-1-4471-6772-3.

Full text of the source
5

Tresilian, James. Sensorimotor Control and Learning. Macmillan Education UK, 2012. http://dx.doi.org/10.1007/978-1-137-00511-3.

Full text of the source
6

Minton, Steven. Learning Search Control Knowledge. Springer US, 1988. http://dx.doi.org/10.1007/978-1-4613-1703-6.

Full text of the source
7

Latash, Mark L., and Francis Lestienne, eds. Motor Control and Learning. Springer US, 2006. http://dx.doi.org/10.1007/0-387-28287-4.

Full text of the source
8

Shea, Charles H. Motor learning and control. Allyn and Bacon, 1993.

Find the full text of the source
9

Shebilske, Wayne, and Stephen Worchel, eds. Motor learning and control. Prentice Hall, 1993.

Find the full text of the source
10

Chu, Bing, and David H. Owens. Optimal Iterative Learning Control. Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-80236-2.

Full text of the source
More sources

Book chapters on the topic "Learning and control"

1

Westphal, L. C. "Learning control." In Sourcebook of Control Systems Engineering. Springer US, 1995. http://dx.doi.org/10.1007/978-1-4615-1805-1_32.

Full text of the source
2

Yang, Yi, and Furong Gao. "Learning Control." In Computer Modeling for Injection Molding. John Wiley & Sons, Inc., 2013. http://dx.doi.org/10.1002/9781118444887.ch13.

Full text of the source
3

Calinon, Sylvain, and Dongheui Lee. "Learning Control." In Humanoid Robotics: A Reference. Springer Netherlands, 2017. http://dx.doi.org/10.1007/978-94-007-7194-9_68-1.

Full text of the source
4

Calinon, Sylvain, and Dongheui Lee. "Learning Control." In Humanoid Robotics: A Reference. Springer Netherlands, 2018. http://dx.doi.org/10.1007/978-94-007-7194-9_68-2.

Full text of the source
5

Webb, Geoffrey I., Claude Sammut, Claudia Perlich, et al. "Learning Control." In Encyclopedia of Machine Learning. Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-30164-8_450.

Full text of the source
6

Calinon, Sylvain, and Dongheui Lee. "Learning Control." In Humanoid Robotics: A Reference. Springer Netherlands, 2018. http://dx.doi.org/10.1007/978-94-007-6046-2_68.

Full text of the source
7

Szepesvári, Csaba. "Control." In Algorithms for Reinforcement Learning. Springer International Publishing, 2010. http://dx.doi.org/10.1007/978-3-031-01551-9_3.

Full text of the source
8

Paluszek, Michael, and Stephanie Thomas. "Adaptive Control." In MATLAB Machine Learning. Apress, 2016. http://dx.doi.org/10.1007/978-1-4842-2250-8_11.

Full text of the source
9

Rose, Sherri, and Mark J. van der Laan. "Independent Case-Control Studies." In Targeted Learning. Springer New York, 2011. http://dx.doi.org/10.1007/978-1-4419-9782-1_13.

Full text of the source
10

Patan, Krzysztof. "Iterative Learning Control." In Studies in Systems, Decision and Control. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-11869-3_6.

Full text of the source

Conference papers on the topic "Learning and control"

1

Lee, Kyunghyun, Ukcheol Shin, and Byeong-Uk Lee. "Learning to Control Camera Exposure via Reinforcement Learning." In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2024. http://dx.doi.org/10.1109/cvpr52733.2024.00287.

Full text of the source
2

Muthiah, Letchumanan, and Arun K. Tangirala. "Transfer Learning for Iterative Learning Control Using Gaussian Processes." In 2024 Tenth Indian Control Conference (ICC). IEEE, 2024. https://doi.org/10.1109/icc64753.2024.10883738.

Full text of the source
3

Xu, Lijun, Kang Li, Minrui Fei, and Dajun Du. "A new bandwidth scheduling method for networked learning control." In 2012 UKACC International Conference on Control (CONTROL). IEEE, 2012. http://dx.doi.org/10.1109/control.2012.6334594.

Full text of the source
4

Wang, Xuan, and Eric Rogers. "Noncausal finite time interval iterative learning control law design." In 2014 UKACC International Conference on Control (CONTROL). IEEE, 2014. http://dx.doi.org/10.1109/control.2014.6915113.

Full text of the source
5

Chu, Bing, Zhonglun Cai, David H. Owens, Eric Rogers, Chris T. Freeman, and Paul L. Lewin. "Experimental verification of constrained iterative learning control using successive projection." In 2012 UKACC International Conference on Control (CONTROL). IEEE, 2012. http://dx.doi.org/10.1109/control.2012.6334655.

Full text of the source
6

Yang, Zhile, Kang Li, and Lidong Zhang. "Binary teaching-learning based optimization for power system unit commitment." In 2016 UKACC 11th International Conference on Control (CONTROL). IEEE, 2016. http://dx.doi.org/10.1109/control.2016.7737550.

Full text of the source
7

Mitchell, R. J. "Using MATLAB GUIs to improve the learning of frequency response methods." In 2012 UKACC International Conference on Control (CONTROL). IEEE, 2012. http://dx.doi.org/10.1109/control.2012.6334774.

Full text of the source
8

Mitchell, R. I. "A MATLAB GUI for learning controller design in the frequency domain." In 2014 UKACC International Conference on Control (CONTROL). IEEE, 2014. http://dx.doi.org/10.1109/control.2014.6915153.

Full text of the source
9

Postlethwaite, Bruce. "The development of PISim: Software for process control teaching and learning." In 2016 UKACC 11th International Conference on Control (CONTROL). IEEE, 2016. http://dx.doi.org/10.1109/control.2016.7737576.

Full text of the source
10

Jewaratnam, J., J. Zhang, J. Morris, and A. Hussain. "Batch-to-batch iterative learning control using linearised models with adaptive model updating." In 2012 UKACC International Conference on Control (CONTROL). IEEE, 2012. http://dx.doi.org/10.1109/control.2012.6334641.

Full text of the source

Organizational reports on the topic "Learning and control"

1

Safonov, Michael G. Robust Control Feedback and Learning. Defense Technical Information Center, 2002. http://dx.doi.org/10.21236/ada399708.

Full text of the source
2

Kim, Jihie, and Paul S. Rosenbloom. Constraining Learning with Search Control. Defense Technical Information Center, 1993. http://dx.doi.org/10.21236/ada269517.

Full text of the source
3

Chen, Yan, Arnab Bhattacharya, Jing Li, and Draguna Vrabie. Optimal Control by Transfer-Learning. Office of Scientific and Technical Information (OSTI), 2019. http://dx.doi.org/10.2172/1988297.

Full text of the source
4

Feng, Zhili, Wei Zhang, Dali Wang, Jian Chen, and Keerti Kappagantula. Machine Learning for Joint Quality Control. Office of Scientific and Technical Information (OSTI), 2024. http://dx.doi.org/10.2172/2448165.

Full text of the source
5

VanLehn, Kurt, and Randolph M. Jones. Learning Physics Via Explanation-Based Learning of Correctness and Analogical Search Control. Defense Technical Information Center, 1991. http://dx.doi.org/10.21236/ada240775.

Full text of the source
6

Whitney, Paul. Learning from Text: A Cognitive Control Perspective. Defense Technical Information Center, 1992. http://dx.doi.org/10.21236/ada251842.

Full text of the source
7

Seo, Young-Woo, Drew Bagnell, and Katia Sycara. Cost-Sensitive Learning for Confidential Access Control. Defense Technical Information Center, 2005. http://dx.doi.org/10.21236/ada597125.

Full text of the source
8

Hu, Vincent C. Machine Learning for Access Control Policy Verification. National Institute of Standards and Technology, 2021. http://dx.doi.org/10.6028/nist.ir.8360.

Full text of the source
9

Jiang, Zhong-Ping. Cognitive Models for Learning to Control Dynamic Systems. Defense Technical Information Center, 2008. http://dx.doi.org/10.21236/ada487160.

Full text of the source
10

Ren, Liu, Gregory Shakhnarovich, Jessica K. Hodgins, Hanspeter Pfister, and Paul A. Viola. Learning Silhouette Features for Control of Human Motion. Defense Technical Information Center, 2004. http://dx.doi.org/10.21236/ada457871.

Full text of the source