
Dissertations / Theses on the topic 'Learning and control'



Consult the top 50 dissertations / theses for your research on the topic 'Learning and control.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Stendal, Ludvig. "Learning about process control." Doctoral thesis, Norwegian University of Science and Technology, Faculty of Social Sciences and Technology Management, 2003. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-195.

Full text
Abstract:
The research site has been the Södra Cell Tofte pulp mill. The main focus in this thesis is how to learn about process control. The need for research on this theme is given implicitly in the foundation and construction of the INPRO programme. Norwegian engineering education is discipline oriented, and the INPRO programme aimed at integrating the three disciplines engineering cybernetics, chemical engineering, and organisation and work life science in a single PhD programme. One goal was to produce knowledge of modern production in chemical process plants based on socio-technical thinking.
APA, Harvard, Vancouver, ISO, and other styles
2

Townley, Tracy Yvette. "Predictive iterative learning control." Thesis, University of Exeter, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.246383.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Munde, Gurubachan. "Adaptive iterative learning control." Thesis, University of Exeter, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.390139.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Wallén, Johanna. "Estimation-based iterative learning control." Doctoral thesis, Linköpings universitet, Reglerteknik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-64017.

Full text
Abstract:
In many applications industrial robots perform the same motion repeatedly. One way of compensating for the repetitive part of the error is by using iterative learning control (ILC). The ILC algorithm makes use of the measured errors and iteratively calculates a correction signal that is applied to the system. The main topic of the thesis is to apply an ILC algorithm to a dynamic system where the controlled variable is not measured. A remedy for handling this difficulty is to use additional sensors in combination with signal processing algorithms to obtain estimates of the controlled variable…
APA, Harvard, Vancouver, ISO, and other styles
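The trial-to-trial update summarised in the abstract above (measure the error on one repetition, feed a correction into the next) can be illustrated with a minimal P-type ILC loop. The first-order plant, learning gain, and horizon below are hypothetical illustration choices, not the estimation-based algorithm of the thesis:

```python
import numpy as np

# Minimal P-type iterative learning control (ILC) sketch.
# Hypothetical plant and gain, chosen only to illustrate the
# update law u_{k+1}(t) = u_k(t) + L * e_k(t).

def plant(u, a=0.2):
    """Discrete plant x[t+1] = a*x[t] + u[t]; returns outputs y[1..T]."""
    x, y = 0.0, []
    for ut in u:
        x = a * x + ut
        y.append(x)
    return np.array(y)

T = 50
reference = np.sin(np.linspace(0, np.pi, T))  # same desired motion every trial
u = np.zeros(T)                               # initial feedforward input
L = 0.5                                       # learning gain

for k in range(30):                           # repeated trials
    e = reference - plant(u)                  # tracking error of trial k
    u = u + L * e                             # ILC update: u_{k+1} = u_k + L*e_k

print(np.max(np.abs(reference - plant(u))))   # residual error after learning
```

With these values the iteration map on the error is a contraction, so the residual tracking error shrinks geometrically over trials; real ILC designs must also account for model uncertainty and non-repeating disturbances.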
5

Gaskett, Chris. "Q-Learning for Robot Control." The Australian National University, Research School of Information Sciences and Engineering, 2002. http://thesis.anu.edu.au./public/adt-ANU20041108.192425.

Full text
Abstract:
Q-Learning is a method for solving reinforcement learning problems. Reinforcement learning problems require improvement of behaviour based on received rewards. Q-Learning has the potential to reduce robot programming effort and increase the range of robot abilities. However, most current Q-learning systems are not suitable for robotics problems: they treat continuous variables, for example speeds or positions, as discretised values. Discretisation does not allow smooth control and does not fully exploit sensed information. A practical algorithm must also cope with real-time constraints, sensing…
APA, Harvard, Vancouver, ISO, and other styles
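The discretised update that the abstract contrasts with continuous-state control is standard tabular Q-learning. A minimal sketch on a toy problem (the 5-state chain task, reward, and learning parameters below are hypothetical, not from the thesis):

```python
import random

# Tabular Q-learning on a hypothetical 5-state chain:
# actions 0 = left, 1 = right; reward 1.0 for reaching the last state.
# A uniformly random behaviour policy suffices because Q-learning is
# off-policy: it still learns the greedy-optimal value function.

random.seed(0)
N, ACTIONS = 5, (0, 1)
alpha, gamma = 0.5, 0.9
Q = [[0.0, 0.0] for _ in range(N)]

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(N - 1, s + 1)
    return s2, (1.0 if s2 == N - 1 else 0.0)

for _ in range(500):                  # episodes
    s = 0
    while s != N - 1:
        a = random.choice(ACTIONS)    # random exploration (off-policy)
        s2, r = step(s, a)
        # core update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

policy = [max(ACTIONS, key=lambda a: Q[s][a]) for s in range(N - 1)]
print(policy)  # greedy policy: move right from every non-terminal state
```

This is exactly the setting the abstract criticises: states and actions must be enumerable, which is why continuous variables such as speeds or positions force a discretisation step.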
6

Cleland, Benjamin George. "Reinforcement Learning for Racecar Control." The University of Waikato, 2006. http://hdl.handle.net/10289/2507.

Full text
Abstract:
This thesis investigates the use of reinforcement learning to learn to drive a racecar in the simulated environment of the Robot Automobile Racing Simulator. Real-life race driving is known to be difficult for humans, and expert human drivers use complex sequences of actions. There are a large number of variables, some of which change stochastically and all of which may affect the outcome. This makes driving a promising domain for testing and developing Machine Learning techniques that have the potential to be robust enough to work in the real world. Therefore the principles of the algorithm…
APA, Harvard, Vancouver, ISO, and other styles
7

Turnham, Edward James Anthony. "Meta-learning in sensorimotor control." Thesis, University of Cambridge, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.610592.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Jackson, Carl Patrick Thomas. "Motor learning and predictive control." Thesis, University of Nottingham, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.519400.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Layne, Jeffery Ray. "Fuzzy model reference learning control." Connect to resource, 1992. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1159541293.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Liu, Bai S. M. Massachusetts Institute of Technology. "Reinforcement learning in network control." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122414.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2019. Cataloged from PDF version of thesis. Includes bibliographical references (pages 59-91). With the rapid growth of information technology, network systems have become increasingly complex. In particular, designing network control policies requires knowledge of underlying network dynamics, which are often unknown and need to be learned. Existing reinforcement learning methods such as Q-Learning, Actor-Critic, etc. are heuristic and do not offer performance guarantees. In contrast, mode…
APA, Harvard, Vancouver, ISO, and other styles
11

Gaskett, Chris. "Q-Learning for robot control." View thesis entry in Australian Digital Theses Program, 2002. http://eprints.jcu.edu.au/623/1/gaskettthesis.pdf.

Full text
Abstract:
Q-Learning is a method for solving reinforcement learning problems. Reinforcement learning problems require improvement of behaviour based on received rewards. Q-Learning has the potential to reduce robot programming effort and increase the range of robot abilities. However, most current Q-learning systems are not suitable for robotics problems: they treat continuous variables, for example speeds or positions, as discretised values. Discretisation does not allow smooth control and does not fully exploit sensed information. A practical algorithm must also cope with real-time constraints, sensing…
APA, Harvard, Vancouver, ISO, and other styles
12

Desimone, Roberto V. "Learning control knowledge within an explanation-based learning framework." Thesis, University of Edinburgh, 1989. http://hdl.handle.net/1842/18827.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Gaudio, Joseph Emilio. "Fast learning and adaptation in control and machine learning." Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/127050.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, May, 2020. Cataloged from the official PDF of thesis. Includes bibliographical references (pages 249-264). As machine learning methods become more prevalent in society, problems of a dynamical nature will increasingly need to be considered, especially in the interactions of learning-based algorithms with the physical world. The dynamical nature of these problems may include regressors which are time-varying, necessitating new algorithms in machine learning approaches as well as real-time decisi…
APA, Harvard, Vancouver, ISO, and other styles
14

Parisi, Aaron Thomas. "An Application of Sliding Mode Control to Model-Based Reinforcement Learning." DigitalCommons@CalPoly, 2019. https://digitalcommons.calpoly.edu/theses/2054.

Full text
Abstract:
The state-of-the-art model-free reinforcement learning algorithms can generate admissible controls for complicated systems with no prior knowledge of the system dynamics, so long as sufficient samples (oftentimes millions) are available from the environment. On the other hand, model-based reinforcement learning approaches seek to leverage known optimal or robust control for reinforcement learning tasks by modelling the system dynamics and applying well-established control algorithms to the system model. Sliding-mode controllers are robust to system disturbance and modelling errors, and have…
APA, Harvard, Vancouver, ISO, and other styles
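The robustness property the abstract attributes to sliding-mode controllers can be illustrated on a textbook double integrator. The plant, disturbance bound, and gains below are hypothetical illustration choices, not the controller developed in the thesis:

```python
import math

# Sliding-mode control sketch for a double integrator x'' = u + d(t)
# with a bounded unknown disturbance. Hypothetical gains; illustrates
# why the switching term rejects any disturbance with |d| < K.

dt, lam, K = 0.001, 1.0, 1.0        # step size, surface slope, switching gain
x, v = 1.0, 0.0                      # initial position and velocity

for i in range(10000):               # simulate 10 s with Euler steps
    t = i * dt
    d = 0.3 * math.sin(t)            # unknown disturbance, |d| <= 0.3 < K
    s = v + lam * x                  # sliding surface s = v + lam*x
    u = -lam * v - K * (1 if s > 0 else -1)  # drive s to zero in finite time
    v += (u + d) * dt
    x += v * dt

print(abs(x))  # position regulated near zero despite the disturbance
```

Once the state reaches the surface s = 0, the closed loop behaves like x' = -lam*x regardless of d, which is the model-error robustness the abstract refers to; the cost is the high-frequency chattering visible in the switching term.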
15

Hosseinkhan-Boucher, Rémy. "On Learning-Based Control of Dynamical Systems." Electronic Thesis or Diss., université Paris-Saclay, 2025. http://www.theses.fr/2025UPASG029.

Full text
Abstract:
Environmental imperatives have sparked renewed interest in research on fluid flow control to reduce energy consumption and emissions in applications such as aeronautics and the automotive industry. Fluid control strategies can optimise a system in real time by exploiting sensor measurements and physical models. These strategies aim to manipulate a system's behaviour to reach a desired state (stability, performance, energy consumption). At the same time, the development of approaches…
APA, Harvard, Vancouver, ISO, and other styles
16

Amann, Notker. "Optimal algorithms for iterative learning control." Thesis, University of Exeter, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.337751.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

McAllister, Rowan. "Bayesian learning for data-efficient control." Thesis, University of Cambridge, 2017. https://www.repository.cam.ac.uk/handle/1810/269779.

Full text
Abstract:
Applications to learn control of unfamiliar dynamical systems with increasing autonomy are ubiquitous. From robotics, to finance, to industrial processing, autonomous learning helps obviate a heavy reliance on experts for system identification and controller design. Often real world systems are nonlinear, stochastic, and expensive to operate (e.g. slow, energy intensive, prone to wear and tear). Ideally therefore, nonlinear systems can be identified with minimal system interaction. This thesis considers data efficient autonomous learning of control of nonlinear, stochastic systems. Data effici…
APA, Harvard, Vancouver, ISO, and other styles
18

Howard, Matthew. "Learning control policies from constrained motion." Thesis, University of Edinburgh, 2009. http://hdl.handle.net/1842/3972.

Full text
Abstract:
Many everyday human skills can be framed in terms of performing some task subject to constraints imposed by the task or the environment. Constraints are usually unobservable and frequently change between contexts. In this thesis, we explore the problem of learning control policies from data containing variable, dynamic and non-linear constraints on motion. We show that an effective approach for doing this is to learn the unconstrained policy in a way that is consistent with the constraints. We propose several novel algorithms for extracting these policies from movement data, where observations…
APA, Harvard, Vancouver, ISO, and other styles
19

Mohamed, S. S. "Iterative learning control of multivariable plants." Thesis, University of Salford, 1992. http://usir.salford.ac.uk/2148/.

Full text
Abstract:
In recent years, many researchers have proposed different iterative learning controllers, which unfortunately mostly require that the plants under control be regular. Therefore, in order to remove this limitation, various analogue and digital iterative learning controllers are proposed in this thesis. Indeed, it is shown that analogue iterative learning controllers can be designed for plants with any order of irregularity using initial state shifting or initial impulsive action. However, such analogue controllers have to be digitalised for purposes of implementation. In addition, in the synthes…
APA, Harvard, Vancouver, ISO, and other styles
20

Hatzikos, Vasilis E. "Genetic algorithms into iterative learning control." Thesis, University of Sheffield, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.408314.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Feng, Kairui. "Parameter optimisation in iterative learning control." Thesis, University of Sheffield, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.401237.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Goran, Alan. "Reinforcement Learning for Uplink Power Control." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-246043.

Full text
Abstract:
Uplink power control is a resource management function that controls the signal's transmit power from a user device, i.e. a mobile phone, to a base-station tower. It is used to maximize the data-rates while reducing the generated interference. Reinforcement learning is a powerful learning technique that has the capability not only to teach an artificial agent how to act, but also to create the possibility for the agent to learn through its own experiences by interacting with an environment. In this thesis we have applied reinforcement learning to uplink power control, enabling an intelligent software agent…
APA, Harvard, Vancouver, ISO, and other styles
23

Millington, Peter J. (Peter John). "Associative reinforcement learning for optimal control." Thesis, Massachusetts Institute of Technology, 1991. http://hdl.handle.net/1721.1/13830.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Sanzida, Nahid. "Iterative learning control of crystallisation systems." Thesis, Loughborough University, 2014. https://dspace.lboro.ac.uk/2134/14981.

Full text
Abstract:
Under the increasing pressure of issues like reducing the time to market, managing lower production costs, and improving the flexibility of operation, batch process industries strive towards the production of high-value-added commodities, i.e. specialty chemicals, pharmaceuticals, agricultural, and biotechnology-enabled products. For better design, consistent operation and improved control of batch chemical processes, one cannot ignore the sensing and computational blessings provided by modern sensors, computers, algorithms, and software. In addition, there is a growing demand for modelling and c…
APA, Harvard, Vancouver, ISO, and other styles
25

Abdul-hadi, Omar. "Machine Learning Applications to Robot Control." Thesis, University of California, Berkeley, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10817183.

Full text
Abstract:
Control of robot manipulators can be greatly improved with the use of velocity and torque feedforward control. However, the effectiveness of feedforward control greatly relies on the accuracy of the model. In this study, kinematics and dynamics analysis is performed on a six-axis arm, a Delta2 robot, and a Delta3 robot. Velocity feedforward calculation is performed using the traditional means of using the kinematics solution for velocity. However, a neural network is used to model the torque feedforward equations. For each of these mechanisms, we first solve the forward and inverse kinemat…
APA, Harvard, Vancouver, ISO, and other styles
26

Bjarre, Lukas. "Robust Reinforcement Learning for Quadcopter Control." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-277631.

Full text
Abstract:
Sim-to-Reality transfer in Reinforcement Learning is a promising approach to solving costly exploration in real systems, but it comes with the generalization problem of transferring policies from simulators to real systems. This thesis looks at ideas presented by Robust Markov Decision Processes, which combine ideas from Reinforcement Learning and Robust Control to create agents with embedded uncertainty about the simulated environment, opting for pessimistic optimization in order to handle potential gaps between simulators and reality. These ideas are adapted in order to apply them to a state-of-the-…
APA, Harvard, Vancouver, ISO, and other styles
27

Ishihara, Abraham K. "Feedback error learning in neuromotor control." May be available electronically, 2008. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Gregory, David Alan. "Impulsivity control and self-regulated learning." Available to subscribers only, 2007. http://proquest.umi.com/pqdweb?did=1407688881&sid=1&Fmt=2&clientId=1509&RQT=309&VName=PQD.

Full text
Abstract:
Thesis (Ph. D.)--Southern Illinois University Carbondale, 2007. "Department of Educational Psychology and Special Education." Keywords: Impulsivity control, Self-regulated learning, Achievement. Includes bibliographical references (p. 132-167). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
29

Zhang, Bing. "Experiments in learning control using neural networks." Thesis, University of Strathclyde, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.336066.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Isaac, Andrew Paul (Computer Science & Engineering, Faculty of Engineering, UNSW). "Behavioural cloning: robust goal directed control." Awarded by: University of New South Wales, Computer Science & Engineering, 2009. http://handle.unsw.edu.au/1959.4/43367.

Full text
Abstract:
Behavioural cloning is a simple and effective technique for automatically and non-intrusively producing comprehensible and implementable models of human control skill. Behavioural cloning applies machine learning techniques to behavioural trace data, in a transparent manner, and has been very successful in a wide range of domains. The limitations of early behavioural cloning work are: that the clones lack goal-structure, are not robust to variation, are sensitive to the nature of the training data and often produce complicated models of the control skill. Recent behavioural cloning work has so…
APA, Harvard, Vancouver, ISO, and other styles
31

Lindfors, J. (Juha). "A modern learning environment for Control Engineering." Doctoral thesis, University of Oulu, 2002. http://urn.fi/urn:isbn:951426911X.

Full text
Abstract:
Teaching in the university has been under pressure to change in recent years. On the one hand, there is financial pressure to decrease resources; on the other, there is a need to keep the quality and quantity of education offered high and to give due consideration to changes in technology and learning methods. One response to these pressures has been to study whether it is possible to build a learning environment for Control Engineering that is available to students virtually. It could help to distribute materials and facilitate overall communication, from course information through student f…
APA, Harvard, Vancouver, ISO, and other styles
32

Ma, Yu-xu Lecky (馬裕旭). "Discrete iterative learning control of robotic manipulators." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1991. http://hub.hku.hk/bib/B31232723.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Bradley, Susanne. "Applications of machine learning in sensorimotor control." Thesis, University of British Columbia, 2015. http://hdl.handle.net/2429/54570.

Full text
Abstract:
There have been many recent advances in the simulation of biologically realistic systems, but controlling these systems remains a challenge. In this thesis, we focus on methods for learning to control these systems without prior knowledge of the dynamics of the system or its environment. We present two algorithms. The first, designed for quasistatic systems, combines Gaussian process regression and stochastic gradient descent. By testing on a model of the human mid-face, we show that this combined method gives better control accuracy than either regression or gradient descent alone, and impro…
APA, Harvard, Vancouver, ISO, and other styles
34

Pon, Kumar Steven Spielberg. "Deep reinforcement learning approaches for process control." Thesis, University of British Columbia, 2017. http://hdl.handle.net/2429/63810.

Full text
Abstract:
Conventional and optimization-based controllers have been used in process industries for more than two decades. The application of such controllers to complex systems can be computationally demanding and may require estimation of hidden states. They also require constant tuning, development of a mathematical model (first-principles or empirical), and design of a control law, all of which are tedious. Moreover, they are not adaptive in nature. On the other hand, in recent years there has been significant progress in the fields of computer vision and natural language processing that followed the suc…
APA, Harvard, Vancouver, ISO, and other styles
35

Wang, Yu. "Adaptive control and learning using multiple models." Thesis, Yale University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10783473.

Full text
Abstract:
Adaptation can have different objectives. Compared to a learning behavior, which mainly optimizes the rewards/experience obtained through the learning process, adaptive control is a type of adaptation that follows a specific target guided by a controller. Although the targets may be different, the two types of adaptation share common research interests. One of the popular research techniques for studying adaptation is the use of multiple models, where the system utilizes information from multiple environment observers instead of one to improve the adaptation behavior in ter…
APA, Harvard, Vancouver, ISO, and other styles
36

Norrlöf, Mikael. "Iterative learning control : analysis, design, and experiments /." Linköping : Univ, 2000. http://www.bibl.liu.se/liupubl/disp/disp2000/tek653s.htm.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Solomon, Luiza. "Learning and flow control in optimistic simulation." Thesis, McGill University, 2002. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=29475.

Full text
Abstract:
This thesis has two main contributions. The first contribution is the development of a modular, easy-to-use Time Warp simulation engine targeted towards distributed-memory environments. The second contribution is the analysis and experimental verification of the performance of the flow control algorithm proposed by Choe in a distributed-memory environment. The Time Warp simulation engine TWSIM provides our laboratory with a research medium for Time Warp simulations in a distributed-memory environment such as a network of workstations. The modular design of TWSIM allows for easy integration…
APA, Harvard, Vancouver, ISO, and other styles
38

Desjardins, Charles. "Cooperative Adaptive Cruise Control: A Learning Approach." Thesis, Université Laval, 2009. http://www.theses.ulaval.ca/2009/26048/26048.pdf.

Full text
Abstract:
The increase in the number of vehicles on the roads over recent decades has not come without its share of negative impacts on society. Even though vehicles have played an important role in the economic development of urban regions around the world, they are also responsible for negative impacts on businesses, as the inefficiency of traffic flow causes significant productivity losses every day. Moreover, passenger safety remains problematic, as car accidents are still today among the leading causes of injury and…
APA, Harvard, Vancouver, ISO, and other styles
39

Govindhasamy, J. J. "Learning systems for process identification and control." Thesis, Queen's University Belfast, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.419356.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Moore, Andrew William. "Efficient memory-based learning for robot control." Thesis, University of Cambridge, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.385715.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Watanabe, Yukio. "Learning control of automotive active suspension systems." Thesis, Cranfield University, 1997. http://dspace.lib.cranfield.ac.uk/handle/1826/13865.

Full text
Abstract:
This thesis considers the neural network learning control of a variable-geometry automotive active suspension system which combines most of the benefits of active suspension systems with low energy consumption. Firstly, neural networks are applied to the control of various simplified automotive active suspensions, in order to understand how a neural network controller can be integrated with a physical dynamic system model. In each case considered, the controlled system has a defined objective and the minimisation of a cost function. The neural network is set up in a learning structure, such th…
APA, Harvard, Vancouver, ISO, and other styles
42

Kulkarni, Tejas Dattatraya. "Learning structured representations for perception and control." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/107557.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, 2016. Cataloged from PDF version of thesis. Includes bibliographical references (pages 117-129). I argue that the intersection of deep learning, hierarchical reinforcement learning, and generative models provides a promising avenue towards building agents that learn to produce goal-directed behavior given sensations. I present models and algorithms that learn from raw observations and emphasize minimizing their sample complexity and the number of training steps required for convergence…
APA, Harvard, Vancouver, ISO, and other styles
43

Weintraub, Ben Julian. "Learning control applied to a model helicopter." Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/49921.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Hall, Joseph Alexander. "Machine learning for control : incorporating prior knowledge." Thesis, University of Cambridge, 2013. https://www.repository.cam.ac.uk/handle/1810/283930.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Bradley, Richard Stephen. "The robust stability of iterative learning control." Thesis, University of Southampton, 2010. https://eprints.soton.ac.uk/165393/.

Full text
Abstract:
This thesis examines the notion of the long term robust stability of iterative learning control (ILC) systems engaged in trajectory tracking, using a robust stability theorem based on a biased version of the nonlinear gap metric. This is achieved through two main results: The first concerns the establishment of a nonlinear robust stability theorem, where signals are measured relative to a given trajectory. Although primarily motivated by ILC, the theorem provided is applicable to a wider range of problems. This is due to its development being made independently of any particular signal space…
APA, Harvard, Vancouver, ISO, and other styles
46

Chen, Yiyang. "Iterative learning control for spatial path tracking." Thesis, University of Southampton, 2017. https://eprints.soton.ac.uk/415865/.

Full text
Abstract:
Iterative learning control (ILC) is a high-performance method for systems operating in a repetitive manner, which aims to improve tracking performance by learning from previous trial information. In recent years research interest has focused on generalizing the task description in order to achieve greater performance and flexibility. In particular, researchers have addressed the case of tracking only at a single, or a collection of, time instants. However, there still remain substantial open problems, such as the choice of the time instants, the need for system constraint handling, and the abil…
APA, Harvard, Vancouver, ISO, and other styles
47

Bengtsson, Ivar. "Autonomous Overtaking with Learning Model Predictive Control." Thesis, KTH, Optimeringslära och systemteori, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-276691.

Full text
Abstract:
We review recent research into trajectory planning for autonomous overtaking to understand existing challenges. Then, the recently developed framework Learning Model Predictive Control (LMPC) is presented as a suitable method to iteratively improve an overtaking manoeuvre each time it is performed. We present recent extensions to the LMPC framework to make it applicable to overtaking. Furthermore, we also present two alternative modelling approaches with the intention of reducing the computational complexity of the optimization problems solved by the controller. All proposed frameworks are built f…
APA, Harvard, Vancouver, ISO, and other styles
48

Jennings, Alan Lance. "Autonomous Motion Learning for Near Optimal Control." University of Dayton / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1344016631.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Attebo, Edvin. "Safe learning and control in complex systems." Thesis, Umeå universitet, Institutionen för fysik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-178164.

Full text
Abstract:
When autonomously controlling physical objects, a deviation from a trajectory can lead to unwanted impacts, which can be very expensive or even dangerous. The deviation may be due to uncertainties, either from disturbance or model mismatch. One way to deal with these types of uncertainties is to design a robust control system, which creates margins for errors in the system. These margins make the system safe but also lower the performance, hence it is desirable to have the margins as small as possible and still make the system safe. One way to reduce the margins is to add a learning strategy to th…
APA, Harvard, Vancouver, ISO, and other styles
50

Salaün, Camille. "Learning models to control redundancy in robotics." Paris 6, 2010. http://www.theses.fr/2010PA066238.

Full text
Abstract:
Service robotics is an emerging field in which robots must be controlled in strong interaction with their environment. This work presents an adaptive control method combining the learning of physical models with operational-space control of redundant robots. The kinematic models are learned either by differentiating learned geometric models or by direct learning. These kinematic models, also called Jacobian matrices, can be used to compute pseudo-inverses or projectors for the control…
APA, Harvard, Vancouver, ISO, and other styles