Dissertations / Theses on the topic 'Vision-based motion controls'

Consult the top 15 dissertations / theses for your research on the topic 'Vision-based motion controls.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Henning, Timothy Paul. "Dynamics and controls for an omnidirectional robot." Ohio : Ohio University, 2003. http://www.ohiolink.edu/etd/view.cgi?ohiou1175093596.

2

Reski, Nico. "Change your Perspective : Exploration of a 3D Network created with Open Data in an Immersive Virtual Reality Environment using a Head-mounted Display and Vision-based Motion Controls." Thesis, Linnéuniversitetet, Institutionen för medieteknik (ME), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-46779.

Abstract:
Year after year, technologies evolve at an incredibly rapid pace, becoming faster, more complex, more accurate and more immersive. Looking back just a decade, interaction technologies in particular have made a major leap. Just two years ago, in 2013, after being researched for quite some time, the hype around virtual reality (VR) aroused renewed enthusiasm, finally reaching mainstream attention as so-called head-mounted displays (HMDs), devices worn on the head to grant a visual peek into the virtual world, gained more and more acceptance among end-users. Currently, humans interact with computers in a rather counter-intuitive, two-dimensional way. The ability to experience digital content in the most natural human manner, by simply looking around and perceiving information from the surroundings, has the potential to be a major game changer in how we perceive and eventually interact with digital information. However, this confronts designers and developers with new challenges in applying these exciting technologies and in supporting interaction mechanisms for naturally exploring digital information in the virtual world, ultimately overcoming real-world boundaries. Within the virtual world, the only limit is our imagination. This thesis investigates an approach to naturally interacting with and exploring information based on open data within an immersive virtual reality environment using a head-mounted display and vision-based motion controls. For this purpose, an immersive VR application visualizing information as a network of European capital cities has been implemented, offering interaction through gesture input. The application focuses on the exploration of the generated network and the consumption of the displayed information. A user interaction study with eleven participants investigated their acceptance of the developed prototype, estimating their workload and examining their explorative behaviour, while an additional dialog with five experts, in the form of explorative discussions, provided further feedback on the prototype's design and concept. The results indicate the participants' enthusiasm and excitement towards the novelty and intuitiveness of exploring information in a less traditional way, while challenging them with the applied interface and interaction design in a positive manner. The design and concept were also accepted by the experts, who valued the idea and implementation, provided constructive feedback on the visualization of the information, and encouraged being even bolder in making use of the available 3D environment. Finally, the thesis discusses these findings and proposes recommendations for future work.
3

Sivilli, Robert. "Vision-Based Testbeds for Control System Applications." Master's thesis, University of Central Florida, 2012. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5504.

Abstract:
In the field of control systems, testbeds are a pivotal step in the validation and improvement of new algorithms for different applications. They provide a safe, controlled environment that typically has a significantly lower cost of failure than the final application. Vision systems provide nonintrusive methods of measurement that can be easily implemented for various setups and applications. This work presents methods for modeling, removing distortion from, calibrating, and rectifying single- and two-camera systems, as well as two very different applications of vision-based control system testbeds: deflection control of shape memory polymers and trajectory planning for mobile robots. First, a testbed for the modeling and control of shape memory polymers (SMPs) is designed. Red-green-blue (RGB) thresholding is used to assist in the webcam-based 3D reconstruction of points of interest. A PID-based controller is designed and shown to work with SMP samples, while state-space models are identified from step-input responses. The models are used to develop a linear quadratic regulator that is shown to work in simulation. A simple-to-use graphical interface is also designed for fast and simple testing of a series of samples. Second, a robot testbed is designed to test new trajectory-planning algorithms. A template-based predictive search algorithm is investigated to process the images obtained through a low-cost webcam vision system, which is used to monitor the testbed environment. A user-friendly graphical interface is developed such that the functionalities of the webcam, robots, and optimizations are automated. The testbeds are used to demonstrate a wavefront-enhanced, B-spline augmented virtual motion camouflage algorithm for single or multiple robots navigating an obstacle-dense and changing environment, while considering inter-vehicle conflicts, obstacle avoidance, nonlinear dynamics, and different constraints. It is also expected that this testbed can be used to test other vehicle motion planning and control algorithms.
M.S.A.E. (Masters), Mechanical and Aerospace Engineering, Engineering and Computer Science; Aerospace Engineering; Space Systems Design and Engineering
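
The deflection-control testbed pairs RGB thresholding with a PID loop. A minimal sketch of that pattern follows; the color bounds, gains, and camera interface are illustrative assumptions, not values from the thesis.

```python
import cv2
import numpy as np

def marker_centroid(frame_bgr, lower=(0, 0, 150), upper=(80, 80, 255)):
    """Locate a marker by color thresholding; the red-ish bounds are assumed."""
    mask = cv2.inRange(frame_bgr, np.array(lower), np.array(upper))
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None  # marker not visible
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])  # pixel (x, y)

class PID:
    """Textbook discrete PID; the gains are placeholders, not thesis values."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    def update(self, error):
        self.integral += error * self.dt
        deriv = (error - self.prev_err) / self.dt
        self.prev_err = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv
```

In a loop, the measured deflection (marker position against a reference) would feed PID.update, whose output drives the actuator.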
4

Hoff, Rein. "The aeroplane spin motion and an investigation into factors affecting the aeroplane spin." Thesis, Brunel University, 2014. http://bura.brunel.ac.uk/handle/2438/10537.

Abstract:
A review of aeroplane spin literature is presented, including early spin research history and lessons learned from spinning trials. Despite many years of experience in spinning evaluation, it remains difficult to predict spin characteristics; problems have been encountered and several prototype aeroplanes have been lost. No currently published method will reliably predict an aeroplane's spin recovery characteristics. Quantitative data are required to study the spin motion of the aeroplane in adequate detail. An alternative method, vision-based state estimation, has been used to capture the spin motion. This method has produced unique illustrations of the spinning research aeroplane and yielded data that would be very challenging to obtain using traditional methods. To investigate the aerodynamic flow over a spinning aeroplane, flights have been flown with wool tufts on the wing, aft fuselage and empennage for flow visualization. To complement the tuft observations, the differential pressure between the upper and lower surfaces of the horizontal tail and wing has been measured at selected points. The tufts indicate that a large-scale upper surface vortex (USV) forms on the outside wing; this USV has also been visualized using a smoke source. The flow structures on top of both wings, and on top of the horizontal tail surfaces, have also been studied on another aeroplane model. The development of these rotational flow effects has been related to the spin motion. It is hypothesized that the flow structure of the turbulent boundary layer on the outside upper wing surface is due to additional accelerations induced by the rotational motion of the aeroplane. The dynamic effects are discussed and their importance for the development of the spin considered. In addition, it is suggested that a further dynamic effect might exist, arising from the additional acceleration of the turbulent boundary layer caused by the rotational motion of the aeroplane. It is recommended that future spin recovery prediction methods account for dynamic effects, in addition to aerodynamic control effectiveness and aeroplane inertia, since the spin entry phase is important for the subsequent development of the spin. Finally, suggestions for future research are given.
5

Sabiron, Guillaume. "Synthèse d’une solution GNC basée sur des capteurs de flux optique bio-inspirés adaptés à la mesure des basses vitesses pour un atterrissage lunaire autonome en douceur." Thesis, Toulouse, ISAE, 2014. http://www.theses.fr/2014ESAE0038/document.

Abstract:
In this PhD thesis, the challenge of autonomous lunar landing is addressed and an innovative method is developed that provides an alternative to classical sensor suites based on RADAR, LIDAR and cameras, which tend to be bulky, energy-consuming and expensive. The first part is devoted to the development of motion sensors inspired by the fly's visual sensitivity to optic flow (OF). The OF is an index giving the relative angular velocity of the environment sensed by the retina of a moving insect or robot. In a fixed environment (where there is no external motion), the self-motion of an airborne vehicle generates an OF containing information about its own velocity and attitude and the distance to obstacles. Based on the "time of travel" principle, results are presented for two versions of optic flow sensors built from five local motion sensors (LMSs). The first accurately measures the OF in two opposite directions; it was tested in the laboratory and gave satisfying results. The second operates at the low velocities liable to occur during a lunar landing. After development, the sensors' performances were characterized both indoors and outdoors, and they were finally tested on board an 80-kg helicopter flying in an outdoor environment. The Guidance, Navigation and Control (GNC) system designed in the second part rests on several algorithms drawing on optimal control, nonlinear control design and observation theory. The approach is particularly innovative in that it achieves soft landing on the basis of OF measurements while relying as little as possible on inertial sensors. The final constraints imposed by the industrial partners were met by mounting several non-gimbaled sensors, oriented in different gaze directions, on the lander's structure. Information about the lander's self-motion present in the OF measurements is extracted by navigation algorithms, which yield estimates of the ventral OF, the expansion OF and the pitch angle. It is also established that the planetary lander can be brought gently to the ground by tracking a pre-computed reference trajectory that is optimal in terms of fuel consumption. Software-in-the-loop simulations were carried out to assess the potential of the proposed GNC approach; the sensor firmware was taken into account and virtual images of the lunar surface were used to improve the realism of the simulated landings.
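
The "time of travel" principle derives the optic flow from the delay a contrast feature takes to cross two neighboring photoreceptors. The sketch below illustrates the idea only; the inter-receptor angle, threshold, and signal format are assumptions, not the sensors' actual firmware.

```python
import numpy as np

def time_of_travel_of(sig_a, sig_b, dt, delta_phi_deg=4.0, threshold=0.5):
    """Optic flow (deg/s) from the delay between two photoreceptor signals.

    sig_a, sig_b: sampled outputs of two neighboring photoreceptors
    dt: sampling period (s); delta_phi_deg: inter-receptor angle (assumed)
    """
    def first_crossing(sig):
        s = np.asarray(sig)
        idx = int(np.argmax(s > threshold))
        return idx if s[idx] > threshold else None

    ia, ib = first_crossing(sig_a), first_crossing(sig_b)
    if ia is None or ib is None or ia == ib:
        return None  # no usable contrast transition
    delta_t = (ib - ia) * dt        # time of travel between the two receptors
    return delta_phi_deg / delta_t  # signed OF magnitude
```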
6

Nguyen, Van-Truong. "Vision-Based Compliance Motion Control of Robots." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/458a77.

Abstract:
Master's thesis, National Taipei University of Technology, Department of Electrical Engineering, ROC year 98 (2009).
This thesis presents vision-based compliance control of robots, including two dual-arm mobile robots and an industrial manipulator. Two approaches to compliance motion control have been accomplished. The first addresses an object-grasping task performed by two cooperative mobile robots, each equipped with dual arms. The two mobile robots, master and slave, are controlled within a visual intelligent space, with compliant arm motion, to cooperatively move an object to a target. A compliance control strategy without any force sensor is proposed for the arms of the slave robot to react against impacts during the task. The second approach targets an industrial 6-DOF manipulator equipped with a 6-axis force sensor, for which a vision-based compliance control law with force sensing is proposed. Tasks involving interaction with unknown surfaces were carried out to verify the effectiveness of the proposed controller. Both approaches have been successfully validated by experiments: two Dr Robot i90 mobile robots were used to implement cooperative object grasping, and a Mitsubishi RV-1A manipulator was utilized to perform compliance motion control.
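
For context on the compliance strategies above (the thesis's own control laws are not reproduced here), a common way to realize compliant motion is an admittance law M·ẍ + B·ẋ + K·x = f that turns a measured or estimated contact force into a position offset added to the nominal trajectory. A minimal discrete-time sketch with placeholder parameters:

```python
def admittance_step(f_ext, x, xd, dt, m=1.0, b=20.0, k=100.0):
    """One Euler step of m*xdd + b*xd + k*x = f_ext (per Cartesian axis).

    Returns the updated compliance offset x and its velocity xd; the
    mass/damping/stiffness values are illustrative assumptions.
    """
    xdd = (f_ext - b * xd - k * x) / m
    xd += xdd * dt
    x += xd * dt
    return x, xd
```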
7

Wu, Shian-shin (巫憲欣). "Machine Vision Based Robot Motion Control by Using a SOPC System." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/qsrt9p.

Abstract:
Master's thesis, National Taiwan University of Science and Technology, Department of Mechanical Engineering, ROC year 94 (2005).
This thesis presents a servo control IC for a robot arm developed with SOPC technology. A gobang (five-in-a-row) game is implemented on the system, using vision feedback to recognize the pieces on the board. The proposed servo control IC contains two modules. The first is implemented as hardware circuits, providing five quadrature-encoder pulse processors, detection of ten limit switches, five pulse-width modulation (PWM) generators, and CMOS image sensor signal capture. The second is implemented in software on a Nios II microprocessor, providing a UART connection to a PC, inverse kinematics of the robot arm, point-to-point motion control, continuous motion trajectory control, sequential control, self-organizing fuzzy control, fuzzy sliding-mode control, digital image processing, and the gobang game AI algorithm. The digital hardware circuits are designed in the Verilog language, and the programs on the Nios II microprocessor are coded in C. The FPGA chip is an Altera Stratix II EP2S60F672C5ES on the development board, and the CMOS color image sensor is a PixArt PAS106BCB283 with a resolution of 356×292 pixels. Finally, an integrated experimental system comprising the Nios II development board, a five-axis robot arm, DC motor drivers and the CMOS image sensor has been constructed. Experimental results demonstrate the effectiveness and correctness of the proposed servo control system.
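
The encoder processing above is implemented in Verilog on the FPGA; as a behavioral illustration of what a quadrature-encoder pulse processor does, here is the standard 4x decoding logic in Python (the state table is the textbook one, not code from the thesis):

```python
# 4x quadrature decoding: each (previous, current) state of the A/B
# channels maps to a count of -1, 0, or +1.
QUAD_TABLE = {
    (0b00, 0b01): +1, (0b01, 0b11): +1, (0b11, 0b10): +1, (0b10, 0b00): +1,
    (0b00, 0b10): -1, (0b10, 0b11): -1, (0b11, 0b01): -1, (0b01, 0b00): -1,
}

def decode(samples):
    """samples: iterable of (A, B) bit pairs captured each clock tick."""
    position, prev = 0, None
    for a, b in samples:
        curr = (a << 1) | b
        if prev is not None:
            position += QUAD_TABLE.get((prev, curr), 0)  # 0: no change/glitch
        prev = curr
    return position
```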
8

"Distributed, vision-based control laws for motion coordination in multi-agent systems." UNIVERSITY OF PENNSYLVANIA, 2009. http://pqdtopen.proquest.com/#viewpdf?dispub=3328697.

9

Yu, Wei-Fen (游韋汎). "Vision-based Motion Control of Parallel Robot for Pick and Place Applications." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/3efet8.

Abstract:
Master's thesis, Da-Yeh University, Department of Mechanical and Automation Engineering, ROC year 106 (2017).
This thesis develops vision-based motion control of a parallel robot for pick-and-place tasks. A personal computer, a CCD camera, a Delta robot, a Micro-Box controller and a servo drive system are integrated. The image processing, trajectory planning and motion control programs are written on the Matlab software development platform to control the picking and placing motion. First, the forward and inverse kinematics of the Delta robot arm are derived. The CCD camera captures an image of an unknown object on the work surface, and the image processing program determines its position in Cartesian coordinates. Accordingly, the trajectory is planned to obtain the shortest motion path of the robot arm, and the joint angles of the three-axis arm are calculated by inverse kinematics. The motion control program, written on the Simulink software development platform, drives the Micro-Box controller to command the arm's three-axis actuators and complete precise, fast pick-and-place operations on objects at unknown positions. Finally, the feasibility of the proposed method is verified by software and hardware simulations and experiments.
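
The inverse-kinematics step has a well-known closed-form solution for Delta robots. The sketch below follows the widely circulated geometric derivation; all link dimensions are placeholder assumptions rather than the parameters of the thesis's robot.

```python
from math import sqrt, atan2, cos, sin, pi, tan

def _arm_angle(x0, y0, z0, e, f, re, rf):
    """Shoulder angle of one arm in its own YZ plane (z0 < 0 below the base)."""
    t = tan(pi / 6)
    y1 = -0.5 * t * f        # base joint offset
    y0 = y0 - 0.5 * t * e    # shift effector joint into the arm frame
    a = (x0**2 + y0**2 + z0**2 + rf**2 - re**2 - y1**2) / (2 * z0)
    b = (y1 - y0) / z0
    d = -(a + b * y1)**2 + rf * (b**2 * rf + rf)   # discriminant
    if d < 0:
        raise ValueError("target outside workspace")
    yj = (y1 - a * b - sqrt(d)) / (b**2 + 1)       # elbow, outer solution
    zj = a + b * yj
    return atan2(-zj, y1 - yj)

def delta_ik(x, y, z, e=60.0, f=200.0, re=300.0, rf=120.0):
    """Effector position (x, y, z) -> three shoulder angles (rad).

    e, f: effector/base triangle side lengths; re: forearm; rf: upper arm.
    All dimensions are illustrative, e.g. delta_ik(0.0, 0.0, -250.0).
    """
    angles = []
    for k in range(3):
        c, s = cos(2 * pi * k / 3), sin(2 * pi * k / 3)
        # rotate the target into each arm's frame (arms 120 degrees apart)
        angles.append(_arm_angle(c * x + s * y, -s * x + c * y, z, e, f, re, rf))
    return angles
```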
10

Chen, Kun-Yung (陳崑永). "System Identification and Vision-Based Motion Control for a Motor-Toggle Mechanism." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/42443001816125214752.

Abstract:
Master's thesis, National Kaohsiung First University of Science and Technology, Graduate Institute of Mechanical and Automation Engineering, ROC year 93 (2004).
In this thesis, a punching machine composed of a toggle mechanism driven by a permanent magnet (PM) synchronous servomotor is studied. First, Hamilton's principle, Lagrange multipliers, geometric constraints and the partitioning method are employed to derive its dynamic equations. Because the system parameters are difficult to obtain when the mechanism's components cannot be taken apart, the recursive least-squares (RLS) method is implemented to identify them. The thesis then compares visual-servoing feedback motion control with a fuzzy logic controller (FLC) and an adaptive controller, the latter designed via stability analysis with an inertia-related Lyapunov function. The main purpose of the punching machine is to transport workpieces to a fixed position for manufacture; to satisfy the demands on machine performance, three controllers, including the FLC and the adaptive controller, are designed to control the slider responses. In contrast to previous studies, a vision servo system, a non-contact measurement based on a charge-coupled device (CCD) camera observing a color pattern, is employed to sense the output state instead of the expensive linear scale or motor encoder of the motor-mechanism coupled system. Finally, the good agreement between numerical simulations and experimental results shows that the proposed machine-vision-based controller is robust to external disturbances on the punching machine system.
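
The recursive least-squares identification mentioned above follows a standard update; a generic sketch (the forgetting factor, initialization, and regressor layout are conventional defaults, not values from the thesis):

```python
import numpy as np

class RLS:
    """Recursive least squares for y = phi^T theta + noise."""
    def __init__(self, n, lam=0.99, p0=1e4):
        self.theta = np.zeros(n)     # parameter estimate
        self.P = np.eye(n) * p0      # covariance; large means uncertain
        self.lam = lam               # forgetting factor

    def update(self, phi, y):
        phi = np.asarray(phi, dtype=float)
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)         # gain vector
        self.theta += k * (y - phi @ self.theta)   # innovation correction
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
        return self.theta
```

Each sampling instant supplies a regressor phi (built from measured motions and inputs) and an output y; the estimate theta tracks the unknown mechanism parameters.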
11

Huang, Bo-Shiun (黃柏勛). "Monocular Vision Single Image Based Motion Control for Autonomous Mobile Robot Target Tracking." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/35833843848653117237.

Abstract:
Master's thesis, Chung Yuan Christian University, Graduate Institute of Mechanical Engineering, ROC year 100 (2011).
Due to the rapid improvement of autonomous mobile robot technology in recent years, autonomous mobile robots have been widely applied in a variety of domains such as medical operations, healthcare, and security. The development of visual tracking systems plays a key role in expanding and enhancing the functions and applications of autonomous mobile robots. An optimal, or at least suitable, visual tracking system should possess high accuracy and use few hardware and software resources. This thesis proposes a new motion control method, based on monocular vision and a single image, for autonomous mobile robot target tracking. The proposed method predicts a moving target's position in an image through a particle filter. Due to the stochastic properties of particle filtering, the method can effectively and accurately handle both linear and nonlinear dynamic motions. In addition, it uses simple polynomial calculations to map a target's image position to its real-world coordinates, so it needs few software resources for computation. Moreover, it adopts the monocular vision approach, i.e., it uses a single camera, and therefore needs few hardware resources for implementation. The method predicts a moving target's position in an image and calculates the corresponding real-world coordinates relative to the mobile robot. Based on the target's relative coordinates, the mobile robot is commanded to move towards the target in order to keep it at the center of the camera's field of view. Experimental results show that the proposed method produces acceptable to good results in linear and nonlinear tracking experiments, and has an overall better tracking performance than the Kalman filter approach.
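
A particle filter of the kind described predicts the target's image position by propagating and reweighting random samples. A bare-bones 2D sketch follows; the random-walk motion model and Gaussian likelihood are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, measurement,
                         motion_std=5.0, meas_std=10.0):
    """One predict/update/resample cycle for 2D pixel-position tracking.

    particles: (N, 2) candidate (x, y) positions; weights: (N,) normalized;
    measurement: detected (x, y), or None when the target is not found.
    """
    # Predict: random-walk motion model (a constant-velocity model also works)
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    if measurement is not None:
        # Update: Gaussian likelihood of each particle given the detection
        d2 = np.sum((particles - np.asarray(measurement)) ** 2, axis=1)
        weights = weights * np.exp(-0.5 * d2 / meas_std**2)
        weights = weights / weights.sum()
    # Resample when the effective sample size collapses
    if 1.0 / np.sum(weights**2) < 0.5 * len(weights):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(weights), 1.0 / len(weights))
    estimate = np.average(particles, axis=0, weights=weights)
    return particles, weights, estimate
```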
12

Hung, Tsung-Wui (洪宗輝). "Unmanned Railcar Motion Control Based on Real-Time Image Recognition of Computer Vision." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/f8235t.

Abstract:
Master's thesis, I-Shou University, Department of Electrical Engineering, ROC year 104 (2015).
The main purpose of this thesis is to construct an autopilot system for an unmanned railcar based on computer vision technology. In operation, a wireless camera on board the railcar acquires continuous image frames and transfers them through a wireless microwave receiver to the host computer. The main program, written in Python, keeps detecting particular signs meaning acceleration, deceleration, reverse and stop in the received images. When a particular sign is detected and recognized, the main program issues a motion command through an Arduino UNO R3 board, which drives an Arduino L298N board to generate PWM signals for the railcar's driving motor so that the railcar responds properly. In addition, a man-machine interface that takes user command inputs and displays the camera image is provided. The sign identification program is developed based on Haar-like feature training and the AdaBoost classifier from OpenCV. To achieve the objective, comprehensive system integration covering hardware, device drivers, protocols, the application program and the man-machine interface has been carried out. The experimental results successfully verify the proposed methodology and the integrated system: on average, the railcar responds within one second of a particular sign being detected, so the real-time performance of the system is assured. Since pictures of the particular signs can be easily reproduced at low cost, they can be distributed along the sides of the rails or hung above the rails to build an autopilot railcar system with applications in mass rapid transit (MRT) and production-line automation.
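
The detect-then-command pipeline can be sketched with OpenCV's Haar-cascade detector and a serial link to the Arduino. The cascade file names, serial port, and one-byte command protocol below are assumptions for illustration, not details given in the thesis.

```python
import cv2
import serial  # pyserial

COMMANDS = {"accelerate": b"A", "decelerate": b"D", "reverse": b"R", "stop": b"S"}

# One trained Haar cascade per sign (file names are hypothetical)
cascades = {name: cv2.CascadeClassifier(f"{name}_sign.xml") for name in COMMANDS}
arduino = serial.Serial("/dev/ttyUSB0", 9600, timeout=0.1)  # port is assumed
cap = cv2.VideoCapture(0)  # wireless receiver exposed as a capture device

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for name, cascade in cascades.items():
        hits = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(hits) > 0:
            arduino.write(COMMANDS[name])  # Arduino maps the byte to L298N PWM
            break
```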
13

Yao, Li-Wei (姚力瑋). "Vision-assisted Behavior-based Motion Control for a Differential-Drive Mobile Robot System." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/10448378933247185152.

Abstract:
Master's thesis, Tamkang University, Department of Mechanical and Electro-Mechanical Engineering, ROC year 97 (2008).
Based on the dynamic model of a differential-drive mobile robot, a motion controller and a behavior-based controller are designed and implemented in this research and then applied to robot self-localization and mapping in a known environment. The research comprises four parts: the dynamic motion control of the differential-drive mobile robot, the behavior-based control of the robot, the motion model of the mobile robot, and system integration and experiments. The developed behavior-based motion controller is applied to a differential-drive mobile robot with omnidirectional vision.
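
The motion model of a differential-drive robot maps the two wheel speeds to forward speed and yaw rate; a standard odometry sketch, with the wheel radius and track width as placeholder values:

```python
from math import cos, sin

def diff_drive_step(x, y, theta, omega_l, omega_r, dt, r=0.05, L=0.30):
    """Integrate differential-drive kinematics over one time step.

    omega_l, omega_r: wheel angular speeds (rad/s);
    r: wheel radius (m), L: wheel separation (m) -- assumed values.
    """
    v = r * (omega_r + omega_l) / 2.0   # forward speed
    w = r * (omega_r - omega_l) / L     # yaw rate
    x += v * cos(theta) * dt
    y += v * sin(theta) * dt
    theta += w * dt
    return x, y, theta
```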
14

賴天寬. "An active vision motion control based on a parallel architecture using three independent actuators." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/31942121242641608000.

Abstract:
Master's thesis, National Changhua University of Education, Department of Electrical Engineering, ROC year 92 (2003).
This thesis proposes an active vision motion control based on a parallel architecture using three independent actuators (PATIA), offering three degrees of freedom, low inertia, high stability and high speed. The geometric structure of the PATIA and its motion control algorithms are presented. An iteration method is developed to handle the non-uniqueness of solutions in the active motion mode. The derived control relationships between the actuator motion angles and the camera view directions are verified by simulation programs. The initial PATIA calibration is performed experimentally: calibration methods are developed, and fundamental PATIA control methods are constructed for vision tracking control and active motion control applications. The proposed method is effective in tracking the motion of a toy car, and the camera view directions are precisely controlled in the active motion mode. The experimental results verify that the proposed control algorithm is effective.
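
The abstract does not spell out the iteration method; as a generic illustration, a damped Newton iteration shows how starting near a chosen configuration selects one of several solutions of a parallel mechanism's inverse problem. The forward map f is left abstract here, since the PATIA geometry is not given.

```python
import numpy as np

def iterate_to_angles(f, q0, target, iters=50, eps=1e-3, step=0.5):
    """Find actuator angles q with f(q) ~= target view direction.

    f: forward map of the mechanism (assumed available numerically);
    q0: starting guess -- its basin of attraction picks one solution,
    which is one way an iteration scheme can resolve non-uniqueness.
    """
    q = np.asarray(q0, dtype=float)
    for _ in range(iters):
        err = np.asarray(target) - f(q)
        if np.linalg.norm(err) < eps:
            break
        J = np.empty((len(err), len(q)))   # numerical Jacobian of f
        for j in range(len(q)):
            dq = np.zeros(len(q))
            dq[j] = 1e-6
            J[:, j] = (f(q + dq) - f(q)) / 1e-6
        q = q + step * np.linalg.pinv(J) @ err  # damped Newton update
    return q
```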
15

Lin, Ting-hsuan (林婷萱). "A Novel 3-D Motion Control System Based on Binocular Stereo Vision and Fingertip Detection Techniques." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/398jeu.

Abstract:
Master's thesis, National Taiwan University of Science and Technology, Department of Computer Science and Information Engineering, ROC year 103 (2014).
With the advance of technology, people can hardly live without electronic devices of some kind. To make these devices more convenient and friendlier to use, human-computer interaction has become a very important topic. Take the development of the cell phone for example: the conversion of the interface from traditional buttons to a smart touch panel allowed many more functions to be implemented. In recent years, thanks to somatosensory technology, an intermediate controller device is no longer needed when interacting with a system, so the gap between system and human is significantly narrowed. Among somatosensory techniques, using the hands to communicate with a system is considered the most intuitive. In this thesis, a fingertip interaction system based on stereo vision is proposed. We use the simplest devices and environment setup to locate the positions of fingertips exactly in three-dimensional space, and we demonstrate the recognition of some common fingertip gestures. First, to detect fingertips, we use color information and geometric features to calculate the fingertip positions in the two-dimensional image plane; stereo vision is then used to construct a disparity map from which the depth of each fingertip is obtained. Finally, we compute three-dimensional features of the fingertip trajectories and apply machine learning to train on and recognize the trajectories. We carried out two experiments, tested by different people under different lighting conditions using a cell phone with two cameras, and can detect fingertip positions and recognize fingertip gestures accurately. The first experiment, on fingertip detection, achieves an average accuracy of 91.78%; the second, on gesture recognition, achieves an average accuracy of 88.40%. In addition, the proposed system runs in real time, at about 25 frames per second on images of 320×180 resolution.
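
Depth from binocular stereo follows Z = f·B/d (focal length times baseline over disparity). The sketch below reads the disparity at a detected fingertip pixel using OpenCV block matching; the focal length and baseline are placeholder calibration values for a dual-camera phone.

```python
import cv2

def fingertip_depth(left_gray, right_gray, u, v,
                    focal_px=700.0, baseline_m=0.06):
    """Depth (m) of the fingertip at pixel (u, v); f and B are assumed."""
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left_gray, right_gray).astype(float) / 16.0
    d = disparity[v, u]                 # disparity (px) at the fingertip
    if d <= 0:
        return None                     # no valid stereo match here
    return focal_px * baseline_m / d    # Z = f * B / d
```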