Dissertations / Theses on the topic 'Vision-based motion controls'
Consult the top 15 dissertations / theses for your research on the topic 'Vision-based motion controls.'
Henning, Timothy Paul. "Dynamics and controls for an omnidirectional robot." Ohio : Ohio University, 2003. http://www.ohiolink.edu/etd/view.cgi?ohiou1175093596.
Reski, Nico. "Change your Perspective : Exploration of a 3D Network created with Open Data in an Immersive Virtual Reality Environment using a Head-mounted Display and Vision-based Motion Controls." Thesis, Linnéuniversitetet, Institutionen för medieteknik (ME), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-46779.
Sivilli, Robert. "Vision-Based Testbeds for Control System Applications." Master's thesis, University of Central Florida, 2012. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5504.
Full textM.S.A.E.
Masters
Mechanical and Aerospace Engineering
Engineering and Computer Science
Aerospace Engineering; Space Systems Design and Engineering
Hoff, Rein. "The aeroplane spin motion and an investigation into factors affecting the aeroplane spin." Thesis, Brunel University, 2014. http://bura.brunel.ac.uk/handle/2438/10537.
Full textSabiron, Guillaume. "Synthèse d’une solution GNC basée sur des capteurs de flux optique bio-inspirés adaptés à la mesure des basses vitesses pour un atterrissage lunaire autonome en douceur." Thesis, Toulouse, ISAE, 2014. http://www.theses.fr/2014ESAE0038/document.
In this PhD thesis, the challenge of autonomous lunar landing was addressed and an innovative method was developed, providing an alternative to the classical sensor suites based on RADAR, LIDAR and cameras, which tend to be bulky, energy-consuming and expensive. The first part is devoted to the development of a sensor inspired by the fly's visual sensitivity to optic flow (OF). The OF is an index giving the relative angular velocity of the environment sensed by the retina of a moving insect or robot. In a fixed environment (with no external motion), the self-motion of an airborne vehicle generates an OF containing information about its own velocity and attitude and the distance to obstacles. Based on the "Time of Travel" principle, results are presented for two versions of optic flow sensors, each built from five local motion sensors (LMSs). The first version accurately measures the OF in two opposite directions; it was tested in the laboratory and gave satisfying results. The second version was developed to operate at the low velocities liable to occur during a lunar landing. After development, the sensors' performances were characterized both indoors and outdoors, and they were finally tested on board an 80-kg helicopter flying in an outdoor environment. The Guidance, Navigation and Control (GNC) system was designed in the second part on the basis of several algorithms, using tools such as optimal control, nonlinear control design and observation theory. This approach is particularly innovative, since it makes it possible to perform a soft landing on the basis of OF measurements while relying as little as possible on inertial sensors. The final constraints imposed by the industrial partners were met by mounting several non-gimbaled sensors, oriented in different gaze directions, on the lander's structure. Information about the lander's self-motion present in the OF measurements is extracted by navigation algorithms, which yield estimates of the ventral OF, the expansion OF and the pitch angle. It was also established that the planetary lander can be brought gently to the ground by tracking a pre-computed optimal reference trajectory that minimizes fuel consumption. Software-in-the-loop simulations were carried out to assess the potential of the proposed GNC approach; in these simulations, the sensor firmware was taken into account and virtual images of the lunar surface were used to improve the realism of the simulated landings.
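As a concrete illustration of the "Time of Travel" principle named above: two adjacent photoreceptors separated by a known inter-receptor angle see the same moving contrast with a short delay, and the optic flow is that angle divided by the measured delay. A minimal sketch, with a batch cross-correlation standing in for the sensors' real-time firmware (the function and variable names are illustrative, not from the thesis):

```python
import numpy as np

def time_of_travel_of(sig_a, sig_b, dt, delta_phi_deg):
    """Estimate optic flow (deg/s) from two photoreceptor signals.

    sig_a, sig_b : equal-length arrays sampled every `dt` seconds from two
    adjacent photoreceptors separated by `delta_phi_deg` degrees.  A moving
    contrast excites A first and B after a delay; the optic flow is the
    inter-receptor angle divided by that time of travel.
    """
    corr = np.correlate(sig_b - sig_b.mean(), sig_a - sig_a.mean(), mode="full")
    lag = int(corr.argmax()) - (len(sig_a) - 1)   # delay of B behind A, in samples
    if lag <= 0:
        return None                               # no motion in the A -> B direction
    return delta_phi_deg / (lag * dt)

# Worked example: receptors 4 deg apart, measured delay of 10 ms
# -> OF = 4 / 0.01 = 400 deg/s.
```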
Nguyen, Van-Truong. "Vision-Based Compliance Motion Control of Robots." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/458a77.
National Taipei University of Technology, Department of Electrical Engineering, ROC academic year 98 (2009–10).
This thesis presents vision-based compliance control of robots, including two dual-arm mobile robots and an industrial manipulator. Two approaches to compliance motion control have been accomplished. One is for an object-grasping task performed by two cooperative mobile robots, each equipped with dual arms. The two mobile robots, master and slave, are controlled within a visual intelligent space, with compliant arm motion, to cooperatively move an object to a target; a compliance control strategy without any force sensor is proposed for the arms of the slave robot to react against impacts during the task. The other approach is for an industrial 6-DOF manipulator equipped with a 6-axis force sensor, for which a vision-based compliance control law with force sensing is proposed. Tasks involving interaction with unknown surfaces were performed experimentally to verify the effectiveness of the proposed controller. Both approaches have been successfully validated by experiments: two Dr Robot i90 mobile robots were used to implement cooperative object grasping, and a Mitsubishi RV-1A manipulator was used to perform compliance motion control.
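The abstract does not spell out the control law, but a common way to obtain compliant motion from a sensed (or estimated) contact force is admittance control, where force is converted into a position offset through a virtual mass-damper-spring. A minimal 1-DOF sketch under assumed virtual-impedance gains; all names and values are illustrative, not the thesis' controller:

```python
class AdmittanceController:
    """Virtual impedance m*x'' + b*x' + k*x = f_ext, integrated one step at a time."""

    def __init__(self, m=1.0, b=20.0, k=100.0):
        self.m, self.b, self.k = m, b, k   # virtual mass, damping, stiffness
        self.x, self.v = 0.0, 0.0          # compliant offset and its velocity

    def update(self, f_ext, dt):
        """One Euler step: contact force in, position offset out."""
        a = (f_ext - self.b * self.v - self.k * self.x) / self.m
        self.v += a * dt
        self.x += self.v * dt
        return self.x

ctrl = AdmittanceController()
# Per control cycle: x_cmd = x_ref + ctrl.update(f_measured, dt); the arm
# yields under contact, and a sensorless variant would replace f_measured
# with a force estimate.
```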
Wu, Shian-shin (巫憲欣). "Machine Vision Based Robot Motion Control by Using a SOPC System." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/qsrt9p.
National Taiwan University of Science and Technology, Department of Mechanical Engineering, ROC academic year 94 (2005–06).
A servo control IC for a robot arm using SOPC technology is presented in this thesis. A gobang (five-in-a-row) game is implemented on the system, using vision feedback to recognize pieces on the board. The proposed servo control IC contains two modules. One module is implemented in hardware circuits; its functions are processing five quadrature-encoder pulse channels, detecting ten limit switches, generating five pulse-width-modulation (PWM) outputs, and capturing the CMOS image sensor signal. The other module is implemented in software on a Nios II microprocessor; its functions are a UART connection to the PC, inverse kinematics of the robot arm, point-to-point motion control, continuous motion trajectory control, sequential control, self-organizing fuzzy control, fuzzy sliding-mode control, digital image processing, and the gobang game AI algorithm. The digital hardware circuits are designed in Verilog, and the programs on the Nios II microprocessor are coded in C. The FPGA chip is an Altera Stratix II EP2S60F672C5ES on the development board, and the CMOS color image sensor is a PixArt PAS106BCB283 with a resolution of 356×292 pixels. Finally, an integrated experimental system including the Nios II development board, a five-axis robot arm, DC motor drivers and the CMOS image sensor has been constructed. Experimental results demonstrate the effectiveness and correctness of the proposed servo control system.
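Among the hardware functions listed above, quadrature-encoder pulse processing is a small state machine. The thesis implements it in Verilog, so the Python below is only a behavioral model of the standard 4x decoding logic:

```python
# Valid (previous AB, current AB) transitions of a 4x quadrature decoder;
# the forward sequence 00 -> 01 -> 11 -> 10 counts up, the reverse counts
# down, and illegal double-bit changes are ignored.
_DELTA = {
    (0b00, 0b01): +1, (0b01, 0b11): +1, (0b11, 0b10): +1, (0b10, 0b00): +1,
    (0b00, 0b10): -1, (0b10, 0b11): -1, (0b11, 0b01): -1, (0b01, 0b00): -1,
}

class QuadratureDecoder:
    def __init__(self):
        self.state = 0b00
        self.count = 0

    def sample(self, a: int, b: int) -> int:
        """Feed one sampled (A, B) bit pair; return the running position count."""
        new = (a << 1) | b
        self.count += _DELTA.get((self.state, new), 0)
        self.state = new
        return self.count
```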
"Distributed, vision-based control laws for motion coordination in multi-agent systems." UNIVERSITY OF PENNSYLVANIA, 2009. http://pqdtopen.proquest.com/#viewpdf?dispub=3328697.
Full textYU, WEI-FEN, and 游韋汎. "Vision-based Motion Control of Parallel Robot for Pick and Place Applications." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/3efet8.
Da-Yeh University, Department of Mechanical and Automation Engineering, ROC academic year 106 (2017–18).
This thesis developed vision-based motion control of a parallel robot for pick-and-place applications. A personal computer, a CCD camera, a Delta robot, a Micro-Box controller and a servo drive system were integrated, and the image processing, trajectory planning and motion control programs were written on the Matlab software development platform to control the picking and placing motion. First, the forward and inverse kinematics of the Delta robot arm were derived. A CCD camera captured the image of an unknown object on the work surface, and the image processing program determined its position in Cartesian coordinates. Accordingly, the trajectory was planned to obtain the shortest motion path of the robot arm, and the joint angles of the three-axis arm were calculated by inverse kinematics. The motion control program was written on the Simulink development platform, and the Micro-Box controller drove the arm's three axis actuators to complete precise and fast pick-and-place operations on objects at unknown positions. Finally, the feasibility of the proposed method was verified by software and hardware simulations and experiments.
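The inverse kinematics of a Delta robot has a well-known closed-form solution; the sketch below follows the textbook derivation, with placeholder geometry values rather than the dimensions of the thesis' robot:

```python
import math

F, E = 200.0, 60.0     # base / effector triangle sides (mm), placeholders
RF, RE = 150.0, 300.0  # upper-arm / forearm lengths (mm), placeholders

def _angle_yz(x0, y0, z0):
    """Joint angle of one arm, with that arm placed in the YZ plane."""
    y1 = -0.5 / math.sqrt(3) * F            # base joint offset
    y0 -= 0.5 / math.sqrt(3) * E            # shift target by the effector offset
    a = (x0*x0 + y0*y0 + z0*z0 + RF*RF - RE*RE - y1*y1) / (2.0 * z0)
    b = (y1 - y0) / z0
    d = -(a + b*y1)**2 + RF*(b*b*RF + RF)   # discriminant of the circle intersection
    if d < 0:
        raise ValueError("point not reachable")
    yj = (y1 - a*b - math.sqrt(d)) / (b*b + 1.0)
    return math.degrees(math.atan2(-(a + b*yj), y1 - yj))

def delta_ik(x, y, z):
    """Three motor angles (deg) placing the effector at (x, y, z); z < 0 below the base."""
    angles = []
    for i in range(3):                       # arms spaced 120 degrees apart
        phi = math.radians(120.0 * i)
        xr = x*math.cos(phi) + y*math.sin(phi)   # rotate the target into the arm frame
        yr = -x*math.sin(phi) + y*math.cos(phi)
        angles.append(_angle_yz(xr, yr, z))
    return angles

# e.g. delta_ik(0.0, 0.0, -250.0) with these placeholder dimensions.
```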
Chen, Kun-Yung (陳崑永). "System Identification and Vision-Based Motion Control for a Motor-Toggle Mechanism." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/42443001816125214752.
National Kaohsiung First University of Science and Technology, Graduate Institute of Mechanical and Automation Engineering, ROC academic year 93 (2004–05).
In this thesis, the punching machine is composed of a toggle mechanism driven by a permanent magnet (PM) synchronous servomotor. First, Hamilton's principle, Lagrange multipliers, geometric constraints and the partitioning method are employed to derive its dynamic equations. The system parameters are difficult to obtain when the mechanism's components cannot be taken apart, so the recursive least-squares (RLS) method is implemented to identify them. The thesis compares visual-servoing feedback motion control using a fuzzy logic controller (FLC) and an adaptive controller, the latter designed via stability analysis with an inertia-related Lyapunov function, on the punching machine. The main purpose of the punching machine is to transport workpieces to a fixed position for manufacture. To satisfy the machine performance demands, three controllers, including the FLC and the adaptive controller, are designed to control the slider response. In contrast to previous studies, a non-contact vision servo system based on a charge-coupled device (CCD) camera is employed to measure the output state via a color pattern, instead of using an expensive linear scale or the motor encoder of the motor-mechanism coupled system. Finally, the good agreement between numerical simulations and experimental results confirms that the proposed machine-vision-based controller is robust to external disturbances on the punching machine system.
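The recursive least-squares identification named above follows the standard recursion; a generic sketch (the regressor construction, forgetting factor and initial covariance are illustrative, not the thesis' values):

```python
import numpy as np

class RLS:
    """Textbook recursive least squares with a forgetting factor."""

    def __init__(self, n_params, lam=0.99, p0=1e4):
        self.theta = np.zeros(n_params)   # parameter estimates
        self.P = np.eye(n_params) * p0    # estimate covariance
        self.lam = lam                    # forgetting factor

    def update(self, phi, y):
        """phi: regressor vector; y: measured output for this sample."""
        phi = np.asarray(phi, dtype=float)
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)            # gain vector
        self.theta = self.theta + k * (y - phi @ self.theta)
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
        return self.theta

# For a model linear in the parameters, y_k = phi_k . theta, feed one
# (phi_k, y_k) pair per sample; theta converges to the identified values.
```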
Huang, Bo-Shiun (黃柏勛). "Monocular Vision Single Image Based Motion Control for Autonomous Mobile Robot Target Tracking." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/35833843848653117237.
Chung Yuan Christian University, Graduate Institute of Mechanical Engineering, ROC academic year 100 (2011–12).
Due to the rapid improvement of autonomous mobile robot technology in recent years, autonomous mobile robots have been widely applied in a variety of domains such as medical operations, healthcare, and security. The development of visual tracking systems plays a key role in expanding and enhancing the functions and applications of autonomous mobile robots. An optimal, or at least suitable, visual tracking system should possess high accuracy while using few hardware and software resources. This thesis proposes a new motion control method, based on monocular vision and a single image, for autonomous mobile robot target tracking. The proposed method predicts a moving target's position in an image with a particle filter; thanks to the stochastic nature of particle filtering, it can effectively and accurately handle both linear and nonlinear target motions. In addition, the method uses simple polynomial calculations to map a target's image position to its real-world coordinates, so it needs few software resources for computation, and it adopts a monocular vision approach, i.e., a single camera, so it needs few hardware resources for implementation. The method predicts a moving target's position in the image and calculates the corresponding real-world coordinates relative to the mobile robot. Based on the target's relative coordinates, the mobile robot is commanded to move towards the target so as to keep it in the camera's central field of view. Experimental results show that the proposed method produces acceptable to good results in linear and nonlinear tracking experiments, and has an overall better tracking performance than the Kalman filter approach.
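The prediction step described above is a bootstrap particle filter over the target's image position; a generic sketch with an assumed random-walk motion model and Gaussian measurement likelihood (the thesis' actual models and tuning are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)

class ParticleFilter2D:
    def __init__(self, n=500, proc_std=5.0, meas_std=10.0, size=(320, 240)):
        self.x = rng.uniform((0, 0), size, (n, 2))  # particle positions (px)
        self.proc_std, self.meas_std = proc_std, meas_std

    def predict(self):
        # Random-walk motion model: diffuse particles with process noise.
        self.x += rng.normal(0.0, self.proc_std, self.x.shape)

    def update(self, z):
        # Weight particles by Gaussian likelihood of the measurement z = (u, v).
        d2 = np.sum((self.x - np.asarray(z)) ** 2, axis=1)
        w = np.exp(-0.5 * d2 / self.meas_std**2)
        w /= w.sum()
        # Multinomial resampling to avoid weight degeneracy.
        self.x = self.x[rng.choice(len(self.x), len(self.x), p=w)]

    def estimate(self):
        return self.x.mean(axis=0)                  # predicted target position
```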
Hung, Tsung-Wui (洪宗輝). "Unmanned Railcar Motion Control Based on Real-Time Image Recognition of Computer Vision." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/f8235t.
I-Shou University, Department of Electrical Engineering, ROC academic year 104 (2015–16).
The main purpose of this thesis is to construct an autopilot system for an unmanned railcar based on computer vision technology. In operation, a wireless camera on board the railcar acquires continuous image frames and transfers them through a wireless microwave receiver to the host computer. The main program, written in Python, continuously detects particular signs meaning acceleration, deceleration, reverse and stop in the received images. When such a sign is detected and recognized, the main program issues a motion command through an Arduino UNO R3 board, which controls an L298N motor-driver board to generate PWM signals for the railcar's driving motor, so the railcar responds properly. In addition, a man-machine interface that takes user command inputs and displays the camera image is provided. The sign-identification program is developed with Haar-like feature training and the AdaBoost classifier from OpenCV. To achieve the objective, comprehensive system integration covering hardware, device drivers, protocols, the application program and the man-machine interface has been carried out. The experimental results successfully verify the proposed methodology and the integrated system: on average, the railcar responds within one second of a sign being detected, so the real-time performance of the system is assured. Since pictures of the signs can be reproduced easily and at low cost, they can be distributed along the sides of the rails or hung above the rails to build an autopilot railcar system with applications in mass rapid transit (MRT) and production-line automation.
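A minimal sketch of the detect-and-command loop just described, using OpenCV's Haar-cascade API and pyserial; the cascade file, serial port and one-byte command protocol are hypothetical placeholders:

```python
import cv2
import serial

cascade = cv2.CascadeClassifier("stop_sign.xml")  # a trained Haar cascade (placeholder)
arduino = serial.Serial("/dev/ttyACM0", 9600)     # link to the Arduino UNO R3
cap = cv2.VideoCapture(0)                         # wireless-camera receiver

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    hits = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(hits) > 0:
        arduino.write(b"S")                       # e.g. a 'stop' command byte
    cv2.imshow("railcar camera", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
arduino.close()
cv2.destroyAllWindows()
```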
Yao, Li-Wei (姚力瑋). "Vision-assisted Behavior-based Motion Control for a Differential-Drive Mobile Robot System." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/10448378933247185152.
Tamkang University, Master's Program in Mechanical and Electro-Mechanical Engineering, ROC academic year 97 (2008–09).
Based on the dynamic model of a differential-drive mobile robot, a motion controller and a behavior-based controller are designed and implemented in this research, and further applied to robot self-localization and mapping in a known environment. The research comprises four parts: dynamic motion control of the differential-drive mobile robot, behavior-based control for the robot, the motion model of the mobile robot, and system integration and experiments. The developed behavior-based motion controller is applied to a differential-drive mobile robot with omnidirectional vision.
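The kinematic core of any differential-drive controller is the unicycle-to-wheel-speed mapping; a short sketch with an illustrative track width:

```python
L = 0.30  # wheel separation (m), illustrative value

def wheel_speeds(v, omega):
    """Body velocity (v m/s, omega rad/s) -> (left, right) wheel speeds."""
    return v - omega * L / 2.0, v + omega * L / 2.0

def body_velocity(v_l, v_r):
    """Inverse mapping: wheel speeds -> body (v, omega)."""
    return (v_r + v_l) / 2.0, (v_r - v_l) / L
```

In a behavior-based scheme, each behavior (e.g. go-to-goal, obstacle avoidance) typically outputs a desired (v, omega), and the arbitrated command is mapped to wheel speeds as above.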
賴天寬. "An active vision motion control based on a parallel architecture using three independent actuators." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/31942121242641608000.
National Changhua University of Education, Department of Electrical Engineering, ROC academic year 92 (2003–04).
This thesis proposes an active vision motion control based on a parallel architecture using three independent actuators (PATIA), offering three degrees of freedom, low inertia, high stability and high speed. The geometric structure of the PATIA and its motion control algorithms are presented. An iteration method is developed to solve the non-unique-solution problem in the active motion mode. The derived control relationships between the actuator motion angles and the camera view directions are verified by simulation. Initial PATIA calibration is performed experimentally: calibration methods are developed, and fundamental PATIA control methods are constructed for vision tracking control and active motion control applications. The proposed method effectively tracks the motion of a toy car, and the camera view directions are precisely controlled in the active motion mode. The experimental results verify that the proposed control algorithm is effective.
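The abstract does not give the PATIA geometry, but an iteration of the kind it mentions can be sketched generically: a damped Newton iteration inverting the forward map from actuator angles to camera view direction, with the initial guess and damping selecting among the multiple admissible solutions. Everything below is a stand-in, not the thesis' algorithm:

```python
import numpy as np

def solve_angles(f, target, theta0, iters=50, damping=0.5, eps=1e-6):
    """Iteratively solve f(theta) = target for actuator angles theta.

    f : forward map from actuator angles to view direction (a stand-in for
    the PATIA forward kinematics); starting near theta0 selects one of the
    multiple admissible solutions.
    """
    theta = np.asarray(theta0, dtype=float)
    for _ in range(iters):
        err = np.asarray(target) - f(theta)
        if np.linalg.norm(err) < eps:
            break
        J = np.zeros((len(err), len(theta)))     # numerical Jacobian of f
        h = 1e-6
        for j in range(len(theta)):
            d = np.zeros_like(theta)
            d[j] = h
            J[:, j] = (f(theta + d) - f(theta - d)) / (2.0 * h)
        theta = theta + damping * np.linalg.lstsq(J, err, rcond=None)[0]
    return theta
```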
Lin, Ting-hsuan (林婷萱). "A Novel 3-D Motion Control System Based on Binocular Stereo Vision and Fingertip Detection Techniques." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/398jeu.
National Taiwan University of Science and Technology, Department of Computer Science and Information Engineering, ROC academic year 103 (2014–15).
With the advance of technology, people can hardly live without electronic equipment, and to make such devices more convenient and friendlier to use, human-computer interaction has become a very important topic. Take the development of the cell phone for example: the shift of the interface from traditional buttons to a smart touch panel allows many more functions to be implemented. In recent years, with the appearance of somatosensory technology, an intermediate controller is no longer needed when interacting with a system, so the gap between system and human is significantly narrowed. Among somatosensory technologies, using the hand to communicate with a system is considered the most intuitive. In this thesis, a fingertip interaction system based on stereo vision is proposed. We used the simplest devices and environment setup to locate the positions of fingertips exactly in three-dimensional space, and we demonstrated the recognition of some common fingertip gestures. First, to detect fingertips, we used color information and geometric features to calculate fingertip positions in the two-dimensional image plane; we then used stereo vision to construct a disparity map and obtain the fingertips' depth. Finally, we calculated three-dimensional features of the fingertip trajectories and applied machine learning to train on and recognize the trajectories. We carried out two experiments, tested by different people under different lighting conditions, using a cell phone with two cameras; the system detects fingertip positions and recognizes fingertip gestures accurately. The first experiment, on fingertip detection, has an average accuracy rate of 91.78%, and the second, on gesture recognition, has an average accuracy rate of 88.40%. In addition, the proposed system runs in real time, at about 25 frames per second on images of 320×180 resolution.
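The depth recovery step relies on the standard rectified-stereo relation Z = f·B/d (focal length times baseline over disparity); a sketch using OpenCV's block matcher, with placeholder calibration values and image files:

```python
import cv2
import numpy as np

FOCAL_PX = 700.0    # focal length in pixels (from calibration, placeholder)
BASELINE_M = 0.06   # camera separation in metres (placeholder)

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block-matching disparity on a rectified pair; StereoBM returns fixed-point
# values with 4 fractional bits, hence the division by 16.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

def depth_at(u, v):
    """Metric depth of pixel (u, v), e.g. a detected fingertip."""
    d = disparity[v, u]
    if d <= 0:
        return None            # no valid stereo match at this pixel
    return FOCAL_PX * BASELINE_M / d
```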