
Dissertations / Theses on the topic 'Robotic guidance'



Consult the top 50 dissertations / theses for your research on the topic 'Robotic guidance.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Gurunathan, Mohan 1975. "Guidance, navigation and control of a robotic fish." Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/50052.

Abstract:
Thesis (S.B. and M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998.
Includes bibliographical references (p. 70).
by Mohan Gurunathan.
S.B. and M.Eng.
2

Potamianos, Paul. "Development of a robotic guidance system for percutaneous surgery." Thesis, Imperial College London, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.297264.

3

Liu, Taoming. "A Magnetically-Actuated Robotic Catheter for Atrial Fibrillation Ablation under Real-Time Magnetic Resonance Imaging Guidance." Case Western Reserve University School of Graduate Studies / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=case1484654444253783.

4

Borg, Jonathan M. "An industrial robotic system for moving object interception using ideal proportional navigation guidance." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape4/PQDD_0021/MQ54104.pdf.

5

Karahan, Murat. "Prioritized Exploration Strategy Based On Invasion Percolation Guidance." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12611450/index.pdf.

Abstract:
The major aim in search and rescue using mobile robots is to reach trapped survivors and to support rescue operations throughout disaster environments. Our motivation is based on the fact that a search and rescue (SAR) robot can navigate within and penetrate a disaster area only if the area in question possesses connected voids. Traversability or penetrability of a disaster area is a primary factor that guides the navigation of a SAR robot, since it is highly desirable that the robot, without hitting a dead end or getting stuck, keeps its mobility for its primary task of reconnaissance and mapping when searching the highly unstructured environment. We propose two novel guided prioritized exploration systems: 1) a percolation guided methodology, where a percolator estimates the existence of connected voids in the upcoming, yet unexplored region ahead of the robot, so as to increase the efficiency of the reconnaissance operation through the superior ability of percolation guidance in speedy coverage of the area; 2) a hybrid exploration methodology that makes the percolation guided exploration collaborate with entropy based SLAM under a switching control dependent on whether priority is given to position accuracy or to map accuracy. This second methodology has proven to combine the superiority of both methods, so that the active SLAM becomes speedy, with a high coverage rate of the area, as well as accurate in localization.
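As a rough illustration of the percolation idea (not the thesis' implementation: the grid of traversal-resistance values, the start cell and the step budget below are all invented), invasion percolation greedily grows the explored region through the frontier cell most likely to belong to a connected void:

    import heapq

    def invasion_percolation(resistance, start, steps):
        # Grow an 'invaded' region by always expanding into the neighbouring
        # cell with the lowest resistance (i.e. the likeliest connected void).
        rows, cols = len(resistance), len(resistance[0])
        invaded = {start}
        frontier = []  # min-heap of (resistance, cell)

        def push_neighbours(cell):
            r, c = cell
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                n = (r + dr, c + dc)
                if 0 <= n[0] < rows and 0 <= n[1] < cols and n not in invaded:
                    heapq.heappush(frontier, (resistance[n[0]][n[1]], n))

        push_neighbours(start)
        order = [start]
        while frontier and len(order) < steps:
            _, cell = heapq.heappop(frontier)
            if cell in invaded:
                continue  # stale duplicate left in the heap
            invaded.add(cell)
            order.append(cell)
            push_neighbours(cell)
        return order  # cells in the order the robot would prioritise them

    # Toy 4x4 field: low values mark likely voids (easy to traverse).
    field = [[0.2, 0.9, 0.4, 0.8],
             [0.1, 0.7, 0.3, 0.9],
             [0.5, 0.2, 0.6, 0.4],
             [0.9, 0.3, 0.8, 0.1]]
    print(invasion_percolation(field, (0, 0), steps=6))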
6

Purnell, Graham. "Implementation of a robotic system for deboning of a beef forequarter for process meat." Thesis, University of Bristol, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.240579.

7

Roldán, Mckinley Javier Agustín. "Three-dimensional rigid body guidance using gear connections in a robotic manipulator with parallel consecutive axes." [Gainesville, Fla.] : University of Florida, 2007. http://purl.fcla.edu/fcla/etd/UFE0021383.

8

Nakhaeinia, Danial. "Hybrid-Adaptive Switched Control for Robotic Manipulator Interacting with Arbitrary Surface Shapes Under Multi-Sensory Guidance." Thesis, Université d'Ottawa / University of Ottawa, 2018. http://hdl.handle.net/10393/37127.

Abstract:
Industrial robots rapidly gained popularity as they can perform tasks quickly, repeatedly and accurately in static environments. However, in modern manufacturing, robots should also be able to safely interact with arbitrary objects and dynamically adapt their behavior to various situations. The large masses and rigid constructions of industrial robots prevent them from easily being re-tasked. In this context, this work proposes an immediate solution for making rigid manipulators compliant and able to efficiently handle object interactions, with only an add-on module (a custom-designed instrumented compliant wrist) and an original control framework which can easily be ported to different manipulators. The proposed system utilizes both offline and online trajectory planning to achieve fully automated object interaction and surface following, with or without contact, where no prior knowledge of the objects is available. To minimize the complexity of the task, the problem is formulated into four interaction motion modes: free, proximity, contact and a blend of those. The free motion mode guides the robot towards the object of interest using information provided by an RGB-D sensor. The RGB-D sensor is used to collect raw 3D information on the environment and construct an approximate 3D model of an object of interest in the scene. In order to completely explore the object, a novel coverage path planning technique is proposed to generate a primary (offline) trajectory. However, RGB-D sensors provide only limited accuracy in depth measurements and develop blind spots close to surfaces. Therefore, the offline trajectory is further refined by applying the proximity motion mode and the contact motion mode, or a blend of the two (blend motion mode), which allow the robot to dynamically interact with arbitrary objects and adapt to the surfaces it approaches or touches using live proximity and contact feedback from the compliant wrist. To achieve seamless and efficient integration of the sensory information and smooth switching between the different interaction modes, an original hybrid switching scheme is proposed that applies a supervisory (decision making) module and a mixture of hard and blend switches to support data fusion from multiple sensing sources by combining pairs of the main motion modes. Experimental results using a CRS-F3 manipulator demonstrate the feasibility and performance of the proposed method.
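As a rough sketch of the supervisory switching idea only (the thesis' actual decision logic, thresholds and sensor interfaces are not reproduced here; all values below are invented), a mode selector over the four interaction motion modes could look like this:

    from enum import Enum

    class Mode(Enum):
        FREE = "free"            # far from the surface: follow the offline path
        PROXIMITY = "proximity"  # close: refine the path with proximity feedback
        CONTACT = "contact"      # touching: regulate contact via the wrist
        BLEND = "blend"          # both cues active: fuse proximity and contact

    PROXIMITY_RANGE_MM = 50.0  # hypothetical sensing range
    CONTACT_EPS_MM = 1.0       # hypothetical near-contact band

    def supervisor(distance_mm, contact_force_n):
        # Decision-making module: pick the motion mode from live feedback.
        if contact_force_n > 0.0 and distance_mm > CONTACT_EPS_MM:
            return Mode.BLEND
        if contact_force_n > 0.0:
            return Mode.CONTACT
        if distance_mm < PROXIMITY_RANGE_MM:
            return Mode.PROXIMITY
        return Mode.FREE

    print(supervisor(distance_mm=200.0, contact_force_n=0.0))  # Mode.FREE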
9

Sheehan, Mark Christopher. "3D laser methods for calibrating and localising robotic vehicles." Thesis, University of Oxford, 2013. http://ora.ox.ac.uk/objects/uuid:021cd760-62a7-470a-b276-d47ba305dc04.

Abstract:
This thesis is about the construction and automatic target-less calibration of a 3D laser sensor; this is then used to localise an autonomous vehicle without using other sensors. Two novel contributions to our knowledge of robotics are presented here. The first is an automatic calibration routine, which is capable of learning its calibration parameters using only data from a 3D laser scanner. Targets with known dimensions are not required, as has previously been the case. The second main contribution is a localisation algorithm, which uses the high quality data from the calibrated 3D laser scanner with trajectory information from an additional source to build maps of the environment. The vehicle subsequently localises itself within these maps, using the 3D laser sensor alone. Inaccurate laser data manifests itself as blurring when it is plotted in 3D space. The automatic calibration routine recognises that the environment has a true underlying structure to it, and expresses the amount of disorder in the measured laser points using a cost function based on the entropy of the 3D laser data. By optimising this quantity, we obtain the true calibration parameters for the system. We have quantified the accuracy of this algorithm by simulating a static environment from which we draw laser measurements with known calibration parameters. It was found that our calibration system converges to the true calibration values of the sensor. We also address the problem of robotic localisation, as a continuous problem, evaluating precisely the continuous trajectory that the robot has taken as well as the location of the robotic platform. Maps are constructed using the high accuracy data stream from the 3D laser, combining it with an odometry stream, to build high quality laser point cloud maps. The algorithm localises the robotic platform within these maps using a single 3D laser sensor. We vary our estimate of the vehicle's trajectory, treating the scans from the 3D laser and the location of the vehicle as continuous data streams, in a way that maximally aligns the 3D laser data and the map; this is achieved by optimising a cost function based on the Kernelised Rényi Distance. This procedure is typically computationally taxing; however, the computational complexity and computation time of the overall system have been reduced considerably using an efficient algorithm known as the Improved Fast Gauss Transform (IFGT), making the system viable even for large amounts of laser data. An additional speedup was achieved by calculating the Jacobian of the cost function, rearranging it to a form calculable using IFGT approximations. These efficiencies reduce the cost of evaluating the system to near real time. We evaluate the accuracy of our localisation system by comparing it to a DGPS stream as the best available source of ground truth. We show that our system performs more consistently than DGPS. This was especially prominent in regions where the line of sight to GPS satellites was obscured by trees. It was found that the accuracy of our system was comparable to that of the DGPS system.
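A minimal sketch of the entropy cost at the core of such a calibration, assuming a Gaussian kernel and an invented kernel width: a well-calibrated (crisp) point cloud yields lower Rényi quadratic entropy than a blurred one, so minimising the entropy over the calibration parameters sharpens the cloud. The quadratic-time pair sum is written out for clarity; the thesis makes this tractable with the Improved Fast Gauss Transform:

    import numpy as np

    def renyi_quadratic_entropy(points, sigma=0.05):
        # H2 = -log(mean Gaussian kernel over all point pairs); lower = crisper.
        d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
        return -np.log(np.exp(-d2 / (4 * sigma ** 2)).mean())

    rng = np.random.default_rng(0)
    wall = np.column_stack([rng.uniform(0, 1, 500), np.zeros(500)])  # crisp planar scan
    blurred = wall + rng.normal(0, 0.05, wall.shape)                 # miscalibrated scan
    print(renyi_quadratic_entropy(wall) < renyi_quadratic_entropy(blurred))  # True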
10

Toschi, Marco. "Towards Monocular Depth Estimation for Robot Guidance." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021.

Abstract:
Human visual perception is a powerful tool that lets us interact with the world, interpreting depth using both physiological and psychological cues. In the early days, machine vision was primarily inspired by physiological cues, guiding robots with bulky sensors based on focal length adjustments, pattern matching, and binocular disparity. In reality, however, we always get a certain degree of depth sensation from the monocular image reproduced on the retina, which is judged by our brain upon empirical grounds. With the advent of deep learning techniques, estimating depth from a monocular image has become a major research topic. Currently, it is still far from industrial use, as the estimated depth is valid only up to a scale factor, leaving us with relative depth information. We propose an algorithm to estimate the depth of a scene at the actual global scale, leveraging geometric constraints and state-of-the-art techniques in optical flow and depth estimation. We first compute the three-dimensional information of multiple similar scenes, triangulating multi-view images for which dense correspondences have been estimated by an optical flow estimation network. Then we train a monocular depth estimation network on the precomputed scenes to learn their similarities, like object sizes, and ignore their differences, like object arrangements. Experimental results suggest that our method is able to learn to estimate the metric depth of a novel similar scene, opening the possibility of performing robot guidance using an affordable, light and compact smartphone camera as a depth sensor.
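The metric anchor in a pipeline like this is classical two-view triangulation of dense correspondences. The sketch below shows generic linear (DLT) triangulation with invented intrinsics, poses and a test point, not the author's code:

    import numpy as np

    def triangulate(P1, P2, x1, x2):
        # Linear (DLT) triangulation of one correspondence, given two metric
        # 3x4 projection matrices and matched pixel coordinates (e.g. produced
        # by an optical-flow network).
        A = np.stack([x1[0] * P1[2] - P1[0],
                      x1[1] * P1[2] - P1[1],
                      x2[0] * P2[2] - P2[0],
                      x2[1] * P2[2] - P2[1]])
        X = np.linalg.svd(A)[2][-1]
        return X[:3] / X[3]  # 3D point at true (metric) scale

    K = np.array([[500., 0, 320], [0, 500., 240], [0, 0, 1]])
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])              # reference view
    P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0], [0]])])  # 10 cm baseline

    X_true = np.array([0.2, 0.1, 2.0])                             # point 2 m away
    x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
    x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]
    print(triangulate(P1, P2, x1, x2))  # ~ [0.2, 0.1, 2.0]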
11

Gillham, Michael David Anthony. "A non-holonomic, highly human-in-the-loop compatible, assistive mobile robotic platform guidance navigation and control strategy." Thesis, University of Kent, 2015. https://kar.kent.ac.uk/50525/.

Abstract:
The provision of assistive mobile robotics for empowering and providing independence to the infirm, disabled and elderly in society has been the subject of much research. The issue of providing navigation and control assistance to users, enabling them to drive their powered wheelchairs effectively, can be complex and wide-ranging; some users fatigue quickly and can find that they are unable to operate the controls safely, others may have brain injury resulting in periodic hand tremors, and quadriplegics may use a straw-like switch in their mouth to provide a digital control signal. Advances in autonomous robotics have led to the development of smart wheelchair systems which have attempted to address these issues; however the autonomous approach has, according to research, not been successful: users report that they want to be active drivers and not passengers. Recent methodologies have been to use collaborative or shared control, which aims to predict or anticipate the need for the system to take over control when some pre-decided threshold has been met, yet these approaches still take away control from the user. This removal of human supervision and control by an autonomous system makes the responsibility for accidents seriously problematic. This thesis introduces a new human-in-the-loop control structure with real-time assistive levels. One of these levels offers improved dynamic modelling, and three of these levels offer unique and novel real-time solutions for: collision avoidance, localisation and waypoint identification, and assistive trajectory generation. This architecture and these assistive functions always allow the user to remain fully in control of any motion of the powered wheelchair, as shown in a series of experiments.
12

Edwards, Barrett Bruce. "An Onboard Vision System for Unmanned Aerial Vehicle Guidance." BYU ScholarsArchive, 2010. https://scholarsarchive.byu.edu/etd/2381.

Abstract:
The viability of small Unmanned Aerial Vehicles (UAVs) as a stable platform for specific application use has been significantly advanced in recent years. Initial focus of lightweight UAV development was to create a craft capable of stable and controllable flight. This is largely a solved problem. Currently, the field has progressed to the point that unmanned aircraft can be carried in a backpack, launched by hand, weigh only a few pounds and be capable of navigating through unrestricted airspace. The most basic use of a UAV is to visually observe the environment and use that information to influence decision making. Previous attempts at using visual information to control a small UAV used an off-board approach where the video stream from an onboard camera was transmitted down to a ground station for processing and decision making. These attempts achieved limited results as the two-way transmission time introduced unacceptable amounts of latency into time-sensitive control algorithms. Onboard image processing offers a low-latency solution that will avoid the negative effects of two-way communication to a ground station. The first part of this thesis will show that onboard visual processing is capable of meeting the real-time control demands of an autonomous vehicle, which will also include the evaluation of potential onboard computing platforms. FPGA-based image processing will be shown to be the ideal technology for lightweight unmanned aircraft. The second part of this thesis will focus on the exact onboard vision system implementation for two proof-of-concept applications. The first application describes the use of machine vision algorithms to locate and track a target landing site for a UAV. GPS guidance was insufficient for this task. A vision system was utilized to localize the target site during approach and provide course correction updates to the UAV. The second application describes a feature detection and tracking sub-system that can be used in higher level application algorithms.
13

Saveriano, Matteo [Verfasser], Dongheui [Akademischer Betreuer] Lee, Alberto [Gutachter] Finzi, and Dongheui [Gutachter] Lee. "Robotic Tasks Acquisition via Human Guidance: Representation, Learning and Execution / Matteo Saveriano ; Gutachter: Alberto Finzi, Dongheui Lee ; Betreuer: Dongheui Lee." München : Universitätsbibliothek der TU München, 2017. http://d-nb.info/1150399392/34.

14

Miller, Michael E. "The development of an improved low cost machine vision system for robotic guidance and manipulation of randomly oriented, straight edged objects." Ohio : Ohio University, 1989. http://www.ohiolink.edu/etd/view.cgi?ohiou1182445639.

15

Bouchard, Amy. "Effect of haptic guidance and error amplification robotic training interventions on the immediate improvement of timing among individuals that had a stroke." Mémoire, Université de Sherbrooke, 2016. http://hdl.handle.net/11143/9543.

Abstract:
Many individuals that had a stroke have motor impairments such as timing deficits that hinder their ability to complete daily activities like getting dressed. Robotic rehabilitation is an increasingly popular therapeutic avenue to improve motor recovery among this population. Yet, most studies have focused on improving the spatial aspect of movement (e.g. reaching), and not the temporal one (e.g. timing). Hence, the main aim of this study was to compare two types of robotic rehabilitation on the immediate improvement of timing accuracy: haptic guidance (HG), which consists of guiding the person to make the correct movement, and thus decreasing his or her movement errors, and error amplification (EA), which consists of increasing the person's movement errors. The secondary objective consisted of exploring whether the side of the stroke lesion had an effect on timing accuracy following HG and EA training. Thirty-four persons that had a stroke (average age 67 ± 7 years) participated in a single training session of a timing-based task (simulated pinball-like task), where they had to activate a robot at the correct moment to successfully hit targets that were presented at random on a computer screen. Participants were randomly divided into two groups, receiving either HG or EA. During the same session, a baseline phase and a retention phase were given before and after each training, and these phases were compared in order to evaluate and compare the immediate impact of HG and EA on movement timing accuracy. The results showed that HG helped improve immediate timing accuracy (p=0.03), but not EA (p=0.45). After comparing both trainings, HG was revealed to be superior to EA at improving timing (p=0.04). Furthermore, a significant correlation was found between the side of the stroke lesion and the change in timing accuracy following EA (r[subscript pb]=0.7, p=0.001), but not HG (r[subscript pb]=0.18, p=0.24). In other words, a deterioration in timing accuracy was found for participants with a lesion in the left hemisphere that had trained with EA. On the other hand, for participants having a right-sided stroke lesion, an improvement in timing accuracy was noted following EA. In sum, it seems that HG helps improve immediate timing accuracy for individuals that had a stroke. Still, the side of the stroke lesion seems to play a part in the participants' response to training. This remains to be further explored, in addition to the impact of providing more training sessions in order to assess any long-term benefits of HG or EA.
16

Macknojia, Rizwan. "Design and Calibration of a Network of RGB-D Sensors for Robotic Applications over Large Workspaces." Thèse, Université d'Ottawa / University of Ottawa, 2013. http://hdl.handle.net/10393/23976.

Abstract:
This thesis presents an approach for configuring and calibrating a network of RGB-D sensors used to guide a robotic arm to interact with objects that get rapidly modeled in 3D. The system is based on Microsoft Kinect sensors for 3D data acquisition. The work presented here also details an analysis and experimental study of the Kinect's depth sensor capabilities and performance. The study comprises examination of the resolution, quantization error, and random distribution of depth data. In addition, the effects of the color and reflectance characteristics of an object are also analyzed. The study examines two versions of the Kinect sensor, one dedicated to operation with the Xbox 360 video game console and the more recent Microsoft Kinect for Windows version. The study of the Kinect sensor is extended to the design of a rapid acquisition system dedicated to large workspaces through the linkage of multiple Kinect units to collect 3D data over a large object, such as an automotive vehicle. A customized calibration method for this large workspace is proposed which takes advantage of the rapid 3D measurement technology embedded in the Kinect sensor and provides registration accuracy between local sections of point clouds that is within the range of the depth measurement accuracy permitted by the Kinect technology. The method is developed to calibrate all Kinect units with respect to a reference Kinect. The internal calibration of the sensor between the color and depth measurements is also performed to optimize the alignment between the modalities. The calibration of the 3D vision system is further extended to formally estimate its configuration with respect to the base of a manipulator robot, therefore allowing for seamless integration between the proposed vision platform and the kinematic control of the robot. The resulting vision-robotic system thus rests on a comprehensive calibration of the reference Kinect with respect to the robot, which can then be used to interact under visual guidance with large objects, such as vehicles, positioned within the significantly enlarged field of view created by the network of RGB-D sensors. The proposed design and calibration method is validated in a real-world scenario where five Kinect sensors operate collaboratively to rapidly and accurately reconstruct 180-degree coverage of the surface shape of various types of vehicles from a set of individual acquisitions performed in a semi-controlled environment, that is, an underground parking garage. The vehicle geometrical properties generated from the acquired 3D data are compared with the original dimensions of the vehicle.
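The backbone of such a network calibration is the chaining of homogeneous transforms: each Kinect is expressed in the reference Kinect's frame, and the reference Kinect in the robot base frame. A minimal sketch with invented calibration results (rotations kept as identity for brevity):

    import numpy as np

    def make_T(R, t):
        # Build a 4x4 homogeneous transform from rotation R and translation t.
        T = np.eye(4)
        T[:3, :3], T[:3, 3] = R, t
        return T

    # Invented calibration results for illustration only.
    T_base_ref = make_T(np.eye(3), [1.0, 0.0, 0.5])  # reference Kinect in robot base frame
    T_ref_k2 = make_T(np.eye(3), [0.0, 2.0, 0.0])    # Kinect 2 in reference-Kinect frame

    def to_robot_frame(p_sensor, T_base_ref, T_ref_sensor):
        # Map a 3D point measured by one Kinect into the robot base frame.
        return (T_base_ref @ T_ref_sensor @ np.append(p_sensor, 1.0))[:3]

    print(to_robot_frame(np.array([0.3, 0.1, 2.5]), T_base_ref, T_ref_k2))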
17

Shen, Jun. "Framework for ultrasonography-based augmented reality in robotic surgery : application to transoral surgery and gastrointestinal surgery." Thesis, Rennes 1, 2019. http://www.theses.fr/2019REN1S078.

Abstract:
The medical context of this thesis is transoral robotic surgery for base of tongue cancer and robot-assisted laparoscopic surgery for low-rectal cancer. One of the main challenges for surgeons performing these two procedures is to identify the tumor resection margins accurately, because tumors are often concealed in the base of the tongue or the rectal wall, and efficient intraoperative guidance systems are lacking. However, ultrasonography is widely used to image soft-tissue tumors, which motivates our proposition of an augmented reality framework based on intraoperative ultrasonography images for tumor resection guidance. The framework, proposed with clinical partners, is designed to fit the surgical workflow of robot-assisted surgery for treating base of tongue cancer and low-rectal cancer. For this purpose, we developed a fast and accurate 3D ultrasound probe calibration method to track the probe and facilitate its intraoperative use. Moreover, we evaluated the performance of the proposed framework in augmenting an intraoperative endoscopic camera with ultrasound information, showing less than 1 mm of error. Furthermore, we designed experimental protocols using a silicone rectum phantom and an ex-vivo lamb tongue that simulate the integration of the implemented framework into the current surgical workflow. The experimental results show that, with the augmented endoscopic views provided by the proposed framework, a surgeon is able to accurately identify the resection margins of the simulated tumors in these phantoms.
18

Melikian, Simon Haig. "Visual Search for Objects with Straight Lines." Case Western Reserve University School of Graduate Studies / OhioLINK, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=case1134003738.

19

Gu, Lifang. "Visual guidance of robot motion." University of Western Australia. Dept. of Computer Science, 1996. http://theses.library.uwa.edu.au/adt-WU2003.0004.

Abstract:
Future robots are expected to cooperate with humans in daily activities. Efficient cooperation requires new techniques for transferring human skills to robots. This thesis presents an approach by which a robot can extract and replicate a motion by observing how a human instructor conducts it. In this way, the robot can be taught without any explicit instructions and the human instructor does not need any expertise in robot programming. A system has been implemented which consists of two main parts. The first part is data acquisition and motion extraction. Vision is the most important sense with which a human can interact with the surrounding world, so two cameras are used to capture the image sequences of a moving rigid object. In order to compress the incoming images from the cameras and extract 3D motion information of the rigid object, feature detection and tracking are applied to the images. Corners are chosen as the main features because they are more stable under perspective projection and during motion. A reliable corner detector is implemented and a new corner tracking algorithm is proposed based on smooth motion constraints. With both spatial and temporal constraints, 3D trajectories of a set of points on the object can be obtained and the 3D motion parameters of the object can be reliably calculated by the algorithm proposed in this thesis. Once the 3D motion parameters are available through the vision system, the robot should be programmed to replicate this motion. Since we are interested in smooth motion and the similarity between two motions, the task of the second part of our system is therefore to extract motion characteristics and to transfer these to the robot. It can be proven that the characteristics of a parametric cubic B-spline curve are completely determined by its control points, which can be obtained by the least-squares fitting method, given some data points on the curve. Therefore a parametric cubic B-spline curve is fitted to the motion data and its control points are calculated. Given the robot configuration, the obtained control points can be scaled, translated, and rotated so that a motion trajectory can be generated for the robot to replicate the given motion in its own workspace with the required smoothness and similarity, although the absolute motion trajectories of the robot and the instructor can be different. All the above modules have been integrated, and results of an experiment with the whole system show that the approach proposed in this thesis can extract motion characteristics and transfer these to a robot. A robot arm has successfully replicated a human arm movement with similar shape characteristics by our approach. In conclusion, such a system collects human skills and intelligence through vision and transfers them to the robot. Therefore, a robot with such a system can interact with its environment and learn by observation.
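A minimal sketch of this fit-then-retarget idea on synthetic data (the real system fits vision-derived trajectories; the smoothing factor and retargeting transform below are arbitrary), using SciPy's parametric B-spline routines:

    import numpy as np
    from scipy.interpolate import splprep, splev

    # Noisy 3D trajectory of the observed motion (synthetic stand-in).
    s = np.linspace(0, np.pi, 40)
    traj = np.stack([np.cos(s), np.sin(s), 0.1 * s]) + np.random.normal(0, 0.01, (3, 40))

    # Least-squares parametric cubic B-spline: the control points in tck[1]
    # completely determine the curve's shape characteristics.
    tck, u = splprep(traj, k=3, s=0.01)
    knots, ctrl, degree = tck

    # Retarget the motion: scale, rotate and translate the control points so
    # the robot replicates the shape in its own workspace.
    Rz = np.array([[0., -1, 0], [1, 0, 0], [0, 0, 1]])  # 90-degree yaw
    ctrl_robot = 0.5 * (Rz @ np.stack(ctrl)) + np.array([[0.4], [0.2], [0.3]])
    robot_traj = splev(np.linspace(0, 1, 100), (knots, list(ctrl_robot), degree))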
20

Teoh, Pek Loo. "A study of single laser interferometry-based sensing and measuring technique in robot manipulator control and guidance. Volume 1." Monash University, Dept. of Mechanical Engineering, 2003. http://arrow.monash.edu.au/hdl/1959.1/9565.

21

Taralle, Florent. "Guidage Gestuel pour des Robots Mobiles." Thesis, Paris Sciences et Lettres (ComUE), 2016. http://www.theses.fr/2016PSLEM096/document.

Abstract:
Using a visuo-tactile interface may be restrictive when mobility and situation awareness are required. This is particularly problematic in hostile environments, such as commanding a drone on a battlefield. In the work presented here, we hypothesize that gesture is a less restrictive modality, as it requires neither manipulating nor looking at a device. We therefore followed a user-centered approach to confirm the practical advantages and technical feasibility of gestural interaction for drones. First, the theoretical study of gestures allowed us to design an interaction model. It consists of activating commands by executing standardized semaphoric gestures. Sound messages inform the user, and a confirmation mechanism secures the interaction. Second, a gestural vocabulary was built. To do so, a methodology was proposed and used: end users suggested and then elected the most appropriate gestures. Then, the interaction model and the selected gestures were evaluated. A laboratory study showed that they are both easy to learn and use, and that they help situation awareness. An interactive system was then developed. Its architecture was deduced from our interaction model, and a gesture recognizer was built. Unlike usual methods, the recognizer we propose is based on a formal description of gestures using regular expressions. Finally, we conducted a user test of the proposed system. The evaluation by military end-users confirmed our hypothesis that gesture is an appropriate modality for a visually and physically less constraining interaction.
22

Clem, Garrett Stuart. "An Optimized Circulating Vector Field Obstacle Avoidance Guidance for Unmanned Aerial Vehicles." Ohio University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1530874969780028.

23

Brattain, Laura. "Enhanced Ultrasound Visualization for Procedure Guidance." Thesis, Harvard University, 2014. http://dissertations.umi.com/gsas.harvard:11649.

Abstract:
Intra-cardiac procedures often involve fast-moving anatomic structures with large spatial extent and high geometrical complexity. Real-time visualization of the moving structures and instrument-tissue contact is crucial to the success of these procedures. Real-time 3D ultrasound is a promising modality for procedure guidance as it offers improved spatial orientation information relative to 2D ultrasound. Imaging rates at 30 fps enable good visualization of instrument-tissue interactions, far faster than the volumetric imaging alternatives (MR/CT). Unlike fluoroscopy, 3D ultrasound also allows better contrast of soft tissues, and avoids the use of ionizing radiation.
Engineering and Applied Sciences
24

Pretlove, John. "Stereoscopic eye-in-hand active machine vision for real-time adaptive robot arm guidance." Thesis, University of Surrey, 1993. http://epubs.surrey.ac.uk/843230/.

Abstract:
This thesis describes the design, development and implementation of a robot mounted active stereo vision system for adaptive robot arm guidance. This provides a very flexible and intelligent system that is able to react to uncertainty in a manufacturing environment. It is capable of tracking and determining the 3D position of an object so that the robot can move towards, and intercept, it. Such a system has particular applications in remotely controlled robot arms, typically working in hostile environments. The stereo vision system is designed on mechatronic principles and is modular, light-weight and uses state-of-the-art dc servo-motor technology. Based on visual information, it controls camera vergence and focus independently while making use of the flexibility of the robot for positioning. Calibration and modelling techniques have been developed to determine the geometry of the stereo vision system so that the 3D position of objects can be estimated from the 2D camera information. 3D position estimates are obtained by stereo triangulation. A method for obtaining a quantitative measure of the confidence of the 3D position estimate is presented which is a useful built-in error checking mechanism to reject false or poor 3D matches. A predictive gaze controller has been incorporated into the stereo head control system. This anticipates the relative 3D motion of the object to alleviate the effect of computational delays and ensures a smooth trajectory. Validation experiments have been undertaken with a Puma 562 industrial robot to show the functional integration of the camera system with the robot controller. The vision system is capable of tracking moving objects and the information this provides is used to update command information to the controller. The vision system has been shown to be in full control of the robot during a tracking and intercept duty cycle.
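One standard way to realise stereo triangulation with a built-in confidence score, consistent with the description above though not necessarily the thesis' exact formulation, is the closest-approach midpoint of the two viewing rays, using the ray-to-ray miss distance to reject poor matches:

    import numpy as np

    def ray_midpoint(o1, d1, o2, d2):
        # Closest-approach triangulation of two viewing rays: returns the
        # midpoint estimate and the miss distance between the rays, which
        # serves as a confidence measure (large gaps flag false matches).
        d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
        w = o1 - o2
        b, d, e = d1 @ d2, d1 @ w, d2 @ w
        denom = 1.0 - b * b  # ~0 when the rays are parallel
        s = (b * e - d) / denom
        t = (e - b * d) / denom
        p1, p2 = o1 + s * d1, o2 + t * d2
        return (p1 + p2) / 2, np.linalg.norm(p1 - p2)

    o_left, o_right = np.array([0., 0, 0]), np.array([0.3, 0, 0])  # invented 30 cm baseline
    target = np.array([0.1, 0.2, 1.5])
    point, gap = ray_midpoint(o_left, target - o_left, o_right, target - o_right)
    print(point, gap)  # gap ~ 0 -> high-confidence match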
25

Khambhaita, Harmish. "Human-aware space sharing and navigation for an interactive robot." Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30399.

Abstract:
The methods of robotic movement planning have grown at an accelerated pace in recent years. The emphasis has mainly been on making robots more efficient, safer and faster to react to unpredictable situations. As a result, we are witnessing more and more service robots introduced into our everyday lives, especially in public places such as museums, shopping malls and airports. While a mobile service robot moves in a human environment, its demeanor leaves an innate impression on people. We do not see such robots as mere machines but as social agents, and we expect them to behave humanly by following societal norms and rules. This has created new challenges and opened new research avenues for designing robot control algorithms that deliver human-acceptable, legible and proactive robot behaviors. This thesis proposes an optimization-based cooperative method for trajectory planning and navigation with in-built social constraints that keep robot motions safe, human-aware and predictable. The robot trajectory is dynamically and continuously adjusted to satisfy these social constraints. To do so, we treat the robot trajectory as an elastic band (a mathematical construct representing the robot path as a series of poses and the time-differences between those poses) which can be deformed (both in space and time) by the optimization process to respect the given constraints. Moreover, we also predict plausible human trajectories in the same operating area by treating human paths as elastic bands as well. This scheme allows us to optimize the robot trajectories not only for the current moment but for the entire interaction that happens when humans and the robot cross each other's paths. We carried out a set of experiments with canonical human-robot interactive situations that happen in our everyday lives, such as crossing a hallway, passing through a door and intersecting paths in wide open spaces. The proposed cooperative planning method compares favorably against other state-of-the-art human-aware navigation planning schemes. We have augmented the robot navigation behavior with synchronized and responsive movements of the robot head, making the robot look where it is going and occasionally divert its gaze towards nearby people to acknowledge that it will avoid any possible collision with them. At any given moment the robot weighs multiple criteria according to the social context and decides where it should turn its gaze. Through an online user study we have shown that such a gazing mechanism effectively complements the navigation behavior and improves the legibility of the robot's actions. Finally, we have integrated our navigation scheme with a broader supervision system which can jointly generate normative robot behaviors, such as approaching a person and adapting the robot speed to the group of people whom the robot guides in airports or museums.
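A minimal sketch of the elastic-band deformation at the heart of this approach, on invented 2D data: interior waypoints are pulled towards smoothness while being pushed away from the time-aligned predicted human pose (the actual planner also deforms the band in time and handles many more constraints):

    import numpy as np

    def deform_elastic_band(robot, human, min_dist=0.8, iters=200, step=0.05):
        # robot, human: N x 2 waypoint arrays, index-aligned in time.
        band = robot.copy()
        for _ in range(iters):
            grad = np.zeros_like(band)
            # Smoothness: pull each pose towards the midpoint of its neighbours.
            grad[1:-1] += band[1:-1] - (band[:-2] + band[2:]) / 2
            # Social constraint: push away from the human predicted at the same time.
            diff = band - human
            dist = np.linalg.norm(diff, axis=1, keepdims=True)
            push = (min_dist - dist) * diff / (dist + 1e-9)
            grad -= np.where(dist < min_dist, push, 0.0)
            grad[0] = grad[-1] = 0.0  # start and goal stay fixed
            band -= step * grad
        return band

    t = np.linspace(0, 1, 20)[:, None]
    robot_band = np.hstack([6.0 * t, np.zeros_like(t)])             # straight hallway path
    human_band = np.hstack([6.0 - 6.0 * t, 0.2 * np.ones_like(t)])  # human walking the other way
    print(deform_elastic_band(robot_band, human_band)[8:12])        # band bows away near the crossing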
26

Liu, Yue. "Multi-scale 3-D map building and representation for robotics applications and vehicle guidance." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq25437.pdf.

27

Mark, Wannes van der. "Stereo and colour vision techniques for autonomous vehicle guidance." [S.l : Amsterdam : s.n.] ; Universiteit van Amsterdam [Host], 2007. http://dare.uva.nl/document/47628.

28

Pannu, Rabindra. "Path Traversal Around Obstacles by a Robot using Terrain Marks for Guidance." University of Cincinnati / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1312571550.

29

Zajonz, Dirk Jörg. "Klinische Erfahrungen und Limitationen von Biopsien in verschiedenen Körperregionen mit einem robotischen Assistenzsystem in einem geschlossenen Magnetresonanztomographen." Doctoral thesis, Universitätsbibliothek Leipzig, 2010. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-63608.

Abstract:
The aim of this work is to present the clinical setup and workflow of a robotic assistance system for image-guided interventions in a conventional magnetic resonance imaging (MRI) scanner, and to assess its accuracy and the clinical experience gained from percutaneous biopsies in various body regions. Materials and methods: The MR-compatible, servo-pneumatic robotic assistance system can be moved, together with the patient, into the 60 cm gantry of a standard MR scanner. The accuracy of the system was determined by needle punctures (n = 25) in a phantom model. Percutaneous diagnostic biopsies were performed in six patients. Results: For intervention depths between 29 and 95 mm, a 3D accuracy of 2.2 +/- 0.7 mm (range 0.9-3.8 mm) was determined. Patients with a BMI of up to ≈30 kg/m2 could be punctured with the system. The clinical workflow steps are illustrated through case examples. The mean intervention time was 44 minutes (range 36-68 minutes). Conclusion: Biopsies of various body regions can be performed successfully and safely with the robotic assistance system in a closed MRI scanner. The accuracy of the system is comparable to that of other assistance systems in the literature and meets clinical requirements. Shorter intervention times are possible by optimizing the individual workflow steps.
30

Ganji, Vinay G. "Exploring the application of haptic feedback guidance for port crane modernization." Thesis, California State University, Long Beach, 2013. http://pqdtopen.proquest.com/#viewpdf?dispub=1522629.

Abstract:

In this thesis, the author presents a feasibility study of methods to modernize port crane systems with the application of haptic (force) feedback assistive technology to assist the crane operator in the container handling process. The assistive technology provides motion guidance to the operator that could help increase the safety and productivity of the system. Haptic feedback technology is successful in applications such as gaming and simulators, and has proven quite efficient at alerting the user or the operator. This study involves the implementation of haptic feedback as an assistive mechanism through a force-feedback joystick used by the operator to control the motion of a scaled port crane system. The haptic feedback system has been integrated to work with the visual feedback system as part of this study. The visual feedback system shares the information needed to trigger the haptic (force) feedback display on the joystick. The force feedback displayed on the joystick has been modeled on Hooke's law of spring force. The force feedback and the visual feedback form a motion guidance system. The integrated system has been implemented and tested on a lab-scale testbed of a gantry crane. For experimental purposes, this concept has been tested on a PC-based Windows platform and also on a portable single-board Linux-based computer, called the Beagleboard platform. The results from test runs on both platforms (PC and Beagleboard using an ARM processor) are reported in this study.
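The Hooke's-law model reduces to a saturated spring force on each joystick axis. A sketch with illustrative constants (the stiffness and force limit below are not the testbed's actual gains):

    def spring_force(deflection, stiffness=2.5, max_force=3.3):
        # F = -k * x: the further the operator pushes towards a flagged
        # hazard, the harder the joystick pushes back, clipped to the
        # device's force limit.
        force = -stiffness * deflection
        return max(-max_force, min(max_force, force))

    # Visual feedback flags a hazard to the right; rightward deflection is resisted.
    for x in (0.0, 0.2, 0.5, 1.0):
        print(f"deflection {x:+.1f} -> force {spring_force(x):+.2f} N")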

31

Samant, Chinmay. "Ultrasound laparoscopic guidance for minimally invasive surgery, biopsy, and ablation procedures." Thesis, Strasbourg, 2019. http://www.theses.fr/2019STRAD054.

Abstract:
Minimally invasive image-guided laparoscopic surgery allows shorter hospital stays for the patient, reducing post-operative trauma and shortening healing time. With the recent advances in imaging techniques, surgeons can efficiently and confidently plan a surgery by using different image modalities such as CT/MRI scans, ultrasound images, etc. Real-time image fusion techniques can overlay images from different modalities to provide a comprehensive view to the surgeon. An important aspect of real-time fusion is that the laparoscopic instrument is tracked in real time using sensors. In this thesis, we present a detailed analysis of such tracking technologies while providing a novel sensor setup for ultrasound laparoscope image tracking. We present a kinematic chain for the sensor setup and provide a solution for noise reduction in the sensor data using a rotation averaging technique. Hand-eye calibration is also a fundamental part of hybrid tracking systems, and we present a detailed review of this technique. We also present a deterministic, robust and accurate method for solving the hand-eye calibration problem, even for large amounts of outliers and high levels of measurement noise. The proposed method is based on a reformulation of a rank-constrained semi-definite programming problem, allowing robustness to be enforced via an iteratively re-weighted optimization approach.
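The abstract does not specify which rotation averaging variant is used; a common choice for suppressing noise in streamed orientation data is the chordal L2 mean, sketched here on invented noisy measurements:

    import numpy as np

    def average_rotations(rotations):
        # Chordal L2 mean: average the matrices, then project back onto
        # SO(3) with an SVD (fixing a possible reflection).
        U, _, Vt = np.linalg.svd(np.mean(rotations, axis=0))
        if np.linalg.det(U @ Vt) < 0:
            U[:, -1] *= -1
        return U @ Vt

    def rot_z(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

    noisy = [rot_z(0.5 + np.random.normal(0, 0.02)) for _ in range(50)]
    print(average_rotations(noisy))  # close to rot_z(0.5)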
32

Lapouge, Guillaume. "Guidage robotisé d’aiguilles flexibles sous échographie 3D." Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALS014.

Abstract:
Robotic needle steering refers to the robotic guidance of a flexible needle as it is inserted into tissue. Increased gesture precision, reduced tissue trauma and the avoidance of sensitive anatomical regions are among the expected advantages of this approach. However, despite sustained research effort, it is not yet a clinical reality. This thesis is a step towards that objective, contributing at multiple levels to robust steering under intraoperative 3D ultrasound feedback. First, a needle localization algorithm is detailed. It estimates the mechanical properties of the tissues the needle passes through and adapts to the changing visibility of the needle in the acquired 3D ultrasound volumes. Secondly, an efficient method for tracking the motion of the targeted anatomical zone is presented. Thirdly, an adaptive path-planning algorithm accounts for measurement noise, disturbances and possible tissue heterogeneity, using a priori knowledge of the traversed tissues to predict future needle deflection. Experimental validation supports the performance of the proposed approaches, with average targeting errors of 1 mm, 1.5 +/- 0.9 mm and 1.7 +/- 0.8 mm for steering in homogeneous phantoms, heterogeneous phantoms and biological tissue respectively. Finally, trials on an anatomical subject provided prospects for improving the proposed methods towards the ultimate goal of clinical application. Although some problems of robotic flexible needle steering remain open, the solution proposed in this thesis stands out for its robustness, its compatibility with 3D ultrasound imaging and its patient-specific approach.
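Bevel-tip flexible needles are commonly modeled in the steering literature as following circular arcs whose curvature is set by the bevel, with base rotation reorienting the arc plane; planners of the kind described predict deflection from such a model. A minimal planar sketch under that standard assumption; the curvature value and function names are illustrative, not taken from the thesis:

```python
import numpy as np

def integrate_needle(kappa, steps, step_len, flip_at=None):
    """Integrate a planar bevel-tip needle path: constant-curvature
    arcs, with an optional 180-degree bevel flip that mirrors the
    curvature (the classic steering primitive)."""
    x, y, theta = 0.0, 0.0, 0.0            # tip position and heading
    path = [(x, y)]
    for i in range(steps):
        k = -kappa if (flip_at is not None and i >= flip_at) else kappa
        theta += k * step_len              # heading follows the arc
        x += step_len * np.cos(theta)
        y += step_len * np.sin(theta)
        path.append((x, y))
    return np.array(path)

# 80 mm insertion in 1 mm steps, bevel flipped halfway through.
path = integrate_needle(kappa=0.005, steps=80, step_len=1.0, flip_at=40)
print("final tip position (mm):", np.round(path[-1], 2))
```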
APA, Harvard, Vancouver, ISO, and other styles
33

Aldrin, Martin. "Implementering av ett inbyggt system för automatisk styrning av en robotbil." Thesis, Växjö University, School of Mathematics and Systems Engineering, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:vxu:diva-1351.

Full text
Abstract:

This thesis describes a project for the bachelor degree in electrical engineering at Växjö University. The purpose is to construct a guidance system for a robot car: a program that prevents the car from colliding with its surroundings when moving without external control. The robot can also be controlled from a computer through a graphical interface implemented in Labview, and the necessary hardware for steering and communication has been constructed.

Three programming languages were needed to meet the requirements placed on the task: C, Perl and Labview. The microprocessor in the robot car is programmed in C and makes the car fully autonomous, relying only on signals from its distance sensors. Control from the computer is implemented in Labview, and a debugging tool was written in Perl, because it proved difficult to keep track of all the values and calculations involved in the car's automatic steering.
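The collision-avoidance behaviour described, autonomous driving that depends only on distance-sensor readings, amounts to a simple reactive loop. The thesis implements it in C on the car's microprocessor; below is a minimal Python sketch of the same threshold-based idea, with simulated sensor values standing in for real hardware and an assumed safety threshold:

```python
import random

SAFE_DISTANCE_CM = 30  # assumed threshold; the real value is hardware-specific

def read_sensors():
    """Stand-in for the car's front/left/right distance sensors (cm)."""
    return {side: random.uniform(5, 100) for side in ("front", "left", "right")}

def steer(sensors):
    """Reactive rule: drive forward unless blocked, then turn
    towards the side with more free space."""
    if sensors["front"] > SAFE_DISTANCE_CM:
        return "forward"
    return "turn_left" if sensors["left"] > sensors["right"] else "turn_right"

for _ in range(5):
    s = read_sensors()
    print({k: round(v) for k, v in s.items()}, "->", steer(s))
```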

APA, Harvard, Vancouver, ISO, and other styles
34

Stein, Procópio Silveira. "Framework for visual guidance of an autonomous robot using learning." Master's thesis, Universidade de Aveiro, 2009. http://hdl.handle.net/10773/2495.

Full text
Abstract:
Master's in Industrial Automation Engineering
This work describes the research and development of a learning infrastructure for mobile robot driving. The method uses artificial neural networks to compute the steering direction a robot should take to stay inside a track. The network "learns" to compute a direction from examples given by human drivers, replicating and sometimes even improving human-like behaviour. A learning approach can overcome some limitations of the classical algorithms used to compute robot steering. Regarding processing speed, artificial neural networks are very fast, which makes them well suited to real-time navigation. They can also pick up information that went undetected by humans and therefore could not be coded into classical programs. The implementation of this new form of interaction between humans and robots, which are simultaneously "teaching" and "learning" from each other, is also emphasized in this work. The platform used for this research is one of the robots of the Project Atlas, designed as an autonomous robot to participate in the Autonomous Driving Competition held annually as part of the Portuguese Robotics Open. To make this robot a robust test platform, several revisions and improvements were carried out at the mechanical, electronic and software levels, the latter being particularly important as it establishes a new framework for group and modular code development.
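The learning scheme described, a network that maps perception to a steering command from human demonstrations, is essentially behavioral cloning. A toy sketch of the idea with a tiny two-layer network trained by gradient descent on synthetic (feature, steering) pairs; the feature encoding and dimensions are illustrative, not those of the Atlas robot:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic demonstrations: 5 lane-offset features -> steering angle (rad).
X = rng.uniform(-1, 1, (200, 5))
true_w = np.array([0.8, 0.4, 0.0, -0.4, -0.8])
y = np.tanh(X @ true_w)                    # pretend human steering commands

# Tiny MLP (5 -> 8 -> 1, tanh) trained with plain gradient descent.
W1 = rng.standard_normal((5, 8)) * 0.5
W2 = rng.standard_normal((8, 1)) * 0.5
lr = 0.05
for epoch in range(500):
    h = np.tanh(X @ W1)                    # hidden layer
    pred = np.tanh(h @ W2)[:, 0]           # steering prediction
    err = pred - y
    # Backpropagation of the mean-squared error.
    g_pred = (2 * err / len(y)) * (1 - pred**2)
    W2 -= lr * h.T @ g_pred[:, None]
    g_h = g_pred[:, None] * W2.T * (1 - h**2)
    W1 -= lr * X.T @ g_h
print("final MSE:", round(float(np.mean(err**2)), 5))
```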
APA, Harvard, Vancouver, ISO, and other styles
35

Teimoori, Sangani Hamid Electrical Engineering &amp Telecommunications Faculty of Engineering UNSW. "Topics in navigation and guidance of wheeled robots." Publisher:University of New South Wales. Electrical Engineering & Telecommunications, 2009. http://handle.unsw.edu.au/1959.4/43709.

Full text
Abstract:
Navigation and guidance of mobile robots towards stationary or maneuvering objects (targets) is one of the most important areas of robotics and has attracted a great deal of attention in recent decades. However, most existing methods assume that both the line-of-sight angle (bearing) and the relative distance (range) are available to the navigation and guidance algorithms. There is also a relatively large body of research on navigation and guidance with bearings-only measurements. In contrast, only a few results on navigation and guidance towards an unknown target using range-only measurements have been published. Problems of navigation, guidance, location estimation and target tracking based on range-only measurements often arise in new wireless-network applications, and recent advances in these applications allow inexpensive transponders and receivers to provide range measurements in dynamic and noisy environments without requiring line of sight. To take advantage of these sensors, algorithms must be developed for range-only navigation. The main part of this thesis is concerned with real-time navigation and guidance of Wheeled Mobile Robots (WMRs) towards an unknown stationary or moving target using range-only measurements; the range can be estimated from signal strength using robust extended Kalman filtering. Several related navigation and guidance algorithms, termed Equiangular Navigation and Guidance (ENG) laws, are proposed, with mathematically rigorous proofs of their convergence and stability. An experimental investigation into the use of range data for WMR navigation is documented, with results and discussion of the performance of the proposed guidance strategies, in which a wheeled robot successfully approaches a stationary target or follows a maneuvering one. In order to navigate safely and operate reliably in populated environments, ENG is then modified into Augmented-ENG (AENG), which enables the robot to approach a stationary target or follow an unpredictable maneuvering object in an unknown environment while keeping a safe distance from the target and preserving a safety margin from obstacles. Furthermore, we propose and experimentally investigate a new biologically inspired method for local obstacle avoidance, with a mathematically rigorous proof of the idea: to avoid collision and bypass en-route obstacles, the angle between the robot's instantaneous direction of motion and a reference point on the obstacle surface is kept constant. This idea is combined with the ENG law, leading to reliable and fast long-range navigation. The performance of both the navigation strategy and the local obstacle avoidance technique is confirmed by computer simulations and several experiments with ActivMedia Pioneer 3-DX wheeled robots. The second part of the thesis investigates further challenging problems in wheeled robot navigation. We first address bearings-only guidance of an autonomous vehicle following a moving target whose minimum turning radius is smaller than the follower's, and propose a simple and constructive navigation law. In line with the growing research on decentralized control laws for groups of mobile autonomous robots, we then consider decentralized navigation of networks of WMRs with limited communication and decentralized stabilization of WMR formations. New control laws are presented, and simulation results illustrate them and their applications.
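The abstract notes that range can be estimated from signal strength and then filtered. A common way to do this (a generic sketch, not necessarily the thesis's exact model) is the log-distance path-loss model, here smoothed by a scalar Kalman filter; the path-loss constants are assumptions:

```python
import numpy as np

P0, n = -40.0, 2.0   # assumed RSSI at 1 m (dBm) and path-loss exponent

def rssi_to_range(rssi_dbm):
    """Invert the log-distance path-loss model:
    RSSI = P0 - 10 n log10(r)  =>  r = 10**((P0 - RSSI) / (10 n))."""
    return 10 ** ((P0 - rssi_dbm) / (10 * n))

# Noisy RSSI readings from a transponder 5 m away ...
rng = np.random.default_rng(2)
true_rssi = P0 - 10 * n * np.log10(5.0)
readings = true_rssi + 2.0 * rng.standard_normal(50)

# ... smoothed by a scalar Kalman filter on the range itself.
r_est, p = 1.0, 1.0          # state estimate and its variance
q, r_meas = 0.01, 1.0        # process and measurement noise variances
for z in map(rssi_to_range, readings):
    p += q                               # predict (static target)
    k = p / (p + r_meas)                 # Kalman gain
    r_est += k * (z - r_est)             # update
    p *= (1 - k)
print("estimated range (m):", round(r_est, 2))
```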
APA, Harvard, Vancouver, ISO, and other styles
36

Keepence, B. S. "Navigation of autonomous mobile robots." Thesis, Cardiff University, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.304921.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Arthur, Richard B. "Vision-Based Human Directed Robot Guidance." Diss., CLICK HERE for online access, 2004. http://contentdm.lib.byu.edu/ETD/image/etd564.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Patel, Niravkumar Amrutlal. "Towards Closed-loop, Robot Assisted Percutaneous Interventions under MRI Guidance." Digital WPI, 2017. https://digitalcommons.wpi.edu/etd-dissertations/130.

Full text
Abstract:
Image-guided therapy under MRI guidance has been a focused research area over the past decade, during which various MRI-guided robotic devices have been developed and used clinically for percutaneous interventions such as prostate biopsy, brachytherapy and tissue ablation. Though MRI provides better soft-tissue contrast than computed tomography and ultrasound, it poses challenges such as constrained space, less ergonomic patient access and limited material choices due to its high magnetic field. Even with advancements in MRI-compatible actuation methods and the robotic devices using them, most MRI-guided interventions are still open-loop in nature and rely on preoperative or intraoperative images. In this thesis, an intraoperative MRI-guided robotic system for prostate biopsy is presented, comprising an MRI-compatible 4-DOF robotic manipulator, a robot controller and a control application with a Clinical User Interface (CUI), together with surgical planning applications (3DSlicer and RadVision). The system uses intraoperative images, acquired after each full or partial needle insertion, for needle tip localization. It was approved by the Institutional Review Board at Brigham and Women's Hospital (BWH) and has been used in 30 patient trials. The successful translation of a system using intraoperative MR images motivated the development of a system architecture for closed-loop, real-time MRI-guided percutaneous interventions. Robot-assisted, closed-loop intervention could help position and localize the therapy delivery instrument accurately, improve physician and patient comfort, and allow real-time therapy monitoring. Using real-time MR images could also allow correction of the surgical instrument trajectory and controlled therapy delivery. Two applications validating the presented architecture, closed-loop needle steering and MRI-guided brain tumor ablation, are demonstrated under real-time MRI guidance.
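At its core, the closed-loop architecture described reduces to an image-in/command-out servo cycle: acquire an intraoperative image, localize the instrument, compare with the target, and command a bounded correction. A schematic simulation of such a cycle (all quantities, gains and tolerances are synthetic illustrations, not the actual system's interfaces):

```python
import numpy as np

rng = np.random.default_rng(3)
target = np.array([30.0, 5.0])          # desired tip position (mm)
tip = np.array([0.0, 0.0])              # true needle tip
gain = 0.4                              # proportional servo gain

for cycle in range(25):
    # "Intraoperative image": tip localization with 0.5 mm noise.
    observed = tip + 0.5 * rng.standard_normal(2)
    error = target - observed
    if np.linalg.norm(error) < 1.0:     # assumed clinical tolerance
        print(f"converged after {cycle} imaging cycles")
        break
    # Command a bounded correction of the insertion axes.
    step = np.clip(gain * error, -3.0, 3.0)
    tip += step                         # actuation assumed ideal here
print("final tip error (mm):", np.round(target - tip, 2))
```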
APA, Harvard, Vancouver, ISO, and other styles
39

Grepl, Pavel. "Strojové vidění pro navádění robotu." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2021. http://www.nusl.cz/ntk/nusl-443727.

Full text
Abstract:
This Master's thesis deals with the design, assembly and testing of a camera system for localizing randomly placed and oriented objects on a conveyor belt, with the purpose of guiding a robot to those objects. The theoretical part surveys the individual components that make up a camera system and the field of 2D and 3D object localization. The practical part covers two possible arrangements of the camera system, the implementation of the chosen arrangement, the creation of test images, the programming of the image-processing algorithm, the creation of the HMI, and the testing of the complete system.
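A standard 2D pipeline for this kind of conveyor-belt localization is threshold, contour extraction, and rotated-box fitting; the resulting (x, y, angle) pose is what gets handed to the robot controller. A minimal OpenCV sketch of that generic pipeline on a synthetic frame (not the thesis's specific algorithm):

```python
import cv2
import numpy as np

# Synthetic "conveyor" frame: dark belt with one bright rectangular part.
frame = np.zeros((240, 320), dtype=np.uint8)
part = cv2.boxPoints(((160, 120), (80, 30), 25.0))   # center, size, angle
cv2.fillPoly(frame, [part.astype(np.int32)], 255)

# Classic 2D localization: threshold, find the blob, fit a rotated box.
_, binary = cv2.threshold(frame, 127, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
largest = max(contours, key=cv2.contourArea)
(cx, cy), (w, h), angle = cv2.minAreaRect(largest)

# The pose (cx, cy, angle) would be sent to the robot for guidance.
print(f"part at ({cx:.1f}, {cy:.1f}) px, rotated {angle:.1f} deg")
```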
APA, Harvard, Vancouver, ISO, and other styles
40

Mignon, Paul. "Guidage robotisé d'une aiguille flexible sous échographie 3D pour la curiethérapie de la prostate." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAS019/document.

Full text
Abstract:
In France, 25% to 30% of the 40,000 prostate cancer cases per year are treated with brachytherapy. During this procedure, about thirty hollow needles are manually inserted into the prostate through the perineum, using ultrasound images to locate the prostate and the needles. Radioactive seeds are then deposited through the needle cannulas at precise, pre-planned locations in the prostate. The success of the operation is closely related to the distribution and homogeneity of the radioactive dose in the prostate, and therefore to the precision with which the seeds are placed. This accuracy is affected by several factors. First, the prostate moves and deforms as the needles are inserted and as the ultrasound probe is moved for image acquisition. Second, the prostate may swell during the operation because of tissue inflammation and bleeding. Finally, the needles are very thin and tend to bend during insertion. The TIMC-IMAG laboratory (CAMI team) has developed a robotic system for transperineal needle insertion, guided by 3D ultrasound, to improve the precision, reliability and efficiency of seed placement. That work provided a first proof of concept with a laboratory prototype, but the approach can only partially correct prostate motion and deformation during the gesture: only the insertion depth is adjusted, so the rich information of the imaging modality is largely unused. LIRMM (DEXTER team) recently developed an adaptive planning approach for steering flexible needles during percutaneous insertion. The proposed technique updates the path followed by the needle using online visual feedback; this planning and control strategy forms a closed-loop architecture that compensates for system uncertainties and disturbances (organ deformation, tissue inhomogeneity, etc.). The purpose of this thesis is to combine the expertise of the two laboratories to provide a flexible needle steering solution for prostate brachytherapy. The first step is a needle tracking algorithm in 3D ultrasound, which must cope with the poor needle visibility of this imaging modality, combined with various sources of noise. To improve robustness, the search area for the needle in the volume is determined by a predictive model, a first contribution of this manuscript. The closed-loop path-planning control is then adapted to the specifics of 3D ultrasound imaging and of the previously developed robot, and coupled to the visual feedback of the needle provided by the detection algorithm. The system is finally tested on phantoms and on an anatomical specimen to assess the viability and relevance of the proposed approach. This work is thus a first step towards a future clinical application of flexible needle steering. While a robotic system autonomously inserting a flexible needle in the clinic is still a distant prospect, an assistance system for needle insertion, in which the clinician and the robot work together, is a solution within reach today.
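The predictive model that restricts the needle search area can be illustrated with a generic constant-velocity Kalman filter: the predicted state and its variance define a window in the next volume where the detector looks for the tip. A minimal 1D sketch in that spirit (the thesis's actual model and parameters may differ):

```python
import numpy as np

# Constant-velocity Kalman filter on the needle tip depth, used to
# predict where to search in the next 3D ultrasound volume.
dt = 1.0                                   # time between volumes (s)
F = np.array([[1, dt], [0, 1]])            # state transition (pos, vel)
H = np.array([[1.0, 0.0]])                 # only position is measured
Q = np.diag([0.1, 0.1])                    # process noise
R = np.array([[1.0]])                      # detection noise (mm^2)

x = np.array([0.0, 2.0])                   # initial tip depth (mm), speed
P = np.eye(2)
detections = [2.1, 4.3, 5.8, 8.2, 9.9]     # noisy tip depths per volume

for z in detections:
    # Predict, then derive a search window from the predicted variance.
    x = F @ x
    P = F @ P @ F.T + Q
    sigma = np.sqrt(P[0, 0])
    print(f"search window: {x[0] - 3*sigma:.1f} .. {x[0] + 3*sigma:.1f} mm")
    # Update with the detection found inside that window.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
```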
APA, Harvard, Vancouver, ISO, and other styles
41

Khadraoui, Djamel. "La commande référencée vision pour le guidage automatique de véhicules." Clermont-Ferrand 2, 1996. http://www.theses.fr/1996CLF20860.

Full text
Abstract:
The work presented in this thesis falls within the field of robust vision-based control, combining theoretical studies and experimental validation. It addresses the lateral control of an autonomous vehicle in the image space. The proposed approach uses visual information in a closed control loop and is devoted to visual servoing based on robust control laws, with a view to the automatic driving of a car and of an agricultural vehicle. One objective of the lateral control of a vehicle can be expressed as the minimization of the error between the vehicle's current trajectory and the desired trajectory. For such a task, a linearized kinematic or dynamic bicycle model with two degrees of freedom is sufficient. The observed scene is represented in the image space by straight-line geometric primitives; in the highway context, the chosen road model is a rigid model of three lines meeting at a vanishing point. These models serve both for the synthesis of the control laws and for the simulation of the kinematic or dynamic behaviour of the vehicle. The first solution to the control problem consists in building a state-space model in the form of first-order matrix differential equations, whose state variables are essentially the parameters of the visual features of the scene. From this model, built in the image plane of the camera, a state-feedback control law is derived using a pole-placement technique. This first method, however, is not robust to uncertainties in the parameters of the state model, which arise for instance from camera motion (pitch). A second, more general solution therefore uses a robust control technique to account for possible variations of the parameters of the vehicle's control model. A robust technique is attractive for several reasons: a single controller is used, eliminating the need for adaptive algorithms that continuously retune the gains, and stability is guaranteed over the range of parameter variation. The technique chosen here is based on the optimization of an H∞ norm of the model established in the frequency domain, with an additive unstructured uncertainty model.
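The pole-placement step described above can be sketched in a few lines. The thesis builds its state model from visual features; as a simpler stand-in, this sketch places poles for a linearized lateral bicycle model with assumed speed and wheelbase, then simulates the closed loop:

```python
import numpy as np
from scipy.signal import place_poles

# Linearized lateral model: states are lateral offset y (m) and heading
# error psi (rad); input is the steering angle delta (rad).
v, L = 10.0, 2.5          # assumed speed (m/s) and wheelbase (m)
A = np.array([[0.0, v],
              [0.0, 0.0]])
B = np.array([[0.0],
              [v / L]])

# State feedback u = -K x placing the closed-loop poles.
K = place_poles(A, B, [-2.0, -3.0]).gain_matrix

# Closed-loop simulation from an initial 1 m lateral offset.
x, dt = np.array([1.0, 0.0]), 0.01
for _ in range(500):                  # 5 s of simulated driving
    u = -(K @ x)[0]                   # steering command
    x = x + dt * (A @ x + B.flatten() * u)
print("lateral offset after 5 s:", round(x[0], 4), "m")
```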
APA, Harvard, Vancouver, ISO, and other styles
42

Watanabe, Yoko. "Stochastically optimized monocular vision-based navigation and guidance." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/22545.

Full text
Abstract:
Thesis (Ph. D.)--Aerospace Engineering, Georgia Institute of Technology, 2008.
Committee Chair: Johnson, Eric; Committee Co-Chair: Calise, Anthony; Committee Member: Prasad, J.V.R.; Committee Member: Tannenbaum, Allen; Committee Member: Tsiotras, Panagiotis.
APA, Harvard, Vancouver, ISO, and other styles
43

Bohora, Anil R. "Visual robot guidance in time-varying environment using quadtree data structure and parallel processing." Ohio : Ohio University, 1989. http://www.ohiolink.edu/etd/view.cgi?ohiou1182282896.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Anisi, David A. "Online trajectory planning and observer based control." Licentiate thesis, Stockholm : Optimization and systems theory, Royal Institute of Technology, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4153.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

McManus, Colin. "Learning place-dependant features for long-term vision-based localisation." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:088715bb-cfba-408e-b1f6-b307ae43be97.

Full text
Abstract:
In order for autonomous vehicles to achieve life-long operation in outdoor environments, navigation systems must be able to cope with visual change, whether short-term, such as variable lighting or weather conditions, or long-term, such as different seasons. As GPS is not always reliable, autonomous vehicles must be self-sufficient with onboard sensors. This thesis examines the problem of localisation against a known map across extreme lighting and weather conditions using only a stereo camera as the primary sensor. The method presented departs from traditional techniques that blindly apply out-of-the-box interest-point detectors to all images of all places. Such a naive approach fails to exploit any prior knowledge about the environment in which the robot operates; furthermore, point-feature approaches often fail under dramatic appearance changes, as associating low-level features such as corners or edges becomes extremely difficult and sometimes impossible. By leveraging knowledge of prior appearance, this thesis presents an unsupervised method for learning a set of distinctive and stable (i.e., stable under appearance changes) feature detectors that are unique to a specific place in the environment. In other words, we learn place-dependent feature detectors that deliver vastly superior robustness in exchange for a reduced, but tolerable, metric precision. Combined with a method for masking distracting objects in dynamic environments and a simple model for external illuminants, such as the sun, this thesis presents a robust localisation system able to achieve metric estimates from night-to-day or summer-to-winter conditions. Results are presented from various locations in the UK, including the Begbroke Science Park, Woodstock, Oxford, and central London.
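At its very simplest, a "place-dependent detector" can be thought of as a patch learned at a known place and relocated by normalized cross-correlation in a live image. The toy sketch below shows only that flavor; the learned detectors in the thesis are, of course, much richer than a single template:

```python
import cv2
import numpy as np

rng = np.random.default_rng(4)
live = rng.integers(0, 255, (240, 320), dtype=np.uint8)
patch = live[100:130, 150:190].copy()     # pretend this was learned earlier
noisy = cv2.add(live, rng.integers(0, 20, live.shape, dtype=np.uint8))

# Relocate the place-specific patch by normalized cross-correlation.
scores = cv2.matchTemplate(noisy, patch, cv2.TM_CCOEFF_NORMED)
_, best, _, loc = cv2.minMaxLoc(scores)
print(f"best match {best:.2f} at {loc} (expected near (150, 100))")
```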
APA, Harvard, Vancouver, ISO, and other styles
46

Mebarki, Rafik. "Automatic guidance of robotized 2D ultrasound probes with visual servoing based on image moments." Phd thesis, Université Européenne de Bretagne, 2010. http://tel.archives-ouvertes.fr/tel-00476718.

Full text
Abstract:
This dissertation presents a new 2D ultrasound-based visual servoing method. The main goal is to automatically guide a robotized 2D ultrasound probe held by a medical robot so as to reach a desired cross-sectional ultrasound image of an object of interest. The method controls both the in-plane and out-of-plane motions of the 2D ultrasound probe and makes direct use of the 2D ultrasound image in the visual servo scheme, where the feedback visual features are combinations of image moments. To build the servo scheme, we develop the analytical form of the interaction matrix that relates the time variation of the image moments to the probe velocity. This modeling is theoretically verified on simple shapes such as spherical and cylindrical objects. In order to automatically position the 2D ultrasound probe with respect to an observed object, we propose six relevant independent visual features to control the 6 degrees of freedom of the robotic system. The system is then endowed with the capability of automatically interacting with objects without any prior information about their shape, 3D parameters or 3D location. To do so, we develop on-line estimation methods that identify the parameters involved in the visual servo scheme. We conducted both simulation trials, on simulated volumetric objects, and experimental trials, on objects and soft tissues immersed in a water-filled tank. Successful results were obtained, showing the validity of the developed methods and their robustness to different errors and perturbations, especially those inherent to the ultrasound modality. Keywords: Medical robotics, visual servoing, 2D ultrasound imaging, kinematics modeling, model-free servoing.
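The building blocks of such moment-based features are the raw and central moments of the observed cross-section: area, centroid and orientation. A minimal sketch of their computation on a synthetic binary cross-section (the thesis combines such quantities into six servo features; the combinations themselves are not reproduced here):

```python
import numpy as np

def moment_features(mask):
    """Visual features from image moments of a binary cross-section:
    area (m00), centroid (m10/m00, m01/m00) and orientation from the
    central second moments, as used in moment-based visual servoing."""
    ys, xs = np.nonzero(mask)
    m00 = float(len(xs))                      # area in pixels
    xc, yc = xs.mean(), ys.mean()             # centroid
    mu20 = np.mean((xs - xc) ** 2)            # central second moments
    mu02 = np.mean((ys - yc) ** 2)
    mu11 = np.mean((xs - xc) * (ys - yc))
    alpha = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)   # orientation
    return m00, xc, yc, alpha

# Synthetic elliptical cross-section, standing in for a segmented
# ultrasound image of the object of interest.
yy, xx = np.mgrid[0:200, 0:200]
mask = (((xx - 120) / 50) ** 2 + ((yy - 90) / 25) ** 2) <= 1.0
area, xc, yc, alpha = moment_features(mask)
print(f"area={area:.0f}px centroid=({xc:.1f},{yc:.1f}) "
      f"angle={np.degrees(alpha):.1f}deg")
```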
APA, Harvard, Vancouver, ISO, and other styles
47

Mechri, Mehdi. "Contribution à l'étude et à la réalisation d'un robot mobile : développement du matériel : simulation et expérimentation des algorithmes de guidage." Bordeaux 1, 1986. http://www.theses.fr/1986BOR10520.

Full text
Abstract:
This thesis describes the hardware and software of a mobile robot that keeps no memory of its environment. The robot is designed to carry out the following tasks: moving along a corridor while avoiding obstacles; moving between rows of trees; and following a mobile target. The robot perceives its environment through an ultrasonic rangefinder that reports the presence of obstacles in its vicinity. The thesis mainly presents: rangefinding systems; the ultrasonic sensor built during this work; the central processor managing the robot; the guidance algorithms; the simulation results; and the robot's behaviour in a real environment.
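The corridor-following task described can be illustrated with the simplest possible guidance rule: steer proportionally to the difference between the left and right ultrasonic ranges so the robot stays centred. A tiny sketch with assumed gains, not the thesis's algorithm:

```python
def corridor_steering(d_left, d_right, k=0.8):
    """Proportional centering between two walls measured by left/right
    ultrasonic rangefinders: steer towards the side with more room,
    saturated to +/- 0.5 rad."""
    return max(-0.5, min(0.5, k * (d_left - d_right)))

# Robot drifting 0.3 m left of the corridor centerline.
print("steering command:", round(corridor_steering(0.7, 1.3), 2), "rad")
```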
APA, Harvard, Vancouver, ISO, and other styles
48

Caravaca, Mora Oscar Mauricio. "Development of a novel method using optical coherence tomography (OCT) for guidance of robotized interventional endoscopy." Thesis, Strasbourg, 2020. http://www.theses.fr/2020STRAD004.

Full text
Abstract:
There is an unmet clinical need to provide doctors with a new method that streamlines minimally invasive endoscopic treatment of colorectal cancer into single-operator procedures, assisted by in-situ, real-time, accurate tissue characterization for informed treatment decisions. A promising solution to this problem has been developed by the AVR (Automation, Vision and Robotics) team of the ICube laboratory, where the flexible interventional endoscope (manufactured by Karl Storz) was completely robotized, allowing a single operator to independently telemanipulate the endoscope and two insertable therapeutic instruments from a joint control unit. However, the robot-assisted flexible endoscope is subject to the same diagnostic accuracy limitations as standard endoscopy systems. Endoscopic optical coherence tomography (OCT) has been shown to have good potential for imaging disorders of the gastrointestinal tract and for differentiating healthy tissue from diseased tissue, although OCT has so far been limited to imaging the human esophagus, which offers a simple geometry and easy access. Neither OCT nor the robotized endoscope alone can overcome the limitations of the current standard of care for colon cancer management. Combining these two technologies and developing a new platform for early detection and treatment of cancer is the main interest of this work, with the aim of developing a state-of-the-art OCT imaging console and probe integrated with the robotized endoscope. The capabilities of this new technology for imaging the interior of the large intestine, for tissue characterization and treatment assistance, were tested in pre-clinical experiments, showing potential for improved margin verification during minimally invasive endoscopic treatment performed by a single operator in the telemanipulation mode.
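An OCT imaging console of the kind described ultimately turns interference spectra into depth profiles (A-scans). A rough, generic sketch of spectral-domain OCT reconstruction (background subtraction, windowing, inverse FFT), under the simplifying assumption of a k-linear spectrum; this is not this system's actual pipeline:

```python
import numpy as np

# A reflector at depth z produces a cosine fringe across wavenumber k;
# an inverse FFT of the background-subtracted, windowed spectrum
# recovers the depth profile.
n_samples = 2048
k = np.linspace(0, 1, n_samples)           # normalized, k-linear spectrum
depth_bin = 300                            # simulated reflector position
spectrum = 1.0 + 0.3 * np.cos(2 * np.pi * depth_bin * k)  # source + fringe

background = spectrum.mean()               # estimate of the source envelope
windowed = (spectrum - background) * np.hanning(n_samples)
a_scan = np.abs(np.fft.ifft(windowed))[: n_samples // 2]

print("reflector found at depth bin:", int(np.argmax(a_scan)))
```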
APA, Harvard, Vancouver, ISO, and other styles
49

Dupuy, Christine. "Guidage cartésien d'un chariot d'atelier." Toulouse, ENSAE, 1987. http://www.theses.fr/1987ESAE0004.

Full text
Abstract:
This work is part of research on mobile workshop robots, aiming to increase their flexibility with respect to trajectory modification and network extension and, in the longer term, their autonomy. These autonomous carts belong to a new generation designed to free vehicles from the constraints imposed by guide wires or any other guidance scheme that requires the paths to be physically marked. To allow the execution of complex manoeuvres, notably obstacle avoidance, and the fast, inexpensive reconfiguration of trajectories, research has turned to a control scheme that lets carts follow non-materialized trajectories: Cartesian guidance. This type of control makes the position and orientation of the vehicle known at all times in various work frames, so the workshop planning and management systems have continuous access to this information. Our work consisted in designing and implementing the Cartesian guidance of an autonomous cart operating in a partially known, cooperative environment.
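Cartesian guidance presupposes a continuously maintained pose estimate; the classic building block is dead reckoning from wheel encoders. A minimal sketch of differential-drive odometry with assumed geometry (illustrative, not the thesis's implementation):

```python
import math

def integrate_odometry(pose, d_left, d_right, track=0.5):
    """Dead-reckoned Cartesian pose update of a differential-drive cart
    from left/right wheel displacements (m); track is the wheel spacing."""
    x, y, theta = pose
    d = (d_left + d_right) / 2.0             # distance travelled
    dtheta = (d_right - d_left) / track      # heading change
    x += d * math.cos(theta + dtheta / 2.0)  # midpoint integration
    y += d * math.sin(theta + dtheta / 2.0)
    return x, y, theta + dtheta

pose = (0.0, 0.0, 0.0)
for _ in range(100):                         # gentle left curve
    pose = integrate_odometry(pose, 0.009, 0.011)
print("pose:", tuple(round(v, 3) for v in pose))
```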
APA, Harvard, Vancouver, ISO, and other styles
50

Pélissier, François. "Localisation multisensorielle et guidage cartésien d'un robot mobile autonome." Toulouse, ENSAE, 1990. http://www.theses.fr/1990ESAE0017.

Full text
Abstract:
CARL (autonomous cart with light robot) is a new-generation industrial cart. Its goal is to overcome the lack of flexibility of wire-guided carts with respect to modifying their physically marked trajectories. CARL's trajectories are defined in Cartesian coordinates in a frame attached to the environment and therefore require no floor installation. To localize itself in its workspace, CARL uses relative localization information (odometry), corrected by absolute localization based on the observation of surveyed landmarks defined by their coordinates in the absolute frame. These landmarks are currently beacons and vertical lines, perceived by an on-board camera. Vertical lines, as natural landmarks, have the advantage of reducing the amount of beacon equipment, or even eliminating it when the environment allows (isolated lines). Moreover, the absolute localization has a multisensor structure: it can also filter measurements other than the camera's, such as the observations delivered by the on-board ultrasonic sensors. This fusion of observations relies on Kalman filter theory. With such localization, CARL can be guided with good precision along complex paths made up of successions of circular arcs and straight-line segments. The method thus offers a wide variety of trajectories and great flexibility of use.
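The Kalman-based fusion described, odometry corrected by landmark observations, can be sketched as one predict/update cycle of an extended Kalman filter; all numbers below are synthetic and the beacon observation model is a generic range/bearing one, not necessarily CARL's:

```python
import numpy as np

# One odometry-plus-landmark localization cycle: predict the pose from
# odometry, then correct it with a range/bearing observation of a
# surveyed beacon at known coordinates.
beacon = np.array([4.0, 2.0])              # surveyed beacon position

x = np.array([0.0, 0.0, 0.0])              # pose estimate (x, y, theta)
P = np.diag([0.2, 0.2, 0.05])              # pose uncertainty

# --- Predict: drive 1 m straight (odometry); uncertainty grows.
x[0] += 1.0 * np.cos(x[2])
x[1] += 1.0 * np.sin(x[2])
P += np.diag([0.05, 0.05, 0.01])

# --- Update: observe the beacon at a measured range and bearing.
z = np.array([3.55, 0.56])                 # measurement (m, rad)
dx, dy = beacon[0] - x[0], beacon[1] - x[1]
r = np.hypot(dx, dy)
z_pred = np.array([r, np.arctan2(dy, dx) - x[2]])
# Jacobian of the range/bearing measurement w.r.t. the pose.
Hj = np.array([[-dx / r, -dy / r, 0.0],
               [dy / r**2, -dx / r**2, -1.0]])
R = np.diag([0.1, 0.02])                   # sensor noise
S = Hj @ P @ Hj.T + R
K = P @ Hj.T @ np.linalg.inv(S)
x = x + K @ (z - z_pred)
P = (np.eye(3) - K @ Hj) @ P
print("corrected pose:", np.round(x, 3))
```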
APA, Harvard, Vancouver, ISO, and other styles