Dissertations / Theses on the topic 'Robots Vision artificielle (robotique)'
Consult the top 50 dissertations / theses for your research on the topic 'Robots Vision artificielle (robotique).'
Meyer, Cédric. "Théorie de vision dynamique et applications à la robotique mobile." Paris 6, 2013. http://www.theses.fr/2013PA066137.
The recognition of objects is a key challenge for increasing the autonomy and performance of robots. Although many object recognition techniques have been developed in the field of frame-based vision, none of them bears comparison with human perception in terms of performance, weight and power consumption. Neuromorphic engineering models biological components in artificial chips, and has recently produced an event-based camera inspired by the biological retina. The sparse, asynchronous, scene-driven visual data generated by these sensors allow the development of computationally efficient, bio-inspired artificial vision. This thesis studies how event-based acquisition and its fine temporal precision can change object recognition by adding precise timing to the process. It first introduces a frame-based object detection and recognition algorithm used for semantic mapping. It then quantifies, using mutual information, the advantages of event-based acquisition. Next, it investigates low-level event-based spatiotemporal features in dynamic scenes and introduces a real-time multi-kernel feature tracker using Gabor filters or any other kernel. Finally, a fully asynchronous, time-oriented object recognition architecture mimicking the V1 visual cortex is presented, extending the state-of-the-art HMAX model into a purely temporal implementation of object recognition.
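The event-driven, multi-kernel feature matching described in this abstract lends itself to a small illustration. The sketch below is not from the thesis; all function names and parameters are our own. It builds a 2D Gabor kernel and scores a set of timestamped events against it, weighting each event by an exponential decay of its age, a crude stand-in for the precise timing that an event camera provides:

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """2D Gabor kernel: a Gaussian envelope modulating an oriented cosine grating."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_rot = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x * x + y * y) / (2.0 * sigma * sigma))
    return envelope * np.cos(2.0 * np.pi * x_rot / wavelength)

def event_feature_response(events, center, kernel, tau, t_now):
    """Score a patch of (x, y, t) events against a kernel, weighting each
    event by how recent it is (exponential decay with time constant tau)."""
    half = kernel.shape[0] // 2
    cx, cy = center
    score = 0.0
    for x, y, t in events:
        dx, dy = x - cx, y - cy
        if abs(dx) <= half and abs(dy) <= half:
            score += kernel[dy + half, dx + half] * np.exp(-(t_now - t) / tau)
    return score
```

On synthetic data, events lying along a contrast aligned with the kernel's orientation score markedly higher than events along the orthogonal direction.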
Marin, Hernandez Antonio Devy Michel. "Vision dynamique pour la navigation d'un robot mobile." Toulouse : INP Toulouse, 2004. http://ethesis.inp-toulouse.fr/archive/00000009.
Mansard, Nicolas Chaumette François. "Enchaînement de tâches robotiques." [S.l.] : [s.n.], 2006. ftp://ftp.irisa.fr/techreports/theses/2006/mansard.pdf.
Loménie, Nicolas. "Interprétation de nuages de points : application à la modélisation d'environnements 3D en robotique mobile." Paris 5, 2001. http://www.theses.fr/2001PA05S027.
This thesis deals with the analysis of unorganised 3D point sets using two tools: on the one hand, a clustering algorithm inspired by K-means; on the other, morphological filtering techniques specifically designed for Delaunay-triangulation representations of unstructured point sets. Applications to autonomous mobile robot navigation are carried out in the field of computer vision in unknown environments, but the methodology has also been applied generically to other, more structured types of environment.
Clérentin, Arnaud. "Localisation d'un robot mobile par coopération multi-capteurs et suivi multi-cibles." Amiens, 2001. http://www.theses.fr/2001AMIE0023.
Merveilleux-Orzekowska, Pauline. "Exploration et navigation de robots basées vision omnidirectionnelle." Amiens, 2012. http://www.theses.fr/2012AMIE0112.
Free-space perception without any prior knowledge is still a challenging issue in mobile robotics. In this thesis, we propose active contour models and monocular omnidirectional vision as a framework for real-time detection of unknown free space. The approaches developed in the literature fall into two main categories: parametric and geometric active contours. Parametric active contours are explicit models whose deformation is driven by minimizing an energy functional; they provide fast segmentations but cannot guarantee convergence of the contour into boundary concavities. In contrast, geometric methods represent the contour implicitly as the level set of a two-dimensional distance function evolving according to an Eulerian formulation; they converge into boundary concavities and automatically handle topology changes of the contour, but their prohibitive computational cost limits their use. In this work, various improvements to existing geometric and parametric models are proposed to overcome these limitations of computational cost and poor convergence into concavities, including a new method based on Bézier-curve interpolation with a geometric construction. Experiments in real environments have validated the interest and efficiency of these approaches.
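A parametric snake of the kind contrasted above can be sketched in a few lines. The following is our own minimal illustration, not the thesis's Bézier-based method: one explicit gradient step on a closed contour, with elasticity and rigidity terms and an optional external image force.

```python
import numpy as np

def snake_step(pts, alpha=0.2, beta=0.05, step=0.5, ext=None):
    """One explicit gradient step of a closed parametric active contour.
    alpha weights elasticity (2nd difference), beta rigidity (4th difference);
    ext, if given, samples an external (image-derived) force at the points."""
    d2 = np.roll(pts, -1, axis=0) - 2.0 * pts + np.roll(pts, 1, axis=0)
    d4 = np.roll(d2, -1, axis=0) - 2.0 * d2 + np.roll(d2, 1, axis=0)
    force = alpha * d2 - beta * d4
    if ext is not None:
        force = force + ext(pts)
    return pts + step * force
```

With no external force, iterating this step smooths a noisy contour (and slowly shrinks it), which is exactly the internal-energy behaviour that the image force must balance in a real segmentation.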
Cabrol, Aymeric de. "Système de vision robuste temps-réel dynamiquement reconfigurable pour la robotique mobile." Paris 13, 2005. http://www.theses.fr/2005PA132038.
Mansard, Nicolas. "Enchaînement de tâches robotiques." Rennes 1, 2006. http://www.theses.fr/2006REN1S097.
Lasserre, Patricia. "Vision pour la robotique en environnement naturel." Phd thesis, Université Paul Sabatier - Toulouse III, 1996. http://tel.archives-ouvertes.fr/tel-00139846.
Breton, Stéphane. "Une approche neuronale du contrôle robotique utilisant la vision binoculaire par reconstruction tridimensionnelle." Mulhouse, 1999. http://www.theses.fr/1999MULH0532.
Motamed, Cina, and Alain Schmitt. "Application de la vision artificielle à la sécurité en robotique." Compiègne, 1992. http://www.theses.fr/1992COMP0511.
Murrieta Cid, Rafael. "Contribution au développement d'un système de vision pour robot mobile d'extérieur." Toulouse, INPT, 1998. http://www.theses.fr/1998INPT030H.
Marin Hernandez, Antonio. "Vision dynamique pour la navigation d'un robot mobile." Phd thesis, Toulouse, INPT, 2004. http://oatao.univ-toulouse.fr/7346/1/marin_hernandez.pdf.
Le Bras-Mehlman, Elizabeth. "Représentation de l'environnement d'un robot mobile." Paris 11, 1989. http://www.theses.fr/1989PA112195.
Marey, Mohammed Abdel-Rahman. "Contributions to control modeling in visual servoing, task redundancy and joint limits avoidance." Rennes 1, 2010. http://www.theses.fr/2010REN1S134.
Visual servoing has become a classical approach to robot control, exploiting the information provided by a vision sensor in a control loop. The research described in this thesis aims to solve servoing problems and to improve the ability to handle additional tasks more effectively. The thesis first presents the state of the art in visual servoing, redundancy, and joint-limit avoidance. It then proposes the following contributions. A control scheme is obtained by introducing a behaviour parameter into a hybrid control law; it yields better system behaviour when appropriate parameter values are selected. An analytical study of the most common control laws and of the newly proposed law is carried out for translational and rotational motions along the optical axis. New control schemes are also proposed to improve the behaviour of the system when the desired configuration is singular. The theoretical contributions to the redundancy formalism rest on the design of a projection operator obtained by considering only the norm of the main task; this leads to a less constrained problem and widens the domain of application. New redundancy-based strategies for avoiding the robot's joint limits are developed, and the problem of adding secondary tasks to the main task while guaranteeing joint-limit avoidance is also solved. All this work has been validated by experiments in visual servoing applications.
Bideaux, Eric. "Stan : systeme de transport a apprentissage neuronal. application de la vision omnidirectionnelle a la localisation d'un robot mobile autonome." Besançon, 1995. http://www.theses.fr/1995BESA2008.
Léonard, François. "Contribution à la commande dynamique d'un robot industriel, en boucle fermée, par caméra embarquée." Université Louis Pasteur (Strasbourg) (1971-2008), 1990. http://www.theses.fr/1990STR13134.
Lallement, Alex. "Localisation d'un robot mobile par coopération entre vision monoculaire et télémétrie laser." Vandoeuvre-les-Nancy, INPL, 1999. http://www.theses.fr/1999INPL064N.
Khadraoui, Djamel. "La commande référencée vision pour le guidage automatique de véhicules." Clermont-Ferrand 2, 1996. http://www.theses.fr/1996CLF20860.
Ayala Ramirez, Victor. "Fonctionnalités visuelles sur des scènes dynamiques pour la robotique mobile." Toulouse 3, 2000. http://www.theses.fr/2000TOU30184.
Aviña Cervantes, Juan Gabriel Devy Michel. "Navigation visuelle d'un robot mobile dans un environnement d'extérieur semi-structuré." Toulouse : INP Toulouse, 2005. http://ethesis.inp-toulouse.fr/archive/00000163.
Solà Ortega, Joan Devy Michel Monin André. "Towards visual localization, mapping and moving objects tracking by a mobile robot: a geometric and probabilistic approach." Toulouse : INP Toulouse, 2007. http://ethesis.inp-toulouse.fr/archive/00000528.
Bienfait, Eric. "Pratic : Programmation de Robot Assistée par Traitement d'Image et Caméra." Lille 1, 1987. http://www.theses.fr/1987LIL10086.
Trabelsi, Mohamed El Hadi. "Combinaison d'informations visuelles et ultrasonores pour la localisation d'un robot mobile et la saisie d'objets." Evry-Val d'Essonne, 2006. http://www.biblio.univ-evry.fr/theses/2006/Interne/2006EVRY0036.pdf.
My research belongs to the ARPH project, whose aim is to assist disabled people in the various tasks of daily life using a mobile robot and an arm manipulator. The first part of my thesis is devoted to a localization system for the mobile robot. We use 2D/3D matching between a 3D environment model, enriched with ultrasonic information, and 2D image segments. The function that transforms the 3D coordinates of the model segments into camera coordinates is based on Lowe's linear principle, and the best position is chosen by measuring distances between the two groups of segments. The second part of my thesis is devoted to a grasping strategy for simple objects (cylinder or sphere). A camera and a sonar, followed by a neural network, are installed on the robot gripper; combining the information from these two sensors enables grasping the object. The object is centered in the visual field of the camera by image processing, and the gripper approaches it until the grasp. This method contributes several elements to the development of a visual servoing strategy.
Netter, Thomas. "De la vision naturelle à la vision artificielle : application du contrôle visuo-moteur de la mouche au pilotage d'un robot réactif volant." Nice, 2000. http://www.theses.fr/2000NICE5484.
Previous research on the visuo-motor system of the fly within the Neurocybernetics Group of the Laboratory of Neurobiology, CNRS, Marseilles, France, led to the development of two mobile robots featuring an analogue electronic vision system based on Elementary Motion Detectors (EMDs) derived from those of the fly. A tethered Unmanned Air Vehicle (UAV), called Fania, was developed to study Nap-of-the-Earth (NOE) flight (terrain following) and obstacle avoidance using a motion-sensing visual system. After an aerodynamic study, Fania was custom-built as a miniature (35 cm, 0.840 kg), electrically powered, thrust-vectoring rotorcraft. It is constrained by a whirling arm to 3 degrees of freedom, with pitch and thrust control. The aircraft's 20-photoreceptor onboard eye senses moving contrasts with 19 ground-based neuromorphic EMDs.
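The Elementary Motion Detectors mentioned above follow the classic Hassenstein-Reichardt correlation scheme, which can be sketched minimally. This is a discrete-time toy model of our own, not the robots' analogue circuitry: each photoreceptor signal is correlated with a delayed copy of its neighbour's signal, and the two mirror-image correlations are subtracted.

```python
def emd_response(a, b, delay):
    """Hassenstein-Reichardt elementary motion detector over two
    photoreceptor signals a and b (lists of samples). A positive output
    indicates motion in the direction from a towards b."""
    out = []
    for t in range(delay, len(a)):
        out.append(a[t - delay] * b[t] - b[t - delay] * a[t])
    return out
```

A contrast pulse crossing from a to b yields a positive summed response; the same pulse travelling the other way yields a negative one, which is what lets an array of such detectors sense the direction of image motion.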
Nottale, Matthieu. "Ancrage d'un lexique partagé entre robots autonomes dans un environnement non-contraint." Phd thesis, Université Pierre et Marie Curie - Paris VI, 2008. http://pastel.archives-ouvertes.fr/pastel-00004260.
Martinez Margarit, Aleix. "Apprentissage visuel dans un système de vision active : application dans un contexte de robotique et reconnaissance du visage." Paris 8, 1998. http://www.theses.fr/1998PA081521.
Zhang, Zhongkai. "Vision-based calibration, position control and force sensing for soft robots." Thesis, Lille 1, 2019. http://www.theses.fr/2019LIL1I001/document.
The modeling of soft robots, which have theoretically infinite degrees of freedom, is extremely difficult, especially when the robots have complex configurations. This difficulty poses new challenges for calibration and control design, but also offers new opportunities, such as novel force-sensing strategies. This dissertation provides new and general solutions based on modeling and vision. The thesis first presents a discrete-time kinematic model for soft robots based on the real-time Finite Element (FE) method. Then, vision-based simultaneous calibration of the sensor-robot system and the actuators is investigated. Two closed-loop position controllers are designed, and, to deal with the loss of image features, a switched control strategy is proposed that combines an open-loop and a closed-loop controller. The deformability of soft structures also makes it possible to use the soft robot itself as a force sensor: two methods (marker-based and marker-free) of external force sensing are proposed, based on fusing vision-based measurements with the FE model. Both methods estimate not only the intensities but also the locations of the external forces. As a specific application, a cable-driven continuum catheter robot in contact with its environment is modeled with the FE method and controlled by a decoupled strategy that handles insertion and bending independently. Both the control inputs and the contact forces along the entire catheter are computed by solving a quadratic programming problem with a linear complementarity constraint (QPCC).
Tessier, Cédric. "Système de localisation basé sur une stratégie de perception cognitive appliqué à la navigation autonome d'un robot mobile." Clermont-Ferrand 2, 2007. http://www.theses.fr/2007CLF21784.
Ayache, Nicholas. "Construction et fusion de représentations visuelles (3D) : applications à la robotique mobile." Paris 11, 1988. http://www.theses.fr/1988PA112132.
Courbon, Jonathan. "Navigation de Robots Mobiles par Mémoire Sensorielle." Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2009. http://tel.archives-ouvertes.fr/tel-00664837.
Hamdi, Hocine. "Étude d'une station de synthèse de programmes robots prenant en compte un système de vision." Compiègne, 1986. http://www.theses.fr/1986COMPI248.
We study the mechanisms needed to implement an on-line and off-line robot programming station coupled with a vision system. We describe six modules for specifying a task in an object-level language, simulating execution of the resulting code, and executing it in the field. Object modelling is based on a set of frames (locating, grasping) and on relations with other objects. Links between objects are not declared explicitly; they are created and updated by the system, as is the automatic updating of locations. The main originality stems from the programming methodology, which allows high-level programming, low-level code generation, and statement-by-statement simulation during program editing. Besides its ease of use, the modularity of the system makes it highly extensible: it can drive different robots, each with its own command language, while always using the same station language. For the moment, only the geometric model of the Puma 560 and its language (VAL) have been integrated.
Remazeilles, Anthony. "Navigation à partir d'une mémoire d'images." Phd thesis, Université Rennes 1, 2004. http://tel.archives-ouvertes.fr/tel-00524505.
Paccot, Flavien. "Contribution à la commande dynamique référencée capteurs de robots parallèles." Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2009. http://tel.archives-ouvertes.fr/tel-00725568.
Aviña Cervantes, Juan Gabriel. "Navigation visuelle d'un robot mobile dans un environnement d'extérieur semi-structuré." Toulouse, INPT, 2005. http://ethesis.inp-toulouse.fr/archive/00000163/.
This thesis deals with the automatic processing of color images and its application to robotics in outdoor semi-structured environments. We propose a vision-based navigation method for mobile robots using an onboard color camera. The objective is the robotization of agricultural machines, so that they can navigate automatically on a network of roads (to go from a farm to a given field) [...]
Baron, Thierry. "De la perception à la modélisation par la vision achrome et trichrome en robotique." Toulouse 3, 1991. http://www.theses.fr/1991TOU30216.
Fernandez Labrador, Clara. "Indoor Scene Understanding using Non-Conventional Cameras." Thesis, Bourgogne Franche-Comté, 2020. http://www.theses.fr/2020UBFCK037.
Humans understand environments effortlessly under a wide variety of conditions, by virtue of visual perception. Comparable visual understanding by computer vision is highly desirable, so that machines can perform complex tasks in the real world to assist or entertain humans. In this regard, we are particularly interested in indoor environments, where humans spend nearly all their lifetime. This thesis specifically addresses the problems that arise in the quest for hierarchical visual understanding of indoor scenes. On the sensing side, we propose non-conventional cameras, namely 360º imaging and 3D sensors. On the understanding side, we target three key aspects, for which novel and efficient solutions are provided: room layout estimation; object detection, localization and segmentation; and object category shape modeling. The focus of this thesis is on the following underlying challenges. First, the estimation of the 3D room layout from a single 360º image is investigated and used for the highest level of scene modelling and understanding. We exploit the Manhattan-world assumption and deep learning techniques to propose models that handle parts of the room invisible in the image and generalize to more complex layouts. At the same time, new methods for working with 360º images are proposed, notably a special convolution that compensates for equirectangular image distortions. Second, given the importance of context for scene understanding, we study object localization and segmentation, adapting the problem to leverage 360º images, and we exploit layout-object interaction to lift detected 2D objects into the 3D room model. The final line of work focuses on 3D object shape analysis: we use explicit modelling of non-rigidity and a high-level notion of object symmetry to learn, in an unsupervised manner, 3D keypoints that are consistently ordered as well as geometrically and semantically consistent across the objects of a category. Our models advance the state of the art on the aforementioned tasks when evaluated on the respective reference benchmarks.
Corrieu, Jean-Michel. "Élaboration d'un outil industriel de vision par ordinateur : application à l'inspection et à la robotique dans l'industrie informatique." Montpellier 2, 1986. http://www.theses.fr/1986MON20219.
Almanza Ojeda, Dora Luz. "Détection et suivi d'objets mobiles perçus depuis un capteur visuel embarqué." Toulouse 3, 2011. http://thesesups.ups-tlse.fr/2339/.
This dissertation concerns the detection and tracking of mobile objects in a dynamic environment, using a camera embedded on a mobile robot. This is a difficult challenge because only a single camera is used: mobile objects must be detected by analysing their apparent motion in images, while discounting the motion caused by the ego-motion of the camera. First, a spatio-temporal analysis of the image sequence based on sparse optical flow is proposed. An a-contrario clustering method groups the dynamic points without a-priori information and without parameter tuning. The method's success relies on accumulating sufficient information on the positions and velocities of these points; we call tracking time the time required to acquire and analyse the images that characterize them. A probabilistic map is built to identify the image areas most likely to contain a mobile object; this map drives an active selection of new points close to previously detected mobile regions, enlarging those regions. Second, an iterative detection-clustering-tracking process is proposed for image sequences acquired from a fixed camera, for indoor or outdoor applications; each object is described by an active contour, updated so that the initial object model remains inside the contour. Finally, experimental results are presented on images acquired from a camera embedded on a mobile robot navigating in outdoor environments with rigid or non-rigid mobile objects. The method is shown to detect obstacles during navigation in a-priori unknown environments, first at low speed and then at a more realistic speed, compensating for the robot's ego-motion in the images.
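The core idea of separating a mobile object's apparent motion from the camera's ego-motion can be illustrated very simply. In the sketch below (our own simplification: the median flow stands in for a full ego-motion model, and the thesis's a-contrario grouping is not reproduced), tracked points whose optical flow deviates from the dominant motion are flagged as candidate mobile points:

```python
import numpy as np

def detect_mobile_points(flow, thresh=1.0):
    """Flag tracked points whose optical flow vector deviates from the
    dominant (ego-motion induced) flow. flow is an (N, 2) array of per-point
    flow vectors; returns a boolean mask of candidate mobile points."""
    flow = np.asarray(flow, dtype=float)
    ego = np.median(flow, axis=0)               # dominant apparent motion
    residual = np.linalg.norm(flow - ego, axis=1)
    return residual > thresh
```

With a majority of background points, the median is robust to the minority of genuinely moving points, so the residual cleanly isolates them on synthetic data.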
Dallej, Tej. "Contributions à un modèle générique pour l'asservissement visuel des robots parallèles par l'observation des éléments cinématiques." Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2007. http://tel.archives-ouvertes.fr/tel-00925695.
Mei, Christopher. "Couplage vision omnidirectionnelle et télémétrie laser pour la navigation en robotique." Phd thesis, École Nationale Supérieure des Mines de Paris, 2007. http://pastel.archives-ouvertes.fr/pastel-00004652.
Colonnier, Fabien. "Oeil composé artificiel doté d'hyperacuité : applications robotiques à la stabilisation et à la poursuite." Thesis, Aix-Marseille, 2017. http://www.theses.fr/2017AIXM0076/document.
Inspired by the optical properties of the fly compound eye and the observation of its periodic retinal micro-movements, several visual sensors have established that the localization of a contrast can be performed very precisely; this was the first demonstration of the visual hyperacuity of the fly compound eye. In this thesis, an artificial compound eye with a wide field of view was used. Thanks to a novel algorithm fusing the visual signals, the sensor, embedded onboard an aerial robot, measures its displacement and enables the robot to hover above a textured environment. Localizing a contrast precisely over the whole field of view remains difficult: a second algorithm improved the localization of a bar thanks to a calibration, but it depends on contrast and illuminance variations. To avoid the calibration process, a third algorithm was proposed to localize two contrasts. It is based on the work of Heiligenberg and Baldi, who showed that an array of Gaussian receptive fields can provide a linear estimate of a stimulus position; for the first time, we applied a modified version of their estimator to an artificial compound eye. Mounted on a rover, this sensor allows following a target precisely at a constant distance. In summary, an artificial compound eye with a coarse spatial resolution can be endowed with hyperacuity and enables a robot to follow a target with precision, a step forward toward bio-inspired target localization and pursuit.
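The Heiligenberg-and-Baldi estimator underlying the third algorithm is easy to sketch: an array of Gaussian receptive fields responds to a stimulus, and the response-weighted centroid of the field centers recovers the stimulus position far more finely than the field spacing. The one-dimensional toy below uses hypothetical parameters of our own and is not the sensor's actual processing chain:

```python
import numpy as np

def receptive_field_estimate(x_true, centers, sigma):
    """Localize a stimulus at x_true from an array of Gaussian receptive
    fields centered at `centers` with common width `sigma`: the estimate is
    the response-weighted centroid of the field centers."""
    responses = np.exp(-0.5 * ((x_true - centers) / sigma) ** 2)
    return np.sum(centers * responses) / np.sum(responses)
```

With fields spaced one unit apart and sigma comparable to the spacing, the estimate resolves positions well below the spacing, which is the essence of hyperacuity.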
Costis, Thomas. "Couplage perception-locomotion pour robot quadrupède autonome." Versailles-St Quentin en Yvelines, 2006. http://www.theses.fr/2006VERS0026.
This thesis presents a survey of perception-driven legged locomotion for robotic use. The research focuses on increasing the autonomy of the robot so that it can adapt its gait to the ground and the environment. In the first part, we present the visual perception system, including colored-object detection with a single video camera. The perception system also incorporates several line-detection algorithms, in order to build polygonal maps of the obstacles on the ground. Visual primitives are then used to localize the robot in a structured environment, both with respect to a segment and within an absolute reference frame. The experimental part is conducted on Sony quadruped robots and implements autonomous behaviours such as line following, or positioning relative to polygonal shapes in order to cross or avoid obstacles.
Guermeur, Philippe. "Vision robotique monoculaire : reconstruction du temps-avant-collision et de l'orientation des surfaces à partir de l'analyse de la déformation." Rouen, 2002. http://www.theses.fr/2002ROUES053.
This thesis presents a method for processing axial monocular image sequences to evaluate the time to collision and the surface orientations in a scene. Using a planar-facet representation of the scene, we first derive the apparent velocity field generated by the camera motion as a function of the 3-D facet model and the motion hypothesis. A global deformation model is then obtained using the Green and Stokes theorems to integrate the normal and tangential components of the vector field, connecting the results with the parameters to be identified. In practice, the vector field is computed using the epipolar constraint, and the camera is fitted with a wide-angle lens. Using the epipolar model requires accurate camera calibration in order to compensate for image distortion and fit the pinhole model; to this end, we introduce a new method, based on evolutionary optimisation and the EASEA evolutionary specification language, to determine the distortion coefficients.
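The link between apparent image deformation and time to collision can be illustrated in the simplest case: for a pure axial approach to a fronto-parallel surface, the flow is radial about the focus of expansion, v = x / tau, so a least-squares fit of that model recovers tau. The sketch below is a toy version of this idea of our own, not the thesis's Green/Stokes integration scheme:

```python
import numpy as np

def time_to_collision(points, flows):
    """Least-squares fit of a purely divergent flow model v = (1/tau) * x,
    with the focus of expansion assumed at the origin. points and flows are
    (N, 2) arrays of image positions and their flow vectors; returns tau."""
    points = np.asarray(points, dtype=float).ravel()
    flows = np.asarray(flows, dtype=float).ravel()
    # v = k * x  =>  k = <x, v> / <x, x>,  tau = 1 / k
    k = points @ flows / (points @ points)
    return 1.0 / k
```

On a synthetic radial flow field the fit recovers tau exactly; with real flow estimates the same least-squares structure averages out measurement noise.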
Caron, Guillaume. "Estimation de pose et asservissement de robot par vision omnidirectionnelle." Phd thesis, Université de Picardie Jules Verne, 2010. http://tel.archives-ouvertes.fr/tel-00577133.
Ben Khelifa, Mohamed Moncef. "Vision par Ordinateur et Robotique d'Assistance : application au projet M.A.R.H., Mobile Autonome Robotisé pour Handicapés." Toulon, 2001. http://www.theses.fr/2001TOUL0012.
Assistance robotic systems aim to help disabled people, for example by piloting robotized wheelchairs while meeting their needs for trajectory-planning aid and navigation, using dynamic-vision techniques. In this thesis we studied the analogies between human neuro-vision and computer vision in order to overcome the problems posed by visuo-spatial deficits for wheelchair users. First, we address the detection of interest points by applying a stable smoothing of the signal with Gaussian filters, followed by an approximation of the local maxima at non-integer positions; the resulting detector localizes interest points with subpixel precision. The repeatability of this detector enabled us to compute the epipolar geometry, characterized by a stable fundamental matrix, which plays a central role in the search for corresponding primitives across image sequences. A stratified approach is used for CCD camera self-calibration: the estimation of the infinite homography is based on affine calibration, and the intrinsic and extrinsic parameters are estimated by Euclidean calibration based on the absolute conic (simplified Kruppa equations). Finally, we validate these algorithms with two experiments: one consists of planning a trajectory followed by navigation along a corridor, the other of avoiding obstacles in an indoor environment.
Abdallah, Ahmad. "Contribution à la surveillance d'un site robotisé par traitement d'images." Compiègne, 1997. http://www.theses.fr/1997COMP1005.
Dugas, Olivier. "Localisation relative à six degrés de liberté basée sur les angles et sur le filtrage probabiliste." Master's thesis, Université Laval, 2014. http://hdl.handle.net/20.500.11794/25607.
When a team of robots has to collaborate, it is useful to let the robots localize each other, for example to maintain flight formations. Cooperative localization is of particular importance to teams of aerial or underwater robots operating in areas devoid of landmarks, and the problem becomes harder if the localization system must be low-cost and lightweight enough that only consumer-grade cameras can be used. This thesis presents an analytical solution to the six-degrees-of-freedom cooperative localization problem using bearing-only measurements, with probabilistic filters integrated to increase its accuracy. Given two mutually observing robots, each equipped with a camera and two markers, and given that they each take a picture at the same moment, we can recover the coordinate transformation that expresses the pose of one robot in the frame of reference of the other. The novelty of our approach is the use of two pairs of bearing measurements for the pose estimation, instead of combined bearing and range measurements. The accuracy of the results is verified in extensive simulations and in experiments with real hardware: at distances between 3.0 m and 15.0 m, the relative position is estimated with less than 0.5 % error, and the mean orientation error is kept below 2.2 deg. An approximate generalization is formulated and simulated for the case where each robot's camera is not collinear with its own markers. Beyond the precision limit of the cameras, we show that an unscented Kalman filter can reduce the error of the relative position estimates, and that a quaternion-based extended Kalman filter can do the same for the relative orientation estimates. This makes our solution particularly well suited for deployment on fleets of inexpensive robots moving in 6 DoF, such as blimps.
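A planar toy version of bearing-only localization conveys the flavour of the problem: with a known heading and two landmarks at known positions, two absolute bearings determine the observer's position by solving a 2x2 linear system. This is our own simplification of the thesis's 6-DoF mutual-observation setup; all names are hypothetical:

```python
import numpy as np

def locate_from_bearings(l1, l2, phi1, phi2):
    """Recover a 2D observer position p from absolute bearings phi1, phi2
    (radians) to two landmarks l1, l2 at known positions, assuming the
    observer's heading is known. Landmark i satisfies l_i = p + r_i * u_i,
    so r1*u1 - r2*u2 = l1 - l2 is solved for the two ranges."""
    u1 = np.array([np.cos(phi1), np.sin(phi1)])
    u2 = np.array([np.cos(phi2), np.sin(phi2)])
    A = np.column_stack([u1, -u2])
    r = np.linalg.solve(A, np.asarray(l1, float) - np.asarray(l2, float))
    return np.asarray(l1, float) - r[0] * u1
```

In the full 6-DoF problem the unknown relative orientation removes this linearity, which is why the thesis resorts to an analytical pose construction followed by probabilistic filtering.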
Clot, Robert. "Coopération robot - préhension - vision pour la manipulation des pièces souples : application à l'industrie du cuir." Lyon, INSA, 1991. http://www.theses.fr/1991ISAL0026.
Automation is gradually superseding conventional manual processing in footwear and apparel manufacture, and automated handling of pieces corresponds to a growing need. These pieces are flat, supple, porous and thin, and the supply of systems able to fulfil these requirements efficiently remains limited. When handling is to be coupled with press cutting, the manipulation is closely linked to the design of a vision system: the pieces must be localised and identified in order to determine the positioning and configuration of the prehension head. The vision, recognition, prehension and sorting system developed here meets these requirements; the main innovations concern sorting, prehension and the vision/prehension interaction. The identification parameters were chosen to suit the leather pieces constituting shoe uppers. The prehension head required substantial development work: the matrix concept was developed and led to three applications. In an experimental configuration, the pieces are picked up from a conveyor belt; this pilot has demonstrated that both the vision and the recognition systems are satisfactory.
Bosch, Sébastien. "Contribution à la modélisation d'environnements par vision monoculaire dans un contexte de robotique aéro-terrestre." Phd thesis, Université Paul Sabatier - Toulouse III, 2007. http://tel.archives-ouvertes.fr/tel-00181794.