To see the other types of publications on this topic, follow the link: Odometri.

Dissertations / Theses on the topic 'Odometri'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Odometri.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Johansson, Sixten. "Navigering och styrning av ett autonomt markfordon." Thesis, Linköping University, Department of Electrical Engineering, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-6006.

Full text
Abstract:



In this thesis a system for navigation and control of an autonomous ground vehicle has been implemented. The purpose of this thesis is to further develop the vehicle that is to be used in studies and evaluations of path planning algorithms as well as studies of other autonomy functions. With different sensor configurations and sensor models it is also possible to evaluate different strategies for navigation. The work has been performed using a given platform which measures the vehicle’s movement using only simple ultrasonic sensors and pulse encoders. The vehicle is able to navigate autonomously and follow a simple path in a known environment. The state estimation is performed using a particle filter.

The work is a continuation of a previous project, Collision Avoidance för autonomt fordon, carried out at Linköping University in the spring of 2005.
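The state-estimation idea this abstract describes, a particle filter that combines a motion model (wheel encoders) with range measurements (ultrasonic sensors), can be sketched in one dimension. Everything below (noise levels, a direct range reading of position) is illustrative, not the thesis's actual vehicle or sensor models:

```python
import math
import random

def particle_filter_step(particles, control, measurement,
                         motion_noise=0.1, meas_noise=0.5):
    """One predict-weight-resample cycle of a 1D particle filter."""
    # Predict: push each hypothesis through the motion model plus noise.
    moved = [p + control + random.gauss(0.0, motion_noise) for p in particles]
    # Weight: Gaussian likelihood of the range reading for each hypothesis.
    weights = [math.exp(-(measurement - m) ** 2 / (2 * meas_noise ** 2))
               for m in moved]
    total = sum(weights) or 1.0
    # Resample in proportion to the normalized weights.
    return random.choices(moved, weights=[w / total for w in weights],
                          k=len(moved))

random.seed(0)
particles = [random.uniform(0.0, 10.0) for _ in range(500)]
true_pos = 2.0
for _ in range(20):                       # vehicle advances 0.2 per step
    true_pos += 0.2
    particles = particle_filter_step(particles, control=0.2,
                                     measurement=true_pos)
estimate = sum(particles) / len(particles)   # should track true_pos = 6.0
```

With noise-free simulated controls and measurements the cloud collapses quickly onto the true position; the real system would replace the Gaussian likelihood with proper ultrasonic and encoder models.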

APA, Harvard, Vancouver, ISO, and other styles
2

CHEN, HONGYI. "GPS-oscillation-robust Localization and Vision-aided Odometry Estimation." Thesis, KTH, Maskinkonstruktion (Inst.), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-247299.

Full text
Abstract:
GPS/IMU integrated systems are commonly used for vehicle navigation. The algorithm for this coupled system is normally based on a Kalman filter. However, oscillating GPS measurements in urban environments can easily cause the localization to diverge. Moreover, heading estimation may be sensitive to magnetic interference if it relies on an IMU with an integrated magnetometer. This report addresses the localization problem under GPS oscillation and outage using an adaptive extended Kalman filter (AEKF). For heading estimation, stereo visual odometry (VO) is fused in to counter the effect of magnetic disturbance. The vision-aided AEKF-based algorithm is tested both under good GPS conditions and under GPS oscillation with magnetic interference. In the situations considered, the algorithm is verified to outperform the conventional extended Kalman filter (CEKF) and the unscented Kalman filter (UKF) in position estimation by 53.74% and 40.09% respectively, and to reduce the drift of the heading estimate.
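The core trick of an adaptive Kalman filter, inflating the measurement noise when the GPS innovation is implausibly large so that oscillating fixes barely move the estimate, can be illustrated with a scalar filter. The gating rule and all constants here are a simplified stand-in, not the thesis's actual AEKF formulation:

```python
def adaptive_kf_step(x, P, u, z, Q=0.01, R0=1.0, gate=3.0):
    """One predict/update cycle of a scalar Kalman filter with
    innovation-based measurement-noise inflation."""
    x_pred, P_pred = x + u, P + Q          # trivial motion model x' = x + u
    nu = z - x_pred                        # innovation
    S = P_pred + R0                        # nominal innovation variance
    # Adapt: outside the gate, scale R up so the bad fix is nearly ignored.
    R = R0 if nu * nu <= gate * gate * S else R0 * nu * nu / S
    K = P_pred / (P_pred + R)              # Kalman gain
    return x_pred + K * nu, (1.0 - K) * P_pred

x, P = 0.0, 1.0
for i in range(1, 11):                     # ten clean GPS fixes
    x, P = adaptive_kf_step(x, P, u=1.0, z=float(i))
x, P = adaptive_kf_step(x, P, u=1.0, z=60.0)   # one oscillating outlier
# x stays near the true position 11 instead of jumping toward 60
```

A conventional filter with fixed R would be dragged a long way toward the outlier; the adaptation keeps the gain small for implausible fixes.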
APA, Harvard, Vancouver, ISO, and other styles
3

Pereira, Fabio Irigon. "High precision monocular visual odometry." Biblioteca Digital de Teses e Dissertações da UFRGS, 2018. http://hdl.handle.net/10183/183233.

Full text
Abstract:
Recovering three-dimensional information from two-dimensional images is an important problem in computer vision with many applications: robotics, the entertainment industry, medical diagnosis and prosthetics, and even interplanetary exploration benefit from vision-based 3D estimation. The problem can be divided into two interdependent operations: estimating the camera position and orientation when each image was produced, and estimating the 3D scene structure. This work focuses on computer vision techniques used to estimate the trajectory of a camera-equipped vehicle, a problem known as visual odometry. To provide an objective measure of efficiency and precision, and to compare the achieved results with the state of the art in visual odometry, a popular high-precision dataset was selected and used. In the course of this work, new techniques for image feature tracking, camera pose estimation, 3D point position calculation, and scale recovery are proposed. The achieved results outperform the best-ranked results on the chosen dataset at the time of publication.
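One building block of such a pipeline, triangulating a 3D point from two calibrated views, can be sketched with the standard linear (DLT) method. The intrinsics and poses below are synthetic, not taken from the thesis:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one scene point from two views.
    P1, P2 are 3x4 projection matrices; x1, x2 are pixel coordinates."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous solution is the right singular vector associated
    # with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Synthetic check: identity camera and a camera translated along x.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])
X_true = np.array([0.3, -0.2, 4.0])
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]
X_est = triangulate(P1, P2, x1, x2)
```

With noise-free correspondences the DLT recovers the point exactly; with real feature noise it is typically followed by a nonlinear refinement.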
APA, Harvard, Vancouver, ISO, and other styles
4

Porteš, Petr. "Návrh a realizace odometrických snímačů pro mobilní robot s Ackermannovým řízením." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2017. http://www.nusl.cz/ntk/nusl-318145.

Full text
Abstract:
The aim of this thesis is to design and build odometric sensors for Bender 2, a mobile robot with Ackermann steering, and to design a mathematical model that evaluates the robot's trajectory from the data measured by these sensors. The first part summarizes the theoretical background, while the second, practical part describes the design of the front axle, the design and operating software of the front encoders, and the odometric models. The last part deals with the processing and evaluation of the measured data.
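Dead reckoning for an Ackermann-steered vehicle is typically built on the bicycle model: the encoder gives the arc length travelled and the steering angle gives the turning radius. A minimal sketch, with constants that are illustrative rather than Bender 2's actual parameters:

```python
import math

def ackermann_odometry(pose, ticks, steering,
                       ticks_per_meter=1000.0, wheelbase=0.3):
    """Dead-reckoning update for a bicycle-model (Ackermann) vehicle.
    pose = (x, y, heading); ticks = encoder increment; steering = front
    wheel angle in radians."""
    x, y, th = pose
    d = ticks / ticks_per_meter          # distance travelled this step
    if abs(steering) < 1e-9:             # straight-line special case
        return (x + d * math.cos(th), y + d * math.sin(th), th)
    # Turning: the vehicle follows an arc of radius wheelbase / tan(steering).
    radius = wheelbase / math.tan(steering)
    dth = d / radius
    return (x + radius * (math.sin(th + dth) - math.sin(th)),
            y - radius * (math.cos(th + dth) - math.cos(th)),
            th + dth)

pose = (0.0, 0.0, 0.0)
for _ in range(100):                     # straight run: 100 x 10 ticks = 1 m
    pose = ackermann_odometry(pose, ticks=10, steering=0.0)
```

The thesis's odometric models would additionally account for the two front encoders and wheel geometry; this sketch only shows the pose-integration step.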
APA, Harvard, Vancouver, ISO, and other styles
5

Pärkkä, J. (Jarmo). "Reaaliaikainen visuaalinen odometria." Master's thesis, University of Oulu, 2013. http://urn.fi/URN:NBN:fi:oulu-201312021943.

Full text
Abstract:
Visual odometry is the process of estimating the motion of a vehicle, human or robot using the input of a single camera or multiple cameras. Application domains include robotics, wearable computing, augmented reality and automotive. It is a good supplement to navigation systems because it operates in environments where GPS does not. Visual odometry was developed as a substitute for wheel odometry because its use does not depend on the terrain: it can be applied without restrictions on the mode of movement (wheels, flying, walking). In this work a visual odometry method is examined and developed for use in a real-time embedded system. The basics of visual odometry are discussed, and simultaneous localization and mapping (SLAM), of which visual odometry can form a part, is introduced. The purpose of this work is to develop a visual odometry algorithm for Parrot's robot helicopter AR.Drone 2.0, so that it could fly independently in the future. The algorithm is based on Civera's EKF-SLAM method, where feature extraction is replaced with an approach used earlier in global motion estimation. The operation of the algorithm is tested by measuring its execution time on different image sequences and by analyzing the camera motion from the map it draws. Furthermore, the plausibility of the navigation information is examined. The behaviour of the implemented system is analyzed visually from the video and compared against a reference method. The visual odometry method implemented in this work is found to be a workable solution for a real-time embedded system, subject to certain constraints.
APA, Harvard, Vancouver, ISO, and other styles
6

Nishitani, André Toshio Nogueira. "Localização baseada em odometria visual." Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-17082016-095838/.

Full text
Abstract:
The localization problem consists of estimating the position of a robot with respect to some external reference, and it is an essential part of the navigation systems of robots and autonomous vehicles. Localization based on visual odometry stands out from encoder-based odometry in estimating the rotation and direction of movement. This kind of approach is also an attractive choice for vehicle control systems in urban environments, where visual information is needed anyway to extract the semantic information contained in street signs, traffic lights and other markings. In this context, this project proposes the development of a visual odometry system based on structure from motion, using visual information acquired from a monocular camera to estimate the vehicle's pose. The absolute scale problem, inherent to the use of monocular cameras, is solved using prior knowledge of the metric relation between image points and world points lying on a common plane.
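The scale-recovery idea, using known camera-to-ground geometry to turn pixels on the road plane into metric coordinates, reduces, for a level forward-looking camera, to simple back-projection onto the plane. The intrinsics and mounting height below are hypothetical, not the thesis's setup:

```python
def pixel_to_ground(u, v, fx=700.0, fy=700.0, cx=640.0, cy=360.0,
                    cam_height=1.5):
    """Back-project a pixel onto a flat ground plane for a level,
    forward-looking camera mounted cam_height metres above the road.
    This known camera-to-plane geometry is what fixes the monocular scale."""
    if v <= cy:
        raise ValueError("pixel is at or above the horizon")
    Z = fy * cam_height / (v - cy)   # metric depth along the optical axis
    X = (u - cx) * Z / fx            # metric lateral offset
    return X, Z

# A pixel 140 rows below the principal point maps to 7.5 m ahead.
X, Z = pixel_to_ground(640.0, 500.0)
```

Scaling every monocular reconstruction so that its ground-plane points agree with this mapping fixes the otherwise-unknown global scale factor.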
APA, Harvard, Vancouver, ISO, and other styles
7

Ligocki, Adam. "Metody současné sebelokalizace a mapování pro hloubkové kamery." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2017. http://www.nusl.cz/ntk/nusl-316270.

Full text
Abstract:
This master's thesis deals with fusing position data from an existing real-time implementation of visual SLAM with wheel odometry. The fusion suppresses the characteristic errors of each of the two measurement methods, making it possible to build a more accurate 3D model of the explored environment. The thesis first presents the theory needed to handle 3D SLAM. It then describes the properties of the open-source SLAM project used and the individual software modifications made to it. Next, it explains the principles of combining the position information obtained from the visual and odometric sensors, and describes the differential-drive chassis used to produce the wheel odometry. Finally, the thesis summarizes the results achieved by the data fusion and compares them with the original accuracy of the visual SLAM.
APA, Harvard, Vancouver, ISO, and other styles
8

Souza, Anderson Abner de Santana. "Mapeamento com Sonar Usando Grade de Ocupação baseado em Modelagem Probabilística." Universidade Federal do Rio Grande do Norte, 2008. http://repositorio.ufrn.br:8080/jspui/handle/123456789/15203.

Full text
Abstract:
In this work, we propose a probabilistic mapping method in which the mapped environment is represented by a modified occupancy grid. The main idea of the proposed method is to let a mobile robot construct, in a systematic and incremental way, the geometry of the surrounding space, obtaining at the end a complete map of the environment. As a consequence, the robot can move through the environment safely, based on a confidence value for the data obtained from its perceptual system. The map represents the sensory data coherently, whether these are noisy or not, coming from the robot's exteroceptive and proprioceptive sensors. The characteristic noise carried by the data from these sensors is treated by probabilistic modelling, so that its effects are visible in the final result of the mapping process. The results of the experiments performed indicate the viability of the methodology and its applicability to autonomous mobile robotics, and are thus a contribution to the field.
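Occupancy-grid mapping of this kind typically rests on the standard Bayesian log-odds update of each cell; a minimal sketch with an illustrative inverse sensor model (hit probability 0.7), not the modified grid the thesis develops:

```python
import math

# Inverse sensor model: log-odds increments for "occupied" and "free".
L_OCC, L_FREE = math.log(0.7 / 0.3), math.log(0.3 / 0.7)

def update_cell(logodds, hit):
    """Bayesian log-odds update of one grid cell from one sonar reading."""
    return logodds + (L_OCC if hit else L_FREE)

def probability(logodds):
    """Convert a cell's log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(logodds))

cell = 0.0                     # prior: p = 0.5
for _ in range(5):             # five consistent "occupied" readings
    cell = update_cell(cell, hit=True)
p = probability(cell)          # confidence grows toward 1 with evidence
```

The resulting probability is exactly the kind of per-cell confidence value the abstract mentions the robot using to move safely.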
APA, Harvard, Vancouver, ISO, and other styles
9

Silva, Bruno Marques Ferreira da. "Odometria visual baseada em técnicas de structure from motion." Universidade Federal do Rio Grande do Norte, 2011. http://repositorio.ufrn.br:8080/jspui/handle/123456789/15364.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Visual odometry is the process of estimating camera position and orientation based solely on images and on features (projections of visual landmarks present in the scene) extracted from them. With the advances in computer vision algorithms and computer processing power, the subarea known as structure from motion (SFM) began to supply mathematical tools for localization systems in robotics and augmented reality applications, in contrast with its initial purpose of serving inherently offline solutions aimed at 3D reconstruction and image-based modelling. Accordingly, this work proposes a pipeline for obtaining relative position that uses a single previously calibrated camera as a positional sensor and is based entirely on models and algorithms from SFM. Techniques usually applied in camera localization systems, such as Kalman filters and particle filters, are not used, so no additional information such as a probabilistic model of camera state transitions is needed. Experiments assessing both the 3D reconstruction quality and the camera position estimated by the system were performed, in which image sequences captured in realistic scenarios were processed and compared with ground-truth localization data gathered from the odometer of a mobile robotic platform.
APA, Harvard, Vancouver, ISO, and other styles
10

Quist, Eric Blaine. "UAV Navigation and Radar Odometry." BYU ScholarsArchive, 2015. https://scholarsarchive.byu.edu/etd/4439.

Full text
Abstract:
Prior to the wide deployment of robotic systems, they must be able to navigate autonomously. These systems cannot rely on good weather or daytime navigation, and they must also be able to navigate in unknown environments, all without human interaction. A majority of modern autonomous systems rely on GPS for position estimation. While GPS solutions are readily available, GPS is often lost and may even be jammed, so a significant amount of research has focused on GPS-denied navigation. Many GPS-denied solutions rely on known environmental features for navigation. Others use vision sensors, which often perform poorly at high altitudes and are limited in poor weather. In contrast, radar systems accurately measure range at high and low altitudes and remain unaffected by inclement weather. This dissertation develops radar odometry for GPS-denied navigation: the aircraft's motion is estimated from the range progression of unknown environmental features. Results are presented for both simulated and real radar data. In Chapter 2 a greedy radar odometry algorithm is presented. It uses the Hough transform to identify the range progression of ground point-scatterers, and a global nearest neighbor approach is implemented to perform data association. Under a piecewise-constant heading assumption, as the aircraft passes pairs of scatterers, the locations of the scatterers are triangulated and the motion of the aircraft is estimated. Real flight data is used to validate the approach, and simulated flight data explores its robustness when the heading assumption is violated. Chapter 3 explores a more robust radar odometry technique in which the relatively constant heading assumption is removed. This chapter uses the recursive random sample consensus (R-RANSAC) algorithm to identify, associate, and track the point scatterers. Using the measured ranges to the tracked scatterers, an extended Kalman filter (EKF) iteratively estimates the aircraft's position along with the relative locations of each reflector. Real flight data is used to validate the accuracy of this approach. Chapter 4 performs an observability analysis of a range-only sensor, and an observable radar odometry approach is proposed. It improves on the previous approaches by adding a more robust R-RANSAC above-ground-level (AGL) tracking algorithm to further improve navigational accuracy. Real flight results are presented, comparing this approach to the techniques of the previous chapters.
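The triangulation step described for Chapter 2, locating a ground scatterer from the progression of its measured ranges as the aircraft flies past, can be illustrated in 2D with two range measurements taken from known positions along a straight track. The geometry below is an illustrative reduction, not the dissertation's full algorithm:

```python
import math

def locate_scatterer(x1, r1, x2, r2):
    """Triangulate a stationary scatterer at (s, y) from ranges r1, r2
    measured at track positions x1, x2 (aircraft flying along the x-axis).

    r1^2 = (s - x1)^2 + y^2 and r2^2 = (s - x2)^2 + y^2; subtracting the
    two equations eliminates y and gives s in closed form.
    """
    s = (r1 ** 2 - r2 ** 2) / (2.0 * (x2 - x1)) + (x1 + x2) / 2.0
    y = math.sqrt(max(r1 ** 2 - (s - x1) ** 2, 0.0))
    return s, y

# Simulate a scatterer at (5, 3) observed from x = 0 and x = 2.
s_true, y_true = 5.0, 3.0
r1 = math.hypot(s_true - 0.0, y_true)
r2 = math.hypot(s_true - 2.0, y_true)
s_est, y_est = locate_scatterer(0.0, r1, 2.0, r2)
```

Range-only measurements leave a left/right ambiguity (y versus -y); resolving that, and estimating the track itself, is what the dissertation's filtering machinery addresses.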
APA, Harvard, Vancouver, ISO, and other styles
11

Masson, Clément. "Direction estimation using visual odometry." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-169377.

Full text
Abstract:
This Master thesis tackles the problem of measuring objects' directions from a motionless observation point. A new method based on a single rotating camera, requiring the knowledge of only two (or more) landmarks' directions, is proposed. In a first phase, multi-view geometry is used to estimate camera rotations and key elements' directions from a set of overlapping images. Then, in a second phase, the direction of any object can be estimated by resectioning the camera associated with a picture showing this object. A detailed description of the algorithmic chain is given, along with test results on both synthetic data and real images taken with an infrared camera.
APA, Harvard, Vancouver, ISO, and other styles
12

Pol, Sabine. "Odometry for a Planetary Exploration Rover." Thesis, KTH, Reglerteknik, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-106249.

Full text
Abstract:
IARES is a highly flexible planetary exploration demonstration rover developed by CNES (the French National Center for Space Studies), mainly for studies of autonomous navigation and locomotion. It has 19 degrees of freedom, including six active, steerable wheels. The rover uses software for autonomous navigation, including stereo camera perception, path planning and motion control, complemented by a visual simulator that can stand in for the rover for practical purposes. The goal of this MSc thesis, carried out during the second semester of 2006 at CNES in Toulouse, was to get the most out of the localization capabilities of this rover using a recently implemented method: odometry. A previous study had been carried out at ONERA in Toulouse, and the main goal of this thesis was to implement this new method in the environment used for the CNES rover and to test its performance using the simulator. All this work might even be tested on board at the very end of the internship. Given the hardware platform and the software environment, this new localization method first had to be studied from a theoretical point of view before being integrated into the CNES environment. The study was conducted on a Linux platform; code was developed in C for the simulator, whereas Scilab was used for the validation tests.
APA, Harvard, Vancouver, ISO, and other styles
13

Johansson, Fredrik. "Visual Stereo Odometry for Indoor Positioning." Thesis, Linköpings universitet, Datorseende, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-81215.

Full text
Abstract:
In this master thesis a visual odometry system is implemented and explained. Visual odometry is a technique that can be used on autonomous vehicles to determine their current position, and it is preferably used indoors where GPS does not work. The only input to the system is the image stream from a stereo camera, and the output is the current location given as a relative position. In the C++ implementation, image features are found and matched between the stereo images and the previous stereo pair, which yields 150-250 verified feature matches. The image coordinates are triangulated into a 3D point cloud. The distance between two subsequent point clouds is minimized with respect to rigid transformations, which gives the motion described by six parameters: three for the translation and three for the rotation. Noise in the image coordinates causes reconstruction errors, which makes the motion estimation very sensitive. The results from six experiments show that the weakness of the system is its ability to distinguish rotations from translations. However, if the system has additional knowledge of how it is moving, the minimization can be done with only three parameters and the system can estimate its position with less than 5 % error.
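The minimization over rigid transformations that aligns two point clouds has a closed-form least-squares solution via the SVD (the Kabsch method). Whether the thesis uses exactly this solver is not stated, so the sketch below is illustrative:

```python
import numpy as np

def rigid_align(A, B):
    """Best-fit rotation R and translation t mapping point set A onto B
    in the least-squares sense (SVD / Kabsch method)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)            # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:             # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t

# Recover a known rotation about z and a translation from exact data.
rng = np.random.default_rng(1)
A = rng.normal(size=(200, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
B = A @ R_true.T + t_true
R_est, t_est = rigid_align(A, B)
```

With noisy triangulated points the same solver is simply applied to the matched clouds, which is where the rotation/translation ambiguity the abstract mentions shows up.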
APA, Harvard, Vancouver, ISO, and other styles
14

Venturelli, Cavalheiro Guilherme. "Fusing visual odometry and depth completion." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122517.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2019.
Recent advances in technology indicate that autonomous vehicles, and self-driving cars in particular, may become commonplace in the near future. This thesis contributes to that scenario by studying the problem of depth perception based on sequences of camera images. We start by presenting a sensor fusion framework that achieves state-of-the-art performance when completing depth from sparse LiDAR measurements and a camera. Then, we study how the system performs under a variety of modifications of the sparse input until we ultimately replace the LiDAR measurements with triangulations from a typical sparse visual odometry pipeline. We are then able to achieve a small improvement over the single-image baseline and chart guidelines to assist in designing a system with even more substantial gains.
APA, Harvard, Vancouver, ISO, and other styles
15

Szente, Michal. "Vizuální odometrie pro robotické vozidlo Car4." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2017. http://www.nusl.cz/ntk/nusl-317205.

Full text
Abstract:
This thesis deals with visual odometry algorithms and their application on the experimental vehicle Car4. The first part surveys prior work in this area, on which the solution is based. The next chapters introduce the theoretical design of monocular and stereo visual odometry algorithms. The third part deals with the implementation in MATLAB using the Image Processing Toolbox. After tests on real data, the chosen algorithm is applied to the vehicle Car4 in practical indoor and outdoor conditions. The last part summarizes the results of the work and addresses the problems associated with the application of visual odometry algorithms.
APA, Harvard, Vancouver, ISO, and other styles
16

Santos, Cristiano Flores dos. "Um framework para avaliação de mapeamento tridimensional Utilizando técnicas de estereoscopia e odometria visual." Universidade Federal de Santa Maria, 2016. http://repositorio.ufsm.br/handle/1/12038.

Full text
Abstract:
Three-dimensional mapping of environments has been intensively studied in the last decade. Among the benefits of this research topic is the added autonomy it gives to cars or even drones. A three-dimensional representation also allows a given scene to be viewed interactively and in greater detail. However, at the time of this work no framework had been found that presents in detail the implementation of algorithms for 3D mapping of outdoor environments at close to real-time processing rates. In view of this, this work develops a framework covering the main stages of three-dimensional reconstruction. Stereoscopy was chosen as the technique for acquiring the depth information of the scene. In addition, this study evaluated four depth-map generation algorithms, achieving a rate of 9 frames per second.
O mapeamento tridimensional de ambientes tem sido intensivamente estudado na última década. Entre os benefícios deste tema de pesquisa é possível destacar adição de autonomia á automóveis ou mesmo drones. A representação tridimensional também permite a visualização de um dado cenário de modo iterativo e com maior riqueza de detalhes. No entanto, até o momento da elaboração deste trabalho não foi encontrado um framework que apresente em detalhes a implementação de algoritmos para realização do mapeamento 3D de ambientes externos que se aproximasse de um processamento em tempo real. Diante disto, neste trabalho foi desenvolvido um framework com as principais etapas de reconstrução tridimensional. Para tanto, a estereoscopia foi escolhida como técnica para a aquisição da informação de profundidade do cenário. Além disto, neste trabalho foram avaliados 4 algoritmos de geração do mapa de profundidade, onde foi possível atingir a taxa de 9 quadros por segundo.
APA, Harvard, Vancouver, ISO, and other styles
17

Holmqvist, Niclas. "HANDHELD LIDAR ODOMETRY ESTIMATION AND MAPPING SYSTEM." Thesis, Mälardalens högskola, Inbyggda system, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-41137.

Full text
Abstract:
Ego-motion sensors are commonly used for pose estimation in Simultaneous Localization And Mapping (SLAM) algorithms. Inertial Measurement Units (IMUs) are popular sensors but suffer from integration drift over longer time scales. To remedy the drift they are often combined with additional sensors, such as a LiDAR. Pose estimation is used when scans produced by these additional sensors are matched. The matching of scans can be computationally heavy, as one scan can contain millions of data points. Methods exist to simplify the problem of finding the relative pose between sensor data, such as the Normal Distributions Transform (NDT) SLAM algorithm. The algorithm separates the point cloud data into a voxel grid and represents each voxel as a normal distribution, effectively decreasing the number of data points. Registration is based on a function which converges to a minimum, but sub-optimal conditions can cause the function to converge at a local minimum. To remedy this problem, this thesis explores the benefits of using IMU sensor data to estimate the pose used in the NDT SLAM algorithm.
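The voxel summarization at the heart of NDT can be sketched in a few lines (an illustrative Python sketch, not code from the thesis; the function name `ndt_voxelize` and the minimum-points threshold are assumptions):

```python
import numpy as np

def ndt_voxelize(points, voxel_size):
    """Summarize a point cloud as per-voxel normal distributions.

    points: (N, 3) array; voxel_size: edge length of the cubic voxels.
    Returns {voxel_index: (mean, covariance)} for voxels holding at
    least 3 points, the minimum for a non-degenerate covariance.
    """
    cells = {}
    keys = np.floor(points / voxel_size).astype(int)
    for key, p in zip(map(tuple, keys), points):
        cells.setdefault(key, []).append(p)
    dists = {}
    for key, pts in cells.items():
        pts = np.asarray(pts)
        if len(pts) >= 3:
            dists[key] = (pts.mean(axis=0), np.cov(pts.T))
    return dists
```

Replacing millions of raw points with one Gaussian per voxel is what makes the subsequent registration step tractable.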
APA, Harvard, Vancouver, ISO, and other styles
18

Burusa, Akshay Kumar. "Visual-Inertial Odometry for Autonomous Ground Vehicles." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-217284.

Full text
Abstract:
Monocular cameras are prominently used for estimating the motion of Unmanned Aerial Vehicles. With growing interest in autonomous vehicle technology, the use of monocular cameras in ground vehicles is on the rise. This is especially favorable for localization in situations where Global Navigation Satellite Systems (GNSS) are unreliable, such as open-pit mining environments. However, most monocular camera based approaches suffer from the ambiguity of scale information. Ground vehicles impose a greater difficulty due to high speeds and fast movements. This thesis aims to estimate the scale of monocular vision data by using an inertial sensor in addition to the camera. It is shown that the simultaneous estimation of pose and scale in autonomous ground vehicles is possible by fusing visual and inertial sensors in an Extended Kalman Filter (EKF) framework. However, the convergence of the scale estimate is sensitive to several factors, including the initialization error. An accurate estimate of scale allows accurate estimation of pose, which facilitates the localization of ground vehicles in the absence of GNSS and provides a reliable fall-back option.
Monokulära kameror används ofta vid rörelseestimering av obemannade flygande farkoster. Med det ökade intresset för autonoma fordon har även användningen av monokulära kameror i fordon ökat. Detta är fram för allt fördelaktigt i situationer där satellitnavigering (Global Navigation Satellite System (GNSS)) äropålitlig, exempelvis i dagbrott. De flesta system som använder sig av monokulära kameror har problem med att estimera skalan. Denna estimering blir ännu svårare på grund av ett fordons större hastigheter och snabbare rörelser. Syftet med detta exjobb är att försöka estimera skalan baserat på bild data från en monokulär kamera, genom att komplettera med data från tröghetssensorer. Det visas att simultan estimering av position och skala för ett fordon är möjligt genom fusion av bild- och tröghetsdata från sensorer med hjälp av ett utökat Kalmanfilter (EKF). Estimeringens konvergens beror på flera faktorer, inklusive initialiseringsfel. En noggrann estimering av skalan möjliggör också en noggrann estimering av positionen. Detta möjliggör lokalisering av fordon vid avsaknad av GNSS och erbjuder därmed en ökad redundans.
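Why inertial data makes monocular scale observable can be illustrated with a closed-form least-squares sketch (a hypothetical simplification; the thesis estimates scale recursively inside an EKF, not in this batch form, and `estimate_scale` is an illustrative name):

```python
import numpy as np

def estimate_scale(visual_t, inertial_t):
    """Least-squares scale s minimizing ||s * visual_t - inertial_t||^2.

    visual_t: (N, 3) up-to-scale translations from monocular odometry.
    inertial_t: (N, 3) metric translations integrated from the IMU.
    """
    v = np.asarray(visual_t, dtype=float).ravel()
    m = np.asarray(inertial_t, dtype=float).ravel()
    return float(v @ m / (v @ v))
```

In the recursive EKF setting the same alignment happens incrementally, which is why initialization error affects how quickly the scale estimate converges.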
APA, Harvard, Vancouver, ISO, and other styles
19

Rao, Anantha N. "Learning-based Visual Odometry - A Transformer Approach." University of Cincinnati / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1627658636420617.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Pereira, Ana Rita. "Visual odometry: comparing a stereo and a multi-camera approach." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/18/18153/tde-11092017-095254/.

Full text
Abstract:
The purpose of this project is to implement, analyze and compare visual odometry approaches to help the localization task in autonomous vehicles. The stereo visual odometry algorithm Libviso2 is compared with a proposed omnidirectional multi-camera approach. The proposed method consists of performing monocular visual odometry on all cameras individually and selecting the best estimate through a voting scheme involving all cameras. The omnidirectionality of the vision system allows the part of the surroundings richest in features to be used in the relative pose estimation. Experiments are carried out using cameras Bumblebee XB3 and Ladybug 2, fixed on the roof of a vehicle. The voting process of the proposed omnidirectional multi-camera method leads to some improvements relative to the individual monocular estimates. However, stereo visual odometry provides considerably more accurate results.
O objetivo deste mestrado é implementar, analisar e comparar abordagens de odometria visual, de forma a contribuir para a localização de um veículo autônomo. O algoritmo de odometria visual estéreo Libviso2 é comparado com um método proposto, que usa um sistema multi-câmera omnidirecional. De acordo com este método, odometria visual monocular é calculada para cada câmera individualmente e, seguidamente, a melhor estimativa é selecionada através de um processo de votação que involve todas as câmeras. O fato de o sistema de visão ser omnidirecional faz com que a parte dos arredores mais rica em características possa sempre ser usada para estimar a pose relativa do veículo. Nas experiências são utilizadas as câmeras Bumblebee XB3 e Ladybug 2, fixadas no teto de um veículo. O processo de votação do método multi-câmera omnidirecional proposto apresenta melhorias relativamente às estimativas monoculares individuais. No entanto, a odometria visual estéreo fornece resultados mais precisos.
APA, Harvard, Vancouver, ISO, and other styles
21

Najman, Jan. "Aplikace SLAM algoritmů pro vozidlo s čtyřmi řízenými koly." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2015. http://www.nusl.cz/ntk/nusl-231076.

Full text
Abstract:
This thesis deals with the application of SLAM algorithms on the experimental four-wheel vehicle Car4. The first part explains the basic functioning of SLAM, including a description of the extended Kalman filter, which is one of its main components. It is followed by a brief list of software tools available for solving this problem in MATLAB and an overview of the sensors used in this work. The second part presents the methodology and results of testing the individual sensors and their combinations to calculate odometry and scan the surrounding space. It also shows the process of applying SLAM algorithms on the Car4 vehicle using the selected sensors and the results of testing the entire system in practice.
APA, Harvard, Vancouver, ISO, and other styles
22

Arnould, Philippe. "Étude de la localisation d'un robot mobile par fusion de données." Vandoeuvre-les-Nancy, INPL, 1993. http://www.theses.fr/1993INPL095N.

Full text
Abstract:
The work presented in this thesis concerns the localization of an autonomous vehicle from information provided by an odometer and a magnetometer. The goal is to obtain position and heading information that is sufficiently reliable and precise to allow the vehicle to navigate over the longest possible distance between two re-registrations against the environment. To make the best use of the available information, multi-sensor data fusion techniques were developed. They overcome most of the intrinsic flaws of each sensor taken individually. Two methods were studied and tested: a fusion method based on discriminating the orientation, in which the uncertainties of the two sensors are continuously compared, and a fusion method based on Kalman filtering. The results obtained on typical routes, with the second method in particular, show that the position error at the end of the route is reduced to 0.5% of the distance traveled. The use of such techniques makes navigation without re-registration possible over distances of a few tens of meters to 100 meters, depending on the required precision.
APA, Harvard, Vancouver, ISO, and other styles
23

Tomasi, Junior Darci Luiz. "Modelo de calibração para sistemas de odometria robótica." reponame:Repositório Institucional da UFPR, 2016. http://hdl.handle.net/1884/45704.

Full text
Abstract:
Advisor: Prof. Dr. Eduardo Todt
Dissertation (master's) - Universidade Federal do Paraná, Setor de Ciências Exatas, Graduate Program in Informatics. Defense: Curitiba, 30/11/2016
Includes references: f. 39
Resumo: Para realizar a navegação de uma base robótica em um ambiente desconhecido, alguns mecanismos para detectar o posicionamento e a localização devem ser fornecidos a base. Quando a base está em processo de navegação e faz uso desses mecanismos, erros provenientes do ambiente e da base robótica são inseridos no sistema, resultando em um posicionamento errôneo. Uma forma de reduzir a amplitude dos erros é através de um modelo de calibração eficiente, capaz de identificar e estimar valores aceitáveis para as principais fontes de incerteza nos cálculos de odometria. Este trabalho de pesquisa apresenta um novo modelo de calibração comparável aos métodos clássicos conhecidos, mas que diferencia-se pela forma com que a calibração é realizada, sendo essa a principal limitação para conseguir incrementar os resultados com o método proposto. Ao fim do procedimento padrão proposto ser realizado, os resultados são equivalentes aos dos métodos clássicos conhecidos. Palavras-chave: UMBmark, Odometria, Calibração.
Abstract: In order to navigate a robotic base in an unfamiliar environment, some mechanism to detect position and location must be provided. When the robot is navigating and makes use of this mechanism, errors from the environment and the robotic base are inserted into the system, resulting in an erroneous positioning. One way to reduce the error amplitude is through an efficient calibration model, capable of identifying and estimating acceptable values for the main sources of uncertainty in odometry calculations. This work presents a new calibration model comparable to the known classical methods, distinguished by the way in which the calibration is performed, this being the main limitation on further improving the results of the proposed method. At the end of the proposed standard procedure, the results are equivalent to those of the known classical methods. Keywords: UMBmark, Odometry, Calibration.
APA, Harvard, Vancouver, ISO, and other styles
24

Silva, Ricardo Luís da Mota. "Removable odometry unit for vehicles with Ackermann steering." Master's thesis, Universidade de Aveiro, 2014. http://hdl.handle.net/10773/13699.

Full text
Abstract:
Mestrado em Engenharia Mecânica
O principal objetivo deste trabalho é o desenvolvimento de uma solução de hodometria para veículos com direção Ackermann. A solução tinha que ser portátil, flexível e fácil de montar. Após o estudo do estado da arte e uma pesquisa de soluções, a solução escolhida foi baseada em hodometria visual. Os passos seguintes do trabalho foram estudar a viabilidade de utilizar câmaras lineares para hodometria visual. O sensor de imagem foi usado para calcular a velocidade longitudinal; e a orientação do movimento foi calculada usando dois giroscópios. Para testar o método, várias experiências foram feitas; as experiências ocorreram indoor, sob condições controladas. Foi testada a capacidade de medir a velocidade em movimentos de linha reta, movimentos diagonais, movimentos circulares e movimentos com variação da distância ao solo. Os dados foram processados usando algoritmos de correlação e os resultados foram documentados. Com base nos resultados, é seguro concluir que hodometria com câmaras lineares auxiliada por sensores inerciais tem um potencial de aplicabilidade no mundo real.
The main objective of this work is to develop an odometry solution for vehicles with Ackermann steering. The solution had to be portable, flexible and easy to mount. After a study of the state of the art and a survey of solutions, the chosen solution was based on visual odometry. The following steps of the work were to study the feasibility of using line scan image sensors for visual odometry. The image sensor was used to compute the longitudinal velocity, and the orientation of motion was computed using two gyroscopes. To test the method, several experiments were made; the experiments took place indoors, under controlled conditions. The ability to measure velocity was tested on straight-line movements, diagonal movements, circular movements and movements with a changing distance from the ground. The data was processed with correlation algorithms and the results were documented. Based on the results it is safe to conclude that odometry with line scan sensors aided by inertial sensors has potential for real-world applicability.
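The correlation step described above, recovering the displacement between two consecutive line scans, might look like the following sketch (an assumed formulation in Python; `pixel_shift` is an illustrative name, not code from the thesis):

```python
import numpy as np

def pixel_shift(scan_a, scan_b):
    """Displacement (in pixels) of scan_b relative to scan_a via cross-correlation.

    With the line rate and ground sampling distance known, the shift per
    scan interval converts directly to longitudinal velocity.
    """
    a = scan_a - np.mean(scan_a)   # remove DC offset so illumination
    b = scan_b - np.mean(scan_b)   # level does not bias the peak
    corr = np.correlate(b, a, mode="full")
    return int(np.argmax(corr)) - (len(a) - 1)
```

Velocity would then be `shift * ground_sampling_distance * line_rate`, under the assumption of a known, roughly constant camera-to-ground distance.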
APA, Harvard, Vancouver, ISO, and other styles
25

Wuthrich, Tori(Tori Lee). "Learning visual odometry primitives for computationally constrained platforms." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122419.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2019
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 51-52).
Autonomous navigation for robotic platforms, particularly techniques that leverage an onboard camera, is currently of significant interest to the robotics community. Designing methods to localize small, resource-constrained robots is a particular challenge due to the limited availability of computing power and physical space for sensors. A computer vision, machine learning-based localization method was proposed by researchers investigating the automation of medical procedures. However, we believed the method to also be promising for robots with low size, weight, and power (SWAP) budgets. Unlike traditional odometry methods, in this case a machine learning model can be trained offline and can then generate odometry measurements quickly and efficiently. This thesis describes the implementation of the learning-based visual odometry method in the context of autonomous drones. We refer to the method as RetiNav due to its similarities with the way the human eye processes light signals from its surroundings. We make several modifications to the method relative to the initial design based on a detailed parameter study, and we test the method on a variety of challenging flight datasets. We show that over the course of a trajectory, RetiNav achieves as low as 1.4% error in predicting the distance traveled. We conclude that such a method is a viable component of a localization system, and propose the next steps for work in this area.
by Tori Wuthrich.
S.M.
S.M. Massachusetts Institute of Technology, Department of Aeronautics and Astronautics
APA, Harvard, Vancouver, ISO, and other styles
26

Henriksson, Johan. "Radar odometry based on Fuzzy-NDT scan registration." Thesis, Örebro universitet, Institutionen för naturvetenskap och teknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-94492.

Full text
Abstract:
Visual and lidar-based odometry for mobile robots has been thoroughly investigated and performs very well in good weather conditions. However, both are sensitive to bad weather conditions with atmospheric disturbances such as rain and snow. Recently, radar sensors specialized for mobile robot use have become available. Radar sensors are much more robust against atmospheric disturbances, which makes them an exciting alternative. This thesis presents a radar odometry pipeline that can handle both lidar and radar data with minor modifications. The results show that it outperforms the current state-of-the-art radar odometry solutions, while also being able to handle 3D lidar odometry with good performance.
APA, Harvard, Vancouver, ISO, and other styles
27

Bezerra, Clauber Gomes. "Localização de um robô móvel usando odometria e marcos naturais." Universidade Federal do Rio Grande do Norte, 2004. http://repositorio.ufrn.br:8080/jspui/handle/123456789/15411.

Full text
Abstract:
Made available in DSpace on 2014-12-17T14:56:01Z (GMT). No. of bitstreams: 1 ClauberGB.pdf: 726956 bytes, checksum: d3fb1b2d7c6ad784a1b7d40c1a54f8f8 (MD5) Previous issue date: 2004-03-08
Several methods of mobile robot navigation require measurement of the robot's position and orientation in its workspace. In the case of wheeled mobile robots, techniques based on odometry determine the robot's localization by integrating the incremental displacements of its wheels. However, this technique is subject to errors that accumulate with the distance traveled by the robot, making its exclusive use unfeasible. Other methods are based on the detection of natural or artificial landmarks, present in the environment, whose location is known. This technique does not generate cumulative errors, but it can require a longer processing time than methods based on odometry. Thus, many methods make use of both techniques, in such a way that the odometry errors are periodically corrected through measurements obtained from landmarks. Following this approach, this work proposes a hybrid localization system for wheeled mobile robots in indoor environments based on odometry and natural landmarks. The landmarks are straight lines defined by the junctions in the environment's floor, forming a bi-dimensional grid. Landmark detection from digital images is performed through the Hough transform, combined with heuristics that allow its application in real time. To reduce the search time for landmarks, we propose mapping odometry errors to an area of the captured image that has a high probability of containing the sought landmark.
Diversos métodos de navegação de robôs móveis requerem a medição da posição e orientação do robô no seu espaço de trabalho. No caso de robôs móveis com rodas, técnicas baseadas em odometria permitem determinar a localização do robô através da integração de medições dos deslocamentos incrementais de suas rodas. No entanto, essa técnica está sujeita a erros que se acumulam com a distância percorrida pelo robô, o que inviabiliza o seu uso exclusivo. Outros métodos se baseiam na detecção de marcos naturais ou artificiais, cuja localização é conhecida, presentes no ambiente. Apesar desta técnica não gerar erros cumulativos, ela pode requisitar um tempo de processamento bem maior do que o uso de odometria. Assim, muitos métodos fazem uso de ambas as técnicas, de modo a corrigir periodicamente os erros de odometria, através de medições obtidas a partir dos marcos. De acordo com esta abordagem, propomos neste trabalho um sistema híbrido de localização para robôs móveis com rodas em ambientes internos, baseado em odometria e marcos naturais, onde os marcos adotados são linhas retas definidas pelas junções existentes no piso do ambiente, formando uma grade bi-dimensional no chão. Para a detecção deste tipo de marco, a partir de imagens digitais, é utilizada a transformada de Hough, associada a heurísticas que permitem a sua aplicação em tempo real. Em particular, para reduzir o tempo de busca dos marcos, propomos mapear erros de odometria em uma região da imagem capturada que possua grande probabilidade de conter o marco procurado.
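The Hough transform that this entry relies on maps each edge pixel to a sinusoid in (theta, rho) space, so collinear pixels pile their votes into one accumulator bin. A minimal voting sketch (illustrative only; the thesis pairs the transform with heuristics and an odometry-driven search window that are not reproduced here):

```python
import math

def hough_lines(edge_pixels, width, height, n_theta=180, rho_step=1.0):
    """Build the (theta, rho) accumulator from edge pixel coordinates.

    Each pixel (x, y) votes for every line rho = x*cos(theta) + y*sin(theta)
    passing through it; peaks in the accumulator correspond to lines.
    """
    diag = math.hypot(width, height)           # max possible |rho|
    n_rho = int(2 * diag / rho_step) + 1
    acc = [[0] * n_rho for _ in range(n_theta)]
    for x, y in edge_pixels:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            acc[t][int((rho + diag) / rho_step)] += 1
    return acc
```

Restricting the vote to a sub-image predicted by the odometry error model, as the abstract describes, shrinks both `edge_pixels` and the accumulator, which is what makes real-time operation feasible.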
APA, Harvard, Vancouver, ISO, and other styles
28

Štěpán, Miroslav. "Model robota Trilobot." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2007. http://www.nusl.cz/ntk/nusl-412784.

Full text
Abstract:
This MSc thesis describes the creation of a motion model of a mobile robot called Trilobot. The model is implemented in a simple simulation tool. Some laboratory experiments with the robot are described, along with information about the SmallDEVS tool and the Squeak Smalltalk environment in which the model was implemented. The motivation of this work is to simplify the design and testing of navigation algorithms for Trilobot, which is available to students of FIT BUT in the robotics lab of the Department of Intelligent Systems. This simple simulation tool could partially reduce dependence on the physical availability of the robot.
APA, Harvard, Vancouver, ISO, and other styles
29

Clark, Ronald. "Visual-inertial odometry, mapping and re-localization through learning." Thesis, University of Oxford, 2017. http://ora.ox.ac.uk/objects/uuid:69b03c50-f315-42f8-ad41-d97cd4c9bf09.

Full text
Abstract:
Precise pose information is a fundamental prerequisite for numerous applications in robotics, AI and mobile computing. Monocular cameras are the ideal sensor for this purpose - they are cheap, lightweight and ubiquitous. As such, monocular visual localization is widely regarded as a cornerstone requirement of machine perception. However, a large gap still exists between the performance that these applications require and that which is achievable through existing monocular perception algorithms. In this thesis we directly tackle the issue of robust egocentric visual localization and mapping through a data-centric approach. As a first major contribution we propose novel learnt models for visual odometry which form the basis of the ego-motion estimates used in later chapters. The proposed approaches are less fragile and much more robust than existing approaches. We present experimental evidence that these approaches can not only approach the accuracy of standard methods but in many cases also show major improvements in computational and memory efficiency. To cope with the drift inherent to the odometry methods, we then introduce a novel learnt spatio-temporal model for performing global relocalization updates. The proposed approach allows one to efficiently infer the global location of an image stream at the fraction of the time of traditional feature-based approaches with minimal loss in localization accuracy. Finally, we present a novel SLAM system integrating our learnt priors for creating 3D maps from monocular image sequences. The approach is designed to harness multiple input sources, including prior depth and ego-motion estimates and incorporates both loop-closure and relocalization updates. The approach, based on the well-established standard visual-inertial structure-from-motion process, allows us to perform accurate posterior inference of camera poses and scene structure to significantly boost the reconstruction robustness and fidelity. 
Through our qualitative and quantitative experimentation on a wide range of datasets, we conclude that the proposed methods can bring accurate visual localization to a wide class of consumer devices and robotic platforms.
APA, Harvard, Vancouver, ISO, and other styles
30

Gui, Jianjun. "Direct visual and inertial odometry for monocular mobile platforms." Thesis, University of Essex, 2018. http://repository.essex.ac.uk/21726/.

Full text
Abstract:
Nowadays visual and inertial information is readily available from small mobile platforms, such as quadcopters. However, due to the limitations of onboard resources and capability, it is still a challenge to develop localisation and mapping estimation algorithms for small mobile platforms. Visual techniques for tracking and motion estimation have been developed abundantly, especially using interest points as features. However, such sparse feature-based methods quickly diverge due to noise, partial occlusion or lighting variation in views. Only in recent years have direct visual approaches, which use pixel information densely, semi-densely or statistically, shown significant improvement in algorithm robustness and stability. On the other hand, inertial sensors measure angular velocity and linear acceleration, which can be integrated to predict relative velocity, position and orientation for mobile platforms. In practical usage, the accumulated error from inertial sensors is often compensated by cameras, while the failure of visual sensors under agile ego-motion can be compensated by inertial motion estimation. Based on the complementary nature of visual and inertial information, this research focuses on how to use direct visual approaches to provide location information through a monocular camera, while fusing it with inertial information to enhance robustness and accuracy. The proposed algorithms are applied to practical datasets collected from mobile platforms. In particular, direct-based and mutual-information-based methods are explored in detail. Two visual-inertial odometry algorithms are proposed in the framework of the multi-state constraint Kalman filter. They are also tested with real data from a flying robot in complex indoor and outdoor environments.
The results show that the direct-based methods have the merits of robustness in image processing and accuracy in the case of moving along straight lines with slight rotation. Furthermore, the visual and inertial fusion strategies are investigated to establish their intrinsic links, and an improvement based on iterative steps in the filter propagation is proposed. In addition, a self-built flying robot was developed for the experimental data collection.
APA, Harvard, Vancouver, ISO, and other styles
31

Myriokefalitakis, Panteleimon. "Real-time conversion of monodepth visual odometry enhanced network." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-288488.

Full text
Abstract:
This thesis work belongs to the field of self-supervised monocular depth estimation and constitutes a conversion of the work done in [1]. The purpose is to take the computationally expensive model in [1] as the baseline model of this work and to create a lightweight model out of it. The current work proposes a network suited to be deployed on embedded devices such as the NVIDIA Jetson TX2, where the needs for short runtime, small memory footprint and low power consumption matter the most. In other words, if those requirements are not met, no matter how high the precision is, the model cannot be functional on embedded processors, and small mobile platforms such as drones, delivery robots, etc. cannot exploit the benefits of deep learning. The proposed network has 29.7× fewer parameters than the baseline model [1] and uses only 10.6 MB for a forward pass, in contrast to the 227 MB used by the network in [1]. Consequently, the proposed model can run on an embedded device's GPU. Lastly, it is able to infer depth at promising speed even on standard CPUs while providing comparable or higher accuracy than other works.
Detta examensarbete tillhör området för självkontrollerad monokulär djupbedömning och utgör en omvandling av det arbete som gjorts under [1]. Syftet är att överväga den beräkningsmässiga dyra modellen i [1] som basmodellen för detta arbete och försöka skapa en lätt modell ur den. Det nuvarande arbetet förutsätter ett nätverk som är lämpligt att distribueras på inbäddade enheter som NVIDIA Jetson TX2 där behoven för kort driftstid, liten minnesfotavtryck och kraftförbrukning är viktigast. Med andra ord, om dessa krav saknas, oavsett om precisionen är extra hög, kan modellen inte fungera på inbäddade processorer. Således kan mobilplattformar med små storlekar som drönare, leveransrobotar, etc. inte utnyttja fördelarna med djupinlärning. Det föreslagna nätverket har 29,7× färre parametrar än baselinemodellen [1] och använder endast 10,6 MB för ett framåtpass i motsats till 227 MB som används av nätverket i [1]. Följaktligen kan den föreslagna modellen fungera på inbäddade enheters GPU. Slutligen kan den dra slutsatsen med lovande hastighet på standard CPUs och samtidigt ger jämförbar eller högre noggrannhet än andra arbeten.
APA, Harvard, Vancouver, ISO, and other styles
32

Chermak, Lounis. "Standalone and embedded stereo visual odometry based navigation solution." Thesis, Cranfield University, 2015. http://dspace.lib.cranfield.ac.uk/handle/1826/9319.

Full text
Abstract:
This thesis investigates techniques for, and designs, an autonomous stereo-vision-based navigation sensor to improve stereo visual odometry for navigation in unknown environments; in particular, autonomous navigation in a space mission context, which imposes challenging constraints on algorithm development and hardware requirements. For instance, the Global Positioning System (GPS) is not available in this context, so a navigation solution cannot rely on similar external sources of information. Handling this problem requires the conception of an intelligent perception-sensing device that provides precise outputs for absolute and relative 6 degrees of freedom (DOF) positioning. This is achieved using only images from calibrated stereo cameras, possibly coupled with an inertial measurement unit (IMU), while fulfilling real-time processing requirements. Moreover, no prior knowledge about the environment is assumed. Robotic navigation has been the motivating research for investigating different and complementary areas such as stereovision, visual motion estimation, optimisation and data fusion, and several contributions have been made in these areas. Firstly, an efficient feature detection, stereo matching and feature tracking strategy based on the Kanade-Lucas-Tomasi (KLT) feature tracker is proposed to form the base of the visual motion estimation. Secondly, in order to cope with extreme illumination changes, a high dynamic range (HDR) imaging solution is investigated and a comparative assessment of feature tracking performance is conducted. Thirdly, a two-view local bundle adjustment scheme based on trust-region minimisation is proposed for precise visual motion estimation. Fourthly, a novel KLT feature tracker using IMU information is integrated into the visual odometry pipeline.
Finally, a smart standalone stereo visual/IMU navigation sensor has been designed, integrating an innovative combination of hardware with the novel software solutions proposed above. As a result of a balanced combination of hardware and software implementation, we achieved 5 fps processing of up to 750 initial features at a resolution of 1280x960, the highest resolution reached in real time for visual odometry applications to our knowledge. In addition, the visual odometry accuracy of our algorithm matches the state of the art, with less than 1% relative error in the estimated trajectories.
APA, Harvard, Vancouver, ISO, and other styles
33

Greenberg, Jacob. "Visual Odometry for Autonomous MAV with On-Board Processing." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-177290.

Full text
Abstract:
A new visual registration algorithm (Adaptive Iterative Closest Keypoint, AICK) is tested and evaluated as a positioning tool on a Micro Aerial Vehicle (MAV). Captured frames from a Kinect-like RGB-D camera are analyzed and an estimated position of the MAV is extracted. The hope is to find a positioning solution for GPS-denied environments; this thesis focuses on an indoor office environment. The MAV is flown manually, capturing in-flight RGB-D images which are registered with the AICK algorithm. The result is analyzed to conclude whether AICK is viable for autonomous flight based on on-board position estimates. The results show potential for a working autonomous MAV in GPS-denied environments; however, some surroundings have proven difficult. The lack of visual features on, e.g., a white wall causes problems and uncertainties in the positioning, which is even more troublesome when the distance to the surroundings exceeds the RGB-D camera's depth range. With further work on these weaknesses, we believe that a robust autonomous MAV using AICK for positioning is plausible.
En ny visuell registreringsalgoritm (Adaptive Iterative Closest Keypoint, AICK) testas och utvärderas som ett positioneringsverktyg på en Micro Aerial Vehicle (MAV). Tagna bilder från en Kinect liknande RGB-D kamera analyseras och en approximerad position av MAVen beräknas. Förhoppningen är att hitta en positioneringslösning för miljöer utan GPS förbindelse, där detta arbete fokuserar på kontorsmiljöer inomhus. MAVen flygs manuellt samtidigt som RGB-D bilder tas, dessa registreras sedan med hjälp av AICK. Resultatet analyseras för att kunna dra en slutsats om AICK är en rimlig metod eller inte för att åstadkomma autonom flygning med hjälp av den uppskattade positionen. Resultatet visar potentialen för en fungerande autonom MAV i miljöer utan GPS förbindelse, men det finns testade miljöer där AICK i dagsläget fungerar undermåligt. Bristen på visuella särdrag på t.ex. en vit vägg inför problem och osäkerheter i positioneringen, ännu mer besvärande är det när avståndet till omgivningen överskrider RGB-D kamerornas räckvidd. Med fortsatt arbete med dessa svagheter är en robust autonom MAV som använder AICK för positioneringen rimlig.
APA, Harvard, Vancouver, ISO, and other styles
34

Svoboda, Ondřej. "Analýza vlastností stereokamery ZED ve venkovním prostředí." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2019. http://www.nusl.cz/ntk/nusl-399416.

Full text
Abstract:
The Master's thesis focuses on analyzing the ZED stereo camera in the outdoor environment. ZEDfu visual odometry is compared with commonly used methods such as GPS or wheel odometry. Moreover, the thesis also includes analyses of SLAM in a changing outdoor environment. The simultaneous mapping and localization in RTAB-Map was processed separately with SIFT and BRISK descriptors. The aim of this thesis is to analyze the behaviour of the ZED camera in the outdoor environment for future implementation in mobile robotics.
APA, Harvard, Vancouver, ISO, and other styles
35

Gräter, Johannes [Verfasser]. "Monokulare Visuelle Odometrie auf Multisensorplattformen für autonome Fahrzeuge / Johannes Gräter." Karlsruhe : KIT Scientific Publishing, 2019. http://d-nb.info/1196294682/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Proenca, Pedro F. "Robust RGB-D odometry under depth uncertainty for structured environments." Thesis, University of Surrey, 2018. http://epubs.surrey.ac.uk/849961/.

Full text
Abstract:
Visual odometry, the process of tracking the trajectory of a moving camera from its captured video, is a fundamental problem behind autonomous mobile robotics and augmented reality applications. Yet, despite almost 40 years of extensive research on the problem, state-of-the-art systems are still vulnerable to several pitfalls that arise in challenging environments due to specific sensor limitations and restrictive assumptions. This thesis, in particular, investigates the use of RGB-D cameras for robust visual odometry in man-made environments, such as industrial plants. These spaces, contrary to natural environments, follow mainly a rectilinear structure made of simple geometric entities. Thus, this work exploits this structure by taking a feature-based approach, where lines, planes and cylinder segments are explicitly extracted as visual cues for egomotion estimation. While the depth captured by RGB-D cameras helps to resolve the ambiguity inherent to passive cameras, especially on uniform and low-textured surfaces, these active cameras suffer from several limitations, such as limited operating range, near-infrared light interference and systematic errors, which may deteriorate the performance of RGB-D odometry by producing incomplete and noisy depth maps. To address these issues, we first developed a visual odometry framework that leverages both depth measurements from active sensing and depth estimates from temporal stereo obtained via probabilistic filtering. Our experiments demonstrate that this framework is able to operate in large indoor and outdoor spaces, where the absence and inaccuracy of depth measurements are too severe to rely on RGB-D odometry alone. Secondly, this thesis considers the depth sensor error by proposing a depth fusion framework based on a Mixture of Gaussians to denoise the depth measurements and model their uncertainties through spatio-temporal observations.
Extensive results on RGB-D sequences show that applying this depth model to RGB-D odometry significantly improves its performance and supports our hypothesis that the uncertainty of fused depth needs to be exposed. To fully exploit this probabilistic depth model, the depth uncertainty needs to be propagated throughout the visual odometry pipeline. Therefore, we reformulated the visual odometry system as a probabilistic process by (i) deriving plane and 3D line fitting solutions that model the uncertainties of the feature parameters and (ii) estimating the camera pose by combining different feature-type matches weighted by their respective uncertainties. Lastly, this thesis addresses man-made environments that also contain smooth curved surfaces by proposing a curve-aware plane and cylinder extraction algorithm, which is shown empirically to be more efficient and accurate than an alternative state-of-the-art plane extraction approach, leading ultimately to better visual odometry performance in scenes made of cylindrical surfaces. To incorporate this feature extractor in visual odometry, the system described above is extended to handle cylinder primitives.
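The benefit of fusing depth while tracking its uncertainty can be illustrated with a plain inverse-variance Gaussian update. This is a generic sketch of the idea, not the Mixture-of-Gaussians model of the thesis, and the measurement values are invented:

```python
def fuse(mu1, var1, mu2, var2):
    """Fuse two independent Gaussian depth estimates (inverse-variance weighting).

    The fused variance is always smaller than either input variance."""
    w = var2 / (var1 + var2)          # weight of the first estimate
    mu = w * mu1 + (1.0 - w) * mu2
    var = var1 * var2 / (var1 + var2)
    return mu, var

# An active-sensor reading (2.00 m, noisy) fused with a temporal-stereo estimate
mu, var = fuse(2.00, 0.04, 2.10, 0.01)
```

Exposing the fused variance downstream, rather than treating the fused depth as exact, is the kind of uncertainty propagation the thesis argues for.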
APA, Harvard, Vancouver, ISO, and other styles
37

Frey, Kristoffer M. (Kristoffer Martin). "Sparsity and computation reduction for high-rate visual-inertial odometry." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/113745.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2017.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 147-151).
The navigation problem for mobile robots operating in unknown environments can be posed as a subset of Simultaneous Localization and Mapping (SLAM). For computationally constrained systems, maintaining and promoting system sparsity is key to achieving the high-rate solutions required for agile trajectory tracking. This thesis focuses on the computation involved in the elimination step of optimization, showing it to be a function of the corresponding graph structure. This observation directly motivates the search for measurement selection techniques that promote sparse structure and reduce computation. While many sophisticated selection techniques exist in the literature, relatively little attention has been paid to the simple yet ubiquitous heuristic of decimation. This thesis shows that decimation produces graphs with an inherently sparse, partitioned super-structure. Furthermore, it is shown analytically for single-landmark graphs that the even spacing of observations characteristic of decimation is near-optimal in a weighted number-of-spanning-trees sense. Recent results in the SLAM community suggest that maximizing this connectivity metric corresponds to good information-theoretic performance. Simulation results confirm that decimation-style strategies perform as well as or better than sophisticated policies that require significant computation to execute. Given that decimation consumes negligible computation to evaluate, the performance demonstrated here makes it a formidable measurement selection strategy for high-rate, real-time SLAM solutions. Finally, the SAMWISE visual-inertial estimator is described, and thorough experimental results demonstrate its robustness in a variety of scenarios, particularly against the challenges prescribed by the DARPA Fast Lightweight Autonomy program.
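The decimation heuristic analyzed here is almost trivial to implement, which is part of its appeal; a minimal sketch (the measurement stream and rate below are made up):

```python
def decimate(measurements, keep_every):
    """Keep every k-th measurement, discarding the rest.

    This yields the evenly spaced observations whose near-optimality (in a
    weighted number-of-spanning-trees sense) the thesis analyzes."""
    return [m for i, m in enumerate(measurements) if i % keep_every == 0]

kept = decimate(list(range(20)), keep_every=5)
```

Because evaluating this policy costs essentially nothing, any accuracy it retains relative to sophisticated selection schemes is a net win at high rates.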
This thesis was supported by the Defense Advanced Research Projects Agency (DARPA) under the Fast Lightweight Autonomy program.
by Kristoffer M. Frey.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
38

Verpers, Felix. "Improving a stereo-based visual odometry prototype with global optimization." Thesis, Uppsala universitet, Avdelningen för visuell information och interaktion, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-383268.

Full text
Abstract:
In this degree project, global optimization methods for a previously developed software prototype of a stereo odometry system were studied. The existing software estimates the motion between stereo frames and builds up a map of selected stereo frames, which accumulates increasing error over time. The aim of the project was to study methods to mitigate the error accumulated over time in the step-wise motion estimation. One approach based on relative pose estimates and another approach based on reprojection optimization were implemented and evaluated for the existing platform. The results indicate that optimization based on relative keyframe estimates is promising for real-time usage. The second strategy, based on reprojection of stereo-triangulated points, proved useful as a refinement step, but the relatively small error reduction comes at an increased computational cost. Therefore, this approach requires further improvements to become applicable in situations where corrections are needed in real time, and it is hard to justify the increased computation for the relatively small error reduction. The results also show that the global optimization primarily improves the absolute trajectory error.
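The step-wise motion estimation referred to above chains relative pose estimates, which is exactly where drift accumulates; a toy 2-D composition sketch (not the prototype's code, with invented step values):

```python
import math

def compose(a, b):
    """Compose two planar poses (x, y, theta): apply relative motion b in a's frame."""
    x, y, th = a
    bx, by, bth = b
    return (x + bx * math.cos(th) - by * math.sin(th),
            y + bx * math.sin(th) + by * math.cos(th),
            th + bth)

# Chain four identical steps: 1 m forward, then turn 90 degrees (a unit square).
# Any error in one relative estimate propagates to every later pose, which is
# what global optimization over the keyframe graph is meant to correct.
pose = (0.0, 0.0, 0.0)
for rel in [(1.0, 0.0, math.pi / 2.0)] * 4:
    pose = compose(pose, rel)
```

The perfect steps close the square exactly; perturbing a single relative step shifts every subsequent pose, which is the accumulated error that relative-pose optimization mitigates.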
APA, Harvard, Vancouver, ISO, and other styles
39

Araújo, Darla Caroline da Silva 1989. "Uso de fluxo óptico na odometria visual aplicada a robótica." [s.n.], 2015. http://repositorio.unicamp.br/jspui/handle/REPOSIP/265835.

Full text
Abstract:
Orientador: Paulo Roberto Gardel Kurka
Dissertação (mestrado) - Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica
Made available in DSpace on 2018-08-26T21:38:28Z (GMT). No. of bitstreams: 1 Araujo_DarlaCarolinedaSilva_M.pdf: 5678583 bytes, checksum: a6ed9886369705a8853f15d431565a3d (MD5) Previous issue date: 2015
Resumo: O presente trabalho descreve um método de odometria visual empregando a técnica de fluxo óptico, para estimar o movimento de um robô móvel, através de imagens digitais capturadas de duas câmeras estereoscópicas nele fixadas. Busca-se assim a construção de um mapa para a localização do Robô. Esta proposta, além de alternativa ao cálculo autônomo de movimento realizado por outros tipos de sensores como GPS, laser, sonares, utiliza uma técnica de processamento óptico de grande eficiência computacional. Foi construído um ambiente 3D para simulação do movimento do robô e captura das imagens necessárias para estimar sua trajetória e verificar a acurácia da técnica proposta. Utiliza-se a técnica de fluxo óptico de Lucas Kanade na identificação de características em imagens. Os resultados obtidos neste trabalho são de grande importância para os estudos de navegação robótica
Abstract: This work describes a visual odometry method employing the optical flow technique to estimate the motion of a mobile robot from digital images captured by two stereoscopic cameras fixed on it, with the aim of building a map for the robot's localization. Besides being an alternative to autonomous motion estimation performed with other types of sensors such as GPS, laser or sonar, this proposal uses an optical processing technique of high computational efficiency. A 3D environment was built to simulate the robot's motion and capture the images needed to estimate its trajectory and verify the accuracy of the proposed technique. The Lucas-Kanade optical flow technique is used to identify features in the images. The results obtained in this work are of great importance for robotic navigation studies.
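The Lucas-Kanade technique mentioned above reduces, in one dimension, to a single least-squares estimate from image gradients. The following is a toy sketch under the brightness-constancy assumption, not the dissertation's implementation; the signals are invented:

```python
def lk_shift_1d(i0, i1):
    """First-order Lucas-Kanade: least-squares shift d such that i1(x) ~ i0(x - d)."""
    num = den = 0.0
    for k in range(1, len(i0) - 1):
        ix = (i0[k + 1] - i0[k - 1]) / 2.0   # spatial gradient (central difference)
        it = i1[k] - i0[k]                   # temporal difference
        num += it * ix
        den += ix * ix
    return -num / den                        # it ~ -d*ix  =>  d = -sum(it*ix)/sum(ix^2)

i0 = [0.5 * k for k in range(10)]   # a linear intensity ramp
i1 = [v - 0.5 * 0.4 for v in i0]    # the same ramp shifted right by 0.4 samples
shift = lk_shift_1d(i0, i1)
```

On a pure ramp the estimate is exact; on real images the same normal equations are solved per window, typically inside a coarse-to-fine pyramid.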
Mestrado
Mecanica dos Sólidos e Projeto Mecanico
Mestra em Engenharia Mecânica
APA, Harvard, Vancouver, ISO, and other styles
40

Santos, Vinícius Araújo. "SiameseVO-Depth: odometria visual através de redes neurais convolucionais siamesas." Universidade Federal de Goiás, 2018. http://repositorio.bc.ufg.br/tede/handle/tede/9083.

Full text
Abstract:
Submitted by Luciana Ferreira (lucgeral@gmail.com) on 2018-11-21T11:05:44Z No. of bitstreams: 2 Dissertação - Vinícius Araújo Santos - 2018.pdf: 14601054 bytes, checksum: e02a8bcd3cdc93bf2bf202c3933b3f27 (MD5) license_rdf: 0 bytes, checksum: d41d8cd98f00b204e9800998ecf8427e (MD5)
Approved for entry into archive by Luciana Ferreira (lucgeral@gmail.com) on 2018-11-21T11:06:26Z (GMT) No. of bitstreams: 2 Dissertação - Vinícius Araújo Santos - 2018.pdf: 14601054 bytes, checksum: e02a8bcd3cdc93bf2bf202c3933b3f27 (MD5) license_rdf: 0 bytes, checksum: d41d8cd98f00b204e9800998ecf8427e (MD5)
Made available in DSpace on 2018-11-21T11:06:26Z (GMT). No. of bitstreams: 2 Dissertação - Vinícius Araújo Santos - 2018.pdf: 14601054 bytes, checksum: e02a8bcd3cdc93bf2bf202c3933b3f27 (MD5) license_rdf: 0 bytes, checksum: d41d8cd98f00b204e9800998ecf8427e (MD5) Previous issue date: 2018-10-11
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES
Visual Odometry is an important process in image-based robot navigation. The standard methods in this field rely on good feature matching between frames, and feature detection in images stands as a well-addressed problem within Computer Vision. Such techniques are subject to illumination problems, noise and poor feature localization accuracy. Thus, 3D information on a scene may mitigate the uncertainty of the features in images. Deep Learning techniques show great results when dealing with common difficulties of VO such as low illumination conditions and bad feature selection. While Visual Odometry and Deep Learning have been connected previously, no techniques applying Siamese Convolutional Networks to the depth information given by disparity maps were found as far as this work's research went. This work aims to fill this gap by applying Deep Learning to estimate egomotion from disparity maps in a Siamese architecture. The SiameseVO-Depth architecture is compared to state-of-the-art VO techniques using the KITTI Vision Benchmark Suite. The results reveal that the chosen methodology succeeds in estimating Visual Odometry, although it does not outperform state-of-the-art techniques. This work involves fewer steps than standard VO techniques, since it is an end-to-end solution, and demonstrates a new approach to Deep Learning applied to Visual Odometry.
Odometria Visual é um importante processo na navegação de robôs baseada em imagens. Os métodos clássicos deste tema dependem de boas correspondências de características feitas entre imagens sendo que a detecção de características em imagens é um tema amplamente discutido no campo de Visão Computacional. Estas técnicas estão sujeitas a problemas de iluminação, presença de ruído e baixa de acurácia de localização. Nesse contexto, a informação tridimensional de uma cena pode ser uma forma de mitigar as incertezas sobre as características em imagens. Técnicas de Deep Learning têm demonstrado bons resultados lidando com problemas comuns em técnicas de OV como insuficiente iluminação e erros na seleção de características. Ainda que já existam trabalhos que relacionam Odometria Visual e Deep Learning, não foram encontradas técnicas que utilizem Redes Convolucionais Siamesas com sucesso utilizando informações de profundidade de mapas de disparidade durante esta pesquisa. Este trabalho visa preencher esta lacuna aplicando Deep Learning na estimativa do movimento por de mapas de disparidade em uma arquitetura Siamesa. A arquitetura SiameseVO-Depth proposta neste trabalho é comparada à técnicas do estado da arte em OV utilizando a base de dados KITTI Vision Benchmark Suite. Os resultados demonstram que através da metodologia proposta é possível a estimativa dos valores de uma Odometria Visual ainda que o desempenho não supere técnicas consideradas estado da arte. O trabalho proposto possui menos etapas em comparação com técnicas clássicas de OV por apresentar-se como uma solução fim-a-fim e apresenta nova abordagem no campo de Deep Learning aplicado à Odometria Visual.
APA, Harvard, Vancouver, ISO, and other styles
41

Aksjonova, Jevgenija. "LDD: Learned Detector and Descriptor of Points for Visual Odometry." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-233571.

Full text
Abstract:
Simultaneous localization and mapping is an important problem in robotics that can be solved using visual odometry -- the process of estimating ego-motion from subsequent camera images. In turn, visual odometry systems rely on point matching between different frames. This work presents a novel method for matching key-points by applying neural networks to point detection and description. Traditionally, point detectors are used to select good key-points (such as corners), and these key-points are then matched using features extracted with descriptors. In this work, however, a descriptor is trained to match points densely, and a detector is then trained to predict which points are most likely to be matched correctly with the descriptor. This information is further used for the selection of good key-points. The results of this project show that this approach can lead to more accurate results than model-based methods.
Samtidig lokalisering och kartläggning är ett viktigt problem inom robotik som kan lösas med hjälp av visuell odometri -- processen att uppskatta självrörelse från efterföljande kamerabilder. Visuella odometrisystem förlitar sig i sin tur på punktmatchningar mellan olika bildrutor. Detta arbete presenterar en ny metod för matchning av nyckelpunkter genom att applicera neurala nätverk för detektion av punkter och deskriptorer. Traditionellt sett används punktdetektorer för att välja ut bra nyckelpunkter (som hörn) och sedan används dessa nyckelpunkter för att matcha särdrag. I detta arbete tränas istället en deskriptor att matcha punkterna. Sedan tränas en detektor till att förutspå vilka punker som är mest troliga att matchas korrekt med deskriptorn. Denna information används sedan för att välja ut bra nyckelpunkter. Resultatet av projektet visar att det kan leda till mer precisa resultat jämfört med andra modellbaserade metoder.
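Whether hand-crafted or learned as in this thesis, descriptor matching ultimately reduces to nearest-neighbour search in descriptor space. A toy mutual-nearest-neighbour matcher with made-up 3-D descriptors (not the thesis's network output) looks like:

```python
def mutual_matches(desc_a, desc_b):
    """Match descriptors by mutual nearest neighbour under squared Euclidean distance."""
    def nn(q, pool):
        return min(range(len(pool)),
                   key=lambda j: sum((x - y) ** 2 for x, y in zip(q, pool[j])))
    matches = []
    for i, d in enumerate(desc_a):
        j = nn(d, desc_b)
        if nn(desc_b[j], desc_a) == i:   # keep only symmetric matches
            matches.append((i, j))
    return matches

a = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
b = [(0.0, 0.9, 0.1), (0.9, 0.1, 0.0)]
pairs = mutual_matches(a, b)
```

The mutual (symmetric) check is a cheap way to reject one-sided matches; a learned detector, as proposed here, goes further by predicting which points will survive it.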
APA, Harvard, Vancouver, ISO, and other styles
42

Awang, Salleh Dayang Nur Salmi Dharmiza. "Study of vehicle localization optimization with visual odometry trajectory tracking." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS601.

Full text
Abstract:
Au sein des systèmes avancés d’aide à la conduite (Advanced Driver Assistance Systems - ADAS) pour les systèmes de transport intelligents (Intelligent Transport Systems - ITS), les systèmes de positionnement, ou de localisation, du véhicule jouent un rôle primordial. Le système GPS (Global Positioning System) largement employé ne peut donner seul un résultat précis à cause de facteurs extérieurs comme un environnement contraint ou l’affaiblissement des signaux. Ces erreurs peuvent être en partie corrigées en fusionnant les données GPS avec des informations supplémentaires provenant d'autres capteurs. La multiplication des systèmes d’aide à la conduite disponibles dans les véhicules nécessite de plus en plus de capteurs installés et augmente le volume de données utilisables. Dans ce cadre, nous nous sommes intéressés à la fusion des données provenant de capteurs bas cout pour améliorer le positionnement du véhicule. Parmi ces sources d’information, en parallèle au GPS, nous avons considérés les caméras disponibles sur les véhicules dans le but de faire de l’odométrie visuelle (Visual Odometry - VO), couplée à une carte de l’environnement. Nous avons étudié les caractéristiques de cette trajectoire reconstituée dans le but d’améliorer la qualité du positionnement latéral et longitudinal du véhicule sur la route, et de détecter les changements de voies possibles. Après avoir été fusionnée avec les données GPS, cette trajectoire générée est couplée avec la carte de l’environnement provenant d’Open-StreetMap (OSM). L'erreur de positionnement latérale est réduite en utilisant les informations de distribution de voie fournies par OSM, tandis que le positionnement longitudinal est optimisé avec une correspondance de courbes entre la trajectoire provenant de l’odométrie visuelle et les routes segmentées décrites dans OSM. 
Pour vérifier la robustesse du système, la méthode a été validée avec des jeux de données KITTI en considérant des données GPS bruitées par des modèles de bruits usuels. Plusieurs méthodes d’odométrie visuelle ont été utilisées pour comparer l’influence de la méthode sur le niveau d'amélioration du résultat après fusion des données. En utilisant la technique d’appariement des courbes que nous proposons, la précision du positionnement connait une amélioration significative, en particulier pour l’erreur longitudinale. Les performances de localisation sont comparables à celles des techniques SLAM (Simultaneous Localization And Mapping), corrigeant l’erreur d’orientation initiale provenant de l’odométrie visuelle. Nous avons ensuite employé la trajectoire provenant de l’odométrie visuelle dans le cadre de la détection de changement de voie. Cette indication est utile dans pour les systèmes de navigation des véhicules. La détection de changement de voie a été réalisée par une somme cumulative et une technique d’ajustement de courbe et obtient de très bon taux de réussite. Des perspectives de recherche sur la stratégie de détection sont proposées pour déterminer la voie initiale du véhicule. En conclusion, les résultats obtenus lors de ces travaux montrent l’intérêt de l’utilisation de la trajectoire provenant de l’odométrie visuelle comme source d’information pour la fusion de données à faible coût pour la localisation des véhicules. Cette source d’information provenant de la caméra est complémentaire aux données d’images traitées qui pourront par ailleurs être utilisées pour les différentes taches visée par les systèmes d’aides à la conduite
With the growing research on Advanced Driver Assistance Systems (ADAS) for Intelligent Transport Systems (ITS), accurate vehicle localization plays an important role in intelligent vehicles. The Global Positioning System (GPS) has been widely used, but its accuracy deteriorates and it is susceptible to positioning error due to factors such as restrictive environments that result in signal weakening. This problem can be addressed by integrating the GPS data with additional information from other sensors. Meanwhile, nowadays, we can find vehicles equipped with sensors for ADAS applications. In this research, fusion of GPS with visual odometry (VO) and a digital map is proposed as a solution for localization improvement with low-cost data fusion. From the published works on VO, it is interesting to know how the generated trajectory can further improve vehicle localization. By integrating the VO output with GPS and OpenStreetMap (OSM) data, estimates of the vehicle position on the map can be obtained. The lateral positioning error is reduced by utilizing lane distribution information provided by OSM, while the longitudinal positioning is optimized with curve matching between the VO trajectory trail and segmented roads. To assess the system robustness, the method was validated with KITTI datasets corrupted with different common GPS noise models. Several published VO methods were also used to compare the level of improvement after data fusion. Validation results show that the positioning accuracy achieved significant improvement, especially for the longitudinal error, with the curve matching technique. The localization performance is on par with Simultaneous Localization and Mapping (SLAM) techniques despite the drift in the VO trajectory input. The research on the employability of the VO trajectory is extended to a deterministic task in lane-change detection, to assist the routing service with lane-level directions in navigation.
The lane-change detection was conducted with CUSUM and a curve fitting technique, which resulted in 100% successful detection for stereo VO. Further study of the detection strategy is, however, required to obtain the current true lane of the vehicle for lane-level accurate localization. With the results obtained from the proposed low-cost data fusion for localization, we see a bright prospect of utilizing the VO trajectory with information from OSM to improve the performance. In addition to providing the VO trajectory, the camera mounted on the vehicle can also be used for other image processing applications to complement the system. This research will continue to develop, with future works outlined in the last chapter of this thesis.
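A one-sided CUSUM detector of the kind used here for lane-change detection can be sketched in a few lines; the drift, threshold and signal values are illustrative, not those of the thesis:

```python
def cusum(samples, drift, threshold):
    """One-sided CUSUM: index at which a sustained positive change is flagged, or -1."""
    s = 0.0
    for i, x in enumerate(samples):
        s = max(0.0, s + x - drift)   # accumulate evidence above the drift allowance
        if s > threshold:
            return i
    return -1

# Lateral-offset residual: near zero, then a sustained shift (a lane change)
signal = [0.0, 0.1, -0.1, 0.0, 0.9, 1.1, 1.0, 1.2]
idx = cusum(signal, drift=0.3, threshold=1.5)
```

The drift term suppresses noise-level fluctuations, while the threshold trades detection delay against false alarms.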
APA, Harvard, Vancouver, ISO, and other styles
43

Jílek, Tomáš. "Pokročilá navigace v heterogenních multirobotických systémech ve vnějším prostředí." Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2015. http://www.nusl.cz/ntk/nusl-234530.

Full text
Abstract:
The doctoral thesis discusses current options for the navigation of unmanned ground vehicles, with a focus on achieving high absolute agreement between the required motion trajectory and the obtained one. The current possibilities of key self-localization methods, such as global satellite navigation systems, inertial navigation systems and odometry, are analyzed. The core of the thesis is the description of a navigation method that achieves centimeter-level accuracy of the required trajectory tracking with the above-mentioned self-localization methods. The new navigation method was designed with regard to very simple parameterization, respecting the limitations of the robot drive configuration used; after appropriate parameterization, it can therefore be applied to any drive configuration. The concept of the navigation method allows several self-localization systems and external navigation methods to be integrated and used simultaneously, which increases the overall robustness of the whole mobile robot navigation process. The thesis also deals with the solution of cooperative convoying of heterogeneous mobile robots. The proposed algorithms were validated under real outdoor conditions in three different experiments.
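Accurate trajectory tracking typically builds on a geometric steering law; the following pure-pursuit-style heading computation is a generic illustration, not the navigation method of the thesis (the pose and look-ahead target are invented):

```python
import math

def pursuit_heading(pose, target):
    """Heading error from the current pose (x, y, theta) to a look-ahead target point,
    wrapped to (-pi, pi] so the controller never commands the long way round."""
    x, y, th = pose
    desired = math.atan2(target[1] - y, target[0] - x)
    err = desired - th
    return math.atan2(math.sin(err), math.cos(err))

err = pursuit_heading((0.0, 0.0, 0.0), (1.0, 1.0))   # target 45 degrees to the left
```

Feeding such a heading error into a steering controller, with the gain adapted to the drive configuration, is one simple way a trajectory-tracking method can stay drive-agnostic.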
APA, Harvard, Vancouver, ISO, and other styles
44

Vodrážka, Jakub. "Návrh konstrukce mobilního autonomního robotu." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2010. http://www.nusl.cz/ntk/nusl-229186.

Full text
Abstract:
The thesis deals with the design of a device for testing localization techniques for indoor navigation. An autonomous robot was chosen as the most appropriate platform for this testing. The thesis is divided into three parts. The first one describes various kinds of robots, their possible uses and the sensors that could be of use for solving the problem. The second part deals with the design and construction of the robot. The robot is built on a differential-type chassis with a supporting caster. Two electric motors, each with a gearbox and an output shaft speed sensor, form the drive unit. The body of the robot was designed for good functionality and an attractive overall look, as the robot is also used for presentations of robotics. The thesis provides the complete design of the chassis and body construction, along with the control section and sensorics. The last part describes a statistical model of the robot's movement, based on several experiments performed to find possible deviations of the sensor measurements compared to the real situation.
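A differential-drive chassis with output shaft speed sensors implies the standard dead-reckoning odometry update; the following is a generic sketch (the wheel base and encoder increments are made-up values, not the robot's parameters):

```python
import math

def odom_step(pose, d_left, d_right, wheel_base):
    """Advance a planar pose (x, y, theta) by one pair of wheel displacements."""
    x, y, th = pose
    d = (d_left + d_right) / 2.0            # distance travelled by the chassis centre
    dth = (d_right - d_left) / wheel_base   # heading change
    return (x + d * math.cos(th + dth / 2.0),   # midpoint heading reduces arc error
            y + d * math.sin(th + dth / 2.0),
            th + dth)

pose = (0.0, 0.0, 0.0)
for _ in range(10):                  # equal wheel steps: drive straight 0.5 m
    pose = odom_step(pose, 0.05, 0.05, wheel_base=0.3)
```

Systematic deviations between such predicted poses and the real motion are exactly what a statistical movement model, like the one in the last part of the thesis, characterizes.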
APA, Harvard, Vancouver, ISO, and other styles
45

Epton, Thomas. "Odometry correction of a mobile robot using a range-finding laser." Connect to this title online, 2007. http://etd.lib.clemson.edu/documents/1202499136/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Gonzalez, Cadenillas Clayder Alejandro. "An improved feature extractor for the lidar odometry and mapping algorithm." Tesis, Universidad de Chile, 2019. http://repositorio.uchile.cl/handle/2250/171499.

Full text
Abstract:
Thesis submitted for the degree of Master of Engineering Sciences, Electrical Engineering.
Feature extraction is a critical task in feature-based Simultaneous Localization and Mapping (SLAM), one of the most important problems in the robotics community. An algorithm that solves SLAM using LiDAR-based features is the LiDAR Odometry and Mapping (LOAM) algorithm, currently regarded as the best-performing SLAM algorithm on the KITTI benchmark. LOAM solves the SLAM problem through a feature-matching approach, and its feature extraction algorithm classifies the points of a point cloud as planar or sharp. This classification results from an equation that defines a smoothness level for each point. However, this equation does not consider the range noise of the sensor. Therefore, if the LiDAR range noise is high, LOAM's feature extractor may confuse planar and sharp points, causing the feature-matching task to fail. This thesis proposes replacing the feature extraction algorithm of the original LOAM with the Curvature Scale Space (CSS) algorithm, chosen after studying several feature extractors in the literature. The CSS algorithm can potentially improve feature extraction in noisy environments thanks to its multiple levels of Gaussian smoothing. The replacement of LOAM's original feature extractor with the CSS algorithm was achieved by adapting CSS to the Velodyne VLP-16 3D LiDAR. The LOAM and CSS feature extractors were tested and compared on real and simulated data, including the KITTI dataset, using the Optimal Sub-Pattern Assignment (OSPA) and Absolute Trajectory Error (ATE) metrics. For all these datasets, the feature extraction performance of CSS was better than that of the LOAM algorithm in terms of the OSPA and ATE metrics.
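LOAM's smoothness value, which classifies scan points as planar or sharp, can be sketched directly; the window size and point coordinates below are illustrative, not the thesis's parameters:

```python
def smoothness(points, i, half_window=2):
    """LOAM-style curvature of scan point i: norm of the summed difference vectors
    to its neighbours, normalised by the point's range.

    Low values indicate planar points; high values indicate sharp (edge) points."""
    xi = points[i]
    acc = [0.0, 0.0, 0.0]
    for j in range(i - half_window, i + half_window + 1):
        if j != i:
            for k in range(3):
                acc[k] += points[j][k] - xi[k]
    norm = lambda v: sum(c * c for c in v) ** 0.5
    return norm(acc) / (2 * half_window * norm(xi))

flat = [(1.0 + 0.1 * t, 2.0, 0.0) for t in range(5)]          # points on a wall
corner = [(1.0, 0.0, 0.0), (1.1, 0.0, 0.0), (1.2, 0.0, 0.0),
          (1.2, 0.1, 0.0), (1.2, 0.2, 0.0)]                   # a sharp corner
c_flat, c_corner = smoothness(flat, 2), smoothness(corner, 2)
```

Because every term is a raw range difference, range noise enters this score directly, which is the weakness that motivates replacing the extractor with a noise-robust alternative such as CSS.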
APA, Harvard, Vancouver, ISO, and other styles
47

Delgado, Vargas Jaime Armando 1986. "Localização e navegação de robô autônomo através de odometria e visão estereoscópica." [s.n.], 2012. http://repositorio.unicamp.br/jspui/handle/REPOSIP/264542.

Full text
Abstract:
Orientador: Paulo Roberto Gardel Kurka
Dissertação (mestrado) - Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica
Resumo: This work presents the implementation of a navigation system with stereoscopic vision on a mobile robot, enabling environment mapping and localization. This requires knowledge of the robot's kinematic model, control techniques, algorithms for identifying image features, 3D reconstruction with stereoscopic vision, and navigation algorithms. Camera calibration methods developed within the FEM/UNICAMP research group and from the literature are used. Experimental and theoretical results are compared. Additional results validate the camera calibration algorithm, the accuracy of the sensors, the control system response, and the 3D reconstruction. The results of this work are important for future studies in robotic navigation and camera calibration.
Abstract: This work presents a navigation system with stereoscopic vision on a mobile robot, which allows the construction of an environment map and localization. This requires knowledge of the robot's kinematic model, algorithms for identifying image features (such as SIFT), 3D reconstruction with stereoscopic vision, and navigation algorithms. Camera calibration methods developed within the FEM/UNICAMP research group and from the literature are used. Experimental and theoretical results are compared. Additional results show the validation of the camera calibration algorithm, the accuracy of the sensors, the control system response, and the 3D reconstruction. These results are important for future studies of robotic navigation and camera calibration.
Master's degree
Solid Mechanics and Mechanical Design
Master in Mechanical Engineering
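The 3D reconstruction step this abstract describes rests on stereo triangulation. A minimal sketch under the standard rectified pinhole model is shown below; the function name and all numeric values are my own illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def triangulate_rectified(xl, xr, y, f, cx, cy, baseline):
    """Recover a 3D point from a rectified stereo pair.

    xl, xr: horizontal pixel coordinates of the same feature in the
    left/right images; y: shared vertical coordinate; f: focal length
    in pixels; (cx, cy): principal point; baseline: camera separation.
    """
    d = xl - xr                  # disparity in pixels
    if d <= 0:
        raise ValueError("non-positive disparity: point at infinity or bad match")
    Z = f * baseline / d         # depth along the optical axis
    X = (xl - cx) * Z / f
    Y = (y - cy) * Z / f
    return np.array([X, Y, Z])

# A feature seen at xl=400, xr=380 with f=700 px and a 0.12 m baseline:
p = triangulate_rectified(400, 380, 250, f=700, cx=320, cy=240, baseline=0.12)
```

Depth error grows quadratically with depth for a fixed disparity error, which is why calibration accuracy matters so much in this kind of system.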
APA, Harvard, Vancouver, ISO, and other styles
48

Terzakis, George. "Visual odometry and mapping in natural environments for arbitrary camera motion models." Thesis, University of Plymouth, 2016. http://hdl.handle.net/10026.1/6686.

Full text
Abstract:
This is a thesis on outdoor monocular visual SLAM in natural environments. The techniques proposed herein aim at estimating camera pose and the 3D geometrical structure of the surrounding environment. This problem statement was motivated by the GPS-denied scenario for a sea-surface vehicle developed at Plymouth University named Springer. The algorithms proposed in this thesis are mainly adapted to Springer’s environmental conditions, so that the vehicle can navigate on a vision-based localization system when GPS is not available; such environments include estuarine areas, forests and the occasional semi-urban territories. The research objectives are constrained versions of the ever-abiding problems in the fields of multiple view geometry and mobile robotics. The research proposes new techniques or improves existing ones for problems such as scene reconstruction, relative camera pose recovery and filtering, always in the context of the aforementioned landscapes (i.e., rivers, forests, etc.). Although visual tracking is paramount for the generation of data point correspondences, this thesis focuses primarily on the geometric aspect of the problem as well as the probabilistic framework in which the optimization of pose and structure estimates takes place. Besides algorithms, the deliverables of this research include the respective implementations and test data in the form of a software library and a dataset containing footage of estuarine regions taken from a boat, along with synchronized sensor logs. This thesis is not the final analysis on vision-based navigation. It merely proposes various solutions for the localization problem of a vehicle navigating in natural environments either on land or on the surface of the water. Although these solutions can provide position and orientation estimates when GPS is not available, they have limitations and there is still a vast new world of ideas to be explored.
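The "relative camera pose recovery" mentioned above is classically done by decomposing an essential matrix into rotation and translation candidates. The sketch below shows the textbook SVD-based decomposition, not any algorithm specific to this thesis; all names are my own.

```python
import numpy as np

def decompose_essential(E):
    """Four candidate (R, t) poses from an essential matrix.

    The correct pose among the four is the one that places the
    triangulated points in front of both cameras (cheirality
    check, omitted here)."""
    U, _, Vt = np.linalg.svd(E)
    # Force proper rotations (determinant +1).
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    t = U[:, 2]  # translation is recovered only up to scale
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]

def skew(v):
    return np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])

# Build an essential matrix E = [t]_x R from a known pose and decompose it.
R_true, t_true = np.eye(3), np.array([1.0, 0.0, 0.0])
E = skew(t_true) @ R_true
poses = decompose_essential(E)
```

The scale ambiguity of `t` is exactly why monocular SLAM needs an external scale reference (or accepts an arbitrarily scaled map).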
APA, Harvard, Vancouver, ISO, and other styles
49

Janíček, Kryštof. "Odhad rychlosti vozidla ze záznamu on-board kamery." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2018. http://www.nusl.cz/ntk/nusl-385901.

Full text
Abstract:
This thesis describes the design and implementation of a system for vehicle speed estimation from an on-board camera recording. Speed estimation is based on optical flow estimation and a convolutional neural network. The designed system is able to estimate speed with an average error of 20% on the created dataset when the actual speed is greater than 35 kilometers per hour.
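The pipeline in this abstract, optical-flow features feeding a learned regressor, can be caricatured by substituting a linear model for the convolutional network. Everything below is an illustrative stand-in with synthetic data, not the thesis implementation; all names are my own.

```python
import numpy as np

def flow_feature(flow):
    """Mean optical-flow magnitude of one frame pair.

    flow: H x W x 2 array of per-pixel (dx, dy) displacements, e.g.
    the output of a dense optical flow estimator."""
    return np.linalg.norm(flow, axis=2).mean()

def fit_speed_model(flows, speeds):
    """Least-squares fit of speed ~ a * feature + b, a linear stand-in
    for the convolutional network used in the thesis."""
    x = np.array([flow_feature(f) for f in flows])
    A = np.vstack([x, np.ones_like(x)]).T
    (a, b), *_ = np.linalg.lstsq(A, np.array(speeds), rcond=None)
    return a, b

# Synthetic training data: flow magnitude grows linearly with speed.
rng = np.random.default_rng(0)
speeds = [40.0, 60.0, 80.0]
flows = [rng.normal(0, 0.01, (8, 8, 2)) + v / 10.0 for v in speeds]
a, b = fit_speed_model(flows, speeds)
```

In the real system the flow field depends on scene depth as well as speed, which is what makes a learned, nonlinear model necessary.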
APA, Harvard, Vancouver, ISO, and other styles
50

Peñaloza, González Andrés. "Implementación de odometría visual utilizando una cámara estereoscópica." Tesis, Universidad de Chile, 2015. http://repositorio.uchile.cl/handle/2250/137817.

Full text
Abstract:
Ingeniero Civil Eléctrico
In certain robotics applications it is important to use an odometer to estimate the position of a moving robot. This gives the agent a notion of its location in the environment through which it moves. In applications such as autonomous vehicles this is especially important, since knowing the vehicle's position with respect to its internal map is critical to avoid collisions. The most commonly used odometry sources are wheel encoders and GPS. However, these are not always available, due to adverse environmental conditions. For these reasons, visual odometry is employed. Visual odometry is the process of estimating the motion of a vehicle or agent using the images it obtains from its cameras. It has been used in the mining industry on haul trucks and, more recently, on aerial drones that could be used for package delivery. It has also been used to estimate the position of the robots currently traversing the surface of Mars. The purpose of this work is the implementation of a visual odometry algorithm using a stereoscopic camera to estimate the trajectory of a robot, and the evaluation of its performance by comparison with known position values. The methodology used makes it possible to identify which parameters of the motion estimation algorithm are most relevant and how they influence the speed and quality of the solution. The influence of lighting conditions is also determined, as well as which geometric zone of the image is best for triangulating points. The solution consists of a system capable of executing the different parts required by the algorithm in an extensible way, making it easy to replace a method in the future with minimal impact on the code.
Favorable results are obtained, with a small motion estimation error, and conclusions are drawn about the most important factors in the execution of the algorithm. The speed of the algorithm is discussed and solutions are proposed to aid its real-time implementation.
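The core of stereo visual odometry as described above is recovering the rigid motion between two frames from matched 3D points. A minimal sketch using the closed-form SVD-based (Kabsch/Horn) solution follows; it illustrates the general technique, not this thesis's specific implementation, and all names and numbers are my own.

```python
import numpy as np

def estimate_motion(P, Q):
    """Rigid transform (R, t) with Q ~ R @ P + t, from matched 3D points.

    P, Q: N x 3 arrays of corresponding points triangulated from the
    stereo pair at two consecutive poses. Closed-form SVD solution."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)          # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection solution (det = -1).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# A known 10-degree yaw and forward translation should be recovered exactly
# from noiseless correspondences.
theta = np.radians(10)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([0.0, 0.0, 0.3])
P = np.random.default_rng(1).uniform(-2, 2, (30, 3))
Q = P @ R_true.T + t_true
R_est, t_est = estimate_motion(P, Q)
```

In practice this step is wrapped in an outlier-rejection loop such as RANSAC, since stereo matches from real imagery contain gross errors.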
APA, Harvard, Vancouver, ISO, and other styles
