To see the other types of publications on this topic, follow the link: Odometria.

Dissertations / Theses on the topic 'Odometria'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Odometria.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Pärkkä, J. (Jarmo). "Reaaliaikainen visuaalinen odometria." Master's thesis, University of Oulu, 2013. http://urn.fi/URN:NBN:fi:oulu-201312021943.

Full text
Abstract:
Visual odometry is the process of estimating the motion of a vehicle, human, or robot using input from one or more cameras. Application domains include robotics, wearable computing, augmented reality, and the automotive industry. It is a good supplement to navigation systems because it operates in environments where GPS does not. Visual odometry was developed as a substitute for wheel odometry, because its use does not depend on the terrain and it can be applied without restriction to the mode of movement (wheels, flying, walking). In this work, a visual odometry method is examined and developed for use in a real-time embedded system. The basics of visual odometry are discussed, and simultaneous localization and mapping (SLAM), of which visual odometry can form a part, is introduced. The purpose of this work is to develop a visual odometry algorithm for Parrot's robot helicopter AR.Drone 2.0, so that it could fly autonomously in the future. The algorithm is based on Civera's EKF-SLAM method, with feature extraction replaced by an approach used earlier in global motion estimation. The operation of the algorithm is tested by measuring its execution time on different image sequences and by analyzing the camera's movement from the map it draws. Furthermore, the plausibility of the navigation information it produces is examined. The operation of the implemented system is analyzed visually on the basis of the video, and its behavior is compared against a reference method. The developed visual odometry method is found to be a functional solution for a real-time embedded system, subject to certain constraints.
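The EKF correction at the core of such a filter-based method can be illustrated with a minimal, generic sketch (plain NumPy; the 1-D toy measurement and all values are illustrative, not the thesis's inverse-depth formulation):

```python
import numpy as np

def ekf_update(x, P, z, h_x, H, R):
    """Generic EKF measurement update: correct the state estimate x
    (covariance P) with measurement z, predicted measurement h_x,
    measurement Jacobian H and measurement noise covariance R."""
    y = z - h_x                          # innovation
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Toy 1-D example: a position state observed directly with noise.
x = np.array([0.0])
P = np.array([[1.0]])
z = np.array([0.5])
H = np.array([[1.0]])
R = np.array([[0.1]])
x, P = ekf_update(x, P, z, H @ x, H, R)   # state moves toward z, P shrinks
```

In a visual odometry EKF, `z` would be the pixel positions of tracked landmarks and `H` the Jacobian of the camera projection, but the correction step has this same shape.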
APA, Harvard, Vancouver, ISO, and other styles
2

Nishitani, André Toshio Nogueira. "Localização baseada em odometria visual." Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-17082016-095838/.

Full text
Abstract:
The localization problem consists of estimating the position of a robot with respect to some external reference, and it is an essential part of the navigation systems of robots and autonomous vehicles. Localization based on visual odometry, compared to encoder-based odometry, stands out in the estimation of the rotation and direction of movement. This kind of approach is also an attractive choice for vehicle control systems in urban environments, where visual information is needed to extract semantic information from street signs, traffic lights, and other markings. In this context, this work proposes the development of a visual odometry system based on 3D reconstruction, using visual information acquired from a monocular camera to estimate the vehicle's pose. The absolute scale problem, inherent to the use of monocular cameras, is solved using previously known information about the metric relation between image points and world points lying on a common plane.
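The ground-plane scale recovery mentioned above can be sketched under a simple pinhole assumption (a level camera at known height; the calibration numbers below are hypothetical, not the thesis's):

```python
def ground_point_depth(y_px, f_px, cam_height_m):
    """Depth of a ground-plane point from its image-row offset below the
    principal point, for a level pinhole camera at known height:
    y = f * h / Z  =>  Z = f * h / y."""
    return f_px * cam_height_m / y_px

# Hypothetical calibration: focal length 700 px, camera 1.5 m above the plane.
Z = ground_point_depth(y_px=105.0, f_px=700.0, cam_height_m=1.5)  # -> 10.0 m
```

Because the metric camera height is known, a single ground-plane observation fixes the absolute scale that a monocular reconstruction alone cannot provide.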
APA, Harvard, Vancouver, ISO, and other styles
3

Tomasi, Junior Darci Luiz. "Modelo de calibração para sistemas de odometria robótica." reponame:Repositório Institucional da UFPR, 2016. http://hdl.handle.net/1884/45704.

Full text
Abstract:
Advisor: Prof. Dr. Eduardo Todt
Master's dissertation - Universidade Federal do Paraná, Setor de Ciências Exatas, Graduate Program in Informatics. Defended: Curitiba, 30/11/2016
Includes references: f. 39
Abstract: In order to navigate a robotic base in an unfamiliar environment, some mechanism to detect positioning and location must be provided. When the robot is navigating and makes use of this mechanism, errors from the environment and from the robotic base itself are introduced into the system, resulting in an erroneous position estimate. One way to reduce the magnitude of these errors is an efficient calibration model, capable of identifying and estimating acceptable values for the main sources of uncertainty in odometry calculations. This work presents a new calibration model comparable to the known classical methods, distinguished by the way in which the calibration is performed, which is also the main limitation on improving results with the proposed method. After the proposed standard procedure is carried out, the results are equivalent to those of the known classical methods. Keywords: UMBmark, Odometry, Calibration.
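As context for what such a calibration corrects, here is a minimal differential-drive odometry step with per-wheel correction factors (a generic sketch with illustrative names and values, not the proposed model or UMBmark itself):

```python
import math

def odom_step(x, y, th, dl, dr, base, cl=1.0, cr=1.0):
    """One differential-drive odometry update. dl, dr are raw wheel
    displacements, cl, cr per-wheel calibration factors (compensating
    e.g. unequal wheel diameters) and base is the wheelbase."""
    dl, dr = cl * dl, cr * dr
    d = (dl + dr) / 2.0                # distance travelled by the center
    dth = (dr - dl) / base             # heading change
    x += d * math.cos(th + dth / 2.0)
    y += d * math.sin(th + dth / 2.0)
    return x, y, th + dth

# A straight 1 m run with the right wheel reading 1% too long: without
# calibration the integrated pose curves; cr = 1/1.01 compensates.
x = y = th = 0.0
for _ in range(100):
    x, y, th = odom_step(x, y, th, 0.01, 0.0101, base=0.3, cr=1 / 1.01)
```

Calibration procedures of the UMBmark family estimate exactly such correction factors by driving closed reference paths and measuring the return-position error.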
APA, Harvard, Vancouver, ISO, and other styles
4

Silva, Bruno Marques Ferreira da. "Odometria visual baseada em técnicas de structure from motion." Universidade Federal do Rio Grande do Norte, 2011. http://repositorio.ufrn.br:8080/jspui/handle/123456789/15364.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
Visual Odometry is the process that estimates camera position and orientation based solely on images and on features (projections of visual landmarks present in the scene) extracted from them. With the increasing advances in Computer Vision algorithms and computer processing power, the subarea known as Structure from Motion (SFM) started to supply mathematical tools for localization systems in robotics and Augmented Reality applications, in contrast with its initial purpose of serving inherently offline solutions aimed at 3D reconstruction and image-based modelling. Accordingly, this work proposes a relative-pose pipeline featuring a previously calibrated camera as the positional sensor, based entirely on models and algorithms from SFM. Techniques usually applied in camera localization systems, such as Kalman filters and particle filters, are not used, making additional information such as probabilistic models of camera state transitions unnecessary. Experiments assessing both the 3D reconstruction quality and the camera positions estimated by the system were performed, in which image sequences captured in realistic scenarios were processed and compared to localization data gathered from a mobile robotic platform.
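One standard SFM building block in pipelines of this kind is recovering the relative camera pose from an essential matrix. A minimal NumPy sketch of the textbook SVD decomposition (a generic procedure, not necessarily this pipeline's exact implementation):

```python
import numpy as np

def skew(t):
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0]])

def decompose_essential(E):
    """Four (R, t) candidates from an essential matrix via SVD; in a full
    pipeline the correct one is selected by a cheirality (positive-depth)
    test on triangulated points."""
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])
    t = U[:, 2]
    return [(U @ W @ Vt, t), (U @ W @ Vt, -t),
            (U @ W.T @ Vt, t), (U @ W.T @ Vt, -t)]

# Synthetic motion: a small yaw combined with a sideways unit translation.
a = 0.1
R_true = np.array([[np.cos(a), -np.sin(a), 0],
                   [np.sin(a), np.cos(a), 0],
                   [0, 0, 1]])
t_true = np.array([1.0, 0.0, 0.0])
cands = decompose_essential(skew(t_true) @ R_true)
```

Note that `t` is recovered only up to scale, which is precisely why monocular SFM pipelines need an external scale reference.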
APA, Harvard, Vancouver, ISO, and other styles
5

Araújo, Darla Caroline da Silva 1989. "Uso de fluxo óptico na odometria visual aplicada a robótica." [s.n.], 2015. http://repositorio.unicamp.br/jspui/handle/REPOSIP/265835.

Full text
Abstract:
Advisor: Paulo Roberto Gardel Kurka
Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica
Abstract: This work describes a visual odometry method using the optical flow technique to estimate the motion of a mobile robot from digital images captured by two stereoscopic cameras fixed on it, in order to obtain a map for localizing the robot. Besides being an alternative to autonomous motion estimation performed with other types of sensors, such as GPS, laser, and sonar, this proposal uses an optical processing technique of high computational efficiency. To verify the accuracy of the technique, a 3D environment was built to simulate the robot performing a trajectory and to capture the images needed to estimate it. The Lucas-Kanade optical flow technique is used to identify features in the images. The results of this work are of great importance for future studies of robotic navigation.
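The Lucas-Kanade step can be sketched as a single-window least-squares problem (a pure-NumPy illustration on a synthetic image pair, not the thesis's implementation, which would use pyramids and many local windows):

```python
import numpy as np

def lk_flow(I0, I1):
    """Single-window Lucas-Kanade: solve the least-squares system
    [Ix Iy] d = -It over the whole patch for one flow vector d."""
    Iy, Ix = np.gradient(I0)            # np.gradient returns rows (y) first
    It = I1 - I0
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    d, *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return d                            # (dx, dy)

# Synthetic pair: a smooth blob translated by one pixel in x.
ys, xs = np.mgrid[0:64, 0:64]
blob = lambda cx, cy: np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / 50.0)
dx, dy = lk_flow(blob(30, 32), blob(31, 32))   # flow close to (1, 0)
```

The linearization is only valid for small displacements, which is why practical trackers solve this iteratively over image pyramids.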
Master's degree
Solid Mechanics and Mechanical Design
Master in Mechanical Engineering
APA, Harvard, Vancouver, ISO, and other styles
6

Santos, Vinícius Araújo. "SiameseVO-Depth: odometria visual através de redes neurais convolucionais siamesas." Universidade Federal de Goiás, 2018. http://repositorio.bc.ufg.br/tede/handle/tede/9083.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES
Visual Odometry is an important process in image-based robot navigation. The standard methods of this field rely on good feature matching between frames, and feature detection in images stands as a well-addressed problem within Computer Vision. Such techniques are subject to illumination problems, noise, and poor feature localization accuracy. Thus, 3D information on a scene may mitigate the uncertainty of the features in images. Deep Learning techniques show great results when dealing with common difficulties of VO, such as low illumination conditions and bad feature selection. While Visual Odometry and Deep Learning have been connected previously, no techniques applying Siamese Convolutional Networks to depth information given by disparity maps were found as far as this work's research went. This work aims to fill this gap by applying Deep Learning to estimate egomotion from disparity maps with a Siamese architecture. The SiameseVO-Depth architecture is compared to state-of-the-art VO techniques using the KITTI Vision Benchmark Suite. The results reveal that the chosen methodology succeeded in estimating Visual Odometry, although it does not outperform the state-of-the-art techniques. This work involves fewer steps than standard VO techniques, as it is an end-to-end solution, and it demonstrates a new approach to Deep Learning applied to Visual Odometry.
APA, Harvard, Vancouver, ISO, and other styles
7

Pereira, Fabio Irigon. "High precision monocular visual odometry." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2018. http://hdl.handle.net/10183/183233.

Full text
Abstract:
Recovering three-dimensional information from bi-dimensional images is an important problem in computer vision that finds several applications in our society. Robotics, the entertainment industry, medical diagnosis and prosthetics, and even interplanetary exploration benefit from vision-based 3D estimation. The problem can be divided into two interdependent operations: estimating the camera position and orientation when each image was produced, and estimating the 3D scene structure. This work focuses on computer vision techniques used to estimate the trajectory of a camera-equipped vehicle, a problem known as visual odometry. In order to provide an objective measure of estimation efficiency and to compare the achieved results to the state of the art in visual odometry, a popular high-precision dataset was selected and used. In the course of this work, new techniques for image feature tracking, camera pose estimation, 3D point position calculation, and scale recovery are proposed. The achieved results outperform the best-ranked results on the chosen dataset at the time of publication.
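One of the constituent operations, computing a 3D point position from two camera poses, can be sketched with linear (DLT) triangulation (a generic method; the intrinsics and poses below are made up for illustration and are not from this thesis):

```python
import numpy as np

def triangulate(P0, P1, x0, x1):
    """Linear (DLT) triangulation of one scene point from two 3x4
    projection matrices and the point's pixel observations."""
    A = np.vstack([x0[0] * P0[2] - P0[0],
                   x0[1] * P0[2] - P0[1],
                   x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1]])
    _, _, Vt = np.linalg.svd(A)          # null vector of A is the point
    X = Vt[-1]
    return X[:3] / X[3]

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Made-up intrinsics and a 1 m sideways camera displacement.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
P0 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P1 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, -0.2, 4.0])
X_est = triangulate(P0, P1, project(P0, X_true), project(P1, X_true))
```

With noisy observations the linear solution is typically refined by minimizing reprojection error.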
APA, Harvard, Vancouver, ISO, and other styles
8

Bezerra, Clauber Gomes. "Localização de um robô móvel usando odometria e marcos naturais." Universidade Federal do Rio Grande do Norte, 2004. http://repositorio.ufrn.br:8080/jspui/handle/123456789/15411.

Full text
Abstract:
Several methods of mobile robot navigation require measuring the robot's position and orientation in its workspace. In the case of wheeled mobile robots, techniques based on odometry determine the robot's localization by integrating the incremental displacements of its wheels. However, this technique is subject to errors that accumulate with the distance traveled, making its exclusive use unfeasible. Other methods are based on detecting natural or artificial landmarks whose location in the environment is known. This technique does not generate cumulative errors, but it can require a larger processing time than methods based on odometry. Thus, many methods make use of both techniques, in such a way that the odometry errors are periodically corrected through measurements obtained from landmarks. Following this approach, this work proposes a hybrid localization system for wheeled mobile robots in indoor environments based on odometry and natural landmarks. The landmarks are straight lines defined by the junctions in the environment's floor, forming a bi-dimensional grid. Landmark detection from digital images is performed through the Hough transform, combined with heuristics that allow its application in real time. To reduce the landmark search time, we propose mapping odometry errors to an area of the captured image that has a high probability of containing the sought landmark.
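The Hough transform used for the floor-line landmarks can be sketched as a minimal accumulator over (rho, theta) bins (a generic illustration, without the thesis's real-time heuristics or odometry-guided search window):

```python
import numpy as np

def hough_peak(edges, n_theta=180):
    """Minimal Hough transform: each edge pixel votes for all lines
    rho = x*cos(theta) + y*sin(theta) passing through it; the strongest
    accumulator bin gives the dominant line."""
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    r, t = np.unravel_index(acc.argmax(), acc.shape)
    return r - diag, thetas[t]

# Synthetic floor junction: a vertical edge at x = 20.
img = np.zeros((50, 50), dtype=bool)
img[:, 20] = True
rho, theta = hough_peak(img)   # -> (20, 0.0)
```

Restricting the voting to a predicted image region, as the thesis proposes, shrinks both the edge set and the accumulator that must be searched.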
APA, Harvard, Vancouver, ISO, and other styles
9

Delgado, Vargas Jaime Armando 1986. "Localização e navegação de robô autônomo através de odometria e visão estereoscópica." [s.n.], 2012. http://repositorio.unicamp.br/jspui/handle/REPOSIP/264542.

Full text
Abstract:
Advisor: Paulo Roberto Gardel Kurka
Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica
Abstract: This work presents a navigation system with stereoscopic vision on a mobile robot, which allows the construction of an environment map and localization. This requires knowing the kinematic model of the robot, control techniques, algorithms for identifying features in images (such as SIFT), 3D reconstruction with stereoscopic vision, and navigation algorithms. Camera calibration methods developed within the FEM/UNICAMP research group and taken from the literature are used. Results of experimental and theoretical analyses are compared. Additional results show the validation of the camera calibration algorithm, the accuracy of the sensors, the control system response, and the 3D reconstruction. These results are important for future studies of robotic navigation and camera calibration.
Master's degree
Solid Mechanics and Mechanical Design
Master in Mechanical Engineering
APA, Harvard, Vancouver, ISO, and other styles
10

Santos, Cristiano Flores dos. "Um framework para avaliação de mapeamento tridimensional Utilizando técnicas de estereoscopia e odometria visual." Universidade Federal de Santa Maria, 2016. http://repositorio.ufsm.br/handle/1/12038.

Full text
Abstract:
Three-dimensional mapping of environments has been intensively studied in the last decade. Among the benefits of this research topic, one can highlight added autonomy for cars or even drones. A three-dimensional representation also allows a given scene to be viewed interactively and in greater detail. However, up to the time of this work, no framework had been found that presents in detail the implementation of algorithms to perform 3D mapping of outdoor environments while approaching real-time processing. In view of this, this work developed a framework covering the main stages of three-dimensional reconstruction. Stereoscopy was chosen as the technique for acquiring the depth information of the scene. In addition, this study evaluated four depth-map generation algorithms, achieving a rate of 9 frames per second.
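Depth-map generation by block matching, the kind of algorithm evaluated above, can be sketched as a per-pixel SAD search along the scanline (a toy single-pixel illustration, not one of the four algorithms the thesis evaluates):

```python
import numpy as np

def disparity_sad(left, right, row, col, win=3, max_d=16):
    """Disparity of one left-image pixel by SAD block matching along the
    same row of a rectified pair: a scene point at left column c appears
    in the right image at column c - d for disparity d."""
    patch = left[row - win:row + win + 1, col - win:col + win + 1]
    costs = [np.abs(patch - right[row - win:row + win + 1,
                                  col - d - win:col - d + win + 1]).sum()
             for d in range(max_d)]
    return int(np.argmin(costs))

# Synthetic rectified pair: the right view is the left view shifted 5 px.
rng = np.random.default_rng(0)
left = rng.random((40, 80))
right = np.roll(left, -5, axis=1)
d = disparity_sad(left, right, row=20, col=40)   # -> 5
```

Depth then follows from `Z = f * B / d` for focal length `f` and stereo baseline `B`; real-time implementations vectorize this search over the whole image.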
APA, Harvard, Vancouver, ISO, and other styles
11

Selvatici, Antonio Henrique Pinto. "AAREACT: uma arquitetura comportamental adaptativa para robôs móveis que integra visão, sonares e odometria." Universidade de São Paulo, 2005. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-10062005-104556/.

Full text
Abstract:
It is desirable that mobile robots applied to real-world tasks perform their operations in previously unknown environments, so a mobile robot architecture capable of adaptation is very suitable. This work presents an adaptive architecture for mobile robots called AAREACT, which learns how to coordinate primitive behaviors codified by the Potential Fields method through reinforcement learning. Each behavior uses the information of a single sensor (vision, sonar, or odometry). This work also details the development of the vision sensor, which uses time-to-collision information obtained from image sequences to detect distances to frontal obstacles. The proposed architecture's performance is compared to that of an architecture with fixed coordination of its behaviors, and it shows better results. The obtained results also suggest that AAREACT has good adaptation skills.
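A primitive Potential Fields behavior of the kind being coordinated can be sketched as a sum of attractive and repulsive forces (a generic formulation with illustrative gains, not AAREACT's actual behaviors):

```python
import numpy as np

def potential_field_force(pos, goal, obstacles, k_att=1.0, k_rep=0.5, d0=2.0):
    """Resultant Potential Fields command: linear attraction toward the
    goal plus a repulsive term for each obstacle inside radius d0."""
    force = k_att * (goal - pos)
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if d < d0:                       # obstacle within influence radius
            force += k_rep * (1.0 / d - 1.0 / d0) / d ** 2 * (diff / d)
    return force

pos = np.array([0.0, 0.0])
goal = np.array([5.0, 0.0])
f = potential_field_force(pos, goal, [np.array([1.0, 1.0])])
# f points toward the goal while bending away from the obstacle.
```

In an adaptive architecture like the one described, reinforcement learning would tune how the outputs of several such behaviors are weighted rather than the fields themselves.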
APA, Harvard, Vancouver, ISO, and other styles
12

Souza, Anderson Abner de Santana. "Mapeamento com Sonar Usando Grade de Ocupação baseado em Modelagem Probabilística." Universidade Federal do Rio Grande do Norte, 2008. http://repositorio.ufrn.br:8080/jspui/handle/123456789/15203.

Full text
Abstract:
In this work, we propose a probabilistic mapping method in which the mapped environment is represented by a modified occupancy grid. The main idea of the proposed method is to allow a mobile robot to construct, in a systematic and incremental way, the geometry of the underlying space, obtaining at the end a complete environment map. As a consequence, the robot can move through the environment safely, based on a confidence value for the data obtained from its perceptive system. The map is represented coherently with the robot's sensory data, whether noisy or not, coming from its exteroceptive and proprioceptive sensors. The characteristic noise incorporated in the data from these sensors is treated by probabilistic modeling, in such a way that its effects are visible in the final result of the mapping process. The results of the experiments performed indicate the viability of the methodology and its applicability in the area of autonomous mobile robotics, thus being a contribution to the field
In this work we propose a probabilistic mapping method in which the mapped environment is represented by a modified occupancy grid. The main idea of the proposed method is to let a mobile robot systematically and incrementally build the geometry of its surroundings, obtaining at the end a complete map of the environment. As a consequence, the robot can move through its environment safely, relying on a confidence index for the data gathered by its perceptual system. The map is represented consistently with the sensory data, noisy or not, coming from the robot's exteroceptive and proprioceptive sensors. The characteristic noise embedded in the data from these sensors is treated by probabilistic modeling, so that its effects are visible in the final result of the mapping process. The results of the experiments reported in this work indicate the viability of the methodology and its applicability to the area of autonomous mobile robotics, thus being a contribution to the field
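The modified occupancy grid described in this abstract is, in most implementations, maintained as a per-cell log-odds Bayes filter. The following is only a generic sketch of that idea, not the thesis's actual model; the inverse-sensor-model constants are invented:

```python
import math

# Illustrative inverse sensor model (invented constants, not from the thesis):
# log-odds increments for cells a sonar reading marks as occupied or free.
L_OCC, L_FREE = 0.85, -0.4

def update_cell(l_prev, hit):
    """One Bayesian log-odds update of a single grid cell."""
    return l_prev + (L_OCC if hit else L_FREE)

def probability(l):
    """Convert log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))

# A cell repeatedly observed as occupied converges towards p = 1,
# while occasional contradictory (noisy) readings only pull it back gradually.
l = 0.0                      # prior: p = 0.5 (unknown)
for _ in range(5):
    l = update_cell(l, hit=True)
print(round(probability(l), 3))
```

The log-odds form keeps the per-cell update to a single addition, which is what makes the incremental construction of the map cheap.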
13

Delgado, Vargas Jaime Armando 1986. "Odometria visual e fusão de sensores no problema de localização e mapeamento simultâneo de ambientes exteriores." [s.n.], 2015. http://repositorio.unicamp.br/jspui/handle/REPOSIP/265760.

Full text
Abstract:
Advisor: Paulo Roberto Gardel Kurka
Doctoral thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica
Abstract: The localization of mobile robots is the focus of study of several research groups around the world. Mobile robots are equipped with different sensors and use a variety of localization methods for exploring unknown environments or following a predefined trajectory. This work presents the investigation and implementation of a robust and efficient motion-estimation method using computer vision, known as visual odometry. The fusion of motion estimates from different sensors through the Kalman filter technique is also studied. Stereo cameras with fixed 9 mm lenses and simulations of camera motion in the 3D-Max environment are used. The algorithms are validated experimentally on a Seekur Jr robotic platform equipped with lasers, GPS, wheel encoders, and stereo cameras. The robot's motion is estimated by the different sensors, yielding redundant localization information. The visual odometry algorithms are validated in indoor and outdoor environments. The processing speed of the methods is compared on different CPU- and GPU-type processors, indicating the possibility of a real-time visual odometry system
Abstract: The localization of mobile robots is a problem addressed by a number of research groups around the world. Mobile robots are equipped with different sensors and use a variety of localization methods when exploring unknown environments or following a pre-defined trajectory. The present work investigates and implements a robust method for estimating movement using computer vision, known as visual odometry. The work also investigates the fusion of the movement estimates obtained from different sensors, using the Kalman filter technique. Visual odometry uses stereoscopic vision techniques with real-time computing on graphics processing units (GPU). Stereoscopic cameras with fixed 9 mm lenses and movement simulations in the 3D-Max computer environment are used in the present work. Experimental validation of the visual odometry algorithms is performed on a Seekur Jr mobile robot platform, equipped with lasers, GPS, wheel encoders and stereoscopic cameras. Movements of the robot are estimated from the different sensors, yielding redundant localization information, which is fused through the Kalman filter. Visual odometry algorithms are tested in indoor and outdoor navigation experiments. The processing speed of the methods is compared on different processing units, CPU and GPU, indicating the possibility of performing visual odometry in real time
Doctorate
Solid Mechanics and Mechanical Design
Doctor of Mechanical Engineering
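The Kalman-filter fusion of redundant localization estimates described in this abstract reduces, in the scalar case, to inverse-variance weighting. A minimal sketch (the sensor variances below are invented, not taken from the thesis):

```python
def fuse(x1, var1, x2, var2):
    """Fuse two independent estimates of the same quantity:
    a scalar Kalman measurement update (inverse-variance weighting)."""
    k = var1 / (var1 + var2)      # Kalman gain
    x = x1 + k * (x2 - x1)        # fused estimate
    var = (1.0 - k) * var1        # fused variance, never larger than var1
    return x, var

# e.g. wheel-odometry position vs. visual-odometry position (numbers invented)
x, var = fuse(10.0, 4.0, 12.0, 1.0)
print(x, var)
```

The fused estimate lands closer to the more certain sensor, and the fused variance is smaller than either input variance, which is why redundant sensing pays off.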
14

Santos, Guilherme Leal. "Localização de robôs móveis autônomos utilizando fusão sensorial de odometria e visão monocular." Universidade Federal do Rio Grande do Norte, 2010. http://repositorio.ufrn.br:8080/jspui/handle/123456789/15334.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
The development and refinement of techniques that perform simultaneous localization and mapping (SLAM) for an autonomous mobile robot and build local 3-D maps from a sequence of images is widely studied in the scientific community. This work presents a monocular visual SLAM technique based on the extended Kalman filter, which uses features found in a sequence of images with the SURF (Speeded Up Robust Features) descriptor and determines which features can be used as landmarks through a delayed-initialization technique based on 3-D straight lines. For this, only the coordinates of the features found in the image and the intrinsic and extrinsic camera parameters are available. The position of the landmarks can be determined only once depth information is available. Tests showed that, along its route, the mobile robot detects features in the images and, through the proposed technique for delayed landmark initialization, adds new landmarks to the state vector of the extended Kalman filter (EKF) after estimating the depth of the features. With the estimated positions of the landmarks, it was possible to estimate the updated position of the robot at each step, obtaining good results that demonstrate the effectiveness of the monocular visual SLAM system proposed in this paper
The development and improvement of techniques that simultaneously perform the localization and mapping (SLAM) of an autonomous mobile robot and build local 3-D maps from a sequence of images is widely studied in the scientific community. This work presents a monocular visual SLAM technique based on the extended Kalman filter, which uses features found in a sequence of images through the SURF (Speeded Up Robust Features) descriptor and determines which features can be used as landmarks through a delayed-initialization technique based on 3-D lines. For this, only the coordinates of the features detected in the image and the intrinsic and extrinsic camera parameters are available. The position of the landmarks can be determined only once depth information is available. The experiments showed that, along its path, the mobile robot detects features in the images and, through the proposed technique for delayed landmark initialization, adds new landmarks to the state vector of the extended Kalman filter (EKF) after estimating the depth of the features. With the estimated landmark positions, it was possible to estimate the updated position of the robot at each step, obtaining satisfactory results that confirm the effectiveness of the monocular visual SLAM system proposed in this work
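Delayed initialization, as summarized above, ends with the new landmark being appended to the EKF state vector and covariance matrix. A simplified sketch of that augmentation step (cross-covariances between robot and landmark are left at zero here for brevity, whereas a full EKF-SLAM initialization would fill them in as well; all numeric values are invented):

```python
def augment_state(x, P, landmark, var_init):
    """Append a newly initialized landmark to the EKF state vector x
    and covariance matrix P. In delayed initialization this happens
    only once the landmark's depth has been estimated. Cross-
    covariances between robot and landmark are left at zero here;
    a full EKF-SLAM initialization would fill them in as well."""
    n, m = len(x), len(landmark)
    x_new = list(x) + list(landmark)
    P_new = [[0.0] * (n + m) for _ in range(n + m)]
    for i in range(n):
        for j in range(n):
            P_new[i][j] = P[i][j]          # keep the existing covariance
    for k in range(m):
        P_new[n + k][n + k] = var_init     # initial landmark uncertainty
    return x_new, P_new

x = [0.0, 0.0, 0.0]   # robot pose (x, y, heading)
P = [[0.01 if i == j else 0.0 for j in range(3)] for i in range(3)]
x2, P2 = augment_state(x, P, [2.0, 1.5], var_init=0.5)
print(len(x2), len(P2))
```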
15

Bettini, Guido. "Determinazione della posizione di un treno con dead reckoning di precisione." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019.

Find full text
Abstract:
The developed system aims to implement the odometry subsystem entirely on a Field Programmable Gate Array (FPGA). The subsystem is needed to determine the position of the train by precision dead reckoning. The project, carried out in the laboratories of the Advanced Research Center on Electronic Systems "Ercole De Castro" in collaboration with Rete Ferroviaria Italiana, fits within the sector's development framework defined by EU Regulation 2016/919 on the technical specification for interoperability of the "control-command and signalling" subsystems of the European Union rail system (ERTMS). The innovative character of this project can be summarized in two results: first, for the first time the design of an on-board safety-critical subsystem is implemented entirely on an FPGA; second, all the related self-diagnostic activity is also performed in hardware. The whole project was developed and verified following the rigorous flow prescribed by the V-model, as required by Safety Integrity Level 4 (SIL4).
16

Pereira, Ana Rita. "Visual odometry: comparing a stereo and a multi-camera approach." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/18/18153/tde-11092017-095254/.

Full text
Abstract:
The purpose of this project is to implement, analyze and compare visual odometry approaches to help the localization task in autonomous vehicles. The stereo visual odometry algorithm Libviso2 is compared with a proposed omnidirectional multi-camera approach. The proposed method consists of performing monocular visual odometry on all cameras individually and selecting the best estimate through a voting scheme involving all cameras. The omnidirectionality of the vision system allows the part of the surroundings richest in features to be used in the relative pose estimation. Experiments are carried out using cameras Bumblebee XB3 and Ladybug 2, fixed on the roof of a vehicle. The voting process of the proposed omnidirectional multi-camera method leads to some improvements relatively to the individual monocular estimates. However, stereo visual odometry provides considerably more accurate results.
The goal of this master's project is to implement, analyze, and compare visual odometry approaches, contributing to the localization of an autonomous vehicle. The stereo visual odometry algorithm Libviso2 is compared with a proposed method that uses an omnidirectional multi-camera system. In this method, monocular visual odometry is computed for each camera individually and the best estimate is then selected through a voting process involving all cameras. Because the vision system is omnidirectional, the part of the surroundings richest in features can always be used to estimate the vehicle's relative pose. The experiments use Bumblebee XB3 and Ladybug 2 cameras, fixed on the roof of a vehicle. The voting process of the proposed omnidirectional multi-camera method shows improvements over the individual monocular estimates. However, stereo visual odometry provides more accurate results.
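The voting scheme described in this abstract can be caricatured in a few lines: each camera's monocular estimate supports the candidate estimates that agree with it, and the best-supported candidate wins. A toy sketch with invented yaw-rate values (the thesis's actual voting criterion may differ):

```python
def vote(estimates, tol=0.1):
    """Pick the motion estimate supported by the most cameras:
    every estimate 'votes' for each candidate within tolerance,
    so an outlier camera is naturally outvoted."""
    best, best_support = None, -1
    for cand in estimates:
        support = sum(1 for e in estimates if abs(e - cand) <= tol)
        if support > best_support:
            best, best_support = cand, support
    return best

# yaw-rate estimates from five cameras (invented numbers); one is an outlier
print(vote([0.11, 0.12, 0.10, 0.55, 0.12]))
```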
17

Bandi, Kristian. "Soluzioni hardware e software per il dead reckoning di precisione in sottosistemi critici ferroviari SIL4." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020.

Find full text
Abstract:
The steady growth of rail traffic, driven by high demand for passenger and freight transport, raises the problem of increasing the capacity of existing infrastructure. To avoid building new installations, which would entail high costs and long construction times, the goal is to increase the number of trains running on the current lines by reducing the distance between them. This translates into improving the precision with which a train is localized on the track. To this end, RFI - Rete Ferroviaria Italiana, in collaboration with the ARCES research center, is developing a new odometric localization system. The innovation of this project lies in integrating inertial sensors, combined with the tachometric encoders, into the odometry computations. The first part describes the development of the hardware that acquires the signals from the tachometric and inertial sensors. The design complies with the standards required to guarantee safety level SIL4, the highest, and to increase the precision of train localization. A two-out-of-two (2oo2) redundant architecture was chosen, with galvanic isolation between the two parts and self-testing functionality, so as to guarantee a Probability of Failure per Hour no greater than 10^-9. The second part concerns the odometry algorithm, that is, deriving a new model from the data acquired by the board while accounting for the phenomena that degrade the estimate of the distance travelled. The final objective is to narrow the confidence interval of the estimate by using inertial sensors. The algorithm was implemented for devices with a mixed FPGA and MCU architecture, according to the standards required to guarantee the necessary safety level. Simulation results show a performance improvement, reducing the algorithm's error from the state of the art of 11.26% to 0.10%.
18

Piva, Filippo. "Soluzioni digitali e analogiche per la garanzia di sicurezza in sottosistemi critici ferroviari." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019.

Find full text
Abstract:
Rail transport is evolving towards scenarios in which trains run ever closer together and at increasing speeds, which demands ever higher safety standards. Making trains that run at reduced distances safer means being able to localize them precisely, which today is also done with solutions based on the principle of odometry. Looking to the future, Rete Ferroviaria Italiana and ARCES are developing a new odometric localization system. This thesis aims to study a solution that guarantees the data path of the odometry subsystem, from the sensor to the digital processing, a Probability of Failure per Hour of no more than 10^-9. To achieve this, it was necessary to ensure the integrity of an analog section with respect to stuck-at faults. Failure dynamics were therefore studied, the MTBF was computed, and test stimuli were devised for the inputs, with checking of the outputs, driven by a digital section. The main difficulty was making the test stimuli transparent to the odometric processing logic, so as not to compromise the vital signals. It was also necessary to check for the absence of anomalies in the sensor and in the transmission cable connecting it to the analog section, by measuring the current drawn from the power supply. To do so, a circuit for the current measurements was designed and the digital section was programmed to digitize and verify the measurements. Finally, the formal documentation describing the design choices and the laboratory tests was drawn up, in order to obtain SIL4 safety certification, the highest possible, as required by RFI. The digital section in which the work was carried out had a mixed architecture, comprising an FPGA and an MCU. The innovation of the project lies in having the programmable logic perform as many operations as possible.
19

Porteš, Petr. "Návrh a realizace odometrických snímačů pro mobilní robot s Ackermannovým řízením." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2017. http://www.nusl.cz/ntk/nusl-318145.

Full text
Abstract:
The aim of this thesis is to design and construct odometric sensors for Bender 2, a mobile robot with Ackermann steering, and to design a mathematical model that evaluates the trajectory of the robot from the data measured by these sensors. The first part summarizes the theoretical background, while the second, practical part describes the design of the front axle, the design and operating software of the front encoders, and the odometric models. The last part deals with the processing and evaluation of the measured data.
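An odometric model for an Ackermann-steered vehicle is commonly written as a bicycle model integrated along circular arcs. The sketch below shows one such dead-reckoning step and is not necessarily the exact model developed in the thesis; all numeric values are invented:

```python
import math

def ackermann_step(x, y, theta, ds, steer, wheelbase):
    """Dead-reckoning update for an Ackermann (bicycle-model) vehicle.
    ds is the distance travelled by the rear axle, steer the equivalent
    front-wheel angle; the pose is advanced along a circular arc."""
    if abs(steer) < 1e-9:                     # straight-line motion
        return x + ds * math.cos(theta), y + ds * math.sin(theta), theta
    dtheta = ds * math.tan(steer) / wheelbase  # heading change
    r = wheelbase / math.tan(steer)            # turning radius
    x += r * (math.sin(theta + dtheta) - math.sin(theta))
    y += r * (math.cos(theta) - math.cos(theta + dtheta))
    return x, y, theta + dtheta

# drive a quarter circle of radius 1 m in 100 small steps
pose = (0.0, 0.0, 0.0)
for _ in range(100):
    pose = ackermann_step(*pose, ds=math.pi / 2 / 100,
                          steer=math.atan(1.0), wheelbase=1.0)
print(tuple(round(v, 3) for v in pose))
```

Integrating along arcs instead of straight segments keeps the update exact for constant steering, so the step size only matters when the steering angle changes.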
20

Ligocki, Adam. "Metody současné sebelokalizace a mapování pro hloubkové kamery." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2017. http://www.nusl.cz/ntk/nusl-316270.

Full text
Abstract:
This master's thesis deals with fusing position data from an existing real-time implementation of visual SLAM with wheel odometry. The fusion suppresses the undesirable errors of each of these measurement methods, making it possible to build a more accurate 3D model of the explored environment. The thesis first presents the theory needed to master 3D SLAM. It then describes the properties of the open-source SLAM project used and the individual software modifications made to it. Next, it describes the principles of combining the position information obtained from the visual and odometric sensors, and describes the differential-drive chassis used to produce the wheel odometry. Finally, the thesis summarizes the results achieved by the data fusion and compares them with the original accuracy of the visual SLAM.
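The wheel odometry of a differential-drive chassis, as used in the fusion above, follows directly from the distances travelled by the left and right wheels. A minimal update step (midpoint integration; the track width is an invented example value):

```python
import math

def diff_drive_step(x, y, theta, d_left, d_right, track):
    """Odometry update for a differential-drive chassis from the
    distances travelled by the left and right wheels; track is the
    distance between the wheels."""
    ds = (d_left + d_right) / 2.0          # distance of the chassis centre
    dtheta = (d_right - d_left) / track    # heading change
    # midpoint integration of the unicycle model
    x += ds * math.cos(theta + dtheta / 2.0)
    y += ds * math.sin(theta + dtheta / 2.0)
    return x, y, theta + dtheta

pose = (0.0, 0.0, 0.0)
pose = diff_drive_step(*pose, d_left=1.0, d_right=1.0, track=0.5)
print(pose)   # straight ahead: (1.0, 0.0, 0.0)
```

Wheel slip makes dtheta drift over time, which is exactly the error source the visual SLAM fusion is meant to suppress.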
21

Urban, Daniel. "Lokalizace mobilního robota v prostředí." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2018. http://www.nusl.cz/ntk/nusl-385923.

Full text
Abstract:
This diploma thesis deals with the problem of mobile robot localisation in the environment, based on current 2D and 3D sensor data and previous records. The work focuses on detecting places previously visited by the robot. The implemented system is suitable for loop detection, using Gestalt 3D descriptors. The output of the system provides the corresponding positions at which the robot was previously located. The functionality of the system has been tested and evaluated on LiDAR data.
22

Szente, Michal. "Vizuální odometrie pro robotické vozidlo Car4." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2017. http://www.nusl.cz/ntk/nusl-317205.

Full text
Abstract:
This thesis deals with visual odometry algorithms and their application to the experimental vehicle Car4. The first part surveys previous research in this area, on which the solution process is based. The next chapters introduce the theoretical design of monocular and stereo visual odometry algorithms. The third part deals with the implementation in MATLAB using the Image Processing Toolbox. After tests on real data, the chosen algorithm is applied to the Car4 vehicle in practical indoor and outdoor conditions. The last part summarizes the results of the work and addresses the problems associated with the application of visual odometry algorithms.
23

Johansson, Sixten. "Navigering och styrning av ett autonomt markfordon." Thesis, Linköping University, Department of Electrical Engineering, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-6006.

Full text
Abstract:

In this thesis, a system for navigation and control of an autonomous vehicle has been implemented. The purpose of this work is to further develop the vehicle, which will be used for evaluating path-planning algorithms and studying other autonomy functions. With different sensor models and sensor configurations it is also possible to evaluate different navigation strategies. The work was carried out on a given platform where the vehicle uses only simple ultrasonic sensors and wheel pulse encoders to measure its movements. The vehicle can also navigate autonomously and follow a simple given path in a known environment. The system uses a particle filter to estimate the vehicle's state with the help of vehicle and sensor models.

The work is a continuation of the project Collision Avoidance för autonomt fordon, carried out at Linköping University in the spring of 2005.


In this thesis a system for navigation and control of an autonomous ground vehicle has been implemented. The purpose of this thesis is to further develop the vehicle that is to be used in studies and evaluations of path planning algorithms as well as studies of other autonomy functions. With different sensor configurations and sensor models it is also possible to evaluate different strategies for navigation. The work has been performed using a given platform which measures the vehicle’s movement using only simple ultrasonic sensors and pulse encoders. The vehicle is able to navigate autonomously and follow a simple path in a known environment. The state estimation is performed using a particle filter.

The work is a continuation of a previous project, Collision Avoidance för autonomt fordon, carried out at Linköping University in the spring of 2005.
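The particle filter mentioned in this abstract cycles through predict, weight, and resample steps. A toy one-dimensional version (all models, noise levels, and measurements below are invented for illustration, not taken from the thesis):

```python
import math
import random

random.seed(0)  # make the toy run deterministic

def pf_step(particles, motion, meas, sigma):
    """One predict-weight-resample cycle of a particle filter
    localizing a vehicle along one axis."""
    # predict: apply the motion model with a little process noise
    particles = [p + motion + random.gauss(0, 0.05) for p in particles]
    # weight: Gaussian likelihood of the position measurement
    weights = [math.exp(-0.5 * ((p - meas) / sigma) ** 2) for p in particles]
    # resample in proportion to the weights
    return random.choices(particles, weights=weights, k=len(particles))

particles = [random.uniform(0.0, 10.0) for _ in range(500)]
for step in range(5):   # vehicle moves 1 m per step, measured near 3..7 m
    particles = pf_step(particles, motion=1.0, meas=3.0 + step, sigma=0.3)
estimate = sum(particles) / len(particles)
print(round(estimate, 2))
```

Starting from a uniform prior, a few cycles are enough for the particle cloud to collapse around the measured position, which is the behaviour the thesis exploits for state estimation.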

24

Peñaloza, González Andrés. "Implementación de odometría visual utilizando una cámara estereoscópica." Tesis, Universidad de Chile, 2015. http://repositorio.uchile.cl/handle/2250/137817.

Full text
Abstract:
Electrical Civil Engineer
In certain robotics applications it is important to use an odometer to estimate the position of a moving robot. This gives the agent a notion of its location in the environment through which it moves. In applications such as autonomous vehicles this is especially important, since knowing the vehicle's position with respect to its internal map is critical to avoid collisions. The most commonly used odometers are wheel encoders and GPS; however, these are not always available, due to adverse environmental conditions. For these reasons, visual odometry is employed. Visual odometry is the process of estimating the motion of a vehicle or agent using the images it obtains from its cameras. It has been used in the mining industry with haul trucks and, more recently, in aerial drones that could be used for package delivery. It has also been used to estimate the position of the robots currently traversing the surface of Mars. The purpose of this work is to implement a visual odometry algorithm using a stereo camera to estimate the trajectory of a robot, and to evaluate its performance by comparing it with known position values. The methodology used makes it possible to identify which parameters of the motion-estimation algorithm are most relevant and how they influence the speed and quality of the solution. The influence of lighting conditions is also determined, as well as which geometric zone of the image is best for triangulating points. The solution comprises a system able to execute the different parts required by the algorithm in an extensible way, making it easy to replace a method in the future with minimal impact on the code. Favorable results are obtained, with a small motion-estimation error, and conclusions are drawn about the most important factors in the execution of the algorithm. The speed of the algorithm is discussed, and solutions are proposed to aid a real-time implementation.
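Stereo triangulation, on which such a stereo visual odometry pipeline relies, recovers depth from disparity in a rectified image pair as Z = f·B/d. A one-line sketch with invented calibration values:

```python
def stereo_depth(disparity_px, focal_px, baseline_m):
    """Depth of a triangulated point from a rectified stereo pair:
    Z = f * B / d (focal length in pixels, baseline in metres)."""
    return focal_px * baseline_m / disparity_px

# invented calibration: 700 px focal length, 12 cm baseline
print(stereo_depth(disparity_px=35.0, focal_px=700.0, baseline_m=0.12))
```

Because depth grows as disparity shrinks, far-away points carry larger triangulation error, which is one reason the image zone chosen for triangulation matters.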
25

Quist, Eric Blaine. "UAV Navigation and Radar Odometry." BYU ScholarsArchive, 2015. https://scholarsarchive.byu.edu/etd/4439.

Full text
Abstract:
Prior to the wide deployment of robotic systems, they must be able to navigate autonomously. These systems cannot rely on good weather or daytime navigation, and they must be able to navigate in unknown environments. All of this must take place without human interaction. A majority of modern autonomous systems rely on GPS for position estimation. While GPS solutions are readily available, GPS is often lost and may even be jammed. To this end, a significant amount of research has focused on GPS-denied navigation. Many GPS-denied solutions rely on known environmental features for navigation. Others use vision sensors, which often perform poorly at high altitudes and are limited in poor weather. In contrast, radar systems accurately measure range at high and low altitudes, and they remain unaffected by inclement weather. This dissertation develops the use of radar odometry for GPS-denied navigation: the aircraft's motion is estimated from the range progression of unknown environmental features. Results are presented for both simulated and real radar data. In Chapter 2 a greedy radar odometry algorithm is presented. It uses the Hough transform to identify the range progression of ground point-scatterers, and a global nearest neighbor approach is implemented for data association. Under a piecewise-constant heading assumption, as the aircraft passes pairs of scatterers, the locations of the scatterers are triangulated and the motion of the aircraft is estimated. Real flight data is used to validate the approach; simulated flight data explores the robustness of the approach when the heading assumption is violated. Chapter 3 explores a more robust radar odometry technique, in which the relatively constant heading assumption is removed. This chapter uses the recursive random sample consensus (R-RANSAC) algorithm to identify, associate, and track the point scatterers.
Using the measured ranges to the tracked scatterers, an extended Kalman filter (EKF) iteratively estimates the aircraft's position in addition to the relative locations of each reflector. Real flight data is used to validate the accuracy of this approach. Chapter 4 performs an observability analysis of a range-only sensor. An observable radar odometry approach is proposed; it improves on the previous approaches by adding a more robust R-RANSAC above-ground-level (AGL) tracking algorithm to further improve navigational accuracy. Real flight results are presented, comparing this approach to the techniques presented in previous chapters.
26

Masson, Clément. "Direction estimation using visual odometry." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-169377.

Full text
Abstract:
This Master thesis tackles the problem of measuring objects' directions from a motionless observation point. A new method based on a single rotating camera, requiring the knowledge of only two (or more) landmarks' directions, is proposed. In a first phase, multi-view geometry is used to estimate camera rotations and key elements' directions from a set of overlapping images. Then, in a second phase, the direction of any object can be estimated by resectioning the camera associated to a picture showing this object. A detailed description of the algorithmic chain is given, along with test results on both synthetic data and real images taken with an infrared camera.
This master's thesis addresses the problem of measuring the directions of objects from a fixed observation point. A new method is proposed, based on a single rotating camera that requires only the directions of two (or more) landmarks. In a first phase, multi-view geometry is used to estimate camera rotations and the directions of key elements from a set of overlapping images. In a second phase, the direction of any object can then be estimated by resectioning the camera associated with an image showing that object. A detailed description of the algorithmic chain is given, together with test results on both synthetic data and real images taken with an infrared camera.
27

Najman, Jan. "Aplikace SLAM algoritmů pro vozidlo s čtyřmi řízenými koly." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2015. http://www.nusl.cz/ntk/nusl-231076.

Full text
Abstract:
This paper deals with the application of SLAM algorithms to the experimental four-wheel vehicle Car4. The first part describes the basic functioning of SLAM, including a description of the extended Kalman filter, which is one of its main components. A brief survey of software tools available for solving this problem in MATLAB follows, together with an overview of the sensors used in this work. The second part presents the methodology and results of testing individual sensors and their combinations for computing odometry and scanning the surrounding space. It also describes the process of applying SLAM algorithms to the Car4 vehicle using the selected sensors, and the results of testing the entire system in practice.
28

Gräter, Johannes [Verfasser]. "Monokulare Visuelle Odometrie auf Multisensorplattformen für autonome Fahrzeuge / Johannes Gräter." Karlsruhe : KIT Scientific Publishing, 2019. http://d-nb.info/1196294682/34.

Full text
29

Štěpán, Miroslav. "Model robota Trilobot." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2007. http://www.nusl.cz/ntk/nusl-412784.

Full text
Abstract:
This MSc thesis describes the creation of a motion model of the mobile robot Trilobot. The model is implemented in a simple simulation tool. Some laboratory experiments with the robot are described, and information is given about the SmallDEVS tool and the Squeak Smalltalk environment in which the model was implemented. The motivation for this work is to simplify the design and testing of navigation algorithms for the Trilobot, which is available to students of FIT BUT in the robotics lab of the Department of Intelligent Systems. This simple simulation tool can partially reduce dependence on the physical availability of the robot.
30

CHEN, HONGYI. "GPS-oscillation-robust Localization and Visionaided Odometry Estimation." Thesis, KTH, Maskinkonstruktion (Inst.), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-247299.

Full text
Abstract:
GPS/IMU integrated systems are commonly used for vehicle navigation. The algorithm for this coupled system is normally based on a Kalman filter. However, oscillating GPS measurements in urban environments can easily lead to localization divergence. Moreover, heading estimation may be sensitive to magnetic interference if it relies on an IMU with an integrated magnetometer. This report addresses the localization problem under GPS oscillation and outage with an adaptive extended Kalman filter (AEKF). For heading estimation, stereo visual odometry (VO) is fused in to overcome the effect of magnetic disturbance. The vision-aided AEKF-based algorithm is tested both under good GPS conditions and under GPS oscillation with magnetic interference. Under the situations considered, the algorithm is verified to outperform the conventional extended Kalman filter (CEKF) and the unscented Kalman filter (UKF) in position estimation by 53.74% and 40.09% respectively, and to decrease the drift of the heading estimate.
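The adaptive mechanism behind the AEKF described above can be illustrated with a toy one-dimensional filter: when the normalized innovation squared (NIS) exceeds a gate, the GPS measurement noise is inflated so that oscillating or outlier fixes are down-weighted. This is a minimal sketch of the idea only, not the thesis' algorithm; the constant-velocity model, noise values and gate are illustrative assumptions.

```python
import numpy as np

def aekf_1d(gps, dt=1.0, q=0.1, r0=1.0, nis_gate=9.0):
    """Toy 1-D adaptive Kalman filter: constant-velocity motion model with
    GPS position measurements.  If the normalized innovation squared (NIS)
    exceeds the gate, the measurement variance R is inflated, which
    down-weights oscillating/outlier GPS fixes."""
    x = np.array([gps[0], 0.0])            # state: [position, velocity]
    P = np.eye(2)
    F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity transition
    H = np.array([[1.0, 0.0]])             # only position is measured
    Q = q * np.eye(2)
    est = []
    for z in gps:
        x = F @ x                          # predict
        P = F @ P @ F.T + Q
        y = z - (H @ x)[0]                 # innovation
        Pzz = (H @ P @ H.T)[0, 0]
        nis = y * y / (Pzz + r0)
        R = r0 if nis <= nis_gate else r0 * nis / nis_gate  # adaptive inflation
        S = Pzz + R
        K = (P @ H.T).ravel() / S          # Kalman gain
        x = x + K * y                      # update
        P = (np.eye(2) - np.outer(K, H)) @ P
        est.append(x[0])
    return np.array(est)
```

Feeding the filter a smooth track with one wild GPS fix shows the effect: the outlier barely disturbs the estimate, whereas a fixed-R filter would be dragged towards it.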
APA, Harvard, Vancouver, ISO, and other styles
31

Pol, Sabine. "Odometry for a Planetary Exploration Rover." Thesis, KTH, Reglerteknik, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-106249.

Full text
Abstract:
IARES is a highly flexible planetary exploration demonstration rover developed by CNES (the French National Center for Space Studies), mainly for autonomous navigation and locomotion studies. It has 19 degrees of freedom, including six active, steerable wheels. The rover uses software for autonomous navigation, including stereo camera perception, path planning and motion control, complemented by a visual simulator that can substitute for the rover for practical purposes. The goal of this MSc thesis, carried out during the second semester of 2006 at CNES in Toulouse, has been to make the most of the localization capabilities of this rover using a recently implemented method: odometry. A previous study had been carried out at ONERA in Toulouse, and the main goals of this thesis were to implement this new method in the environment used for the CNES rover and to test its performance using the simulator. The work might even be tested on board at the very end of the internship. Given the hardware platform and the software environment, this new localization method primarily had to be studied from a theoretical point of view before being integrated into the CNES environment. The study was conducted on a Linux platform; code was developed in C for the simulator, whereas Scilab was used for the validation tests.
APA, Harvard, Vancouver, ISO, and other styles
32

Johansson, Fredrik. "Visual Stereo Odometry for Indoor Positioning." Thesis, Linköpings universitet, Datorseende, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-81215.

Full text
Abstract:
In this master thesis a visual odometry system is implemented and explained. Visual odometry is a technique that can be used by autonomous vehicles to determine their current position; it is preferably used indoors, where GPS does not work. The only input to the system is the images from a stereo camera, and the output is the current location given as a relative position. In the C++ implementation, image features are found and matched between the stereo images and the previous stereo pair, which gives 150-250 verified feature matches. The image coordinates are triangulated into a 3D point cloud. The distance between two subsequent point clouds is minimized with respect to rigid transformations, which gives the motion described by six parameters: three for the translation and three for the rotation. Noise in the image coordinates causes reconstruction errors, which makes the motion estimation very sensitive. The results from six experiments show that the weakness of the system is its ability to distinguish rotations from translations. However, if the system has additional knowledge of how it is moving, the minimization can be done with only three parameters and the system can estimate its position with less than 5% error.
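The minimization over rigid transformations mentioned above has a closed-form solution for matched 3D point sets, commonly computed with an SVD (the Kabsch/Procrustes method). A minimal sketch assuming exact one-to-one correspondences (the thesis' pipeline additionally has to cope with matching noise and outliers):

```python
import numpy as np

def rigid_align(A, B):
    """Least-squares rigid motion (R, t) such that B ~ A @ R.T + t for
    matched Nx3 point sets, via the SVD-based Kabsch solution."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)                # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t
```

The six motion parameters in the abstract (three for translation, three for rotation) correspond to t and R here.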
APA, Harvard, Vancouver, ISO, and other styles
33

Venturelli, Cavalheiro Guilherme. "Fusing visual odometry and depth completion." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122517.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2019
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 57-62).
Recent advances in technology indicate that autonomous vehicles, and self-driving cars in particular, may become commonplace in the near future. This thesis contributes to that scenario by studying the problem of depth perception based on sequences of camera images. We start by presenting a sensor fusion framework that achieves state-of-the-art performance when completing depth from sparse LiDAR measurements and a camera. Then, we study how the system performs under a variety of modifications of the sparse input until we ultimately replace LiDAR measurements with triangulations from a typical sparse visual odometry pipeline. We are then able to achieve a small improvement over the single-image baseline and chart guidelines to assist in designing a system with even more substantial gains.
by Guilherme Venturelli Cavalheiro.
S.M.
S.M. Massachusetts Institute of Technology, Department of Aeronautics and Astronautics
APA, Harvard, Vancouver, ISO, and other styles
34

Svoboda, Ondřej. "Analýza vlastností stereokamery ZED ve venkovním prostředí." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2019. http://www.nusl.cz/ntk/nusl-399416.

Full text
Abstract:
This master's thesis focuses on analyzing the ZED stereo camera in an outdoor environment. ZEDfu visual odometry is compared with commonly used methods such as GPS or wheel odometry. Moreover, the thesis includes analyses of SLAM in a changing outdoor environment. The simultaneous mapping and localization in RTAB-Map was processed separately with SIFT and BRISK descriptors. The aim of this master's thesis is to analyze the behaviour of the ZED camera in the outdoor environment for future implementation in mobile robotics.
APA, Harvard, Vancouver, ISO, and other styles
35

Holmqvist, Niclas. "HANDHELD LIDAR ODOMETRY ESTIMATION AND MAPPING SYSTEM." Thesis, Mälardalens högskola, Inbyggda system, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-41137.

Full text
Abstract:
Ego-motion sensors are commonly used for pose estimation in Simultaneous Localization And Mapping (SLAM) algorithms. Inertial Measurement Units (IMUs) are popular sensors but suffer from integration drift over longer time scales. To remedy the drift, they are often used in combination with additional sensors, such as a LiDAR. Pose estimation is used when the scans produced by these additional sensors are matched. The matching of scans can be computationally heavy, as one scan can contain millions of data points. Methods exist to simplify the problem of finding the relative pose between sensor data, such as the Normal Distributions Transform (NDT) SLAM algorithm. The algorithm separates the point cloud data into a voxel grid and represents each voxel as a normal distribution, effectively reducing the number of data points. Registration is based on a function which converges to a minimum. Sub-optimal conditions can cause the function to converge to a local minimum. To remedy this problem, this thesis explores the benefits of incorporating IMU sensor data to estimate the pose used in the NDT SLAM algorithm.
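The voxel-grid compression at the heart of NDT can be sketched compactly: points are bucketed into cells, and each sufficiently populated cell keeps only a mean and a covariance. The cell size and minimum point count below are illustrative choices, not values from the thesis.

```python
import numpy as np
from collections import defaultdict

def ndt_voxels(points, cell=1.0, min_pts=3):
    """Compress an Nx3 point cloud into per-voxel normal distributions
    (mean, covariance) -- the representation registered against in NDT."""
    buckets = defaultdict(list)
    for p in points:
        buckets[tuple(np.floor(p / cell).astype(int))].append(p)
    voxels = {}
    for key, pts in buckets.items():
        if len(pts) >= min_pts:              # need enough points for a covariance
            pts = np.asarray(pts)
            voxels[key] = (pts.mean(axis=0), np.cov(pts.T))
    return voxels
```

A scan of millions of points thus shrinks to a few thousand (mean, covariance) pairs, which is what makes the registration tractable.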
APA, Harvard, Vancouver, ISO, and other styles
36

Burusa, Akshay Kumar. "Visual-Inertial Odometry for Autonomous Ground Vehicles." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-217284.

Full text
Abstract:
Monocular cameras are prominently used for estimating the motion of Unmanned Aerial Vehicles. With growing interest in autonomous vehicle technology, the use of monocular cameras in ground vehicles is on the rise. This is especially favorable for localization in situations where the Global Navigation Satellite System (GNSS) is unreliable, such as open-pit mining environments. However, most monocular-camera-based approaches suffer from the ambiguity of scale information. Ground vehicles impose a greater difficulty due to their high speeds and fast movements. This thesis aims to estimate the scale of monocular vision data by using an inertial sensor in addition to the camera. It is shown that the simultaneous estimation of pose and scale in autonomous ground vehicles is possible by fusing visual and inertial sensors in an Extended Kalman Filter (EKF) framework. However, the convergence of the scale is sensitive to several factors, including the initialization error. An accurate estimate of the scale allows an accurate estimate of the pose. This facilitates the localization of ground vehicles in the absence of GNSS, providing a reliable fall-back option.
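The scale problem discussed above can be made concrete: given an up-to-scale visual odometry track and metrically scaled positions (e.g. from integrated inertial data), the least-squares scale factor has a closed form. The thesis estimates scale recursively inside an EKF; this batch version is only an illustrative stand-in.

```python
import numpy as np

def estimate_scale(p_vo, p_metric):
    """Closed-form least-squares scale s minimizing ||s * p_vo - p_metric||^2
    over matched position samples (both arrays of shape Nx3)."""
    a = np.asarray(p_vo).ravel()
    b = np.asarray(p_metric).ravel()
    return float(a @ b / (a @ a))
```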
APA, Harvard, Vancouver, ISO, and other styles
37

Rao, Anantha N. "Learning-based Visual Odometry - A Transformer Approach." University of Cincinnati / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1627658636420617.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Arellano, Zea Luis Alberto. "Diseño e implementación de un robot móvil con Control de trayectoria mediante principios odométricos." Bachelor's thesis, Universidad Nacional Mayor de San Marcos, 2015. https://hdl.handle.net/20.500.12672/4440.

Full text
Abstract:
The present thesis consists of the design and implementation of a mobile robot with three degrees of freedom, able to control its position and trajectory in a Cartesian plane and to position itself at user-defined locations. The objective of the project is to control the movement of the robot, manipulating its translation and rotation accurately and efficiently. The robot uses two motors coupled to wheels for locomotion; these motors are placed in a differential configuration, making displacement and rotation about its axis much more efficient. The robot has a measurement system based on two incremental encoders situated at the sides of the motors. The signals generated by these sensors are processed by the robot, which performs an online kinematic analysis using odometry principles and difference equations to estimate the relative position and orientation of the robot. The result of this operation is used in the control algorithm, which consists of two discrete PID (proportional, integral and derivative) controllers. The first controls the orientation of the robot, ensuring that it is positioned at the correct angle before starting its motion and during the linear path, so that the robot does not deviate from its trajectory. The second PID controller regulates the linear position of the robot according to the initial and final coordinates of the traced path. This trajectory is planned online according to the coordinates of predefined points in the trajectory-generation logic. The robot is monitored in real time by a computer through a graphical interface developed in Java, which allows observing the control parameters in text boxes and dynamic graphics, and additionally allows sending pre-configured commands and sequences of linear trajectories. To establish the connection between the robot and the PC, asynchronous serial communication was used under the RS-232 standard with the UART protocol.
The processing unit for the implementation of the logic and control algorithms was a dsPIC30F4011 (digital signal controller), as it offers high-speed signal processing and floating-point math operations. It also has specialized modules for motor control and serial communication, making programming much more efficient. After implementation, the robot showed very good results during testing, complying with the rotation and translation control algorithms as well as with monitoring and control from the PC. One of the main contributions of this work is demonstrating efficient and accurate control of a mobile robot with three degrees of freedom using only two encoders as the measurement system.
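The encoder-based pose estimation described above follows the standard differential-drive odometry recursion: each cycle, the incremental distances travelled by the two wheels give a forward displacement and a heading change, which are integrated into the pose. A generic sketch, not the thesis' exact implementation (the mid-point integration is one common choice):

```python
import math

def diff_drive_update(x, y, theta, d_left, d_right, wheel_base):
    """One dead-reckoning step of a differential-drive pose from the
    incremental wheel distances derived from the two encoders."""
    d = 0.5 * (d_left + d_right)               # travel of the chassis centre
    dtheta = (d_right - d_left) / wheel_base   # heading change
    x += d * math.cos(theta + 0.5 * dtheta)    # mid-point integration
    y += d * math.sin(theta + 0.5 * dtheta)
    theta = (theta + dtheta + math.pi) % (2.0 * math.pi) - math.pi  # wrap to [-pi, pi)
    return x, y, theta
```

The two PID loops described in the abstract would then act on the estimated heading `theta` and on the distance travelled along the planned segment.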
Thesis
APA, Harvard, Vancouver, ISO, and other styles
39

Jílek, Tomáš. "Pokročilá navigace v heterogenních multirobotických systémech ve vnějším prostředí." Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2015. http://www.nusl.cz/ntk/nusl-234530.

Full text
Abstract:
The doctoral thesis discusses current options for the navigation of unmanned ground vehicles, with a focus on achieving high absolute compliance between the required motion trajectory and the obtained one. The current possibilities of key self-localization methods, such as global navigation satellite systems, inertial navigation systems and odometry, are analyzed. The core of the thesis is the description of a navigation method which, with the above-mentioned self-localization methods, achieves centimeter-level accuracy of the required trajectory tracking. The new navigation method was designed with regard to very simple parameterization, respecting the limitations of the robot drive configuration used. Thus, after appropriate parameterization, the navigation method can be applied to any drive configuration. The concept of the navigation method allows integrating and using several self-localization systems and external navigation methods simultaneously. This increases the overall robustness of the whole mobile robot navigation process. The thesis also deals with cooperative convoying of heterogeneous mobile robots. The proposed algorithms were validated under real outdoor conditions in three different experiments.
APA, Harvard, Vancouver, ISO, and other styles
40

Gräter, Johannes [Verfasser], and C. [Akademischer Betreuer] Stiller. "Monokulare Visuelle Odometrie auf Multisensorplattformen für autonome Fahrzeuge / Johannes Gräter ; Betreuer: C. Stiller." Karlsruhe : KIT-Bibliothek, 2019. http://d-nb.info/1182430775/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Martins, Renato José 1986. "Explorando redundâncias e restrições entre odometrias e sensores absolutos em localização robótica terrestre." [s.n.], 2013. http://repositorio.unicamp.br/jspui/handle/REPOSIP/261567.

Full text
Abstract:
Orientadores: Paulo Augusto Valente Ferreira, Samuel Siqueira Bueno
Dissertação (mestrado) - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Abstract: This dissertation addresses the problem of localizing a ground vehicle that navigates in an outdoor environment. The work consists in the development of sensorial perception and odometry techniques capable of furnishing pose estimates (position and attitude), a fundamental aspect of any robotic navigation task. In short, we focus on exploring different sensor classes, such as encoders, lasers and GPS, and their combinations, in order to minimize the intrinsic uncertainties of each sensor. The main contribution of the work is a new odometry formulation and its extension for simultaneous bias estimation. We also present a deterministic batch estimation framework for odometry-GPS fusion, as well as the definition of mappings by smooth functions of the orientation state component that allow the use of discontinuous heading measures. The methodologies are formulated, analysed in simulation and experimentally validated using the VERO ("VEículo RObótico de Exterior", in Portuguese) robotic platform
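One concrete instance of a smooth mapping for discontinuous heading measures is embedding angles as (sin, cos) pairs, so that values wrapping at ±π can be fused without jumps; it is shown here as a simple circular mean (the dissertation's actual formulation is more general):

```python
import math

def mean_heading(angles):
    """Average headings through the smooth (sin, cos) embedding, so that
    measurements wrapping at +/-pi (e.g. 179 deg and -179 deg) fuse
    correctly instead of averaging to ~0."""
    s = sum(math.sin(a) for a in angles)
    c = sum(math.cos(a) for a in angles)
    return math.atan2(s, c)
```

A naive arithmetic mean of 3.1 rad and -3.1 rad would give 0, i.e. the opposite direction; the circular mean correctly returns ±π.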
Master's
Automation
Master in Electrical Engineering
APA, Harvard, Vancouver, ISO, and other styles
42

Silva, Ricardo Luís da Mota. "Removable odometry unit for vehicles with Ackermann steering." Master's thesis, Universidade de Aveiro, 2014. http://hdl.handle.net/10773/13699.

Full text
Abstract:
Master's in Mechanical Engineering
The main objective of this work is to develop an odometry solution for vehicles with Ackermann steering. The solution had to be portable, flexible and easy to mount. After a study of the state of the art and a survey of solutions, the chosen solution was based on visual odometry. The following steps of the work were to study the feasibility of using line-scan image sensors for visual odometry. The image sensor was used to compute the longitudinal velocity, and the orientation of motion was computed using two gyroscopes. To test the method, several experiments were performed; the experiments took place indoors, under controlled conditions. The ability to measure velocity was tested on straight-line movements, diagonal movements, circular movements and movements with a changing distance from the ground. The data was processed with correlation algorithms and the results were documented. Based on the results, it is safe to conclude that odometry with line-scan sensors aided by inertial sensors has potential for real-world applicability.
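The correlation processing mentioned above can be sketched as a normalized cross-correlation between two successive 1-D line scans: the lag that maximizes the correlation is the pixel displacement, which, divided by the scan interval, yields the longitudinal velocity. The search range and scoring below are illustrative choices, not the thesis' exact algorithm.

```python
import numpy as np

def scan_shift(prev_line, curr_line, max_shift=20):
    """Pixel displacement between two equal-length 1-D line-scan signals,
    found as the lag maximizing their zero-mean normalized cross-correlation."""
    best, best_score = 0, -np.inf
    n = len(prev_line)
    for lag in range(-max_shift, max_shift + 1):
        a = curr_line[max(0, lag): n + min(0, lag)]    # overlapping windows
        b = prev_line[max(0, -lag): n + min(0, -lag)]
        score = np.dot(a - a.mean(), b - b.mean()) / (np.std(a) * np.std(b) * len(a) + 1e-12)
        if score > best_score:
            best, best_score = lag, score
    return best
```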
APA, Harvard, Vancouver, ISO, and other styles
43

Wuthrich, Tori(Tori Lee). "Learning visual odometry primitives for computationally constrained platforms." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122419.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2019
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 51-52).
Autonomous navigation for robotic platforms, particularly techniques that leverage an onboard camera, is currently of significant interest to the robotics community. Designing methods to localize small, resource-constrained robots is a particular challenge due to the limited availability of computing power and physical space for sensors. A computer-vision, machine-learning-based localization method was proposed by researchers investigating the automation of medical procedures. However, we believed the method to also be promising for robots with low size, weight, and power (SWAP) budgets. Unlike traditional odometry methods, in this case a machine learning model can be trained offline and can then generate odometry measurements quickly and efficiently. This thesis describes the implementation of the learning-based visual odometry method in the context of autonomous drones. We refer to the method as RetiNav due to its similarities with the way the human eye processes light signals from its surroundings. We make several modifications to the method relative to the initial design based on a detailed parameter study, and we test the method on a variety of challenging flight datasets. We show that over the course of a trajectory, RetiNav achieves as low as 1.4% error in predicting the distance traveled. We conclude that such a method is a viable component of a localization system, and propose the next steps for work in this area.
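The distance-travelled error quoted above can be computed directly from estimated and ground-truth trajectories; this is a plausible reading of that metric, not necessarily the thesis' exact evaluation protocol.

```python
import numpy as np

def distance_error_percent(est_xyz, gt_xyz):
    """Relative error (%) between estimated and ground-truth path length,
    i.e. the total distance travelled along each trajectory."""
    def path_length(p):
        # sum of Euclidean distances between consecutive positions
        return float(np.linalg.norm(np.diff(np.asarray(p), axis=0), axis=1).sum())
    gt = path_length(gt_xyz)
    return 100.0 * abs(path_length(est_xyz) - gt) / gt
```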
by Tori Wuthrich.
S.M.
S.M. Massachusetts Institute of Technology, Department of Aeronautics and Astronautics
APA, Harvard, Vancouver, ISO, and other styles
44

Henriksson, Johan. "Radar odometry based on Fuzzy-NDT scan registration." Thesis, Örebro universitet, Institutionen för naturvetenskap och teknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-94492.

Full text
Abstract:
Visual and lidar-based odometry for mobile robots has been thoroughly investigated and performs very well in good weather conditions. However, both are sensitive to bad weather with atmospheric disturbances such as rain and snow. Recently, radar sensors specialized for mobile-robot use have become available. Radar sensors are much more robust against atmospheric disturbances, which makes them an exciting alternative. This thesis presents a radar odometry pipeline that can handle both lidar and radar data with minor modifications. The results show that it outperforms current state-of-the-art radar odometry solutions, while also handling 3D lidar odometry with good performance.
APA, Harvard, Vancouver, ISO, and other styles
45

Vodrážka, Jakub. "Návrh konstrukce mobilního autonomního robotu." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2010. http://www.nusl.cz/ntk/nusl-229186.

Full text
Abstract:
The thesis deals with the design of a device for testing localization techniques for indoor navigation. An autonomous robot was designed as the most appropriate platform for this testing. The thesis is divided into three parts. The first describes various kinds of robots, their possible uses, and sensors that could help solve the problem. The second part deals with the design and construction of the robot. The robot is built on a differential-type chassis with a support spur. Two electric motors, each with a gearbox and an output-shaft speed sensor, form the drive unit. The coat of the robot was designed for good functionality and an attractive overall look; the robot is also used for presentations of robotics. The thesis provides the complete design of the chassis and body construction, along with the control section and sensorics. The last part describes a statistical model of the robot's movement, based on several experiments performed to find deviations of the sensor measurements from the real situation.
APA, Harvard, Vancouver, ISO, and other styles
46

Clark, Ronald. "Visual-inertial odometry, mapping and re-localization through learning." Thesis, University of Oxford, 2017. http://ora.ox.ac.uk/objects/uuid:69b03c50-f315-42f8-ad41-d97cd4c9bf09.

Full text
Abstract:
Precise pose information is a fundamental prerequisite for numerous applications in robotics, AI and mobile computing. Monocular cameras are the ideal sensor for this purpose - they are cheap, lightweight and ubiquitous. As such, monocular visual localization is widely regarded as a cornerstone requirement of machine perception. However, a large gap still exists between the performance that these applications require and that which is achievable through existing monocular perception algorithms. In this thesis we directly tackle the issue of robust egocentric visual localization and mapping through a data-centric approach. As a first major contribution, we propose novel learnt models for visual odometry which form the basis of the ego-motion estimates used in later chapters. The proposed approaches are less fragile and much more robust than existing ones. We present experimental evidence that these approaches can not only approach the accuracy of standard methods but in many cases also show major improvements in computational and memory efficiency. To cope with the drift inherent in the odometry methods, we then introduce a novel learnt spatio-temporal model for performing global relocalization updates. The proposed approach allows one to efficiently infer the global location of an image stream at a fraction of the time of traditional feature-based approaches, with minimal loss in localization accuracy. Finally, we present a novel SLAM system integrating our learnt priors for creating 3D maps from monocular image sequences. The approach is designed to harness multiple input sources, including prior depth and ego-motion estimates, and incorporates both loop-closure and relocalization updates. The approach, based on the well-established standard visual-inertial structure-from-motion process, allows us to perform accurate posterior inference of camera poses and scene structure to significantly boost the reconstruction robustness and fidelity.
Through our qualitative and quantitative experimentation on a wide range of datasets, we conclude that the proposed methods can bring accurate visual localization to a wide class of consumer devices and robotic platforms.
APA, Harvard, Vancouver, ISO, and other styles
47

Gui, Jianjun. "Direct visual and inertial odometry for monocular mobile platforms." Thesis, University of Essex, 2018. http://repository.essex.ac.uk/21726/.

Full text
Abstract:
Nowadays visual and inertial information is readily available from small mobile platforms, such as quadcopters. However, due to limited onboard resources and capability, it is still a challenge to develop localisation and mapping estimation algorithms for small mobile platforms. Visual techniques for tracking and motion estimation have been developed abundantly, especially using interest points as features. However, such sparse feature-based methods quickly diverge due to noise, partial occlusion, or lighting variation across views. Only in recent years have direct visual approaches, which use pixel information densely, semi-densely, or statistically, shown significant improvements in robustness and stability. On the other hand, inertial sensors measure angular velocity and linear acceleration, which can be integrated to predict the relative velocity, position, and orientation of a mobile platform. In practice, the accumulated error from inertial sensors is often compensated by cameras, while the loss of agile ego-motion from visual sensors can be compensated by inertial motion estimation. Based on the complementary nature of visual and inertial information, this research focuses on how to use direct visual approaches to provide location information through a monocular camera while fusing it with inertial information to enhance robustness and accuracy. The proposed algorithms can be applied to practical datasets collected from mobile platforms. In particular, direct-based and mutual-information-based methods are explored in detail. Two visual-inertial odometry algorithms are proposed in the framework of the multi-state constraint Kalman filter. They are also tested with real data from a flying robot in complex indoor and outdoor environments. The results show that the direct-based methods are robust in image processing and accurate when moving along straight lines with slight rotation. Furthermore, the visual and inertial fusion strategies are investigated to establish their intrinsic links, and an improvement based on iterative steps in the filter propagation is proposed. In addition, a self-built flying robot was developed for experimental data collection.
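The abstract notes that angular velocity and linear acceleration from inertial sensors can be integrated to predict relative velocity, position, and orientation. A minimal planar dead-reckoning sketch of that integration step (hypothetical names, simple Euler integration, no noise or bias modelling; not the thesis's actual MSCKF implementation):

```python
import math

def integrate_imu(samples, dt):
    """Planar dead reckoning: integrate yaw rate and body-frame
    forward acceleration into heading, velocity and position."""
    x = y = vx = vy = yaw = 0.0
    for ax_body, yaw_rate in samples:
        yaw += yaw_rate * dt              # orientation from angular velocity
        ax = ax_body * math.cos(yaw)      # rotate acceleration into world frame
        ay = ax_body * math.sin(yaw)
        vx += ax * dt                     # velocity from acceleration
        vy += ay * dt
        x += vx * dt                      # position from velocity
        y += vy * dt
    return x, y, yaw

# Constant 1 m/s^2 forward acceleration, no rotation, for 1 s:
x, y, yaw = integrate_imu([(1.0, 0.0)] * 100, dt=0.01)
# x approaches 0.5 * a * t^2 = 0.5 m as dt shrinks
```

The accumulated discretisation and sensor error in exactly this kind of open-loop integration is what the visual measurements are used to correct in the fused filter.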
APA, Harvard, Vancouver, ISO, and other styles
48

Myriokefalitakis, Panteleimon. "Real-time conversion of monodepth visual odometry enhanced network." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-288488.

Full text
Abstract:
This thesis work belongs to the field of self-supervised monocular depth estimation and constitutes a conversion of the work done in [1]. The purpose is to take the computationally expensive model in [1] as the baseline model of this work and try to create a lightweight model out of it. The current work proposes a network suited for deployment on embedded devices such as the NVIDIA Jetson TX2, where short runtime, a small memory footprint, and low power consumption matter most. In other words, if those requirements are not met, the model cannot be functional on embedded processors no matter how high its precision. Thus, small mobile platforms such as drones, delivery robots, etc. cannot exploit the benefits of deep learning. The proposed network has 29.7× fewer parameters than the baseline model [1] and uses only 10.6 MB for a forward pass, in contrast to the 227 MB used by the network in [1]. Consequently, the proposed model can run on the GPUs of embedded devices. Lastly, it is able to infer depth at promising speed even on standard CPUs while providing comparable or higher accuracy than other works.
This thesis belongs to the field of self-supervised monocular depth estimation and constitutes a conversion of the work done in [1]. The aim is to take the computationally expensive model in [1] as the baseline model of this work and attempt to create a lightweight model from it. The present work proposes a network suitable for deployment on embedded devices such as the NVIDIA Jetson TX2, where short runtime, a small memory footprint, and low power consumption matter most. In other words, if these requirements are not met, the model cannot function on embedded processors no matter how high its precision. Thus, small mobile platforms such as drones, delivery robots, etc. cannot exploit the benefits of deep learning. The proposed network has 29.7× fewer parameters than the baseline model [1] and uses only 10.6 MB for a forward pass, in contrast to the 227 MB used by the network in [1]. Consequently, the proposed model can run on the GPUs of embedded devices. Finally, it can infer depth at promising speed even on standard CPUs while providing comparable or higher accuracy than other works.
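Lightweight networks of this kind typically cut parameter counts by replacing standard convolutions with depthwise-separable ones; the abstract does not state that this particular conversion uses that technique, so the sketch below (with hypothetical layer sizes) only illustrates where reductions of this magnitude come from:

```python
def conv_params(k, c_in, c_out):
    """Weights of a standard k x k convolution (bias ignored)."""
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    """Depthwise k x k convolution (one filter per input channel)
    followed by a 1 x 1 pointwise convolution."""
    return k * k * c_in + c_in * c_out

# Hypothetical mid-network layer: 3x3 kernel, 256 -> 256 channels.
std = conv_params(3, 256, 256)             # 589824 weights
sep = separable_conv_params(3, 256, 256)   # 67840 weights
print(f"standard: {std}, separable: {sep}, reduction: {std / sep:.1f}x")
```

Repeating such a substitution across many layers, together with narrower channel widths, is how an order-of-magnitude parameter reduction like the one reported above is usually reached.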
APA, Harvard, Vancouver, ISO, and other styles
49

Chermak, Lounis. "Standalone and embedded stereo visual odometry based navigation solution." Thesis, Cranfield University, 2015. http://dspace.lib.cranfield.ac.uk/handle/1826/9319.

Full text
Abstract:
This thesis investigates techniques for, and designs, an autonomous stereo-vision-based navigation sensor to improve stereo visual odometry for navigation in unknown environments, in particular autonomous navigation in a space mission context, which imposes challenging constraints on algorithm development and hardware requirements. For instance, the Global Positioning System (GPS) is not available in this context, so a navigation solution cannot rely on similar external sources of information. This problem is addressed through the conception of an intelligent perception-sensing device that provides precise outputs for absolute and relative 6-degree-of-freedom (DOF) positioning. This is achieved using only images from calibrated stereo cameras, possibly coupled with an inertial measurement unit (IMU), while fulfilling real-time processing requirements. Moreover, no prior knowledge about the environment is assumed. Robotic navigation has motivated research into different and complementary areas such as stereovision, visual motion estimation, optimisation, and data fusion, and several contributions have been made in these areas. Firstly, an efficient feature detection, stereo matching, and feature tracking strategy based on the Kanade-Lucas-Tomasi (KLT) feature tracker is proposed to form the base of the visual motion estimation. Secondly, in order to cope with extreme illumination changes, a high dynamic range (HDR) imaging solution is investigated and a comparative assessment of feature tracking performance is conducted. Thirdly, a two-view local bundle adjustment scheme based on trust-region minimisation is proposed for precise visual motion estimation. Fourthly, a novel KLT feature tracker using IMU information is integrated into the visual odometry pipeline.
Finally, a smart standalone stereo visual/IMU navigation sensor has been designed, integrating an innovative combination of hardware with the novel software solutions proposed above. As a result of this balanced combination of hardware and software, we achieved 5 fps processing of up to 750 initial features at a resolution of 1280×960, which to our knowledge is the highest resolution reached in real time for visual odometry applications. In addition, the visual odometry accuracy of our algorithm matches the state of the art, with less than 1% relative error in the estimated trajectories.
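Stereo visual odometry as described above depends on recovering metric depth from the disparity of features matched between calibrated cameras. A minimal sketch of the standard pinhole relation Z = fB/d, with hypothetical rig parameters rather than those of the sensor built in the thesis:

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth of a matched feature from a rectified stereo pair:
    Z = f * B / d (pinhole model, horizontal baseline)."""
    if disparity_px <= 0:
        raise ValueError("zero or negative disparity: feature at infinity or bad match")
    return focal_px * baseline_m / disparity_px

# Hypothetical rig: 700 px focal length, 12 cm baseline.
z_near = stereo_depth(700.0, 0.12, 42.0)  # ~2.0 m
z_far = stereo_depth(700.0, 0.12, 7.0)    # ~12.0 m
```

Because depth grows as disparity shrinks, small matching errors on distant features cause large depth errors, which is one reason sub-pixel feature tracking quality (here, the KLT-based strategy) matters so much for odometry accuracy.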
APA, Harvard, Vancouver, ISO, and other styles
50

Greenberg, Jacob. "Visual Odometry for Autonomous MAV with On-Board Processing." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-177290.

Full text
Abstract:
A new visual registration algorithm (Adaptive Iterative Closest Keypoint, AICK) is tested and evaluated as a positioning tool on a Micro Aerial Vehicle (MAV). Frames captured by a Kinect-like RGB-D camera are analysed and an estimated position of the MAV is extracted, in the hope of finding a positioning solution for GPS-denied environments. This thesis focuses on an indoor office environment. The MAV is flown manually, capturing in-flight RGB-D images which are registered with the AICK algorithm. The result is analysed to determine whether AICK is viable for autonomous flight based on on-board position estimates. The results show potential for a working autonomous MAV in GPS-denied environments; however, some surroundings have proven difficult. The lack of visual features on, e.g., a white wall causes problems and uncertainties in the positioning, which is even more troublesome when the distance to the surroundings exceeds the RGB-D camera's depth range. With further work on these weaknesses, we believe that a robust autonomous MAV using AICK for positioning is plausible.
A new visual registration algorithm (Adaptive Iterative Closest Keypoint, AICK) is tested and evaluated as a positioning tool on a Micro Aerial Vehicle (MAV). Images from a Kinect-like RGB-D camera are analysed and an approximate position of the MAV is computed. The hope is to find a positioning solution for environments without GPS coverage; this work focuses on indoor office environments. The MAV is flown manually while RGB-D images are captured, which are then registered using AICK. The result is analysed in order to conclude whether AICK is a reasonable method for achieving autonomous flight using the estimated position. The results show the potential of a working autonomous MAV in environments without GPS coverage, but in some of the tested environments AICK currently performs poorly. The lack of visual features on, e.g., a white wall introduces problems and uncertainties in the positioning, and this is even more troublesome when the distance to the surroundings exceeds the range of the RGB-D cameras. With continued work on these weaknesses, a robust autonomous MAV using AICK for positioning is plausible.
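ICP-style keypoint registration of the kind AICK performs reduces, at each iteration, to estimating a rigid transform between matched point sets. A simplified closed-form sketch of that alignment step in 2D (AICK itself registers 3D RGB-D keypoints; the names and data here are hypothetical):

```python
import math

def rigid_align_2d(src, dst):
    """Closed-form 2D rigid transform (rotation theta, translation t)
    minimising sum ||R p + t - q||^2 over matched point pairs."""
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(q[0] for q in dst) / n; cdy = sum(q[1] for q in dst) / n
    s_cos = s_sin = 0.0
    for (px, py), (qx, qy) in zip(src, dst):
        ax, ay = px - csx, py - csy      # centred source point
        bx, by = qx - cdx, qy - cdy      # centred target point
        s_cos += ax * bx + ay * by
        s_sin += ax * by - ay * bx
    theta = math.atan2(s_sin, s_cos)     # optimal rotation angle
    tx = cdx - (csx * math.cos(theta) - csy * math.sin(theta))
    ty = cdy - (csx * math.sin(theta) + csy * math.cos(theta))
    return theta, (tx, ty)

src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
# Target: source rotated by 90 degrees and shifted by (2, 3).
dst = [(2.0, 3.0), (2.0, 4.0), (1.0, 3.0)]
theta, t = rigid_align_2d(src, dst)      # theta ~ pi/2, t ~ (2, 3)
```

The featureless-wall failure mode reported above corresponds to this step receiving too few (or too ambiguous) keypoint matches, so the recovered transform, and hence the position estimate, becomes unreliable.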
APA, Harvard, Vancouver, ISO, and other styles
