Dissertations / Theses on the topic 'Odometri'
Consult the top 50 dissertations / theses for your research on the topic 'Odometri.'
Johansson, Sixten. "Navigering och styrning av ett autonomt markfordon." Thesis, Linköping University, Department of Electrical Engineering, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-6006.
Full text
In this thesis, a system for navigation and control of an autonomous ground vehicle has been implemented. The purpose is to further develop a vehicle that will be used in evaluations of path-planning algorithms and in studies of other autonomy functions. With different sensor configurations and sensor models it is also possible to evaluate different navigation strategies. The work was performed on a given platform that measures the vehicle's movement using only simple ultrasonic sensors and wheel-mounted pulse encoders. The vehicle is able to navigate autonomously and follow a simple path in a known environment. The state estimation is performed with a particle filter, using models of the vehicle and its sensors.
The work is a continuation of a previous project, Collision Avoidance för autonomt fordon, carried out at Linköping University in the spring of 2005.
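The particle-filter state estimation described in this abstract can be sketched in a few lines. Below is a minimal NumPy illustration, assuming a unicycle motion model and a single range-to-wall measurement; the motion model, noise levels, and wall position are illustrative choices, not details taken from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

def motion_update(particles, v, w, dt, noise=(0.02, 0.01)):
    """Propagate each particle [x, y, theta] with a noisy unicycle model."""
    n = len(particles)
    v_n = v + rng.normal(0, noise[0], n)
    w_n = w + rng.normal(0, noise[1], n)
    particles[:, 0] += v_n * np.cos(particles[:, 2]) * dt
    particles[:, 1] += v_n * np.sin(particles[:, 2]) * dt
    particles[:, 2] += w_n * dt
    return particles

def measurement_update(particles, z, wall_x=5.0, sigma=0.1):
    """Weight particles by the likelihood of a range reading to a wall at x = wall_x."""
    expected = wall_x - particles[:, 0]
    w = np.exp(-0.5 * ((z - expected) / sigma) ** 2) + 1e-300
    return w / w.sum()

def resample(particles, weights):
    """Systematic resampling: keep particle count, favor high weights."""
    n = len(particles)
    positions = (np.arange(n) + rng.random()) / n
    idx = np.searchsorted(np.cumsum(weights), positions)
    return particles[np.minimum(idx, n - 1)]

# Simulated run: robot drives straight at 1 m/s toward a wall at x = 5 m.
particles = np.zeros((500, 3))
for step in range(1, 11):
    particles = motion_update(particles, v=1.0, w=0.0, dt=0.1)
    true_range = 5.0 - 0.1 * step
    weights = measurement_update(particles, true_range + rng.normal(0, 0.05))
    particles = resample(particles, weights)

x_est = particles[:, 0].mean()  # mean x after 1 s at 1 m/s: close to 1.0
```

The predict/weight/resample loop is the generic particle-filter skeleton; a real system would replace the toy wall measurement with the ultrasonic sensor model.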
CHEN, HONGYI. "GPS-oscillation-robust Localization and Visionaided Odometry Estimation." Thesis, KTH, Maskinkonstruktion (Inst.), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-247299.
Full text
GPS/IMU integrated systems are often used for vehicle navigation, with the fusion algorithm normally based on a Kalman filter. One problem with such a system is that oscillating GPS measurements in urban environments can easily cause the localization to diverge. In addition, the heading estimate can be sensitive to magnetic disturbances if it relies on an IMU with an integrated magnetometer. This thesis addresses the localization problem created by GPS oscillations and outages using an adaptive extended Kalman filter (AEKF). For the heading estimate, stereo visual odometry (VO) is used to attenuate the effect of magnetic disturbances through sensor fusion. The vision-aided AEKF-based algorithm is tested both under good GPS conditions and with oscillating GPS measurements combined with magnetic disturbances. In the cases considered, the algorithm is verified to outperform the conventional extended Kalman filter (CEKF) and the unscented Kalman filter (UKF) in position estimation by 53.74% and 40.09%, respectively, and to reduce heading-estimation errors.
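The core idea behind making a GPS-aided Kalman filter robust to oscillating fixes is to adapt the measurement noise to the innovation: when a GPS reading disagrees wildly with the prediction, its assumed noise is inflated so it barely moves the estimate. A minimal 1D sketch of that mechanism (constant-velocity model; the gate threshold and noise values are hypothetical tuning constants, not the AEKF from the thesis):

```python
import numpy as np

def adaptive_kf(zs, dt=1.0, q=0.01, r0=1.0, gate=9.0):
    """1D constant-velocity Kalman filter that inflates R for outlier fixes."""
    F = np.array([[1.0, dt], [0.0, 1.0]])
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    H = np.array([[1.0, 0.0]])
    x = np.zeros(2)
    P = np.eye(2) * 10.0
    out = []
    for z in zs:
        # Predict
        x = F @ x
        P = F @ P @ F.T + Q
        # Innovation and its covariance
        y = z - H @ x
        R = np.array([[r0]])
        S = H @ P @ H.T + R
        # Adaptive step: if the normalized innovation squared exceeds the
        # gate, inflate R so the suspect measurement is nearly ignored.
        nis = float(y @ np.linalg.solve(S, y))
        if nis > gate:
            R *= nis / gate
            S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return out

# Vehicle moving at 1 m/s; one wildly oscillating GPS fix at t = 5.
zs = [float(t) for t in range(10)]
zs[5] += 30.0
est = adaptive_kf(zs)
```

With the gate active, the 30 m outlier at `t = 5` perturbs the track only slightly instead of dragging it away, and the estimate recovers with the following good fixes.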
Pereira, Fabio Irigon. "High precision monocular visual odometry." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2018. http://hdl.handle.net/10183/183233.
Full text
Recovering three-dimensional information from two-dimensional images is an important problem in computer vision with several applications in our society. Robotics, the entertainment industry, medical diagnosis and prosthetics, and even interplanetary exploration benefit from vision-based 3D estimation. The problem can be divided into two interdependent operations: estimating the camera position and orientation at the moment each image was produced, and estimating the 3D scene structure. This work focuses on computer vision techniques used to estimate the trajectory of a vehicle-mounted camera, a problem known as visual odometry. To provide an objective measure of estimation efficiency and to compare the achieved results with the state of the art in visual odometry, a popular high-precision dataset was selected and used. In the course of this work, new techniques are proposed for image feature tracking, camera pose estimation, 3D point position calculation, and scale recovery. The achieved results outperform the best-ranked results on the chosen dataset.
Porteš, Petr. "Návrh a realizace odometrických snímačů pro mobilní robot s Ackermannovým řízením." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2017. http://www.nusl.cz/ntk/nusl-318145.
Full textPärkkä, J. (Jarmo). "Reaaliaikainen visuaalinen odometria." Master's thesis, University of Oulu, 2013. http://urn.fi/URN:NBN:fi:oulu-201312021943.
Full text
Visual odometry is the process of estimating the motion of a vehicle, human, or robot using the input of one or more cameras. Application domains include robotics, wearable computing, augmented reality, and automotive. It is a good supplement to navigation systems because it operates in environments where GPS does not. Visual odometry was developed as a substitute for wheel odometry, because its use does not depend on the terrain and it can be applied without restrictions on the mode of movement (wheels, flying, walking). In this work, visual odometry is examined and developed for use in a real-time embedded system. The basics of visual odometry are discussed, and simultaneous localization and mapping (SLAM) is introduced, since visual odometry can appear as a part of SLAM. The purpose of this work is to develop a visual odometry algorithm for Parrot's robot helicopter AR.Drone 2.0, so that it could fly independently in the future. The algorithm is based on Civera's EKF-SLAM method, with feature extraction replaced by an approach used earlier in global motion estimation. The operation of the algorithm is tested by measuring its execution time on different image sequences and by analyzing the camera's movement from the map it draws. Furthermore, the plausibility of the navigation information is examined: the system's operation is visually analyzed from video and compared against a reference method. The developed visual odometry method is found to be a functional solution for a real-time embedded system under certain constraints.
Nishitani, André Toshio Nogueira. "Localização baseada em odometria visual." Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-17082016-095838/.
Full text
The localization problem consists of estimating the position of a robot with respect to some external reference, and it is an essential part of the navigation systems of robots and autonomous vehicles. Compared to encoder-based odometry, localization based on visual odometry stands out in estimating the rotation and direction of movement. This kind of approach is an interesting choice for vehicle control systems in urban environments, where visual information is in any case required for extracting the semantic content of street signs and road markings. In this context, this project proposes the development of a visual odometry system based on structure from motion, using visual information acquired from a monocular camera to estimate the vehicle pose. The absolute scale problem, inherent to the use of monocular cameras, is solved using some prior knowledge of the metric relation between image points and points lying on a common world plane.
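A common concrete form of the "metric relation with a world plane" mentioned above: for a pinhole camera at known height h above flat ground, with the optical axis parallel to the ground, a ground point imaged at row v below the principal-point row v0 has depth Z = f·h / (v − v0). A small illustrative sketch (the focal length, height, and pixel values are made-up numbers, not the thesis's calibration):

```python
def ground_depth(v, f=700.0, h=1.6, v0=240.0):
    """Metric depth of a flat-ground point imaged at row v (pinhole camera,
    optical axis parallel to the ground, camera height h in meters)."""
    assert v > v0, "point must lie below the horizon row v0"
    return f * h / (v - v0)

def scale_from_ground(unscaled_depth, v):
    """Absolute scale factor mapping an up-to-scale monocular depth
    onto the metric depth implied by the ground plane."""
    return ground_depth(v) / unscaled_depth

# A ground feature at row 340 is 700 * 1.6 / 100 = 11.2 m away:
z = ground_depth(340.0)            # 11.2
s = scale_from_ground(2.0, 340.0)  # up-to-scale depth 2.0 -> scale 5.6
```

Applying `s` to every up-to-scale translation in the odometry chain fixes the global scale, which is the role this relation typically plays in monocular pipelines.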
Ligocki, Adam. "Metody současné sebelokalizace a mapování pro hloubkové kamery." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2017. http://www.nusl.cz/ntk/nusl-316270.
Full textSouza, Anderson Abner de Santana. "Mapeamento com Sonar Usando Grade de Ocupa??o baseado em Modelagem Probabil?stica." Universidade Federal do Rio Grande do Norte, 2008. http://repositorio.ufrn.br:8080/jspui/handle/123456789/15203.
Full text
In this work, we propose a probabilistic mapping method in which the mapped environment is represented by a modified occupancy grid. The main idea of the proposed method is to let a mobile robot construct, in a systematic and incremental way, the geometry of its surroundings, obtaining at the end a complete map of the environment. As a consequence, the robot can move through the environment safely, based on a confidence value for the data obtained from its perceptual system. The map is represented coherently with the sensory data, whether noisy or not, coming from the robot's exteroceptive and proprioceptive sensors. The characteristic noise in the data from these sensors is treated by probabilistic modeling, so that its effects are visible in the final result of the mapping process. The results of the experiments performed indicate the viability of the methodology and its applicability in the area of autonomous mobile robotics, thus constituting a contribution to the field.
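The probabilistic treatment of sensor noise in an occupancy grid usually rests on the standard log-odds update: each cell stores a log-odds value that is nudged up or down as sonar readings arrive, so repeated observations average the noise out. A minimal sketch of that bookkeeping (the increment values stand in for a real sonar inverse sensor model and are illustrative only):

```python
import math

L_OCC, L_FREE = 0.85, -0.4   # log-odds increments (illustrative sensor model)

def update_cell(l, occupied):
    """Bayesian log-odds update for one grid cell given one reading."""
    return l + (L_OCC if occupied else L_FREE)

def probability(l):
    """Convert a cell's log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))

# An unknown cell starts at log-odds 0 (p = 0.5); three consistent
# 'occupied' readings push it toward certainty.
l = 0.0
for _ in range(3):
    l = update_cell(l, occupied=True)
p = probability(l)   # ~0.93 after three hits
```

The "confidence value" the abstract mentions maps naturally onto `probability(l)`: navigation can treat cells near 0.5 as unknown and only trust cells whose probability is close to 0 or 1.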
Silva, Bruno Marques Ferreira da. "Odometria visual baseada em técnicas de structure from motion." Universidade Federal do Rio Grande do Norte, 2011. http://repositorio.ufrn.br:8080/jspui/handle/123456789/15364.
Full text
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
Visual odometry is the process of estimating the position and orientation of a camera based solely on images and on features (projections of visual landmarks present in the scene) extracted from them. With the advances in computer vision algorithms and in computer processing power, the subarea known as Structure from Motion (SFM) began to supply mathematical tools for localization systems aimed at robotics and augmented reality applications, in contrast to its initial purpose of serving inherently offline solutions for 3D reconstruction and image-based modeling. Accordingly, this work proposes a pipeline for obtaining relative position that uses a single previously calibrated camera as positional sensor and is based entirely on models and algorithms from SFM. Techniques usually applied in camera localization systems, such as Kalman filters and particle filters, are not employed, so no additional information, such as a probabilistic model of camera state transitions, is required. Experiments were performed to assess both the 3D reconstruction quality and the camera position estimated by the system, processing image sequences captured in realistic operating environments and comparing them against ground truth provided by the odometer of a mobile robotic platform.
Quist, Eric Blaine. "UAV Navigation and Radar Odometry." BYU ScholarsArchive, 2015. https://scholarsarchive.byu.edu/etd/4439.
Full textMasson, Clément. "Direction estimation using visual odometry." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-169377.
Full text
This master's thesis addresses the problem of measuring object directions from a fixed observation point. A new method is proposed, based on a single rotating camera and requiring the known directions of only two (or more) landmarks. In a first phase, multi-view geometry is used to estimate the camera rotations and the directions of key elements from a set of overlapping images. In a second phase, the direction of any object can then be estimated by resectioning the camera associated with an image showing this object. A detailed description of the algorithmic chain is given, together with test results on both synthetic data and real images taken with an infrared camera.
Pol, Sabine. "Odometry for a Planetary Exploration Rover." Thesis, KTH, Reglerteknik, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-106249.
Full textJohansson, Fredrik. "Visual Stereo Odometry for Indoor Positioning." Thesis, Linköpings universitet, Datorseende, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-81215.
Full textVenturelli, Cavalheiro Guilherme. "Fusing visual odometry and depth completion." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122517.
Full text
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 57-62).
Recent advances in technology indicate that autonomous vehicles, and self-driving cars in particular, may become commonplace in the near future. This thesis contributes to that scenario by studying the problem of depth perception based on sequences of camera images. We start by presenting a sensor fusion framework that achieves state-of-the-art performance when completing depth from sparse LiDAR measurements and a camera. Then, we study how the system performs under a variety of modifications of the sparse input, until we ultimately replace the LiDAR measurements with triangulations from a typical sparse visual odometry pipeline. We are then able to achieve a small improvement over the single-image baseline and chart guidelines to assist in designing a system with even more substantial gains.
by Guilherme Venturelli Cavalheiro.
S.M., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics
Szente, Michal. "Vizuální odometrie pro robotické vozidlo Car4." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2017. http://www.nusl.cz/ntk/nusl-317205.
Full textSantos, Cristiano Flores dos. "Um framework para avaliação de mapeamento tridimensional Utilizando técnicas de estereoscopia e odometria visual." Universidade Federal de Santa Maria, 2016. http://repositorio.ufsm.br/handle/1/12038.
Full text
Three-dimensional mapping of environments has been studied intensively over the last decade. Among the benefits of this research topic are adding autonomy to automobiles or even drones; a three-dimensional representation also allows a given scene to be visualized interactively and in greater detail. However, at the time this work was written, no framework had been found that presents in detail the implementation of algorithms for 3D mapping of outdoor environments while approaching real-time processing. This work therefore developed a framework comprising the main stages of three-dimensional reconstruction. Stereoscopy was chosen as the technique for acquiring depth information about the scene. In addition, four depth-map generation algorithms were evaluated, and a rate of 9 frames per second was achieved.
Holmqvist, Niclas. "HANDHELD LIDAR ODOMETRY ESTIMATION AND MAPPING SYSTEM." Thesis, Mälardalens högskola, Inbyggda system, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-41137.
Full textBurusa, Akshay Kumar. "Visual-Inertial Odometry for Autonomous Ground Vehicles." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-217284.
Full text
Monocular cameras are often used for motion estimation of unmanned aerial vehicles. With the increased interest in autonomous vehicles, the use of monocular cameras in ground vehicles has also grown. This is above all advantageous in situations where satellite navigation (Global Navigation Satellite System, GNSS) is unreliable, for example in open-pit mines. Most systems using monocular cameras have difficulty estimating scale, and this estimation becomes even harder because of a vehicle's higher speeds and faster movements. The aim of this thesis is to estimate scale from monocular camera image data, complemented with data from inertial sensors. It is shown that simultaneous estimation of a vehicle's position and scale is possible through fusion of image and inertial data using an extended Kalman filter (EKF). The convergence of the estimate depends on several factors, including initialization errors. An accurate scale estimate in turn enables an accurate position estimate, which allows localization of vehicles in the absence of GNSS and thereby offers increased redundancy.
Rao, Anantha N. "Learning-based Visual Odometry - A Transformer Approach." University of Cincinnati / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1627658636420617.
Full textPereira, Ana Rita. "Visual odometry: comparing a stereo and a multi-camera approach." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/18/18153/tde-11092017-095254/.
Full text
The objective of this master's project is to implement, analyze, and compare visual odometry approaches, so as to contribute to the localization of an autonomous vehicle. The stereo visual odometry algorithm Libviso2 is compared with a proposed method that uses an omnidirectional multi-camera system. In this method, monocular visual odometry is computed for each camera individually, and the best estimate is then selected through a voting process involving all cameras. Because the vision system is omnidirectional, the most feature-rich part of the surroundings can always be used to estimate the vehicle's relative pose. The experiments use Bumblebee XB3 and Ladybug 2 cameras mounted on the roof of a vehicle. The voting process of the proposed omnidirectional multi-camera method improves on the individual monocular estimates; stereo visual odometry, however, provides more accurate results.
Najman, Jan. "Aplikace SLAM algoritmů pro vozidlo s čtyřmi řízenými koly." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2015. http://www.nusl.cz/ntk/nusl-231076.
Full textArnould, Philippe. "Étude de la localisation d'un robot mobile par fusion de données." Vandoeuvre-les-Nancy, INPL, 1993. http://www.theses.fr/1993INPL095N.
Full textTomasi, Junior Darci Luiz. "Modelo de calibração para sistemas de odometria robótica." reponame:Repositório Institucional da UFPR, 2016. http://hdl.handle.net/1884/45704.
Full text
Dissertation (master's) - Universidade Federal do Paraná, Setor de Ciências Exatas, Programa de Pós-Graduação em Informática. Defense: Curitiba, 30/11/2016
Includes references: f. 39
Abstract: In order to navigate a robotic base in an unfamiliar environment, some mechanism to detect position and location must be provided. When the robot is navigating and makes use of this mechanism, errors from the environment and from the robotic base itself are introduced into the system, resulting in erroneous positioning. One way to reduce the error amplitude is an efficient calibration model, capable of identifying and estimating acceptable values for the main sources of uncertainty in odometry calculations. This work presents a new calibration model comparable to the known classical methods, distinguished by the way in which the calibration is performed; this is also the main limitation on improving the results beyond the proposed method. After the proposed standard procedure is carried out, the results are equivalent to those of the known classical methods. Keywords: UMBmark, Odometry, Calibration.
Silva, Ricardo Luís da Mota. "Removable odometry unit for vehicles with Ackermann steering." Master's thesis, Universidade de Aveiro, 2014. http://hdl.handle.net/10773/13699.
Full text
The main objective of this work is to develop an odometry solution for vehicles with Ackermann steering. The solution had to be portable, flexible, and easy to mount. After a study of the state of the art and a survey of possible solutions, the chosen approach was based on visual odometry. The next steps were to study the feasibility of using line-scan image sensors for visual odometry. The image sensor was used to compute the longitudinal velocity, and the orientation of motion was computed using two gyroscopes. To test the method, several experiments were carried out indoors under controlled conditions. The ability to measure velocity was tested on straight-line movements, diagonal movements, circular movements, and movements with a changing distance from the ground. The data was processed with correlation algorithms and the results were documented. Based on the results, it is safe to conclude that odometry with line-scan sensors aided by inertial sensors has potential for real-world applicability.
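The correlation processing mentioned above amounts to finding the shift that best aligns two consecutive line-scan readings; that shift, multiplied by the ground sampling distance and the line rate, gives the longitudinal velocity. A toy NumPy sketch on a synthetic ground texture (the pixel pitch and line rate are made-up numbers, not the thesis's hardware parameters):

```python
import numpy as np

def pixel_shift(line_a, line_b):
    """Displacement (in pixels) of line_b relative to line_a, taken as
    the argmax of their full cross-correlation after mean removal."""
    a = line_a - line_a.mean()
    b = line_b - line_b.mean()
    corr = np.correlate(a, b, mode="full")
    return int(np.argmax(corr)) - (len(b) - 1)

rng = np.random.default_rng(1)
ground = rng.random(600)        # synthetic ground texture
line1 = ground[100:400]
line2 = ground[107:407]         # texture advanced 7 pixels between scans

shift = pixel_shift(line1, line2)
# velocity = shift * ground_pixel_size * line_rate (illustrative units):
velocity = shift * 0.0005 * 1000.0   # 0.5 mm/pixel at 1 kHz lines -> m/s
```

With enough overlap and texture, the correlation peak is sharp; in practice sub-pixel interpolation around the peak is usually added for smoother velocity estimates.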
Wuthrich, Tori (Tori Lee). "Learning visual odometry primitives for computationally constrained platforms." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122419.
Full text
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 51-52).
Autonomous navigation for robotic platforms, particularly using techniques that leverage an onboard camera, is currently of significant interest to the robotics community. Designing methods to localize small, resource-constrained robots is a particular challenge due to the limited availability of computing power and of physical space for sensors. A computer-vision, machine-learning-based localization method was proposed by researchers investigating the automation of medical procedures; however, we believed the method to also be promising for robots with low size, weight, and power (SWAP) budgets. Unlike traditional odometry methods, in this case a machine learning model can be trained offline and can then generate odometry measurements quickly and efficiently. This thesis describes the implementation of the learning-based visual odometry method in the context of autonomous drones. We refer to the method as RetiNav due to its similarities with the way the human eye processes light signals from its surroundings. We make several modifications to the method relative to the initial design, based on a detailed parameter study, and we test the method on a variety of challenging flight datasets. We show that over the course of a trajectory, RetiNav achieves as low as 1.4% error in predicting the distance traveled. We conclude that such a method is a viable component of a localization system, and we propose the next steps for work in this area.
by Tori Wuthrich.
S.M., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics
Henriksson, Johan. "Radar odometry based on Fuzzy-NDT scan registration." Thesis, Örebro universitet, Institutionen för naturvetenskap och teknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-94492.
Full textBezerra, Clauber Gomes. "Localiza??o de um rob? m?vel usando odometria e marcos naturais." Universidade Federal do Rio Grande do Norte, 2004. http://repositorio.ufrn.br:8080/jspui/handle/123456789/15411.
Full text
Several methods of mobile robot navigation require measuring the robot's position and orientation in its workspace. For wheeled mobile robots, techniques based on odometry determine the robot's localization by integrating the incremental displacements of its wheels. However, this technique is subject to errors that accumulate with the distance traveled, which makes its exclusive use unfeasible. Other methods are based on detecting natural or artificial landmarks, with known locations, present in the environment. This technique does not generate cumulative errors, but it can require much more processing time than odometry-based methods. Thus, many methods use both techniques, so that odometry errors are periodically corrected with measurements obtained from the landmarks. Following this approach, this work proposes a hybrid localization system for wheeled mobile robots in indoor environments based on odometry and natural landmarks. The landmarks are straight lines defined by the junctions in the environment's floor, forming a two-dimensional grid. Landmark detection from digital images is performed with the Hough transform, combined with heuristics that allow its application in real time. To reduce the landmark search time, we propose mapping odometry errors to an area of the captured image that has a high probability of containing the sought landmark.
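The Hough transform used for detecting the floor-grid lines votes each edge pixel into an accumulator over (ρ, θ), where ρ = x·cosθ + y·sinθ; peaks in the accumulator correspond to lines. A compact NumPy sketch on a synthetic edge image (the accumulator resolution and the test image are illustrative choices):

```python
import numpy as np

def hough_lines(edge, n_theta=180, n_rho=200):
    """Accumulate (rho, theta) votes for every edge pixel in a binary image."""
    h, w = edge.shape
    diag = np.hypot(h, w)                      # |rho| is bounded by the diagonal
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    ys, xs = np.nonzero(edge)
    rhos = xs[:, None] * np.cos(thetas) + ys[:, None] * np.sin(thetas)
    rho_idx = np.round((rhos + diag) / (2 * diag) * (n_rho - 1)).astype(int)
    acc = np.zeros((n_rho, n_theta), dtype=int)
    for t in range(n_theta):                   # unbuffered per-column voting
        np.add.at(acc[:, t], rho_idx[:, t], 1)
    return acc, thetas, diag

# Synthetic edge image: one vertical floor junction at x = 30.
edge = np.zeros((100, 100), dtype=bool)
edge[:, 30] = True
acc, thetas, diag = hough_lines(edge)
r, t = np.unravel_index(np.argmax(acc), acc.shape)
theta = thetas[t]                 # ~0 rad for a vertical line (rho = x)
rho = r / 199 * 2 * diag - diag   # ~30
```

The real-time heuristics the abstract describes correspond to restricting `ys, xs` to the image region where odometry predicts the line, which shrinks the voting loop dramatically.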
Štěpán, Miroslav. "Model robota Trilobot." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2007. http://www.nusl.cz/ntk/nusl-412784.
Full textClark, Ronald. "Visual-inertial odometry, mapping and re-localization through learning." Thesis, University of Oxford, 2017. http://ora.ox.ac.uk/objects/uuid:69b03c50-f315-42f8-ad41-d97cd4c9bf09.
Full textGui, Jianjun. "Direct visual and inertial odometry for monocular mobile platforms." Thesis, University of Essex, 2018. http://repository.essex.ac.uk/21726/.
Full textMyriokefalitakis, Panteleimon. "Real-time conversion of monodepth visual odometry enhanced network." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-288488.
Full text
This thesis belongs to the field of self-supervised monocular depth estimation and is a conversion of the work done in [1]. The aim is to take the computationally expensive model in [1] as the baseline and create a lightweight model from it. The present work targets a network suitable for deployment on embedded devices such as the NVIDIA Jetson TX2, where short runtime, small memory footprint, and low power consumption are paramount. In other words, if these requirements are not met, then no matter how high the accuracy, the model cannot run on embedded processors, and small mobile platforms such as drones, delivery robots, etc. cannot exploit the benefits of depth estimation. The proposed network has roughly 29.7 times fewer parameters than the baseline model [1] and uses only 10.6 MB for a forward pass, as opposed to the 227 MB used by the network in [1]. Consequently, the proposed model can run on the GPUs of embedded devices. Finally, it can run inference at promising speed on standard CPUs while providing accuracy comparable to or higher than other work.
Chermak, Lounis. "Standalone and embedded stereo visual odometry based navigation solution." Thesis, Cranfield University, 2015. http://dspace.lib.cranfield.ac.uk/handle/1826/9319.
Full text
Greenberg, Jacob. "Visual Odometry for Autonomous MAV with On-Board Processing." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-177290.
Full textEn ny visuell registreringsalgoritm (Adaptive Iterative Closest Keypoint, AICK) testas och utvärderas som ett positioneringsverktyg på en Micro Aerial Vehicle (MAV). Tagna bilder från en Kinect liknande RGB-D kamera analyseras och en approximerad position av MAVen beräknas. Förhoppningen är att hitta en positioneringslösning för miljöer utan GPS förbindelse, där detta arbete fokuserar på kontorsmiljöer inomhus. MAVen flygs manuellt samtidigt som RGB-D bilder tas, dessa registreras sedan med hjälp av AICK. Resultatet analyseras för att kunna dra en slutsats om AICK är en rimlig metod eller inte för att åstadkomma autonom flygning med hjälp av den uppskattade positionen. Resultatet visar potentialen för en fungerande autonom MAV i miljöer utan GPS förbindelse, men det finns testade miljöer där AICK i dagsläget fungerar undermåligt. Bristen på visuella särdrag på t.ex. en vit vägg inför problem och osäkerheter i positioneringen, ännu mer besvärande är det när avståndet till omgivningen överskrider RGB-D kamerornas räckvidd. Med fortsatt arbete med dessa svagheter är en robust autonom MAV som använder AICK för positioneringen rimlig.
Svoboda, Ondřej. "Analýza vlastností stereokamery ZED ve venkovním prostředí." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2019. http://www.nusl.cz/ntk/nusl-399416.
Full text
Gräter, Johannes [Verfasser]. "Monokulare Visuelle Odometrie auf Multisensorplattformen für autonome Fahrzeuge / Johannes Gräter." Karlsruhe : KIT Scientific Publishing, 2019. http://d-nb.info/1196294682/34.
Full text
Proenca, Pedro F. "Robust RGB-D odometry under depth uncertainty for structured environments." Thesis, University of Surrey, 2018. http://epubs.surrey.ac.uk/849961/.
Full text
Frey, Kristoffer M. (Kristoffer Martin). "Sparsity and computation reduction for high-rate visual-inertial odometry." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/113745.
Full text
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 147-151).
The navigation problem for mobile robots operating in unknown environments can be posed as a subset of Simultaneous Localization and Mapping (SLAM). For computationally-constrained systems, maintaining and promoting system sparsity is key to achieving the high-rate solutions required for agile trajectory tracking. This thesis focuses on the computation involved in the elimination step of optimization, showing it to be a function of the corresponding graph structure. This observation directly motivates the search for measurement selection techniques that promote sparse structure and reduce computation. While many sophisticated selection techniques exist in the literature, relatively little attention has been paid to the simple yet ubiquitous heuristic of decimation. This thesis shows that decimation produces graphs with an inherently sparse, partitioned super-structure. Furthermore, it is shown analytically for single-landmark graphs that the even spacing of observations characteristic of decimation is near optimal in a weighted number-of-spanning-trees sense. Recent results in the SLAM community suggest that maximizing this connectivity metric corresponds to good information-theoretic performance. Simulation results confirm that decimation-style strategies perform as well as or better than sophisticated policies that require significant computation to execute. Given that decimation consumes negligible computation to evaluate, its performance demonstrated here makes decimation a formidable measurement selection strategy for high-rate, real-time SLAM solutions. Finally, the SAMWISE visual-inertial estimator is described, and thorough experimental results demonstrate its robustness in a variety of scenarios, particularly to the challenges prescribed by the DARPA Fast Lightweight Autonomy program.
This thesis was supported by the Defense Advanced Research Projects Agency (DARPA) under the Fast Lightweight Autonomy program.
by Kristoffer M. Frey.
S.M.
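The decimation heuristic discussed in the abstract above is notable precisely because it costs almost nothing to evaluate; a minimal sketch (hypothetical names, not the thesis's code):

```python
def decimate(measurements, keep_every):
    """Keep every k-th measurement, discarding the rest.

    This is the decimation heuristic: retaining evenly spaced
    observations, which the thesis argues is near optimal for
    single-landmark graphs in a spanning-trees sense, at a
    selection cost that is negligible compared with
    information-theoretic selection policies.
    """
    return measurements[::keep_every]

frames = list(range(20))
kept = decimate(frames, 5)  # keeps frames 0, 5, 10, 15
```

The contrast the thesis draws is that sophisticated policies must score candidate measurements (often via costly matrix computations) before selecting, whereas this slice is O(n) bookkeeping.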
Verpers, Felix. "Improving a stereo-based visual odometry prototype with global optimization." Thesis, Uppsala universitet, Avdelningen för visuell information och interaktion, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-383268.
Full text
Araújo, Darla Caroline da Silva 1989. "Uso de fluxo óptico na odometria visual aplicada a robótica." [s.n.], 2015. http://repositorio.unicamp.br/jspui/handle/REPOSIP/265835.
Full text
Dissertation (Master's) - Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica
Made available in DSpace on 2018-08-26T21:38:28Z (GMT). No. of bitstreams: 1 Araujo_DarlaCarolinedaSilva_M.pdf: 5678583 bytes, checksum: a6ed9886369705a8853f15d431565a3d (MD5) Previous issue date: 2015
Abstract: This work describes a visual odometry method employing the optical flow technique to estimate the motion of a mobile robot from digital images captured by two stereoscopic cameras mounted on it, with the aim of building a map for robot localization. Besides being an alternative to the autonomous motion estimation performed by other sensor types such as GPS, lasers and sonars, this proposal uses an optical processing technique of high computational efficiency. A 3D environment was built to simulate the robot's motion and to capture the images needed to estimate its trajectory and verify the accuracy of the proposed technique. The Lucas-Kanade optical flow technique is used to identify features in images. The results obtained in this work are of great importance for studies of robotic navigation.
Abstract: This work describes a method of visual odometry using the optical flow technique to estimate the motion of a mobile robot from digital images captured by two stereoscopic cameras fixed on it, in order to obtain a map of the robot's location. This proposal is an alternative to the autonomous motion estimation performed by other types of sensors such as GPS, laser and sonar, and uses an optical processing technique of high computational efficiency. To check the accuracy of the technique, a 3D environment was built to simulate the robot performing a trajectory and to capture the images needed to estimate that trajectory. The Lucas-Kanade optical flow technique was used to identify features in the images. The results of this work are of great importance for future robotic navigation studies.
Master's
Solid Mechanics and Mechanical Design
Master in Mechanical Engineering
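The Lucas-Kanade technique cited in the entry above solves a least-squares problem on image gradients. A one-dimensional sketch of the core estimate (purely illustrative, not the thesis's code; real implementations such as the pyramidal version in OpenCV operate on 2-D windows):

```python
def lucas_kanade_1d(frame0, frame1):
    """Estimate a single translational displacement between two 1-D
    intensity signals via the Lucas-Kanade least-squares solution:
    minimize sum (I_x * u + I_t)^2  =>  u = -sum(I_x * I_t) / sum(I_x^2).
    """
    n = len(frame0)
    num = 0.0
    den = 0.0
    for i in range(1, n - 1):
        ix = (frame0[i + 1] - frame0[i - 1]) / 2.0  # spatial gradient
        it = frame1[i] - frame0[i]                  # temporal gradient
        num += ix * it
        den += ix * ix
    return -num / den if den else 0.0

frame0 = [float(i * i) for i in range(11)]
frame1 = [(i - 0.5) ** 2 for i in range(11)]  # frame0 shifted right by 0.5
u = lucas_kanade_1d(frame0, frame1)           # close to the true 0.5 shift
```

The first-order Taylor linearization behind this estimate is only valid for small displacements, which is why practical trackers iterate and use image pyramids for larger motions.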
Santos, Vinícius Araújo. "SiameseVO-Depth: odometria visual através de redes neurais convolucionais siamesas." Universidade Federal de Goiás, 2018. http://repositorio.bc.ufg.br/tede/handle/tede/9083.
Full text
Approved for entry into archive by Luciana Ferreira (lucgeral@gmail.com) on 2018-11-21T11:06:26Z (GMT) No. of bitstreams: 2 Dissertação - Vinícius Araújo Santos - 2018.pdf: 14601054 bytes, checksum: e02a8bcd3cdc93bf2bf202c3933b3f27 (MD5) license_rdf: 0 bytes, checksum: d41d8cd98f00b204e9800998ecf8427e (MD5)
Made available in DSpace on 2018-11-21T11:06:26Z (GMT). No. of bitstreams: 2 Dissertação - Vinícius Araújo Santos - 2018.pdf: 14601054 bytes, checksum: e02a8bcd3cdc93bf2bf202c3933b3f27 (MD5) license_rdf: 0 bytes, checksum: d41d8cd98f00b204e9800998ecf8427e (MD5) Previous issue date: 2018-10-11
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES
Visual Odometry is an important process in image-based robot navigation. The standard methods in this field rely on good feature matching between frames, where feature detection in images stands as a well-addressed problem within Computer Vision. Such techniques are subject to illumination problems, noise and poor feature localization accuracy. Thus, 3D information on a scene may mitigate the uncertainty of the features in images. Deep Learning techniques show great results when dealing with common difficulties of VO such as low illumination conditions and bad feature selection. While Visual Odometry and Deep Learning have been connected previously, no techniques applying Siamese Convolutional Networks to depth information given by disparity maps were found as far as this work's research went. This work aims to fill this gap by applying Deep Learning to estimate egomotion from disparity maps with a Siamese architecture. The SiameseVO-Depth architecture is compared to state-of-the-art VO techniques using the KITTI Vision Benchmark Suite. The results reveal that the chosen methodology succeeded in estimating Visual Odometry, although it does not outperform the state-of-the-art techniques. This work involves fewer steps than standard VO techniques, since it consists of an end-to-end solution, and demonstrates a new approach of Deep Learning applied to Visual Odometry.
Visual Odometry is an important process in image-based robot navigation. Classical methods in this field depend on good feature correspondences between images, and feature detection in images is a widely discussed topic in Computer Vision. These techniques are subject to illumination problems, the presence of noise and low localization accuracy. In this context, the three-dimensional information of a scene can be a way to mitigate the uncertainty about features in images. Deep Learning techniques have shown good results in dealing with problems common to VO techniques, such as insufficient illumination and errors in feature selection. Although there are already works relating Visual Odometry and Deep Learning, no techniques successfully using Siamese Convolutional Networks with depth information from disparity maps were found during this research. This work aims to fill this gap by applying Deep Learning to motion estimation from disparity maps in a Siamese architecture. The SiameseVO-Depth architecture proposed in this work is compared to state-of-the-art VO techniques using the KITTI Vision Benchmark Suite. The results show that the proposed methodology makes it possible to estimate Visual Odometry values, although its performance does not surpass state-of-the-art techniques. The proposed work has fewer steps compared with classical VO techniques, since it presents itself as an end-to-end solution, and introduces a new approach in the field of Deep Learning applied to Visual Odometry.
Aksjonova, Jevgenija. "LDD: Learned Detector and Descriptor of Points for Visual Odometry." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-233571.
Full text
Simultaneous localization and mapping is an important problem in robotics that can be solved using visual odometry -- the process of estimating ego-motion from subsequent camera images. Visual odometry systems in turn rely on point matches between different frames. This work presents a novel method for matching key points by applying neural networks for point detection and description. Traditionally, point detectors are used to select good key points (such as corners), and these key points are then used for feature matching. In this work, a descriptor is instead trained to match the points, and a detector is then trained to predict which points are most likely to be matched correctly by the descriptor. This information is subsequently used to select good key points. The results of the project show that this approach can lead to more precise results compared to other model-based methods.
Awang, Salleh Dayang Nur Salmi Dharmiza. "Study of vehicle localization optimization with visual odometry trajectory tracking." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS601.
Full text
With the growing research on Advanced Driver Assistance Systems (ADAS) for Intelligent Transport Systems (ITS), accurate vehicle localization plays an important role in intelligent vehicles. The Global Positioning System (GPS) has been widely used, but its accuracy deteriorates and it is susceptible to positioning error due to factors such as restrictive environments that result in signal weakening. This problem can be addressed by integrating the GPS data with additional information from other sensors. Meanwhile, nowadays we can find vehicles equipped with sensors for ADAS applications. In this research, fusion of GPS with visual odometry (VO) and a digital map is proposed as a solution to localization improvement with low-cost data fusion. From the published works on VO, it is interesting to know how the generated trajectory can further improve vehicle localization. By integrating the VO output with GPS and OpenStreetMap (OSM) data, estimates of the vehicle position on the map can be obtained. The lateral positioning error is reduced by utilizing lane distribution information provided by OSM, while the longitudinal positioning is optimized with curve matching between the VO trajectory trail and segmented roads. To assess the system's robustness, the method was validated on KITTI datasets with different common GPS noise. Several published VO methods were also used to compare the level of improvement after data fusion. Validation results show that the positioning accuracy achieved significant improvement, especially for the longitudinal error with the curve-matching technique. The localization performance is on par with Simultaneous Localization and Mapping (SLAM) techniques despite the drift in the VO trajectory input. The research on the employability of the VO trajectory is extended to a deterministic task in lane-change detection, to assist the routing service with lane-level directions in navigation.
The lane-change detection was conducted with CUSUM and a curve-fitting technique, which resulted in 100% successful detection for stereo VO. Further study of the detection strategy is however required to obtain the current true lane of the vehicle for lane-level accurate localization. With the results obtained from the proposed low-cost data fusion for localization, we see a bright prospect of utilizing the VO trajectory with information from OSM to improve the performance. In addition to obtaining the VO trajectory, the camera mounted on the vehicle can also be used for other image processing applications to complement the system. This research will continue to develop, with future work outlined in the last chapter of this thesis.
Jílek, Tomáš. "Pokročilá navigace v heterogenních multirobotických systémech ve vnějším prostředí." Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2015. http://www.nusl.cz/ntk/nusl-234530.
Full text
Vodrážka, Jakub. "Návrh konstrukce mobilního autonomního robotu." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2010. http://www.nusl.cz/ntk/nusl-229186.
Full text
Epton, Thomas. "Odometry correction of a mobile robot using a range-finding laser." Connect to this title online, 2007. http://etd.lib.clemson.edu/documents/1202499136/.
Full text
Gonzalez Cadenillas, Clayder Alejandro. "An improved feature extractor for the lidar odometry and mapping algorithm." Tesis, Universidad de Chile, 2019. http://repositorio.uchile.cl/handle/2250/171499.
Full text
Feature extraction is a critical task in feature-based Simultaneous Localization and Mapping (SLAM), which is one of the most important problems of the robotics community. An algorithm that solves SLAM using LiDAR-based features is the LiDAR Odometry and Mapping (LOAM) algorithm. This algorithm is currently considered the best SLAM algorithm according to the KITTI benchmark. The LOAM algorithm solves the SLAM problem through a feature-matching approach, and its feature extraction algorithm classifies the points of a point cloud as planar or sharp. This classification results from an equation that defines the smoothness level of each point. However, this equation does not consider the sensor's range noise. Therefore, if the LiDAR range noise is high, LOAM's feature extractor may confuse planar and sharp points, causing the feature-matching task to fail. This thesis proposes replacing the feature extraction algorithm of the original LOAM with the Curvature Scale Space (CSS) algorithm. This algorithm was chosen after studying several feature extractors in the literature. The CSS algorithm can potentially improve the feature extraction task in noisy environments thanks to its various levels of Gaussian smoothing. The replacement of LOAM's original feature extractor with the CSS algorithm was achieved by adapting the CSS algorithm to the Velodyne VLP-16 3D LiDAR. The LOAM feature extractor and the CSS feature extractor were tested and compared on real and simulated data, including the KITTI dataset, using the Optimal Sub-Pattern Assignment (OSPA) and Absolute Trajectory Error (ATE) metrics.
For all these datasets, the feature extraction performance of CSS was better than that of the LOAM algorithm in terms of the OSPA and ATE metrics.
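The point classification described above hinges on a per-point smoothness score: roughly, the magnitude of the summed differences between a point and its scan-line neighbours, normalized by window size and range. A one-dimensional sketch over range values only (illustrative; LOAM itself operates on 3-D points, and the name below is hypothetical):

```python
def smoothness_score(ranges, i, half_window=5):
    """Smoothness of point i in a laser scan line, in the spirit of
    LOAM's curvature measure: |sum over the window of (r_j - r_i)|,
    normalized by the window size and the point's range. Low scores
    suggest planar points, high scores suggest sharp (edge) points.
    Note that range noise directly perturbs every term, which is the
    weakness the thesis targets.
    """
    window = range(i - half_window, i + half_window + 1)
    diff = sum(ranges[j] - ranges[i] for j in window if j != i)
    return abs(diff) / (2 * half_window * abs(ranges[i]))

# A flat wall (constant range) scores 0; a depth discontinuity scores high:
flat = [2.0] * 11
edge = [2.0] * 6 + [5.0] * 5
flat_score = smoothness_score(flat, 5)   # 0.0
edge_score = smoothness_score(edge, 5)   # well above 0
```

Sorting points by this score and thresholding at both ends is what yields the planar/sharp split; when the per-point noise is comparable to the threshold gap, the two classes bleed into each other.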
Delgado, Vargas Jaime Armando 1986. "Localização e navegação de robô autônomo através de odometria e visão estereoscópica." [s.n.], 2012. http://repositorio.unicamp.br/jspui/handle/REPOSIP/264542.
Full text
Dissertation (Master's) - Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica
Made available in DSpace on 2018-08-20T13:27:04Z (GMT). No. of bitstreams: 1 DelgadoVargas_JaimeArmando_M.pdf: 4350704 bytes, checksum: 8e7dab5b1630b88bde95e287a62b2f7e (MD5) Previous issue date: 2012
Abstract: This work presents the implementation of a navigation system with stereoscopic vision on a mobile robot, which allows environment map building and localization. This requires knowledge of the robot's kinematic model, control techniques, algorithms for identifying features in images, 3D reconstruction with stereoscopic vision, and navigation algorithms. Camera calibration methods developed within the FEM/UNICAMP research group and drawn from the literature are used. Results of experimental and theoretical analyses are compared. Additional results show the validation of the camera calibration algorithm, sensor accuracy, the control system response, and 3D reconstruction. The results of this work are important for future studies of robotic navigation and camera calibration.
Abstract: This work presents a navigation system with stereoscopic vision on a mobile robot, which allows the construction of an environment map and localization. For this, one must know the kinematic model of the robot, algorithms for identifying features in images (such as SIFT), 3D reconstruction with stereoscopic vision, and navigation algorithms. Camera calibration methods developed within the FEM/UNICAMP research group and from the literature are used. Results of experimental and theoretical analyses are compared. Additional results show the validation of the camera calibration algorithm, the accuracy of the sensors, the control system response, and 3D reconstruction. These results are important for future studies of robotic navigation and camera calibration.
Master's
Solid Mechanics and Mechanical Design
Master in Mechanical Engineering
Terzakis, George. "Visual odometry and mapping in natural environments for arbitrary camera motion models." Thesis, University of Plymouth, 2016. http://hdl.handle.net/10026.1/6686.
Full text
Janíček, Kryštof. "Odhad rychlosti vozidla ze záznamu on-board kamery." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2018. http://www.nusl.cz/ntk/nusl-385901.
Full textPeñaloza, González Andrés. "Implementación de odometría visual utilizando una cámara estereoscópica." Tesis, Universidad de Chile, 2015. http://repositorio.uchile.cl/handle/2250/137817.
Full text
In certain robotics applications it is important to use an odometer to estimate the position of a moving robot. This lets the agent keep a notion of its location in the environment through which it moves. In applications such as autonomous vehicles it is especially important, since knowing the vehicle's position with respect to its internal map is critical to avoid collisions. The most commonly used odometers are wheel encoders and GPS. However, these are not always available, due to environmental adversities. For these reasons, visual odometry is employed. Visual odometry is the process of estimating the motion of a vehicle or agent using the images it obtains from its cameras. It has been used in the mining industry with haul trucks and, lately, in aerial drones that could be used for package delivery. It has also been used to estimate the position of the robots currently traversing the surface of Mars. The purpose of this work is the implementation of a visual odometry algorithm using a stereoscopic camera to estimate the trajectory of a robot, and the evaluation of its performance by comparison with known position values. The methodology used makes it possible to identify which parameters of the motion estimation algorithm are most relevant and how they influence the speed and quality of the solution. The influence of lighting conditions is also assessed, and the geometric zone of the image best suited for triangulating points is determined. The solution consists of a system capable of executing the different parts required by the algorithm in an extensible way, making it easy to replace a method in the future with minimal impact on the code.
Favorable results are obtained, with a small motion estimation error, and conclusions are drawn about the most important factors in the execution of the algorithm. The speed of the algorithm is discussed, and solutions are proposed to aid its real-time implementation.
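The point triangulation mentioned in the abstract above reduces, for a rectified stereo pair, to the classic relation Z = f·B/d. A sketch with hypothetical calibration values (illustrative only, not this thesis's implementation):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a point seen by a rectified stereo pair:
    Z = f * B / d, with f the focal length in pixels, B the baseline
    between the cameras and d the horizontal disparity in pixels.
    """
    if disparity_px <= 0:
        raise ValueError("point at infinity or bad match")
    return focal_px * baseline_m / disparity_px

def triangulate(focal_px, cx, cy, baseline_m, u, v, disparity_px):
    """Back-project pixel (u, v) with disparity d to a 3-D point in
    the left-camera frame, (cx, cy) being the principal point."""
    z = depth_from_disparity(focal_px, baseline_m, disparity_px)
    x = (u - cx) * z / focal_px
    y = (v - cy) * z / focal_px
    return (x, y, z)

# f = 700 px, B = 0.12 m, disparity 8.4 px  ->  Z of about 10 m
z = depth_from_disparity(700.0, 0.12, 8.4)
p = triangulate(700.0, 320.0, 240.0, 0.12, 320.0, 240.0, 8.4)
```

Because Z is inversely proportional to d, a fixed disparity error grows quadratically in depth, which is one reason the abstract's question of which image zone is best for triangulation matters.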