Academic literature on the topic 'Visual Odometry'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Visual Odometry.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Visual Odometry"

1

Sun, Qian, Ming Diao, Yibing Li, and Ya Zhang. "An improved binocular visual odometry algorithm based on the Random Sample Consensus in visual navigation systems." Industrial Robot: An International Journal 44, no. 4 (2017): 542–51. http://dx.doi.org/10.1108/ir-11-2016-0280.

Abstract: Purpose: The purpose of this paper is to propose a binocular visual odometry algorithm based on the Random Sample Consensus (RANSAC) in visual navigation systems. Design/methodology/approach: The authors propose a novel binocular visual odometry algorithm based on the features from accelerated segment test (FAST) extractor and an improved matching method based on RANSAC. Firstly, features are detected by utilizing the FAST extractor. Secondly, the detected features are roughly matched by utilizing the distance ratio of the nearest neighbor to the second nearest neighbor. Finally, wrong matched …
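The pipeline this abstract describes (FAST corner detection, nearest-to-second-nearest distance-ratio matching, then RANSAC rejection of the remaining wrong matches) is the standard feature-based visual odometry recipe. The following is a minimal monocular sketch in Python with OpenCV, not the paper's binocular method or its improved RANSAC variant; the detector threshold and ratio value are illustrative assumptions.

```python
import cv2
import numpy as np

def relative_pose(img1, img2, K, ratio=0.8):
    """Relative camera motion between two grayscale frames:
    FAST corners + ORB descriptors, ratio test, then RANSAC."""
    fast = cv2.FastFeatureDetector_create(threshold=25)
    orb = cv2.ORB_create()  # used only to describe the FAST corners
    kp1, des1 = orb.compute(img1, fast.detect(img1, None))
    kp2, des2 = orb.compute(img2, fast.detect(img2, None))

    # Rough matching: keep a correspondence only if its nearest neighbor
    # is clearly better than the second nearest (distance-ratio test).
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = [m for m, n in matcher.knnMatch(des1, des2, k=2)
               if m.distance < ratio * n.distance]

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # RANSAC rejects the remaining wrong matches while fitting the
    # essential matrix; recoverPose extracts rotation and translation.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, cv2.RANSAC, 0.999, 1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t
```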
2

Srinivasan, M., S. Zhang, and N. Bidwell. "Visually mediated odometry in honeybees." Journal of Experimental Biology 200, no. 19 (1997): 2513–22. http://dx.doi.org/10.1242/jeb.200.19.2513.

Abstract: The ability of honeybees to gauge the distances of short flights was investigated under controlled laboratory conditions where a variety of potential odometric cues such as flight duration, energy consumption, image motion, airspeed, inertial navigation and landmarks were manipulated. Our findings indicate that honeybees can indeed measure short distances travelled and that they do so solely by analysis of image motion. Visual odometry seems to rely primarily on the motion that is sensed by the lateral regions of the visual field. Computation of distance flown is re-commenced whenever a promin…
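The mechanism reported here, distance gauged solely by integrating image motion, has a simple computational analogue: accumulate optic-flow magnitude over a frame sequence. The Python sketch below illustrates that principle with OpenCV's dense Farnebäck flow; it is an illustration, not the paper's experimental method, and the factor converting accumulated flow to metric distance is a placeholder assumption.

```python
import cv2
import numpy as np

def visual_odometer(frames, scale=1.0):
    """Accumulate mean optic-flow magnitude across a frame sequence,
    a crude analogue of the bee's image-motion odometer."""
    total = 0.0
    prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        total += np.mean(np.linalg.norm(flow, axis=2))
        prev = gray
    return scale * total  # "distance" in arbitrary image-motion units
```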
3

Wang, Chenggong, Gen Li, Ruiqi Wang, and Lin Li. "Wheeled Robot Visual Odometer Based on Two-dimensional Iterative Closest Point Algorithm." Journal of Physics: Conference Series 2504, no. 1 (2023): 012002. http://dx.doi.org/10.1088/1742-6596/2504/1/012002.

Abstract: According to the two-dimensional motion characteristics of a planar-motion wheeled robot, the visual odometer was dimensionally reduced in this study. In the feature point matching part of the visual odometer, a contour constraint was used to filter out mismatched feature point pairs (abbreviated as FPP). This method could also filter out matched FPP whose color-image matches were correct but whose depth-image error was large. This offered higher-quality matched FPP for the subsequent interframe motion estimation. Dimension reduction was performed in the int…
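With motion confined to the ground plane, the interframe transform reduces to a 2-D rotation plus translation, which the paper estimates with a two-dimensional iterative closest point (ICP) algorithm. Below is a minimal point-to-point 2-D ICP sketch in Python, assuming the point sets have already been filtered as described; the iteration cap and tolerance are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_2d(src, dst, iters=50, tol=1e-6):
    """Point-to-point 2-D ICP: returns rotation R (2x2) and translation t
    aligning the (N, 2) point set src onto the (M, 2) point set dst."""
    R, t = np.eye(2), np.zeros(2)
    tree = cKDTree(dst)
    cur = src.copy()
    prev_err = np.inf
    for _ in range(iters):
        dist, idx = tree.query(cur)         # nearest-neighbor pairing
        matched = dst[idx]
        # Closed-form rigid alignment via SVD of the cross-covariance.
        mu_s, mu_d = cur.mean(axis=0), matched.mean(axis=0)
        H = (cur - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        Ri = Vt.T @ U.T
        if np.linalg.det(Ri) < 0:           # guard against reflections
            Vt[-1] *= -1
            Ri = Vt.T @ U.T
        ti = mu_d - Ri @ mu_s
        cur = cur @ Ri.T + ti
        R, t = Ri @ R, Ri @ t + ti          # accumulate the transform
        err = dist.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R, t
```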
4

Scaramuzza, Davide, and Friedrich Fraundorfer. "Visual Odometry [Tutorial]." IEEE Robotics & Automation Magazine 18, no. 4 (2011): 80–92. http://dx.doi.org/10.1109/mra.2011.943233.

5

An, Lifeng, Xinyu Zhang, Hongbo Gao, and Yuchao Liu. "Semantic segmentation–aided visual odometry for urban autonomous driving." International Journal of Advanced Robotic Systems 14, no. 5 (2017): 172988141773566. http://dx.doi.org/10.1177/1729881417735667.

Abstract: Visual odometry plays an important role in urban autonomous driving cars. Feature-based visual odometry methods sample candidates randomly from all available feature points, while alignment-based visual odometry methods take all pixels into account. These methods hold the assumption that the quantitative majority of candidate visual cues could represent the truth of motions. But in real urban traffic scenes, this assumption can be broken by lots of dynamic traffic participants. Big trucks or buses may occupy the main image parts of a front-view monocular camera and result in wrong visual odom…
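A common remedy for the failure mode described here is to run semantic segmentation first and discard feature points that fall on movable classes before motion estimation. The Python sketch below shows only that filtering step; the class IDs and the surrounding pipeline are assumptions rather than the authors' implementation.

```python
import numpy as np

# Hypothetical label IDs for movable classes (e.g. person, car, truck,
# bus); the real IDs depend on the segmentation model being used.
DYNAMIC_CLASSES = {11, 13, 14, 15}

def filter_static_keypoints(keypoints, seg_map):
    """Keep only keypoints lying on static scene content.
    keypoints: list of cv2.KeyPoint; seg_map: (H, W) integer label image."""
    static = []
    for kp in keypoints:
        x, y = int(round(kp.pt[0])), int(round(kp.pt[1]))
        if seg_map[y, x] not in DYNAMIC_CLASSES:
            static.append(kp)
    return static
```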
6

Ciocoiu, Titus, Florin Moldoveanu, and Caius Suliman. "Camera Calibration for Visual Odometry System." Scientific Research and Education in the Air Force 18, no. 1 (2016): 227–32. http://dx.doi.org/10.19062/2247-3173.2016.18.1.30.

7

Wang, Jiabin, and Faqin Gao. "Improved visual inertial odometry based on deep learning." Journal of Physics: Conference Series 2078, no. 1 (2021): 012016. http://dx.doi.org/10.1088/1742-6596/2078/1/012016.

Abstract: Traditional visual inertial odometry extracts key points according to manually designed rules. However, manually designed extraction rules are easily affected by illumination and perspective changes and have poor robustness to them, resulting in a decline in positioning accuracy. Deep learning methods show strong robustness in key point extraction. In order to improve the positioning accuracy of a visual inertial odometer under illumination and perspective changes, deep learning is introduced into the visual inertial odometer system for key point detection…
8

Borges, Paulo Vinicius Koerich, and Stephen Vidas. "Practical Infrared Visual Odometry." IEEE Transactions on Intelligent Transportation Systems 17, no. 8 (2016): 2205–13. http://dx.doi.org/10.1109/tits.2016.2515625.

9

Gonzalez, Ramon, Francisco Rodriguez, Jose Luis Guzman, Cedric Pradalier, and Roland Siegwart. "Combined visual odometry and visual compass for off-road mobile robots localization." Robotica 30, no. 6 (2011): 865–78. http://dx.doi.org/10.1017/s026357471100110x.

Abstract: In this paper, we present work related to the application of a visual odometry approach to estimate the location of mobile robots operating in off-road conditions. The visual odometry approach is based on template matching, which estimates the robot displacement through a matching process between two consecutive images. Standard visual odometry has been improved using a visual compass method for orientation estimation. For this purpose, two consumer-grade monocular cameras have been employed. One camera is pointing at the ground under the robot, and the other is looking at…
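Template-matching visual odometry as summarized here locates a patch from the previous ground-facing image inside the current one to recover displacement, while the second camera supplies heading through a visual compass. The Python sketch below shows a minimal displacement step using OpenCV's normalized cross-correlation; the patch size and the metres-per-pixel factor are assumptions.

```python
import cv2
import numpy as np

def ground_displacement(prev_img, curr_img, patch=64, m_per_px=0.001):
    """Planar displacement between consecutive downward-looking images
    via normalized cross-correlation template matching."""
    h, w = prev_img.shape[:2]
    y0, x0 = (h - patch) // 2, (w - patch) // 2
    template = prev_img[y0:y0 + patch, x0:x0 + patch]
    res = cv2.matchTemplate(curr_img, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, (x1, y1) = cv2.minMaxLoc(res)  # best-match location
    dx, dy = x1 - x0, y1 - y0               # pixel shift of the patch
    return m_per_px * np.hypot(dx, dy)
```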
10

Aguiar, André, Filipe Santos, Armando Jorge Sousa, and Luís Santos. "FAST-FUSION: An Improved Accuracy Omnidirectional Visual Odometry System with Sensor Fusion and GPU Optimization for Embedded Low Cost Hardware." Applied Sciences 9, no. 24 (2019): 5516. http://dx.doi.org/10.3390/app9245516.

Abstract: The main task while developing a mobile robot is to achieve accurate and robust navigation in a given environment. To achieve such a goal, the ability of the robot to localize itself is crucial. In outdoor, namely agricultural environments, this task becomes a real challenge because odometry is not always usable and global navigation satellite systems (GNSS) signals are blocked or significantly degraded. To answer this challenge, this work presents a solution for outdoor localization based on an omnidirectional visual odometry technique fused with a gyroscope and a low cost planar light detect…

Dissertations / Theses on the topic "Visual Odometry"

1

Pereira, Fabio Irigon. "High precision monocular visual odometry." Biblioteca Digital de Teses e Dissertações da UFRGS, 2018. http://hdl.handle.net/10183/183233.

Abstract: Extracting depth information from two-dimensional images is an important problem in computer vision. Several applications benefit from this class of algorithms, such as robotics, the entertainment industry, medical applications for diagnosis and prosthesis fabrication, and even interplanetary exploration. This application can be divided into two interdependent stages: estimating the position and orientation of the camera at the moment the image was generated, and estimating the three-dimensional structure of the scene. This work focuses on computer vision techniques used…
2

Masson, Clément. "Direction estimation using visual odometry." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-169377.

Abstract: This Master's thesis tackles the problem of measuring objects' directions from a motionless observation point. A new method based on a single rotating camera, requiring the knowledge of only two (or more) landmarks' directions, is proposed. In a first phase, multi-view geometry is used to estimate camera rotations and key elements' directions from a set of overlapping images. Then, in a second phase, the direction of any object can be estimated by resectioning the camera associated with a picture showing this object. A detailed description of the algorithmic chain is given, along with test results on both sy…
3

Johansson, Fredrik. "Visual Stereo Odometry for Indoor Positioning." Thesis, Linköpings universitet, Datorseende, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-81215.

Abstract: In this master's thesis a visual odometry system is implemented and explained. Visual odometry is a technique which can be used on autonomous vehicles to determine their current position; it is preferably used indoors, where GPS does not work. The only input to the system is the images from a stereo camera, and the output is the current location given as a relative position. In the C++ implementation, image features are found and matched between the stereo images and the previous stereo pair, which gives a range of 150–250 verified feature matchings. The image coordinates are triangulated into a 3D…
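The step named at the end of this abstract, triangulating matched stereo image coordinates into 3-D points, can be illustrated with OpenCV. The sketch below assumes a rectified stereo pair with shared intrinsics K and a known baseline; the function name and argument layout are this sketch's own, not the thesis's code.

```python
import cv2
import numpy as np

def triangulate_stereo(pts_left, pts_right, K, baseline):
    """Triangulate matched, rectified stereo image points ((N, 2) arrays)
    into 3-D points expressed in the left-camera frame."""
    P_left = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    # Right camera displaced by the baseline along the x axis.
    P_right = K @ np.hstack([np.eye(3), [[-baseline], [0.0], [0.0]]])
    pts4 = cv2.triangulatePoints(P_left, P_right,
                                 pts_left.T.astype(np.float64),
                                 pts_right.T.astype(np.float64))
    return (pts4[:3] / pts4[3]).T  # dehomogenize to (N, 3)
```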
4

Venturelli Cavalheiro, Guilherme. "Fusing visual odometry and depth completion." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122517.

Abstract: Recent advances in technology indicate that autonomous vehicles, and self-driving cars in particular, may become commonplace in the near future. This thesis contributes to that scenario by studying the problem of depth perception based on sequences of camera images. We start by presenting a sensor fusion framework that achieves state-of-the-art performance when completing depth from sparse LiDAR measurements…
5

Burusa, Akshay Kumar. "Visual-Inertial Odometry for Autonomous Ground Vehicles." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-217284.

Abstract: Monocular cameras are prominently used for estimating motion of Unmanned Aerial Vehicles. With growing interest in autonomous vehicle technology, the use of monocular cameras in ground vehicles is on the rise. This is especially favorable for localization in situations where Global Navigation Satellite System (GNSS) is unreliable, such as open-pit mining environments. However, most monocular camera based approaches suffer due to obscure scale information. Ground vehicles impose a greater difficulty due to high speeds and fast movements. This thesis aims to estimate the scale of monocular visio…
6

Rao, Anantha N. "Learning-based Visual Odometry - A Transformer Approach." University of Cincinnati / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1627658636420617.

7

Campanholo Guizilini, Vitor. "Non-Parametric Learning for Monocular Visual Odometry." Thesis, The University of Sydney, 2013. http://hdl.handle.net/2123/9903.

Abstract: This thesis addresses the problem of incremental localization from visual information, a scenario commonly known as visual odometry. Current visual odometry algorithms are heavily dependent on camera calibration, using a pre-established geometric model to provide the transformation between input (optical flow estimates) and output (vehicle motion estimates) information. A novel approach to visual odometry is proposed in this thesis where the need for camera calibration, or even for a geometric model, is circumvented by the use of machine learning principles and techniques. A non-parametric Bayesian…
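The idea summarized here, learning the flow-to-motion mapping directly from data instead of deriving it from a calibrated geometric model, can be caricatured in a few lines with a Gaussian process regressor. The sketch below uses scikit-learn as a stand-in; the thesis's own non-parametric Bayesian framework, feature encoding and kernel choices are not reproduced, and the variable layout is an assumption.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def train_flow_to_motion(X, y):
    """Fit a GP mapping optical-flow feature vectors X (one row per frame
    pair, e.g. binned flow statistics) to ground-truth vehicle motion y
    (dx, dy, dtheta) recorded on a training run."""
    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(),
                                  normalize_y=True)
    return gp.fit(X, y)

# At run time the learned model replaces the geometric camera model:
#   motion, sigma = model.predict(flow_features, return_std=True)
```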
8

Wuthrich, Tori (Tori Lee). "Learning visual odometry primitives for computationally constrained platforms." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122419.

Abstract: Autonomous navigation for robotic platforms, particularly techniques that leverage an onboard camera, is currently of significant interest to the robotics community. Designing methods to localize small, resource-constrained robots is a particular challenge due to the limited availability of computing power and physical space for sensors. A computer vision, machine learning-based localization metho…
9

Greenberg, Jacob. "Visual Odometry for Autonomous MAV with On-Board Processing." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-177290.

Abstract: A new visual registration algorithm (Adaptive Iterative Closest Keypoint, AICK) is tested and evaluated as a positioning tool on a Micro Aerial Vehicle (MAV). Captured frames from a Kinect-like RGB-D camera are analyzed and an estimated position of the MAV is extracted. The hope is to find a positioning solution for GPS-denied environments. This thesis is focused on an indoor office environment. The MAV is flown manually, capturing in-flight RGB-D images which are registered with the AICK algorithm. The result is analyzed to come to a conclusion on whether AICK is viable for autonomous flight b…
10

Clark, Ronald. "Visual-inertial odometry, mapping and re-localization through learning." Thesis, University of Oxford, 2017. http://ora.ox.ac.uk/objects/uuid:69b03c50-f315-42f8-ad41-d97cd4c9bf09.

Abstract: Precise pose information is a fundamental prerequisite for numerous applications in robotics, AI and mobile computing. Monocular cameras are the ideal sensor for this purpose - they are cheap, lightweight and ubiquitous. As such, monocular visual localization is widely regarded as a cornerstone requirement of machine perception. However, a large gap still exists between the performance that these applications require and that which is achievable through existing monocular perception algorithms. In this thesis we directly tackle the issue of robust egocentric visual localization and mapping thr…

Books on the topic "Visual Odometry"

1

Erdem, Uğur Murat, Nicholas Roy, John J. Leonard, and Michael E. Hasselmo. Spatial and episodic memory. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780199674923.003.0029.

Abstract: The neuroscience of spatial memory is one of the most promising areas for developing biomimetic solutions to complex engineering challenges. Grid cells are neurons recorded in the medial entorhinal cortex that fire when rats are in an array of locations in the environment falling on the vertices of tightly packed equilateral triangles. Grid cells suggest an exciting new approach for enhancing robot simultaneous localization and mapping (SLAM) in changing environments and could provide a common map for situational awareness between human and robotic teammates. Current models of grid cells are w…

Book chapters on the topic "Visual Odometry"

1

Chien, Hsiang-Jen, Jr-Jiun Lin, Tang-Kai Yin, and Reinhard Klette. "Multi-objective Visual Odometry." In Image and Video Technology. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-75786-5_6.

2

Gao, Xiang, and Tao Zhang. "Visual Odometry: Part II." In Introduction to Visual SLAM. Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-4939-4_7.

3

Gao, Xiang, and Tao Zhang. "Practice: Stereo Visual Odometry." In Introduction to Visual SLAM. Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-4939-4_12.

4

Gao, Xiang, and Tao Zhang. "Visual Odometry: Part I." In Introduction to Visual SLAM. Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-4939-4_6.

5

Lianos, Konstantinos-Nektarios, Johannes L. Schönberger, Marc Pollefeys, and Torsten Sattler. "VSO: Visual Semantic Odometry." In Computer Vision – ECCV 2018. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01225-0_15.

6

Messikommer, Nico, Giovanni Cioffi, Mathias Gehrig, and Davide Scaramuzza. "Reinforcement Learning Meets Visual Odometry." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-73202-7_5.

7

Fei, Sicheng, Jingfeng Li, Lei Li, et al. "Transformer Based Visual Inertial Odometry." In Lecture Notes in Electrical Engineering. Springer Nature Singapore, 2025. https://doi.org/10.1007/978-981-96-2264-1_54.

8

Kalambe, Shrijay S., Elizabeth Rufus, Vinod Karar, and Shashi Poddar. "Descriptor- Using Low- for Visual Odometry." In Proceedings of 3rd International Conference on Computer Vision and Image Processing. Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-32-9291-8_1.

9

Rani, Prachi, Arpit Jangid, Vinay P. Namboodiri, and K. S. Venkatesh. "Visual Odometry Based Omni-directional Hyperlapse." In Communications in Computer and Information Science. Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-0020-2_1.

10

Mirabdollah, M. Hossein, and Bärbel Mertsching. "Fast Techniques for Monocular Visual Odometry." In Lecture Notes in Computer Science. Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-24947-6_24.


Conference papers on the topic "Visual Odometry"

1

Cheng, Ruiqi, Jian Li, and Jieqiong Wu. "Visual-Tiered LiDAR-Inertial-Visual Odometry." In 2024 IEEE International Conference on Signal, Information and Data Processing (ICSIDP). IEEE, 2024. https://doi.org/10.1109/icsidp62679.2024.10868629.

2

Peng, Yuxiang, Chuchu Chen, and Guoquan Huang. "Quantized Visual-Inertial Odometry." In 2024 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2024. http://dx.doi.org/10.1109/icra57147.2024.10610513.

3

Abouee, Amin, Ashwanth Ravi, Lars Hinneburg, et al. "Weakly Supervised End2End Deep Visual Odometry." In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2024. http://dx.doi.org/10.1109/cvprw63382.2024.00091.

4

Alberico, Ivan, Jeff Delaune, Giovanni Cioffi, and Davide Scaramuzza. "Structure-Invariant Range-Visual-Inertial Odometry." In 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2024. https://doi.org/10.1109/iros58592.2024.10801775.

5

Lima Rezende, Gabriel, Felipe O. Silva, and Leomar S. Marques. "Comparison of Wheel Odometry and Visual Odometry for Low-Cost Vehicle Navigation." In 2024 Brazilian Symposium on Robotics (SBR), and 2024 Workshop on Robotics in Education (WRE). IEEE, 2024. https://doi.org/10.1109/sbr/wre63066.2024.10838039.

6

Kleinschmidt, Sebastian P., and Bernardo Wagner. "Visual Multimodal Odometry: Robust Visual Odometry in Harsh Environments." In 2018 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR). IEEE, 2018. http://dx.doi.org/10.1109/ssrr.2018.8468653.

7

Lin, Minjie, Qixin Cao, and Haoruo Zhang. "PVO: Panoramic Visual Odometry." In 2018 3rd International Conference on Advanced Robotics and Mechatronics (ICARM). IEEE, 2018. http://dx.doi.org/10.1109/icarm.2018.8610700.

8

Center, Julian L., Kevin H. Knuth, Ali Mohammad-Djafari, Jean-François Bercher, and Pierre Bessiére. "Bayesian Visual Odometry." In BAYESIAN INFERENCE AND MAXIMUM ENTROPY METHODS IN SCIENCE AND ENGINEERING: Proceedings of the 30th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering. AIP, 2011. http://dx.doi.org/10.1063/1.3573659.

9

Flemmen, Henrik D., Rudolf Mester, Annette Stahl, Torleiv H. Bryne, and Edmund Førland Brekke. "Maritime radar odometry inspired by visual odometry." In 2023 26th International Conference on Information Fusion (FUSION). IEEE, 2023. http://dx.doi.org/10.23919/fusion52260.2023.10224142.

10

Huai, Zheng, and Guoquan Huang. "Robocentric Visual-Inertial Odometry." In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018. http://dx.doi.org/10.1109/iros.2018.8593643.


Reports on the topic "Visual Odometry"

1

Pirozzo, David M., Philip A. Frederick, Shawn Hunt, Bernard Theisen, and Mike Del Rose. Spectrally Queued Feature Selection for Robotic Visual Odometry. Defense Technical Information Center, 2010. http://dx.doi.org/10.21236/ada535663.
