Academic literature on the topic 'Visual Odometry (VO)'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Visual Odometry (VO).'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Visual Odometry (VO)"

1

Singh, Gurpreet, Deepam Goyal, and Vijay Kumar. "Mobile robot localization using visual odometry in indoor environments with TurtleBot4." IAES International Journal of Artificial Intelligence (IJ-AI) 14, no. 1 (2025): 760–68. http://dx.doi.org/10.11591/ijai.v14.i1.pp760-768.

Abstract:
Accurate localization is crucial for mobile robots to navigate autonomously in indoor environments. This article presents a novel visual odometry (VO) approach for localizing a TurtleBot4 mobile robot in indoor settings using only an onboard red green blue – depth (RGB-D) camera. Motivated by the challenges posed by slippery floors and the limitations of traditional wheel odometry, an attempt has been made to develop a reliable, accurate, and low-cost localization solution. The present method extracts oriented FAST and rotated BRIEF (ORB) features for feature extraction and matching using brut…
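The entry above describes ORB feature extraction with brute-force matching. As a rough illustration only (not the paper's implementation, whose parameters are not given here), brute-force matching of binary ORB-style descriptors amounts to finding, for each descriptor, the nearest neighbour under the Hamming distance. A toy pure-Python sketch:

```python
import random

random.seed(0)

def hamming(a: int, b: int) -> int:
    """Hamming distance between two 256-bit binary descriptors."""
    return bin(a ^ b).count("1")

def brute_force_match(query, train):
    """For each query descriptor, return (query_idx, train_idx, distance)
    of its nearest neighbour in 'train' -- an exhaustive O(n*m) search."""
    matches = []
    for qi, q in enumerate(query):
        ti, d = min(((i, hamming(q, t)) for i, t in enumerate(train)),
                    key=lambda x: x[1])
        matches.append((qi, ti, d))
    return matches

# Toy data: each 'train' descriptor is its 'query' counterpart with one bit
# flipped, mimicking the same ORB feature re-detected in the next frame.
query = [random.getrandbits(256) for _ in range(5)]
train = [q ^ (1 << random.randrange(256)) for q in query]

print(brute_force_match(query, train))
```

Unrelated random 256-bit descriptors differ in roughly 128 bits, so the one-bit-flipped counterpart is always the nearest match here; real pipelines add cross-checking or a ratio test to reject ambiguous matches.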
2

Wang, Hongjian, Xicheng Ban, Fuguang Ding, Yao Xiao, and Jiajia Zhou. "Monocular VO Based on Deep Siamese Convolutional Neural Network." Complexity 2020 (March 28, 2020): 1–13. http://dx.doi.org/10.1155/2020/6367273.

Abstract:
Deep learning-based visual odometry systems have shown promising performance compared with geometric-based visual odometry systems. In this paper, we propose a new framework of deep neural network, named Deep Siamese convolutional neural network (DSCNN), and design a DL-based monocular VO relying on DSCNN. The proposed DSCNN-VO not only considers positive order information of image sequence but also focuses on the reverse order information. It employs supervised data-driven training without relying on any modules in traditional visual odometry algorithm to make the DSCNN to learn the geometry…

3

Zhang, Sumin, Shouyi Lu, Rui He, and Zhipeng Bao. "Stereo Visual Odometry Pose Correction through Unsupervised Deep Learning." Sensors 21, no. 14 (2021): 4735. http://dx.doi.org/10.3390/s21144735.

Abstract:
Visual simultaneous localization and mapping (VSLAM) plays a vital role in the field of positioning and navigation. At the heart of VSLAM is visual odometry (VO), which uses continuous images to estimate the camera’s ego-motion. However, due to many assumptions of the classical VO system, robots can hardly operate in challenging environments. To solve this challenge, we combine the multiview geometry constraints of the classical stereo VO system with the robustness of deep learning to present an unsupervised pose correction network for the classical stereo VO system. The pose correction networ…

4

Yoon, Sung-Joo, and Taejung Kim. "Development of Stereo Visual Odometry Based on Photogrammetric Feature Optimization." Remote Sensing 11, no. 1 (2019): 67. http://dx.doi.org/10.3390/rs11010067.

Abstract:
One of the important image processing technologies is visual odometry (VO) technology. VO estimates platform motion through a sequence of images. VO is of interest in the virtual reality (VR) industry as well as the automobile industry because the construction cost is low. In this study, we developed stereo visual odometry (SVO) based on photogrammetric geometric interpretation. The proposed method performed feature optimization and pose estimation through photogrammetric bundle adjustment. After corresponding the point extraction step, the feature optimization was carried out with photogramme…

5

Faiçal, Bruno S., Cesar A. C. Marcondes, and Filipe A. N. Verri. "SiaN-VO: Siamese Network for Visual Odometry." Sensors 24, no. 3 (2024): 973. http://dx.doi.org/10.3390/s24030973.

Abstract:
Despite the significant advancements in drone sensory device reliability, data integrity from these devices remains critical in securing successful flight plans. A notable issue is the vulnerability of GNSS to jamming attacks or signal loss from satellites, potentially leading to incomplete drone flight plans. To address this, we introduce SiaN-VO, a Siamese neural network designed for visual odometry prediction in such challenging scenarios. Our preliminary studies have shown promising results, particularly for flights under static conditions (constant speed and altitude); while these finding…

6

Congram, Benjamin, and Timothy Barfoot. "Field Testing and Evaluation of Single-Receiver GPS Odometry for Use in Robotic Navigation." Field Robotics 2, no. 1 (2022): 1849–73. http://dx.doi.org/10.55417/fr.2022057.

Abstract:
Mobile robots rely on odometry to navigate in areas where localization fails. Visual odometry (VO), for instance, is a common solution for obtaining robust and consistent relative motion estimates of the vehicle frame. In contrast, Global Positioning System (GPS) measurements are typically used for absolute positioning and localization. However, when the constraint on absolute accuracy is relaxed, accurate relative position estimates can be found with one single-frequency GPS receiver by using time-differenced carrier phase (TDCP) measurements. In this paper, we implement and field test a sing…

7

Wang, Sen, Ronald Clark, Hongkai Wen, and Niki Trigoni. "End-to-end, sequence-to-sequence probabilistic visual odometry through deep neural networks." International Journal of Robotics Research 37, no. 4-5 (2017): 513–42. http://dx.doi.org/10.1177/0278364917734298.

Abstract:
This paper studies visual odometry (VO) from the perspective of deep learning. After tremendous efforts in the robotics and computer vision communities over the past few decades, state-of-the-art VO algorithms have demonstrated incredible performance. However, since the VO problem is typically formulated as a pure geometric problem, one of the key features still missing from current VO systems is the capability to automatically gain knowledge and improve performance through learning. In this paper, we investigate whether deep neural networks can be effective and beneficial to the VO problem. A…

8

Kersten, J., and V. Rodehorst. "Enhancement Strategies for Frame-to-Frame UAS Stereo Visual Odometry." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B3 (June 9, 2016): 511–18. http://dx.doi.org/10.5194/isprs-archives-xli-b3-511-2016.

Abstract:
Autonomous navigation of indoor unmanned aircraft systems (UAS) requires accurate pose estimations usually obtained from indirect measurements. Navigation based on inertial measurement units (IMU) is known to be affected by high drift rates. The incorporation of cameras provides complementary information due to the different underlying measurement principle. The scale ambiguity problem for monocular cameras is avoided when a light-weight stereo camera setup is used. However, also frame-to-frame stereo visual odometry (VO) approaches are known to accumulate pose estimation errors over time. Sev…
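The drift behaviour this abstract mentions follows from how frame-to-frame VO integrates motion: each per-frame pose increment is composed onto the previous absolute pose, so per-frame estimation errors never cancel and accumulate along the trajectory. A hypothetical planar (2D) sketch of this pose chaining, not drawn from any of the cited papers:

```python
import math

def compose(pose, delta):
    """Compose an absolute planar pose (x, y, theta) with a relative
    motion (dx, dy, dtheta) expressed in the current body frame."""
    x, y, th = pose
    dx, dy, dth = delta
    return (x + math.cos(th) * dx - math.sin(th) * dy,
            y + math.sin(th) * dx + math.cos(th) * dy,
            th + dth)

# Frame-to-frame VO integrates per-frame relative motions into a trajectory.
# Driving a 1 m square: four steps of "1 m forward, then turn 90 degrees".
pose = (0.0, 0.0, 0.0)
for delta in [(1.0, 0.0, math.pi / 2)] * 4:
    pose = compose(pose, delta)
print(pose)  # back near the origin; any per-step error would persist
```

Because each step's translation is rotated by the current heading, a small bias in one dtheta estimate skews every subsequent position: this compounding is exactly what frame-to-frame enhancement strategies try to suppress.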
Dissertations / Theses on the topic "Visual Odometry (VO)"

1

Lam, Benny, and Jakob Nilsson. "Creating Good User Experience in a Hand-Gesture-Based Augmented Reality Game." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-156878.

Abstract:
The dissemination of new innovative technology requires feasibility and simplicity. The problem with marker-based augmented reality is similar to glove-based hand gesture recognition: they both require an additional component to function. This thesis investigates the possibility of combining markerless augmented reality together with appearance-based hand gesture recognition by implementing a game with good user experience. The methods employed in this research consist of a game implementation and a pre-study meant for measuring interactive accuracy and precision, and for deciding upon which g…
2

Eleutério, António Manuel Passos. "2D position system for a mobile robot in unstructured environments." Master's thesis, 2014. http://hdl.handle.net/10362/13283.

Abstract:
Nowadays, several sensors and mechanisms are available to estimate a mobile robot trajectory and location with respect to its surroundings. Usually absolute positioning mechanisms are the most accurate, but they also are the most expensive ones, and require pre-installed equipment in the environment. Therefore, a system capable of measuring its motion and location within the environment (relative positioning) has been a research goal since the beginning of autonomous vehicles. With the increasing of the computational performance, computer vision has become faster and, therefore, became pos…
3

Zhou, Yi. "Exploiting Structural Regularities and Beyond: Vision-based Localization and Mapping in Man-Made Environments." Phd thesis, 2018. http://hdl.handle.net/1885/155255.

Abstract:
Image-based estimation of camera motion, known as visual odometry (VO), plays a very important role in many robotic applications such as control and navigation of unmanned mobile robots, especially when no external navigation reference signal is available. The core problem of VO is the estimation of the camera’s ego-motion (i.e. tracking) either between successive frames, namely relative pose estimation, or with respect to a global map, namely absolute pose estimation. This thesis aims to develop efficient, accurate and robust VO solutions by taking…

Book chapters on the topic "Visual Odometry (VO)"

1

Shen, Zhiwei, and Bin Kong. "MAIM-VO: A Robust Visual Odometry with Mixed MLP for Weak Textured Environment." In Image and Graphics Technologies and Applications. Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-7549-5_6.

2

Xiao, Jun, Yazhou Yin, ShenChong Li, and Xinhua Zeng. "Pipeline-VO: A High-Speed and Lightweight Stereo Visual Odometry with an Optimized Pipeline Architecture." In Lecture Notes in Computer Science. Springer Nature Singapore, 2025. https://doi.org/10.1007/978-981-96-9908-7_5.

3

Zhang, Tenglong, Qing Li, Miaosheng Zhou, and Fei Yu. "Self-Supervised Sparse Direct Visual Odometry with Half-Geometric Correspondence Network." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2024. http://dx.doi.org/10.3233/faia231259.

Abstract:
Current mainstream visual odometry method often suffers from tracking lost in low texture and motion blur scenarios due to fewer effective features and difficulty in getting stable matches. And feature matching process affects the overall real-time performance. For the fast localization task in low texture environments, this paper proposes an efficient self-supervised direct visual odometry framework based on keypoint extraction network, HGCN-VO. First we build the half-geometric correspondence network, HGCN, for fast extraction of robust keypoints in images. During training, we propose traini…

Conference papers on the topic "Visual Odometry (VO)"

1

Chen, Weirong, Le Chen, Rui Wang, and Marc Pollefeys. "LEAP-VO: Long-term Effective Any Point Tracking for Visual Odometry." In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2024. http://dx.doi.org/10.1109/cvpr52733.2024.01876.

2

Li, Heng, Yifan Duan, Xinran Zhang, Haiyi Liu, Jianmin Ji, and Yanyong Zhang. "OCC-VO: Dense Mapping via 3D Occupancy-Based Visual Odometry for Autonomous Driving." In 2024 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2024. http://dx.doi.org/10.1109/icra57147.2024.10611516.

3

Kaygusuz, Nimet, Oscar Mendez, and Richard Bowden. "MDN-VO: Estimating Visual Odometry with Confidence." In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2021. http://dx.doi.org/10.1109/iros51168.2021.9636827.

4

Liu, Peidong, Xingxing Zuo, Viktor Larsson, and Marc Pollefeys. "MBA-VO: Motion Blur Aware Visual Odometry." In 2021 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2021. http://dx.doi.org/10.1109/iccv48922.2021.00550.

5

Cleveston, Iury, and Esther L. Colombini. "RAM-VO: A Recurrent Attentional Model for Visual Odometry." In Anais Estendidos do Simpósio Brasileiro de Robótica e Simpósio Latino Americano de Robótica. Sociedade Brasileira de Computação, 2021. http://dx.doi.org/10.5753/wtdr_ctdr.2021.18684.

Abstract:
Determining the agent's pose is fundamental for developing autonomous vehicles. Visual Odometry (VO) algorithms estimate the egomotion using only visual differences from the input frames. The most recent VO methods implement deep-learning techniques using convolutional neural networks (CNN) widely, adding a high cost to process large images. Also, more data does not imply a better prediction, and the network may have to filter out useless information. In this context, we incrementally formulate a lightweight model called RAM-VO to perform visual odometry regressions using large monocular image…
6

Wei, Peng, Guoliang Hua, Weibo Huang, Fanyang Meng, and Hong Liu. "Unsupervised Monocular Visual-inertial Odometry Network." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/325.

Abstract:
Recently, unsupervised methods for monocular visual odometry (VO), with no need for quantities of expensive labeled ground truth, have attracted much attention. However, these methods are inadequate for long-term odometry task, due to the inherent limitation of only using monocular visual data and the inability to handle the error accumulation problem. By utilizing supplemental low-cost inertial measurements, and exploiting the multi-view geometric constraint and sequential constraint, an unsupervised visual-inertial odometry framework (UnVIO) is proposed in this paper. Our method is able to p…
7

Li, Kun, Shuhui Bu, Yifei Dong, Yu Wang, Xuan Jia, and Zhenyu Xia. "UWB-VO: Ultra-Wideband Anchor Assisted Visual Odometry." In 2023 IEEE International Conference on Unmanned Systems (ICUS). IEEE, 2023. http://dx.doi.org/10.1109/icus58632.2023.10318307.

8

Ma, Tianli, Jie Zhang, Song Gao, and Chaobo Chen. "Dyna-VO: A Semantic Visual Odometry in Dynamic Environment." In 2021 China Automation Congress (CAC). IEEE, 2021. http://dx.doi.org/10.1109/cac53003.2021.9728172.

9

Zhao, Haimei, Wei Bian, Bo Yuan, and Dacheng Tao. "Collaborative Learning of Depth Estimation, Visual Odometry and Camera Relocalization from Monocular Videos." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/68.

Abstract:
Scene perceiving and understanding tasks including depth estimation, visual odometry (VO) and camera relocalization are fundamental for applications such as autonomous driving, robots and drones. Driven by the power of deep learning, significant progress has been achieved on individual tasks but the rich correlations among the three tasks are largely neglected. In previous studies, VO is generally accurate in local scope yet suffers from drift in long distances. By contrast, camera relocalization performs well in the global sense but lacks local precision. We argue that these two tasks should…
10

Shan, Chenxing, Haofeng Zhang, and Qingyuan Xia. "Dealing with the structured scene in visual odometry(VO): Incomplete SURF." In 2017 2nd International Conference on Robotics and Automation Engineering (ICRAE). IEEE, 2017. http://dx.doi.org/10.1109/icrae.2017.8291425.
