
Journal articles on the topic 'Visual Odometry (VO)'


Consult the top 50 journal articles for your research on the topic 'Visual Odometry (VO).'


1

Singh, Gurpreet, Deepam Goyal, and Vijay Kumar. "Mobile robot localization using visual odometry in indoor environments with TurtleBot4." IAES International Journal of Artificial Intelligence (IJ-AI) 14, no. 1 (2025): 760–68. http://dx.doi.org/10.11591/ijai.v14.i1.pp760-768.

Full text
Abstract:
Accurate localization is crucial for mobile robots to navigate autonomously in indoor environments. This article presents a novel visual odometry (VO) approach for localizing a TurtleBot4 mobile robot in indoor settings using only an onboard red green blue – depth (RGB-D) camera. Motivated by the challenges posed by slippery floors and the limitations of traditional wheel odometry, an attempt has been made to develop a reliable, accurate, and low-cost localization solution. The present method extracts oriented FAST and rotated BRIEF (ORB) features for feature extraction and matching using brute-force …
APA, Harvard, Vancouver, ISO, and other styles
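The method above pairs ORB binary descriptors with brute-force matching. As a minimal illustration of that matching stage (not the authors' code — descriptor counts, sizes, and names here are made up), mutual-nearest-neighbour Hamming matching of 256-bit descriptors fits in a few lines of NumPy:

```python
import numpy as np

def hamming_match(des1, des2):
    """Brute-force matching of binary descriptors (e.g. 256-bit ORB).

    des1, des2: (N, 32) uint8 arrays, one row per descriptor.
    Returns (i, j) index pairs that are mutual nearest neighbours,
    mirroring the 'cross-check' used in typical VO frontends."""
    # Pairwise Hamming distances via XOR + bit counting.
    xor = des1[:, None, :] ^ des2[None, :, :]          # (N1, N2, 32)
    dist = np.unpackbits(xor, axis=2).sum(axis=2)      # (N1, N2)
    fwd = dist.argmin(axis=1)                          # best match 1 -> 2
    bwd = dist.argmin(axis=0)                          # best match 2 -> 1
    return [(i, j) for i, j in enumerate(fwd) if bwd[j] == i]

# Synthetic check: des2 is a shuffled copy of des1, so every
# descriptor has an exact (distance-0) partner.
rng = np.random.default_rng(0)
des1 = rng.integers(0, 256, size=(50, 32), dtype=np.uint8)
perm = rng.permutation(50)
des2 = des1[perm]
matches = hamming_match(des1, des2)
```

The mutual-nearest-neighbour filter is the same symmetry test that OpenCV's brute-force matcher applies with `crossCheck=True`; the surviving pairs then feed relative-pose estimation.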
3

Wang, Hongjian, Xicheng Ban, Fuguang Ding, Yao Xiao, and Jiajia Zhou. "Monocular VO Based on Deep Siamese Convolutional Neural Network." Complexity 2020 (March 28, 2020): 1–13. http://dx.doi.org/10.1155/2020/6367273.

Full text
Abstract:
Deep learning-based visual odometry systems have shown promising performance compared with geometric-based visual odometry systems. In this paper, we propose a new framework of deep neural network, named Deep Siamese convolutional neural network (DSCNN), and design a DL-based monocular VO relying on DSCNN. The proposed DSCNN-VO not only considers positive order information of image sequence but also focuses on the reverse order information. It employs supervised data-driven training without relying on any modules in traditional visual odometry algorithm to make the DSCNN to learn the geometry …
4

Zhang, Sumin, Shouyi Lu, Rui He, and Zhipeng Bao. "Stereo Visual Odometry Pose Correction through Unsupervised Deep Learning." Sensors 21, no. 14 (2021): 4735. http://dx.doi.org/10.3390/s21144735.

Full text
Abstract:
Visual simultaneous localization and mapping (VSLAM) plays a vital role in the field of positioning and navigation. At the heart of VSLAM is visual odometry (VO), which uses continuous images to estimate the camera’s ego-motion. However, due to many assumptions of the classical VO system, robots can hardly operate in challenging environments. To solve this challenge, we combine the multiview geometry constraints of the classical stereo VO system with the robustness of deep learning to present an unsupervised pose correction network for the classical stereo VO system. The pose correction network …
5

Yoon, Sung-Joo, and Taejung Kim. "Development of Stereo Visual Odometry Based on Photogrammetric Feature Optimization." Remote Sensing 11, no. 1 (2019): 67. http://dx.doi.org/10.3390/rs11010067.

Full text
Abstract:
One of the important image processing technologies is visual odometry (VO) technology. VO estimates platform motion through a sequence of images. VO is of interest in the virtual reality (VR) industry as well as the automobile industry because the construction cost is low. In this study, we developed stereo visual odometry (SVO) based on photogrammetric geometric interpretation. The proposed method performed feature optimization and pose estimation through photogrammetric bundle adjustment. After the corresponding point extraction step, the feature optimization was carried out with photogrammetric …
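The photogrammetric bundle adjustment described above refines 3-D points and poses by minimising reprojection error; its usual starting point is a linear (DLT) triangulation of each matched feature. A hedged NumPy sketch of that initialisation step on a synthetic stereo rig (the camera parameters below are made up for illustration, not taken from the paper):

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point from two views.

    P1, P2: 3x4 projection matrices; x1, x2: (u, v) pixel coordinates.
    Bundle adjustment would take this linear estimate and refine it
    by minimising reprojection error."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)        # null vector of A is the point
    X = vt[-1]
    return X[:3] / X[3]                # dehomogenise

# Synthetic stereo rig: left camera at the origin, right camera
# translated 0.5 m along the baseline.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])
X_true = np.array([0.2, -0.1, 4.0])
x1h = P1 @ np.append(X_true, 1.0)
x2h = P2 @ np.append(X_true, 1.0)
X_est = triangulate_dlt(P1, P2, x1h[:2] / x1h[2], x2h[:2] / x2h[2])
```

With noiseless projections the SVD null vector recovers the point exactly; with real measurements it only seeds the non-linear adjustment.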
6

Faiçal, Bruno S., Cesar A. C. Marcondes, and Filipe A. N. Verri. "SiaN-VO: Siamese Network for Visual Odometry." Sensors 24, no. 3 (2024): 973. http://dx.doi.org/10.3390/s24030973.

Full text
Abstract:
Despite the significant advancements in drone sensory device reliability, data integrity from these devices remains critical in securing successful flight plans. A notable issue is the vulnerability of GNSS to jamming attacks or signal loss from satellites, potentially leading to incomplete drone flight plans. To address this, we introduce SiaN-VO, a Siamese neural network designed for visual odometry prediction in such challenging scenarios. Our preliminary studies have shown promising results, particularly for flights under static conditions (constant speed and altitude); while these findings …
7

Congram, Benjamin, and Timothy Barfoot. "Field Testing and Evaluation of Single-Receiver GPS Odometry for Use in Robotic Navigation." Field Robotics 2, no. 1 (2022): 1849–73. http://dx.doi.org/10.55417/fr.2022057.

Full text
Abstract:
Mobile robots rely on odometry to navigate in areas where localization fails. Visual odometry (VO), for instance, is a common solution for obtaining robust and consistent relative motion estimates of the vehicle frame. In contrast, Global Positioning System (GPS) measurements are typically used for absolute positioning and localization. However, when the constraint on absolute accuracy is relaxed, accurate relative position estimates can be found with one single-frequency GPS receiver by using time-differenced carrier phase (TDCP) measurements. In this paper, we implement and field test a …
8

Wang, Sen, Ronald Clark, Hongkai Wen, and Niki Trigoni. "End-to-end, sequence-to-sequence probabilistic visual odometry through deep neural networks." International Journal of Robotics Research 37, no. 4-5 (2017): 513–42. http://dx.doi.org/10.1177/0278364917734298.

Full text
Abstract:
This paper studies visual odometry (VO) from the perspective of deep learning. After tremendous efforts in the robotics and computer vision communities over the past few decades, state-of-the-art VO algorithms have demonstrated incredible performance. However, since the VO problem is typically formulated as a pure geometric problem, one of the key features still missing from current VO systems is the capability to automatically gain knowledge and improve performance through learning. In this paper, we investigate whether deep neural networks can be effective and beneficial to the VO problem. …
9

Kersten, J., and V. Rodehorst. "ENHANCEMENT STRATEGIES FOR FRAME-TO-FRAME UAS STEREO VISUAL ODOMETRY." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B3 (June 9, 2016): 511–18. http://dx.doi.org/10.5194/isprs-archives-xli-b3-511-2016.

Full text
Abstract:
Autonomous navigation of indoor unmanned aircraft systems (UAS) requires accurate pose estimations usually obtained from indirect measurements. Navigation based on inertial measurement units (IMU) is known to be affected by high drift rates. The incorporation of cameras provides complementary information due to the different underlying measurement principle. The scale ambiguity problem for monocular cameras is avoided when a light-weight stereo camera setup is used. However, also frame-to-frame stereo visual odometry (VO) approaches are known to accumulate pose estimation errors over time. …
11

Liu, Fei, Yashar Balazadegan Sarvrood, and Yang Gao. "Implementation and Analysis of Tightly Integrated INS/Stereo VO for Land Vehicle Navigation." Journal of Navigation 71, no. 1 (2017): 83–99. http://dx.doi.org/10.1017/s037346331700056x.

Full text
Abstract:
Tight integration of inertial sensors and stereo visual odometry to bridge Global Navigation Satellite System (GNSS) signal outages in challenging environments has drawn increasing attention. However, the details of how feature pixel coordinates from visual odometry can be directly used to limit the quick drift of inertial sensors in a tight integration implementation have rarely been provided in previous works. For instance, a key challenge in tight integration of inertial and stereo visual datasets is how to correct inertial sensor errors using the pixel measurements from visual odometry, …
12

Esfahani, Mahdi Abolfazli, Keyu Wu, Shenghai Yuan, and Han Wang. "From Local Understanding to Global Regression in Monocular Visual Odometry." International Journal of Pattern Recognition and Artificial Intelligence 34, no. 01 (2019): 2055002. http://dx.doi.org/10.1142/s0218001420550022.

Full text
Abstract:
The most significant part of any autonomous intelligent robot is the localization module that gives the robot knowledge about its position and orientation. This knowledge assists the robot to move to the location of its desired goal and complete its task. Visual Odometry (VO) measures the displacement of the robots’ camera in consecutive frames which results in the estimation of the robot position and orientation. Deep Learning, nowadays, helps to learn rich and informative features for the problem of VO to estimate frame-by-frame camera movement. Recent Deep Learning-based VO methods train …
13

Zhang, Peihua, and Wanlin Li. "P‐3.6: A Survey of Visual Odometry Based on Deep Learning." SID Symposium Digest of Technical Papers 56, S1 (2025): 888–90. https://doi.org/10.1002/sdtp.18956.

Full text
Abstract:
Traditional VO methods are categorized into feature‐based methods and direct methods. Feature‐based methods rely on prominent features in the images for motion estimation but perform poorly in sparse texture or dynamic environments. Direct methods optimize by minimizing photometric errors, making them suitable for textureless scenes, but still face challenges in large motion or dynamic environments. In recent years, deep learning has shown great potential in the VO field. Convolutional Neural Networks (CNNs) can automatically learn image features, simplifying the VO process and improving robustness …
14

Cimarelli, Claudio, Hriday Bavle, Jose Luis Sanchez-Lopez, and Holger Voos. "RAUM-VO: Rotational Adjusted Unsupervised Monocular Visual Odometry." Sensors 22, no. 7 (2022): 2651. http://dx.doi.org/10.3390/s22072651.

Full text
Abstract:
Unsupervised learning for monocular camera motion and 3D scene understanding has gained popularity over traditional methods, which rely on epipolar geometry or non-linear optimization. Notably, deep learning can overcome many issues of monocular vision, such as perceptual aliasing, low-textured areas, scale drift, and degenerate motions. In addition, concerning supervised learning, we can fully leverage video stream data without the need for depth or motion labels. However, in this work, we note that rotational motion can limit the accuracy of the unsupervised pose networks more than the translational …
15

Zhao, Leyang, Yanguang Yang, Ding Ma, Xing Lin, and Wang Wang. "PLL-VO: An Efficient and Robust Visual Odometry Integrating Point-Line Features and Neural Networks." ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences X-G-2025 (July 14, 2025): 1045–52. https://doi.org/10.5194/isprs-annals-x-g-2025-1045-2025.

Full text
Abstract:
Visual odometry is crucial for the navigation and planning of autonomous robots, but low-light conditions, dramatic lighting changes, and low-texture scenes pose significant challenges to odometry estimation. This paper proposes PLL-VO, which integrates point-line features and deep learning. To overcome the impact of complex lighting conditions, a self-supervised learning method for interest point detection and a line detection algorithm that combines line optical flow tracking with cross-constraints is presented. After selecting keyframes based on point feature counts and line feature …
16

Duan, Chao, Steffen Junginger, Kerstin Thurow, and Hui Liu. "StereoVO: Learning Stereo Visual Odometry Approach Based on Optical Flow and Depth Information." Applied Sciences 13, no. 10 (2023): 5842. http://dx.doi.org/10.3390/app13105842.

Full text
Abstract:
We present a novel stereo visual odometry (VO) model that utilizes both optical flow and depth information. While some existing monocular VO methods demonstrate superior performance, they require extra frames or information to initialize the model in order to obtain absolute scale, and they do not take into account moving objects. To address these issues, we have combined optical flow and depth information to estimate ego-motion and proposed a framework for stereo VO using deep neural networks. The model simultaneously generates optical flow and depth information outputs from sequential stereo …
17

Chen, Yonggang, Yanghan Mei, and Songfeng Wan. "Visual Odometry for Self-Driving with Multihypothesis and Network Prediction." Mathematical Problems in Engineering 2021 (August 25, 2021): 1–10. http://dx.doi.org/10.1155/2021/1930881.

Full text
Abstract:
Robustness in visual odometry (VO) systems is critical, as it determines reliable performance in various scenarios and challenging environments. Especially with the development of data-driven technology, such as deep learning, the combination of data-driven VO and traditional model-based VO has achieved accurate tracking performance. However, the existence of local optimums in the model-based cost function still limits the robustness. In this study, we introduce a novel framework with a particle filter (PF) in the optimization process, where the PF is constructed by deep neural network (DNN) …
18

Yuan, Cheng, Jizhou Lai, Pin Lyu, Peng Shi, Wei Zhao, and Kai Huang. "A Novel Fault-Tolerant Navigation and Positioning Method with Stereo-Camera/Micro Electro Mechanical Systems Inertial Measurement Unit (MEMS-IMU) in Hostile Environment." Micromachines 9, no. 12 (2018): 626. http://dx.doi.org/10.3390/mi9120626.

Full text
Abstract:
Visual odometry (VO) is a new navigation and positioning method that estimates the ego-motion of vehicles from images. However, VO with unsatisfactory performance can fail severely in hostile environment because of the less feature, fast angular motions, or illumination change. Thus, enhancing the robustness of VO in hostile environment has become a popular research topic. In this paper, a novel fault-tolerant visual-inertial odometry (VIO) navigation and positioning method framework is presented. The micro electro mechanical systems inertial measurement unit (MEMS-IMU) is used to aid the stereo …
19

Neyestani, Arman, Francesco Picariello, Imran Ahmed, Pasquale Daponte, and Luca De Vito. "From Pixels to Precision: A Survey of Monocular Visual Odometry in Digital Twin Applications." Sensors 24, no. 4 (2024): 1274. http://dx.doi.org/10.3390/s24041274.

Full text
Abstract:
This survey provides a comprehensive overview of traditional techniques and deep learning-based methodologies for monocular visual odometry (VO), with a focus on displacement measurement applications. This paper outlines the fundamental concepts and general procedures for VO implementation, including feature detection, tracking, motion estimation, triangulation, and trajectory estimation. This paper also explores the research challenges inherent in VO implementation, including scale estimation and ground plane considerations. The scientific literature is rife with diverse methodologies aiming …
20

Varshosaz, Masood, Alireza Afary, Barat Mojaradi, Mohammad Saadatseresht, and Ebadat Ghanbari Parmehr. "Spoofing Detection of Civilian UAVs Using Visual Odometry." ISPRS International Journal of Geo-Information 9, no. 1 (2019): 6. http://dx.doi.org/10.3390/ijgi9010006.

Full text
Abstract:
Spoofing of Unmanned Aerial Vehicles (UAV) is generally carried out through spoofing of the UAV’s Global Positioning System (GPS) receiver. This paper presents a vision-based UAV spoofing detection method that utilizes Visual Odometry (VO). This method is independent of the other complementary sensors and any knowledge or archived map and datasets. The proposed method is based on the comparison of relative sub-trajectory of the UAV from VO, with its absolute replica from GPS within a moving window along the flight path. The comparison is done using three dissimilarity measures including …
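The detection scheme above compares the relative VO sub-trajectory with its GPS replica inside a moving window, using dissimilarity measures such as a sum of point-to-point distances. A simplified NumPy sketch of that comparison (the threshold and trajectory data below are hypothetical, not from the paper):

```python
import numpy as np

def sum_of_distances(traj_vo, traj_gps):
    """Sum of point-to-point distances between two equally sampled
    sub-trajectories, each expressed relative to its own first pose so
    that VO (relative) and GPS (absolute) segments are comparable."""
    a = traj_vo - traj_vo[0]
    b = traj_gps - traj_gps[0]
    return float(np.linalg.norm(a - b, axis=1).sum())

def spoofing_flag(traj_vo, traj_gps, threshold):
    """Flag a spoofing candidate when the two segments disagree by
    more than `threshold` (a hypothetical tuning value)."""
    return sum_of_distances(traj_vo, traj_gps) > threshold

# Toy straight-line flight segment, 20 samples (illustrative data).
t = np.linspace(0.0, 1.0, 20)
traj_vo = np.stack([t, np.zeros_like(t), np.zeros_like(t)], axis=1)
traj_gps_ok = traj_vo + np.array([5.0, 3.0, 0.0])   # offset but consistent
traj_gps_spoofed = traj_vo * 2.0                    # pulled-away replica
```

Re-expressing both segments relative to their first pose is what lets a drift-free constant offset pass while a trajectory being pulled away by spoofing trips the flag.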
21

Sabry, Mohamed, Mostafa Osman, Ahmed Hussein, Mohamed W. Mehrez, Soo Jeon, and William Melek. "A Generic Image Processing Pipeline for Enhancing Accuracy and Robustness of Visual Odometry." Sensors 22, no. 22 (2022): 8967. http://dx.doi.org/10.3390/s22228967.

Full text
Abstract:
The accuracy of pose estimation from feature-based Visual Odometry (VO) algorithms is affected by several factors such as lighting conditions and outliers in the matched features. In this paper, a generic image processing pipeline is proposed to enhance the accuracy and robustness of feature-based VO algorithms. The pipeline consists of three stages, each addressing a problem that affects the performance of VO algorithms. The first stage tackles the lighting condition problem, where a filter called Contrast Limited Adaptive Histogram Equalization (CLAHE) is applied to the images to overcome …
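The first stage above applies CLAHE to normalise lighting before feature extraction. CLAHE performs a CDF-based grey-level remapping per tile with a clip limit; the sketch below implements only its simpler global relative, plain histogram equalisation, to show that core remapping (function and variable names are illustrative):

```python
import numpy as np

def equalize_hist(img):
    """Global histogram equalisation of an 8-bit grayscale image.

    CLAHE applies the same CDF remapping per tile with a clip limit;
    this global version shows only the core transform."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]          # first occupied grey level
    # Map each grey level through the normalised cumulative histogram.
    lut = np.round((cdf - cdf_min) / (img.size - cdf_min) * 255.0)
    lut = np.clip(lut, 0, 255).astype(np.uint8)
    return lut[img]

# Low-contrast test image: grey levels squeezed into [100, 140].
img = np.linspace(100, 140, 64 * 64).reshape(64, 64).astype(np.uint8)
out = equalize_hist(img)               # stretched to the full [0, 255] range
```

In OpenCV the tiled, clip-limited variant the paper names is available as `cv2.createCLAHE`.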
22

Zhang, Chaofan, Yong Liu, Fan Wang, Yingwei Xia, and Wen Zhang. "VINS-MKF: A Tightly-Coupled Multi-Keyframe Visual-Inertial Odometry for Accurate and Robust State Estimation." Sensors 18, no. 11 (2018): 4036. http://dx.doi.org/10.3390/s18114036.

Full text
Abstract:
State estimation is crucial for robot autonomy, visual odometry (VO) has received significant attention in the robotics field because it can provide accurate state estimation. However, the accuracy and robustness of most existing VO methods are degraded in complex conditions, due to the limited field of view (FOV) of the utilized camera. In this paper, we present a novel tightly-coupled multi-keyframe visual-inertial odometry (called VINS-MKF), which can provide an accurate and robust state estimation for robots in an indoor environment. We first modify the monocular ORBSLAM …
23

Pandey, Tejas, Dexmont Pena, Jonathan Byrne, and David Moloney. "Leveraging Deep Learning for Visual Odometry Using Optical Flow." Sensors 21, no. 4 (2021): 1313. http://dx.doi.org/10.3390/s21041313.

Full text
Abstract:
In this paper, we study deep learning approaches for monocular visual odometry (VO). Deep learning solutions have shown to be effective in VO applications, replacing the need for highly engineered steps, such as feature extraction and outlier rejection in a traditional pipeline. We propose a new architecture combining ego-motion estimation and sequence-based learning using deep neural networks. We estimate camera motion from optical flow using Convolutional Neural Networks (CNNs) and model the motion dynamics using Recurrent Neural Networks (RNNs). The network outputs the relative 6-DOF camera …
24

Liu, Fei, Yashar Balazadegan Sarvrood, Yue Liu, and Yang Gao. "Stereo visual odometry with velocity constraint for ground vehicle applications." Journal of Navigation 74, no. 5 (2021): 1026–38. http://dx.doi.org/10.1017/s037346332100028x.

Full text
Abstract:
This paper proposes a novel method of error mitigation for stereo visual odometry (VO) applied in land vehicles. A non-holonomic constraint (NHC), which imposes physical constraint to the rightward velocity of a land vehicle, is implemented as an observation in an extended Kalman filter (EKF) to reduce the drift of stereo VO. The EKF state vector includes position errors in an Earth-centred, Earth-fixed (ECEF) frame, velocity errors in the camera frame, angular rate errors and attitude errors. All the related equations are described and presented in detail. In this approach, no additional …
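The NHC described above enters the EKF as a pseudo-measurement that the vehicle's rightward (lateral) velocity is approximately zero. A toy NumPy sketch of that measurement update on a reduced three-state velocity-error vector (the state layout and noise values are illustrative, not the paper's full ECEF/attitude state):

```python
import numpy as np

def ekf_update(x, P, z, H, R):
    """Standard EKF measurement update (returns new state, covariance)."""
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_new = x + K @ (z - H @ x)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Toy 3-state slice of a VO error state: velocity error [vx, vy, vz]
# in the camera frame (layout and numbers are illustrative).
x = np.array([0.1, 0.4, -0.05])        # drifted estimate, large lateral vy
P = np.eye(3) * 0.5
# NHC pseudo-measurement: rightward velocity vy is observed as ~0.
H = np.array([[0.0, 1.0, 0.0]])
R = np.array([[1e-4]])                 # small noise: constraint held tightly
z = np.array([0.0])
x_new, P_new = ekf_update(x, P, z, H, R)   # lateral drift is pulled to ~0
```

Because the constraint observes only `vy`, the update collapses the lateral drift (and its variance) while leaving the forward and vertical components untouched.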
25

Al-Hadithi, Basil Mohammed, David Thomas, and Carlos Pastor. "Hybrid Visual Odometry Algorithm Using a Downward-Facing Monocular Camera." Applied Sciences 14, no. 17 (2024): 7732. http://dx.doi.org/10.3390/app14177732.

Full text
Abstract:
The increasing interest in developing robots capable of navigating autonomously has led to the necessity of developing robust methods that enable these robots to operate in challenging and dynamic environments. Visual odometry (VO) has emerged in this context as a key technique, offering the possibility of estimating the position of a robot using sequences of onboard cameras. In this paper, a VO algorithm is proposed that achieves sub-pixel precision by combining optical flow and direct methods. This approach uses only a downward-facing, monocular camera, eliminating the need for additional …
26

Xu, Qingwen, Haofei Kuang, Laurent Kneip, and Sören Schwertfeger. "Rethinking the Fourier-Mellin Transform: Multiple Depths in the Camera’s View." Remote Sensing 13, no. 5 (2021): 1000. http://dx.doi.org/10.3390/rs13051000.

Full text
Abstract:
Remote sensing and robotics often rely on visual odometry (VO) for localization. Many standard approaches for VO use feature detection. However, these methods will meet challenges if the environments are feature-deprived or highly repetitive. Fourier-Mellin Transform (FMT) is an alternative VO approach that has been shown to show superior performance in these scenarios and is often used in remote sensing. One limitation of FMT is that it requires an environment that is equidistant to the camera, i.e., single-depth. To extend the applications of FMT to multi-depth environments, this paper presents …
27

Jayaraj, P. B., Ebin J., Karthik R., and Pournami P. N. "ViT VO - A Visual Odometry technique Using CNN-Transformer Hybrid Architecture." ITM Web of Conferences 54 (2023): 01004. http://dx.doi.org/10.1051/itmconf/20235401004.

Full text
Abstract:
Localization is one of the main tasks involved in the operation of autonomous agents (e.g., vehicle, robot etc.). It allows them to be able to track their paths and properly detect and avoid obstacles. Visual Odometry (VO) is one of the techniques used for agent localization. VO involves estimating the motion of an agent using the images taken by cameras attached to it. Conventional VO algorithms require specific workarounds for challenges posed by the working environment and the captured sensor data. On the other hand, Deep Learning approaches have shown tremendous efficiency and accuracy in …
28

Liu, Qiang, Haidong Zhang, Yiming Xu, and Li Wang. "Unsupervised Deep Learning-Based RGB-D Visual Odometry." Applied Sciences 10, no. 16 (2020): 5426. http://dx.doi.org/10.3390/app10165426.

Full text
Abstract:
Recently, deep learning frameworks have been deployed in visual odometry systems and achieved comparable results to traditional feature matching based systems. However, most deep learning-based frameworks inevitably need labeled data as ground truth for training. On the other hand, monocular odometry systems are incapable of restoring absolute scale. External or prior information has to be introduced for scale recovery. To solve these problems, we present a novel deep learning-based RGB-D visual odometry system. Our two main contributions are: (i) during network training and pose estimation, …
29

George, Anand, Niko Koivumäki, Teemu Hakala, Juha Suomalainen, and Eija Honkavaara. "Visual-Inertial Odometry Using High Flying Altitude Drone Datasets." Drones 7, no. 1 (2023): 36. http://dx.doi.org/10.3390/drones7010036.

Full text
Abstract:
Positioning of unoccupied aerial systems (UAS, drones) is predominantly based on Global Navigation Satellite Systems (GNSS). Due to potential signal disruptions, redundant positioning systems are needed for reliable operation. The objective of this study was to implement and assess a redundant positioning system for high flying altitude drone operation based on visual-inertial odometry (VIO). A new sensor suite with stereo cameras and an inertial measurement unit (IMU) was developed, and a state-of-the-art VIO algorithm, VINS-Fusion, was used for localisation. Empirical testing of the system …
30

Saha, Arindam, Bibhas Chandra Dhara, Saiyed Umer, Ahmad Ali AlZubi, Jazem Mutared Alanazi, and Kulakov Yurii. "CORB2I-SLAM: An Adaptive Collaborative Visual-Inertial SLAM for Multiple Robots." Electronics 11, no. 18 (2022): 2814. http://dx.doi.org/10.3390/electronics11182814.

Full text
Abstract:
The generation of robust global maps of an unknown cluttered environment through a collaborative robotic framework is challenging. We present a collaborative SLAM framework, CORB2I-SLAM, in which each participating robot carries a camera (monocular/stereo/RGB-D) and an inertial sensor to run odometry. A centralized server stores all the maps and executes processor-intensive tasks, e.g., loop closing, map merging, and global optimization. The proposed framework uses well-established Visual-Inertial Odometry (VIO), and can be adapted to use Visual Odometry (VO) when the measurements from inertial …
31

Wang, Zihao, Sen Yang, Mengji Shi, and Kaiyu Qin. "MLSS-VO: A Multi-Level Scale Stabilizer with Self-Supervised Features for Monocular Visual Odometry in Target Tracking." Electronics 11, no. 2 (2022): 223. http://dx.doi.org/10.3390/electronics11020223.

Full text
Abstract:
In this study, a multi-level scale stabilizer intended for visual odometry (MLSS-VO) combined with a self-supervised feature matching method is proposed to address the scale uncertainty and scale drift encountered in the field of monocular visual odometry. Firstly, the architecture of an instance-level recognition model is adopted to propose a feature matching model based on a Siamese neural network. Combined with the traditional approach to feature point extraction, the feature baselines on different levels are extracted, and then treated as a reference for estimating the motion scale of the …
32

Mostafa, Mostafa, Shady Zahran, Adel Moussa, Naser El-Sheimy, and Abu Sesay. "Radar and Visual Odometry Integrated System Aided Navigation for UAVS in GNSS Denied Environment." Sensors 18, no. 9 (2018): 2776. http://dx.doi.org/10.3390/s18092776.

Full text
Abstract:
Drones are becoming increasingly significant for vast applications, such as firefighting, and rescue. While flying in challenging environments, reliable Global Navigation Satellite System (GNSS) measurements cannot be guaranteed all the time, and the Inertial Navigation System (INS) navigation solution will deteriorate dramatically. Although different aiding sensors, such as cameras, are proposed to reduce the effect of these drift errors, the positioning accuracy by using these techniques is still affected by some challenges, such as the lack of the observed features, inconsistent matches, illumination …
33

Duong, Thanh Trung, Kai-Wei Chiang, and Dinh Thuan Le. "On-line Smoothing and Error Modelling for Integration of GNSS and Visual Odometry." Sensors 19, no. 23 (2019): 5259. http://dx.doi.org/10.3390/s19235259.

Full text
Abstract:
Global navigation satellite systems (GNSSs) are commonly used for navigation and mapping applications. However, in GNSS-hostile environments, where the GNSS signal is noisy or blocked, the navigation information provided by a GNSS is inaccurate or unavailable. To overcome these issues, this study proposed a real-time visual odometry (VO)/GNSS integrated navigation system. An on-line smoothing method based on the extended Kalman filter (EKF) and the Rauch-Tung-Striebel (RTS) smoother was proposed. VO error modelling was also proposed to estimate the VO error and compensate the incoming measurements …
34

Zhi, Henghui, Chenyang Yin, Huibin Li, and Shanmin Pang. "An Unsupervised Monocular Visual Odometry Based on Multi-Scale Modeling." Sensors 22, no. 14 (2022): 5193. http://dx.doi.org/10.3390/s22145193.

Full text
Abstract:
Unsupervised deep learning methods have shown great success in jointly estimating camera pose and depth from monocular videos. However, previous methods mostly ignore the importance of multi-scale information, which is crucial for pose estimation and depth estimation, especially when the motion pattern is changed. This article proposes an unsupervised framework for monocular visual odometry (VO) that can model multi-scale information. The proposed method utilizes densely linked atrous convolutions to increase the receptive field size without losing image information, and adopts a non-local …
35

Kostusiak, Aleksander, and Piotr Skrzypczyński. "Enhancing Visual Odometry with Estimated Scene Depth: Leveraging RGB-D Data with Deep Learning." Electronics 13, no. 14 (2024): 2755. http://dx.doi.org/10.3390/electronics13142755.

Full text
Abstract:
Advances in visual odometry (VO) systems have benefited from the widespread use of affordable RGB-D cameras, improving indoor localization and mapping accuracy. However, older sensors like the Kinect v1 face challenges due to depth inaccuracies and incomplete data. This study compares indoor VO systems that use RGB-D images, exploring methods to enhance depth information. We examine conventional image inpainting techniques and a deep learning approach, utilizing newer depth data from devices like the Kinect v2. Our research highlights the importance of refining data from lower-quality sensors,
36

Yao, Erliang, Hexin Zhang, Haitao Song, and Guoliang Zhang. "Fast and robust visual odometry with a low-cost IMU in dynamic environments." Industrial Robot: the international journal of robotics research and application 46, no. 6 (2019): 882–94. http://dx.doi.org/10.1108/ir-01-2019-0001.

Full text
Abstract:
Purpose: To realize stable and precise localization in dynamic environments, the authors propose a fast and robust visual odometry (VO) approach with a low-cost Inertial Measurement Unit (IMU) in this study. Design/methodology/approach: The proposed VO incorporates the direct method with the indirect method to track the features and to optimize the camera pose. It initializes the positions of tracked pixels with the IMU information. Besides, the tracked pixels are refined by minimizing the photometric errors. Due to the small convergence radius of the indirect method, the dynamic pixels are
37

Company-Corcoles, Joan P., Emilio Garcia-Fidalgo, and Alberto Ortiz. "MSC-VO: Exploiting Manhattan and Structural Constraints for Visual Odometry." IEEE Robotics and Automation Letters 7, no. 2 (2022): 2803–10. http://dx.doi.org/10.1109/lra.2022.3142900.

Full text
38

Ortiz, Alberto. "MSC-VO: Exploiting Manhattan and Structural Constraints for Visual Odometry." IEEE Robotics and Automation Letters 7, no. 2 (2022): 2803–10. https://doi.org/10.5281/zenodo.10636473.

Full text
39

Lu, Yao, Xiaoli Xu, Mingyu Ding, Zhiwu Lu, and Tao Xiang. "A Global Occlusion-Aware Approach to Self-Supervised Monocular Visual Odometry." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 3 (2021): 2260–68. http://dx.doi.org/10.1609/aaai.v35i3.16325.

Full text
Abstract:
Self-Supervised monocular visual odometry (VO) is often cast into a view synthesis problem based on depth and camera pose estimation. One of the key challenges is to accurately and robustly estimate depth with occlusions and moving objects in the scene. Existing methods simply detect and mask out regions of occlusions locally by several convolutional layers, and then perform only partial view synthesis in the rest of the image. However, occlusion and moving object detection is an unsolved problem itself which requires global layout information. Inaccurate detection inevitably results in incorr
40

Azhmukhamedov, I. M., P. I. Tamkov, N. D. Svishchev, and A. V. Rybakov. "Visual odometry in local underwater navigation problems." Journal of Physics: Conference Series 2091, no. 1 (2021): 012053. http://dx.doi.org/10.1088/1742-6596/2091/1/012053.

Full text
Abstract:
The work processes of the ORB-SLAM algorithm are presented, together with the results of experimental studies comparing the running time of the algorithm with different parameters and cameras. The necessity of forming a visual odometry (VO) system for local navigation of remote-controlled and autonomous underwater robots is substantiated. The two odometry methods most suitable for the underwater environment are described, along with their advantages and disadvantages.
41

Nguyen, Thien Hoang, Thien-Minh Nguyen, Muqing Cao, and Lihua Xie. "Loosely-Coupled Ultra-wideband-Aided Scale Correction for Monocular Visual Odometry." Unmanned Systems 08, no. 02 (2020): 179–90. http://dx.doi.org/10.1142/s2301385020500119.

Full text
Abstract:
In this paper, we propose a method to address the problem of scale uncertainty in monocular visual odometry (VO), which includes scale ambiguity and scale drift, using distance measurements from a single ultra-wideband (UWB) anchor. A variant of Levenberg–Marquardt (LM) nonlinear least squares regression method is proposed to rectify unscaled position data from monocular odometry with 1D point-to-point distance measurements. As a loosely-coupled approach, our method is flexible in that each input block can be replaced with one’s preferred choices for monocular odometry/SLAM algorithm and UWB s
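The scale-correction problem this abstract describes, finding the metric scale that best fits unscaled monocular positions to point-to-point UWB ranges, can be sketched as a one-dimensional least-squares fit. The paper uses a Levenberg-Marquardt variant; below is a plain Gauss-Newton stand-in, with hypothetical inputs `p_unscaled` (N x 3 VO positions), `anchor` (UWB anchor position), and `d_meas` (measured ranges).

```python
import numpy as np

def estimate_scale(p_unscaled, anchor, d_meas, s0=1.0, iters=50):
    """Find the scale s minimising sum_i (||s * p_i - anchor|| - d_i)^2
    by Gauss-Newton iteration on the scalar unknown s."""
    s = s0
    for _ in range(iters):
        q = s * p_unscaled - anchor        # (N, 3) offsets to the anchor
        r = np.linalg.norm(q, axis=1)      # predicted ranges
        res = r - d_meas                   # range residuals
        J = np.sum(q * p_unscaled, axis=1) / r  # d r_i / d s
        s -= (J @ res) / (J @ J)           # scalar Gauss-Newton step
    return s
```

Because the unknown is a single scalar, the normal equations collapse to one division per iteration; the loosely-coupled design means `p_unscaled` could come from any monocular VO/SLAM front-end.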
42

Zhang, Zhaoxing, Junda Cheng, Gangwei Xu, Xiaoxiang Wang, Can Zhang, and Xin Yang. "Leveraging Consistent Spatio-Temporal Correspondence for Robust Visual Odometry." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 10 (2025): 10367–75. https://doi.org/10.1609/aaai.v39i10.33125.

Full text
Abstract:
Recent approaches to VO have significantly improved performance by using deep networks to predict optical flow between video frames. However, existing methods still suffer from noisy and inconsistent flow matching, making it difficult to handle challenging scenarios and long-sequence estimation. To overcome these challenges, we introduce Spatio-Temporal Visual Odometry (STVO), a novel deep network architecture that effectively leverages inherent spatio-temporal cues to enhance the accuracy and consistency of multi-frame flow matching. With more accurate and consistent flow matching, STVO can ac
43

Wang, Yifei, Libo Sun, and Wenhu Qin. "OFPoint: Real-Time Keypoint Detection for Optical Flow Tracking in Visual Odometry." Mathematics 13, no. 7 (2025): 1087. https://doi.org/10.3390/math13071087.

Full text
Abstract:
Visual odometry (VO), including keypoint detection, correspondence establishment, and pose estimation, is a crucial technique for determining motion in machine vision, with significant applications in augmented reality (AR), autonomous driving, and visual simultaneous localization and mapping (SLAM). For feature-based VO, the repeatability of keypoints affects the pose estimation. The convolutional neural network (CNN)-based detectors extract high-level features from images, thereby exhibiting robustness to viewpoint and illumination changes. Compared with descriptor matching, optical flow tra
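The optical-flow tracking this abstract contrasts with descriptor matching can be illustrated with the classic single-window Lucas-Kanade step: solve a 2x2 linear system built from image gradients. This is a generic textbook sketch, not the OFPoint method.

```python
import numpy as np

def lucas_kanade_window(I1, I2):
    """Estimate one (u, v) flow vector between two image patches by solving
    the Lucas-Kanade normal equations over the whole window."""
    Iy, Ix = np.gradient(I1)            # np.gradient: axis 0 is y, axis 1 is x
    It = I2 - I1                        # temporal derivative
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)        # flow (u, v) in pixels
```

The conditioning of the 2x2 matrix `A` is exactly what makes some keypoints easier to track than others, which is why detector repeatability matters for this style of VO front-end.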
44

Kahmen, O., N. Haase, and T. Luhmann. "ORIENTATION OF POINT CLOUDS FOR COMPLEX SURFACES IN MEDICAL SURGERY USING TRINOCULAR VISUAL ODOMETRY AND STEREO ORB-SLAM2." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B2-2020 (August 12, 2020): 35–42. http://dx.doi.org/10.5194/isprs-archives-xliii-b2-2020-35-2020.

Full text
Abstract:
In photogrammetry, computer vision and robotics, visual odometry (VO) and SLAM algorithms are well-known methods to estimate camera poses from image sequences. When dealing with unknown scenes, there is often no reference data available, and the scene also needs to be reconstructed for further analysis. In this contribution, a trinocular visual odometry approach is implemented and compared to stereo VO and ORB-SLAM2 in an experimental setup imitating the scene of a knee replacement surgery. Two datasets are analysed. While a test-field provides excellent conditions for feature detection
45

Chen, Y., L. Yan, and X. Lin. "AN AUTOMATIC KEY-FRAME SELECTION METHOD FOR VISUAL ODOMETRY BASED ON THE IMPROVED PWC-NET." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B2-2020 (August 12, 2020): 567–73. http://dx.doi.org/10.5194/isprs-archives-xliii-b2-2020-567-2020.

Full text
Abstract:
In order to respond quickly to rapid changes of mobile platforms in complex situations, such as sudden changes of direction or camera shake, visual odometry/visual simultaneous localization and mapping (VO/VSLAM) always needs a high-frame-rate vision sensor. However, a high sensor frame rate will affect the real-time performance of the odometry. Therefore, we need to investigate how to balance the frame rate against the pose quality of the sensor. In this paper, we propose an automatic key-frame method based on the improved PWC-Net for mobile platforms, which can
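A key-frame selection criterion of the kind this abstract describes can be reduced to a simple rule: emit a key-frame once the accumulated inter-frame motion (here summarised by a per-frame mean optical-flow magnitude) exceeds a threshold. This is a deliberately simplified stand-in for the paper's PWC-Net-based criterion; `flow_mags` and `thresh` are hypothetical.

```python
def select_keyframes(flow_mags, thresh):
    """Return indices of key-frames: frame 0, then every frame at which the
    mean flow magnitude accumulated since the last key-frame reaches thresh."""
    keys, acc = [0], 0.0
    for i, m in enumerate(flow_mags[1:], start=1):
        acc += m
        if acc >= thresh:
            keys.append(i)
            acc = 0.0  # restart accumulation from the new key-frame
    return keys
```

Thresholding accumulated motion rather than sampling frames uniformly is what lets the odometry keep up with a high-frame-rate sensor without processing every frame.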
46

Kim, Sungkwan, Inhwan Kim, Luiz Felipe Vecchietti, and Dongsoo Har. "Pose Estimation Utilizing a Gated Recurrent Unit Network for Visual Localization." Applied Sciences 10, no. 24 (2020): 8876. http://dx.doi.org/10.3390/app10248876.

Full text
Abstract:
Lately, pose estimation based on learning-based Visual Odometry (VO) methods, where raw image data are provided as the input of a neural network to get 6 Degrees of Freedom (DoF) information, has been intensively investigated. Despite its recent advances, learning-based VO methods still perform worse than the classical VO that consists of feature-based VO methods and direct VO methods. In this paper, a new pose estimation method with the help of a Gated Recurrent Unit (GRU) network trained by pose data acquired by an accurate sensor is proposed. The historical trajectory data of the yaw angle
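The Gated Recurrent Unit this abstract builds on is a small recurrence that can be written out directly. Below is the standard GRU cell update in NumPy, a generic sketch of the mechanism rather than the paper's trained network; all weight matrices are hypothetical parameters.

```python
import numpy as np

def gru_cell(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One step of a Gated Recurrent Unit: gates decide how much of the
    previous hidden state h to keep versus overwrite from the input x."""
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sig(Wz @ x + Uz @ h)                   # update gate
    r = sig(Wr @ x + Ur @ h)                   # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))   # candidate state
    return (1 - z) * h + z * h_tilde           # blended new hidden state
```

Feeding a trajectory (e.g. historical yaw angles) through this recurrence one step at a time is what lets the network exploit temporal context that a frame-by-frame pose regressor lacks.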
47

Herrera-Granda, Erick P., Juan C. Torres-Cantero, Andrés Rosales, and Diego H. Peluffo-Ordóñez. "A Comparison of Monocular Visual SLAM and Visual Odometry Methods Applied to 3D Reconstruction." Applied Sciences 13, no. 15 (2023): 8837. http://dx.doi.org/10.3390/app13158837.

Full text
Abstract:
Pure monocular 3D reconstruction is a complex problem that has attracted the research community’s interest due to the affordability and availability of RGB sensors. SLAM, VO, and SFM are disciplines formulated to solve the 3D reconstruction problem and estimate the camera’s ego-motion; so, many methods have been proposed. However, most of these methods have not been evaluated on large datasets and under various motion patterns, have not been tested under the same metrics, and most of them have not been evaluated following a taxonomy, making their comparison and selection difficult. In this res
48

Costante, Gabriele, and Thomas Alessandro Ciarfuglia. "LS-VO: Learning Dense Optical Subspace for Robust Visual Odometry Estimation." IEEE Robotics and Automation Letters 3, no. 3 (2018): 1735–42. http://dx.doi.org/10.1109/lra.2018.2803211.

Full text
49

Lin, C. C., Y. H. Tseng, K. Y. Lin, and K. W. Chiang. "NETWORK ADJUSTMENT OF AUTOMATIC RELATIVE ORIENTATION FROM IMAGE SEQUENCES." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-1 (September 26, 2018): 269–73. http://dx.doi.org/10.5194/isprs-archives-xlii-1-269-2018.

Full text
Abstract:
Recently, Visual Odometry (VO) using cameras for navigation has become known as an alternative solution in GNSS-hostile environments. VO is a process of estimating the ego-motion based on consecutive frames captured by the camera. 3D motion, including attitude and position, can be described by the exterior orientation parameters (EOPs) in photogrammetry. The advantage of VO compared with wheel odometry is that VO is not affected by wheel slip on uneven terrain or in other adverse conditions. Since VO computes the camera path incrementally, the errors are a
50

Liu, Zhe, Dianxi Shi, Ruihao Li, and Shaowu Yang. "ESVIO: Event-Based Stereo Visual-Inertial Odometry." Sensors 23, no. 4 (2023): 1998. http://dx.doi.org/10.3390/s23041998.

Full text
Abstract:
The emerging event cameras are bio-inspired sensors that can output pixel-level brightness changes at extremely high rates, and event-based visual-inertial odometry (VIO) is widely studied and used in autonomous robots. In this paper, we propose an event-based stereo VIO system, namely ESVIO. Firstly, we present a novel direct event-based VIO method, which fuses events’ depth, Time-Surface images, and pre-integrated inertial measurement to estimate the camera motion and inertial measurement unit (IMU) biases in a sliding window non-linear optimization framework, effectively improving the state