Journal articles on the topic 'Odometry estimation'

Consult the top 50 journal articles for your research on the topic 'Odometry estimation.'

1

Nurmaini, Siti, and Sahat Pangidoan. "Localization of Leader-Follower Robot Using Extended Kalman Filter." Computer Engineering and Applications Journal 7, no. 2 (2018): 95–108. http://dx.doi.org/10.18495/comengapp.v7i2.253.

Abstract:
A non-holonomic leader-follower robot must be able to determine its own position in order to navigate autonomously in its environment; this problem is known as localization. A common way to estimate the robot pose is odometry. However, odometry measurements may be inaccurate due to wheel slippage and other small noise sources. In this research, the Extended Kalman Filter (EKF) is proposed to minimize the error caused by the odometry measurement. The EKF algorithm works by fusing odometry and landmark information to produce a better estimate.
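The predict/update cycle described above can be sketched for a robot moving on a line with a single range landmark. This is a minimal illustration, not the paper's implementation; the noise variances and the scalar landmark model are assumed values:

```python
def ekf_step(x, P, u, z, landmark, q=0.02, r=0.1):
    """One predict/update cycle for a 1-D robot.

    x, P : state (position) and its variance
    u    : odometry increment (predict step)
    z    : measured distance to a landmark at a known position
    q, r : odometry and range noise variances (assumed values)
    """
    # Predict: dead-reckon with odometry; uncertainty grows.
    x_pred = x + u
    P_pred = P + q
    # Update: h(x) = landmark - x, so the Jacobian is H = -1.
    H = -1.0
    y = z - (landmark - x_pred)   # innovation
    S = H * P_pred * H + r        # innovation variance
    K = P_pred * H / S            # Kalman gain
    x_new = x_pred + K * y
    P_new = (1 - K * H) * P_pred
    return x_new, P_new
```

With a perfect odometry step the landmark update leaves the estimate unchanged but shrinks the variance; with a biased step it pulls the estimate back toward what the range measurement implies.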
2

Li, Q., C. Wang, S. Chen, et al. "DEEP LIDAR ODOMETRY." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W13 (June 5, 2019): 1681–86. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w13-1681-2019.

Abstract:
Most existing lidar odometry estimation strategies are formulated under a standard framework that includes feature selection and pose estimation through feature matching. In this work, we present a novel pipeline called LO-Net for lidar odometry estimation from 3D lidar scanning data using deep convolutional networks. The network is trained in an end-to-end manner and infers 6-DoF poses from the encoded sequential lidar data. Based on the newly designed mask-weighted geometric constraint loss, the network automatically learns effective feature representations.
3

Martínez-García, Edgar Alonso, Joaquín Rivero-Juárez, Luz Abril Torres-Méndez, and Jorge Enrique Rodas-Osollo. "Divergent trinocular vision observers design for extended Kalman filter robot state estimation." Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering 233, no. 5 (2018): 524–47. http://dx.doi.org/10.1177/0959651818800908.

Abstract:
Here, we report the design of two deterministic observers that exploit the capabilities of a home-made divergent trinocular visual sensor to sense depth data. The three-dimensional key points that the observers measure are triangulated for visual odometry and estimated by an extended Kalman filter. This work deals with a four-wheel-drive mobile robot with four passive suspensions. The direct and inverse kinematic solutions are deduced and used for the updating and prediction models of the extended Kalman filter as feedback for the robot's position controller.
4

Wu, Qin Fan, Qing Li, and Nong Cheng. "Visual Odometry and 3D Mapping in Indoor Environments." Applied Mechanics and Materials 336-338 (July 2013): 348–54. http://dx.doi.org/10.4028/www.scientific.net/amm.336-338.348.

Abstract:
This paper presents a robust state estimation and 3D environment modeling approach that enables a Micro Aerial Vehicle (MAV) to operate in challenging GPS-denied indoor environments. A fast, accurate and robust approach to visual odometry is developed based on the Microsoft Kinect. Discriminative features are extracted from RGB images and matched across consecutive frames. A robust least-squares estimator is applied to estimate the relative motion. All computation is performed in real time, providing high-frequency 6-degree-of-freedom state estimation.
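The least-squares relative-motion step can be illustrated in 2D with the closed-form Kabsch alignment of matched feature points; this is a generic sketch, not the paper's estimator:

```python
import math

def rigid_motion_2d(src, dst):
    """Least-squares rigid transform (theta, t) mapping src points onto dst.

    src, dst: lists of (x, y) pairs matched across two frames.
    Closed-form 2-D Kabsch solution: center both sets, recover the
    rotation angle from the cross/dot correlations, then the translation.
    """
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    num = den = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        ax, ay = sx - csx, sy - csy   # centered source point
        bx, by = dx - cdx, dy - cdy   # centered destination point
        num += ax * by - ay * bx      # cross term -> sin(theta)
        den += ax * bx + ay * by      # dot term   -> cos(theta)
    theta = math.atan2(num, den)
    c, s = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - s * csy)
    ty = cdy - (s * csx + c * csy)
    return theta, (tx, ty)
```

A robust variant would wrap this in RANSAC or an M-estimator to reject mismatched features.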
5

Jiménez, Paulo A., and Bijan Shirinzadeh. "Laser interferometry measurements based calibration and error propagation identification for pose estimation in mobile robots." Robotica 32, no. 1 (2013): 165–74. http://dx.doi.org/10.1017/s0263574713000660.

Abstract:
A widely used method for pose estimation in mobile robots is odometry. Odometry allows the robot to reconstruct its position and orientation in real time from the wheels' encoder measurements. Because it is unbounded, the odometry calculation accumulates errors, with a quadratic increase of the error variance with traversed distance. This paper develops a novel method for odometry calibration and error propagation identification for mobile robots. The proposed method uses a laser-based interferometer to measure distance precisely. Two variants of the proposed calibration method are examined.
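Why unbounded dead reckoning accumulates variance can be shown with a toy propagation in which each step injects independent heading noise (illustrative values, not the paper's error model):

```python
def lateral_variance(n_steps, step=0.1, sigma_theta=0.01):
    """Cross-track variance of straight-line odometry when each step
    adds independent heading noise.

    A heading error injected at step k shifts every later position,
    so its contribution is (sigma_theta * remaining_distance)^2.
    Small-angle approximation; illustrative parameter values.
    """
    var = 0.0
    for k in range(n_steps):
        remaining = (n_steps - k) * step
        var += (sigma_theta * remaining) ** 2
    return var
```

The variance grows superlinearly with traveled distance, which is why periodic calibration or external corrections are needed.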
6

Gonzalez, Ramon, Francisco Rodriguez, Jose Luis Guzman, Cedric Pradalier, and Roland Siegwart. "Combined visual odometry and visual compass for off-road mobile robots localization." Robotica 30, no. 6 (2011): 865–78. http://dx.doi.org/10.1017/s026357471100110x.

Abstract:
In this paper, we present work related to the application of a visual odometry approach to estimating the location of mobile robots operating in off-road conditions. The visual odometry approach is based on template matching: the robot displacement is estimated through a matching process between two consecutive images. Standard visual odometry has been improved using a visual compass method for orientation estimation. For this purpose, two consumer-grade monocular cameras have been employed, one of them pointing at the ground under the robot.
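The template-matching step can be sketched as a brute-force search for the image shift minimizing the sum of squared differences; a toy version with assumed grayscale-array inputs, not the paper's matcher:

```python
def match_shift(prev, curr, max_shift=2):
    """Estimate the (dy, dx) shift between two frames by template
    matching: try every offset and keep the one with the smallest
    mean squared difference over the overlapping region.

    prev, curr: 2-D lists of grayscale values.
    """
    h, w = len(prev), len(prev[0])
    best, best_off = float("inf"), (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            ssd, count = 0.0, 0
            for y in range(h):
                for x in range(w):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        d = curr[y][x] - prev[yy][xx]
                        ssd += d * d
                        count += 1
            ssd /= count  # normalize so small overlaps are not favoured
            if ssd < best:
                best, best_off = ssd, (dy, dx)
    return best_off
```

With a ground-facing camera, the recovered pixel shift scales directly to a metric displacement once the camera height and focal length are known.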
7

Valiente García, David, Lorenzo Fernández Rojo, Arturo Gil Aparicio, Luis Payá Castelló, and Oscar Reinoso García. "Visual Odometry through Appearance- and Feature-Based Method with Omnidirectional Images." Journal of Robotics 2012 (2012): 1–13. http://dx.doi.org/10.1155/2012/797063.

Abstract:
In the field of mobile autonomous robots, visual odometry entails the retrieval of the motion transformation between two consecutive poses of the robot solely by means of a camera sensor. Visual odometry provides essential information for trajectory estimation in problems such as localization and SLAM (Simultaneous Localization and Mapping). In this work we present a motion estimation method based on a single omnidirectional camera. We exploit the maximized horizontal field of view provided by this camera, which allows us to encode large scene information into the same image.
8

Jung, Changbae, and Woojin Chung. "Calibration of Kinematic Parameters for Two Wheel Differential Mobile Robots by Using Experimental Heading Errors." International Journal of Advanced Robotic Systems 8, no. 5 (2011): 68. http://dx.doi.org/10.5772/50906.

Abstract:
Odometry using incremental wheel encoder sensors provides the relative position of mobile robots. This relative position is fundamental input for pose estimation methods such as EKF localization and Monte Carlo localization. Odometry is also the sole source of localization information when absolute measurement systems are not available. However, odometry suffers from the accumulation of kinematic modeling errors of the wheels as the robot's travel distance increases. Therefore, systematic odometry errors need to be calibrated.
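The wheel-encoder dead reckoning that such calibration targets follows the standard differential-drive update; a minimal sketch under the usual midpoint-arc assumption:

```python
import math

def diff_drive_update(pose, d_left, d_right, track_width):
    """Dead-reckon one odometry step of a two-wheel differential robot.

    pose: (x, y, theta); d_left/d_right: wheel travel since the last
    update (encoder ticks * wheel circumference / ticks-per-rev);
    track_width: distance between the wheels.
    """
    x, y, theta = pose
    d_center = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / track_width
    # Advance along the heading at the middle of the turn.
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return x, y, theta + d_theta
```

Calibration adjusts exactly the parameters appearing here (effective wheel radii and track width), since small errors in them bias every step.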
9

Thapa, Vikas, Abhishek Sharma, Beena Gairola, Amit K. Mondal, Vindhya Devalla, and Ravi K. Patel. "A Review on Visual Odometry Techniques for Mobile Robots: Types and Challenges." Recent Advances in Electrical & Electronic Engineering (Formerly Recent Patents on Electrical & Electronic Engineering) 13, no. 5 (2020): 618–31. http://dx.doi.org/10.2174/2352096512666191004142546.

Abstract:
For autonomous navigation, tracking and obstacle avoidance, a mobile robot must maintain knowledge of its position over time. Among the available odometry techniques, vision-based odometry is a robust and economical one. In addition, combining position estimation from odometry with interpretations of the surroundings using a mobile camera is effective. This paper presents an overview of current visual odometry approaches, applications, and challenges in mobile robots. The study offers a comparative analysis of the different available techniques and algorithms.
10

Lee, Kyuman, and Eric N. Johnson. "Latency Compensated Visual-Inertial Odometry for Agile Autonomous Flight." Sensors 20, no. 8 (2020): 2209. http://dx.doi.org/10.3390/s20082209.

Abstract:
In visual-inertial odometry (VIO), inertial measurement unit (IMU) dead reckoning acts as the dynamic model for flight vehicles, while camera vision extracts information about the surrounding environment and determines features or points of interest. With these sensors, the most widely used algorithm for estimating vehicle and feature states for VIO is the extended Kalman filter (EKF). The design of the standard EKF does not inherently allow for time offsets between the timestamps of the IMU and vision data, yet sensor-related delays arise in various realistic conditions.
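A common building block for latency compensation is to buffer past states and interpolate at the delayed measurement's true timestamp; a toy scalar sketch, not the paper's method:

```python
def state_at(history, t_query):
    """Linearly interpolate a buffered scalar state at time t_query.

    history: list of (timestamp, value) pairs in increasing time order.
    A vision measurement that arrives `delay` seconds late can then be
    compared against state_at(history, arrival_time - delay) instead of
    the newest state.
    """
    for (t0, v0), (t1, v1) in zip(history, history[1:]):
        if t0 <= t_query <= t1:
            a = (t_query - t0) / (t1 - t0)
            return v0 + a * (v1 - v0)
    raise ValueError("t_query outside buffered history")
```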
11

Salameh, Mohammed, Azizi Abdullah, and Shahnorbanun Sahran. "Multiple Descriptors for Visual Odometry Trajectory Estimation." International Journal on Advanced Science, Engineering and Information Technology 8, no. 4-2 (2018): 1423. http://dx.doi.org/10.18517/ijaseit.8.4-2.6834.

12

Ramezani, Milad, Kourosh Khoshelham, and Clive Fraser. "Pose estimation by Omnidirectional Visual-Inertial Odometry." Robotics and Autonomous Systems 105 (July 2018): 26–37. http://dx.doi.org/10.1016/j.robot.2018.03.007.

13

Costante, Gabriele, and Michele Mancini. "Uncertainty Estimation for Data-Driven Visual Odometry." IEEE Transactions on Robotics 36, no. 6 (2020): 1738–57. http://dx.doi.org/10.1109/tro.2020.3001674.

14

Teixeira, Bernardo, Hugo Silva, Anibal Matos, and Eduardo Silva. "Deep Learning for Underwater Visual Odometry Estimation." IEEE Access 8 (2020): 44687–701. http://dx.doi.org/10.1109/access.2020.2978406.

15

An, Lifeng, Xinyu Zhang, Hongbo Gao, and Yuchao Liu. "Semantic segmentation–aided visual odometry for urban autonomous driving." International Journal of Advanced Robotic Systems 14, no. 5 (2017): 172988141773566. http://dx.doi.org/10.1177/1729881417735667.

Abstract:
Visual odometry plays an important role in urban autonomous driving. Feature-based visual odometry methods sample candidates randomly from all available feature points, while alignment-based methods take all pixels into account. These methods assume that the quantitative majority of candidate visual cues represents the true motion. But in real urban traffic scenes, this assumption can be broken by the many dynamic traffic participants: big trucks or buses may occupy the main image parts of a front-view monocular camera and result in wrong visual odometry estimates.
16

Aguiar, André, Filipe Santos, Armando Jorge Sousa, and Luís Santos. "FAST-FUSION: An Improved Accuracy Omnidirectional Visual Odometry System with Sensor Fusion and GPU Optimization for Embedded Low Cost Hardware." Applied Sciences 9, no. 24 (2019): 5516. http://dx.doi.org/10.3390/app9245516.

Abstract:
The main task when developing a mobile robot is to achieve accurate and robust navigation in a given environment. To achieve this goal, the robot's ability to localize itself is crucial. In outdoor environments, namely agricultural ones, this task becomes a real challenge because odometry is not always usable and global navigation satellite system (GNSS) signals are blocked or significantly degraded. To answer this challenge, this work presents a solution for outdoor localization based on an omnidirectional visual odometry technique fused with a gyroscope and a low-cost planar light detection and ranging (LiDAR) sensor.
17

Conduraru, Ionel, Ioan Doroftei, Dorin Luca, and Alina Conduraru Slatineanu. "Odometry Aspects of an Omni-Directional Mobile Robot with Modified Mecanum Wheels." Applied Mechanics and Materials 658 (October 2014): 587–92. http://dx.doi.org/10.4028/www.scientific.net/amm.658.587.

Abstract:
Mobile robots are widely used in industry, military operations, exploration and other applications where human intervention is risky. When a mobile robot has to move in small and narrow spaces and avoid obstacles, mobility is one of its main issues. An omni-directional drive mechanism is very attractive because it guarantees very good mobility in such cases. Also, accurate position estimation is a key component of the successful operation of most autonomous mobile robots. In this work, some odometry aspects of an omni-directional robot are presented.
18

Fazekas, Máté, Péter Gáspár, and Balázs Németh. "Velocity Estimation via Wheel Circumference Identification." Periodica Polytechnica Transportation Engineering 49, no. 3 (2021): 250–60. http://dx.doi.org/10.3311/pptr.18623.

Abstract:
The article presents a velocity estimation algorithm based on wheel-encoder odometry and wheel circumference identification. The motivation of the paper is that a proper model can improve motion estimation when sensor performance is poor: for example, when GNSS signals are unavailable, when vision-based methods are inaccurate due to an insufficient number of features, or when IMU-based methods fail for lack of frequent accelerations. In these situations, the wheel encoders can be an appropriate choice for state estimation.
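The idea of identifying a wheel circumference against an external distance reference can be sketched as a one-parameter least-squares fit; the function names and data layout here are hypothetical, not the paper's algorithm:

```python
def identify_circumference(revolutions, distances):
    """Least-squares fit of the wheel circumference c from paired logs:
    wheel revolutions per interval and an external distance reference
    (e.g. GNSS) for the same interval. Model: distance = c * revolutions,
    so c = sum(d * r) / sum(r * r).
    """
    num = sum(d * r for d, r in zip(distances, revolutions))
    den = sum(r * r for r in revolutions)
    return num / den

def wheel_speed(ticks, ticks_per_rev, circumference, dt):
    """Velocity from encoder ticks over a sampling interval dt."""
    return ticks / ticks_per_rev * circumference / dt
```

Once the circumference has been identified during good-signal phases, encoder-only velocity remains usable through GNSS outages.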
19

Boukhers, Zeyd, Kimiaki Shirahama, and Marcin Grzegorzek. "Less restrictive camera odometry estimation from monocular camera." Multimedia Tools and Applications 77, no. 13 (2017): 16199–222. http://dx.doi.org/10.1007/s11042-017-5195-7.

20

de Saxe, Christopher, and David Cebon. "Estimation of trailer off-tracking using visual odometry." Vehicle System Dynamics 57, no. 5 (2018): 752–76. http://dx.doi.org/10.1080/00423114.2018.1484498.

21

Parra, I., M. A. Sotelo, D. F. Llorca, and M. Ocaña. "Robust visual odometry for vehicle localization in urban environments." Robotica 28, no. 3 (2009): 441–52. http://dx.doi.org/10.1017/s026357470900575x.

Abstract:
This paper describes a new approach for estimating the vehicle motion trajectory in complex urban environments by means of visual odometry. A new strategy for robust feature extraction and data post-processing is developed and tested on-road. Scale-invariant feature transform (SIFT) features are used in order to cope with the complexity of urban environments. The obtained results are discussed and compared to previous works. In the prototype system, the ego-motion of the vehicle is computed using a stereo-vision system mounted next to the rear-view mirror of the car.
22

Fazekas, Máté, Péter Gáspár, and Balázs Németh. "Calibration and Improvement of an Odometry Model with Dynamic Wheel and Lateral Dynamics Integration." Sensors 21, no. 2 (2021): 337. http://dx.doi.org/10.3390/s21020337.

Abstract:
Localization is a key part of an autonomous system such as a self-driving car. The main sensor for the task is GNSS; however, its limitations can be overcome only by integrating other methods, for example wheel odometry, which requires a well-calibrated model. This paper proposes a novel wheel odometry model and its calibration. The parameters of the nonlinear dynamic system are estimated with Gauss–Newton regression. Because only automotive-grade sensors are applied to keep the system cost-effective, measurement uncertainty strongly corrupts the estimation accuracy.
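Gauss–Newton regression itself can be illustrated on a one-parameter nonlinear model; the model y = exp(k*x) is assumed purely for illustration and is not the paper's odometry model:

```python
import math

def gauss_newton_k(xs, ys, k0=1.0, iters=50):
    """Gauss-Newton regression for the single parameter k of the model
    y = exp(k * x): linearize the residuals r_i = y_i - exp(k * x_i)
    around the current k, solve the scalar normal equation, iterate.
    """
    k = k0
    for _ in range(iters):
        num = den = 0.0
        for x, y in zip(xs, ys):
            f = math.exp(k * x)
            J = x * f          # d f / d k
            r = y - f          # residual
            num += J * r
            den += J * J
        k += num / den         # Gauss-Newton step
    return k
```

The same linearize-solve-iterate loop generalizes to the multi-parameter case with a Jacobian matrix and normal equations.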
23

Yoon, Sung-Joo, and Taejung Kim. "Development of Stereo Visual Odometry Based on Photogrammetric Feature Optimization." Remote Sensing 11, no. 1 (2019): 67. http://dx.doi.org/10.3390/rs11010067.

Abstract:
Visual odometry (VO) is an important image processing technology that estimates platform motion through a sequence of images. VO is of interest to the virtual reality (VR) industry as well as the automobile industry because its construction cost is low. In this study, we developed a stereo visual odometry (SVO) method based on photogrammetric geometric interpretation. The proposed method performs feature optimization and pose estimation through photogrammetric bundle adjustment.
24

Esfandiari, Hooman, Derek Lichti, and Carolyn Anglin. "Single-camera visual odometry to track a surgical X-ray C-arm base." Proceedings of the Institution of Mechanical Engineers, Part H: Journal of Engineering in Medicine 231, no. 12 (2017): 1140–51. http://dx.doi.org/10.1177/0954411917735556.

Abstract:
This study provides a framework for a single-camera odometry system for localizing a surgical C-arm base. An application-specific monocular visual odometry system (a downward-looking consumer-grade camera rigidly attached to the C-arm base) is proposed. The cumulative dead-reckoning estimate of the base is extracted from frame-to-frame homography estimation, fed by optical-flow results. Online position and orientation parameters are then reported. Positional accuracy better than 2% of the total traveled distance was achieved in most cases.
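Cumulative dead reckoning from frame-to-frame motion estimates amounts to composing planar transforms; a generic sketch in which the per-frame deltas stand in for the homography-derived motions:

```python
import math

def compose(pose, delta):
    """Compose a frame-to-frame motion estimate (dx, dy, dtheta),
    expressed in the previous frame, onto the cumulative world pose
    (x, y, theta): the chain a frame-to-frame odometry maintains.
    """
    x, y, th = pose
    dx, dy, dth = delta
    c, s = math.cos(th), math.sin(th)
    return (x + c * dx - s * dy,
            y + s * dx + c * dy,
            th + dth)

def dead_reckon(deltas, start=(0.0, 0.0, 0.0)):
    pose = start
    for d in deltas:
        pose = compose(pose, d)
    return pose
```

Because each delta's error is compounded by all later steps, drift is conventionally reported as a percentage of the total traveled distance, as in the abstract.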
25

Hong, Euntae, and Jongwoo Lim. "Visual-Inertial Odometry with Robust Initialization and Online Scale Estimation." Sensors 18, no. 12 (2018): 4287. http://dx.doi.org/10.3390/s18124287.

Abstract:
Visual-inertial odometry (VIO) has recently received much attention for efficient and accurate ego-motion estimation of unmanned aerial vehicles (UAVs). Recent studies have shown that optimization-based algorithms typically achieve high accuracy when given a sufficient amount of information, but occasionally suffer from divergence when solving highly non-linear problems. Further, their performance depends significantly on the accuracy of the initialization of the inertial measurement unit (IMU) parameters. In this paper, we propose a novel VIO algorithm for estimating the motion state of UAVs.
26

Aladem, Mohamed, and Samir Rawashdeh. "Lightweight Visual Odometry for Autonomous Mobile Robots." Sensors 18, no. 9 (2018): 2837. http://dx.doi.org/10.3390/s18092837.

Abstract:
Vision-based motion estimation is an effective means of mobile robot localization and is often used in conjunction with other sensors for navigation and path planning. This paper presents a low-overhead real-time ego-motion estimation (visual odometry) system based on either a stereo or an RGB-D sensor. The algorithm's accuracy outperforms typical frame-to-frame approaches by maintaining a limited local map, while requiring significantly less memory and computational power than the global maps common in full visual SLAM methods. The algorithm is evaluated on common publicly available datasets.
27

Javanmard-Gh., A., D. Iwaszczuk, and S. Roth. "DEEPLIO: DEEP LIDAR INERTIAL SENSOR FUSION FOR ODOMETRY ESTIMATION." ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences V-1-2021 (June 17, 2021): 47–54. http://dx.doi.org/10.5194/isprs-annals-v-1-2021-47-2021.

Abstract:
Having a good estimate of the position and orientation of a mobile agent is essential for many application domains such as robotics, autonomous driving, and virtual and augmented reality. In particular, when using LiDAR and IMU sensors as the inputs, most existing methods still use classical filter-based fusion to achieve this task. In this work, we propose DeepLIO, a modular, end-to-end learning-based fusion framework for odometry estimation using LiDAR and IMU sensors. For this task, our network learns an appropriate fusion function by considering the different modalities of its input.
28

Han, Chenlei, Michael Frey, and Frank Gauterin. "Modular Approach for Odometry Localization Method for Vehicles with Increased Maneuverability." Sensors 21, no. 1 (2020): 79. http://dx.doi.org/10.3390/s21010079.

Abstract:
Localization and navigation not only provide positioning and route guidance information for users, but are also important inputs for vehicle control. This paper investigates the possibility of using odometry to estimate the position and orientation of a vehicle with a wheel-individual steering system in omnidirectional parking maneuvers. Vehicle models and sensors have been identified for this application. Several odometry versions are designed using a modular approach, developed in this paper to help users design state estimators.
29

Xu, Bo, Yu Chen, Shoujian Zhang, and Jingrong Wang. "Improved Point–Line Visual–Inertial Odometry System Using Helmert Variance Component Estimation." Remote Sensing 12, no. 18 (2020): 2901. http://dx.doi.org/10.3390/rs12182901.

Abstract:
Visual image sequences from mobile platforms inevitably contain large areas of various types of weak texture, which hinder the acquisition of an accurate pose as the platform moves. Visual–inertial odometry (VIO) using both point and line features as visual information performs well in weak-texture environments and can solve these problems to a certain extent. However, the extraction and matching of line features are time consuming, and reasonable weights between the point and line features are hard to estimate, which makes it difficult to accurately track the pose.
30

Zhang, Chaofan, Yong Liu, Fan Wang, Yingwei Xia, and Wen Zhang. "VINS-MKF: A Tightly-Coupled Multi-Keyframe Visual-Inertial Odometry for Accurate and Robust State Estimation." Sensors 18, no. 11 (2018): 4036. http://dx.doi.org/10.3390/s18114036.

Abstract:
State estimation is crucial for robot autonomy, and visual odometry (VO) has received significant attention in the robotics field because it can provide accurate state estimation. However, the accuracy and robustness of most existing VO methods are degraded in complex conditions due to the limited field of view (FOV) of the camera used. In this paper, we present a novel tightly-coupled multi-keyframe visual-inertial odometry (called VINS-MKF), which can provide accurate and robust state estimation for robots in indoor environments.
31

Bag, Suvam, Vishwas Venkatachalapathy, and Raymond W. Ptucha. "Motion Estimation Using Visual Odometry and Deep Learning Localization." Electronic Imaging 2017, no. 19 (2017): 62–69. http://dx.doi.org/10.2352/issn.2470-1173.2017.19.avm-022.

32

Abdu, Ahmed, Hakim A. Abdo, and Al-Alimi Dalal. "Robust Monocular Visual Odometry Trajectory Estimation in Urban Environments." International Journal of Information Technology and Computer Science 11, no. 10 (2019): 12–18. http://dx.doi.org/10.5815/ijitcs.2019.10.02.

33

Lin, Lili, Weisheng Wang, Wan Luo, Lesheng Song, and Wenhui Zhou. "Unsupervised monocular visual odometry with decoupled camera pose estimation." Digital Signal Processing 114 (July 2021): 103052. http://dx.doi.org/10.1016/j.dsp.2021.103052.

34

Liu, Qiang, Haidong Zhang, Yiming Xu, and Li Wang. "Unsupervised Deep Learning-Based RGB-D Visual Odometry." Applied Sciences 10, no. 16 (2020): 5426. http://dx.doi.org/10.3390/app10165426.

Abstract:
Recently, deep learning frameworks have been deployed in visual odometry systems and have achieved results comparable to traditional feature-matching-based systems. However, most deep learning-based frameworks inevitably need labeled data as ground truth for training. On the other hand, monocular odometry systems are incapable of recovering absolute scale, so external or prior information has to be introduced for scale recovery. To solve these problems, we present a novel deep learning-based RGB-D visual odometry system.
35

Liu, Fei, Yashar Balazadegan Sarvrood, and Yang Gao. "Implementation and Analysis of Tightly Integrated INS/Stereo VO for Land Vehicle Navigation." Journal of Navigation 71, no. 1 (2017): 83–99. http://dx.doi.org/10.1017/s037346331700056x.

Abstract:
Tight integration of inertial sensors and stereo visual odometry to bridge Global Navigation Satellite System (GNSS) signal outages in challenging environments has drawn increasing attention. However, the details of how feature pixel coordinates from visual odometry can be used directly to limit the rapid drift of inertial sensors in a tight integration implementation have rarely been provided in previous works. For instance, a key challenge in the tight integration of inertial and stereo visual datasets is how to correct inertial sensor errors using the pixel measurements from visual odometry.
36

Boulekchour, Mohammed, Nabil Aouf, and Mark Richardson. "Robust L∞ Convex Optimisation for Monocular Visual Odometry Trajectory Estimation." Robotica 34, no. 3 (2014): 703–22. http://dx.doi.org/10.1017/s0263574714001829.

Abstract:
The most important applications of many computer vision systems are based on robust feature extraction, matching and tracking. Due to the extraction techniques used, the accuracy of image feature locations is heavily dependent on the variation in intensity within their neighbourhoods, from which their uncertainties are estimated. In the present work, a robust L∞ optimisation solution for monocular motion estimation systems is presented. The uncertainty estimation techniques are based on a SIFT derivative approach and its propagation through the eight-point algorithm and singular value decomposition.
37

Kersten, J., and V. Rodehorst. "ENHANCEMENT STRATEGIES FOR FRAME-TO-FRAME UAS STEREO VISUAL ODOMETRY." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B3 (June 9, 2016): 511–18. http://dx.doi.org/10.5194/isprsarchives-xli-b3-511-2016.

Abstract:
Autonomous navigation of indoor unmanned aircraft systems (UAS) requires accurate pose estimates, usually obtained from indirect measurements. Navigation based on inertial measurement units (IMU) is known to be affected by high drift rates. The incorporation of cameras provides complementary information due to the different underlying measurement principle. The scale ambiguity problem of monocular cameras is avoided when a lightweight stereo camera setup is used. However, frame-to-frame stereo visual odometry (VO) approaches are also known to accumulate pose estimation errors over time.
38

Kersten, J., and V. Rodehorst. "ENHANCEMENT STRATEGIES FOR FRAME-TO-FRAME UAS STEREO VISUAL ODOMETRY." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B3 (June 9, 2016): 511–18. http://dx.doi.org/10.5194/isprs-archives-xli-b3-511-2016.

39

Yuan, Cheng, Jizhou Lai, Pin Lyu, Peng Shi, Wei Zhao, and Kai Huang. "A Novel Fault-Tolerant Navigation and Positioning Method with Stereo-Camera/Micro Electro Mechanical Systems Inertial Measurement Unit (MEMS-IMU) in Hostile Environment." Micromachines 9, no. 12 (2018): 626. http://dx.doi.org/10.3390/mi9120626.

Abstract:
Visual odometry (VO) is a navigation and positioning method that estimates the ego-motion of vehicles from images. However, VO can fail severely in hostile environments because of sparse features, fast angular motions, or illumination changes; enhancing the robustness of VO in hostile environments has therefore become a popular research topic. In this paper, a novel fault-tolerant visual-inertial odometry (VIO) navigation and positioning framework is presented. The micro-electro-mechanical-systems inertial measurement unit (MEMS-IMU) is used to aid the stereo camera.
40

Yoon, S. J., W. S. Yoon, J. W. Jung, and T. Kim. "DEVELOPMENT OF A SINGLE-VIEW ODOMETER BASED ON PHOTOGRAMMETRIC BUNDLE ADJUSTMENT." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2 (May 30, 2018): 1219–23. http://dx.doi.org/10.5194/isprs-archives-xlii-2-1219-2018.

Full text
Abstract:
Recently, a vehicle is equipped with various sensors, which aim smart and autonomous functions. Single-view odometer estimates its pose using a monoscopic camera mounted on a vehicle. It was generally studied in the field of computer vision. On the other hands, photogrammetry focuses to produce precise three-dimensional position information using bundle adjustment methods. Therefore, this paper proposes to apply photogrammetric approach to single view odometer. Firstly, it performs real-time corresponding point extraction. Next, it estimates the pose using relative orientation based on coplana
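The coplanarity condition mentioned in the abstract above is the classical epipolar constraint of relative orientation. As a brief refresher in standard notation (not taken from the paper itself):

```latex
% Coplanarity (epipolar) condition: the baseline t and the two image
% rays x_1, x_2 to the same object point must lie in one plane.
\[
  \mathbf{x}_2^{\top} E \, \mathbf{x}_1 = 0,
  \qquad
  E = [\mathbf{t}]_{\times} R
\]
% Here x_1, x_2 are homogeneous normalized image coordinates of a
% corresponding point pair, R and t are the relative rotation and
% baseline between the two exposures, and [t]_x is the skew-symmetric
% cross-product matrix. Solving for E from point correspondences gives
% the relative orientation up to the (unknown) scale of t.
```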
41

Jeong, Jae Heon, and Nikolaus Correll. "Towards Real-Time Trinocular Visual Odometry." Applied Mechanics and Materials 490-491 (January 2014): 1424–29. http://dx.doi.org/10.4028/www.scientific.net/amm.490-491.1424.

Abstract:
Pose estimation for a multi-camera rig that lacks sufficient overlapping fields of view for stereo is generally computationally expensive due to the offset of the camera centers and the bundle adjustment algorithm. We proposed a divide-and-conquer approach that reduces the trinocular visual odometry problem to five monocular visual odometry problems: one for each individual camera sequence, and two more using features matched temporally across consecutive images from the center camera to the left and right cameras, respectively. While this approach provides high accuracy over long distances in outdoor env
42

Ma, Fangwu, Jinzhu Shi, Yu Yang, Jinhang Li, and Kai Dai. "ACK-MSCKF: Tightly-Coupled Ackermann Multi-State Constraint Kalman Filter for Autonomous Vehicle Localization." Sensors 19, no. 21 (2019): 4816. http://dx.doi.org/10.3390/s19214816.

Abstract:
Visual-Inertial Odometry (VIO) is subject to additional unobservable directions under the special motions of ground vehicles, resulting in larger pose estimation errors. To address this problem, a tightly-coupled Ackermann visual-inertial odometry (ACK-MSCKF) is proposed to fuse Ackermann error state measurements and the Stereo Multi-State Constraint Kalman Filter (S-MSCKF) with a tightly-coupled filter-based mechanism. In contrast with S-MSCKF, in which the inertial measurement unit (IMU) propagates the vehicle motion and then the propagation is corrected by stereo visual measurements, we s
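The Ackermann constraint exploited by the entry above rests on the kinematic bicycle model of a steered ground vehicle. A minimal dead-reckoning sketch of that model (generic textbook kinematics, not the paper's filter; parameter names are illustrative):

```python
import math

def ackermann_step(x, y, theta, v, delta, L, dt):
    """One dead-reckoning step of the kinematic bicycle (Ackermann) model.

    (x, y, theta) : planar pose; v : speed; delta : steering angle;
    L : wheelbase; dt : time step. SI units, angles in radians.
    """
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += v * math.tan(delta) / L * dt  # yaw rate from steering geometry
    return x, y, theta

# straight driving at 10 m/s for 0.1 s: heading unchanged, 1 m travelled
x, y, th = ackermann_step(0.0, 0.0, 0.0, 10.0, 0.0, 2.7, 0.1)
assert th == 0.0 and abs(x - 1.0) < 1e-9
```

Because this model forbids lateral slip, its pose predictions supply the extra constraints that make otherwise unobservable directions of VIO observable on ground vehicles.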
43

Kim, Joo-Hee, and In-Cheol Kim. "Robust Real-Time Visual Odometry Estimation for 3D Scene Reconstruction." KIPS Transactions on Software and Data Engineering 4, no. 4 (2015): 187–94. http://dx.doi.org/10.3745/ktsde.2015.4.4.187.

44

Meng, Xuyang, Chunxiao Fan, Yue Ming, Yuan Shen, and Hui Yu. "Un-VDNet: unsupervised network for visual odometry and depth estimation." Journal of Electronic Imaging 28, no. 06 (2019): 1. http://dx.doi.org/10.1117/1.jei.28.6.063015.

45

Zhou, Dingfu, Yuchao Dai, and Hongdong Li. "Ground-Plane-Based Absolute Scale Estimation for Monocular Visual Odometry." IEEE Transactions on Intelligent Transportation Systems 21, no. 2 (2020): 791–802. http://dx.doi.org/10.1109/tits.2019.2900330.

46

Yang, Xiaohan, Xiaojuan Li, Yong Guan, Jiadong Song, and Rui Wang. "Overfitting reduction of pose estimation for deep learning visual odometry." China Communications 17, no. 6 (2020): 196–210. http://dx.doi.org/10.23919/jcc.2020.06.016.

47

Aqel, Mohammad O. A., Mohammad H. Marhaban, M. Iqbal Saripan, and Napsiah Bt Ismail. "Estimation of image scale variations in monocular visual odometry systems." IEEJ Transactions on Electrical and Electronic Engineering 12, no. 2 (2016): 228–43. http://dx.doi.org/10.1002/tee.22370.

48

Nisar, Barza, Philipp Foehn, Davide Falanga, and Davide Scaramuzza. "VIMO: Simultaneous Visual Inertial Model-Based Odometry and Force Estimation." IEEE Robotics and Automation Letters 4, no. 3 (2019): 2785–92. http://dx.doi.org/10.1109/lra.2019.2918689.

49

Nguyen, Thien Hoang, Thien-Minh Nguyen, Muqing Cao, and Lihua Xie. "Loosely-Coupled Ultra-wideband-Aided Scale Correction for Monocular Visual Odometry." Unmanned Systems 08, no. 02 (2020): 179–90. http://dx.doi.org/10.1142/s2301385020500119.

Abstract:
In this paper, we propose a method to address the problem of scale uncertainty in monocular visual odometry (VO), which includes scale ambiguity and scale drift, using distance measurements from a single ultra-wideband (UWB) anchor. A variant of the Levenberg–Marquardt (LM) nonlinear least squares regression method is proposed to rectify unscaled position data from monocular odometry with 1D point-to-point distance measurements. As a loosely-coupled approach, our method is flexible in that each input block can be replaced with one’s preferred choices for monocular odometry/SLAM algorithm and UWB s
50

Kučić, Mario, and Marko Valčić. "Stereo Visual Odometry for Indoor Localization of Ship Model." Journal of Maritime & Transportation Science 58, no. 1 (2020): 57–75. http://dx.doi.org/10.18048/2020.58.04.

Abstract:
Typically, ships are designed for open sea navigation, and thus research on autonomous ships is mostly done for that particular area. This paper explores the possibility of using low-cost sensors for localization inside a small navigation area. The localization system is based on the technology used for developing autonomous cars. The main part of the system is visual odometry using stereo cameras, fused with Inertial Measurement Unit (IMU) data and coupled with Kalman and particle filters to achieve decimetre-level accuracy inside a basin for different surface conditions. The visual odometry uses cro
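The Kalman fusion of visual odometry with IMU data mentioned in the entry above can be illustrated with a minimal one-dimensional constant-velocity Kalman filter over noisy position fixes. This is a generic sketch of the fusion principle, not the paper's system; noise values and function names are illustrative.

```python
def kf_fuse(zs, dt=0.1, q=0.01, r=0.25):
    """Minimal 1-D constant-velocity Kalman filter over position fixes.

    zs : sequence of position measurements (e.g. stereo-VO fixes)
    q  : process noise variance, r : measurement noise variance
    Returns the filtered (position, velocity) track.
    """
    x, v = zs[0], 0.0
    pxx, pxv, pvv = 1.0, 0.0, 1.0  # covariance [[pxx, pxv], [pxv, pvv]]
    out = []
    for z in zs[1:]:
        # predict with the constant-velocity motion model
        x += v * dt
        pxx += dt * (2.0 * pxv + dt * pvv) + q
        pxv += dt * pvv
        pvv += q
        # update with the position fix
        s = pxx + r
        kx, kv = pxx / s, pxv / s
        nu = z - x
        x += kx * nu
        v += kv * nu
        pxx, pxv, pvv = (1 - kx) * pxx, (1 - kx) * pxv, pvv - kv * pxv
        out.append((x, v))
    return out

track = kf_fuse([0.0, 0.1, 0.2, 0.3, 0.4])
assert track[-1][0] > track[0][0]  # filtered position follows the motion
```

A real system would run the prediction step from IMU accelerations at a higher rate than the VO updates; the structure of predict and update stays the same.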