To view the other types of publications on this topic, follow the link: Odometry estimation.

Journal articles on the topic "Odometry estimation"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles


Consult the top 50 journal articles for your research on the topic "Odometry estimation".

Next to every source in the list of references there is an "Add to bibliography" button. Click it, and a bibliographic reference for the chosen source will be generated automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scientific publication as a PDF and read its online annotation, provided the relevant parameters are present in the metadata.

Browse journal articles from a wide range of disciplines and compile an accurate bibliography.

1

Wang, Chenggong, Gen Li, Ruiqi Wang, and Lin Li. "Wheeled Robot Visual Odometer Based on Two-dimensional Iterative Closest Point Algorithm." Journal of Physics: Conference Series 2504, no. 1 (2023): 012002. http://dx.doi.org/10.1088/1742-6596/2504/1/012002.

Abstract:
According to the two-dimensional motion characteristics of a planar-motion wheeled robot, the visual odometer was dimensionally reduced in this study. In the feature-point-matching part of the visual odometer, a contour constraint was used to filter out mismatched feature point pairs (abbreviated as FPP). This method could also filter out matched FPP whose color-image matches were correct but whose depth-image error was large. This provided higher-quality matched FPP for the subsequent interframe motion estimation. Dimension reduction was performed in the int
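The two-dimensional iterative closest point (ICP) alignment named in the title alternates nearest-neighbour matching with a closed-form rigid alignment. Below is a minimal, generic sketch of that loop (not the paper's implementation; the brute-force matching and the Kabsch solver are illustrative choices):

```python
import numpy as np

def icp_2d(src, dst, iters=20):
    """Minimal 2D point-to-point ICP sketch.
    src, dst: (N, 2) arrays; returns R (2x2), t (2,) mapping src onto dst."""
    R, t = np.eye(2), np.zeros(2)
    cur = src.copy()
    for _ in range(iters):
        # nearest-neighbour correspondences (brute force)
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        match = dst[d.argmin(axis=1)]
        # closed-form rigid alignment via SVD (Kabsch)
        mc, mm = cur.mean(0), match.mean(0)
        H = (cur - mc).T @ (match - mm)
        U, _, Vt = np.linalg.svd(H)
        Ri = Vt.T @ U.T
        if np.linalg.det(Ri) < 0:   # guard against a reflection solution
            Vt[-1] *= -1
            Ri = Vt.T @ U.T
        ti = mm - Ri @ mc
        cur = cur @ Ri.T + ti
        R, t = Ri @ R, Ri @ t + ti  # accumulate the incremental transform
    return R, t
```

With well-separated points and a small initial offset, the nearest-neighbour matches are correct and the loop converges in a few iterations.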
2

Nurmaini, Siti, and Sahat Pangidoan. "Localization of Leader-Follower Robot Using Extended Kalman Filter." Computer Engineering and Applications Journal 7, no. 2 (2018): 95–108. http://dx.doi.org/10.18495/comengapp.v7i2.253.

Abstract:
A non-holonomic leader-follower robot must be capable of finding its own position in order to navigate autonomously in its environment; this problem is known as localization. A common way to estimate the robot pose is by using an odometer. However, odometry measurements may give inaccurate results due to wheel slippage or other small noise sources. In this research, the Extended Kalman Filter (EKF) is proposed to minimize the error or inaccuracy caused by the odometry measurement. The EKF algorithm works by fusing odometry and landmark information to produce a better estimation. A bett
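The EKF cycle the abstract describes, dead-reckoning on odometry and then correcting with a landmark observation, can be sketched as follows (a generic illustration, not the authors' code; the unicycle model, range measurement, and noise matrices are assumptions):

```python
import numpy as np

def ekf_step(x, P, u, z, landmark, Q, R):
    """One EKF predict/update cycle fusing an odometry increment u = [d, dtheta]
    with a range measurement z to a known landmark. Illustrative sketch only."""
    px, py, th = x
    # --- predict: dead-reckon the pose with the odometry increment ---
    x_pred = np.array([px + u[0] * np.cos(th),
                       py + u[0] * np.sin(th),
                       th + u[1]])
    F = np.array([[1, 0, -u[0] * np.sin(th)],   # Jacobian of the motion model
                  [0, 1,  u[0] * np.cos(th)],
                  [0, 0, 1]])
    P_pred = F @ P @ F.T + Q
    # --- update: correct with the landmark range ---
    dx, dy = landmark - x_pred[:2]
    r = np.hypot(dx, dy)                        # predicted range
    H = np.array([[-dx / r, -dy / r, 0.0]])     # Jacobian of the measurement
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + (K @ np.array([z - r])).ravel()
    P_new = (np.eye(3) - K @ H) @ P_pred
    return x_new, P_new
```

The landmark update shrinks the covariance that pure odometry would otherwise let grow without bound.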
3

Li, Q., C. Wang, S. Chen, et al. "DEEP LIDAR ODOMETRY." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W13 (June 5, 2019): 1681–86. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w13-1681-2019.

Abstract:
Most existing lidar odometry estimation strategies are formulated under a standard framework that includes feature selection, and pose estimation through feature matching. In this work, we present a novel pipeline called LO-Net for lidar odometry estimation from 3D lidar scanning data using deep convolutional networks. The network is trained in an end-to-end manner and infers 6-DoF poses from the encoded sequential lidar data. Based on the newly designed mask-weighted geometric constraint loss, the network automatically learns effective feature rep
4

Eising, Ciarán, Leroy‐Francisco Pereira, Jonathan Horgan, Anbuchezhiyan Selvaraju, John McDonald, and Paul Moran. "2.5D vehicle odometry estimation." IET Intelligent Transport Systems 16, no. 3 (2021): 292–308. http://dx.doi.org/10.1049/itr2.12143.

5

Wan, Yingcai, Qiankun Zhao, Cheng Guo, Chenlong Xu, and Lijing Fang. "Multi-Sensor Fusion Self-Supervised Deep Odometry and Depth Estimation." Remote Sensing 14, no. 5 (2022): 1228. http://dx.doi.org/10.3390/rs14051228.

Abstract:
This paper presents a new deep visual-inertial odometry and depth estimation framework for improving the accuracy of depth estimation and ego-motion from image sequences and inertial measurement unit (IMU) raw data. The proposed framework predicts ego-motion and depth with absolute scale in a self-supervised manner. We first capture dense features and solve the pose by deep visual odometry (DVO), and then combine the pose estimation pipeline with deep inertial odometry (DIO) by the extended Kalman filter (EKF) method to produce the sparse depth and pose with absolute scale. We then join deep v
6

Zhao, Zixu, Yucheng Zhang, Jinglin Shi, Long Long, and Zaiwang Lu. "Robust Lidar-Inertial Odometry with Ground Condition Perception and Optimization Algorithm for UGV." Sensors 22, no. 19 (2022): 7424. http://dx.doi.org/10.3390/s22197424.

Abstract:
Unmanned ground vehicles (UGVs) are making more and more progress in many application scenarios in recent years, such as exploring unknown wild terrain, working in precision agriculture and serving in emergency rescue. Due to the complex ground conditions and changeable surroundings of these unstructured environments, it is challenging for these UGVs to obtain robust and accurate state estimations by using sensor fusion odometry without prior perception and optimization for specific scenarios. In this paper, based on an error-state Kalman filter (ESKF) fusion model, we propose a robust lidar-i
7

Chen, Baifan, Haowu Zhao, Ruyi Zhu, and Yemin Hu. "Marked-LIEO: Visual Marker-Aided LiDAR/IMU/Encoder Integrated Odometry." Sensors 22, no. 13 (2022): 4749. http://dx.doi.org/10.3390/s22134749.

Abstract:
In this paper, we propose a visual marker-aided LiDAR/IMU/encoder integrated odometry, Marked-LIEO, to achieve pose estimation of mobile robots in an indoor long corridor environment. In the first stage, we design the pre-integration model of encoder and IMU respectively to realize the pose estimation combined with the pose estimation from the second stage providing prediction for the LiDAR odometry. In the second stage, we design low-frequency visual marker odometry, which is optimized jointly with LiDAR odometry to obtain the final pose estimation. In view of the wheel slipping and LiDAR deg
8

Zhao, Zixu, Yucheng Zhang, Long Long, Zaiwang Lu, and Jinglin Shi. "Efficient and adaptive lidar–visual–inertial odometry for agricultural unmanned ground vehicle." International Journal of Advanced Robotic Systems 19, no. 2 (2022): 172988062210949. http://dx.doi.org/10.1177/17298806221094925.

Abstract:
The accuracy of agricultural unmanned ground vehicles’ localization directly affects the accuracy of their navigation. However, due to the changeable environment and fewer features in the agricultural scene, it is challenging for these unmanned ground vehicles to localize precisely in global positioning system-denied areas with a single sensor. In this article, we present an efficient and adaptive sensor-fusion odometry framework based on simultaneous localization and mapping to handle the localization problems of agricultural unmanned ground vehicles without the assistance of a global positio
9

Qiu, Haiyang, Xu Zhang, Hui Wang, et al. "A Robust and Integrated Visual Odometry Framework Exploiting the Optical Flow and Feature Point Method." Sensors 23, no. 20 (2023): 8655. http://dx.doi.org/10.3390/s23208655.

Abstract:
In this paper, we propose a robust and integrated visual odometry framework exploiting the optical flow and feature point method that achieves faster pose estimates and considerable accuracy and robustness during the odometry process. Our method utilizes optical flow tracking to accelerate the feature-point-matching process. In the odometry, two visual odometry methods are used: the global feature point method and the local feature point method. When optical flow tracking is good and enough key points are matched successfully, the local feature point method utilizes prior info
10

Martínez-García, Edgar Alonso, Joaquín Rivero-Juárez, Luz Abril Torres-Méndez, and Jorge Enrique Rodas-Osollo. "Divergent trinocular vision observers design for extended Kalman filter robot state estimation." Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering 233, no. 5 (2018): 524–47. http://dx.doi.org/10.1177/0959651818800908.

Abstract:
Here, we report the design of two deterministic observers that exploit the capabilities of a home-made divergent trinocular visual sensor to sense depth data. The three-dimensional key points that the observers can measure are triangulated for visual odometry and estimated by an extended Kalman filter. This work deals with a four-wheel-drive mobile robot with four passive suspensions. The direct and inverse kinematic solutions are deduced and used for the updating and prediction models of the extended Kalman filter as feedback for the robot’s position controller. The state-estimation visual od
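The trinocular observer above triangulates 3D key points for visual odometry; at its core, recovering depth from two views reduces to the pinhole stereo relation Z = f·B/d. A toy sketch (the focal length, baseline, and disparity values are hypothetical, not from the paper):

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Pinhole stereo depth: Z = f * B / d.
    f_px: focal length in pixels, baseline_m: camera baseline in metres,
    disparity_px: horizontal pixel offset of the matched key point."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_m / disparity_px
```

Because depth varies inversely with disparity, triangulation error grows rapidly for distant points, which is one reason such measurements are typically smoothed by an extended Kalman filter.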
11

Wu, Qin Fan, Qing Li, and Nong Cheng. "Visual Odometry and 3D Mapping in Indoor Environments." Applied Mechanics and Materials 336-338 (July 2013): 348–54. http://dx.doi.org/10.4028/www.scientific.net/amm.336-338.348.

Abstract:
This paper presents a robust state estimation and 3D environment modeling approach that enables a Micro Aerial Vehicle (MAV) to operate in challenging GPS-denied indoor environments. A fast, accurate and robust approach to visual odometry is developed based on the Microsoft Kinect. Discriminative features are extracted from RGB images and matched across consecutive frames. A robust least-squares estimator is applied to obtain a relative motion estimate. All computation is performed in real time, which provides high-frequency 6-degree-of-freedom state estimation. A detailed 3D map of an indoor environm
12

Peng, Gang, Qiang Gao, Yue Xu, Jianfeng Li, Zhang Deng, and Cong Li. "Pose Estimation Based on Bidirectional Visual–Inertial Odometry with 3D LiDAR (BV-LIO)." Remote Sensing 16, no. 16 (2024): 2970. http://dx.doi.org/10.3390/rs16162970.

Abstract:
Due to the limitations of a single sensor such as only a camera or only a LiDAR, visual SLAM detects few effective features under poor lighting or low texture, and LiDAR SLAM degrades in unstructured environments and open spaces, which reduces the accuracy of pose estimation and the quality of mapping. To solve this problem, drawing on the high efficiency of visual odometry and the high accuracy of LiDAR odometry, this paper investigates the multi-sensor fusion of bidirectional visual–inertial odometry with 3D LiDAR for pose estimation. This method can couple the
13

Jiménez, Paulo A., and Bijan Shirinzadeh. "Laser interferometry measurements based calibration and error propagation identification for pose estimation in mobile robots." Robotica 32, no. 1 (2013): 165–74. http://dx.doi.org/10.1017/s0263574713000660.

Abstract:
A widely used method for pose estimation in mobile robots is odometry. Odometry allows the robot to reconstruct its position and orientation in real time from the wheels' encoder measurements. Due to its unbounded nature, odometry calculation accumulates errors, with a quadratic increase of error variance with traversed distance. This paper develops a novel method for odometry calibration and error propagation identification for mobile robots. The proposed method uses a laser-based interferometer to measure distance precisely. Two variants of the proposed calibration method are examined:
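Odometry as described here, reconstructing pose from wheel encoder measurements, can be sketched for a differential-drive robot as follows (an illustrative model, not the paper's method; the midpoint-heading discretization is one common choice, and any encoder bias it integrates accumulates with distance exactly as the abstract warns):

```python
import math

def odometry_update(pose, d_left, d_right, wheel_base):
    """Dead-reckoning update from one encoder interval.
    pose: (x, y, theta); d_left, d_right: wheel displacements;
    wheel_base: distance between the wheels."""
    x, y, th = pose
    d = 0.5 * (d_left + d_right)            # linear displacement of the body
    dth = (d_right - d_left) / wheel_base   # heading change
    # evaluating the heading at mid-step reduces discretization error
    x += d * math.cos(th + 0.5 * dth)
    y += d * math.sin(th + 0.5 * dth)
    return (x, y, th + dth)
```

Driving straight integrates cleanly, while opposite wheel displacements rotate the robot in place; any systematic error in `wheel_base` or the wheel radii biases every step, which is what the calibration above is designed to identify.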
14

Gonzalez, Ramon, Francisco Rodriguez, Jose Luis Guzman, Cedric Pradalier, and Roland Siegwart. "Combined visual odometry and visual compass for off-road mobile robots localization." Robotica 30, no. 6 (2011): 865–78. http://dx.doi.org/10.1017/s026357471100110x.

Abstract:
In this paper, we present work related to the application of a visual odometry approach to estimate the location of mobile robots operating in off-road conditions. The visual odometry approach is based on template matching, which deals with estimating the robot displacement through a matching process between two consecutive images. Standard visual odometry has been improved using a visual compass method for orientation estimation. For this purpose, two consumer-grade monocular cameras have been employed. One camera is pointing at the ground under the robot, and the other is looking at
15

Valiente García, David, Lorenzo Fernández Rojo, Arturo Gil Aparicio, Luis Payá Castelló, and Oscar Reinoso García. "Visual Odometry through Appearance- and Feature-Based Method with Omnidirectional Images." Journal of Robotics 2012 (2012): 1–13. http://dx.doi.org/10.1155/2012/797063.

Abstract:
In the field of mobile autonomous robots, visual odometry entails the retrieval of the motion transformation between two consecutive poses of the robot solely by means of a camera sensor. Visual odometry provides essential information for trajectory estimation in problems such as Localization and SLAM (Simultaneous Localization and Mapping). In this work we present a motion estimation approach based on a single omnidirectional camera. We exploited the maximized horizontal field of view provided by this camera, which allows us to encode large scene information into the same image. The estimation of t
16

Yuan, Shuangjie, Jun Zhang, Yujia Lin, and Lu Yang. "Hybrid self-supervised monocular visual odometry system based on spatio-temporal features." Electronic Research Archive 32, no. 5 (2024): 3543–68. http://dx.doi.org/10.3934/era.2024163.

Abstract:
For the autonomous and intelligent operation of robots in unknown environments, simultaneous localization and mapping (SLAM) is essential. Since the proposal of visual odometry, its use in the mapping process has greatly advanced the development of pure visual SLAM techniques. However, the main challenges in current monocular odometry algorithms are the poor generalization of traditional methods and the low interpretability of deep learning-based methods. This paper presented a hybrid self-supervised visual monocular odometry framework that combined
17

Ghasemieh, Alireza, and Rasha Kashef. "Advanced Monocular Outdoor Pose Estimation in Autonomous Systems: Leveraging Optical Flow, Depth Estimation, and Semantic Segmentation with Dynamic Object Removal." Sensors 24, no. 24 (2024): 8040. https://doi.org/10.3390/s24248040.

Abstract:
Autonomous technologies have revolutionized transportation, military operations, and space exploration, necessitating precise localization in environments where traditional GPS-based systems are unreliable or unavailable. While widespread for outdoor localization, GPS systems face limitations in obstructed environments such as dense urban areas, forests, and indoor spaces. Moreover, GPS reliance introduces vulnerabilities to signal disruptions, which can lead to significant operational failures. Hence, developing alternative localization techniques that do not depend on external signals is ess
18

Palacín, Jordi, Elena Rubies, and Eduard Clotet. "Systematic Odometry Error Evaluation and Correction in a Human-Sized Three-Wheeled Omnidirectional Mobile Robot Using Flower-Shaped Calibration Trajectories." Applied Sciences 12, no. 5 (2022): 2606. http://dx.doi.org/10.3390/app12052606.

Abstract:
Odometry is a simple and practical method that provides a periodic real-time estimation of the relative displacement of a mobile robot based on the measurement of the angular rotational speed of its wheels. The main disadvantage of odometry is its unbounded accumulation of errors, a factor that reduces the accuracy of the estimation of the absolute position and orientation of a mobile robot. This paper proposes a general procedure to evaluate and correct the systematic odometry errors of a human-sized three-wheeled omnidirectional mobile robot designed as a versatile personal assistant tool. T
19

Jung, Changbae, and Woojin Chung. "Calibration of Kinematic Parameters for Two Wheel Differential Mobile Robots by Using Experimental Heading Errors." International Journal of Advanced Robotic Systems 8, no. 5 (2011): 68. http://dx.doi.org/10.5772/50906.

Abstract:
Odometry using incremental wheel encoder sensors provides the relative position of mobile robots. This relative position is fundamental information for pose estimation by various sensors for EKF localization, Monte Carlo localization, etc. Odometry is also used as the only information source for localization in environmental conditions where absolute measurement systems are not available. However, odometry suffers from the accumulation of kinematic modeling errors of the wheel as the robot's travel distance increases. Therefore, systematic odometry errors need to be calibrated. Principal systematic error
20

Yan, Yaxuan, Baohua Zhang, Jun Zhou, Yibo Zhang, and Xiao’ang Liu. "Real-Time Localization and Mapping Utilizing Multi-Sensor Fusion and Visual–IMU–Wheel Odometry for Agricultural Robots in Unstructured, Dynamic and GPS-Denied Greenhouse Environments." Agronomy 12, no. 8 (2022): 1740. http://dx.doi.org/10.3390/agronomy12081740.

Abstract:
Autonomous navigation in greenhouses requires agricultural robots to localize and generate a globally consistent map of surroundings in real-time. However, accurate and robust localization and mapping are still challenging for agricultural robots due to the unstructured, dynamic and GPS-denied environmental conditions. In this study, a state-of-the-art real-time localization and mapping system was presented to achieve precise pose estimation and dense three-dimensional (3D) point cloud mapping in complex greenhouses by utilizing multi-sensor fusion and Visual–IMU–Wheel odometry. In this method
21

Thapa, Vikas, Abhishek Sharma, Beena Gairola, Amit K. Mondal, Vindhya Devalla, and Ravi K. Patel. "A Review on Visual Odometry Techniques for Mobile Robots: Types and Challenges." Recent Advances in Electrical & Electronic Engineering (Formerly Recent Patents on Electrical & Electronic Engineering) 13, no. 5 (2020): 618–31. http://dx.doi.org/10.2174/2352096512666191004142546.

Abstract:
For autonomous navigation, tracking and obstacle avoidance, a mobile robot must have knowledge of its position and localization over time. Among the available techniques for odometry, vision-based odometry is a robust and economical technique. In addition, a combination of position estimation from odometry with interpretations of the surroundings using a mobile camera is effective. This paper presents an overview of current visual odometry approaches, applications, and challenges in mobile robots. The study offers a comparative analysis of different available techniques and algorithms associ
22

Zhang, Xiao Ya, Abdul Hadi Abd Rahman, and Faizan Qamar. "Semantic visual simultaneous localization and mapping (SLAM) using deep learning for dynamic scenes." PeerJ Computer Science 9 (October 10, 2023): e1628. http://dx.doi.org/10.7717/peerj-cs.1628.

Abstract:
Simultaneous localization and mapping (SLAM) is a fundamental problem in robotics and computer vision. It involves the task of a robot or an autonomous system navigating an unknown environment, simultaneously creating a map of the surroundings, and accurately estimating its position within that map. While significant progress has been made in SLAM over the years, challenges still need to be addressed. One prominent issue is robustness and accuracy in dynamic environments, which can cause uncertainties and errors in the estimation process. Traditional methods using temporal information to diffe
23

Lee, Kyuman, and Eric N. Johnson. "Latency Compensated Visual-Inertial Odometry for Agile Autonomous Flight." Sensors 20, no. 8 (2020): 2209. http://dx.doi.org/10.3390/s20082209.

Abstract:
In visual-inertial odometry (VIO), inertial measurement unit (IMU) dead reckoning acts as the dynamic model for flight vehicles while camera vision extracts information about the surrounding environment and determines features or points of interest. With these sensors, the most widely used algorithm for estimating vehicle and feature states for VIO is an extended Kalman filter (EKF). The design of the standard EKF does not inherently allow for time offsets between the timestamps of the IMU and vision data. In fact, sensor-related delays that arise in various realistic conditions are at least p
24

Zhang, Jiaxin, Wei Sui, Qian Zhang, Tao Chen, and Cong Yang. "Towards Accurate Ground Plane Normal Estimation from Ego-Motion." Sensors 22, no. 23 (2022): 9375. http://dx.doi.org/10.3390/s22239375.

Abstract:
In this paper, we introduce a novel approach for ground plane normal estimation of wheeled vehicles. In practice, the ground plane is dynamically changed due to braking and unstable road surface. As a result, the vehicle pose, especially the pitch angle, is oscillating from subtle to obvious. Thus, estimating ground plane normal is meaningful since it can be encoded to improve the robustness of various autonomous driving tasks (e.g., 3D object detection, road surface reconstruction, and trajectory planning). Our proposed method only uses odometry as input and estimates accurate ground plane no
25

Ye, C., Z. Kang, and X. Guo. "A CAMERA-LIDAR CALIBRATION METHOD ASSISTED BY INDOOR SPATIAL STRUCTURE." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-1/W2-2023 (December 13, 2023): 693–98. http://dx.doi.org/10.5194/isprs-archives-xlviii-1-w2-2023-693-2023.

Abstract:
The calibration of the camera and LiDAR is one of the foundations for the construction of a multi-sensor fusion mapping system. Planar features of walls and grounds in indoor environments provide effective constraints for multi-sensor calibration. In this paper, we propose a new camera-LiDAR calibration method with the constraint of indoor spatial structure. Using the image and point cloud data collected by the sensors, visual odometry and LiDAR odometry can be constructed to calculate the transformation between sensors. Based on visual odometry and LiDAR odometry, structural parameters in
26

Hao, Yun, Jiacheng Liu, Yuzhen Liu, Xinyuan Liu, Ziyang Meng, and Fei Xing. "Global Visual–Inertial Localization for Autonomous Vehicles with Pre-Built Map." Sensors 23, no. 9 (2023): 4510. http://dx.doi.org/10.3390/s23094510.

Abstract:
Accurate, robust and drift-free global pose estimation is a fundamental problem for autonomous vehicles. In this work, we propose a global drift-free map-based localization method for estimating the global poses of autonomous vehicles that integrates visual–inertial odometry and global localization with respect to a pre-built map. In contrast to previous work on visual–inertial localization, the global pre-built map provides global information to eliminate drift and assists in obtaining the global pose. Additionally, in order to ensure the local odometry frame and the global map frame can be a
27

Xu, Wenhao, Jianmin Yang, Jinghang Mao, Haining Lu, Changyu Lu, and Xinran Liu. "Direct Forward-Looking Sonar Odometry: A Two-Stage Odometry for Underwater Robot Localization." Remote Sensing 17, no. 13 (2025): 2166. https://doi.org/10.3390/rs17132166.

Abstract:
Underwater robots require fast and accurate localization results during challenging near-bottom operations. However, commonly used methods such as acoustic baseline localization, dead reckoning, and sensor fusion have limited accuracy. The use of forward-looking sonar (FLS) images to observe the seabed environment for pose estimation has gained significant traction in recent years. This paper proposes a lightweight front-end FLS odometry to provide consistent and accurate localization for underwater robots. The proposed direct FLS odometry (DFLSO) includes several key innovations that realize
28

An, Lifeng, Xinyu Zhang, Hongbo Gao, and Yuchao Liu. "Semantic segmentation–aided visual odometry for urban autonomous driving." International Journal of Advanced Robotic Systems 14, no. 5 (2017): 172988141773566. http://dx.doi.org/10.1177/1729881417735667.

Abstract:
Visual odometry plays an important role in urban autonomous driving cars. Feature-based visual odometry methods sample the candidates randomly from all available feature points, while alignment-based visual odometry methods take all pixels into account. These methods hold an assumption that quantitative majority of candidate visual cues could represent the truth of motions. But in real urban traffic scenes, this assumption could be broken by lots of dynamic traffic participants. Big trucks or buses may occupy the main image parts of a front-view monocular camera and result in wrong visual odom
29

Song, Haili, Zengxu Zhao, Bin Ren, Zhanpu Xue, Junliang Li, and Hao Zhang. "Study on Optimization Method of Visual Odometry Based on Feature Matching." Mathematical Problems in Engineering 2022 (November 17, 2022): 1–10. http://dx.doi.org/10.1155/2022/6785066.

Abstract:
The mismatching of image features affects the calculation of the fundamental matrix and in turn leads to poor estimation accuracy of SLAM visual odometry. Aiming at the above problems, a visual odometry optimization method based on feature matching is proposed. Firstly, the initial matching set is roughly filtered by the minimum-distance-threshold method, and then the relative transformation relationship between images is calculated by the RANSAC algorithm. If a match conforms to the transformation relationship, it is an interior point. The iteration result with the most interior points is the correct ma
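The RANSAC loop the abstract outlines, hypothesizing a model from a minimal sample, counting interior points, and keeping the best iteration, is shown below on a toy line-fitting problem rather than the paper's fundamental-matrix estimation (all data, the tolerance, and the iteration count are assumptions):

```python
import random

def ransac_line(points, iters=200, tol=0.1, seed=0):
    """Generic RANSAC sketch: fit y = a*x + b, keeping the model with the
    most inliers (the "interior points" in the abstract's terminology)."""
    rng = random.Random(seed)
    best, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)  # minimal sample: 2 points
        if x1 == x2:
            continue                                # degenerate hypothesis
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) <= tol]
        if len(inliers) > len(best_inliers):        # keep the best consensus
            best, best_inliers = (a, b), inliers
    return best, best_inliers
```

For the fundamental matrix the minimal sample is larger (e.g. seven or eight correspondences) and the residual is a point-to-epipolar-line distance, but the consensus logic is identical.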
30

Gao, Wenxiang, Guizhi Yang, Yuzhang Wang, Jiaxin Ke, Xungao Zhong, and Lihua Chen. "Robust visual odometry based on image enhancement." Journal of Physics: Conference Series 2402, no. 1 (2022): 012010. http://dx.doi.org/10.1088/1742-6596/2402/1/012010.

Abstract:
With the rise of augmented reality and autonomous driving, visual SLAM (simultaneous localization and mapping) has become the focus of research again. Visual odometry is an important part of visual SLAM. Light that is too dark or too strong will reduce image quality, resulting in a large deviation in the visual odometry trajectory. Therefore, this paper proposes a visual odometry with image enhancement. The lighting state of the image is identified by estimating the brightness value of the input image. Gamma correction based on truncated cumulative distribution function modulation is used to e
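Gamma correction, as used in the abstract, brightens or darkens an image by applying a power law to normalized intensities; the paper's truncated-CDF modulation of the gamma value is not reproduced here. A minimal sketch with assumed 8-bit values and a mean-intensity brightness estimate:

```python
def estimate_brightness(pixels):
    """Mean intensity as a crude brightness estimate for an 8-bit image."""
    return sum(pixels) / len(pixels)

def gamma_correct(pixels, gamma):
    """Power-law mapping on normalized 8-bit intensities:
    out = 255 * (in / 255) ** gamma. gamma < 1 brightens, gamma > 1 darkens."""
    return [round(255 * (p / 255) ** gamma) for p in pixels]
```

A pipeline like the paper's would pick gamma below 1 when `estimate_brightness` reports a dark frame and above 1 for an overexposed one, then run feature extraction on the corrected image.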
31

SMIRNOV, A. O. "Camera Pose Estimation Using a 3D Gaussian Splatting Radiance Field." Kibernetika i vyčislitelʹnaâ tehnika 216, no. 2(216) (2024): 15–25. http://dx.doi.org/10.15407/kvt216.02.015.

Abstract:
Introduction. Accurate camera pose estimation is crucial for many applications ranging from robotics to virtual and augmented reality. The process of determining an agent's pose from a set of observations is called odometry. This work focuses on visual odometry, which utilizes only images from a camera as the input data. The purpose of the paper is to demonstrate an approach for small-scale camera pose estimation using 3D Gaussians as the environment representation. Methods. Given the rise of neural volumetric representations for environment reconstruction, this work relies on Gaussian Splatting
32

Aguiar, André, Filipe Santos, Armando Jorge Sousa, and Luís Santos. "FAST-FUSION: An Improved Accuracy Omnidirectional Visual Odometry System with Sensor Fusion and GPU Optimization for Embedded Low Cost Hardware." Applied Sciences 9, no. 24 (2019): 5516. http://dx.doi.org/10.3390/app9245516.

Abstract:
The main task while developing a mobile robot is to achieve accurate and robust navigation in a given environment. To achieve such a goal, the ability of the robot to localize itself is crucial. In outdoor, namely agricultural environments, this task becomes a real challenge because odometry is not always usable and global navigation satellite systems (GNSS) signals are blocked or significantly degraded. To answer this challenge, this work presents a solution for outdoor localization based on an omnidirectional visual odometry technique fused with a gyroscope and a low cost planar light detect
33

Xiao, Zhiyao, and Guobao Zhang. "An Attention-Based Odometry Framework for Multisensory Unmanned Ground Vehicles (UGVs)." Drones 7, no. 12 (2023): 699. http://dx.doi.org/10.3390/drones7120699.

Abstract:
Recently, deep learning methods and multisensory fusion have been applied to address odometry challenges in unmanned ground vehicles (UGVs). In this paper, we propose an end-to-end visual-lidar-inertial odometry framework to enhance the accuracy of pose estimation. Grayscale images, 3D point clouds, and inertial data are used as inputs to overcome the limitations of a single sensor. Convolutional neural network (CNN) and recurrent neural network (RNN) are employed as encoders for different sensor modalities. In contrast to previous multisensory odometry methods, our framework introduces a nove
34

Uzun, S. S., and H. E. Soken. "Visual aid for unmanned aircraft navigation in unknown environments." Journal of Physics: Conference Series 2526, no. 1 (2023): 012090. http://dx.doi.org/10.1088/1742-6596/2526/1/012090.

Annotation:
Abstract This study presents a visual-aided inertial navigation technique that can be used for unmanned aircraft in unknown environments. An angular and linear velocity estimation algorithm, which is based on the solution of the Wahba’s problem, is developed using sequential images for visual odometry. It is possible that the visual sensors do not receive continuous measurements throughout the mission, for example, when a sufficient number of features cannot be detected by the camera. Considering that a closed loop Extended Kalman Filter algorithm is designed for integration of visual odometry
35

Zhi, Henghui, Chenyang Yin, Huibin Li, and Shanmin Pang. "An Unsupervised Monocular Visual Odometry Based on Multi-Scale Modeling." Sensors 22, no. 14 (2022): 5193. http://dx.doi.org/10.3390/s22145193.

Annotation:
Unsupervised deep learning methods have shown great success in jointly estimating camera pose and depth from monocular videos. However, previous methods mostly ignore the importance of multi-scale information, which is crucial for pose estimation and depth estimation, especially when the motion pattern is changed. This article proposes an unsupervised framework for monocular visual odometry (VO) that can model multi-scale information. The proposed method utilizes densely linked atrous convolutions to increase the receptive field size without losing image information, and adopts a non-local sel
36

Conduraru, Ionel, Ioan Doroftei, Dorin Luca, and Alina Conduraru Slatineanu. "Odometry Aspects of an Omni-Directional Mobile Robot with Modified Mecanum Wheels." Applied Mechanics and Materials 658 (October 2014): 587–92. http://dx.doi.org/10.4028/www.scientific.net/amm.658.587.

Annotation:
Mobile robots have large-scale use in industry, military operations, exploration and other applications where human intervention is risky. When a mobile robot has to move in small and narrow spaces and to avoid obstacles, mobility is one of its main issues. An omni-directional drive mechanism is very attractive because it guarantees very good mobility in such cases. Also, the accurate estimation of the position is a key component for the successful operation of most autonomous mobile robots. In this work, some odometry aspects of an omni-directional robot are presented and a simple odo
37

Salameh, Mohammed, Azizi Abdullah, and Shahnorbanun Sahran. "Multiple Descriptors for Visual Odometry Trajectory Estimation." International Journal on Advanced Science, Engineering and Information Technology 8, no. 4-2 (2018): 1423. http://dx.doi.org/10.18517/ijaseit.8.4-2.6834.

38

Costante, Gabriele, and Michele Mancini. "Uncertainty Estimation for Data-Driven Visual Odometry." IEEE Transactions on Robotics 36, no. 6 (2020): 1738–57. http://dx.doi.org/10.1109/tro.2020.3001674.

39

Ramezani, Milad, Kourosh Khoshelham, and Clive Fraser. "Pose estimation by Omnidirectional Visual-Inertial Odometry." Robotics and Autonomous Systems 105 (July 2018): 26–37. http://dx.doi.org/10.1016/j.robot.2018.03.007.

40

Teixeira, Bernardo, Hugo Silva, Anibal Matos, and Eduardo Silva. "Deep Learning for Underwater Visual Odometry Estimation." IEEE Access 8 (2020): 44687–701. http://dx.doi.org/10.1109/access.2020.2978406.

41

Mar-Castro, Enrique, Luis Mario Aparicio-Lastiri, Omar Vicente Pérez-Arista, Rafael Stanley Núñez-Cruz, and Elba Dolores Antonio-Yañez. "Comparisson between generalized geometric triangulation and odometry." Pädi Boletín Científico de Ciencias Básicas e Ingenierías del ICBI 12, Especial2 (2024): 28–33. http://dx.doi.org/10.29057/icbi.v12iespecial2.12240.

Annotation:
The localization in mobile robotics is essential for carrying out autonomous tasks. For this reason, different algorithms have been developed to estimate the robot's pose, either relatively or absolutely. One of the best known is Wheel-based Odometry, which is easy to implement, but its error tends to grow over time, producing an unreliable estimation. In contrast, absolute localization algorithms such as Generalized Geometric Triangulation (GGT) offer higher accuracy, although their implementation may require more advanced measurement systems, and pose estimation can be slow. Thi
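The unbounded growth of wheel-odometry error described above can be seen in a minimal differential-drive dead-reckoning sketch. This is illustrative only; the function name, motion model, and all numbers are invented for the example and are not taken from the cited paper:

```python
import math

def odometry_step(pose, d_left, d_right, wheel_base):
    """One dead-reckoning update for a differential-drive robot.

    pose is (x, y, theta); d_left and d_right are wheel travel
    increments in metres. Illustrative only, not the paper's model.
    """
    x, y, theta = pose
    d_center = (d_left + d_right) / 2.0        # forward travel
    d_theta = (d_right - d_left) / wheel_base  # heading change
    # Midpoint integration of the planar motion model.
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta += d_theta
    return (x, y, theta)

# A constant 1 cm per-step mismatch between the wheels: the heading
# error, and with it the position error, accumulates without bound.
pose = (0.0, 0.0, 0.0)
for _ in range(10):
    pose = odometry_step(pose, 0.10, 0.11, 0.30)
print(pose)  # theta has drifted by roughly 10 * 0.01 / 0.30 rad
```

An absolute method such as GGT has no such accumulation, which is exactly the trade-off the comparison above examines.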
42

Yang, Yandi, and Naser El-Sheimy. "M-GCLO: Multiple Ground Constrained LiDAR Odometry." ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences X-1-2024 (May 9, 2024): 283–88. http://dx.doi.org/10.5194/isprs-annals-x-1-2024-283-2024.

Annotation:
Abstract. Accurate LiDAR odometry results contribute directly to high-quality point cloud maps. However, traditional LiDAR odometry methods drift easily upward, leading to inaccuracies and inconsistencies in the point cloud maps. Considering abundant and reliable ground points in the Mobile Mapping System (MMS), ground points can be extracted, and constraints can be built to eliminate pose drifts. However, existing LiDAR-based odometry methods either do not use ground point cloud constraints or consider the ground plane as an infinite plane (i.e., single ground constraint), making pose estimati
43

Fazekas, Máté, Péter Gáspár, and Balázs Németh. "Calibration and Improvement of an Odometry Model with Dynamic Wheel and Lateral Dynamics Integration." Sensors 21, no. 2 (2021): 337. http://dx.doi.org/10.3390/s21020337.

Annotation:
Localization is a key part of an autonomous system, such as a self-driving car. The main sensor for the task is the GNSS; however, its limitations can be eliminated only by integrating other methods, for example wheel odometry, which requires a well-calibrated model. This paper proposes a novel wheel odometry model and its calibration. The parameters of the nonlinear dynamic system are estimated with Gauss–Newton regression. Since only automotive-grade sensors are applied to reach a cost-effective system, the measurement uncertainty highly corrupts the estimation accuracy. The problem is handl
44

Esfandiari, Hooman, Derek Lichti, and Carolyn Anglin. "Single-camera visual odometry to track a surgical X-ray C-arm base." Proceedings of the Institution of Mechanical Engineers, Part H: Journal of Engineering in Medicine 231, no. 12 (2017): 1140–51. http://dx.doi.org/10.1177/0954411917735556.

Annotation:
This study provides a framework for a single-camera odometry system for localizing a surgical C-arm base. An application-specific monocular visual odometry system (a downward-looking consumer-grade camera rigidly attached to the C-arm base) is proposed in this research. The cumulative dead-reckoning estimation of the base is extracted based on frame-to-frame homography estimation. Optical-flow results are utilized to feed the odometry. Online positional and orientation parameters are then reported. Positional accuracy of better than 2% (of the total traveled distance) for most of the cases and
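The cumulative dead-reckoning described above amounts to composing frame-to-frame motion estimates into one global base pose. The sketch below uses planar rigid transforms as a simplification of the paper's homography estimation; the function name and all numbers are illustrative assumptions:

```python
import numpy as np

def rigid2d(dx, dy, dtheta):
    """3x3 homogeneous matrix for a small planar rigid motion."""
    c, s = np.cos(dtheta), np.sin(dtheta)
    return np.array([[c, -s, dx],
                     [s,  c, dy],
                     [0.0, 0.0, 1.0]])

# Dead reckoning: chain per-frame estimates into a global pose.
# Here every frame reports the same small motion; in practice each
# increment would come from optical flow / homography decomposition.
T = np.eye(3)
for _ in range(100):
    T = T @ rigid2d(0.05, 0.0, 0.01)

x, y = T[0, 2], T[1, 2]
heading = np.arctan2(T[1, 0], T[0, 0])  # about 100 * 0.01 = 1.0 rad
```

Because every increment's error is multiplied into the chain, it is natural that the reported positional accuracy is stated relative to the total travelled distance.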
45

Yoon, Sung-Joo, and Taejung Kim. "Development of Stereo Visual Odometry Based on Photogrammetric Feature Optimization." Remote Sensing 11, no. 1 (2019): 67. http://dx.doi.org/10.3390/rs11010067.

Annotation:
One of the important image processing technologies is visual odometry (VO) technology. VO estimates platform motion through a sequence of images. VO is of interest in the virtual reality (VR) industry as well as the automobile industry because the construction cost is low. In this study, we developed stereo visual odometry (SVO) based on photogrammetric geometric interpretation. The proposed method performed feature optimization and pose estimation through photogrammetric bundle adjustment. After the corresponding point extraction step, the feature optimization was carried out with photogramme
46

Zhao, Leyang, Yanguang Yang, Ding Ma, Xing Lin, and Wang Wang. "PLL-VO: An Efficient and Robust Visual Odometry Integrating Point-Line Features and Neural Networks." ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences X-G-2025 (July 14, 2025): 1045–52. https://doi.org/10.5194/isprs-annals-x-g-2025-1045-2025.

Annotation:
Abstract. Visual odometry is crucial for the navigation and planning of autonomous robots, but low-light conditions, dramatic lighting changes, and low-texture scenes pose significant challenges to odometry estimation. This paper proposes PLL-VO, which integrates point-line features and deep learning. To overcome the impact of complex lighting conditions, a self-supervised learning method for interest point detection and a line detection algorithm that combines line optical flow tracking with cross-constraints is presented. After selecting keyframes based on point feature counts and line featu
47

Fazekas, Máté, Péter Gáspár, and Balázs Németh. "Velocity Estimation via Wheel Circumference Identification." Periodica Polytechnica Transportation Engineering 49, no. 3 (2021): 250–60. http://dx.doi.org/10.3311/pptr.18623.

Annotation:
The article presents a velocity estimation algorithm through the wheel encoder-based odometry and wheel circumference identification. The motivation of the paper is that a proper model can improve the motion estimation in poor sensor performance cases. For example, when the GNSS signals are unavailable, or when the vision-based methods are incorrect due to the insufficient number of features, furthermore, when the IMU-based method fails due to the lack of frequent accelerations. In these situations, the wheel encoders can be an appropriate choice for state estimation. However, this type of est
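The encoder-based velocity estimate described above reduces to a simple relation between tick count, wheel circumference, and sample time; identifying the circumference accurately is the paper's contribution. A minimal sketch in which the function name and all parameter values are invented for illustration:

```python
def wheel_velocity(ticks, ticks_per_rev, circumference_m, dt_s):
    """Velocity from incremental encoder ticks over one sample period.

    circumference_m would come from an identification step such as the
    one the paper proposes; here it is just an assumed constant.
    """
    revolutions = ticks / ticks_per_rev
    distance_m = revolutions * circumference_m
    return distance_m / dt_s

# Half a revolution of a wheel with 1.25 m circumference in 0.1 s:
v = wheel_velocity(ticks=512, ticks_per_rev=1024, circumference_m=1.25, dt_s=0.1)
print(v)  # 6.25 m/s
```

Note that a 1 % error in the assumed circumference maps directly to a 1 % velocity error, and to a growing error in integrated distance, which is what motivates identifying the circumference online.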
48

Villaseñor-Aguilar, Marcos J., José E. Peralta-López, David Lázaro-Mata, et al. "Fuzzy Fusion of Stereo Vision, Odometer, and GPS for Tracking Land Vehicles." Mathematics 10, no. 12 (2022): 2052. http://dx.doi.org/10.3390/math10122052.

Annotation:
The incorporation of high precision vehicle positioning systems has been demanded by the autonomous electric vehicle (AEV) industry. For this reason, research on visual odometry (VO) and Artificial Intelligence (AI) to reduce positioning errors automatically has become essential in this field. In this work, a new method to reduce the error in the absolute location of AEV using fuzzy logic (FL) is presented. The cooperative data fusion of GPS, odometer, and stereo camera signals is then performed to improve the estimation of AEV localization. Although the most important challenge of this work f
49

Luo, Lishu, Fulun Peng, and Longhui Dong. "Improved Multi-Sensor Fusion Dynamic Odometry Based on Neural Networks." Sensors 24, no. 19 (2024): 6193. http://dx.doi.org/10.3390/s24196193.

Annotation:
High-precision simultaneous localization and mapping (SLAM) in dynamic real-world environments plays a crucial role in autonomous robot navigation, self-driving cars, and drone control. To address this dynamic localization issue, in this paper, a dynamic odometry method is proposed based on FAST-LIVO, a fast LiDAR (light detection and ranging)–inertial–visual odometry system, integrating neural networks with laser, camera, and inertial measurement unit modalities. The method first constructs visual–inertial and LiDAR–inertial odometry subsystems. Then, a lightweight neural network is used to r
50

Parra, I., M. A. Sotelo, D. F. Llorca, and M. Ocaña. "Robust visual odometry for vehicle localization in urban environments." Robotica 28, no. 3 (2009): 441–52. http://dx.doi.org/10.1017/s026357470900575x.

Annotation:
SUMMARY: This paper describes a new approach for estimating the vehicle motion trajectory in complex urban environments by means of visual odometry. A new strategy for robust feature extraction and data post-processing is developed and tested on-road. Images from scale-invariant feature transform (SIFT) features are used in order to cope with the complexity of urban environments. The obtained results are discussed and compared to previous works. In the prototype system, the ego-motion of the vehicle is computed using a stereo-vision system mounted next to the rear view mirror of the car. Feature