
Journal articles on the topic 'Odometri'

Consult the top 50 journal articles for your research on the topic 'Odometri.'

You can also download the full text of each publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Rahman, Abdul, Mohammad Zaenal Arifin, Gesit Pratiknyo, and Bagus Irawan. "DESIGN OF MECHANISM AND MOTION SYSTEM ON TANK PROTOTYPE USING ODOMETRY." JOURNAL ASRO 11, no. 03 (August 31, 2020): 10. http://dx.doi.org/10.37875/asro.v11i03.303.

Full text
Abstract:
At present, Indonesia's defense forces are not yet on par with those of European countries; one measure of this is that the technology of Indonesia's defense equipment has not been able to keep up with European defense equipment. The use of unmanned armored tank vehicles is one solution for empowering mobile combat equipment as a means of national defense. The odometry technique applied in this study uses encoder sensors mounted on the right and left wheels of a prototype tank. Wheel movement produces a number of rotations that can be counted with the encoder sensors. The authors use odometry to estimate the distance to a destination within a relatively small measurement area; for wider measurement areas this study uses GPS. A further goal is to obtain accurate and precise distance measurements with respect to the destination location. The result of this study is the level of accuracy, in centimetres, of the measured distance travelled, obtained from testing and implementation of the odometry system. Odometry is applied to compute the change in distance due to the displacement of the prototype during the autopilot motion process, so that the position can be moved accurately towards the waypoint. The measurement error rate obtained is less than 1%. It is expected that this system can be used as the motion mechanism of an alutsista (defense equipment) tank. Keywords: Odometri, Encoder, Waypoint, Autopilot
APA, Harvard, Vancouver, ISO, and other styles
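
The entry above describes estimating travelled distance by counting encoder rotations on the left and right wheels. As a minimal illustration of that idea (not the authors' implementation), the sketch below converts tick counts to a distance; the wheel radius and ticks-per-revolution values are assumptions chosen for the example.

```python
import math

# Illustrative parameters (assumptions, not taken from the paper)
WHEEL_RADIUS_CM = 5.0        # wheel radius in centimetres
TICKS_PER_REV = 360          # encoder ticks per full wheel revolution

def distance_from_ticks(left_ticks: int, right_ticks: int) -> float:
    """Estimate distance travelled (cm) from left/right encoder tick counts."""
    circumference = 2.0 * math.pi * WHEEL_RADIUS_CM
    left_dist = left_ticks / TICKS_PER_REV * circumference
    right_dist = right_ticks / TICKS_PER_REV * circumference
    # For straight-line motion the travelled distance is the mean of both wheels.
    return 0.5 * (left_dist + right_dist)

if __name__ == "__main__":
    # roughly five wheel revolutions on each side
    print(f"{distance_from_ticks(1800, 1810):.1f} cm")
```
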
2

Asrofi, Anan, Achmad Komarudin, and Agus Pracoyo. "Navigasi Robot Mobil 3wd Omni-wheeled dengan Metode Odometri [Navigation of a 3WD Omni-Wheeled Mobile Robot Using the Odometry Method]." Jurnal Elektronika dan Otomasi Industri 1, no. 1 (March 5, 2020): 44. http://dx.doi.org/10.33795/elkolind.v1i1.33.

Full text
Abstract:
This paper presents a motion model for a mobile robot. The challenge is how the robot can navigate and know its position while moving at a given angle in order to reach a target point. The robot uses a differential drive system with rotary encoder sensors whose readings are processed with the odometry method to produce coordinate points. The robot moves from the initial position (0,0) toward a target point (x', y'); to reach the target, the robot is controlled with the odometry method combined with a PID controller. The results obtained by setting a target point at a speed of 50 (PWM) show that the robot can follow the generated path with a maximum deviation of 8 cm on the negative x-axis to 7 cm on the positive x-axis, and 25 cm on the negative y-axis to 16 cm on the positive y-axis.
APA, Harvard, Vancouver, ISO, and other styles
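
The abstract describes turning rotary-encoder readings into (x, y) coordinates and steering toward a target point with PID. The sketch below shows a common differential-drive form of that odometry update plus the proportional term of such a steering loop; the wheelbase and gain are illustrative assumptions, and the paper's 3WD omni-wheel platform would use its own kinematic model.

```python
import math

WHEEL_BASE = 0.20   # m, illustrative distance between the drive wheels (assumption)
KP = 2.0            # illustrative proportional steering gain (assumption)

def update_pose(x, y, theta, d_left, d_right):
    """Dead-reckon a differential-drive pose from per-wheel travelled distances."""
    d = 0.5 * (d_left + d_right)
    dth = (d_right - d_left) / WHEEL_BASE
    x += d * math.cos(theta + 0.5 * dth)
    y += d * math.sin(theta + 0.5 * dth)
    return x, y, theta + dth

def heading_error(x, y, theta, target):
    """Signed angle the robot still has to turn to face the target point."""
    desired = math.atan2(target[1] - y, target[0] - x)
    return math.atan2(math.sin(desired - theta), math.cos(desired - theta))

def steering_command(x, y, theta, target):
    """Proportional term of a PID loop driving the heading error to zero."""
    return KP * heading_error(x, y, theta, target)
```
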
3

Srinivasan, M., S. Zhang, and N. Bidwell. "Visually mediated odometry in honeybees." Journal of Experimental Biology 200, no. 19 (October 1, 1997): 2513–22. http://dx.doi.org/10.1242/jeb.200.19.2513.

Full text
Abstract:
The ability of honeybees to gauge the distances of short flights was investigated under controlled laboratory conditions where a variety of potential odometric cues such as flight duration, energy consumption, image motion, airspeed, inertial navigation and landmarks were manipulated. Our findings indicate that honeybees can indeed measure short distances travelled and that they do so solely by analysis of image motion. Visual odometry seems to rely primarily on the motion that is sensed by the lateral regions of the visual field. Computation of distance flown is re-commenced whenever a prominent landmark is encountered en route. 'Re-setting' the odometer (or starting a new one) at each landmark facilitates accurate long-range navigation by preventing excessive accumulation of odometric errors. Distance appears to be learnt on the way to the food source and not on the way back.
APA, Harvard, Vancouver, ISO, and other styles
4

Yang, Jingdong, Jinghui Yang, and Zesu Cai. "An efficient approach to pose tracking based on odometric error modelling for mobile robots." Robotica 33, no. 6 (April 1, 2014): 1231–49. http://dx.doi.org/10.1017/s0263574714000654.

Full text
Abstract:
SUMMARY: Odometric error modelling for mobile robots is the basis of pose tracking. The odometric accumulated error grows without bound and degrades localisation precision after long-range movement, and it often cannot be compensated for in real time. Therefore, an efficient approach to odometric error modelling is proposed for mobile robots with different drive types. The method rests on the hypothesis that the motion path approximates a circular arc. Approximate functional expressions between the odometric control input and the non-systematic as well as the systematic error are derived from the odometric error propagation law. Furthermore, an efficient pose-tracking algorithm is proposed for mobile robots, which is able to compensate for the non-systematic and systematic errors in real time. Experiments show that the odometric error modelling reduces the accumulated error of odometry efficiently and improves the localisation process significantly during autonomous navigation.
APA, Harvard, Vancouver, ISO, and other styles
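
The entry above concerns modelling how odometric error propagates. A generic first-order propagation step for a planar pose is sketched below (Jacobians of a midpoint motion model); it is not the paper's circular-arc error model, and the noise parameters are placeholders.

```python
import numpy as np

def propagate_covariance(theta, P, d, dtheta, sigma_d, sigma_theta):
    """One first-order propagation step for the covariance of a pose (x, y, theta).

    theta: current heading estimate; P: 3x3 pose covariance;
    d, dtheta: odometric arc length and heading change for this step;
    sigma_d, sigma_theta: assumed 1-sigma noise on d and dtheta.
    """
    a = theta + 0.5 * dtheta
    # Jacobian of the motion model with respect to the previous pose
    F = np.array([[1.0, 0.0, -d * np.sin(a)],
                  [0.0, 1.0,  d * np.cos(a)],
                  [0.0, 0.0,  1.0]])
    # Jacobian with respect to the control input (d, dtheta)
    G = np.array([[np.cos(a), -0.5 * d * np.sin(a)],
                  [np.sin(a),  0.5 * d * np.cos(a)],
                  [0.0,        1.0]])
    Q = np.diag([sigma_d ** 2, sigma_theta ** 2])
    return F @ P @ F.T + G @ Q @ G.T
```
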
5

Nevalainen, Paavo, Qingqing Li, Timo Melkas, Kirsi Riekki, Tomi Westerlund, and Jukka Heikkonen. "Navigation and Mapping in Forest Environment Using Sparse Point Clouds." Remote Sensing 12, no. 24 (December 14, 2020): 4088. http://dx.doi.org/10.3390/rs12244088.

Full text
Abstract:
Odometry during forest operations is demanding, involving limited field of vision (FOV), back-and-forth work cycle movements, and occasional close obstacles, which create problems for state-of-the-art systems. We propose a two-phase on-board process, where tree stem registration produces a sparse point cloud (PC) which is then used for simultaneous location and mapping (SLAM). A field test was carried out using a harvester with a laser scanner and a global navigation satellite system (GNSS) performing forest thinning over a 520 m strip route. Two SLAM methods are used: The proposed sparse SLAM (sSLAM) and a standard method, LeGO-LOAM (LLOAM). A generic SLAM post-processing method is presented, which improves the odometric accuracy with a small additional processing cost. The sSLAM method uses only tree stem centers, reducing the allocated memory to approximately 1% of the total PC size. Odometry and mapping comparisons between sSLAM and LLOAM are presented. Both methods show 85% agreement in registration within 15 m of the strip road and odometric accuracy of 0.5 m per 100 m. Accuracy is evaluated by comparing the harvester location derived through odometry to locations collected by a GNSS receiver mounted on the harvester.
APA, Harvard, Vancouver, ISO, and other styles
6

Sun, Qian, Ming Diao, Yibing Li, and Ya Zhang. "An improved binocular visual odometry algorithm based on the Random Sample Consensus in visual navigation systems." Industrial Robot: An International Journal 44, no. 4 (June 19, 2017): 542–51. http://dx.doi.org/10.1108/ir-11-2016-0280.

Full text
Abstract:
Purpose The purpose of this paper is to propose a binocular visual odometry algorithm based on Random Sample Consensus (RANSAC) for visual navigation systems. Design/methodology/approach The authors propose a novel binocular visual odometry algorithm based on a features-from-accelerated-segment-test (FAST) extractor and an improved matching method based on RANSAC. Firstly, features are detected with the FAST extractor. Secondly, the detected features are roughly matched using the distance ratio of the nearest neighbor to the second nearest neighbor. Finally, wrongly matched feature pairs are removed with the RANSAC method to reduce the interference of erroneous matches. Findings The performance of the new algorithm has been examined on actual experimental data. The results show that this binocular visual odometry algorithm not only enhances the robustness of feature detection and matching but also significantly reduces the positioning error. The feasibility and effectiveness of the proposed matching method and the improved binocular visual odometry algorithm are also verified in this paper. Practical implications This paper presents an improved binocular visual odometry algorithm which has been tested on real data. The algorithm can be used for outdoor vehicle navigation. Originality/value A binocular visual odometry algorithm based on the FAST extractor and RANSAC is proposed to improve positioning accuracy and robustness. Experimental results have verified the effectiveness of the proposed algorithm.
APA, Harvard, Vancouver, ISO, and other styles
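
The abstract outlines a pipeline of FAST feature detection, nearest/second-nearest ratio matching, and RANSAC outlier rejection. The sketch below assembles those stages with OpenCV for a single monocular frame pair; it is a simplified stand-in, not the paper's binocular algorithm (which would additionally recover metric scale from the stereo baseline), and the thresholds are assumed values.

```python
import cv2
import numpy as np

def relative_pose(img1, img2, K):
    """Estimate camera rotation R and unit-scale translation t between frames.

    Pipeline: FAST keypoints, ORB descriptors, nearest/second-nearest ratio
    test, then RANSAC on the essential matrix to reject wrong matches.
    K is the 3x3 intrinsic matrix, assumed known from prior calibration.
    """
    fast = cv2.FastFeatureDetector_create(threshold=25)
    orb = cv2.ORB_create()                       # descriptors for the FAST keypoints
    kp1, des1 = orb.compute(img1, fast.detect(img1, None))
    kp2, des2 = orb.compute(img2, fast.detect(img2, None))

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in (p for p in pairs if len(p) == 2)
            if m.distance < 0.75 * n.distance]   # ratio test (rough matching)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t                                  # scale would come from the stereo rig
```
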
7

Tang, Hengbo, and Yunhui Liu. "Automatic Simultaneous Extrinsic-Odometric Calibration for Camera-Odometry System." IEEE Sensors Journal 18, no. 1 (January 1, 2018): 348–55. http://dx.doi.org/10.1109/jsen.2017.2764125.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Al Fadli, Muhammad Hanifudin, Munawar Agus Riyadi, and Budi Setiyono. "PERANCANGAN SISTEM ROKET KENDALI BERPEMANDU INFRAMERAH MENGGUNAKAN METODE PENGOLAHAN CITRA YANG DISIMULASIKAN DALAM TEROWONGAN ANGIN [Design of an Infrared-Guided Rocket Control System Using an Image Processing Method Simulated in a Wind Tunnel]." TRANSIENT 7, no. 1 (March 13, 2018): 152. http://dx.doi.org/10.14710/transient.7.1.152-159.

Full text
Abstract:
A guided rocket is a rocket-based weapon system with automatic control for finding a target and adjusting its flight direction. In this research, a prototype guided-rocket system was designed with an infrared target-tracking sensor using a webcam modified by replacing its stock lens with a 940 nm narrow band-pass filter lens. The camera is accessed via a Raspberry Pi, and image filtering is then performed using HSV-parameter thresholding. An ATmega328 microcontroller is installed to control the movement of four canard servos using PID control. 9-DOF IMU parameters are also acquired from the MPU-6050 gyroscope and accelerometer and the HMC5883L compass, to be displayed in a C# interface. The target-tracking and IMU data are sent to a computer using an APC220 telemetry module. The motion of the designed guided-rocket system was then simulated in a wind tunnel. The output of this research is a guided-rocket prototype with an infrared tracking instrument capable of tracking a 940 nm infrared target at a tracking rate of 49.81 FPS. The HSV thresholding parameters for the 940 nm infrared target are hue 0-153, saturation 0-32, and value 179-255. The PID parameters used in the simulation with a wind speed of 9±4 m/s are kp = 7, ki = 0, and kd = 50. The telemetry data can be displayed as 2D odometry using C#.
APA, Harvard, Vancouver, ISO, and other styles
9

Youssef, Ahmed A., Naif Al-Subaie, Naser El-Sheimy, and Mohamed Elhabiby. "Accelerometer-Based Wheel Odometer for Kinematics Determination." Sensors 21, no. 4 (February 13, 2021): 1327. http://dx.doi.org/10.3390/s21041327.

Full text
Abstract:
Various high-budget industries that utilize wheel-based vehicles rely on wheel odometry as an integral aspect of their navigation process. This research introduces a low-cost alternative to the wheel encoders that are typically used to determine the on-track speed of vehicles. The proposed system is referred to as an Accelerometer-based Wheel Odometer for Kinematics determination (AWOK). The AWOK system comprises just a single-axis accelerometer mounted radially at the center of any given wheel. The AWOK system can provide direct distances instead of just velocities, which are provided by typical wheel speedometers. Hence, the AWOK system is advantageous in comparison to typical wheel odometers. In addition, the AWOK system comprises a simple assembly with a highly efficient data processing algorithm. The AWOK system also provides a high capacity to handle high dynamics in comparison to similar approaches found in previous related work. Furthermore, the AWOK system is not affected by the inherited stochastic errors in micro-electro-mechanical system (MEMS) inertial sensors, whether short-term or long-term errors. Above all, the AWOK system reported a relative accuracy of 0.15% in determining the distance covered by a car.
APA, Harvard, Vancouver, ISO, and other styles
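
One plausible reading of the AWOK measurement principle is that, as the wheel turns, gravity projected onto the rotating single-axis accelerometer traces one sinusoidal period per revolution, so counting oscillations gives distance directly. The sketch below illustrates that reading only; it is an assumption for illustration, not the authors' algorithm, and the wheel radius is a placeholder.

```python
import numpy as np

WHEEL_RADIUS = 0.30  # m, illustrative wheel radius (assumption)

def distance_from_radial_accel(a_radial: np.ndarray) -> float:
    """Estimate distance from a wheel-mounted single-axis accelerometer log.

    Counting positive-going zero crossings of the mean-removed radial signal
    counts wheel revolutions; distance = revolutions * circumference.
    """
    signal = a_radial - np.mean(a_radial)           # remove bias / DC offset
    crossings = np.sum((signal[:-1] < 0) & (signal[1:] >= 0))
    return crossings * 2.0 * np.pi * WHEEL_RADIUS

if __name__ == "__main__":
    t = np.linspace(0.0, 10.0, 5000)
    fake = 9.81 * np.cos(2.0 * np.pi * 2.0 * t)     # 2 revolutions per second
    print(f"{distance_from_radial_accel(fake):.2f} m")  # ~20 revolutions, ~37.7 m
```
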
10

Nikitenko, Agris, Aleksis Liekna, Martins Ekmanis, Guntis Kulikovskis, and Ilze Andersone. "Single Robot Localisation Approach for Indoor Robotic Systems through Integration of Odometry and Artificial Landmarks." Applied Computer Systems 14, no. 1 (June 1, 2013): 50–58. http://dx.doi.org/10.2478/acss-2013-0006.

Full text
Abstract:
We present an integrated approach for robot localization that combines artificial-landmark localization data with odometric sensor data and signal transfer function data, providing means for different practical application scenarios. The sensor data fusion deals with asynchronous sensor data using the inverse Laplace transform. We demonstrate a simulation software system that ensures smooth integration of odometry-based and signal-transfer-based localization into one approach.
APA, Harvard, Vancouver, ISO, and other styles
11

Chindakham, Nachaya, Young-Yong Kim, Alongkorn Pirayawaraporn, and Mun-Ho Jeong. "Simultaneous Calibration of Odometry and Head-Eye Parameters for Mobile Robots with a Pan-Tilt Camera." Sensors 19, no. 16 (August 20, 2019): 3623. http://dx.doi.org/10.3390/s19163623.

Full text
Abstract:
In the field of robot navigation, the odometric parameters, such as the wheel radii and wheelbase length, and the relative pose of the optical sensing camera with respect to the robot are very important for accurate operation, so these parameters need to be estimated precisely. However, the odometric and head-eye parameters are typically estimated separately, which is inconvenient and requires a longer calibration time. Although several researchers have proposed simultaneous calibration methods that obtain both odometric and head-eye parameters at once to reduce the calibration time, they are only applicable to mobile robots with a fixed mounted camera, not to mobile robots equipped with a pan-tilt motorized camera system, which is a very common configuration widely used for a wide view. Previous approaches could not provide the z-axis translation parameter between the head-eye coordinate systems on mobile robots equipped with a pan-tilt camera. In this paper, we present a full simultaneous calibration of the head-eye and odometric parameters of a mobile robot, which is appropriate for a mobile robot equipped with a camera mounted on a pan-tilt motorized device. After a set of visual features obtained from a chessboard or natural scene and the odometry measurements are synchronized and received, both the odometric and head-eye parameters are iteratively adjusted until convergence, before a nonlinear optimization method is applied for higher accuracy.
APA, Harvard, Vancouver, ISO, and other styles
12

Liu, Guohua, Juan Guan, Haiying Liu, and Chenlin Wang. "Multirobot Collaborative Navigation Algorithms Based on Odometer/Vision Information Fusion." Mathematical Problems in Engineering 2020 (August 27, 2020): 1–16. http://dx.doi.org/10.1155/2020/5819409.

Full text
Abstract:
Collaborative navigation is a key technology for multi-mobile-robot systems. In order to improve the performance of a collaborative navigation system, collaborative navigation algorithms based on odometer/vision multi-source information fusion are presented in this paper. Firstly, the multi-source information fusion collaborative navigation system model is established, including the mobile robot model, odometry measurement model, lidar relative measurement model, UWB relative measurement model, and the SLAM model based on lidar measurement. Secondly, the frameworks of centralized and decentralized collaborative navigation based on odometer/vision fusion are given, and vision-based SLAM algorithms are presented. Then, the centralized and decentralized odometer/vision collaborative navigation algorithms are derived, including the time update, single-node measurement update, relative measurement update between nodes, and the covariance cross filtering algorithm. Finally, different simulation experiments are designed to verify the effectiveness of the algorithms. Two kinds of multi-robot collaborative navigation experimental scenarios are designed: relative-measurement-aided odometry and odometry/SLAM fusion. The advantages and disadvantages of centralized versus decentralized collaborative navigation algorithms in the different experimental scenarios are analyzed.
APA, Harvard, Vancouver, ISO, and other styles
13

Bonnifait, Ph, P. Bouron, D. Meizel, and P. Crubillé. "Dynamic Localization of Car-like Vehicles using Data Fusion of Redundant ABS Sensors." Journal of Navigation 56, no. 3 (August 26, 2003): 429–41. http://dx.doi.org/10.1017/s037346330300242x.

Full text
Abstract:
A localization system using GPS, ABS sensors and a driving wheel encoder is described and tested through real experiments. A new odometric technique using the four ABS sensors is presented. Due to the redundancy of measurements, the precision is better than the method using differential odometry on the rear wheels only. The sampling is performed when necessary and when a GPS measurement is performed. This implies a noticeable reduction of the GPS latency, simplifying the data-fusion process and improving the quality of its results.
APA, Harvard, Vancouver, ISO, and other styles
14

Nurmaini, Siti, and Sahat Pangidoan. "Localization of Leader-Follower Robot Using Extended Kalman Filter." Computer Engineering and Applications Journal 7, no. 2 (July 12, 2018): 95–108. http://dx.doi.org/10.18495/comengapp.v7i2.253.

Full text
Abstract:
A non-holonomic leader-follower robot must be able to find its own position in order to navigate autonomously in its environment; this problem is known as localization. A common way to estimate the robot pose is to use odometry. However, odometry measurements may give inaccurate results due to wheel slippage or other small noise sources. In this research, the Extended Kalman Filter (EKF) is proposed to minimize the error caused by the odometry measurement. The EKF algorithm works by fusing odometry and landmark information to produce a better estimate. An estimate is considered better whenever the estimated position lies close to the actual path, which represents a system without noise. Another experiment is conducted to observe the influence of the number of landmarks on the estimated position. The results show that the EKF technique is effective in estimating the leader's position and orientation with small error, and that the follower is able to traverse close to the leader along the actual path.
APA, Harvard, Vancouver, ISO, and other styles
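
The abstract describes fusing odometry with landmark information in an Extended Kalman Filter. Below is a generic textbook EKF localization step with known landmarks (odometry prediction plus a range/bearing update); it is a sketch under those assumptions, not the paper's leader-follower formulation, and the noise matrices Q and R are left to the caller.

```python
import numpy as np

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return np.arctan2(np.sin(a), np.cos(a))

def ekf_predict(x, P, d, dtheta, Q):
    """Predict step: propagate pose (x, y, theta) with an odometry increment."""
    px, py, th = x
    x_new = np.array([px + d * np.cos(th + 0.5 * dtheta),
                      py + d * np.sin(th + 0.5 * dtheta),
                      wrap(th + dtheta)])
    F = np.array([[1.0, 0.0, -d * np.sin(th + 0.5 * dtheta)],
                  [0.0, 1.0,  d * np.cos(th + 0.5 * dtheta)],
                  [0.0, 0.0,  1.0]])
    return x_new, F @ P @ F.T + Q

def ekf_update_landmark(x, P, z, landmark, R):
    """Update with a range/bearing measurement z = (r, phi) of a known landmark."""
    dx, dy = landmark[0] - x[0], landmark[1] - x[1]
    q = dx * dx + dy * dy
    z_hat = np.array([np.sqrt(q), wrap(np.arctan2(dy, dx) - x[2])])
    H = np.array([[-dx / np.sqrt(q), -dy / np.sqrt(q),  0.0],
                  [ dy / q,          -dx / q,          -1.0]])
    innov = z - z_hat
    innov[1] = wrap(innov[1])
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x + K @ innov
    x_new[2] = wrap(x_new[2])
    return x_new, (np.eye(3) - K @ H) @ P
```
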
15

Osman, Mostafa, Ahmed Hussein, Abdulla Al-Kaff, Fernando García, and Dongpu Cao. "A Novel Online Approach for Drift Covariance Estimation of Odometries Used in Intelligent Vehicle Localization." Sensors 19, no. 23 (November 26, 2019): 5178. http://dx.doi.org/10.3390/s19235178.

Full text
Abstract:
Localization is the fundamental problem of intelligent vehicles. For a vehicle to operate autonomously, it first needs to locate itself in the environment. Many different odometries (visual, inertial, wheel encoders) have been introduced over the past few years for autonomous vehicle localization. However, such odometries suffer from drift due to their reliance on the integration of sensor measurements. In this paper, the drift error in an odometry is modeled and a Drift Covariance Estimation (DCE) algorithm is introduced. The DCE algorithm estimates the covariance of an odometry using the readings of another on-board sensor which does not suffer from drift. To validate the proposed algorithm, several real-world experiments in different conditions as well as sequences from the Oxford RobotCar Dataset and the EU long-term driving dataset are used. The effect of the covariance estimation on three different fusion-based localization algorithms (EKF, UKF and EH-infinity) is studied in comparison with the use of constant covariances, which were calculated based on the true variance of the sensors being used. The obtained results show the efficacy of the estimation algorithm compared to constant covariances in terms of improving the accuracy of localization.
APA, Harvard, Vancouver, ISO, and other styles
16

Wei, Weichen, Bijan Shirinzadeh, Rohan Nowell, Mohammadali Ghafarian, Mohamed M. A. Ammar, and Tianyao Shen. "Enhancing Solid State LiDAR Mapping with a 2D Spinning LiDAR in Urban Scenario SLAM on Ground Vehicles." Sensors 21, no. 5 (March 4, 2021): 1773. http://dx.doi.org/10.3390/s21051773.

Full text
Abstract:
Solid-State LiDAR (SSL) takes an increasing share of the LiDAR market. Compared with traditional spinning LiDAR, SSLs are more compact, energy-efficient and cost-effective. Generally, current studies of SSL mapping are limited to adapting existing SLAM algorithms to an SSL sensor. However, compared with spinning LiDARs, SSLs differ in their irregular scan patterns and limited FOV. Directly applying existing SLAM approaches to them often increases the instability of the mapping process. This study proposes a systematic design, which consists of a dual-LiDAR mapping system and a three-DOF-interpolated six-DOF odometry. For dual-LiDAR mapping, this work uses a 2D LiDAR to enhance 3D SSL performance on a ground vehicle platform. The proposed system uses the 2D LiDAR to preprocess the scanning field into a number of feature sections according to the curvatures of the 2D scan. Subsequently, this section information is passed to the 3D SSL for direction feature selection. Additionally, this work proposes an odometry interpolation method which uses both LiDARs to generate two separate odometries. The proposed odometry interpolation method selectively determines the appropriate odometry information to update the system state under challenging conditions. Experiments are conducted in different scenarios. The results show that the proposed approach is able to utilise 12 times more corner features from the environment than the compared method, which results in a demonstrable improvement in absolute position error.
APA, Harvard, Vancouver, ISO, and other styles
17

Faiq Hatta, Mohammad Iqbalul, and Nuryono Satya Widodo. "Robot Operating System (ROS) in Quadcopter Flying Robot Using Telemetry System." International Journal of Robotics and Control Systems 1, no. 1 (February 22, 2021): 54–65. http://dx.doi.org/10.31763/ijrcs.v1i1.247.

Full text
Abstract:
This study implements odometry visualisation using RViz on a quadcopter flying robot that uses Pixhawk Cube firmware version 3.6.8 as the sub-controller. A Lenovo G400 laptop serves as the main controller and as the Ground Control Station, running the Ubuntu 16.04 Linux operating system. The ROS Kinetic platform with the MAVROS package is used for the quadcopter, communicating with the telemetry module over MAVLink. The odometry system was tested using RViz as navigation for the quadcopter's movements, following movement patterns of certain shapes as well as basic robot movements. Data were collected using a standard inclinometer to measure the tilt of the robot, with RViz as the visual display of the robot's odometry. The results show that the flying robot can manoeuvre according to the shapes in RViz, matching the movements carried out directly at the airport, and they also show the effect of the roll angle on the quadcopter (roll left negative, roll right positive) and of the pitch angle (pitch forward negative, pitch back positive).
APA, Harvard, Vancouver, ISO, and other styles
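
The entry visualises quadcopter odometry in RViz over ROS/MAVROS. As a loosely related sketch only, the snippet below shows a generic ROS 1 (rospy) node publishing nav_msgs/Odometry messages that RViz can display; the topic, frame names, and the fake motion are illustrative assumptions and are not the Pixhawk/MAVROS setup described in the entry.

```python
#!/usr/bin/env python
import rospy
from nav_msgs.msg import Odometry
from geometry_msgs.msg import Quaternion
from tf.transformations import quaternion_from_euler

def publish_fake_odometry():
    """Publish a slowly advancing odometry message that RViz can display."""
    rospy.init_node("odom_sketch")
    pub = rospy.Publisher("odom", Odometry, queue_size=10)
    rate = rospy.Rate(10)                  # 10 Hz
    x, yaw = 0.0, 0.0
    while not rospy.is_shutdown():
        x += 0.01                          # pretend the vehicle moves 1 cm per cycle
        msg = Odometry()
        msg.header.stamp = rospy.Time.now()
        msg.header.frame_id = "odom"       # fixed frame selected in RViz
        msg.child_frame_id = "base_link"   # body frame of the vehicle
        msg.pose.pose.position.x = x
        msg.pose.pose.orientation = Quaternion(*quaternion_from_euler(0.0, 0.0, yaw))
        msg.twist.twist.linear.x = 0.1     # m/s, consistent with 1 cm at 10 Hz
        pub.publish(msg)
        rate.sleep()

if __name__ == "__main__":
    publish_fake_odometry()
```
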
18

Aguiar, André, Filipe Santos, Armando Jorge Sousa, and Luís Santos. "FAST-FUSION: An Improved Accuracy Omnidirectional Visual Odometry System with Sensor Fusion and GPU Optimization for Embedded Low Cost Hardware." Applied Sciences 9, no. 24 (December 15, 2019): 5516. http://dx.doi.org/10.3390/app9245516.

Full text
Abstract:
The main task while developing a mobile robot is to achieve accurate and robust navigation in a given environment. To achieve such a goal, the ability of the robot to localize itself is crucial. In outdoor, namely agricultural, environments this task becomes a real challenge, because odometry is not always usable and global navigation satellite system (GNSS) signals are blocked or significantly degraded. To answer this challenge, this work presents a solution for outdoor localization based on an omnidirectional visual odometry technique fused with a gyroscope and a low-cost planar light detection and ranging (LIDAR) sensor, optimized to run on a low-cost graphical processing unit (GPU). This solution, named FAST-FUSION, offers the scientific community three core contributions. The first contribution is an extension of the state-of-the-art monocular visual odometry (Libviso2) to work with omnidirectional cameras and a single-axis gyro to increase the system accuracy. The second contribution is an algorithm that uses low-cost LIDAR data to estimate the motion scale and overcome the limitations of monocular visual odometry systems. Finally, we propose a heterogeneous computing optimization that uses a Raspberry Pi GPU to improve the visual odometry runtime performance on low-cost platforms. To test and evaluate FAST-FUSION, we created three open-source datasets in an outdoor environment. Results show that FAST-FUSION runs in real time on low-cost hardware and outperforms the original Libviso2 approach in terms of time performance and motion estimation accuracy.
APA, Harvard, Vancouver, ISO, and other styles
19

Martínez-García, Edgar Alonso, Joaquín Rivero-Juárez, Luz Abril Torres-Méndez, and Jorge Enrique Rodas-Osollo. "Divergent trinocular vision observers design for extended Kalman filter robot state estimation." Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering 233, no. 5 (September 24, 2018): 524–47. http://dx.doi.org/10.1177/0959651818800908.

Full text
Abstract:
Here, we report the design of two deterministic observers that exploit the capabilities of a home-made divergent trinocular visual sensor to sense depth data. The three-dimensional key points that the observers can measure are triangulated for visual odometry and estimated by an extended Kalman filter. This work deals with a four-wheel-drive mobile robot with four passive suspensions. The direct and inverse kinematic solutions are deduced and used for the updating and prediction models of the extended Kalman filter as feedback for the robot’s position controller. The state-estimation visual odometry results were compared with the robot’s dead-reckoning kinematics, and both are combined as a recursive position controller. One observer model design is based on the analytical geometric multi-view approach. The other observer model has fundamentals on multi-view lateral optical flow, which was reformulated as nonspatial–temporal and is modeled by an exponential function. This work presents the analytical deductions of the models and formulations. Experimental validation deals with five main aspects: multi-view correction, a geometric observer for range measurement, an optical flow observer for range measurement, dead-reckoning and visual odometry. Furthermore, comparison of positioning includes a four-wheel odometer, deterministic visual observers and the observer–extended Kalman filter, compared with a vision-based global reference localization system.
APA, Harvard, Vancouver, ISO, and other styles
20

Yang, Jung-Cheng, Chun-Jung Lin, Bing-Yuan You, Yin-Long Yan, and Teng-Hu Cheng. "RTLIO: Real-Time LiDAR-Inertial Odometry and Mapping for UAVs." Sensors 21, no. 12 (June 8, 2021): 3955. http://dx.doi.org/10.3390/s21123955.

Full text
Abstract:
Most UAVs rely on GPS for localization in an outdoor environment. However, in a GPS-denied environment, other sources of localization are required for UAVs to conduct feedback control and navigation. LiDAR has been used for indoor localization, but the sampling rate is usually too low for feedback control of UAVs. To compensate for this drawback, IMU sensors are usually fused to generate high-frequency odometry, with only a few extra computational resources. To achieve this goal, a real-time LiDAR-inertial odometry system (RTLIO) is developed in this work to generate high-precision and high-frequency odometry for the feedback control of UAVs in an indoor environment; this is achieved by solving cost functions that consist of the LiDAR and IMU residuals. Compared to the traditional LIO approach, the initialization process of the developed RTLIO can be achieved even when the device is stationary. To further reduce the accumulated pose errors, loop closure and pose-graph optimization are also developed in RTLIO. To demonstrate the efficacy of the developed RTLIO, experiments with long-range trajectories are conducted, and the results indicate that RTLIO can outperform LIO with a smaller drift. Experiments with an odometry benchmark dataset (i.e., KITTI) are also conducted to compare the performance with other methods, and the results show that RTLIO can outperform ALOAM and LOAM in terms of exhibiting a smaller time delay and greater position accuracy.
APA, Harvard, Vancouver, ISO, and other styles
21

Iacò, Maria Rita, Wolfgang Steiner, and Robert F. Tichy. "Linear Recursive Odometers and Beta-Expansions." Uniform distribution theory 11, no. 1 (June 1, 2016): 175–86. http://dx.doi.org/10.1515/udt-2016-0010.

Full text
Abstract:
The aim of this paper is to study the connection between different properties related to β-expansions. In particular, the relation between two conditions, both ensuring purely discrete spectrum of the odometer, is analyzed. The first one is the so-called Hypothesis B for the G-odometers and the second one is denoted by (QM) and it has been introduced in the framework of tilings associated to Pisot β-numerations.
APA, Harvard, Vancouver, ISO, and other styles
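
In this entry, "odometer" refers to the adding-machine transformation of ergodic theory rather than a robot sensor. A minimal sketch of the simplest case, the dyadic odometer acting on a finite truncation of {0,1}^N, is shown below for orientation; the paper's G-odometers and β-expansions are far more general.

```python
def dyadic_odometer(digits):
    """Add 1 with carry to a least-significant-first binary digit sequence.

    This is the adding-machine ("odometer") map x -> x + 1 restricted to the
    first len(digits) coordinates of {0,1}^N.
    """
    out = list(digits)
    for i, d in enumerate(out):
        if d == 0:
            out[i] = 1
            return out          # carry stops here
        out[i] = 0              # 1 + 1 = 0, carry propagates to the next digit
    return out                  # all digits were 1: wraps around to 0...0

if __name__ == "__main__":
    x = [0, 0, 0]
    for _ in range(8):          # one full cycle through {0,1}^3
        print(x)
        x = dyadic_odometer(x)
```
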
22

Conduraru, Ionel, Ioan Doroftei, Dorin Luca, and Alina Conduraru Slatineanu. "Odometry Aspects of an Omni-Directional Mobile Robot with Modified Mecanum Wheels." Applied Mechanics and Materials 658 (October 2014): 587–92. http://dx.doi.org/10.4028/www.scientific.net/amm.658.587.

Full text
Abstract:
Mobile robots are widely used in industry, military operations, exploration and other applications where human intervention is risky. When a mobile robot has to move in small and narrow spaces and avoid obstacles, mobility is one of its main issues. An omni-directional drive mechanism is very attractive because it guarantees very good mobility in such cases. Also, accurate estimation of position is a key component of successful operation for most autonomous mobile robots. In this work, some odometry aspects of an omni-directional robot are presented and a simple odometer solution is proposed.
APA, Harvard, Vancouver, ISO, and other styles
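
The entry concerns odometry for an omni-directional robot with (modified) mecanum wheels. The sketch below shows standard mecanum forward kinematics followed by a dead-reckoning pose update; the geometry constants and the sign pattern (which depends on roller orientation and wheel numbering) are assumptions, not the paper's modified-wheel model.

```python
import math

# Illustrative geometry (assumptions): wheel radius, half-length, half-width in metres
R, L, W = 0.05, 0.20, 0.15

def body_twist(w1, w2, w3, w4):
    """Mecanum forward kinematics: body twist (vx, vy, wz) from wheel rates.

    Wheel order: front-left, front-right, rear-left, rear-right, assuming a
    common X-roller configuration.
    """
    vx = R / 4.0 * ( w1 + w2 + w3 + w4)
    vy = R / 4.0 * (-w1 + w2 + w3 - w4)
    wz = R / (4.0 * (L + W)) * (-w1 + w2 - w3 + w4)
    return vx, vy, wz

def integrate(pose, twist, dt):
    """Dead-reckon the world-frame pose (x, y, theta) over one time step."""
    x, y, th = pose
    vx, vy, wz = twist
    x += (vx * math.cos(th) - vy * math.sin(th)) * dt
    y += (vx * math.sin(th) + vy * math.cos(th)) * dt
    return x, y, th + wz * dt
```
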
23

Fariña, Bibiana, Jonay Toledo, Jose Ignacio Estevez, and Leopoldo Acosta. "Improving Robot Localization Using Doppler-Based Variable Sensor Covariance Calculation." Sensors 20, no. 8 (April 17, 2020): 2287. http://dx.doi.org/10.3390/s20082287.

Full text
Abstract:
This paper describes a localization module for an autonomous wheelchair. This module includes a combination of various sensors such as odometers, laser scanners, IMU and Doppler speed sensors. Every sensor used in the module features variable covariance estimation in order to yield a final accurate localization. The main problem of a localization module composed of different sensors is the accuracy estimation of each sensor. Average static values are normally used, but these can lead to failure in some situations. In this paper, all the sensors have a variable covariance estimation that depends on the data quality. A Doppler speed sensor is used to estimate the covariance of the encoder odometric localization. Lidar is also used as a scan matching localization algorithm, comparing the difference between two consecutive scans to obtain the change in position. Matching quality gives the accuracy of the scan matcher localization. This structure yields a better position than a traditional odometric static covariance method. This is tested in a real prototype and compared to a standard fusion technique.
APA, Harvard, Vancouver, ISO, and other styles
24

Gao, Jiaxin, Kui Li, and Jiyang Chen. "Research on the Integrated Navigation Technology of SINS with Couple Odometers for Land Vehicles." Sensors 20, no. 2 (January 19, 2020): 546. http://dx.doi.org/10.3390/s20020546.

Full text
Abstract:
Autonomous and accurate acquisition of the position and azimuth of the vehicle is critical to the combat effectiveness of land-fighting vehicles. The integrated navigation system, consisting of a strap-down inertial navigation system (SINS) and odometer (OD), is commonly applied in vehicles. In the SINS/OD integrated system, the odometer is installed around the vehicle’s wheel, while SINS is usually installed on the base of the vehicle. The distance along SINS and OD would cause a velocity difference when the vehicle maneuvers, which may lead to a significant influence on the integration positioning accuracy. Furthermore, SINS navigation errors, especially azimuth error, would diverge over time due to gyro drifts and accelerometer biases. The azimuth error would cause the divergence of dead-reckoning positioning errors with the distance that the vehicle drives. To solve these problems, an integrated positioning and orientation method based on the configuration of SINS and couple odometers was proposed in this paper. The proposed method designed a high precision integrated navigation algorithm, which compensated the lever arm effect to eliminate the velocity difference between SINS and odometers. At the same time, by using the measured information of couple odometers, azimuth reference was calculated and used as an external measurement to suppress SINS azimuth error’s divergence over time, thus could further improve the navigation precision of the integrated system, especially the orientation accuracy. The performance of the proposed method was verified by simulations. The results demonstrated that SINS/2ODs integrated system could achieve a positioning accuracy of 0.01% D (total mileage) and orientation accuracy of ±30″ by using SINS with 0.01°/h Fiber-Optic Gyroscope (FOGs) and 50 µg accelerometers.
APA, Harvard, Vancouver, ISO, and other styles
25

Yoon, S. J., W. S. Yoon, J. W. Jung, and T. Kim. "DEVELOPMENT OF A SINGLE-VIEW ODOMETER BASED ON PHOTOGRAMMETRIC BUNDLE ADJUSTMENT." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2 (May 30, 2018): 1219–23. http://dx.doi.org/10.5194/isprs-archives-xlii-2-1219-2018.

Full text
Abstract:
Recently, vehicles have been equipped with various sensors aimed at smart and autonomous functions. A single-view odometer estimates the vehicle's pose using a monoscopic camera mounted on it, and has generally been studied in the field of computer vision. On the other hand, photogrammetry focuses on producing precise three-dimensional position information using bundle adjustment methods. Therefore, this paper proposes to apply a photogrammetric approach to the single-view odometer. Firstly, it performs real-time corresponding-point extraction. Next, it estimates the pose using relative orientation based on coplanarity conditions. Then, scale calibration is performed to convert the estimated translation in model space to the translation in real space. Finally, absolute orientation is performed using more than three images. In this step, we also extract the appropriate model points through a verification procedure. For the experiments, we used the data provided by the KITTI (Karlsruhe Institute of Technology and Toyota Technological Institute) community. The technique took 0.12 seconds of processing time per frame. The rotation estimation error was about 0.005 degrees per meter and the translation estimation error was about 6.8%. The results of this study show the applicability of photogrammetry to visual odometry technology.
APA, Harvard, Vancouver, ISO, and other styles
26

Piatkov, Oleksandr, Veronika Zhuk, and Olha Poliukhovych. "Influence of the effect of crumpling of clay soils in compression tests on the determination of the subsidence of the base." Bases and Foundations, no. 40 (June 4, 2020): 83–90. http://dx.doi.org/10.32347/0475-1132.40.2020.83-90.

Full text
Abstract:
The effect of "compression error" - the effect of crumpling samples of dusty clay soils in compression tests - has been investigated experimentally. For this purpose, compression tests of sands and loams were carried out on a special compression device with a ring area A = 360 cm2 and a height hk = 7 cm. The second difference of this device is the presence in the upper stamp of holes with a diameter of up to 5 mm for the installation of screw marks, which were placed in clay soil samples at a distance of up to 5 mm from the contact surface between the stamp and the soil. The modulus of deformation is the main deformation mechanical characteristic of the soil. It is known from practice that more reliable values of this characteristic can be obtained by testing soil bases with stamps in the field. However, when designing the foundations of shallow foundation, as a rule, apply odometric tests of soils in the laboratory on standard compression devices with a ring area of 60 cm2 and a height of hk = 25 mm by the method of 2 curves [6]. To obtain the calculated values, the compression modules are corrected using transient coefficients [7, 4]. And most importantly, when determining the values of odometric laboratory modules, using standard compression devices, the result is affected by the "compression error". That is, the initial data for the calculated modules of deformation of the engineering-geological elements of the base can be significantly underestimated, which has been confirmed many times by complex pair (stamp-odometer) studies of soils under compression. Studies of soil compression in compression conditions, conducted in the laboratory of foundations and foundations of KNUBA [2] prove the possibility of improving the results of odometric tests of soils for compression, bringing them closer to the strain.
APA, Harvard, Vancouver, ISO, and other styles
27

Hashimoto, Masafumi, Takanori Kurazumi, and Fuminori Oba. "Odometry in Cooperative Multi-Mobile Robots." Journal of Robotics and Mechatronics 11, no. 5 (October 20, 1999): 411–16. http://dx.doi.org/10.20965/jrm.1999.p0411.

Full text
Abstract:
We propose odometry for cooperative multi-mobile robots by integrating conventional odometry with inter-robot position sensor information. In our odometry, each robot is considered a moving landmark with an imprecise location. Robots in the group locally estimate their own absolute positions based on conventional odometry and find the relative positions of each other using inter-robot position sensors. They communicate and exchange information on local estimates and relative positions. The information is integrated in a decentralized manner based on the extended Kalman filter, and the robots improve their absolute positions. Simulation and experiments show that our odometry eliminates the large robot location errors found in conventional odometry.
APA, Harvard, Vancouver, ISO, and other styles
28

Dutle, Aaron, and Bill Kay. "Graph odometry." Discrete Applied Mathematics 214 (December 2016): 108–15. http://dx.doi.org/10.1016/j.dam.2016.06.023.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Gutiérrez, Álvaro, Alexandre Campo, Francisco C. Santos, Félix Monasterio-Huelin, and Marco Dorigo. "Social Odometry: Imitation Based Odometry in Collective Robotics." International Journal of Advanced Robotic Systems 6, no. 2 (June 2009): 11. http://dx.doi.org/10.5772/6794.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Asselman, Hassan, and Aouatef Daouidi. "Study of clay soils swelling by the new method based on Laser Interferometry and the classical Odometer test." MATEC Web of Conferences 149 (2018): 02018. http://dx.doi.org/10.1051/matecconf/201814902018.

Full text
Abstract:
Geotechnical engineering participates in the act of construction, which means that it must meet a double concern for security and economy. Therefore, an essential part of the engineer's responsibility rests on the recognition of soils in order to determine their nature and properties (taking into account the flow of water). In the present work, we measure the swelling and the permeability by detection of the swelling, using our new optical method based on laser interferometry, invented by Hassan Asselman within the Optics and Photonics team of the Faculty of Sciences, Tetouan, Morocco. This new prototype allows us to directly measure the following parameters: the permeability k (m/s), the Young's modulus E and the swelling index Cs. For the latter parameter, the evolution of the strain as a function of the stresses ρ (Pa) is measured for a given degree of saturation (until saturation). Moreover, we use the classical odometer test, which reproduces the conditions of deformation of the soils. Using the results of the latter with the graphic methods of Taylor and Casagrande, it is possible to determine the value of the coefficient of consolidation of the soil, Cv. According to the Darcy theoretical model for a saturated medium, Cv depends on the permeability, the compressibility coefficient mv (the inverse of the odometric Young's modulus) and the unit weight of the water γw. These tests were carried out at the GEORET Geotechnical Laboratory in Tetouan. To perform this work, we chose a disturbed clay sample, already characterized by X-ray diffraction (whose clay fraction is illite). It was extracted from the so-called "Teffalin" quarry of the Tetouan region, which is used in the manufacture of pottery. Finally, we give a comparison between our new patented method and the classical odometric test.
APA, Harvard, Vancouver, ISO, and other styles
31

An, Lifeng, Xinyu Zhang, Hongbo Gao, and Yuchao Liu. "Semantic segmentation–aided visual odometry for urban autonomous driving." International Journal of Advanced Robotic Systems 14, no. 5 (September 1, 2017): 172988141773566. http://dx.doi.org/10.1177/1729881417735667.

Full text
Abstract:
Visual odometry plays an important role in urban autonomous driving cars. Feature-based visual odometry methods sample candidates randomly from all available feature points, while alignment-based visual odometry methods take all pixels into account. These methods hold the assumption that the quantitative majority of candidate visual cues can represent the true motion. But in real urban traffic scenes, this assumption can be broken by many dynamic traffic participants. Big trucks or buses may occupy the main image parts of a front-view monocular camera and result in wrong visual odometry estimation. Finding available visual cues that represent the real motion is the most important and hardest step for visual odometry in a dynamic environment. Semantic attributes of pixels can be considered a more reasonable factor for candidate selection in that case. This article analyzed the availability of all visual cues with the help of pixel-level semantic information and proposed a new visual odometry method that combines feature-based and alignment-based visual odometry methods in one optimization pipeline. The proposed method was compared with three open-source visual odometry algorithms on the KITTI benchmark data sets and our own data set. Experimental results confirmed that the new approach provides an effective improvement in both accuracy and robustness in complex dynamic scenes.
APA, Harvard, Vancouver, ISO, and other styles
32

CIOCOIU, Titus, Florin MOLDOVEANU, and Caius SULIMAN. "CAMERA CALIBRATION FOR VISUAL ODOMETRY SYSTEM." SCIENTIFIC RESEARCH AND EDUCATION IN THE AIR FORCE 18, no. 1 (June 24, 2016): 227–32. http://dx.doi.org/10.19062/2247-3173.2016.18.1.30.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Bazeille, Stephane, Emmanuel Battesti, and David Filliat. "A Light Visual Mapping and Navigation Framework for Low-Cost Robots." Journal of Intelligent Systems 24, no. 4 (December 1, 2015): 505–24. http://dx.doi.org/10.1515/jisys-2014-0116.

Full text
Abstract:
We address the problems of localization, mapping, and guidance for robots with limited computational resources by combining vision with the metrical information given by the robot odometry. We propose in this article a novel light and robust topometric simultaneous localization and mapping framework using appearance-based visual loop-closure detection enhanced with the odometry. The main advantage of this combination is that the odometry makes the loop-closure detection more accurate and reactive, while the loop-closure detection enables the long-term use of odometry for guidance by correcting the drift. The guidance approach is based on qualitative localization using vision and odometry, and is robust to visual sensor occlusions or changes in the scene. The resulting framework is incremental, real-time, and based on cheap sensors provided on many robots (a camera and odometry encoders). This approach is, moreover, particularly well suited for low-power robots as it is not dependent on the image processing frequency and latency, and thus it can be applied using remote processing. The algorithm has been validated on a Pioneer P3DX mobile robot in indoor environments, and its robustness is demonstrated experimentally for a large range of odometry noise levels.
APA, Harvard, Vancouver, ISO, and other styles
34

Thapa, Vikas, Abhishek Sharma, Beena Gairola, Amit K. Mondal, Vindhya Devalla, and Ravi K. Patel. "A Review on Visual Odometry Techniques for Mobile Robots: Types and Challenges." Recent Advances in Electrical & Electronic Engineering (Formerly Recent Patents on Electrical & Electronic Engineering) 13, no. 5 (September 22, 2020): 618–31. http://dx.doi.org/10.2174/2352096512666191004142546.

Full text
Abstract:
For autonomous navigation, tracking and obstacle avoidance, a mobile robot must have knowledge of its position and localization over time. Among the available techniques for odometry, vision-based odometry is a robust and economical technique. In addition, a combination of position estimation from odometry with interpretations of the surroundings using a mobile camera is effective. This paper presents an overview of current visual odometry approaches, applications, and challenges in mobile robots. The study offers a comparative analysis of the different available techniques and associated algorithms, emphasizing their efficiency, feature extraction capability, applications and optimality.
APA, Harvard, Vancouver, ISO, and other styles
35

García Daza, Iván, Mónica Rentero, Carlota Salinas Maldonado, Ruben Izquierdo Gonzalo, Noelia Hernández Parra, Augusto Ballardini, and David Fernandez Llorca. "Fail-Aware LIDAR-Based Odometry for Autonomous Vehicles." Sensors 20, no. 15 (July 23, 2020): 4097. http://dx.doi.org/10.3390/s20154097.

Full text
Abstract:
Autonomous driving systems are set to become a reality in transport systems and, so, maximum acceptance is being sought among users. Currently, the most advanced architectures require driver intervention when functional system failures or critical sensor operations take place, presenting problems related to driver state, distractions, fatigue, and other factors that prevent safe control. Therefore, this work presents a redundant, accurate, robust, and scalable LiDAR odometry system with fail-aware system features that can allow other systems to perform a safe stop manoeuvre without driver mediation. All odometry systems have drift error, making it difficult to use them for localisation tasks over extended periods. For this reason, the paper presents an accurate LiDAR odometry system with a fail-aware indicator. This indicator estimates a time window in which the system manages the localisation tasks appropriately. The odometry error is minimised by applying a dynamic 6-DoF model and fusing measures based on the Iterative Closest Points (ICP), environment feature extraction, and Singular Value Decomposition (SVD) methods. The obtained results are promising for two reasons: First, in the KITTI odometry data set, the ranking achieved by the proposed method is twelfth, considering only LiDAR-based methods, where its translation and rotation errors are 1.00 % and 0.0041 deg/m, respectively. Second, the encouraging results of the fail-aware indicator demonstrate the safety of the proposed LiDAR odometry system. The results depict that, in order to achieve an accurate odometry system, complex models and measurement fusion techniques must be used to improve its behaviour. Furthermore, if an odometry system is to be used for redundant localisation features, it must integrate a fail-aware indicator for use in a safe manner.
APA, Harvard, Vancouver, ISO, and other styles
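
The abstract mentions fusing ICP and SVD-based estimation to keep LiDAR odometry drift low. The sketch below shows only the SVD (Kabsch) building block that solves for the rigid transform between already-corresponded point sets, as used inside an ICP iteration; it is a generic component, not the paper's fail-aware pipeline.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) aligning src to dst (Kabsch/SVD).

    src, dst: (N, 3) arrays of corresponding points.  Inside an ICP loop the
    correspondences would come from nearest-neighbour matching; here they are
    assumed to be given.
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.normal(size=(100, 3))
    yaw = np.deg2rad(10.0)
    R_true = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                       [np.sin(yaw),  np.cos(yaw), 0.0],
                       [0.0,          0.0,         1.0]])
    moved = pts @ R_true.T + np.array([1.0, 0.5, 0.0])
    R_est, t_est = best_rigid_transform(pts, moved)
    print(np.allclose(R_est, R_true), np.round(t_est, 3))
```
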
36

Wexler, Mark. "Bugs in odometry." Trends in Cognitive Sciences 5, no. 8 (August 2001): 331. http://dx.doi.org/10.1016/s1364-6613(00)01739-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Li, Q., C. Wang, S. Chen, X. Li, C. Wen, M. Cheng, and J. Li. "DEEP LIDAR ODOMETRY." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W13 (June 5, 2019): 1681–86. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w13-1681-2019.

Full text
Abstract:
Most existing lidar odometry estimation strategies are formulated under a standard framework that includes feature selection, and pose estimation through feature matching. In this work, we present a novel pipeline called LO-Net for lidar odometry estimation from 3D lidar scanning data using deep convolutional networks. The network is trained in an end-to-end manner, it infers 6-DoF poses from the encoded sequential lidar data. Based on the new designed mask-weighted geometric constraint loss, the network automatically learns effective feature representation for the lidar odometry estimation problem, and implicitly exploits the sequential dependencies and dynamics. Experiments on benchmark datasets demonstrate that LO-Net has similar accuracy with the geometry-based approach.
APA, Harvard, Vancouver, ISO, and other styles
38

Scaramuzza, Davide, and Friedrich Fraundorfer. "Visual Odometry [Tutorial]." IEEE Robotics & Automation Magazine 18, no. 4 (December 2011): 80–92. http://dx.doi.org/10.1109/mra.2011.943233.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Engel, Jakob, Vladlen Koltun, and Daniel Cremers. "Direct Sparse Odometry." IEEE Transactions on Pattern Analysis and Machine Intelligence 40, no. 3 (March 1, 2018): 611–25. http://dx.doi.org/10.1109/tpami.2017.2658577.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Mouats, Tarek, Nabil Aouf, Angel Domingo Sappa, Cristhian Aguilera, and Ricardo Toledo. "Multispectral Stereo Odometry." IEEE Transactions on Intelligent Transportation Systems 16, no. 3 (June 2015): 1210–24. http://dx.doi.org/10.1109/tits.2014.2354731.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Das, Anweshan, Jos Elfring, and Gijs Dubbelman. "Real-Time Vehicle Positioning and Mapping Using Graph Optimization." Sensors 21, no. 8 (April 16, 2021): 2815. http://dx.doi.org/10.3390/s21082815.

Full text
Abstract:
In this work, we propose and evaluate a pose-graph optimization-based real-time multi-sensor fusion framework for vehicle positioning using low-cost automotive-grade sensors. Pose-graphs can model multiple absolute and relative vehicle positioning sensor measurements and can be optimized using nonlinear techniques. We model pose-graphs using measurements from a precise stereo camera-based visual odometry system, a robust odometry system using the in-vehicle velocity and yaw-rate sensor, and an automotive-grade GNSS receiver. Our evaluation is based on a dataset with 180 km of vehicle trajectories recorded in highway, urban, and rural areas, accompanied by postprocessed Real-Time Kinematic GNSS as ground truth. We compare the architecture’s performance with (i) vehicle odometry and GNSS fusion and (ii) stereo visual odometry, vehicle odometry, and GNSS fusion; for offline and real-time optimization strategies. The results exhibit a 20.86% reduction in the localization error’s standard deviation and a significant reduction in outliers when compared with automotive-grade GNSS receivers.
APA, Harvard, Vancouver, ISO, and other styles
42

Jeon, Hyun-Ho, Jin-Hyung Kim, and Yun-Ho Ko. "RAFSet (Robust Aged Feature Set)-Based Monocular Visual Odometry." Journal of Institute of Control, Robotics and Systems 23, no. 12 (December 31, 2017): 1063–69. http://dx.doi.org/10.5302/j.icros.2017.17.0160.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Jung, Changbae, and Woojin Chung. "Calibration of Kinematic Parameters for Two Wheel Differential Mobile Robots by Using Experimental Heading Errors." International Journal of Advanced Robotic Systems 8, no. 5 (January 1, 2011): 68. http://dx.doi.org/10.5772/50906.

Full text
Abstract:
Odometry using incremental wheel encoder sensors provides the relative position of mobile robots. This relative position is fundamental information for pose estimation by various sensors for EKF localization, Monte Carlo localization, etc. Odometry is also used as the only information for localization in environmental conditions where absolute measurement systems are not available. However, odometry suffers from the accumulation of kinematic modeling errors of the wheels as the robot's travel distance increases. Therefore, systematic odometry errors need to be calibrated. The principal systematic error sources are unequal wheel diameters and uncertainty in the effective wheelbase. The UMBmark method is a practical and useful calibration scheme for the systematic odometry errors of two-wheel differential mobile robots. However, the approximation errors of the calibration equations and the coupled effect between the two systematic error sources affect the performance of the kinematic parameter estimation. In this paper, we propose a new calibration scheme whose calibration equations have smaller approximation errors. The new scheme uses the orientation errors of the robot's final pose in the test track. It also considers the coupled effect between wheel diameter error and wheelbase error. Numerical simulations and experimental results verified that the proposed scheme accurately estimates the kinematic error parameters and improves the accuracy of odometry calibration significantly.
APA, Harvard, Vancouver, ISO, and other styles
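
The abstract targets UMBmark-style calibration of unequal wheel diameters and wheelbase uncertainty. The sketch below shows one common way such estimated correction factors can be applied inside the odometry update; the factor values and the exact form of the correction are illustrative assumptions, not the paper's calibration equations.

```python
import math

# Nominal geometry and example correction factors (placeholder values; in
# practice E_D and E_B would come from a calibration routine such as UMBmark).
B_NOMINAL = 0.40   # m, nominal wheelbase
E_D = 1.005        # estimated ratio of right/left effective wheel diameters
E_B = 0.99         # estimated ratio of actual/nominal wheelbase

def corrected_odometry_step(pose, d_left, d_right):
    """Apply diameter/wheelbase correction factors, then integrate the pose."""
    # Per-wheel scale factors derived from the diameter ratio (one common convention)
    c_left = 2.0 / (E_D + 1.0)
    c_right = 2.0 / (1.0 / E_D + 1.0)
    dl, dr = c_left * d_left, c_right * d_right
    b = E_B * B_NOMINAL                 # corrected effective wheelbase
    d = 0.5 * (dl + dr)
    dth = (dr - dl) / b
    x, y, th = pose
    x += d * math.cos(th + 0.5 * dth)
    y += d * math.sin(th + 0.5 * dth)
    return x, y, th + dth
```
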
44

Alapetite, Alexandre, Zhongyu Wang, John Paulin Hansen, Marcin Zajączkowski, and Mikołaj Patalan. "Comparison of Three Off-the-Shelf Visual Odometry Systems." Robotics 9, no. 3 (July 21, 2020): 56. http://dx.doi.org/10.3390/robotics9030056.

Full text
Abstract:
Positioning is an essential aspect of robot navigation, and visual odometry is an important technique for continuously updating the internal information about robot position, especially indoors without GPS (Global Positioning System). Visual odometry uses one or more cameras to find visual clues and estimate robot movements in 3D in relative terms. Recent progress has been made, especially with fully integrated systems such as the RealSense T265 from Intel, which is the focus of this article. We compare three visual odometry systems against each other (and one wheel odometry, as a known baseline) on a ground robot. We do so in eight scenarios, varying the speed, the number of visual features, and with or without humans walking in the field of view. We continuously measure the position error in translation and rotation thanks to a ground-truth positioning system. Our results show that all odometry systems are challenged, but in different ways. The RealSense T265 and the ZED Mini have comparable performance, better than our baseline ORB-SLAM2 (mono-lens without inertial measurement unit (IMU)) but not excellent. In conclusion, a single odometry system might still not be sufficient, so using multiple instances and sensor fusion approaches is necessary while waiting for additional research and further improved products.
APA, Harvard, Vancouver, ISO, and other styles
45

Jiménez, Paulo A., and Bijan Shirinzadeh. "Laser interferometry measurements based calibration and error propagation identification for pose estimation in mobile robots." Robotica 32, no. 1 (August 6, 2013): 165–74. http://dx.doi.org/10.1017/s0263574713000660.

Full text
Abstract:
SUMMARY: A widely used method for pose estimation in mobile robots is odometry. Odometry allows the robot to reconstruct its position and orientation in real time from the wheels' encoder measurements. Given its unbounded nature, odometry calculation accumulates errors, with a quadratic increase of error variance with traversed distance. This paper develops a novel method for odometry calibration and error propagation identification for mobile robots. The proposed method uses a laser-based interferometer to measure distance precisely. Two variants of the proposed calibration method are examined: the two-parameter model and the three-parameter model. Experimental results obtained using a Khepera 3 mobile robot showed that both methods significantly increase the accuracy of the pose estimation, validating the effectiveness of the proposed calibration method.
APA, Harvard, Vancouver, ISO, and other styles
46

Kim, Kyu-Won, Tae-Ki Jung, Seong-Hun Seo, and Gyu-In Jee. "Development of Tightly Coupled based LIDAR-Visual-Inertial Odometry." Journal of Institute of Control, Robotics and Systems 26, no. 8 (August 31, 2020): 597–603. http://dx.doi.org/10.5302/j.icros.2020.20.0076.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Huang Ping, 黄平, 曹镇 Cao Zhen, and 王欢 Wang Huan. "基于环形特征匹配的双目视觉里程计 [Binocular Visual Odometry Based on Ring Feature Matching]." Acta Optica Sinica 41, no. 15 (2021): 1515002. http://dx.doi.org/10.3788/aos202141.1515002.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Muniandy, Murelitharan, and Kanesan Muthusamy. "An Innovative Drive Train Design for Improved Dead Reckoning Accuracy in Automated Guided Vehicles." Advanced Materials Research 383-390 (November 2011): 5375–80. http://dx.doi.org/10.4028/www.scientific.net/amr.383-390.5375.

Full text
Abstract:
The automated guided vehicle (AGV) is a key component for the successful implementation of flexible manufacturing systems (FMS). AGVs are wheeled mobile robots (WMR) employed for material handling in the constantly evolving layouts of these modern factory shop floors. As such their ability to navigate autonomously is an equally important aspect to sustain an efficient manufacturing process. However, their mobility efficiency is inherently affected by the unproductive systematic and non-systematic odometry errors. Odometry errors mainly occur due to the mobility configuration of the AGV drive train and the surface characteristics the robot is interacting with. Odometry error accumulates over the distance traveled and leads to severe dead reckoning inaccuracy if the robot’s feedback control mechanism is unable to correct the error fast. This paper proposes an innovative drive train mechanism called dual planetary drive (DPD) that will minimize odometry errors without the need for complex electronic feedback control systems
APA, Harvard, Vancouver, ISO, and other styles
49

Zarei, Jafar, and Abdolrahman Ramezani. "Performance Improvement for Mobile Robot Position Determination Using Cubature Kalman Filter." Journal of Navigation 71, no. 2 (October 2, 2017): 389–402. http://dx.doi.org/10.1017/s0373463317000716.

Full text
Abstract:
The objective of this paper is to accurately determine mobile robots' position and orientation by integrating information received from odometry and an inertial sensor. The position and orientation provided by odometry are subject to different types of errors. To improve the odometry, an inertial measurement unit is exploited to give more reliable attitude information. However, the nonlinear dynamics of these systems and their complexities, such as different sources of errors, make navigation difficult. Since the dynamic models of navigation systems are nonlinear in practice, in this study a Cubature Kalman Filter (CKF) is proposed to estimate and correct the errors of these systems. The information from odometry and a gyroscope is integrated using a CKF. Simulation results are provided to illustrate the superiority and the higher reliability of the proposed approach in comparison with conventional nonlinear filtering algorithms such as the Extended Kalman Filter (EKF).
APA, Harvard, Vancouver, ISO, and other styles
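
The entry uses a Cubature Kalman Filter to fuse odometry with a gyroscope. Below is a generic third-degree CKF time update (cubature-point generation and propagation) to illustrate what distinguishes it from an EKF; the state-transition function f and process-noise covariance Q are left abstract, and the paper's measurement update and specific odometry/gyro models are not reproduced.

```python
import numpy as np

def cubature_points(x, P):
    """Generate the 2n cubature points of the third-degree spherical-radial rule."""
    n = x.size
    S = np.linalg.cholesky(P)                  # P = S S^T
    offsets = np.sqrt(n) * np.hstack((S, -S))  # n x 2n
    return x[:, None] + offsets                # each column is one point

def ckf_predict(x, P, f, Q):
    """CKF time update: propagate each cubature point through f and recombine.

    f: nonlinear state-transition function acting on a single state vector.
    Q: additive process-noise covariance (e.g., odometry/gyro noise).
    """
    pts = cubature_points(x, P)
    prop = np.column_stack([f(pts[:, i]) for i in range(pts.shape[1])])
    x_pred = prop.mean(axis=1)                 # equal weights 1/(2n)
    diff = prop - x_pred[:, None]
    P_pred = diff @ diff.T / prop.shape[1] + Q
    return x_pred, P_pred
```
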
50

Borges, Paulo Vinicius Koerich, and Stephen Vidas. "Practical Infrared Visual Odometry." IEEE Transactions on Intelligent Transportation Systems 17, no. 8 (August 2016): 2205–13. http://dx.doi.org/10.1109/tits.2016.2515625.

Full text
APA, Harvard, Vancouver, ISO, and other styles