
Journal articles on the topic "Visual Odometry"


Consult the top 50 scholarly journal articles on the topic "Visual Odometry".


You can also download the full text of each publication in ".pdf" format and read its abstract online, where these are available in the record's metadata.

Browse journal articles from a wide range of disciplines and compile your bibliography accordingly.

1

Sun, Qian, Ming Diao, Yibing Li, and Ya Zhang. "An improved binocular visual odometry algorithm based on the Random Sample Consensus in visual navigation systems." Industrial Robot: An International Journal 44, no. 4 (2017): 542–51. http://dx.doi.org/10.1108/ir-11-2016-0280.

Abstract:
Purpose: The purpose of this paper is to propose a binocular visual odometry algorithm based on the Random Sample Consensus (RANSAC) for visual navigation systems. Design/methodology/approach: The authors propose a novel binocular visual odometry algorithm based on the features from accelerated segment test (FAST) extractor and an improved matching method based on RANSAC. First, features are detected with the FAST extractor. Second, the detected features are roughly matched using the distance ratio of the nearest neighbor to the second-nearest neighbor. Finally, wrong matched
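The pipeline this abstract describes (FAST-style detection, nearest-neighbor distance-ratio matching, RANSAC outlier rejection) is straightforward to prototype. Below is a minimal OpenCV sketch, not the authors' implementation; ORB is substituted as a readily available FAST-based detector with binary descriptors, and all parameter values are illustrative.

```python
# Minimal sketch: FAST-style detection (via ORB, which builds on FAST),
# nearest-neighbour distance-ratio matching, then RANSAC rejection of
# wrong matches. Illustrative only, not the paper's implementation.
import cv2
import numpy as np

def match_with_ransac(img_left, img_right, ratio=0.8):
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img_left, None)
    kp2, des2 = orb.detectAndCompute(img_right, None)

    # Rough matching: keep a match only if its best neighbour is clearly
    # better than the second-best one (the distance-ratio test).
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in (p for p in pairs if len(p) == 2)
            if m.distance < ratio * n.distance]
    if len(good) < 8:
        return []

    # RANSAC on the fundamental matrix removes the remaining wrong matches.
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
    _, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
    if mask is None:
        return []
    return [m for m, keep in zip(good, mask.ravel()) if keep]
```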
2

Srinivasan, M., S. Zhang, and N. Bidwell. "Visually mediated odometry in honeybees." Journal of Experimental Biology 200, no. 19 (1997): 2513–22. http://dx.doi.org/10.1242/jeb.200.19.2513.

Abstract:
The ability of honeybees to gauge the distances of short flights was investigated under controlled laboratory conditions where a variety of potential odometric cues such as flight duration, energy consumption, image motion, airspeed, inertial navigation and landmarks were manipulated. Our findings indicate that honeybees can indeed measure short distances travelled and that they do so solely by analysis of image motion. Visual odometry seems to rely primarily on the motion that is sensed by the lateral regions of the visual field. Computation of distance flown is re-commenced whenever a promin
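The mechanism reported here, gauging distance purely by integrating image motion, can be caricatured in code: accumulate optical-flow magnitude over frames from a lateral-view camera. A toy sketch under that assumption; the flow-to-distance scale factor is hypothetical and would need calibration.

```python
# Toy caricature of image-motion odometry: accumulate mean optical-flow
# magnitude over a sequence of lateral-view frames as a distance proxy.
# The scale factor is hypothetical; this is not the paper's method.
import cv2
import numpy as np

def visual_distance(frames, metres_per_flow_unit=1.0):
    total = 0.0
    prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    for frame in frames[1:]:
        cur = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev, cur, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        total += np.mean(np.linalg.norm(flow, axis=2))  # mean pixel motion
        prev = cur
    return total * metres_per_flow_unit
```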
3

Wang, Chenggong, Gen Li, Ruiqi Wang, and Lin Li. "Wheeled Robot Visual Odometer Based on Two-dimensional Iterative Closest Point Algorithm." Journal of Physics: Conference Series 2504, no. 1 (2023): 012002. http://dx.doi.org/10.1088/1742-6596/2504/1/012002.

Abstract:
According to the two-dimensional motion characteristics of a planar-motion wheeled robot, the visual odometer was dimensionally reduced in this study. In the feature point matching part of the visual odometer, a contour constraint was used to filter out mismatched feature point pairs (abbreviated as FPP). This method could also filter out FPP that match correctly in the color image but carry a large depth-image error. This offered higher-quality matched FPP for the subsequent interframe motion estimation. Dimension reduction was performed in the int
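For readers unfamiliar with the two-dimensional iterative closest point step at the core of such an odometer, here is a generic sketch: alternate nearest-neighbor correspondence search with a closed-form (SVD/Kabsch) rigid-motion update. This is a textbook baseline, not the paper's contour-constrained variant.

```python
# Generic 2D ICP sketch: alternate nearest-neighbour correspondences with
# a closed-form (SVD / Kabsch) rigid-motion update. Illustrative only.
import numpy as np

def icp_2d(src, dst, iters=20):
    """src, dst: (N, 2) and (M, 2) point sets; returns R (2x2), t (2,)."""
    R, t = np.eye(2), np.zeros(2)
    for _ in range(iters):
        moved = src @ R.T + t
        # Nearest neighbour in dst for every moved source point.
        d = np.linalg.norm(moved[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[np.argmin(d, axis=1)]
        # Closed-form rigid alignment of the matched pairs.
        mu_s, mu_d = moved.mean(0), matched.mean(0)
        U, _, Vt = np.linalg.svd((moved - mu_s).T @ (matched - mu_d))
        dR = (U @ Vt).T
        if np.linalg.det(dR) < 0:          # guard against reflections
            Vt[-1] *= -1
            dR = (U @ Vt).T
        R, t = dR @ R, dR @ (t - mu_s) + mu_d
    return R, t
```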
4

Scaramuzza, Davide, and Friedrich Fraundorfer. "Visual Odometry [Tutorial]." IEEE Robotics & Automation Magazine 18, no. 4 (2011): 80–92. http://dx.doi.org/10.1109/mra.2011.943233.

5

An, Lifeng, Xinyu Zhang, Hongbo Gao, and Yuchao Liu. "Semantic segmentation–aided visual odometry for urban autonomous driving." International Journal of Advanced Robotic Systems 14, no. 5 (2017): 172988141773566. http://dx.doi.org/10.1177/1729881417735667.

Abstract:
Visual odometry plays an important role in urban autonomous driving. Feature-based visual odometry methods sample candidates randomly from all available feature points, while alignment-based visual odometry methods take all pixels into account. Both hold the assumption that the quantitative majority of candidate visual cues represents the true motion. But in real urban traffic scenes, this assumption can be broken by the many dynamic traffic participants: big trucks or buses may occupy the main image parts of a front-view monocular camera and result in wrong visual odom
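The remedy implied by this line of work is to discard visual cues that fall on potentially dynamic objects before estimating motion. A hedged sketch follows, assuming a per-pixel semantic label image is available; the class ids are hypothetical.

```python
# Sketch of segmentation-aided feature filtering: drop keypoints that land
# on pixels labelled as dynamic traffic participants before pose estimation.
# `seg` and the class ids are assumed inputs, not taken from the paper.

def filter_static_keypoints(points, seg, dynamic_ids={11, 12, 13}):
    """points: iterable of (x, y) pixel coordinates; seg: (H, W) label image.
    dynamic_ids are hypothetical labels (e.g. car, truck, bus)."""
    h, w = seg.shape
    return [(x, y) for x, y in points
            if 0 <= int(y) < h and 0 <= int(x) < w
            and int(seg[int(y), int(x)]) not in dynamic_ids]
```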
6

Ciocoiu, Titus, Florin Moldoveanu, and Caius Suliman. "Camera Calibration for Visual Odometry System." Scientific Research and Education in the Air Force 18, no. 1 (2016): 227–32. http://dx.doi.org/10.19062/2247-3173.2016.18.1.30.

7

Wang, Jiabin, and Faqin Gao. "Improved visual inertial odometry based on deep learning." Journal of Physics: Conference Series 2078, no. 1 (2021): 012016. http://dx.doi.org/10.1088/1742-6596/2078/1/012016.

Abstract:
Traditional visual inertial odometry extracts key points according to manually designed rules. However, manually designed extraction rules are easily affected by illumination and viewpoint change and have poor robustness there, resulting in a decline in positioning accuracy. Deep learning methods show strong robustness in key point extraction. In order to improve the positioning accuracy of the visual inertial odometer under illumination and viewpoint change, deep learning is introduced into the visual inertial odometry system for key point detection
8

Borges, Paulo Vinicius Koerich, and Stephen Vidas. "Practical Infrared Visual Odometry." IEEE Transactions on Intelligent Transportation Systems 17, no. 8 (2016): 2205–13. http://dx.doi.org/10.1109/tits.2016.2515625.

9

Gonzalez, Ramon, Francisco Rodriguez, Jose Luis Guzman, Cedric Pradalier, and Roland Siegwart. "Combined visual odometry and visual compass for off-road mobile robots localization." Robotica 30, no. 6 (2011): 865–78. http://dx.doi.org/10.1017/s026357471100110x.

Abstract:
In this paper, we present work related to the application of a visual odometry approach to estimate the location of mobile robots operating in off-road conditions. The visual odometry approach is based on template matching: the robot displacement is estimated through a matching process between two consecutive images. Standard visual odometry has been improved using a visual compass method for orientation estimation. For this purpose, two consumer-grade monocular cameras have been employed. One camera is pointing at the ground under the robot, and the other is looking at
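Template-matching odometry of this kind can be prototyped in a few lines: crop a patch from the previous ground image, find its best match in the current one, and read the displacement off the match location. A minimal sketch, not the authors' implementation.

```python
# Minimal template-matching displacement estimate between two consecutive
# ground images (grayscale uint8 assumed). Illustrative only.
import cv2

def pixel_displacement(prev_gray, cur_gray, patch=64):
    h, w = prev_gray.shape
    cy, cx = h // 2, w // 2
    template = prev_gray[cy - patch // 2:cy + patch // 2,
                         cx - patch // 2:cx + patch // 2]
    score = cv2.matchTemplate(cur_gray, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, (mx, my) = cv2.minMaxLoc(score)   # best-match top-left corner
    # Shift of the patch centre between the two frames, in pixels.
    return (mx + patch // 2 - cx, my + patch // 2 - cy)
```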
10

Aguiar, André, Filipe Santos, Armando Jorge Sousa, and Luís Santos. "FAST-FUSION: An Improved Accuracy Omnidirectional Visual Odometry System with Sensor Fusion and GPU Optimization for Embedded Low Cost Hardware." Applied Sciences 9, no. 24 (2019): 5516. http://dx.doi.org/10.3390/app9245516.

Abstract:
The main task while developing a mobile robot is to achieve accurate and robust navigation in a given environment. To achieve such a goal, the ability of the robot to localize itself is crucial. Outdoors, namely in agricultural environments, this task becomes a real challenge because odometry is not always usable and global navigation satellite system (GNSS) signals are blocked or significantly degraded. To answer this challenge, this work presents a solution for outdoor localization based on an omnidirectional visual odometry technique fused with a gyroscope and a low cost planar light detect
11

Martínez-García, Edgar Alonso, Joaquín Rivero-Juárez, Luz Abril Torres-Méndez, and Jorge Enrique Rodas-Osollo. "Divergent trinocular vision observers design for extended Kalman filter robot state estimation." Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering 233, no. 5 (2018): 524–47. http://dx.doi.org/10.1177/0959651818800908.

Abstract:
Here, we report the design of two deterministic observers that exploit the capabilities of a home-made divergent trinocular visual sensor to sense depth data. The three-dimensional key points that the observers can measure are triangulated for visual odometry and estimated by an extended Kalman filter. This work deals with a four-wheel-drive mobile robot with four passive suspensions. The direct and inverse kinematic solutions are deduced and used for the updating and prediction models of the extended Kalman filter as feedback for the robot’s position controller. The state-estimation visual od
12

Alapetite, Alexandre, Zhongyu Wang, John Paulin Hansen, Marcin Zajączkowski, and Mikołaj Patalan. "Comparison of Three Off-the-Shelf Visual Odometry Systems." Robotics 9, no. 3 (2020): 56. http://dx.doi.org/10.3390/robotics9030056.

Abstract:
Positioning is an essential aspect of robot navigation, and visual odometry is an important technique for continuously updating the robot's internal position estimate, especially indoors where GPS (Global Positioning System) is unavailable. Visual odometry uses one or more cameras to find visual clues and estimate robot movements in 3D, relatively. Recent progress has been made, especially with fully integrated systems such as the RealSense T265 from Intel, which is the focus of this article. We compare three visual odometry systems (and one wheel odometry, as a known baseline), on
13

Bazeille, Stephane, Emmanuel Battesti, and David Filliat. "A Light Visual Mapping and Navigation Framework for Low-Cost Robots." Journal of Intelligent Systems 24, no. 4 (2015): 505–24. http://dx.doi.org/10.1515/jisys-2014-0116.

Abstract:
We address the problems of localization, mapping, and guidance for robots with limited computational resources by combining vision with the metrical information given by the robot odometry. We propose in this article a novel light and robust topometric simultaneous localization and mapping framework using appearance-based visual loop-closure detection enhanced with the odometry. The main advantage of this combination is that the odometry makes the loop-closure detection more accurate and reactive, while the loop-closure detection enables the long-term use of odometry for guidance by co
14

Zhu, Zihan, Yi Zhang, Weijun Wang, Wei Feng, Haowen Luo, and Yaojie Zhang. "Adaptive Adjustment of Factor’s Weight for a Multi-Sensor SLAM." Journal of Physics: Conference Series 2451, no. 1 (2023): 012004. http://dx.doi.org/10.1088/1742-6596/2451/1/012004.

Abstract:
A multi-sensor fusion simultaneous localization and mapping (SLAM) method based on factor graph optimization that can adaptively modify the weights of graph factors is proposed in this study, to enhance the localization and mapping capability of autonomous robots in tough situations. First, the algorithm fuses multi-line lidar, a monocular camera, and an inertial measurement unit (IMU) in the odometry. Second, the factor graph is constructed using lidar and visual odometry as the unary edge and binary edge constraints, respectively, with the motion determined by IMU odometry serving as
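The role of factor weights is easiest to see in a toy pose-graph: each constraint enters the least-squares objective scaled by a confidence weight, so a degraded odometry factor can be down-weighted. A one-dimensional sketch with invented numbers, standing in for the paper's adaptive scheme.

```python
# Toy weighted pose-graph on a line: poses x0..x3, relative (binary)
# factors from odometry and one absolute (unary) factor, each scaled by a
# weight playing the role of the adaptive confidence. Illustrative only.
import numpy as np
from scipy.optimize import least_squares

# (i, j, measured x_j - x_i, weight) -- the weights are hypothetical
relative = [(0, 1, 1.0, 1.0), (1, 2, 1.1, 0.2), (2, 3, 0.9, 1.0)]
absolute = [(3, 3.05, 2.0)]            # (i, measured x_i, weight)

def residuals(x):
    r = [w * (x[j] - x[i] - z) for i, j, z, w in relative]
    r += [w * (x[i] - z) for i, z, w in absolute]
    r.append(10.0 * x[0])              # pin the first pose near zero
    return r

sol = least_squares(residuals, np.zeros(4))
print(sol.x)                           # optimised poses
```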
15

Yuan, Shuangjie, Jun Zhang, Yujia Lin, and Lu Yang. "Hybrid self-supervised monocular visual odometry system based on spatio-temporal features." Electronic Research Archive 32, no. 5 (2024): 3543–68. http://dx.doi.org/10.3934/era.2024163.

Abstract:
For the autonomous and intelligent operation of robots in unknown environments, simultaneous localization and mapping (SLAM) is essential. Since the proposal of visual odometry, the use of visual odometry in the mapping process has greatly advanced the development of pure visual SLAM techniques. However, the main challenges in current monocular odometry algorithms are the poor generalization of traditional methods and the low interpretability of deep learning-based methods. This paper presented a hybrid self-supervised visual monocular odometry framework that combined
16

Jeon, Hyun-Ho, Jin-Hyung Kim, and Yun-Ho Ko. "RAFSet (Robust Aged Feature Set)-Based Monocular Visual Odometry." Journal of Institute of Control, Robotics and Systems 23, no. 12 (2017): 1063–69. http://dx.doi.org/10.5302/j.icros.2017.17.0160.

17

Jiang, Feng, Jianjun Gu, Shiqiang Zhu, Te Li, and Xinliang Zhong. "Visual Odometry Based 3D-Reconstruction." Journal of Physics: Conference Series 1961, no. 1 (2021): 012074. http://dx.doi.org/10.1088/1742-6596/1961/1/012074.

18

Comport, A. I., E. Malis, and P. Rives. "Real-time Quadrifocal Visual Odometry." International Journal of Robotics Research 29, no. 2-3 (2010): 245–66. http://dx.doi.org/10.1177/0278364909356601.

19

Wang, Yandong, Tao Zhang, Yuanchao Wang, Jingwei Ma, Yanhui Li, and Jingzhuang Han. "Compass aided visual-inertial odometry." Journal of Visual Communication and Image Representation 60 (April 2019): 101–15. http://dx.doi.org/10.1016/j.jvcir.2018.12.029.

20

Lappe, M., M. Jenkin, and L. Harris. "Visual odometry by leaky integration." Journal of Vision 7, no. 9 (2010): 147. http://dx.doi.org/10.1167/7.9.147.

21

Prasad Singh, Indushekhar. "Visual Odometry for Autonomous Vehicles." International Journal of Advanced Research 7, no. 9 (2019): 1136–44. http://dx.doi.org/10.21474/ijar01/9765.

22

Zhang, Jieqiang, and Ryuichi Ueda. "Visual Odometry from Brick Road." Proceedings of JSME Annual Conference on Robotics and Mechatronics (Robomec) 2022 (2022): 2P1-I12. http://dx.doi.org/10.1299/jsmermd.2022.2p1-i12.

23

Xu, Shuchen, Yongrong Sun, Kedong Zhao, Xiyu Fu, and Shuaishuai Wang. "Road-Network-Map-Assisted Vehicle Positioning Based on Pose Graph Optimization." Sensors 23, no. 17 (2023): 7581. http://dx.doi.org/10.3390/s23177581.

Abstract:
Satellite signals are easily lost in urban areas, which makes it difficult to locate vehicles with high precision. Visual odometry has been increasingly applied in navigation systems to solve this problem. However, visual odometry relies on dead-reckoning, where a slight positioning error accumulates over time, eventually resulting in a catastrophic positioning error. Thus, this paper proposes a road-network-map-assisted vehicle positioning method based on the theory of pose graph optimization. This method takes the dead-reckoning result of visual odometry as the input and introduces
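The failure mode motivating this work, dead-reckoning drift, is easy to make concrete: compose relative pose increments with a small systematic error and watch the position error grow. A toy simulation with invented numbers:

```python
# Illustration of dead-reckoning drift: compose SE(2) increments and watch
# a tiny per-step heading bias grow into a large position error.
import numpy as np

def compose(pose, delta):
    x, y, th = pose
    dx, dy, dth = delta
    return (x + dx * np.cos(th) - dy * np.sin(th),
            y + dx * np.sin(th) + dy * np.cos(th),
            th + dth)

true_pose, vo_pose = (0.0, 0.0, 0.0), (0.0, 0.0, 0.0)
for _ in range(1000):                    # 1 m forward per step
    true_pose = compose(true_pose, (1.0, 0.0, 0.0))
    vo_pose = compose(vo_pose, (1.0, 0.0, 0.001))  # 1 mrad heading bias
err = np.hypot(true_pose[0] - vo_pose[0], true_pose[1] - vo_pose[1])
print(f"position error after 1 km: {err:.1f} m")  # hundreds of metres
```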
24

Chen, Baifan, Haowu Zhao, Ruyi Zhu, and Yemin Hu. "Marked-LIEO: Visual Marker-Aided LiDAR/IMU/Encoder Integrated Odometry." Sensors 22, no. 13 (2022): 4749. http://dx.doi.org/10.3390/s22134749.

Abstract:
In this paper, we propose a visual marker-aided LiDAR/IMU/encoder integrated odometry, Marked-LIEO, to achieve pose estimation of mobile robots in an indoor long corridor environment. In the first stage, we design the pre-integration model of encoder and IMU respectively to realize the pose estimation combined with the pose estimation from the second stage providing prediction for the LiDAR odometry. In the second stage, we design low-frequency visual marker odometry, which is optimized jointly with LiDAR odometry to obtain the final pose estimation. In view of the wheel slipping and LiDAR deg
25

Huang, Gang, Zhaozheng Hu, Qianwen Tao, Fan Zhang, and Zhe Zhou. "Improved intelligent vehicle self-localization with integration of sparse visual map and high-speed pavement visual odometry." Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering 235, no. 1 (2020): 177–87. http://dx.doi.org/10.1177/0954407020943306.

Abstract:
Localization is a fundamental requirement for intelligent vehicles. Conventional localization methods usually suffer from various limitations, such as low accuracy and blocked areas for Global Positioning System, high cost for inertial navigation system or light detection and ranging, and low robustness for visual simultaneous localization and mapping or visual odometry. To overcome these problems, we propose a novel localization method integrated with a sparse visual map and a high-speed pavement visual odometry. We use a lateral-view camera to sense the sparse visual map node for accurate ma
26

Wang, Haoran, Zhenglong Li, Hongwei Wang, Wenyan Cao, Fujing Zhang, and Yuheng Wang. "A Roadheader Positioning Method Based on Multi-Sensor Fusion." Electronics 12, no. 22 (2023): 4556. http://dx.doi.org/10.3390/electronics12224556.

Abstract:
In coal mines, accurate positioning is vital for roadheader equipment. However, most roadheaders use a standalone strapdown inertial navigation system (SINS) which faces challenges like error accumulation, drift, initial alignment needs, temperature sensitivity, and the demand for high-quality sensors. In this paper, a roadheader Visual–Inertial Odometry (VIO) system is proposed, combining SINS and stereo visual odometry to adjust to coal mine environments. Given the inherently dimly lit conditions of coal mines, our system includes an image-enhancement module to preprocess images, aiding in f
27

Thapa, Vikas, Abhishek Sharma, Beena Gairola, Amit K. Mondal, Vindhya Devalla, and Ravi K. Patel. "A Review on Visual Odometry Techniques for Mobile Robots: Types and Challenges." Recent Advances in Electrical & Electronic Engineering (Formerly Recent Patents on Electrical & Electronic Engineering) 13, no. 5 (2020): 618–31. http://dx.doi.org/10.2174/2352096512666191004142546.

Abstract:
For autonomous navigation, tracking, and obstacle avoidance, a mobile robot must have knowledge of its position and localization over time. Among the available techniques for odometry, vision-based odometry is a robust and economical technique. In addition, a combination of position estimation from odometry with interpretations of the surroundings using a mobile camera is effective. This paper presents an overview of current visual odometry approaches, applications, and challenges in mobile robots. The study offers a comparative analysis of the different available techniques and algorithms associ
28

Zhao, Zixu, Yucheng Zhang, Long Long, Zaiwang Lu, and Jinglin Shi. "Efficient and adaptive lidar–visual–inertial odometry for agricultural unmanned ground vehicle." International Journal of Advanced Robotic Systems 19, no. 2 (2022): 172988062210949. http://dx.doi.org/10.1177/17298806221094925.

Abstract:
The accuracy of agricultural unmanned ground vehicles’ localization directly affects the accuracy of their navigation. However, due to the changeable environment and fewer features in the agricultural scene, it is challenging for these unmanned ground vehicles to localize precisely in global positioning system-denied areas with a single sensor. In this article, we present an efficient and adaptive sensor-fusion odometry framework based on simultaneous localization and mapping to handle the localization problems of agricultural unmanned ground vehicles without the assistance of a global positio
29

Wan, Yingcai, Qiankun Zhao, Cheng Guo, Chenlong Xu, and Lijing Fang. "Multi-Sensor Fusion Self-Supervised Deep Odometry and Depth Estimation." Remote Sensing 14, no. 5 (2022): 1228. http://dx.doi.org/10.3390/rs14051228.

Abstract:
This paper presents a new deep visual-inertial odometry and depth estimation framework for improving the accuracy of depth estimation and ego-motion from image sequences and inertial measurement unit (IMU) raw data. The proposed framework predicts ego-motion and depth with absolute scale in a self-supervised manner. We first capture dense features and solve the pose by deep visual odometry (DVO), and then combine the pose estimation pipeline with deep inertial odometry (DIO) by the extended Kalman filter (EKF) method to produce the sparse depth and pose with absolute scale. We then join deep v
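A full deep visual-inertial pipeline is beyond a sketch, but the EKF glue described here, predicting with inertial odometry and correcting with visual-odometry measurements, can be illustrated with a linear Kalman filter. A toy one-dimensional stand-in, not the paper's filter; all noise values are invented.

```python
# Toy 1-D Kalman filter in the spirit of the described fusion: inertial
# odometry drives the prediction, visual-odometry positions arrive as
# measurements. State: [position, velocity]. All numbers are invented.
import numpy as np

dt = 0.1
F = np.array([[1, dt], [0, 1]])        # constant-velocity model
Q = np.diag([1e-4, 1e-3])              # inertial process noise
H = np.array([[1.0, 0.0]])             # visual odometry measures position
R = np.array([[1e-2]])                 # visual measurement noise

def step(x, P, accel, vo_position):
    # Predict with the inertial input (acceleration enters the velocity).
    x = F @ x + np.array([0.0, accel * dt])
    P = F @ P @ F.T + Q
    # Update with the visual-odometry position measurement.
    y = np.array([vo_position]) - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ y, (np.eye(2) - K @ H) @ P

x, P = step(np.zeros(2), np.eye(2), accel=0.2, vo_position=0.01)
```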
30

Liu, Qiang, Haidong Zhang, Yiming Xu, and Li Wang. "Unsupervised Deep Learning-Based RGB-D Visual Odometry." Applied Sciences 10, no. 16 (2020): 5426. http://dx.doi.org/10.3390/app10165426.

Abstract:
Recently, deep learning frameworks have been deployed in visual odometry systems and achieved comparable results to traditional feature matching based systems. However, most deep learning-based frameworks inevitably need labeled data as ground truth for training. On the other hand, monocular odometry systems are incapable of restoring absolute scale. External or prior information has to be introduced for scale recovery. To solve these problems, we present a novel deep learning-based RGB-D visual odometry system. Our two main contributions are: (i) during network training and pose estimation, t
31

Kim, Kyu-Won, Tae-Ki Jung, Seong-Hun Seo, and Gyu-In Jee. "Development of Tightly Coupled based LIDAR-Visual-Inertial Odometry." Journal of Institute of Control, Robotics and Systems 26, no. 8 (2020): 597–603. http://dx.doi.org/10.5302/j.icros.2020.20.0076.

32

Qiu, Haiyang, Xu Zhang, Hui Wang, et al. "A Robust and Integrated Visual Odometry Framework Exploiting the Optical Flow and Feature Point Method." Sensors 23, no. 20 (2023): 8655. http://dx.doi.org/10.3390/s23208655.

Abstract:
In this paper, we propose a robust and integrated visual odometry framework exploiting the optical flow and feature point method that achieves faster pose estimation with considerable accuracy and robustness during the odometry process. Our method utilizes optical flow tracking to accelerate the feature point matching process. In the odometry, two visual odometry methods are used: a global feature point method and a local feature point method. When optical flow tracking is good and enough key points are successfully matched, the local feature point method utilizes prior info
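The speed-up described here comes from tracking the previous frame's keypoints with pyramidal Lucas-Kanade optical flow instead of re-extracting and matching descriptors. A minimal sketch of that step using standard OpenCV calls, not the authors' code:

```python
# Minimal sketch of optical-flow-accelerated matching: track the previous
# frame's keypoints into the current frame with pyramidal Lucas-Kanade
# rather than re-matching descriptors. Illustrative only.
import cv2

def track_keypoints(prev_gray, cur_gray, prev_pts):
    """prev_pts: (N, 1, 2) float32 array of keypoint coordinates."""
    cur_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, cur_gray, prev_pts, None,
        winSize=(21, 21), maxLevel=3)
    ok = status.ravel() == 1               # successfully tracked points
    return prev_pts[ok], cur_pts[ok]
```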
33

Liu, Fei, Yashar Balazadegan Sarvrood, and Yang Gao. "Implementation and Analysis of Tightly Integrated INS/Stereo VO for Land Vehicle Navigation." Journal of Navigation 71, no. 1 (2017): 83–99. http://dx.doi.org/10.1017/s037346331700056x.

Abstract:
Tight integration of inertial sensors and stereo visual odometry to bridge Global Navigation Satellite System (GNSS) signal outages in challenging environments has drawn increasing attention. However, the details of how feature pixel coordinates from visual odometry can be directly used to limit the quick drift of inertial sensors in a tight integration implementation have rarely been provided in previous works. For instance, a key challenge in tight integration of inertial and stereo visual datasets is how to correct inertial sensor errors using the pixel measurements from visual odometry, ho
34

Ramezani, M., D. Acharya, F. Gu, and K. Khoshelham. "INDOOR POSITIONING BY VISUAL-INERTIAL ODOMETRY." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences IV-2/W4 (September 14, 2017): 371–76. http://dx.doi.org/10.5194/isprs-annals-iv-2-w4-371-2017.

Abstract:
Indoor positioning is a fundamental requirement of many indoor location-based services and applications. In this paper, we explore the potential of low-cost and widely available visual and inertial sensors for indoor positioning. We describe the Visual-Inertial Odometry (VIO) approach and propose a measurement model for omnidirectional visual-inertial odometry (OVIO). The results of experiments in two simulated indoor environments show that the OVIO approach outperforms VIO and achieves a positioning accuracy of 1.1 % of the trajectory length.
35

Luo, Lishu, Fulun Peng, and Longhui Dong. "Improved Multi-Sensor Fusion Dynamic Odometry Based on Neural Networks." Sensors 24, no. 19 (2024): 6193. http://dx.doi.org/10.3390/s24196193.

Abstract:
High-precision simultaneous localization and mapping (SLAM) in dynamic real-world environments plays a crucial role in autonomous robot navigation, self-driving cars, and drone control. To address this dynamic localization issue, in this paper, a dynamic odometry method is proposed based on FAST-LIVO, a fast LiDAR (light detection and ranging)–inertial–visual odometry system, integrating neural networks with laser, camera, and inertial measurement unit modalities. The method first constructs visual–inertial and LiDAR–inertial odometry subsystems. Then, a lightweight neural network is used to r
36

Mostofi, N., A. Moussa, M. Elhabiby, and N. El-Sheimy. "RGB-D Indoor Plane-based 3D-Modeling using Autonomous Robot." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-1 (November 7, 2014): 301–8. http://dx.doi.org/10.5194/isprsarchives-xl-1-301-2014.

Abstract:
3D models of indoor environments provide rich information that can facilitate the disambiguation of different places and ease the familiarization of remote users with any indoor environment. In this research work, we describe a system for visual odometry and 3D modeling using information from an RGB-D sensor (camera). The visual odometry method estimates the relative pose of consecutive RGB-D frames through feature extraction and matching techniques. The pose estimated by the visual odometry algorithm is then refined with the iterative closest point (ICP) method. The switching techniqu
37

Peng, Gang, Qiang Gao, Yue Xu, Jianfeng Li, Zhang Deng, and Cong Li. "Pose Estimation Based on Bidirectional Visual–Inertial Odometry with 3D LiDAR (BV-LIO)." Remote Sensing 16, no. 16 (2024): 2970. http://dx.doi.org/10.3390/rs16162970.

Abstract:
Due to the limitations of a single sensor, whether camera only or LiDAR only, visual SLAM detects few effective features under poor lighting or in texture-less scenes, and LiDAR SLAM degrades in unstructured environments and open spaces, which reduces the accuracy of pose estimation and the quality of mapping. In order to solve this problem, drawing on the high efficiency of visual odometry and the high accuracy of LiDAR odometry, this paper investigates the multi-sensor fusion of bidirectional visual–inertial odometry with 3D LiDAR for pose estimation. This method can couple the
38

Singh, Gurpreet, Deepam Goyal, and Vijay Kumar. "Mobile robot localization using visual odometry in indoor environments with TurtleBot4." IAES International Journal of Artificial Intelligence (IJ-AI) 14, no. 1 (2025): 760. http://dx.doi.org/10.11591/ijai.v14.i1.pp760-768.

Abstract:
Accurate localization is crucial for mobile robots to navigate autonomously in indoor environments. This article presents a novel visual odometry (VO) approach for localizing a TurtleBot4 mobile robot in indoor settings using only an onboard red green blue – depth (RGB-D) camera. Motivated by the challenges posed by slippery floors and the limitations of traditional wheel odometry, an attempt has been made to develop a reliable, accurate, and low-cost localization solution. The present method extracts oriented FAST and rotated BRIEF (ORB) features for feature extraction and matching using brut
39

Singh, Gurpreet, Deepam Goyal, and Vijay Kumar. "Mobile robot localization using visual odometry in indoor environments with TurtleBot4." IAES International Journal of Artificial Intelligence (IJ-AI) 14, no. 1 (2025): 760–68. https://doi.org/10.11591/ijai.v14.i1.pp760-768.

40

Terekhov, Mikhail A. "Overview of Modern Approaches to Visual Odometry." Computer tools in education, no. 3 (September 30, 2019): 5–14. http://dx.doi.org/10.32603/2071-2340-2019-3-5-14.

Abstract:
In this paper we describe the tasks of Visual Odometry and Simultaneous Localization and Mapping systems along with their main applications. Next, we list some approaches used by the scientific community to create such systems in different time periods. We then proceed to explain in detail the more recent method based on bundle adjustment and show some of its variations for different applications. Finally, we overview present-day research directions in the field of visual odometry and briefly present our work.
41

Gao, Wenxiang, Guizhi Yang, Yuzhang Wang, Jiaxin Ke, Xungao Zhong, and Lihua Chen. "Robust visual odometry based on image enhancement." Journal of Physics: Conference Series 2402, no. 1 (2022): 012010. http://dx.doi.org/10.1088/1742-6596/2402/1/012010.

Abstract:
With the rise of augmented reality and autonomous driving, visual SLAM (simultaneous localization and mapping) has become a focus of research again. Visual odometry is an important part of visual SLAM. Light that is too dark or too strong reduces image quality, resulting in large deviations in the visual odometry trajectory. Therefore, this paper proposes a visual odometry with image enhancement. The lighting state of the image is identified by estimating the brightness value of the input image. Gamma correction based on truncated cumulative distribution function modulation is used to e
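The first steps the abstract outlines, classifying a frame's lighting state from its brightness and then applying gamma correction, are simple to illustrate. The sketch below substitutes a plain fixed-gamma lookup table for the paper's truncated-CDF modulation; the thresholds and gamma values are invented.

```python
# Sketch of brightness-adaptive gamma correction for an odometry front
# end: estimate the lighting state from mean intensity, then brighten dark
# frames or darken over-exposed ones via a gamma LUT. The thresholds and
# gammas are invented; the paper's truncated-CDF modulation is not shown.
import cv2
import numpy as np

def enhance(gray):
    """gray: uint8 grayscale frame."""
    mean = gray.mean()
    if mean < 70:        # too dark   -> gamma < 1 brightens
        gamma = 0.6
    elif mean > 180:     # too bright -> gamma > 1 darkens
        gamma = 1.6
    else:
        return gray      # acceptable lighting, leave untouched
    lut = ((np.arange(256) / 255.0) ** gamma) * 255
    return cv2.LUT(gray, lut.astype(np.uint8))
```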
42

Das, Anweshan, Jos Elfring, and Gijs Dubbelman. "Real-Time Vehicle Positioning and Mapping Using Graph Optimization." Sensors 21, no. 8 (2021): 2815. http://dx.doi.org/10.3390/s21082815.

Abstract:
In this work, we propose and evaluate a pose-graph optimization-based real-time multi-sensor fusion framework for vehicle positioning using low-cost automotive-grade sensors. Pose-graphs can model multiple absolute and relative vehicle positioning sensor measurements and can be optimized using nonlinear techniques. We model pose-graphs using measurements from a precise stereo camera-based visual odometry system, a robust odometry system using the in-vehicle velocity and yaw-rate sensor, and an automotive-grade GNSS receiver. Our evaluation is based on a dataset with 180 km of vehicle trajector
43

Saha, Arindam, Bibhas Chandra Dhara, Saiyed Umer, Ahmad Ali AlZubi, Jazem Mutared Alanazi, and Kulakov Yurii. "CORB2I-SLAM: An Adaptive Collaborative Visual-Inertial SLAM for Multiple Robots." Electronics 11, no. 18 (2022): 2814. http://dx.doi.org/10.3390/electronics11182814.

Abstract:
The generation of robust global maps of an unknown cluttered environment through a collaborative robotic framework is challenging. We present a collaborative SLAM framework, CORB2I-SLAM, in which each participating robot carries a camera (monocular/stereo/RGB-D) and an inertial sensor to run odometry. A centralized server stores all the maps and executes processor-intensive tasks, e.g., loop closing, map merging, and global optimization. The proposed framework uses well-established Visual-Inertial Odometry (VIO), and can be adapted to use Visual Odometry (VO) when the measurements from inertia
44

Congram, Benjamin, and Timothy Barfoot. "Field Testing and Evaluation of Single-Receiver GPS Odometry for Use in Robotic Navigation." Field Robotics 2, no. 1 (2022): 1849–73. http://dx.doi.org/10.55417/fr.2022057.

Abstract:
Mobile robots rely on odometry to navigate in areas where localization fails. Visual odometry (VO), for instance, is a common solution for obtaining robust and consistent relative motion estimates of the vehicle frame. In contrast, Global Positioning System (GPS) measurements are typically used for absolute positioning and localization. However, when the constraint on absolute accuracy is relaxed, accurate relative position estimates can be found with one single-frequency GPS receiver by using time-differenced carrier phase (TDCP) measurements. In this paper, we implement and field test a sing
45

Zhu, Bihong, Aihua Yu, Beiping Hou, Gang Li, and Yong Zhang. "A Novel Visual SLAM Based on Multiple Deep Neural Networks." Applied Sciences 13, no. 17 (2023): 9630. http://dx.doi.org/10.3390/app13179630.

Abstract:
Current visual simultaneous localization and mapping (SLAM) systems require matched feature point pairs to estimate camera pose and construct environmental maps; they therefore suffer from the poor performance of visual feature matchers. To address this problem, a visual SLAM using a deep feature matcher is proposed, which is mainly composed of three parallel threads: Visual Odometry, Backend Optimizer and LoopClosing. In the Visual Odometry, the deep feature extractor with convolutional neural networks is utilized for extracting feature points in each image frame. Then, the deep
46

Guizilini, Vitor, and Fabio Ramos. "Semi-parametric learning for visual odometry." International Journal of Robotics Research 32, no. 5 (2013): 526–46. http://dx.doi.org/10.1177/0278364912472245.

47

Renner, Alpha, Lazar Supic, Andreea Danielescu, et al. "Visual odometry with neuromorphic resonator networks." Nature Machine Intelligence 6, no. 6 (2024): 653–63. http://dx.doi.org/10.1038/s42256-024-00846-2.

48

Wang, Kunfeng, Kaichun Zhao, Wenshuai Lu, and Zheng You. "Stereo Event-Based Visual–Inertial Odometry." Sensors 25, no. 3 (2025): 887. https://doi.org/10.3390/s25030887.

Abstract:
Event-based cameras are a new type of vision sensor in which pixels operate independently and respond asynchronously to changes in brightness with microsecond resolution, instead of providing standard intensity frames. Compared with traditional cameras, event-based cameras have low latency, no motion blur, and high dynamic range (HDR), which provide possibilities for robots to deal with some challenging scenes. We propose a visual–inertial odometry for stereo event-based cameras based on Error-State Kalman Filter (ESKF). The vision module updates the pose by relying on the edge alignment of a
49

Xu, Shaoyan, Tao Wang, Congyan Lang, Songhe Feng, and Yi Jin. "Graph-based visual odometry for VSLAM." Industrial Robot: An International Journal 45, no. 5 (2018): 679–87. http://dx.doi.org/10.1108/ir-04-2018-0061.

Abstract:
Purpose: Typical feature-matching algorithms use only unary constraints on appearances to build correspondences, where little structure information is used. Ignoring structure information makes them sensitive to various environmental perturbations. The purpose of this paper is to propose a novel graph-based method that aims to improve matching accuracy by fully exploiting structure information. Design/methodology/approach: Instead of viewing a frame as a simple collection of keypoints, the proposed approach organizes a frame as a graph by treating each keypoint as a vertex, where structure in
50

Zhu, Kaiying, Xiaoyan Jiang, Zhijun Fang, Yongbin Gao, Hamido Fujita, and Jenq-Neng Hwang. "Photometric transfer for direct visual odometry." Knowledge-Based Systems 213 (February 2021): 106671. http://dx.doi.org/10.1016/j.knosys.2020.106671.
