Selection of scientific literature on the topic "Odometry estimation"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a type of source:

Consult the lists of current articles, books, theses, reports, and other scholarly sources on the topic "Odometry estimation".

Next to every entry in the bibliography you will find an "Add to bibliography" option. Use it, and your bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read an online abstract of the work, provided the corresponding parameters are available in the metadata.

Journal articles on the topic "Odometry estimation"

1

Nurmaini, Siti, and Sahat Pangidoan. "Localization of Leader-Follower Robot Using Extended Kalman Filter." Computer Engineering and Applications Journal 7, no. 2 (2018): 95–108. http://dx.doi.org/10.18495/comengapp.v7i2.253.

Abstract:
A non-holonomic leader-follower robot must be able to determine its own position in order to navigate autonomously in its environment; this problem is known as localization. A common way to estimate the robot pose is to use odometry. However, odometry measurements may yield inaccurate results due to wheel slippage or other small noise sources. In this research, the Extended Kalman Filter (EKF) is proposed to minimize the error caused by the odometry measurement. The EKF algorithm works by fusing odometry and landmark information to produce a better estimation. …
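The odometry-plus-landmark fusion described in this abstract follows the usual EKF predict/correct pattern. Below is a rough, self-contained sketch of that pattern — not the paper's implementation — assuming a planar unicycle motion model, a single known landmark observed as range and bearing, and made-up noise parameters:

    import numpy as np

    def wrap_angle(a):
        """Wrap an angle to (-pi, pi]."""
        return (a + np.pi) % (2 * np.pi) - np.pi

    def ekf_predict(x, P, v, w, dt, Q):
        """Propagate the pose [x, y, theta] with a unicycle odometry model."""
        px, py, th = x
        x_pred = np.array([px + v * dt * np.cos(th),
                           py + v * dt * np.sin(th),
                           wrap_angle(th + w * dt)])
        # Jacobian of the motion model with respect to the state
        F = np.array([[1.0, 0.0, -v * dt * np.sin(th)],
                      [0.0, 1.0,  v * dt * np.cos(th)],
                      [0.0, 0.0,  1.0]])
        return x_pred, F @ P @ F.T + Q

    def ekf_update_landmark(x, P, z, landmark, R):
        """Correct the pose using a range/bearing observation of a known landmark."""
        dx, dy = landmark[0] - x[0], landmark[1] - x[1]
        q = dx * dx + dy * dy
        z_hat = np.array([np.sqrt(q), wrap_angle(np.arctan2(dy, dx) - x[2])])
        H = np.array([[-dx / np.sqrt(q), -dy / np.sqrt(q), 0.0],
                      [dy / q,           -dx / q,         -1.0]])
        innovation = z - z_hat
        innovation[1] = wrap_angle(innovation[1])
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x_new = x + K @ innovation
        x_new[2] = wrap_angle(x_new[2])
        return x_new, (np.eye(3) - K @ H) @ P

    # Toy usage: one predict/update cycle with illustrative numbers.
    x, P = np.zeros(3), np.eye(3) * 0.01
    Q = np.diag([0.02, 0.02, 0.01]) ** 2
    R = np.diag([0.10, 0.05]) ** 2
    x, P = ekf_predict(x, P, v=0.5, w=0.1, dt=0.1, Q=Q)
    x, P = ekf_update_landmark(x, P, z=np.array([2.0, 0.3]),
                               landmark=np.array([2.0, 0.5]), R=R)
    print(x)

The prediction step integrates the odometry and inflates the covariance; the landmark update shrinks it again, which is what keeps the pose error bounded.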
2

Li, Q., C. Wang, S. Chen, et al. "DEEP LIDAR ODOMETRY." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W13 (June 5, 2019): 1681–86. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w13-1681-2019.

Abstract:
Most existing lidar odometry estimation strategies are formulated under a standard framework that includes feature selection and pose estimation through feature matching. In this work, we present a novel pipeline called LO-Net for lidar odometry estimation from 3D lidar scanning data using deep convolutional networks. The network is trained in an end-to-end manner; it infers 6-DoF poses from the encoded sequential lidar data. Based on the newly designed mask-weighted geometric constraint loss, the network automatically learns effective feature …
3

Martínez-García, Edgar Alonso, Joaquín Rivero-Juárez, Luz Abril Torres-Méndez, and Jorge Enrique Rodas-Osollo. "Divergent trinocular vision observers design for extended Kalman filter robot state estimation." Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering 233, no. 5 (2018): 524–47. http://dx.doi.org/10.1177/0959651818800908.

Abstract:
Here, we report the design of two deterministic observers that exploit the capabilities of a home-made divergent trinocular visual sensor to sense depth data. The three-dimensional key points that the observers can measure are triangulated for visual odometry and estimated by an extended Kalman filter. This work deals with a four-wheel-drive mobile robot with four passive suspensions. The direct and inverse kinematic solutions are deduced and used for the updating and prediction models of the extended Kalman filter as feedback for the robot's position controller. …
4

Wu, Qin Fan, Qing Li, and Nong Cheng. "Visual Odometry and 3D Mapping in Indoor Environments." Applied Mechanics and Materials 336-338 (July 2013): 348–54. http://dx.doi.org/10.4028/www.scientific.net/amm.336-338.348.

Abstract:
This paper presents a robust state estimation and 3D environment modeling approach that enables a Micro Aerial Vehicle (MAV) to operate in challenging GPS-denied indoor environments. A fast, accurate, and robust approach to visual odometry is developed based on the Microsoft Kinect. Discriminative features are extracted from RGB images and matched across consecutive frames. A robust least-squares estimator is applied to obtain the relative motion estimate. All computation is performed in real time, providing high-frequency 6 degree-of-freedom state estimation. …
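At the core of such RGB-D visual odometry pipelines is the estimation of a rigid motion between matched 3D points from consecutive frames. The following minimal sketch shows the closed-form least-squares alignment (Kabsch/Umeyama) on synthetic correspondences; the paper's actual pipeline adds feature extraction, matching, and a robust estimator on top, so everything here is illustrative:

    import numpy as np

    def rigid_transform_3d(src, dst):
        """Least-squares rigid transform (R, t) that maps src points onto dst."""
        c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
        H = (src - c_src).T @ (dst - c_dst)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:        # repair an improper rotation (reflection)
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        t = c_dst - R @ c_src
        return R, t

    # Toy usage: matched 3D points from two consecutive frames (synthetic data).
    rng = np.random.default_rng(0)
    pts_prev = rng.uniform(-1.0, 1.0, size=(50, 3))
    ang = 0.1
    R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                       [np.sin(ang),  np.cos(ang), 0.0],
                       [0.0,          0.0,         1.0]])
    t_true = np.array([0.10, 0.02, 0.0])
    pts_curr = pts_prev @ R_true.T + t_true
    R_est, t_est = rigid_transform_3d(pts_prev, pts_curr)
    print(np.round(t_est, 3))

In practice this closed-form solve is wrapped in a robust scheme such as RANSAC so that mismatched features do not corrupt the motion estimate, which is the role of the robust least-squares estimator mentioned in the abstract.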
5

Jiménez, Paulo A., and Bijan Shirinzadeh. "Laser interferometry measurements based calibration and error propagation identification for pose estimation in mobile robots." Robotica 32, no. 1 (2013): 165–74. http://dx.doi.org/10.1017/s0263574713000660.

Abstract:
A widely used method for pose estimation in mobile robots is odometry. Odometry allows the robot to reconstruct its position and orientation in real time from the wheels' encoder measurements. Owing to its unbounded nature, odometry calculation accumulates errors, with the error variance increasing quadratically with traversed distance. This paper develops a novel method for odometry calibration and error propagation identification for mobile robots. The proposed method uses a laser-based interferometer to measure distance precisely. Two variants of the proposed calibration method are examined: …
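For reference, the wheel-encoder odometry whose error growth motivates this calibration work reduces to a simple dead-reckoning integration. A minimal sketch with assumed, purely illustrative robot parameters (this is not the paper's calibration procedure):

    import math

    # Assumed robot parameters (illustrative only)
    TICKS_PER_REV = 2048
    WHEEL_RADIUS = 0.05     # m
    WHEEL_BASE = 0.30       # m, distance between the two drive wheels

    def odometry_step(pose, d_ticks_left, d_ticks_right):
        """Integrate one pair of encoder increments into the pose (x, y, theta)."""
        x, y, th = pose
        dl = 2 * math.pi * WHEEL_RADIUS * d_ticks_left / TICKS_PER_REV
        dr = 2 * math.pi * WHEEL_RADIUS * d_ticks_right / TICKS_PER_REV
        ds = (dl + dr) / 2.0            # distance travelled by the robot centre
        dth = (dr - dl) / WHEEL_BASE    # change in heading
        x += ds * math.cos(th + dth / 2.0)   # midpoint integration
        y += ds * math.sin(th + dth / 2.0)
        return (x, y, th + dth)

    pose = (0.0, 0.0, 0.0)
    for _ in range(100):                 # drive roughly straight ahead
        pose = odometry_step(pose, 40, 41)
    print(pose)

Any bias in the assumed wheel radius or wheelbase is integrated at every step, so the pose error grows without bound until the parameters are calibrated against an external reference such as the laser interferometer used in the paper.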
6

Gonzalez, Ramon, Francisco Rodriguez, Jose Luis Guzman, Cedric Pradalier, and Roland Siegwart. "Combined visual odometry and visual compass for off-road mobile robots localization." Robotica 30, no. 6 (2011): 865–78. http://dx.doi.org/10.1017/s026357471100110x.

Abstract:
In this paper, we present work related to the application of a visual odometry approach to estimate the location of mobile robots operating in off-road conditions. The visual odometry approach is based on template matching, which deals with estimating the robot displacement through a matching process between two consecutive images. Standard visual odometry has been improved using a visual compass method for orientation estimation. For this purpose, two consumer-grade monocular cameras have been employed. One camera is pointing at the ground under the robot, and the other is looking at …
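The template-matching step described in this abstract can be illustrated in a few lines of OpenCV: take a patch from the previous ground-facing image, find where it reappears in the current image, and convert the pixel shift to metres. This is only a sketch under assumed conditions (a constant, known ground resolution METERS_PER_PIXEL), not the authors' implementation:

    import cv2
    import numpy as np

    METERS_PER_PIXEL = 0.002   # assumed ground resolution of the downward-looking camera

    def displacement_from_template(prev_gray, curr_gray, patch=100):
        """Estimate the shift between two consecutive ground images by matching
        a central patch of the previous frame inside the current one."""
        h, w = prev_gray.shape
        y0, x0 = h // 2 - patch // 2, w // 2 - patch // 2
        template = prev_gray[y0:y0 + patch, x0:x0 + patch]
        scores = cv2.matchTemplate(curr_gray, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, (best_x, best_y) = cv2.minMaxLoc(scores)
        dx_px, dy_px = best_x - x0, best_y - y0
        return dx_px * METERS_PER_PIXEL, dy_px * METERS_PER_PIXEL

    # Toy usage: shift a synthetic ground texture by 5 pixels and recover it.
    rng = np.random.default_rng(1)
    ground = rng.uniform(0, 255, size=(260, 260)).astype(np.uint8)
    prev_frame = np.ascontiguousarray(ground[10:250, 10:250])
    curr_frame = np.ascontiguousarray(ground[15:255, 10:250])   # camera moved 5 px
    print(displacement_from_template(prev_frame, curr_frame))

Template matching of a ground patch gives the translation, while the visual compass from the second camera provides the orientation estimate described in the abstract.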
7

Valiente García, David, Lorenzo Fernández Rojo, Arturo Gil Aparicio, Luis Payá Castelló, and Oscar Reinoso García. "Visual Odometry through Appearance- and Feature-Based Method with Omnidirectional Images." Journal of Robotics 2012 (2012): 1–13. http://dx.doi.org/10.1155/2012/797063.

Abstract:
In the field of mobile autonomous robots, visual odometry entails the retrieval of the motion transformation between two consecutive poses of the robot by means of a camera sensor alone. Visual odometry provides essential information for trajectory estimation in problems such as localization and SLAM (Simultaneous Localization and Mapping). In this work we present a motion estimation approach based on a single omnidirectional camera. We exploit the maximized horizontal field of view provided by this camera, which allows us to encode a large amount of scene information in the same image. …
8

Jung, Changbae, and Woojin Chung. "Calibration of Kinematic Parameters for Two Wheel Differential Mobile Robots by Using Experimental Heading Errors." International Journal of Advanced Robotic Systems 8, no. 5 (2011): 68. http://dx.doi.org/10.5772/50906.

Abstract:
Odometry using incremental wheel encoder sensors provides the relative position of mobile robots. This relative position is fundamental information for pose estimation with various sensors, e.g. for EKF localization, Monte Carlo localization, etc. Odometry is also used as the only source of localization information when absolute measurement systems are not available. However, odometry suffers from the accumulation of kinematic modeling errors of the wheels as the robot's travel distance increases. Therefore, systematic odometry errors need to be calibrated. …
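In the simplest case, the systematic errors mentioned here (wrong effective wheel radius, wrong wheelbase) can be corrected by two scale factors estimated from test runs. The sketch below is a deliberately simplified version of that idea, not the heading-error procedure of the paper, and all numbers are made up:

    import math

    def calibrate_wheel_radius(r_nominal, odo_distance, measured_distance):
        """Scale the wheel radius so integrated encoder distance matches ground truth."""
        return r_nominal * measured_distance / odo_distance

    def calibrate_wheelbase(b_nominal, odo_heading_change, measured_heading_change):
        """Scale the wheelbase so integrated heading change matches ground truth.
        From dtheta = (dr - dl) / b it follows that b_true = b_nominal * odo / measured."""
        return b_nominal * odo_heading_change / measured_heading_change

    # Straight 5 m test run: odometry reports 5.00 m, external measurement says 4.93 m.
    r = calibrate_wheel_radius(r_nominal=0.050, odo_distance=5.00, measured_distance=4.93)
    # Four full in-place turns: odometry reports 8*pi rad, a tracker measures 25.70 rad.
    b = calibrate_wheelbase(b_nominal=0.300, odo_heading_change=8 * math.pi,
                            measured_heading_change=25.70)
    print(round(r, 4), round(b, 4))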
9

Thapa, Vikas, Abhishek Sharma, Beena Gairola, Amit K. Mondal, Vindhya Devalla, and Ravi K. Patel. "A Review on Visual Odometry Techniques for Mobile Robots: Types and Challenges." Recent Advances in Electrical & Electronic Engineering (Formerly Recent Patents on Electrical & Electronic Engineering) 13, no. 5 (2020): 618–31. http://dx.doi.org/10.2174/2352096512666191004142546.

Abstract:
For autonomous navigation, tracking, and obstacle avoidance, a mobile robot must have knowledge of its position and localization over time. Among the available odometry techniques, vision-based odometry is a robust and economical one. In addition, combining position estimation from odometry with interpretations of the surroundings from a mobile camera is effective. This paper presents an overview of current visual odometry approaches, applications, and challenges in mobile robots. The study offers a comparative analysis of the different available techniques and algorithms. …
10

Lee, Kyuman, and Eric N. Johnson. "Latency Compensated Visual-Inertial Odometry for Agile Autonomous Flight." Sensors 20, no. 8 (2020): 2209. http://dx.doi.org/10.3390/s20082209.

Abstract:
In visual-inertial odometry (VIO), inertial measurement unit (IMU) dead reckoning acts as the dynamic model for flight vehicles, while camera vision extracts information about the surrounding environment and determines features or points of interest. With these sensors, the most widely used algorithm for estimating vehicle and feature states in VIO is the extended Kalman filter (EKF). The design of the standard EKF does not inherently allow for time offsets between the timestamps of the IMU and vision data. In fact, sensor-related delays that arise in various realistic conditions are at least …
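The IMU dead reckoning that serves as the dynamic model in VIO is, at its core, an integration of angular rate and body-frame acceleration. A minimal planar sketch (gravity assumed already removed, no bias model, illustrative values only):

    import numpy as np

    def imu_dead_reckon(state, gyro_z, accel_body, dt):
        """Planar dead reckoning: integrate yaw rate and body-frame acceleration.
        state = (position[2], velocity[2], yaw)."""
        p, v, yaw = state
        yaw_new = yaw + gyro_z * dt
        c, s = np.cos(yaw), np.sin(yaw)
        R = np.array([[c, -s], [s, c]])          # body -> world rotation
        a_world = R @ np.asarray(accel_body)     # gravity assumed already removed
        v_new = v + a_world * dt
        p_new = p + v * dt + 0.5 * a_world * dt * dt
        return p_new, v_new, yaw_new

    # Toy usage: constant forward acceleration and a slow turn for one second.
    state = (np.zeros(2), np.zeros(2), 0.0)
    for _ in range(100):
        state = imu_dead_reckon(state, gyro_z=0.1, accel_body=[0.5, 0.0], dt=0.01)
    print(np.round(state[0], 3), round(state[2], 3))

Because accelerometer errors are integrated twice, position drift grows quickly; the visual update corrects it, and the latency compensation studied in this paper additionally accounts for the time offset between IMU and camera timestamps.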

Dissertations on the topic "Odometry estimation"

1

Masson, Clément. "Direction estimation using visual odometry." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-169377.

Abstract:
This Master's thesis tackles the problem of measuring objects' directions from a motionless observation point. A new method based on a single rotating camera, requiring the knowledge of only two (or more) landmarks' directions, is proposed. In a first phase, multi-view geometry is used to estimate camera rotations and key elements' directions from a set of overlapping images. Then, in a second phase, the direction of any object can be estimated by resectioning the camera associated with a picture showing this object. A detailed description of the algorithmic chain is given, along with test results on both …
2

Holmqvist, Niclas. "Handheld LiDAR Odometry Estimation and Mapping System." Thesis, Mälardalens högskola, Inbyggda system, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-41137.

Abstract:
Ego-motion sensors are commonly used for pose estimation in Simultaneous Localization And Mapping (SLAM) algorithms. Inertial Measurement Units (IMUs) are popular sensors but suffer from integration drift over longer time scales. To remedy the drift, they are often used in combination with additional sensors, such as a LiDAR. Pose estimation is used when scans produced by these additional sensors are being matched. The matching of scans can be computationally heavy, as one scan can contain millions of data points. Methods exist to simplify the problem of finding the relative pose between …
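Scan matching of the kind referred to in this abstract is often done with a variant of ICP (iterative closest point). Below is a compact point-to-point ICP sketch on synthetic 2D scans, assuming numpy and scipy; a real LiDAR pipeline would add downsampling, outlier rejection, and usually point-to-plane residuals:

    import numpy as np
    from scipy.spatial import cKDTree

    def icp(source, target, iterations=20):
        """Point-to-point ICP: find R, t aligning `source` onto `target`."""
        dim = source.shape[1]
        R, t = np.eye(dim), np.zeros(dim)
        src = source.copy()
        tree = cKDTree(target)
        for _ in range(iterations):
            _, idx = tree.query(src)                 # nearest-neighbour correspondences
            matched = target[idx]
            c_src, c_tgt = src.mean(axis=0), matched.mean(axis=0)
            H = (src - c_src).T @ (matched - c_tgt)
            U, _, Vt = np.linalg.svd(H)
            R_step = Vt.T @ U.T
            if np.linalg.det(R_step) < 0:            # repair a reflection
                Vt[-1, :] *= -1
                R_step = Vt.T @ U.T
            t_step = c_tgt - R_step @ c_src
            src = src @ R_step.T + t_step            # apply the incremental transform
            R, t = R_step @ R, R_step @ t + t_step   # accumulate the total transform
        return R, t

    # Toy usage: align a 2D "scan" with a slightly rotated and shifted copy of itself.
    rng = np.random.default_rng(2)
    scan = rng.uniform(-5.0, 5.0, size=(300, 2))
    ang = 0.05
    R_true = np.array([[np.cos(ang), -np.sin(ang)], [np.sin(ang), np.cos(ang)]])
    moved = scan @ R_true.T + np.array([0.2, -0.1])
    R_est, t_est = icp(scan, moved)
    print(np.round(t_est, 3))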
3

Chen, Hongyi. "GPS-oscillation-robust Localization and Vision-aided Odometry Estimation." Thesis, KTH, Maskinkonstruktion (Inst.), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-247299.

Abstract:
GPS/IMU integrated systems are commonly used for vehicle navigation. The algorithm for this coupled system is normally based on a Kalman filter. However, oscillating GPS measurements in urban environments can easily lead to localization divergence. Moreover, heading estimation may be sensitive to magnetic interference if it relies on an IMU with an integrated magnetometer. This report tries to solve the localization problem under GPS oscillation and outage based on an adaptive extended Kalman filter (AEKF). For the heading estimation, stereo visual odometry (VO) is fused in to overcome the effect …
4

Rao, Anantha N. "Learning-based Visual Odometry - A Transformer Approach." University of Cincinnati / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1627658636420617.

5

Awang, Salleh Dayang Nur Salmi Dharmiza. "Study of vehicle localization optimization with visual odometry trajectory tracking." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS601.

Abstract:
Within Advanced Driver Assistance Systems (ADAS) for Intelligent Transport Systems (ITS), vehicle positioning, or localization, systems play an essential role. The widely used GPS (Global Positioning System) cannot on its own deliver an accurate result because of external factors such as constrained environments or signal attenuation. These errors can be partly corrected by fusing the GPS data with additional information from other …
6

Ay, Emre. "Ego-Motion Estimation of Drones." Thesis, KTH, Robotik, perception och lärande, RPL, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-210772.

Abstract:
To remove the dependency on external structure for drone positioning in GPS-denied environments, it is desirable to estimate the ego-motion of drones on board. Visual positioning systems have been studied for quite some time, and the literature in this area is extensive. The aim of this project is to investigate the currently available methods and implement a visual odometry system for drones that is capable of giving continuous estimates with a lightweight solution. To that end, state-of-the-art systems are investigated and a visual odometry system is implemented based on the design …
7

Lee, Hong Yun. "Deep Learning for Visual-Inertial Odometry: Estimation of Monocular Camera Ego-Motion and its Uncertainty." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu156331321922759.

8

Ringdahl, Viktor. "Stereo Camera Pose Estimation to Enable Loop Detection." Thesis, Linköpings universitet, Datorseende, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-154392.

Abstract:
Visual Simultaneous Localization And Mapping (SLAM) allows for three-dimensional reconstruction from a camera's output and simultaneous positioning of the camera within the reconstruction. With use cases ranging from autonomous vehicles to augmented reality, the SLAM field has garnered interest both commercially and academically. A SLAM system performs odometry as it estimates the camera's movement through the scene. The incremental estimation of odometry is not error-free and exhibits drift over time, with map inconsistencies as a result. Detecting the return to a previously seen place, a loop, …
9

Ready, Bryce Benson. "Filtering Techniques for Pose Estimation with Applications to Unmanned Air Vehicles." BYU ScholarsArchive, 2012. https://scholarsarchive.byu.edu/etd/3490.

Abstract:
This work presents two novel methods of estimating the state of a dynamic system in a Kalman filtering framework. The first is an application-specific method for use with systems performing visual odometry in a mostly planar scene. Because a visual odometry method inherently provides relative information about the pose of a platform, we use this system as part of the time update in a Kalman filtering framework and develop a novel way to propagate the uncertainty of the pose through this time update method. Our initial results show that this method is able to reduce localization error …
10

Kim, Jae-Hak. "Camera Motion Estimation for Multi-Camera Systems." The Australian National University, Research School of Information Sciences and Engineering, 2008. http://thesis.anu.edu.au./public/adt-ANU20081211.011120.

Abstract:
The estimation of motion of multi-camera systems is one of the most important tasks in computer vision research. Recently, some issues have been raised about general camera models and multi-camera systems. Using many cameras as a single camera is studied [60], and the epipolar geometry constraints of general camera models are theoretically derived. Methods for calibration, including a self-calibration method for general camera models, are studied [78, 62]. Multi-camera systems are an example of practically implementable general camera models, and they are widely used in many applications …
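For the single-camera case that this multi-camera work generalizes, relative motion is classically recovered from the essential matrix. A brief OpenCV sketch on synthetic correspondences (intrinsics and motion are made up, and monocular estimation only recovers the translation direction, not its scale):

    import cv2
    import numpy as np

    def relative_pose(pts_prev, pts_curr, K):
        """Recover (R, unit-norm t) between two views from matched pixel coordinates."""
        E, inliers = cv2.findEssentialMat(pts_prev, pts_curr, K,
                                          method=cv2.RANSAC, prob=0.999, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, pts_prev, pts_curr, K, mask=inliers)
        return R, t

    # Toy usage with synthetic correspondences and assumed intrinsics.
    K = np.array([[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]])

    def project(points, R, t):
        cam = points @ R.T + t                       # world -> camera
        uv = cam[:, :2] / cam[:, 2:3]                # perspective division
        return uv * [700.0, 700.0] + [320.0, 240.0]  # apply focal length and centre

    rng = np.random.default_rng(3)
    pts3d = np.c_[rng.uniform(-1, 1, 100), rng.uniform(-1, 1, 100), rng.uniform(4, 8, 100)]
    R_true = cv2.Rodrigues(np.array([0.0, 0.02, 0.0]))[0]
    t_true = np.array([0.10, 0.0, 0.02])
    pts_prev = project(pts3d, np.eye(3), np.zeros(3))
    pts_curr = project(pts3d, R_true, t_true)
    R_est, t_est = relative_pose(pts_prev, pts_curr, K)
    print(np.round(t_est.ravel(), 2))

Multi-camera rigs such as those studied in the thesis replace this two-view essential-matrix model with generalized epipolar constraints, but the overall structure — match features, then robustly estimate a relative pose — stays the same.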

Book chapters on the topic "Odometry estimation"

1

Santamaria-Navarro, A., J. Solà, and J. Andrade-Cetto. "Odometry Estimation for Aerial Manipulators." In Springer Tracts in Advanced Robotics. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-12945-3_15.

2

Poddar, Shashi, Rahul Kottath, and Vinod Karar. "Motion Estimation Made Easy: Evolution and Trends in Visual Odometry." In Recent Advances in Computer Vision. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-03000-1_13.

3

Clement, Lee, Valentin Peretroukhin, and Jonathan Kelly. "Improving the Accuracy of Stereo Visual Odometry Using Visual Illumination Estimation." In Springer Proceedings in Advanced Robotics. Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-50115-4_36.

4

Guerrero, Pablo, and Javier Ruiz-del-Solar. "Improving Robot Self-localization Using Landmarks’ Poses Tracking and Odometry Error Estimation." In RoboCup 2007: Robot Soccer World Cup XI. Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-68847-1_13.

5

Galarza, Juan, Esteban Pérez, Esteban Serrano, Andrés Tapia, and Wilbert G. Aguilar. "Pose Estimation Based on Monocular Visual Odometry and Lane Detection for Intelligent Vehicles." In Lecture Notes in Computer Science. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-95282-6_40.

6

Zhang, Xue-bo, Cong-yuan Wang, Yong-chun Fang, and Ke-xin Xing. "An Extended Kalman Filter-Based Robot Pose Estimation Approach with Vision and Odometry." In Wearable Sensors and Robots. Springer Singapore, 2016. http://dx.doi.org/10.1007/978-981-10-2404-7_41.

7

Nguyen, Huu Hung, Quang Thi Nguyen, Cong Manh Tran, and Dong-Seong Kim. "Adaptive Essential Matrix Based Stereo Visual Odometry with Joint Forward-Backward Translation Estimation." In Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-63083-6_10.

8

Musleh, Basam, David Martin, Arturo de la Escalera, Domingo Miguel Guinea, and Maria Carmen Garcia-Alegre. "Estimation and Prediction of the Vehicle’s Motion Based on Visual Odometry and Kalman Filter." In Advanced Concepts for Intelligent Vision Systems. Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33140-4_43.

9

Martínez-García, Edgar Alonso, and Luz Abril Torres-Méndez. "4WD Robot Posture Estimation by Radial Multi-View Visual Odometry." In Applications of Mobile Robots. IntechOpen, 2019. http://dx.doi.org/10.5772/intechopen.79130.

10

Harr, M., and C. Schäfer. "Robust Odometry Estimation for Automated Driving and Map Data Collection." In AUTOREG 2017. VDI Verlag, 2017. http://dx.doi.org/10.51202/9783181022924-91.


Conference papers on the topic "Odometry estimation"

1

Pereira, Fabio Irigon, Gustavo Ilha, Joel Luft, Marcelo Negreiros, and Altamiro Susin. "Monocular Visual Odometry with Cyclic Estimation." In 2017 30th SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI). IEEE, 2017. http://dx.doi.org/10.1109/sibgrapi.2017.7.

2

Song, Xiaojing, Lakmal D. Seneviratne, Kaspar Althoefer, Zibin Song, and Yahya H. Zweiri. "Visual Odometry for Velocity Estimation of UGVs." In 2007 International Conference on Mechatronics and Automation. IEEE, 2007. http://dx.doi.org/10.1109/icma.2007.4303790.

3

Fanani, Nolang, Alina Sturck, Marc Barnada, and Rudolf Mester. "Multimodal scale estimation for monocular visual odometry." In 2017 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2017. http://dx.doi.org/10.1109/ivs.2017.7995955.

4

Mou, Wei, Han Wang, and Gerald Seet. "Efficient visual odometry estimation using stereo camera." In 2014 11th IEEE International Conference on Control & Automation (ICCA). IEEE, 2014. http://dx.doi.org/10.1109/icca.2014.6871128.

5

So, Edmond Wai Yan, Tetsuo Yoshimitsu, and Takashi Kubota. "Hopping Odometry: Motion Estimation with Selective Vision." In 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2009). IEEE, 2009. http://dx.doi.org/10.1109/iros.2009.5354065.

6

Kerl, Christian, Jurgen Sturm, and Daniel Cremers. "Robust odometry estimation for RGB-D cameras." In 2013 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2013. http://dx.doi.org/10.1109/icra.2013.6631104.

7

Nowruzi, Farzan, Dhanvin Kolhatkar, Prince Kapoor, and Robert Laganiere. "Point Cloud based Hierarchical Deep Odometry Estimation." In 7th International Conference on Vehicle Technology and Intelligent Transport Systems. SCITEPRESS - Science and Technology Publications, 2021. http://dx.doi.org/10.5220/0010442901120121.

8

Lu, Tongwei, Shihui Ai, Yudian Xiong, and Yongyuan Jiang. "Monocular visual odometry-based 3D-2D motion estimation." In Automatic Target Recognition and Navigation, edited by Jayaram K. Udupa, Hanyu Hong, and Jianguo Liu. SPIE, 2018. http://dx.doi.org/10.1117/12.2286251.

9

Yuan, Rong, Hongyi Fan, and Benjamin Kimia. "Dissecting scale from pose estimation in visual odometry." In British Machine Vision Conference 2017. British Machine Vision Association, 2017. http://dx.doi.org/10.5244/c.31.170.

10

Cho, Hae Min, HyungGi Jo, Seongwon Lee, and Euntai Kim. "Odometry Estimation via CNN using Sparse LiDAR Data." In 2019 16th International Conference on Ubiquitous Robots (UR). IEEE, 2019. http://dx.doi.org/10.1109/urai.2019.8768571.
