Journal articles on the topic "Camera Ego-Motion Estimation"
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 journal articles for your research on the topic "Camera Ego-Motion Estimation".
Next to every source in the list of references, there is an "Add to bibliography" button. Press on it, and we will generate automatically the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication in .pdf format and read its abstract online, whenever these are available in the source metadata.
Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.
Mansour, M., P. Davidson, O. A. Stepanov, J. P. Raunio, M. M. Aref, and R. Piché. "Depth estimation with ego-motion assisted monocular camera." Giroskopiya i Navigatsiya 27, no. 2 (2019): 28–51. http://dx.doi.org/10.17285/0869-7035.2019.27.2.028-051.
Mansour, M., P. Davidson, O. Stepanov, J. P. Raunio, M. M. Aref, and R. Piché. "Depth Estimation with Ego-Motion Assisted Monocular Camera." Gyroscopy and Navigation 10, no. 3 (2019): 111–23. http://dx.doi.org/10.1134/s2075108719030064.
Linok, S. A., and D. A. Yudin. "Influence of Neural Network Receptive Field on Monocular Depth and Ego-Motion Estimation." Optical Memory and Neural Networks 32, S2 (2023): S206–S213. http://dx.doi.org/10.3103/s1060992x23060103.
Yusuf, Sait Erdem, Galip Feyza, Furkan Ince Ibrahim, and Haidar Sharif Md. "Estimation of Camera Ego-Motion for Real-Time Computer Vision Applications." International Journal of Scientific Research in Information Systems and Engineering (IJSRISE) 1, no. 2 (2015): 115–20. https://doi.org/10.5281/zenodo.836175.
Chen, Haiwen, Jin Chen, Zhuohuai Guan, Yaoming Li, Kai Cheng, and Zhihong Cui. "Stereovision-Based Ego-Motion Estimation for Combine Harvesters." Sensors 22, no. 17 (2022): 6394. http://dx.doi.org/10.3390/s22176394.
Li, Jiaman, C. Karen Liu, and Jiajun Wu. "Ego-Body Pose Estimation via Ego-Head Pose Estimation." AI Matters 9, no. 2 (2023): 20–23. http://dx.doi.org/10.1145/3609468.3609473.
Yamaguchi, Koichiro, Takeo Kato, and Yoshiki Ninomiya. "Ego-Motion Estimation Using a Vehicle Mounted Monocular Camera." IEEJ Transactions on Electronics, Information and Systems 129, no. 12 (2009): 2213–21. http://dx.doi.org/10.1541/ieejeiss.129.2213.
Minami, Mamoru, and Wei Song. "Hand-Eye Motion-Invariant Pose Estimation with Online 1-Step GA -3D Pose Tracking Accuracy Evaluation in Dynamic Hand-Eye Oscillation-." Journal of Robotics and Mechatronics 21, no. 6 (2009): 709–19. http://dx.doi.org/10.20965/jrm.2009.p0709.
Czech, Phillip, Markus Braun, Ulrich Kreßel, and Bin Yang. "Behavior-Aware Pedestrian Trajectory Prediction in Ego-Centric Camera Views with Spatio-Temporal Ego-Motion Estimation." Machine Learning and Knowledge Extraction 5, no. 3 (2023): 957–78. http://dx.doi.org/10.3390/make5030050.
Lin, Lili, Wan Luo, Zhengmao Yan, and Wenhui Zhou. "Rigid-aware self-supervised GAN for camera ego-motion estimation." Digital Signal Processing 126 (June 2022): 103471. http://dx.doi.org/10.1016/j.dsp.2022.103471.
Matsuhisa, Ryota, Shintaro Ono, Hiroshi Kawasaki, Atsuhiko Banno, and Katsushi Ikeuchi. "Image-Based Ego-Motion Estimation Using On-Vehicle Omnidirectional Camera." International Journal of Intelligent Transportation Systems Research 8, no. 2 (2010): 106–17. http://dx.doi.org/10.1007/s13177-010-0011-z.
Pełczyński, Paweł, Bartosz Ostrowski, and Dariusz Rzeszotarski. "Motion Vector Estimation of a Stereovision Camera with Inertial Sensors." Metrology and Measurement Systems 19, no. 1 (2012): 141–50. http://dx.doi.org/10.2478/v10178-012-0013-z.
Sharma, Alisha, Ryan Nett, and Jonathan Ventura. "Unsupervised Learning of Depth and Ego-Motion from Cylindrical Panoramic Video with Applications for Virtual Reality." International Journal of Semantic Computing 14, no. 03 (2020): 333–56. http://dx.doi.org/10.1142/s1793351x20400139.
Wang, Ke, Xin Huang, JunLan Chen, Chuan Cao, Zhoubing Xiong, and Long Chen. "Forward and Backward Visual Fusion Approach to Motion Estimation with High Robustness and Low Cost." Remote Sensing 11, no. 18 (2019): 2139. http://dx.doi.org/10.3390/rs11182139.
Zhang, Jiaxin, Wei Sui, Qian Zhang, Tao Chen, and Cong Yang. "Towards Accurate Ground Plane Normal Estimation from Ego-Motion." Sensors 22, no. 23 (2022): 9375. http://dx.doi.org/10.3390/s22239375.
Shariati, Armon, Christian Holz, and Sudipta Sinha. "Towards Privacy-Preserving Ego-Motion Estimation Using an Extremely Low-Resolution Camera." IEEE Robotics and Automation Letters 5, no. 2 (2020): 1223–30. http://dx.doi.org/10.1109/lra.2020.2967307.
Gandhi, Tarak, and Mohan Trivedi. "Parametric ego-motion estimation for vehicle surround analysis using an omnidirectional camera." Machine Vision and Applications 16, no. 2 (2005): 85–95. http://dx.doi.org/10.1007/s00138-004-0168-z.
Wang, Zhongli, Litong Fan, and Baigen Cai. "A 3D Relative-Motion Context Constraint-Based MAP Solution for Multiple-Object Tracking Problems." Sensors 18, no. 7 (2018): 2363. http://dx.doi.org/10.3390/s18072363.
Pandey, Tejas, Dexmont Pena, Jonathan Byrne, and David Moloney. "Leveraging Deep Learning for Visual Odometry Using Optical Flow." Sensors 21, no. 4 (2021): 1313. http://dx.doi.org/10.3390/s21041313.
Xiong, Lu, Yongkun Wen, Yuyao Huang, Junqiao Zhao, and Wei Tian. "Joint Unsupervised Learning of Depth, Pose, Ground Normal Vector and Ground Segmentation by a Monocular Camera Sensor." Sensors 20, no. 13 (2020): 3737. http://dx.doi.org/10.3390/s20133737.
Ci, Wenyan, and Yingping Huang. "A Robust Method for Ego-Motion Estimation in Urban Environment Using Stereo Camera." Sensors 16, no. 10 (2016): 1704. http://dx.doi.org/10.3390/s16101704.
Tian, Miao, Banglei Guan, Zhibin Xing, and Friedrich Fraundorfer. "Efficient Ego-Motion Estimation for Multi-Camera Systems With Decoupled Rotation and Translation." IEEE Access 8 (2020): 153804–14. http://dx.doi.org/10.1109/access.2020.3018225.
Yang, DongFang, FuChun Sun, ShiCheng Wang, and JinSheng Zhang. "Simultaneous estimation of ego-motion and vehicle distance by using a monocular camera." Science China Information Sciences 57, no. 5 (2013): 1–10. http://dx.doi.org/10.1007/s11432-013-4884-8.
Haggag, M., A. Moussa, and N. El-Sheimy. "Hybrid Deep Learning Approach for Vehicle's Relative Attitude Estimation Using Monocular Camera." ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences X-1/W1-2023 (December 5, 2023): 649–55. http://dx.doi.org/10.5194/isprs-annals-x-1-w1-2023-649-2023.
Flögel, Daniel, Neel Pratik Bhatt, and Ehsan Hashemi. "Infrastructure-Aided Localization and State Estimation for Autonomous Mobile Robots." Robotics 11, no. 4 (2022): 82. http://dx.doi.org/10.3390/robotics11040082.
Mueggler, Elias, Henri Rebecq, Guillermo Gallego, Tobi Delbruck, and Davide Scaramuzza. "The event-camera dataset and simulator: Event-based data for pose estimation, visual odometry, and SLAM." International Journal of Robotics Research 36, no. 2 (2017): 142–49. http://dx.doi.org/10.1177/0278364917691115.
Cho, Jaechan, Yongchul Jung, Dong-Sun Kim, Seongjoo Lee, and Yunho Jung. "Moving Object Detection Based on Optical Flow Estimation and a Gaussian Mixture Model for Advanced Driver Assistance Systems." Sensors 19, no. 14 (2019): 3217. http://dx.doi.org/10.3390/s19143217.
Zhao, Baigan, Yingping Huang, Hongjian Wei, and Xing Hu. "Ego-Motion Estimation Using Recurrent Convolutional Neural Networks through Optical Flow Learning." Electronics 10, no. 3 (2021): 222. http://dx.doi.org/10.3390/electronics10030222.
Yuan, Cheng, Jizhou Lai, Pin Lyu, Peng Shi, Wei Zhao, and Kai Huang. "A Novel Fault-Tolerant Navigation and Positioning Method with Stereo-Camera/Micro Electro Mechanical Systems Inertial Measurement Unit (MEMS-IMU) in Hostile Environment." Micromachines 9, no. 12 (2018): 626. http://dx.doi.org/10.3390/mi9120626.
Zhao, Baigan, Yingping Huang, Wenyan Ci, and Xing Hu. "Unsupervised Learning of Monocular Depth and Ego-Motion with Optical Flow Features and Multiple Constraints." Sensors 22, no. 4 (2022): 1383. http://dx.doi.org/10.3390/s22041383.
Cutolo, Fabrizio, Virginia Mamone, Nicola Carbonaro, Vincenzo Ferrari, and Alessandro Tognetti. "Ambiguity-Free Optical–Inertial Tracking for Augmented Reality Headsets." Sensors 20, no. 5 (2020): 1444. http://dx.doi.org/10.3390/s20051444.
Duan, Chao, Steffen Junginger, Kerstin Thurow, and Hui Liu. "StereoVO: Learning Stereo Visual Odometry Approach Based on Optical Flow and Depth Information." Applied Sciences 13, no. 10 (2023): 5842. http://dx.doi.org/10.3390/app13105842.
Kim, Pyojin, Jungha Kim, Minkyeong Song, Yeoeun Lee, Moonkyeong Jung, and Hyeong-Geun Kim. "A Benchmark Comparison of Four Off-the-Shelf Proprietary Visual–Inertial Odometry Systems." Sensors 22, no. 24 (2022): 9873. http://dx.doi.org/10.3390/s22249873.
Lee, Seokju, Sunghoon Im, Stephen Lin, and In So Kweon. "Learning Monocular Depth in Dynamic Scenes via Instance-Aware Projection Consistency." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 3 (2021): 1863–72. http://dx.doi.org/10.1609/aaai.v35i3.16281.
Araar, Oualid, Nabil Aouf, and Jose Luis Vallejo Dietz. "Power pylon detection and monocular depth estimation from inspection UAVs." Industrial Robot: An International Journal 42, no. 3 (2015): 200–213. http://dx.doi.org/10.1108/ir-11-2014-0419.
Tibebu, Haileleol, Varuna De-Silva, Corentin Artaud, Rafael Pina, and Xiyu Shi. "Towards Interpretable Camera and LiDAR Data Fusion for Autonomous Ground Vehicles Localisation." Sensors 22, no. 20 (2022): 8021. http://dx.doi.org/10.3390/s22208021.
Wang, Zhe, Xisheng Li, Xiaojuan Zhang, Yanru Bai, and Chengcai Zheng. "Blind image deblurring for a close scene under a 6-DOF motion path." Sensor Review 41, no. 2 (2021): 216–26. http://dx.doi.org/10.1108/sr-06-2020-0143.
Durant, Szonya, and Johannes M. Zanker. "Variation in the Local Motion Statistics of Real-Life Optic Flow Scenes." Neural Computation 24, no. 7 (2012): 1781–805. http://dx.doi.org/10.1162/neco_a_00294.
Song, Moonhyung, and Dongho Shin. "A study on ego-motion estimation based on stereo camera sensor and 2G1Y inertial sensor with considering vehicle dynamics." Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering 233, no. 8 (2018): 2174–86. http://dx.doi.org/10.1177/0954407018776429.
Liu, Hailin, Liangfang Tian, Qiliang Du, and Wenjie Xu. "Robust RGB-D SLAM in highly dynamic environments based on probability observations and clustering optimization." Measurement Science and Technology 35, no. 3 (2023): 035405. http://dx.doi.org/10.1088/1361-6501/ad0afd.
Nam, Dinh Van, and Kim Gon-Woo. "Robust Stereo Visual Inertial Navigation System Based on Multi-Stage Outlier Removal in Dynamic Environments." Sensors 20, no. 10 (2020): 2922. http://dx.doi.org/10.3390/s20102922.
Uhm, Taeyoung, Minsoo Ryu, and Jong-Il Park. "Fine-Motion Estimation Using Ego/Exo-Cameras." ETRI Journal 37, no. 4 (2015): 766–71. http://dx.doi.org/10.4218/etrij.15.0114.0525.
Chen, Yong-Sheng, Lin-Gwo Liou, Yi-Ping Hung, and Chiou-Shann Fuh. "Three-dimensional ego-motion estimation from motion fields observed with multiple cameras." Pattern Recognition 34, no. 8 (2001): 1573–83. http://dx.doi.org/10.1016/s0031-3203(00)00092-3.
Dimiccoli, Mariella, and Petia Radeva. "Visual Lifelogging in the Era of Outstanding Digitization." Digital Presentation and Preservation of Cultural and Scientific Heritage 5 (September 30, 2015): 59–64. http://dx.doi.org/10.55630/dipp.2015.5.4.
Yin, Hongpei, Peter Xiaoping Liu, and Minhua Zheng. "Ego-Motion Estimation With Stereo Cameras Using Efficient 3D–2D Edge Correspondences." IEEE Transactions on Instrumentation and Measurement 71 (2022): 1–11. http://dx.doi.org/10.1109/tim.2022.3198489.
Droeschel, David, Stefan May, Dirk Holz, and Sven Behnke. "Fusing Time-of-Flight Cameras and Inertial Measurement Units for Ego-Motion Estimation." Automatika 52, no. 3 (2011): 189–98. http://dx.doi.org/10.1080/00051144.2011.11828419.
Vujasinović, Stéphane, Stefan Becker, Timo Breuer, Sebastian Bullinger, Norbert Scherer-Negenborn, and Michael Arens. "Integration of the 3D Environment for UAV Onboard Visual Object Tracking." Applied Sciences 10, no. 21 (2020): 7622. http://dx.doi.org/10.3390/app10217622.
Zhang, Yongcong, Bangyan Liao, Delin Qu, et al. "Ego-motion Estimation for Vehicles with a Rolling Shutter Camera." IEEE Transactions on Intelligent Vehicles, 2024, 1–13. http://dx.doi.org/10.1109/tiv.2024.3436703.
Gilles, Maximilian, and Sascha Ibrahimpasic. "Unsupervised deep learning based ego motion estimation with a downward facing camera." Visual Computer, November 27, 2021. http://dx.doi.org/10.1007/s00371-021-02345-6.
Zhou, Wenhui, Hua Zhang, Zhengmao Yan, Weisheng Wang, and Lili Lin. "DecoupledPoseNet: Cascade Decoupled Pose Learning for Unsupervised Camera Ego-motion Estimation." IEEE Transactions on Multimedia, 2022, 1. http://dx.doi.org/10.1109/tmm.2022.3144958.