Academic literature on the topic 'Visual-inertial sensor fusion'
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Visual-inertial sensor fusion.'
Journal articles on the topic "Visual-inertial sensor fusion"
Liu, Zhenbin, Zengke Li, Ao Liu, Kefan Shao, Qiang Guo, and Chuanhao Wang. "LVI-Fusion: A Robust Lidar-Visual-Inertial SLAM Scheme." Remote Sensing 16, no. 9 (April 25, 2024): 1524. http://dx.doi.org/10.3390/rs16091524.
Wu, Peng, Rongjun Mu, and Bingli Liu. "Upper Stage Visual Inertial Integrated Navigation Method Based on Factor Graph." Journal of Physics: Conference Series 2085, no. 1 (November 1, 2021): 012018. http://dx.doi.org/10.1088/1742-6596/2085/1/012018.
Martinelli, Agostino, Alexander Oliva, and Bernard Mourrain. "Cooperative Visual-Inertial Sensor Fusion: The Analytic Solution." IEEE Robotics and Automation Letters 4, no. 2 (April 2019): 453–60. http://dx.doi.org/10.1109/lra.2019.2891025.
Xu, Shaofeng, and Somi Lee. "An Inertial Sensing-Based Approach to Swimming Pose Recognition and Data Analysis." Journal of Sensors 2022 (January 27, 2022): 1–12. http://dx.doi.org/10.1155/2022/5151105.
Lu, Zhufei, Xing Xu, Yihao Luo, Lianghui Ding, Chao Zhou, and Jiarong Wang. "A Visual–Inertial Pressure Fusion-Based Underwater Simultaneous Localization and Mapping System." Sensors 24, no. 10 (May 18, 2024): 3207. http://dx.doi.org/10.3390/s24103207.
Wan, Yingcai, Qiankun Zhao, Cheng Guo, Chenlong Xu, and Lijing Fang. "Multi-Sensor Fusion Self-Supervised Deep Odometry and Depth Estimation." Remote Sensing 14, no. 5 (March 2, 2022): 1228. http://dx.doi.org/10.3390/rs14051228.
Kelly, Jonathan, and Gaurav S. Sukhatme. "Visual-Inertial Sensor Fusion: Localization, Mapping and Sensor-to-Sensor Self-calibration." International Journal of Robotics Research 30, no. 1 (November 5, 2010): 56–79. http://dx.doi.org/10.1177/0278364910382802.
Brown, Alison, and Paul Olson. "Navigation and Electro-Optic Sensor Integration Technology for Fusion of Imagery and Digital Mapping Products." Journal of Navigation 53, no. 1 (January 2000): 132–45. http://dx.doi.org/10.1017/s0373463399008735.
Kim, Youngji, Sungho Yoon, Sujung Kim, and Ayoung Kim. "Unsupervised Balanced Covariance Learning for Visual-Inertial Sensor Fusion." IEEE Robotics and Automation Letters 6, no. 2 (April 2021): 819–26. http://dx.doi.org/10.1109/lra.2021.3051571.
Cahyadi, M. N., T. Asfihani, H. F. Suhandri, and S. C. Navisa. "Analysis of GNSS/IMU Sensor Fusion at UAV Quadrotor for Navigation." IOP Conference Series: Earth and Environmental Science 1276, no. 1 (December 1, 2023): 012021. http://dx.doi.org/10.1088/1755-1315/1276/1/012021.
Dissertations / Theses on the topic "Visual-inertial sensor fusion"
Aufderheide, Dominik. "VISrec! : visual-inertial sensor fusion for 3D scene reconstruction." Thesis, University of Bolton, 2014. http://ubir.bolton.ac.uk/649/.
Full textLarsson, Olof. "Visual-inertial tracking using Optical Flow measurements." Thesis, Linköping University, Automatic Control, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-59970.
Visual-inertial tracking is a well-known technique for tracking a combination of a camera and an inertial measurement unit (IMU). An issue with the straightforward approach is that it requires known 3D points. To bypass this, 2D information can be used, without recovering depth, to estimate the position and orientation (pose) of the camera. This Master's thesis investigates the feasibility of using Optical Flow (OF) measurements and indicates the benefits of this approach.
The 2D information is added through OF measurements. OF describes the visual flow of interest points in the image plane. Because the depth of these points does not need to be estimated, the computational complexity is reduced. As more 2D information is used, less 3D information is required for the pose estimate.
The use of 2D points for pose estimation has been verified with experimental data gathered by a real camera/IMU system. Several data sequences containing different trajectories are used to estimate the pose. It is shown that OF measurements can improve visual-inertial tracking while reducing the need for 3D-point registrations.
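For readers unfamiliar with how optical flow enters such a filter, the sketch below is a generic illustration (not taken from the thesis, and using hypothetical values): it computes the classical interaction matrix that maps camera linear and angular velocity to the image-plane flow of a tracked point. In a visual-inertial filter, the difference between the measured and predicted flow forms the innovation. Note that this generic model still contains the point depth Z, which is exactly the quantity that approaches like the one above try to avoid estimating explicitly.

```python
import numpy as np

def flow_interaction_matrix(x, y, Z):
    """Classical image Jacobian (interaction matrix) of a normalized image
    point (x, y) at depth Z, mapping camera velocity to image-plane flow."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def predicted_flow(points, depths, v, omega):
    """Predicted optical flow of tracked points for a camera moving with
    linear velocity v and angular velocity omega (both in the camera frame)."""
    twist = np.concatenate([v, omega])
    return np.vstack([flow_interaction_matrix(x, y, Z) @ twist
                      for (x, y), Z in zip(points, depths)])

# Hypothetical example: one point at (0.1, -0.05) in normalized coordinates,
# 4 m away, camera translating forward at 1 m/s while yawing slowly.
flow = predicted_flow([(0.1, -0.05)], [4.0],
                      np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.05, 0.0]))
print(flow)  # measured flow minus this prediction would form the filter innovation
```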
Zachariah, Dave. "Fusing Visual and Inertial Information." Licentiate thesis, KTH, Signalbehandling, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-32112.
Panahandeh, Ghazaleh. "Selected Topics in Inertial and Visual Sensor Fusion : Calibration, Observability Analysis and Applications." Doctoral thesis, KTH, Signalbehandling, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-142602.
Full textQC 20140312
Burusa, Akshay Kumar. "Visual-Inertial Odometry for Autonomous Ground Vehicles." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-217284.
Full textMonokulära kameror används ofta vid rörelseestimering av obemannade flygande farkoster. Med det ökade intresset för autonoma fordon har även användningen av monokulära kameror i fordon ökat. Detta är fram för allt fördelaktigt i situationer där satellitnavigering (Global Navigation Satellite System (GNSS)) äropålitlig, exempelvis i dagbrott. De flesta system som använder sig av monokulära kameror har problem med att estimera skalan. Denna estimering blir ännu svårare på grund av ett fordons större hastigheter och snabbare rörelser. Syftet med detta exjobb är att försöka estimera skalan baserat på bild data från en monokulär kamera, genom att komplettera med data från tröghetssensorer. Det visas att simultan estimering av position och skala för ett fordon är möjligt genom fusion av bild- och tröghetsdata från sensorer med hjälp av ett utökat Kalmanfilter (EKF). Estimeringens konvergens beror på flera faktorer, inklusive initialiseringsfel. En noggrann estimering av skalan möjliggör också en noggrann estimering av positionen. Detta möjliggör lokalisering av fordon vid avsaknad av GNSS och erbjuder därmed en ökad redundans.
Wisely, Babu Benzun. "Motion Conflict Detection and Resolution in Visual-Inertial Localization Algorithm." Digital WPI, 2018. https://digitalcommons.wpi.edu/etd-dissertations/503.
Full textGintrand, Pierre. "Estimation de l'état d'un hélicoptère par vision monoculaire en environnement inconnu." Electronic Thesis or Diss., Université Côte d'Azur, 2024. http://www.theses.fr/2024COAZ4021.
Vision is the primary means for helicopter pilots to perceive and evaluate the surrounding environment, especially when navigating near terrain or close to obstacles not listed on aeronautical charts. However, despite over half a century of research into the use of vision in robotics, few of these results have been transferred to aid aircraft piloting. Thanks to advances in computing resources, recent decades have seen the emergence of computer vision techniques that can process, analyse and interpret digital images to extract information. For several decades, Airbus Helicopters has equipped its medium and heavy helicopter range with an autopilot system to improve flying qualities and to offer piloting aids such as hovering and trajectory following. The company is now considering integrating visual sensors into its helicopters to enhance the robustness of its kinematic state estimation (position, velocity, attitude), information that is crucial for the autopilot. The thesis therefore focuses on the synthesis of nonlinear observers for state estimation of a visual-inertial system, using Riccati-type techniques to fuse visual and inertial measurements. The deterministic nature of the proposed observers makes it possible to derive sufficient conditions, expressed in terms of the number and placement of source points and the persistent excitation of the camera motion, under which local exponential stability is formally demonstrated. This aspect is particularly valuable when designing technological building blocks intended for integration into systems subject to rigorous certification constraints. The performance of the proposed solution is compared to state-of-the-art algorithms using datasets provided by the scientific community.
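For context, Riccati-type observers of the kind referred to above broadly follow the deterministic Riccati observer structure sketched here in generic form (an illustration, not the thesis's specific formulation):

\[
\begin{aligned}
\dot{\hat{x}} &= A(t)\,\hat{x} + B(t)\,u + K(t)\bigl(y - C(t)\,\hat{x}\bigr),\\
K(t) &= P\,C(t)^{\top} Q(t),\\
\dot{P} &= A(t)\,P + P\,A(t)^{\top} - P\,C(t)^{\top} Q(t)\,C(t)\,P + V(t),
\end{aligned}
\]

with P(0) positive definite and Q(t), V(t) positive (semi-)definite weights. Uniform observability of the pair (A(t), C(t)), which in the visual-inertial setting translates into conditions on the source points and on persistently exciting camera motion, keeps P well conditioned and yields local exponential convergence of the estimation error.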
Manerikar, Ninad. "Fusion de capteurs visuels-inertiels et estimation d'état pour la navigation des véhicules autonomes." Thesis, Université Côte d'Azur, 2022. http://www.theses.fr/2022COAZ4111.
Accurate state estimation is a fundamental problem for the navigation of autonomous vehicles. It is particularly important when the vehicle navigates through cluttered environments, or in close proximity to its physical surroundings, in order to perform localization, obstacle avoidance, environmental mapping, etc. Although several algorithms have been proposed for this state estimation problem, they are usually tied to a single sensor or a specific sensor suite. To this end, researchers in the computer vision and control communities developed visual-inertial frameworks (camera + IMU) that exploit the combined properties of this sensor suite to produce precise local estimates (position, orientation, velocity, etc.). Taking inspiration from this, the thesis focuses on developing nonlinear observers for state estimation that exploit the classical Riccati design framework, with a particular emphasis on visual-inertial sensor fusion. A suite of low-cost sensors consisting of a monocular camera and an IMU is used, and the visual target is assumed to be planar throughout. Two research topics are considered. First, an extensive study of existing techniques for homography estimation is carried out, after which a novel nonlinear observer on the SL(3) group is proposed, with application to optical flow estimation. The novelty lies in the linearization approach used to linearize a nonlinear observer on SL(3), making it simpler and better suited to practical implementation. Another novel observer, based on the deterministic Riccati observer, is then proposed for the problem of partial attitude, linear velocity and depth estimation for planar targets. The proposed approach does not rely on the strong assumption that the IMU provides measurements of the vehicle's linear acceleration in the body-fixed frame. Experimental validations show the performance of this observer, and an extension is proposed to filter the noisy optical flow estimates obtained from the extraction of the continuous homography. Second, two novel observers are proposed for the classical problem of homography decomposition. The key contribution lies in the design of two deterministic Riccati observers that address the homography decomposition problem over time instead of solving it on a frame-by-frame basis as traditional algebraic approaches do. The performance and robustness of the observers are validated in simulations and practical experiments. All the observers proposed above are part of the Homography-Lab library, which has been evaluated at TRL 7 (Technology Readiness Level) and is protected by the French APP (Agency for the Protection of Programs); it serves as the main building block for applications such as velocity and optical flow estimation and visual homography-based stabilization.
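As a point of reference for the observer-based approach described above, the frame-by-frame algebraic homography decomposition it is contrasted with can be sketched with OpenCV (the intrinsics and the identity homography below are placeholder values, not data from the thesis):

```python
import numpy as np
import cv2

# Placeholder camera intrinsics.
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])

# A homography relating two views of a planar target, e.g. obtained from
# cv2.findHomography on matched points (identity used here as a stand-in).
H = np.eye(3)

# Algebraic decomposition: H is factored into up to four candidate
# (R, t/d, n) solutions; cheirality/visibility checks and extra views are
# needed to select the physically valid one, which is what observer-based
# filtering over time avoids redoing from scratch at every frame.
num, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
for R, t, n in zip(rotations, translations, normals):
    print("R:\n", R, "\nt/d:", t.ravel(), "\nplane normal:", n.ravel())
```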
Khairallah, Mahmoud. "Flow-Based Visual-Inertial Odometry for Neuromorphic Vision Sensors." Electronic Thesis or Diss., université Paris-Saclay, 2022. http://www.theses.fr/2022UPAST117.
Rather than generating images constantly and synchronously, neuromorphic vision sensors, also known as event-based cameras, allow each pixel to provide information independently and asynchronously whenever a brightness change is detected. Consequently, neuromorphic vision sensors do not suffer from the problems of conventional frame-based cameras, such as image artifacts and motion blur. Furthermore, they provide lossless data compression, higher temporal resolution and higher dynamic range. Event-based cameras are therefore a convenient replacement for frame-based cameras in robotic applications requiring high maneuverability and varying environmental conditions. In this thesis, we address the problem of visual-inertial odometry using event-based cameras and an inertial measurement unit. Exploiting the consistency of event-based cameras with the brightness constancy condition, we discuss the feasibility of building a visual odometry system based on optical flow estimation. We develop our approach on the assumption that event-based cameras provide edge-like information about the objects in the scene and apply a line detection algorithm for data reduction. Line tracking leaves more time for computation and provides a better representation of the environment than feature points. This thesis presents not only an approach to event-based visual-inertial odometry but also event-based algorithms that can be used stand-alone or integrated into other approaches if needed.
Wu, Huang-Yi (吳皇毅). "Fusion of Inertial Measurement and Visual Sensor for Simultaneous Localization and Mapping." Master's thesis, Department of Mechanical and Electro-Mechanical Engineering, Tamkang University, 2016. http://ndltd.ncl.edu.tw/handle/41510478187409324977.
This study investigates inertial measurement unit (IMU)-assisted monocular simultaneous localization and mapping (SLAM). The speeded-up robust features (SURF) algorithm is used for interest point detection and description. The positions of environment landmarks are represented using the inverse depth parameterization. The camera and landmark positions are estimated with an extended Kalman filter (EKF), and the map scale for monocular SLAM initialization is estimated from the displacement measured by the IMU. The experimental results demonstrate that the IMU successfully initializes the monocular SLAM.
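A minimal sketch of the scale initialization idea summarized above (a simplified reconstruction from the abstract, not the thesis's code): the metric scale is taken as the ratio between the IMU-derived displacement and the up-to-scale visual displacement over the initialization interval, assuming gravity and bias have already been compensated.

```python
import numpy as np

def estimate_map_scale(visual_positions, imu_displacement):
    """Metric scale of an up-to-scale monocular trajectory, obtained by
    comparing its displacement with the displacement found by integrating
    (gravity- and bias-compensated) IMU data over the same interval.

    visual_positions: (N, 3) camera positions from monocular SLAM (arbitrary scale)
    imu_displacement: (3,) metric displacement from the IMU
    """
    visual_displacement = visual_positions[-1] - visual_positions[0]
    norm = np.linalg.norm(visual_displacement)
    if norm < 1e-9:
        raise ValueError("Initialization motion too small to recover scale")
    return np.linalg.norm(imu_displacement) / norm

# Hypothetical numbers: the camera appears to move 0.5 units in the monocular
# map while the IMU indicates a 0.25 m translation, giving a scale of 0.5.
trajectory = np.array([[0.0, 0.0, 0.0], [0.3, 0.0, 0.4]])
print(estimate_map_scale(trajectory, np.array([0.15, 0.0, 0.2])))
```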
Book chapters on the topic "Visual-inertial sensor fusion"
He, Hongsheng, Yan Li, and Jindong Tan. "Rotational Coordinate Transformation for Visual-Inertial Sensor Fusion." In Social Robotics, 431–40. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-47437-3_42.
Lu, Yao, Xiaoxu Yin, Feng Qin, Ke Huang, Menghua Zhang, and Weijie Huang. "A Lightweight Sensor Fusion for Neural Visual Inertial Odometry." In International Conference on Neural Computing for Advanced Applications, 46–59. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-5847-4_4.
Wang, Guan, Yifeng Pan, and Hui Zhou. "Fusion of Inertial and Visual Sensor Data for Accurate Localization." In Advances in Intelligent Automation and Soft Computing, 758–66. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-81007-8_86.
Li, Tong, Juntao Wang, Yi Chen, and Tianyun Dong. "Visual–Inertial Sensor Fusion and OpenSim Based Body Pose Estimation." In Intelligent Robotics and Applications, 279–85. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-6486-4_24.
Marcon, Marco, Augusto Sarti, and Stefano Tubaro. "Smart Toothbrushes: Inertial Measurement Sensors Fusion with Visual Tracking." In Lecture Notes in Computer Science, 480–94. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-48881-3_33.
Lin, Tianliang, Zhongyuan He, Jiangdong Wu, Qihuai Chen, and Shengjie Fu. "Intelligent Construction Machinery SLAM with Stereo Vision and Inertia Fusion." In Lecture Notes in Mechanical Engineering, 1035–47. Singapore: Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-97-1876-4_82.
Duc, Tran Minh, and Hee-Jun Kang. "Fusion of Vision and Inertial Sensors for Position-Based Visual Servoing of a Robot Manipulator." In Intelligent Computing Theories, 536–45. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-39479-9_63.
Stepanov, Dmitrii, Alexander Popov, Dmitrii Gromoshinskii, and Oleg Shmakov. "Visual-Inertial Sensor Fusion to Accuracy Increase of Autonomous Underwater Vehicles Positioning." In Proceedings of the 29th International DAAAM Symposium 2018, 0615–23. DAAAM International Vienna, 2018. http://dx.doi.org/10.2507/29th.daaam.proceedings.089.
Casha, Owen. "A Comparative Analysis and Review of Indoor Positioning Systems and Technologies." In Innovation in Indoor Positioning Systems [Working Title]. IntechOpen, 2024. http://dx.doi.org/10.5772/intechopen.1005185.
Troll, Péter, Károly Szipka, and Andreas Archenti. "Indoor Localization of Quadcopters in Industrial Environment." In Advances in Transdisciplinary Engineering. IOS Press, 2020. http://dx.doi.org/10.3233/atde200183.
Full textConference papers on the topic "Visual-inertial sensor fusion"
Troncoso, Juan Manuel Reyes, and Alexander Cerón Correa. "Visual and Inertial Odometry Based on Sensor Fusion." In 2024 XXIV Symposium of Image, Signal Processing, and Artificial Vision (STSIVA), 1–5. IEEE, 2024. http://dx.doi.org/10.1109/stsiva63281.2024.10637841.
Martinelli, Agostino, and Alessandro Renzaglia. "Cooperative visual-inertial sensor fusion: Fundamental equations." In 2017 International Symposium on Multi-Robot and Multi-Agent Systems (MRS). IEEE, 2017. http://dx.doi.org/10.1109/mrs.2017.8250927.
Tsotsos, Konstantine, Alessandro Chiuso, and Stefano Soatto. "Robust inference for visual-inertial sensor fusion." In 2015 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2015. http://dx.doi.org/10.1109/icra.2015.7139924.
Stapleton, Mehdi P., Md Zulfiquar Ali Bhotto, and Ivan V. Bajic. "A simulation environment for visual-inertial sensor fusion." In 2016 IEEE Canadian Conference on Electrical and Computer Engineering (CCECE). IEEE, 2016. http://dx.doi.org/10.1109/ccece.2016.7726705.
Chen, Changhao, Stefano Rosa, Yishu Miao, Chris Xiaoxuan Lu, Wei Wu, Andrew Markham, and Niki Trigoni. "Selective Sensor Fusion for Neural Visual-Inertial Odometry." In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2019. http://dx.doi.org/10.1109/cvpr.2019.01079.
Hartzer, Jacob, and Srikanth Saripalli. "Online Multi-IMU Calibration Using Visual-Inertial Odometry." In 2023 IEEE Symposium Sensor Data Fusion and International Conference on Multisensor Fusion and Integration (SDF-MFI). IEEE, 2023. http://dx.doi.org/10.1109/sdf-mfi59545.2023.10361310.
Zhao, Yang, Eric Tkaczyk, and Feng Pan. "Visual and inertial sensor fusion for mobile X-ray detector tracking." In SenSys '20: The 18th ACM Conference on Embedded Networked Sensor Systems. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3384419.3430435.
Bleser, Gabriele, and Didier Stricker. "Advanced tracking through efficient image processing and visual-inertial sensor fusion." In 2008 IEEE Virtual Reality Conference. IEEE, 2008. http://dx.doi.org/10.1109/vr.2008.4480765.
Liu, Tianbo, and Shaojie Shen. "High altitude monocular visual-inertial state estimation: Initialization and sensor fusion." In 2017 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2017. http://dx.doi.org/10.1109/icra.2017.7989528.
Ubezio, Barnaba, Shashank Sharma, Guglielmo Van der Meer, and Michele Taragna. "Kalman Filter Based Sensor Fusion for a Mobile Manipulator." In ASME 2019 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/detc2019-97241.