Academic literature on the topic 'Visual-inertial sensor fusion'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Visual-inertial sensor fusion.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Visual-inertial sensor fusion"

1

Liu, Zhenbin, Zengke Li, Ao Liu, Kefan Shao, Qiang Guo, and Chuanhao Wang. "LVI-Fusion: A Robust Lidar-Visual-Inertial SLAM Scheme." Remote Sensing 16, no. 9 (April 25, 2024): 1524. http://dx.doi.org/10.3390/rs16091524.

Full text
Abstract:
With the development of simultaneous localization and mapping (SLAM) technology in the field of autonomous driving, current SLAM schemes are no longer limited to a single sensor and are developing in the direction of multi-sensor fusion to enhance robustness and accuracy. In this study, a localization and mapping scheme named LVI-Fusion, based on multi-sensor fusion of camera, lidar and IMU, is proposed. Different sensors have different data acquisition frequencies. To solve the problem of time inconsistency in tightly coupled heterogeneous sensor data, a time alignment module is used to align the timestamps of the lidar, camera and IMU. An image segmentation algorithm is used to segment dynamic targets in the image and extract static key points. At the same time, optical flow tracking based on the static key points is carried out, and a robust feature point depth recovery model is proposed to realize robust estimation of feature point depth. Finally, the lidar constraint factor, IMU pre-integration constraint factor and visual constraint factor together construct the error equation, which is processed with a sliding-window-based optimization module. Experimental results show that the proposed algorithm has competitive accuracy and robustness.
APA, Harvard, Vancouver, ISO, and other styles
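As a rough illustration of the time alignment step described in this abstract (not the authors' implementation), the sketch below interpolates asynchronous IMU samples onto camera timestamps with NumPy; the stream names, rates and time offset are assumptions.

```python
import numpy as np

# Hypothetical asynchronous streams: IMU at ~200 Hz, camera at ~20 Hz.
imu_t = np.arange(0.0, 1.0, 0.005)                  # IMU timestamps [s]
imu_gyro = np.random.randn(imu_t.size, 3) * 0.01    # rad/s, placeholder data
cam_t = np.arange(0.0, 1.0, 0.05) + 0.002           # camera timestamps with a small offset

def align_imu_to_camera(imu_t, imu_vals, cam_t):
    """Linearly interpolate each IMU channel onto the camera timestamps."""
    return np.column_stack(
        [np.interp(cam_t, imu_t, imu_vals[:, k]) for k in range(imu_vals.shape[1])]
    )

gyro_at_cam = align_imu_to_camera(imu_t, imu_gyro, cam_t)
print(gyro_at_cam.shape)  # (number of camera frames, 3)
```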
2

Wu, Peng, Rongjun Mu, and Bingli Liu. "Upper Stage Visual Inertial Integrated Navigation Method Based on Factor Graph." Journal of Physics: Conference Series 2085, no. 1 (November 1, 2021): 012018. http://dx.doi.org/10.1088/1742-6596/2085/1/012018.

Full text
Abstract:
In the working process of the upper-stage integrated navigation information fusion system, a multi-source navigation information fusion algorithm based on factor graph Bayesian estimation is used to fuse the information of inertial sensors, visual sensors and other sensors. The overall joint probability distribution of the system is described in the form of a probabilistic graphical model using the dependencies of local variables, so as to reduce the complexity of the system, adjust the data structure of the information fusion to improve its efficiency, and smoothly switch the sensor configuration.
APA, Harvard, Vancouver, ISO, and other styles
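The core operation behind factor-graph Bayesian fusion of independent measurements of a common state can be sketched in information (inverse-covariance) form; the example below is an editorial illustration with made-up numbers, not the paper's algorithm.

```python
import numpy as np

def fuse_information_form(estimates):
    """Fuse independent Gaussian estimates (mean, covariance) of the same state.

    Each factor contributes its information matrix (inverse covariance) and
    information vector; summing them is the basic operation a factor-graph
    back end performs when combining measurements of a common variable.
    """
    info = np.zeros((3, 3))
    info_vec = np.zeros(3)
    for mean, cov in estimates:
        w = np.linalg.inv(cov)
        info += w
        info_vec += w @ mean
    cov_fused = np.linalg.inv(info)
    return cov_fused @ info_vec, cov_fused

# Hypothetical position estimates from an inertial and a visual pipeline.
inertial = (np.array([1.00, 2.00, 0.50]), np.diag([0.20, 0.20, 0.40]))
visual   = (np.array([1.05, 1.95, 0.48]), np.diag([0.05, 0.05, 0.10]))
mean, cov = fuse_information_form([inertial, visual])
print(mean)
```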
3

Martinelli, Agostino, Alexander Oliva, and Bernard Mourrain. "Cooperative Visual-Inertial Sensor Fusion: The Analytic Solution." IEEE Robotics and Automation Letters 4, no. 2 (April 2019): 453–60. http://dx.doi.org/10.1109/lra.2019.2891025.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Xu, Shaofeng, and Somi Lee. "An Inertial Sensing-Based Approach to Swimming Pose Recognition and Data Analysis." Journal of Sensors 2022 (January 27, 2022): 1–12. http://dx.doi.org/10.1155/2022/5151105.

Full text
Abstract:
In this paper, inertial sensing is used to identify swimming stances and analyze swimming stance data. A wireless monitoring device based on a nine-axis micro-inertial sensor is designed for the characteristics of swimming motion, and measurement experiments are conducted for different intensities and stances of swimming motion. By comparing and analyzing the motion characteristics of various swimming stances, a basis for stroke identification is proposed, and the monitoring data characteristics of the experimental results match it. Stance reconstruction technology is studied: PC-based OpenGL multithreaded data synchronization and stance-following reconstruction are designed to reconstruct the joint-association data of multiple nodes in a constrained set, and the reconstruction results are displayed through graphic image rendering. For the whole system, each key technology is organically integrated to design a wearable wireless sensing network-based pose resolution analysis and reconstruction recognition system. Inertial sensors inevitably suffer from drift after long periods of position trajectory tracking. The proposed fusion algorithm corrects the drift of the position estimate using measurements from the visual sensor, and the measurements of the inertial sensor complement the missing visual measurements when the visual sensor is occluded or the upper limb moves quickly. An experimental platform for upper-limb position estimation based on the fusion of inertial and visual sensors is built to verify the effectiveness of the proposed method. Finally, the paper is summarized, and an outlook for further research is provided.
APA, Harvard, Vancouver, ISO, and other styles
5

Lu, Zhufei, Xing Xu, Yihao Luo, Lianghui Ding, Chao Zhou, and Jiarong Wang. "A Visual–Inertial Pressure Fusion-Based Underwater Simultaneous Localization and Mapping System." Sensors 24, no. 10 (May 18, 2024): 3207. http://dx.doi.org/10.3390/s24103207.

Full text
Abstract:
Detecting objects, particularly naval mines, on the seafloor is a complex task. In naval mine countermeasures (MCM) operations, sidescan or synthetic aperture sonars have been used to search large areas. However, a single sensor cannot meet the requirements of high-precision autonomous navigation. Based on the ORB-SLAM3-VI framework, we propose ORB-SLAM3-VIP, which integrates a depth sensor, an IMU and an optical sensor. The method tightly couples the depth and IMU measurements into the visual SLAM algorithm and establishes a multi-sensor fusion SLAM model. Depth constraints are introduced into initialization, scale fine-tuning, tracking and mapping to constrain the position of the sensor along the z-axis and improve the accuracy of pose estimation and map scale estimation. Tests on seven sets of underwater multi-sensor sequence data from the AQUALOC dataset show that, compared with ORB-SLAM3-VI, the proposed ORB-SLAM3-VIP system reduces the scale error in all sequences by up to 41.2%, the trajectory error by up to 41.2%, and the root mean square error by up to 41.6%.
APA, Harvard, Vancouver, ISO, and other styles
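The depth constraint described in this abstract can be pictured, in a generic form rather than the paper's exact formulation, as one extra residual added to the usual tightly coupled visual-inertial cost, where d_k is the pressure-derived depth reading, p_k the estimated position, e_3 the vertical axis, the Sigma terms measurement covariances, and rho an optional robust kernel:

\[
\min_{\mathcal{X}} \;
\sum_{k} \left\| r_{\mathrm{IMU}}(\mathcal{X}, k) \right\|^{2}_{\Sigma_{\mathrm{IMU}}}
+ \sum_{i,j} \left\| r_{\mathrm{cam}}(\mathcal{X}, i, j) \right\|^{2}_{\Sigma_{\mathrm{cam}}}
+ \sum_{k} \rho\!\left( \frac{\left( d_k - \mathbf{e}_3^{\top} \mathbf{p}_{k} \right)^{2}}{\sigma_d^{2}} \right).
\]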
6

Wan, Yingcai, Qiankun Zhao, Cheng Guo, Chenlong Xu, and Lijing Fang. "Multi-Sensor Fusion Self-Supervised Deep Odometry and Depth Estimation." Remote Sensing 14, no. 5 (March 2, 2022): 1228. http://dx.doi.org/10.3390/rs14051228.

Full text
Abstract:
This paper presents a new deep visual-inertial odometry and depth estimation framework for improving the accuracy of depth estimation and ego-motion from image sequences and inertial measurement unit (IMU) raw data. The proposed framework predicts ego-motion and depth with absolute scale in a self-supervised manner. We first capture dense features and solve the pose by deep visual odometry (DVO), and then combine the pose estimation pipeline with deep inertial odometry (DIO) by the extended Kalman filter (EKF) method to produce the sparse depth and pose with absolute scale. We then join deep visual-inertial odometry (DeepVIO) with depth estimation by using the sparse depth and the pose from the DeepVIO pipeline to align the scale of the depth prediction with the triangulated point cloud and reduce the image reconstruction error. Specifically, we use the strengths of learning-based visual-inertial odometry (VIO) and depth estimation to build an end-to-end self-supervised learning architecture. We evaluated the new framework on the KITTI datasets and compared it with previous techniques. We show that our approach improves results for ego-motion estimation and achieves comparable results for depth estimation, especially in detailed areas.
APA, Harvard, Vancouver, ISO, and other styles
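A minimal sketch of aligning a relative depth prediction to a metric point cloud, assuming a simple median-ratio scaling; the paper's pipeline is learned end to end, so treat this only as an illustration of the scale alignment idea, and all array names are placeholders.

```python
import numpy as np

def align_depth_scale(pred_depth, sparse_depth):
    """Scale a relative depth prediction to metric units using sparse metric depth.

    pred_depth   : HxW network prediction (arbitrary scale)
    sparse_depth : HxW metric depth, zero where no triangulated point exists
    """
    mask = sparse_depth > 0
    scale = np.median(sparse_depth[mask] / pred_depth[mask])
    return scale * pred_depth, scale

# Toy example with a constant true scale factor of 2.5.
pred = np.random.uniform(1.0, 10.0, (4, 4))
sparse = np.where(np.random.rand(4, 4) > 0.5, 2.5 * pred, 0.0)
metric, s = align_depth_scale(pred, sparse)
print(round(s, 3))  # ~2.5
```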
7

Kelly, Jonathan, and Gaurav S. Sukhatme. "Visual-Inertial Sensor Fusion: Localization, Mapping and Sensor-to-Sensor Self-calibration." International Journal of Robotics Research 30, no. 1 (November 5, 2010): 56–79. http://dx.doi.org/10.1177/0278364910382802.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Brown, Alison, and Paul Olson. "Navigation and Electro-Optic Sensor Integration Technology for Fusion of Imagery and Digital Mapping Products." Journal of Navigation 53, no. 1 (January 2000): 132–45. http://dx.doi.org/10.1017/s0373463399008735.

Full text
Abstract:
Several military and commercial platforms are currently installing GPS and inertial navigation sensors concurrently with the introduction of high-quality visual capabilities and digital mapping/imagery databases. This enables autonomous geo-registration of sensor imagery using GPS/inertial position and attitude data, and also permits data from digital mapping products to be overlaid automatically on the sensor imagery. This paper describes the system architecture for a Navigation/Electro-Optic Sensor Integration Technology (NEOSIT) software application. The design is highly modular and based on commercial off-the-shelf (COTS) tools to facilitate integration with sensors, navigation and digital data sources already installed on different host platforms.
APA, Harvard, Vancouver, ISO, and other styles
9

Kim, Youngji, Sungho Yoon, Sujung Kim, and Ayoung Kim. "Unsupervised Balanced Covariance Learning for Visual-Inertial Sensor Fusion." IEEE Robotics and Automation Letters 6, no. 2 (April 2021): 819–26. http://dx.doi.org/10.1109/lra.2021.3051571.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Cahyadi, M. N., T. Asfihani, H. F. Suhandri, and S. C. Navisa. "Analysis of GNSS/IMU Sensor Fusion at UAV Quadrotor for Navigation." IOP Conference Series: Earth and Environmental Science 1276, no. 1 (December 1, 2023): 012021. http://dx.doi.org/10.1088/1755-1315/1276/1/012021.

Full text
Abstract:
To determine position and navigate in an unknown environment, UAVs rely on sensors that provide information on position, speed, and orientation. Some sensors provide direct navigation information, such as the Global Navigation Satellite System (GNSS), which provides position data; others are indirect, such as inertial sensors, which provide speed and orientation data. An inertial sensor, commonly known as an Inertial Measurement Unit (IMU), combines acceleration (accelerometer) and angular velocity (gyroscope) data. Performing GNSS/IMU sensor fusion on a quadrotor UAV increases the accuracy of aircraft localization based on a mathematical model involving the Kalman filter approach. The main goal is to improve the coordinates obtained from quadrotor UAV measurements, so that the position of the aircraft is more accurate. Raw GNSS/IMU sensor data are obtained during the flight of the aircraft. Visual comparison is used to determine whether the processed coordinates have better accuracy than the raw data. The results show that the Unscented Kalman Filter (UKF) simulation gives a 3D position accuracy of 0.403 m with respect to the measurement data, a 23.47% improvement over the EKF estimation, which gives a 3D position accuracy of 16.598 m.
APA, Harvard, Vancouver, ISO, and other styles
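A minimal sketch of the kind of GNSS/IMU fusion the abstract describes: a linear 1-D Kalman filter where IMU acceleration drives the prediction and lower-rate GNSS position corrects it. All noise values, rates and inputs are assumptions; the paper's EKF/UKF over the full quadrotor model is considerably richer.

```python
import numpy as np

# State x = [position, velocity]; IMU acceleration is the control input.
dt = 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])          # state transition
B = np.array([0.5 * dt**2, dt])                # acceleration input
H = np.array([[1.0, 0.0]])                     # GNSS measures position only
Q = np.diag([1e-4, 1e-3])                      # process noise (assumed)
R = np.array([[4.0]])                          # GNSS position noise variance (assumed)

x = np.zeros(2)
P = np.eye(2)

def predict(x, P, accel):
    x = F @ x + B * accel
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, gnss_pos):
    y = gnss_pos - H @ x                        # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

for step in range(100):
    x, P = predict(x, P, accel=0.2)             # hypothetical IMU reading
    if step % 10 == 0:                          # GNSS arrives at a lower rate
        x, P = update(x, P, gnss_pos=np.array([0.001 * step]))
```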
More sources

Dissertations / Theses on the topic "Visual-inertial sensor fusion"

1

Aufderheide, Dominik. "VISrec! : visual-inertial sensor fusion for 3D scene reconstruction." Thesis, University of Bolton, 2014. http://ubir.bolton.ac.uk/649/.

Full text
Abstract:
The automatic generation of three-dimensional models by analysing monocular image streams from standard cameras is one fundamental problem in the field of computer vision. A prerequisite for scene modelling is the computation of the camera pose for the different frames of the sequence. Several techniques and methodologies have been introduced during the last decade to solve this classical Structure from Motion (SfM) problem, which incorporates camera egomotion estimation and subsequent recovery of 3D scene structure. However, the applicability of those approaches to real-world devices and applications is still limited, due to unsatisfactory properties in terms of computational costs, accuracy and robustness. Thus, tactile systems and laser scanners are still the predominantly used methods in industry for 3D measurements. This thesis suggests a novel framework for 3D scene reconstruction based on visual-inertial measurements and a corresponding sensor fusion framework. The integration of additional modalities, such as inertial measurements, is useful to compensate for typical problems of systems which rely only on visual information. The complete system is implemented based on a generic framework for designing Multi-Sensor Data Fusion (MSDF) systems. It is demonstrated that the incorporation of inertial measurements into a visual-inertial sensor fusion scheme for scene reconstruction (VISrec!) outperforms classical methods in terms of robustness and accuracy. It can be shown that the combination of visual and inertial modalities for scene reconstruction allows a reduction of the mean reconstruction error of typical scenes by up to 30%. Furthermore, the number of 3D feature points which can be successfully reconstructed can be nearly doubled. In addition, range and RGB-D sensors have been successfully incorporated into the VISrec! scheme, proving the general applicability of the framework. By this it is possible to increase the number of 3D points within the reconstructed point cloud by a factor of five hundred compared to standard visual SfM. Finally, the applicability of the VISrec! sensor to a specific industrial problem, in cooperation with a local company, for reverse engineering of tailor-made car racing components demonstrates the usefulness of the developed system.
APA, Harvard, Vancouver, ISO, and other styles
2

Larsson, Olof. "Visual-inertial tracking using Optical Flow measurements." Thesis, Linköping University, Automatic Control, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-59970.

Full text
Abstract:
Visual-inertial tracking is a well-known technique for tracking a combination of a camera and an inertial measurement unit (IMU). An issue with the straightforward approach is the need for known 3D points. To bypass this, 2D information can be used, without recovering depth, to estimate the position and orientation (pose) of the camera. This Master's thesis investigates the feasibility of using Optical Flow (OF) measurements and indicates the benefits of this approach.

The 2D information is added using OF measurements. OF describes the visual flow of interest points in the image plane. Without the necessity of estimating the depth of these points, the computational complexity is reduced. With the increased 2D information, the 3D information required for the pose estimate decreases.

The usage of 2D points for the pose estimation has been verified with experimental data gathered by a real camera/IMU system. Several data sequences containing different trajectories are used to estimate the pose. It is shown that OF measurements can be used to improve visual-inertial tracking with a reduced need for 3D-point registrations.

APA, Harvard, Vancouver, ISO, and other styles
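As background to the use of optical flow without depth recovery, the image motion field of a normalized point (x, y) at depth Z, written in one common convention (not necessarily the thesis's notation), splits into a depth-dependent translational part and a depth-free rotational part:

\[
\begin{aligned}
\dot{x} &= \frac{x\,v_z - v_x}{Z} + x y\,\omega_x - (1 + x^2)\,\omega_y + y\,\omega_z,\\
\dot{y} &= \frac{y\,v_z - v_y}{Z} + (1 + y^2)\,\omega_x - x y\,\omega_y - x\,\omega_z,
\end{aligned}
\]

where v is the translational and omega the angular velocity of the camera. Because the rotational terms do not involve Z, flow measurements constrain the camera motion even when point depths are not estimated.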
3

Zachariah, Dave. "Fusing Visual and Inertial Information." Licentiate thesis, KTH, Signalbehandling, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-32112.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Panahandeh, Ghazaleh. "Selected Topics in Inertial and Visual Sensor Fusion : Calibration, Observability Analysis and Applications." Doctoral thesis, KTH, Signalbehandling, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-142602.

Full text
Abstract:
Recent improvements in the development of inertial and visual sensors allow building small, lightweight, and cheap motion capture systems, which are becoming a standard feature of smartphones and personal digital assistants. This dissertation describes developments of new motion sensing strategies using the inertial and inertial-visual sensors. The thesis contributions are presented in two parts.

The first part focuses mainly on the use of inertial measurement units. First, the problem of sensor calibration is addressed and a low-cost and accurate method to calibrate the accelerometer cluster of this unit is proposed. The method is based on the maximum likelihood estimation framework, which results in a minimum variance unbiased estimator. Then, using the inertial measurement unit, a probabilistic user-independent method is proposed for pedestrian activity classification and gait analysis. The work targets two groups of applications including human activity classification and joint human activity and gait-phase classification. The developed methods are based on continuous hidden Markov models. The achieved relative figure-of-merits using the collected data validate the reliability of the proposed methods for the desired applications.

In the second part, the problem of inertial and visual sensor fusion is studied. This part describes the contributions related to sensor calibration, motion estimation, and observability analysis. The proposed visual-inertial schemes in this part can mainly be divided into three systems. For each system, an estimation approach is proposed and its observability properties are analyzed. Moreover, the performances of the proposed methods are illustrated using both simulations and experimental data. Firstly, a novel calibration scheme is proposed to estimate the relative transformation between the inertial and visual sensors, which are rigidly mounted together. The main advantage of the developed method is that the calibration is performed using a planar mirror instead of using a calibration pattern. By performing the observability analysis for this system, it is proved that the calibration parameters are observable. Moreover, the achieved results show subcentimeter and subdegree accuracy for the calibration parameters. Secondly, an ego-motion estimation approach is introduced that is based on using horizontal plane features where the camera is restricted to be downward looking. The observability properties of this system are then analyzed when only one feature point is used. In particular, it is proved that the system has only three unobservable directions corresponding to global translations parallel to the horizontal plane, and rotations around the gravity vector. Hence, compared to general visual-inertial navigation systems, an advantage of the proposed system is that the vertical translation becomes observable. Finally, a 6-DoF positioning system is developed based on using only planar features on a desired horizontal plane. Compared to the previously mentioned approach, the restriction of using a downward looking camera is relaxed, while the observability properties of the system are preserved. The achieved results indicate promising accuracy and reliability of the proposed algorithm and validate the findings of the theoretical analysis and 6-DoF motion estimation. The proposed motion estimation approach is then extended by developing a new planar feature detection method. Hence, a complete positioning approach is introduced, which simultaneously performs 6-DoF motion estimation and horizontal plane feature detection.


APA, Harvard, Vancouver, ISO, and other styles
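As a generic illustration of accelerometer calibration from static poses (the thesis's maximum likelihood formulation may differ in detail), the raw measurement is often modelled with a scale and misalignment matrix S and a bias b, and the parameters are chosen so that the norm of the calibrated output matches gravity g in every static orientation k:

\[
\mathbf{a}_{\mathrm{meas}} = \mathbf{S}\,\mathbf{a} + \mathbf{b} + \mathbf{n}, \qquad
(\hat{\mathbf{S}}, \hat{\mathbf{b}}) = \arg\min_{\mathbf{S},\,\mathbf{b}} \sum_{k}
\left( \big\| \mathbf{S}^{-1}\!\left(\mathbf{a}_{\mathrm{meas},k} - \mathbf{b}\right) \big\| - g \right)^{2}.
\]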
5

Burusa, Akshay Kumar. "Visual-Inertial Odometry for Autonomous Ground Vehicles." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-217284.

Full text
Abstract:
Monocular cameras are prominently used for estimating motion of Unmanned Aerial Vehicles. With growing interest in autonomous vehicle technology, the use of monocular cameras in ground vehicles is on the rise. This is especially favorable for localization in situations where Global Navigation Satellite System (GNSS) is unreliable, such as open-pit mining environments. However, most monocular camera based approaches suffer due to obscure scale information. Ground vehicles impose a greater difficulty due to high speeds and fast movements. This thesis aims to estimate the scale of monocular vision data by using an inertial sensor in addition to the camera. It is shown that the simultaneous estimation of pose and scale in autonomous ground vehicles is possible by the fusion of visual and inertial sensors in an Extended Kalman Filter (EKF) framework. However, the convergence of scale is sensitive to several factors including the initialization error. An accurate estimation of scale allows the accurate estimation of pose. This facilitates the localization of ground vehicles in the absence of GNSS, providing a reliable fall-back option.
APA, Harvard, Vancouver, ISO, and other styles
6

Wisely, Babu Benzun. "Motion Conflict Detection and Resolution in Visual-Inertial Localization Algorithm." Digital WPI, 2018. https://digitalcommons.wpi.edu/etd-dissertations/503.

Full text
Abstract:
In this dissertation, we have focused on conflicts that occur due to disagreeing motions in multi-modal localization algorithms. In spite of the recent achievements in robust localization by means of multi-sensor fusion, these algorithms are not applicable to all environments. This is primarily attributed to the following fundamental assumptions: (i) the environment is predominantly stationary, (ii) only ego-motion of the sensor platform exists, and (iii) multiple sensors are always in agreement with each other regarding the observed motion. Recently, studies have shown how to relax the static environment assumption using outlier rejection techniques and dynamic object segmentation. Additionally, to handle non-ego-motion, approaches that extend the localization algorithm to multi-body tracking have been studied. However, no attention has been given to conditions where multiple sensors contradict each other with regard to the motions observed. Vision-based localization has become an attractive approach for both indoor and outdoor applications due to the large information bandwidth provided by images and the reduced cost of the cameras used. In order to improve robustness and overcome the limitations of vision, an Inertial Measurement Unit (IMU) may be used. Even though visual-inertial localization has better accuracy and improved robustness due to the complementary nature of the camera and IMU, it is affected by disagreements in motion observations. We term such dynamic situations environments with motion conflict, because they are caused when multiple different but self-consistent motions are observed by different sensors. Tightly coupled visual-inertial fusion approaches that disregard such challenging situations exhibit drift that can lead to catastrophic errors. We have provided a probabilistic model for motion conflict. Additionally, a novel algorithm to detect and resolve motion conflicts is presented. Our method to detect motion conflicts is based on per-frame positional estimate discrepancy and per-landmark reprojection errors. Motion conflicts are resolved by eliminating inconsistent IMU and landmark measurements. Finally, a Motion Conflict aware Visual Inertial Odometry (MC-VIO) algorithm that combines both detection and resolution of motion conflict was implemented. Both quantitative and qualitative evaluations of MC-VIO on visually and inertially challenging datasets were performed. Experimental results indicate that the MC-VIO algorithm reduces the absolute trajectory error by 70% and the relative pose error by 34% in scenes with motion conflict, in comparison to the reference VIO algorithm. Motion conflict detection and resolution enables the application of visual-inertial localization algorithms to real dynamic environments. This paves the way for articulated object tracking in robotics. It may also find numerous applications in active long-term augmented reality.
APA, Harvard, Vancouver, ISO, and other styles
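A toy sketch of the detection side described above, flagging a frame when the fused and IMU-propagated positions diverge or when too many landmarks have large reprojection errors; the thresholds and variable names are illustrative assumptions, not values from the dissertation.

```python
import numpy as np

def detect_motion_conflict(vio_pos, imu_only_pos, reproj_errors,
                           pos_tol=0.15, reproj_tol=2.0, outlier_ratio=0.5):
    """Flag a frame as a motion-conflict candidate.

    vio_pos, imu_only_pos : fused vs. IMU-propagated position estimates [m]
    reproj_errors         : per-landmark reprojection errors [pixels]
    """
    positional_discrepancy = np.linalg.norm(vio_pos - imu_only_pos)
    landmark_outliers = np.mean(reproj_errors > reproj_tol)
    return positional_discrepancy > pos_tol or landmark_outliers > outlier_ratio

conflict = detect_motion_conflict(
    vio_pos=np.array([1.0, 0.0, 0.0]),
    imu_only_pos=np.array([1.3, 0.1, 0.0]),
    reproj_errors=np.array([0.8, 5.2, 6.1, 0.9]),
)
print(conflict)
```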
7

Gintrand, Pierre. "Estimation de l'état d'un hélicoptère par vision monoculaire en environnement inconnu." Electronic Thesis or Diss., Université Côte d'Azur, 2024. http://www.theses.fr/2024COAZ4021.

Full text
Abstract:
Vision is the primary means for helicopter pilots to perceive and evaluate the surrounding environment, especially when navigating near terrain or close to obstacles not listed on aeronautical charts. However, despite over half a century of research into the use of vision in robotics, few of the results have been transferred technologically to aid aircraft piloting. Thanks to developments in computing resources, recent decades have seen the emergence of computer vision techniques, which now enable the processing, analysis, and understanding of digital images to extract and interpret information. For several decades, Airbus Helicopters has equipped its medium and heavy helicopter range with an autopilot system to improve flying qualities, and to offer piloting aids such as hovering and trajectory following. The company is now considering integrating visual sensors into its helicopters to enhance the robustness of its kinematic state estimation (position, speed, attitude), crucial information for the autopilot. Thus, the thesis focuses on the synthesis of nonlinear observers for state estimation of a visual-inertial system, using Riccati-type techniques to fuse visual and inertial sensors. The deterministic nature of the proposed observers has allowed determining sufficient conditions, expressed in terms of positioning and number of source points, and persistent excitation of camera motion, for which exponential and local stability is formally demonstrated. This aspect is particularly valuable in designing technological bricks intended for integration into systems subject to rigorous certification constraints. The performance of the proposed solution is compared to state-of-the-art algorithms using datasets provided by the scientific community
APA, Harvard, Vancouver, ISO, and other styles
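The Riccati-type observers referred to above generally follow the deterministic structure below, given here only as background and not as the thesis's exact equations; A, C, Q and V are the system, output and tuning matrices of the particular estimation problem:

\[
\dot{\hat{x}} = A(t)\,\hat{x} + B(t)\,u + K(t)\bigl(y - C(t)\,\hat{x}\bigr), \qquad
K(t) = P\,C(t)^{\top} Q(t),
\]
\[
\dot{P} = A(t)\,P + P\,A(t)^{\top} - P\,C(t)^{\top} Q(t)\,C(t)\,P + V(t),
\]

with Q and V positive definite. Local exponential stability of the estimation error is then obtained under uniform observability (persistent excitation) conditions, which is the kind of deterministic guarantee the abstract highlights for certification purposes.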
8

Manerikar, Ninad. "Fusion de capteurs visuels-inertiels et estimation d'état pour la navigation des véhicules autonomes." Thesis, Université Côte d'Azur, 2022. http://www.theses.fr/2022COAZ4111.

Full text
Abstract:
Accurate state estimation is a fundamental problem for the navigation of autonomous vehicles. This is particularly important when the vehicle is navigating through cluttered environments or has to navigate in close proximity to its physical surroundings in order to perform localization, obstacle avoidance, environmental mapping, etc. Although several algorithms were proposed in the past for this problem of state estimation, they were usually applied to a single sensor or a specific sensor suite. To this end, researchers in the computer vision and control communities came up with a visual-inertial framework (camera + IMU) that exploits the combined properties of this sensor suite to produce precise local estimates (position, orientation, velocity, etc.). Taking inspiration from this, my thesis focuses on developing nonlinear observers for state estimation by exploiting the classical Riccati design framework, with a particular emphasis on visual-inertial sensor fusion. In the context of this thesis, we use a suite of low-cost sensors consisting of a monocular camera and an IMU. Throughout the thesis, the assumption of planarity of the visual target is made. Two research topics have been considered. Firstly, an extensive study of existing techniques for homography estimation has been carried out, after which a novel nonlinear observer on the SL(3) group has been proposed, with application to optical flow estimation. The novelty lies in the linearization approach undertaken to linearize a nonlinear observer on SL(3), making it simpler and more suitable for practical implementation. Then, another observer based on a deterministic Riccati design has been proposed for the problem of partial attitude, linear velocity and depth estimation for planar targets. The proposed approach does not rely on the strong assumption that the IMU provides measurements of the vehicle's linear acceleration in the body-fixed frame. Experimental validations have been carried out to show the performance of the observer. An extension of this observer has been further proposed to filter the noisy optical flow estimates obtained from the extraction of the continuous homography. Secondly, two novel observers for tackling the classical problem of homography decomposition have been proposed. The key contribution here lies in the design of two deterministic Riccati observers that address the homography decomposition problem recursively instead of solving it on a frame-by-frame basis like traditional algebraic approaches. The performance and robustness of the observers have been validated in simulations and practical experiments. All the observers proposed above are part of the HomographyLab library, which has been evaluated at TRL 7 (Technology Readiness Level) and is protected by the French APP (Agency for the Protection of Programs); it serves as the main building block for various applications such as velocity and optical flow estimation and visual homography-based stabilization.
APA, Harvard, Vancouver, ISO, and other styles
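For reference, the planar-scene homography that these observers estimate and decompose relates two views of a plane with unit normal n at distance d from the first camera by

\[
\mathbf{H} \;\simeq\; \mathbf{R} + \frac{1}{d}\,\mathbf{t}\,\mathbf{n}^{\top},
\]

where R and t are the rotation and translation between the two camera poses. Decomposition recovers the rotation R, the scaled translation t/d and the normal n from H; the thesis estimates these quantities recursively with Riccati observers rather than algebraically, frame by frame.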
9

Khairallah, Mahmoud. "Flow-Based Visual-Inertial Odometry for Neuromorphic Vision Sensors." Electronic Thesis or Diss., université Paris-Saclay, 2022. http://www.theses.fr/2022UPAST117.

Full text
Abstract:
Rather than generating images constantly and synchronously, neuromorphic vision sensors, also known as event-based cameras, permit each pixel to provide information independently and asynchronously whenever a brightness change is detected. Consequently, neuromorphic vision sensors do not encounter the problems of conventional frame-based cameras such as image artifacts and motion blur. Furthermore, they can provide lossless data compression, higher temporal resolution and higher dynamic range. Hence, event-based cameras conveniently replace frame-based cameras in robotic applications requiring high maneuverability and varying environmental conditions. In this thesis, we address the problem of visual-inertial odometry using event-based cameras and an inertial measurement unit. Exploiting the consistency of event-based cameras with the brightness constancy conditions, we discuss the feasibility of building a visual odometry system based on optical flow estimation. We develop our approach based on the assumption that event-based cameras provide edge-like information about the objects in the scene and apply a line detection algorithm for data reduction. Line tracking allows us to gain more time for computations and provides a better representation of the environment than feature points. In this thesis, we do not only show an approach for event-based visual-inertial odometry but also event-based algorithms that can be used as stand-alone algorithms or integrated into other approaches if needed.
APA, Harvard, Vancouver, ISO, and other styles
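Two standard relations underlie the approach sketched in this abstract, given here in their idealized textbook form rather than the thesis's notation. An event is emitted at a pixel when the log intensity changes by the contrast threshold C,

\[
\log I(x, y, t) - \log I(x, y, t - \Delta t) = \pm C,
\]

and flow-based processing rests on the brightness constancy constraint

\[
\nabla I \cdot \mathbf{v} + \frac{\partial I}{\partial t} = 0,
\]

which links the spatial image gradient, the image-plane velocity v and the temporal intensity change.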
10

Wu, Huang-Yi, and 吳皇毅. "Fusion of Inertial Measurement and Visual Sensor for Simultaneous Localization and Mapping." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/41510478187409324977.

Full text
Abstract:
Master's thesis
Tamkang University
Master's Program, Department of Mechanical and Electro-Mechanical Engineering
Academic year 104 (ROC calendar)
This study investigates the issues of inertial measurement unit (IMU) assisted monocular simultaneous localization and mapping (SLAM). The speeded-up robust features (SURF) algorithm is used for interest point detection and description. The positions of environment landmarks are represented by the inverse depth parameterization method. The positions of the camera and landmarks are estimated using the extended Kalman filter (EKF). The map scale for monocular SLAM initialization is estimated from the displacement measured by the IMU. The experimental results demonstrate that the IMU successfully initializes monocular SLAM.
APA, Harvard, Vancouver, ISO, and other styles
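Two standard ingredients mentioned in this abstract can be written compactly. With inverse depth parameterization, a landmark first observed from camera position c along the unit bearing m(theta, phi) with inverse depth rho has position

\[
\mathbf{p} = \mathbf{c} + \frac{1}{\rho}\, \mathbf{m}(\theta, \phi),
\]

and one simple way to set the initial monocular map scale from the IMU, given as an illustration consistent with the abstract rather than the thesis's exact method, is \( s = \lVert \Delta \mathbf{p}_{\mathrm{IMU}} \rVert / \lVert \Delta \mathbf{p}_{\mathrm{cam}} \rVert \), the ratio of the IMU-integrated displacement to the visually estimated displacement over the same interval.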

Book chapters on the topic "Visual-inertial sensor fusion"

1

He, Hongsheng, Yan Li, and Jindong Tan. "Rotational Coordinate Transformation for Visual-Inertial Sensor Fusion." In Social Robotics, 431–40. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-47437-3_42.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Lu, Yao, Xiaoxu Yin, Feng Qin, Ke Huang, Menghua Zhang, and Weijie Huang. "A Lightweight Sensor Fusion for Neural Visual Inertial Odometry." In International Conference on Neural Computing for Advanced Applications, 46–59. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-5847-4_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Wang, Guan, Yifeng Pan, and Hui Zhou. "Fusion of Inertial and Visual Sensor Data for Accurate Localization." In Advances in Intelligent Automation and Soft Computing, 758–66. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-81007-8_86.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Li, Tong, Juntao Wang, Yi Chen, and Tianyun Dong. "Visual–Inertial Sensor Fusion and OpenSim Based Body Pose Estimation." In Intelligent Robotics and Applications, 279–85. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-6486-4_24.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Marcon, Marco, Augusto Sarti, and Stefano Tubaro. "Smart Toothbrushes: Inertial Measurement Sensors Fusion with Visual Tracking." In Lecture Notes in Computer Science, 480–94. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-48881-3_33.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Lin, Tianliang, Zhongyuan He, Jiangdong Wu, Qihuai Chen, and Shengjie Fu. "Intelligent Construction Machinery SLAM with Stereo Vision and Inertia Fusion." In Lecture Notes in Mechanical Engineering, 1035–47. Singapore: Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-97-1876-4_82.

Full text
Abstract:
Positioning technology is the foundation of intelligent construction machinery. The current mainstream positioning solution is simultaneous localization and mapping (SLAM) technology, which is mainly divided into lidar SLAM and visual SLAM. Lidar is costly and easily degrades or even fails in scenes with monotonous environmental texture, while vision sensors are inexpensive and capture rich environmental texture information, which can effectively avoid degradation problems. In order to reduce the localization cost of intelligent construction machinery and improve positioning accuracy, this work builds on the VINS-Fusion stereo visual-inertial tightly coupled system framework: an improved Random Sample Consensus (RANSAC) algorithm is used to reduce feature mismatches, and the Huber kernel function is used to constrain the IMU residuals and visual residuals to improve the performance of the SLAM system. Compared with the mainstream VINS-Fusion algorithm, the positioning root mean square error of this method on the EuRoC dataset is reduced by an average of 12.41%, improving the positioning accuracy; at the same time, experimental results in a real scene show that the motion trajectory of the algorithm is closer to the real trajectory than that of VINS-Fusion, which verifies the effectiveness of the method.
APA, Harvard, Vancouver, ISO, and other styles
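The Huber kernel mentioned in the abstract is the standard robust loss

\[
\rho_{\delta}(r) =
\begin{cases}
\tfrac{1}{2} r^{2}, & |r| \le \delta,\\[2pt]
\delta\left(|r| - \tfrac{1}{2}\delta\right), & |r| > \delta,
\end{cases}
\]

applied here to the IMU and visual residuals so that large, likely mismatched residuals grow only linearly instead of quadratically in the optimization cost; delta is a user-chosen transition threshold.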
7

Duc, Tran Minh, and Hee-Jun Kang. "Fusion of Vision and Inertial Sensors for Position-Based Visual Servoing of a Robot Manipulator." In Intelligent Computing Theories, 536–45. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-39479-9_63.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Stepanov, Dmitrii, Alexander Popov, Dmitrii Gromoshinskii, and Oleg Shmakov. "Visual-Inertial Sensor Fusion to Accuracy Increase of Autonomous Underwater Vehicles Positioning." In Proceedings of the 29th International DAAAM Symposium 2018, 0615–23. DAAAM International Vienna, 2018. http://dx.doi.org/10.2507/29th.daaam.proceedings.089.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Casha, Owen. "A Comparative Analysis and Review of Indoor Positioning Systems and Technologies." In Innovation in Indoor Positioning Systems [Working Title]. IntechOpen, 2024. http://dx.doi.org/10.5772/intechopen.1005185.

Full text
Abstract:
This chapter presents a comparative analysis and review of indoor positioning systems, both from an algorithm and a technology point of view. It sheds light on the evolving landscape of location-based services within confined spaces. The review encompasses a diverse range of technologies employed in indoor positioning systems, including Wi-Fi-based systems, Bluetooth low-energy solutions, radio frequency identification technologies, ultra-wideband, inertial measurement units, visual-based systems, and sensor fusion approaches amongst many others. By summarising a multitude of research findings and technological advancements, the chapter offers insights into the strengths, limitations, and emerging trends within the field. Furthermore, it critically assesses the performance metrics of various indoor positioning systems, thus providing a comprehensive guide for researchers, developers, and practitioners. The comparative analysis delves into the practical implications of these systems, by considering factors such as design and deployment cost, power efficiency, and adaptability to different indoor environments. The main types of signal acquisition and position estimation techniques used in indoor positioning systems are discussed, while providing the advantages and disadvantages of each approach. This chapter aims to contribute to the advancement of indoor positioning technology, by offering valuable perspectives for future research directions and practical applications.
APA, Harvard, Vancouver, ISO, and other styles
10

Troll, Péter, Károly Szipka, and Andreas Archenti. "Indoor Localization of Quadcopters in Industrial Environment." In Advances in Transdisciplinary Engineering. IOS Press, 2020. http://dx.doi.org/10.3233/atde200183.

Full text
Abstract:
The research work in this paper was carried out to achieve advanced positioning capabilities of unmanned aerial vehicles (UAVs) for indoor applications. The paper includes the design of a quadcopter and the implementation of a control system capable of positioning the quadcopter indoors using an onboard visual pose estimation system, without the help of GPS. The project also covered the design and implementation of the quadcopter hardware and control software. The developed hardware enables the quadcopter to carry at least 0.5 kg of additional payload. The system was developed on a Raspberry Pi single-board computer in combination with a PixHawk flight controller. The OpenCV library was used to implement the necessary computer vision. The open-source software solution was developed in the Robot Operating System (ROS) environment; it performs sensor reading and communication with the flight controller while recording data about its operation and transmitting those data to the user interface. For vision-based position estimation, pre-positioned printed markers were used. The markers were generated with ArUco coding, which, with the help of computer vision, exactly defines the current position and orientation of the quadcopter. The resulting data were processed in the ROS environment. A LiDAR with the Hector SLAM algorithm was used to map the objects around the quadcopter. The project also deals with the necessary camera calibration. The fusion of signals from the camera and the IMU (Inertial Measurement Unit) was achieved using an Extended Kalman Filter (EKF). The evaluation of the completed positioning system was performed with an OptiTrack optical multi-camera measurement system. The introduced evaluation method is precise enough to investigate the enhancement of the positioning performance of quadcopters, as well as to fine-tune the parameters of the controller and the filtering approach. The payload capacity allows autonomous material handling indoors. Based on the experiments, the positioning system is accurate enough to be suitable for industrial applications.
APA, Harvard, Vancouver, ISO, and other styles
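A minimal sketch of the ArUco detection step feeding such a pipeline, assuming opencv-contrib-python 4.7 or newer (ArucoDetector API); the intrinsics, marker size and dictionary are placeholders, and the pose is recovered with solvePnP from the four marker corners rather than any project-specific routine.

```python
import cv2
import numpy as np

# Placeholder intrinsics and a 5 cm marker; real values come from calibration.
K = np.array([[600.0, 0.0, 320.0], [0.0, 600.0, 240.0], [0.0, 0.0, 1.0]])
dist = np.zeros(5)
marker_len = 0.05
obj_pts = 0.5 * marker_len * np.array(
    [[-1.0, 1.0, 0.0], [1.0, 1.0, 0.0], [1.0, -1.0, 0.0], [-1.0, -1.0, 0.0]],
    dtype=np.float32,
)

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

gray = np.full((480, 640), 255, dtype=np.uint8)   # replace with a real camera frame
corners, ids, _rejected = detector.detectMarkers(gray)

marker_poses = []
if ids is not None:
    for marker_corners in corners:
        img_pts = marker_corners.reshape(-1, 2).astype(np.float32)
        ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist)
        if ok:
            marker_poses.append((rvec, tvec))      # marker pose in the camera frame
```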

Conference papers on the topic "Visual-inertial sensor fusion"

1

Troncoso, Juan Manuel Reyes, and Alexander Cerón Correa. "Visual and Inertial Odometry Based on Sensor Fusion." In 2024 XXIV Symposium of Image, Signal Processing, and Artificial Vision (STSIVA), 1–5. IEEE, 2024. http://dx.doi.org/10.1109/stsiva63281.2024.10637841.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Martinelli, Agostino, and Alessandro Renzaglia. "Cooperative visual-inertial sensor fusion: Fundamental equations." In 2017 International Symposium on Multi-Robot and Multi-Agent Systems (MRS). IEEE, 2017. http://dx.doi.org/10.1109/mrs.2017.8250927.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Tsotsos, Konstantine, Alessandro Chiuso, and Stefano Soatto. "Robust inference for visual-inertial sensor fusion." In 2015 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2015. http://dx.doi.org/10.1109/icra.2015.7139924.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Stapleton, Mehdi P., Md Zulfiquar Ali Bhotto, and Ivan V. Bajic. "A simulation environment for visual-inertial sensor fusion." In 2016 IEEE Canadian Conference on Electrical and Computer Engineering (CCECE). IEEE, 2016. http://dx.doi.org/10.1109/ccece.2016.7726705.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Chen, Changhao, Stefano Rosa, Yishu Miao, Chris Xiaoxuan Lu, Wei Wu, Andrew Markham, and Niki Trigoni. "Selective Sensor Fusion for Neural Visual-Inertial Odometry." In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2019. http://dx.doi.org/10.1109/cvpr.2019.01079.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Hartzer, Jacob, and Srikanth Saripalli. "Online Multi-IMU Calibration Using Visual-Inertial Odometry." In 2023 IEEE Symposium Sensor Data Fusion and International Conference on Multisensor Fusion and Integration (SDF-MFI). IEEE, 2023. http://dx.doi.org/10.1109/sdf-mfi59545.2023.10361310.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Zhao, Yang, Eric Tkaczyk, and Feng Pan. "Visual and inertial sensor fusion for mobile X-ray detector tracking." In SenSys '20: The 18th ACM Conference on Embedded Networked Sensor Systems. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3384419.3430435.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Bleser, Gabriele, and Didier Stricker. "Advanced tracking through efficient image processing and visual-inertial sensor fusion." In 2008 IEEE Virtual Reality Conference. IEEE, 2008. http://dx.doi.org/10.1109/vr.2008.4480765.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Liu, Tianbo, and Shaojie Shen. "High altitude monocular visual-inertial state estimation: Initialization and sensor fusion." In 2017 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2017. http://dx.doi.org/10.1109/icra.2017.7989528.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Ubezio, Barnaba, Shashank Sharma, Guglielmo Van der Meer, and Michele Taragna. "Kalman Filter Based Sensor Fusion for a Mobile Manipulator." In ASME 2019 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/detc2019-97241.

Full text
Abstract:
End-effector tracking for a mobile manipulator is achieved through sensor fusion techniques, implemented with a particular visual-inertial sensor suite and an Extended Kalman Filter algorithm. The suite is composed of an OptiTrack motion capture system and a Honeywell HG4930 MEMS IMU, for which a further analysis of the mathematical noise model is reported. The filter is constructed in such a way that its complexity remains constant and independent of the visual algorithm, with the possibility of inserting additional sensors to further improve the estimation accuracy. Real-time experiments have been performed with the 12-DOF KUKA VALERI robot, extracting the position and orientation of the end-effector and comparing their estimates with pure sensor measurements. Along with the physical results, issues related to calibration, working frequency and physical mounting are described.
APA, Harvard, Vancouver, ISO, and other styles
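The IMU noise analysis mentioned above typically starts from the standard continuous-time measurement model, stated here in one common convention rather than the paper's exact one:

\[
\boldsymbol{\omega}_{m} = \boldsymbol{\omega} + \mathbf{b}_g + \mathbf{n}_g, \qquad
\mathbf{a}_{m} = \mathbf{R}^{\top}(\mathbf{a} - \mathbf{g}) + \mathbf{b}_a + \mathbf{n}_a, \qquad
\dot{\mathbf{b}}_g = \mathbf{n}_{b_g}, \quad \dot{\mathbf{b}}_a = \mathbf{n}_{b_a},
\]

where R is the body-to-world rotation, the white noise terms n model the angular and velocity random walk, and the bias random walks capture bias instability; their spectral densities are usually taken from the sensor datasheet or from Allan variance analysis.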
