
Journal articles on the topic 'Visual-inertial sensor fusion'



Consult the top 50 journal articles for your research on the topic 'Visual-inertial sensor fusion.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Liu, Zhenbin, Zengke Li, Ao Liu, Kefan Shao, Qiang Guo, and Chuanhao Wang. "LVI-Fusion: A Robust Lidar-Visual-Inertial SLAM Scheme." Remote Sensing 16, no. 9 (April 25, 2024): 1524. http://dx.doi.org/10.3390/rs16091524.

Full text
Abstract:
With the development of simultaneous localization and mapping technology in the field of automatic driving, current SLAM schemes are no longer limited to a single sensor and are developing in the direction of multi-sensor fusion to enhance robustness and accuracy. In this study, a localization and mapping scheme named LVI-Fusion, based on multi-sensor fusion of camera, lidar and IMU, is proposed. Different sensors have different data acquisition frequencies. To solve the problem of time inconsistency in tightly coupling heterogeneous sensor data, a time alignment module is used to align the timestamps of the lidar, camera and IMU. An image segmentation algorithm is used to segment dynamic targets in the image and extract static key points. At the same time, optical flow tracking based on the static key points is carried out, and a robust feature-point depth recovery model is proposed to achieve robust estimation of feature-point depth. Finally, the lidar constraint factor, IMU pre-integration constraint factor and visual constraint factor together construct the error equation, which is processed by a sliding-window-based optimization module. Experimental results show that the proposed algorithm has competitive accuracy and robustness.
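The time-alignment step mentioned in this abstract is the kind of operation a short sketch can make concrete. Below is a minimal illustration, not the authors' module: higher-rate IMU samples are linearly interpolated to the timestamps of a lower-rate sensor; the function name, rates and data are assumptions.

```python
import numpy as np

def interpolate_imu(imu_t, imu_gyro, query_t):
    """Linearly interpolate gyroscope samples to arbitrary query timestamps.

    imu_t   : (N,) ascending IMU timestamps [s]
    imu_gyro: (N, 3) angular-rate samples
    query_t : (M,) timestamps of another sensor (e.g. lidar scan times)
    """
    return np.stack([np.interp(query_t, imu_t, imu_gyro[:, k]) for k in range(3)], axis=1)

# Toy usage: a 200 Hz IMU aligned to 10 Hz lidar timestamps.
imu_t = np.arange(0.0, 1.0, 1.0 / 200.0)
imu_gyro = np.column_stack([np.sin(imu_t), np.cos(imu_t), 0.1 * imu_t])
lidar_t = np.arange(0.05, 1.0, 0.1)
aligned = interpolate_imu(imu_t, imu_gyro, lidar_t)
print(aligned.shape)  # (10, 3)
```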
APA, Harvard, Vancouver, ISO, and other styles
2

Wu, Peng, Rongjun Mu, and Bingli Liu. "Upper Stage Visual Inertial Integrated Navigation Method Based on Factor Graph." Journal of Physics: Conference Series 2085, no. 1 (November 1, 2021): 012018. http://dx.doi.org/10.1088/1742-6596/2085/1/012018.

Full text
Abstract:
In the working process of the upper-stage integrated navigation information fusion system, a multi-source navigation information fusion algorithm based on factor-graph Bayesian estimation is used to fuse the information of inertial sensors, visual sensors and other sensors. The overall joint probability distribution of the system is described in the form of a probabilistic graphical model that captures the dependence of local variables, so as to reduce the complexity of the system, adjust the data structure of information fusion to improve its efficiency, and smoothly switch the sensor configuration.
APA, Harvard, Vancouver, ISO, and other styles
3

Martinelli, Agostino, Alexander Oliva, and Bernard Mourrain. "Cooperative Visual-Inertial Sensor Fusion: The Analytic Solution." IEEE Robotics and Automation Letters 4, no. 2 (April 2019): 453–60. http://dx.doi.org/10.1109/lra.2019.2891025.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Xu, Shaofeng, and Somi Lee. "An Inertial Sensing-Based Approach to Swimming Pose Recognition and Data Analysis." Journal of Sensors 2022 (January 27, 2022): 1–12. http://dx.doi.org/10.1155/2022/5151105.

Full text
Abstract:
In this paper, inertial sensing is used to identify swimming stances and analyze the corresponding stance data. A wireless monitoring device based on a nine-axis micro-inertial sensor is designed for the characteristics of swimming motion, and measurement experiments are conducted for different intensities and stances of swimming motion. By comparing and analyzing the motion characteristics of various swimming stances, a basis for stroke identification is proposed, and the monitored data characteristics of the experimental results match it. Stance reconstruction technology is studied: PC-based OpenGL multithreaded data synchronization and stance-following reconstruction are designed to reconstruct the joint association data of multiple nodes in a constrained set, and the reconstruction results are displayed through graphic image rendering. For the whole system, each key technology is organically integrated to design a wearable, wireless sensing network-based pose resolution, analysis and reconstruction recognition system. Inertial sensors inevitably suffer from drift after a long period of position trajectory tracking. The proposed fusion algorithm corrects the drift of the position estimate using measurements from the visual sensor, while measurements from the inertial sensor fill in the missing visual measurements when the visual sensor is occluded or the upper limb moves fast. An experimental platform for upper-limb position estimation based on the fusion of inertial and visual sensors is built to verify the effectiveness of the proposed method. Finally, the full paper is summarized, and an outlook for further research is provided.
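The drift-correction and occlusion-bridging behaviour described at the end of this abstract can be illustrated with a deliberately simplified fusion rule. This is a sketch under stated assumptions, not the paper's algorithm; the blending weight `alpha` and all names are invented.

```python
import numpy as np

def fuse_position(p_inertial, p_visual, visible, alpha=0.98):
    """Complementary-style fusion of an inertial position track with
    intermittent visual fixes.

    p_inertial : (N, 3) positions integrated from inertial data (drifts over time)
    p_visual   : (N, 3) positions from the visual sensor
    visible    : (N,) bool mask, False where the camera lost the target
    alpha      : weight kept on the inertial estimate when a visual fix exists
    """
    fused = np.empty_like(p_inertial)
    offset = np.zeros(3)                       # running drift correction
    for k in range(len(p_inertial)):
        if visible[k]:
            corrected = p_inertial[k] + offset
            # pull the corrected inertial estimate toward the visual fix
            fused[k] = alpha * corrected + (1.0 - alpha) * p_visual[k]
            offset = fused[k] - p_inertial[k]  # remember the correction
        else:
            fused[k] = p_inertial[k] + offset  # coast on inertial data alone
    return fused
```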
APA, Harvard, Vancouver, ISO, and other styles
5

Lu, Zhufei, Xing Xu, Yihao Luo, Lianghui Ding, Chao Zhou, and Jiarong Wang. "A Visual–Inertial Pressure Fusion-Based Underwater Simultaneous Localization and Mapping System." Sensors 24, no. 10 (May 18, 2024): 3207. http://dx.doi.org/10.3390/s24103207.

Full text
Abstract:
Detecting objects, particularly naval mines, on the seafloor is a complex task. In naval mine countermeasures (MCM) operations, sidescan or synthetic aperture sonars have been used to search large areas. However, a single sensor cannot meet the requirements of high-precision autonomous navigation. Based on the ORB-SLAM3-VI framework, we propose ORB-SLAM3-VIP, which integrates a depth sensor, an IMU and an optical sensor. This method integrates the measurements of the depth sensor and the IMU into the visual SLAM algorithm through tight coupling, and establishes a multi-sensor fusion SLAM model. Depth constraints are introduced into the initialization, scale fine-tuning, tracking and mapping processes to constrain the position of the sensor on the z-axis and improve the accuracy of pose estimation and map scale estimation. Tests on seven underwater multi-sensor sequences of the AQUALOC dataset show that, compared with ORB-SLAM3-VI, the proposed ORB-SLAM3-VIP system reduces the scale error in all sequences by up to 41.2% and reduces the trajectory error by up to 41.2%; the root-mean-square error is also reduced by up to 41.6%.
APA, Harvard, Vancouver, ISO, and other styles
6

Wan, Yingcai, Qiankun Zhao, Cheng Guo, Chenlong Xu, and Lijing Fang. "Multi-Sensor Fusion Self-Supervised Deep Odometry and Depth Estimation." Remote Sensing 14, no. 5 (March 2, 2022): 1228. http://dx.doi.org/10.3390/rs14051228.

Full text
Abstract:
This paper presents a new deep visual-inertial odometry and depth estimation framework for improving the accuracy of depth estimation and ego-motion from image sequences and inertial measurement unit (IMU) raw data. The proposed framework predicts ego-motion and depth with absolute scale in a self-supervised manner. We first capture dense features and solve the pose by deep visual odometry (DVO), and then combine the pose estimation pipeline with deep inertial odometry (DIO) by the extended Kalman filter (EKF) method to produce the sparse depth and pose with absolute scale. We then join deep visual-inertial odometry (DeepVIO) with depth estimation by using sparse depth and the pose from DeepVIO pipeline to align the scale of the depth prediction with the triangulated point cloud and reduce image reconstruction error. Specifically, we use the strengths of learning-based visual-inertial odometry (VIO) and depth estimation to build an end-to-end self-supervised learning architecture. We evaluated the new framework on the KITTI datasets and compared it to the previous techniques. We show that our approach improves results for ego-motion estimation and achieves comparable results for depth estimation, especially in the detail area.
APA, Harvard, Vancouver, ISO, and other styles
7

Kelly, Jonathan, and Gaurav S. Sukhatme. "Visual-Inertial Sensor Fusion: Localization, Mapping and Sensor-to-Sensor Self-calibration." International Journal of Robotics Research 30, no. 1 (November 5, 2010): 56–79. http://dx.doi.org/10.1177/0278364910382802.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Brown, Alison, and Paul Olson. "Navigation and Electro-Optic Sensor Integration Technology for Fusion of Imagery and Digital Mapping Products." Journal of Navigation 53, no. 1 (January 2000): 132–45. http://dx.doi.org/10.1017/s0373463399008735.

Full text
Abstract:
Several military and commercial platforms are currently installing GPS and inertial navigation sensors concurrently with the introduction of high-quality visual capabilities and digital mapping/imagery databases. This enables autonomous geo-registration of sensor imagery using GPS/inertial position and attitude data, and also permits data from digital mapping products to be overlaid automatically on the sensor imagery. This paper describes the system architecture for a Navigation/Electro-Optic Sensor Integration Technology (NEOSIT) software application. The design is highly modular and based on commercial off-the-shelf (COTS) tools to facilitate integration with sensors, navigation and digital data sources already installed on different host platforms.
APA, Harvard, Vancouver, ISO, and other styles
9

Kim, Youngji, Sungho Yoon, Sujung Kim, and Ayoung Kim. "Unsupervised Balanced Covariance Learning for Visual-Inertial Sensor Fusion." IEEE Robotics and Automation Letters 6, no. 2 (April 2021): 819–26. http://dx.doi.org/10.1109/lra.2021.3051571.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Cahyadi, M. N., T. Asfihani, H. F. Suhandri, and S. C. Navisa. "Analysis of GNSS/IMU Sensor Fusion at UAV Quadrotor for Navigation." IOP Conference Series: Earth and Environmental Science 1276, no. 1 (December 1, 2023): 012021. http://dx.doi.org/10.1088/1755-1315/1276/1/012021.

Full text
Abstract:
To determine position and navigate in an unknown environment, UAVs rely on sensors that provide information regarding position, speed and orientation. Some sensors provide direct navigation information, such as the Global Navigation Satellite System (GNSS), which provides position data; others are indirect sensors, such as inertial sensors, which provide speed and orientation data. An inertial sensor, commonly known as an Inertial Measurement Unit (IMU), combines acceleration (accelerometer) and angular velocity (gyroscope) data. Performing GNSS/IMU sensor fusion on the quadrotor UAV increases the accuracy of aircraft localization based on its mathematical model, using a Kalman filter approach. The main goal is to improve the coordinates obtained from quadrotor UAV measurements so that the position of the quadrotor UAV is more accurate. Raw GNSS/IMU sensor data are obtained during the flight of the aircraft. Visual comparison is used to determine whether the coordinates of the processed data have better accuracy than the raw data. The results show that the Unscented Kalman Filter (UKF) simulation gives a 3D position accuracy of 0.403 m with respect to the measurement data. This is an improvement of 23.47% over the EKF estimation, which gives a 3D position accuracy of 16.598 m.
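The paper applies UKF/EKF estimation to GNSS/IMU data on a quadrotor model. As a much-simplified sketch of the same predict/update pattern (not the authors' filter), the following per-axis linear Kalman filter uses IMU acceleration for prediction and GNSS position fixes for correction; all noise values and names are placeholders.

```python
import numpy as np

def kf_gnss_imu(gnss_pos, accel, dt, sigma_a=0.5, sigma_gnss=2.0):
    """Per-axis Kalman filter: IMU acceleration drives the prediction,
    GNSS position measurements correct it.

    gnss_pos : (N,) GNSS position samples for one axis [m] (NaN = no fix)
    accel    : (N,) accelerometer-derived acceleration for that axis [m/s^2]
    dt       : sample period [s]
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state: [position, velocity]
    B = np.array([0.5 * dt**2, dt])         # acceleration input
    H = np.array([[1.0, 0.0]])              # GNSS observes position only
    Q = sigma_a**2 * np.outer(B, B)         # process noise from accel noise
    R = np.array([[sigma_gnss**2]])

    x = np.zeros(2)
    P = np.eye(2) * 10.0
    out = np.zeros(len(gnss_pos))
    for k in range(len(gnss_pos)):
        # predict with the IMU
        x = F @ x + B * accel[k]
        P = F @ P @ F.T + Q
        # update with GNSS when a fix is available
        if not np.isnan(gnss_pos[k]):
            y = gnss_pos[k] - H @ x
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ y
            P = (np.eye(2) - K @ H) @ P
        out[k] = x[0]
    return out
```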
APA, Harvard, Vancouver, ISO, and other styles
11

Wang, Haoran, Zhenglong Li, Hongwei Wang, Wenyan Cao, Fujing Zhang, and Yuheng Wang. "A Roadheader Positioning Method Based on Multi-Sensor Fusion." Electronics 12, no. 22 (November 7, 2023): 4556. http://dx.doi.org/10.3390/electronics12224556.

Full text
Abstract:
In coal mines, accurate positioning is vital for roadheader equipment. However, most roadheaders use a standalone strapdown inertial navigation system (SINS) which faces challenges like error accumulation, drift, initial alignment needs, temperature sensitivity, and the demand for high-quality sensors. In this paper, a roadheader Visual–Inertial Odometry (VIO) system is proposed, combining SINS and stereo visual odometry to adjust to coal mine environments. Given the inherently dimly lit conditions of coal mines, our system includes an image-enhancement module to preprocess images, aiding in feature matching for stereo visual odometry. Additionally, a Kalman filter merges the positional data from SINS and stereo visual odometry. When tested against three other methods on the KITTI and EuRoC datasets, our approach showed notable precision on the EBZ160M-2 Roadheader, with attitude errors less than 0.2751° and position discrepancies within 0.0328 m, proving its advantages over SINS.
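The abstract mentions an image-enhancement module for dimly lit mine imagery without specifying the method. One common choice for such preprocessing is CLAHE via OpenCV; the snippet below is only an illustrative stand-in for the paper's module, and the parameters and file name are assumptions.

```python
import cv2

def enhance_low_light(gray_image, clip_limit=3.0, tile_grid=(8, 8)):
    """Contrast-limited adaptive histogram equalization (CLAHE) as a simple
    low-light preprocessing step before stereo feature matching."""
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    return clahe.apply(gray_image)

# Hypothetical usage:
# img = cv2.imread("tunnel_frame.png", cv2.IMREAD_GRAYSCALE)
# enhanced = enhance_low_light(img)
# kp, desc = cv2.ORB_create().detectAndCompute(enhanced, None)
```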
APA, Harvard, Vancouver, ISO, and other styles
12

Tschopp, Florian, Michael Riner, Marius Fehr, Lukas Bernreiter, Fadri Furrer, Tonci Novkovic, Andreas Pfrunder, Cesar Cadena, Roland Siegwart, and Juan Nieto. "VersaVIS—An Open Versatile Multi-Camera Visual-Inertial Sensor Suite." Sensors 20, no. 5 (March 6, 2020): 1439. http://dx.doi.org/10.3390/s20051439.

Full text
Abstract:
Robust and accurate pose estimation is crucial for many applications in mobile robotics. Extending visual Simultaneous Localization and Mapping (SLAM) with other modalities such as an inertial measurement unit (IMU) can boost robustness and accuracy. However, for tight sensor fusion, accurate time synchronization of the sensors is often crucial. Changing exposure times, internal sensor filtering, multiple clock sources and unpredictable delays from operating-system scheduling and data transfer can make sensor synchronization challenging. In this paper, we present VersaVIS, an open, versatile multi-camera visual-inertial sensor suite aimed at being an efficient research platform for easy deployment, integration and extension for many mobile robotic applications. VersaVIS provides a complete, open-source hardware, firmware and software bundle to perform time synchronization of multiple cameras with an IMU, featuring exposure compensation, host clock translation and independent and stereo camera triggering. The sensor suite supports a wide range of cameras and IMUs to match the requirements of the application. The synchronization accuracy of the framework is evaluated in multiple experiments, achieving a timing accuracy of less than 1 ms. Furthermore, the applicability and versatility of the sensor suite are demonstrated in multiple applications including visual-inertial SLAM, multi-camera applications, multi-modal mapping, reconstruction and object-based mapping.
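Two of the synchronization ideas named here, exposure compensation and host clock translation, reduce to simple timestamp arithmetic. The sketch below shows the mid-exposure correction in isolation; it is a schematic illustration, not VersaVIS firmware, and the offset value is invented.

```python
def compensate_timestamp(trigger_time_s, exposure_time_s, clock_offset_s=0.0):
    """Move a camera timestamp from the trigger instant to the middle of the
    exposure window and translate it into the host clock.

    trigger_time_s : time the exposure was triggered (sensor clock)
    exposure_time_s: exposure duration reported by the camera
    clock_offset_s : estimated offset between sensor clock and host clock
    """
    return trigger_time_s + 0.5 * exposure_time_s + clock_offset_s

# Example: 10 ms exposure, 1.2 ms estimated clock offset
print(compensate_timestamp(100.000, 0.010, 0.0012))  # ~100.0062 (mid-exposure, host clock)
```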
APA, Harvard, Vancouver, ISO, and other styles
13

Liu, Yunfei, Zhitian Li, Shuaikang Zheng, Pengcheng Cai, and Xudong Zou. "An Evaluation of MEMS-IMU Performance on the Absolute Trajectory Error of Visual-Inertial Navigation System." Micromachines 13, no. 4 (April 12, 2022): 602. http://dx.doi.org/10.3390/mi13040602.

Full text
Abstract:
Nowadays, accurate and robust localization is a prerequisite for achieving high autonomy for robots and emerging applications, and more and more sensors are fused to meet these requirements. A lot of related work has been developed, such as visual-inertial odometry (VIO); benefiting from the complementary sensing capabilities of the IMU and cameras, many problems have been solved. However, few studies pay attention to the impact of IMUs of different performance grades on the accuracy of sensor fusion. When faced with actual scenarios, especially in the case of massive hardware deployment, the question arises of how to choose an IMU appropriately. In this paper, we chose six representative IMUs with different performance levels, from consumer grade to tactical grade, to explore this question. Based on the final performance of VIO with different IMUs in different scenarios, we analyzed the absolute trajectory error of a visual-inertial system (VINS-Fusion). The assistance of an IMU can improve the accuracy of multi-sensor fusion, but the improvement in fusion accuracy with different grades of MEMS-IMU is not very significant in the eight experimental scenarios; a consumer-grade IMU can also give an excellent result. In addition, an IMU with low noise is more versatile and stable in various scenarios. The results chart a route for the development of Inertial Navigation System (INS) fusion with visual odometry and, at the same time, provide a guideline for the selection of an IMU.
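Absolute trajectory error is the metric used throughout this evaluation. A minimal sketch of its translation RMSE, assuming the estimated and ground-truth trajectories are already time-associated and aligned (e.g. by an Umeyama fit), could look as follows; the toy data are invented.

```python
import numpy as np

def ate_rmse(est_xyz, gt_xyz):
    """Absolute trajectory error (translation RMSE) between an estimated and a
    ground-truth trajectory expressed in the same frame and time-associated."""
    err = est_xyz - gt_xyz
    return np.sqrt(np.mean(np.sum(err**2, axis=1)))

# Toy example with a constant 0.1 m offset on x:
gt = np.random.rand(100, 3)
est = gt + np.array([0.1, 0.0, 0.0])
print(ate_rmse(est, gt))  # ~0.1
```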
APA, Harvard, Vancouver, ISO, and other styles
14

Shi, Zhenlian, Yanfeng Sun, Linxin Xiong, Yongli Hu, and Baocai Yin. "A Multisource Heterogeneous Data Fusion Method for Pedestrian Tracking." Mathematical Problems in Engineering 2015 (2015): 1–10. http://dx.doi.org/10.1155/2015/150541.

Full text
Abstract:
Traditional visual pedestrian tracking methods perform poorly when faced with problems such as occlusion, illumination changes, and complex backgrounds. In principle, collecting more sensing information should resolve these issues. However, it is extremely challenging to properly fuse different sensing information to achieve accurate tracking results. In this study, we develop a pedestrian tracking method for fusing multisource heterogeneous sensing information, including video, RGB-D sequences, and inertial sensor data. In our method, an RGB-D sequence is used to position the target locally by fusing the texture and depth features. The local position is then used to eliminate the cumulative error resulting from the inertial sensor positioning. A camera calibration process is used to map the inertial sensor position onto the video image plane, where the visual tracking position and the mapped position are fused using a similarity feature to obtain accurate tracking results. Experiments using real scenarios show that the developed method outperforms the existing tracking method, which uses only a single sensing dataset, and is robust to target occlusion, illumination changes, and interference from similar textures or complex backgrounds.
APA, Harvard, Vancouver, ISO, and other styles
15

Jing, Qianfeng, Haichao Wang, Bin Hu, Xiuwen Liu, and Yong Yin. "A Universal Simulation Framework of Shipborne Inertial Sensors Based on the Ship Motion Model and Robot Operating System." Journal of Marine Science and Engineering 9, no. 8 (August 20, 2021): 900. http://dx.doi.org/10.3390/jmse9080900.

Full text
Abstract:
A complete virtual test environment is a powerful tool for research on Autonomous Surface Vessels (ASVs), and the simulation of ship motion and shipborne sensors is one of the prerequisites for constructing such an environment. This paper proposes a universal simulation framework for shipborne inertial sensors. A ship motion model considering environmental disturbances is proposed to simulate the six-degrees-of-freedom motion of ships, and the discrete form of the inertial-sensor stochastic error model is derived. The inertial measurement data are simulated by adding artificial errors to the simulated motion states. In addition, the ship motion simulation, inertial measurement simulation and environment simulation nodes are implemented based on the computational graph architecture of the Robot Operating System (ROS). Benefiting from the versatility of ROS messages, the format of the simulated inertial measurements is exactly the same as that of real sensors, which provides a research basis for fusion perception algorithms based on visual-inertial and laser-inertial sensors in the field of ASVs.
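The discrete inertial-sensor stochastic error model mentioned here is commonly written as white noise plus a random-walk bias. The sketch below shows one such discretization for a gyroscope axis; it is a generic form with placeholder parameters, not necessarily the exact model derived in the paper.

```python
import numpy as np

def simulate_gyro(true_rate, dt, noise_density=1e-3, bias_walk=1e-5, seed=0):
    """Add a simple discrete stochastic error model to a true angular-rate
    signal: white noise plus a slowly drifting (random-walk) bias.

    true_rate     : (N,) true angular rate [rad/s]
    dt            : sample period [s]
    noise_density : white-noise density [rad/s/sqrt(Hz)]
    bias_walk     : bias random-walk density [rad/s^2/sqrt(Hz)]
    """
    rng = np.random.default_rng(seed)
    n = len(true_rate)
    white = rng.normal(0.0, noise_density / np.sqrt(dt), n)   # discretized white noise
    bias = np.cumsum(rng.normal(0.0, bias_walk * np.sqrt(dt), n))  # random-walk bias
    return true_rate + bias + white

meas = simulate_gyro(np.zeros(2000), dt=0.005)
print(meas.std())
```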
APA, Harvard, Vancouver, ISO, and other styles
16

Meza-Ibarra, José Ramón, Joaquín Martínez-Ulloa, Luis Alfonso Moreno-Pacheco, and Hugo Rodríguez-Cortés. "A Sensor Fusion Approach to Observe Quadrotor Velocity." Sensors 24, no. 11 (June 3, 2024): 3605. http://dx.doi.org/10.3390/s24113605.

Full text
Abstract:
The growing use of Unmanned Aerial Vehicles (UAVs) raises the need to improve their autonomous navigation capabilities. Visual odometry allows for dispensing with positioning systems such as GPS, especially on indoor flights. This paper reports an effort toward UAV autonomous navigation by proposing a translational velocity observer based on inertial and visual measurements for a quadrotor. The proposed observer complementarily fuses available measurements from different domains and is synthesized following the Immersion and Invariance observer design technique. A formal Lyapunov-based proof of observer error convergence to zero is provided. The proposed observer algorithm is evaluated using numerical simulations in the Parrot Mambo Minidrone App from Simulink-Matlab.
APA, Harvard, Vancouver, ISO, and other styles
17

Lee, Kyuman, and Eric N. Johnson. "Latency Compensated Visual-Inertial Odometry for Agile Autonomous Flight." Sensors 20, no. 8 (April 14, 2020): 2209. http://dx.doi.org/10.3390/s20082209.

Full text
Abstract:
In visual-inertial odometry (VIO), inertial measurement unit (IMU) dead reckoning acts as the dynamic model for flight vehicles while camera vision extracts information about the surrounding environment and determines features or points of interest. With these sensors, the most widely used algorithm for estimating vehicle and feature states for VIO is an extended Kalman filter (EKF). The design of the standard EKF does not inherently allow for time offsets between the timestamps of the IMU and vision data. In fact, sensor-related delays that arise in various realistic conditions are at least partially unknown parameters. A lack of compensation for unknown parameters often leads to a serious impact on the accuracy of VIO systems and systems like them. To compensate for the uncertainties of the unknown time delays, this study incorporates parameter estimation into feature initialization and state estimation. Moreover, computing cross-covariance and estimating delays in online temporal calibration correct residual, Jacobian, and covariance. Results from flight dataset testing validate the improved accuracy of VIO employing latency compensated filtering frameworks. The insights and methods proposed here are ultimately useful in any estimation problem (e.g., multi-sensor fusion scenarios) where compensation for partially unknown time delays can enhance performance.
APA, Harvard, Vancouver, ISO, and other styles
18

Bleser, Gabriele, and Didier Stricker. "Advanced tracking through efficient image processing and visual–inertial sensor fusion." Computers & Graphics 33, no. 1 (February 2009): 59–72. http://dx.doi.org/10.1016/j.cag.2008.11.004.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Santoso, Fendy, Matthew A. Garratt, and Sreenatha G. Anavatti. "Visual–Inertial Navigation Systems for Aerial Robotics: Sensor Fusion and Technology." IEEE Transactions on Automation Science and Engineering 14, no. 1 (January 2017): 260–75. http://dx.doi.org/10.1109/tase.2016.2582752.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Liu, Cheng, Shuai Xiong, Yongchao Geng, Song Cheng, Fang Hu, Bo Shao, Fang Li, and Jie Zhang. "An Embedded High-Precision GNSS-Visual-Inertial Multi-Sensor Fusion Suite." NAVIGATION: Journal of the Institute of Navigation 70, no. 4 (2023): navi.607. http://dx.doi.org/10.33012/navi.607.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Li, Xin Yu, and Dong Yi Chen. "Sensor Fusion Based on Strong Tracking Filter for Augmented Reality Registration." Key Engineering Materials 467-469 (February 2011): 108–13. http://dx.doi.org/10.4028/www.scientific.net/kem.467-469.108.

Full text
Abstract:
Accurate tracking for Augmented Reality applications is a challenging task. Multi-sensor hybrid tracking generally provides more stable results than single visual tracking. This paper presents a new tightly coupled hybrid tracking approach combining vision-based systems with an inertial sensor. Based on multi-frequency sampling theory for measurement data synchronization, a strong tracking filter (STF) is used to smooth sensor data and estimate position and orientation. By adding a time-varying fading factor to adaptively adjust the prediction error covariance of the filter, this method improves the tracking performance for fast-moving targets. Experimental results show the efficiency and robustness of the proposed approach.
APA, Harvard, Vancouver, ISO, and other styles
22

MIZOBUCHI, Yasuhiro, and Kazuhiro SHIMONOMURA. "1P1-F02 Robotic visual stabilization with vision and inertial sensors(3D Measurement/Sensor Fusion)." Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) 2011 (2011): _1P1—F02_1—_1P1—F02_2. http://dx.doi.org/10.1299/jsmermd.2011._1p1-f02_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Yin, Tao, Jingzheng Yao, Yan Lu, and Chunrui Na. "Solid-State-LiDAR-Inertial-Visual Odometry and Mapping via Quadratic Motion Model and Reflectivity Information." Electronics 12, no. 17 (August 28, 2023): 3633. http://dx.doi.org/10.3390/electronics12173633.

Full text
Abstract:
This paper proposes a solid-state-LiDAR-inertial-visual fusion framework containing two subsystems: the solid-state-LiDAR-inertial odometry (SSLIO) subsystem and the visual-inertial odometry (VIO) subsystem. Our SSLIO subsystem has two novelties that enable it to handle drastic acceleration and angular velocity changes: (1) the quadratic motion model is adopted in the in-frame motion compensation step of the LiDAR feature points, and (2) the system has a weight function for each residual term to ensure consistency in geometry and reflectivity. The VIO subsystem renders the global map in addition to further optimizing the state output by the SSLIO. To save computing resources, we calibrate our VIO subsystem’s extrinsic parameter indirectly in advance, instead of using real-time estimation. We test the SSLIO subsystem using publicly available datasets and a steep ramp experiment, and show that our SSLIO exhibits better performance than the state-of-the-art LiDAR-inertial SLAM algorithm Point-LIO in terms of coping with strong vibrations transmitted to the sensors due to the violent motion of the crawler robot. Furthermore, we present several outdoor field experiments evaluating our framework. The results show that our proposed multi-sensor fusion framework can achieve good robustness, localization and mapping accuracy, as well as strong real-time performance.
APA, Harvard, Vancouver, ISO, and other styles
24

He, Xuan, Wang Gao, Chuanzhen Sheng, Ziteng Zhang, Shuguo Pan, Lijin Duan, Hui Zhang, and Xinyu Lu. "LiDAR-Visual-Inertial Odometry Based on Optimized Visual Point-Line Features." Remote Sensing 14, no. 3 (January 27, 2022): 622. http://dx.doi.org/10.3390/rs14030622.

Full text
Abstract:
This study presents a LiDAR-Visual-Inertial Odometry (LVIO) system based on optimized visual point-line features, which can effectively compensate for the limitations of a single sensor in real-time localization and mapping. Firstly, an improved line-feature extraction in scale space and a constraint matching strategy using the least-squares method are proposed to provide richer visual features for the front end of the LVIO. Secondly, multi-frame LiDAR point clouds are projected into the visual frame for feature depth association. Thirdly, the initial estimation results of Visual-Inertial Odometry (VIO) are used to optimize the scan-matching accuracy of the LiDAR. Finally, a factor graph based on a Bayesian network is proposed to build the LVIO fusion system, in which a GNSS factor and a loop factor are introduced to constrain the LVIO globally. Evaluations on indoor and outdoor datasets show that the proposed algorithm is superior to other state-of-the-art algorithms in real-time efficiency, positioning accuracy and mapping effect. Specifically, the average RMSE of the absolute trajectory is 0.075 m in the indoor environment and 3.77 m in the outdoor environment. These experimental results prove that the proposed algorithm can effectively solve the problems of line-feature mismatching and the accumulated error of local sensors in mobile carrier positioning.
APA, Harvard, Vancouver, ISO, and other styles
25

Xia, Linlin, Ruimin Liu, Daochang Zhang, and Jingjing Zhang. "Polarized light-aided visual-inertial navigation system: global heading measurements and graph optimization-based multi-sensor fusion." Measurement Science and Technology 33, no. 5 (February 17, 2022): 055111. http://dx.doi.org/10.1088/1361-6501/ac4637.

Full text
Abstract:
Polarized skylight is as fundamental a constituent of passive navigation as the geomagnetic field. With regard to its applicability to outdoor robot localization, a polarized light-aided visual-inertial navigation system (VINS) modelization dedicated to globally optimized pose estimation and heading correction is constructed. The combined system follows typical visual simultaneous localization and mapping (SLAM) frameworks, and we propose a methodology to fuse global heading measurements with visual and inertial information in a graph optimization-based estimator. With the ideas of 'adding new attributes of graph vertices and creating heading error-encoded constraint edges', the heading, as the absolute orientation reference, is estimated by the Berry polarization model and continuously updated in a graph structure. The formulated graph optimization process for multi-sensor fusion is simultaneously provided. In campus road experiments on the Bulldog-CX robot platform, the results are compared against purely stereo camera-dependent and VINS Fusion frameworks, revealing that our design is substantially more accurate than the others, with both locally and globally consistent position and attitude estimates. As a passive and tightly coupled navigation mode, the polarized light-aided VINS can therefore be considered a tool candidate for a class of visual SLAM-based multi-sensor fusion.
APA, Harvard, Vancouver, ISO, and other styles
26

Xiao, Zhiyao, and Guobao Zhang. "An Attention-Based Odometry Framework for Multisensory Unmanned Ground Vehicles (UGVs)." Drones 7, no. 12 (December 9, 2023): 699. http://dx.doi.org/10.3390/drones7120699.

Full text
Abstract:
Recently, deep learning methods and multisensory fusion have been applied to address odometry challenges in unmanned ground vehicles (UGVs). In this paper, we propose an end-to-end visual-lidar-inertial odometry framework to enhance the accuracy of pose estimation. Grayscale images, 3D point clouds, and inertial data are used as inputs to overcome the limitations of a single sensor. Convolutional neural network (CNN) and recurrent neural network (RNN) are employed as encoders for different sensor modalities. In contrast to previous multisensory odometry methods, our framework introduces a novel attention-based fusion module that remaps feature vectors to adapt to various scenes. Evaluations on the Karlsruhe Institute of Technology and Toyota Technological Institute at Chicago (KITTI) odometry benchmark demonstrate the effectiveness of our framework.
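The attention-based fusion module described here remaps per-sensor feature vectors with learned weights. As a rough, framework-free sketch of the underlying idea (scaled dot-product weighting over modalities, with invented names and random data, not the authors' network):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_fuse(features, w_query):
    """Soft attention over per-sensor feature vectors.

    features : dict of modality name -> 1-D feature vector (same length D)
    w_query  : (D,) query vector scoring each modality's features
    returns  : fused (D,) vector and the per-modality weights
    """
    names = list(features)
    F = np.stack([features[n] for n in names])   # (num_modalities, D)
    scores = F @ w_query / np.sqrt(F.shape[1])   # scaled dot-product scores
    weights = softmax(scores)                    # one weight per modality
    return weights @ F, dict(zip(names, weights))

rng = np.random.default_rng(0)
feats = {"visual": rng.normal(size=64), "lidar": rng.normal(size=64), "imu": rng.normal(size=64)}
fused, w = attention_fuse(feats, rng.normal(size=64))
print(w)
```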
APA, Harvard, Vancouver, ISO, and other styles
27

Cai, Yiyi, Yang Ou, and Tuanfa Qin. "Improving SLAM Techniques with Integrated Multi-Sensor Fusion for 3D Reconstruction." Sensors 24, no. 7 (March 22, 2024): 2033. http://dx.doi.org/10.3390/s24072033.

Full text
Abstract:
Simultaneous Localization and Mapping (SLAM) poses distinct challenges, especially in settings with variable elements, which demand the integration of multiple sensors to ensure robustness. This study addresses these issues by integrating advanced technologies like LiDAR-inertial odometry (LIO), visual-inertial odometry (VIO), and sophisticated Inertial Measurement Unit (IMU) preintegration methods. These integrations enhance the robustness and reliability of the SLAM process for precise mapping of complex environments. Additionally, incorporating an object-detection network aids in identifying and excluding transient objects such as pedestrians and vehicles, essential for maintaining the integrity and accuracy of environmental mapping. The object-detection network features a lightweight design and swift performance, enabling real-time analysis without significant resource utilization. Our approach focuses on harmoniously blending these techniques to yield superior mapping outcomes in complex scenarios. The effectiveness of our proposed methods is substantiated through experimental evaluation, demonstrating their capability to produce more reliable and precise maps in environments with variable elements. The results indicate improvements in autonomous navigation and mapping, providing a practical solution for SLAM in challenging and dynamic settings.
APA, Harvard, Vancouver, ISO, and other styles
28

Kong, Xianglong, Wenqi Wu, Lilian Zhang, Xiaofeng He, and Yujie Wang. "Performance improvement of visual-inertial navigation system by using polarized light compass." Industrial Robot: An International Journal 43, no. 6 (October 17, 2016): 588–95. http://dx.doi.org/10.1108/ir-03-2016-0103.

Full text
Abstract:
Purpose: This paper aims to present a method for improving the performance of the visual-inertial navigation system (VINS) by using a bio-inspired polarized light compass. Design/methodology/approach: The measurement model of each sensor module is derived, and a robust stochastic cloning extended Kalman filter (RSC-EKF) is implemented for data fusion. This fusion framework can not only handle multiple relative and absolute measurements, but also deal with outliers and sensor outages of each measurement module. Findings: The paper tests the approach on data sets acquired by a land vehicle moving in different environments and compares its performance against other methods. The results demonstrate the effectiveness of the proposed method for reducing the error growth of the VINS in the long run. Originality/value: The main contribution of this paper lies in the design and implementation of the RSC-EKF for incorporating the homemade polarized light compass into the visual-inertial navigation pipeline. The real-world tests in different environments demonstrate the effectiveness and feasibility of the proposed approach.
APA, Harvard, Vancouver, ISO, and other styles
29

Li, Kailin, Jiansheng Li, Ancheng Wang, Haolong Luo, Xueqiang Li, and Zidi Yang. "A Resilient Method for Visual–Inertial Fusion Based on Covariance Tuning." Sensors 22, no. 24 (December 14, 2022): 9836. http://dx.doi.org/10.3390/s22249836.

Full text
Abstract:
To improve localization and pose precision of visual–inertial simultaneous localization and mapping (viSLAM) in complex scenarios, it is necessary to tune the weights of the visual and inertial inputs during sensor fusion. To this end, we propose a resilient viSLAM algorithm based on covariance tuning. During back-end optimization of the viSLAM process, the unit-weight root-mean-square error (RMSE) of the visual reprojection and IMU preintegration in each optimization is computed to construct a covariance tuning function, producing a new covariance matrix. This is used to perform another round of nonlinear optimization, effectively improving pose and localization precision without closed-loop detection. In the validation experiment, our algorithm outperformed the OKVIS, R-VIO, and VINS-Mono open-source viSLAM frameworks in pose and localization precision on the EuRoc dataset, at all difficulty levels.
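The covariance-tuning step can be pictured as rescaling each residual block's covariance by its unit-weight variance before re-optimizing, so the optimizer leans more on whichever sensor currently fits better. The snippet below is a schematic of that reweighting idea under simplifying assumptions, not the paper's exact tuning function.

```python
import numpy as np

def tune_covariance(residuals, cov, dof):
    """Rescale a residual block's covariance by its unit-weight variance.

    residuals : (N,) stacked residuals of one sensor block (e.g. reprojection)
    cov       : current covariance matrix assigned to that block
    dof       : redundancy (number of residuals minus estimated parameters)
    """
    sigma0_sq = float(residuals @ residuals) / max(dof, 1)  # unit-weight variance
    return sigma0_sq * cov, np.sqrt(sigma0_sq)

# If the visual block fits worse than assumed (sigma0 > 1), its covariance grows
# and the next optimization round weights the IMU block more, and vice versa.
r_vis = np.array([0.8, -1.2, 0.5, 1.1])
new_cov, sigma0 = tune_covariance(r_vis, np.eye(2), dof=2)
print(sigma0, new_cov[0, 0])  # ~1.33, 1.77
```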
APA, Harvard, Vancouver, ISO, and other styles
30

Yeh, T. H., K. W. Chiang, P. R. Lu, P. L. Li, Y. S. Lin, and C. Y. Hsu. "V-SLAM ENHANCED INS/GNSS FUSION SCHEME FOR LANE LEVEL VEHICULAR NAVIGATION APPLICATIONS IN DYNAMIC ENVIRONMENT." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-1/W1-2023 (May 25, 2023): 547–53. http://dx.doi.org/10.5194/isprs-archives-xlviii-1-w1-2023-547-2023.

Full text
Abstract:
With the development of different sensors, such as the global navigation satellite system (GNSS), inertial measurement unit (IMU), LiDAR, radar and camera, more localization information is available for autonomous vehicular applications. However, each sensor has its limitations in different circumstances. For example, visual Simultaneous Localization and Mapping (SLAM) easily loses tracking in an open-sky area where accurate GNSS measurements can be obtained. Sensors can complement each other by integrating their information in a multi-sensor fusion scheme. In this study, we proposed a visual-SLAM-enhanced INS/GNSS localization fusion scheme for a highly dynamic environment. Oriented FAST and rotated BRIEF (ORB) SLAM is used to pre-process image sequences from a monocular camera, rescaled and refreshed after applying GNSS measurements, and converted to position and velocity information, which can provide updates to the system. The performance of the fusion system was verified through two field tests at different speed ranges (about 30–60 km/h), using a reliable reference system as ground truth to assess the accuracy of the proposed localization fusion scheme. The results indicated that the proposed system could improve the navigation accuracy compared to the INS/GNSS integration scheme and achieve which-lane-level or even where-in-lane-level accuracy.
APA, Harvard, Vancouver, ISO, and other styles
31

Song, Chengqun, Bo Zeng, Jun Cheng, Fuxiang Wu, and Fusheng Hao. "PSMD-SLAM: Panoptic Segmentation-Aided Multi-Sensor Fusion Simultaneous Localization and Mapping in Dynamic Scenes." Applied Sciences 14, no. 9 (April 30, 2024): 3843. http://dx.doi.org/10.3390/app14093843.

Full text
Abstract:
Multi-sensor fusion is pivotal in augmenting the robustness and precision of simultaneous localization and mapping (SLAM) systems. The LiDAR–visual–inertial approach has been empirically shown to adeptly amalgamate the benefits of these sensors for SLAM across various scenarios. Furthermore, methods of panoptic segmentation have been introduced to deliver pixel-level semantic and instance segmentation data in a single instance. This paper delves deeper into these methodologies, introducing PSMD-SLAM, a novel panoptic segmentation assisted multi-sensor fusion SLAM approach tailored for dynamic environments. Our approach employs both probability propagation-based and PCA-based clustering techniques, supplemented by panoptic segmentation. This is utilized for dynamic object detection and the removal of visual and LiDAR data, respectively. Furthermore, we introduce a module designed for the robust real-time estimation of the 6D pose of dynamic objects. We test our approach on a publicly available dataset and show that PSMD-SLAM outperforms other SLAM algorithms in terms of accuracy and robustness, especially in dynamic environments.
APA, Harvard, Vancouver, ISO, and other styles
32

Dong, Xin, Yuzhe Gao, Jinglong Guo, Shiyu Zuo, Jinwu Xiang, Daochun Li, and Zhan Tu. "An Integrated UWB-IMU-Vision Framework for Autonomous Approaching and Landing of UAVs." Aerospace 9, no. 12 (December 5, 2022): 797. http://dx.doi.org/10.3390/aerospace9120797.

Full text
Abstract:
Unmanned Aerial Vehicles (UAVs) autonomous approaching and landing on mobile platforms always play an important role in various application scenarios. Such a complicated autonomous task requires an integrated multi-sensor system to guarantee environmental adaptability in contrast to using each sensor individually. Multi-sensor fusion perception demonstrates great feasibility to compensate for adverse visual events, undesired vibrations of inertia sensors, and satellite positioning loss. In this paper, a UAV autonomous landing scheme based on multi-sensor fusion is proposed. In particular, Ultra Wide-Band (UWB) sensor, Inertial Measurement Unit (IMU), and vision feedback are integrated to guide the UAV to approach and land on a moving object. In the approaching stage, a UWB-IMU-based sensor fusion algorithm is proposed to provide relative position estimation of vehicles with real time and high consistency. Such a sensor integration addresses the open challenge of inaccurate satellite positioning when the UAV is near the ground. It can also be extended to satellite-denied environmental applications. When the landing platform is detected by the onboard camera, the UAV performs autonomous landing. In the landing stage, the vision sensor is involved. With the visual feedback, a deep-learning-based detector and local pose estimator are enabled when the UAV approaches the landing platform. To validate the feasibility of the proposed autonomous landing scheme, both simulation and real-world experiments in extensive scenes are performed. As a result, the proposed landing scheme can land successfully with adequate accuracy in most common scenarios.
APA, Harvard, Vancouver, ISO, and other styles
33

Yan, Yaxuan, Baohua Zhang, Jun Zhou, Yibo Zhang, and Xiao’ang Liu. "Real-Time Localization and Mapping Utilizing Multi-Sensor Fusion and Visual–IMU–Wheel Odometry for Agricultural Robots in Unstructured, Dynamic and GPS-Denied Greenhouse Environments." Agronomy 12, no. 8 (July 23, 2022): 1740. http://dx.doi.org/10.3390/agronomy12081740.

Full text
Abstract:
Autonomous navigation in greenhouses requires agricultural robots to localize and generate a globally consistent map of surroundings in real-time. However, accurate and robust localization and mapping are still challenging for agricultural robots due to the unstructured, dynamic and GPS-denied environmental conditions. In this study, a state-of-the-art real-time localization and mapping system was presented to achieve precise pose estimation and dense three-dimensional (3D) point cloud mapping in complex greenhouses by utilizing multi-sensor fusion and Visual–IMU–Wheel odometry. In this method, measurements from wheel odometry, an inertial measurement unit (IMU) and a tightly coupled visual–inertial odometry (VIO) are integrated into a loosely coupled framework based on the Extended Kalman Filter (EKF) to obtain a more accurate state estimation of the robot. In the multi-sensor fusion algorithm, the pose estimations from the wheel odometry and IMU are treated as predictions and the localization results from VIO are used as observations to update the state vector. Simultaneously, the dense 3D map of the greenhouse is reconstructed in real-time by employing the modified ORB-SLAM2. The performance of the proposed system was evaluated in modern standard solar greenhouses with harsh environmental conditions. Taking advantage of measurements from individual sensors, our method is robust enough to cope with various challenges, as shown by extensive experiments conducted in the greenhouses and outdoor campus environment. Additionally, the results show that our proposed framework can improve the localization accuracy of the visual–inertial odometry, demonstrating the satisfactory capability of the proposed approach and highlighting its promising applications in autonomous navigation of agricultural robots.
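The loosely coupled EKF structure described here (wheel odometry and IMU as prediction, VIO as observation) can be sketched for a planar robot as follows. This is an illustrative simplification with an assumed unicycle motion model and invented noise values, not the authors' implementation.

```python
import numpy as np

def ekf_step(x, P, v, omega, dt, z_vio=None,
             Q=np.diag([0.02, 0.02, 0.01]), R=np.diag([0.05, 0.05, 0.02])):
    """One loosely coupled EKF step on a planar robot state [x, y, yaw].

    Prediction uses wheel-odometry speed v and IMU yaw rate omega;
    the update (when available) uses a VIO pose measurement z_vio = [x, y, yaw].
    """
    px, py, th = x
    # unicycle motion model
    x_pred = np.array([px + v * dt * np.cos(th),
                       py + v * dt * np.sin(th),
                       th + omega * dt])
    F = np.array([[1, 0, -v * dt * np.sin(th)],
                  [0, 1,  v * dt * np.cos(th)],
                  [0, 0, 1]])
    P_pred = F @ P @ F.T + Q

    if z_vio is None:               # no VIO result this step: prediction only
        return x_pred, P_pred

    H = np.eye(3)                   # VIO observes the full planar pose
    y = z_vio - x_pred
    y[2] = (y[2] + np.pi) % (2 * np.pi) - np.pi   # wrap the yaw residual
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ y
    P_new = (np.eye(3) - K @ H) @ P_pred
    return x_new, P_new
```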
APA, Harvard, Vancouver, ISO, and other styles
34

Sun, Hui Qin. "Design of Indoor Large-Scale Multi-Target Precise Positioning and Tracking System." Advanced Materials Research 1049-1050 (October 2014): 1233–36. http://dx.doi.org/10.4028/www.scientific.net/amr.1049-1050.1233.

Full text
Abstract:
This paper aims to build an indoor large-scale multi-target precise positioning and tracking system. It presents in-depth research on vision-based optical tracking technology, inertial tracking technology and multi-sensor data fusion technology, addressing key technologies in graphics, images, data fusion and tracking, and develops a highly versatile, real-time and robust wide-range, high-precision optical tracking system for indoor large-scale multi-target precise positioning and tracking.
APA, Harvard, Vancouver, ISO, and other styles
35

Liu, Yanjie, Changsen Zhao, and Meixuan Ren. "An Enhanced Hybrid Visual–Inertial Odometry System for Indoor Mobile Robot." Sensors 22, no. 8 (April 11, 2022): 2930. http://dx.doi.org/10.3390/s22082930.

Full text
Abstract:
As mobile robots are being widely used, accurate localization of the robot is critical for the system. Compared with positioning systems based on a single sensor, multi-sensor fusion systems provide better performance and increase accuracy and robustness. At present, camera and IMU (Inertial Measurement Unit) fusion positioning is extensively studied, and many representative Visual-Inertial Odometry (VIO) systems have been produced. The Multi-State Constraint Kalman Filter (MSCKF), one of the tightly coupled filtering methods, is characterized by high accuracy and low computational load among typical VIO methods. In the general framework, IMU information is not used after the state prediction and covariance propagation. In this article, we propose a framework that introduces the IMU pre-integration result into the MSCKF framework as observation information to improve the system's positioning accuracy. Additionally, the system uses the Helmert variance component estimation (HVCE) method to adjust the weights between feature points and pre-integration to further improve the positioning accuracy. Similarly, this article uses the wheel-odometer information of the mobile robot to perform zero-velocity detection, zero-velocity updates, and pre-integration updates to enhance the positioning accuracy of the system. Finally, experiments carried out in the Gazebo simulation environment, on a public dataset and in real scenarios prove that the proposed algorithm achieves better accuracy than existing mainstream algorithms while ensuring real-time performance.
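Zero-velocity detection is one of the ingredients listed above. A hedged sketch of a stillness detector that combines wheel-odometer speed with inertial variance checks is shown below; the window size and thresholds are placeholders, and the paper's own detector may differ.

```python
import numpy as np

def zero_velocity_mask(accel, gyro, wheel_speed, win=20,
                       acc_var_th=0.02, gyro_th=0.02, wheel_th=1e-3):
    """Flag samples where the robot is almost certainly stationary, so a
    zero-velocity update can be applied.

    accel       : (N, 3) accelerometer samples [m/s^2]
    gyro        : (N, 3) gyroscope samples [rad/s]
    wheel_speed : (N,) wheel-odometer speed [m/s]
    """
    n = len(wheel_speed)
    mask = np.zeros(n, dtype=bool)
    for k in range(win, n):
        acc_var = accel[k - win:k].var(axis=0).sum()   # accel variance in the window
        gyro_mag = np.linalg.norm(gyro[k])             # instantaneous rotation rate
        mask[k] = (acc_var < acc_var_th and gyro_mag < gyro_th
                   and abs(wheel_speed[k]) < wheel_th)
    return mask
```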
APA, Harvard, Vancouver, ISO, and other styles
36

Xu, Changhui, Zhenbin Liu, and Zengke Li. "Robust Visual-Inertial Navigation System for Low Precision Sensors under Indoor and Outdoor Environments." Remote Sensing 13, no. 4 (February 20, 2021): 772. http://dx.doi.org/10.3390/rs13040772.

Full text
Abstract:
Simultaneous Localization and Mapping (SLAM) has been the focus of robot navigation for many decades and has become a research hotspot in recent years. A SLAM system based on a vision sensor is vulnerable to environmental illumination and texture, and the problem of initial scale ambiguity still exists in a monocular SLAM system. The fusion of a monocular camera and an inertial measurement unit (IMU) can effectively solve the scale ambiguity problem, improve the robustness of the system, and achieve higher positioning accuracy. Based on the monocular visual-inertial navigation system (VINS-Mono), a state-of-the-art fusion of monocular vision and an IMU, this paper designs a new initialization scheme that can calculate the acceleration bias as a variable during the initialization process so that it can be applied to low-cost IMU sensors. Besides, in order to obtain better initialization accuracy, a visual matching positioning method based on feature points is used to assist the initialization process. After the initialization process, the system switches to optical-flow-tracking visual positioning mode to reduce the computational complexity. By using the proposed method, the advantages of the feature-point method and the optical flow method can be fused. This paper, the first to use both the feature-point method and the optical flow method, achieves better overall positioning accuracy and robustness with low-cost sensors. Through experiments conducted with the EuRoC dataset and a campus environment, the results show that the initial values obtained through the initialization process can be efficiently used to launch the nonlinear visual-inertial state estimator, and the positioning accuracy of the improved VINS-Mono is about 10% better than that of VINS-Mono.
APA, Harvard, Vancouver, ISO, and other styles
37

Zhang, Chuanwei, Lei Lei, Xiaowen Ma, Rui Zhou, Zhenghe Shi, and Zhongyu Guo. "Map Construction Based on LiDAR Vision Inertial Multi-Sensor Fusion." World Electric Vehicle Journal 12, no. 4 (December 12, 2021): 261. http://dx.doi.org/10.3390/wevj12040261.

Full text
Abstract:
In order to make up for the shortcomings of independent sensors and provide more reliable estimation, a multi-sensor fusion framework for simultaneous localization and mapping is proposed in this paper. Firstly, the light detection and ranging (LiDAR) point cloud is screened in the front-end processing to eliminate abnormal points and improve the positioning and mapping accuracy. Secondly, for the problem of false detection when the LiDAR is surrounded by repeated structures, the intensity value of the laser point cloud is used as the screening condition to screen out robust visual features with high distance confidence, for the purpose of softening. Then, the initial factor, registration factor, inertial measurement units (IMU) factor and loop factor are inserted into the factor graph. A factor graph optimization algorithm based on a Bayesian tree is used for incremental optimization estimation to realize the data fusion. The algorithm was tested in campus and real road environments. The experimental results show that the proposed algorithm can realize state estimation and map construction with high accuracy and strong robustness.
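Front-end screening of LiDAR returns by intensity and range, as mentioned in this abstract, amounts to a simple mask over the point cloud. The sketch below is illustrative only; the thresholds are invented, and the paper additionally uses intensity as a confidence condition for selecting visual features.

```python
import numpy as np

def screen_points(points_xyzi, min_intensity=10.0, max_range=80.0, min_range=0.5):
    """Keep LiDAR returns whose intensity and range fall inside trusted bounds.

    points_xyzi : (N, 4) array of [x, y, z, intensity]
    """
    xyz = points_xyzi[:, :3]
    intensity = points_xyzi[:, 3]
    rng = np.linalg.norm(xyz, axis=1)
    keep = (intensity >= min_intensity) & (rng >= min_range) & (rng <= max_range)
    return points_xyzi[keep]

cloud = np.array([[1.0, 0.0, 0.0, 50.0],    # good return
                  [0.1, 0.0, 0.0,  5.0],    # too close and too dark
                  [90.0, 0.0, 0.0, 30.0]])  # out of range
print(screen_points(cloud).shape)  # (1, 4)
```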
APA, Harvard, Vancouver, ISO, and other styles
38

Martinelli, Agostino, Alessandro Renzaglia, and Alexander Oliva. "Cooperative visual-inertial sensor fusion: fundamental equations and state determination in closed-form." Autonomous Robots 44, no. 3-4 (March 4, 2019): 339–57. http://dx.doi.org/10.1007/s10514-019-09841-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Choi, Junho, Christiansen Marsim Kevin, Myeongwoo Jeong, Kihwan Ryoo, Jeewon Kim, and Hyun Myung. "Multi-unmanned Aerial Vehicle Pose Estimation Based on Visual-inertial-range Sensor Fusion." Journal of Institute of Control, Robotics and Systems 29, no. 11 (November 30, 2023): 859–65. http://dx.doi.org/10.5302/j.icros.2023.23.0135.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Eckenhoff, Kevin, Patrick Geneva, and Guoquan Huang. "Closed-form preintegration methods for graph-based visual–inertial navigation." International Journal of Robotics Research 38, no. 5 (April 2019): 563–86. http://dx.doi.org/10.1177/0278364919835021.

Full text
Abstract:
In this paper, we propose a new analytical preintegration theory for graph-based sensor fusion with an inertial measurement unit (IMU) and a camera (or other aiding sensors). Rather than using discrete sampling of the measurement dynamics as in current methods, we derive the closed-form solutions to the preintegration equations, yielding improved accuracy in state estimation. We advocate two new different inertial models for preintegration: (i) the model that assumes piecewise constant measurements; and (ii) the model that assumes piecewise constant local true acceleration. Through extensive Monte Carlo simulations, we show the effect that the choice of preintegration model has on estimation performance. To validate the proposed preintegration theory, we develop both direct and indirect visual–inertial navigation systems (VINSs) that leverage our preintegration. In the first, within a tightly coupled, sliding-window optimization framework, we jointly estimate the features in the window and the IMU states while performing marginalization to bound the computational cost. In the second, we loosely couple the IMU preintegration with a direct image alignment that estimates relative camera motion by minimizing the photometric errors (i.e., image intensity difference), allowing for efficient and informative loop closures. Both systems are extensively validated in real-world experiments and are shown to offer competitive performance to state-of-the-art methods.
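For contrast with the closed-form theory proposed here, the conventional discrete preintegration that the paper improves upon assumes piecewise constant measurements between keyframes. A minimal sketch of that baseline (bias Jacobians and noise propagation omitted) follows; it is not the authors' closed-form method.

```python
import numpy as np

def skew(w):
    return np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])

def exp_so3(phi):
    """Rodrigues formula: rotation vector -> rotation matrix."""
    a = np.linalg.norm(phi)
    if a < 1e-9:
        return np.eye(3) + skew(phi)
    k = phi / a
    K = skew(k)
    return np.eye(3) + np.sin(a) * K + (1 - np.cos(a)) * (K @ K)

def preintegrate(gyro, accel, dt, bg=np.zeros(3), ba=np.zeros(3)):
    """Discrete IMU preintegration between two keyframes, assuming piecewise
    constant bias-corrected measurements over each interval dt.

    Returns the preintegrated rotation dR, velocity dv and position dp,
    expressed in the frame of the first keyframe (gravity handled elsewhere).
    """
    dR = np.eye(3)
    dv = np.zeros(3)
    dp = np.zeros(3)
    for w, a in zip(gyro, accel):
        a_corr = a - ba
        dp = dp + dv * dt + 0.5 * (dR @ a_corr) * dt**2
        dv = dv + (dR @ a_corr) * dt
        dR = dR @ exp_so3((w - bg) * dt)
    return dR, dv, dp
```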
APA, Harvard, Vancouver, ISO, and other styles
41

Peng, Gang, Yicheng Zhou, Lu Hu, Li Xiao, Zhigang Sun, Zhangang Wu, and Xukang Zhu. "VILO SLAM: Tightly Coupled Binocular Vision–Inertia SLAM Combined with LiDAR." Sensors 23, no. 10 (May 9, 2023): 4588. http://dx.doi.org/10.3390/s23104588.

Full text
Abstract:
For the existing visual–inertial SLAM algorithm, when the robot is moving at a constant speed or purely rotating and encounters scenes with insufficient visual features, problems of low accuracy and poor robustness arise. Aiming to solve the problems of low accuracy and robustness of the visual inertial SLAM algorithm, a tightly coupled vision-IMU-2D lidar odometry (VILO) algorithm is proposed. Firstly, low-cost 2D lidar observations and visual–inertial observations are fused in a tightly coupled manner. Secondly, the low-cost 2D lidar odometry model is used to derive the Jacobian matrix of the lidar residual with respect to the state variable to be estimated, and the residual constraint equation of the vision-IMU-2D lidar is constructed. Thirdly, the nonlinear solution method is used to obtain the optimal robot pose, which solves the problem of how to fuse 2D lidar observations with visual–inertial information in a tightly coupled manner. The results show that the algorithm still has reliable pose-estimation accuracy and robustness in many special environments, and the position error and yaw angle error are greatly reduced. Our research improves the accuracy and robustness of the multi-sensor fusion SLAM algorithm.
APA, Harvard, Vancouver, ISO, and other styles
42

Peng, Gang, Qiang Gao, Yue Xu, Jianfeng Li, Zhang Deng, and Cong Li. "Pose Estimation Based on Bidirectional Visual–Inertial Odometry with 3D LiDAR (BV-LIO)." Remote Sensing 16, no. 16 (August 14, 2024): 2970. http://dx.doi.org/10.3390/rs16162970.

Full text
Abstract:
Due to the limitations of a single sensor such as a camera or LiDAR alone, visual SLAM detects few effective features in the case of poor lighting or no texture, while LiDAR SLAM degrades in unstructured environments and open spaces, which reduces the accuracy of pose estimation and the quality of mapping. In order to solve this problem, taking advantage of the high efficiency of visual odometry and the high accuracy of LiDAR odometry, this paper investigates the multi-sensor fusion of bidirectional visual-inertial odometry with 3D LiDAR for pose estimation. This method couples the IMU with the bidirectional vision, and the LiDAR odometry is obtained with the assistance of the bidirectional visual-inertial estimate. A factor-graph optimization is constructed, which effectively improves the accuracy of pose estimation. The algorithm in this paper is compared with LIO-LOAM, LeGO-LOAM, VINS-Mono, and others using challenging datasets such as KITTI and M2DGR. The results show that this method effectively improves the accuracy of pose estimation and has high application value for mobile robots.
APA, Harvard, Vancouver, ISO, and other styles
43

Xu, Haoran, Yi Li, and Yucheng Lu. "Research on Indoor AGV Fusion Localization Based on Adaptive Weight EKF Using Multi-sensor." Journal of Physics: Conference Series 2428, no. 1 (February 1, 2023): 012028. http://dx.doi.org/10.1088/1742-6596/2428/1/012028.

Full text
Abstract:
To solve the problem of large accumulative errors of the wheel odometer and inertial measurement unit (IMU), which cause pose-calculation mistakes due to the friction coefficient of the ground or bumps and collisions and, furthermore, affect the path planning and navigation of an automated guided vehicle (AGV), an extensible fusion localization method based on multiple sensors is proposed. Based on the traditional extended Kalman filter (EKF), the internal positioning data are used for prediction, while the external positioning data are used for correction. According to the states of different sensors, an adaptive weight method is applied to eliminate the influence of systematic accumulative error and provide continuous and accurate positioning data. The experimental results show that, compared to a single sensor, the fusion localization method improves accuracy, reaching an accuracy within 5-8 cm. It can deal with non-line-of-sight (NLOS) landmark occlusion situations and can navigate in dim-light or dark environments. The system ensures the accuracy of the fused data through visual positioning and ensures the stability of the system through the odometer and IMU. New positioning sensors can also be added to the system for data fusion.
APA, Harvard, Vancouver, ISO, and other styles
44

Fan, Jingjin, Shuoben Bi, Guojie Wang, Li Zhang, and Shilei Sun. "Sensor Fusion Basketball Shooting Posture Recognition System Based on CNN." Journal of Sensors 2021 (March 29, 2021): 1–16. http://dx.doi.org/10.1155/2021/6664776.

Full text
Abstract:
In recent years, with the development of wearable sensor devices, research on sports monitoring using inertial measurement units has received increasing attention; however, no specific system for identifying various basketball shooting postures has existed thus far. In this study, we designed a sensor fusion basketball shooting posture recognition system based on convolutional neural networks. Using the sensor fusion framework, the system collected shooting posture data from the players' dominant hand and dominant foot and used a deep learning model based on convolutional neural networks for recognition. We collected 12,177 sensor fusion basketball shooting posture data entries from 13 Chinese adult male subjects aged 18–40 years with at least 2 years of basketball experience and no professional training. We then trained and tested the shooting posture data using the classic Visual Geometry Group network 16 (VGG-16) deep learning model. The intra-test achieved a 98.6% average recall rate, 98.6% average precision rate, and 98.6% accuracy rate. The inter-test achieved an average recall rate of 89.8%, an average precision rate of 91.1%, and an accuracy rate of 89.9%.
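For readers unfamiliar with adapting VGG-16 to a small label set, the sketch below shows the standard pattern of replacing the final classifier layer and running one training step on image-shaped tensors; the class count, input shape, and hyperparameters are assumptions for illustration, not the authors' configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 5  # hypothetical number of shooting-posture classes

# Standard VGG-16 with its final fully connected layer swapped for the posture classes.
model = models.vgg16()
model.classifier[6] = nn.Linear(4096, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One toy training step on random data shaped like a batch of fused, image-like sensor inputs.
x = torch.randn(8, 3, 224, 224)
y = torch.randint(0, NUM_CLASSES, (8,))
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print(float(loss))
```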
APA, Harvard, Vancouver, ISO, and other styles
45

Qu, Shaochun, Jian Cui, Zijian Cao, Yongxing Qiao, Xuemeng Men, and Yanfang Fu. "Position Estimation Method for Small Drones Based on the Fusion of Multisource, Multimodal Data and Digital Twins." Electronics 13, no. 11 (June 6, 2024): 2218. http://dx.doi.org/10.3390/electronics13112218.

Full text
Abstract:
In response to the low positioning accuracy and insufficient robustness of small UAVs (unmanned aerial vehicles) caused by sensor noise and cumulative motion errors during flight in complex environments, this paper proposes a multisource, multimodal data fusion method. Initially, it employs multimodal data fusion of several sensors, including GPS (global positioning system), an IMU (inertial measurement unit), and visual sensors, so that the strengths and weaknesses of each hardware component complement one another, thereby mitigating motion errors and enhancing accuracy. To mitigate the impact of sudden changes in sensor data, a high-fidelity UAV model is established in the digital twin based on the real UAV parameters, providing a robust reference for data fusion. Using the extended Kalman filter algorithm, data from both the real UAV and its digital twin are fused, and the filtered positional information is fed back into the control system of the real UAV. This enables real-time correction of UAV positional deviations caused by sensor noise and environmental disturbances. The multisource, multimodal fusion Kalman filter method proposed in this paper significantly improves the positioning accuracy of UAVs in complex scenarios and the overall stability of the system. This method holds significant value for maintaining high-precision positioning in variable environments and has important practical implications for enhancing UAV navigation and application efficiency.
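The sketch below illustrates the general idea of using a digital-twin prediction as the filter's process model and correcting it with noisy fixes from the real vehicle, reduced to a scalar Kalman filter; the twin model, noise values, and measurements are invented for illustration and do not reproduce the paper's formulation.

```python
import numpy as np

def twin_predict(p, v, dt):
    """High-fidelity twin propagation (stand-in: constant-velocity model)."""
    return p + v * dt

p_est, var = 0.0, 1.0          # fused position estimate and its variance
v_twin, dt = 2.0, 0.1
q, r = 0.02, 0.25              # assumed process / measurement noise

measurements = [0.21, 0.44, 0.58, 0.83, 1.02]  # noisy fixes from the real UAV
for z in measurements:
    # Predict with the digital twin.
    p_est = twin_predict(p_est, v_twin, dt)
    var += q
    # Correct with the real sensor fix; the result would be fed back to the controller.
    k = var / (var + r)
    p_est += k * (z - p_est)
    var *= (1 - k)
    print(f"fused position: {p_est:.3f}")
```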
APA, Harvard, Vancouver, ISO, and other styles
46

Dahlke, Dennis, Petros Drakoulis, Anaida Fernández García, Susanna Kaiser, Sotiris Karavarsamis, Michail Mallis, William Oliff, et al. "Seamless Fusion: Multi-Modal Localization for First Responders in Challenging Environments." Sensors 24, no. 9 (April 30, 2024): 2864. http://dx.doi.org/10.3390/s24092864.

Full text
Abstract:
In dynamic and unpredictable environments, the precise localization of first responders and rescuers is crucial for effective incident response. This paper introduces a novel approach leveraging three complementary localization modalities: visual-based, Galileo-based, and inertial-based. Each modality contributes uniquely to the final Fusion tool, facilitating seamless indoor and outdoor localization, offering a robust and accurate localization solution without reliance on pre-existing infrastructure, essential for maintaining responder safety and optimizing operational effectiveness. The visual-based localization method utilizes an RGB camera coupled with a modified implementation of the ORB-SLAM2 method, enabling operation with or without prior area scanning. The Galileo-based localization method employs a lightweight prototype equipped with a high-accuracy GNSS receiver board, tailored to meet the specific needs of first responders. The inertial-based localization method utilizes sensor fusion, primarily leveraging smartphone inertial measurement units, to predict and adjust first responders’ positions incrementally, compensating for the GPS signal attenuation indoors. A comprehensive validation test involving various environmental conditions was carried out to demonstrate the efficacy of the proposed fused localization tool. Our results show that our proposed solution always provides a location regardless of the conditions (indoors, outdoors, etc.), with an overall mean error of 1.73 m.
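One simple way to realize the "always provides a location" behaviour described above is to combine whichever modality estimates are currently available by inverse-variance weighting; the sketch below assumes such a rule with hypothetical variances and is not the project's actual Fusion tool.

```python
import numpy as np

def fuse_fixes(fixes):
    """fixes: list of (position ndarray, estimated variance); modalities may be missing."""
    if not fixes:
        return None
    w = np.array([1.0 / var for _, var in fixes])
    pts = np.stack([p for p, _ in fixes])
    return (w[:, None] * pts).sum(axis=0) / w.sum()

visual   = (np.array([10.2, 5.1]), 0.5)   # visual SLAM estimate
galileo  = (np.array([10.6, 5.4]), 2.0)   # GNSS fix, degraded indoors
inertial = (np.array([10.3, 5.0]), 1.0)   # smartphone IMU dead reckoning

print(fuse_fixes([visual, galileo, inertial]))  # outdoors: all three modalities
print(fuse_fixes([visual, inertial]))           # indoors: GNSS dropped, still a fix
```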
APA, Harvard, Vancouver, ISO, and other styles
47

He, Shenghuang, Yanzhou Li, Yongkang Lu, and Yishan Liu. "Design of visual inertial state estimator for autonomous systems via multi-sensor fusion approach." Mechatronics 95 (November 2023): 103066. http://dx.doi.org/10.1016/j.mechatronics.2023.103066.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Qayyum, Usman, and Jonghyuk Kim. "Depth-Camera-Aided Inertial Navigation Utilizing Directional Constraints." Sensors 21, no. 17 (September 2, 2021): 5913. http://dx.doi.org/10.3390/s21175913.

Full text
Abstract:
This paper presents a practical yet effective solution for integrating an RGB-D camera and an inertial sensor to handle the depth dropouts that frequently happen in outdoor environments, due to the short detection range and sunlight interference. In depth drop conditions, only the partial 5-degrees-of-freedom pose information (attitude and position with an unknown scale) is available from the RGB-D sensor. To enable continuous fusion with the inertial solutions, the scale ambiguous position is cast into a directional constraint of the vehicle motion, which is, in essence, an epipolar constraint in multi-view geometry. Unlike other visual navigation approaches, this can effectively reduce the drift in the inertial solutions without delay or under small parallax motion. If a depth image is available, a window-based feature map is maintained to compute the RGB-D odometry, which is then fused with inertial outputs in an extended Kalman filter framework. Flight results from the indoor and outdoor environments, as well as public datasets, demonstrate the improved navigation performance of the proposed approach.
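The directional (epipolar-style) constraint can be sketched as follows: when depth drops out, only the unit direction of the camera translation is used, and the residual is the component of the inertially predicted displacement perpendicular to that direction. The formulation below is an assumed simplification of the paper's measurement model.

```python
import numpy as np

def directional_residual(delta_p_ins, d_cam):
    """delta_p_ins : displacement predicted by inertial integration (3-vector)
       d_cam       : scale-ambiguous translation direction from the camera (3-vector)"""
    d = d_cam / np.linalg.norm(d_cam)
    # Remove the component along d; whatever remains should be ~0 and becomes the residual.
    return delta_p_ins - d * (d @ delta_p_ins)

delta_p_ins = np.array([1.0, 0.1, -0.05])   # drifting inertial displacement
d_cam = np.array([1.0, 0.0, 0.0])           # camera says: motion purely along +x
print(directional_residual(delta_p_ins, d_cam))  # [0, 0.1, -0.05] -> drift to be corrected
```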
APA, Harvard, Vancouver, ISO, and other styles
49

Chang, Ching-Wei, Li-Yu Lo, Hiu Ching Cheung, Yurong Feng, An-Shik Yang, Chih-Yung Wen, and Weifeng Zhou. "Proactive Guidance for Accurate UAV Landing on a Dynamic Platform: A Visual–Inertial Approach." Sensors 22, no. 1 (January 5, 2022): 404. http://dx.doi.org/10.3390/s22010404.

Full text
Abstract:
This work aimed to develop an autonomous system for unmanned aerial vehicles (UAVs) to land on moving platforms such as an automobile or a marine vessel, providing a promising solution for a long-endurance flight operation, a large mission coverage range, and a convenient recharging ground station. Unlike most state-of-the-art UAV landing frameworks that rely on UAV onboard computers and sensors, the proposed system fully depends on the computation unit situated on the ground vehicle/marine vessel to serve as a landing guidance system. Such a novel configuration can therefore lighten the burden of the UAV, and the computation power of the ground vehicle/marine vessel can be enhanced. In particular, we exploit a sensor fusion-based algorithm for the guidance system to perform UAV localization, whilst a control method based upon trajectory optimization is integrated. Indoor and outdoor experiments are conducted, and the results show that precise autonomous landing on a 43 cm × 43 cm platform can be performed.
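As a rough illustration of ground-based proactive guidance, the sketch below predicts the platform's position at a chosen touchdown time and solves for a constant acceleration that brings the UAV onto that point; this intercept rule and the horizon T are assumptions for illustration, not the authors' trajectory-optimization controller.

```python
import numpy as np

def plan_landing(p_uav, v_uav, p_plat, v_plat, T):
    """Constant acceleration that lands the UAV on the moving platform at time T."""
    p_target = p_plat + v_plat * T                    # predicted platform position
    # p_uav + v_uav*T + 0.5*a*T^2 = p_target  ->  solve for a
    return 2.0 * (p_target - p_uav - v_uav * T) / T**2

p_uav, v_uav = np.array([0.0, 0.0, 5.0]), np.array([1.0, 0.0, 0.0])
p_plat, v_plat = np.array([8.0, 2.0, 0.0]), np.array([2.0, 0.5, 0.0])
a_cmd = plan_landing(p_uav, v_uav, p_plat, v_plat, T=4.0)
print(a_cmd)  # commanded acceleration, re-planned as new fused estimates arrive
```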
APA, Harvard, Vancouver, ISO, and other styles
50

Nemec, Dušan, Vojtech Šimák, Aleš Janota, Marián Hruboš, and Emília Bubeníková. "Precise localization of the mobile wheeled robot using sensor fusion of odometry, visual artificial landmarks and inertial sensors." Robotics and Autonomous Systems 112 (February 2019): 168–77. http://dx.doi.org/10.1016/j.robot.2018.11.019.

Full text
APA, Harvard, Vancouver, ISO, and other styles