Academic literature on the topic 'SLAM LiDAR'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'SLAM LiDAR.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "SLAM LiDAR"

1. Jie, Lu, Zhi Jin, Jinping Wang, Letian Zhang, and Xiaojun Tan. "A SLAM System with Direct Velocity Estimation for Mechanical and Solid-State LiDARs." Remote Sensing 14, no. 7 (2022): 1741. http://dx.doi.org/10.3390/rs14071741.

Abstract:
Simultaneous localization and mapping (SLAM) is essential for intelligent robots operating in unknown environments. However, existing algorithms are typically developed for specific types of solid-state LiDARs, leading to weak feature representation abilities for new sensors. Moreover, LiDAR-based SLAM methods are limited by distortions caused by LiDAR ego motion. To address the above issues, this paper presents a versatile and velocity-aware LiDAR-based odometry and mapping (VLOM) system. A spherical projection-based feature extraction module is utilized to process the raw point cloud …
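
The spherical projection mentioned in this abstract is a standard way to turn an unordered LiDAR point cloud into a dense range image that 2D feature extractors can work on. Below is a minimal sketch in Python/NumPy; the image size and vertical field of view are illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np

def spherical_projection(points, h=64, w=1024,
                         fov_up=np.deg2rad(15.0), fov_down=np.deg2rad(-25.0)):
    """Project an (N, 3) point cloud into an (h, w) range image.

    Rows index elevation, columns index azimuth; each cell stores the
    range of the nearest point falling into it (NaN where none does).
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)            # range per point
    yaw = np.arctan2(y, x)                        # azimuth in [-pi, pi]
    pitch = np.arcsin(z / np.maximum(r, 1e-9))    # elevation angle

    u = 0.5 * (1.0 - yaw / np.pi)                 # azimuth -> [0, 1]
    v = (fov_up - pitch) / (fov_up - fov_down)    # elevation -> [0, 1]

    cols = np.clip((u * w).astype(int), 0, w - 1)
    rows = np.clip((v * h).astype(int), 0, h - 1)

    image = np.full((h, w), np.nan)
    order = np.argsort(-r)                        # write far points first,
    image[rows[order], cols[order]] = r[order]    # so near returns win
    return image
```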
2. Sier, Ha, Qingqing Li, Xianjia Yu, Jorge Peña Queralta, Zhuo Zou, and Tomi Westerlund. "A Benchmark for Multi-Modal LiDAR SLAM with Ground Truth in GNSS-Denied Environments." Remote Sensing 15, no. 13 (2023): 3314. http://dx.doi.org/10.3390/rs15133314.

Abstract:
LiDAR-based simultaneous localization and mapping (SLAM) approaches have obtained considerable success in autonomous robotic systems. This is in part owing to the high accuracy of robust SLAM algorithms and the emergence of new and lower-cost LiDAR products. This study benchmarks the current state-of-the-art LiDAR SLAM algorithms with a multi-modal LiDAR sensor setup, showcasing diverse scanning modalities (spinning and solid state) and sensing technologies, and LiDAR cameras, mounted on a mobile sensing and computing platform. We extend our previous multi-modal multi-LiDAR dataset with …
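
Benchmarks of this kind typically score an estimated trajectory against ground truth with the absolute trajectory error (ATE). The sketch below computes RMSE ATE after a closed-form rigid alignment (Umeyama/Kabsch); it assumes time-synchronized (N, 3) position arrays and is a generic recipe, not necessarily this paper's exact protocol.

```python
import numpy as np

def ate_rmse(est, gt):
    """RMSE absolute trajectory error after rigid SE(3) alignment.

    est, gt: time-synchronized (N, 3) arrays of positions.
    """
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    E, G = est - mu_e, gt - mu_g

    # Closed-form rotation from the SVD of the cross-covariance (Kabsch).
    U, _, Vt = np.linalg.svd(E.T @ G)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = (U @ S @ Vt).T                 # rotation taking est into gt frame
    t = mu_g - R @ mu_e

    aligned = est @ R.T + t
    return float(np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1))))
```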
3. Zhao, Yu-Lin, Yi-Tian Hong, and Han-Pang Huang. "Comprehensive Performance Evaluation between Visual SLAM and LiDAR SLAM for Mobile Robots: Theories and Experiments." Applied Sciences 14, no. 9 (2024): 3945. http://dx.doi.org/10.3390/app14093945.

Abstract:
SLAM (Simultaneous Localization and Mapping), primarily relying on camera or LiDAR (Light Detection and Ranging) sensors, plays a crucial role in robotics for localization and environmental reconstruction. This paper assesses the performance of two leading methods, namely ORB-SLAM3 and SC-LeGO-LOAM, focusing on localization and mapping in both indoor and outdoor environments. The evaluation employs artificial and cost-effective datasets incorporating data from a 3D LiDAR and an RGB-D (color and depth) camera. A practical approach is introduced for calculating ground-truth trajectories and …
4. Chen, Shoubin, Baoding Zhou, Changhui Jiang, Weixing Xue, and Qingquan Li. "A LiDAR/Visual SLAM Backend with Loop Closure Detection and Graph Optimization." Remote Sensing 13, no. 14 (2021): 2720. http://dx.doi.org/10.3390/rs13142720.

Abstract:
LiDAR (light detection and ranging), as an active sensor, is investigated in the simultaneous localization and mapping (SLAM) system. Typically, a LiDAR SLAM system consists of front-end odometry and back-end optimization modules. Loop closure detection and pose graph optimization are the key factors determining the performance of the LiDAR SLAM system. However, the LiDAR works at a single wavelength (905 nm), and few textures or visual features are extracted, which restricts the performance of point clouds matching based loop closure detection and graph optimization. With the aim of improving …
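
The back end described here, pose graph optimization over odometry and loop closure constraints, can be illustrated with a tiny 2D example. The sketch below is a generic Gauss-Newton solver with numerical Jacobians, not the authors' system; production back ends typically use libraries such as g2o, Ceres, or GTSAM.

```python
import numpy as np

def v2t(p):        # [x, y, theta] -> 3x3 homogeneous transform
    c, s = np.cos(p[2]), np.sin(p[2])
    return np.array([[c, -s, p[0]], [s, c, p[1]], [0.0, 0.0, 1.0]])

def t2v(T):        # inverse of v2t; arctan2 wraps the angle to [-pi, pi]
    return np.array([T[0, 2], T[1, 2], np.arctan2(T[1, 0], T[0, 0])])

def edge_error(xi, xj, z):
    """Residual of a relative-pose measurement z between poses xi, xj."""
    return t2v(np.linalg.inv(v2t(z)) @ np.linalg.inv(v2t(xi)) @ v2t(xj))

def optimize(poses, edges, iters=10, eps=1e-6):
    """Gauss-Newton over an (M, 3) pose array and (i, j, z) edges.

    Odometry and loop-closure edges are treated identically; pose 0 is
    pinned with a large prior to remove the gauge freedom.
    """
    x = poses.astype(float).copy()
    n = 3 * len(x)
    for _ in range(iters):
        H, b = np.zeros((n, n)), np.zeros(n)
        for i, j, z in edges:
            e = edge_error(x[i], x[j], z)
            A, B = np.zeros((3, 3)), np.zeros((3, 3))
            for k in range(3):               # forward-difference Jacobians
                d = np.zeros(3); d[k] = eps
                A[:, k] = (edge_error(x[i] + d, x[j], z) - e) / eps
                B[:, k] = (edge_error(x[i], x[j] + d, z) - e) / eps
            si, sj = slice(3 * i, 3 * i + 3), slice(3 * j, 3 * j + 3)
            H[si, si] += A.T @ A; H[sj, sj] += B.T @ B
            H[si, sj] += A.T @ B; H[sj, si] += B.T @ A
            b[si] += A.T @ e;     b[sj] += B.T @ e
        H[:3, :3] += 1e9 * np.eye(3)         # pin the first pose
        x += np.linalg.solve(H, -b).reshape(-1, 3)
    return x

# Three poses with noisy odometry plus a loop-closure-style constraint.
poses = np.array([[0.0, 0.0, 0.0], [1.1, 0.0, 0.05], [2.2, 0.1, -0.04]])
edges = [(0, 1, np.array([1.0, 0.0, 0.0])),
         (1, 2, np.array([1.0, 0.0, 0.0])),
         (0, 2, np.array([2.0, 0.0, 0.0]))]
print(optimize(poses, edges))    # -> close to [[0,0,0],[1,0,0],[2,0,0]]
```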
5. Peng, Gang, Yicheng Zhou, Lu Hu, et al. "VILO SLAM: Tightly Coupled Binocular Vision–Inertia SLAM Combined with LiDAR." Sensors 23, no. 10 (2023): 4588. http://dx.doi.org/10.3390/s23104588.

Abstract:
For the existing visual–inertial SLAM algorithm, when the robot is moving at a constant speed or purely rotating and encounters scenes with insufficient visual features, problems of low accuracy and poor robustness arise. Aiming to solve the problems of low accuracy and robustness of the visual inertial SLAM algorithm, a tightly coupled vision-IMU-2D lidar odometry (VILO) algorithm is proposed. Firstly, low-cost 2D lidar observations and visual–inertial observations are fused in a tightly coupled manner. Secondly, the low-cost 2D lidar odometry model is used to derive the Jacobian matrix of …
6. Dang, Xiangwei, Zheng Rong, and Xingdong Liang. "Sensor Fusion-Based Approach to Eliminating Moving Objects for SLAM in Dynamic Environments." Sensors 21, no. 1 (2021): 230. http://dx.doi.org/10.3390/s21010230.

Abstract:
Accurate localization and reliable mapping is essential for autonomous navigation of robots. As one of the core technologies for autonomous navigation, Simultaneous Localization and Mapping (SLAM) has attracted widespread attention in recent decades. Based on vision or LiDAR sensors, great efforts have been devoted to achieving real-time SLAM that can support a robot’s state estimation. However, most of the mature SLAM methods generally work under the assumption that the environment is static, while in dynamic environments they will yield degenerate performance or even fail. In this paper, …
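
Once some front end has flagged which returns belong to movers, eliminating them reduces to masking the scan before it reaches scan matching and mapping. A minimal sketch, assuming a hypothetical per-point label array from a semantic segmentation or detection module; the label set is illustrative, not the paper's.

```python
import numpy as np

DYNAMIC_CLASSES = {"person", "car", "cyclist"}    # illustrative label set

def remove_dynamic_points(points, labels):
    """Filter an (N, 3) scan down to its static points.

    labels: length-N sequence of class names from an upstream semantic
    segmentation or detection module (assumed to exist); only the
    surviving static points should reach scan matching and mapping.
    """
    keep = np.fromiter((lab not in DYNAMIC_CLASSES for lab in labels),
                       dtype=bool, count=len(labels))
    return points[keep]
```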
7. Debeunne, César, and Damien Vivet. "A Review of Visual-LiDAR Fusion based Simultaneous Localization and Mapping." Sensors 20, no. 7 (2020): 2068. http://dx.doi.org/10.3390/s20072068.

Abstract:
Autonomous navigation requires both a precise and robust mapping and localization solution. In this context, Simultaneous Localization and Mapping (SLAM) is a very well-suited solution. SLAM is used for many applications including mobile robotics, self-driving cars, unmanned aerial vehicles, or autonomous underwater vehicles. In these domains, both visual and visual-IMU SLAM are well studied, and improvements are regularly proposed in the literature. However, LiDAR-SLAM techniques seem to be relatively the same as ten or twenty years ago. Moreover, few research works focus on vision-LiDAR …
8. Xu, Xiaobin, Lei Zhang, Jian Yang, et al. "A Review of Multi-Sensor Fusion SLAM Systems Based on 3D LIDAR." Remote Sensing 14, no. 12 (2022): 2835. http://dx.doi.org/10.3390/rs14122835.

Abstract:
The ability of intelligent unmanned platforms to achieve autonomous navigation and positioning in a large-scale environment has become increasingly demanding, in which LIDAR-based Simultaneous Localization and Mapping (SLAM) is the mainstream of research schemes. However, the LIDAR-based SLAM system will degenerate and affect the localization and mapping effects in extreme environments with high dynamics or sparse features. In recent years, a large number of LIDAR-based multi-sensor fusion SLAM works have emerged in order to obtain a more stable and robust system. In this work, the development …
9. Bu, Zean, Changku Sun, and Peng Wang. "Semantic Lidar-Inertial SLAM for Dynamic Scenes." Applied Sciences 12, no. 20 (2022): 10497. http://dx.doi.org/10.3390/app122010497.

Abstract:
Over the past few years, many impressive lidar-inertial SLAM systems have been developed and perform well under static scenes. However, most tasks are under dynamic environments in real life, and the determination of a method to improve accuracy and robustness poses a challenge. In this paper, we propose a semantic lidar-inertial SLAM approach with the combination of a point cloud semantic segmentation network and lidar-inertial SLAM LIO mapping for dynamic scenes. We import an attention mechanism to the PointConv network to build an attention weight function to improve the capacity to predict …
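
The attention weight function the abstract refers to can be pictured as a learned weighting of a point's neighbors during feature aggregation. The sketch below is a generic scaled dot-product attention over one point's k neighbors; the projections W_q and W_k and the aggregation scheme are illustrative assumptions, not the paper's actual PointConv modification.

```python
import numpy as np

def attention_aggregate(center_feat, neighbor_feats, W_q, W_k):
    """Softmax-attention aggregation of neighbor features onto one point.

    center_feat: (d,); neighbor_feats: (k, d); W_q, W_k: (d, d) learned
    projections. Illustrative only: a generic attention weight function,
    not the paper's actual PointConv modification.
    """
    q = W_q @ center_feat                  # query from the center point
    keys = neighbor_feats @ W_k.T          # keys from the k neighbors
    scores = keys @ q / np.sqrt(len(q))    # scaled dot-product scores
    w = np.exp(scores - scores.max())
    w /= w.sum()                           # softmax attention weights
    return w @ neighbor_feats              # attention-weighted feature
```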
10. Abdelhafid, El Farnane, Youssefi My Abdelkader, Mouhsen Ahmed, Dakir Rachid, and El Ihyaoui Abdelilah. "Visual and light detection and ranging-based simultaneous localization and mapping for self-driving cars." International Journal of Electrical and Computer Engineering (IJECE) 12, no. 6 (2022): 6284–6292. http://dx.doi.org/10.11591/ijece.v12i6.pp6284-6292.

Abstract:
In recent years, there has been a strong demand for self-driving cars. For safe navigation, self-driving cars need both precise localization and robust mapping. While the global navigation satellite system (GNSS) can be used to locate vehicles, it has some limitations, such as satellite signal absence (tunnels and caves), which restrict its use in urban scenarios. Simultaneous localization and mapping (SLAM) is an excellent solution for identifying a vehicle’s position while at the same time constructing a representation of the environment. SLAM-based visual and light …