Academic literature on the topic '3D obstacle segmentation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic '3D obstacle segmentation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "3D obstacle segmentation":

1

Jinming, Chen. "Obstacle Detection Based on 3D Lidar Euclidean Clustering." Applied Science and Innovative Research 5, no. 3 (November 8, 2021): p39. http://dx.doi.org/10.22158/asir.v5n3p39.

Abstract:
Environment perception is the basis of unmanned driving, and obstacle detection is an important research area of environment perception technology. In order to quickly and accurately identify obstacles in the direction of vehicle travel and obtain their location information, this paper designed a Euclidean-distance-based point cloud clustering obstacle detection algorithm built on the PCL (Point Cloud Library) function modules. Environmental information was obtained by 3D lidar, and ROI extraction, voxel filtering and sampling, outlier filtering, ground point cloud segmentation, Euclidean clustering, and other processing steps were carried out to achieve a complete PCL-based 3D point cloud obstacle detection method. The experimental results show that the vehicle can effectively identify the obstacles in the area and obtain their location information.
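The pipeline this abstract describes ends in Euclidean clustering, which groups points whose pairwise gaps fall below a distance tolerance. As a rough sketch of that step only (not the authors' PCL code; the function name and parameter values here are invented for illustration), clusters can be grown over a KD-tree:

```python
import numpy as np
from scipy.spatial import cKDTree

def euclidean_cluster(points, tolerance=0.5, min_size=2):
    """Region growing: points within `tolerance` of a cluster member join it."""
    tree = cKDTree(points)
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue, cluster = [seed], [seed]
        while queue:
            idx = queue.pop()
            # all points within the distance tolerance of the current point
            for n in tree.query_ball_point(points[idx], tolerance):
                if n in unvisited:
                    unvisited.remove(n)
                    queue.append(n)
                    cluster.append(n)
        if len(cluster) >= min_size:
            clusters.append(sorted(cluster))
    return clusters

# two well-separated groups of 3D points -> two clusters
pts = np.array([[0.0, 0, 0], [0.1, 0, 0], [0.2, 0, 0],
                [5.0, 5, 0], [5.1, 5, 0]])
clusters = euclidean_cluster(pts, tolerance=0.5)
print(clusters)
```

PCL's EuclideanClusterExtraction implements the same idea, with a cluster tolerance and minimum/maximum cluster sizes as its main parameters.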
2

Sun, Chun-Yu, Yu-Qi Yang, Hao-Xiang Guo, Peng-Shuai Wang, Xin Tong, Yang Liu, and Heung-Yeung Shum. "Semi-supervised 3D shape segmentation with multilevel consistency and part substitution." Computational Visual Media 9, no. 2 (January 3, 2023): 229–47. http://dx.doi.org/10.1007/s41095-022-0281-9.

Abstract:
The lack of fine-grained 3D shape segmentation data is the main obstacle to developing learning-based 3D segmentation techniques. We propose an effective semi-supervised method for learning 3D segmentations from a few labeled 3D shapes and a large amount of unlabeled 3D data. For the unlabeled data, we present a novel multilevel consistency loss to enforce consistency of network predictions between perturbed copies of a 3D shape at multiple levels: point level, part level, and hierarchical level. For the labeled data, we develop a simple yet effective part substitution scheme to augment the labeled 3D shapes with more structural variations to enhance training. Our method has been extensively validated on the task of 3D object semantic segmentation on PartNet and ShapeNetPart, and indoor scene semantic segmentation on ScanNet. It exhibits superior performance to existing semi-supervised and unsupervised pre-training 3D approaches.
3

Wang, Pengwei, Tianqi Gu, Binbin Sun, Di Huang, and Ke Sun. "Research on 3D Point Cloud Data Preprocessing and Clustering Algorithm of Obstacles for Intelligent Vehicle." World Electric Vehicle Journal 13, no. 7 (July 21, 2022): 130. http://dx.doi.org/10.3390/wevj13070130.

Abstract:
Environment perception is the foundation of the intelligent driving system and is a prerequisite for achieving path planning and vehicle control. Among its tasks, obstacle detection is key to environment perception. In order to solve the problems of hard-to-distinguish adjacent obstacles and easily split distant obstacles in traditional obstacle detection algorithms, this study first designed a 3D point cloud data filtering algorithm, removed vehicle body points and noise points from the point cloud data, and designed a point cloud down-sampling method. Then a ground segmentation method based on the Ray Ground Filter algorithm was designed to solve the under-segmentation problem in ground segmentation while ensuring real-time performance. Furthermore, an improved DBSCAN (Density-Based Spatial Clustering of Applications with Noise) clustering algorithm was proposed, and the L-shaped fitting method was used to complete the 3D bounding box fitting of the point cloud. This resolves the difficulty of distinguishing adjacent obstacles at close range caused by fixed parameter thresholds and the tendency of distant obstacles to split into multiple detections, and improves the real-time performance of the algorithm. Finally, a real vehicle test was conducted; the results show that the proposed obstacle detection algorithm improves accuracy by 6.1% and real-time performance by 13.2% compared with the traditional algorithm.
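One preprocessing step named in this abstract, point cloud down-sampling, is commonly done with a voxel grid: all points falling into the same voxel are replaced by their centroid. The paper does not give its exact method, so the function name and voxel size below are illustrative only:

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Replace every group of points sharing a voxel with the group centroid."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, inverse, counts = np.unique(keys, axis=0,
                                   return_inverse=True, return_counts=True)
    inverse = inverse.ravel()  # guard against NumPy versions returning a column
    sums = np.zeros((counts.size, points.shape[1]))
    np.add.at(sums, inverse, points)   # accumulate per-voxel sums
    return sums / counts[:, None]      # centroid = sum / count

pts = np.array([[0.1, 0.1, 0.1], [0.2, 0.2, 0.2],  # same 0.5 m voxel
                [0.9, 0.9, 0.9]])                  # a different voxel
out = voxel_downsample(pts, voxel_size=0.5)
print(out)
```

Down-sampling like this thins dense near-field returns while barely touching sparse distant ones, which is why it is a standard first stage before clustering.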
4

Jiang, Wuhua, Chuanzheng Song, Hai Wang, Ming Yu, and Yajie Yan. "Obstacle Detection by Autonomous Vehicles: An Adaptive Neighborhood Search Radius Clustering Approach." Machines 11, no. 1 (January 2, 2023): 54. http://dx.doi.org/10.3390/machines11010054.

Abstract:
For autonomous vehicles, obstacle detection results using 3D lidar are in the form of point clouds, and are unevenly distributed in space. Clustering is a common means for point cloud processing; however, improper selection of clustering thresholds can lead to under-segmentation or over-segmentation of point clouds, resulting in false detection or missed detection of obstacles. In order to solve these problems, a new obstacle detection method was required. Firstly, we applied a distance-based filter and a ground segmentation algorithm, to pre-process the original 3D point cloud. Secondly, we proposed an adaptive neighborhood search radius clustering algorithm, based on the analysis of the relationship between the clustering radius and point cloud spatial distribution, adopting the point cloud pitch angle and the horizontal angle resolution of the lidar, to determine the clustering threshold. Finally, an autonomous vehicle platform and the offline autonomous driving KITTI dataset were used to conduct multi-scene comparative experiments between the proposed method and a Euclidean clustering method. The multi-scene real vehicle experimental results showed that our method improved clustering accuracy by 6.94%, and the KITTI dataset experimental results showed that the F1 score increased by 0.0629.
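The abstract's key idea, a neighbourhood search radius that adapts to the point cloud's spatial distribution, follows from lidar geometry: adjacent beams separated by angular resolution Δθ land roughly d·tan(Δθ) apart at range d, so the clustering threshold should grow with range. The paper's exact formula is not given in the abstract; this sketch assumes a simple proportional rule with an invented safety margin:

```python
import math

def adaptive_radius(distance_m, horiz_res_deg=0.2, margin=2.0):
    """Search radius tracking the expected gap between adjacent lidar returns."""
    beam_gap = distance_m * math.tan(math.radians(horiz_res_deg))
    return margin * beam_gap

# the threshold widens with range instead of staying fixed
for d in (5.0, 20.0, 50.0):
    print(f"{d:5.1f} m -> radius {adaptive_radius(d):.3f} m")
```

A fixed threshold tuned for 5 m would split distant objects into fragments, which is exactly the over-segmentation failure mode the paper targets.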
5

Itu, Razvan, and Radu Danescu. "Part-Based Obstacle Detection Using a Multiple Output Neural Network." Sensors 22, no. 12 (June 7, 2022): 4312. http://dx.doi.org/10.3390/s22124312.

Abstract:
Detecting the objects surrounding a moving vehicle is essential for autonomous driving and for any kind of advanced driving assistance system; such a system can also be used for analyzing the surrounding traffic as the vehicle moves. The most popular techniques for object detection are based on image processing; in recent years, they have become increasingly focused on artificial intelligence. Systems using monocular vision are increasingly popular for driving assistance, as they do not require complex calibration and setup. The lack of three-dimensional data is compensated for by the efficient and accurate classification of the input image pixels. The detected objects are usually identified as cuboids in the 3D space, or as rectangles in the image space. Recently, instance segmentation techniques have been developed that are able to identify the freeform set of pixels that form an individual object, using complex convolutional neural networks (CNNs). This paper presents an alternative to these instance segmentation networks, combining much simpler semantic segmentation networks with light, geometrical post-processing techniques, to achieve instance segmentation results. The semantic segmentation network produces four semantic labels that identify the quarters of the individual objects: top left, top right, bottom left, and bottom right. These pixels are grouped into connected regions, based on their proximity and their position with respect to the whole object. Each quarter is used to generate a complete object hypothesis, which is then scored according to object pixel fitness. The individual homogeneous regions extracted from the labeled pixels are then assigned to the best-fitted rectangles, leading to complete and freeform identification of the pixels of individual objects. The accuracy is similar to instance segmentation-based methods but with reduced complexity in terms of trainable parameters, which leads to a reduced demand for computational resources.
6

Miyamoto, Ryusuke, Miho Adachi, Hiroki Ishida, Takuto Watanabe, Kouchi Matsutani, Hayato Komatsuzaki, Shogo Sakata, Raimu Yokota, and Shingo Kobayashi. "Visual Navigation Based on Semantic Segmentation Using Only a Monocular Camera as an External Sensor." Journal of Robotics and Mechatronics 32, no. 6 (December 20, 2020): 1137–53. http://dx.doi.org/10.20965/jrm.2020.p1137.

Abstract:
The most popular external sensor for robots capable of autonomous movement is 3D LiDAR. However, cameras are typically installed on robots that operate in environments where humans live their daily lives to obtain the same information that is presented to humans, even though autonomous movement itself can be performed using only 3D LiDAR. The number of studies on autonomous movement for robots using only visual sensors is relatively small, but this type of approach is effective at reducing the cost of sensing devices per robot. To reduce the number of external sensors required for autonomous movement, this paper proposes a novel visual navigation scheme using only a monocular camera as an external sensor. The key concept of the proposed scheme is to select a target point in an input image toward which a robot can move based on the results of semantic segmentation, where road following and obstacle avoidance are performed simultaneously. Additionally, a novel scheme called virtual LiDAR is proposed based on the results of semantic segmentation to estimate the orientation of a robot relative to the current path in a traversable area. Experiments conducted during the course of the Tsukuba Challenge 2019 demonstrated that a robot can operate in a real environment containing several obstacles, such as humans and other robots, if correct results of semantic segmentation are provided.
7

Chen, Baifan, Hong Chen, Dian Yuan, and Lingli Yu. "3D Fast Object Detection Based on Discriminant Images and Dynamic Distance Threshold Clustering." Sensors 20, no. 24 (December 17, 2020): 7221. http://dx.doi.org/10.3390/s20247221.

Abstract:
The object detection algorithm based on vehicle-mounted lidar is a key component of the perception system on autonomous vehicles. It can provide high-precision and highly robust obstacle information for the safe driving of autonomous vehicles. However, most algorithms are often based on a large amount of point cloud data, which makes real-time detection difficult. To solve this problem, this paper proposes a 3D fast object detection method based on three main steps: First, the ground segmentation by discriminant image (GSDI) method is used to convert point cloud data into discriminant images for ground point segmentation, which avoids computing directly on the point cloud data and improves the efficiency of ground point segmentation. Second, an image detector is used to generate the region of interest of the three-dimensional object, which effectively narrows the search range. Finally, the dynamic distance threshold clustering (DDTC) method is designed for different densities of point cloud data, which improves the detection of long-distance objects and avoids the over-segmentation phenomenon generated by traditional algorithms. Experiments have shown that this algorithm can meet the real-time requirements of autonomous driving while maintaining high accuracy.
8

Itu, Razvan, and Radu Gabriel Danescu. "A Self-Calibrating Probabilistic Framework for 3D Environment Perception Using Monocular Vision." Sensors 20, no. 5 (February 27, 2020): 1280. http://dx.doi.org/10.3390/s20051280.

Abstract:
Cameras are sensors that are available anywhere and to everyone, and can be placed easily inside vehicles. While stereovision setups of two or more synchronized cameras have the advantage of directly extracting 3D information, a single camera can be easily set up behind the windshield (like a dashcam), or above the dashboard, usually as an internal camera of a mobile phone placed there for navigation assistance. This paper presents a framework for extracting and tracking obstacle 3D data from the surrounding environment of a vehicle in traffic, using as a sensor a generic camera. The system combines the strength of Convolutional Neural Network (CNN)-based segmentation with a generic probabilistic model of the environment, the dynamic occupancy grid. The main contributions presented in this paper are the following: A method for generating the probabilistic measurement model from monocular images, based on CNN segmentation, which takes into account the particularities, uncertainties, and limitations of monocular vision; a method for automatic calibration of the extrinsic and intrinsic parameters of the camera, without the need of user assistance; the integration of automatic calibration and measurement model generation into a scene tracking system that is able to work with any camera to perceive the obstacles in real traffic. The presented system can be easily fitted to any vehicle, working standalone or together with other sensors, to enhance the environment perception capabilities and improve the traffic safety.
9

Lin, Chien-Chou, Wei-Lung Mao, Teng-Wen Chang, Chuan-Yu Chang, and Salah Sohaib Saleh Abdullah. "Fast Obstacle Detection Using 3D-to-2D LiDAR Point Cloud Segmentation for Collision-free Path Planning." Sensors and Materials 32, no. 7 (July 20, 2020): 2365. http://dx.doi.org/10.18494/sam.2020.2810.

10

Gomes, Tiago, Diogo Matias, André Campos, Luís Cunha, and Ricardo Roriz. "A Survey on Ground Segmentation Methods for Automotive LiDAR Sensors." Sensors 23, no. 2 (January 5, 2023): 601. http://dx.doi.org/10.3390/s23020601.

Abstract:
In the near future, autonomous vehicles with full self-driving features will populate our public roads. However, fully autonomous cars will require robust perception systems to safely navigate the environment, which includes cameras, RADAR devices, and Light Detection and Ranging (LiDAR) sensors. LiDAR is currently a key sensor for the future of autonomous driving since it can read the vehicle’s vicinity and provide a real-time 3D visualization of the surroundings through a point cloud representation. These features can assist the autonomous vehicle in several tasks, such as object identification and obstacle avoidance, accurate speed and distance measurements, road navigation, and more. However, it is crucial to detect the ground plane and road limits to safely navigate the environment, which requires extracting information from the point cloud to accurately detect common road boundaries. This article presents a survey of existing methods used to detect and extract ground points from LiDAR point clouds. It summarizes the already extensive literature and proposes a comprehensive taxonomy to help understand the current ground segmentation methods that can be used in automotive LiDAR sensors.
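Among the method families such surveys cover, plane fitting is a common baseline: RANSAC repeatedly fits a plane to random point triples and keeps the plane with the most inliers, which it labels ground. A minimal sketch (not any specific surveyed method; names and thresholds here are illustrative):

```python
import numpy as np

def ransac_ground(points, iters=200, dist_thresh=0.15, seed=0):
    """Return a boolean mask marking inliers of the dominant (ground) plane."""
    rng = np.random.default_rng(seed)
    best_mask = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        mask = np.abs(points @ normal + d) < dist_thresh
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return best_mask

# flat ground near z = 0 plus a box-like obstacle at z = 1
rng = np.random.default_rng(1)
ground = np.c_[rng.uniform(-5, 5, (100, 2)), rng.normal(0, 0.02, 100)]
box = np.c_[rng.uniform(0, 1, (20, 2)), np.full(20, 1.0)]
mask = ransac_ground(np.vstack([ground, box]))
print(int(mask[:100].sum()), int(mask[100:].sum()))
```

A single global plane fails on sloped or curved roads, which is one reason ground segmentation has grown into the many method families the survey organises into a taxonomy.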

Dissertations / Theses on the topic "3D obstacle segmentation":

1

Habermann, Danilo. "Localização topológica e identificação de obstáculos por meio de sensor laser 3D (LIDAR) para aplicação em navegação de veículos autônomos terrestres." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-05012017-144708/.

Abstract:
The employment of autonomous ground vehicles, in both civilian and military applications, has become increasingly common over the past few years. These vehicles can be helpful for people with special needs and can reduce traffic accidents and combat casualties. This thesis addresses the problems of obstacle classification and of localizing the vehicle relative to a topological map, without using GPS devices or detailed digital maps. A 3D laser sensor (LIDAR) is used to collect data from the environment. The obstacle classification system extracts features from point clouds and feeds them to a classifier that separates the data into four classes: vehicles, people, buildings, and tree trunks/poles. During feature extraction, an original method for transforming a 3D point cloud into a 2D grid is proposed, which helps reduce processing time. Road intersections in urban areas are detected and used as landmarks in a topological map. The system localizes the vehicle using these landmarks and identifies changes in the vehicle's direction as it passes through intersections. Experiments demonstrated that the system was able to classify obstacles correctly and to localize itself without using GPS signals.
2

Grubb, Grant. "3D vision sensing for improved pedestrian safety." Master's thesis, 2004. http://hdl.handle.net/1885/44511.

Abstract:
Pedestrian-vehicle accidents account for the second largest source of automotive-related fatality and injury worldwide. Automotive manufacturers will soon be required to meet regulations specifying safety requirements for pedestrian-vehicle collisions. The inclusion of pedestrian protection systems (e.g., external airbags) is being considered as a solution to preventing pedestrian fatality and injury. However, such systems require knowledge of pedestrian presence for correct activation. This thesis describes work towards a computer vision system to detect pedestrians which could fulfil the sensory requirements for activating automotive pedestrian protection devices. In this work, the requirements for a pedestrian sensor were examined and a prototype vision system was developed to demonstrate the concepts discussed in the thesis. To achieve greater robustness and an improved understanding of the environment, we focused on combining 3D and temporal techniques with existing pedestrian detection methods.

Stereo vision was employed to provide 3D information about the scene. The well-known computer vision concept of disparity maps was used to generate a 3D scene representation. Additional vision algorithms were developed to provide scene understanding and thus segment a scene into obstacles (pedestrians, vehicles, and other road infrastructure). Two methods were investigated for this purpose: Inverse Perspective Mapping and v-disparity, with the latter producing superior results; thus v-disparity was used for 3D obstacle segmentation.

Next, we focused on developing a method to classify detected obstacles as either pedestrian or non-pedestrian. Existing algorithms which examine a pedestrian's shape and provide a classification result using Support Vector Machines were used to fulfil this obstacle classification task. We extended the existing work to include pedestrian models for front/rear and side poses.

Finally, temporal information from both the obstacle detection and classification results was used to enhance system results. We used Kalman filtering techniques to track pedestrians and provide motion predictions. Additionally, Bayesian probability was used to provide a certainty of pedestrian detection based on an object's classification history. This provided greater robustness to the overall detection results.

The developed prototype was installed on two vehicles, a Toyota Landcruiser and a Volvo S80, to perform real-world testing. Results from the prototype were excellent, achieving average detection rates of 83% with average false detection rates of only 0.4%.
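The v-disparity representation this thesis selected for 3D obstacle segmentation can be sketched compactly: for each image row v, histogram the disparity values occurring in that row. The ground plane then shows up as a slanted line, and upright obstacles as vertical streaks. A minimal illustration on toy data (the function name and toy image are hypothetical, not the thesis implementation):

```python
import numpy as np

def v_disparity(disp, max_disp):
    """For each image row, count how often each disparity value occurs."""
    h = disp.shape[0]
    vmap = np.zeros((h, max_disp), dtype=np.int32)
    for v in range(h):
        row = disp[v]
        valid = (row >= 0) & (row < max_disp)
        vmap[v] += np.bincount(row[valid].astype(int), minlength=max_disp)
    return vmap

# toy disparity image: ground disparity grows with the row index;
# an obstacle patch sits at constant disparity 20
disp = np.tile(np.arange(40)[:, None] // 2, (1, 60))
disp[10:30, 20:40] = 20
vmap = v_disparity(disp, max_disp=32)
print(vmap.shape)
```

Fitting a line to the dominant slanted structure in the map recovers the ground profile; pixels whose disparity deviates from it belong to obstacles.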

Book chapters on the topic "3D obstacle segmentation":

1

Wang, Zhe, Hong Liu, Yueliang Qian, and Tao Xu. "Real-Time Plane Segmentation and Obstacle Detection of 3D Point Clouds for Indoor Scenes." In Computer Vision – ECCV 2012. Workshops and Demonstrations, 22–31. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33868-7_3.

2

Easa, Said, Yang Ma, Ashraf Elshorbagy, Ahmed Shaker, Songnian Li, and Shriniwas Arkatkar. "Visibility-Based Technologies and Methodologies for Autonomous Driving." In Self-driving Vehicles and Enabling Technologies [Working Title]. IntechOpen, 2020. http://dx.doi.org/10.5772/intechopen.95328.

Abstract:
The three main elements of autonomous vehicles (AV) are orientation, visibility, and decision. This chapter presents an overview of the implementation of visibility-based technologies and methodologies. The chapter first presents two fundamental aspects that are necessary for understanding the main contents. The first aspect is highway geometric design as it relates to sight distance and highway alignment. The second aspect is mathematical basics, including coordinate transformation and visual space segmentation. Details on the Light Detection and Ranging (Lidar) system, which represents the ‘eye’ of the AV are presented. In particular, a new Lidar 3D mapping system, that can be operated on different platforms and modes for a new mapping scheme is described. The visibility methodologies include two types. Infrastructure visibility mainly addresses high-precision maps and sight obstacle detection. Traffic visibility (vehicles, pedestrians, and cyclists) addresses identification of critical positions and visibility estimation. Then, an overview of the decision element (path planning and intelligent car-following) for the movement of AV is presented. The chapter provides important information for researchers and therefore should help to advance road safety for autonomous vehicles.

Conference papers on the topic "3D obstacle segmentation":

1

Mo, J. W., A. Y. Lu, and T. Zhang. "An obstacle-detecting algorithm based on image and 3D point cloud segmentation." In International Conference on Artificial Intelligence and Industrial Application. Southampton, UK: WIT Press, 2015. http://dx.doi.org/10.2495/aiia140501.

2

Manfio Barbosa, Felipe, and Fernando Santos Osório. "3D Perception for Autonomous Mobile Robots Navigation Using Deep Learning for Safe Zones Detection: A Comparative Study." In Computer on the Beach. São José: Universidade do Vale do Itajaí, 2021. http://dx.doi.org/10.14210/cotb.v12.p072-079.

Abstract:
Computer vision plays an important role in intelligent systems, particularly for autonomous mobile robots and intelligent vehicles. It is essential to the correct operation of such systems, increasing safety for users/passengers and also for other people in the environment. One of its many levels of analysis is semantic segmentation, which provides powerful insights into scene understanding, a task of utmost importance in autonomous navigation. Recent developments have shown the power of deep learning models applied to semantic segmentation. Besides, 3D data offers a richer representation of the world. Although there are many studies comparing the performance of several semantic segmentation models, they mostly consider the task over 2D images, and none of them include the recent GAN models in the analysis. In this paper, we carry out the study, implementation, and comparison of recent deep learning models for 3D semantic image segmentation. We consider the FCN, SegNet, and Pix2Pix models. The 3D images are captured indoors and gathered in a dataset created for the scope of this project. Our main objective is to evaluate and compare the models' performance and efficiency in detecting obstacles and safe and unsafe zones for autonomous mobile robot navigation. Considering as metrics the mean IoU values, number of parameters, and inference time, our experiments show that Pix2Pix, a recent Conditional Generative Adversarial Network, outperforms the FCN and SegNet models.
3

Mueller, Simone, and Dieter Kranzlmueller. "Self-Organising Maps for Efficient Data Reduction and Visual Optimisation of Stereoscopic based Disparity Maps." In WSCG'2022 - 30. International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision'2022. Západočeská univerzita, 2022. http://dx.doi.org/10.24132/csrn.3201.28.

Abstract:
Many modern autonomous systems use disparity maps for recognition and interpretation of their environment. The depth information of these disparity maps can be utilised for point cloud generation. Real-time, high-quality processing of point clouds is necessary for reliable detection of safety-relevant issues such as barriers or obstacles in road traffic. However, the quality characteristics of point clouds are influenced by properties of depth sensors and environmental conditions such as illumination, surface, and texture. Quality optimisation and real-time implementation can be resource intensive. Limiting the amount of data allows optimisation of real-time processing. We use existing self-organising maps (Kohonen networks) to identify and segment salient objects in disparity maps. Kohonen networks use unsupervised learning to generate disparity maps abstracted by a small number of vectors instead of all pixels. The combination of object-specific segmentation and a reduced pixel count decreases memory use and processing time towards real-time compatibility. Our results show that trained self-organising maps can be applied to disparity maps for improved runtime, reduced data volume, and further processing of 3D reconstruction of salient objects.
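A self-organising map, in the sense used in this abstract, compresses many samples (pixels or disparity vectors) into a small set of code vectors: each training sample pulls its best-matching unit, and that unit's grid neighbours with a Gaussian falloff, towards itself. A from-scratch 1-D SOM sketch (hyperparameters and function name invented; not the authors' implementation):

```python
import numpy as np

def fit_som(data, n_units=8, epochs=40, lr0=0.5, sigma0=2.0, seed=0):
    """Train a 1-D Kohonen map; returns n_units code vectors summarising data."""
    rng = np.random.default_rng(seed)
    weights = data[rng.choice(len(data), n_units, replace=False)].astype(float)
    grid = np.arange(n_units)
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)             # decaying learning rate
        sigma = max(sigma0 * (1 - epoch / epochs), 0.5)
        for x in data[rng.permutation(len(data))]:
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))
            # neighbours of the best-matching unit move too, with falloff
            influence = np.exp(-((grid - bmu) ** 2) / (2 * sigma ** 2))
            weights += lr * influence[:, None] * (x - weights)
    return weights

# 200 noisy 2-D samples compressed to 8 code vectors
rng = np.random.default_rng(1)
data = rng.normal(0.0, 1.0, (200, 2))
codes = fit_som(data)
print(codes.shape)
```

This is the data-reduction effect the paper exploits: downstream processing touches a handful of code vectors instead of every pixel of the disparity map.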
