Journal articles on the topic '3D point clouds'

Consult the top 50 journal articles for your research on the topic '3D point clouds.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Roopa B S, Pramod Kumar S, Prema K N, and Smitha S M. "Review on 3D Point Cloud." Global Journal of Engineering and Technology Advances 16, no. 3 (September 30, 2023): 219–23. http://dx.doi.org/10.30574/gjeta.2023.16.3.0192.

Full text
Abstract:
A point cloud is a data structure representing a collection of multidimensional points and is frequently used to describe 3D data. Technically speaking, a point cloud is a database of points in a three-dimensional coordinate system. From the viewpoint of a typical workflow, however, what matters is that a point cloud is an extremely accurate digital record of an object or region, stored as a very large number of points covering the surfaces of a sensed object. In recent years, 3D point clouds have drawn increasing attention as a novel way to represent objects. This paper gives a brief introduction to point clouds and discusses point cloud data collection, processing, and applications.
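As a generic illustration of the data structure described above (a minimal sketch with made-up sample values, not the authors' code), a point cloud is commonly stored as an N×3 array of XYZ coordinates, from which basic properties such as the centroid and bounding box follow directly:

```python
import numpy as np

# A point cloud as an (N, 3) array of XYZ coordinates (hypothetical sample values).
points = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [0.0, 2.0, 0.0],
    [0.0, 0.0, 3.0],
])

centroid = points.mean(axis=0)   # geometric centre of the cloud
aabb_min = points.min(axis=0)    # axis-aligned bounding box, lower corner
aabb_max = points.max(axis=0)    # axis-aligned bounding box, upper corner
```

Real scans hold millions of such rows, often with extra columns for intensity or color.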
2

Liu, Ruyu, Zhiyong Zhang, Liting Dai, Guodao Zhang, and Bo Sun. "MFTR-Net: A Multi-Level Features Network with Targeted Regularization for Large-Scale Point Cloud Classification." Sensors 23, no. 8 (April 10, 2023): 3869. http://dx.doi.org/10.3390/s23083869.

Full text
Abstract:
There are irregular and disordered noise points in large-scale point clouds, and the accuracy of existing large-scale point cloud classification methods still needs improvement. This paper proposes a network named MFTR-Net, which considers the local point cloud's eigenvalue calculation. The eigenvalues of the 3D point cloud data and the 2D eigenvalues of the point clouds projected onto different planes are calculated to express the local feature relationships between adjacent points. A regular point cloud feature image is constructed and input into the designed convolutional neural network. The network incorporates TargetDrop to improve robustness. The experimental results show that our method can learn more high-dimensional feature information, further improving point cloud classification, and achieves 98.0% accuracy on the Oakland 3D dataset.
3

Giang, Truong Thi Huong, and Young-Jae Ryoo. "Pruning Points Detection of Sweet Pepper Plants Using 3D Point Clouds and Semantic Segmentation Neural Network." Sensors 23, no. 8 (April 17, 2023): 4040. http://dx.doi.org/10.3390/s23084040.

Full text
Abstract:
Automation in agriculture can save labor and raise productivity. Our research aims to have robots prune sweet pepper plants automatically in smart farms. In previous research, we studied detecting plant parts with a semantic segmentation neural network. In this research, we additionally detect the pruning points of leaves in 3D space by using 3D point clouds. Robot arms can move to these positions and cut the leaves. We propose a method to create 3D point clouds of sweet peppers by applying semantic segmentation neural networks, the ICP algorithm, and ORB-SLAM3, a visual SLAM application, with a LiDAR camera. The resulting 3D point cloud consists of the plant parts recognized by the neural network. We also present a method to detect leaf pruning points in 2D images and in 3D space by using the 3D point clouds. Furthermore, the PCL library was used to visualize the 3D point clouds and the pruning points. Many experiments were conducted to demonstrate the method's stability and correctness.
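The ICP registration step mentioned in the abstract can be sketched generically as follows (a simplified point-to-point ICP with brute-force nearest-neighbour matching, not the authors' PCL/ORB-SLAM3 pipeline; function names are illustrative):

```python
import numpy as np

def best_fit_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch/SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)           # cross-covariance of centred clouds
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

def icp(src, dst, iters=20):
    """Point-to-point ICP: alternate nearest-neighbour matching and alignment."""
    cur = src.copy()
    for _ in range(iters):
        # Brute-force nearest neighbours (fine for small demo clouds only).
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[d.argmin(axis=1)]
        R, t = best_fit_transform(cur, matched)
        cur = cur @ R.T + t
    return cur
```

Production systems replace the brute-force matching with a k-d tree and add outlier rejection, but the alternation of matching and SVD alignment is the same.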
4

Rai, A., N. Srivastava, K. Khoshelham, and K. Jain. "SEMANTIC ENRICHMENT OF 3D POINT CLOUDS USING 2D IMAGE SEGMENTATION." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-1/W2-2023 (December 14, 2023): 1659–66. http://dx.doi.org/10.5194/isprs-archives-xlviii-1-w2-2023-1659-2023.

Full text
Abstract:
3D point cloud segmentation is computationally intensive due to the lack of inherent structural information and the unstructured nature of point cloud data, which hinders the identification and connection of neighboring points. Understanding the structure of the point cloud data plays a crucial role in obtaining a meaningful and accurate representation of the underlying 3D environment. In this paper, we propose an algorithm that builds on existing state-of-the-art techniques of 2D image segmentation and point cloud registration to enrich point clouds with semantic information. DeepLab2 with a ResNet50 backbone trained on the COCO dataset is used for semantic segmentation of indoor scenes into several classes such as wall, floor, ceiling, doors, and windows. Semantic information from the 2D images is propagated along with the other input data, i.e., RGB images, depth images, and sensor information, to generate 3D point clouds with semantic information. The Iterative Closest Point (ICP) algorithm is used for pair-wise registration of consecutive point clouds, and finally pose graph optimization is applied to the whole set of point clouds to generate the combined point cloud of the entire scene. The 3D point cloud of the whole scene contains pseudo-color information denoting the semantic class to which each point belongs. The proposed methodology uses an off-the-shelf 2D semantic segmentation deep learning model to semantically segment 3D point clouds collected with a handheld mobile LiDAR sensor. We demonstrate a comparison of the accuracy achieved against a manually segmented point cloud on an in-house dataset as well as the 2D3DS benchmark dataset.
5

Han, Ming, Jianjun Sha, Yanheng Wang, and Xiangwei Wang. "PBFormer: Point and Bi-Spatiotemporal Transformer for Pointwise Change Detection of 3D Urban Point Clouds." Remote Sensing 15, no. 9 (April 27, 2023): 2314. http://dx.doi.org/10.3390/rs15092314.

Full text
Abstract:
Change detection (CD) is a technique widely used in remote sensing for identifying the differences between data acquired at different times. Most existing 3D CD approaches voxelize point clouds into 3D grids, project them into 2D images, or rasterize them into digital surface models due to the irregular format of point clouds and the variety of changes in three-dimensional (3D) objects. However, the details of the geometric structure and the spatiotemporal sequence information may not be fully utilized. In this article, we propose PBFormer, a transformer network with a Siamese architecture, for directly inferring pointwise changes in bi-temporal 3D point clouds. First, we extract point sequences from irregular 3D point clouds using the k-nearest neighbor method. Second, we use a point transformer network as an encoder to extract point feature information from the bi-temporal 3D point clouds. Then, we design a module for fusing the spatiotemporal features of the bi-temporal point clouds to effectively detect change features. Finally, multilayer perceptrons are used to obtain the CD results. Extensive experiments conducted on the Urb3DCD benchmark show that PBFormer outperforms other state-of-the-art approaches for 3D point cloud CD tasks.
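The k-nearest-neighbor extraction step named in the abstract can be sketched as follows (a brute-force illustration of the generic technique, not the PBFormer implementation):

```python
import numpy as np

def knn_indices(points, k):
    """Indices of the k nearest neighbours of every point (self included),
    via a brute-force pairwise distance matrix."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    return np.argsort(d, axis=1)[:, :k]    # each row: point sequence for one query
```

Each row of the result is a local point sequence ordered by distance, the kind of neighbourhood a point-based encoder consumes.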
6

Bello, Saifullahi Aminu, Shangshu Yu, Cheng Wang, Jibril Muhmmad Adam, and Jonathan Li. "Review: Deep Learning on 3D Point Clouds." Remote Sensing 12, no. 11 (May 28, 2020): 1729. http://dx.doi.org/10.3390/rs12111729.

Full text
Abstract:
A point cloud is a set of points defined in a 3D metric space. Point clouds have become one of the most significant data formats for 3D representation and are gaining popularity owing to the growing availability of acquisition devices and their increasing application in areas such as robotics, autonomous driving, and augmented and virtual reality. Deep learning is now the most powerful tool for data processing in computer vision and is becoming the preferred technique for tasks such as classification, segmentation, and detection. While deep learning techniques are mainly applied to data with a structured grid, the point cloud is unstructured, which makes its direct processing with deep learning very challenging. This paper reviews recent state-of-the-art deep learning techniques, mainly focusing on raw point cloud data. Initial work on deep learning directly with raw point cloud data did not model local regions; subsequent approaches therefore model local regions through sampling and grouping. More recently, several approaches have been proposed that not only model the local regions but also explore the correlation between points within them. From the survey, we conclude that approaches that model local regions and take into account the correlation between points in those regions perform better. Contrary to existing reviews, this paper provides a general structure for learning with raw point clouds, and various methods are compared within that structure. This work also introduces popular 3D point cloud benchmark datasets and discusses the application of deep learning to popular 3D vision tasks, including classification, segmentation, and detection.
7

Mwangangi, K. K., P. O. Mc’Okeyo, S. J. Oude Elberink, and F. Nex. "EXPLORING THE POTENTIALS OF UAV PHOTOGRAMMETRIC POINT CLOUDS IN FAÇADE DETECTION AND 3D RECONSTRUCTION OF BUILDINGS." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B2-2022 (May 30, 2022): 433–40. http://dx.doi.org/10.5194/isprs-archives-xliii-b2-2022-433-2022.

Full text
Abstract:
The use of Airborne Laser Scanner (ALS) point clouds has dominated research on 3D building reconstruction, giving photogrammetric point clouds less attention. Point cloud density, occlusion, and vegetation cover are some of the concerns that make it necessary to understand and question the completeness and correctness of UAV photogrammetric point clouds for 3D building reconstruction. This research explores the potential of modelling 3D buildings from nadir and oblique UAV image data vis-à-vis airborne laser data. Optimal parameter settings for dense matching and reconstruction are analysed for both UAV image-based and lidar point clouds. This research employs an automatic data-driven model approach to 3D building reconstruction. A proper segmentation into planar roof faces is crucial, followed by façade detection to capture the real extent of the buildings' roof overhang. An analysis of point density and point noise in relation to the parameter settings indicates that with a minimum of 50 points/m², most planar surfaces are reconstructed comfortably, but for features smaller than roof dormers, a density above 80 points/m² is needed. 3D building models from UAV point clouds can be improved by enhancing roof boundaries with edge information from the images, and by merging the image-derived building outlines, the point cloud roof boundary, and the wall outlines to extract the real extent of the building.
8

Li, Weite, Kyoko Hasegawa, Liang Li, Akihiro Tsukamoto, and Satoshi Tanaka. "Deep Learning-Based Point Upsampling for Edge Enhancement of 3D-Scanned Data and Its Application to Transparent Visualization." Remote Sensing 13, no. 13 (June 28, 2021): 2526. http://dx.doi.org/10.3390/rs13132526.

Full text
Abstract:
Large-scale 3D-scanned point clouds enable the accurate and easy recording of complex 3D objects in the real world. The acquired point clouds often describe both the surficial and internal 3D structure of the scanned objects. The recently proposed edge-highlighted transparent visualization method is effective for recognizing the whole 3D structure of such point clouds. This visualization utilizes the degree of opacity for highlighting edges of the 3D-scanned objects, and it realizes clear transparent viewing of the entire 3D structures. However, for 3D-scanned point clouds, the quality of any edge-highlighting visualization depends on the distribution of the extracted edge points. Insufficient density, sparseness, or partial defects in the edge points can lead to unclear edge visualization. Therefore, in this paper, we propose a deep learning-based upsampling method focusing on the edge regions of 3D-scanned point clouds to generate more edge points during the 3D-edge upsampling task. The proposed upsampling network dramatically improves the point-distributional density, uniformity, and connectivity in the edge regions. The results on synthetic and scanned edge data show that our method can improve the percentage of edge points more than 15% compared to the existing point cloud upsampling network. Our upsampling network works well for both sharp and soft edges. A combined use with a noise-eliminating filter also works well. We demonstrate the effectiveness of our upsampling network by applying it to various real 3D-scanned point clouds. We also prove that the improved edge point distribution can improve the visibility of the edge-highlighted transparent visualization of complex 3D-scanned objects.
9

Takahashi, G., and H. Masuda. "TRAJECTORY-BASED VISUALIZATION OF MMS POINT CLOUDS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W13 (June 5, 2019): 1127–33. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w13-1127-2019.

Full text
Abstract:
MMSs allow us to obtain detailed 3D information around roads. In particular, LiDAR point clouds can be used for map generation and infrastructure management. For practical use, however, it is necessary to add labels to parts of the point clouds, since various objects can be included in them. Existing automatic classification methods are not completely error-free and may classify objects incorrectly. Therefore, even when automatic methods are applied to the point clouds, operators have to verify the labels. When operators classify point clouds manually, selecting 3D points in 3D views is difficult. In this paper, we propose a new point-cloud image based on the trajectories of MMSs, which we call a trajectory-based point-cloud image. Although the image is distorted because it is generated from the rotation angles of the laser scanners, we confirmed that most objects can be recognized from the point-cloud images by checking the main road facilities. We evaluated how efficiently annotation can be done using our method, and the results show that operators could add annotations to point-cloud images more efficiently.
10

Barnefske, Eike, and Harald Sternberg. "Evaluating the Quality of Semantic Segmented 3D Point Clouds." Remote Sensing 14, no. 3 (January 18, 2022): 446. http://dx.doi.org/10.3390/rs14030446.

Full text
Abstract:
Recently, 3D point clouds have become a quasi-standard for digitization. Point cloud processing remains a challenge due to the complex and unstructured nature of point clouds. Currently, most automatic point cloud segmentation methods are data-based and gain knowledge from manually segmented ground truth (GT) point clouds. The creation of GT point clouds, by capturing data with an optical sensor and then performing a manual or semi-automatic segmentation, is a less studied research field. Usually, GT point clouds are semantically segmented only once and considered to be free of semantic errors. In this work, it is shown that this assumption does not hold in general if reality is to be represented by a semantic point cloud. Our quality model has been developed to describe and evaluate semantic GT point clouds and their manual creation processes. It is applied to our own dataset and to publicly available point cloud datasets. Furthermore, we believe that this quality model contributes to the objective evaluation and comparability of data-based segmentation algorithms.
11

Xu, Y., Z. Sun, R. Boerner, T. Koch, L. Hoegner, and U. Stilla. "GENERATION OF GROUND TRUTH DATASETS FOR THE ANALYSIS OF 3D POINT CLOUDS IN URBAN SCENES ACQUIRED VIA DIFFERENT SENSORS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-3 (April 30, 2018): 2009–15. http://dx.doi.org/10.5194/isprs-archives-xlii-3-2009-2018.

Full text
Abstract:
In this work, we report a novel way of generating a ground truth dataset for analyzing point clouds from different sensors and for the validation of algorithms. Instead of directly labeling a large number of 3D points, which requires time-consuming manual work, a multi-resolution 3D voxel grid for the test site is generated. Then, with the help of a set of labeled points from the reference dataset, we can generate a 3D labeled space of the entire test site at different resolutions. Specifically, an octree-based voxel structure is applied to voxelize the annotated reference point cloud, organizing all points in 3D grids of multiple resolutions. When automatically annotating new test point clouds, a voting-based approach is applied to the labeled points within the multi-resolution voxels in order to assign a semantic label to the 3D space represented by each voxel. Lastly, robust line- and plane-based fast registration methods are developed for aligning point clouds obtained with various sensors. Benefiting from the labeled 3D spatial information, we can easily create new annotated 3D point clouds of the same scene from different sensors simply by considering the labels of the 3D space in which the points are located, which is convenient for the validation and evaluation of algorithms for point cloud interpretation and semantic segmentation.
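The voting-based voxel labeling described above can be sketched, at a single resolution, roughly like this (hypothetical helper names; the paper uses a multi-resolution octree rather than this flat grid):

```python
import numpy as np
from collections import Counter, defaultdict

def voxel_label_map(ref_points, ref_labels, voxel=0.5):
    """Majority-vote label per occupied voxel of an annotated reference cloud."""
    keys = np.floor(ref_points / voxel).astype(int)
    votes = defaultdict(Counter)
    for key, lab in zip(map(tuple, keys), ref_labels):
        votes[key][lab] += 1
    return {k: c.most_common(1)[0][0] for k, c in votes.items()}

def annotate(points, label_map, voxel=0.5, unknown=-1):
    """Label a new cloud by looking up the voxel each point falls into."""
    keys = np.floor(points / voxel).astype(int)
    return np.array([label_map.get(tuple(k), unknown) for k in keys])
```

A multi-resolution version would fall back to a coarser grid whenever a fine voxel is empty.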
12

Özdemir, E., and F. Remondino. "CLASSIFICATION OF AERIAL POINT CLOUDS WITH DEEP LEARNING." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W13 (June 4, 2019): 103–10. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w13-103-2019.

Full text
Abstract:
Due to their usefulness in various applications, such as energy evaluation, visibility analysis, emergency response, 3D cadastre, urban planning, change detection, and navigation, 3D city models have gained importance over the last decades. Point clouds are one of the primary data sources for the generation of realistic city models. Besides model-driven approaches, 3D building models can be produced directly from classified aerial point clouds. This paper presents ongoing research on 3D building reconstruction based on the classification of aerial point clouds without ancillary data (e.g., footprints). The work includes a deep learning approach based on specific geometric features extracted from the point cloud. The methodology was tested on the ISPRS 3D Semantic Labeling Contest (Vaihingen and Toronto point clouds), showing promising results, although partly affected by the low density and lack of points on the building facades in the available clouds.
13

Mugner, E., and N. Seube. "DENOISING OF 3D POINT CLOUDS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W17 (November 29, 2019): 217–24. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w17-217-2019.

Full text
Abstract:
A method to remove random errors from 3D point clouds is proposed. It is based on the estimation of a local geometric descriptor for each point. For mobile mapping and airborne LiDAR, a combined standard measurement uncertainty of the LiDAR system may supplement the geometric approach. Our method can be applied to any point cloud acquired by a fixed, mobile, or airborne LiDAR system. We present the principle of the method and results from various LiDAR systems mounted on UAVs. A comparison of a low-cost and a high-grade LiDAR system is performed on the same area, showing the benefits of applying our denoising algorithm to UAV LiDAR data. We also present the impact of denoising as a pre-processing step for ground classification applications. Finally, we show applications of our denoising algorithm to dense point clouds produced by photogrammetry software.
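As a baseline illustration of point cloud denoising (a standard statistical outlier removal sketch, not the descriptor-based method the abstract proposes):

```python
import numpy as np

def remove_outliers(points, k=4, std_ratio=1.0):
    """Statistical outlier removal: drop points whose mean distance to their
    k nearest neighbours is more than std_ratio standard deviations above
    the cloud-wide average. Brute-force distances; demo-scale only."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    d.sort(axis=1)
    mean_knn = d[:, 1:k + 1].mean(axis=1)    # skip self-distance in column 0
    thresh = mean_knn.mean() + std_ratio * mean_knn.std()
    return points[mean_knn <= thresh]
```

Isolated noise points sit far from their neighbours and exceed the threshold, while points on dense surfaces are kept.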
14

Harshit, H., S. K. P. Kushwaha, and K. Jain. "GEOMETRIC FEATURES INTERPRETATION OF PHOTOGRAMMETRIC POINT CLOUD FROM UNMANNED AERIAL VEHICLE." ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences X-4/W2-2022 (October 14, 2022): 83–88. http://dx.doi.org/10.5194/isprs-annals-x-4-w2-2022-83-2022.

Full text
Abstract:
In recent years, point clouds have become one of the most common sources of 3D information, providing accurate geometric features of the captured object. 3D point clouds can be derived from photogrammetry, lidar, or, in some cases, SAR, depending on the application. These point clouds consist of the 3D geospatial locations of an object in the form of XYZ coordinates, which can be used in various ways to deduce information about the object, either through visualisation or through geometric interpretation. Quality assessment standards for these point clouds are still at a nascent stage, with accuracy typically stated only in relative terms. In this paper, multiple scales of a point cloud are used to understand the level of information the cloud contains at each scale. Based on the 3D spatial information in a local neighbourhood, invariant geometric properties can be computed for each 3D point from the respective covariance matrix; the eigenvalues of these matrices describe the local 3D structure. Using these geometric features, an approach is developed for point cloud quality assessment. The proposed methodology exploits these geometric properties to evaluate the 3D scene structure. Further, the point cloud is classified using a shape detection algorithm that evaluates the geometric features to detect mathematical shapes in the point cloud. This paper also discusses the different geometric features that can be extracted from a point cloud and their importance.
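The covariance-eigenvalue features referred to above are commonly defined as linearity, planarity, and sphericity of a local neighbourhood; a minimal sketch of that generic formulation (not the authors' exact code):

```python
import numpy as np

def eigen_features(neighborhood):
    """Linearity, planarity, sphericity from the sorted eigenvalues
    (l1 >= l2 >= l3) of the local covariance matrix."""
    cov = np.cov(neighborhood.T)
    l1, l2, l3 = np.sort(np.linalg.eigvalsh(cov))[::-1]
    return (l1 - l2) / l1, (l2 - l3) / l1, l3 / l1
```

A neighbourhood lying on a line scores high linearity, a flat patch high planarity, and an isotropic blob high sphericity, which is what makes these descriptors useful for shape detection.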
15

Li, Jianxin, Guannan Si, Xinyu Liang, Zhaoliang An, Pengxin Tian, and Fengyu Zhou. "Partition-Based Point Cloud Completion Network with Density Refinement." Entropy 25, no. 7 (July 2, 2023): 1018. http://dx.doi.org/10.3390/e25071018.

Full text
Abstract:
In this paper, we propose a novel method for point cloud completion called PADPNet. Our approach uses a combination of global and local information to infer missing elements of the point cloud. We achieve this by dividing the input point cloud into uniform local regions, called perceptual fields, which can be understood abstractly as special convolution kernels. The set of points in each local region is represented as a feature vector and transformed into N uniform perceptual fields as the input to our transformer model. We also designed a geometric density-aware block to better exploit the inductive bias of the point cloud's 3D geometric structure. Our method preserves sharp edges and detailed structures that are often lost in voxel-based or point-based approaches. Experimental results demonstrate that our approach outperforms other methods in reducing the ambiguity of the output results. Our proposed method has important applications in 3D computer vision and can efficiently recover complete 3D object shapes from incomplete point clouds.
16

Zhao, Guangyuan, Xue Wan, Yaolin Tian, Yadong Shao, and Shengyang Li. "3D Component Segmentation Network and Dataset for Non-Cooperative Spacecraft." Aerospace 9, no. 5 (May 1, 2022): 248. http://dx.doi.org/10.3390/aerospace9050248.

Full text
Abstract:
Spacecraft component segmentation is one of the key technologies that enable autonomous navigation and manipulation of non-cooperative spacecraft in OOS (On-Orbit Service). While most studies on spacecraft component segmentation are based on 2D image segmentation, this paper proposes spacecraft component segmentation methods based on 3D point clouds. First, we propose a multi-source 3D spacecraft component segmentation dataset, including point clouds from lidar and VisualSFM (Visual Structure From Motion). Then, an improved PointNet++-based 3D component segmentation network named 3DSatNet is proposed, with new geometry-aware FE (Feature Extraction) layers and a new loss function to tackle the data imbalance problem, in which the number of points differs greatly between components and the density distribution of the point cloud is not uniform. Moreover, when partial prior point clouds of the target spacecraft are known, we propose a 3DSatNet-Reg network that adds a Teaser-based 3D point cloud registration module to 3DSatNet to obtain higher component segmentation accuracy. Experiments carried out on our proposed dataset demonstrate that 3DSatNet achieves 1.9% higher instance mIoU than PointNet++_SSG, and the highest IoU for the antenna class in both lidar and visual point clouds compared with popular networks. Furthermore, our algorithm has been deployed on an embedded AI computing device, the Nvidia Jetson TX2, which has the potential to be used on orbit, with a processing speed of 0.228 s per point cloud of 20,000 points.
17

Zhang, Le, Jian Sun, and Qiang Zheng. "3D Point Cloud Recognition Based on a Multi-View Convolutional Neural Network." Sensors 18, no. 11 (October 29, 2018): 3681. http://dx.doi.org/10.3390/s18113681.

Full text
Abstract:
The recognition of three-dimensional (3D) lidar (light detection and ranging) point clouds remains a significant issue in point cloud processing. Traditional point cloud recognition employs the 3D point cloud of the whole object. However, lidar data is a collection of two-and-a-half-dimensional (2.5D) point clouds (each from a single view) obtained by scanning the object within a certain field angle. To deal with this problem, we first propose a novel representation that expresses 3D point clouds as 2.5D point clouds from multiple views, and we generate multi-view 2.5D point cloud data based on the Point Cloud Library (PCL). Subsequently, we design an effective recognition model based on a multi-view convolutional neural network. The model acts directly on the raw 2.5D point clouds from all views and learns a global feature descriptor by fusing the features from all views with a view fusion network. Our approach achieves excellent recognition performance without any requirement for three-dimensional reconstruction or preprocessing of the point clouds. In conclusion, the proposed method effectively addresses the recognition of lidar point clouds and offers substantial practical value.
18

Yue, Yaowei, Xiaonan Li, and Yun Peng. "A 3D Point Cloud Classification Method Based on Adaptive Graph Convolution and Global Attention." Sensors 24, no. 2 (January 18, 2024): 617. http://dx.doi.org/10.3390/s24020617.

Full text
Abstract:
In recent years, three-dimensional (3D) point clouds have become increasingly ubiquitous and popular, with a growing focus on their classification. To extract richer features from point clouds, many researchers have turned their attention to various point set regions and channels within irregular point clouds. However, this approach has limited capability to attend to crucial regions of interest in 3D point clouds and may overlook valuable information from neighboring features during feature aggregation. Therefore, this paper proposes a novel 3D point cloud classification method based on global attention and adaptive graph convolution (Att-AdaptNet). The method consists of two main branches: the first computes attention masks for each point, while the second employs adaptive graph convolution to extract global features from the point set. It dynamically learns features based on point interactions, generating adaptive kernels to effectively and precisely capture diverse relationships among points from different semantic parts. Experimental results demonstrate that the proposed model achieves 93.8% overall accuracy and 90.8% average accuracy on the ModelNet40 dataset.
19

Lokesh M R, Anushitha K, Ashok D, Deepak Raj K, and Harshitha K. "3D Point-Cloud Processing Using Panoramic Images for Object Detection." International Journal of Scientific Research in Computer Science, Engineering and Information Technology 10, no. 3 (May 15, 2024): 186–98. http://dx.doi.org/10.32628/cseit2410318.

Full text
Abstract:
Remote sensing plays a major role in critical real-world application projects. This research introduces a novel approach, "3D Point-Cloud Processing Using Panoramic Images for Object Detection," aimed at enhancing the interpretability of laser point clouds through the integration of color information derived from panoramic images. Focusing on Mobile Measurement Systems (MMS), where various digital cameras are utilized, the work addresses the challenges of processing panoramic images that offer a 360-degree viewing angle. The core objective is to develop a robust method for generating color point clouds by establishing a mathematical correspondence between panoramic images and laser point clouds. The collinearity principle of three points guides the fusion process, involving the center of the omnidirectional multi-camera system, the image point on the sphere, and the object point. Through comprehensive experimental validation, the work confirms the accuracy of the proposed algorithm and formulas, showcasing its effectiveness in generating color point clouds within MMS. This research contributes to the ongoing development of 3D point-cloud processing, introducing a contemporary methodology for improved object detection through the fusion of panoramic images and laser point clouds.
20

Barnefske, E., and H. Sternberg. "PCCT: A POINT CLOUD CLASSIFICATION TOOL TO CREATE 3D TRAINING DATA TO ADJUST AND DEVELOP 3D CONVNET." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W16 (September 17, 2019): 35–40. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w16-35-2019.

Full text
Abstract:
Point clouds give a very detailed and sometimes very accurate representation of the geometry of captured objects. In surveying, point clouds captured with laser scanners or camera systems are an intermediate result that must be processed further. Often the point cloud has to be divided into regions of similar type (object classes) for the next processing steps. These classifications are very time-consuming and cost-intensive compared to acquisition. In order to automate this process step, convolutional neural networks (ConvNets), which take over the classification task, are investigated in detail. In addition to the network architecture, the classification performance of a ConvNet depends on the training data with which the task is learned. This paper presents and evaluates the point cloud classification tool (PCCT) developed at HCU Hamburg. With the PCCT, large point cloud collections can be classified semi-automatically. Furthermore, the influence of erroneous points in three-dimensional point clouds is investigated. The network architecture PointNet is used for this investigation.
APA, Harvard, Vancouver, ISO, and other styles
21

Wang, Peng-Shuai. "OctFormer: Octree-based Transformers for 3D Point Clouds." ACM Transactions on Graphics 42, no. 4 (July 26, 2023): 1–11. http://dx.doi.org/10.1145/3592131.

Full text
Abstract:
We propose octree-based transformers, named OctFormer, for 3D point cloud learning. OctFormer can not only serve as a general and effective backbone for 3D point cloud segmentation and object detection but also has linear complexity and is scalable for large-scale point clouds. The key challenge in applying transformers to point clouds is reducing the quadratic, and thus overwhelming, computational complexity of attention. To combat this issue, several works divide point clouds into non-overlapping windows and constrain attention to each local window. However, the point number in each window varies greatly, impeding efficient execution on GPUs. Observing that attention is robust to the shapes of local windows, we propose a novel octree attention, which leverages sorted shuffled keys of octrees to partition point clouds into local windows containing a fixed number of points while permitting shapes of windows to change freely. We also introduce dilated octree attention to expand the receptive field further. Our octree attention can be implemented in 10 lines of code with open-sourced libraries and runs 17 times faster than other point cloud attention mechanisms when the point number exceeds 200k. Built upon the octree attention, OctFormer can be easily scaled up and achieves state-of-the-art performance on a series of 3D semantic segmentation and 3D object detection benchmarks, surpassing previous sparse-voxel-based CNNs and point cloud transformers in terms of both efficiency and effectiveness. Notably, on the challenging ScanNet200 dataset, OctFormer outperforms sparse-voxel-based CNNs by 7.3 in mIoU. Our code and trained models are available at https://wang-ps.github.io/octformer.
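The fixed-point-count window partition at the heart of octree attention can be imitated with sorted z-order (Morton) keys (a simplified stand-in for the paper's shuffled octree keys and GPU implementation; voxel size and window length are assumed parameters):

```python
import numpy as np

def morton_keys(ijk, bits=10):
    """Interleave the bits of integer voxel coordinates into z-order keys."""
    keys = np.zeros(len(ijk), dtype=np.uint64)
    coords = ijk.astype(np.uint64)
    for b in range(bits):
        for axis in range(3):
            bit = (coords[:, axis] >> np.uint64(b)) & np.uint64(1)
            keys |= bit << np.uint64(3 * b + axis)
    return keys

def octree_windows(points, voxel=0.1, window=32):
    """Partition a point cloud into attention windows of exactly `window`
    points by sorting z-order keys: point count per window is fixed while
    the spatial shape of each window varies freely."""
    ijk = np.floor((points - points.min(0)) / voxel).astype(np.int64)
    order = np.argsort(morton_keys(ijk), kind="stable")
    pad = (-len(order)) % window            # pad by repeating the last index
    padded = np.concatenate([order, np.full(pad, order[-1])])
    return padded.reshape(-1, window)        # (num_windows, window) point indices
```

Because the keys sort spatially nearby points next to each other, each fixed-size window stays spatially compact, which is the property the octree attention exploits.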
APA, Harvard, Vancouver, ISO, and other styles
22

Zhu, Jingwei, Olaf Wysocki, Christoph Holst, and Thomas H. Kolbe. "Enriching Thermal Point Clouds of Buildings using Semantic 3D building Models." ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences X-4/W5-2024 (June 27, 2024): 341–48. http://dx.doi.org/10.5194/isprs-annals-x-4-w5-2024-341-2024.

Full text
Abstract:
Abstract. Thermal point clouds integrate thermal radiation and laser point clouds effectively. However, the semantic information needed for the interpretation of building thermal point clouds can hardly be precisely inferred. Transferring the semantics encapsulated in 3D building models at Level of Detail (LoD)3 has the potential to fill this gap. In this work, we propose a workflow that enriches thermal point clouds with the geo-position and semantics of LoD3 building models, utilizing features of both modalities: model point clouds are generated from LoD3 models, and thermal point clouds are co-registered by coarse-to-fine registration. The proposed method can automatically co-register the point clouds from different sources and enrich the thermal point cloud with facade-level semantics. The enriched thermal point cloud supports thermal analysis and can facilitate the development of currently scarce deep learning models operating directly on thermal point clouds.
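The co-registration step can be illustrated by one rigid-alignment iteration in the nearest-neighbour/Kabsch style (a generic sketch, not the authors' coarse-to-fine pipeline; brute-force matching stands in for a KD-tree):

```python
import numpy as np

def icp_step(src, dst):
    """One rigid-alignment iteration: match each source point to its
    nearest target point, then solve for the best-fit rotation and
    translation via the Kabsch/SVD method."""
    d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=2)
    matched = dst[d.argmin(axis=1)]          # nearest-neighbour correspondences
    mu_s, mu_d = src.mean(0), matched.mean(0)
    H = (src - mu_s).T @ (matched - mu_d)    # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return src @ R.T + t, R, t
```

Iterating this step to convergence gives classic ICP; a coarse stage (e.g. feature-based) would normally provide the initial alignment.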
APA, Harvard, Vancouver, ISO, and other styles
23

Beil, C., T. Kutzner, B. Schwab, B. Willenborg, A. Gawronski, and T. H. Kolbe. "INTEGRATION OF 3D POINT CLOUDS WITH SEMANTIC 3D CITY MODELS – PROVIDING SEMANTIC INFORMATION BEYOND CLASSIFICATION." ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences VIII-4/W2-2021 (October 7, 2021): 105–12. http://dx.doi.org/10.5194/isprs-annals-viii-4-w2-2021-105-2021.

Full text
Abstract:
Abstract. A range of different and increasingly accessible acquisition methods, the possibility of frequent data updates over large areas, and a simple data structure are some of the reasons for the popularity of three-dimensional (3D) point cloud data. While there are multiple techniques for segmenting and classifying point clouds, the capabilities of common data formats such as LAS for providing semantic information are mostly limited to assigning points to a certain category (classification). However, several fields of application, such as digital urban twins used for simulations and analyses, require more detailed semantic knowledge. This can be provided by semantic 3D city models containing hierarchically structured semantic and spatial information. Although semantic models are often reconstructed from point clouds, they are usually geometrically less accurate due to generalization processes. First, point cloud data structures/formats are discussed with respect to their semantic capabilities. Then, a new approach for integrating point clouds with semantic 3D city models is presented, consequently combining the respective advantages of both data types. In addition to elaborate (and established) semantic concepts for several thematic areas, the new version 3.0 of the international Open Geospatial Consortium (OGC) standard CityGML also provides a PointCloud module. In this paper a scheme is shown for how CityGML 3.0 can be used to provide semantic structures for point clouds (directly or stored in a separate LAS file). Methods and metrics to automatically assign points to corresponding Level of Detail (LoD)2 or LoD3 models are presented. Subsequently, dataset examples implementing these concepts are provided for download.
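One of the simplest metrics for assigning scanned points to model components is nearest-neighbour distance against points sampled from the LoD2/LoD3 surfaces (a hedged sketch of the idea only; the paper's actual assignment methods and thresholds differ, and all names here are assumptions):

```python
import numpy as np

def transfer_semantics(scan, model_pts, model_labels, max_dist=0.5):
    """Label each scanned point with the semantics of its nearest model
    point (a stand-in for assigning points to semantic model surfaces);
    points farther than `max_dist` from any surface stay unclassified (-1)."""
    # (N, M) pairwise distances; fine for a sketch, use a KD-tree at scale
    d = np.linalg.norm(scan[:, None, :] - model_pts[None, :, :], axis=2)
    nearest = d.argmin(axis=1)
    labels = model_labels[nearest].copy()
    labels[d.min(axis=1) > max_dist] = -1
    return labels
```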
APA, Harvard, Vancouver, ISO, and other styles
24

Kwak, Jeonghoon, and Yunsick Sung. "DeepLabV3-Refiner-Based Semantic Segmentation Model for Dense 3D Point Clouds." Remote Sensing 13, no. 8 (April 17, 2021): 1565. http://dx.doi.org/10.3390/rs13081565.

Full text
Abstract:
Three-dimensional virtual environments can be configured as test environments for autonomous things, and remote sensing by 3D point clouds collected by light detection and ranging (LiDAR) can be used to detect virtual human objects by segmenting collected 3D point clouds in a virtual environment. The use of a traditional encoder-decoder model, such as DeepLabV3, improves the quality of the low-density 3D point clouds of human objects, where the quality is determined by the measurement gap of the LiDAR lasers. However, whenever a human object with a surrounding environment in a 3D point cloud is used by the traditional encoder-decoder model, it is difficult to increase the density fit of the human object. This paper proposes a DeepLabV3-Refiner model, which refines the fit of human objects using human objects whose density has been increased through DeepLabV3. An RGB image that has a segmented human object is defined as a dense segmented image. DeepLabV3 is used to make predictions of dense segmented images and 3D point clouds for human objects in 3D point clouds. In the Refiner model, the results of DeepLabV3 are refined to fit human objects, and a dense segmented image fit to human objects is predicted. The dense 3D point cloud is calculated using the dense segmented image provided by the DeepLabV3-Refiner model. The 3D point clouds that were analyzed by the DeepLabV3-Refiner model had a 4-fold increase in density, which was verified experimentally. The proposed method had a 0.6% increase in density accuracy compared to that of DeepLabV3, and a 2.8-fold increase in the density corresponding to the human object. The proposed method was able to provide a 3D point cloud whose density was increased to fit the human object. The proposed method can be used to provide an accurate 3D virtual environment by using the improved 3D point clouds.
APA, Harvard, Vancouver, ISO, and other styles
25

Shinohara, T., H. Xiu, and M. Matsuoka. "IMAGE TO POINT CLOUD TRANSLATION USING CONDITIONAL GENERATIVE ADVERSARIAL NETWORK FOR AIRBORNE LIDAR DATA." ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences V-2-2021 (June 17, 2021): 169–74. http://dx.doi.org/10.5194/isprs-annals-v-2-2021-169-2021.

Full text
Abstract:
Abstract. This study introduces a novel image-to-point-cloud translation method with a conditional generative adversarial network that creates a large-scale 3D point cloud. This can generate supervised point clouds observed via airborne LiDAR from aerial images. The network is composed of an encoder to produce latent features of input images, a generator to translate latent features to fake point clouds, and a discriminator to classify fake or real point clouds. The encoder is a pre-trained ResNet; to overcome the difficulty of generating 3D point clouds in an outdoor scene, we use a FoldingNet with features from ResNet. After a fixed number of iterations, our generator can produce fake point clouds that correspond to the input image. Experimental results show that our network can learn and generate certain point clouds using the data from the 2018 IEEE GRSS Data Fusion Contest.
APA, Harvard, Vancouver, ISO, and other styles
26

Martince Novianti Bani. "ANALISIS KERAPATAN 3D POINT CLOUDS PADA UAV FOTOGRAMETRI." Jurnal Qua Teknika 12, no. 01 (March 16, 2022): 45–57. http://dx.doi.org/10.35457/quateknika.v12i01.2107.

Full text
Abstract:
The availability of spatial information is one of the main factors in optimising development planning. At the same time, the demand for detailed-scale spatial data keeps growing to meet this challenge. One effort to accelerate the provision of spatial data and information is mapping with an Unmanned Aerial Vehicle (UAV) platform. UAVs are considered effective and efficient in terms of both time and cost, and the accuracy of UAV imagery continues to be improved. One avenue is further study of the data produced by the UAV platform, namely increasing point cloud density. In this study, the density of 3D point clouds was analysed, and the accuracy of that density was assessed by identifying an ideal number of tie points at the von Gruber locations and then applying filtering. Each filtered tie-point result was reprocessed with a bundle adjustment and additional variables. The accuracy of the point clouds generated from the various tie-point counts was determined by their RMSE values.
APA, Harvard, Vancouver, ISO, and other styles
27

Wang, Jun, Ying Cui, Dongyan Guo, Junxia Li, Qingshan Liu, and Chunhua Shen. "PointAttN: You Only Need Attention for Point Cloud Completion." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 6 (March 24, 2024): 5472–80. http://dx.doi.org/10.1609/aaai.v38i6.28356.

Full text
Abstract:
Point cloud completion, which refers to completing 3D shapes from partial 3D point clouds, is a fundamental problem for 3D point cloud analysis tasks. Benefiting from the development of deep neural networks, research on point cloud completion has made great progress in recent years. However, the explicit local region partition involved in existing methods, such as kNN, makes them sensitive to the density distribution of point clouds. Moreover, it yields limited receptive fields that prevent capturing features from long-range context information. To solve these problems, we leverage cross-attention and self-attention mechanisms to design a novel neural network for point cloud completion with implicit local region partition. Two basic units, Geometric Details Perception (GDP) and Self-Feature Augment (SFA), are proposed to establish the structural relationships directly among points in a simple yet effective way via the attention mechanism. Based on GDP and SFA, we construct a new framework with the popular encoder-decoder architecture for point cloud completion. The proposed framework, namely PointAttN, is simple, neat and effective, and can precisely capture the structural information of 3D shapes and predict complete point clouds with detailed geometry. Experimental results demonstrate that our PointAttN outperforms state-of-the-art methods on multiple challenging benchmarks. Code is available at: https://github.com/ohhhyeahhh/PointAttN
APA, Harvard, Vancouver, ISO, and other styles
28

Suzuki, Taro, Shunichi Shiozawa, Atsushi Yamaba, and Yoshiharu Amano. "Forest Data Collection by UAV Lidar-Based 3D Mapping: Segmentation of Individual Tree Information from 3D Point Clouds." International Journal of Automation Technology 15, no. 3 (May 5, 2021): 313–23. http://dx.doi.org/10.20965/ijat.2021.p0313.

Full text
Abstract:
In this study, we develop a system for efficiently measuring detailed information of trees in a forest environment using a small unmanned aerial vehicle (UAV) equipped with light detection and ranging (lidar). The main purpose of forest measurement is to predict the volume of wood for harvesting and delineating forest boundaries by tree location. Herein, we propose a method for extracting the position, number of trees, and vertical height of trees from a set of three-dimensional (3D) point clouds acquired by a UAV lidar system. The point cloud obtained from a UAV is dense in the tree’s crown, and the trunk 3D points are sparse because the crown of the tree obstructs the laser beam. Therefore, it is difficult to extract single-tree information from 3D point clouds because the characteristics of 3D point clouds differ significantly from those of conventional 3D point clouds using ground-based laser scanners. In this study, we segment the forest point cloud into three regions with different densities of point clouds, i.e., canopy, trunk, and ground, and process each region individually to extract the target information. By comparing a ground laser survey and the proposed method in an actual forest environment, it is discovered that the number of trees in an area measuring 100 m × 100 m is 94.6% of the total number of trees. The root mean square error of the tree position is 0.3 m, whereas that of the vertical height is 2.3 m, indicating that single-tree information can be measured with sufficient accuracy for forest management.
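The three-region split described above can be sketched with a simplified height-above-ground rule (an illustrative stand-in only; the paper segments by point density, and the grid size and height thresholds here are assumptions):

```python
import numpy as np

def segment_layers(points, cell=1.0, trunk_top=5.0):
    """Split a forest point cloud into ground / trunk / canopy layers.
    Ground height is estimated per XY grid cell as the lowest point, then
    points are classified by their height above local ground."""
    ij = np.floor(points[:, :2] / cell).astype(np.int64)
    _, cell_id = np.unique(ij, axis=0, return_inverse=True)
    cell_id = cell_id.reshape(-1)
    ground_z = np.full(cell_id.max() + 1, np.inf)
    np.minimum.at(ground_z, cell_id, points[:, 2])   # lowest z per cell
    h = points[:, 2] - ground_z[cell_id]             # height above local ground
    layer = np.where(h < 0.3, 0, np.where(h < trunk_top, 1, 2))
    return layer                                     # 0 ground, 1 trunk, 2 canopy
```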
APA, Harvard, Vancouver, ISO, and other styles
29

Zhang, Jiazhe, Xingwei Li, Xianfa Zhao, and Zheng Zhang. "LLGF-Net: Learning Local and Global Feature Fusion for 3D Point Cloud Semantic Segmentation." Electronics 11, no. 14 (July 13, 2022): 2191. http://dx.doi.org/10.3390/electronics11142191.

Full text
Abstract:
Three-dimensional (3D) point cloud semantic segmentation is fundamental in complex scene perception. Currently, although various efficient 3D semantic segmentation networks have been proposed, the overall effect has a certain gap to 2D image segmentation. Recently, some transformer-based methods have opened a new stage in computer vision, which also has accelerated the effective development of methods in 3D point cloud segmentation. In this paper, we propose a novel semantic segmentation network named LLGF-Net that can aggregate features from both local and global levels of point clouds, effectively improving the ability to extract feature information from point clouds. Specifically, we adopt the multi-head attention mechanism in the original Transformer model to obtain the local features of point clouds and then use the position-distance information of point clouds in 3D space to obtain the global features. Finally, the local features and global features are fused and embedded into the encoder–decoder network to generate our method. Our extensive experimental results on the 3D point cloud dataset demonstrate the effectiveness and superiority of our method.
APA, Harvard, Vancouver, ISO, and other styles
30

Wang, Lei, Zhiyong Zhang, Xiaonan Li, and Yueshun He. "MFOC-CliqueNet: A CliqueNet-Based Optimal Combination of Multidimensional Features Classification Method for Large-Scale Laser Point Clouds." Computational Intelligence and Neuroscience 2022 (August 11, 2022): 1–11. http://dx.doi.org/10.1155/2022/2446212.

Full text
Abstract:
As large-scale laser 3D point clouds data contains massive and complex data, it faces great challenges in the automatic intelligent processing and classification of large-scale 3D point clouds. Aiming at the problem that 3D point clouds in complex scenes are self-occluded or occluded, which could reduce the object classification accuracy, we propose a multidimension feature optimal combination classification method named MFOC-CliqueNet based on CliqueNet for large-scale laser point clouds. The optimal combination matrix of multidimension features is constructed by extracting the three-dimensional features and multidirectional two-dimension features of 3D point cloud. This is the first time that multidimensional optimal combination features are introduced into cyclic convolutional networks CliqueNet. It is important for large-scale 3D point cloud classification. The experimental results show that the MFOC-CliqueNet framework can realize the latest level with fewer parameters. The experiments on the Large-Scale Scene Point Cloud Oakland dataset show that the classification accuracy of our method is 98.9%, which is better than other classification algorithms mentioned in this paper.
APA, Harvard, Vancouver, ISO, and other styles
31

Pu, Xinming, Shu Gan, Xiping Yuan, and Raobo Li. "Feature Analysis of Scanning Point Cloud of Structure and Research on Hole Repair Technology Considering Space-Ground Multi-Source 3D Data Acquisition." Sensors 22, no. 24 (December 8, 2022): 9627. http://dx.doi.org/10.3390/s22249627.

Full text
Abstract:
As one of the best means of obtaining the geometry information of special-shaped structures, point cloud data acquisition can be achieved by laser scanning or photogrammetry. However, there are some differences in the quantity, quality, and information type of point clouds obtained by different methods when collecting point clouds of the same structure, due to differences in sensor mechanisms and collection paths. Thus, this study aimed to combine the complementary advantages of multi-source point cloud data and provide the high-quality basic data required for structure measurement and modeling. Specifically, hand-held laser scanners (HLS), terrestrial laser scanners (TLS), and low-altitude unmanned aerial system (UAS) photogrammetry were adopted to collect point cloud data of the same special-shaped structure along different paths. The advantages and disadvantages of the different point cloud acquisition methods were analyzed from the perspective of the point cloud acquisition mechanism of the different sensors, point cloud data integrity, and the single-point geometric characteristics of the point cloud. Additionally, a point cloud void repair technology based on the TLS point cloud was proposed according to the analysis results. Under the premise of unifying the spatial position relationship of the three point clouds, the M3C2 distance algorithm was performed to extract the point clouds with significant spatial position differences in the same area of the structure from the three point clouds. Meanwhile, the single-point geometric feature differences of the multi-source point clouds within the same neighborhood radius were calculated. Using the kernel density distribution of the feature differences, the feature points filtered from the HLS point cloud and the TLS point cloud were fused to enrich the number of feature points in the TLS point cloud. 
In addition, the TLS point cloud voids were located by raster projection, and the point clouds within the void range were extracted, or the closest points were retrieved from the other two heterologous point clouds, to repair the top surface and façade voids of the TLS point cloud. Finally, high-quality basic point cloud data of the special-shaped structure were generated.
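The idea of locating areas where two scans disagree can be illustrated with a crude cloud-to-cloud distance mask (a much-simplified stand-in for the M3C2 comparison, which measures signed distances along local normals; the threshold is an assumed value):

```python
import numpy as np

def change_mask(cloud_a, cloud_b, threshold=0.05):
    """Flag points of cloud_a whose nearest neighbour in cloud_b is farther
    than `threshold` -- a brute-force proxy for significant spatial
    differences between two scans of the same structure."""
    d = np.linalg.norm(cloud_a[:, None, :] - cloud_b[None, :, :], axis=2)
    return d.min(axis=1) > threshold
```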
APA, Harvard, Vancouver, ISO, and other styles
32

Houshiar, H., and S. Winkler. "POINTO - A LOW COST SOLUTION TO POINT CLOUD PROCESSING." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W8 (November 13, 2017): 111–17. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w8-111-2017.

Full text
Abstract:
With advances in technology, access to data, especially 3D point cloud data, becomes more and more an everyday task. 3D point clouds are usually captured with very expensive tools such as 3D laser scanners, or with very time-consuming methods such as photogrammetry. Most of the available software for 3D point cloud processing is designed for experts and specialists in this field and usually comes in very large packages containing a variety of methods and tools. This results in software that is usually very expensive to acquire and also very difficult to use. The difficulty of use is caused by the complicated user interfaces required to accommodate a large list of features. The aim of these complex packages is to provide a powerful tool for a specific group of specialists. However, they are not necessarily required by the majority of upcoming average users of point clouds. In addition to their complexity and high cost, they generally rely on expensive, modern hardware and are only compatible with one specific operating system. Many point cloud customers are not point cloud processing experts or willing to pay the high acquisition costs of this expensive software and hardware. In this paper we introduce a solution for low-cost point cloud processing. Our approach is designed to accommodate the needs of the average point cloud user. To reduce cost and complexity, our approach focuses on one functionality at a time, in contrast with most available software and tools that aim to solve as many problems as possible at the same time. Our simple and user-oriented design improves the user experience and empowers us to optimize our methods for the creation of efficient software. In this paper we introduce the Pointo family as a series of connected programs providing easy-to-use tools with a simple design for different point cloud processing requirements. 
PointoVIEWER and PointoCAD are introduced as the first components of the Pointo family, providing fast and efficient visualization with the ability to add annotation and documentation to the point clouds.
APA, Harvard, Vancouver, ISO, and other styles
33

Hildebrand, J., S. Schulz, R. Richter, and J. Döllner. "SIMULATING LIDAR TO CREATE TRAINING DATA FOR MACHINE LEARNING ON 3D POINT CLOUDS." ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences X-4/W2-2022 (October 14, 2022): 105–12. http://dx.doi.org/10.5194/isprs-annals-x-4-w2-2022-105-2022.

Full text
Abstract:
Abstract. 3D point clouds represent an essential category of geodata used in a variety of geoinformation applications. Typically, these applications require additional semantics to operate on subsets of the data like selected objects or surface categories. Machine learning approaches are increasingly used for classification. They operate directly on 3D point clouds and require large amounts of training data. An adequate amount of high-quality training data is often not available or has to be created manually. In this paper, we introduce a system for virtual laser scanning to create 3D point clouds with semantic information by utilizing 3D models. In particular, our system creates 3D point clouds with the same characteristics regarding density, occlusion, and scan pattern as those 3D point clouds captured in the real world. We evaluate our system with different data sets and show the potential to use the data to train neural networks for 3D point cloud classification.
APA, Harvard, Vancouver, ISO, and other styles
34

Chen, D., X. Ma, X. Lu, and J. Xiao. "APPLICATION OF A SHELLNET BASED APPROACH TO SEMANTIC SEGMENTATION IN URBAN POINT CLOUD." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B2-2022 (May 30, 2022): 169–75. http://dx.doi.org/10.5194/isprs-archives-xliii-b2-2022-169-2022.

Full text
Abstract:
Abstract. In recent years, the popularity of airborne, vehicle-borne, and terrestrial 3D laser scanners has driven the rapid development of 3D point cloud processing methods. 3D laser scanning technology is non-contact, high-density, high-accuracy, and digital, and can achieve comprehensive and fast 3D scanning of urban scenes. To address the difficulty of accurately segmenting urban point clouds of complex scenes from 3D laser-scanned data, a technical process for accurate and fast semantic segmentation of urban point clouds is proposed. In this study, the point clouds are first denoised; then the samples are annotated and sample sets are created based on the point cloud features of the category targets using CloudCompare software, followed by an end-to-end trainable optimization network, ShellNet, to train the urban point cloud samples; finally, the models are evaluated on a test set. The method achieved IoU metrics of 89.83% and 73.74% for semantic segmentation of buildings and rod-like objects, respectively. From the visualization results on the test set, the algorithm is feasible and robust, providing a new idea and method for semantic segmentation of large-scale urban scenes.
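The IoU values quoted above follow the standard per-class intersection-over-union definition, which can be computed from predicted and ground-truth labels as:

```python
import numpy as np

def per_class_iou(pred, gt, num_classes):
    """Intersection-over-union per semantic class: |pred ∩ gt| / |pred ∪ gt|
    for each class label; NaN when a class is absent from both."""
    ious = []
    for c in range(num_classes):
        inter = np.sum((pred == c) & (gt == c))
        union = np.sum((pred == c) | (gt == c))
        ious.append(inter / union if union else float("nan"))
    return ious
```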
APA, Harvard, Vancouver, ISO, and other styles
35

Xu, S., R. Huang, Y. Xu, Z. Ye, H. Xie, and X. Tong. "3D POINT CLOUD COMPLETION USING TERRAIN-CONTINUOUS CONSTRAINTS AND DISTANCE-WEIGHTED INTERPOLATION FOR LUNAR TOPOGRAPHIC MAPPING." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-1/W2-2023 (December 13, 2023): 771–76. http://dx.doi.org/10.5194/isprs-archives-xlviii-1-w2-2023-771-2023.

Full text
Abstract:
Abstract. Stereo vision has been proven to be an efficient tool for 3D reconstruction in lunar topographic mapping. However, point clouds reconstructed from pairs of stereo images always suffer from occlusions and illumination changes, especially in lunar environments, resulting in incomplete geometric information. 3D point cloud completion is usually required for refining photogrammetric point clouds and enabling further applications. In this work, we address the problem of completing and refining 3D photogrammetric point clouds based on the assumption that 3D terrain should be continuous and with consistent slope change. We propose a generalized strategy for 3D point cloud completion in lunar topographic mapping, including distance-weighted point cloud interpolation, terrain-continuity-constrained outlier detection, and contour-based hole filling. We carried out experiments on two datasets of point clouds generated from 12 pairs and 6 pairs of stereo LROC NAC images covering the Apollo 17 and the Chang’E-4 landing sites, respectively. As a result, the holes in the initial DTM have been smoothly filled and the completeness of the whole DTM has been greatly improved. The incomplete area of the experimental areas has dropped by 100% and 93%, respectively. Finally, we constructed a DTM with a resolution of 10 m covering a 33 km × 60 km area of the Apollo 17 landing site with an RMSE of 4 m, and a 12 km × 56 km area of the Chang’E-4 landing site with an RMSE of 4 m, compared with LOLA laser points as a reference.
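A minimal sketch of the distance-weighted interpolation idea on a gridded DTM (inverse-distance weighting over NaN holes; the exponent is an assumed parameter, and the terrain-continuity constraints and contour-based filling of the actual pipeline are not modeled):

```python
import numpy as np

def idw_fill(grid, power=2.0):
    """Fill NaN holes in a DTM grid by inverse-distance weighting over
    all valid cells."""
    filled = grid.copy()
    valid = ~np.isnan(grid)
    vr, vc = np.nonzero(valid)          # row/col indices of valid cells
    vz = grid[valid]                    # their elevations
    for r, c in zip(*np.nonzero(~valid)):
        d2 = (vr - r) ** 2 + (vc - c) ** 2
        w = 1.0 / np.power(d2, power / 2.0)   # weight ~ 1 / distance^power
        filled[r, c] = np.sum(w * vz) / np.sum(w)
    return filled
```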
APA, Harvard, Vancouver, ISO, and other styles
36

Tian, Pengju, Xianghong Hua, Wuyong Tao, and Miao Zhang. "Robust Extraction of 3D Line Segment Features from Unorganized Building Point Clouds." Remote Sensing 14, no. 14 (July 7, 2022): 3279. http://dx.doi.org/10.3390/rs14143279.

Full text
Abstract:
As one of the most common features, 3D line segments provide visual information in scene surfaces and play an important role in many applications. However, due to the huge, unstructured, and non-uniform characteristics of building point clouds, 3D line segment extraction is a complicated task. This paper presents a novel method for extraction of 3D line segment features from an unorganized building point cloud. Given the input point cloud, three steps were performed to extract 3D line segment features. Firstly, we performed data pre-processing, including subsampling, filtering and projection. Secondly, a projection-based method was proposed to divide the input point cloud into vertical and horizontal planes. Finally, for each 3D plane, all points belonging to it were projected onto the fitting plane, and the α-shape algorithm was exploited to extract the boundary points of each plane. The 3D line segment structures were extracted from the boundary points, followed by a 3D line segment merging procedure. Corresponding experiments demonstrate that the proposed method works well in both high-quality TLS and low-quality RGB-D point clouds. Moreover, the robustness in the presence of a high degree of noise is also demonstrated. A comparison with state-of-the-art techniques demonstrates that our method is considerably faster and scales significantly better than previous ones. To further verify the effectiveness of the line segments extracted by the proposed method, we also present a line-based registration framework, which employs the extracted 2D-projected line segments for coarse registration of building point clouds.
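The per-plane projection step that precedes the α-shape boundary extraction can be sketched with an SVD plane fit on roughly coplanar points (an illustrative reconstruction; the paper's plane segmentation itself is projection-based and more involved):

```python
import numpy as np

def fit_plane_project(points):
    """Fit a plane to roughly coplanar points (SVD of the centred cloud:
    the direction of least variance is the plane normal) and project the
    points into 2D coordinates within the plane."""
    centroid = points.mean(0)
    q = points - centroid
    _, _, vt = np.linalg.svd(q, full_matrices=False)
    normal = vt[2]              # right singular vector of least variance
    uv = q @ vt[:2].T           # 2D coordinates in the orthonormal plane basis
    return uv, normal, centroid
```

The 2D coordinates `uv` preserve in-plane distances, so boundary extraction (e.g. an α-shape) can run in 2D and be lifted back via `centroid` and the basis.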
APA, Harvard, Vancouver, ISO, and other styles
37

Dahaghin, M., F. Samadzadegan, and F. Dadras Javan. "3D THERMAL MAPPING OF BUILDING ROOFS BASED ON FUSION OF THERMAL AND VISIBLE POINT CLOUDS IN UAV IMAGERY." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-4/W18 (October 18, 2019): 271–77. http://dx.doi.org/10.5194/isprs-archives-xlii-4-w18-271-2019.

Full text
Abstract:
Abstract. Thermography is a robust method for detecting thermal irregularities on building roofs, one of the main energy dissipation parts of a building. Recently, UAVs have been shown to be useful in gathering 3D thermal data of building roofs. In this context, the low spatial resolution of thermal imagery is a challenge, which leads to sparse point clouds. This paper suggests the fusion of visible and thermal point clouds to generate a high-resolution thermal point cloud of building roofs. For this purpose, camera calibration is performed to obtain the internal orientation parameters, and then thermal and visible point clouds are generated. In the next step, both point clouds are geo-referenced by control points. To extract building roofs from the visible point cloud, CSF ground filtering is applied, and the vegetation layer is removed using the RGBVI index. Afterward, a predefined threshold is applied to the normal vectors in the z-direction in order to separate roof facets from walls. Finally, the visible point cloud of the building roofs and the registered thermal point cloud are combined into a fused dense point cloud. Results show a mean re-projection error of 0.31 pixels for the thermal camera calibration and a mean absolute distance of 0.2 m for the point cloud registration. The final product is a fused point cloud whose density improves to twice that of the initial thermal point cloud and which has the spatial accuracy of the visible point cloud along with the thermal information of the building roofs.
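The RGBVI-based vegetation removal uses the RGB vegetation index RGBVI = (G² − R·B) / (G² + R·B); a minimal masking sketch (the threshold is an assumed value, not the paper's):

```python
import numpy as np

def rgbvi_mask(rgb, threshold=0.2):
    """Vegetation mask from the RGB vegetation index
    RGBVI = (G^2 - R*B) / (G^2 + R*B); green vegetation scores high,
    grey/neutral surfaces score near zero."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    num = g * g - r * b
    den = g * g + r * b
    rgbvi = np.divide(num, den, out=np.zeros_like(num), where=den != 0)
    return rgbvi > threshold
```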
APA, Harvard, Vancouver, ISO, and other styles
38

Al-Bdairy, Ali M., Ahmed A. A. Al-Duroobi, and Maan A. Tawfiq. "Point Clouds Pre-Processing and Surface Reconstruction Based on Tangent Continuity Algorithm Technique." Engineering and Technology Journal 38, no. 6A (June 25, 2020): 917–25. http://dx.doi.org/10.30684/etj.v38i6a.1612.

Full text
Abstract:
Pre-processing is essential for handling the raw point clouds acquired with a 3D laser scanner, a modern technique for digitizing and reconstructing the surfaces of 3D objects in reverse engineering applications. Due to the accuracy limitations of some 3D scanners and environmental noise factors such as illumination and reflection, raw point clouds contain noisy data points. In the present paper, a pre-processing algorithm is proposed to detect and delete this unnecessary noisy data and keep the remaining points for surface reconstruction of 3D objects from point clouds acquired with the 3D laser scanner (Matter and Form). The proposed algorithm is based on the assessment of tangent continuity as a geometric feature and criterion for contiguous points. MATLAB software was used to implement the proposed point cloud pre-processing algorithm, and the validity of the program was proved on geometric case studies with different shapes. The application of the proposed tangent algorithm and the surface fitting process to the suggested case studies proved the validity of the algorithm for simplifying point clouds: removing the noisy data reduced the total number of points by 43.63% and 32.01% for the first and second case studies, respectively.
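The tangent-continuity criterion can be sketched for an ordered scan line: a point is treated as noise when the tangent direction changes too sharply across it (an illustrative reconstruction; the paper's algorithm and threshold differ, and consecutive duplicate points are assumed absent):

```python
import numpy as np

def tangent_filter(points, max_angle_deg=30.0):
    """Remove noisy points from an ordered point sequence: a point is kept
    when the tangent entering it stays within `max_angle_deg` of the
    tangent leaving it."""
    t = np.diff(points, axis=0)                       # chord (tangent) vectors
    t /= np.linalg.norm(t, axis=1, keepdims=True)
    cos = np.clip(np.sum(t[:-1] * t[1:], axis=1), -1.0, 1.0)
    angle = np.degrees(np.arccos(cos))                # turn angle at each interior point
    keep = np.ones(len(points), dtype=bool)
    keep[1:-1] = angle <= max_angle_deg               # endpoints are always kept
    return points[keep]
```

Note the criterion also discards the well-behaved neighbours of a spike, since their tangents are disturbed too; a production filter would iterate or interpolate.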
APA, Harvard, Vancouver, ISO, and other styles
39

Petras, V., A. Petrasova, J. Jeziorska, and H. Mitasova. "PROCESSING UAV AND LIDAR POINT CLOUDS IN GRASS GIS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B7 (June 22, 2016): 945–52. http://dx.doi.org/10.5194/isprs-archives-xli-b7-945-2016.

Full text
Abstract:
Today’s methods of acquiring Earth surface data, namely lidar and unmanned aerial vehicle (UAV) imagery, non-selectively collect or generate large amounts of points. Point clouds from different sources vary in their properties, such as the number of returns, density, or quality. We present a set of tools with applications for different types of point clouds obtained by a lidar scanner, the structure from motion technique (SfM), and a low-cost 3D scanner. To take advantage of the vertical structure of multiple-return lidar point clouds, we demonstrate tools to process them using 3D raster techniques, which allow, for example, the development of custom vegetation classification methods. Dense point clouds obtained from UAV imagery, which often contain redundant points, can be decimated using various techniques before further processing. We implemented and compared several decimation techniques with regard to their performance and the resulting digital surface model (DSM). Finally, we describe the processing of a point cloud from a low-cost 3D scanner, namely the Microsoft Kinect, and its application for interaction with physical models. All the presented tools are open source and integrated in GRASS GIS, a multi-purpose open source GIS with remote sensing capabilities. The tools integrate with other open source projects, specifically the Point Data Abstraction Library (PDAL), the Point Cloud Library (PCL), and the OpenKinect libfreenect2 library, to benefit from the open source point cloud ecosystem. The implementation in GRASS GIS ensures long-term maintenance and reproducibility by the scientific community as well as by the original authors themselves.
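The decimation of dense UAV point clouds mentioned above can be sketched with one of the simplest techniques, grid (voxel) decimation, which keeps one representative point per occupied cell; the cell size and sample points are illustrative assumptions, not GRASS GIS defaults:

```python
from collections import OrderedDict

def grid_decimate(points, cell=1.0):
    """Keep the first point encountered in each (x, y, z) grid cell."""
    kept = OrderedDict()
    for x, y, z in points:
        key = (int(x // cell), int(y // cell), int(z // cell))  # floor to cell index
        kept.setdefault(key, (x, y, z))
    return list(kept.values())

dense = [(0.1, 0.1, 0.0), (0.2, 0.3, 0.1), (1.5, 0.2, 0.0)]
print(grid_decimate(dense))  # → [(0.1, 0.1, 0.0), (1.5, 0.2, 0.0)]
```

Other decimation strategies compared in such studies (e.g., keeping every n-th point, or averaging the points in a cell) trade speed against the fidelity of the resulting DSM.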
APA, Harvard, Vancouver, ISO, and other styles
41

Zoumpekas, Thanasis, Maria Salamó, and Anna Puig. "Rethinking Design and Evaluation of 3D Point Cloud Segmentation Models." Remote Sensing 14, no. 23 (November 29, 2022): 6049. http://dx.doi.org/10.3390/rs14236049.

Full text
Abstract:
Currently, the use of 3D point clouds is rapidly increasing in many engineering fields, such as geoscience and manufacturing. Various studies have developed intelligent segmentation models providing accurate results, but only a few of them offer additional insights into the efficiency and robustness of the proposed models. Segmentation in the image domain has been studied to a great extent, with tremendous research findings. However, segmentation analysis with point clouds is considered particularly challenging due to their unordered and irregular nature. Additionally, solving downstream tasks with 3D point clouds is computationally inefficient, as point clouds normally consist of thousands or millions of points sparsely distributed in 3D space. Thus, there is a significant need for rigorous evaluation of the design characteristics of segmentation models so that they are effective and practical. Consequently, in this paper, an in-depth analysis of five fundamental and representative deep learning models for 3D point cloud segmentation is presented. Specifically, we investigate multiple experimental dimensions, such as accuracy, efficiency, and robustness in part segmentation (ShapeNet) and scene segmentation (S3DIS), to assess the effective utilization of the models. Moreover, we establish a correspondence between their design properties and experimental properties. For example, we show that convolution-based models that incorporate adaptive-weight or position-pooling local aggregation operations achieve superior accuracy and robustness compared with point-wise MLPs, while the latter show higher efficiency in time and memory allocation. Our findings pave the way for effective 3D point cloud segmentation model selection and enlighten research on point clouds and deep learning.
APA, Harvard, Vancouver, ISO, and other styles
42

Lu, Xiaohu, Jian Yao, Jinge Tu, Kai Li, Li Li, and Yahui Liu. "PAIRWISE LINKAGE FOR POINT CLOUD SEGMENTATION." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences III-3 (June 3, 2016): 201–8. http://dx.doi.org/10.5194/isprsannals-iii-3-201-2016.

Full text
Abstract:
In this paper, we first present a novel hierarchical clustering algorithm named Pairwise Linkage (P-Linkage), which can be used for clustering data of any dimension, and then effectively apply it to 3D unstructured point cloud segmentation. The P-Linkage clustering algorithm first calculates a feature value for each data point, for example, the density for 2D data points and the flatness for 3D point clouds. Then, for each data point, a pairwise linkage is created between the point and its closest neighboring point with a feature value greater than its own. The initial clusters can then be discovered by searching along the linkages in a simple way. After that, a cluster-merging procedure, which can be designed for specialized applications, is applied to obtain the final refined clustering result. Based on P-Linkage clustering, we develop an efficient segmentation algorithm for 3D unstructured point clouds, in which the flatness of the estimated surface at a 3D point is used as its feature value. A slice is created for each initial cluster, and a novel and robust slice-merging method is proposed to obtain the final segmentation result. The proposed P-Linkage clustering and 3D point cloud segmentation algorithms require only one input parameter in advance. Experimental results on synthetic data of different dimensions, from 2D to 4D, demonstrate the efficiency and robustness of the proposed P-Linkage clustering algorithm, and extensive experiments on vehicle-mounted, aerial, and stationary laser scanner point clouds illustrate the robustness and efficiency of the proposed 3D point cloud segmentation algorithm.
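The core linkage-building step can be sketched on 1D toy data: each point links to its nearest neighbor with a strictly greater feature value, and points with no such neighbor become cluster roots. The positions and feature values below are illustrative assumptions, not the paper's data:

```python
def pairwise_linkage(positions, features):
    """Link each point to its closest neighbor with a greater feature value."""
    links = {}
    for i, (pi, fi) in enumerate(zip(positions, features)):
        candidates = [(abs(pj - pi), j)
                      for j, (pj, fj) in enumerate(zip(positions, features))
                      if fj > fi]
        links[i] = min(candidates)[1] if candidates else i  # roots link to themselves
    return links

def cluster_of(i, links):
    """Follow linkages up to the cluster root."""
    while links[i] != i:
        i = links[i]
    return i

pos = [0.0, 1.0, 2.0, 10.0, 11.0]
feat = [1.0, 3.0, 2.0, 2.5, 3.0]
links = pairwise_linkage(pos, feat)
print([cluster_of(i, links) for i in range(len(pos))])  # → [1, 1, 1, 4, 4]
```

The two spatial groups (around 0–2 and around 10–11) fall into separate clusters rooted at their local feature maxima, which is the behavior the abstract describes before the cluster-merging stage.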
APA, Harvard, Vancouver, ISO, and other styles
44

Ghorbani, Fariborz, Hamid Ebadi, Norbert Pfeifer, and Amin Sedaghat. "Uniform and Competency-Based 3D Keypoint Detection for Coarse Registration of Point Clouds with Homogeneous Structure." Remote Sensing 14, no. 16 (August 21, 2022): 4099. http://dx.doi.org/10.3390/rs14164099.

Full text
Abstract:
Recent advances in 3D laser scanner technology have provided large amounts of accurate geo-information in the form of point clouds. Machine vision and photogrammetric methods are used in various applications such as medicine, environmental studies, and cultural heritage. Aerial laser scanners (ALS), terrestrial laser scanners (TLS), mobile mapping laser scanners (MLS), and photogrammetric cameras via image matching are the most important tools for producing point clouds. In most applications, point cloud registration is a fundamental issue. Due to the high volume of the initial point cloud data, 3D keypoint detection has been introduced as an important step in the registration of point clouds: the initial point clouds are converted into a set of candidate points with high information content. Many methods for 3D keypoint detection have been proposed in machine vision, most based on thresholding the saliency of points, but less attention has been paid to the spatial distribution and number of extracted points. This poses a challenge in the registration of point clouds with a homogeneous structure: because keypoints are selected in areas of structural complexity, their distribution becomes unbalanced and registration quality suffers. This research presents an automated approach to 3D keypoint detection that controls the quality, spatial distribution, and number of keypoints. The proposed method generates a quality criterion by combining 3D local shape features, 3D local self-similarity, and the histogram of normal orientations, providing a competency index. In addition, an Octree structure is applied to control the spatial distribution of the detected 3D keypoints. The proposed method was evaluated on keypoint-based coarse registration of aerial and terrestrial laser scanner data containing both cluttered and homogeneous regions. The results demonstrate the proper performance of the proposed method on these types of data; compared to standard algorithms, the registration error was reduced by up to 56%.
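The idea of using a spatial subdivision to control keypoint distribution can be sketched with a simple uniform grid standing in for the Octree: only the highest-scoring candidate per cell is kept, so keypoints cannot pile up in structurally complex areas. The cell size, scores, and coordinates are illustrative assumptions:

```python
def best_per_cell(points, scores, cell=5.0):
    """Keep only the highest-scoring candidate keypoint in each grid cell."""
    best = {}
    for (x, y, z), s in zip(points, scores):
        key = (int(x // cell), int(y // cell), int(z // cell))
        if key not in best or s > best[key][0]:
            best[key] = (s, (x, y, z))
    return [p for _, p in best.values()]

pts = [(1, 1, 0), (2, 1, 0), (8, 1, 0)]
scores = [0.9, 0.5, 0.7]
print(best_per_cell(pts, scores))  # → [(1, 1, 0), (8, 1, 0)]
```

The two candidates in the first cell compete and only the 0.9-scoring point survives, while the lone candidate in the second cell is kept, yielding one keypoint per occupied region.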
APA, Harvard, Vancouver, ISO, and other styles
45

Tohidi, Faranak, Manoranjan Paul, Anwaar Ulhaq, and Subrata Chakraborty. "Improved Video-Based Point Cloud Compression via Segmentation." Sensors 24, no. 13 (July 1, 2024): 4285. http://dx.doi.org/10.3390/s24134285.

Full text
Abstract:
A point cloud is a representation of objects or scenes utilising unordered points comprising 3D positions and attributes. The ability of point clouds to mimic natural forms has gained significant attention from diverse applied fields, such as virtual reality and augmented reality. However, the point cloud, especially those representing dynamic scenes or objects in motion, must be compressed efficiently due to its huge data volume. The latest video-based point cloud compression (V-PCC) standard for dynamic point clouds divides the 3D point cloud into many patches using computationally expensive normal estimation, segmentation, and refinement. The patches are projected onto a 2D plane to apply existing video coding techniques. This process often results in losing proximity information and some original points. This loss induces artefacts that adversely affect user perception. The proposed method segments dynamic point clouds based on shape similarity and occlusion before patch generation. This segmentation strategy helps maintain the points’ proximity and retain more original points by exploiting the density and occlusion of the points. The experimental results establish that the proposed method significantly outperforms the V-PCC standard and other relevant methods regarding rate–distortion performance and subjective quality testing for both geometric and texture data of several benchmark video sequences.
APA, Harvard, Vancouver, ISO, and other styles
46

Kulawiak, Marek. "A Cost-Effective Method for Reconstructing City-Building 3D Models from Sparse Lidar Point Clouds." Remote Sensing 14, no. 5 (March 5, 2022): 1278. http://dx.doi.org/10.3390/rs14051278.

Full text
Abstract:
The recent popularization of airborne lidar scanners has provided a steady source of point cloud datasets containing the altitudes of the bare earth surface and vegetation features as well as man-made structures. In contrast to terrestrial lidar, which produces dense point clouds of small areas, airborne laser sensors usually deliver sparse datasets that cover large municipalities. The latter are very useful for constructing digital representations of cities; however, reconstructing 3D building shapes from a sparse point cloud is a time-consuming process, because automatic shape reconstruction methods work best with dense point clouds and usually cannot be applied for this purpose. Moreover, existing methods dedicated to reconstructing simplified 3D buildings from sparse point clouds are optimized for detecting simple building shapes and exhibit problems when dealing with more complex structures such as towers, spires, and large ornamental features, which are commonly found, e.g., in buildings from the Renaissance era. In the above context, this paper proposes a novel method of reconstructing 3D building shapes from sparse point clouds. The proposed algorithm has been optimized to work with incomplete point cloud data in order to provide a cost-effective way of generating representative 3D city models. The algorithm has been tested on lidar point clouds representing buildings in the city of Gdansk, Poland.
APA, Harvard, Vancouver, ISO, and other styles
47

Sirmacek, B., and R. Lindenbergh. "Accuracy assessment of building point clouds automatically generated from iphone images." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-5 (June 6, 2014): 547–52. http://dx.doi.org/10.5194/isprsarchives-xl-5-547-2014.

Full text
Abstract:
Low-cost-sensor-generated 3D models can be useful for quick 3D urban model updating, yet the quality of the models is questionable. In this article, we evaluate the reliability of an automatic point cloud generation method using multi-view iPhone images or an iPhone video file as input. We register such an automatically generated point cloud to a TLS point cloud of the same object to discuss the accuracy, advantages, and limitations of iPhone-generated point clouds. For the chosen showcase example, we classified 1.23% of the iPhone point cloud points as outliers and calculated the mean point-to-point distance to the TLS point cloud as 0.11 m. Since a TLS point cloud might also include measurement errors and noise, we computed local noise values for the point clouds from both sources. The mean (μ) and standard deviation (σ) of the roughness histograms are (μ₁ = 0.44 m, σ₁ = 0.071 m) and (μ₂ = 0.025 m, σ₂ = 0.037 m) for the iPhone and TLS point clouds, respectively. Our experimental results indicate the possible use of the proposed automatic 3D model generation framework for 3D urban map updating, fusion and detail enhancement, and quick, real-time change detection purposes. However, further insights should first be obtained on the circumstances needed to guarantee successful point cloud generation from smartphone images.
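The cloud-to-cloud comparison used above, averaging the distance from each point in one cloud to its nearest neighbor in the reference cloud, can be sketched by brute force; the toy coordinates are illustrative, and a real implementation would use a k-d tree for the nearest-neighbor search:

```python
import math

def mean_c2c_distance(test_cloud, reference_cloud):
    """Mean nearest-neighbor distance from each test point to the reference cloud."""
    def nearest(p):
        return min(math.dist(p, q) for q in reference_cloud)
    return sum(nearest(p) for p in test_cloud) / len(test_cloud)

ref = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
test = [(0.1, 0.0, 0.0), (1.0, 0.1, 0.0)]
print(mean_c2c_distance(test, ref))  # ≈ 0.1
```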
APA, Harvard, Vancouver, ISO, and other styles
48

Chai, J. X., Y. S. Zhang, Z. Yang, and J. Wu. "3D CHANGE DETECTION OF POINT CLOUDS BASED ON DENSITY ADAPTIVE LOCAL EUCLIDEAN DISTANCE." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B2-2022 (May 30, 2022): 523–30. http://dx.doi.org/10.5194/isprs-archives-xliii-b2-2022-523-2022.

Full text
Abstract:
With the development of sensors and multi-view stereo matching technology, image-based dense-matching point clouds offer higher geometric accuracy and richer spectral information, and such data are therefore widely used in change detection research. Because the position and attitude of image acquisition differ between the two epochs of point clouds, and because vegetation varies seasonally, 3D change detection is often subject to false detections. To improve the accuracy of 3D change detection of point clouds over large areas, a method based on density-adaptive local Euclidean distance is proposed. The method consists of three steps: (1) calculating the local Euclidean distances from each point in the second epoch to its k nearest neighboring points in the first epoch; (2) adapting the local Euclidean distance to the local density and performing 3D change detection according to a given threshold; (3) clustering the change detection results using Euclidean clustering and eliminating falsely detected areas according to a given threshold. Experiments show that the proposed method extracts changed regions more accurately.
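Steps (1) and (2) of such a pipeline can be sketched without the density adaptation: for each point of the second epoch, average the Euclidean distances to its k nearest points of the first epoch and flag the point as changed when that value exceeds a threshold. k, the threshold, and the data are illustrative assumptions:

```python
import math

def changed_points(epoch2, epoch1, k=2, threshold=1.0):
    """Flag epoch-2 points whose mean distance to k nearest epoch-1 points is large."""
    flags = []
    for p in epoch2:
        dists = sorted(math.dist(p, q) for q in epoch1)
        local = sum(dists[:k]) / k  # local Euclidean distance to the other epoch
        flags.append(local > threshold)
    return flags

e1 = [(0, 0, 0), (1, 0, 0), (2, 0, 0)]
e2 = [(0.1, 0, 0), (5.0, 0, 0)]  # first point is stable, second has moved
print(changed_points(e2, e1))  # → [False, True]
```

Step (3) of the method would then cluster the flagged points and discard clusters too small to be real changes, suppressing isolated false detections.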
APA, Harvard, Vancouver, ISO, and other styles
49

El Sayed, Abdul Rahman, Abdallah El Chakik, Hassan Alabboud, and Adnan Yassine. "An efficient simplification method for point cloud based on salient regions detection." RAIRO - Operations Research 53, no. 2 (April 2019): 487–504. http://dx.doi.org/10.1051/ro/2018082.

Full text
Abstract:
Many computer vision approaches to point cloud processing consider 3D simplification an important preprocessing phase. At the same time, the large amount of point cloud data describing a 3D object requires excessive storage and long processing times. In this paper, we present an efficient simplification method for 3D point clouds using a weighted graph representation that optimizes the point cloud while maintaining the characteristics of the initial data. The method detects feature regions that describe the geometry of the surface; these regions are detected using the saliency degree of vertices. We then define feature points in each feature region and remove redundant vertices. Finally, we show the robustness of our method via different experimental results and study its stability with respect to noise.
APA, Harvard, Vancouver, ISO, and other styles
50

Ren, X., Z. Zhang, and H. Sun. "RESEARCH ON SELF-CROSS-TRANSFORMER MODEL OF POINT CLOUD CHANGE DETECTION." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-1/W2-2023 (December 13, 2023): 179–86. http://dx.doi.org/10.5194/isprs-archives-xlviii-1-w2-2023-179-2023.

Full text
Abstract:
With the vigorous development of the urban construction industry, deformation or changes often occur during the construction process. To combat this, changes must be detected so that construction defects are found in time, ensuring the integrity of the project and reducing labor costs. Researchers have published a variety of methods for change detection in 3D point clouds. Most work directly on the point clouds with traditional threshold-distance methods (C2C, M3C2, M3C2-EP), while others convert the 3D point clouds into a DSM, which loses much of the original information. Although deep learning is widely used in remote sensing, for change detection of 3D point clouds the data are mostly converted into two-dimensional patches, and neural networks are rarely applied to the points directly; we prefer networks that produce change labels at the level of individual pixels or points. Therefore, in this article, we build a network for 3D point cloud change detection and propose a new module, the Cross Transformer, suited to change detection. We also simulate tunneling data for change detection and test our network on it.
APA, Harvard, Vancouver, ISO, and other styles