Academic literature on the topic '3D point clouds'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic '3D point clouds.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "3D point clouds"
Roopa B S, Pramod Kumar S, Prema K N, and Smitha S M. "Review on 3D Point Cloud." Global Journal of Engineering and Technology Advances 16, no. 3 (September 30, 2023): 219–23. http://dx.doi.org/10.30574/gjeta.2023.16.3.0192.
Liu, Ruyu, Zhiyong Zhang, Liting Dai, Guodao Zhang, and Bo Sun. "MFTR-Net: A Multi-Level Features Network with Targeted Regularization for Large-Scale Point Cloud Classification." Sensors 23, no. 8 (April 10, 2023): 3869. http://dx.doi.org/10.3390/s23083869.
Giang, Truong Thi Huong, and Young-Jae Ryoo. "Pruning Points Detection of Sweet Pepper Plants Using 3D Point Clouds and Semantic Segmentation Neural Network." Sensors 23, no. 8 (April 17, 2023): 4040. http://dx.doi.org/10.3390/s23084040.
Rai, A., N. Srivastava, K. Khoshelham, and K. Jain. "SEMANTIC ENRICHMENT OF 3D POINT CLOUDS USING 2D IMAGE SEGMENTATION." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-1/W2-2023 (December 14, 2023): 1659–66. http://dx.doi.org/10.5194/isprs-archives-xlviii-1-w2-2023-1659-2023.
Han, Ming, Jianjun Sha, Yanheng Wang, and Xiangwei Wang. "PBFormer: Point and Bi-Spatiotemporal Transformer for Pointwise Change Detection of 3D Urban Point Clouds." Remote Sensing 15, no. 9 (April 27, 2023): 2314. http://dx.doi.org/10.3390/rs15092314.
Bello, Saifullahi Aminu, Shangshu Yu, Cheng Wang, Jibril Muhmmad Adam, and Jonathan Li. "Review: Deep Learning on 3D Point Clouds." Remote Sensing 12, no. 11 (May 28, 2020): 1729. http://dx.doi.org/10.3390/rs12111729.
Mwangangi, K. K., P. O. Mc’Okeyo, S. J. Oude Elberink, and F. Nex. "EXPLORING THE POTENTIALS OF UAV PHOTOGRAMMETRIC POINT CLOUDS IN FAÇADE DETECTION AND 3D RECONSTRUCTION OF BUILDINGS." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B2-2022 (May 30, 2022): 433–40. http://dx.doi.org/10.5194/isprs-archives-xliii-b2-2022-433-2022.
Li, Weite, Kyoko Hasegawa, Liang Li, Akihiro Tsukamoto, and Satoshi Tanaka. "Deep Learning-Based Point Upsampling for Edge Enhancement of 3D-Scanned Data and Its Application to Transparent Visualization." Remote Sensing 13, no. 13 (June 28, 2021): 2526. http://dx.doi.org/10.3390/rs13132526.
Takahashi, G., and H. Masuda. "TRAJECTORY-BASED VISUALIZATION OF MMS POINT CLOUDS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W13 (June 5, 2019): 1127–33. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w13-1127-2019.
Barnefske, Eike, and Harald Sternberg. "Evaluating the Quality of Semantic Segmented 3D Point Clouds." Remote Sensing 14, no. 3 (January 18, 2022): 446. http://dx.doi.org/10.3390/rs14030446.
Dissertations / Theses on the topic "3D point clouds"
Srivastava, Siddharth. "Features for 3D point clouds." Thesis, IIT Delhi, 2019. http://eprint.iitd.ac.in:80//handle/2074/8061.
Filho, Carlos André Braile Przewodowski. "Feature extraction from 3D point clouds." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-30072018-111718/.
Full textVisão computacional é uma área de pesquisa em que as imagens são o principal objeto de estudo. Um dos problemas abordados é o da descrição de formatos (em inglês, shapes). Classificação de objetos é um importante exemplo de aplicação que usa descritores de shapes. Classicamente, esses processos eram realizados em imagens 2D. Com o desenvolvimento em larga escala de novas tecnologias e o barateamento dos equipamentos que geram imagens 3D, a visão computacional se adaptou para este novo cenário, expandindo os métodos 2D clássicos para 3D. Entretanto, estes métodos são, majoritariamente, dependentes da variação de iluminação e de cor, enquanto os sensores 3D fornecem informações de profundidade, shape 3D e topologia, além da cor. Assim, foram estudados diferentes métodos de classificação de objetos e extração de atributos robustos, onde a partir destes são propostos e descritos novos métodos de extração de atributos a partir de dados 3D. Os resultados obtidos utilizando bases de dados 3D públicas conhecidas demonstraram a eficiência dos métodos propóstos e que os mesmos competem com outros métodos no estado-da-arte: o RPHSD (um dos métodos propostos) atingiu 85:4% de acurácia, sendo a segunda maior acurácia neste banco de dados; o COMSD (outro método proposto) atingiu 82:3% de acurácia, se posicionando na sétima posição do ranking; e o CNSD (outro método proposto) em nono lugar. Além disso, os métodos RPHSD têm uma complexidade de processamento relativamente baixa. Assim, eles atingem uma alta acurácia com um pequeno tempo de processamento.
Truong, Quoc Hung. "Knowledge-based 3D point clouds processing." Phd thesis, Université de Bourgogne, 2013. http://tel.archives-ouvertes.fr/tel-00977434.
Stålberg, Martin. "Reconstruction of trees from 3D point clouds." Thesis, Uppsala universitet, Avdelningen för systemteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-316833.
Salman, Nader. "From 3D point clouds to feature preserving meshes." Nice, 2010. http://www.theses.fr/2010NICE4086.
Most current surface reconstruction algorithms target high-quality data and can produce intractable results when used with point clouds acquired through affordable 3D acquisition methods. Our first contribution is a surface reconstruction algorithm for stereo vision data that copes with the data's fuzziness using information from both the acquired 3D point cloud and the calibrated images. After pre-processing the point cloud, the algorithm builds, using the calibrated images, a 3D triangle soup consistent with the surface of the scene through a combination of visibility and photo-consistency constraints. A mesh is then computed from the triangle soup using a combination of restricted Delaunay triangulation and Delaunay refinement methods. Our second contribution is an algorithm that builds, given a 3D point cloud sampled on a surface, an approximating surface mesh with an accurate representation of surface sharp edges, providing an enhanced trade-off between accuracy and mesh complexity. We first extract from the point cloud an approximation of the sharp edges of the underlying surface. Then a feature-preserving variant of a Delaunay refinement process generates a mesh combining a faithful representation of the extracted sharp edges with an implicit surface obtained from the point cloud. The method is shown to be flexible, robust to noise, and tuneable to adapt to the scale of the targeted mesh and to a user-defined sizing field. We demonstrate the effectiveness of both contributions on a variety of scenes and models acquired with different hardware and show results that compare favourably, in terms of accuracy, with the current state of the art.
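As background for the reconstruction pipeline summarized above, the sketch below shows the generic "point cloud in, surface mesh out" step using the Open3D library's Poisson reconstruction. It is not the stereo-based or Delaunay-refinement method of the thesis, only a minimal illustration; the file names, normal-estimation radius, and octree depth are placeholder assumptions.

```python
# A minimal "point cloud in, surface mesh out" sketch using the Open3D
# library's Poisson reconstruction. This is NOT the Delaunay-refinement
# pipeline of the thesis above; it only illustrates the generic step.
# "scan.ply" and the numeric parameters are placeholder assumptions.
import open3d as o3d

pcd = o3d.io.read_point_cloud("scan.ply")  # noisy input point cloud

# Surface reconstruction needs consistently oriented normals; estimate them
# from local neighborhoods, then orient them with a tangent-plane heuristic.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))
pcd.orient_normals_consistent_tangent_plane(20)

# Poisson reconstruction returns a mesh plus per-vertex densities that can
# be used to trim poorly supported regions of the surface.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)

o3d.io.write_triangle_mesh("mesh.ply", mesh)
```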
Robert, Damien. "Efficient learning on large-scale 3D point clouds." Electronic Thesis or Diss., Université Gustave Eiffel, 2024. http://www.theses.fr/2024UEFL2003.
For the past decade, deep learning has been driving progress in the automated understanding of complex data structures as diverse as text, image, audio, and video. In particular, transformer-based models and self-supervised learning have recently ignited a global competition to learn expressive textual and visual representations by training the largest possible model on Internet-scale datasets, with the help of massive computational resources. This thesis takes a different path, by proposing resource-efficient deep learning methods for the analysis of large-scale 3D point clouds. The efficiency of the introduced approaches comes in various flavors: fast training, few parameters, small compute or memory footprint, and leveraging realistically-available data. In doing so, we strive to devise solutions that can be used by researchers and practitioners with minimal hardware requirements. We first introduce a 3D semantic segmentation model which combines the efficiency of superpoint-based methods with the expressivity of transformers. We build a hierarchical data representation which drastically reduces the size of the 3D point cloud parsing problem, facilitating the processing of large point clouds en masse. Our self-attentive network proves to match or even surpass state-of-the-art approaches on a range of sensors and acquisition environments, while boasting orders of magnitude fewer parameters, faster training, and swift inference. We then build upon this framework to tackle panoptic segmentation of large-scale point clouds. Existing instance and panoptic segmentation methods need to solve a complex matching problem between predicted and ground truth instances for computing their supervision loss. Instead, we frame this task as a scalable graph clustering problem, which a small network is trained to address from local objectives only, without computing the actual object instances at train time. Our lightweight model can process ten-million-point scenes at once on a single GPU in a few seconds, opening the door to 3D panoptic segmentation at unprecedented scales. Finally, we propose to exploit the complementarity of image and point cloud modalities to enhance 3D scene understanding. We place ourselves in a realistic acquisition setting where multiple arbitrarily-located images observe the same scene, with potential occlusions. Unlike previous 2D-3D fusion approaches, we learn to select information from various views of the same object based on their respective observation conditions: camera-to-object distance, occlusion rate, optical distortion, etc. Our efficient implementation achieves state-of-the-art results both in indoor and outdoor settings, with minimal requirements: raw point clouds, arbitrarily-positioned images, and their camera poses. Overall, this thesis upholds the principle that in data-scarce regimes, exploiting the structure of the problem unlocks both efficient and performant architectures.
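The first contribution above hinges on a hierarchical partition (superpoints) that shrinks the parsing problem before any learning happens. As a much simpler stand-in for that idea, the NumPy sketch below groups points into coarse voxels and keeps one centroid per voxel; the voxel size and array shapes are illustrative assumptions, not the thesis' actual superpoint construction.

```python
# Coarse voxel-grid grouping of a point cloud with NumPy only: a simplistic
# stand-in for hierarchical partitions such as superpoints. Each point is
# assigned to a voxel, and per-voxel centroids yield a much smaller cloud.
# The voxel size is an arbitrary assumption.
import numpy as np

def voxel_downsample(points: np.ndarray, voxel_size: float = 0.1) -> np.ndarray:
    """points: (N, 3) array of xyz coordinates. Returns (M, 3) voxel centroids."""
    # Integer voxel coordinates for every point.
    coords = np.floor(points / voxel_size).astype(np.int64)
    # Index of the voxel each point falls into.
    _, inverse = np.unique(coords, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    n_voxels = int(inverse.max()) + 1
    # Accumulate coordinates per voxel, then average.
    sums = np.zeros((n_voxels, 3))
    counts = np.zeros(n_voxels)
    np.add.at(sums, inverse, points)
    np.add.at(counts, inverse, 1.0)
    return sums / counts[:, None]

if __name__ == "__main__":
    cloud = np.random.rand(1_000_000, 3) * 50.0   # synthetic 1M-point cloud
    print(cloud.shape, "->", voxel_downsample(cloud, voxel_size=0.5).shape)
```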
Al, Hakim Ezeddin. "3D YOLO: End-to-End 3D Object Detection Using Point Clouds." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-234242.
For autonomous vehicles to build a good picture of their surroundings, modern sensors such as LiDAR and RADAR are used. These generate a large number of 3-dimensional data points known as point clouds. In the development of autonomous vehicles there is a strong need to interpret LiDAR data and classify other road users. A large number of studies have addressed 2D object detection, which analyzes images to detect vehicles, but we are interested in 3D object detection using only LiDAR data. We therefore introduce the 3D YOLO model, which builds on YOLO (You Only Look Once), one of the fastest state-of-the-art models for 2D object detection in images. 3D YOLO takes a point cloud as input and produces 3D boxes that mark the detected objects and indicate each object's category. We trained and evaluated the model on the public KITTI dataset. Our results show that 3D YOLO is faster than today's state-of-the-art LiDAR-based models while maintaining high accuracy, which makes it a good candidate for use in autonomous vehicles.
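3D YOLO consumes a raw LiDAR point cloud and predicts oriented 3D boxes with class labels. YOLO-style detectors operate on a regular 2D grid, so a common preprocessing step is to rasterize the cloud into a bird's-eye-view image; the NumPy sketch below illustrates that generic step. The region of interest and cell size are assumptions for illustration, not the exact encoding used in the thesis.

```python
# Rasterize a LiDAR point cloud into a bird's-eye-view (BEV) max-height grid,
# a common input encoding for YOLO-style LiDAR detectors. The region of
# interest and cell size below are illustrative assumptions only.
import numpy as np

def bev_grid(points, x_range=(0.0, 70.0), y_range=(-40.0, 40.0), cell=0.1):
    """points: (N, 3) array of x, y, z. Returns an (H, W) max-height image."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    # Keep only points inside the region of interest.
    mask = ((x >= x_range[0]) & (x < x_range[1]) &
            (y >= y_range[0]) & (y < y_range[1]))
    x, y, z = x[mask], y[mask], z[mask]
    # Metric coordinates -> integer cell indices.
    col = ((x - x_range[0]) / cell).astype(np.int64)
    row = ((y - y_range[0]) / cell).astype(np.int64)
    h = int((y_range[1] - y_range[0]) / cell)
    w = int((x_range[1] - x_range[0]) / cell)
    grid = np.full((h, w), -np.inf)
    # Per-cell maximum height; cells that received no point are set to zero.
    np.maximum.at(grid, (row, col), z)
    grid[np.isinf(grid)] = 0.0
    return grid
```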
Biasutti, Pierre. "2D Image Processing Applied to 3D LiDAR Point Clouds." Thesis, Bordeaux, 2019. http://www.theses.fr/2019BORD0161/document.
The ever-growing demand for reliable mapping data, especially in urban environments, has motivated the development of "close-range" Mobile Mapping Systems (MMS). These systems acquire high-precision data, and in particular 3D LiDAR point clouds and optical images. The large amount of data, along with its diversity, makes MMS data processing a very complex task. This thesis lies in the context of 2D image processing applied to 3D LiDAR point clouds acquired with MMS. First, we focus on the projection of the LiDAR point clouds onto 2D pixel grids to create images. Such projections are often sparse because some pixels do not carry any information. We use these projections for different applications such as high-resolution orthoimage generation, RGB-D imaging, and visibility estimation in point clouds. Moreover, we exploit the topology of LiDAR sensors in order to create low-resolution images, named range-images. These images offer an efficient and canonical representation of the point cloud, while being directly accessible from the point cloud. We show how range-images can be used to simplify, and sometimes outperform, methods for multi-modal registration, segmentation, desocclusion, and 3D detection.
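The thesis above builds low-resolution range images directly from the LiDAR sensor topology. A generic way to obtain a comparable representation from an unordered cloud is spherical projection, sketched below with NumPy; the image resolution and vertical field of view are placeholder values that depend on the actual sensor, not parameters taken from the thesis.

```python
# Spherical projection of an unordered LiDAR point cloud into a 2D range
# image. Image resolution and vertical field of view are placeholder
# assumptions, not values from the thesis or a specific sensor.
import numpy as np

def range_image(points, h=64, w=1024, fov_up_deg=3.0, fov_down_deg=-25.0):
    """points: (N, 3) array of x, y, z. Returns an (h, w) image of ranges."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points[:, :3], axis=1)        # range per point
    yaw = np.arctan2(y, x)                           # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-8), -1.0, 1.0))
    fov_up, fov_down = np.radians(fov_up_deg), np.radians(fov_down_deg)
    # Normalize angles to [0, 1], then scale to pixel coordinates.
    u = 0.5 * (1.0 - yaw / np.pi) * w
    v = (1.0 - (pitch - fov_down) / (fov_up - fov_down)) * h
    u = np.clip(np.floor(u), 0, w - 1).astype(np.int64)
    v = np.clip(np.floor(v), 0, h - 1).astype(np.int64)
    img = np.zeros((h, w))
    # When several points land in the same pixel, keep the closest return:
    # far points are written first so that nearer points overwrite them.
    order = np.argsort(-r)
    img[v[order], u[order]] = r[order]
    return img
```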
IRFAN, MUHAMMAD ABEER. "Joint geometry and color denoising for 3D point clouds." Doctoral thesis, Politecnico di Torino, 2021. http://hdl.handle.net/11583/2912976.
Fucili, Mattia. "3D object detection from point clouds with dense pose voters." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/17616/.
Full textBooks on the topic "3D point clouds"
Cheok, Geraldine S., and National Institute of Standards and Technology (U.S.), eds. Registering 3D point clouds: An experimental evaluation. Gaithersburg, MD: U.S. Dept. of Commerce, Technology Administration, National Institute of Standards and Technology, 2001.
Uray, Peter. From 3D point clouds to surfaces and volumes: Dissertation. Wien: Oldenbourg, 1997.
National Institute of Standards and Technology (U.S.), ed. REGISTERING 3D POINT CLOUDS: AN EXPERIMENTAL EVALUATION... NISTIR 6743... U.S. DEPARTMENT OF COMMERCE. [S.l.: s.n.], 2001.
Golyanik, Vladislav. Robust Methods for Dense Monocular Non-Rigid 3D Reconstruction and Alignment of Point Clouds. Wiesbaden: Springer Fachmedien Wiesbaden, 2020. http://dx.doi.org/10.1007/978-3-658-30567-3.
Liu, Shan, Min Zhang, Pranav Kadam, and C. C. Jay Kuo. 3D Point Cloud Analysis. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-89180-0.
Zhang, Guoxiang, and YangQuan Chen. Towards Optimal Point Cloud Processing for 3D Reconstruction. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-96110-7.
Registering 3D point clouds: An experimental evaluation. Gaithersburg, MD: U.S. Dept. of Commerce, Technology Administration, National Institute of Standards and Technology, 2001.
Golyanik, Vladislav. Robust Methods for Dense Monocular Non-Rigid 3D Reconstruction and Alignment of Point Clouds. Springer Vieweg, 2020.
Chen, YangQuan, and Guoxiang Zhang. Towards Optimal Point Cloud Processing for 3D Reconstruction. Springer International Publishing AG, 2022.
Book chapters on the topic "3D point clouds"
Kamberov, George, Gerda Kamberova, and Amit Jain. "3D Shape from Unorganized 3D Point Clouds." In Advances in Visual Computing, 621–29. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11595755_76.
Su, Jingyong, and Lin-Lin Tang. "Shape Estimation from 3D Point Clouds." In Intelligent Data analysis and its Applications, Volume I, 39–46. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-07776-5_5.
Li, Yunpeng, Noah Snavely, Daniel P. Huttenlocher, and Pascal Fua. "Worldwide Pose Estimation Using 3D Point Clouds." In Large-Scale Visual Geo-Localization, 147–63. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-25781-5_8.
Xue, Mei, Shogo Tokai, and Hiroyuki Hase. "Point Clouds Based 3D Facial Expression Generation." In Advances in Mechanical Design, 467–84. Singapore: Springer Singapore, 2017. http://dx.doi.org/10.1007/978-981-10-6553-8_32.
Li, Yunpeng, Noah Snavely, Dan Huttenlocher, and Pascal Fua. "Worldwide Pose Estimation Using 3D Point Clouds." In Computer Vision – ECCV 2012, 15–29. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33718-5_2.
Yan, Feng, Fei Wang, Yu Guo, and Peilin Jiang. "Saliency-Guided Smoothing for 3D Point Clouds." In Intelligent Computing Theories and Application, 165–74. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-63309-1_16.
Medina, F. Patricia, and Randy Paffenroth. "Machine Learning in LiDAR 3D Point Clouds." In Association for Women in Mathematics Series, 113–33. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-79891-8_6.
Liu, Daniel, Ronald Yu, and Hao Su. "Adversarial Shape Perturbations on 3D Point Clouds." In Computer Vision – ECCV 2020 Workshops, 88–104. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-66415-2_6.
Hamdi, Abdullah, Sara Rojas, Ali Thabet, and Bernard Ghanem. "AdvPC: Transferable Adversarial Perturbations on 3D Point Clouds." In Computer Vision – ECCV 2020, 241–57. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58610-2_15.
Dyshkant, Natalia. "Comparison of Point Clouds Acquired by 3D Scanner." In Discrete Geometry for Computer Imagery, 47–58. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-37067-0_5.
Conference papers on the topic "3D point clouds"
Kim, Sunghan, Mingyu Kim, Jeongtae Lee, Jinhwi Pyo, Heeyoung Heo, Dongho Yun, and Kwanghee Ko. "Registration of 3D Point Clouds for Ship Block Measurement." In SNAME 5th World Maritime Technology Conference. SNAME, 2015. http://dx.doi.org/10.5957/wmtc-2015-252.
Wells, Lee J., Mohammed S. Shafae, and Jaime A. Camelio. "Automated Part Inspection Using 3D Point Clouds." In ASME 2013 International Manufacturing Science and Engineering Conference collocated with the 41st North American Manufacturing Research Conference. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/msec2013-1212.
Men, Hao, and Kishore Pochiraju. "Hue Assisted Registration of 3D Point Clouds." In ASME 2010 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2010. http://dx.doi.org/10.1115/detc2010-29192.
Xiang, Chong, Charles R. Qi, and Bo Li. "Generating 3D Adversarial Point Clouds." In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2019. http://dx.doi.org/10.1109/cvpr.2019.00935.
Lihui Wang, Baozong Yuan, and Zhenjiang Miao. "3D point clouds parameterization alogrithm." In 2008 9th International Conference on Signal Processing (ICSP 2008). IEEE, 2008. http://dx.doi.org/10.1109/icosp.2008.4697396.
Lubos, Paul, Rudiger Beimler, Markus Lammers, and Frank Steinicke. "Touching the Cloud: Bimanual annotation of immersive point clouds." In 2014 IEEE Symposium on 3D User Interfaces (3DUI). IEEE, 2014. http://dx.doi.org/10.1109/3dui.2014.6798885.
Fu, Rao, Cheng Wen, Qian Li, Xiao Xiao, and Pierre Alliez. "BPNet: Bézier Primitive Segmentation on 3D Point Clouds." In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/84.
Zhang, Zihao, Lei Hu, Xiaoming Deng, and Shihong Xia. "Sequential 3D Human Pose Estimation Using Adaptive Point Cloud Sampling Strategy." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/184.
Liu, Weiquan, Hanyun Guo, Weini Zhang, Yu Zang, Cheng Wang, and Jonathan Li. "TopoSeg: Topology-aware Segmentation for Point Clouds." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/168.
Tchapmi, Lyne, Christopher Choy, Iro Armeni, JunYoung Gwak, and Silvio Savarese. "SEGCloud: Semantic Segmentation of 3D Point Clouds." In 2017 International Conference on 3D Vision (3DV). IEEE, 2017. http://dx.doi.org/10.1109/3dv.2017.00067.
Reports on the topic "3D point clouds"
Witzgall, Christoph, and Geraldine S. Cheok. Registering 3D point clouds: An experimental evaluation. Gaithersburg, MD: National Institute of Standards and Technology, 2001. http://dx.doi.org/10.6028/nist.ir.6743.
Habib, Ayman, Darcy M. Bullock, Yi-Chun Lin, and Raja Manish. Road Ditch Line Mapping with Mobile LiDAR. Purdue University, 2021. http://dx.doi.org/10.5703/1288284317354.
Ennasr, Osama, Michael Paquette, and Garry Glaspell. UGV SLAM payload for low-visibility environments. Engineer Research and Development Center (U.S.), September 2023. http://dx.doi.org/10.21079/11681/47589.
Ennasr, Osama, Charles Ellison, Anton Netchaev, Ahmet Soylemezoglu, and Garry Glaspell. Unmanned ground vehicle (UGV) path planning in 2.5D and 3D. Engineer Research and Development Center (U.S.), August 2023. http://dx.doi.org/10.21079/11681/47459.
Blundell, S., and Philip Devine. Creation, transformation, and orientation adjustment of a building façade model for feature segmentation : transforming 3D building point cloud models into 2D georeferenced feature overlays. Engineer Research and Development Center (U.S.), January 2020. http://dx.doi.org/10.21079/11681/35115.