Dissertations / Theses on the topic '3D point clouds'
Consult the top 50 dissertations / theses for your research on the topic '3D point clouds.'
Srivastava, Siddharth. "Features for 3D point clouds." Thesis, IIT Delhi, 2019. http://eprint.iitd.ac.in:80//handle/2074/8061.
Filho, Carlos André Braile Przewodowski. "Feature extraction from 3D point clouds." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-30072018-111718/.
Computer vision is a research area in which images are the main object of study. One of the problems addressed is shape description. Object classification is an important example of an application that uses shape descriptors. Classically, these processes were performed on 2D images. With the large-scale development of new technologies and the falling cost of equipment that produces 3D images, computer vision has adapted to this new scenario, extending the classic 2D methods to 3D. However, these methods mostly depend on variations in illumination and color, whereas 3D sensors provide depth, 3D shape and topology information in addition to color. Different methods for object classification and robust feature extraction were therefore studied, and building on these, new methods for extracting features from 3D data are proposed and described. Results obtained on well-known public 3D databases demonstrate the efficiency of the proposed methods and show that they compete with other state-of-the-art methods: RPHSD (one of the proposed methods) reached 85.4% accuracy, the second-highest accuracy on this database; COMSD (another proposed method) reached 82.3% accuracy, placing seventh in the ranking; and CNSD (another proposed method) placed ninth. Furthermore, the RPHSD methods have a relatively low processing complexity, achieving high accuracy with a short processing time.
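The rotation-invariant shape descriptors discussed in the abstract above can be illustrated with a minimal toy sketch (this is not the thesis's RPHSD/COMSD/CNSD methods, just the general idea): a normalized histogram of point distances from the cloud's centroid, which is unchanged by rotating the cloud.

```python
import numpy as np

def centroid_distance_descriptor(points, bins=8, rmax=6.0):
    """Toy rotation-invariant shape descriptor: a normalized histogram
    of each point's distance to the cloud centroid."""
    points = np.asarray(points, dtype=float)
    d = np.linalg.norm(points - points.mean(axis=0), axis=1)
    hist, _ = np.histogram(d, bins=bins, range=(0.0, rmax))
    return hist / hist.sum()  # normalize so descriptors are comparable

# A rotated copy of a cloud yields the same descriptor.
rng = np.random.default_rng(0)
cloud = rng.normal(size=(500, 3))
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])
print(np.allclose(centroid_distance_descriptor(cloud),
                  centroid_distance_descriptor(cloud @ R.T)))  # True
```

Real descriptors in this literature are considerably richer (multi-scale, surface-aware), but they share this pattern of binning geometric measurements into a fixed-length vector.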
Truong, Quoc Hung. "Knowledge-based 3D point clouds processing." PhD thesis, Université de Bourgogne, 2013. http://tel.archives-ouvertes.fr/tel-00977434.
Stålberg, Martin. "Reconstruction of trees from 3D point clouds." Thesis, Uppsala universitet, Avdelningen för systemteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-316833.
Full textSalman, Nader. "From 3D point clouds to feature preserving meshes." Nice, 2010. http://www.theses.fr/2010NICE4086.
Most current surface reconstruction algorithms target high-quality data and can produce intractable results when used with point clouds acquired through affordable 3D acquisition methods. Our first contribution is a surface reconstruction algorithm for stereo vision data that copes with the data's fuzziness using information from both the acquired 3D point cloud and the calibrated images. After pre-processing the point cloud, the algorithm builds, using the calibrated images, a 3D triangle soup consistent with the surface of the scene through a combination of visibility and photo-consistency constraints. A mesh is then computed from the triangle soup using a combination of restricted Delaunay triangulation and Delaunay refinement methods. Our second contribution is an algorithm that builds, given a 3D point cloud sampled on a surface, an approximating surface mesh with an accurate representation of surface sharp edges, providing an enhanced trade-off between accuracy and mesh complexity. We first extract from the point cloud an approximation of the sharp edges of the underlying surface. Then a feature-preserving variant of a Delaunay refinement process generates a mesh combining a faithful representation of the extracted sharp edges with an implicit surface obtained from the point cloud. The method is shown to be flexible, robust to noise and tuneable to adapt to the scale of the targeted mesh and to a user-defined sizing field. We demonstrate the effectiveness of both contributions on a variety of scenes and models acquired with different hardware and show results that compare favourably, in terms of accuracy, with the current state of the art.
Robert, Damien. "Efficient learning on large-scale 3D point clouds." Electronic Thesis or Diss., Université Gustave Eiffel, 2024. http://www.theses.fr/2024UEFL2003.
For the past decade, deep learning has been driving progress in the automated understanding of complex data structures as diverse as text, image, audio, and video. In particular, transformer-based models and self-supervised learning have recently ignited a global competition to learn expressive textual and visual representations by training the largest possible model on Internet-scale datasets, with the help of massive computational resources. This thesis takes a different path, by proposing resource-efficient deep learning methods for the analysis of large-scale 3D point clouds. The efficiency of the introduced approaches comes in various flavors: fast training, few parameters, small compute or memory footprint, and leveraging realistically-available data. In doing so, we strive to devise solutions that can be used by researchers and practitioners with minimal hardware requirements. We first introduce a 3D semantic segmentation model which combines the efficiency of superpoint-based methods with the expressivity of transformers. We build a hierarchical data representation which drastically reduces the size of the 3D point cloud parsing problem, facilitating the processing of large point clouds en masse. Our self-attentive network proves to match or even surpass state-of-the-art approaches on a range of sensors and acquisition environments, while boasting orders of magnitude fewer parameters, faster training, and swift inference. We then build upon this framework to tackle panoptic segmentation of large-scale point clouds. Existing instance and panoptic segmentation methods need to solve a complex matching problem between predicted and ground truth instances for computing their supervision loss. Instead, we frame this task as a scalable graph clustering problem, which a small network is trained to address from local objectives only, without computing the actual object instances at train time.
Our lightweight model can process ten-million-point scenes at once on a single GPU in a few seconds, opening the door to 3D panoptic segmentation at unprecedented scales. Finally, we propose to exploit the complementarity of image and point cloud modalities to enhance 3D scene understanding. We place ourselves in a realistic acquisition setting where multiple arbitrarily-located images observe the same scene, with potential occlusions. Unlike previous 2D-3D fusion approaches, we learn to select information from various views of the same object based on their respective observation conditions: camera-to-object distance, occlusion rate, optical distortion, etc. Our efficient implementation achieves state-of-the-art results both in indoor and outdoor settings, with minimal requirements: raw point clouds, arbitrarily-positioned images, and their camera poses. Overall, this thesis upholds the principle that in data-scarce regimes, exploiting the structure of the problem unlocks both efficient and performant architectures.
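The idea of weighting multiple views of the same object by their observation conditions can be sketched minimally as follows (a hand-crafted scoring heuristic for illustration only; the thesis learns this selection, and the function and score here are hypothetical):

```python
import numpy as np

def fuse_views(features, distances, occlusions):
    """Fuse per-view feature vectors with weights derived from
    observation conditions: closer, less-occluded views count more."""
    features = np.asarray(features, dtype=float)
    # Hand-crafted quality score (illustrative, not the learned model).
    quality = (1.0 / (1.0 + np.asarray(distances))) * (1.0 - np.asarray(occlusions))
    weights = quality / quality.sum()
    return weights @ features  # weighted average of per-view features

views = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]  # one feature vector per view
dist = [2.0, 10.0, 4.0]                       # camera-to-object distance
occ = [0.1, 0.8, 0.0]                         # occlusion rate per view
fused = fuse_views(views, dist, occ)
print(fused.shape)  # (2,)
```

Here the close, barely occluded first view dominates the fused feature, which is the qualitative behaviour the abstract describes.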
Al Hakim, Ezeddin. "3D YOLO: End-to-End 3D Object Detection Using Point Clouds." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-234242.
For autonomous vehicles to perceive their surroundings well, modern sensors such as LiDAR and RADAR are used. These generate a large number of 3-dimensional data points known as point clouds. In the development of autonomous vehicles there is a great need to interpret LiDAR data and classify other road users. Many studies have addressed 2D object detection, which analyses images to detect vehicles, but we are interested in 3D object detection using LiDAR data only. We therefore introduce the model 3D YOLO, which builds on YOLO (You Only Look Once), one of the fastest state-of-the-art models for 2D object detection in images. 3D YOLO takes a point cloud as input and produces 3D boxes that mark the different objects and indicate each object's category. We trained and evaluated the model on the public KITTI training data. Our results show that 3D YOLO is faster than today's state-of-the-art LiDAR-based models while achieving high accuracy, making it a good candidate for use in autonomous vehicles.
Biasutti, Pierre. "2D Image Processing Applied to 3D LiDAR Point Clouds." Thesis, Bordeaux, 2019. http://www.theses.fr/2019BORD0161/document.
The ever-growing demand for reliable mapping data, especially in urban environments, has motivated the development of "close-range" Mobile Mapping Systems (MMS). These systems acquire high-precision data, in particular 3D LiDAR point clouds and optical images. The large amount of data, along with its diversity, makes MMS data processing a very complex task. This thesis lies in the context of 2D image processing applied to 3D LiDAR point clouds acquired with MMS. First, we focus on the projection of LiDAR point clouds onto 2D pixel grids to create images. Such projections are often sparse because some pixels do not carry any information. We use these projections for different applications such as high-resolution orthoimage generation, RGB-D imaging and visibility estimation in point clouds. Moreover, we exploit the topology of LiDAR sensors in order to create low-resolution images, named range-images. These images offer an efficient and canonical representation of the point cloud, while being directly accessible from the point cloud. We show how range-images can be used to simplify, and sometimes outperform, methods for multi-modal registration, segmentation, desocclusion and 3D detection.
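The projection described above, from a 3D point cloud to a 2D pixel grid, can be sketched as a simple spherical range-image projection (an illustrative toy example, not the thesis's code; the resolution and field-of-view values are arbitrary assumptions):

```python
import numpy as np

def to_range_image(points, height=32, width=64, fov_up=0.4, fov_down=-0.4):
    """Map each 3D point to a pixel via its azimuth/elevation angles and
    store its range; untouched pixels stay 0, which makes the image sparse."""
    points = np.asarray(points, dtype=float)
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    azimuth = np.arctan2(y, x)                       # in [-pi, pi]
    elevation = np.arcsin(z / np.maximum(r, 1e-9))   # fov bounds in radians
    u = ((azimuth + np.pi) / (2 * np.pi) * width).astype(int) % width
    v = ((fov_up - elevation) / (fov_up - fov_down) * height).astype(int)
    v = np.clip(v, 0, height - 1)
    img = np.zeros((height, width))
    img[v, u] = r  # one range value per pixel (last write wins)
    return img

pts = np.array([[5.0, 0.0, 0.0], [0.0, 5.0, 1.0], [-3.0, 0.0, -0.5]])
img = to_range_image(pts)
print(img.shape, int((img > 0).sum()))  # (32, 64) 3
```

Real LiDAR range-images instead exploit the sensor's own beam layout (one row per laser ring), which is what makes them a canonical, dense representation of the scan.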
IRFAN, MUHAMMAD ABEER. "Joint geometry and color denoising for 3D point clouds." Doctoral thesis, Politecnico di Torino, 2021. http://hdl.handle.net/11583/2912976.
Fucili, Mattia. "3D object detection from point clouds with dense pose voters." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/17616/.
Full textWang, Yutao. "Outlier formation and removal in 3D laser scanned point clouds." Thesis, University of British Columbia, 2014. http://hdl.handle.net/2429/51265.
Full textApplied Science, Faculty of
Mechanical Engineering, Department of
Graduate
Yanes, Luis. "Haptic Interaction with 3D oriented point clouds on the GPU." Thesis, University of East Anglia, 2015. https://ueaeprints.uea.ac.uk/58556/.
Full textPaffenholz, Jens-André [Verfasser]. "Direct geo-referencing of 3D point clouds with 3D positioning sensors / Jens-André Paffenholz." Hannover : Technische Informationsbibliothek und Universitätsbibliothek Hannover (TIB), 2013. http://d-nb.info/1036695646/34.
Full textPaffenholz, Jens-André [Verfasser]. "Direct geo-referencing of 3D point clouds with 3D positioning sensors / Jens-André Paffenholz." Hannover : Gottfried Wilhelm Leibniz Universität Hannover, 2012. http://d-nb.info/1183907168/34.
Full textRashidi, Abbas. "Improved monocular videogrammetry for generating 3D dense point clouds of built infrastructure." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/52257.
Full textEngelmann, Francis [Verfasser], Bastian [Akademischer Betreuer] Leibe, and Siyu [Akademischer Betreuer] Tang. "3D scene understanding on point clouds / Francis Engelmann ; Bastian Leibe, Siyu Tang." Aachen : Universitätsbibliothek der RWTH Aachen, 2021. http://nbn-resolving.de/urn:nbn:de:101:1-2021100303242549426277.
Full textAbuzaina, Anas. "On evidence gathering in 3D point clouds of static and moving objects." Thesis, University of Southampton, 2015. https://eprints.soton.ac.uk/381290/.
Full textWiklander, Marcus. "Classification of tree species from 3D point clouds using convolutional neural networks." Thesis, Umeå universitet, Institutionen för fysik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-174662.
Full textAzhari, Faris. "Automated crack detection and characterisation from 3D point clouds of unstructured surfaces." Thesis, Queensland University of Technology, 2022. https://eprints.qut.edu.au/234510/1/Faris_Azhari_Thesis.pdf.
Full textSerra, Sabina. "Deep Learning for Semantic Segmentation of 3D Point Clouds from an Airborne LiDAR." Thesis, Linköpings universitet, Datorseende, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-168367.
Full textPaudel, Danda Pani. "Local and global methods for registering 2D image sets and 3D point clouds." Thesis, Dijon, 2015. http://www.theses.fr/2015DIJOS077/document.
Full textIn this thesis, we study the problem of registering 2D image sets and 3D point clouds under threedifferent acquisition set-ups. The first set-up assumes that the image sets are captured using 2Dcameras that are fully calibrated and coupled, or rigidly attached, with a 3D sensor. In this context,the point cloud from the 3D sensor is registered directly to the asynchronously acquired 2D images.In the second set-up, the 2D cameras are internally calibrated but uncoupled from the 3D sensor,allowing them to move independently with respect to each other. The registration for this set-up isperformed using a Structure-from-Motion reconstruction emanating from images and planar patchesrepresenting the point cloud. The proposed registration method is globally optimal and robust tooutliers. It is based on the theory Sum-of-Squares polynomials and a Branch-and-Bound algorithm.The third set-up consists of uncoupled and uncalibrated 2D cameras. The image sets from thesecameras are registered to the point cloud in a globally optimal manner using a Branch-and-Prunealgorithm. Our method is based on a Linear Matrix Inequality framework that establishes directrelationships between 2D image measurements and 3D scene voxels
Graehling, Quinn R. "Feature Extraction Based Iterative Closest Point Registration for Large Scale Aerial LiDAR Point Clouds." University of Dayton / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1607380713807017.
Full textDigne, Julie. "Inverse geometry : from the raw point cloud to the 3d surface : theory and algorithms." Phd thesis, École normale supérieure de Cachan - ENS Cachan, 2010. http://tel.archives-ouvertes.fr/tel-00610432.
Full textOchmann, Sebastian Klaus [Verfasser]. "Automatic Reconstruction of Parametric, Volumetric Building Models from 3D Point Clouds / Sebastian Klaus Ochmann." Bonn : Universitäts- und Landesbibliothek Bonn, 2019. http://d-nb.info/1188731599/34.
Full textPatiño, Mejía Isabel Cristina [Verfasser], and Andreas [Akademischer Betreuer] Zell. "Estimating Head Measurements from 3D Point Clouds / Isabel Cristina Patiño Mejía ; Betreuer: Andreas Zell." Tübingen : Universitätsbibliothek Tübingen, 2019. http://d-nb.info/1201644429/34.
Full textDehbi, Youness [Verfasser]. "Statistical relational learning of semantic models and grammar rules for 3D building reconstruction from 3D point clouds / Youness Dehbi." Bonn : Universitäts- und Landesbibliothek Bonn, 2016. http://d-nb.info/112228585X/34.
Full textArvidsson, Simon, and Marcus Gullstrand. "Predicting forest strata from point clouds using geometric deep learning." Thesis, Jönköping University, JTH, Avdelningen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-54155.
Full textAvdiu, Blerta. "Matching Feature Points in 3D World." Thesis, Tekniska Högskolan, Högskolan i Jönköping, JTH, Data- och elektroteknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-23049.
Full textKersten, Thomas. "Untersuchungen zur Qualität und Genauigkeit von 3D-Punktwolken für die 3D-Objektmodellierung auf der Grundlage von terrestrischem Laserscanning und bildbasierten Verfahren." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2018. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-231616.
Full text3D point clouds have significantly changed the surveying of objects in the last 25 years. Since in many applications, the individual point measurements were replaced through area-based measurements in form of point clouds, a paradigm shift in surveying has been fulfilled. This change in measurement methodology was made possible with the rapid developments in instrument manufacturing and computer technology. Today, airborne and terrestrial laser scanners, as well as hand-held 3D scanners directly generate dense point clouds, while dense point clouds are indirectly derived from photos of image-based recording systems used for detailed 3D object reconstruction in almost any scale. In this work, investigations into the geometric accuracy of some of these scanning systems are pre-sented to document and evaluate their performance. While terrestrial laser scanners mostly met the accuracy specifications in the investigations, 3-5 mm for 3D points and distance measurements as defined in the technical specifications of the system manufacturer, significant differences are shown, however, by many tested hand-held 3D scanners. These observed deviations indicate a certain geometric instability of the measuring system, caused either by the construction/manufacturing and/or insufficient calibration (particularly with regard to the scale). It is apparent that most of the hand-held 3D scanners are at the beginning of the technical development, which still offers potential for optimization. The image-based recording systems have been increasingly accepted by the market as flexible and efficient alternatives to laser scanning systems for about ten years. The research of image-based recording and evaluation methods presented in this work has shown that these coloured 3D point clouds correspond to the accuracy of the laser scanner depending on the image scale and surface material of the object. 
Compared with the results of most hand-held 3D scanners, point clouds generated by image-based recording techniques exhibit superior quality. However, the Creaform HandySCAN 700, based on a photogrammetric recording principle (stereo photogrammetry), is the solitary exception among the hand-held 3D scanners, showing very good results of better than 30 micrometres on average, accuracies even in the range of the reference systems (here structured-light projection systems). The developed test procedures and the corresponding investigations have proven practical for both terrestrial and hand-held 3D scanners, since comparable results can be obtained using the VDI/VDE guidelines 2634, which allows statements about the performance of the tested scanning system for practice-oriented users. For object scans composed of multiple single scans acquired in static mode, errors of the scan registration have to be added, while for scans collected in kinematic mode the accuracies of the (absolute) position sensors add to the error budget of the point cloud. A careful system calibration of the various positioning and recording sensors of a mobile multi-sensor system used in kinematic mode allows a 3D point accuracy of about 3-5 cm, which can be improved with higher-quality sensors under good conditions. With static scans an accuracy of better than 1 cm for 3D points can be achieved, surpassing the potential of mobile recording systems, which are economically much more efficient if larger areas have to be scanned. The 3D point clouds are the basis for object reconstruction in two different ways: a) engineering modelling as generalized CAD construction through geometric primitives, and b) mesh modelling by triangulation of the point clouds for the exact representation of the surface.
Deviations of up to 10 cm (and possibly more) from the nominal value can arise very quickly through the generalization in the CAD construction, but on the other hand a significant reduction of data and a topological structuring can be achieved by fitting the point cloud into geometric primitives. Investigations have shown that, using intelligent polygon decimation algorithms (e.g. curvature-based), the number of polygons can be reduced to 25% or even 10% of the original data in the mesh triangulation, depending on the surface characteristics of the object, without having too much impact on the visual and geometric quality of the result. Depending on the object size, deviations of less than one millimetre (e.g. for archaeological finds) up to 5 cm on average for larger objects can be achieved. In the future, point clouds can form an important basis for the construction of the environment for many virtual reality applications, where the visual appearance is more important than the perfect geometric accuracy of the modelled objects.
Schauer Marin Rodrigues, Johannes [Verfasser], and Andreas [Gutachter] Nüchter. "Detecting Changes and Finding Collisions in 3D Point Clouds / Johannes Schauer Marin Rodrigues ; Gutachter: Andreas Nüchter." Würzburg : Universität Würzburg, 2020. http://d-nb.info/1222910462/34.
Full textSHAH, GHAZANFAR ALI. "Template-based reverse engineering of parametric CAD models from point clouds." Doctoral thesis, Università degli studi di Genova, 2021. http://hdl.handle.net/11567/1048640.
Full textManamasa, Krishna Himaja. "Domain adaptation from 3D synthetic images to real images." Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-19303.
Full textBorrmann, Dorit [Verfasser], Andreas [Gutachter] Nüchter, Joachim [Gutachter] Hertzberg, and Claus [Gutachter] Brenner. "Multi-modal 3D mapping - Combining 3D point clouds with thermal and color information / Dorit Borrmann ; Gutachter: Andreas Nüchter, Joachim Hertzberg, Claus Brenner." Würzburg : Universität Würzburg, 2018. http://d-nb.info/1152211501/34.
Full textBorrmann, Dorit Verfasser], Andreas [Gutachter] Nüchter, Joachim [Gutachter] Hertzberg, and Claus [Gutachter] [Brenner. "Multi-modal 3D mapping - Combining 3D point clouds with thermal and color information / Dorit Borrmann ; Gutachter: Andreas Nüchter, Joachim Hertzberg, Claus Brenner." Würzburg : Universität Würzburg, 2018. http://nbn-resolving.de/urn:nbn:de:bvb:20-opus-157085.
Full textRichter, Rico [Verfasser], and Jürgen [Akademischer Betreuer] Döllner. "Concepts and techniques for processing and rendering of massive 3D point clouds / Rico Richter ; Betreuer: Jürgen Döllner." Potsdam : Universität Potsdam, 2018. http://d-nb.info/1217813020/34.
Full textOrts-Escolano, Sergio. "A three-dimensional representation method for noisy point clouds based on growing self-organizing maps accelerated on GPUs." Doctoral thesis, Universidad de Alicante, 2013. http://hdl.handle.net/10045/36484.
Full textThomas, Hugues. "Apprentissage de nouvelles représentations pour la sémantisation de nuages de points 3D." Thesis, Paris Sciences et Lettres (ComUE), 2019. http://www.theses.fr/2019PSLEM048/document.
Full textIn the recent years, new technologies have allowed the acquisition of large and precise 3D scenes as point clouds. They have opened up new applications like self-driving vehicles or infrastructure monitoring that rely on efficient large scale point cloud processing. Convolutional deep learning methods cannot be directly used with point clouds. In the case of images, convolutional filters brought the ability to learn new representations, which were previously hand-crafted in older computer vision methods. Following the same line of thought, we present in this thesis a study of hand-crafted representations previously used for point cloud processing. We propose several contributions, to serve as basis for the design of a new convolutional representation for point cloud processing. They include a new definition of multiscale radius neighborhood, a comparison with multiscale k-nearest neighbors, a new active learning strategy, the semantic segmentation of large scale point clouds, and a study of the influence of density in multiscale representations. Following these contributions, we introduce the Kernel Point Convolution (KPConv), which uses radius neighborhoods and a set of kernel points to play the role of the kernel pixels in image convolution. Our convolutional networks outperform state-of-the-art semantic segmentation approaches in almost any situation. In addition to these strong results, we designed KPConv with a great flexibility and a deformable version. To conclude our argumentation, we propose several insights on the representations that our method is able to learn
Leroy, Rémy. "Deep Learning methods for monocular 3D vision systems." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG021.
Full textIn this thesis, we explore deep learning methods for monocular 3D vision systems, from image acquisition to processing. We first propose Pix2Point, a method for 3D point cloud prediction from a single image using context information, trained with an optimal transport loss. Pix2Point achieves a better coverage of the scenes when trained on sparse point clouds than monocular depth estimation methods, trained on sparse depth maps. Second, to exploit sensor depth cues, we propose a depth regression method from a defocused patch, which outperforms classification and direct regression, on simulated and real data. Finally, we tackle the design of a RGB-D monocular vision system for which the image is processed jointly by our defocus-based depth regression method and a simple image deblurring network. We propose an end-to-end multi-task optimisation framework of sensor and network parameters, that we apply to the focus optimisation for a chromatic lens. The optimisation landscape presents multiple optima, due to the depth regression task, while the deblurring task appears less sensitive to the focus. This thesis hence contains several contributions exploiting neural networks for monocular 3D estimation and paves the way towards end-to-end design of RGB-D systems
Fischer, Andreas, and Andreas Schäfer. "Untersuchungen zum mobilen 3D-Scannen unter Tage bei K+S." Technische Universitaet Bergakademie Freiberg Universitaetsbibliothek "Georgius Agricola", 2016. http://nbn-resolving.de/urn:nbn:de:bsz:105-qucosa-205693.
Full textAs part of a thesis at the Technical University of Freiberg, a basis for the analysis of 3D point clouds was set for refining the mine map automatically. Since 2015 studies and test measurements have been running to create the necessary 3D point clouds as economically as possible, by using an underground mobile scanning system. Below the different technical approaches will be presented as well as the results of the test measurements and the next planned steps
Hölscher, Phillip. "Deep Learning for estimation of fingertip location in 3-dimensional point clouds : An investigation of deep learning models for estimating fingertips in a 3D point cloud and its predictive uncertainty." Thesis, Linköpings universitet, Statistik och maskininlärning, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-176675.
Full textStella, Federico. "Learning a Local Reference Frame for Point Clouds using Spherical CNNs." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20197/.
Full textEl, Sayed Abdul Rahman. "Traitement des objets 3D et images par les méthodes numériques sur graphes." Thesis, Normandie, 2018. http://www.theses.fr/2018NORMLH19/document.
Full textSkin detection involves detecting pixels corresponding to human skin in a color image. The faces constitute a category of stimulus important by the wealth of information that they convey because before recognizing any person it is essential to locate and recognize his face. Most security and biometrics applications rely on the detection of skin regions such as face detection, 3D adult object filtering, and gesture recognition. In addition, saliency detection of 3D mesh is an important pretreatment phase for many computer vision applications. 3D segmentation based on salient regions has been widely used in many computer vision applications such as 3D shape matching, object alignments, 3D point-point smoothing, searching images on the web, image indexing by content, video segmentation and face detection and recognition. The detection of skin is a very difficult task for various reasons generally related to the variability of the shape and the color to be detected (different hues from one person to another, orientation and different sizes, lighting conditions) and especially for images from the web captured under different light conditions. There are several known approaches to skin detection: approaches based on geometry and feature extraction, motion-based approaches (background subtraction (SAP), difference between two consecutive images, optical flow calculation) and color-based approaches. In this thesis, we propose numerical optimization methods for the detection of skins color and salient regions on 3D meshes and 3D point clouds using a weighted graph. Based on these methods, we provide 3D face detection approaches using Linear Programming and Data Mining. In addition, we adapted our proposed methods to solve the problem of simplifying 3D point clouds and matching 3D objects. In addition, we show the robustness and efficiency of our proposed methods through different experimental results. 
Finally, we show the stability and robustness of our methods with respect to noise.
Bobkov, Dmytro [Verfasser], Eckehard [Akademischer Betreuer] Steinbach, Eckehard [Gutachter] Steinbach, and Klaus [Gutachter] Diepold. "Semantic understanding of 3D point clouds of indoor environments / Dmytro Bobkov ; Gutachter: Eckehard Steinbach, Klaus Diepold ; Betreuer: Eckehard Steinbach." München : Universitätsbibliothek der TU München, 2019. http://d-nb.info/1196791856/34.
Full textHuang, Rong [Verfasser], Uwe [Akademischer Betreuer] Stilla, Helmut [Gutachter] Mayer, and Uwe [Gutachter] Stilla. "Change detection of construction sites based on 3D point clouds / Rong Huang ; Gutachter: Helmut Mayer, Uwe Stilla ; Betreuer: Uwe Stilla." München : Universitätsbibliothek der TU München, 2021. http://d-nb.info/1240832850/34.
Full textJack, Dominic. "Deep learning approaches for 3D inference from monocular vision." Thesis, Queensland University of Technology, 2020. https://eprints.qut.edu.au/204267/1/Dominic_Jack_Thesis.pdf.
Full textBirdal, Tolga [Verfasser], Slobodan [Akademischer Betreuer] Ilic, Darius [Gutachter] Burschka, Yasutaka [Gutachter] Furukawa, and Slobodan [Gutachter] Ilic. "Geometric Methods for 3D Reconstruction from Large Point Clouds / Tolga Birdal ; Gutachter: Darius Burschka, Yasutaka Furukawa, Slobodan Ilic ; Betreuer: Slobodan Ilic." München : Universitätsbibliothek der TU München, 2018. http://d-nb.info/1179914082/34.
Full textFernandes, maligo Artur otavio. "Unsupervised Gaussian mixture models for the classification of outdoor environments using 3D terrestrial lidar data." Thesis, Toulouse, INSA, 2016. http://www.theses.fr/2016ISAT0053/document.
Full textThe processing of 3D lidar point clouds enable terrestrial autonomous mobile robots to build semantic models of the outdoor environments in which they operate. Such models are interesting because they encode qualitative information, and thus provide to a robot the ability to reason at a higher level of abstraction. At the core of a semantic modelling system, lies the capacity to classify the sensor observations. We propose a two-layer classi- fication model which strongly relies on unsupervised learning. The first, intermediary layer consists of a Gaussian mixture model. This model is determined in a training step in an unsupervised manner, and defines a set of intermediary classes which is a fine-partitioned representation of the environment. The second, final layer consists of a grouping of the intermediary classes into final classes that are interpretable in a considered target task. This grouping is determined by an expert during the training step, in a process which is supervised, yet guided by the intermediary classes. The evaluation is done for two datasets acquired with different lidars and possessing different characteristics. It is done quantitatively using one of the datasets, and qualitatively using another. The system is designed following the standard learning procedure, based on a training, a validation and a test steps. The operation follows a standard classification pipeline. The system is simple, with no requirement of pre-processing or post-processing stages
Contreras Samamé, Luis Federico. "SLAM collaboratif dans des environnements extérieurs." Thesis, Ecole centrale de Nantes, 2019. http://www.theses.fr/2019ECDN0012/document.
Full text
This thesis proposes a large-scale mapping model of urban and rural environments using 3D data acquired by several robots. The work contributes to the research field of mapping in two main ways. The first contribution is the creation of a new framework, CoMapping, which allows 3D maps to be generated cooperatively. This framework applies to outdoor environments with a decentralized approach. CoMapping's functionality includes the following elements: first, each robot builds a map of its environment in point cloud format. To do this, the mapping system was set up on computers dedicated to each vehicle, processing distance measurements from a 3D LiDAR moving in six degrees of freedom (6-DOF). The robots then share their local maps and individually merge the point clouds to improve their local map estimates. The second key contribution is the set of metrics used to analyse the map merging and sharing processes between robots. We present experimental results validating the CoMapping framework with its respective metrics. All tests were carried out in outdoor urban environments on the campus surrounding the École Centrale de Nantes, as well as in rural areas.
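The map-merging step described above can be sketched as follows: each robot's local point cloud is carried into a common frame by a rigid 6-DOF transform and the clouds are concatenated. The transform, the synthetic clouds, and the "growth" metric are all illustrative assumptions, not CoMapping's actual registration or metrics.

```python
import numpy as np

def rigid_transform(points, R, t):
    """Apply a 3x3 rotation R and translation t (3,) to an Nx3 point cloud."""
    return points @ R.T + t

rng = np.random.default_rng(1)
cloud_a = rng.uniform(size=(200, 3))   # robot A's local map (synthetic)
cloud_b = rng.uniform(size=(300, 3))   # robot B's local map (synthetic)

# Assumed pose of robot B's frame expressed in robot A's frame:
# a yaw rotation plus a translation (illustrative values only).
theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([5.0, 0.0, 0.0])

# Merge: transform B's cloud into A's frame and stack the two clouds.
merged = np.vstack([cloud_a, rigid_transform(cloud_b, R, t)])

# A trivial placeholder metric: how much the merged map grew relative
# to robot A's own map.
growth = merged.shape[0] / cloud_a.shape[0]
```

In practice the inter-robot transform would come from a registration step and duplicate points would be filtered (e.g. by voxel downsampling); both are omitted here for brevity.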
Deng, Haowen [Verfasser], Slobodan [Akademischer Betreuer] Ilic, Luigi [Gutachter] Di Stefano, and Slobodan [Gutachter] Ilic. "Learned 3D Local Features for Rigid Pose Estimation on Point Clouds / Haowen Deng ; Gutachter: Luigi Di Stefano, Slobodan Ilic ; Betreuer: Slobodan Ilic." München : Universitätsbibliothek der TU München, 2020. http://d-nb.info/1223616924/34.
Full text
Schilling, Anita. "Automatic Retrieval of Skeletal Structures of Trees from Terrestrial Laser Scanner Data." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-155698.
Full text
Research into the forest ecosystem plays a major role today, particularly with regard to the sustainable use of renewable resources and to climate change. The exact description of the three-dimensional structure of a tree is important for the forest sciences and bioclimatology, but also for commercial applications. Conventional methods for measuring geometric plant features are labour-intensive and time-consuming; for a precise analysis, trees must be felled, which is often undesirable. Terrestrial laser scanning (TLS) offers a particularly attractive tool here because of its contactless measuring principle. The object geometry is rendered as a 3D point cloud. Based on this, the goal of this work is the automatic determination of the spatial tree structure from TLS data. The focus is on forest scenes with comparatively high stand density and the numerous occlusions that result. Evaluating these TLS data, which exhibit varying levels of detail, is a major challenge. Two fully automatic methods for generating skeletal structures of scanned trees, which have complementary properties, are presented. In the first method, the complete skeleton of a tree is determined from the 3D data of registered scans. The branch structure is derived from a voxel-space representation by searching for paths from branch tips to the trunk; the trunk is reconstructed from the 3D points beforehand. The tree skeleton is produced as a 3D line graph. For every measured point, a scan provides not only 3D coordinates and distance values but also 2D indices derived from the intensity image. The second method, which operates on single scans, exploits this. In addition, a novel concept for managing TLS data is described, which facilitated the research work.
First, the depth image is partitioned into components. A procedure for determining component contours is presented that is able to follow inner depth discontinuities. From the contour information, a 2D skeleton is generated, which is used to decompose the component into subcomponents. From the set of 3D points associated with a subcomponent, a principal curve is computed. The skeletal structure of a component in the depth image is summarised as a set of polylines. The objective evaluation of the results remains an unsolved problem because the task itself is not clearly definable: there is no unambiguous definition of what the true skeleton with respect to a given point set should be. The correctness of the methods can therefore not be described quantitatively, and the results can only be assessed visually. Furthermore, the characteristics of both methods are discussed in depth, and experimental results for both are presented. The first method efficiently determines a tree skeleton that approximates the branch structure; the level of detail is mainly determined by the voxel space, which is why smaller branches are not reproduced adequately. The second method reconstructs partial skeletons of a tree with high fidelity; it is sensitive to noise in the contour, yet the results are promising, and there are numerous possibilities for improving its robustness. Combining the strengths of both presented methods should be investigated further and may lead to a more robust approach for automatically generating complete tree skeletons from TLS data.
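The first method's core idea, tracing paths from branch tips back to the trunk through a voxel representation, can be sketched on synthetic data. The point cloud, voxel size, 26-connectivity, and the simple root/tip heuristics below are all illustrative assumptions, not the thesis's actual algorithm.

```python
import numpy as np
from collections import deque

# Synthetic "tree": a vertical trunk plus one slanted branch, with noise.
rng = np.random.default_rng(2)
trunk = np.column_stack([np.zeros(200), np.zeros(200), np.linspace(0, 5, 200)])
branch = np.column_stack([np.linspace(0, 2, 100), np.zeros(100),
                          np.linspace(3, 5, 100)])
points = np.vstack([trunk, branch]) + rng.normal(scale=0.01, size=(300, 3))

# Voxelize: the set of occupied voxel indices stands in for the voxel space.
voxel = 0.25
occupied = {tuple(v) for v in np.floor(points / voxel).astype(int)}

def bfs_parents(start):
    """Breadth-first search over 26-connected occupied voxels,
    recording each voxel's predecessor on the path from `start`."""
    parents = {start: None}
    queue = deque([start])
    while queue:
        v = queue.popleft()
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    n = (v[0] + dx, v[1] + dy, v[2] + dz)
                    if n in occupied and n not in parents:
                        parents[n] = v
                        queue.append(n)
    return parents

root = min(occupied, key=lambda v: v[2])   # lowest voxel as trunk base
parents = bfs_parents(root)
tip = max(parents, key=lambda v: v[0])     # crude tip pick: farthest in x
path = []                                  # backtrack tip -> root
while tip is not None:
    path.append(tip)
    tip = parents[tip]
```

Collecting such tip-to-root paths for all detected branch tips and merging them would yield a 3D line graph of the kind the abstract describes; the voxel size directly limits how small a branch can be resolved, matching the limitation noted above.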