
Theses on the topic "Clovis points"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 theses for your research on the topic "Clovis points".

Next to each source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication in PDF format and read its abstract online whenever it is available in the metadata.

Explore theses on a wide variety of disciplines and organize your bibliography correctly.

1

Richard, Andrew Justin. "Clovis and Folsom Functionality Comparison". Thesis, The University of Arizona, 2015. http://hdl.handle.net/10150/556853.

Full text
Abstract
This thesis uses experimental archaeology as a method to discover the functional differences between Clovis and Folsom projectile points filtered through a behavioral ecology paradigm. Porcelain is used as a substitute for tool stone for its consistency and control value. The experiment was devised to find out which technology, Clovis or Folsom, was more functional, had a higher curation rate and contributed to increased group subsistence. Paleoindian tool technology transitions can be seen as indicators for adaptation triggered by environmental conditions and changes in subsistence. Folsom technology, when compared to Clovis technology, was functionally superior in performance, refurbishment and curation. Technological design choices made by Folsom people were engineered toward producing a more functional tool system as a sustainable form of risk management. The Clovis Folsom Breakage Experiment indicates that Folsom tool technology was specifically adapted to bison subsistence based on increased functionality and curation.
APA, Harvard, Vancouver, ISO, and other styles
2

Prasciunas, Mary M. "Clovis First? An Analysis of Space, Time, and Technology". Laramie, Wyo.: University of Wyoming, 2008. http://proquest.umi.com/pqdweb?did=1594497451&sid=1&Fmt=2&clientId=18949&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Werner, Angelia N. "Experimental assessment of proximal-lateral edge grinding on haft damage using replicated Clovis points". Kent State University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=kent1492848811526633.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Giraudot, Simon. "Reconstruction robuste de formes à partir de données imparfaites". Thesis, Nice, 2015. http://www.theses.fr/2015NICE4024/document.

Full text
Abstract
Over the last two decades, a large number of reliable algorithms for surface reconstruction from point clouds have been developed. However, they often require additional attributes such as normals or visibility, and robustness to defect-laden data is often achieved through strong assumptions and remains a scientific challenge. In this thesis we focus on defect-laden, unoriented point clouds and contribute two new reconstruction methods designed for two specific classes of output surfaces. The first method is noise-adaptive and specialized to smooth, closed shapes. It takes as input a point cloud with variable noise and outliers, and comprises three main steps. First, we compute a novel noise-adaptive distance function to the inferred shape, which relies on the assumption that this shape is a smooth submanifold of known dimension. Second, we estimate the sign and confidence of the function at a set of seed points, through minimizing a quadratic energy expressed on the edges of a uniform random graph. Third, we compute a signed implicit function through a random walker approach with soft constraints chosen as the most confident seed points. The second method generates piecewise-planar surfaces, possibly non-manifold, represented by low-complexity triangle surface meshes. Through multiscale region growing of Hausdorff-error-bounded convex planar primitives, we infer both shape and connectivity of the input and generate a simplicial complex that efficiently captures large flat regions as well as small features and boundaries. Imposing convexity of primitives is shown to be crucial to both the robustness and efficacy of our approach.
APA, Harvard, Vancouver, ISO, and other styles
5

Truong, Quoc Hung. "Knowledge-based 3D point clouds processing". PhD thesis, Université de Bourgogne, 2013. http://tel.archives-ouvertes.fr/tel-00977434.

Full text
Abstract
The modeling of real-world scenes through capturing 3D digital data has proven to be both useful and applicable in a variety of industrial and surveying applications. Entire scenes are generally captured by laser scanners and represented by large unorganized point clouds, possibly along with additional photogrammetric data. A typical challenge in processing such point clouds and data lies in detecting and classifying objects that are present in the scene. In addition to the presence of noise, occlusions and missing data, such tasks are often hindered by the irregularity of the capturing conditions both within the same dataset and from one dataset to another. Given the complexity of the underlying problems, recent processing approaches attempt to exploit semantic knowledge for identifying and classifying objects. In the present thesis, we propose a novel approach that makes use of intelligent knowledge management strategies for the processing of 3D point clouds as well as identifying and classifying objects in digitized scenes. Our approach extends the use of semantic knowledge to all stages of the processing, including the guidance of the individual data-driven processing algorithms. The complete solution consists in a multi-stage iterative concept based on three factors: the modeled knowledge, the package of algorithms, and a classification engine. The goal of the present work is to select and guide algorithms following an adaptive and intelligent strategy for detecting objects in point clouds. Experiments with two case studies demonstrate the applicability of our approach. The studies were carried out on scans of the waiting area of an airport and along the tracks of a railway. In both cases the goal was to detect and identify objects within a defined area. Results show that our approach succeeded in identifying the objects of interest while using various data types.
APA, Harvard, Vancouver, ISO, and other styles
6

König, Sören and Stefan Gumhold. "Robust Surface Reconstruction from Point Clouds". Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-131561.

Full text
Abstract
The problem of generating a surface triangulation from a set of points with normal information arises in several mesh processing tasks like surface reconstruction or surface resampling. In this paper we present a surface triangulation approach which is based on local 2D Delaunay triangulations in tangent space. Our contribution is the extension of this method to surfaces with sharp corners and creases. We demonstrate the robustness of the method on difficult meshing problems that include nearby sheets, self-intersecting non-manifold surfaces and noisy point samples.
APA, Harvard, Vancouver, ISO, and other styles
7

Filho, Carlos André Braile Przewodowski. "Feature extraction from 3D point clouds". Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-30072018-111718/.

Full text
Abstract
Computer vision is a research field in which images are the main object of study. One of its categories of problems is shape description. Object classification is one important example of applications using shape descriptors. Usually, these processes were performed on 2D images. With the large-scale development of new technologies and the affordable price of equipment that generates 3D images, computer vision has adapted to this new scenario, expanding the classic 2D methods to 3D. However, it is important to highlight that 2D methods are mostly dependent on the variation of illumination and color, while 3D sensors provide depth, structure/3D shape and topological information beyond color. Thus, different methods of shape descriptors and robust attribute extraction were studied, from which new attribute extraction methods have been proposed and described based on 3D data. The results obtained from well-known public datasets have demonstrated their efficiency and that they compete with other state-of-the-art methods in this area: the RPHSD (a method proposed in this dissertation) achieved 85.4% accuracy on the University of Washington RGB-D dataset, the second-best accuracy on this dataset; the COMSD (another proposed method) achieved 82.3% accuracy, standing at the seventh position in the rank; and the CNSD (another proposed method) at the ninth position. Also, the RPHSD and COMSD methods have relatively small processing complexity, so they achieve high accuracy with low computing time.
APA, Harvard, Vancouver, ISO, and other styles
8

König, Sören and Stefan Gumhold. "Robust Surface Reconstruction from Point Clouds". Technische Universität Dresden, 2013. https://tud.qucosa.de/id/qucosa%3A27391.

Full text
Abstract
The problem of generating a surface triangulation from a set of points with normal information arises in several mesh processing tasks like surface reconstruction or surface resampling. In this paper we present a surface triangulation approach which is based on local 2D Delaunay triangulations in tangent space. Our contribution is the extension of this method to surfaces with sharp corners and creases. We demonstrate the robustness of the method on difficult meshing problems that include nearby sheets, self-intersecting non-manifold surfaces and noisy point samples.
APA, Harvard, Vancouver, ISO, and other styles
9

Aronsson, Oskar and Julia Nyman. "Boundary Representation Modeling from Point Clouds". Thesis, KTH, Bro- och stålbyggnad, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-278543.

Full text
Abstract
Inspections of bridges are today performed ocularly by an inspector at arm's length distance to evaluate damages and to assess the bridge's current condition. Ocular inspections often require specialized equipment to aid the inspector to reach all parts of the bridge. The current state of practice for bridge inspection is therefore considered to be time-consuming, costly, and a safety hazard for the inspector. The purpose of this thesis has been to develop a method for automated modeling of bridges from point cloud data. The point clouds have been created through photogrammetry from a collection of images acquired with an Unmanned Aerial Vehicle (UAV). This thesis has been an attempt to contribute to the long-term goal of making bridge inspections more efficient by using UAV technology. Several methods for the identification of structural components in point clouds have been evaluated. Based on this, a method has been developed to identify planar surfaces using the model-fitting method Random Sample Consensus (RANSAC). The developed method consists of a set of algorithms written in the programming language Python. The method utilizes intersection points between planes as well as the k-Nearest-Neighbor (k-NN) concept to identify the vertices of the structural elements. The method has been tested both for simulated point cloud data as well as for real bridges, where the images were acquired with a UAV. The results from the simulated point clouds showed that the vertices were modeled with a mean deviation of 0.13–0.34 mm compared to the true vertex coordinates. For a point cloud of a rectangular column, the algorithms identified all relevant surfaces and were able to reconstruct it with a deviation of less than 2 % for the width and length. The method was also tested on two point clouds of real bridges. The algorithms were able to identify many of the relevant surfaces, but the complexity of the geometries resulted in inadequately reconstructed models.
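The RANSAC plane detection this abstract relies on can be illustrated with a minimal sketch. This is a generic NumPy illustration of the technique, not the thesis's Python code; the function name and parameters are hypothetical.

```python
import numpy as np

def ransac_plane(points, n_iter=200, threshold=0.01, rng=None):
    """Fit a plane to a 3D point cloud with RANSAC.
    Returns ((normal, d), inlier_mask) for the plane n.x + d = 0."""
    rng = np.random.default_rng(rng)
    best_model, best_inliers = None, None
    for _ in range(n_iter):
        # 1. Sample three distinct points and form a candidate plane.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:          # degenerate (collinear) sample
            continue
        n /= norm
        d = -n @ p0
        # 2. Count inliers: points within `threshold` of the plane.
        dist = np.abs(points @ n + d)
        inliers = dist < threshold
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_model, best_inliers = (n, d), inliers
    return best_model, best_inliers
```

In a pipeline like the one described, this step would be run repeatedly, removing each detected plane's inliers before searching for the next surface.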
APA, Harvard, Vancouver, ISO, and other styles
10

Otepka, Johannes, Sajid Ghuffar, Christoph Waldhauser, Ronald Hochreiter and Norbert Pfeifer. "Georeferenced Point Clouds: A Survey of Features and Point Cloud Management". MDPI AG, 2013. http://dx.doi.org/10.3390/ijgi2041038.

Full text
Abstract
This paper presents a survey of georeferenced point clouds. Concentration is put, on the one hand, on features that originate in the measurement process itself and on features derived by processing the point cloud. On the other hand, approaches for the processing of georeferenced point clouds are reviewed. This includes the data structures, but also spatial processing concepts. We suggest a categorization of features into levels that reflect the amount of processing. Point clouds are found across many disciplines, which is reflected in the versatility of the literature suggesting specific features. (authors' abstract)
APA, Harvard, Vancouver, ISO, and other styles
11

Hedlund, Tobias. "Registration of multiple ToF camera point clouds". Thesis, Umeå University, Department of Physics, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-34952.

Full text
Abstract

Buildings, maps, objects, et cetera can be modeled using a computer or reconstructed in 3D from data captured by different kinds of cameras or laser scanners. This thesis concerns the latter. The recent improvements of Time-of-Flight cameras have brought a number of new interesting research areas to the surface. Registration of several ToF camera point clouds is such an area.

A literature study has been made to summarize the research done in the area over the last two decades. The most popular method for registering point clouds, namely the Iterative Closest Point (ICP), has been studied. In addition to this, an error relaxation algorithm was implemented to minimize the accumulated error of the sequential pairwise ICP.

A few different real-world test scenarios and one scenario with synthetic data were constructed. These data sets were registered with varying outcomes. The obtained camera poses from the sequential ICP were improved by loop closing and error relaxation.

The results illustrate the importance of having good initial guesses on the relative transformations to obtain a correct model. Furthermore, the strengths and weaknesses of the sequential ICP and the utilized error relaxation method are shown.
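The pairwise ICP loop the thesis studies can be sketched in a few lines. This is a generic point-to-point variant with brute-force nearest-neighbour correspondences and a Kabsch (SVD) alignment step, written in NumPy for illustration; it is not the thesis's implementation and has none of its loop closing or error relaxation.

```python
import numpy as np

def best_rigid_transform(src, dst):
    # Kabsch: least-squares rotation R and translation t mapping src onto dst.
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, n_iter=30):
    """Pairwise point-to-point ICP; returns src aligned onto dst."""
    cur = src.copy()
    for _ in range(n_iter):
        # Nearest-neighbour correspondences (brute force for clarity).
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur
```

As the abstract notes, this only converges to the correct model when the initial relative transformation is a good guess; sequential pairwise runs also accumulate error, which is what the error relaxation step addresses.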

APA, Harvard, Vancouver, ISO, and other styles
12

Stålberg, Martin. "Reconstruction of trees from 3D point clouds". Thesis, Uppsala universitet, Avdelningen för systemteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-316833.

Full text
Abstract
The geometrical structure of a tree can consist of thousands, even millions, of branches, twigs and leaves in complex arrangements. The structure contains a lot of useful information and can be used for example to assess a tree's health or calculate parameters such as total wood volume or branch size distribution. Because of the complexity, capturing the structure of an entire tree used to be nearly impossible, but the increased availability and quality of particularly digital cameras and Light Detection and Ranging (LIDAR) instruments is making it increasingly possible. A set of digital images of a tree, or a point cloud of a tree from a LIDAR scan, contains a lot of data, but the information about the tree structure has to be extracted from this data through analysis. This work presents a method of reconstructing 3D models of trees from point clouds. The model is constructed from cylindrical segments which are added one by one. Bayesian inference is used to determine how to optimize the parameters of model segment candidates and whether or not to accept them as part of the model. A Hough transform for finding cylinders in point clouds is presented, and used as a heuristic to guide the proposals of model segment candidates. Previous related works have mainly focused on high density point clouds of sparse trees, whereas the objective of this work was to analyze low resolution point clouds of dense almond trees. The method is evaluated on artificial and real datasets and works rather well on high quality data, but performs poorly on low resolution data with gaps and occlusions.
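Fitting cylindrical segments, as described above, hinges on scoring how well a candidate cylinder explains nearby points. A minimal sketch of that residual in NumPy (generic and hypothetical names; not the thesis's Hough transform or Bayesian machinery, which builds on such a distance):

```python
import numpy as np

def cylinder_residuals(points, axis_pt, axis_dir, radius):
    """Orthogonal distance of each point to the surface of a cylinder
    with axis through `axis_pt` along `axis_dir` and the given radius."""
    d = axis_dir / np.linalg.norm(axis_dir)
    rel = points - axis_pt
    # Component of each offset orthogonal to the axis.
    ortho = rel - np.outer(rel @ d, d)
    return np.abs(np.linalg.norm(ortho, axis=1) - radius)
```

A fitting step would then accept or optimize a candidate segment according to how small these residuals are over the points it claims.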
APA, Harvard, Vancouver, ISO, and other styles
13

Thomas, Hugues. "Apprentissage de nouvelles représentations pour la sémantisation de nuages de points 3D". Thesis, Paris Sciences et Lettres (ComUE), 2019. http://www.theses.fr/2019PSLEM048/document.

Full text
Abstract
In recent years, new technologies have allowed the acquisition of large and precise 3D scenes as point clouds. They have opened up new applications like self-driving vehicles or infrastructure monitoring that rely on efficient large-scale point cloud processing. Convolutional deep learning methods cannot be directly used with point clouds. In the case of images, convolutional filters brought the ability to learn new representations, which were previously hand-crafted in older computer vision methods. Following the same line of thought, we present in this thesis a study of hand-crafted representations previously used for point cloud processing. We propose several contributions, to serve as basis for the design of a new convolutional representation for point cloud processing. They include a new definition of multiscale radius neighborhoods, a comparison with multiscale k-nearest neighbors, a new active learning strategy, the semantic segmentation of large-scale point clouds, and a study of the influence of density in multiscale representations. Following these contributions, we introduce the Kernel Point Convolution (KPConv), which uses radius neighborhoods and a set of kernel points to play the role of the kernel pixels in image convolution. Our convolutional networks outperform state-of-the-art semantic segmentation approaches in almost any situation. In addition to these strong results, we designed KPConv with great flexibility and a deformable version. To conclude, we propose several insights on the representations that our method is able to learn.
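The core operation behind KPConv, kernel points weighting neighbour features by their proximity, can be sketched for a single output point. A simplified NumPy sketch of the rigid-kernel case, assuming a linear correlation between neighbours and kernel points; the names and the sigma value are illustrative, not taken from the thesis code:

```python
import numpy as np

def kp_conv(center, neighbors, feats, kernel_pts, weights, sigma=0.3):
    """One output feature vector of a rigid kernel point convolution.
    neighbors:  (n, 3) points in the radius neighbourhood of `center`
    feats:      (n, c_in) their input features
    kernel_pts: (k, 3) kernel point positions relative to the center
    weights:    (k, c_in, c_out) one weight matrix per kernel point."""
    rel = neighbors - center
    # Linear correlation between each neighbour and each kernel point.
    dist = np.linalg.norm(rel[:, None, :] - kernel_pts[None, :, :], axis=-1)
    corr = np.maximum(0.0, 1.0 - dist / sigma)          # (n, k)
    # Sum contributions over neighbours and kernel points.
    return np.einsum('nk,nc,kcd->d', corr, feats, weights)
```

The kernel points thus play the role of pixel offsets in an image convolution: each one carries its own weight matrix, applied to neighbours in proportion to how close they fall to it.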
APA, Harvard, Vancouver, ISO, and other styles
14

Trillos, Nicolás Garcia. "Variational Limits of Graph Cuts on Point Clouds". Research Showcase @ CMU, 2015. http://repository.cmu.edu/dissertations/518.

Full text
Abstract
The main goal of this thesis is to develop tools that enable us to study the convergence of minimizers of functionals defined on point clouds towards minimizers of equivalent functionals in the continuum; the point clouds we consider are samples of a ground-truth distribution. In particular, we investigate approaches to clustering based on minimizing objective functionals defined on proximity graphs of the given sample. Our focus is on functionals based on graph cuts like the Cheeger and ratio cuts. We show that minimizers of these cuts converge as the sample size increases to a minimizer of a corresponding continuum cut (which partitions the ground-truth distribution). Moreover, we obtain sharp conditions on how the connectivity radius can be scaled with respect to the number of sample points for the consistency to hold. We provide results for two-way and for multi-way cuts. The results are obtained by using the notion of Γ-convergence and an appropriate choice of metric which allows us to compare functions defined on point clouds with functions defined on continuous domains.
APA, Harvard, Vancouver, ISO, and other styles
15

Salman, Nader. "From 3D point clouds to feature preserving meshes". Nice, 2010. http://www.theses.fr/2010NICE4086.

Full text
Abstract
Most current surface reconstruction algorithms target high-quality data and can produce intractable results when used with point clouds acquired through affordable 3D acquisition methods. Our first contribution is a surface reconstruction algorithm for stereo vision data that copes with the data's fuzziness using information from both the acquired 3D point cloud and the calibrated images. After pre-processing the point cloud, the algorithm builds, using the calibrated images, a 3D triangle soup consistent with the surface of the scene through a combination of visibility and photo-consistency constraints. A mesh is then computed from the triangle soup using a combination of restricted Delaunay triangulation and Delaunay refinement methods. Our second contribution is an algorithm that builds, given a 3D point cloud sampled on a surface, an approximating surface mesh with an accurate representation of surface sharp edges, providing an enhanced trade-off between accuracy and mesh complexity. We first extract from the point cloud an approximation of the sharp edges of the underlying surface. Then a feature-preserving variant of a Delaunay refinement process generates a mesh combining a faithful representation of the extracted sharp edges with an implicit surface obtained from the point cloud. The method is shown to be flexible, robust to noise and tunable to adapt to the scale of the targeted mesh and to a user-defined sizing field. We demonstrate the effectiveness of both contributions on a variety of scenes and models acquired with different hardware and show results that compare favourably, in terms of accuracy, with the current state of the art.
APA, Harvard, Vancouver, ISO, and other styles
16

Tosteberg, Patrik. "Semantic Segmentation of Point Clouds Using Deep Learning". Thesis, Linköpings universitet, Datorseende, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-136793.

Full text
Abstract
In computer vision, it has in recent years become more popular to use point clouds to represent 3D data. To understand what a point cloud contains, methods like semantic segmentation can be used. Semantic segmentation is the problem of segmenting images or point clouds and understanding what the different segments are. Applications for semantic segmentation of point clouds include e.g. autonomous driving, where the car needs information about objects in its surroundings. Our approach to the problem is to project the point clouds into 2D virtual images using the Katz projection. Then we use pre-trained convolutional neural networks to semantically segment the images. To get the semantically segmented point clouds, we project the scores from the segmentation back into the point cloud. Our approach is evaluated on the Semantic3D dataset. We find our method is comparable to state-of-the-art, without any fine-tuning on the Semantic3D dataset.
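The "project scores back into the point cloud" step can be illustrated with a simple camera lookup: each 3D point inherits the label of the pixel it projects to. A generic NumPy sketch using a plain pinhole model; the intrinsics f, cx, cy and the label image are hypothetical, and the thesis itself uses the Katz projection rather than this simple model.

```python
import numpy as np

def labels_from_image(points, label_img, f, cx, cy):
    """Assign each 3D point the semantic label of the pixel it projects to
    (pinhole camera at the origin looking down +z)."""
    z = points[:, 2]
    u = np.round(f * points[:, 0] / z + cx).astype(int)   # column
    v = np.round(f * points[:, 1] / z + cy).astype(int)   # row
    h, w = label_img.shape
    valid = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    labels = np.full(len(points), -1)    # -1 = not visible in the image
    labels[valid] = label_img[v[valid], u[valid]]
    return labels
```

Repeating this over several virtual viewpoints and merging the per-view labels gives every visible point a semantic class.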
APA, Harvard, Vancouver, ISO, and other styles
17

Avdiu, Blerta. "Matching Feature Points in 3D World". Thesis, Tekniska Högskolan, Högskolan i Jönköping, JTH, Data- och elektroteknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-23049.

Texto completo
Resumen
This thesis work deals with one of the most topical problems in the computer vision field, scene understanding, using matching of 3D feature point images. The objective is to make use of Saab's latest breakthrough in extraction of 3D feature points to identify the best alignment of at least two 3D feature point images. The thesis gives a theoretical overview of the latest algorithms used for feature detection, description and matching. The work continues with a brief description of the simultaneous localization and mapping (SLAM) technique, ending with a case study evaluating the newly developed software solution for SLAM, called slam6d. Slam6d is a tool that registers point clouds into a common coordinate system, performing automatic, highly accurate registration of laser scans. In the case study, the use of slam6d is extended to registering 3D feature point images extracted from a stereo camera, and the results of the registration are analyzed. We start with registration of a single 3D feature point image captured from a stationary image sensor and continue with registration of multiple images following a trail. The conclusion from the case study is that slam6d can register non-laser-scan feature point images with high accuracy in the single-image case, but it introduces some overlap in the results in the case of multiple images following a trail.
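Registering point sets into a common coordinate system, as slam6d does, ultimately rests on a closed-form rigid-alignment step; a 2D sketch with known correspondences (not slam6d's actual 6-DoF implementation) might look like this:

```python
import math

def register_2d(src, dst):
    """Closed-form least-squares rigid alignment (rotation + translation)
    of two 2D point lists with known one-to-one correspondences.
    This is the inner step that iterative registration repeats."""
    n = len(src)
    csx = sum(p[0] for p in src) / n
    csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n
    cdy = sum(p[1] for p in dst) / n
    sxx = sxy = 0.0
    for (x, y), (u, v) in zip(src, dst):
        ax, ay = x - csx, y - csy          # centered source point
        bx, by = u - cdx, v - cdy          # centered destination point
        sxx += ax * bx + ay * by           # dot-product accumulator
        sxy += ax * by - ay * bx           # cross-product accumulator
    theta = math.atan2(sxy, sxx)           # optimal rotation angle
    c, s = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - s * csy)         # translation after rotation
    ty = cdy - (s * csx + c * csy)
    return theta, tx, ty

src = [(0, 0), (1, 0), (0, 1)]
dst = [(-y + 2, x + 3) for x, y in src]    # rotate 90 degrees, translate (2, 3)
theta, tx, ty = register_2d(src, dst)
print(round(math.degrees(theta)), round(tx), round(ty))  # 90 2 3
```

With unknown correspondences, ICP alternates this step with nearest-neighbour matching until convergence.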
Los estilos APA, Harvard, Vancouver, ISO, etc.
18

Anagnostopoulos, Ioannis. "Generating As-Is BIMs of existing buildings : from planar segments to spaces". Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/281699.

Texto completo
Resumen
As-Is Building Information Models aid in the management, maintenance and renovation of existing buildings. However, most existing buildings do not have an accurate geometric depiction of their As-Is conditions. The process of generating As-Is models of existing structures involves practitioners, who manually convert Point Cloud Data (PCD) into semantically meaningful 3D models. This process requires a significant amount of manual effort and time. Previous research has been able to model objects by segmenting the point clouds into planes and classifying each one separately into classes such as walls, floors and ceilings; this is insufficient for modelling, as BIM objects are composed of multiple planes that form volumetric objects. This thesis introduces a novel method that focuses on the geometric creation of As-Is BIMs with enriched information. It tackles the problem by detecting objects, modelling them and enriching the model with spaces and object adjacencies from PCD. The first step of the proposed method detects objects by exploiting the relationships the segments should satisfy to be grouped into one object. It further proposes a method for detecting slabs with variations in height by finding local maxima in the point density. The second step models the geometry of walls and finally enriches the model with closed spaces encoded in the Industry Foundation Classes (IFC) standard. The method uses the point cloud density of detected walls to determine their width by projecting the wall in two directions and finding the edges with the highest density. It identifies adjacent walls by finding gaps or intersections between walls and exploits wall adjacency to correct their boundaries, creating an accurate 3D geometry of the model. Finally, the method detects closed spaces using a shortest-path algorithm. The method was tested on three original PCDs representing office floors. It detects objects of the classes walls, floors and ceilings with an accuracy of approximately 96%, and the precision and recall for room detection were found to be 100%.
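The wall-width idea above, projecting wall points along the normal and picking the two densest edges, can be caricatured in one dimension (bin size and data are invented for illustration, and the real method works on full 3D density projections):

```python
def wall_width(offsets, bin_size=0.01):
    """Estimate wall thickness from point offsets (metres) along the
    wall normal: histogram the offsets and take the distance between
    the two densest bins, i.e. the two scanned wall faces."""
    hist = {}
    for d in offsets:
        b = round(d / bin_size)
        hist[b] = hist.get(b, 0) + 1
    # the two bins with the highest point density are the wall faces
    (b1, _), (b2, _) = sorted(hist.items(), key=lambda kv: -kv[1])[:2]
    return abs(b1 - b2) * bin_size

# Synthetic wall: two dense faces 0.20 m apart plus a few interior hits
offsets = [0.0] * 50 + [0.2] * 45 + [0.05, 0.1, 0.15]
print(round(wall_width(offsets), 2))  # 0.2
```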
Los estilos APA, Harvard, Vancouver, ISO, etc.
19

Graehling, Quinn R. "Feature Extraction Based Iterative Closest Point Registration for Large Scale Aerial LiDAR Point Clouds". University of Dayton / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1607380713807017.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
20

Bucksch, Alexander [Verfasser]. "Revealing the skeleton from imperfect point clouds / Alexander Bucksch". München : Verlag Dr. Hut, 2011. http://d-nb.info/1011442027/34.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
21

Asghar, Umair. "Landslide mapping from analysis of UAV-SFM point clouds". Thesis, University of British Columbia, 2017. http://hdl.handle.net/2429/63604.

Texto completo
Resumen
In recent years, unmanned aerial vehicles (UAVs) equipped with digital cameras have emerged as an inexpensive alternative to light detection and ranging (LiDAR) for mapping landslides. However, mapping with UAVs typically requires a ground control point (GCP) network to achieve higher mapping accuracies, and complex natural environments often limit the number as well as the proper distribution of GCPs. In the first part of this study, aerial imagery acquired with a quadrotor UAV was processed using the structure-from-motion (SfM) technique to produce a three-dimensional point cloud of a large landslide involving multiple steep slopes and dense tree cover. The resulting point cloud was georeferenced with six different configurations of GCPs measured with a real-time kinematic GNSS receiver to test the influence of the number and the distribution of GCPs on mapping accuracy. Horizontal and vertical mapping accuracies of 0.058 m and 0.044 m, respectively, were achieved for the most accurate GCP configuration. A separate point cloud comparison was performed on the georeferenced point clouds to assess the effect of varying topography and tree cover on mapping accuracy. The 3D change in the natural terrain measured over a 1-year period from July 2016 to July 2017 showed movements ranging from ±0.4 m to over ±1 m at the toe of the landslide; other parts of the landslide either remained inactive or moved less than 0.1 m. The second part of this thesis involved an accuracy comparison of five different open-source algorithms, originally developed for LiDAR data, for classification of the UAV-SfM point clouds. The influences of terrain slope, vegetation and point densities, and difficult-to-filter features on classification accuracy were also evaluated. The CSF and MCC algorithms produced the lowest overall errors (4%), closely followed by LASground and FUSION (5%). All algorithms suffered in areas with densely vegetated steep slopes, understory vegetation, low point density, and low objects. Although any of the tested algorithms, along with careful selection of input parameters, can be used to accurately classify UAV-SfM point clouds, CSF is recommended as it is computationally efficient, does not require any preprocessing, and can process very large point clouds (>50 million points).
Applied Science, Faculty of
Engineering, School of (Okanagan)
Graduate
Los estilos APA, Harvard, Vancouver, ISO, etc.
22

Staniaszek, Michal. "Feature-Feature Matching For Object Retrieval in Point Clouds". Thesis, KTH, Datorseende och robotik, CVAP, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-170475.

Texto completo
Resumen
In this project, we implement a system for retrieving instances of objects from point clouds using feature based matching techniques. The target dataset of point clouds consists of approximately 80 full scans of office rooms over a period of one month. The raw clouds are preprocessed to remove regions which are unlikely to contain objects. Using locations determined by one of several possible interest point selection methods, one of a number of descriptors is extracted from the processed clouds. Descriptors from a target cloud are compared to those from a query object using a nearest neighbour approach. The nearest neighbours of each descriptor in the query cloud are used to vote for the position of the object in a 3D grid overlaid on the room cloud. We apply clustering in the voting space and rank the clusters according to the number of votes they contain. The centroid of each of the clusters is used to extract a region from the target cloud which, in the ideal case, corresponds to the query object. We perform an experimental evaluation of the system using various parameter settings in order to investigate factors affecting the usability of the system, and the efficacy of the system in retrieving correct objects. In the best case, we retrieve approximately 50% of the matching objects in the dataset. In the worst case, we retrieve only 10%. We find that the best approach is to use a uniform sampling over the room clouds, and to use a descriptor which factors in both colour and shape information to describe points.
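The voting scheme described above can be sketched with a dictionary-based 3D accumulator (cell size, offsets and the match list are hypothetical; real votes would come from descriptor nearest-neighbour search):

```python
from collections import Counter

def vote_for_object(matches, cell=1.0):
    """Each feature match casts a vote for the object's position: the
    feature's room location shifted by that feature's offset from the
    query-object centroid. The densest grid cell wins."""
    acc = Counter()
    for room_pt, query_offset in matches:
        vote = tuple(int((r - q) // cell)
                     for r, q in zip(room_pt, query_offset))
        acc[vote] += 1
    return acc.most_common(1)[0]

# Hypothetical matches: three consistent votes near (5, 2, 0), one outlier
matches = [
    ((5.5, 2.25, 0.5), (0.5, 0.25, 0.5)),
    ((5.75, 2.5, 0.25), (0.75, 0.5, 0.25)),
    ((5.25, 2.75, 0.5), (0.25, 0.75, 0.5)),
    ((9.0, 9.0, 9.0), (0.0, 0.0, 0.0)),
]
cell_idx, votes = vote_for_object(matches)
print(cell_idx, votes)  # (5, 2, 0) 3
```

Ranking the accumulator cells by vote count mirrors the cluster ranking step in the abstract.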
Los estilos APA, Harvard, Vancouver, ISO, etc.
23

Strandell, Ebbe. "Computational Geometry and Surface Reconstruction from Unorganized Point Clouds". Thesis, Linköpings universitet, Institutionen för teknik och naturvetenskap, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-96279.

Texto completo
Resumen
This thesis addresses the problem of constructing virtual representations of surfaces known only as clouds of unstructured points in space. This problem is related to many areas including computer graphics, image processing, computer vision, reverse engineering and geometry studies. Data sets can be acquired from a wide range of sources including computed tomography (CT), magnetic resonance imaging (MRI), medical cryosections, laser range scanners, seismic surveys or mathematical models. This thesis focuses foremost on medical samples acquired through cryosections of bodies. In this report, various computational geometry approaches to surface reconstruction are evaluated in terms of their adequacy for scientific use. Two methods, “γ-regular shapes” and “the Power Crust”, are implemented and evaluated. The contribution of this work is the proposal of a new hybrid method of surface reconstruction in three dimensions. The underlying idea of the hybrid solution is to utilize the inverse medial axis transformation, defined by the Power Crust, to recover holes that may appear in the three-dimensional γ-regular shapes.
Los estilos APA, Harvard, Vancouver, ISO, etc.
24

Truax, Robert D. (Robert Denison). "Localization and tracking of parameterized objects in point clouds". Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/67805.

Texto completo
Resumen
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 43-46).
This thesis focuses on object recognition and tracking from three dimensional point cloud renderings of dense range and bearing data. Sensors like laser range-finders and depth cameras have become increasingly popular in autonomous robotic applications. A common task is to locate and track specific objects of interest located somewhere in the point cloud. This often introduces a tedious network of heuristics to build objects from identified primitives or an intractable high dimensional search space. Through a parameterized object model and certain relaxation functions, a likelihood based view of the data can be used to accomplish these goals with increased performance and reliability. Improvements in mathematics and convergence properties have shown that this method can be realized in real time.
by Robert Truax.
S.M.
Los estilos APA, Harvard, Vancouver, ISO, etc.
25

Wang, Lei. "Reconstruction and Deformation of Objects from Sampled Point Clouds". The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1404134905.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
26

Gao, Ge [Verfasser]. "Learning 6D Object Pose from Point Clouds / Ge Gao". Hamburg : Staats- und Universitätsbibliothek Hamburg Carl von Ossietzky, 2021. http://d-nb.info/1237050510/34.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
27

Biasutti, Pierre. "2D Image Processing Applied to 3D LiDAR Point Clouds". Thesis, Bordeaux, 2019. http://www.theses.fr/2019BORD0161/document.

Texto completo
Resumen
The ever growing demand for reliable mapping data, especially in urban environments, has motivated the development of "close-range" Mobile Mapping Systems (MMS). These systems acquire high precision data, and in particular 3D LiDAR point clouds and optical images. The large amount of data, along with their diversity, make MMS data processing a very complex task. This thesis lies in the context of 2D image processing applied to 3D LiDAR point clouds acquired with MMS.First, we focus on the projection of the LiDAR point clouds onto 2D pixel grids to create images. Such projections are often sparse because some pixels do not carry any information. We use these projections for different applications such as high resolution orthoimage generation, RGB-D imaging and visibility estimation in point clouds.Moreover, we exploit the topology of LiDAR sensors in order to create low resolution images, named range-images. These images offer an efficient and canonical representation of the point cloud, while being directly accessible from the point cloud. We show how range-images can be used to simplify, and sometimes outperform, methods for multi-modal registration, segmentation, desocclusion and 3D detection
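Building a range-image from the sensor geometry, as described above, can be sketched as follows (the image size, field of view and the keep-the-closest-range policy are assumptions, not the thesis's parameters):

```python
import math

def to_range_image(points, width=8, height=4, fov_up=0.5, fov_down=-0.5):
    """Project 3D LiDAR points into a (height x width) range image
    indexed by azimuth and elevation; each cell keeps the closest range."""
    img = [[None] * width for _ in range(height)]
    for x, y, z in points:
        r = math.sqrt(x * x + y * y + z * z)
        az = math.atan2(y, x)                 # azimuth in [-pi, pi]
        el = math.asin(z / r)                 # elevation angle (radians)
        u = min(width - 1, int((az + math.pi) / (2 * math.pi) * width))
        v = min(height - 1,
                int((fov_up - el) / (fov_up - fov_down) * height))
        if img[v][u] is None or r < img[v][u]:
            img[v][u] = r                     # nearest return wins
    return img

img = to_range_image([(1.0, 0.0, 0.0), (0.0, 2.0, 0.0)])
occupied = sum(1 for row in img for c in row if c is not None)
print(occupied)  # 2
```

Because the grid follows the scanner's acquisition topology, 2D neighbourhoods in the image correspond to neighbouring laser returns, which is what makes 2D processing on such images meaningful.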
Los estilos APA, Harvard, Vancouver, ISO, etc.
28

Fucili, Mattia. "3D object detection from point clouds with dense pose voters". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/17616/.

Texto completo
Resumen
Object recognition has always been a challenging task for computer vision. It finds application in many fields, mainly in industry, for example to allow a robot to find the objects to grasp. In recent decades, such tasks have found new ways of being accomplished thanks to the rediscovery of neural networks, in particular convolutional neural networks. This type of network has achieved excellent results in many object recognition and classification applications. The trend now is to use such networks in the automotive industry as well, in an attempt to make the dream of self-driving cars a reality. There are many important works on detecting cars from images. In this thesis we present our convolutional neural network architecture for recognizing cars and their position in space, using only lidar input. Storing the information about the bounding box around the car at the point level ensures a good prediction even in situations where the cars are occluded. Tests are run on the dataset most widely used for detecting cars and pedestrians in autonomous driving applications.
Los estilos APA, Harvard, Vancouver, ISO, etc.
29

Wang, Yutao. "Outlier formation and removal in 3D laser scanned point clouds". Thesis, University of British Columbia, 2014. http://hdl.handle.net/2429/51265.

Texto completo
Resumen
3D scanners have become widely used in many industrial applications such as reverse engineering, quality inspection and the entertainment industry. Despite their popularity, the raw scanned data, referred to as a point cloud, is often contaminated by outliers not belonging to the scanned surface. Moreover, when the scanned surface is highly reflective, outliers become much more extensive due to specular reflections. Such outliers cause considerable issues for point cloud applications and thus need to be removed through an outlier detection process. Considering how common reflective surfaces are in mechanical parts, it is critical to investigate the outlier formation mechanism and develop methods to effectively remove outliers. However, research on how outliers are formed when scanning reflective surfaces is very limited, and existing outlier removal methods show limited effectiveness in detecting extensive outliers. This thesis investigates the outlier formation mechanism in scanning reflective surfaces with laser scanners, and develops outlier removal algorithms to effectively and efficiently detect outliers in the scanned point clouds. The overall objective is to remove outliers from the raw data to obtain a clean point cloud and thus ensure the performance of point cloud applications. In particular, two outlier formation models, mixed reflections and multi-path reflections, are proposed and verified through experiments. The effects of scanning orientation on outlier formation are also investigated experimentally, and guidance on proper scan path planning is provided to reduce the occurrence of outliers. Regarding outlier removal, a rotating scan approach is proposed to efficiently remove view-dependent outliers, and a flexible and effective algorithm is presented to detect the challenging non-isolated outliers as well as other outliers.
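A common baseline for the outlier removal discussed above is statistical outlier removal based on k-nearest-neighbour distances; a brute-force sketch (the values of `k`, the ratio threshold and the toy cloud are arbitrary, and this is not the thesis's reflective-surface-specific method):

```python
def remove_outliers(points, k=3, ratio=2.0):
    """Statistical outlier removal: discard points whose mean distance
    to their k nearest neighbours exceeds `ratio` times the average of
    that quantity over the whole cloud. O(n^2) brute force for clarity."""
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
    mean_knn = []
    for p in points:
        d = sorted(dist(p, q) for q in points if q is not p)
        mean_knn.append(sum(d[:k]) / k)      # mean distance to k nearest
    avg = sum(mean_knn) / len(mean_knn)
    return [p for p, m in zip(points, mean_knn) if m <= ratio * avg]

cloud = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0), (50, 50, 50)]
print(len(remove_outliers(cloud)))  # 4
```

Production libraries accelerate the neighbour search with a k-d tree, but the filtering criterion is the same.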
Applied Science, Faculty of
Mechanical Engineering, Department of
Graduate
Los estilos APA, Harvard, Vancouver, ISO, etc.
30

Schindler, Falko [Verfasser]. "Man-made Surface Structures from Triangulated Point Clouds / Falko Schindler". Bonn : Universitäts- und Landesbibliothek Bonn, 2013. http://d-nb.info/1047216248/34.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
31

PEREIRA, TAIS DE SA. "SILHOUETTES AND LAPLACIAN LINES OF POINT CLOUDS VIA LOCAL RECONSTRUCTION". PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2013. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=23504@1.

Texto completo
Resumen
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
CONSELHO NACIONAL DE DESENVOLVIMENTO CIENTÍFICO E TECNOLÓGICO
PROGRAMA DE SUPORTE À PÓS-GRADUAÇÃO DE INSTS. DE ENSINO
In this work we propose a new method for silhouette extraction of a point cloud, via local reconstruction of a surface described implicitly by a polynomial function. This reconstruction is based on the Gradient one fitting and Ridge regression methods. The curve silhouette is implicitly defined by a system of nonlinear equations, and is obtained using numerical continuation. As a result, we observe that our method is suitable to handle noisy data. In addition, we present a method for extracting Laplacian Lines of a point cloud based on local reconstruction using the Delaunay triangulation.
Los estilos APA, Harvard, Vancouver, ISO, etc.
32

Goussard, Charl Leonard. "Semi-automatic extraction of primitive geometric entities from point clouds". Thesis, Stellenbosch : Stellenbosch University, 2001. http://hdl.handle.net/10019.1/52449.

Texto completo
Resumen
Thesis (MScEng)--University of Stellenbosch, 2001.
ENGLISH ABSTRACT: This thesis describes an algorithm to extract primitive geometric entities (flat planes, spheres or cylinders, as determined by the user's inputs) from unstructured, unsegmented point clouds. The algorithm extracts whole entities or only parts thereof. The entity boundaries are computed automatically. Minimal user interaction is required to extract these entities. The algorithm is accurate and robust. The algorithm is intended for use in the reverse engineering environment. Point clouds created in this environment typically have normal error distributions. Comprehensive testing and results are shown as well as the algorithm's usefulness in the reverse engineering environment.
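Fitting one of the primitive entities the thesis extracts, a flat plane, can be sketched as a least-squares problem (this ignores segmentation, boundary computation and user interaction; the solver and toy data are illustrative):

```python
def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c to a point cloud by
    solving the 3x3 normal equations with Gaussian elimination."""
    sx = sy = sz = sxx = syy = sxy = sxz = syz = 0.0
    n = len(points)
    for x, y, z in points:
        sx += x
        sy += y
        sz += z
        sxx += x * x
        syy += y * y
        sxy += x * y
        sxz += x * z
        syz += y * z
    # augmented normal-equation matrix for the unknowns (a, b, c)
    A = [[sxx, sxy, sx, sxz],
         [sxy, syy, sy, syz],
         [sx,  sy,  n,  sz]]
    for i in range(3):                        # forward elimination
        piv = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]           # partial pivoting
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            A[r] = [a - f * b for a, b in zip(A[r], A[i])]
    coeffs = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                       # back substitution
        coeffs[i] = (A[i][3] - sum(A[i][j] * coeffs[j]
                                   for j in range(i + 1, 3))) / A[i][i]
    return coeffs

pts = [(x, y, 2 * x + 3 * y + 1) for x in range(3) for y in range(3)]
a, b, c = fit_plane(pts)
print(round(a), round(b), round(c))  # 2 3 1
```

Spheres and cylinders require nonlinear fitting, but the overall extract-and-fit flow is analogous.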
Los estilos APA, Harvard, Vancouver, ISO, etc.
33

Oesterling, Patrick. "Visual Analysis of High-Dimensional Point Clouds using Topological Abstraction". Doctoral thesis, Universitätsbibliothek Leipzig, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-203056.

Texto completo
Resumen
This thesis is about visualizing a kind of data that is trivial to process by computers but difficult to imagine by humans, because nature does not allow for intuition with this type of information: high-dimensional data. Such data often result from representing observations of objects under various aspects or with different properties. In many applications, a typical, laborious task is to find related objects or to group those that are similar to each other. One classic solution for this task is to imagine the data as vectors in a Euclidean space with object variables as dimensions. Utilizing Euclidean distance as a measure of similarity, objects with similar properties and values accumulate into groups, so-called clusters, that are exposed by cluster analysis on the high-dimensional point cloud. Because similar vectors can be thought of as objects that are alike in terms of their attributes, the point cloud's structure and individual cluster properties, like their size or compactness, summarize data categories and their relative importance. The contribution of this thesis is a novel analysis approach for visual exploration of high-dimensional point clouds without suffering from structural occlusion. The work is based on two key concepts. The first idea is to discard those geometric properties that cannot be preserved and thus lead to the typical artifacts, and to use topological concepts instead to shift the focus from a point-centered view of the data to a more structure-centered perspective. The advantage is that topology-driven clustering information can be extracted in the data's original domain and preserved without loss in low dimensions. The second idea is to split the analysis into a topology-based global overview and a subsequent geometric local refinement.
The occlusion-free overview enables the analyst to identify features and to link them to other visualizations that permit analysis of those properties not captured by the topological abstraction, e.g. cluster shape or value distributions in particular dimensions or subspaces. The advantage of separating structure from data point analysis is that restricting local analysis to data subsets significantly reduces artifacts and the visual complexity of standard techniques. That is, the additional topological layer enables the analyst to identify structure that was hidden before and to focus on particular features by suppressing irrelevant points during local feature analysis. This thesis addresses the topology-based visual analysis of high-dimensional point clouds for both the time-invariant and the time-varying case. Time-invariant means that the points do not change in their number or positions; the analyst explores the clustering of a fixed and constant set of points. The extension to the time-varying case implies the analysis of a varying clustering, where clusters appear, merge, split, or vanish. Especially for high-dimensional data, both tracking, which means relating features over time, and visualizing changing structure are difficult problems to solve.
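The structure-centred overview described above ultimately rests on connectivity at a chosen distance scale; a minimal single-linkage sketch using union-find (the scale `eps` and the toy data are illustrative, and a real topological abstraction tracks this connectivity over all scales rather than one):

```python
def count_clusters(points, eps):
    """Single-linkage clustering: points closer than `eps` are merged
    with union-find; the number of remaining components is the cluster
    count. This is the 0-dimensional connectivity a topological
    overview exposes at one scale."""
    parent = list(range(len(points)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if dist(points[i], points[j]) < eps:
                parent[find(i)] = find(j)   # union the two components
    return len({find(i) for i in range(len(points))})

# Two well-separated groups of points in a 4-dimensional space
cloud = [(0, 0, 0, 0), (0.1, 0, 0, 0), (5, 5, 5, 5), (5, 5.1, 5, 5)]
print(count_clusters(cloud, eps=1.0))  # 2
```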
Los estilos APA, Harvard, Vancouver, ISO, etc.
34

Limberger, Frederico Artur. "Real-time detection of planar regions in unorganized point clouds". reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2014. http://hdl.handle.net/10183/97001.

Texto completo
Resumen
Automatic detection of planar regions in point clouds is an important step for many graphics, image processing, and computer vision applications. While laser scanners and digital photography have allowed us to capture increasingly larger datasets, previous techniques are computationally expensive, being unable to achieve real-time performance for datasets containing tens of thousands of points, even when detection is performed in a non-deterministic way. We present a deterministic technique for plane detection in unorganized point clouds whose cost is O(n log n) in the number of input samples. It is based on an efficient Hough-transform voting scheme and works by clustering approximately co-planar points and by casting votes for these clusters on a spherical accumulator using a trivariate Gaussian kernel. A comparison with competing techniques shows that our approach is considerably faster and scales significantly better than previous ones, being the first practical solution for deterministic plane detection in large unorganized point clouds.
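The accumulator voting can be illustrated with a heavily simplified Hough scheme over a handful of candidate plane normals (the actual method clusters approximately co-planar points first and votes with trivariate Gaussian kernels on a spherical accumulator; the normals, bin size and data here are toy choices):

```python
from collections import Counter

def hough_planes(points, normals, rho_step=0.25):
    """Minimal Hough voting for planes: every point votes, for each
    candidate normal n, into the cell (normal index, quantised rho)
    where rho = n . p. The fullest cell identifies the best plane."""
    acc = Counter()
    for i, n in enumerate(normals):
        for p in points:
            rho = sum(a * b for a, b in zip(n, p))
            acc[(i, round(rho / rho_step))] += 1
    (ni, rho_bin), votes = acc.most_common(1)[0]
    return normals[ni], rho_bin * rho_step, votes

# Four points on the plane z = 1, plus one off-plane point
pts = [(0, 0, 1), (1, 0, 1), (0, 1, 1), (2, 2, 1), (0, 0, 5)]
candidate_normals = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
normal, rho, votes = hough_planes(pts, candidate_normals)
print(normal, rho, votes)  # (0, 0, 1) 1.0 4
```

The per-point-pair cost here is what the thesis's clustering strategy avoids, which is how it reaches O(n log n).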
Los estilos APA, Harvard, Vancouver, ISO, etc.
35

Liao, Nilsson Sunny y Martin Norrbom. "CLASSIFICATION OF BRIDGES IN LASER POINT CLOUDS USING MACHINE LEARNING". Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-55067.

Texto completo
Resumen
In this work, machine learning was used for bridge detection in point clouds. To estimate its performance, it was compared to an existing algorithm based on traditional methods for point classification. The purpose of this work was to use machine learning for bridge classification in point clouds, to see how today's machine learning algorithms perform, and to identify the challenges of using machine learning in point classification. The point clouds used are based on airborne laser scanning and represent the land area of Sweden. To get satisfactory results, several different test areas with varying landscapes were used. To compare the two algorithms, both statistical and visual analyses were made to identify the algorithms' behaviours, strengths, and weaknesses. The machine learning algorithm tested was PointNet++, and it was compared to the algorithm that the Swedish mapping, cadastral and land registration authority currently uses for bridge classification in point clouds. Based on the results, the current method had higher accuracy in the classification of bridge points, but the machine learning approach could detect more bridges. Thus, it was concluded that there is potential in this machine learning approach, but there is still a need for improvement.
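The trade-off reported above, higher per-point accuracy versus more bridges detected, is typically quantified with per-class precision and recall; a small sketch on made-up point labels:

```python
def precision_recall(predicted, actual):
    """Per-class precision and recall for the 'bridge' class, the kind
    of metrics used to compare point classifiers."""
    tp = sum(1 for p, a in zip(predicted, actual) if p == a == "bridge")
    fp = sum(1 for p, a in zip(predicted, actual) if p == "bridge" != a)
    fn = sum(1 for p, a in zip(predicted, actual) if a == "bridge" != p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

actual    = ["bridge", "bridge", "bridge", "other", "other"]
predicted = ["bridge", "bridge", "other",  "bridge", "other"]
p, r = precision_recall(predicted, actual)
print(round(p, 2), round(r, 2))  # 0.67 0.67
```

In the study's terms, the traditional method would score higher on precision while PointNet++ would score higher on bridge-level recall.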
Los estilos APA, Harvard, Vancouver, ISO, etc.
36

Arvidsson, Simon y Marcus Gullstrand. "Predicting forest strata from point clouds using geometric deep learning". Thesis, Jönköping University, JTH, Avdelningen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-54155.

Texto completo
Resumen
Introduction: Number of strata (NoS) is an informative descriptor of forest structure and is therefore useful in forest management. Collection of NoS as well as other forest properties is performed by fieldworkers and could benefit from automation. Objectives: This study investigates automated prediction of NoS from airborne laser-scanned point clouds over Swedish forest plots. Methods: A previously suggested approach using vertical gap probability is compared through experimentation against the geometric neural network PointNet++ configured for ordinal prediction. For both approaches, the mean accuracy is measured for three datasets: coniferous forest, deciduous forest, and a combination of all forests. Results: PointNet++ performed better for two out of three datasets, attaining a top mean accuracy of 46.2%. However, only the coniferous subset displayed a statistically significant superiority for PointNet++. Conclusion: This study demonstrates the potential of geometric neural networks for data mining of forest properties. The results show that impediments in the data may need to be addressed for further improvements.
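The vertical-gap idea the study compares against can be caricatured in a few lines: sort the return heights and let every sufficiently large vertical gap start a new stratum (the gap threshold and data are invented, and the actual baseline works with gap probabilities rather than a hard cut):

```python
def count_strata(heights, min_gap=2.0):
    """Estimate the number of strata (NoS) from point heights: sort the
    returns and split off a new stratum at every vertical gap wider
    than `min_gap` metres. A crude stand-in for gap-probability methods."""
    hs = sorted(heights)
    strata = 1
    for lo, hi in zip(hs, hs[1:]):
        if hi - lo > min_gap:
            strata += 1
    return strata

# Synthetic plot: ground/shrub returns plus a separate canopy layer
heights = [0.1, 0.4, 0.8, 1.2, 14.0, 14.5, 15.2, 16.0]
print(count_strata(heights))  # 2
```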
Los estilos APA, Harvard, Vancouver, ISO, etc.
37

Chen, Shuo. "Robust Registration of ToF and RGB-D Camera Point Clouds". Thesis, KTH, Fastigheter och byggande, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-299537.

Texto completo
Resumen
This thesis presents a comparison of the M-estimator, BLAVE, and RANSAC methods for point cloud registration. The comparison is performed empirically by applying all the estimators to simulated data with added noise and gross errors, as well as to ToF and RGB-D data. The RANSAC method proved to be the fastest and most robust estimator in the comparison. The 2D feature extraction methods Harris corner detector, SIFT and SURF, and the 3D extraction method ISS, are also compared on real-world scene data. Among all the extraction methods, the SIFT algorithm extracted the most feature points with accurate features across the different datasets. Finally, the ICP algorithm is used to refine the registration result based on the estimate of the initial transform.
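To make the comparison concrete, here is a minimal sketch of robust rigid registration in the RANSAC style, assuming ideal point-to-point correspondences rather than the thesis's ToF/RGB-D pipelines; all function names are illustrative, and the closed-form inner solver is the standard Kabsch algorithm.

```python
import numpy as np

def kabsch(src, dst):
    # Closed-form least-squares rigid transform (R, t) with dst ~ src @ R.T + t.
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def ransac_rigid(src, dst, iters=200, tol=0.05, seed=0):
    # Sample minimal 3-point subsets, score by inlier count, refit on inliers.
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 3, replace=False)
        R, t = kabsch(src[idx], dst[idx])
        inliers = np.linalg.norm(src @ R.T + t - dst, axis=1) < tol
        if inliers.sum() > best.sum():
            best = inliers
    return kabsch(src[best], dst[best])    # final fit on the consensus set
```

An estimate obtained this way would then seed an ICP-style refinement, as in the thesis.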
38

Yanes, Luis. "Haptic Interaction with 3D oriented point clouds on the GPU". Thesis, University of East Anglia, 2015. https://ueaeprints.uea.ac.uk/58556/.

Texto completo
Resumen
Real-time point-based rendering and interaction with virtual objects is gaining popularity and importance as different haptic devices and technologies increasingly provide the basis for realistic interaction. Haptic interaction is used in a wide range of applications such as medical training, remote robot operation, tactile displays, and video games. The main focus is virtual object visualization and interaction using haptic devices, a process involving several steps: data acquisition, graphic rendering, haptic interaction, and data modification. This work presents a framework for haptic interaction using the GPU as a hardware accelerator, and includes an approach for enabling the modification of data during interaction. The results demonstrate the limits and capabilities of these techniques in the context of volume rendering for haptic applications. The use of dynamic parallelism as a technique to scale the number of accelerator threads according to the interaction requirements is also studied, allowing the editing of data sets of up to one million points at interactive haptic frame rates.
39

Bae, Kwang-Ho. "Automated registration of unorganised point clouds from terrestrial laser scanners". Thesis, Curtin University, 2006. http://hdl.handle.net/20.500.11937/946.

Texto completo
Resumen
Laser scanners provide a three-dimensional sampled representation of the surfaces of objects, with a spatial resolution much higher than that of conventional surveying methods. The data collected from different locations of a laser scanner must be transformed into a common coordinate system. If good a priori alignment is provided and the point clouds share a large overlapping region, existing registration methods, such as the Iterative Closest Point (ICP) or Chen and Medioni's method, work well. In practical applications of laser scanners, however, partially overlapping and unorganised point clouds are provided without good initial alignment, and the existing methods become inappropriate because finding correspondences between the point clouds is very difficult. This thesis proposes a registration method, the Geometric Primitive ICP with RANSAC (GP-ICPR), that uses geometric primitives, neighbourhood search, the positional uncertainty of laser scanners, and an outlier removal procedure. The change of geometric curvature and the approximate normal vector of the surface formed by a point and its neighbourhood are used to select possible correspondences between point clouds. In addition, an explicit expression of the positional uncertainty of laser scanner measurements is presented, and this uncertainty is utilised to estimate the precision and accuracy of the estimated relative transformation parameters between point clouds. The GP-ICPR was tested with both simulated data and datasets from close-range and terrestrial laser scanners in terms of its precision, accuracy, and convergence region. It improved the precision of the estimated relative transformation parameters by as much as a factor of 5. In addition, its rotational convergence region, on the order of 10°, is much larger than that of the ICP or its variants, providing a window of opportunity to utilise this automated registration method in practical applications such as terrestrial surveying and deformation monitoring.
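As background for the method above, the plain point-to-point ICP baseline can be sketched as follows; this is our own simplification with brute-force matching and a closed-form update, without the curvature/normal filtering, positional uncertainty modelling, or RANSAC outlier removal that the thesis adds.

```python
import numpy as np

def rigid_fit(src, dst):
    # Closed-form least-squares rigid transform mapping src onto dst (Kabsch).
    cs, cd = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    # Alternate nearest-neighbour matching with a rigid update.
    cur, R_acc, t_acc = src.copy(), np.eye(3), np.zeros(3)
    for _ in range(iters):
        # brute-force nearest neighbour in dst for every current point
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        R, t = rigid_fit(cur, dst[d2.argmin(1)])
        cur = cur @ R.T + t
        R_acc, t_acc = R @ R_acc, R @ t_acc + t   # accumulate the transform
    return R_acc, t_acc  # overall: dst ~ src @ R_acc.T + t_acc
```

Plain ICP of this kind needs good initial alignment; its small convergence region is exactly the limitation that the GP-ICPR's roughly 10° rotational convergence region addresses.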
40

Bae, Kwang-Ho. "Automated registration of unorganised point clouds from terrestrial laser scanners". Curtin University of Technology, Department of Spatial Sciences, 2006. http://espace.library.curtin.edu.au:80/R/?func=dbin-jump-full&object_id=16596.

Texto completo
Resumen
Laser scanners provide a three-dimensional sampled representation of the surfaces of objects, with a spatial resolution much higher than that of conventional surveying methods. The data collected from different locations of a laser scanner must be transformed into a common coordinate system. If good a priori alignment is provided and the point clouds share a large overlapping region, existing registration methods, such as the Iterative Closest Point (ICP) or Chen and Medioni's method, work well. In practical applications of laser scanners, however, partially overlapping and unorganised point clouds are provided without good initial alignment, and the existing methods become inappropriate because finding correspondences between the point clouds is very difficult. This thesis proposes a registration method, the Geometric Primitive ICP with RANSAC (GP-ICPR), that uses geometric primitives, neighbourhood search, the positional uncertainty of laser scanners, and an outlier removal procedure. The change of geometric curvature and the approximate normal vector of the surface formed by a point and its neighbourhood are used to select possible correspondences between point clouds. In addition, an explicit expression of the positional uncertainty of laser scanner measurements is presented, and this uncertainty is utilised to estimate the precision and accuracy of the estimated relative transformation parameters between point clouds. The GP-ICPR was tested with both simulated data and datasets from close-range and terrestrial laser scanners in terms of its precision, accuracy, and convergence region. It improved the precision of the estimated relative transformation parameters by as much as a factor of 5. In addition, its rotational convergence region, on the order of 10°, is much larger than that of the ICP or its variants, providing a window of opportunity to utilise this automated registration method in practical applications such as terrestrial surveying and deformation monitoring.
41

El, Sayed Abdul Rahman. "Traitement des objets 3D et images par les méthodes numériques sur graphes". Thesis, Normandie, 2018. http://www.theses.fr/2018NORMLH19/document.

Texto completo
Resumen
Skin detection involves detecting the pixels corresponding to human skin in a color image. Faces are a particularly important category of stimulus because of the wealth of information they convey: before recognizing a person, it is essential to locate and recognize their face. Most security and biometrics applications rely on the detection of skin regions, as in face detection, filtering of adult 3D objects, and gesture recognition. In addition, saliency detection on 3D meshes is an important preprocessing phase for many computer vision applications. 3D segmentation based on salient regions has been widely used in applications such as 3D shape matching, object alignment, 3D point cloud smoothing, web image search, content-based image indexing, video segmentation, and face detection and recognition. Skin detection is a very difficult task for various reasons, generally related to the variability of the shape and color to be detected (different hues from one person to another, arbitrary orientations and sizes, lighting conditions), especially for web images captured under different lighting conditions. There are several known approaches to skin detection: approaches based on geometry and feature extraction, motion-based approaches (background subtraction, difference between two consecutive images, optical flow computation), and color-based approaches. In this thesis, we propose numerical optimization methods for the detection of skin-colored and salient regions on 3D meshes and 3D point clouds using a weighted graph. Based on these methods, we develop 3D face detection approaches using linear programming and data mining. We also adapt the proposed methods to the problems of 3D point cloud simplification and 3D object matching, show their robustness and efficiency through various experimental results, and demonstrate their stability with respect to noise.
42

Ruhe, Jakob y Johan Nordin. "Classification of Points Acquired by Airborne Laser Systems". Thesis, Linköping University, Department of Electrical Engineering, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-10485.

Texto completo
Resumen

For several years, research has been performed at the Department of Laser Systems of the Swedish Defence Research Agency (FOI) to develop methods for producing high-resolution 3D environment models based on data acquired with airborne laser systems. The 3D models are used for several purposes, both military and civilian, for example mission planning, crisis management analysis, and planning of infrastructure.

We have implemented a new format for storing laser point data. Instead of storing rasterized images of the data, the new format stores the original location of each point. We have also implemented a new method to detect outliers, methods to estimate the ground surface, and a method to divide the remaining data into two classes: buildings and vegetation.

It is also shown that more accurate results can be obtained by analyzing the points directly instead of only using rasterized images and image-processing algorithms. We show that these methods can be implemented without increasing the computational complexity.
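A deliberately naive version of the ground-surface estimation step can illustrate the idea; this grid-minimum filter is our own sketch, not the method developed at FOI: the lowest return in each planar grid cell is taken as a local ground sample, and points close to it in height are labelled ground.

```python
import math

def ground_filter(points, cell=2.0, height_tol=0.5):
    # points: list of (x, y, z) laser returns.
    # Lowest z per planar grid cell serves as the local ground estimate;
    # returns True for points within height_tol of that estimate.
    keys = [(math.floor(x / cell), math.floor(y / cell)) for x, y, _ in points]
    lowest = {}
    for k, (_, _, z) in zip(keys, points):
        if k not in lowest or z < lowest[k]:
            lowest[k] = z
    return [z - lowest[k] <= height_tol for k, (_, _, z) in zip(keys, points)]
```

Separating the remaining non-ground points into buildings and vegetation would then operate on properties such as local surface roughness.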

43

Nguyen, Van sinh. "3 D Modeling of elevation surfaces from voxel structured point clouds extracted from seismic cubes". Thesis, Aix-Marseille, 2013. http://www.theses.fr/2013AIXM4069/document.

Texto completo
Resumen
Reconstructing surfaces from data produced by automatic acquisition techniques always entails the problem of massive data volumes, which means the usual processes cannot be applied directly and makes a data reduction step mandatory. An effective algorithm for rapid processing that preserves the original model is a valuable tool for constructing an optimal surface and managing complex data. In this dissertation, we present methods for building an optimal geological surface from a huge number of 3D points extracted from seismic cubes. Applying the process to the whole set of points carries an important risk of surface shrinking, so the initial boundary extraction is an important step that permits simplification inside the surface; the global surface shape is then better preserved for the reconstruction of the final triangular surface. Our proposals are based on the regularity of the data, which makes neighboring information easy to obtain even when data are missing. First, we present a new method to extract and simplify the boundary of an elevation surface given as voxels in a large, sparse 3D volume. Second, we present a method for simplifying the surface inside its boundary, with an optional rough simplification step followed by a finer one based on curvatures; we also ensure that the data density changes gradually, so that the final step yields a triangulated surface with better-shaped triangles. Third, we propose a new, fast method for triangulating the surface after simplification.
44

Digne, Julie. "Inverse geometry : from the raw point cloud to the 3d surface : theory and algorithms". Phd thesis, École normale supérieure de Cachan - ENS Cachan, 2010. http://tel.archives-ouvertes.fr/tel-00610432.

Texto completo
Resumen
Many laser devices directly acquire 3D objects and reconstruct their surface. Nevertheless, the final reconstructed surface is usually smoothed out as a result of the scanner's internal de-noising process and the offsets between different scans. This thesis, working on results from high-precision scans, adopts the somewhat extreme conservative position of not losing or altering any raw sample throughout the whole processing pipeline, and of attempting to visualize them all. Indeed, this is the only way to discover all surface imperfections (holes, offsets). Furthermore, since high-precision data can capture the slightest surface variation, any smoothing and any sub-sampling can incur the loss of textural detail. The thesis attempts to prove that one can triangulate the raw point cloud with almost no sample loss. It solves the exact visualization problem on large data sets of up to 35 million points made of 300 or more different scan sweeps. Two major problems are addressed. The first is the orientation of the complete raw point set and the building of a high-precision mesh. The second is the correction of tiny scan misalignments, which can cause strong high-frequency aliasing and completely hamper direct visualization. The second development of the thesis is a general low-high frequency decomposition algorithm for any point cloud. Classic image analysis tools, the level-set tree and the MSER representations, are thereby extended to meshes, yielding an intrinsic mesh segmentation method. The underlying mathematical development focuses on an analysis of the half-dozen discrete differential operators acting on raw point clouds that have been proposed in the literature. By considering the asymptotic behavior of these operators on a smooth surface, a classification by their underlying curvature operators is obtained. This analysis leads to the development of a discrete operator consistent with mean curvature motion (the intrinsic heat equation), defining a remarkably simple and robust numerical scale space. Within this scale space, all of the above-mentioned problems (point set orientation, raw point set triangulation, scan merging, segmentation), usually addressed by separate techniques, are solved in a unified framework.
45

Stella, Federico. "Learning a Local Reference Frame for Point Clouds using Spherical CNNs". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20197/.

Texto completo
Resumen
One of the most important problems in 3D computer vision is so-called surface matching, which consists of finding correspondences between three-dimensional objects. The problem is currently addressed by computing compact local features, called descriptors, which must be recognized and matched as the object's pose changes in space, and which must therefore be invariant to orientation. The most common way to obtain this property is to use Local Reference Frames (LRFs): local coordinate systems that provide a canonical orientation to the portions of 3D objects used to compute descriptors. Several methods for computing LRFs exist in the literature, but all rely on hand-crafted algorithms. A recent proposal uses neural networks, but these are trained on features specifically engineered for the task, which prevents fully exploiting the benefits of modern end-to-end learning strategies. The goal of this work is to use a data-driven approach to teach a neural network to compute a Local Reference Frame from raw point clouds, producing the first example of end-to-end learning applied to LRF estimation. To do so, we exploit a recent innovation called Spherical Convolutional Neural Networks, which generate and process signals in the SO(3) space and are therefore naturally suited to representing and estimating orientations and LRFs. We compare the resulting performance against existing methods on standard benchmarks, obtaining promising results.
46

Izatt, Gregory (Gregory Russell). "Robust object pose estimation with point clouds from vision and touch". Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/111867.

Texto completo
Resumen
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 75-81).
We present a study of object pose estimation performed with hybrid visuo-tactile sensing in mind. We propose that a tactile sensor can be treated as a source of dense local geometric information, and hence consider it to be a point cloud source analogous to an RGB-D camera. We incorporate the tactile geometric information directly into a conventional point-cloud-based articulated object tracker based on signed-distance functions. This tracker runs at 12 Hz using an online depth reconstruction algorithm for the GelSight tactile sensor and a modified second-order update for the tracking algorithm. The tracker provides robust pose estimates of small objects throughout manipulation, even when the objects are occluded by the robot's end effector. To address limitations in this tracker, we additionally present a formulation of the underlying point-cloud correspondence problem as a mixed-integer convex program, which we efficiently solve to optimality with an off-the-shelf branch and bound solver. We show that reasoning about object pose estimation in this way allows natural extension to point-to-mesh correspondence, multiple object estimation, and outlier rejection without losing the ability to obtain a globally optimal solution. We probe the extent to which rich problem-specific formulations typically tackled with unreliable nonlinear optimization can be rigorously treated in a global optimization framework.
by Gregory Izatt.
S.M.
47

Al, Hakim Ezeddin. "3D YOLO: End-to-End 3D Object Detection Using Point Clouds". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-234242.

Texto completo
Resumen
For safe and reliable driving, it is essential that an autonomous vehicle can accurately perceive the surrounding environment. Modern sensor technologies used for perception, such as LiDAR and RADAR, deliver a large set of 3D measurement points known as a point cloud. There is a strong need to interpret point cloud data to detect other road users, such as vehicles and pedestrians. Many research studies have proposed image-based models for 2D object detection. This thesis takes it a step further and aims to develop a LiDAR-based 3D object detection model that operates in real time, with emphasis on autonomous driving scenarios. We propose 3D YOLO, an extension of YOLO (You Only Look Once), one of the fastest state-of-the-art 2D object detectors for images. The proposed model takes point cloud data as input and outputs 3D bounding boxes with class scores in real time. Most existing 3D object detectors use hand-crafted features, while our model follows the end-to-end learning fashion, which removes manual feature engineering. The 3D YOLO pipeline consists of two networks: (a) the Feature Learning Network, an artificial neural network that transforms the input point cloud to a new feature space; and (b) 3DNet, a novel convolutional neural network architecture based on YOLO that learns the shape description of the objects. Our experiments on the KITTI dataset show that 3D YOLO attains high accuracy and outperforms state-of-the-art LiDAR-based models in efficiency, making it a suitable candidate for deployment in autonomous vehicles.
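The first stage of such an end-to-end pipeline, turning an unordered point cloud into a regular grid that a convolutional network can consume, can be sketched as follows. This binary-occupancy version is a simplification of our own: 3D YOLO's Feature Learning Network learns per-voxel features rather than raw occupancy, and the grid size and extent below are arbitrary.

```python
import numpy as np

def voxel_occupancy(points, bins=(8, 8, 4), extent=((0, 8), (0, 8), (0, 4))):
    # points: (N, 3) array of LiDAR returns. Returns a binary occupancy
    # grid of shape `bins` over the given spatial extent, a common input
    # encoding for grid-based 3D object detectors.
    counts, _ = np.histogramdd(points, bins=bins, range=extent)
    return (counts > 0).astype(np.float32)
```

A detector head would then slide over this tensor (or a learned-feature version of it) to regress 3D boxes and class scores.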
48

Spina, Sandro. "Graph-based segmentation and scene understanding for context-free point clouds". Thesis, University of Warwick, 2015. http://wrap.warwick.ac.uk/76651/.

Texto completo
Resumen
The acquisition of 3D point clouds representing the surface structure of real-world scenes has become common practice in many areas including architecture, cultural heritage and urban planning. Improvements in sample acquisition rates and precision are contributing to an increase in size and quality of point cloud data. The management of these large volumes of data is quickly becoming a challenge, leading to the design of algorithms intended to analyse and decrease the complexity of this data. Point cloud segmentation algorithms partition point clouds for better management, and scene understanding algorithms identify the components of a scene in the presence of considerable clutter and noise. In many cases, segmentation algorithms operate within the remit of a specific context, wherein their effectiveness is measured. Similarly, scene understanding algorithms depend on specific scene properties and fail to identify objects in a number of situations. This work addresses this lack of generality in current segmentation and scene understanding processes, and proposes methods for point clouds acquired using diverse scanning technologies in a wide spectrum of contexts. The approach to segmentation proposed by this work partitions a point cloud with minimal information, abstracting the data into a set of connected segment primitives to support efficient manipulation. A graph-based query mechanism is used to express further relations between segments and provide the building blocks for scene understanding. The presented method for scene understanding is agnostic of scene specific context and supports both supervised and unsupervised approaches. In the former, a graph-based object descriptor is derived from a training process and used in object identification. The latter approach applies pattern matching to identify regular structures. 
A novel external-memory algorithm based on a hybrid spatial subdivision technique is introduced to handle very large point clouds and accelerate the computation of the k-nearest-neighbour function. Segmentation has been successfully applied to extract segments representing geographic landmarks and architectural features from a variety of point clouds, whereas scene understanding has been successfully applied to indoor scenes on which other methods fail. The overall results demonstrate that the context-agnostic methods presented in this work can be successfully employed to manage the complexity of ever-growing repositories.
49

Landa, Yanina. "Visibility of point clouds and exploratory path planning in unknown environments". Diss., Restricted to subscribing institutions, 2008. http://proquest.umi.com/pqdweb?did=1610049881&sid=1&Fmt=2&clientId=1564&RQT=309&VName=PQD.

Texto completo
50

José, Silva Leite Pedro. "Massively parallel nearest neighbors searches in dynamic point clouds on GPU". Universidade Federal de Pernambuco, 2010. https://repositorio.ufpe.br/handle/123456789/2356.

Texto completo
Resumen
Conselho Nacional de Desenvolvimento Científico e Tecnológico
This dissertation introduces a grid-based data structure implemented on the GPU. It was developed for massively parallel nearest neighbor searches in dynamic point clouds. The implementation achieves real-time performance, with both grid construction and nearest neighbor searches (exact and approximate) executed on the GPU; memory transfer between host and device is thereby minimized, improving overall performance. The proposed algorithm can be used in different applications with static or dynamic scenes. Moreover, the data structure supports three-dimensional point clouds and, given its dynamic nature, the user can change its parameters at runtime; the same applies to the number of neighbors searched. A CPU reference implementation was written, and performance comparisons justify the use of GPUs as massively parallel processors. In addition, the performance of the proposed data structure is compared with CPU and GPU implementations from previous work. Finally, a point-based rendering application was developed to verify the potential of the data structure.
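A CPU sketch can convey the grid structure the dissertation implements on the GPU; this version is our own, and the ring search below corresponds to the approximate mode, since it stops expanding as soon as k candidates are gathered and may therefore miss a closer point sitting in a more distant cell.

```python
import math
from collections import defaultdict

def build_grid(points, cell):
    # Hash each point index into its uniform grid cell.
    grid = defaultdict(list)
    for i, (x, y, z) in enumerate(points):
        grid[(math.floor(x / cell), math.floor(y / cell),
              math.floor(z / cell))].append(i)
    return grid

def knn(points, grid, cell, query, k):
    # Expand cubic rings of cells around the query until k candidates
    # are gathered, then rank the candidates by squared distance.
    cx, cy, cz = (math.floor(c / cell) for c in query)
    r, cand = 1, []
    while len(cand) < k and r < 64:
        cand = [i
                for dx in range(-r, r + 1)
                for dy in range(-r, r + 1)
                for dz in range(-r, r + 1)
                for i in grid.get((cx + dx, cy + dy, cz + dz), [])]
        r += 1
    cand.sort(key=lambda i: sum((a - b) ** 2 for a, b in zip(points[i], query)))
    return cand[:k]
```

On the GPU, one thread per query point runs this search over the same shared grid, which is what makes the structure massively parallel; rebuilding the grid each frame is what supports dynamic point clouds.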