Dissertations / Theses on the topic 'Automated lidar'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'Automated lidar.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Hamraz, Hamid. "AUTOMATED TREE-LEVEL FOREST QUANTIFICATION USING AIRBORNE LIDAR." UKnowledge, 2018. https://uknowledge.uky.edu/cs_etds/69.

Full text
Abstract:
Traditional forest management relies on a small field sample and interpretation of aerial photography, which are not only costly to execute but also yield inaccurate estimates of the forest in question. Airborne light detection and ranging (LiDAR) is a remote sensing technology that records point clouds representing the 3D structure of a forest canopy and the terrain underneath. We present a method for segmenting individual trees from LiDAR point clouds without making prior assumptions about tree crown shapes and sizes. We then present a method that vertically stratifies the point cloud into an overstory and multiple understory tree canopy layers. Using the stratification method, we modeled the occlusion of higher canopy layers with respect to point density. We also present a distributed computing approach that enables processing the massive data of an arbitrarily large forest. Lastly, we investigated using deep learning for coniferous/deciduous classification of point cloud segments representing individual tree crowns. We applied the developed methods to the University of Kentucky Robinson Forest, a natural, predominantly deciduous, closed-canopy forest. 90% of overstory and 47% of understory trees were detected, with false positive rates of 14% and 2%, respectively. Vertical stratification improved the detection rate of understory trees to 67% at the cost of increasing their false positive rate to 12%. According to our occlusion model, a point density of about 170 pt/m² is needed to segment understory trees located in the third layer as accurately as overstory trees. Using our distributed processing method, we segmented about two million trees within a 7400-ha forest in 2.5 hours using 192 processing cores, a speedup of ~170. Our deep learning experiments showed high classification accuracies (~82% coniferous and ~90% deciduous) without the need to manually engineer features.
In conclusion, the methods developed are steps toward remote, accurate quantification of large natural forests at the individual tree level.
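The reported benchmark figures imply a parallel efficiency that is easy to check with a little arithmetic. This is a back-of-the-envelope sketch based only on the numbers quoted in the abstract, not on the thesis's code:

```python
# Back-of-the-envelope check of the reported distributed-processing figures.
cores = 192            # processing cores used
speedup = 170          # reported speedup over single-core processing
wall_hours = 2.5       # wall-clock time to segment ~2 million trees

parallel_efficiency = speedup / cores     # fraction of ideal linear scaling
serial_hours = wall_hours * speedup       # implied single-core runtime

print(f"parallel efficiency ~{parallel_efficiency:.2f}")  # ~0.89
print(f"implied serial runtime ~{serial_hours:.0f} h")    # ~425 h
```

An efficiency near 0.89 indicates close-to-linear scaling at this core count.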
2

Gadre, Mandar M. "Automated building footprint extraction from high resolution LIDAR DEM imagery." Diss., Columbia, Mo. : University of Missouri-Columbia, 2005. http://hdl.handle.net/10355/4320.

Full text
Abstract:
Thesis (M.S.)--University of Missouri-Columbia, 2005. The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file (viewed July 13, 2006). Includes bibliographical references.
3

Lach, Stephen R. "Semi-automated DIRSIG scene modeling from 3D lidar and passive imagery /." Online version of thesis, 2008. http://hdl.handle.net/1850/7861.

Full text
4

Liu, Haijian. "Automated Treetop Detection and Tree Crown Identification Using Discrete-return Lidar Data." Thesis, University of North Texas, 2013. https://digital.library.unt.edu/ark:/67531/metadc271858/.

Full text
Abstract:
Accurate estimates of tree and forest biomass are essential for a wide range of applications. Automated treetop detection and tree crown discrimination using LiDAR data can greatly facilitate forest biomass estimation. Previous work has focused on homogeneous or single-species forests, while few studies have addressed mixed forests. In this study, a new method for treetop detection is proposed in which the treetop is the cluster center of selected points rather than the highest point. Based on treetop detection, tree crowns are discriminated through comparison of three-dimensional shape signatures. The methods are first tested using simulated LiDAR point clouds for trees, and then applied to real LiDAR data from the Soquel Demonstration State Forest, California, USA. Results from both simulated and real LiDAR data show that the proposed method has great potential for effective detection of treetops and discrimination of tree crowns.
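The core idea of the proposed detection method, taking the treetop as the cluster center of the highest returns rather than the single highest point, can be sketched as follows. The point format and the one-metre height band are illustrative assumptions, not the thesis's exact parameters:

```python
# Sketch: treetop as the centroid ("cluster center") of the highest returns
# rather than the single highest return. Band width is an assumed parameter.

def treetop(points, band=1.0):
    """points: list of (x, y, z); band: height window below the max z."""
    z_max = max(p[2] for p in points)
    top = [p for p in points if p[2] >= z_max - band]  # candidate cluster
    n = len(top)
    return (sum(p[0] for p in top) / n,
            sum(p[1] for p in top) / n,
            sum(p[2] for p in top) / n)

crown = [(0.0, 0.0, 10.0), (1.0, 0.0, 9.8), (0.0, 1.0, 9.6), (5.0, 5.0, 4.0)]
print(treetop(crown))  # centroid of the three returns near the apex
```

Because several apex returns vote for the treetop, a single anomalous highest return cannot displace it on its own.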
5

Sadeghinaeenifard, Fariba. "Automated Tree Crown Discrimination Using Three-Dimensional Shape Signatures Derived from LiDAR Point Clouds." Thesis, University of North Texas, 2018. https://digital.library.unt.edu/ark:/67531/metadc1157521/.

Full text
Abstract:
Discrimination of different tree crowns based on their 3D shapes is essential for a wide range of forestry applications and, due to its complexity, is a significant challenge. This study presents a modified 3D shape descriptor for the perception of different tree crown shapes in discrete-return LiDAR point clouds. The proposed methodology comprises five main components: definition of a local coordinate system, learning of salient points, generation of simulated LiDAR point clouds with geometrical shapes, shape signature generation (from simulated LiDAR points as the reference shape signature and from actual LiDAR point clouds as the evaluated shape signature), and finally similarity assessment of the shape signatures in order to extract the shape of a real tree. The first component is a proposed strategy to define a local coordinate system for each tree to normalize the 3D point clouds. In the second component, a learning approach is used to categorize all 3D points into two ranks to identify interesting, or salient, points on each tree. The third component covers the generation of simulated LiDAR point clouds for two geometrical shapes, a hemisphere and a half-ellipsoid. The operator then extracts 3D LiDAR point clouds of actual trees, either deciduous or evergreen. In the fourth component, a longitude-latitude transformation is applied to the simulated and actual LiDAR point clouds to generate 3D shape signatures of tree crowns. A critical step is the transformation of LiDAR points from their exact positions to longitude and latitude positions (distinct from geographic longitude and latitude coordinates), labeled by their pre-assigned ranks. Natural neighbor interpolation then converts the point maps to raster datasets. The shape signatures generated from simulated and actual LiDAR points are called the reference and evaluated shape signatures, respectively.
Lastly, the fifth component determines the similarity between the evaluated and reference shape signatures to extract the shape of each examined tree. The entire process is automated with ArcGIS toolboxes through Python programming for further evaluation using more tree crowns in different study areas. Results from LiDAR points captured for 43 trees in the City of Surrey, British Columbia (Canada) suggest that the modified shape descriptor is a promising method for separating different shapes of tree crowns using LiDAR point cloud data. Experimental results also indicate that the modified longitude-latitude shape descriptor fulfills all desired properties of a suitable shape descriptor proposed in computer science, along with leaf-off/leaf-on invariance, which makes the process independent of the acquisition date of the LiDAR data.
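A minimal sketch of a longitude-latitude style transformation of crown points, as described in the fourth component. The reference point and angular conventions here are assumptions for illustration; the thesis notes that its transformation differs from geographic coordinates:

```python
import math

# Sketch: map each LiDAR return from its (x, y, z) offset relative to a crown
# reference point to an angular (longitude, latitude) pair. The reference
# point and sign conventions are illustrative assumptions.

def lon_lat(point, origin):
    dx, dy, dz = (p - o for p, o in zip(point, origin))
    lon = math.atan2(dy, dx)                  # azimuth around the stem axis
    lat = math.atan2(dz, math.hypot(dx, dy))  # elevation above the origin
    return lon, lat

origin = (0.0, 0.0, 5.0)   # e.g. crown base on the stem axis (assumed)
lon, lat = lon_lat((1.0, 1.0, 6.0), origin)
print(round(math.degrees(lon), 1), round(math.degrees(lat), 1))
```

Binning such (lon, lat) pairs (the thesis uses natural neighbor interpolation to rasterise them) yields the 2D signature that is then compared between simulated and real crowns.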
6

Hilker, Thomas. "Estimation of photosynthetic light-use efficiency from automated multi-angular spectroradiometer measurements of coastal Douglas-fir." Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/2685.

Full text
Abstract:
Global modeling of gross primary production (GPP) is a critical component of climate change research. On local scales, GPP can be assessed by measuring CO₂ exchange above the plant canopy using tower-based eddy covariance (EC) systems. The limited footprint inherent to this method, however, restricts observations to relatively few discrete areas, making continuous predictions of global CO₂ fluxes difficult. Recently, the advent of high resolution optical remote sensing devices has offered new possibilities to address some of the scaling issues related to GPP using remote sensing. One key component for inferring GPP spectrally is the efficiency (ε) with which plants can use absorbed photosynthetically active radiation to produce biomass. While recent years have seen progress in measuring ε using the photochemical reflectance index (PRI), little is known about the temporal and spatial requirements for up-scaling these findings continuously throughout the landscape. Satellite observations of canopy reflectance are subject to view and illumination effects induced by the bi-directional reflectance distribution function (BRDF), which can confound the desired PRI signal. Further uncertainties include dependencies of PRI on canopy structure, understorey, species composition, and leaf pigment concentration. The objective of this research was to investigate the effects of these factors on PRI to facilitate the modeling of GPP in a continuous fashion. Canopy spectra were sampled over a one-year period using an automated tower-based, multi-angular spectroradiometer platform (AMSPEC) designed to sample high spectral resolution data. The wide range of illumination and viewing geometries seen by the instrument permitted comprehensive modeling of the BRDF. Isolation of physiologically induced changes in PRI yielded a high correlation (r²=0.82, p<0.05) with EC-measured ε, thereby demonstrating the capability of PRI to model ε throughout the year.
The results were extrapolated to the landscape scale using airborne laser scanning (light detection and ranging, LiDAR), and high correlations were found between remotely sensed and EC-measured GPP (r²>0.79, p<0.05). Permanently established tower-based canopy reflectance measurements are helpful for ongoing research aimed at up-scaling ε to landscape and global scales; they facilitate a better understanding of the physiological cycles of vegetation and can serve as a calibration tool for broader-band satellite observations.
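The PRI mentioned above is conventionally computed as a normalized difference of canopy reflectance at 531 nm and a 570 nm reference band. A minimal sketch with made-up reflectance values:

```python
# Conventional PRI formula: normalized difference of reflectance at 531 nm
# (xanthophyll-sensitive band) and 570 nm (reference band).
# The reflectance values below are made up for illustration.

def pri(r531, r570):
    return (r531 - r570) / (r531 + r570)

print(round(pri(0.048, 0.052), 3))  # -0.04
```

The 531 nm signal shifts with xanthophyll-cycle activity under light stress, which is what links PRI to the light-use efficiency ε.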
7

Hungar, Constanze [Verfasser], Frank [Akademischer Betreuer] Köster, and Stephan [Akademischer Betreuer] Schmidt. "Map-based Localization for Automated Vehicles using LiDAR Features / Constanze Hungar ; Frank Köster, Stephan Schmidt." Oldenburg : BIS der Universität Oldenburg, 2021. http://nbn-resolving.de/urn:nbn:de:gbv:715-oops-51937.

Full text
8

Dinoev, Todor. "Automated Raman lidar for day and night operational observation of tropospheric water vapor for meteorological applications /." [S.l.] : [s.n.], 2009. http://library.epfl.ch/theses/?nr=4501.

Full text
9

Deshpande, Sagar Shriram. "Semi-automated Methods to Create a Hydro-flattened DEM using Single Photon and Linear Mode LiDAR Points." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1491300120665946.

Full text
10

Manhed, Joar. "Investigating Simultaneous Localization and Mapping for an Automated Guided Vehicle." Thesis, Linköpings universitet, Reglerteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-163075.

Full text
Abstract:
The aim of the thesis is to apply simultaneous localization and mapping (SLAM) to automated guided vehicles (AGVs) in a Robot Operating System (ROS) environment. Different sensor setups are used and evaluated. The SLAM applications used are the open-source solution Cartographer and Intel's commercial SLAM in their T265 tracking camera. The different sensor setups are evaluated by how closely the localization reproduces the exact pose of the AGV in comparison to another positioning system acting as ground truth.
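The kind of comparison described, scoring localization against a reference positioning system, is often summarised by a root-mean-square translational error. A toy sketch with made-up 2D trajectories; a real evaluation would also need time association and frame alignment:

```python
import math

# Sketch: root-mean-square translational error between an estimated
# trajectory and a reference ("ground truth") trajectory. Poses are assumed
# to be already time-associated and expressed in a common frame.

def rmse(estimated, reference):
    sq = [(ex - rx) ** 2 + (ey - ry) ** 2
          for (ex, ey), (rx, ry) in zip(estimated, reference)]
    return math.sqrt(sum(sq) / len(sq))

est = [(0.0, 0.0), (1.1, 0.0), (2.0, 0.1)]   # made-up SLAM estimates
ref = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]   # made-up reference positions
print(round(rmse(est, ref), 3))
```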
11

Vande, Hey Joshua D. "Design, implementation, and characterisation of a novel lidar ceilometer." Thesis, Loughborough University, 2013. https://dspace.lboro.ac.uk/2134/11853.

Full text
Abstract:
A novel lidar ceilometer prototype based on divided lens optics has been designed, built, characterised, and tested. The primary applications for this manufacturable ground-based sensor are the determination of cloud base height and the measurement of vertical visibility. First, the design, which was developed in order to achieve superior performance at a low cost, is described in detail, along with the process used to develop it. The primary design considerations of optical signal to noise ratio, range-dependent overlap of the transmitter and receiver channels, and manufacturability, were balanced to develop an instrument with good signal to noise ratio, fast turn-on of overlap for detection of close range returns, and a minimised number of optical components and simplicity of assembly for cost control purposes. Second, a novel imaging method for characterisation of transmitter-receiver overlap as a function of range is described and applied to the instrument. The method is validated by an alternative experimental method and a geometric calculation that is specific to the unique geometry of the instrument. These techniques allow the calibration of close range detection sensitivity in order to acquire information prior to full overlap. Finally, signal processing methods used to automate the detection process are described. A novel two-part cloud base detection algorithm has been developed which combines extinction-derived visibility thresholds in the inverted cloud return signal with feature detection on the raw signal. In addition, standard approaches for determination of visibility based on an iterative far boundary inversion method, and calibration of attenuated backscatter profile using returns from a fully-attenuating water cloud, have been applied to the prototype. The prototype design, characterisation, and signal processing have been shown to be appropriate for implementation into a commercial instrument. 
The work that has been carried out provides a platform upon which a wide range of further work can be built.
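The thresholding step of cloud-base detection can be sketched as reporting the first range gate whose return exceeds a threshold. This toy version illustrates only that step; the two-part algorithm described above additionally combines extinction-derived visibility thresholds on the inverted signal with feature detection on the raw signal. All numbers below are made up:

```python
# Toy sketch: cloud base as the height of the first range gate whose
# backscatter exceeds a fixed threshold. Gate spacing and threshold are
# illustrative assumptions, not the instrument's parameters.

def cloud_base(profile, gate_m, threshold):
    """profile: backscatter per range gate; gate_m: gate spacing in metres."""
    for i, value in enumerate(profile):
        if value > threshold:
            return i * gate_m   # height of the first gate above threshold
    return None                 # no cloud detected

profile = [0.2, 0.1, 0.1, 0.3, 2.5, 3.0, 0.4]
print(cloud_base(profile, gate_m=30, threshold=1.0))  # 120
```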
12

Azhari, Faris. "Automated crack detection and characterisation from 3D point clouds of unstructured surfaces." Thesis, Queensland University of Technology, 2022. https://eprints.qut.edu.au/234510/1/Faris_Azhari_Thesis.pdf.

Full text
Abstract:
This thesis proposes a novel automated crack detection and characterisation method for unstructured surfaces using 3D point clouds. Crack detection on unstructured surfaces poses a challenge compared to flat surfaces such as pavements and concrete, which are typically surveyed with image-based sensors. The detection stage utilises a point-cloud-based deep learning method to perform point-wise classification. The detected points are then automatically characterised to estimate the detected cracks' properties, such as width profile, orientation, and length. The proposed method enables the deployment of autonomous systems to conduct reliable surveys in environments risky to humans.
13

Wang, Chao. "Point clouds and thermal data fusion for automated gbXML-based building geometry model generation." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/54008.

Full text
Abstract:
Existing residential and small commercial buildings now represent the greatest opportunity to improve building energy efficiency. Building energy simulation analysis is becoming increasingly important because the analysis results can assist decision makers in improving building energy efficiency and reducing environmental impacts. However, manually measuring the as-is conditions of building envelopes, including geometry and thermal values, is still a labor-intensive, costly, and slow process. Thus, the primary objective of this research was to automatically collect and extract the as-is geometry and thermal data of building envelope components and create a gbXML-based building geometry model. In the proposed methodology, a rapid and low-cost data collection hardware system was designed by integrating 3D laser scanners and an infrared (IR) camera. Secondly, several algorithms were created to automatically recognize various components of the building envelope as objects from the collected raw data. The extracted 3D semantic geometric model was then automatically saved in an industry-standard file format for data interoperability. The feasibility of the proposed method was validated through three case studies. The contributions of this research include 1) the development of a customized, low-cost hybrid data collection system that fuses various data into a thermal point cloud, and 2) an automatic method for extracting building envelope components and their geometry to generate a gbXML-based building geometry model. The broader impacts of this research are that it could offer a new way to collect as-is building data without impeding occupants' daily life, and provide an easier way for laypeople to understand the energy performance of their buildings via 3D thermal point cloud visualization.
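The point-cloud-and-thermal fusion step can be sketched as projecting each 3D point into an IR image and sampling a temperature. The pinhole intrinsics and the temperature image below are made up, and the thesis's actual registration pipeline is more involved:

```python
# Toy sketch: attach a temperature to each 3D point by projecting it into an
# IR image with a simple pinhole model. Intrinsics (fx, fy, cx, cy) and the
# temperature image are made-up illustrative values.

def project(point, fx=100.0, fy=100.0, cx=2.0, cy=2.0):
    x, y, z = point
    return int(fx * x / z + cx), int(fy * y / z + cy)  # pixel (u, v)

def thermal_points(points, ir_image):
    fused = []
    for p in points:
        u, v = project(p)
        if 0 <= v < len(ir_image) and 0 <= u < len(ir_image[0]):
            fused.append((*p, ir_image[v][u]))  # (x, y, z, temperature)
    return fused

ir = [[20.0] * 5 for _ in range(5)]
ir[2][2] = 35.5                                 # warm spot at image centre
pts = [(0.0, 0.0, 10.0), (0.1, 0.0, 10.0)]
print(thermal_points(pts, ir))
```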
14

Heinzel, Johannes [Verfasser], and Barbara [Akademischer Betreuer] Koch. "Combined use of high resolution LiDAR and multispectral data for automated extraction of single trees and tree species = Kombinierte Verwendung von hochauflösenden LiDAR- und Spektraldaten zur automatisierten Extraktion von Einzelbäumen und Baumarten." Freiburg : Universität, 2011. http://d-nb.info/1123459495/34.

Full text
15

Vernon, Zachary Isaac. "A comparison of automated land cover/use classification methods for a Texas bottomland hardwood system using lidar, spot-5, and ancillary data." College Station, Tex.: Texas A&M University, 2008. http://hdl.handle.net/1969.1/ETD-TAMU-2744.

Full text
16

Thornton, Douglas Anthony. "High Fidelity Localization and Map Building from an Instrumented Probe Vehicle." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1483637485442285.

Full text
17

Seo, Suyoung. "Model-Based Automatic Building Extraction From LIDAR and Aerial Imagery." The Ohio State University, 2003. http://rave.ohiolink.edu/etdc/view?acc_num=osu1048886400.

Full text
18

Varney, Nina M. "LiDAR Data Analysis for Automatic Region Segmentation and Object Classification." University of Dayton / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1446747790.

Full text
19

Nalani, Hetti Arachchige. "Automatic Reconstruction of Urban Objects from Mobile Laser Scanner Data." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-159872.

Full text
Abstract:
Up-to-date 3D urban models are becoming increasingly important in various urban application areas, such as urban planning, virtual tourism, and navigation systems. Many of these applications demand the modelling of 3D buildings enriched with façade information, as well as single trees among other urban objects. Nowadays, the Mobile Laser Scanning (MLS) technique is progressively being used to capture objects in urban settings, and it is thus becoming a leading data source for the modelling of these two object classes. The 3D point clouds of urban scenes consist of large amounts of data representing numerous objects of significant size variability, with complex and incomplete structures, holes (noise and data gaps), and variable point densities.
For this reason, novel strategies for processing mobile laser scanning point clouds, in terms of the extraction and modelling of salient façade structures and trees, are of vital importance. The present study proposes two new methods for the reconstruction of building façades and the extraction of trees from MLS point clouds. The first method aims at the reconstruction of building façades with explicit semantic information such as windows, doors, and balconies, and runs automatically through all processing steps. For this purpose, several algorithms are introduced based on general knowledge of the geometric shape and structural arrangement of façade features. An initial classification is performed using a local height histogram analysis together with a planar region-growing method, which classifies points as object or ground points. The points labelled as object points are segmented into planar surfaces, which are regarded as the main entities in the feature recognition process. Knowledge of the building structure is used to define rules and constraints, which provide essential guidance for recognizing façade features and reconstructing their geometric models. In order to recognise features on a wall, such as windows and doors, a hole-based method is implemented. Some holes that result from occlusion can subsequently be eliminated by means of a new rule-based algorithm. Boundary segments of a feature are connected into a polygon representing the geometric model by a primitive-shape-based method, in which topological relations are analysed taking into account prior knowledge about the primitive shapes. Possible outlines are determined from edge points detected with an angle-based method. Repetitive patterns and similarities are exploited to rectify geometrical and topological inaccuracies of the reconstructed models.
Apart from developing the 3D façade model reconstruction scheme, the research focuses on individual tree segmentation and the derivation of attributes of urban trees. The second method aims at extracting individual trees from the remaining point clouds. Knowledge about trees, specially pertaining to urban areas, is used in the process of tree extraction, and an innovative shape-based approach is developed to encode this knowledge algorithmically. The use of the principal direction of the point distribution for identifying stems is introduced, which consists of searching for point segments representing a tree stem. The output of the algorithm is segmented individual trees, which can be used to derive accurate information about the size and location of each individual tree. The reliability of the two methods is verified against three different data sets obtained from different laser scanner systems. The results of both methods are quantitatively evaluated using a set of measures pertaining to the quality of the façade reconstruction and tree extraction. The performance of the developed algorithms with respect to façade reconstruction, tree stem detection, and the delineation of individual tree crowns, as well as their limitations, is discussed. The results show that MLS point clouds are suited to documenting urban objects in rich detail, and that accurate measurements of the most important attributes of both object types (building façades and trees), such as window height and width, area, stem diameter, tree height, and crown area, can be obtained. The entire approach is suitable for reconstructing building façades and for correctly separating trees from other urban objects, especially pole-like objects. Both methods are therefore able to cope with data of heterogeneous quality; in addition, they provide flexible frameworks from which many extensions can be envisioned.
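The hole-based recognition of wall openings can be sketched as rasterising wall points into an occupancy grid and flagging empty cells. Grid size and wall extent are illustrative assumptions; the thesis additionally filters out holes caused by occlusion:

```python
# Toy sketch: candidate openings (windows/doors) as empty cells of a coarse
# occupancy grid over points lying on a wall plane. Cell size and wall
# dimensions are made-up illustrative values.

def empty_cells(points, cell=1.0, nx=4, ny=3):
    """points: (x, y) wall-plane coordinates; returns empty (col, row) cells."""
    occupied = {(int(x // cell), int(y // cell)) for x, y in points}
    return sorted((i, j) for i in range(nx) for j in range(ny)
                  if (i, j) not in occupied)

# Wall points everywhere except around cell (1, 1) -- a candidate window.
wall = [(i + 0.5, j + 0.5) for i in range(4) for j in range(3)
        if (i, j) != (1, 1)]
print(empty_cells(wall))  # [(1, 1)]
```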
20

Taylor, Zachary Jeremy. "Automatic Markerless Calibration of Multi-Modal Sensor Arrays." Thesis, The University of Sydney, 2015. http://hdl.handle.net/2123/14019.

Full text
Abstract:
This thesis presents a novel system for calibrating the extrinsic parameters of an array of cameras, 3D lidars and GPS/INS sensors without the requirement for any markers or other calibration aids. To achieve this, a new multi-modal metric, the gradient orientation measure, is first presented. This metric operates by minimising the misalignment of gradients between the outputs of two candidate sensors and is able to handle the inherent differences in how sensors of different modalities perceive the world. The metric is successfully demonstrated on a range of calibration problems; however, to calibrate systems reliably the metric requires an initial estimate of the solution and a constrained search space. These constraints are required because repeated and similar structure in the environment, in combination with the limited field of view of the sensors, results in the metric's cost function being non-convex. This non-convexity is an issue that affects all appearance-based markerless methods. To overcome these limitations, a second cue to the sensors' alignment is used: the motion of the system. By estimating the motion that each individual sensor observes, an estimate of the extrinsic calibration of the sensors can be obtained. In this thesis, standard techniques for this motion-based calibration (often referred to as hand-eye calibration) are extended by incorporating estimates of the accuracy of each sensor's readings. This allows the development of a probabilistic approach that calibrates all sensors simultaneously. The approach also facilitates the estimation of the uncertainty in the final calibration. Finally, this motion-based approach is combined with appearance-based information to build a novel calibration framework. This framework does not require initialisation and can take advantage of all available alignment information to provide an accurate and robust calibration for the system.
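The core idea of the abstract's gradient orientation measure — score two sensor images by how well their gradient orientations line up, ignoring sign flips between modalities — can be sketched in a few lines. This is a hedged illustration of the concept only; the function name and the magnitude-product weighting are assumptions, not the thesis's exact formulation:

```python
import numpy as np

def gradient_orientation_measure(img_a, img_b, eps=1e-9):
    """Score how well the gradient orientations of two images agree (higher is better).

    Uses |cos| of the orientation difference so that gradients pointing in
    opposite directions (common across modalities, e.g. lidar intensity vs.
    camera) still count as aligned.
    """
    gya, gxa = np.gradient(img_a.astype(float))
    gyb, gxb = np.gradient(img_b.astype(float))
    mag_a = np.hypot(gxa, gya)
    mag_b = np.hypot(gxb, gyb)
    # |cos(theta_a - theta_b)| from dot products, avoiding explicit angles
    agree = np.abs(gxa * gxb + gya * gyb) / (mag_a * mag_b + eps)
    # weight by the product of magnitudes so flat regions contribute little
    w = mag_a * mag_b
    return float((agree * w).sum() / (w.sum() + eps))
```

Maximising such a score over candidate extrinsic transforms is what makes the cost function non-convex in cluttered scenes, which is why the thesis adds the motion-based cue.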
APA, Harvard, Vancouver, ISO, and other styles
21

Pereira, Marcelo Silva. "Automated calibration of multiple LIDARs and cameras using a moving sphere." Master's thesis, Universidade de Aveiro, 2015. http://hdl.handle.net/10773/16454.

Full text
Abstract:
Master's in Mechanical Engineering. Autonomous vehicles have attracted great interest in the past years due to their potential impact on society, which has been pushing this area into continuous study and development. Since perception systems are extremely important in autonomous navigation, their complexity leads to an increase in the number of sensors on board (commonly comprising LIDARs, cameras and other sensors) along with an increase in their diversity, which raises concerns about sensor calibration. Calibration methods are usually manual or semi-automatic and require user intervention. Few automatic methods are available, and even the existing methods are normally based on complex processes and expensive devices. This work presents a new automatic calibration method using a ball as the target to extract correspondences between sensors. The calibration process consists of moving the ball, allowing its center to be detected at successive positions by all the sensors to be calibrated. This study involves the calibration of 2D and 3D LIDAR sensors and cameras. Segmentation in 2D uses an algorithm based on the geometric properties of an arc. In 3D, the Point Cloud Library (PCL) sample consensus module is used to identify and locate the ball. Finally, OpenCV is used to calibrate a stereo system and compute the disparity image and its 3D re-projection, resulting in a 3D point cloud. During ball motion, a point cloud of ball centers is created for each sensor. Finally, all the point clouds are aligned with a reference sensor. The final result of the process is the rigid-body transformation of each sensor with respect to the reference frame. 
The method was tested both in laboratory experiments and on a real full-size vehicle (AtlasCar). The relative calibration among all sensors yields very good results, which are evaluated by the consistency of the detection performed by the calibrated sensors. An additional feature of this solution is its flexibility in permitting the calibration of several different LIDARs and cameras.
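The pipeline the abstract describes — estimate the ball's center in each sensor's frame at successive positions, then align the center sets — can be sketched with an algebraic least-squares sphere fit plus a Kabsch rigid alignment. This is an illustrative sketch, not the author's implementation (which uses PCL's sample consensus module); the function names are invented:

```python
import numpy as np

def fit_sphere_center(pts):
    """Algebraic least-squares sphere fit: |p|^2 = 2 p.c + (r^2 - |c|^2)."""
    A = np.c_[2 * pts, np.ones(len(pts))]
    b = (pts ** 2).sum(axis=1)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3]  # sphere center c

def rigid_transform(src, dst):
    """Kabsch: rotation R and translation t with dst_i ~= R @ src_i + t."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

Calling `rigid_transform` on the ball-center clouds of a sensor and of the reference sensor yields exactly the kind of rigid-body transformation the abstract reports as the method's final result.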
APA, Harvard, Vancouver, ISO, and other styles
22

Alhashimi, Anas. "Statistical Calibration Algorithms for Lidars." Licentiate thesis, Luleå tekniska universitet, Signaler och system, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-18224.

Full text
Abstract:
Robots are becoming increasingly available and capable, and are becoming part of everyday life in many applications: robots that guide blind or mentally handicapped people, robots that clean large office buildings and department stores, robots that assist people in shopping, recreational activities, etc. Localization, in the sense of accurately understanding one's position in the environment, is a basic building block for performing important tasks. Therefore, there is an interest in having robots perform localization tasks autonomously and accurately in highly cluttered and dynamically changing environments. To perform localization, robots are required to opportunely combine their sensor measurements, sensor models and environment model. In this thesis we aim at improving the tools that constitute the basis of all localization techniques, namely the models of these sensors and the algorithms for processing the raw information from them. More specifically we focus on: finding advanced statistical models of the measurements returned by common laser scanners (a.k.a. Lidars), starting from both physical considerations and evidence collected with opportune experiments; and improving the statistical algorithms for treating the signals coming from these sensors, thus proposing new estimation and system identification techniques for these devices. In other words, we strive to increase the accuracy of Lidars through opportune statistical processing tools. The problems that we have to solve in order to achieve our aims are multiple. The first one is related to temperature-dependency effects: the laser diode characteristics, especially the wavelength of the emitted laser and the mechanical alignment of the optics, change non-linearly with temperature.
In one of the papers in this thesis we specifically address this problem and propose a model describing the effects of temperature changes in the laser diode; these include, among others, the presence of multi-modal measurement noises. Our contributions then include an algorithm that statistically accounts not only for the bias induced by temperature changes, but also for these multi-modality issues. Another problem that we seek to relieve is an economic one. Improving Lidar accuracy can be achieved by using accurate but expensive laser diodes and optical lenses. This unfortunately raises the sensor cost, and obviously low-cost robots should not be equipped with very expensive Lidars. On the other hand, cheap Lidars have larger biases and noise variance. In another contribution we thus precisely targeted the problem of how to improve the performance indexes of inexpensive Lidars by removing their biases and artifacts through opportune statistical manipulations of the raw information coming from the sensor. To achieve this goal it is possible to choose two different ways (both of which have been explored): 1) use the ground truth to estimate the Lidar model parameters; 2) find algorithms that perform calibration and estimation simultaneously without using ground-truth information. Using the ground truth is appealing since it may lead to better estimation performance. On the other hand, in normal robotic operations the actual ground truth is not available; indeed, ground truths usually require environmental modifications, which are costly. We thus considered how to estimate the Lidar model parameters for both of the cases above. In the last chapter of this thesis we summarize our findings and also propose our current future research directions. 
Approved; 2016; 2016-08-09 (alhana). The following person will hold a licentiate seminar for the licentiate degree in engineering. Name: Anas Alhashimi. Subject: Control Engineering. Thesis: Statistical Calibration Algorithms for Lidars. Examiner: Professor Thomas Gustafsson, Institutionen för system- och rymdteknik, Division of Signals and Systems, Luleå University of Technology. Discussant: Associate Professor Steffi Knorn, Signals and Systems, Uppsala University. Time: Tuesday 6 September 2016, 10:00. Place: A109, Luleå University of Technology.
APA, Harvard, Vancouver, ISO, and other styles
23

Jönsson, Erika. "A Simulation Model for Detection and Tracking Bio Aerosol Clouds using Elastic Lidar." Thesis, Linköping University, Department of Electrical Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-16327.

Full text
Abstract:
Discharges of warfare bio aerosol clouds are powerful weapons in war and terror situations. A discharge of a small amount of a contagious substance can obliterate large areas. The discharges can usually not be seen with bare eyes; hence, some tool needs to be used to find bio aerosol cloud discharges. One way is to use a lidar for the detection of clouds. By sending out a laser pulse into the atmosphere, some of the light is scattered back. By measuring the backscattered light, the aerosol structure of the atmosphere can be obtained. If a cloud is hit by the laser beam, an increase of light is observed and the cloud can be detected and tracked. In this thesis a tool for simulating the elastic backscattered light has been developed. A graphical user interface for easier handling has also been developed. For automatic detection of clouds, some detection algorithms have been tested. Another graphical user interface for presentation of the simulated lidar signals has also been constructed. The simulator is working well. A lot of different parameters can be changed for the lidar system, the atmosphere and the cloud type. The model works as a helpful tool for specifications of an elastic lidar system to be developed, and also as a guide for expected system performance and ways to present the results.
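The simulation the abstract describes rests on the standard single-scatter elastic lidar equation, P(R) = C·β(R)/R²·exp(−2∫₀ᴿ α(r) dr). A minimal numerical sketch follows; all parameter values (cloud position, backscatter and extinction coefficients) are illustrative assumptions, not taken from the thesis:

```python
import numpy as np

def lidar_return(ranges, beta, alpha, C=1.0):
    """Single-scatter elastic lidar equation on a uniform range grid:
    P(R) = C * beta(R) / R^2 * exp(-2 * integral_0^R alpha dr)."""
    dr = ranges[1] - ranges[0]
    tau = 2.0 * np.cumsum(alpha) * dr          # two-way optical depth
    return C * beta / ranges ** 2 * np.exp(-tau)

# background atmosphere plus a Gaussian aerosol cloud centred at 3 km
r = np.linspace(100.0, 5000.0, 2000)           # start past R = 0 [m]
cloud = 5e-6 * np.exp(-(((r - 3000.0) / 150.0) ** 2))
beta = 1e-6 + cloud                            # backscatter [1/(m sr)]
alpha = 1e-5 + 20.0 * cloud                    # extinction [1/m]
p_cloud = lidar_return(r, beta, alpha)
p_clear = lidar_return(r, np.full_like(r, 1e-6), np.full_like(r, 1e-5))
```

A simple cloud detector can then flag range bins where the simulated return exceeds the expected clear-air profile, which is essentially what the thesis's detection algorithms test against.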
APA, Harvard, Vancouver, ISO, and other styles
24

Mejerfalk, Mattias. "Automatic Registration of Point Clouds Acquired by a Sweeping Single-Pixel TCSPC Lidar System." Thesis, Uppsala universitet, Avdelningen för visuell information och interaktion, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-325767.

Full text
Abstract:
This project investigates an image registration process involving a method known as K-4PCS. The registration process was applied to a set of 16 long-range lidar scans acquired at different positions by a single-pixel TCSPC (Time-Correlated Single-Photon Counting) lidar system. By merging these lidar scans, after they have been transformed by proper scan alignments, one can obtain clear information regarding obscured surfaces. Using all available data, the investigated method was able to provide adequate alignments for all lidar scans. The data in each lidar scan was subsampled, and a subsampling ratio of 50% proved sufficient to construct sparse, representative point clouds that, when subjected to the image registration process, result in adequate alignments. This was approximately equivalent to 9 million collected photon detections per scan position. Lower subsampling ratios failed to generate representative point clouds that could be used in the image registration process to obtain adequate alignments; large errors followed, especially in the horizontal and elevation angles of each alignment. The computation time for one scan-pair matching at a subsampling ratio of 100% was, on average, approximately 120 s, and 95 s at a ratio of 50%. To summarise, the investigated method can be used to register lidar scans acquired by a lidar system using TCSPC principles, and with proper equipment and code implementation one could potentially acquire 3D images of a measurement area every second, however at a delay depending on the efficiency of the lidar data processing.
APA, Harvard, Vancouver, ISO, and other styles
25

Raun, Karl Hjalte Maack [Verfasser], and Armin [Akademischer Betreuer] Volkmann. "LIDAR based semi-automatic pattern recognition within an archaeological landscape / Karl Hjalte Maack Raun ; Betreuer: Armin Volkmann." Heidelberg : Propylaeum, 2019. http://d-nb.info/1234989298/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Töpel, Johanna. "Initial Analysis and Visualization of Waveform Laser Scanner Data." Thesis, Linköping University, Department of Electrical Engineering, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2864.

Full text
Abstract:
Conventional airborne laser scanner systems output the three-dimensional coordinates of the surface location hit by the laser pulse. The data storage capacity and processing speeds available today have made it possible to digitally sample and store the entire reflected waveform, instead of only extracting the coordinates. Research has shown that return waveforms can give even more detailed insights into the vertical structure of surface objects, surface slope, roughness and reflectivity than the conventional systems. One of the most important advantages of registering the waveforms is that it gives users the possibility to define themselves how range is calculated in post-processing. In this thesis, different techniques have been tested to visualize a waveform data set in order to get a better understanding of the waveforms and how they can be used to improve methods for classification of ground objects. A pulse detection algorithm, using the EM algorithm, has been implemented and tested. The algorithm outputs the position and width of the echo pulses. One of the results of this thesis is that echo pulses reflected by vegetation tend to be wider than those reflected by, for example, a road. Another result is that up to five echo pulses can be detected, compared to the two echo pulses that conventional systems detect.
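An EM-based pulse detector of the kind the abstract mentions can be sketched as an amplitude-weighted Gaussian mixture fit over the waveform's time axis, returning each echo's position and width. This is a hedged sketch under my own assumptions (deterministic initialisation, fixed number of pulses), not the thesis's implementation:

```python
import numpy as np

def em_pulse_fit(t, w, k=2, iters=300):
    """Fit k Gaussian pulses to a sampled waveform (t: sample times,
    w: amplitudes) with amplitude-weighted EM; returns positions, widths."""
    w = np.clip(w, 0.0, None)
    # deterministic init: means spread evenly over the record
    mu = t[0] + (np.arange(k) + 0.5) * (t[-1] - t[0]) / k
    sig = np.full(k, (t[-1] - t[0]) / (4.0 * k))
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each pulse for each sample
        d = (t[:, None] - mu) / sig
        comp = pi * np.exp(-0.5 * d ** 2) / sig
        resp = comp / (comp.sum(axis=1, keepdims=True) + 1e-300)
        # M-step: amplitude-weighted parameter updates
        wk = resp * w[:, None]
        n = wk.sum(axis=0) + 1e-300
        mu = (wk * t[:, None]).sum(axis=0) / n
        var = (wk * (t[:, None] - mu) ** 2).sum(axis=0) / n
        sig = np.sqrt(np.maximum(var, 1e-6))
        pi = n / n.sum()
    order = np.argsort(mu)
    return mu[order], sig[order]
```

On a waveform with a narrow road-like echo and a broad vegetation-like echo, the fitted widths separate the two cases, which matches the thesis's observation that vegetation returns tend to be wider.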
APA, Harvard, Vancouver, ISO, and other styles
27

Abayowa, Bernard Olushola. "Automatic Registration of Optical Aerial Imagery to a LiDAR Point Cloud for Generation of Large Scale City Models." University of Dayton / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1372508452.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Olsson, Erik. "Towards automatic asset management for real-time visualization of urban environments." Thesis, Linköpings universitet, Medie- och Informationsteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-141692.

Full text
Abstract:
This thesis describes how a pipeline was developed to reconstruct an urban environment from terrestrial laser scanning and photogrammetric 3D maps of Norrköping, visualized in first person and in real time. Together with Linköping University (LiU) and the city planning office of Norrköping, the project was carried out as a preliminary study to get an idea of how much work is needed and with what accuracy a few buildings can be recreated. The visualization is intended to demonstrate a new way of exploring the city in virtual reality, as well as to visualize the geometrical and textural details in a higher quality compared to the 3D map that the Municipality of Norrköping uses today. Previously, the map has only been intended to be displayed from a bird's-eye view and has poor resolution at closer ranges. In order to improve the resolution, HDR photos were used to texture the laser-scanned model and cover a particular area of the low-resolution 3D map. This thesis explains the method used to process a point-based environment for texturing and to set up an environment in Unreal using both the 3D map and the laser-scanned model.
APA, Harvard, Vancouver, ISO, and other styles
29

He, Yi. "An Analysis of Airborne Data Collection Methods for Updating Highway Feature Inventory." DigitalCommons@USU, 2016. https://digitalcommons.usu.edu/etd/5016.

Full text
Abstract:
Highway assets, including traffic signs, traffic signals, light poles, and guardrails, are important components of transportation networks. They guide, warn and protect drivers, and regulate traffic. To manage and maintain the regular operation of the highway system, state departments of transportation (DOTs) need reliable and up-to-date information about the location and condition of highway assets. Different methodologies have been employed to collect road inventory data. Currently, ground-based technologies are widely used to help DOTs continually update their road databases, while air-based methods are not commonly used. One possible reason is that the initial investment for air-based methods is relatively high; another is the lack of a systematic and effective approach to extract road features from raw airborne light detection and ranging (LiDAR) data and aerial image data. However, for large-area inventories (e.g., a whole state highway inventory), the total cost of aerial mapping is actually much lower than that of other methods considering the time and personnel needed. Moreover, unmanned aerial vehicles (UAVs) are easily accessible and inexpensive, which makes it possible to reduce costs for aerial mapping. The focus of this project is to analyze the capability and strengths of airborne data collection systems for highway inventory data collection. In this research, a field experiment was conducted by the Remote Sensing Service Laboratory (RSSL), Utah State University (USU), to collect airborne data. Two methodologies were proposed for data processing, namely an ArcGIS-based algorithm for airborne LiDAR data and a MATLAB-based procedure for aerial photography. The results proved the feasibility and high efficiency of the airborne data collection method for updating highway inventory databases.
APA, Harvard, Vancouver, ISO, and other styles
30

Brandin, Martin, and Roger Hamrén. "Classification of Ground Objects Using Laser Radar Data." Thesis, Linköping University, Department of Electrical Engineering, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1572.

Full text
Abstract:
Accurate 3D models of natural environments are important for many modelling and simulation applications, for both civilian and military purposes. When building 3D models from high-resolution data acquired by an airborne laser scanner, it is desirable to separate and classify the data to be able to process it further. For example, to build a polygon model of a building, the samples belonging to the building must be found. In this thesis we have developed, implemented (in IDL and ENVI), and evaluated algorithms for classification of buildings, vegetation, power lines, posts, and roads. The data is gridded and interpolated and a ground surface is estimated before the classification. For the building classification an object-based approach was used, unlike most classification algorithms, which are pixel based. The building classification has been tested and compared with two existing classification algorithms. The developed algorithm classified 99.6 % of the building pixels correctly, while the two other algorithms classified 92.2 % and 80.5 % of the pixels correctly, respectively. The algorithms developed for the other classes were tested with the following results (correctly classified pixels): vegetation, 98.8 %; power lines, 98.2 %; posts, 42.3 %; roads, 96.2 %.
APA, Harvard, Vancouver, ISO, and other styles
31

Zutautas, Vaidutis. "Charcoal Kiln Detection from LiDAR-derived Digital Elevation Models Combining Morphometric Classification and Image Processing Techniques." Thesis, Högskolan i Gävle, Samhällsbyggnad, GIS, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-24374.

Full text
Abstract:
This paper describes a unique method for the semi-automatic detection of historic charcoal production sites in LiDAR-derived digital elevation models. Intensified iron production in the early 17th century remarkably influenced how land in Sweden was managed. Today, the abundance of charcoal kilns embedded in the landscape survives as cultural heritage monuments that testify to the scale at which forest management for charcoal production contributed to the emerging iron manufacturing industry. An arbitrarily selected study area (54 km2) south-west of the city of Gävle served as an ideal testing ground, known to contain both already registered and unsurveyed charcoal kiln sites. The proposed approach combines morphometric classification methods with analytical image processing: an image representing refined terrain morphology is segmented, and a Hough circle transform is then applied to detect the circular shapes that represent charcoal kilns. Sites identified manually and with the proposed method were verified within an additionally established smaller validation area (6 km2). The accuracy of the resulting outcome was measured by calculating the harmonic mean of precision and recall (F1 score). Along with indicating previously undiscovered site locations, the proposed method achieved a relatively high score in recognising already registered sites after post-processing filtering. In spite of requiring continual fine-tuning, the described method can considerably facilitate the mapping and overall management of cultural resources.
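The circle-detection step the abstract relies on (a Hough circle transform over a segmented terrain image) can be illustrated with a small brute-force accumulator; in practice one would use a library implementation such as OpenCV's, and the parameter choices below are illustrative assumptions:

```python
import numpy as np

def hough_circles(edges, radii):
    """Brute-force circular Hough transform on a binary edge image.
    Returns (row, col, radius) of the strongest circle found."""
    h, wdt = edges.shape
    ys, xs = np.nonzero(edges)
    thetas = np.linspace(0.0, 2.0 * np.pi, 90, endpoint=False)
    best = (0, 0, 0, -1)                       # row, col, radius, votes
    for r in radii:
        acc = np.zeros((h, wdt), dtype=int)
        # each edge pixel votes for all centers at distance r from it
        cy = np.rint(ys[:, None] - r * np.sin(thetas)).astype(int)
        cx = np.rint(xs[:, None] - r * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < wdt)
        np.add.at(acc, (cy[ok], cx[ok]), 1)
        i = int(np.argmax(acc))
        if acc.flat[i] > best[3]:
            best = (i // wdt, i % wdt, r, acc.flat[i])
    return best[:3]
```

Sweeping the radius range over plausible kiln diameters and thresholding the vote count is what turns this into a semi-automatic detector of the kind described above.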
APA, Harvard, Vancouver, ISO, and other styles
32

Barsai, Gabor. "DATA REGISTRATION WITHOUT EXPLICIT CORRESPONDENCE FOR ADJUSTMENT OF CAMERA ORIENTATION PARAMETER ESTIMATION." The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1315855340.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Karlsson, Daniel. "Human Motion Tracking Using 3D Camera." Thesis, Linköping University, Department of Electrical Engineering, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-54426.

Full text
Abstract:
The interest in video surveillance has increased in recent years. Cameras are now installed in e.g. stores, arenas and prisons. The video data is analyzed to detect abnormal or undesirable events such as thefts, fights and escapes. At the Informatics Unit at the division of Information Systems, FOI in Linköping, algorithms are developed for automatic detection and tracking of humans in video data. This thesis deals with the target tracking problem when a 3D camera is used. A 3D camera creates images whose pixels represent the ranges to the scene. In recent years, new camera systems have emerged where the range images are delivered at up to video rate (30 Hz). One goal of the thesis is to determine how range data affects the frequency with which the measurement update part of the tracking algorithm must be performed. The performance of the 2D tracker and the 3D tracker is evaluated with both simulated data and measured data from a 3D camera. It is concluded that the errors in the estimated image coordinates are independent of whether range data is available or not. The small angle and the relatively large distance to the target explain the good performance of the 2D tracker. The 3D tracker, however, shows superior tracking ability (much smaller tracking error) if the comparison is made in world coordinates.
APA, Harvard, Vancouver, ISO, and other styles
34

Lundberg, Gustav. "Automatic map generation from nation-wide data sources using deep learning." Thesis, Linköpings universitet, Statistik och maskininlärning, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-170759.

Full text
Abstract:
The last decade has seen great advances within the field of artificial intelligence. One of the most noteworthy areas is that of deep learning, which is nowadays used in everything from self-driving cars to automated cancer screening. During the same time, the amount of spatial data encompassing not only two but three dimensions has also grown, and whole cities and countries are being scanned. Combining these two technological advances enables the creation of detailed maps with a multitude of applications, civilian as well as military. This thesis aims at combining two data sources covering most of Sweden, laser data from LiDAR scans and surface models from aerial images, with deep learning to create maps of the terrain. The target is to learn a simplified version of orienteering maps, as these are created with high precision by experienced map makers and represent how easy or hard it would be to traverse a given area on foot. The performance on different types of terrain is measured, and it is found that open land and larger bodies of water are identified at a high rate, while trails are hard to recognize. It is further researched how the different densities found in the source data affect the performance of the models; some terrain types, trails for instance, benefit from higher-density data, while other features of the terrain, like roads and buildings, are predicted with higher accuracy from lower-density data. Finally, the certainty of the predictions is discussed and visualised by measuring the average entropy of predictions in an area. These visualisations highlight that although the predictions are far from perfect, the models are more certain about their predictions when they are correct than when they are not.
APA, Harvard, Vancouver, ISO, and other styles
35

Agardt, Erik, and Markus Löfgren. "Pilot Study of Systems to Drive Autonomous Vehicles on Test Tracks." Thesis, Linköping University, Department of Electrical Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-12217.

Full text
Abstract:
This Master's thesis is a pilot study that investigates different systems to drive autonomous and non-autonomous vehicles simultaneously on test tracks. The thesis includes studies of communication, positioning, collision avoidance, and techniques for surveillance of vehicles which are suitable for implementation. The investigation results in a suggested system outline. Differential GPS combined with laser scanner vision is used for vehicle state estimation (position, heading, velocity, etc.). The state information is transmitted with IEEE 802.11 to all surrounding vehicles and the surveillance center. With this information, a Kalman prediction of the future position of all vehicles can be estimated and used for collision avoidance.
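The Kalman prediction step the abstract mentions can be sketched with a constant-velocity filter over 2D position fixes, extrapolated a short horizon ahead for collision checks. This is a minimal sketch under my own assumptions (constant-velocity model, illustrative noise parameters), not the thesis's actual design:

```python
import numpy as np

class CVKalman:
    """Constant-velocity Kalman filter over 2D position fixes (e.g. DGPS),
    with a helper that extrapolates the state for collision checks."""
    def __init__(self, dt, q=0.01, r=0.25):
        self.x = np.zeros(4)                   # state [px, py, vx, vy]
        self.P = np.eye(4) * 1e3               # large initial uncertainty
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = dt       # position integrates velocity
        self.H = np.eye(2, 4)                  # we measure position only
        self.Q = np.eye(4) * q                 # process noise (assumed)
        self.R = np.eye(2) * r                 # measurement noise (assumed)

    def step(self, z):
        # time update (predict)
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # measurement update with position fix z = [px, py]
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P

    def predict_ahead(self, horizon):
        # extrapolate the estimated state `horizon` seconds into the future
        F = np.eye(4)
        F[0, 2] = F[1, 3] = horizon
        return (F @ self.x)[:2]
```

Broadcasting each vehicle's state over IEEE 802.11, as the abstract suggests, would let every receiver run `predict_ahead` for all vehicles and flag predicted positions that come too close.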
APA, Harvard, Vancouver, ISO, and other styles
36

Henriksson, Tomas. "Driver Assistance Systemswith focus onAutomatic Emergency Brake." Thesis, KTH, Fordonsdynamik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-121306.

Full text
Abstract:
This thesis work aims at performing a survey of those technologies generally called Driver Assistance Systems (DAS). It focuses on gathering information in terms of accident statistics, sensors and functions, analyzing this information and, through the accessible information, matching functions with accidents, functions with sensors, etc. This analysis, based on accidents in the United States and Sweden during the period 1998–2002 and two truck accident studies, shows that of all accidents with fatalities or severe injuries involving a heavy truck, almost half are the result of a frontal impact. About one fourth of the accidents are caused by side impact, whereas single-vehicle and rear-impact collisions cause around 14 % each. Of these, about one fourth are collisions with unprotected road users (motorcycles, mopeds, bicycles, and pedestrians), whereas around 60 % are collisions with other vehicles. More than 90 % of all accidents are partly the result of driver error and about 75 % are directly the result of driver error. Hence there exists a great opportunity to reduce the number of accidents by introducing DAS. In this work, an analysis of DAS shows that six of the systems discussed today have the potential to prevent 40–50 % of these accidents, whereas 20–40 % are estimated to actually have a chance of being prevented. One of these DAS, the automatic emergency brake (AEB), has been analyzed in more detail. Decision models for an emergency brake capable of mitigating rear-end accidents have been designed and evaluated. The results show that this model has a high capability to mitigate collisions.
APA, Harvard, Vancouver, ISO, and other styles
37

Radermacher, Matthew Jeffery. "Pattern Recognition and Feature Extraction Using Lidar-Derived Elevation Models in GIS: A Comparison Between Visualization Techniques and Automated Methods for Identifying Prehistoric Ditch-Fortified Sites in North Dakota." Thesis, North Dakota State University, 2016. https://hdl.handle.net/10365/28010.

Full text
Abstract:
As technologies advance in the fields of geology and computer science, new methods in remote sensing, including data acquisition and analysis, make it possible to accurately model diverse landscapes. Archaeological applications of these systems are becoming increasingly popular, especially with regard to site prospection and the geospatial analysis of cultural features. Different methodologies were used to identify fortified ditch features of anthropogenic origin using aerial lidar from known prehistoric sites in North Dakota. The results were compared in an attempt to develop a system aimed at detecting similar, unrecorded morphological features on the landscape. The successful development of this program will allow archaeological investigators to review topography and locate specific features on the surface that otherwise could be difficult to identify as a result of poor visibility in the field. ND NASA EPSCoR; North Dakota State University.
APA, Harvard, Vancouver, ISO, and other styles
38

Bergström, Adam, and David Larsson. "Automatiserad mönsterigenkänning av stenmurar." Thesis, Högskolan i Gävle, Avdelningen för datavetenskap och samhällsbyggnad, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-30286.

Full text
Abstract:
Automated pattern recognition of stone walls, combining point cloud and image processing, can help identify areas that were previously inaccessible with image processing alone. This is important as stone walls are biotopes, serving as structures with ecological functions for both plants and animals. Automated pattern recognition can also help Sweden fulfil the national environmental quality objectives, as well as several EU commitments that promote the preservation of biological diversity and cultural heritage. However, conventional airborne laser scanning from airplanes has not provided sufficiently high point density or penetration of dense forests. This study therefore uses an improved Light Detection and Ranging (LiDAR) technology, with data collected by Single Photon LiDAR (SPL), and applies automated pattern recognition to that data to identify stone walls in varied terrain. After the evaluation, two out of five stone walls were identified: one covered 99.99 % of the target area and the other 75 %, although both also produced one false hit outside the desired area. The remaining missing area, as well as the other stone walls, could not be identified because of nearby factors such as shrubbery and trees. Even though the chosen method did not provide a 100 % match on all stone walls, the data from the SPL technology is still useful for pattern recognition thanks to its point density and penetration. The conclusion of this work is that the point cloud filtering must be improved, if not adapted for each stone wall area, to create better areas of interest before image-processing segmentation and pattern recognition can be applied.
However, the study shows that a combination of point cloud and image processing for automatic pattern recognition is a useful way of identifying stone walls.
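The point-cloud filtering step that the conclusion says must be improved can be pictured as a simple height-above-ground band filter that keeps returns at plausible wall heights. All names and thresholds below are illustrative assumptions, not the study's parameters:

```python
def wall_candidates(points, ground_z, z_min=0.2, z_max=1.5):
    """Keep (x, y, z) points whose height above a per-cell ground level
    falls in the band typical of a stone wall.

    points: iterable of (x, y, z) tuples
    ground_z: dict mapping integer cell (int(x), int(y)) -> ground elevation
    """
    kept = []
    for x, y, z in points:
        g = ground_z.get((int(x), int(y)))
        if g is not None and z_min <= z - g <= z_max:
            kept.append((x, y, z))
    return kept
```

The surviving points would then be rasterized into areas of interest for the image-processing stage.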
APA, Harvard, Vancouver, ISO, and other styles
39

Gidel, Samuel. "Méthode de détection et de suivi multi-piétons multi-capteurs embarquée sur un véhicule routier: application à un environnement urbain." Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2010. http://tel.archives-ouvertes.fr/tel-00719262.

Full text
Abstract:
The work presented in this thesis falls within the field of computer vision and concerns the detection and tracking of pedestrians located in the path of a road vehicle travelling in an urban environment. In this type of complex environment, one of the major difficulties is the ability to distinguish pedestrians from the many other obstacles on the roadway. Another essential point is being able to track them in order to predict their movements and thus, where necessary, avoid contact with the vehicle. Additional constraints arise in the industrial context of intelligent road vehicles: robust, real-time algorithms must be proposed using the least expensive sensors possible.
APA, Harvard, Vancouver, ISO, and other styles
40

Gao, Jizhou. "VISUAL SEMANTIC SEGMENTATION AND ITS APPLICATIONS." UKnowledge, 2013. http://uknowledge.uky.edu/cs_etds/14.

Full text
Abstract:
This dissertation addresses the difficulties of semantic segmentation when dealing with an extensive collection of images and 3D point clouds. Due to the ubiquity of digital cameras that help capture the world around us, as well as the advanced scanning techniques that are able to record 3D replicas of real cities, the sheer amount of visual data available presents many opportunities for both academic research and industrial applications. But the mere quantity of data also poses a tremendous challenge. In particular, the problem of distilling useful information from such a large repository of visual data has attracted ongoing interest in the fields of computer vision and data mining. Structural semantics are fundamental to understanding both natural and man-made objects. Buildings, for example, are like languages in that they are made up of repeated structures or patterns that can be captured in images. In order to find these recurring patterns in images, I present an unsupervised frequent visual pattern mining approach that goes beyond co-location to identify spatially coherent visual patterns, regardless of their shape, size, location, and orientation. First, my approach categorizes visual items from scale-invariant image primitives with similar appearance using a suite of polynomial-time algorithms that have been designed to identify consistent structural associations among visual items, representing frequent visual patterns. After detecting repetitive image patterns, I use unsupervised and automatic segmentation of the identified patterns to generate more semantically meaningful representations. The underlying assumption is that pixels capturing the same portion of image patterns are visually consistent, while pixels that come from different backdrops are usually inconsistent. I further extend this approach to perform automatic segmentation of foreground objects from an Internet photo collection of landmark locations.
New scanning technologies have successfully advanced the digital acquisition of large-scale urban landscapes. In addressing semantic segmentation and reconstruction of this data using LiDAR point clouds and geo-registered images of large-scale residential areas, I develop a complete system that simultaneously uses classification and segmentation methods to first identify different object categories and then apply category-specific reconstruction techniques to create visually pleasing and complete scene models.
APA, Harvard, Vancouver, ISO, and other styles
41

Fortin, Benoît. "Méthodes conjointes de détection et suivi basé-modèle de cibles distribuées par filtrage non-linéaire dans les données lidar à balayage." Phd thesis, Université du Littoral Côte d'Opale, 2013. http://tel.archives-ouvertes.fr/tel-01021085.

Full text
Abstract:
In multi-sensor perception systems, a central issue is the tracking of multiple objects. In this thesis, the main sensor is a scanning laser rangefinder that perceives extended targets. The multi-object tracking problem is generally decomposed into several steps (detection, association, and tracking) carried out sequentially or jointly. This work proposes alternatives to these methods by adopting a "track-before-detect" approach for distributed targets, which avoids the succession of processing stages by offering a global framework for solving this estimation problem. In a first part, we propose a detection method working directly in native (polar) coordinates, exploiting the geometric invariance properties of the tracked objects. This solution is then integrated into the JPDA and PHD multi-target tracking frameworks, solved using sequential Monte Carlo methods. The second part of the manuscript aims to dispense with the detector altogether and proposes a method in which the object model is directly integrated into the tracking process. It is on this key point that the most significant advances were made, leading to a joint detection-and-tracking method. An aggregation process was developed to formalize the data while avoiding any sub-optimal pre-processing. We finally proposed a general formalism for multi-sensor systems (multi-lidar, inertial measurement unit, GPS). From an applications standpoint, this work was validated in the field of vehicle tracking for driver assistance systems.
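The sequential Monte Carlo machinery underlying the JPDA/PHD and track-before-detect solutions can be illustrated with a minimal one-dimensional bootstrap particle filter. The random-walk motion model, Gaussian likelihood, and noise levels here are generic textbook choices, not the thesis's models:

```python
import math
import random

def particle_filter_step(particles, measurement, motion_std=0.5, meas_std=1.0):
    """One predict-update-resample cycle of a 1D bootstrap particle filter.

    particles: list of floats (state hypotheses)
    measurement: observed position
    Returns the resampled particle list.
    """
    # Predict: propagate each particle through a random-walk motion model.
    predicted = [p + random.gauss(0.0, motion_std) for p in particles]
    # Update: weight each particle by the Gaussian measurement likelihood.
    weights = [math.exp(-0.5 * ((p - measurement) / meas_std) ** 2)
               for p in predicted]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample: draw particles proportionally to their weights.
    return random.choices(predicted, weights=weights, k=len(predicted))
```

Iterating this step concentrates an initially diffuse particle cloud around the measured position, which is the behaviour the more elaborate multi-target filters build upon.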
APA, Harvard, Vancouver, ISO, and other styles
42

Wang, Wai-Lee, and 王偉立. "A Study of Automated Planar Feature Matching and Strip Adjustment of Lidar Point Clouds." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/16117935159759299896.

Full text
Abstract:
Master's thesis, National Cheng Kung University, Department of Geomatics (Master's and Doctoral Program), ROC year 96.
The systematic errors of airborne Lidar cause elevation offsets in point clouds. Strip adjustment is one way to reduce these systematic errors. With strip adjustment, the deformation parameters of each strip can be solved by means of corresponding blocks or tie points in overlapping strips. Nevertheless, the locations of corresponding blocks or tie points are usually selected manually. In order to find corresponding blocks, especially planar blocks, in overlapping strips automatically, a tensor voting algorithm is presented for detecting planes in Lidar data. In general, the topology of planes in strips remains similar even when systematic errors exist. In this research an artificial neural network method is adopted for matching planes with similar topology. The centres of gravity of matched conjugate planes are then used as tie points for strip adjustment. The advantage of this algorithm is that the selection of tie points can be executed automatically. In the strip adjustment experiments, this research discusses the influence of cross strips and of the distribution and number of tie points and control points. The experimental results show the feasibility of the algorithm for improving the height accuracy of airborne Lidar data.
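With the centres of gravity of matched conjugate planes as tie points, the simplest conceivable strip-adjustment model — a pure vertical offset between two strips — reduces to averaging the height differences at the tie points. The toy sketch below illustrates only that reduced case; the thesis's actual adjustment solves richer deformation parameters per strip:

```python
def vertical_offset(tie_points_a, tie_points_b):
    """Least-squares vertical offset between two strips.

    tie_points_a / tie_points_b: matched lists of (x, y, z) centroids of
    conjugate planes in strip A and strip B. For the model z_b = z_a + dz,
    the least-squares estimate of dz is the mean height difference.
    """
    dzs = [b[2] - a[2] for a, b in zip(tie_points_a, tie_points_b)]
    return sum(dzs) / len(dzs)

def apply_offset(points, dz):
    """Shift a strip's points by -dz so it matches the reference strip."""
    return [(x, y, z - dz) for x, y, z in points]
```

In a full adjustment the same residuals would instead feed a joint least-squares solution for offset, drift, and tilt parameters of every strip.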
APA, Harvard, Vancouver, ISO, and other styles
43

Chang, Yen-Shih, and 張晏獅. "Combination of Airborne LiDAR and color-photo images in automated interpretation of artificial building." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/32300602671774980934.

Full text
Abstract:
Master's thesis, National Cheng Kung University, Department of Earth Sciences (Master's and Doctoral Program), ROC year 95.
Airborne LiDAR scans the ground at rates on the order of 10,000 points per second, producing high-accuracy 3D point clouds that record surface coordinates. From these, a gridded digital elevation model (DEM) and digital surface model (DSM) can be generated, and a digital building model (DBM) can be obtained from the difference between the two. Such model data can compensate for the misclassification of artificial buildings in image interpretation caused by insufficient image resolution or uneven tone. This research exploits the advantages of aerial photographs, which describe the earth's surface in detail and carry multispectral information, combined with the ability of LiDAR to quickly provide a high-accuracy DTM. Using a "2D (aerial image) + 1D (LiDAR DTM)" concept, automated classification of artificial buildings is carried out, with three-dimensional auxiliary information and GIS overlay analysis used to correct errors and reduce the misclassification rate. Empirical analysis shows that direct image interpretation with the ENVI software produces omission and commission errors. After adding the DBM data, the aerial photograph, the interpreted image, and the DBM of the study area are overlaid in ArcMap (ArcGIS) for ratio and area analysis, which makes it easy to locate omission and commission errors. Supplementing the two-dimensional image data with DBM information thus genuinely assists the automated classification of artificial buildings in image interpretation and improves interpretation accuracy.
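The model-differencing idea at the heart of the abstract — a digital building model obtained per cell as DSM minus DEM — can be sketched as follows. The 2 m height threshold separating buildings from low clutter is an illustrative assumption:

```python
def building_mask(dsm, dem, min_height=2.0):
    """Per-cell DBM: cells where DSM - DEM exceeds min_height are flagged
    as likely buildings (or other tall above-ground objects).

    dsm, dem: 2D lists of equal shape (surface and terrain elevations).
    Returns a 2D list of booleans.
    """
    return [[(s - t) > min_height for s, t in zip(srow, trow)]
            for srow, trow in zip(dsm, dem)]
```

The resulting mask is what would then be overlaid on the interpreted aerial image to flag omission and commission errors.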
APA, Harvard, Vancouver, ISO, and other styles
44

Ma, Zeyu. "SLAM research for port AGV based on 2D LIDAR." Master's thesis, 2019. http://hdl.handle.net/10071/20314.

Full text
Abstract:
With the increase in international trade, the transshipment of goods at international container ports is very busy. The AGV (Automated Guided Vehicle) has been used as a new generation of automated equipment for the horizontal transport of containers. The AGV is an automated unmanned vehicle that can work 24 hours a day, increasing productivity and reducing labor costs compared to using container trucks. The ability to obtain information about the surrounding environment is a prerequisite for the AGV to complete tasks automatically in the port area. At present, AGV positioning and navigation based on RFID tags suffers from excessive cost. This dissertation investigates applying light detection and ranging (LIDAR) simultaneous localization and mapping (SLAM) technology to port AGVs. In this master's thesis, a mobile test platform based on a laser range finder is developed to scan 360-degree environmental information (distance and angle) centered on the LIDAR and upload the information to a real-time database to generate maps of the surrounding environment; an obstacle avoidance strategy was developed based on the acquired information. The effectiveness of the platform was verified by experiments in multiple scenarios. Then, based on the first platform, a second experimental platform with an encoder and an IMU sensor was developed. On this platform, SLAM functionality is enabled by the GMapping algorithm together with the encoder and IMU sensor. Based on the established SLAM map of the environment, the path planning and obstacle avoidance functions of the platform were realized.
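A minimal form of the obstacle-avoidance logic described — stop when any return within the forward sector of the 360-degree scan is closer than a safety distance — might look like this. The sector width and safety threshold are assumptions for illustration, not the platform's actual parameters:

```python
import math

def forward_clearance(scan, sector_deg=30.0):
    """Minimum range among beams within +/- sector_deg of straight ahead.

    scan: list of (angle_deg, range_m) pairs from a 360-degree 2D LIDAR,
    with 0 degrees pointing forward. Returns math.inf if the sector is empty.
    """
    in_sector = [r for a, r in scan
                 if abs((a + 180.0) % 360.0 - 180.0) <= sector_deg]
    return min(in_sector, default=math.inf)

def should_stop(scan, safety_m=0.5):
    """True when the closest forward return is inside the safety distance."""
    return forward_clearance(scan) < safety_m
```

The angle normalization maps, e.g., a beam at 350 degrees to -10 degrees so that the forward sector wraps correctly around zero.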
APA, Harvard, Vancouver, ISO, and other styles
45

"Automated detection of prehistoric conical burial mounds from LiDAR bare-earth digital elevation models: a thesis presented to the Department of Geology and Geography in candidacy for the degree of Master of Science." Diss., Maryville, Mo.: Northwest Missouri State University, 2009. http://www.nwmissouri.edu/library/theses/RileyMelanie/ThesisFinal.pdf.

Full text
Abstract:
Thesis (M.S.) -- Northwest Missouri State University, 2009.
The full text of the thesis is included in the pdf file. Title from title screen of full text.pdf file (viewed on July 17, 2009). Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
46

Ioannou, Yani Andrew. "Automatic Urban Modelling using Mobile Urban LIDAR Data." Thesis, 2010. http://hdl.handle.net/1974/5443.

Full text
Abstract:
Recent advances in Light Detection and Ranging (LIDAR) technology and integration have resulted in vehicle-borne platforms for urban LIDAR scanning, such as Terrapoint Inc.'s TITAN system. Such technology has led to an explosion in ground LIDAR data. The large size of such mobile urban LIDAR data sets, and the ease with which they may now be collected, has shifted the bottleneck of creating abstract urban models for Geographical Information Systems (GIS) from data collection to data processing. While turning such data into useful models has traditionally relied on human analysis, this is no longer practical. This thesis outlines a methodology for automatically recovering the necessary information to create abstract urban models from mobile urban LIDAR data using computer vision methods. As an integral part of the methodology, a novel scale-based interest operator is introduced (Difference of Normals) that is efficient enough to process large datasets, while accurately isolating objects of interest in the scene according to real-world parameters. Finally, a novel localized object recognition algorithm is introduced (Local Potential Well Space Embedding), derived from a proven global method for object recognition (Potential Well Space Embedding). The object recognition phase of our methodology is discussed with these two algorithms as a focus.
Thesis (Master, Computing) -- Queen's University, 2010-03-01.
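The Difference of Normals operator works on 3D point clouds, but the idea transfers to an easy-to-sketch 2D analogue: estimate the PCA normal of each point's neighbourhood at a small and at a large support radius, and flag points where the two disagree. The following is that 2D toy version, not the thesis's implementation; radii and the scoring are illustrative:

```python
import math

def normal_angle(points, center, radius):
    """Angle of the estimated normal of the 2D neighbourhood of `center`
    within `radius` (PCA: the normal is perpendicular to the direction of
    greatest spread)."""
    nbrs = [(x, y) for x, y in points if math.dist((x, y), center) <= radius]
    mx = sum(x for x, _ in nbrs) / len(nbrs)
    my = sum(y for _, y in nbrs) / len(nbrs)
    sxx = sum((x - mx) ** 2 for x, _ in nbrs)
    syy = sum((y - my) ** 2 for _, y in nbrs)
    sxy = sum((x - mx) * (y - my) for x, y in nbrs)
    tangent = 0.5 * math.atan2(2 * sxy, sxx - syy)  # dominant direction
    return tangent + math.pi / 2                    # normal is perpendicular

def difference_of_normals(points, center, r_small, r_large):
    """Angular difference between small- and large-scale normals: near zero
    on flat structures, large where the geometry changes with scale."""
    d = normal_angle(points, center, r_small) - normal_angle(points, center, r_large)
    return abs((d + math.pi / 2) % math.pi - math.pi / 2)  # wrap to [0, pi/2]
```

On a straight edge both scales agree (score near 0); at a corner the large radius sees both legs and the score jumps, which is the scale-salience the operator exploits.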
APA, Harvard, Vancouver, ISO, and other styles
47

Hsiao-Chu Hung and 洪曉竹. "Automatic Building Boundary Extraction From Airborne LiDAR Data." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/04170292496613634649.

Full text
Abstract:
Master's thesis, National Cheng Kung University, Department of Geomatics (Master's and Doctoral Program), ROC year 101.
2D topographic maps are the main map data recording information about objects on the ground in urban areas. With the progress of technology, 2D geographic information systems are moving toward 3D cyber cities. Both contain rich components, and the building boundary is one of the important components for the mapping of 2D digital topographic maps and the modeling of 3D city buildings. Photogrammetry is currently the common technique applied for building boundary generation. However, photogrammetric approaches to building reconstruction are still labor intensive and time consuming. Many studies have proposed the use of airborne LiDAR data to increase the automation of the reconstruction process. Airborne LiDAR data directly provide the three-dimensional coordinates and the reflectivity information of the scanned objects, resulting in a large number of discrete points with very high density, also known as LiDAR point clouds. However, the characteristics of objects such as building corners (point features), building boundaries (line features), and roofs (plane features) are only implicitly contained in the data set, and other processes or algorithms are needed to extract them for further applications. Airborne LiDAR points are not evenly distributed on building surfaces. Top surfaces, such as roofs, may have densely distributed points, but vertical surfaces, such as walls, usually have sparsely distributed points or even no points at all. This means that while plane features of roofs can be extracted reliably, plane features of walls can be very vague. Building boundaries, referring to the intersections of roof and wall planes, are therefore not clearly defined in the point clouds. To overcome this problem, this paper develops an algorithm to acquire building boundaries from airborne LiDAR data. Three major processing steps are included in the algorithm.
First, the point clouds are classified into building points and non-building points using the commercial software TerraScan, and an octree-based split-and-merge segmentation approach is applied to the building points to extract 3D plane features. Second, the building points and coplanar points are used to trace boundary points with a modified convex-hull algorithm, the so-called concave-hull algorithm. Boundary points of coplanar point groups and building points, together with the first and intermediate echo points of multi-return scans, are selected as candidate building boundary points. Third, the Hough transform, line fitting, and line segmentation are applied to find line segments belonging to building boundaries. The Hough transform detects collinear points; a line fitting process then obtains the best-fitting line through these collinear points, and the height of each straight line is obtained from the average z coordinate in a height fitting process. Several line segments, each represented by the two endpoints of its fitted line, are finally obtained. The test data in our experiments include ten buildings. The correctness and completeness of the results are checked by comparing the extracted boundaries with those in 2D digital topographic map data, verifying the feasibility of the algorithm. The relationship between point cloud distribution and the parameters of each step is also investigated in order to derive the most appropriate parameters. The experimental results show the effectiveness of the proposed method for automatic building boundary extraction from airborne LiDAR data, and that combining the first and intermediate echo points of multi-return scans with the boundary points increases the completeness of the boundaries. The extracted boundaries are promising for 3D building modelling in the future.
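The Hough-transform step for detecting collinear boundary points can be sketched with a coarse accumulator over (theta, rho) line parameters; the bin sizes below are illustrative assumptions, not the thesis's settings:

```python
import math

def hough_peak(points, n_theta=180, rho_step=0.5):
    """Find the strongest line among 2D points with a coarse Hough transform.

    Each point votes for every (theta, rho) cell satisfying
    rho = x*cos(theta) + y*sin(theta); the fullest cell wins.
    Returns (theta, rho) of the winning line.
    """
    votes = {}
    for x, y in points:
        for i in range(n_theta):
            theta = math.pi * i / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            key = (i, round(rho / rho_step))
            votes[key] = votes.get(key, 0) + 1
    i, rho_bin = max(votes, key=votes.get)
    return math.pi * i / n_theta, rho_bin * rho_step
```

In the full pipeline the points voting for the winning cell would then go to the line-fitting and height-fitting stages described above.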
APA, Harvard, Vancouver, ISO, and other styles
48

Kai-Hsuan, Chan, and 詹凱軒. "Automatic Generation of Building Model from Ground-Based LIDAR Data." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/32041068860695995694.

Full text
Abstract:
Master's thesis, National Chengchi University, Department of Computer Science, ROC year 95.
Ground-based LIDAR systems can be used to capture the surfaces of buildings on the earth. In general, they produce a large amount of high-precision point cloud data. These data include not only three-dimensional spatial information but also color information. However, the number of points is huge and difficult to display efficiently; efficient data processing techniques are necessary to display these point clouds in real time. In this research, we construct a three-dimensional building model using key points selected from a given set of point cloud data. Our scheme consists of three parts. In the first part, we extract key points from the given point cloud with the help of a three-dimensional grid; these key points are used to construct a primitive model of the building. Then, we check all the remaining points and decide whether each is essential to the final building model. Finally, we transform the color information into images and use the transformed images to represent the generic surface material of the three-dimensional building model, with the goal of making the model more realistic. In the experiments, we used the twin towers of our university as the target. We successfully reduced the data required to display the building model: only about one percent of the original point cloud is used in the final model. Hence, one can view the twin towers from various viewpoints in real time. In addition, we use VRML to describe the model, so users can browse the results in real time on the internet.
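The first part — extracting key points through a three-dimensional grid — can be approximated by a standard voxel-grid reduction that keeps one centroid per occupied cell. A minimal sketch, with the cell size as an assumption:

```python
def grid_keypoints(points, cell=1.0):
    """Reduce a point cloud by keeping one representative point (the
    centroid) per occupied cell of a 3D grid.

    points: iterable of (x, y, z) tuples; cell: grid spacing in the same
    units as the coordinates.
    """
    cells = {}
    for p in points:
        key = tuple(int(c // cell) for c in p)  # integer cell index per axis
        cells.setdefault(key, []).append(p)
    return [tuple(sum(axis) / len(pts) for axis in zip(*pts))
            for pts in cells.values()]
```

The percentage of points retained falls with the cell size, which is how a reduction to roughly one percent of the original cloud could be tuned.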
APA, Harvard, Vancouver, ISO, and other styles
49

Nalani, Hetti Arachchige. "Automatic Reconstruction of Urban Objects from Mobile Laser Scanner Data." Doctoral thesis, 2014. https://tud.qucosa.de/id/qucosa%3A28506.

Full text
Abstract:
Up-to-date 3D urban models are becoming increasingly important in various urban application areas, such as urban planning, virtual tourism, and navigation systems. Many of these applications often demand the modelling of 3D buildings, enriched with façade information, and also single trees among other urban objects. Nowadays, the Mobile Laser Scanning (MLS) technique is being progressively used to capture objects in urban settings, thus becoming a leading data source for the modelling of these two urban objects. The 3D point clouds of urban scenes consist of large amounts of data representing numerous objects with significant size variability, complex and incomplete structures, and holes (noise and data gaps) or variable point densities.
For this reason, novel strategies for processing mobile laser scanning point clouds, in terms of extracting and modelling salient façade structures and trees, are of vital importance. The present study proposes two new methods for the reconstruction of building façades and the extraction of trees from MLS point clouds. The first method aims at the reconstruction of building façades with explicit semantic information such as windows, doors and balconies, and runs automatically through all processing steps. For this purpose, several algorithms are introduced based on general knowledge of the geometric shape and structural arrangement of façade features. The initial classification is performed using a local height histogram analysis together with a planar growing method, which allows points to be classified as object or ground points. The point cloud labelled as object points is segmented into planar surfaces, which serve as the main entities in the feature recognition process. Knowledge of the building structure is used to define rules and constraints, which provide essential guidance for recognizing façade features and reconstructing their geometric models. To recognise features on a wall such as windows and doors, a hole-based method is implemented. Some holes that result from occlusion can subsequently be eliminated by means of a new rule-based algorithm. Boundary segments of a feature are connected into a polygon representing the geometric model by introducing a primitive-shape-based method, in which topological relations are analysed taking into account prior knowledge about the primitive shapes. Possible outlines are determined from the edge points detected by the angle-based method. Repetitive patterns and similarities are exploited to rectify geometrical and topological inaccuracies of the reconstructed models. 
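The height-histogram classification described in this abstract can be illustrated with a small sketch. This is not the thesis's implementation: the grid-cell test and the `cell` and `height_tol` parameters are invented here purely to show the idea of labelling points near the local minimum height as ground and the rest as object points.

```python
import numpy as np

def classify_ground(points, cell=1.0, height_tol=0.2):
    """Label points (N x 3 array) as ground when they lie near the
    lowest height of their local grid cell -- a crude local
    height-histogram test. Returns a boolean mask (True = ground)."""
    ij = np.floor(points[:, :2] / cell).astype(int)
    ground = np.zeros(len(points), dtype=bool)
    # For each occupied cell, keep points within height_tol of its minimum z.
    for key in {tuple(k) for k in ij}:
        mask = np.all(ij == key, axis=1)
        zmin = points[mask, 2].min()
        ground[mask] = points[mask, 2] <= zmin + height_tol
    return ground
```

Points flagged as object (the `False` entries) would then feed the planar-surface segmentation step that the abstract describes.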
Apart from developing the 3D façade model reconstruction scheme, the research focuses on individual tree segmentation and the derivation of attributes of urban trees. The second method aims at extracting individual trees from the remaining point clouds. Knowledge about trees, specifically pertaining to urban areas, is used in the tree extraction process. An innovative shape-based approach is developed to transfer this knowledge to machine language. The use of the principal direction for identifying stems is introduced, which consists of searching for point segments representing a tree stem. The output of the algorithm is segmented individual trees, which can be used to derive accurate information about the size and location of each individual tree. The reliability of the two methods is verified against three data sets obtained from different laser scanner systems. The results of both methods are quantitatively evaluated using a set of measures pertaining to the quality of the façade reconstruction and tree extraction. The performance of the developed algorithms with respect to façade reconstruction, tree stem detection and the delineation of individual tree crowns, as well as their limitations, is discussed. The results show that MLS point clouds are suited to documenting urban objects in rich detail, and that accurate measurements of the most important attributes of both object types (building façades and trees), such as window height and width, area, stem diameter, tree height, and crown area, are obtained with acceptable accuracy. The entire approach is suitable for reconstructing building façades and for correctly separating trees from various other urban objects, especially pole-like objects. Therefore, both methods are able to cope with data of heterogeneous quality. In addition, they provide flexible frameworks from which many extensions can be envisioned.
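The principal-direction stem test mentioned above can likewise be sketched. The function below is a hypothetical stand-in for the thesis's algorithm: it labels a point segment as stem-like when its dominant principal component is near-vertical and explains most of the variance. The `vertical_tol_deg` and `min_linearity` thresholds are illustrative values, not taken from the thesis.

```python
import numpy as np

def is_stem_segment(points, vertical_tol_deg=15.0, min_linearity=0.9):
    """Decide whether an N x 3 point segment is stem-like: its principal
    direction must be near-vertical and dominate the variance."""
    centered = points - points.mean(axis=0)
    # Eigen-decomposition of the covariance gives the principal directions.
    evals, evecs = np.linalg.eigh(np.cov(centered.T))
    principal = evecs[:, np.argmax(evals)]
    linearity = evals.max() / evals.sum()
    # Angle between the principal direction and the vertical (z) axis.
    tilt = np.degrees(np.arccos(abs(principal[2])))
    return tilt < vertical_tol_deg and linearity > min_linearity
```

Segments that pass this test would seed a stem, from which the crown points above could be grouped into an individual tree.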
APA, Harvard, Vancouver, ISO, and other styles
50

Lin, Jhih-Syuan, and 林志軒. "Development of a Lidar-Based Positioning System for an Automatic Guided Vehicle." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/h7k3a8.

Full text
Abstract:
Master's thesis<br>National Yunlin University of Science and Technology<br>Department of Mechanical Engineering<br>107<br>The coordinates and orientation of an Automatic Guided Vehicle (AGV) are calculated in an indoor environment using LIDAR scans. The distance between the vehicle and two mutually perpendicular walls is scanned over a range of −120° to 120°. Multiple line segments are first calculated from the data by the least-squares method. Two perpendicular lines are then obtained by dropping the outlier line segments based on the variation of their slopes. The coordinates and orientation of the vehicle can then be derived by determining the intersection of the two perpendicular lines and their orientations. It is found that an accuracy of better than 5 mm in distance and 1° in orientation can be obtained using this method. With the pose-estimation algorithm, closed-loop position control of the AGV is tested over an area of two meters by two meters. Experimental results show that an orientation accuracy of about 1° and a positioning accuracy of about 20 mm can be achieved. Keywords: Lidar, AGV, Data processing, Positioning.
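The pose-estimation idea in this abstract, fitting two perpendicular wall lines and intersecting them, can be sketched as follows. The sketch assumes the scan points are already split per wall (the thesis additionally drops outlier segments by slope variation before this step); the function names and the PCA-style total-least-squares line fit are this sketch's choices, not the thesis code.

```python
import numpy as np

def fit_line(points):
    """Total-least-squares fit of a 2D line via SVD: returns a point
    on the line (the centroid) and a unit direction vector."""
    centroid = points.mean(axis=0)
    # The first right-singular vector is the direction of maximum spread.
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[0]

def pose_from_walls(wall_a, wall_b):
    """Estimate the wall corner and the heading relative to wall A
    from point sets lying on two (near-)perpendicular walls."""
    pa, da = fit_line(wall_a)
    pb, db = fit_line(wall_b)
    # Solve pa + t*da = pb + s*db for the intersection (corner).
    # Degenerates if the two fitted lines are parallel.
    A = np.column_stack([da, -db])
    t, _ = np.linalg.solve(A, pb - pa)
    corner = pa + t * da
    heading = np.arctan2(da[1], da[0])  # orientation w.r.t. wall A
    return corner, heading
```

The vehicle pose then follows from the scanner's range and bearing to the recovered corner, which matches the intersection-based derivation the abstract describes.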
APA, Harvard, Vancouver, ISO, and other styles