
Dissertations / Theses on the topic 'Geospatial data'



Consult the top 50 dissertations / theses for your research on the topic 'Geospatial data.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Xiao, Jun. "WWW access to geospatial data." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0033/MQ65529.pdf.

2

Lee, Donald C. "Geospatial data sharing in Saudi Arabia." University of Southern Queensland, Faculty of Engineering and Surveying, 2003. http://eprints.usq.edu.au/archive/00001458/.

Abstract:
This research started with a realization that two organizations in Saudi Arabia were spending large amounts of money, millions of dollars in fact, in acquiring separate sets of geospatial data that had identical basemap components. Both organizations would be using the data for similar engineering purposes, yet both would be independently outsourcing the data gathering. In all probability, resources are being wasted through two organizations each developing and operating stand-alone geographic information systems and then populating the databases with geospatial data obtained separately. Surely with some cooperation, a shared database could be established, with a diffusion of economic benefits to both organizations. Preliminary discussions with representatives from both organizations revealed high levels of enthusiasm for the principle of sharing geospatial data, but the discussions also revealed even higher levels of scepticism that such a scheme could be implemented. This dichotomy of views prompted an investigation into the issues, benefits, and barriers involved in data sharing, the relative weight of these issues, and a quest for a workable model. Sharing geospatial data between levels of government, between governmental and private institutions, and within institutions themselves has been attempted on large and small scales in a variety of countries, with varying degrees of accomplishment. Lessons can be learned from these attempts at data sharing, confirming that success is not purely a function of financial and technical benefits, but is also influenced by institutional and cultural aspects. This research is aimed at defining why there is little geospatial data sharing between authorities in Saudi Arabia, and then presenting a workable model as a pilot arrangement.
This should take into account issues raised in reference material; issues evidenced through experience in the implementation of systems that were configured as independent structures; issues of culture; and issues apparent in a range of existing data sharing arrangements. The doubts expressed by engineering managers towards using a geospatial database that is shared between institutions in Saudi Arabia have been borne out by the complexity of interrelationships which this research has revealed. Nevertheless, by concentrating on a two-party entry level, a model has been presented which shows promise for the implementation of such a scheme. The model was derived empirically and checked against a case study of various other similar ventures, with a consideration of their applicability in the environment of Saudi Arabia. This model follows closely the generic structure of the Singapore Land Hub. The scalability of the model should allow it to be extended to other, multi-lateral data sharing arrangements. An alternative solution could be developed based on a Spatial Data Infrastructure model and this is suggested for ongoing investigation. Major unresolved questions relate to cultural issues, whose depth and intricacy have the potential to influence the realization of successful geospatial data sharing in the Kingdom of Saudi Arabia.
3

Berndtsson, Carl. "Open Geospatial Data for Energy Planning." Thesis, KTH, Energisystemanalys, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-186392.

Abstract:
Geographic information systems (GIS) are increasingly being used in energy planning and by private sector practitioners. Through qualitative interviews with 49 leading practitioners in the public and private sectors, this thesis establishes which data matter most, the current open-access data sources for energy access, and the information still lacking from those sources. The interviews revealed grid infrastructure, population density, renewable power potential and energy expenditure to be the most sought-after data for both practitioner groups. However, it was evident that the private sector had a stronger focus on land, water resource and climate data determining the renewable power potential for a specific area of interest, while the public sector focused on socioeconomic indicators and energy expenditure. A subsequent aggregation and analysis of the most desired datasets showed that a majority of the needed datasets were available, with the exception of energy expenditure. A least-cost option electrification model developed by KTH-dESA has proven to be a powerful tool in assessing the cost of nationwide electrification. This thesis compares the average least-cost option electrification cost for each region in Tanzania with a projected average income. The comparison showed that the average household cost for least-cost option electrification as a share of projected household income varies between regions. The average share per household in the western regions of Tanzania was significantly higher than for households in the central and eastern regions. The comparison was combined with the geographical locations of donor-supported energy development projects, showing that the majority of the projects were located in the central parts of Tanzania and were not targeting the most vulnerable households in the regions furthest away from the national grid.
In order to successfully introduce electricity nationwide in Tanzania, more support needs to be provided to the poorest regions. Open data aggregation and coordination are key to expanding the support that GIS can provide for energy access. Although multiple data sources have been identified, they are scattered, which leads to the same data being collected repeatedly. Coordinated efforts to share aggregated, up-to-date and freely accessible data can help reduce high transaction costs, helping to alleviate energy poverty.
4

Swain, Bradley Andrew. "Path understanding using geospatial natural language." [Pensacola, Fla.] : University of West Florida, 2009. http://purl.fcla.edu/fcla/etd/WFE0000182.

Abstract:
Thesis (M.S.)--University of West Florida, 2009.
Submitted to the Dept. of Computer Science. Title from title page of source document. Document formatted into pages; contains 45 pages. Includes bibliographical references.
5

Joshi, Kripa. "Combining Geospatial and Temporal Ontologies." Fogler Library, University of Maine, 2007. http://www.library.umaine.edu/theses/pdf/JoshiK2007.pdf.

6

Banks, Mitchakima D. "Maintaining Multimedia Data in a Geospatial Database." Thesis, Monterey, California. Naval Postgraduate School, 2012. http://hdl.handle.net/10945/17318.

Abstract:
Approved for public release; distribution is unlimited
The maintenance and organization of data in any profession, government or commercial, is becoming increasingly challenging. Adding components, whether two- or three-dimensional, further increases the complexity of databases, and it is harder to determine which database software best meets the needs of the organization. This thesis evaluates the performance of two databases as spatial functions are executed on columns containing spatial data, using benchmark testing. Evaluating the performance of spatial databases makes it possible to identify performance issues with spatial queries. The performance evaluation conducted in this thesis focuses on measuring the elapsed time within each database. Prior work on evaluating the performance of spatial databases did not explore a database's performance as it returned large and small result sets; the overhead of returning large or small result sets was not considered. Therefore, a custom test was developed that incorporates the aspects of prior work found beneficial. Using a database the researchers built with well over one million records, the elapsed time in adding records was measured, followed by the elapsed time of the spatial function queries. The results showed areas where each database excelled under various conditions, offering a different look at PostgreSQL and MySQL as spatial databases. As each database produced result sets ranging from zero to 100,000 records, it became clear that the performance of each database can differ depending on the volume of information it is expected to return.
7

Demšar, Urška. "Data mining of geospatial data: combining visual and automatic methods." Doctoral thesis, KTH, School of Architecture and the Built Environment (ABE), 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3892.

Abstract:

Most of the largest databases currently available have a strong geospatial component and contain potentially valuable information. The discipline concerned with extracting this information and knowledge is data mining. Knowledge discovery is performed by applying automatic algorithms which recognise patterns in the data.

Classical data mining algorithms assume that data are independently generated and identically distributed. Geospatial data are multidimensional, spatially autocorrelated and heterogeneous. These properties make classical data mining algorithms inappropriate for geospatial data, as their basic assumptions cease to be valid. Extracting knowledge from geospatial data therefore requires special approaches. One way to do that is to use visual data mining, where the data is presented in visual form for a human to perform the pattern recognition. When visual mining is applied to geospatial data, it is part of the discipline called exploratory geovisualisation.
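The spatial autocorrelation mentioned above is commonly quantified with Moran's I. As an illustrative aside (not taken from the thesis), a minimal pure-Python version, with the spatial weights matrix supplied by the caller:

```python
def morans_i(values, weights):
    """Moran's I for observations `values` under spatial weights `weights`.

    weights[i][j] is the weight between sites i and j (e.g. 0/1 contiguity).
    Positive I: neighbours resemble each other; random patterns give
    I near -1/(n-1).
    """
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    # Cross-product of deviations, weighted by spatial adjacency
    num = sum(weights[i][j] * dev[i] * dev[j]
              for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    w_sum = sum(weights[i][j] for i in range(n) for j in range(n))
    return (n / w_sum) * (num / den)
```

For four sites in a row holding values 1, 2, 3, 4 with rook contiguity, the smooth gradient yields a positive I, illustrating exactly the dependence that breaks the i.i.d. assumption.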

Both automatic and visual data mining have their respective advantages. Computers can treat large amounts of data much faster than humans, while humans are able to recognise objects and visually explore data much more effectively than computers. A combination of visual and automatic data mining draws together human cognitive skills and computer efficiency and permits faster and more efficient knowledge discovery.

This thesis investigates if a combination of visual and automatic data mining is useful for exploration of geospatial data. Three case studies illustrate three different combinations of methods. Hierarchical clustering is combined with visual data mining for exploration of geographical metadata in the first case study. The second case study presents an attempt to explore an environmental dataset by a combination of visual mining and a Self-Organising Map. Spatial pre-processing and visual data mining methods were used in the third case study for emergency response data.

Contemporary system design methods involve user participation at all stages. These methods originated in the field of Human-Computer Interaction, but have been adapted for the geovisualisation issues related to spatial problem solving. Attention to user-centred design was present in all three case studies, but the principles were fully followed only for the third case study, where a usability assessment was performed using a combination of a formal evaluation and exploratory usability.

8

Demšar, Urška. "Data mining of geospatial data: combining visual and automatic methods /." Stockholm : Department of urban planning and environment, Royal Institute of Technology, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3892.

9

Yang, Zhao. "Spatial Data Mining Analytical Environment for Large Scale Geospatial Data." ScholarWorks@UNO, 2016. http://scholarworks.uno.edu/td/2284.

Abstract:
Nowadays, many applications are continuously generating large-scale geospatial data. Vehicle GPS tracking data, aerial surveillance drones, LiDAR (Light Detection and Ranging), world-wide spatial networks, and high resolution optical or Synthetic Aperture Radar imagery data all generate a huge amount of geospatial data. However, as data collection increases, our ability to process this large-scale geospatial data in a flexible fashion is still limited. We propose a framework for processing and analyzing large-scale geospatial and environmental data using a “Big Data” infrastructure. Existing Big Data solutions do not include a specific mechanism to analyze large-scale geospatial data. In this work, we extend HBase with a spatial index (R-Tree) and HDFS to support geospatial data and demonstrate its analytical use with some common geospatial data types and the data mining technology provided by the R language. The resulting framework has a robust capability to analyze large-scale geospatial data using spatial data mining and makes its outputs available to end users.
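The filter-and-refine role that an R-Tree plays in the framework above can be illustrated, in a heavily simplified and hypothetical form, with a uniform grid index: a range query fetches candidates only from grid cells overlapping the query window and then checks them exactly. The cell size and names below are invented for illustration, not taken from the thesis:

```python
from collections import defaultdict

CELL = 10.0  # grid cell size in coordinate units (assumed)

def build_index(points):
    """Bucket (x, y) points into coarse grid cells."""
    index = defaultdict(list)
    for x, y in points:
        index[(int(x // CELL), int(y // CELL))].append((x, y))
    return index

def range_query(index, xmin, ymin, xmax, ymax):
    """Visit only cells overlapping the window, then refine exactly."""
    hits = []
    for cx in range(int(xmin // CELL), int(xmax // CELL) + 1):
        for cy in range(int(ymin // CELL), int(ymax // CELL) + 1):
            for x, y in index.get((cx, cy), []):
                if xmin <= x <= xmax and ymin <= y <= ymax:
                    hits.append((x, y))
    return hits
```

A real R-Tree adapts its rectangles to the data distribution instead of using fixed cells, but the query pattern — cheap coarse filter, exact refinement — is the same.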
10

Sherif, Mohamed Ahmed Mohamed. "Automating Geospatial RDF Dataset Integration and Enrichment." Doctoral thesis, Universitätsbibliothek Leipzig, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-215708.

Abstract:
Over the last years, the Linked Open Data (LOD) has evolved from a mere 12 to more than 10,000 knowledge bases. These knowledge bases come from diverse domains including (but not limited to) publications, life sciences, social networking, government, media, and linguistics. Moreover, the LOD cloud also contains a large number of cross-domain knowledge bases such as DBpedia and Yago2. These knowledge bases are commonly managed in a decentralized fashion and contain partly overlapping information. This architectural choice has led to knowledge pertaining to the same domain being published by independent entities in the LOD cloud. For example, information on drugs can be found in Diseasome as well as DBpedia and Drugbank. Furthermore, certain knowledge bases such as DBLP have been published by several bodies, which in turn has led to duplicated content in the LOD cloud. In addition, large amounts of geo-spatial information have been made available with the growth of the heterogeneous Web of Data. The concurrent publication of knowledge bases containing related information promises to become a phenomenon of increasing importance with the growth of the number of independent data providers. Enabling the joint use of the knowledge bases published by these providers for tasks such as federated queries, cross-ontology question answering and data integration is most commonly tackled by creating links between the resources described within these knowledge bases. Within this thesis, we spur the transition from isolated knowledge bases to enriched Linked Data sets where information can be easily integrated and processed. To achieve this goal, we provide concepts, approaches and use cases that facilitate the integration and enrichment of information with other data types that are already present on the Linked Data Web, with a focus on geo-spatial data. The first challenge that motivates our work is the lack of measures that use the geographic data for linking geo-spatial knowledge bases.
This is partly due to the geo-spatial resources being described by the means of vector geometry. In particular, discrepancies in granularity and error measurements across knowledge bases render the selection of appropriate distance measures for geo-spatial resources difficult. We address this challenge by evaluating existing literature for point set measures that can be used to measure the similarity of vector geometries. Then, we present and evaluate the ten measures that we derived from the literature on samples of three real knowledge bases. The second challenge we address in this thesis is the lack of automatic Link Discovery (LD) approaches capable of dealing with geospatial knowledge bases with missing and erroneous data. To this end, we present Colibri, an unsupervised approach that allows discovering links between knowledge bases while improving the quality of the instance data in these knowledge bases. A Colibri iteration begins by generating links between knowledge bases. Then, the approach makes use of these links to detect resources with probably erroneous or missing information. This erroneous or missing information detected by the approach is finally corrected or added. The third challenge we address is the lack of scalable LD approaches for tackling big geo-spatial knowledge bases. Thus, we present Deterministic Particle-Swarm Optimization (DPSO), a novel load balancing technique for LD on parallel hardware based on particle-swarm optimization. We combine this approach with the Orchid algorithm for geo-spatial linking and evaluate it on real and artificial data sets. The lack of approaches for automatic updating of links of an evolving knowledge base is our fourth challenge. This challenge is addressed in this thesis by the Wombat algorithm. Wombat is a novel approach for the discovery of links between knowledge bases that relies exclusively on positive examples. 
Wombat is based on generalisation via an upward refinement operator to traverse the space of Link Specifications (LS). We study the theoretical characteristics of Wombat and evaluate it on different benchmark data sets. The last challenge addressed herein is the lack of automatic approaches for geo-spatial knowledge base enrichment. Thus, we propose Deer, a supervised learning approach based on a refinement operator for enriching Resource Description Framework (RDF) data sets. We show how we can use exemplary descriptions of enriched resources to generate accurate enrichment pipelines. We evaluate our approach against manually defined enrichment pipelines and show that our approach can learn accurate pipelines even when provided with a small number of training examples. Each of the proposed approaches is implemented and evaluated against state-of-the-art approaches on real and/or artificial data sets. Moreover, all approaches are peer-reviewed and published in a conference or a journal paper. Throughout this thesis, we detail the ideas, implementation and the evaluation of each of the approaches. Moreover, we discuss each approach and present lessons learned. Finally, we conclude this thesis by presenting a set of possible future extensions and use cases for each of the proposed approaches.
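One classic point-set measure of the kind the thesis evaluates for comparing vector geometries is the symmetric Hausdorff distance. A brute-force sketch for small point sets (illustrative only; it is not claimed to be one of the thesis's ten derived measures):

```python
from math import dist  # Euclidean distance, Python 3.8+

def directed_hausdorff(a, b):
    """Greatest distance from any point of a to its nearest point of b."""
    return max(min(dist(p, q) for q in b) for p in a)

def hausdorff(a, b):
    """Symmetric Hausdorff distance between point sets a and b."""
    return max(directed_hausdorff(a, b), directed_hausdorff(b, a))
```

Because it is driven by the single worst-matched point, the measure is sensitive to exactly the granularity and error discrepancies between knowledge bases that the abstract identifies as problematic.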
11

Neupane, Samip. "Storing and Rendering Geospatial Data in Mobile Applications." ScholarWorks@UNO, 2017. http://scholarworks.uno.edu/honors_theses/90.

Abstract:
Geographical Information Systems and geospatial data are seeing widespread use in various internet and mobile mapping applications. One of the areas where such technologies can be particularly valuable is aeronautical navigation. Pilots use paper charts for navigation, which, in contrast to modern mapping software, have some limitations. This project aims to develop an iOS application for phones and tablets that uses a GeoPackage database containing aeronautical geospatial data, which is rendered on a map to create offline, feature-based mapping software to be used for navigation. Map features are selected from the database using R-Tree spatial indices. The attributes from each feature within the requested bounds are evaluated to determine the styling for that feature. Each feature, after applying the aforementioned styling, is drawn to an interactive map that supports basic zooming and panning functionalities. The application is written in Swift 3.0 and all features are drawn using iOS Core Graphics.
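Since a GeoPackage is an SQLite database and stores its spatial index as an R-Tree virtual table (columns id, minx, maxx, miny, maxy), selecting the features for the visible map extent reduces to a range query against that table. A sketch of the idea in Python rather than Swift, with an invented table and feature data, assuming SQLite is built with the RTREE extension:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# GeoPackage names such tables rtree_<table>_<column>; the name and
# feature bounding boxes here are invented for illustration.
con.execute("CREATE VIRTUAL TABLE rtree_features_geom "
            "USING rtree(id, minx, maxx, miny, maxy)")
con.executemany("INSERT INTO rtree_features_geom VALUES (?,?,?,?,?)",
                [(1, 0, 1, 0, 1),      # small feature near the origin
                 (2, 10, 11, 10, 11),  # distant feature
                 (3, 0.5, 2, 0.5, 2)]) # overlaps feature 1's neighbourhood

def features_in_view(xmin, xmax, ymin, ymax):
    """IDs of features whose bounding boxes intersect the view rectangle."""
    rows = con.execute(
        "SELECT id FROM rtree_features_geom "
        "WHERE maxx >= ? AND minx <= ? AND maxy >= ? AND miny <= ?",
        (xmin, xmax, ymin, ymax))
    return sorted(r[0] for r in rows)
```

Only the features returned here need their attributes evaluated for styling and drawing, which is what keeps panning and zooming responsive on a phone.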
12

He, Juan Xia. "An ontology-based methodology for geospatial data integration." Thesis, University of Ottawa (Canada), 2010. http://hdl.handle.net/10393/28710.

Abstract:
Data semantic and schematic heterogeneity is a major obstacle to the reuse and sharing of geospatial data. This research focuses on developing an ontology-based methodology to logically integrate heterogeneous geographic data in a cross-border context. Three main obstacles hindering data integration are semantic, schematic, and syntactic heterogeneity. Approaches to overcome these obstacles in previous research are reviewed. Among the different approaches, an ontology-based approach is selected for horizontal geospatial data integration in the context of cross-border applications. The integration methodology includes the extraction of application schemas and application ontologies, ontology integration, the creation of a reference model (or ontologies), schema matching and integration, and the creation of usable integrated datasets. The methodology is conceptual and integrates geospatial data based on the semantic content and so is not tied to specific data formats, geometric representations, or feature locations. In order to facilitate the integration procedure, four semantic relationships are used: refer-to, semantic equivalence, semantic generalization, and semantic aggregation. A hybrid ontology approach is employed in order to facilitate the addition of new geospatial data sources to the integration process. As such, three levels of ontologies are developed and illustrated within an MS Access database: application, domain, and a reference model. Furthermore, a working integration prototype is designed to facilitate the integration of geospatial data in the North American context given the semantic and schema heterogeneities in international Canadian-US geospatial datasets. The methodology and prototype provide users with the ability to freely query and retrieve data without knowledge of the heterogeneous data ontologies and schemas.
This is illustrated via a case study identifying critical infrastructure around the Ambassador Bridge international border crossing. The methodology and prototype are compared and evaluated with other GDI approaches and by criteria introduced by Buccella et al. (2009). Specific challenges unique to GDI were uncovered and include geographic discrepancies, scale compatibility and temporal issues.
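The semantic relationships used in the methodology above can be pictured as a small mapping table between source schemas and the reference model, so that a query phrased against a reference concept is rewritten for every contributing source. Everything below is an invented miniature, not the thesis's actual ontologies:

```python
# Each source attribute is linked to a reference-model concept through a
# declared semantic relationship. All dataset/attribute names are
# hypothetical stand-ins for cross-border (Canada-US) road schemas.
MAPPINGS = [
    # (source table, source attribute, reference concept, relationship)
    ("ca_roads", "chemin_type", "Road.classification", "semantic-equivalence"),
    ("us_roads", "fclass",      "Road.classification", "semantic-equivalence"),
    ("ca_roads", "pont",        "Road.structure",      "semantic-generalization"),
]

def rewrite_query(reference_concept):
    """(table, attribute) pairs that answer a reference-model query."""
    return [(t, a) for (t, a, c, _rel) in MAPPINGS if c == reference_concept]
```

The point of the indirection is the one the abstract makes: a user queries the reference model alone and never needs to know the heterogeneous source schemas.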
13

Sharad, Chakravarthy Namindi. "Public Commons for Geospatial Data: A Conceptual Model." Fogler Library, University of Maine, 2003. http://www.library.umaine.edu/theses/pdf/SharadCN2003.pdf.

14

Chisholm, S. "Extracting temporal dependencies from geospatial time series data." Thesis, University College London (University of London), 2016. http://discovery.ucl.ac.uk/1476865/.

Abstract:
In recent years, there have been significant advances in the technology used to collect data on the movement and activity patterns of humans and animals. GPS units, which form the primary source of location data, have become cheaper, more accurate, lighter and less power-hungry, and their accuracy has been further improved with the addition of inertial measurement units. The consequence is a glut of geospatial time series data, recorded at rates that range from one position fix every several hours (to maximise system lifetime) to ten fixes per second (in high dynamic situations). Since data of this quality and volume has only recently become available, the analytical methods to extract behavioural information from raw position data are at an early stage of development. An instance of this lies in the analysis of animal movement patterns. There are, broadly speaking, two types of animals: solitary animals and social animals. In the former case, the timing and location of instances of avoidance and association are important behavioural markers. In the latter case, the identification of periods and strengths of social interaction is a necessary precursor to social network analysis. In this dissertation, we present two novel analytical methods for extracting behavioural information from geospatial time series, one for each case. For solitary animals, a new method to detect avoidance and association between individuals is proposed; unlike existing methods, assumptions about the shape of the territories or the nature of individual movement are not needed. For social individuals, we have made significant progress in developing a method to test for cointegration; this measures the extent to which two non-stationary time series have a stationary linear relationship between them and can be used to assess whether a pair of animals move together. This method has more general application in time series analysis; for example, in financial time series analysis.
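The cointegration method described above follows the spirit of the Engle-Granger procedure: regress one series on the other, then ask whether the residuals are stationary. A deliberately simplified pure-Python sketch of that intuition (a real test would use an ADF statistic with proper critical values; all data below are invented):

```python
def ols_slope_intercept(x, y):
    """Least-squares fit y = alpha + beta * x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    beta = (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))
    return beta, my - beta * mx

def residual_ar1(x, y):
    """Lag-1 autoregression coefficient of the regression residuals.

    Values near 1 mean the residuals wander (no cointegration); values
    well below 1 suggest a stationary linear relationship, i.e. the two
    trajectories move together.
    """
    beta, alpha = ols_slope_intercept(x, y)
    r = [b - (alpha + beta * a) for a, b in zip(x, y)]
    num = sum(r[t] * r[t - 1] for t in range(1, len(r)))
    den = sum(v * v for v in r[:-1])
    return num / den
```

For two animal trajectories, x and y would be corresponding coordinate series; strongly mean-reverting residuals are the signature of a pair that moves together.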
15

Luescher, Samuel. "Beyond visualization : designing interfaces to contextualize geospatial data." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/82428.

Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2013.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 71-74).
The growing sensor data collections about our environment have the potential to drastically change our perception of the fragile world we live in. To make sense of such data, we commonly use visualization techniques, enabling public discourse and analysis. This thesis describes the design and implementation of a series of interactive systems that integrate geospatial sensor data visualization and terrain models with various user interface modalities in an educational context to support data analysis and knowledge building using part-digital, part-physical rendering. The main contribution of this thesis is a concrete application scenario and initial prototype of a "Designed Environment" where we can explore the relationship between the surface of Japan's islands, the tension that originates in the fault lines along the seafloor beneath its east coast, and the resulting natural disasters. The system is able to import geospatial data from a multitude of sources on the "Spatial Web", bringing us one step closer to a tangible "dashboard of the Earth."
Samuel Luescher.
S.M.
16

Wu, Bradley. "Development of MIT geospatial data to CityGML workflows." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/92098.

Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2014.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 48-50).
BlindAid is a virtual environment system that helps blind people learn in advance about places they plan to visit. A core component of the BlindAid system is the actual set of virtual models that represent physical locations. In order for BlindAid to be useful, there must be a means to generate new virtual environments. The research in this thesis explores the process of translating geospatial data received from MIT into the CityGML data format, which will then be used to generate virtual environments for BlindAid. We discuss the main challenge of preserving both geometry and semantic information with respect to different data formats. We also identify several initial workflows for geospatial data obtained from the Massachusetts Institute of Technology Department of Facilities, including Sketchup, GIS Shapefile, and Industry Foundation Class models. These workflows serve as a foundation that we can build upon to bring in a variety of geospatial data to BlindAid in the future.
by Bradley Wu.
M. Eng.
17

Foy, Andrew Scott. "Making Sense Out of Uncertainty in Geospatial Data." Diss., Virginia Tech, 2011. http://hdl.handle.net/10919/39175.

Abstract:
Uncertainty in geospatial data fusion is a major concern for scientists because society is increasing its use of geospatial technology and generalization is inherent to geographic representations. Limited research exists on the quality of results that come from the fusion of geographic data, yet there is extensive literature on uncertainty in cartography, GIS, and geospatial data. The uncertainties exist and are difficult to understand because data are overlaid which have different scopes, times, classes, accuracies, and precisions. There is a need for a set of tools that can manage uncertainty and incorporate it into the overlay process. This research explores uncertainty in spatial data, GIS and GIScience via three papers. The first paper introduces a framework for classifying and modeling error-bands in a GIS. Paper two tests GIS users' ability to estimate spatial confidence intervals and the third paper looks at the practical application of a set of tools for incorporating uncertainty into overlays. The results from this research indicate that it is hard for people to agree on an error-band classification based on their interpretation of metadata. However, people are good estimators of data quality and uncertainty if they follow a systematic approach and use their average estimate to define spatial confidence intervals. The framework and the toolset presented in this dissertation have the potential to alter how people interpret and use geospatial data. The hope is that the results from this paper prompt inquiry and question the reliability of all simple overlays. Many situations exist in which this research has relevance, making the framework, the tools, and the methods important to a wide variety of disciplines that use spatial analysis and GIS.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
18

Jacobson, Jared Neil. "Assessing OpenGL for 2D rendering of geospatial data." Diss., University of Pretoria, 2014. http://hdl.handle.net/2263/45917.

Full text
Abstract:
The purpose of this study was to investigate the suitability of using the OpenGL and OpenCL graphics application programming interfaces (APIs) to increase the speed at which 2D vector geographic information could be rendered. The research focused on rendering APIs available to the Windows operating system. In order to determine the suitability of OpenGL for efficiently rendering geographic data, this dissertation looked at how software- and hardware-based rendering performed; the results were then compared across the different rendering APIs. To collect the data necessary to achieve this, an in-depth study of geographic information systems (GIS), geographic coordinate systems, OpenGL and OpenCL was conducted. A simple 2D geographic rendering engine was then constructed using a number of graphics APIs, including GDI, GDI+, DirectX, OpenGL and the Direct2D API. The purpose of the developed rendering engine was to provide a tool on which to perform a number of rendering experiments. A large dataset was then rendered via each of the implementations, and the processing times as well as image quality were recorded and analysed. This research also investigated potential issues such as acquiring the data to be rendered for the API as fast as possible, which was needed to ensure saturation at the API level. Other aspects, such as difficulty of implementation and implementation differences, were examined. Additionally, leveraging the OpenCL API in conjunction with the TopoJSON storage format as a means of data compression was investigated. Compression is beneficial in that, to get optimal rendering performance from OpenGL, the graphic data to be rendered needs to reside in the graphics processing unit (GPU) memory bank; more data in GPU memory in turn theoretically provides faster rendering times. The aim was to utilise the extra processing power of the GPU to decode the data and then pass it to the OpenGL API for rendering and display.
This was achievable via OpenGL/OpenCL context sharing. The results of the research showed that on average, the OpenGL API provided a significant speedup of between nine and fifteen times that of GDI and GDI+. This means a faster and more performant rendering engine could be built with OpenGL at its core. Additional experiments show that the OpenGL API performed faster than GDI and GDI+ even when a dedicated graphics device is not present. A challenge early in the experiments was related to the supply of data to the graphic API. Disk access is orders of magnitude slower than the rest of the computer system. As such, in order to saturate the different graphics APIs, data had to be loaded into main memory. Using the TopoJSON storage format yielded decent data compression allowing a larger amount of data to be stored on the GPU. However, in an initial experiment, it took longer to process the TopoJSON file into a flat structure that could be utilised by OpenGL than to simply use the actual created objects, process them on the central processing unit (CPU) and then upload them directly to OpenGL. It is left as future work to develop a more efficient algorithm for converting from TopoJSON format to a format that OpenCL can utilise.
Dissertation (MSc)--University of Pretoria, 2014.
tm2015
Geography, Geoinformatics and Meteorology
MSc
Unrestricted
APA, Harvard, Vancouver, ISO, and other styles
19

Tully, D. A. "Contributions to big geospatial data rendering and visualisations." Thesis, Liverpool John Moores University, 2017. http://researchonline.ljmu.ac.uk/6685/.

Full text
Abstract:
Current geographical information systems lack features and components which are commonly found within rendering and game engines. When combined with computer game technologies, a modern geographical information system capable of advanced rendering and data visualisation is achievable. We have investigated the combination of big geospatial data and computer game engines for the creation of a modern geographical information system framework capable of visualising densely populated real-world scenes using advanced rendering algorithms. The pipeline imports raw geospatial data in the form of Ordnance Survey data provided by the UK government, LiDAR data provided by a private company, and the global open mapping project OpenStreetMap. The data sets are combined to produce additional terrain data where data is missing from the high-resolution LiDAR source, utilising interpolated Ordnance Survey data to fill the gaps. Once a high-resolution terrain data set that is complete in its coverage has been generated, sub-datasets can be extracted from the LiDAR using OSM boundary data as a perimeter. The boundaries of OSM represent buildings or assets; data such as building heights can then be extracted and used to update the OSM database. Using a novel adjacency matrix extraction technique, 3D model mesh objects can be generated using both LiDAR and OSM information. The generation of model mesh objects created from OSM data utilises procedural content generation techniques, enabling the generation of GIS-based 3D real-world scenes. Although LiDAR and Ordnance Survey data are only available for the UK, restricting that part of the generation to UK borders, using OSM alone the system is able to procedurally generate any place in the world covered by OSM.
In this research, to manage the large amounts of data, a novel scenegraph structure has been generated to spatially separate OSM data according to OS coordinates, splitting the UK into 1-kilometre-square tiles and categorising OSM assets such as buildings, highways, and amenities. Once spatially organised and categorised as assets of importance, the novel scenegraph allows for data dispersal through an entire scene in real time. The 3D real-world scenes visualised within the runtime simulator can be manipulated in four main aspects:
• Viewing at any angle or location through the use of a 3D and 2D camera system.
• Modifying the effects or effect parameters applied to the 3D model mesh objects to visualise user-defined data, by use of our novel algorithms and unique lighting data-structure effect file with accompanying material interface.
• Procedurally generating animations which can be applied to the spatial parameters of objects, or the visual properties of objects.
• Applying the Indexed Array Shader Function and taking advantage of the novel big geospatial scenegraph structure to exploit better rendering techniques in the context of a modern geographical information system, which, to the best of our knowledge, has not been done before.
Combined with a novel scenegraph structure layout, the user can view and manipulate real-world procedurally generated worlds with additional user-generated content in a number of unique ways not available within current geographical information system implementations. We evaluate multiple functionalities and aspects of the framework. We evaluate the performance of the system by stress testing, measuring frame rates with maps of multiple sizes, as well as evaluating the benefits of the novel scenegraph structure for categorising, separating, manoeuvring, and data dispersal. We also evaluate uniform scaling by n² of scenegraph nodes which contain no model mesh data, procedurally generated model data, and user-generated model data.
The experiment compared runtime parameters and memory consumption. We have compared the technical features of the framework against those of related commercial and research projects: Google Maps, OSM2World, OSM-3D, OSM-Buildings, OpenStreetMap, ArcGIS, Sustainability Assessment Visualisation and Enhancement (SAVE), and Autonomous Learning Agents for Decentralised Data and Information (ALLADIN). We conclude that, when compared to related research, the framework produces data-sets relevant for visualising geospatial assets from the combination of real-world data-sets, capable of being used by a multitude of external game engines, applications, and geographical information systems. The ability to manipulate the production of said data-sets at pre-compile time, provided by the pre-processor, aids processing speeds for runtime simulation. The added benefit is that users can manipulate the spatial and visual parameters in a number of varying ways with minimal domain knowledge. The features of creating procedural animations attached to each of the spatial parameters and visual shading parameters allow users to view and encode their own representations of scenes, which is unavailable in all of the products stated. Each of the alternative projects has similar features, but none allows full animation of all parameters of an asset, spatially or visually, or both. We also evaluated the framework on its implemented features, implementing the needed algorithms and novelties as problems arose during development. An example is the algorithm for combining the multiple terrain data-sets we have (Ordnance Survey terrain data and Light Detection and Ranging Digital Surface Model and Digital Terrain Model data) in a justifiable way to produce maps with no missing data values for further analysis and visualisation.
A majority of visualisations are rendered using an Indexed Array Shader Function effect file, structured as a novel design to encapsulate common rendering effects found in commercial computer games and apply them to the rendering of real-world assets for a modern geographical information system. Maps of various sizes, in both dimensions, polygonal density, asset counts, and memory consumption, prove successful in relation to real-time rendering parameters, i.e. the visualisation of maps does not create a bottleneck for processing. The visualised scenes allow users to view large, dense environments which include terrain models within procedural and user-generated buildings, highways, amenities, and boundaries. The use of a novel scenegraph structure allows for fast iteration and search from user-defined dynamic queries. Interaction with the framework is provided through a novel Interactive Visualisation Interface. Utilising the interface, a user can apply procedurally generated animations to both the spatial and visual properties of any node or model mesh within the scene. We conclude that the framework has been a success. We have completed what we set out to develop and create: we have combined multiple data-sets to create improved terrain data-sets for further research and development, and we have created a framework which combines the real-world data of Ordnance Survey, LiDAR, and OpenStreetMap, and implemented algorithms to create procedural assets of buildings, highways, terrain, amenities, model meshes, and boundaries for visualisation, with features which allow users to search and manipulate a city's worth of data on a per-object basis or in user-defined combinations. The successful framework has been built by the cross-domain specialism needed for such a project.
We have combined the areas of computer games technology, engine and framework development, procedural generation techniques and algorithms, use of real-world data-sets, geographical information system development, data parsing, big-data algorithmic reduction techniques, and visualisation using shader techniques.
APA, Harvard, Vancouver, ISO, and other styles
20

Amino, Robert. "Topographic building pattern recognition with geospatial OpenStreetMap data." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-231795.

Full text
Abstract:
This paper aims to explore the perceptual recognition of topographical building patterns from real-world OpenStreetMap data on virtual globes. An implementation was developed in which all geographical and contextual information was layered and, for the purpose of this study, what solely remained were building patterns as viewed from above. This was developed as a module for the planetarium visualization software Uniview. The aim was to determine how cities with different building patterns were perceived by participants in terms of size, scale, and building density. This was measured as the comparative difference between city pairs, that is, how much they differed in the percentage of the area that they covered. Two quantitative studies were conducted, one smaller controlled study with 19 participants and one larger online crowd-sourced study with 72 participants. The results show that participants are generally able to discern building patterns when the comparative difference is greater than a certain critical threshold. This critical threshold was determined to be at approximately 0.5% for both studies and for accuracy levels above 60%. Thus it was concluded that below this critical threshold users should be provided with visual feedback or other means of identifiers in order to allow for definite recognition, depending on what kind of information a certain type of visualization is trying to convey.
APA, Harvard, Vancouver, ISO, and other styles
21

Virinchi, Billa. "Data Visualization of Telenor mobility data." Thesis, Blekinge Tekniska Högskola, Institutionen för kommunikationssystem, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-13951.

Full text
Abstract:
Nowadays, with the rapid development of cities, understanding the mobility patterns of subscribers is crucial for urban planning and for network infrastructure deployment. Mobile phones, which people carry in their daily activities for communication purposes, are today the electronic devices used for analyzing the mobility patterns of the subscribers in the network. For effective utilization of network infrastructure (NI) there is a need to study the mobility patterns of subscribers. The aim of the thesis is to simulate the geospatial Telenor mobility data (i.e. three different subscriber-categorized segments) and provide visual support in Google Maps using the Google Maps API, which helps telecommunication operators make decisions for effective utilization of network infrastructure (NI). This thesis has two major objectives: first, to categorize the given geospatial Telenor mobility data using the subscriber mobility algorithm; second, to provide visual support for the obtained categorized geospatial Telenor mobility data in Google Maps using a geovisualization simulation tool. The algorithm used to categorize the given geospatial Telenor mobility data is the subscriber mobility algorithm, which categorizes the subscribers into three different segments (i.e. infrastructure stressing, medium, friendly). For validation and confirmation of the subscriber mobility algorithm, a Tetris optimization model is used. To give visual support for each categorized segment, a simulation tool was developed; it displays the visualization results in Google Maps using the Google Maps API. The results of this thesis are presented against the above formulated objectives. By applying the subscriber mobility algorithm and the Tetris optimization model to a geospatial data set of 33,045 subscribers, only 1,400 subscribers were found to be infrastructure-stressing subscribers. To be informative, a small region (i.e.
the Borås region) is taken to visualize the subscribers from each of the categorized segments (i.e. infrastructure stressing, medium, friendly). The conclusion of the thesis is that the functionality thus developed contributes to knowledge discovery from geospatial data and provides visual support for decision making to telecommunication operators.
APA, Harvard, Vancouver, ISO, and other styles
22

Zhang, Chengyang. "Toward a Data-Type-Based Real Time Geospatial Data Stream Management System." Thesis, University of North Texas, 2011. https://digital.library.unt.edu/ark:/67531/metadc68070/.

Full text
Abstract:
The advent of sensory and communication technologies enables the generation and consumption of large volumes of streaming data. Many of these data streams are geo-referenced. Existing spatio-temporal databases and data stream management systems are not capable of handling real time queries on spatial extents. In this thesis, we investigated several fundamental research issues toward building a data-type-based real time geospatial data stream management system. The thesis makes contributions in the following areas: geo-stream data models, aggregation, window-based nearest neighbor operators, and query optimization strategies. The proposed geo-stream data model is based on second-order logic and multi-typed algebra. Both abstract and discrete data models are proposed and exemplified. I further propose two useful geo-stream operators, namely Region By and WNN, which abstract common aggregation and nearest neighbor queries as generalized data model constructs. Finally, I propose three query optimization algorithms based on spatial, temporal, and spatio-temporal constraints of geo-streams. I show the effectiveness of the data model through many query examples. The effectiveness and the efficiency of the algorithms are validated through extensive experiments on both synthetic and real data sets. This work established the fundamental building blocks toward a full-fledged geo-stream database management system and has potential impact in many applications such as hazardous weather alerting and monitoring, traffic analysis, and environmental modeling.
APA, Harvard, Vancouver, ISO, and other styles
23

Yang, Zhao. "A Framework to Annotate the Uncertainty for Geospatial Data." ScholarWorks@UNO, 2012. http://scholarworks.uno.edu/td/1528.

Full text
Abstract:
We have developed a new approach to annotate the uncertainty information of geospatial data. This framework is composed of a geospatial platform and the data with uncertainty. The framework supports geospatial sources such as Geography Markup Language (GML) with uncertainty information. The purpose of this framework is to integrate the uncertainty information of data from the application users and thereby ease the development of processing uncertainty information of geospatial data. Having well-organized data and using this framework, end-users can store the uncertainty information in the current geospatial data structure. For example, a GIS user can share the error information for environmental and geospatial data with others. We also report on an enhanced geographic information system function.
APA, Harvard, Vancouver, ISO, and other styles
24

Kasperi, Johan. "Occlusion in outdoor Augmented Reality using geospatial building data." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-204442.

Full text
Abstract:
Creating physical simulations between virtual and real objects in Augmented Reality (AR) is essential for the user experience. Otherwise the user might lose their sense of depth, distance and size. One of these simulations is occlusion, meaning that virtual content should be partially or fully occluded if real world objects are in the line-of-sight between the user and the content. The challenge in simulating occlusion is constructing the geometric model of the current AR environment. Earlier studies within the field have all tried to create realistic pixel-perfect occlusion, and most of them have required either depth-sensing hardware or a static predefined environment. This study proposes and evaluates an alternative model-based approach to the problem. It uses geospatial data to construct the geometric model of all the buildings in the current environment, making virtual content occluded by all real buildings around the user. This approach made the developed function compatible with non-depth-sensing devices and usable in a dynamic outdoor urban environment. To evaluate the solution, it was implemented in a sensor-based AR application visualizing a future building in Stockholm. The effect of the developed function was that the future virtual building was occluded as expected. The simulated occlusion was not pixel-perfect, and thus not fully realistic, but results from the conducted user study indicated that it fulfilled its goal. A majority of the participants thought that their AR experience was better with the solution activated and that their depth perception improved. However, no definite conclusions could be drawn due to issues with the sensor-based tracking. The result from this study is interesting for the mobile AR field since the great majority of smartphones are not equipped with depth sensors.
Using geospatial data for simulating occlusions, or other physical interactions between virtual and real objects, could then be an efficient enough solution until depth-sensing AR devices are more widely used.
APA, Harvard, Vancouver, ISO, and other styles
25

Scott, Kara E. "Evaluating an improved algorithm for segregating large geospatial data /." Available to subscribers only, 2007. http://proquest.umi.com/pqdweb?did=1324372541&sid=24&Fmt=2&clientId=1509&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Mohd, Yunus Mohd Zulkifli. "Geospatial data management throughout large area civil engineering projects." Thesis, University of Newcastle Upon Tyne, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.360241.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Black, Jennifer. "On the derivation of value from geospatial linked data." Thesis, University of Southampton, 2013. https://eprints.soton.ac.uk/358899/.

Full text
Abstract:
Linked Data (LD) is a set of best practices for publishing and connecting structured data on the web. LD and Linked Open Data (LOD) are often conflated, to the point where there is an expectation that LD will be free and unrestricted. The current research looks at deriving commercial value from LD. When both free and paid-for data are available, the issue arises of how users will react to a situation where two or more options are provided. The current research examines the factors that affect the choices made by users, and subsequently created prototypes for users to interact with, in order to understand how consumers reacted to each of the different options. Our examination of commercial providers of LD uses Ordnance Survey (OS) (the UK national mapping agency) as a case study, studying their requirements for and experiences of publishing LD; we further extrapolate from this by comparing the OS to other potential commercial publishers of LD. Our research looks at the business case for LD and introduces the concepts of LOD and Linked Closed Data (LCD). We also determine that there are two types of LD users, non-commercial and commercial, and as such two types of use of LD: LD as a raw commodity and LD as an application. Our experiments aim to identify the issues users would find where LD is accessed via an application. Our first investigation brought together technical users and users of Geographic Information (GI). With the idea of LOD and LCD, we asked users what factors would affect their view of data quality. We found three different types of buying behaviour on the web. We also found that context actively affected the users' decisions, i.e. users were willing to pay when the data was used to make a professional decision but not for leisure use.
To enable us to observe the behaviour of consumers whilst using data online, we built a working prototype of a LD application that would enable potential users of the system to experience the data and give us feedback about how they would behave in a LD environment. This was then extended into a second LD application to find out whether the same principles held true when actual capital was involved and users had to make a conscious decision regarding payment. With this in mind, we proposed a potential architecture for the consumption of LD on the web. We identified potential quality-related issues which affect a consumer's willingness to pay for data. This supported our hypothesis that context affects a consumer's willingness to pay and that willingness to pay is related to a requirement to reduce search times. We also found that a consumer's perception of value and criticality of purpose affected their willingness to pay. Finally, we outlined an architecture to enable users to use LD in different scenarios which may involve payment restrictions. This work is our contribution to the issue of the business case for LD on the web and is a starting point.
APA, Harvard, Vancouver, ISO, and other styles
28

Zeitz, Kimberly Ann. "An Optimized Alert System Based on Geospatial Location Data." Thesis, Virginia Tech, 2014. http://hdl.handle.net/10919/49265.

Full text
Abstract:
Crises are spontaneous and highly variable events that lead to life threatening and urgent situations. As such, crisis and emergency notification systems need to be both flexible and highly optimized to quickly communicate to users. Implementing the fastest methods, however, is only half of the battle. The use of geospatial location is missing from alert systems utilized at university campuses across the United States. Our research included the design and implementation of a mobile application addition to our campus notification system. This addition is complete with optimizations including an increase in the speed of delivery, message differentiation to enhance message relevance to the user, and usability studies to enhance user trust and understanding. Another advantage is that our application performs all location data computations on the user device with no external storage to protect user location privacy. However, ensuring the adoption of a mobile application that requests location data permissions and relating privacy measures to users is not a trivial matter. We conducted a campus-wide survey and interviews to understand mobile device usage patterns and obtain opinions of a representative portion of the campus population. These findings guided the development of this mobile application and can provide valuable insights which may be helpful for future application releases. Our addition of a mobile application with geospatial location awareness will send users relevant alerts at speeds faster than those of the current campus notification system while still guarding user location privacy, increasing message relevance, and enhancing the probability of adoption and use.
Master of Science
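The on-device relevance check this abstract describes can be illustrated with a great-circle (haversine) distance test; a minimal Python sketch, where the coordinates, radius, and function names are illustrative assumptions rather than details taken from the thesis:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two WGS84 points, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def alert_is_relevant(user_pos, incident_pos, radius_km=1.0):
    """Decide locally whether an alert concerns this user.

    Computed entirely on the device, so no location data is sent
    to an external server (the privacy property the thesis stresses).
    """
    return haversine_km(*user_pos, *incident_pos) <= radius_km

# Hypothetical coordinates near the Virginia Tech campus
user = (37.2296, -80.4139)
incident = (37.2284, -80.4234)
print(alert_is_relevant(user, incident))
```

The alert itself would still be broadcast to every device; only the decision to surface it to the user is made locally.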
APA, Harvard, Vancouver, ISO, and other styles
29

Odoi, Ebenezer Attua Jr. "Improving the Visualization of Geospatial Data Using Google’s KML." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1338836407.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Marney, Katherine Anne. "Geospatial metadata and an ontology for water observations data." Thesis, [Austin, Tex.]: University of Texas, 2009. http://hdl.handle.net/2152/ETD-UT-2009-05-135.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Wylie, Austin. "Geospatial Data Modeling to Support Energy Pipeline Integrity Management." DigitalCommons@CalPoly, 2015. https://digitalcommons.calpoly.edu/theses/1447.

Full text
Abstract:
Several hundred thousand miles of energy pipelines span the whole of North America -- responsible for carrying the natural gas and liquid petroleum that power the continent's homes and economies. These pipelines, so crucial to everyday goings-on, are closely monitored by various operating companies to ensure they perform safely and smoothly. Happenings like earthquakes, erosion, and extreme weather, however -- and human factors like vehicle traffic and construction -- all pose threats to pipeline integrity. As such, there is a tremendous need to measure and indicate useful, actionable data for each region of interest, and operators often use computer-based decision support systems (DSS) to analyze and allocate resources for active and potential hazards. We designed and implemented a geospatial data service, REST API for Pipeline Integrity Data (RAPID) to improve the amount and quality of data available to DSS. More specifically, RAPID -- built with a spatial database and the Django web framework -- allows third-party software to manage and query an arbitrary number of geographic data sources through one centralized REST API. Here, we focus on the process and peculiarities of creating RAPID's model and query interface for pipeline integrity management; this contribution describes the design, implementation, and validation of that model, which builds on existing geospatial standards.
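A centralized geospatial REST service of the kind RAPID describes is typically queried by layer and bounding box; the sketch below shows only client-side query construction, and the host, endpoint, and parameter names are hypothetical, not RAPID's actual interface:

```python
from urllib.parse import urlencode

def build_features_query(base_url, layer, bbox, fmt="geojson"):
    """Compose a bounding-box query against a RAPID-style REST endpoint.

    bbox is (min_lon, min_lat, max_lon, max_lat); the endpoint and
    parameter names here are illustrative assumptions.
    """
    params = {
        "layer": layer,
        "bbox": ",".join(f"{v:.4f}" for v in bbox),
        "format": fmt,
    }
    return f"{base_url}/features?{urlencode(params)}"

url = build_features_query(
    "https://rapid.example.org/api/v1",
    layer="seismic_hazard",
    bbox=(-120.67, 35.27, -120.60, 35.32),  # roughly San Luis Obispo
)
print(url)
```

A DSS client would issue such requests against the one centralized API instead of ingesting each upstream repository itself.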
APA, Harvard, Vancouver, ISO, and other styles
32

Eucker, William. "A geospatial analysis of Arctic marine traffic." Thesis, University of Cambridge, 2012. https://www.repository.cam.ac.uk/handle/1810/248854.

Full text
Abstract:
Recent changes in Arctic Ocean climate dynamics and marine activity in the region require re-evaluation of physical operating conditions, ship traffic patterns, and policy requirements. This study used (1) government surveys, (2) vessel reports, and (3) Automatic Identification System (AIS) messages to characterize the spatial and temporal variability of surface vessel traffic in relation to various sea-ice conditions on the Arctic Ocean during a year-long study from 1 April 2010 to 31 March 2011. Data sources, methods of analysis, and errors were discussed. Three principal topics were examined. First, sea-ice cover on the Arctic Ocean was analysed to determine the physical access for marine operations. Daily sea-ice concentration data based on satellite passive microwave measurements were used to calculate the extent of open water and duration of the sea-ice season. Second, ship traffic on the Arctic Ocean was analysed to determine the present patterns of human activity. Time-stamped AIS messages encoded with Global Navigation Satellite System (GNSS) positions received by a commercial satellite constellation from north of the Arctic Circle (66·56°N) were used to calculate the distribution of vessels per unit area. Satellite AIS data from SpaceQuest, Limited, were compared with land-based vessel observations during the study period from the Marine Exchange of Alaska and the Port of Longyearbyen. Third, the spatial and temporal relationship between sea ice and surface vessels on the Arctic Ocean was analysed to determine potential policy implications. Three groups of marine operations with distinct characteristics were determined from the analysis: operations in perennial open water, operations in the seasonal ice zone, and operations in the perennial ice zone. Throughout the study year, most ships north of 66·56°N operated in perennially ice-free areas, but year-round operations also occurred in ice-covered areas. 
The results from this study identify new pathways of information to enable consistent pan-Arctic assessment of physical operating conditions and ship traffic patterns. This approach provides novel considerations to sustainably develop a safe, secure, and environmentally protected Arctic Ocean.
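The per-unit-area vessel distribution mentioned above can be approximated by binning AIS position reports into grid cells; a simplified Python sketch with invented position reports (the thesis used satellite AIS data, not these values):

```python
from collections import Counter

def grid_cell(lat, lon, cell_deg=1.0):
    """Snap a position to the lower-left corner of its grid cell."""
    return (int(lat // cell_deg) * cell_deg, int(lon // cell_deg) * cell_deg)

def vessel_density(positions, cell_deg=1.0):
    """Count AIS position reports per grid cell, a proxy for traffic intensity."""
    return Counter(grid_cell(lat, lon, cell_deg) for lat, lon in positions)

# Invented reports north of the Arctic Circle (66.56 N)
reports = [(69.65, 18.95), (69.68, 18.97), (78.22, 15.63)]
density = vessel_density(reports)
print(density)
```

Note that a fixed latitude/longitude grid over-weights high latitudes, where cells shrink in area; an equal-area grid would be needed for a rigorous density comparison.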
APA, Harvard, Vancouver, ISO, and other styles
33

Francis, Alexandra Michelle. "REST API to Access and Manage Geospatial Pipeline Integrity Data." DigitalCommons@CalPoly, 2015. https://digitalcommons.calpoly.edu/theses/1496.

Full text
Abstract:
Today’s economy and infrastructure is dependent on raw natural resources, like crude oil and natural gases, that are optimally transported through a network of hundreds of thousands of miles of pipelines throughout America[28]. A damaged pipe can negatively affect thousands of homes and businesses, so it is vital that pipelines are monitored and quickly repaired[1]. Ideally, pipeline operators are able to detect damage before it occurs, but manually ensuring the integrity of the vast network of pipes is unrealistic and would take an impractical amount of time and manpower[1]. Natural disasters, like earthquakes, as well as construction are just two of the events that could potentially threaten the integrity of pipelines. Due to the diverse collection of data sources, the necessary geospatial data is scattered across different physical locations, stored in different formats, and owned by different organizations. Pipeline companies do not have the resources to manually gather all input factors to make a meaningful analysis of the land surrounding a pipe. Our solution to this problem involves creating a single, centralized system that can be queried to get all necessary geospatial data and related information in a standardized and desirable format. The service reduces client-side computation time by allowing our system to find, ingest, parse, and store the data from potentially hundreds of repositories in varying formats. An online web service fulfills all of the requirements and allows for easy remote access for critical analysis of the data through computer-based decision support systems (DSS). Our system, REST API for Pipeline Integrity Data (RAPID), is a multi-tenant REST API that uses the HTTP protocol to provide an online and intuitive set of functions for DSS. RAPID’s API allows DSS to access and manage data stored in a geospatial database through the Django web framework. 
Full documentation of the design and implementation of RAPID’s API is provided in this thesis, supplemented with background and validation of the completed system.
APA, Harvard, Vancouver, ISO, and other styles
34

Talebi, Hassan. "On the spatial modelling of mixed and constrained geospatial data." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2018. https://ro.ecu.edu.au/theses/2279.

Full text
Abstract:
Spatial uncertainty modelling and prediction of a set of regionalized dependent variables from various sample spaces (e.g. continuous and categorical) is a common challenge for geoscience modellers and many geoscience applications such as evaluation of mineral resources, characterization of oil reservoirs or hydrology of groundwater. To consider the complex statistical and spatial relationships, categorical data such as rock types, soil types, alteration units, and continental crustal blocks should be modelled jointly with other continuous attributes (e.g. porosity, permeability, seismic velocity, mineral and geochemical compositions or pollutant concentration). These multivariate geospatial data normally have complex statistical and spatial relationships which should be honoured in the predicted models. Continuous variables in the form of percentages, proportions, frequencies, and concentrations are compositional which means they are non-negative values representing some parts of a whole. Such data carry just relative information and the constant sum constraint forces at least one covariance to be negative and induces spurious statistical and spatial correlations. As a result, classical (geo)statistical techniques should not be implemented on the original compositional data. Several geostatistical techniques have been developed recently for the spatial modelling of compositional data. However, few of these consider the joint statistical and/or spatial relationships of regionalized compositional data with the other dependent categorical information. This PhD thesis explores and introduces approaches to spatial modelling of regionalized compositional and categorical data. The first proposed approach is in the multiple-point geostatistics framework, where the direct sampling algorithm is developed for joint simulation of compositional and categorical data. 
The second proposed method is based on two-point geostatistics and is useful for the situation where a large and representative training image is not available or difficult to build. Approaches to geostatistical simulation of regionalized compositions consisting of several populations are explored and investigated. The multi-population characteristic is usually related to a dependent categorical variable (e.g. rock type, soil type, and land use). Finally, a hybrid predictive model based on the advanced geostatistical simulation techniques for compositional data and machine learning is introduced. Such a hybrid model has the ability to rank and select features internally, which is useful for geoscience process discovery analysis. The proposed techniques were evaluated via several case studies and results supported their usefulness and applicability.
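The constant-sum constraint described above is conventionally removed with a log-ratio transform before any (geo)statistical modelling; a minimal sketch of the centred log-ratio (clr) transform, one standard choice (the thesis itself develops more advanced joint simulation methods on top of such transforms):

```python
import math

def clr(composition):
    """Centred log-ratio transform of a strictly positive composition.

    Each part is divided (in log space) by the geometric mean of the
    whole composition, freeing the data from the constant-sum constraint.
    """
    logs = [math.log(x) for x in composition]
    g = sum(logs) / len(logs)  # log of the geometric mean
    return [l - g for l in logs]

sample = [0.7, 0.2, 0.1]  # e.g. mineral proportions summing to 1
print(clr(sample))
```

The clr coordinates always sum to zero, so the spurious negative correlation forced by the closure is no longer an artefact of the representation.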
APA, Harvard, Vancouver, ISO, and other styles
35

Martin-Lac, Victor. "Aerial navigation based on SAR imaging and reference geospatial data." Electronic Thesis or Diss., Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2024. http://www.theses.fr/2024IMTA0400.

Full text
Abstract:
Nous cherchons les moyens algorithmiques de déterminer l’état cinématique d’un appareil aérien à partir d’une image RSO observée et de données géospatiales de référence qui peuvent être RSO, optiques ou vectorielles. Nous déterminons la transformation qui associe les coordonnées de l’observation et les coordonnées de la référence et dont les paramètres sont l’état cinématique. Nous poursuivons trois approches. La première repose sur la détection et l’appariement de structures telles que des contours. Nous proposons un algorithme de type Iterative Closest Point (ICP) et démontrons comment il peut servir à estimer l’état cinématique complet. Nous proposons ensuite un système complet qui inclue un détecteur de contours multimodal appris. La seconde approche repose sur une métrique de similarité multimodale, ce qui est un moyen de mesurer la vraisemblance que deux restrictions locales de données géospatiales représentent le même point géographique. Nous déterminons l’état cinématique sous l’hypothèse duquel l’image SAR est la plus similaire aux données géospatiales de référence. La troisième approche repose sur la régression de coordonnées de scène. Nous prédisons les coordonnées géographiques de morceaux d’images et déduisons l’état cinématique à partir des correspondances ainsi prédites. Cependant, dans cette approche, nous ne satisfaisons pas l’hypothèse de multimodalité
We seek the algorithmic means of determining the kinematic state of an aerial device from an observation SAR image and reference geospatial data that may be SAR, optical or vector. We determine a transform that relates the observation and reference coordinates and whose parameters are the kinematic state. We follow three approaches. The first one is based on detecting and matching structures such as contours. We propose an iterative closest point algorithm and demonstrate how it can serve to estimate the full kinematic state. We then propose a complete pipeline that includes a learned multimodal contour detector. The second approach is based on a multimodal similarity metric, which is the means of measuring the likelihood that two local patches of geospatial data represent the same geographic point. We determine the kinematic state under the hypothesis of which the SAR image is most similar to the reference geospatial data. The third approach is based on scene coordinates regression. We predict the geographic coordinates of random image patches and infer the kinematic state from these predicted correspondences. However, in this approach, we do not address the fact that the modality of the observation and the reference are different
APA, Harvard, Vancouver, ISO, and other styles
36

Kiani, Tayebeh. "Modeling for geospatial database : application to structural geology data : application to structural geology data." Paris 6, 2010. http://www.theses.fr/2010PA066057.

Full text
Abstract:
The objective of this study is to create a database model within a geographic information system in order to archive, analyse, present and disseminate data observed during structural geology analyses. The data model is designed to meet four objectives: establish a vocabulary shared by specialists, model the concepts of structural geology, produce maps derived from the geological maps of Iran, and operate with geographic information system software. A set of conceptual classes is identified to represent the basic concepts of structural geology for contacts, folds, foliations, fractures (faults and joints), lineations and shear zones. A unified conceptual model is built for each family. The logical data model, expressed in UML using static class diagrams, is then developed. The steps in building the data model include identifying the classes, creating the class diagrams, and declaring the attributes and associations. The 1:250,000 geological maps of Iran are used here as the basis for presenting a conceptual model that unifies and prepares a single legend for a pilot set of four maps. The results of the study establish the main concepts and data structures for representing spatial information in structural geology and provide a model for creating a database that allows structural geology data to be managed with a geographic information system
APA, Harvard, Vancouver, ISO, and other styles
37

Farrugia, James A. "Semantic Interoperability of Geospatial Ontologies: A Model-theoretic Analysis." Fogler Library, University of Maine, 2007. http://www.library.umaine.edu/theses/pdf/FarrugiaJA2007.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Al-Bakri, Maythm M. Sharky. "Developing tools and models for evaluating geospatial data integration of official and VGI data sources." Thesis, University of Newcastle Upon Tyne, 2012. http://hdl.handle.net/10443/1676.

Full text
Abstract:
In recent years, systems have been developed which enable users to produce, share and update information on the web effectively and freely as User Generated Content (UGC) data (including Volunteered Geographic Information (VGI)). Data quality assessment is a major concern for supporting the accurate and efficient spatial data integration required if VGI is to be used alongside official, formal, usually governmental datasets. This thesis aims to develop tools and models for the purpose of assessing such integration possibilities. Initially, in order to undertake this task, geometrical similarity of formal and informal data was examined. Geometrical analyses were performed by developing specific programme interfaces to assess the positional, linear and polygon shape similarity among reference field survey data (FS); official datasets such as data from Ordnance Survey (OS), UK and General Directorate for Survey (GDS), Iraq agencies; and VGI information such as OpenStreetMap (OSM) datasets. A discussion of the design and implementation of these tools and interfaces is presented. A methodology has been developed to assess such positional and shape similarity by applying different metrics and standard indices such as the National Standard for Spatial Data Accuracy (NSSDA) for positional quality; techniques such as buffering overlays for linear similarity; and application of moment invariants for polygon shape similarity evaluations. The results suggested that difficulties exist for any geometrical integration of OSM data with both benchmark FS and formal datasets, but that formal data is very close to reference datasets. An investigation was carried out into contributing factors such as data sources, feature types and number of data collectors that may affect the geometrical quality of OSM data and consequently affect the integration process of OSM datasets with FS, OS and GDS. 
Factorial designs were undertaken in this study in order to develop and implement an experiment to discover the effect of these factors individually and the interactions between them. The analysis found that data source is the most significant factor affecting the geometrical quality of OSM datasets, and that there are interactions among all these factors at different levels. This work also investigated the possibility of integrating the feature classifications of official datasets, such as data from the OS and GDS geospatial data agencies, with those of informal datasets such as OSM. In this context, two different models were developed. The first set of analysis evaluated the semantic integration of corresponding feature classifications of the compared datasets. The second model assessed XML schema matching of the feature classifications of the tested datasets. This initially involved a tokenization process to split classifications composed of multiple words into single words. Subsequently, the feature classifications were encoded as XML schema trees. The semantic similarity, data type similarity and structural similarity were measured between the nodes of compared schema trees. Once these three similarities had been computed, a weighted combination technique was adopted in order to obtain the overall similarity. The findings of both sets of analysis were not encouraging as far as the possibility of effectively integrating the feature classifications of VGI datasets, such as OSM, with those of formal datasets, such as OS and GDS, is concerned.
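The NSSDA positional-accuracy index mentioned above reports horizontal accuracy at 95% confidence as 1.7308 times the radial RMSE of check-point errors (the factor comes from the FGDC standard, assuming independent, equal-variance x and y errors); a minimal sketch with invented toy coordinates:

```python
import math

def nssda_horizontal_accuracy(test_points, reference_points):
    """NSSDA horizontal accuracy at 95% confidence: 1.7308 * RMSE_r.

    Each point is an (x, y) pair in the same projected units;
    reference_points are the higher-accuracy check coordinates.
    """
    n = len(test_points)
    sq = sum((tx - rx) ** 2 + (ty - ry) ** 2
             for (tx, ty), (rx, ry) in zip(test_points, reference_points))
    rmse_r = math.sqrt(sq / n)
    return 1.7308 * rmse_r

# Toy check points (metres): OSM-like positions vs. field-survey reference
test = [(100.0, 200.0), (150.5, 250.0)]
ref = [(100.3, 200.4), (150.0, 249.8)]
print(round(nssda_horizontal_accuracy(test, ref), 3))
```

The NSSDA recommends at least 20 well-distributed check points; the two used here are purely for illustration.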
APA, Harvard, Vancouver, ISO, and other styles
39

Sahr, Kevin Michael. "Discrete global grid systems : a new class of geospatial data structures /." view abstract or download file of text, 2005. http://wwwlib.umi.com/cr/uoregon/fullcit?p3190547.

Full text
Abstract:
Thesis (Ph. D.)--University of Oregon, 2005.
Typescript. Includes vita and abstract. Includes bibliographical references (leaves 109-115). Also available for download via the World Wide Web; free to University of Oregon users.
APA, Harvard, Vancouver, ISO, and other styles
40

Nejman, Dawid. "Automation of data processing in the network of geospatial web services." Thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4029.

Full text
Abstract:
The field of geoinformatics is becoming more and more important nowadays. This is not only because it is crucial for industry, but also because it plays a more important role in consumer electronics than ever before. The ongoing demand for complex solutions gave rise to service-oriented architecture (SOA) in both the enterprise and geographical fields. The topic currently being studied is interoperability between different geospatial services. This paper makes a proposal for a master's thesis that tries to add another way of chaining different geospatial services. It describes the current state of knowledge and a possible research gap, and then goes into the details of the design and execution. The final stage is a summary of expected outcomes. The result of this proposal is a clearly defined need for research in the outlined area of knowledge.
APA, Harvard, Vancouver, ISO, and other styles
41

Teng, Ying. "Use of XML for Web-based query processing of geospatial data." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0035/MQ65524.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Cheewinsiriwat, Pannee. "Development of a 3D geospatial data representation and spatial analysis system." Thesis, University of Newcastle Upon Tyne, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.514467.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

ELIA, AGATA. "Open geospatial data and information in support of displaced population contexts." Doctoral thesis, Politecnico di Torino, 2021. http://hdl.handle.net/11583/2932735.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Ngo, Duc Khanh. "Relief Planning Management Systems - Investigation of the Geospatial Components." Thesis, KTH, Geodesi och geoinformatik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-118373.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Ganpaa, Gayatri. "An R*-Tree Based Semi-Dynamic Clustering Method for the Efficient Processing of Spatial Join in a Shared-Nothing Parallel Database System." ScholarWorks@UNO, 2006. http://scholarworks.uno.edu/td/298.

Full text
Abstract:
The growing importance of geospatial databases has made it essential to perform complex spatial queries efficiently. To achieve acceptable performance levels, database systems have been increasingly required to make use of parallelism. The spatial join is a computationally expensive operator. Efficient implementation of the join operator is, thus, desirable. The work presented in this document attempts to improve the performance of spatial join queries by distributing the data set across several nodes of a cluster and executing queries across these nodes in parallel. This document discusses a new parallel algorithm that implements the spatial join in an efficient manner. This algorithm is compared to an existing parallel spatial-join algorithm, the clone join. Both algorithms have been implemented on a Beowulf cluster and compared using real datasets. An extensive experimental analysis reveals that the proposed algorithm exhibits superior performance both in declustering time as well as in the execution time of the join query.
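The filter stage of a spatial join, which both algorithms compared above must perform, tests minimum bounding rectangles (MBRs) for overlap before any exact-geometry refinement; a sequential Python sketch (a shared-nothing system would run this loop per node on declustered partitions, and the data here are invented):

```python
def mbr_intersects(a, b):
    """MBRs as (min_x, min_y, max_x, max_y); True if they overlap or touch."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def spatial_join_filter(r_set, s_set):
    """Filter step of a spatial join: candidate index pairs whose MBRs intersect.

    In a parallel system the two relations are first declustered across
    nodes so that each node runs this nested loop only on its own partition;
    an R*-tree index would replace the inner loop in practice.
    """
    return [(ri, si)
            for ri, r in enumerate(r_set)
            for si, s in enumerate(s_set)
            if mbr_intersects(r, s)]

parcels = [(0, 0, 2, 2), (5, 5, 6, 6)]
roads = [(1, 1, 3, 3), (7, 7, 8, 8)]
print(spatial_join_filter(parcels, roads))  # [(0, 0)]
```

Only the surviving candidate pairs are passed to the expensive refinement step that tests the exact geometries, which is why good declustering of the filter stage dominates overall join performance.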
APA, Harvard, Vancouver, ISO, and other styles
46

Aragó, Galindo Pau. "Enhance the value of participative geospatial data, modelling using point pattern processes." Doctoral thesis, Universitat Jaume I, 2016. http://hdl.handle.net/10803/386241.

Full text
Abstract:
This thesis deals with people's participation in the more or less voluntary creation of spatial data, and with the analysis of these data using point pattern processes and geostatistics. Geographic data created by volunteers represent a radical change in how such data have traditionally been produced. Until a few years ago, geographic data were created only by experts and large institutions, a top-down approach. The Web 2.0 revolution has put the technology within everyone's reach. For geographic data, this means anyone can contribute their knowledge of their surroundings to volunteer projects such as OpenStreetMap, a bottom-up approach. This change raises questions about whether the data can be used, what quality they have, whether contributors are credible, whether there are errors, and so on. This thesis seeks to answer how to measure the quality and credibility of the data and how to analyse them, as well as how to facilitate the collection of geographic data from social networks.
APA, Harvard, Vancouver, ISO, and other styles
47

Frew, Robin. "GIS-based accessibility modelling as a means of evaluating geospatial data usability." Thesis, University of South Wales, 2016. https://pure.southwales.ac.uk/en/studentthesis/gisbased-accessibility-modelling-as-a-means-of-evaluating-geospatial-data-usability(cdd9e8aa-6bb0-470f-8fb2-a012a7d9a49f).html.

Full text
Abstract:
This thesis arose from the recognised lack of previous research into the usability of geospatial data. Whereas considerable study has been made into the usability of devices and their interface, the underlying data has been subject to less consideration, though a substantial literature exists regarding data quality. With the rapid expansion of geospatial data availability through a multitude of platforms (much of it free and crowdsourced), and the increase in creation of such data as mobile devices track location, there is a need to investigate the usability of the data in different contexts and applications. This thesis contends that data quality and data usability, though closely related, are separate characteristics, and that quality is an important element of data usability. Usability of different data types from various sources is examined here in the context of the well-established application of GIS-based accessibility modelling. Sensitivity analysis techniques were utilised in a novel way to highlight usability issues with the data being studied through the use of statistical and visual approaches. Comparisons were made between a variety of proprietary datasets and data from other sources, such as free and open-source software (FOSS) volunteer geographic information (VGI) network data from OpenStreetMap (OSM), and observations made as to their usability, while addressing cross-cutting topical themes, such as examining different sources of locational representation, different sources of network representation, and questioning the effect of supply and demand on accessibility. The use of both an urban and a rural study area enabled comparisons to be drawn in different geographical contexts. Several specific proposals are made with regard to improving usability of the proprietary (Ordnance Survey) and VGI (OSM) datasets. 
An aid to decision-making is also suggested through the use of a usability checklist, enabling sample or trial data to be assessed quickly and simply in any given context. A novel Utility Factor is proposed, which draws together contributions from the quantitative aspects of this study into one figure, and is suggested as a context-based proxy of usability based on measures of data similarity, difference and effect. The results obtained confirm the need for further research to both clarify aspects of data usability and widen the scope of future research into different contexts and applications.
APA, Harvard, Vancouver, ISO, and other styles
48

Josefsson, André. "Comparing the performance of relational and document databases for hierarchical geospatial data." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-231884.

Full text
Abstract:
The aim of this degree project is to investigate alternatives to the relational database paradigm when storing hierarchical geospatial data. The document paradigm is found suitable and is therefore further examined. A benchmark suite is developed in order to test the relative performance of the paradigms for the relevant type of data. MongoDB and Microsoft SQL Server are chosen to represent the two paradigms in the benchmark. The results indicate that the document paradigm has potential when working with hierarchical structures. When adding geospatial elements to the data, the results are inconclusive.
Det här examensarbetet ämnar undersöka alternativ till den relationella databasparadigmen för lagring av hierarkisk geospatial data. Dokumentparadigmen identiferas som särskilt lämplig och undersöks därför vidare. En benchmark-svit utvecklas för att undersöka de två paradigmens relativa prestanda vid lagring av den undersökta typen av data. MongoDB och Microsoft SQL Server väljs som representanter för de två paradigmen i benchmark-sviten. Resultaten indikerar att dokumentparadigmen har god potential för hierarkisk data. Inga tydliga slutsatser kan dock dras gällande den geospatiala aspekten.
APA, Harvard, Vancouver, ISO, and other styles
49

Spur, Maxim. "Immersive Visualization of Multilayered Geospatial Urban Data for Visual Analytics and Exploration." Thesis, Ecole centrale de Nantes, 2021. http://www.theses.fr/2021ECDN0032.

Full text
Abstract:
Les données géospatiales urbaines englobent une multitude de couches thématiques et s'étendent sur des échelles géométriques allant d’éléments architecturaux aux réseaux de transport interrégionaux. Cette thèse étudie comment les environnements immersifs peuvent être utilisés afin d'aider à visualiser efficacement ces données multicouches de manière simultanée et à diverses échelles. Pour cela, deux prototypes logiciels ont été développés afin de mettre en oeuvre deux nouvelles méthodes de visualisation de données, à savoir les « vues multiples coordonnées » et le « focus+contexte ». Ces logiciels tirent pleinement parti des possibilités offertes par le matériel moderne de réalité virtuelle tout en étant également adaptés à la réalité augmentée. Parmi les deux nouvelles méthodes présentées ici, l'une — une disposition verticale optimisée des couches cartographiques — a été évaluée de manière formelle dans le cadre d'une étude contrôlée auprès des utilisateurs, et l'autre — une approche de projection géométrique pour créer des vues panoramiques de type focus+contexte — de manière plus informelle à partir des commentaires d'experts du domaine qui l'ont testée. Si les deux méthodes ont montré des résultats prometteurs, l'étude formelle a plus particulièrement permis de comprendre comment les particularités des utilisateurs peuvent influencer la perception de la facilité d'utilisation de ces systèmes de visualisation ainsi que leurs performances
Geospatial urban data encompasses a plethora of thematic layers, and spans geometric scales reaching from individual architectural elements to inter-regional transportation networks. This thesis examines how immersive environments can be utilized to effectively aid in visualizing this multilayered data simultaneously at various scales. For this, two distinct software prototypes were developed to implement the concepts of multiple coordinated views and focus+context, specifically taking full advantage of the affordances granted by modern virtual reality hardware, while also being suitable for augmented reality. Of the two novel methods introduced here, one — an optimized, vertical arrangement of map layers — was formally evaluated in a controlled user study, and the other — a geometric projection approach to create panoramic focus+context views — informally through feedback from domain experts who tested it. Both showed promising results, and especially the formal study yielded valuable insights into how user characteristics can influence the perceived usability of such visualization systems and their performance
APA, Harvard, Vancouver, ISO, and other styles
50

STURARI, MIRCO. "Processing and visualization of multi-source data in next-generation geospatial applications." Doctoral thesis, Università Politecnica delle Marche, 2018. http://hdl.handle.net/11566/252596.

Full text
Abstract:
Next-generation geospatial applications do not simply use points, lines, and polygons as data, but complex objects or evolving phenomena that need advanced analysis and visualization techniques to be understood. The features of these applications are the use of multi-source data with different spatial, temporal and spectral dimensions, and dynamic, interactive visualization with any device and almost anywhere, even in the field. The analysis of complex phenomena has relied on data sources that are heterogeneous in format/typology and in spatial/temporal/spectral resolution, which makes the fusion operation to extract meaningful and immediately comprehensible information challenging. Multi-source data acquisition can take place through various sensors, IoT devices, mobile devices, social media, volunteered geographic information and geospatial data from public sources. Since next-generation geospatial applications have new features, we have analysed the usability of innovative technologies to visualize raw data, integrated data, derived data and information on any device: interactive dashboards, views and maps with spatial and temporal dimensions, and Augmented and Virtual Reality applications. For semi-automatic information extraction we have used various techniques within a synergistic process: segmentation and identification, classification, change detection, tracking and path clustering, simulation and prediction. Within a processing workflow, various scenarios were analysed and innovative solutions were implemented, characterized by the fusion of multi-source data, dynamism and interactivity. Depending on the application field, the problems are differentiated, and for each of them the solutions most coherent with the aforementioned characteristics have been implemented.
Innovative solutions that have yielded good results were found in each scenario presented, some of them in new application fields: (i) integration of elevation data and high-resolution multispectral images for Land Use/Land Cover mapping, (ii) crowd-mapping for civil protection and emergency management, (iii) sensor fusion for indoor localization and tracking in a retail environment, (iv) integration of real-time data for traffic simulation in mobility systems, (v) combination of visual and point cloud information for change detection in a railway safety and security application. Through these examples, the suggestions given can be applied to create geospatial applications even in different domains. In the future, integration can be enhanced to build data-driven platforms as the basis for intelligent systems: a user-friendly interface that provides advanced analysis capabilities built on reliable and efficient algorithms.
APA, Harvard, Vancouver, ISO, and other styles
