
Journal articles on the topic 'Cartographic Accuracy Standard (PEC)'



Consult the top 22 journal articles for your research on the topic 'Cartographic Accuracy Standard (PEC).'




1

Santos, Alex da Silva, Nilcilene das Graças Medeiros, Gérson Rodrigues dos Santos, and Jugurta Lisboa Filho. "USE OF GEOSTATISTICS ON ABSOLUTE POSITIONAL ACCURACY ASSESMENT OF GEOSPATIAL DATA." Boletim de Ciências Geodésicas 23, no. 3 (2017): 405–18. http://dx.doi.org/10.1590/s1982-21702017000300027.

Abstract:
In the Geosciences it is intuitive to think of spatial correlation as part of the phenomenon under study, and Geostatistics has tools to identify and represent the behavior of such dependency. The spatial analysis of the results of a quality inspection of a cartographic product is generally not addressed in the standards, which are restricted to descriptive and tabular findings based on the Classical Statistics assumption that the observed data are independent. Within the Brazilian National Spatial Data Infrastructure (INDE), various cartographic products should be made available to society along with their metadata. This paper proposes a methodology for quality inspection based on international standards and on the Cartographic Accuracy Standard (PEC), using geostatistical methods and a spatial representation of the assessment in the form of positional quality maps. The method was applied to Brazil's Continuous Cartographic Base at the 1:250,000 scale (BC250), with a focus on absolute positional accuracy. The quality map generated presented regionalizations of the planimetric error that were confirmed by the team at IBGE that produces the cartographic base. Such information can help users and producers understand the spatial behavior of the quality of the cartographic product under study.
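The geostatistical treatment described above starts from the empirical semivariogram of the positional discrepancies observed at check points. A minimal Python sketch of that first step, with purely illustrative coordinates and errors (the paper's own dataset and geostatistical toolchain are not reproduced here):

```python
import numpy as np

def empirical_semivariogram(coords, values, lags, tol):
    """Classical (Matheron) estimator: for each lag h, half the mean squared
    difference between all pairs of values separated by roughly h."""
    dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    iu = np.triu_indices(len(values), k=1)          # each pair counted once
    pair_d = dists[iu]
    pair_sq = (values[:, None] - values[None, :])[iu] ** 2
    gamma = []
    for h in lags:
        sel = np.abs(pair_d - h) <= tol
        gamma.append(0.5 * pair_sq[sel].mean() if sel.any() else np.nan)
    return np.array(gamma)

# Illustrative check points: E/N coordinates (m) and planimetric discrepancy (m)
rng = np.random.default_rng(0)
coords = rng.uniform(0, 50_000, size=(200, 2))
errors = rng.normal(0.0, 25.0, size=200)        # map-minus-reference discrepancy
lags = np.arange(2_000, 20_000, 2_000)
print(empirical_semivariogram(coords, errors, lags, tol=1_000))
```

A flat semivariogram (pure nugget) would support the independence assumed by the classical tests; a rising curve indicates the kind of spatial dependence the authors map.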
2

Bettiol, Giovana Maranhão, Manuel Eduardo Ferreira, Luiz Pacheco Motta, Édipo Henrique Cremon, and Edson Eyji Sano. "Conformity of the NASADEM_HGT and ALOS AW3D30 DEM with the Altitude from the Brazilian Geodetic Reference Stations: A Case Study from Brazilian Cerrado." Sensors 21, no. 9 (2021): 2935. http://dx.doi.org/10.3390/s21092935.

Abstract:
The Brazilian Cerrado (tropical savanna) is the second largest biome in South America and the main region in the country for agricultural production. Altitude is crucial information for decision-makers and planners, since it is directly related to temperature, which conditions, for example, the climatic risk of rainfed crop plantations. This study analyzes the conformity of two freely available digital elevation models (DEMs), the NASADEM Merged Digital Elevation Model Global 1 arc second (NASADEM_HGT) version 1 and the Advanced Land Observing Satellite Global Digital Surface Model (ALOS AW3D30) version 3.1, with the altitudes provided by 1695 reference stations of the Brazilian Geodetic System. Both models were evaluated based on the parameters recommended in the Brazilian Cartographic Accuracy Standard for Digital Cartographic Products (PEC-PCD), which defines error tolerances according to eight different scales (from 1:1000 to 1:250,000) and classes A (strictest tolerance, for example, 0.17 m at the 1:1000 scale), B, C, and D (least strict tolerance, for example, 50 m at the 1:250,000 scale). Considering class A, NASADEM_HGT meets the 1:250,000 and smaller scales, while AW3D30 meets the 1:100,000 and smaller scales; for class B, NASADEM_HGT meets the 1:100,000 scale and AW3D30 the 1:50,000 scale. AW3D30 presented lower values of root mean square error, standard deviation, and bias, indicating higher accuracy than NASADEM_HGT. Within the eight Cerrado municipalities with the highest grain production, the differences between average altitudes, measured by Cohen's effect size, were statistically insignificant. The results obtained with the PEC-PCD for the Cerrado biome indicate that both models can be employed in different DEM-dependent applications over this biome.
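To make the classification step concrete, here is a hedged Python sketch of assigning a PEC-PCD class from check-point residuals. Only the two anchor tolerances quoted in the abstract (class A at 1:1,000 → 0.17 m; class D at 1:250,000 → 50 m) come from the text; the contour intervals, the per-class factors, and the simplified RMSE-only rule are assumptions:

```python
import numpy as np

# Assumed contour intervals (m) per scale denominator, and EP (standard-error)
# tolerances as a fraction of the interval: 1/6 (A), 1/3 (B), 2/5 (C), 1/2 (D).
# These reproduce the two anchor values in the abstract: A at 1:1,000 -> 0.17 m
# and D at 1:250,000 -> 50 m.
CONTOUR = {1_000: 1, 2_000: 1, 5_000: 2, 10_000: 5,
           25_000: 10, 50_000: 20, 100_000: 50, 250_000: 100}
FACTOR = {"A": 1 / 6, "B": 1 / 3, "C": 2 / 5, "D": 1 / 2}

def pec_pcd_class(residuals, scale_denominator):
    """Strictest class whose EP tolerance the RMSE meets (the full standard
    additionally requires 90% of |errors| below the PEC limit)."""
    rmse = float(np.sqrt(np.mean(np.square(residuals))))
    interval = CONTOUR[scale_denominator]
    for cls in "ABCD":
        if rmse <= FACTOR[cls] * interval:
            return cls, rmse
    return None, rmse                      # does not even meet class D

residuals = np.array([-3.1, 4.8, -6.2, 2.4, 5.5, -4.0])  # DEM minus station (m)
print(pec_pcd_class(residuals, 100_000))                  # ('A', ~4.5 m RMSE)
```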
3

Morais, Josyceyla Duarte, Thaísa Santos Faria, Marcos Antonio Timbó Elmiro, Marcelo Antonio Nero, Archibald de Araujo Silva, and Rodrigo Affonso de Albuquerque Nobrega. "ALTIMETRY ASSESSMENT OF ASTER GDEM v2 AND SRTM v3 DIGITAL ELEVATION MODELS: A CASE STUDY IN URBAN AREA OF BELO HORIZONTE, MG, BRAZIL." Boletim de Ciências Geodésicas 23, no. 4 (2017): 654–68. http://dx.doi.org/10.1590/s1982-21702017000400043.

Abstract:
This work is an altimetry evaluation study involving the Digital Elevation Models ASTER GDEM version 2 and SRTM version 3. Both models are readily available free of charge; however, as they are built from different remote sensing methods, they are also expected to present different data quality. LIDAR data with 25 cm vertical accuracy were used as the reference for the assessment. The evaluation, carried out in an urbanized area, investigated the distribution of the residuals and the relationship between the observed errors and land slope classes. Remote sensing principles, quantitative statistical methods, and the Cartographic Accuracy Standard of Digital Mapping Products (PEC-PCD) were considered. The results indicated a strong positive linear correlation and the existence of a functional relationship between the evaluated models and the reference model. Residuals between -4.36 m and 3.11 m grouped 47.7% of the ASTER GDEM samples and 63.7% of the SRTM samples. In both evaluated models, Root Mean Square Error values increased with increasing land slope. Considering the 1:50,000 mapping scale, the PEC-PCD classification indicated class B for SRTM and class C for ASTER GDEM. In all analyses, SRTM presented smaller altimetry errors than ASTER GDEM, except in areas with steep relief.
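The slope dependence reported here amounts to binning the residuals by slope class and recomputing the RMSE per bin. A small sketch with synthetic data (bin edges and the error model are illustrative, not taken from the paper):

```python
import numpy as np

def rmse_by_slope(residuals, slope_deg, edges=(0, 5, 15, 30, 90)):
    """RMSE of DEM-minus-reference residuals grouped into slope classes."""
    out = {}
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (slope_deg >= lo) & (slope_deg < hi)
        if sel.any():
            out[f"{lo}-{hi} deg"] = float(np.sqrt(np.mean(residuals[sel] ** 2)))
    return out

rng = np.random.default_rng(1)
slope = rng.uniform(0, 45, size=500)
res = rng.normal(0.0, 1.0 + slope / 10)   # error growing with slope, as reported
print(rmse_by_slope(res, slope))
```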
4

Viel, Jorge Antônio, Kátia Kellem da Rosa, and Cláudio Wilson Mendes Junior. "Avaliação da Acurácia Vertical dos Modelos Digitais de Elevação SRTM, ALOS World 3D e ASTER GDEM: Um Estudo de Caso no Vale dos Vinhedos, RS – Brasil." Revista Brasileira de Geografia Física 13, no. 5 (2020): 2255. http://dx.doi.org/10.26848/rbgf.v13.5.p2255-2268.

Abstract:
Evaluation of the Vertical Accuracy of the Digital Elevation Models SRTM, ALOS World 3D and ASTER GDEM: a case study in Vale dos Vinhedos, RS, Brazil. This study evaluates the vertical accuracy of the digital elevation models (DEMs) SRTM v.3, ALOS World 3D, and ASTER GDEM v.2 in the Vale dos Vinhedos designation-of-origin region, RS, Brazil. The data of these DEMs, with a spatial resolution of 30 m, were compared with those of a photogrammetric DEM with a ground resolution of 5 m, by means of linear regression and correlation analyses and of topographic profiles derived from the models. The Cartographic Accuracy Standard for Digital Cartographic Products (PEC-PCD) of each DEM was analyzed to identify the maximum scale for its use in morphometric studies, at the 1:25,000, 1:50,000, and 1:100,000 scales, through calculations of the vertical tolerance and the root mean square error (RMSE). SRTM v.3 and ASTER GDEM v.2 met the altimetric PEC-PCD class A at the 1:100,000 scale, whereas ALOS World 3D fell into class B at that scale. At the 1:50,000 scale all models fell into class D, while at 1:25,000 none of them qualified. SRTM v.3 presented the best morphometric results and the highest Pearson correlation coefficient (r = 0.995). All DEMs evaluated in this study presented a morphology close to that of the photogrammetric DEM. Therefore, all the analyzed DEMs can be recommended for morphometric studies of the study area, provided that the objective of the work, the scale of analysis, and the presentation of the data are taken into account. Keywords: vertical accuracy, SRTM v.3, ALOS World 3D, ASTER GDEM.
5

Garcia, Andre, Ignacio Aguilar, Andres Legarra, et al. "22 Accuracy of indirect predictions for large datasets based on prediction error covariance of SNP effects from single-step GBLUP." Journal of Animal Science 98, Supplement_4 (2020): 6–7. http://dx.doi.org/10.1093/jas/skaa278.012.

Abstract:
With an ever-increasing number of genotyped animals, there is a question of whether to include all genotypes into single-step GBLUP (ssGBLUP) evaluations or to include only genotyped animals with phenotypes and use indirect predictions (IP) for the remaining young genotyped animals. Under ssGBLUP, SNP effects can be backsolved from GEBV, and IP can be calculated as the sum of SNP effects weighted by the gene content. To publish IP, a measure of accuracy that reflects the standard error of prediction, and that is comparable to GEBV accuracy, is needed. Our first objective was to test formulas to compute the accuracy of IP by backsolving the prediction error covariance (PEC) of GEBV into the PEC of SNP effects. The second objective was to investigate the number of genotyped animals needed to obtain robust IP accuracy. Data were provided by the American Angus Association, with 38,000 post-weaning gain phenotypes and 60,000 genotyped animals. Correlations between GEBV and IP were ≥0.99. When all genotyped animals were used for PEC computations, accuracy correlations were also ≥0.99. Additionally, GEBV and IP accuracies were compatible whether the inverse of the genomic relationship matrix (G) was obtained by direct inversion or with the algorithm for proven and young (APY). As the number of genotyped animals in PEC computations decreased to 15,000, accuracy correlations were still high (≥0.96), but IP accuracies were biased downwards. Indirect prediction accuracy can be successfully obtained from ssGBLUP without running an extra SNP-BLUP evaluation to compute SNP PEC. It is possible to reduce the number of genotyped animals in PEC computations, but accuracies may be slightly underestimated. When the amount of genomic and phenotypic data is large, the polygenic part of GEBV becomes small and IP can be very accurate. Further research is needed to approximate SNP PEC with a large number of genotyped animals.
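The backsolving step that the abstract builds on can be written compactly. The relations below are a sketch based on the general ssGBLUP literature; the notation (Z for centered gene content, D for the SNP-variance weight matrix, F_i for inbreeding, σ_a² for additive variance) is assumed rather than taken from the abstract:

$$\hat{\mathbf{u}} = \mathbf{D}\mathbf{Z}'\mathbf{G}^{-1}\hat{\mathbf{a}}_g, \qquad \mathrm{IP} = \mathbf{Z}_{\mathrm{young}}\,\hat{\mathbf{u}},$$

and the PEC of GEBV propagates to the SNP effects and to each indirect prediction as

$$\mathrm{PEC}(\hat{\mathbf{u}}) = \mathbf{D}\mathbf{Z}'\mathbf{G}^{-1}\,\mathrm{PEC}(\hat{\mathbf{a}}_g)\,\mathbf{G}^{-1}\mathbf{Z}\mathbf{D}, \qquad \mathrm{acc}_i = \sqrt{1 - \frac{\mathbf{z}_i'\,\mathrm{PEC}(\hat{\mathbf{u}})\,\mathbf{z}_i}{(1 + F_i)\,\sigma_a^2}},$$

which is what allows an IP accuracy comparable to GEBV accuracy to be published without an extra SNP-BLUP run.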
6

di Filippo, Andrea, Luis Sánchez-Aparicio, Salvatore Barba, José Martín-Jiménez, Rocío Mora, and Diego González Aguilera. "Use of a Wearable Mobile Laser System in Seamless Indoor 3D Mapping of a Complex Historical Site." Remote Sensing 10, no. 12 (2018): 1897. http://dx.doi.org/10.3390/rs10121897.

Abstract:
This paper presents an efficient solution, based on a wearable mobile laser system (WMLS), for the digitalization and modelling of a complex cultural heritage building. A procedural pipeline is formalized for data acquisition, processing, and the generation of cartographic products for a 15th-century palace located in Segovia, Spain. The complexity, represented by an intricate interior space and by the presence of important structural problems, prevents the use of standard protocols such as those based on terrestrial photogrammetry or terrestrial laser scanning, making the WMLS the most suitable and powerful solution for the design of restoration actions. The results obtained corroborate the robustness and accuracy of the digitalization strategy, allowing the generation of 3D models and 2D cartographic products with the required level of quality in a fraction of the time needed to digitalize the area with a terrestrial laser scanner.
7

Parvandeh, Saeid, Hung-Wen Yeh, Martin P. Paulus, and Brett A. McKinney. "Consensus features nested cross-validation." Bioinformatics 36, no. 10 (2020): 3093–98. http://dx.doi.org/10.1093/bioinformatics/btaa046.

Abstract:
Summary: Feature selection can improve the accuracy of machine-learning models, but appropriate steps must be taken to avoid overfitting. Nested cross-validation (nCV) is a common approach that chooses the classification model and features to represent a given outer fold based on the features that give the maximum inner-fold accuracy. Differential privacy is a related technique to avoid overfitting that uses a privacy-preserving noise mechanism to identify features that are stable between training and holdout sets. We develop consensus nested cross-validation (cnCV), which combines the idea of feature stability from differential privacy with nCV. Feature selection is applied in each inner fold, and the consensus of top features across folds is used as a measure of feature stability or reliability instead of classification accuracy, which is used in standard nCV. We use simulated data with main effects, correlation and interactions to compare the classification accuracy and feature selection performance of the new cnCV with standard nCV, Elastic Net optimized by cross-validation, differential privacy and private evaporative cooling (pEC). We also compare these methods using real RNA-seq data from a study of major depressive disorder. The cnCV method has similar training and validation accuracy to nCV, but much shorter run times because it does not construct classifiers in the inner folds. It also chooses a more parsimonious set of features with fewer false positives than nCV. The cnCV method has accuracy similar to pEC and selects stable features between folds without the need to specify a privacy threshold. We show that cnCV is an effective and efficient approach for combining feature selection with classification. Availability and implementation: Code available at https://github.com/insilico/cncv. Supplementary information: Supplementary data are available at Bioinformatics online.
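The core of the procedure is straightforward to outline. Below is a hedged Python sketch of cnCV using scikit-learn utilities; a univariate F-test stands in for the paper's feature-ranking methods (e.g., Relief-F), and the unanimity rule across folds is an illustrative choice (the authors' actual implementation is at the repository linked above):

```python
import numpy as np
from collections import Counter
from sklearn.model_selection import KFold
from sklearn.feature_selection import SelectKBest, f_classif

def consensus_nested_cv(X, y, n_outer=5, n_inner=5, k=10):
    """cnCV sketch: rank features in every inner fold of each outer-training
    set and keep those chosen in all inner folds; the final consensus is the
    intersection across outer folds. No inner classifiers are fitted."""
    outer_sets = []
    for train_idx, _ in KFold(n_outer, shuffle=True, random_state=0).split(X):
        votes = Counter()
        for in_idx, _ in KFold(n_inner, shuffle=True, random_state=1).split(train_idx):
            idx = train_idx[in_idx]
            sel = SelectKBest(f_classif, k=k).fit(X[idx], y[idx])
            votes.update(np.flatnonzero(sel.get_support()).tolist())
        outer_sets.append({f for f, c in votes.items() if c == n_inner})
    return set.intersection(*outer_sets)

# Toy data: 3 informative features out of 50
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
y = (X[:, :3].sum(axis=1) + rng.normal(0, 1, 200) > 0).astype(int)
print(sorted(consensus_nested_cv(X, y)))   # expected to recover {0, 1, 2}
```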
8

Kent, Alexander J. "The Soviet Military Plan of Tokyo (1966)." Abstracts of the ICA 1 (July 15, 2019): 1. http://dx.doi.org/10.5194/ica-abs-1-169-2019.

Abstract:
As part of its secret Cold War mapping programme, the Soviet Union produced detailed plans of over 2,000 towns and cities within foreign territories around the globe. Some of these maps were made available for the first time in 1993 at the 16th International Cartographic Conference in Cologne, Germany, via a Latvian map dealer who discovered them at an abandoned depot outside Riga as the Red Army withdrew. However, Soviet city plans have only recently become the topic of cartographic research, which has provided some insights into aspects of their production, accuracy and purpose that continue to have relevance for mapping diverse urban environments today.

This paper focuses on the city plan of Tokyo, which comprises four sheets and was produced by the General Staff of the Soviet Union in 1966. Street names are transcribed to allow phonetic pronunciation, and the plan identifies almost 400 important objects (from factories to hospitals), which are described in a numbered list. Although the street-level detail of the plan is produced according to a standard specification and symbology, it adopts an uncommon scale of 1:20,000 (with contours at 5-metre intervals) and incorporates an unusual and transitory cartographic style in the history of the series.

In addition to highlighting the main features of the plan and exploring some possible sources, this paper interprets the wider context of the Soviet military plans of Japanese towns and cities (over 90 are known to have been mapped during the Cold War). Aside from their historical significance, it suggests how understanding the city plans can reveal how problems of the design and portrayal of detailed topographic information may be overcome through their unfamiliar, yet comprehensive, cartographic language.
9

Berteška, Tautvydas, and Birutė Ruzgienė. "PHOTOGRAMMETRIC MAPPING BASED ON UAV IMAGERY." Geodesy and Cartography 39, no. 4 (2013): 158–63. http://dx.doi.org/10.3846/20296991.2013.859781.

Abstract:
Unmanned aerial vehicles (UAVs) combined with digital photogrammetry constitute an up-to-date area-mapping technology whose salient features are low cost, mobility and simplicity. A UAV (a fixed-wing EPP-FPV) with a mounted digital camera (Canon S100) was used for imaging, while digital photogrammetric processing (with the LISA software) was applied for cartographic data collection. High imagery quality is a significant factor in the efficiency and quality of standard mapping products such as the Digital Elevation Model and orthoimages. DEM and orthophoto quality depends mainly on camera resolution, flight height and the accuracy of Ground Control Points (GCPs). In the experimental investigations, GCP coordinates were obtained interactively from the Internet. Application of an appropriate DEM checking technique showed that the DEM error was up to 0.5 m.
10

Kuźma, Marta, and Albina Mościcka. "Accessibility evaluation of topographic maps in the National Library of Poland." Abstracts of the ICA 1 (July 15, 2019): 1. http://dx.doi.org/10.5194/ica-abs-1-201-2019.

Abstract:
Digital libraries are created and managed mainly by traditional libraries, archives and museums. They collect, process, and make available digitized collections and data about them. These collections often constitute cultural heritage and include, among others, books (including old prints), magazines, manuscripts, photographs, maps, atlases, postcards and graphics. An example of such a library is the National Library of Poland, which collects and provides digital access to about 55,000 maps.

The effective use of the cultural heritage resources and information of the National Library of Poland creates the prerequisites and challenges for multidisciplinary research and cross-sectoral cooperation. These resources are an unlimited source of knowledge, constituting value in themselves but also providing data for many new studies, including interdisciplinary studies of the past. The information necessary for such research is usually distributed across a wide spectrum of fields, formats and languages, reflecting different points of view, and the key task is to find it in digital libraries.

The growth of digital library collections requires high-quality metadata to make the materials collected by libraries fully accessible and to enable their integration and sharing between institutions. Consequently, three main metadata quality criteria have been defined to enable metadata management and evaluation: accuracy, consistency, and completeness (Park, 2009; Park and Tosaka, 2010). Further aspects of metadata quality can also be defined, such as accessibility, accuracy, availability, compactness, comprehensiveness, content, consistency, cost, data structure, ease of creation, ease of use, cost efficiency, flexibility, fitness for use, informativeness, quantity, reliability, standards compliance, timeliness, transfer, and usability (Moen et al., 1998). This list tells us where errors in metadata occur, errors which can hinder or completely disable access to materials available through a digital library.

Archival maps have always been present in libraries. In the digital age, geographical space has begun to exist in libraries in two aspects: as collections of old maps and as the geographic reference of sources other than cartographic materials. Despite much experience in this field, the main problem remains that most libraries do not populate the metadata with coordinates, which are required to enable and support geographical searching (Southall and Pridal, 2012).

At the start of a study, the research concept is born and the source materials necessary for its realization are collected. When using archival maps for such studies, detailed literature study is important, covering the cartographic assumptions, the course and accuracy of the cartographic work, the printing method, the scope of updates in subsequent editions, and the period in which a given map was created. The usability of cartographic materials also depends on the purpose of the map. Awareness of these issues allows researchers to avoid errors frequently made by non-cartographers, i.e., comparing maps of different scales and treating them as a basis for formulating very detailed yet erroneous conclusions. Thus, one of the key tasks is to find materials that are comparable in terms of scale and that cover the same area and space in the historical period of interest.

The research aim is to evaluate the quality of the topographic map metadata provided by the National Library of Poland, which is the basis for effective access to cartographic resources.

The first research question is: how should topographic maps be described in metadata to enable finding them in the National Library of Poland? In other words, what kind of map-specific information should be saved in the metadata (and in what way) to properly characterize a spatially related object?

The second research question is: which topographic maps have the best metadata, giving users the best chance of finding the cartographic materials necessary for their research?

The paper presents the results of research on criteria and features for metadata evaluation, that is, on how archival maps are described. For maps, this is the set of map features collected in the metadata, including the geographic location, map scale, map orientation, and cartographic presentation methods. The evaluation concerns the quality of the metadata or, in other words, the accessibility of archival cartographic resources.
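Of the three criteria named above, completeness is the most mechanical to check. A hedged Python sketch of scoring one map record (the required-field list is an assumption based on the map features the authors enumerate, not the Library's actual schema):

```python
# Assumed required fields for an archival map record
REQUIRED = ("title", "scale", "coordinates", "orientation",
            "presentation_method", "date")

def completeness(record: dict) -> float:
    """Share of required map-metadata fields that are actually populated."""
    return sum(bool(record.get(f)) for f in REQUIRED) / len(REQUIRED)

record = {"title": "Mapa topograficzna 1:100 000", "scale": "1:100000",
          "coordinates": None, "orientation": "N",
          "presentation_method": "", "date": "1933"}
print(f"completeness = {completeness(record):.2f}")  # 0.67: two fields missing
```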
11

Seidelmann, P. K. "Possible Features of IAU Standards." Highlights of Astronomy 9 (1992): 155–59. http://dx.doi.org/10.1017/s1539299600008881.

Abstract:
In the past, the IAU has adopted standard values for some constants, primarily for use with solar system ephemerides. The constants adopted in 1976 were specifically adjusted to provide internal consistency. In each case, when constants have been adopted, the changes have reflected accuracy improvements, and the purpose has been to encourage the accomplishment of better science. Over the past 12 years, the Working Group on Cartographic Coordinates has issued triennial reports giving the best values for the sizes and rotations of the planets and satellites. This working group is now an IAU/IAG/COSPAR working group, reflecting the different organizations that have recognized its benefits. It is an example of a properly functioning working group, providing the best values on a regular basis. The IUGG also provides best estimates triennially for values of interest in geodesy and geodynamics.
12

França, Leandro Luiz Silva de, and Luiz Felipe Coutinho Ferreira da Silva. "COMPARISON BETWEEN THE DOUBLE BUFFER METHOD AND THE EQUIVALENT RECTANGLE METHOD FOR THE QUANTIFICATION OF DISCREPANCIES BETWEEN LINEAR FEATURES." Boletim de Ciências Geodésicas 24, no. 3 (2018): 300–317. http://dx.doi.org/10.1590/s1982-21702018000300020.

Abstract:
Currently, in Brazil, there is no standard procedure for assessing the positional accuracy of non-point features (lines and polygons). This work compares the results of two methodologies for determining the average value of the discrepancies between linear features. The first, the Equivalent Rectangle Method, determines the discrepancy by considering a rectangle equivalent to the polygon obtained from the two homologous lines. The second, the Double Buffer Method, applies a buffer to both lines and obtains the average discrepancy value from the relation between the areas of the generated polygons. These methods were compared in two steps. Initially, an experiment was performed with features of known measurements, in which the displacement of the homologous lines was controlled in azimuth and distance. In this step, it was verified that the shape of the feature and the direction of the displacement interfere with the results of both methods when compared to the traditional procedure of measuring discrepancies at homologous points. In the second stage, we evaluated OpenStreetMap vector data (road class) against a more accurate vector dataset produced for the mapping of the State of Bahia. As a result, for the 1:25,000, 1:50,000, 1:100,000 and 1:250,000 scales, the Equivalent Rectangle Method yielded PEC-PCD classes "C", "B", "A" and "A", respectively, and the Double Buffer Method yielded "R", "C", "B" and "A", where "R" means that the data did not achieve the minimum PEC-PCD classification.
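The Equivalent Rectangle idea is simple enough to sketch: the polygon bounded by the two homologous lines is reduced to a rectangle of equal area whose length is the mean of the two line lengths, so the rectangle's width estimates the mean discrepancy. A hedged Python/Shapely illustration (the published construction, and the Double Buffer counterpart, may differ in detail):

```python
from shapely.geometry import LineString, Polygon

def equivalent_rectangle_discrepancy(line_a: LineString,
                                     line_b: LineString) -> float:
    """Width of the rectangle with the same area as the polygon enclosed by
    the two homologous lines and length equal to their mean length."""
    ring = list(line_a.coords) + list(reversed(list(line_b.coords)))
    area = Polygon(ring).area
    mean_length = 0.5 * (line_a.length + line_b.length)
    return area / mean_length

tested = LineString([(0, 0), (100, 0)])       # e.g., OpenStreetMap road
reference = LineString([(0, 4), (100, 4)])    # homologous reference feature
print(equivalent_rectangle_discrepancy(tested, reference))   # 4.0 m
```

The resulting mean discrepancy can then be tested against the PEC-PCD tolerance for the target scale, as in the classification reported above.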
13

Zheng, Xianwei, Hanjiang Xiong, Jianya Gong, and Linwei Yue. "A VIRTUAL GLOBE-BASED MULTI-RESOLUTION TIN SURFACE MODELING AND VISUALIZETION METHOD." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B2 (June 8, 2016): 459–64. http://dx.doi.org/10.5194/isprsarchives-xli-b2-459-2016.

Abstract:
The integration and visualization of geospatial data on a virtual globe play a significant role in understanding and analyzing Earth surface processes. However, current virtual globes usually sacrifice accuracy to ensure efficiency in global data processing and visualization, which devalues their usefulness for scientific applications. In this article, we propose a high-accuracy multi-resolution TIN pyramid construction and visualization method for virtual globes. First, we introduce cartographic principles to formalize the level-of-detail (LOD) generation, so that the TIN model in each layer is controlled by a data quality standard. A maximum z-tolerance algorithm is then used to iteratively construct the multi-resolution TIN pyramid. Moreover, the extracted landscape features are incorporated into the TIN of each layer, thus preserving the topological structure of the terrain surface at different levels. In the proposed framework, a virtual node (VN)-based approach is developed to seamlessly partition and discretize each triangulation layer into tiles, which can be organized and stored with a global quad-tree index. Finally, real-time out-of-core spherical terrain rendering is realized in the virtual globe system VirtualWorld1.0. The experimental results showed that the proposed method achieves a high-fidelity terrain representation while producing high-quality underlying data that satisfies the demands of scientific analysis.
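The maximum z-tolerance step can be sketched as a greedy insertion loop: starting from a coarse seed, repeatedly add the input point with the largest vertical error against the current TIN until all errors fall within the layer's tolerance. A hedged Python illustration with SciPy (the paper's actual implementation and data structures are not shown):

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

def max_z_tolerance_tin(points, z, z_tol):
    """Greedy max z-tolerance selection of TIN vertices (illustrative)."""
    # Seed with the four coordinate-extreme points so the TIN spans the area.
    selected = list({points[:, 0].argmin(), points[:, 0].argmax(),
                     points[:, 1].argmin(), points[:, 1].argmax()})
    while True:
        tin = LinearNDInterpolator(points[selected], z[selected])
        err = np.abs(tin(points) - z)
        err[np.isnan(err)] = np.inf        # outside current hull: must be added
        err[selected] = 0.0                # vertices are exact by construction
        worst = int(err.argmax())
        if err[worst] <= z_tol:
            return sorted(selected)
        selected.append(worst)

rng = np.random.default_rng(0)
pts = rng.uniform(0, 1000, size=(400, 2))
elev = 50 * np.sin(pts[:, 0] / 200) + 30 * np.cos(pts[:, 1] / 150)
kept = max_z_tolerance_tin(pts, elev, z_tol=5.0)
print(len(kept), "of 400 points kept")     # coarser layers use larger z_tol
```

Running the same loop with a ladder of tolerances yields the multi-resolution pyramid described above.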
14

Kirk, R. L., E. Howington-Kraus, K. Edmundson, et al. "COMMUNITY TOOLS FOR CARTOGRAPHIC AND PHOTOGRAMMETRIC PROCESSING OF MARS EXPRESS HRSC IMAGES." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-3/W1 (July 25, 2017): 69–76. http://dx.doi.org/10.5194/isprs-archives-xlii-3-w1-69-2017.

Abstract:
The High Resolution Stereo Camera (HRSC) on the Mars Express orbiter (Neukum et al. 2004) is a multi-line pushbroom scanner that can obtain stereo and color coverage of targets in a single overpass, with pixel scales as small as 10 m at periapsis. Since commencing operations in 2004 it has imaged ~77% of Mars at 20 m/pixel or better. The instrument team uses the Video Image Communication And Retrieval (VICAR) software to produce and archive a range of data products, from uncalibrated and radiometrically calibrated images to controlled digital topographic models (DTMs), orthoimages, and regional mosaics of DTM and orthophoto data (Gwinner et al. 2009; 2010b; 2016). Alternatives to this highly effective standard processing pipeline are nevertheless of interest to researchers who do not have access to the full VICAR suite and may wish to make topographic products or perform other (e.g., spectrophotometric) analyses prior to the release of the highest-level products. We have therefore developed software to ingest HRSC images and model their geometry in the USGS Integrated Software for Imagers and Spectrometers (ISIS3), which can be used for data preparation, geodetic control, and analysis, and in the commercial photogrammetric software SOCET SET (® BAE Systems; Miller and Walker 1993; 1995), which can be used for independent production of DTMs and orthoimages.

The initial implementation of this capability utilized the then-current ISIS2 system and the generic pushbroom sensor model of SOCET SET, and was described in the DTM comparison of independent photogrammetric processing by different elements of the HRSC team (Heipke et al. 2007). A major drawback of this prototype was that neither software system then allowed for pushbroom images in which the exposure time changes from line to line. Except at periapsis, HRSC makes such timing changes every few hundred lines to accommodate changes of altitude and velocity in its elliptical orbit. As a result, it was necessary to split observations into blocks of constant exposure time, greatly increasing the effort needed to control the images and collect DTMs.

Here, we describe a substantially improved HRSC processing capability that incorporates sensor models with varying line timing in the current ISIS3 system (Sides 2017) and SOCET SET. This enormously reduces the work effort for processing most images and eliminates the artifacts that arose from segmenting them. In addition, the software takes advantage of the continuously evolving capabilities of ISIS3 and the improved image matching module NGATE (Next Generation Automatic Terrain Extraction, incorporating area- and feature-based algorithms, multi-image and multi-direction matching) of SOCET SET, thus greatly reducing the need for manual editing of DTM errors. We have also developed a procedure for geodetically controlling the images to Mars Orbiter Laser Altimeter (MOLA) data by registering a preliminary stereo topographic model to MOLA using the point cloud alignment (pc_align) function of the NASA Ames Stereo Pipeline (ASP; Moratto et al. 2010). This effectively converts inter-image tiepoints into ground control points in the MOLA coordinate system. The result is improved absolute accuracy and a significant reduction in work effort relative to manual measurement of ground control. The ISIS and ASP software used are freely available; SOCET SET is a commercial product. By the end of 2017 we expect to have ported our SOCET SET HRSC sensor model to the Community Sensor Model (CSM; Community Sensor Model Working Group 2010; Hare and Kirk 2017) standard utilized by the successor photogrammetric system SOCET GXP that is currently offered by BAE. In early 2018, we are also working with BAE to release the CSM source code under a BSD or MIT open source license.

We illustrate current HRSC processing capabilities with three examples, of which the first two come from the DTM comparison of 2007. Candor Chasma (h1235_0001) was a near-periapse observation with constant exposure time that could be processed relatively easily at that time. We show qualitative and quantitative improvements in DTM resolution and precision as well as a greatly reduced need for manual editing, and illustrate some of the photometric applications possible in ISIS. At the Nanedi Valles site we are now able to process all three long-arc orbits (h0894_0000, h0905_0000 and h0927_0000) without segmenting the images. Finally, processing image set h4235_0001, which covers the landing site of the Mars Science Laboratory (MSL) rover and its rugged science target of Aeolis Mons in Gale crater, provides a rare opportunity to evaluate DTM resolution and precision because extensive High Resolution Imaging Science Experiment (HiRISE) DTMs are available (Golombek et al. 2012). The HiRISE products have ~50x smaller pixel scale, so discrepancies can mostly be attributed to HRSC. We use the HiRISE DTMs to compare the resolution and precision of our HRSC DTMs with the (evolving) standard products.

We find that the vertical precision of HRSC DTMs is comparable to the pixel scale, but the horizontal resolution may be 15–30 image pixels, depending on processing. This is significantly coarser than the lower limit of 3–5 pixels based on the minimum size for image patches to be matched. Stereo DTMs registered to MOLA altimetry by surface fitting typically deviate by 10 m or less in mean elevation. Estimates of the RMS deviation are strongly influenced by the sparse sampling of the altimetry, but range from <50 m in flat areas to ~100 m in rugged areas.
15

Gabryś, Marta, and Łukasz Ortyl. "Georeferencing of Multi-Channel GPR—Accuracy and Efficiency of Mapping of Underground Utility Networks." Remote Sensing 12, no. 18 (2020): 2945. http://dx.doi.org/10.3390/rs12182945.

Abstract:
Due to its capability for the non-destructive testing of inaccessible objects, GPR (Ground Penetrating Radar) is used in geology, archeology, forensics and, increasingly, in engineering tasks. The wide range of applications of the GPR method has been enabled by the advanced technological solutions of equipment manufacturers, including multi-channel units. Acquiring data along several profiles simultaneously saves time and collects quasi-continuous information about the subsurface situation. One of the most important aspects of data acquisition systems, including GPR, is the appropriate methodology and accuracy of geopositioning. This publication discusses the results of GPR measurements carried out with the multi-channel Leica Stream C GPR (IDS GeoRadar Srl, Pisa, Italy). It presents the significant results of a test measurement whose idea was to determine the achievable accuracy depending on the georeferencing method: a GNSS (Global Navigation Satellite System) receiver, optionally supported by PPS (Pulse Per Second) time synchronization, and a total station. Methodology optimization was also an important aspect of the discussed issue, i.e., the effect of dynamic changes in the motion trajectory on the positioning accuracy of echograms and their vectorization products was also examined. Standard algorithms developed for the dedicated software were used for post-processing of the coordinates and filtering of the echograms, while the vectorization was done manually. The obtained results provided the basis for confronting the material collected in urban conditions with the available cartographic data, in terms of the possibility of verifying the actual location of underground utilities. The urban character of the area limited the movement of the Leica Stream C due to the large size of the instrument; however, it created the opportunity for additional analyses, including the accuracy of different location variants around high-rise buildings and the agreement of the amplitude distribution at the intersection of perpendicular profiles.
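The GNSS-based georeferencing that the accuracy comparison turns on reduces, at its core, to interpolating a time-stamped GNSS track to the trace timestamps (which is why PPS clock alignment matters). A minimal Python sketch with illustrative numbers:

```python
import numpy as np

def georeference_traces(trace_t, gnss_t, gnss_e, gnss_n):
    """Assign E/N coordinates to GPR traces by linearly interpolating the
    time-stamped GNSS track (assumes clocks already aligned, e.g. via PPS)."""
    e = np.interp(trace_t, gnss_t, gnss_e)
    n = np.interp(trace_t, gnss_t, gnss_n)
    return np.column_stack([e, n])

gnss_t = np.array([0.0, 1.0, 2.0, 3.0])            # GNSS epochs (s)
gnss_e = np.array([500.0, 501.2, 502.5, 503.9])    # easting (m)
gnss_n = np.array([200.0, 200.1, 200.1, 200.2])    # northing (m)
trace_t = np.arange(0.0, 3.0, 0.25)                # GPR trace timestamps (s)
print(georeference_traces(trace_t, gnss_t, gnss_e, gnss_n)[:3])
```

Any residual clock offset or an unmodelled bend in the trajectory maps directly into trace position error, which is the effect the test measurement quantifies.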
16

Pereira, Edson Adjair de Souza, Pedro Walfir M. Souza-Filho, Waldir R. Paradella, and Wilson Da Rocha Nascimento Jr. "GENERATION AND EVALUATION OF RADARGRAMMETRIC DEM FROM RADARSAT-1 STANDARD IMAGES IN LOW RELIEF AREA IN THE AMAZON COASTAL PLAIN." Revista Brasileira de Geofísica 32, no. 3 (2014): 405. http://dx.doi.org/10.22564/rbgf.v32i3.499.

Abstract:
The generation of digital elevation models (DEMs) from the Standard imaging mode of RADARSAT-1 stereo-images was investigated to evaluate the viability of producing 1:100,000 scale altimetric maps in areas with low topographic relief on the Brazilian Amazon coastal plain. Absolute DEMs were generated using RADARSAT-1 Standard stereopairs (S2Asc/S1Des, S6Des/S1Des, and S7Asc/S6Des) with ground control points collected using a Differential Global Positioning System. The geometric modeling for the DEM extractions was based on the "RADARSAT Specific Model" from the OrthoEngine Satellite Edition of the PCI Geomatica software; this model is an automated matching solution that considers the slant-range distances between sensor and terrain. Thirteen independent control points were used to validate the accuracy of the absolute DEM. Only the S2Asc/S1Des pair was effective in highlighting depth information, a result of the pair's intermediate intersection angle (47°) and higher vertical parallax ratio (4.31). Therefore, RADARSAT-1 Standard images are a useful alternative for generating absolute DEMs at the scale of 1:100,000 in cartographic gap areas on the Amazon coastal plain. Keywords: digital elevation model, stereoscopy, RADARSAT-1, Amazon, Brazil.
17

Alsynbaev, Kamil, Vitaly Bryksin, Lilia Zhegalina, Anton Kozlov, and Igor Nazarov. "Creating of geodatabase of melioration system of the Kaliningrad Region." InterCarto. InterGIS 26, no. 3 (2020): 184–98. http://dx.doi.org/10.35595/2414-9179-2020-3-26-184-198.

Abstract:
The article describes the experience of creating a geodatabase of the melioration system of the Kaliningrad region for integration into an automated agriculture management system. The uniqueness of this melioration system lies in the scale of the drainage facilities created during the East Prussian and Soviet periods. The current condition of the melioration system facilities is characterized, and actual problems and potential risks in the context of climate change are highlighted. The relevance of digitalization in the melioration sector of public administration and in the context of transboundary cooperation is explained. The primary data model, the structure of the cartographic layers, and the composition of the attribute information are considered. The features of the initial data and the problems of their preparation are described, and a technology for inputting poorly formalized data has been developed. The authors used their own service programs for geometry control, topology control, and the automation of operations, which increases the productivity and accuracy of data input in comparison with the standard means of basic geographic information systems. Thematic maps, examples of which are given, are the information basis for monitoring the drained lands of the Kaliningrad region to make environmental and economic management decisions. Promising areas of application of the geodatabase are proposed: a geoportal project based on server data storage using satellite information, and a project for hydrological modelling of hazardous and catastrophic occurrences in various melioration subsystems. The created geodatabase increases the efficiency of processing and analyzing information about melioration systems at local and regional scales, enables geographic visualization, and helps in melioration management decision-making.
18

Amirova, A. R., G. I. Kalimullina, V. V. Mozzherin, and V. V. Khusnutdinova. "DEVELOPMENT OF A HYDRO-STATISTICAL MODEL OF ESTABLISHING THE LIMITS OF STREAMS AND ITS VERIFICATION (ON THE EXAMPLE OF THE VYATKA-RIVER BASIN)." Bulletin of Udmurt University. Series Biology. Earth Sciences 29, no. 2 (2019): 231–42. http://dx.doi.org/10.35634/2412-9518-2019-29-2-231-242.

Abstract:
A new method for establishing the shoreline of watercourses is offered. In accordance with the Russian Water Code, the border of rivers and streams is the shoreline at the long-term average annual water level for the ice-free period. That level is unknown in advance: it cannot be directly measured in the field; it is not given in hydrological directories, as it is not considered a characteristic level; and on cartographic materials the outlines of water bodies are given at the average water level of the summer-autumn low-flow period. To determine the desired level, the authors propose a new method, using the Vyatka River as an example. The method is based on a hydro-statistical model: the predicted parameter is the value of ΔH (the excess of the long-term average water level for the ice-free period over the average level of the summer-autumn low-flow period), and the predictors (factors) of the model are the conditions of flow formation and passage. These factors are connected with ΔH by a power dependence whose coefficients were established by the least squares method from observations at 17 hydrological stations. The average calculation error of ΔH was 5.4 ± 2.2 cm (significance level 0.05). The equation allows the desired level to be calculated for any point or segment of the watercourse, after which the shoreline can be established using large-scale maps; hydrometric observations are not needed in this case. Mapping the shoreline of the Vyatka River within the city of Kirov and comparing its position with the shoreline on a detailed space image, acquired on a date close to the calculated level, show that the standard error of the horizontal position of the shoreline does not exceed 1.3 m, which meets the accuracy requirements for land boundaries of the water fund.
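The fitting step is classical: a power dependence becomes linear after taking logarithms, so the coefficients follow from ordinary least squares. A Python sketch with synthetic factors (the paper's actual predictors and coefficients are not reproduced):

```python
import numpy as np

def fit_power_law(X, y):
    """Fit y = a * x1**b1 * ... * xk**bk by least squares on log-transformed data."""
    A = np.column_stack([np.ones(len(y)), np.log(X)])
    coef, *_ = np.linalg.lstsq(A, np.log(y), rcond=None)
    return np.exp(coef[0]), coef[1:]       # scale a and exponents b

# Illustrative station data: catchment area (km^2), channel slope (m/km)
X = np.array([[1200, 0.30], [4500, 0.22], [800, 0.45], [15000, 0.12],
              [2300, 0.28], [600, 0.50], [9800, 0.15], [3100, 0.25]])
dH = 0.8 * X[:, 0] ** 0.25 * X[:, 1] ** -0.4   # synthetic dH (cm), no noise
a, b = fit_power_law(X, dH)
print(round(a, 2), np.round(b, 2))             # recovers 0.8 and [0.25, -0.4]
```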
19

Majewski, Steven R. "The Future of Stellar Populations Studies in the Milky Way and the Local Group." Proceedings of the International Astronomical Union 5, S262 (2009): 99–110. http://dx.doi.org/10.1017/s1743921310002607.

Abstract:
The last decade has seen enormous progress in understanding the structure of the Milky Way and neighboring galaxies via the production of large-scale digital surveys of the sky, like 2MASS and SDSS, as well as specialized, counterpart imaging surveys of other Local Group systems. Apart from providing snapshots of galaxy structure, these "cartographic" surveys lend insights into the formation and evolution of galaxies when supplemented with additional data (e.g., spectroscopy, astrometry) and when referenced to theoretical models and simulations of galaxy evolution. These increasingly sophisticated simulations are making ever more specific predictions about the detailed chemistry and dynamics of stellar populations in galaxies. To fully exploit, test and constrain these theoretical ventures demands similar commitments of observational effort as has been put into the previous imaging surveys, to fill out other dimensions of parameter space with statistically significant intensity. Fortunately, the future of large-scale stellar population studies is bright, with a number of grand projects on the horizon that collectively will contribute a breathtaking volume of information on individual stars in Local Group galaxies. These projects include: (1) additional imaging surveys, such as Pan-STARRS, SkyMapper and LSST, which, apart from providing deep, multicolor imaging, yield time-series data useful for revealing variable stars (including critical standard candles, like RR Lyrae variables) and for creating large-scale, deep proper motion catalogs; (2) higher-accuracy, space-based astrometric missions, such as Gaia and SIM-Lite, which stand to provide critical, high-precision dynamical data on stars in the Milky Way and its satellites; and (3) large-scale spectroscopic surveys such as RAVE, APOGEE, HERMES, LAMOST, and the Gaia spectrometer, which will yield not only enormous numbers of stellar radial velocities, but extremely comprehensive views of the chemistry of stellar populations. Meanwhile, previously dust-obscured regions of the Milky Way will continue to be systematically exposed by large infrared surveys underway or on the way, such as the various GLIMPSE surveys from Spitzer's IRAC instrument, UKIDSS, APOGEE, JASMINE and WISE.
20

Namysłowska-Wilczyńska, Barbara, and Janusz Wynalek. "Geostatistical Investigations of Displacements on the Basis of Data from the Geodetic Monitoring of a Hydrotechnical Object." Studia Geotechnica et Mechanica 39, no. 4 (2017): 59–75. http://dx.doi.org/10.1515/sgem-2017-0037.

Abstract:
Geostatistical methods make the analysis of measurement data possible. This article presents problems concerning the use of geostatistics in the spatial analysis of displacements based on geodetic monitoring. Using methods of applied (spatial) statistics, the research deals with interesting and current issues of space-time analysis and the modeling of displacements and deformations, as applied to any large-area objects on which geodetic monitoring is conducted (e.g., water dams, urban areas in the vicinity of deep excavations, areas at a macro-regional scale subject to anthropogenic influences caused by mining, etc.). These problems are crucial for the safety assessment of important hydrotechnical constructions, as well as for modeling and estimating mining damage. Based on the geodetic monitoring data, substantial basic empirical material was assembled, comprising many years of measured displacements of controlled points situated on the crown and foreland of an exemplary earth dam, used to assess the behaviour and safety of the object during its whole operating period. A macro-regional research approach was applied to investigate phenomena connected with the operation of the analysed large hydrotechnical construction. Applying the semivariogram function enabled the spatial variability analysis of the displacements: isotropic empirical semivariograms were calculated, and the parameters of the theoretical analytical functions approximating this empirical variability measure were then determined. Using ordinary (block) kriging at the nodes of an elementary spatial grid covering the analysed object, the estimated mean displacements Z* were calculated, together with the accompanying uncertainty of estimation, the kriging standard deviation σk. Raster maps of the distribution of the estimated means Z* and raster maps of the estimation deviations σk were obtained for selected years (1995 and 2007), taking the ground height of 136 m a.s.l. into the calculation. To calculate the raster maps of the interpolated Z* values, quick interpolation methods were also used, such as inverse distance squared weighting, a linear kriging model and spline kriging, which allowed the general background of the displacements to be recognized without assessing the accuracy of the Z* estimates, i.e., the value of σk. These maps also relate to 1995 and 2007 and to the same elevation. As a result of applying these techniques, clear boundaries of areas of subsidence, uplift and horizontal displacement on the examined hydrotechnical object were marked out, which can be interpreted as areas of local deformation of the object, important for the safety of the construction. The outcome of the geostatistical research conducted, including the structural analysis, semivariogram modeling and estimation of the displacements of the hydrotechnical object, is a rich set of cartographic characteristics (semivariograms, raster maps, block diagrams) that present spatial visualizations of the various analyses of the monitored displacements. The prepared geostatistical (3D) model of displacement variability (analysed over the area of the dam, during its operating period and including its height) will be useful not only for the correct assessment of displacements and deformations, but will also make it possible to forecast these phenomena, which is crucial for the operating safety of such constructions.
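The Z*/σk pair reported for each grid node comes from solving the ordinary kriging system built from the fitted semivariogram model. A compact Python sketch for a single node (spherical model and all numbers illustrative):

```python
import numpy as np

def spherical(h, nugget=0.0, sill=1.0, rng_=300.0):
    """Spherical semivariogram model gamma(h)."""
    h = np.asarray(h, dtype=float)
    g = nugget + (sill - nugget) * (1.5 * h / rng_ - 0.5 * (h / rng_) ** 3)
    return np.where(h >= rng_, sill, g)

def ordinary_kriging(coords, values, target):
    """Estimate Z* and kriging standard deviation sigma_k at one grid node."""
    n = len(values)
    d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = spherical(d)
    A[n, n] = 0.0                                  # Lagrange-multiplier row/col
    b = np.append(spherical(np.linalg.norm(coords - target, axis=1)), 1.0)
    w = np.linalg.solve(A, b)                      # weights + multiplier
    z_star = float(w[:n] @ values)
    sigma_k = float(np.sqrt(max(w @ b, 0.0)))      # kriging variance = w'b
    return z_star, sigma_k

coords = np.array([[0, 0], [100, 0], [0, 100], [120, 130], [60, 40]], float)
disp = np.array([2.1, 2.6, 1.8, 3.0, 2.4])         # displacements (mm)
print(ordinary_kriging(coords, disp, target=np.array([50.0, 50.0])))
```

Repeating this at every node of the elementary grid yields exactly the paired Z* and σk raster maps described above.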
21

Silva, Gleice Pereira da, Roberto Quental Coutinho, and Rafael Antonio da Silva Rosa. "AN APPROACH TO POSITIONAL QUALITY CONTROL METHODS FOR AIRBORNE INSAR HIGH-RESOLUTION X-BAND ORTHOIMAGES AND P-BAND DIGITAL TERRAIN MODEL." Boletim de Ciências Geodésicas 27, no. 1 (2021). http://dx.doi.org/10.1590/s1982-21702021000100001.

Abstract:
The positional validation of datasets is an important step in cartographic studies, since it reveals their accuracy and indicates the quality of the data production process. However, the positional validation of Synthetic Aperture Radar (SAR) datasets poses additional challenges compared to optical images, due to geometric distortions. We employ existing targets in the scene, such as traffic signs and lampposts, and identify them on the image as control points. We validated the geographic coordinates used as planialtimetric positional control points using both the amplitude backscattering orthoimage and the Digital Terrain Model (DTM) generated from the InSAR system. We employed the NMAS, ASPRS and NSSDA tests, along with the specifications of the Brazilian standards. The validation showed that these control points presented the following results at the 1:10,000 scale: NMAS test, class "A" in both PEC and PEC-PCD; ASPRS test, RMSEx = 1.317 m, RMSEy = 1.231 m and RMSEz = 1.145 m; and NSSDA test, RMSEr = 1.802 m, Precisionr = 3.118 m and Precisionz = 2.244 m. These results show that the proposed targets can be used as control points and that the InSAR datasets meet the expected quality for the generation of geotechnical products at the 1:10,000 scale.
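The horizontal and vertical NSSDA figures above are internally consistent under the usual NSSDA relations; a worked check (assuming the standard constants and RMSE_x ≈ RMSE_y for the horizontal statistic):

$$\mathrm{RMSE}_r = \sqrt{\mathrm{RMSE}_x^2 + \mathrm{RMSE}_y^2} = \sqrt{1.317^2 + 1.231^2} \approx 1.803\ \mathrm{m},$$

$$\mathrm{Accuracy}_r = 1.7308\,\mathrm{RMSE}_r \approx 3.12\ \mathrm{m}, \qquad \mathrm{Accuracy}_z = 1.9600\,\mathrm{RMSE}_z = 1.9600 \times 1.145 \approx 2.24\ \mathrm{m},$$

matching the reported 1.802 m, 3.118 m and 2.244 m at the 95% confidence level.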
22

Jethani, Suneel. "Lists, Spatial Practice and Assistive Technologies for the Blind." M/C Journal 15, no. 5 (2012). http://dx.doi.org/10.5204/mcj.558.

Abstract:
Introduction. Supermarkets are functionally challenging environments for people with vision impairments. A supermarket is likely to house an average of 45,000 products in a median floor-space of 4,529 square meters, and many visually impaired people are unable to shop without assistance, which greatly impedes personal independence (Nicholson et al.). The task of selecting goods in a supermarket is an "activity that is expressive of agency, identity and creativity" (Sutherland) from which many vision-impaired persons are excluded. In response to this, a number of proof-of-concept (demonstrating feasibility) and prototype assistive technologies are being developed which aim to use smart phones as potential sensorial aids for vision impaired persons. In this paper, I discuss two such prototypic technologies, Shop Talk and BlindShopping. I engage with this issue's list theme by suggesting that, on the one hand, list making is a uniquely human activity that demonstrates our need for order, reliance on memory, reveals our idiosyncrasies, and provides insights into our private lives (Keaggy 12). On the other hand, lists feature in the creation of spatial inventories that represent physical environments (Perec 3-4, 9-10). The use of lists in the architecture of assistive technologies for shopping illuminates the interaction between these two modalities of list use, where items contained in a list are not only textual but also cartographic elements that link the material and immaterial in space and time (Haber 63). I argue that despite the emancipatory potential of assistive shopping technologies, their efficacy in practical situations is highly dependent on the extent to which they can integrate a number of lists to produce representations of space that are meaningful for vision impaired users. I suggest that the extent to which these prototypes may translate into commercially viable, widely adopted technologies is heavily reliant upon commercial and institutional infrastructures, data sources, and regulation. Thus, their design, manufacture and adoption potential are shaped by the extent to which certain data inventories are accessible and made interoperable. To overcome such constraints, it is important to better understand the "spatial syntax" associated with the shopping task for a vision impaired person; that is, the connected ordering of real and virtual spatial elements that result in a supermarket as a knowable space within which an assisted "spatial practice" of shopping can occur (Kellerman 148, Lefebvre 16). In what follows, I use the concept of lists to discuss the production of supermarket space in relation to the enabling and disabling potentials of assistive technologies. First, I discuss mobile digital technologies relative to disability and impairment and describe how the shopping task produces a disabling spatial practice. Second, I present a case study showing how assistive technologies function in aiding vision impaired users in completing the task of supermarket shopping. Third, I discuss various factors that may inhibit the liberating potential of technology-assisted shopping by vision-impaired people.

Addressing Shopping as a Disabling Spatial Practice. Consider how a shopping list might inform one's experience of supermarket space. The way shopping lists are written demonstrates the variability in the logic that governs list writing.
As Bill Keaggy demonstrates in his found shopping list Web project and subsequent book, Milk, Eggs, Vodka, a shopping list may be written on a variety of materials, be arranged in a number of orientations, and the writer may use differing textual attributes, such as size or underlining, to show emphasis. The writer may use longhand, abbreviate, write neatly, scribble, and use an array of alternate spelling and naming conventions. For example, items may be listed based on knowledge of the location of products, they may be arranged on a list as a result of an inventory of a pantry or fridge, or they may be copied in the order they appear in a recipe. Whilst shopping, some may strictly follow the order of their list, crossing back and forth between aisles. Some may work through their list item-by-item, perhaps forward scanning to achieve greater economies of time and space. As a person shops, their memory may be stimulated by visual cues reminding them of products they need that may not be included on their list. For the vision impaired, this task is nearly impossible to complete without the assistance of a relative, friend, agency volunteer, or store employee. Such forms of assistance are often unsatisfactory, as delays may be caused by the unavailability of an assistant, or by the assistant having limited literacy, knowledge, or patience to adequately meet the shopper’s needs. Home delivery services, though readily available, impede personal independence (Nicholson et al.). Katie Ellis and Mike Kent argue that “an impairment becomes a disability due to the impact of prevailing ableist social structures” (3). It can be said, then, that supermarkets function as a disability-producing space for the vision impaired shopper. For the vision impaired, a supermarket is a “hegemonic modern visual infrastructure” where, for example, merchandisers may reposition items regularly to induce customers to explore areas of the shop that they would not usually visit, a move which adds to the difficulty faced by those customers with impaired vision who work on the assumption that items remain where they usually are (Schillmeier 161).

In addressing this issue, much emphasis has been placed on the potential of mobile communications technologies to afford vision impaired users greater mobility and flexibility (Jolley 27). However, as Gerard Goggin argues, the adoption of mobile communication technologies has not necessarily “gone hand in hand with new personal and collective possibilities,” given the limited access to standard features, even if the device is text-to-speech enabled (98). Issues with Digital Rights Management (DRM), which limit the way a device accesses and reproduces information, and confusion over whether audio rights are needed for text-to-speech conversion, impede the accessibility of mobile communications technologies for vision impaired users (Ellis and Kent 136). Accessibility and functionality issues like these arise when the needs, desires, and expectations of the visually impaired as a user group are considered as an afterthought rather than as a significant factor in the early phases of design and prototyping (Goggin 89). Thus, the development of assistive technologies for the vision impaired has been left to third parties, who must adapt their solutions to fit within certain technical parameters. It is valuable to consider what is involved in the task of shopping in order to appreciate the considerations that must be made in the design of assistive technologies intended for shopping.
Shopping generally consists of five sub-tasks: travelling to the store; finding items in-store; paying for and bagging items at the register; exiting the store and getting home; and the often overlooked task of putting items away once at home. In this process supermarkets exhibit a “trichotomous spatial ontology” consisting of the locomotor space through which a shopper moves around the store, the haptic space in the immediate vicinity of the shopper, and the search space where individual products are located (Nicholson et al.). In completing these tasks, a shopper will constantly be moving through and switching between all three of these spaces. In the next section I examine how assistive technologies function in producing supermarkets as both enabling and disabling spaces for the vision impaired.

Assistive Technologies for Vision Impaired Shoppers

Jason Farman (43) and Adriana de Souza e Silva both argue that in many ways spaces have always acted as information interfaces where data of all types can reside. Global Positioning System (GPS), Radio Frequency Identification (RFID), and Quick Response (QR) codes all allow for practically every spatial encounter to be an encounter with information. Site-specific and location-aware technologies address the desire for meaningful representations of space for use in everyday situations by the vision impaired. Further, the possibility of an “always-on” connection to spatial information via a mobile phone with WiFi or 3G connections transforms spatial experience by “enfolding remote [and latent] contexts inside the present context” (de Souza e Silva). A range of GPS navigation systems adapted for vision-impaired users is currently on the market. Typically, these systems convert GPS information into text-to-speech instructions and are either standalone devices, such as the Trekker Breeze, or software, such as Loadstone, that uses the compass, accelerometer, and 3G or WiFi functions found on most smart phones. Whilst both these products are adequate in guiding a vision-impaired user from their home to a supermarket, there are significant differences in their interfaces and data architectures. Trekker Breeze is a standalone hardware device that produces talking menus, maps, and GPS information. While its navigation functionality relies on a worldwide radio-navigation system that uses a constellation of 24 satellites to triangulate one’s position (May and LaPierre 263-64), its map and text-to-speech functionality relies on data from a DVD provided with the unit. Loadstone is an open source software system for Nokia devices that has been developed within the vision-impaired community. Loadstone is built on GNU General Public License (GPL) software and is developed from private and user-based funding; this overcomes the issue of Trekker Breeze’s reliance on the trading policies and pricing models of the few global vendors of satellite navigation data. Both products have significant shortcomings if viewed in the broader context of the five sub-tasks involved in shopping described above. Trekker Breeze and Loadstone require that additional devices be connected to them: in the case of Trekker Breeze it is a tactile keypad, and with Loadstone it is an aftermarket screen reader. To function optimally, Trekker Breeze requires that routes be pre-recorded and, according to a review conducted by the American Foundation for the Blind, it requires a 30-minute warm-up time to properly orient itself.
Both Trekker Breeze and Loadstone allow users to create and share Points of Interest (POI) databases showing the location of various places along a given route. Non-standard or duplicated user-generated content in POI databases may, however, have a negative effect on usability (Ellis and Kent 2). Furthermore, GPS-based navigation systems are accurate to approximately ten metres, which means that users must rely on their own mobility skills when they are required to change direction or stop for traffic. This issue with GPS accuracy is more pronounced when a vision-impaired user is approaching a supermarket, where they are likely to encounter environmental hazards with greater frequency and both pedestrian and vehicular traffic in greater density. Here the relations between spaces defined and spaces poorly defined or undefined by the GPS device interact to produce the supermarket surrounds as a disabling space (Galloway).

Prototype Systems for Supermarket Navigation and Product Selection

In the discussion to follow, I look at two prototype systems using QR codes and RFID that are designed to be used in-store by vision-impaired shoppers. ShopTalk is a proof-of-concept system developed by researchers at Utah State University that uses synthetic verbal route directions to assist vision impaired shoppers with supermarket navigation, product search, and selection (Nicholson et al.). Its hardware consists of a portable computational unit, a numeric keypad, a wireless barcode scanner and base station, headphones for the user to receive the synthetic speech instructions, a USB hub to connect all the components, and a backpack to carry them (with the exception of the barcode scanner), which has been slightly modified with a plastic stabiliser to assist in correct positioning. ShopTalk represents the supermarket environment using two data structures. The first comprises two elements: a topological map of locomotor space, which allows directional labels of “left,” “right,” and “forward” to be added to the supermarket floor plan; and, for navigation of haptic space, the supermarket inventory management system, which is used to create verbal descriptions of product information. The second data structure is a Barcode Connectivity Matrix (BCM), which associates each shelf barcode with several pieces of information, such as aisle, aisle side, section, shelf, position, Universal Product Code (UPC) barcode, product description, and price (a schematic sketch of such a record is given below). Nicholson et al. suggest that one of their “most immediate objectives for future work is to migrate the system to a more conventional mobile platform” such as a smart phone (see Mobile Shopping). The Personalisable Interactions with Resources on AMI-Enabled Mobile Dynamic Environments (PRIAmIDE) research group at the University of Deusto is also approaching Ambient Assisted Living (AAL) by exploring the smart phone’s sensing, communication, computing, and storage potential. As part of their work, the prototype system BlindShopping was developed to address the issue of assisted shopping using entirely off-the-shelf technology, with minimal environmental adjustments, to navigate the store and search, browse, and select products (López-de-Ipiña et al. 34). BlindShopping’s architecture is based on three components. First, a navigation system provides synthetic verbal instructions to the user via headphones connected to the smart phone being used, in order to guide them around the store.
This requires an RFID reader to be attached to the tip of the user’s white cane and road-marking-like RFID tag lines to be distributed throughout the aisles. A smart phone application processes the RFID data received via Bluetooth and generates the verbal navigation commands. Second, products are recognised by pointing a QR-code-reader-enabled smart phone at an embossed code located on a shelf. Third, the system is managed by a Rich Internet Application (RIA) interface, which operates through a Web browser and is used to register the RFID tags situated in the aisles and the QR codes located on shelves (López-de-Ipiña et al. 37-38). A typical use-scenario for BlindShopping involves a user activating the system by tracing an “L” on the screen or issuing the “Location” voice command, which activates the supermarket navigation system; the system then asks the user to either touch an RFID floor marking with their cane or scan a QR code on a nearby shelf to orient the system. The application then asks the user to dictate the product or category of product that they wish to locate. The smart phone maintains a continuous Bluetooth connection with the RFID reader to keep track of user location at all times. By drawing a “P” or issuing the “Product” voice command, a user can switch the device into product recognition mode, where the smart phone camera is pointed at an embossed QR code on a shelf to retrieve information about a product, such as manufacturer, name, weight, and price, via synthetic speech (López-de-Ipiña et al. 38-39); a sketch of this mode-switching flow is also given below.

Despite both systems aiming to operate with as little environmental adjustment as possible, as well as to minimise the extent to which a supermarket would need to allocate infrastructural, administrative, and human resources to implementing assistive technologies for vision impaired shoppers, there will undoubtedly be significant establishment and maintenance costs associated with the adoption of production versions of systems resembling either prototype described in this paper. As both systems rely on data obtained from a server by invoking Web services, supermarkets would need to provide in-store WiFi. Further, both systems’ dependence on store inventory data would mean that commercial versions of either of these systems are likely to be supermarket-specific or exclusive, given that there will be policies in place that forbid third-party access to inventory systems containing pricing information. Moreover, an assumption in the design of both prototypes is that the shopping task ends with the user arriving at home; this overlooks the important task of being able to recognise products in order to put them away or use them at a later time. The BCM and QR product recognition components of the respective prototype systems associate information with products in order to assist users in the product search and selection sub-tasks. However, information such as use-by dates, discount offers, country of manufacture, country of manufacturer’s origin, nutritional information, and the labelling of products as Halal, Kosher, containing alcohol, nuts, gluten, lactose, phenylalanine, and so on, create further challenges in how different data sources are managed within the devices’ software architecture. The reliance of both systems on existing smart phone technology is also problematic: changes in the production and uptake of mobile communication devices, and the software that they operate on, occur rapidly.
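To make ShopTalk’s second data structure concrete, the following is a minimal sketch in Python of what a BCM-style record and shelf-scan lookup might look like. Nicholson et al. describe only the attributes the BCM associates with each shelf barcode, so the field names, example values, and spoken-output format here are my own assumptions, not the authors’ schema.

```python
from dataclasses import dataclass

@dataclass
class BCMEntry:
    """One Barcode Connectivity Matrix record, keyed by a shelf barcode.

    Fields mirror the attributes Nicholson et al. list; their names
    and types here are illustrative guesses.
    """
    shelf_barcode: str   # barcode printed on the shelf edge
    aisle: int           # aisle number within the store
    aisle_side: str      # e.g. "left" or "right"
    section: int         # section of the aisle
    shelf: int           # shelf number, counted from the floor
    position: int        # position of the product along the shelf
    upc: str             # Universal Product Code of the product itself
    description: str     # verbal product description for text-to-speech
    price: float         # current shelf price

# A BCM can then be held as a dictionary from shelf barcode to record,
# so that a scan event resolves directly to a speakable description.
bcm: dict[str, BCMEntry] = {
    "0012345": BCMEntry("0012345", aisle=3, aisle_side="left", section=2,
                        shelf=4, position=11, upc="036000291452",
                        description="whole milk, two litres", price=2.99),
}

def on_scan(shelf_barcode: str) -> str:
    """Return the synthetic-speech utterance for a scanned shelf barcode."""
    entry = bcm.get(shelf_barcode)
    if entry is None:
        return "Barcode not recognised. Please scan again."
    return (f"{entry.description}, {entry.price:.2f} dollars. "
            f"Aisle {entry.aisle}, {entry.aisle_side} side, "
            f"shelf {entry.shelf} from the floor.")
```

The design point of the matrix is that a single scan associates both product information (search and haptic space) and a position relative to the store layout (locomotor space), which is presumably what allows verbal route directions and product descriptions to be generated from one lookup.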
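BlindShopping’s interaction model — tracing “L” or “P” (or issuing the “Location”/“Product” voice commands) to switch between RFID-driven navigation and QR-based product recognition — likewise suggests a simple mode-dispatch structure. The sketch below is again only illustrative: the registries, tag identifiers, and message strings are invented for this example and do not reflect the PRIAmIDE group’s actual implementation.

```python
from enum import Enum, auto

class Mode(Enum):
    NAVIGATION = auto()  # cane-mounted RFID reader streams tag IDs over Bluetooth
    PRODUCT = auto()     # phone camera reads embossed QR codes on shelves

# Hypothetical registries of the kind maintained via the Web-based RIA interface:
rfid_tags = {"tag-017": "Aisle three. Dairy section on your right."}  # tag ID -> navigation command
qr_products = {"qr-9031": "Orange juice, one litre, 3 euros 20."}     # QR payload -> product details

def handle_gesture(gesture: str, current: Mode) -> Mode:
    """Switch modes on the 'L' (Location) and 'P' (Product) gestures or voice commands."""
    if gesture == "L":
        return Mode.NAVIGATION
    if gesture == "P":
        return Mode.PRODUCT
    return current

def handle_event(mode: Mode, payload: str) -> str:
    """Turn an incoming RFID tag ID or QR payload into a spoken response."""
    if mode is Mode.NAVIGATION:
        return rfid_tags.get(payload, "Please touch a floor marking with your cane.")
    return qr_products.get(payload, "No product code found. Point the camera at the shelf label.")

# Example: the user traces "P", then scans a shelf QR code.
mode = handle_gesture("P", Mode.NAVIGATION)
print(handle_event(mode, "qr-9031"))  # -> "Orange juice, one litre, 3 euros 20."
```

Note that both sketches presuppose store-maintained registries (BCM records, registered RFID tags and QR codes); it is exactly this dependence on in-store instrumentation and inventory data that shapes the upgrade and maintenance problems discussed next.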
Once a retail space has been fitted out with the instrumentation necessary to accommodate a particular system, that system is unlikely to be able to cater to the requirement for frequent upgrades, as built environments are less flexible in the upgrading of their technological infrastructure (Kellerman 148). This sets up a scenario where the supermarket may persist as a disabling space, due to a gap between the functional capacities of applications designed for mobile communication devices and the environments in which they are to be used.

Lists and Disabling Spatial Practice

The development and provision of access to assistive technologies, and the data they rely upon, is a commercial issue (Ellis and Kent 7). The use of assistive technologies in supermarket-spaces that rely on the inter-functional coordination of multiple inventories may have the unintended effect of excluding people with disabilities from access to legitimate content (Ellis and Kent 7). With de Certeau, we can ask of supermarket-space: “What spatial practices correspond, in the area where discipline is manipulated, to these apparatuses that produce a disciplinary space?” (96).

In designing assistive technologies, such as those discussed in this paper, developers must strive to achieve integration across multiple data inventories. Software architectures must be optimised to overcome issues relating to intellectual property, cross-platform access, standardisation, fidelity, potential duplication, and mass storage. This need for “cross sectioning,” however, “merely adds to the muddle” (Lefebvre 8). This is a predicament that only intensifies as space and objects in space become increasingly “representable” (Galloway), and as the impetus for the project of spatial politics for the vision impaired moves beyond representation to centre on access and meaning-making.

Conclusion

Supermarkets act as sites of hegemony, resistance, difference, and transformation, where the vision impaired and their allies resist the “repressive socialization of impaired bodies” through their own social movements relating to environmental accessibility and the technology-assisted spatial practice of shopping (Gleeson 129). It is undeniable that the prototype technologies described in this paper, and those like them, do have a great deal of emancipatory potential. However, it should be understood that these devices produce representations of supermarket-space as a simulation within a framework that attempts to mimic the real, and these representations are pre-determined by the industrial, technological, and regulatory forces that govern their production (Lefebvre 8). Thus, the potential of assistive technologies is dependent upon a range of constraints relating to data accessibility, and upon the interaction of various kinds of lists across the geographic area that surrounds the supermarket, the locomotor, haptic, and search spaces of the supermarket, the home-space, and the internal spaces of a shopper’s imaginary. These interactions are important in contributing to the reproduction of disability in supermarkets through the use of assistive shopping technologies. The ways in which people make and read shopping lists complicate the relations between supermarket-space as location data and product inventories, on the one hand, and supermarket-space as it is intuited and experienced by a shopper, on the other (Sutherland).
Not only should we be creating inventories of supermarket locomotor, haptic, and search spaces; developers working in this area of assistive technologies should also look beyond the challenges of spatial representation and move towards a focus on interoperability and expanded access to spatial inventory databases and data within and beyond supermarket-space.

References

De Certeau, Michel. The Practice of Everyday Life. Berkeley: University of California Press, 1984.
De Souza e Silva, Adriana. “From Cyber to Hybrid: Mobile Technologies as Interfaces of Hybrid Spaces.” Space and Culture 9.3 (2006): 261-78.
Ellis, Katie, and Mike Kent. Disability and New Media. New York: Routledge, 2011.
Farman, Jason. Mobile Interface Theory: Embodied Space and Locative Media. New York: Routledge, 2012.
Galloway, Alexander. “Are Some Things Unrepresentable?” Theory, Culture and Society 28 (2011): 85-102.
Gleeson, Brendan. Geographies of Disability. London: Routledge, 1999.
Goggin, Gerard. Cell Phone Culture: Mobile Technology in Everyday Life. London: Routledge, 2006.
Haber, Alex. “Mapping the Void in Perec’s Species of Spaces.” Tattered Fragments of the Map. Ed. Adam Katz and Brian Rosa. S.l.: Thelimitsoffun.org, 2009.
Jolley, William M. When the Tide Comes In: Towards Accessible Telecommunications for People with Disabilities in Australia. Sydney: Human Rights and Equal Opportunity Commission, 2003.
Keaggy, Bill. Milk Eggs Vodka: Grocery Lists Lost and Found. Cincinnati: HOW Books, 2007.
Kellerman, Aharon. Personal Mobilities. London: Routledge, 2006.
Kleege, Georgia. “Blindness and Visual Culture: An Eyewitness Account.” The Disability Studies Reader. 2nd ed. Ed. Lennard J. Davis. New York: Routledge, 2006. 391-98.
Lefebvre, Henri. The Production of Space. Oxford: Blackwell, 1991.
López-de-Ipiña, Diego, Tania Lorido, and Unai López. “Indoor Navigation and Product Recognition for Blind People Assisted Shopping.” Ambient Assisted Living. Ed. J. Bravo, R. Hervás, and V. Villarreal. Berlin: Springer-Verlag, 2011. 25-32.
May, Michael, and Charles LaPierre. “Accessible Global Positioning System (GPS) and Related Orientation Technologies.” Assistive Technology for Visually Impaired and Blind People. Ed. Marion A. Hersh and Michael A. Johnson. London: Springer-Verlag, 2008. 261-88.
Nicholson, John, Vladimir Kulyukin, and Daniel Coster. “ShopTalk: Independent Blind Shopping through Verbal Route Directions and Barcode Scans.” The Open Rehabilitation Journal 2.1 (2009): 11-23.
Perec, Georges. Species of Spaces and Other Pieces. Trans. and ed. John Sturrock. London: Penguin Books, 1997.
Schillmeier, Michael W. J. Rethinking Disability: Bodies, Senses, and Things. New York: Routledge, 2010.
Sutherland, I. “Mobile Media and the Socio-Technical Protocols of the Supermarket.” Australian Journal of Communication 36.1 (2009): 73-84.
APA, Harvard, Vancouver, ISO, and other styles