Academic literature on the topic 'Distance map'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Distance map.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Distance map"

1

Lalor, Michael J. "Four-map absolute distance contouring." Optical Engineering 36, no. 9 (September 1, 1997): 2517. http://dx.doi.org/10.1117/1.601480.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Gott, J. Richard, Charles Mugnolo, and Wesley N. Colley. "Map Projections Minimizing Distance Errors." Cartographica: The International Journal for Geographic Information and Geovisualization 42, no. 3 (September 2007): 219–34. http://dx.doi.org/10.3138/carto.42.3.219.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Troberg, Erik, and Douglas J. Gillan. "Measuring Spatial Knowledge: Effects of the Relation between Acquisition and Testing." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 51, no. 4 (October 2007): 368–71. http://dx.doi.org/10.1177/154193120705100446.

Full text
Abstract:
Performance in human-robot interaction is related to the operator's mental map of the space in which the robot travels. Accordingly, accurate assessment of mental maps will be important for the design of human-robot interfaces. The present research used a factorial design to examine two methods for acquiring spatial knowledge (reading a map vs. navigating in the space) and three methods of testing spatial knowledge (drawing a map, navigating through the space, and estimating point-to-point distances). The results showed that performance in the navigation test was influenced by factors unrelated to the navigated distance, whereas map drawing in particular was closely related to the actual distance. Map drawing resulted in a stronger relation between map distance and actual distance in the map training condition than in the navigation training condition. The results are interpreted in terms of transfer-appropriate processing and are applied to human-robot interface design.
APA, Harvard, Vancouver, ISO, and other styles
4

Ban, Sang-Woo, Young-Min Jang, and Minho Lee. "Affective saliency map considering psychological distance." Neurocomputing 74, no. 11 (May 2011): 1916–25. http://dx.doi.org/10.1016/j.neucom.2010.07.033.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Huang, He, and Xinqi Gong. "A Review of Protein Inter-residue Distance Prediction." Current Bioinformatics 15, no. 8 (January 1, 2021): 821–30. http://dx.doi.org/10.2174/1574893615999200425230056.

Full text
Abstract:
Proteins are large molecules consisting of a linear sequence of amino acids. Proteins perform biological functions through specific 3D structures. The main factors that drive proteins to form these structures are constraints between residues. These constraints usually lead to important inter-residue relationships, including short-range inter-residue contacts and long-range inter-residue distances. Thus, a highly accurate prediction of inter-residue contact and distance information is of great significance for protein tertiary structure computation. Some methods have been proposed for inter-residue contact prediction, most of which focus on contact map prediction, and several reviews have summarized this progress. However, inter-residue distance prediction has in recent years been found to provide better guidance for protein structure prediction than contact map prediction. Methods for inter-residue distance prediction can be roughly divided into two types according to how the distance value is treated: one is based on multi-class classification over discrete distance bins, and the other is based on regression over continuous values. Here, we summarize these algorithms and show that they have obtained good results. Compared to contact map prediction, distance map prediction is in its infancy. There is much to do in the future, including improving distance map prediction precision and incorporating predicted distances into residue-residue distance-guided ab initio protein folding.
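To make the relation between the distance maps and contact maps discussed above concrete, a predicted distance map can be thresholded into a contact map; the 8 Å cutoff used below is a common convention in this literature assumed for illustration, not a detail taken from the review.

```python
import numpy as np

def distance_map_to_contact_map(dist_map, threshold=8.0):
    """Threshold an inter-residue distance map (in angstroms) into a contact map.

    The 8 A cutoff and the symmetric treatment are common conventions in this
    literature, assumed here rather than taken from the review itself.
    """
    dist_map = np.asarray(dist_map, dtype=float)
    contacts = dist_map < threshold
    np.fill_diagonal(contacts, False)  # a residue is not its own contact
    return contacts

if __name__ == "__main__":
    toy = np.array([[0.0, 5.2, 11.3],
                    [5.2, 0.0, 7.9],
                    [11.3, 7.9, 0.0]])
    print(distance_map_to_contact_map(toy).astype(int))
```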
APA, Harvard, Vancouver, ISO, and other styles
6

Iigusa, Kyoichi, Hirokazu Sawada, Fumihide Kojima, Hiroshi Harada, and Hiroyuki Yano. "Efficiency degradation of short‐distance wireless power transmission by distance alignment error." IET Microwaves, Antennas & Propagation 10, no. 9 (June 2016): 947–54. http://dx.doi.org/10.1049/iet-map.2015.0665.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Bandyopadhyay, Pradipta, and S. Dutta. "Farthest points and the farthest distance map." Bulletin of the Australian Mathematical Society 71, no. 3 (June 2005): 425–33. http://dx.doi.org/10.1017/s0004972700038430.

Full text
Abstract:
In this paper, we consider farthest points and the farthest distance map of a closed bounded set in a Banach space. We show, inter alia, that a strictly convex Banach space has the Mazur intersection property for weakly compact sets if and only if every such set is the closed convex hull of its farthest points, and recapture a classical result of Lau in a broader set-up. We obtain an expression for the subdifferential of the farthest distance map in the spirit of Preiss' Theorem which in turn extends a result of Westphal and Schwartz, showing that the subdifferential of the farthest distance map is the unique maximal monotone extension of a densely defined monotone operator involving the duality map and the farthest point map.
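For context, the two objects named in this abstract are usually defined as below (standard definitions with notation chosen here, not necessarily the paper's):

```latex
% K is a closed bounded subset of a Banach space (X, \|\cdot\|).
r_K(x) = \sup_{y \in K} \lVert x - y \rVert
  \quad \text{(farthest distance map)}, \qquad
Q_K(x) = \{\, y \in K : \lVert x - y \rVert = r_K(x) \,\}
  \quad \text{(farthest point map)}.
```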
APA, Harvard, Vancouver, ISO, and other styles
8

Lamm, Lars U., Tom Kristensen, Flemming Kissmeyer-Nielsen, and Fritz Jørgensen. "On the HLA-B, -D Map Distance." Tissue Antigens 10, no. 5 (December 11, 2008): 394–98. http://dx.doi.org/10.1111/j.1399-0039.1977.tb00775.x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Huang, Kun, and Shenghua Gao. "Wireframe Parsing With Guidance of Distance Map." IEEE Access 7 (2019): 141036–44. http://dx.doi.org/10.1109/access.2019.2943885.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Åkesson, Susanne. "Geomagnetic map used for long-distance navigation?" Trends in Ecology & Evolution 11, no. 10 (October 1996): 398–400. http://dx.doi.org/10.1016/0169-5347(96)30040-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Distance map"

1

Ranjitkar, Hari Sagar, and Sudip Karki. "Comparison of A*, Euclidean and Manhattan distance using Influence map in MS. Pac-Man." Thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-11800.

Full text
Abstract:
Context: Influence maps and potential fields are used for path finding in the domains of robotics and game AI. Various distance measures can be used to compute influence maps and potential fields; however, these distance measures have not previously been compared. Objectives: In this paper, we propose a new algorithm for finding an optimal point in a randomly sampled parameter space. Finally, comparisons are made among three popular distance measures to find the most efficient. Methodology: For RQ1 and RQ2 we used a mix of qualitative and quantitative approaches, and for RQ3 a quantitative approach. Results: The A* distance measure in influence maps is more efficient than Euclidean and Manhattan distance in potential fields. Conclusions: The proposed algorithm is suitable for finding an optimal point and explores a huge parameter space. A* distance in influence maps is highly efficient compared to Euclidean and Manhattan distance in potential fields. Euclidean and Manhattan distance performed relatively similarly, whereas A* distance performed better than both in terms of score in Ms. Pac-Man (see Appendix A).
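As a minimal illustration of two of the distance measures compared in this thesis, the sketch below builds a toy grid influence map with either Euclidean or Manhattan distance; the A* path distance actually used on the Ms. Pac-Man maze graph, and the thesis's parameter-search algorithm, are not reproduced, and the fall-off function is an assumption of this sketch.

```python
import numpy as np

def influence_map(shape, source, metric="euclidean", decay=0.5):
    """Toy influence map: influence falls off with distance from a source cell.

    Only Euclidean and Manhattan distance are shown; the thesis's A* distance
    is a path length on the maze graph and is not reproduced here.  The
    1 / (1 + decay * d) fall-off is an arbitrary choice for illustration.
    """
    rows, cols = np.indices(shape)
    dr, dc = rows - source[0], cols - source[1]
    if metric == "euclidean":
        dist = np.hypot(dr, dc)
    elif metric == "manhattan":
        dist = np.abs(dr) + np.abs(dc)
    else:
        raise ValueError(f"unknown metric: {metric}")
    return 1.0 / (1.0 + decay * dist)

if __name__ == "__main__":
    for metric in ("euclidean", "manhattan"):
        print(metric)
        print(np.round(influence_map((5, 5), source=(2, 2), metric=metric), 2))
```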
APA, Harvard, Vancouver, ISO, and other styles
2

Ottenby, Nore. "Focal Operations with Network Distance Based Neighbourhoods : Implementation, Application and Visualization." Thesis, KTH, Geoinformatik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-170018.

Full text
Abstract:
In spatial analysis, many operations are performed considering the neighbouring locations of a feature. The standard definition of a neighbourhood is an area confined by geometrical length and direction with respect to its focus. When allocating a location for a service, the population distribution is often considered. Standard GIS software includes tools for computations with uniform neighbourhoods, usually equal-sized circles, and these tools can be used for distribution analysis. Many geographic studies used as a basis for city planning decisions use distance as an evaluator, and the actual distance is frequently approximated using factored straight-line distance. For great distances and large datasets this is a sufficient means of evaluation, whilst for smaller distances and specific locations it has major drawbacks. For distribution analysis in a network space, the neighbourhood needs to be derived from the local set of network features, creating a unique neighbourhood for each location. The neighbourhood can then be used to overlay other datasets to analyse features within the network space. This report describes the application of network distance based neighbourhoods to design a tool, Network Location Analysis, for calculating focal statistics for use as city planning decision support. The tool has been implemented as a workflow of ArcGIS tools scripted as a Python toolbox. The input required by the tool is a population point layer and a vector network dataset. The output is a grid of points with population statistics as attributes and corresponding neighbourhoods generalized as polygons. The tool has been tested by comparing it to standard focal operations implemented in ArcGIS and by applying it to the dataset used in a study on the location of a new metro station conducted with conventional ArcGIS tools. The results have been analysed, visualized and compared to the data used in that study. The results show that Network Location Analysis surpasses conventional ArcGIS tools when conducting analysis on features in a network. It derives an accurate set of sum and proximity statistics for all locations within the processing extent, enabling analysis of the population distribution throughout the area and for specific points. The output is intuitive and manageable, can be visualized as a raster or by displaying the neighbourhoods as polygons, and can be used to evaluate population distribution and network connectivity. The drawbacks of the tool are its lack of robustness, its rigidity with respect to input, and an inefficient implementation that makes execution times impractical.
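The network-distance neighbourhood at the heart of the tool can be illustrated with NetworkX (an assumption of this sketch; the thesis implements it as an ArcGIS Python toolbox): every node reachable within a travel distance along the network forms the neighbourhood of a focus location. The edge attribute name "length" is likewise assumed.

```python
import networkx as nx

def network_neighbourhood(road_graph, focus_node, max_distance):
    """All nodes reachable within `max_distance` metres along the network.

    A NetworkX sketch only: the thesis implements the equivalent operation as
    an ArcGIS Python toolbox, and the edge attribute name "length" is an
    assumption of this illustration.
    """
    return nx.ego_graph(road_graph, focus_node,
                        radius=max_distance, distance="length")

if __name__ == "__main__":
    g = nx.Graph()
    g.add_edge("a", "b", length=300)
    g.add_edge("b", "c", length=600)
    hood = network_neighbourhood(g, "a", max_distance=500)
    print(sorted(hood.nodes()))  # ['a', 'b'] -- 'c' is 900 m away along the network
```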
APA, Harvard, Vancouver, ISO, and other styles
3

Dalin, Magnus, and Stina Måhl. "Radar Distance Positioning System : A Particle Filter Approach." Thesis, Linköping University, Department of Electrical Engineering, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-9475.

Full text
Abstract:
Positioning at sea has been important through all times. Thousands of years ago, seamen used the stars to navigate. Today GPS is the most used positioning system at sea. In this thesis an alternative positioning method is described and evaluated. The advantage of the method is that it is independent of external systems, which makes it harder to interfere with than GPS. By calculating the distance to land using radar echoes (measured from the ship) and comparing the distances to a digital sea chart, a position can be estimated. There are several problems that have to be solved when using this method. The distance calculation and the comparison with the sea chart result in a non-linear system. One way to handle this non-linearity is the particle filter, which is used in this thesis. When using authentic radar data to estimate a position within an area of 784 km², the system can isolate a small region around the correct position in two iterations. The system also manages to estimate the position with the same precision as GPS when the ship is moving.
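For readers unfamiliar with the technique mentioned in the abstract, one iteration of a generic bootstrap particle filter can be sketched as follows; the ship motion model and the radar-versus-sea-chart likelihood are problem-specific and appear here only as assumed callables, so this is an illustration of the filtering machinery rather than the thesis's system.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, control, measurement,
                         motion_model, likelihood):
    """One bootstrap particle filter iteration: predict, weight, resample.

    `particles` is an (n, d) array of state hypotheses.  `motion_model` and
    `likelihood` are assumed callables standing in for the problem-specific
    parts -- in the thesis these would involve ship motion and the comparison
    of radar ranges against a digital sea chart, neither of which is modelled
    here.
    """
    # Predict: propagate every particle through the (noisy) motion model.
    particles = motion_model(particles, control)
    # Update: weight each particle by how well it explains the measurement.
    weights = weights * likelihood(measurement, particles)
    weights = weights / np.sum(weights)
    # Resample (systematic) so particles concentrate in likely regions.
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    indexes = np.searchsorted(np.cumsum(weights), positions)
    return particles[indexes], np.full(n, 1.0 / n)
```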

APA, Harvard, Vancouver, ISO, and other styles
4

Segerbäck, Emil. "Shape Representation Using a Volume Coverage Model." Thesis, Linköpings universitet, Informationskodning, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-166727.

Full text
Abstract:
Geometric shapes can be represented in a variety of different ways. A distance map is a map from points to distances. This can be used as a shape representation, which can be created through a process known as a distance transform. This thesis project tests a method for three-dimensional distance transforms using fractional volume coverage. This method produces distance maps with subvoxel distance values. The result which is achieved is clearly better than what would be expected from a binary distance transform and similar to the one known from previous work. The resulting code has been published under a free and open source software license.

The developed code is available under a GPL license here https://gitlab.com/Emiluren/3d-distance-transform
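As a point of reference for the abstract above, a plain binary Euclidean distance transform can be computed with SciPy as sketched below; this is only the baseline the thesis compares against, not its fractional volume coverage method (the array size and the cube-shaped object are arbitrary choices here).

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Baseline: a plain binary Euclidean distance transform of a 3D volume.
# The thesis improves on this kind of result by using fractional volume
# coverage to obtain subvoxel-accurate distances, which is not shown here.
volume = np.zeros((32, 32, 32), dtype=bool)
volume[10:22, 10:22, 10:22] = True                 # a solid cube as the shape

dist_outside = distance_transform_edt(~volume)     # distance to the shape
dist_inside = distance_transform_edt(volume)       # distance to the background
signed_distance_map = dist_outside - dist_inside   # negative inside the shape

print(signed_distance_map[16, 16, ::4])
```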

APA, Harvard, Vancouver, ISO, and other styles
5

Malod-Dognin, Noël. "Protein structure comparison : from contact map overlap maximisation to distance-based alignment search tool." Rennes 1, 2010. http://www.theses.fr/2010REN1S015.

Full text
Abstract:
In molecular biology, a fruitful assumption is that proteins sharing close three-dimensional structures may share a common function and in most cases derive from a common ancestor. Computing the similarity between two protein structures is therefore a crucial task and has been extensively investigated. Among all the proposed methods, we focus on the similarity measure called Contact Map Overlap maximisation (CMO), mainly because it provides scores which can be used for obtaining good automatic classifications of the protein structures. In this thesis, comparing two protein structures is modelled as finding specific sub-graphs in specific k-partite graphs called alignment graphs. We then model CMO as a kind of maximum edge-induced sub-graph problem in alignment graphs, for which we design an exact solver that outperforms the other CMO algorithms from the literature. Even though we succeeded in accelerating CMO, the procedure is still too time-consuming for large database comparisons. To further accelerate CMO, we propose a hierarchical approach based on the secondary structure of the proteins. Finally, although CMO is a very good scoring scheme, the alignments it provides frequently possess large root mean square deviation values. To overcome this weakness, we propose a new comparison method based on internal distances which we call DAST (Distance-based Alignment Search Tool). It is modelled as a maximum clique problem in alignment graphs, for which we design a dedicated solver with very good performance.
APA, Harvard, Vancouver, ISO, and other styles
6

Adamsson, Gustav. "Fast and Approximate Text Rendering Using Distance Fields." Thesis, Linköpings universitet, Informationskodning, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-121592.

Full text
Abstract:
Distance field text rendering has many advantages compared to most other text rendering solutions. Two of the advantages are the possibility to scale the glyphs without losing the crisp edge and lower memory consumption. A drawback with distance field text rendering can be the high distance field generation time. The solution for fast distance field text rendering in this thesis generates the distance fields by drawing distance gradients locally over the outlines of the glyphs. This method is much faster than the older exact methods for generating distance fields, which often include multiple passes over the whole image. Using the solution for text rendering proposed in this thesis results in good-looking text that is generated on the fly. The distance fields are generated on a mobile device in less than 10 ms for most of the glyphs in good quality, which is less than the time between two frames.
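For orientation, the rendering half of distance-field text usually maps each sampled distance to an opacity with a smoothstep around the glyph outline, as sketched below; this standard trick is assumed for illustration and is not the thesis's contribution, which is the fast generation of the field itself.

```python
import numpy as np

def coverage_from_distance(signed_dist, edge_width=1.0):
    """Map signed distances (negative inside a glyph) to opacity in [0, 1].

    A smoothstep around the zero contour gives the crisp yet scalable edge
    that distance-field text rendering is known for; edge_width controls the
    softness of the transition and is an arbitrary choice here.
    """
    t = np.clip(0.5 - signed_dist / edge_width, 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)  # smoothstep(0, 1, t)

if __name__ == "__main__":
    # Distances sampled across a glyph edge: well outside ... well inside.
    samples = np.array([1.5, 0.5, 0.25, 0.0, -0.25, -0.5, -1.5])
    print(np.round(coverage_from_distance(samples), 2))
```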
APA, Harvard, Vancouver, ISO, and other styles
7

Pillamari, Jayachandran. "Parallelization of backward deleted distance calculation in graph based features using Hadoop." Kansas State University, 2013. http://hdl.handle.net/2097/15433.

Full text
Abstract:
Master of Science
Department of Computing & Information Sciences
Daniel Andresen
The current project presents an approach to parallelizing the calculation of Backward Deleted Distance (BDD) in Graph Based Features (GBF) computation using Hadoop. In this project the issues concerned with the calculation of BDD are identified, and parallel computing technologies like Hadoop are applied to solve them. The project introduces a new algorithm to parallelize the APSP problem in BDD calculation using the Hadoop MapReduce feature. The project is implemented in Java and Hadoop technologies. The aim of this project is to parallelize the calculation of BDD, thereby reducing GBF computation time. The process of BDD calculation is examined to identify the key places where it could be parallelized. Since the BDD calculation involves calculating the shortest paths between all pairs of given users, it can be viewed as an All Pairs Shortest Path (APSP) problem. The internal structure and implementation of the Hadoop MapReduce framework is studied and applied to the APSP problem. The GBF features are one of the feature sets used in the Ontology classifiers. In the current project, GBF features are used to predict the friendship relationship between users whose direct link is deleted. The computation involves calculating BDD between all pairs of users. The BDD for a user pair represents the shortest path between them when their direct link is deleted; in real terms, it is the shortest distance between them other than the direct path. The project uses train and test data sets consisting of positive and negative instances. The positive instances consist of user pairs having a friendship link between them, whereas the negative instances do not have any direct link between them. Apache Hadoop is an emerging technology for scalable, distributed computing across clusters of computers. It has a MapReduce framework used for developing applications which process large amounts of data in parallel on large clusters. The project was developed and implemented successfully and was tested for reliability and performance with different data sets, considering various factors and typical graph representations; the test results were analyzed to predict the behavior of the system. The test results show that the system achieves a substantial speedup, reducing processing time from 10 hours to 20 minutes.
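The Backward Deleted Distance described in the abstract can be illustrated for a single pair of users with NetworkX; the report's contribution is the Hadoop MapReduce parallelization of the all-pairs computation, which this sketch does not attempt (the toy graph, node names and the use of NetworkX are assumptions here).

```python
import networkx as nx

def backward_deleted_distance(graph, u, v):
    """Shortest u-v distance after removing their direct edge, if any.

    Illustrates the BDD quantity described in the abstract for one pair; the
    report itself parallelizes the all-pairs version with Hadoop MapReduce,
    which is not shown here.
    """
    g = graph.copy()
    if g.has_edge(u, v):
        g.remove_edge(u, v)
    try:
        return nx.shortest_path_length(g, u, v)
    except nx.NetworkXNoPath:
        return float("inf")

if __name__ == "__main__":
    g = nx.Graph([("a", "b"), ("a", "c"), ("c", "b")])
    print(backward_deleted_distance(g, "a", "b"))  # 2, via "c"
```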
APA, Harvard, Vancouver, ISO, and other styles
8

Hjelm Andersson, Patrick. "Binär matchning av bilder med hjälp av vektorer från den euklidiska avståndstransformen." Thesis, Linköping University, Department of Electrical Engineering, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2440.

Full text
Abstract:

This thesis shows the results from investigations of methods that use distance vectors when matching pictures. The distance vectors are available in a distance map made by the Euclidean distance transform. The investigated methods use the two characteristic features of the distance vector when matching pictures: length and direction. The length of the vector is used to calculate a value of how good a match is, and the direction of the vector is used to predict a transformation to get a better match. The results show that the number of calculation steps used during a search can be reduced compared to matching methods that only use the distance during the matching.
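A small sketch of the distance vectors referred to above: SciPy's Euclidean distance transform can return, for every pixel, the coordinates of its nearest feature pixel, from which both the vector length and direction follow (the toy image and the use of SciPy are assumptions of this illustration, not details from the thesis).

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Two feature pixels in a small binary image.
image = np.zeros((8, 8), dtype=bool)
image[2, 3] = image[5, 6] = True

# For every pixel: Euclidean distance to, and coordinates of, the nearest
# feature pixel.  The difference of coordinates is the distance *vector*.
dist, (near_r, near_c) = distance_transform_edt(~image, return_indices=True)

rows, cols = np.indices(image.shape)
vec_r, vec_c = near_r - rows, near_c - cols   # vector toward nearest feature

# The vector length reproduces the distance map; its direction is what the
# thesis uses to predict a transformation toward a better match.
assert np.allclose(np.hypot(vec_r, vec_c), dist)
print(dist.round(2))
```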

APA, Harvard, Vancouver, ISO, and other styles
9

Emberger, Simon. "Algorithmes, architecture et éléments optiques pour l'acquisition embarquées d'images totalement focalisées et annotées en distance." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2017. http://www.theses.fr/2017IMTA0055/document.

Full text
Abstract:
Acquiring the depth of a scene in addition to its image is a desirable feature for many applications that depend on the near environment. The state of the art in the field of depth extraction offers many methods, but very few are well adapted to small embedded systems. Some of them are too cumbersome because of their large optical system; others require a delicate calibration or processing methods that are difficult to implement in an embedded system. In this PhD thesis, we focus on methods with low hardware complexity in order to propose algorithms and optical solutions that extract the depth of the scene, provide a relevance evaluation of this measurement and produce all-in-focus images. We show that Depth from Focus (DfF) algorithms are the most adapted to embedded electronics constraints. This method consists in acquiring a cube of multi-focus images of the same scene for different focusing distances. The images are analyzed in order to annotate each zone of the scene with an index relative to its estimated depth. This index is then used to build an all-in-focus image. We worked on the sharpness criterion in order to propose low complexity solutions, only based on additions and comparisons, easily adaptable to a hardware architecture. The proposed solution uses bidirectional local contrast analysis and then combines the most relevant depth estimations based on detection confidence at the end of treatment. It comes in three variants requiring progressively less processing, making them increasingly suitable for a final embedded solution. For each method, depth and confidence maps are established, as well as an all-in-focus image composed of elements from the entire multi-focus cube. These approaches are compared in quality and complexity with other state-of-the-art methods of similar complexity. A hardware implementation of the best solution is proposed. The design of these algorithms raises the problem of image quality. It is indeed essential to have a remarkable contrast evolution as well as a motionless scene during the capture of the multi-focus cube. A very often neglected effect in this type of approach is the parasitic zoom caused by the lens motion during a focus variation. This "focal zoom" weakens the invariance of the scene and causes artifacts on the depth and confidence maps and on the all-in-focus image. The search for optics adapted to DfF is thus a second line of research in this work. We have evaluated industrial liquid lenses and experimental nematic liquid crystal modal lenses designed during this thesis. These technologies were compared in terms of speed, image quality, generated focal zoom intensity, power supply voltage and, finally, the quality of the extracted depth maps and reconstructed all-in-focus images. The lens and the algorithm best suited to this embedded DfF problem were then evaluated on a CPU-GPU development platform allowing real-time acquisition of depth maps, confidence maps and all-in-focus images.
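A heavily simplified Depth from Focus pass, for orientation only: per pixel, pick the slice of the multi-focus cube with the highest local contrast and compose an all-in-focus image from it. The gradient-sum sharpness measure used below is an assumption of this sketch; the thesis develops lower-complexity bidirectional contrast criteria and confidence maps that are not reproduced here.

```python
import numpy as np

def depth_from_focus(stack):
    """Per pixel, pick the focus slice with the highest local contrast.

    `stack` is an (n_focus, H, W) array of grey-level images.  The sharpness
    measure (sum of absolute horizontal and vertical differences) is a
    deliberately crude stand-in for the thesis's bidirectional local contrast
    criteria; no confidence map is produced here.
    """
    stack = np.asarray(stack, dtype=float)
    grad_r = np.abs(np.diff(stack, axis=1, prepend=stack[:, :1, :]))
    grad_c = np.abs(np.diff(stack, axis=2, prepend=stack[:, :, :1]))
    sharpness = grad_r + grad_c
    depth_index = np.argmax(sharpness, axis=0)       # index into the focus cube
    rows, cols = np.indices(depth_index.shape)
    all_in_focus = stack[depth_index, rows, cols]    # composite image
    return depth_index, all_in_focus
```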
APA, Harvard, Vancouver, ISO, and other styles
10

Wong, Mark. "Comparison of heat maps showing residence price generated using interpolation methods." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-214110.

Full text
Abstract:
In this report we attempt to provide insights into how interpolation can be used for creating heat maps showing residence prices for different residence markets in Sweden. More specifically, three interpolation methods are implemented and then used on three Swedish residence markets. These three residence markets are of varying characteristics, such as size and residence type. Data on residence sales and the physical definitions of the residence markets were collected. As residence sales are never identical, they were preprocessed to make them comparable. For comparison, a so-called external predictor was used as an extra parameter for the interpolation method; in this report, distance to the nearest public transportation was used as the external predictor. The interpolated heat maps were compared and evaluated using both quantitative and qualitative approaches. Results show that each interpolation method has its own strengths and weaknesses, and that using an external predictor results in better heat maps compared to only using residence price as predictor. Kriging was found to be the most robust method and consistently produced the best interpolated heat maps for all residence markets. On the other hand, it was also the most time-consuming interpolation method.
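For orientation, one of the simpler interpolation schemes of the kind compared in the report, inverse distance weighting, can be sketched as follows (kriging itself is not shown; the function name, array shapes and the absence of an external predictor are assumptions of this illustration).

```python
import numpy as np

def idw_interpolate(sample_xy, sample_values, grid_xy, power=2.0):
    """Inverse distance weighting of scattered prices onto heat map cells.

    sample_xy: (n, 2) coordinates of (preprocessed) residence sales,
    sample_values: (n,) prices, grid_xy: (m, 2) heat map cell coordinates.
    The power parameter is an arbitrary choice of this sketch, not a value
    taken from the report.
    """
    d = np.linalg.norm(grid_xy[:, None, :] - sample_xy[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)            # avoid division by zero at sample points
    w = 1.0 / d ** power
    return (w @ sample_values) / w.sum(axis=1)
```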
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Distance map"

1

Quinlan, Julia J. Scale and distance in maps. New York: PowerKids Press, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Distance. London: Jonathan Cape, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Barker, Raffaella. From a distance. London: Bloomsbury, 2015.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Measurements. New York: Benchmark Books, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Sayers, Valerie. The distance between us. New York: Doubleday, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Sayers, Valerie. The distance between us. Evanston, Ill: Northwestern University Press, 2013.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

How far is far?: Comparing geographical distances. Chicago, Ill: Heinemann Library, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Parker, Victoria. How far is far?: Comparing geographical distances. Chicago, Ill: Heinemann Library, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

My lover's lover [and] The distance between us. London: Tinder Press, 2013.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Finishing strong: How a man can go the distance. Sisters, Or: Multnomah Books, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Distance map"

1

Czúni, László, Dezső Csordás, and Gergely Császár. "Distance Map Retrieval." In Lecture Notes in Computer Science, 811–17. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-30125-7_100.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Pinoli, Jean-Charles. "The Distance-Map Framework." In Mathematical Foundations of Image Processing and Analysis 2, 281–93. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2014. http://dx.doi.org/10.1002/9781118984574.ch40.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Ahmed, Mahmuda, Sophia Karagiorgou, Dieter Pfoser, and Carola Wenk. "Fréchet Distance-Based Map Construction Algorithm." In Map Construction Algorithms, 33–46. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-25166-0_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Combier, Camille, Guillaume Damiand, and Christine Solnon. "Map Edit Distance vs. Graph Edit Distance for Matching Images." In Graph-Based Representations in Pattern Recognition, 152–61. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-38221-5_16.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Zeng, PeiFeng, and Tomio Hirata. "Distance Map Based Enhancement for Interpolated Images." In Geometry, Morphology, and Computational Imaging, 86–100. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/3-540-36586-9_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Malmberg, Filip, Robin Strand, Jianming Zhang, and Stan Sclaroff. "The Boolean Map Distance: Theory and Efficient Computation." In Discrete Geometry for Computer Imagery, 335–46. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66272-5_27.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Chen, Daniel, Anne Driemel, Leonidas J. Guibas, Andy Nguyen, and Carola Wenk. "Approximate Map Matching with respect to the Fréchet Distance." In 2011 Proceedings of the Thirteenth Workshop on Algorithm Engineering and Experiments (ALENEX), 75–83. Philadelphia, PA: Society for Industrial and Applied Mathematics, 2011. http://dx.doi.org/10.1137/1.9781611972917.8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Vesanto, Juha, and Mika Sulkava. "Distance Matrix Based Clustering of the Self-Organizing Map." In Artificial Neural Networks — ICANN 2002, 951–56. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-46084-5_154.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Donath, Klaus, Matthias Wolf, Radim Chrástek, and Heinrich Niemann. "A Hybrid Distance Map Based and Morphologic Thinning Algorithm." In Lecture Notes in Computer Science, 354–61. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-45243-0_46.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Normand, Nicolas. "Single Scan Granulometry Estimation from an Asymmetric Distance Map." In Discrete Geometry for Computer Imagery, 288–99. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-14085-4_23.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Distance map"

1

Aouit, Djedjiga Ait, and Abdeldjalil Ouahabi. "Hausdorff Distance Map Classification Using SVM." In IECON 2006 - 32nd Annual Conference on IEEE Industrial Electronics. IEEE, 2006. http://dx.doi.org/10.1109/iecon.2006.347706.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Chong, Alex, and Kok Wai Wong. "On the Fuzzy Cognitive Map attractor distance." In 2007 IEEE Congress on Evolutionary Computation. IEEE, 2007. http://dx.doi.org/10.1109/cec.2007.4424805.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Almendros-Jimenez, Jesus M., and Antonio Becerra-Teron. "Distance Based Queries in Open Street Map." In 2015 26th International Workshop on Database and Expert Systems Applications (DEXA). IEEE, 2015. http://dx.doi.org/10.1109/dexa.2015.60.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Li, Xin, Xiaoqing Lyu, Zhi Tang, and Hao Zhang. "Chemical Similarity Based on Map Edit Distance." In 2019 IEEE International Conference on Bioinformatics and Biomedicine (BIBM). IEEE, 2019. http://dx.doi.org/10.1109/bibm47256.2019.8983077.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Xie, Guiwang, Danni Ai, Hong Song, Yong Huang, Yongtian Wang, and Jian Yang. "Prior Distance Map for Multiple Abdominal Organ Segmentation." In the 2019 4th International Conference. New York, New York, USA: ACM Press, 2019. http://dx.doi.org/10.1145/3330393.3330394.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Lepoittevin, Yann, Isabelle Herlin, and Dominique Bereziat. "Object's tracking by advection of a distance map." In 2013 20th IEEE International Conference on Image Processing (ICIP). IEEE, 2013. http://dx.doi.org/10.1109/icip.2013.6738745.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Augustine, Marcus, Frank Ortmeier, Elmar Mair, Darius Burschka, Annett Stelzer, and Michael Suppa. "Landmark-Tree map: A biologically inspired topological map for long-distance robot navigation." In 2012 IEEE International Conference on Robotics and Biomimetics (ROBIO). IEEE, 2012. http://dx.doi.org/10.1109/robio.2012.6490955.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Lin, Wanbiao, Lei Sun, Jianchao Song, Xinwei Chen, and Jingtai Liu. "Map Optimization with Distance-Based Covariance in Industrial Field." In 2018 IEEE 8th Annual International Conference on CYBER Technology in Automation, Control, and Intelligent Systems (CYBER). IEEE, 2018. http://dx.doi.org/10.1109/cyber.2018.8688135.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Aguilar-Gonzalez, Abiel, Madain Perez-Patricio, Miguel Arias-Estrada, Jorge-Luis Camas-Anzueto, Hector-Ricardo Hernandez-de Leon, and Avisai Sanchez-Alegria. "An FPGA Correlation-Edge Distance approach for disparity map." In 2015 International Conference on Electronics, Communications and Computers (CONIELECOMP). IEEE, 2015. http://dx.doi.org/10.1109/conielecomp.2015.7086952.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Youkana, Imane, Rachida Saouli, Jean Cousty, and Mohamed Akil. "Morphological operators on graph based on geodesic distance map." In 2015 International Conference on Computer Vision and Image Analysis Applications (ICCVIA). IEEE, 2015. http://dx.doi.org/10.1109/iccvia.2015.7351899.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Distance map"

1

Sinclair, Samantha, and Sandra LeGrand. Reproducibility assessment and uncertainty quantification in subjective dust source mapping. Engineer Research and Development Center (U.S.), August 2021. http://dx.doi.org/10.21079/11681/41523.

Full text
Abstract:
Accurate dust-source characterizations are critical for effectively modeling dust storms. A previous study developed an approach to manually map dust plume-head point sources in a geographic information system (GIS) framework using Moderate Resolution Imaging Spectroradiometer (MODIS) imagery processed through dust-enhancement algorithms. With this technique, the location of a dust source is digitized and recorded if an analyst observes an unobscured plume head in the imagery. Because airborne dust must be sufficiently elevated for overland dust-enhancement algorithms to work, this technique may include up to 10 km in digitized dust-source location error due to downwind advection. However, the potential for error in this method due to analyst subjectivity has never been formally quantified. In this study, we evaluate a version of the methodology adapted to better enable reproducibility assessments amongst multiple analysts to determine the role of analyst subjectivity on recorded dust source location error. Four analysts individually mapped dust plumes in Southwest Asia and Northwest Africa using five years of MODIS imagery collected from 15 May to 31 August. A plume-source location is considered reproducible if the maximum distance between the analyst point-source markers for a single plume is ≤10 km. Results suggest analyst marker placement is reproducible; however, additional analyst subjectivity-induced error (7 km determined in this study) should be considered to fully characterize locational uncertainty. Additionally, most of the identified plume heads (> 90%) were not marked by all participating analysts, which indicates dust source maps generated using this technique may differ substantially between users.
APA, Harvard, Vancouver, ISO, and other styles
2

Sinclair, Samantha, and Sandra LeGrand. Reproducibility assessment and uncertainty quantification in subjective dust source mapping. Engineer Research and Development Center (U.S.), August 2021. http://dx.doi.org/10.21079/11681/41542.

Full text
Abstract:
Accurate dust-source characterizations are critical for effectively modeling dust storms. A previous study developed an approach to manually map dust plume-head point sources in a geographic information system (GIS) framework using Moderate Resolution Imaging Spectroradiometer (MODIS) imagery processed through dust-enhancement algorithms. With this technique, the location of a dust source is digitized and recorded if an analyst observes an unobscured plume head in the imagery. Because airborne dust must be sufficiently elevated for overland dust-enhancement algorithms to work, this technique may include up to 10 km in digitized dust-source location error due to downwind advection. However, the potential for error in this method due to analyst subjectivity has never been formally quantified. In this study, we evaluate a version of the methodology adapted to better enable reproducibility assessments amongst multiple analysts to determine the role of analyst subjectivity on recorded dust source location error. Four analysts individually mapped dust plumes in Southwest Asia and Northwest Africa using five years of MODIS imagery collected from 15 May to 31 August. A plume-source location is considered reproducible if the maximum distance between the analyst point-source markers for a single plume is ≤10 km. Results suggest analyst marker placement is reproducible; however, additional analyst subjectivity-induced error (7 km determined in this study) should be considered to fully characterize locational uncertainty. Additionally, most of the identified plume heads (> 90%) were not marked by all participating analysts, which indicates dust source maps generated using this technique may differ substantially between users.
APA, Harvard, Vancouver, ISO, and other styles
3

Zilberman, Mark. Shouldn’t Doppler 'De-boosting' be accounted for in calculations of intrinsic luminosity of Standard Candles? Intellectual Archive, September 2021. http://dx.doi.org/10.32370/iaj.2569.

Full text
Abstract:
"Doppler boosting / de-boosting" is a well-known relativistic effect that alters the apparent luminosity of approaching/receding radiation sources. "Doppler boosting" alters the apparent luminosity of approaching light sources to appear brighter, while "Doppler de-boosting" alters the apparent luminosity of receding light sources to appear fainter. While "Doppler boosting / de-boosting" has been successfully accounted for and observed in relativistic jets of AGN, double white dwarfs, in search of exoplanets and stars in binary systems it was ignored in the establishment of Standard Candles for cosmological distances. A Standard Candle adjustment appears necessary for "Doppler de-boosting" for high Z, otherwise we would incorrectly assume that Standard Candles appear dimmer, not because of "Doppler de-boosting" but because of the excessive distance, which would affect the entire Standard Candles ladder at cosmological distances. The ratio between apparent (L) and intrinsic (Lo) luminosities as a function of redshift Z and spectral index α is given by the formula ℳ(Z) = L/Lo=(Z+1)^(α-3) and for Type Ia supernova as ℳ(Z) = L/Lo=(Z+1)^(-2). These formulas are obtained within the framework of Special Relativity and may require adjustments within the General Relativity framework.
APA, Harvard, Vancouver, ISO, and other styles
4

Zilberman, Mark. “Doppler de-boosting” and the observation of “Standard candles” in cosmology. Intellectual Archive, July 2021. http://dx.doi.org/10.32370/iaj.2549.

Full text
Abstract:
“Doppler boosting” is a well-known relativistic effect that alters the apparent luminosity of approaching radiation sources. “Doppler de-boosting” is the name of the same relativistic effect observed for receding light sources (e.g. relativistic jets of active galactic nuclei and gamma-ray bursts). “Doppler boosting” changes the apparent luminosity of approaching light sources to appear brighter, while “Doppler de-boosting” causes the apparent luminosity of receding light sources to appear fainter. While “Doppler de-boosting” has been successfully accounted for and observed in relativistic jets of AGN, it was ignored in the establishment of Standard candles for cosmological distances. A Standard candle adjustment is necessary for “Doppler de-boosting” at Z > 0.1; otherwise we would incorrectly assume that Standard Candles appear dimmer not because of “Doppler de-boosting” but because of excessive distance, which would affect the entire Standard Candles ladder at cosmological distances. The ratio between apparent (L) and intrinsic (Lo) luminosities as a function of the redshift Z and spectral index α is given by the formula ℳ(Z) = L/Lo = (Z+1)^(α−3), and for Type Ia supernovae it appears as ℳ(Z) = L/Lo = (Z+1)^(−2). “Doppler de-boosting” may also explain the anomalously low luminosity of objects with a high Z without the introduction of an accelerated expansion of the Universe and Dark Energy.
APA, Harvard, Vancouver, ISO, and other styles
5

Zilberman, Mark. PREPRINT. “Doppler de-boosting” and the observation of “Standard candles” in cosmology. Intellectual Archive, June 2021. http://dx.doi.org/10.32370/ia_2021_06_23.

Full text
Abstract:
PREPRINT. “Doppler boosting” is a well-known relativistic effect that alters the apparent luminosity of approaching radiation sources. “Doppler de-boosting” is the term for the same relativistic effect observed for receding light sources (e.g. relativistic jets of active galactic nuclei and gamma-ray bursts). “Doppler boosting” alters the apparent luminosity of approaching light sources to appear brighter, while “Doppler de-boosting” alters the apparent luminosity of receding light sources to appear fainter. While “Doppler de-boosting” has been successfully accounted for and observed in relativistic jets of AGN, it was ignored in the establishment of Standard candles for cosmological distances. A Standard candle adjustment is necessary for Z > 0.1 because of “Doppler de-boosting”; otherwise we would incorrectly assume that Standard Candles appear dimmer, not because of “Doppler de-boosting” but because of excessive distance, which would affect the entire Standard Candles ladder at cosmological distances. The ratio between apparent (L) and intrinsic (Lo) luminosities as a function of the redshift Z and spectral index α is given by the formula ℳ(Z) = L/Lo = (Z+1)^(α−3), and for Type Ia supernovae it appears as ℳ(Z) = L/Lo = (Z+1)^(−2). “Doppler de-boosting” may also explain the anomalously low luminosity of objects with a high Z without the introduction of an accelerated expansion of the Universe and Dark Energy.
APA, Harvard, Vancouver, ISO, and other styles
6

Zilberman, Mark. "Doppler De-boosting" and the Observation of "Standard Candles" in Cosmology. Intellectual Archive, July 2021. http://dx.doi.org/10.32370/iaj.2552.

Full text
Abstract:
“Doppler boosting” is a well-known relativistic effect that alters the apparent luminosity of approaching radiation sources. “Doppler de-boosting” is the same relativistic effect, but observed for receding light sources (e.g. relativistic jets of AGN and GRB). “Doppler boosting” alters the apparent luminosity of approaching light sources to appear brighter, while “Doppler de-boosting” alters the apparent luminosity of receding light sources to appear fainter. While “Doppler de-boosting” has been successfully accounted for and observed in relativistic jets of AGN, it was ignored in the establishment of Standard candles for cosmological distances. A Standard Candle adjustment is necessary for Z > 0.1 because of “Doppler de-boosting”; otherwise we would incorrectly assume that Standard Candles appear dimmer, not because of “Doppler de-boosting” but because of excessive distance, which would affect the entire Standard Candles ladder at cosmological distances. The ratio between apparent (L) and intrinsic (Lo) luminosities as a function of the redshift Z and spectral index α is given by the formula ℳ(Z) = L/Lo = (Z+1)^(α−3), and for Type Ia supernovae it appears as ℳ(Z) = L/Lo = (Z+1)^(−2). “Doppler de-boosting” may also explain the anomalously low luminosity of objects with a high Z without the introduction of an accelerated expansion of the Universe and Dark Energy.
APA, Harvard, Vancouver, ISO, and other styles
7

Nafakh, Abdullah Jalal, Yunchang Zhang, Sarah Hubbard, and Jon D. Fricker. Assessment of a Displaced Pedestrian Crossing for Multilane Arterials. Purdue University, 2021. http://dx.doi.org/10.5703/1288284317318.

Full text
Abstract:
This research explores the benefits of a pedestrian crosswalk that is physically displaced from the intersection, using simulation software to estimate the benefits in terms of delay and pedestrian travel time. In many cases, the displaced pedestrian crossing may provide benefits such as reduced vehicle delay, reduced crossing distance, increased opportunity for signal progression, and reduced conflicts with turning vehicles. The concurrent pedestrian service that is traditionally used presents potential conflicts between pedestrians and three vehicular movements: right turns, permissive left turns, and right turns on red. The findings of this research suggest that a displaced pedestrian crossing should be considered as an option by designers when serving pedestrians crossing multi-lane arterials. In addition to reduced delay, pedestrian safety may be improved due to the shorter crossing distance, the elimination of conflicts with turning vehicles, and the potential for high driver compliance rates associated with signals, such as pedestrian hybrid beacons.
APA, Harvard, Vancouver, ISO, and other styles
8

Knapik, Joseph J., Anita Spiess, Steven G. Chervak, Myrna C. Callison, and Bruce H. Jones. Effectiveness of a Seat Pad in Reducing Back Pain in Long-Distance Drivers Deployed to Kuwait, October 2008-May 2009. Fort Belvoir, VA: Defense Technical Information Center, December 2009. http://dx.doi.org/10.21236/ada514481.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Hicks, Julie, Laurin Yates, and Jackie Pettway. Mat Sinking Unit supply study : Mississippi River revetment. Engineer Research and Development Center (U.S.), September 2021. http://dx.doi.org/10.21079/11681/41867.

Full text
Abstract:
The Mississippi Valley Division (MVD) has maintained the Mississippi River banks for over 80 years. The Mat Sinking Unit (MSU), built in 1946, was considered state-of-the-art at the time. This system is still in operation today and has placed over 1,000 miles of Articulated Concrete Mats along the Mississippi River from Head of Passes, LA, to Cairo, IL. A new MSU has been designed and is expected to be fully mission capable and operational by the 2023 season, which is expected to increase productivity from 2,000 squares/day up to 8,000 squares/day with double shifts and optimal conditions. This MSU supply study identifies and optimizes the supply chain logistics for increased production rates from the mat fields to the MSU. The production rates investigated for this effort are 2,000 squares/day, 4,000 squares/day, and 6,000 squares/day. RiskyProject® software, which utilizes a Monte Carlo method to determine a range of durations, manpower, and supplies based on logical sequencing, is used for this study. The study identifies several potential supply and demand issues with the increased daily production rates. Distance to casting fields, number of barges, and square availability are the major issues identified by this study in supplying increased placement rates.
APA, Harvard, Vancouver, ISO, and other styles
10

Costley, D., Luis De Jesús Díaz, Sarah McComas, Christopher Simpson, James Johnson, and Mihan McKenna. Multi-objective source scaling experiment. Engineer Research and Development Center (U.S.), June 2021. http://dx.doi.org/10.21079/11681/40824.

Full text
Abstract:
The U.S. Army Engineer Research and Development Center (ERDC) performed an experiment at a site near Vicksburg, MS, during May 2014. Explosive charges were detonated, and the shock and acoustic waves were detected with pressure and infrasound sensors stationed at various distances from the source, from 3 m to 14.5 km. One objective of the experiment was to investigate the evolution of the shock wave produced by the explosion into the acoustic wavefront detected several kilometers from the detonation site. Another objective was to compare the effectiveness of different wind filter strategies. Toward this end, several sensors were deployed near each other, approximately 8 km from the site of the explosion. These sensors used different types of wind filters, including different lengths of porous hose, a bag of rocks, a foam pillow, and no filter. In addition, seismic and acoustic waves produced by the explosions were recorded with seismometers located at various distances from the source. The suitability of these sensors for measuring low-frequency acoustic waves was investigated.
APA, Harvard, Vancouver, ISO, and other styles
