
Journal articles on the topic 'Point and Range Queries'


Consult the top 50 journal articles for your research on the topic 'Point and Range Queries.'


1

Lai, Ying Kit, Chung Keung Poon, and Benyun Shi. "Approximate colored range and point enclosure queries." Journal of Discrete Algorithms 6, no. 3 (September 2008): 420–32. http://dx.doi.org/10.1016/j.jda.2007.10.001.

2

Ghosh, Esha, Olga Ohrimenko, and Roberto Tamassia. "Efficient Verifiable Range and Closest Point Queries in Zero-Knowledge." Proceedings on Privacy Enhancing Technologies 2016, no. 4 (October 1, 2016): 373–88. http://dx.doi.org/10.1515/popets-2016-0045.

Abstract:
We present an efficient method for answering one-dimensional range and closest-point queries in a verifiable and privacy-preserving manner. We consider a model where a data owner outsources a dataset of key-value pairs to a server, who answers range and closest-point queries issued by a client and provides proofs of the answers. The client verifies the correctness of the answers while learning nothing about the dataset besides the answers to the current and previous queries. Our work yields for the first time a zero-knowledge privacy assurance to authenticated range and closest-point queries. Previous work leaked the size of the dataset and used an inefficient proof protocol. Our construction is based on hierarchical identity-based encryption. We prove its security and analyze its efficiency both theoretically and with experiments on synthetic and real data (Enron email and Boston taxi datasets).
3

Watve, Alok, Sakti Pramanik, Sungwon Jung, and Chae Yong Lim. "Data-independent vantage point selection for range queries." Journal of Supercomputing 75, no. 12 (April 21, 2018): 7952–78. http://dx.doi.org/10.1007/s11227-018-2384-8.

4

MYERS, YONATAN, and LEO JOSKOWICZ. "POINT SET DISTANCE AND ORTHOGONAL RANGE PROBLEMS WITH DEPENDENT GEOMETRIC UNCERTAINTIES." International Journal of Computational Geometry & Applications 22, no. 06 (December 2012): 517–41. http://dx.doi.org/10.1142/s0218195912500148.

Abstract:
Classical computational geometry algorithms handle geometric constructs whose shapes and locations are exact. However, many real-world applications require modeling and computing with geometric uncertainties, which are often coupled and mutually dependent. In this paper we address the relative position of points, point set distance problems, and orthogonal range queries in the plane in the presence of geometric uncertainty. The uncertainty can be in the locations of the points, in the query range, or both, and is possibly coupled. Point coordinates and range uncertainties are modeled with the Linear Parametric Geometric Uncertainty Model (LPGUM), a general and computationally efficient worst-case, first-order linear approximation of geometric uncertainty that supports dependence among uncertainties. We present efficient algorithms for relative points orientation, minimum and maximum pairwise distance, closest pair, diameter, and efficient algorithms for uncertain range queries: uncertain range/nominal points, nominal range/uncertain points, uncertain range/uncertain points, with independent/dependent uncertainties. In most cases, the added complexity is sub-quadratic in the number of parameters and points, with higher complexities for dependent point uncertainties.
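
As a concrete illustration of the kind of test such queries build on, the sketch below checks an uncertain point against a nominal axis-aligned range under a simplified first-order linear model: the point is p0 + A·q with each parameter q_i in [-1, 1]. This is our own box relaxation for illustration, not the paper's LPGUM algorithms, and it over-approximates the reachable set when uncertainties are dependent; handling that dependence exactly is precisely what LPGUM is for.

    import numpy as np

    def uncertain_point_vs_range(p0, A, lo, hi):
        """p0: nominal 2D point; A: 2-by-k sensitivity matrix; [lo, hi]: query box.
        Returns (may_be_inside, must_be_inside) over all parameters q in [-1, 1]^k."""
        slack = np.abs(A).sum(axis=1)            # per-coordinate worst-case offset
        pmin, pmax = p0 - slack, p0 + slack      # axis-aligned uncertainty box
        may = bool(np.all(pmax >= lo) and np.all(pmin <= hi))   # boxes intersect
        must = bool(np.all(pmin >= lo) and np.all(pmax <= hi))  # box inside range
        return may, must

    print(uncertain_point_vs_range(
        np.array([2.0, 3.0]),                    # nominal location
        np.array([[0.5, 0.1], [0.0, 0.3]]),      # dependence on two parameters
        lo=np.array([1.0, 2.5]), hi=np.array([3.0, 4.0])))  # -> (True, True)
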
5

Sun, Ping, Caimei Liang, Guohui Li, and Ling Yuan. "Researching Why-Not Questions in Skyline Query Based on Orthogonal Range." Electronics 9, no. 3 (March 18, 2020): 500. http://dx.doi.org/10.3390/electronics9030500.

Abstract:
This paper aims to answer “why-not” questions in skyline queries based on the orthogonal query range (i.e., ORSQ). These queries retrieve skyline points within a rectangular query range, which improves query efficiency. Answering why-not questions in ORSQ can help users analyze query results and make decisions. We discuss the causes of why-not questions in ORSQ. Then, we outline how to modify the why-not point and the orthogonal query range so that the why-not point is included in the result of the skyline query based on the orthogonal range. When the why-not point is in the orthogonal range, we show how to modify the why-not point and narrow the orthogonal range. We also present how to expand the orthogonal range when the why-not point is not in the orthogonal range. We effectively combine query refinement and data modification techniques to produce meaningful answers. The experimental results demonstrate that the proposed algorithms provide high-quality explanations for why-not questions in ORSQ on both real and synthetic datasets.
6

Cho, Hyung-Ju, and Rize Jin. "Efficient Processing of Moving k-Range Nearest Neighbor Queries in Directed and Dynamic Spatial Networks." Mobile Information Systems 2016 (2016): 1–17. http://dx.doi.org/10.1155/2016/2406142.

Abstract:
A k-range nearest neighbor (kRNN) query in a spatial network finds the k closest objects to each point in the query region. The essential nature of the kRNN query is significant in location-based services (LBSs), where location-aware queries with query regions such as kRNN queries are frequently used because of the issue of location privacy and the imprecision of the associated positioning techniques. Existing studies focus on reducing computation costs at the server side while processing kRNN queries. They also consider snapshot queries that are evaluated once and terminated, as opposed to moving queries that require constant updating of their results. However, little attention has been paid to evaluating moving kRNN queries in directed and dynamic spatial networks where every edge is directed and its weight changes in accordance with the traffic conditions. In this paper, we propose an efficient algorithm called MORAN that evaluates moving k-range nearest neighbor (MkRNN) queries in directed and dynamic spatial networks. The results of a simulation conducted using real-life roadmaps indicate that MORAN is more effective than a competitive method based on a shared execution approach.
7

El-Mahgary, Sami, Juho-Pekka Virtanen, and Hannu Hyyppä. "A Simple Semantic-Based Data Storage Layout for Querying Point Clouds." ISPRS International Journal of Geo-Information 9, no. 2 (January 22, 2020): 72. http://dx.doi.org/10.3390/ijgi9020072.

Abstract:
The importance of being able to separate the semantics from the actual (X,Y,Z) coordinates in a point cloud has been actively brought up in recent research. However, there is still no widely used or accepted data layout paradigm on how to efficiently store and manage such semantic point cloud data. In this paper, we present a simple data layout that makes use of the semantics and that allows for quick queries. The underlying idea is especially suited for a programming approach (e.g., queries programmed via Python) but we also present an even simpler implementation of the underlying technique on a well-known relational database management system (RDBMS), namely, PostgreSQL. The obtained query results suggest that the presented approach can be successfully used to handle point and range queries on large point clouds.
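
The gist of the layout (grouping coordinates by semantic class so that class-restricted queries only scan the relevant block) can be imitated in a few lines of Python. This is a toy in-memory stand-in for the paper's PostgreSQL implementation, with invented class names:

    import numpy as np

    cloud = {  # semantic class -> (N, 3) array of X, Y, Z coordinates
        "ground":   np.random.rand(100_000, 3) * 100,
        "building": np.random.rand(20_000, 3) * 100,
    }

    def range_query(cloud, classes, lo, hi):
        """Return points of the given classes inside the axis-aligned box [lo, hi]."""
        lo, hi = np.asarray(lo), np.asarray(hi)
        hits = []
        for c in classes:                        # only the requested blocks are scanned
            pts = cloud[c]
            hits.append(pts[np.all((pts >= lo) & (pts <= hi), axis=1)])
        return np.vstack(hits) if hits else np.empty((0, 3))

    roofs = range_query(cloud, ["building"], lo=(10, 10, 5), hi=(20, 20, 50))
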
8

Brisaboa, Nieves R., Guillermo De Bernardo, Roberto Konow, Gonzalo Navarro, and Diego Seco. "Aggregated 2D range queries on clustered points." Information Systems 60 (August 2016): 34–49. http://dx.doi.org/10.1016/j.is.2016.03.004.

9

EPPSTEIN, DAVID, MICHAEL T. GOODRICH, and JONATHAN Z. SUN. "SKIP QUADTREES: DYNAMIC DATA STRUCTURES FOR MULTIDIMENSIONAL POINT SETS." International Journal of Computational Geometry & Applications 18, no. 01n02 (April 2008): 131–60. http://dx.doi.org/10.1142/s0218195908002568.

Abstract:
We present a new multi-dimensional data structure, which we call the skip quadtree (for point data in R^2) or the skip octree (for point data in R^d, with constant d > 2). Our data structure combines the best features of two well-known data structures, in that it has the well-defined “box”-shaped regions of region quadtrees and the logarithmic-height search and update hierarchical structure of skip lists. Indeed, the bottom level of our structure is exactly a region quadtree (or octree for higher dimensional data). We describe efficient algorithms for inserting and deleting points in a skip quadtree, as well as fast methods for performing point location, approximate range, and approximate nearest neighbor queries.
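
Since the bottom level is exactly a region quadtree, a minimal sketch of that bottom level (insertion plus orthogonal range reporting; the skip-list layering that gives logarithmic height is omitted) may help fix ideas:

    class QuadNode:
        def __init__(self, x0, y0, x1, y1, cap=4):
            self.box = (x0, y0, x1, y1)
            self.pts, self.kids, self.cap = [], None, cap

        def insert(self, p):
            x0, y0, x1, y1 = self.box
            if self.kids is None:                 # leaf
                self.pts.append(p)
                if len(self.pts) > self.cap:      # split into four quadrants
                    mx, my = (x0 + x1) / 2, (y0 + y1) / 2
                    self.kids = [QuadNode(x0, y0, mx, my), QuadNode(mx, y0, x1, my),
                                 QuadNode(x0, my, mx, y1), QuadNode(mx, my, x1, y1)]
                    pts, self.pts = self.pts, []
                    for q in pts:
                        self.insert(q)            # re-route stored points to children
                return
            mx, my = (x0 + x1) / 2, (y0 + y1) / 2
            self.kids[(p[0] >= mx) + 2 * (p[1] >= my)].insert(p)

        def range(self, qx0, qy0, qx1, qy1, out):
            x0, y0, x1, y1 = self.box
            if qx1 < x0 or qx0 > x1 or qy1 < y0 or qy0 > y1:
                return out                        # query box misses this region: prune
            for p in self.pts:
                if qx0 <= p[0] <= qx1 and qy0 <= p[1] <= qy1:
                    out.append(p)
            if self.kids:
                for k in self.kids:
                    k.range(qx0, qy0, qx1, qy1, out)
            return out

    root = QuadNode(0, 0, 1, 1)
    for p in [(0.1, 0.2), (0.8, 0.7), (0.5, 0.5), (0.52, 0.48), (0.9, 0.1)]:
        root.insert(p)
    print(root.range(0.4, 0.4, 0.6, 0.6, []))     # -> [(0.52, 0.48), (0.5, 0.5)]
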
10

Shen, Jun Hong, Ye In Chang, Chen Chang Wu, and Ta Wei Liu. "A Forward Moving Method for Continuous Nearest Neighbor Queries." Applied Mechanics and Materials 284-287 (January 2013): 2965–69. http://dx.doi.org/10.4028/www.scientific.net/amm.284-287.2965.

Abstract:
A continuous nearest neighbor (CNN) query retrieves the nearest neighbor of every point on a line segment and indicates its valid segments. Zheng et al. have proposed a Hilbert-curve index for the CNN query. This method contains two phases: searching candidates in the approximate search range, and filtering the candidates to get the final answer. However, this method may determine a wide search range in the first phase, which decreases accuracy and increases processing time. Therefore, in this paper, to avoid this disadvantage, we propose a forward moving method to efficiently support CNN queries. The proposed method locally expands the search range along the query line segment to find the neighbors. Experimental results show that our method outperforms Zheng et al.'s method in terms of accuracy and processing time.
11

Liu, Yongshan, and Dehan Kong. "Continuous Visible Query for Three-Dimensional Objects in Spatial Databases." Mathematical Problems in Engineering 2016 (2016): 1–12. http://dx.doi.org/10.1155/2016/6340856.

Abstract:
Existing research on visible queries focuses on points and segments in two-dimensional space, while defects arise when processing visible queries in three-dimensional space. In this paper, Continuous Visible Range Query Based on Control Point (CVRQ-CP) is proposed to solve visible queries in a 3D spatial database. First, the Horizontal Angle (HA) and Vertical Projection Angle (VPA) of 3D objects in a spatial database are used in the visibility testing method. Changes of HA and VPA during continuous visible query processing mark visibility changes, which define and confirm the control points. Finally, the Continuous Visible Range Query Based on Control Point (CVRQ-CP) algorithm is presented. Verified by experiments, the CVRQ-CP algorithm correctly handles visible queries over 3D spatial objects and offers better accuracy than existing visible query methods for 3D spatial databases.
12

Phan, Tien-Khoi, HaRim Jung, and Ung-Mo Kim. "An Efficient Algorithm for Maximizing Range Sum Queries in a Road Network." Scientific World Journal 2014 (2014): 1–11. http://dx.doi.org/10.1155/2014/541602.

Abstract:
Given a set of positive-weighted points and a query rectangle r (specified by a client) of given extents, the goal of a maximizing range sum (MaxRS) query is to find the optimal location of r such that the total weight of all the points covered by r is maximized. All existing methods for processing MaxRS queries assume the Euclidean distance metric. In many location-based applications, however, the motion of a client may be constrained by an underlying (spatial) road network; that is, the client cannot move freely in space. This paper addresses the problem of processing MaxRS queries in a road network. We propose an external-memory algorithm that is suited for a large road network database. In addition, in contrast to the existing methods, which retrieve only one optimal location, our proposed algorithm retrieves all the possible optimal locations. Through simulations, we evaluate the performance of the proposed algorithm.
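
For intuition about the query itself, the Euclidean variant can be solved by brute force: some optimal axis-aligned a-by-b rectangle has an input point on its left edge and one on its bottom edge, so trying all such anchor pairs suffices. The O(n^3) sketch below is ours for illustration; the paper's contribution is the road-network setting and an external-memory algorithm, which this does not show.

    def max_rs(points, a, b):
        """points: list of (x, y, weight); returns (best_weight, (x0, y0)) such that
        the rectangle [x0, x0+a] x [y0, y0+b] maximizes the covered total weight."""
        best = (0.0, None)
        for (x0, _, _) in points:                # candidate left edges
            for (_, y0, _) in points:            # candidate bottom edges
                w = sum(wt for (x, y, wt) in points
                        if x0 <= x <= x0 + a and y0 <= y <= y0 + b)
                if w > best[0]:
                    best = (w, (x0, y0))
        return best

    pts = [(1, 1, 2.0), (2, 1.5, 1.0), (5, 5, 3.0), (2.4, 1.2, 1.5)]
    print(max_rs(pts, a=2.0, b=1.0))             # -> (4.5, (1, 1))
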
13

Cho, Hyung-Ju. "A Unified Approach to Spatial Proximity Query Processing in Dynamic Spatial Networks." Sensors 21, no. 16 (August 4, 2021): 5258. http://dx.doi.org/10.3390/s21165258.

Abstract:
Nearest neighbor (NN) and range (RN) queries are basic query types in spatial databases. In this study, we refer to collections of NN and RN queries as spatial proximity (SP) queries. At peak times, location-based services (LBS) need to quickly process SP queries that arrive simultaneously. Timely processing can be achieved by increasing the number of LBS servers; however, this also increases service costs. Existing solutions evaluate SP queries sequentially; thus, such solutions involve unnecessary distance calculations. This study proposes a unified batch algorithm (UBA) that can effectively process SP queries in dynamic spatial networks. With the proposed UBA, the distance between two points is indicated by the travel time on the shortest path connecting them. The shortest travel time changes frequently depending on traffic conditions. The goal of the proposed UBA is to avoid unnecessary distance calculations for nearby SP queries. Thus, the UBA clusters nearby SP queries and exploits shared distance calculations for query clusters. Extensive evaluations using real-world roadmaps demonstrated the superiority and scalability of UBA compared with state-of-the-art sequential solutions.
14

Huurdeman, Hugo C., Mikaela Aamodt, and Dan Michael Heggø. "“More than Meets the Eye” - Analyzing the Success of User Queries in Oria." Nordic Journal of Information Literacy in Higher Education 10, no. 1 (May 31, 2018): 18–36. http://dx.doi.org/10.15845/noril.v10i1.270.

Abstract:
Discovery systems allow academic library users to locate a wider range of resources than previous OPACs. However, actual usage of these systems may still be challenging. The main aim of this research is to get a better understanding of the hurdles users face while searching contemporary library systems. This study utilizes a transaction log analysis approach, using popular and zero result queries datasets gathered from the statistics module of a library discovery system. It explores what types of queries users perform, how successful the queries are, and examines underlying reasons for unsuccessful queries. To our knowledge, this is the first academic paper to use data originating from built-in transaction logs of the Oria library discovery system. The analysis shows that queries are often curriculum-related: we could pinpoint a relation with curriculum for 58% of the popular queries, and 28% for the zero result searches. A vast majority of popular queries refer to books, databases and journals, and over half of the queries used the title to locate a resource. 20% of the popular queries turned out to be unsuccessful. Zero result queries typically involve long queries, and in many cases consist of pasted reference citations. Our conclusion is that the examined discovery system is rather sensitive. Whilst this suggests the importance of increasing users' information search skills, it also points to the need for enhancing discovery systems and their underlying metadata. Furthermore, due to the prominence of curriculum-related queries, a better integration of curriculum materials ought to be achieved.
15

Pieldner, Judit. "Interpretation – Artistic Reproduction – Translatability. Theoretical Queries." Acta Universitatis Sapientiae, Philologica 6, no. 1 (December 1, 2014): 105–13. http://dx.doi.org/10.1515/ausp-2015-0010.

Abstract:
Along Wolfgang Iser's considerations, formulated in his work entitled The Range of Interpretation, we can speak about translation whenever a shift of levels/registers takes place. Literary interpretation is essentially an act of translation. As Iser points out, the register to which interpretation translates always depends on the subject matter that is translated. Translation does not repeat its subject matter, making it redundant, but transposes it into another register while the subject matter itself is also tailored by the interpretive register. The presentation aims to discuss the question of translatability in relation to the hermeneutical concept of application, and proposes to rethink the issue of change of the medium of artistic expression in the light of the concept of artistic reproduction as posited by Hans-Georg Gadamer's hermeneutics in his seminal work Truth and Method.
16

Zeberga, Kamil, Rize Jin, Hyung-Ju Cho, and Tae-Sun Chung. "A Safe-Region Approach to a Moving k-RNN Queries in a Directed Road Network." Journal of Circuits, Systems and Computers 26, no. 05 (February 8, 2017): 1750071. http://dx.doi.org/10.1142/s0218126617500712.

Abstract:
In road networks, k-range nearest neighbor (k-RNN) queries locate the k closest neighbors for every point on the road segments, within a given query region defined by the user, based on the network distance. This is an important task because the user's location information may be inaccurate; furthermore, users may be unwilling to reveal their exact location for privacy reasons. Therefore, under this type of specific situation, the server returns candidate objects for every point on the road segments and the client evaluates and chooses the exact k nearest objects from the candidate objects. Evaluating the query results at each timestamp to keep the freshness of the query answer, while the query object is moving, will create a significant computation burden for the client. We therefore propose an efficient approach called the safe-region-based approach (SRA) for computing a safe segment region and the safe exit points of a moving nearest neighbor (NN) query in a road network. SRA avoids evaluation of candidate answers returned by the location-based server, since that would have a high computation cost on the query side. Additionally, we apply SRA to a directed road network, where each road segment has a particular orientation and the network distances are not symmetric. Our experimental results demonstrate that SRA significantly outperforms a conventional solution in terms of both computational and communication costs.
17

ZHAO, GENG, KEFENG XUAN, DAVID TANIAR, and BALA SRINIVASAN. "INCREMENTAL K-NEAREST-NEIGHBOR SEARCH ON ROAD NETWORKS." Journal of Interconnection Networks 09, no. 04 (December 2008): 455–70. http://dx.doi.org/10.1142/s0219265908002382.

Abstract:
Most query searches on road networks either find objects within a certain range (range search) or find the K nearest neighbors (KNN) on the actual road network map. In this paper, we propose a novel query, the incremental k nearest neighbor (iKNN) query. iKNN can be defined as: given a set of candidate interest objects, a query point and the number of objects k, find a path which starts at the query point, goes through k interest objects, and whose distance is the shortest among all possible paths. This is a new type of query, which can be used when we want to visit k interest objects one by one from the query point. The approach is based on expanding the network from the query point, keeping the results in a query set and updating the query set when reaching network intersections or interest objects. The basic theory of this approach is Dijkstra's algorithm and the Incremental Network Expansion (INE) algorithm. Our experiments verified the applicability of the proposed approach to solving such incremental k nearest neighbor queries.
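
The network-expansion building block is plain Dijkstra search stopped at the first interest object. The greedy restart below is our simplification for illustration (the paper instead maintains and updates the query set within a single expansion, and greedy chaining does not guarantee the overall shortest path):

    import heapq

    def dijkstra_nearest(graph, src, targets):
        """graph: {node: [(neighbor, weight), ...]}; returns (dist, nearest target)."""
        dist, heap = {src: 0.0}, [(0.0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue                          # stale heap entry
            if u in targets:
                return d, u                       # expansion reached a target
            for v, w in graph.get(u, []):
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return float("inf"), None

    def iknn_greedy(graph, start, interest, k):
        path, cur, remaining, total = [], start, set(interest), 0.0
        for _ in range(k):
            d, nxt = dijkstra_nearest(graph, cur, remaining)
            if nxt is None:
                break
            total, cur = total + d, nxt
            path.append(nxt)
            remaining.discard(nxt)
        return total, path

    g = {"a": [("b", 1)], "b": [("a", 1), ("c", 2)], "c": [("b", 2)]}
    print(iknn_greedy(g, "a", {"b", "c"}, k=2))   # -> (3.0, ['b', 'c'])
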
18

Zhu, Liang, Fei Fei Liu, Wu Chen, and Qing Ma. "Processing Top-N Queries Based on p-Norm Distances." Applied Mechanics and Materials 490-491 (January 2014): 1293–97. http://dx.doi.org/10.4028/www.scientific.net/amm.490-491.1293.

Abstract:
Top-N queries are employed in a wide range of applications to obtain a ranked list of data objects that have the highest aggregate scores over certain attributes. The threshold algorithm (TA) is an important method in many scenarios. However, TA is effective only when the ranking function is monotone and the query point is fixed. In this paper, we propose an approach that alleviates the limitations of TA-like methods for processing top-N queries. Based on p-norm distances as ranking functions, our methods utilize the fundamental principle of Functional Analysis so that the candidate tuples of a top-N query with a p-norm distance can be obtained by the maximum distance. We conduct extensive experiments to prove the effectiveness and efficiency of our method for both low-dimensional (2, 3 and 4) and high-dimensional (25, 50 and 104) data.
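
Stripped of the candidate-pruning machinery, the ranking itself is only a p-norm distance to the query point plus a bounded heap. The brute-force sketch below scans everything, which is exactly what the paper's maximum-distance bound is designed to avoid:

    import heapq

    def pnorm(u, v, p):
        return sum(abs(a - b) ** p for a, b in zip(u, v)) ** (1.0 / p)

    def top_n(tuples, query, n, p=2):
        """Return the n tuples closest to `query` under the p-norm distance."""
        return heapq.nsmallest(n, tuples, key=lambda t: pnorm(t, query, p))

    data = [(1, 2), (3, 1), (0, 0), (2, 2), (5, 4)]
    print(top_n(data, query=(2, 1), n=2, p=1))    # Manhattan distance for p = 1
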
19

Cho, Hyung-Ju, Kiyeol Ryu, and Tae-Sun Chung. "An efficient algorithm for computing safe exit points of moving range queries in directed road networks." Information Systems 41 (May 2014): 1–19. http://dx.doi.org/10.1016/j.is.2013.10.008.

20

Corsia, M., T. Chabardès, H. Bouchiba, and A. Serna. "LARGE SCALE 3D POINT CLOUD MODELING FROM CAD DATABASE IN COMPLEX INDUSTRIAL ENVIRONMENTS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B2-2020 (August 12, 2020): 391–98. http://dx.doi.org/10.5194/isprs-archives-xliii-b2-2020-391-2020.

Abstract:
In this paper, we present a method to build Computer Aided Design (CAD) representations of dense 3D point cloud scenes by queries in a large CAD model database. This method is applied to real-world industrial scenes for infrastructure modeling. The proposed method first relies on a region growing algorithm based on a novel edge detection method. This algorithm is able to produce geometrically coherent regions which can be agglomerated in order to extract the objects of interest of an industrial environment. Each segment is then processed to compute relevant keypoints and multi-scale features in order to be compared to all CAD models from the database. The best fitting model is estimated together with the rigid six degree of freedom (6 DOF) transformation for positioning the CAD model on the 3D scene. The proposed novel keypoint extractor achieves robust and repeatable results that capture both thin geometrical details and the global shape of objects. Our new multi-scale descriptor stacks geometrical information around each keypoint at short and long range, allowing non-ambiguous matching for object recognition and positioning. We illustrate the efficiency of our method in a real-world application on 3D segmentation and modeling of electrical substations.
21

Komai, Yuka, Yuya Sasaki, Takahiro Hara, and Shojiro Nishio. "k-Nearest Neighbor Search based on Node Density in MANETs." Mobile Information Systems 10, no. 4 (2014): 385–405. http://dx.doi.org/10.1155/2014/158737.

Abstract:
In a kNN query processing method, it is important to appropriately estimate the range that includes the kNNs. While the range could be estimated based on the node density in the entire network, this is not always appropriate because the density of nodes in the network is not uniform. In this paper, we propose two kNN query processing methods in MANETs where the density of nodes is nonuniform: the One-Hop (OH) method and the Query Log (QL) method. In the OH method, the nearest node from the point specified by the query acquires its neighbors' locations and then determines the size of a circle region (the estimated kNN circle) which includes the kNNs with high probability. In the QL method, a node which relays a reply to a kNN query stores the information on the query result for future queries.
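
The estimate behind the OH method rests on a standard density argument: if nodes are locally uniform with density rho per unit area, a disc expected to contain k nodes has area k/rho, hence radius sqrt(k / (pi * rho)). The safety factor below is an invented placeholder, not the paper's actual inflation rule:

    import math

    def knn_radius(k, local_density, safety=1.5):
        """Estimated kNN search radius; `safety` pads the uniform-density estimate."""
        return safety * math.sqrt(k / (math.pi * local_density))

    # e.g. 8 neighbors wanted, about 0.02 nodes per square meter measured one hop out
    print(knn_radius(8, 0.02))                    # roughly 17 m
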
22

Liu, Muhua, Ping Zhang, and Qingtao Wu. "A Novel Construction of Constrained Verifiable Random Functions." Security and Communication Networks 2019 (November 3, 2019): 1–15. http://dx.doi.org/10.1155/2019/4187892.

Abstract:
Constrained verifiable random functions (VRFs) were introduced by Fuchsbauer. In a constrained VRF, one can derive a constrained key sk_S from the master secret key sk, where S is a subset of the domain. Using the constrained key sk_S, one can compute function values at points which are not in the set S. The security of constrained VRFs requires that the VRF's output should be indistinguishable from a random value in the range. They showed how to construct constrained VRFs for the bit-fixing class and the circuit-constrained class based on multilinear maps. Their construction can only achieve selective security, where an attacker must declare which point he will attack at the beginning of the experiment. In this work, we propose a novel construction of a constrained verifiable random function from bilinear maps and prove that it satisfies a new security definition which is stronger than selective security. We call it semiadaptive security, where the attacker is allowed to make the evaluation queries before it outputs the challenge point. It follows immediately that if a scheme satisfies semiadaptive security, it must also satisfy selective security.
23

TAN, XUEHOU. "FINDING AN OPTIMAL BRIDGE BETWEEN TWO POLYGONS." International Journal of Computational Geometry & Applications 12, no. 03 (June 2002): 249–61. http://dx.doi.org/10.1142/s0218195902000852.

Abstract:
Let π(a,b) denote the shortest path between two points a, b inside a simple polygon P, which totally lies in P. The geodesic distance between a and b in P is defined as the length of π(a,b), denoted by gd(a,b), in contrast with the Euclidean distance between a and b in the plane, denoted by d(a,b). Given two disjoint polygons P and Q in the plane, the bridge problem asks for a line segment (optimal bridge) that connects a point p on the boundary of P and a point q on the boundary of Q such that the sum of three distances gd(p′,p), d(p,q) and gd(q,q′), with any p′ ∈ P and any q′ ∈ Q, is minimized. We present an O(n log^3 n) time algorithm for finding an optimal bridge between two simple polygons. This significantly improves upon the previous O(n^2) time bound. Our result is obtained by making substantial use of a hierarchical structure that consists of segment trees, range trees and persistent search trees, and a structure that supports dynamic ray shooting and shortest path queries as well.
24

Bowden, Vanessa, Luke Ren, and Shayne Loft. "Supervising High Degree Automation in Simulated Air Traffic Control." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 62, no. 1 (September 2018): 86. http://dx.doi.org/10.1177/1541931218621019.

Abstract:
Implementing high degree automation in future air traffic control (ATC) systems will be crucial for coping with increased air traffic demand and maintaining safety. However, issues associated with the passive monitoring role assumed by operators in these systems continue to be of concern. Passive monitoring can lead to a range of human operator performance problems when overseeing automation. The performance cost when human operators are placed in a passive monitoring role has been conceptualized as the out-of-the-loop (OOTL) performance problem: where adding more automation to a system makes it less likely that the operator will notice an automation failure and intervene appropriately (Endsley & Kiris, 1995). The OOTL performance problem has been attributed to numerous factors including vigilance decrements, fatigue, task disengagement, and poor situation awareness. This study tested two different approaches to addressing the OOTL performance problem associated with high degree automation in a simulation of en-route ATC (ATC-lab Advanced; Fothergill, Loft, & Neal, 2009). Following a 60-min training and practice session, 115 university student participants completed two 30-min ATC scenarios; one under manual control and one where they supervised high degree automation (counterbalanced order). The automation performed all acceptances for aircraft entering the sector of controlled airspace, handed off all departing aircraft, and resolved all conflicts between aircraft pairs that would otherwise have violated the minimum safe separation standards (except for a single automation failure event). Participants were instructed that the automation was highly reliable, but not infallible. The first aim was to confirm that while high degree automation can reduce workload, it can also lead to increased task disengagement and fatigue when compared to manual control. Furthermore, to determine how well participants supervised the automation, the conflict detection automation failed once late in the automation scenario. This failure involved two aircraft violating the minimum lateral and vertical separation standard and being missed by the automation. We expected to find that participants would fail to detect this conflict more often, or be slower to detect it, when under automation conditions, compared to a comparable conflict event presented when under manual control. Our second aim was to investigate whether these costs of automation could be ameliorated by techniques designed to improve task engagement. Participants were assigned to one of three automation conditions, including automation with (1) no acknowledgements, (2) acknowledgments, or (3) queries. In the no acknowledgements condition, automation failure monitoring was the only task performed. In the acknowledgements condition, similar to Pop et al. (2012), participants were additionally instructed to click to acknowledge each automated action, thereby potentially improving engagement by adding an active component to an otherwise passive monitoring task. In the queries condition, participants were queried regarding the past, present, and future state of aircraft on the display. The goal was to help participants maintain an accurate mental model (aka. situation awareness) when using automation. We found that automation reduced workload, increased disengagement and fatigue, and impaired detection of a single conflict detection failure event compared to manual task performance.
Consistent with previous research, this shows that as a higher degree of automation is added to a system, it becomes less likely that the operator will notice automation failures and intervene appropriately (e.g. Pop et al., 2012). The first intervention tested whether adding automation acknowledgement requirements to the task made it easier for participants to detect and resolve a single automation failure event. The results showed that there was no difference between automation with and without acknowledgement requirements on workload, task disengagement, fatigue, and the detection of the automation failure event. The second intervention tested whether adding queries regarding aircraft on the display would improve failure detection performance. The queries intervention successfully reduced task disengagement and trended towards reducing fatigue, while workload was maintained at a level similar to that of manual control. These findings suggest that the manipulation successfully reduced some of the subjective deficits associated with the passive monitoring of automation. However, there was a significant cost to participants’ ability to detect and resolve the automation failure event relative to manual performance, where half the participants in the queries condition missed the automation failure entirely, compared to 25% in the no queries condition. Response times to detect the failure event were also considerably longer when queries were included compared to no queries. One explanation is that the queries condition may have been engaging to the point of distraction. This is supported by qualitative information provided by participants, where 40% mentioned that they found the queries to be distracting. Future studies may wish to examine the effectiveness of auditory queries instead of visual queries, potentially with verbal instead of typed responses. This may allow queries to reduce task disengagement and fatigue while potentially improving participants’ ability to intervene in response to automation failures.
25

Easwarakumar, K. S., and T. Hema. "BITS-Tree -- An Efficient Data Structure for Segment Storage and Query Processing." INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 11, no. 10 (December 5, 2013): 3108–16. http://dx.doi.org/10.24297/ijct.v11i10.2980.

Abstract:
In this paper, a new and novel data structure is proposed to dynamically insert and delete segments. Unlike standard segment trees, the proposed data structure permits insertion of a segment whose interval range lies beyond the interval range of the existing tree, that is, the interval between the minimum and maximum values of the endpoints of all the segments. Moreover, the number of nodes in the proposed tree is smaller than in the dynamic version of the standard segment tree, and it is able to answer both stabbing and range queries much faster in practice than standard segment trees.
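
As a point of reference for the two query types involved, the sketch below answers stabbing counts over a static segment set with two sorted endpoint arrays, in O(log n) per query. It is not the BITS-tree (which is dynamic and reports the segments themselves); it only shows the baseline problem being accelerated:

    import bisect

    class StabbingCounter:
        def __init__(self, segments):             # segments: list of (lo, hi) pairs
            self.starts = sorted(lo for lo, _ in segments)
            self.ends = sorted(hi for _, hi in segments)

        def count_containing(self, x):
            """Number of segments [lo, hi] with lo <= x <= hi."""
            started = bisect.bisect_right(self.starts, x)   # segments with lo <= x
            ended = bisect.bisect_left(self.ends, x)        # segments with hi <  x
            return started - ended

    sc = StabbingCounter([(1, 4), (2, 6), (5, 7)])
    print(sc.count_containing(3))                 # -> 2: [1,4] and [2,6] contain 3
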
26

AFSHANI, PEYMAN. "IMPROVED POINTER MACHINE AND I/O LOWER BOUNDS FOR SIMPLEX RANGE REPORTING AND RELATED PROBLEMS." International Journal of Computational Geometry & Applications 23, no. 04n05 (August 2013): 233–51. http://dx.doi.org/10.1142/s0218195913600054.

Abstract:
We investigate one of the fundamental areas in computational geometry: lower bounds for range reporting problems in the pointer machine and the external memory models. We develop new techniques that lead to new and improved lower bounds for simplex range reporting as well as some other geometric problems. Simplex range reporting is the problem of storing n points in d-dimensional space in a data structure such that the k points that lie inside a query simplex can be found efficiently. This is one of the fundamental and extensively studied problems in computational geometry. Currently, the best data structures for the problem achieve Q(n) + O(k) query time using [Formula: see text] space, in which the [Formula: see text] notation either hides a polylogarithmic or an n^ε factor for any constant ε > 0 (depending on the data structure and Q(n)). The best lower bound on this problem is due to Chazelle and Rosenberg, who showed that any pointer machine data structure that can answer queries in O(n^γ + k) time must use Ω(n^(d−ε−dγ)) space. Observe that this bound is a polynomial factor away from the best known data structures. In this article, we improve the space lower bound to [Formula: see text]. Not only does this bridge the gap from polynomial to sub-polynomial, it also offers a smooth trade-off curve. For instance, for polylogarithmic values of Q(n), our space lower bound almost equals Ω((n/Q(n))^d); the latter is generally believed to be the “right” bound. By a simple geometric transformation, we also improve the best lower bounds for the halfspace range reporting problem. Furthermore, we study the external memory model and offer a new simple framework for proving lower bounds in this model. We show that answering simplex range reporting queries with Q(n) + O(k/B) I/Os requires [Formula: see text] space or [Formula: see text] blocks, in which B is the block size.
27

Wang, Shuang, Yingchun Xu, Yinzhe Wang, Hezhi Liu, Qiaoqiao Zhang, Tiemin Ma, Shengnan Liu, Siyuan Zhang, and Anliang Li. "Semantic-Aware Top-k Multirequest Optimal Route." Complexity 2019 (May 15, 2019): 1–15. http://dx.doi.org/10.1155/2019/4047894.

Abstract:
In recent years, research on location-based services has received a lot of interest, in both industry and academia, due to a wide range of potential applications. Among them, one of the active topic areas is route planning on a point-of-interest (POI) network. We study top-k optimal route querying on large, general graphs where the edge weights may not satisfy the triangle inequality. The query strives to find the top-k optimal routes from a given source, which must visit a number of vertices offering all the services that the user needs. Existing POI query methods mainly focus on textual similarities and ignore the semantic understanding of keywords in spatial objects and queries. To address this problem, this paper studies the semantic similarity of POI keywords in route searching. Another problem is that most previous studies assume that a POI belongs to a single category, and do not consider that a POI may provide various kinds of services even within the same category. So, we propose a novel top-k optimal route planning algorithm based on semantic perception (KOR-SP). In KOR-SP, we define a dominance relationship between two partially explored routes which leads to a smaller search space, and we consider both the semantic similarity of keywords and the number of services of a single POI. We use an efficient label indexing technique for shortest path queries to further improve efficiency. Finally, we perform an extensive experimental evaluation on multiple real-world graphs to demonstrate that the proposed methods deliver excellent performance.
28

Li, W., S. Zlatanova, and B. Gorte. "VOXEL DATA MANAGEMENT AND ANALYSIS IN POSTGRESQL/POSTGIS UNDER DIFFERENT DATA LAYOUTS." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences VI-3/W1-2020 (November 17, 2020): 35–42. http://dx.doi.org/10.5194/isprs-annals-vi-3-w1-2020-35-2020.

Abstract:
Three-dimensional (3D) raster data (also called voxels) are an important source for 3D geo-information applications and have long been used for modelling continuous phenomena such as geological and medical objects. Our world can be represented in voxels by gridding the 3D space and specifying what each grid cell represents by attaching every voxel to a real-world object. Nature-triggered disasters can also be modelled in a volumetric representation. Unlike for point clouds, there is still a lack of broad research on how to efficiently store and manage such semantic 3D raster data. In this work, we investigate four different data layouts for voxel management in an open-source (spatial) DBMS, PostgreSQL/PostGIS, that are suitable for efficient retrieval and quick querying. Besides, a benchmark has been developed to compare various voxel data management solutions concerning functionality and performance. The main test dataset is the groups of buildings of the UNSW Kensington Campus, at 10 cm resolution. The obtained storage and query results suggest that the presented approach can be successfully used to handle voxel management, semantic and range queries on large voxel datasets.
29

Vijay Kumar, V., S. V. L. Gayathri, K. Roshini, and E. Rohith. "Improving Efficiency of Nearest Neighbor Search by Utilizing Spatial Inverted Index." International Journal of Engineering & Technology 7, no. 2.32 (May 31, 2018): 416. http://dx.doi.org/10.14419/ijet.v7i2.32.15729.

Abstract:
Spatial queries with standard strategies, such as range search or nearest neighbor search, use only the geometric properties of the objects. A recent solution for answering such difficult queries uses the IR-tree technique, and we examine it in this paper. We propose another strategy with the goal of finding the nearest neighbor of the query while reducing the search delay and improving the quality of the query result. Many search engines are used to retrieve everything from everywhere; this framework is used for fast nearest neighbor search with keywords. Previous works generally emphasize searching top-k nearest neighbors, where each node has to match the whole set of query keywords. The density of data objects over the space is not reflected. Likewise, these methods are not efficient for incremental queries. The Information Retrieval R-tree also has several disadvantages, and its output is badly affected by them; these problems must be addressed. The proposed spatial inverted index is our technique for answering this issue.
30

Suri, Subhash, and Kevin Verbeek. "On the Most Likely Voronoi Diagram and Nearest Neighbor Searching." International Journal of Computational Geometry & Applications 26, no. 03n04 (September 2016): 151–66. http://dx.doi.org/10.1142/s0218195916600025.

Abstract:
Let S be a set of stochastic sites, where each site is a tuple (p, π) consisting of a point p in d-dimensional space and a probability π of existence. Given a query point q, we define its most likely nearest neighbor (LNN) as the site with the largest probability of being q's nearest neighbor. The Most Likely Voronoi Diagram (LVD) of S is a partition of the space into regions with the same LNN. We investigate the complexity of the LVD in one dimension and show that it can have size [Formula: see text] in the worst case. We then show that under non-adversarial conditions, the size of the [Formula: see text]-dimensional LVD is significantly smaller: (1) [Formula: see text] if the input has only [Formula: see text] distinct probability values, (2) [Formula: see text] on average, and (3) [Formula: see text] under smoothed analysis. We also describe a framework for LNN search using Pareto sets, which gives a linear-space data structure and sub-linear query time in 1D for the average and smoothed analysis models as well as the worst case with a bounded number of distinct probabilities. The Pareto-set framework is also applicable to multi-dimensional LNN search via reduction to a sequence of nearest neighbor and spherical range queries.
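
The LNN definition is easy to evaluate by brute force in one dimension: site i is the nearest neighbor exactly when it exists and every strictly closer site does not. A small sketch of ours (ties in distance are ignored):

    def most_likely_nn(sites, q):
        """sites: list of (location, probability); returns (best_prob, location)."""
        best = (0.0, None)
        for i, (x, p) in enumerate(sites):
            prob = p                              # site i must exist ...
            for j, (y, py) in enumerate(sites):
                if j != i and abs(y - q) < abs(x - q):
                    prob *= 1.0 - py              # ... and every closer site must not
            if prob > best[0]:
                best = (prob, x)
        return best

    print(most_likely_nn([(1.0, 0.2), (2.0, 0.95)], q=1.2))
    # -> (0.76, 2.0): the geometrically closer site at 1.0 is not the LNN
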
31

CHAN, TIMOTHY M. "THREE PROBLEMS ABOUT DYNAMIC CONVEX HULLS." International Journal of Computational Geometry & Applications 22, no. 04 (August 2012): 341–64. http://dx.doi.org/10.1142/s0218195912600096.

Abstract:
We present three results related to dynamic convex hulls:
• A fully dynamic data structure for maintaining a set of n points in the plane so that we can find the edges of the convex hull intersecting a query line, with expected query and amortized update time O(log^(1+ε) n) for an arbitrarily small constant ε > 0. This improves the previous bound of O(log^(3/2) n).
• A fully dynamic data structure for maintaining a set of n points in the plane to support halfplane range reporting queries in O(log n + k) time with O(polylog n) expected amortized update time. A similar result holds for 3-dimensional orthogonal range reporting. For 3-dimensional halfspace range reporting, the query time increases to O(log^2 n / log log n + k).
• A semi-online dynamic data structure for maintaining a set of n line segments in the plane, so that we can decide whether a query line segment lies completely above the lower envelope, with query time O(log n) and amortized update time O(n^ε). As a corollary, we can solve the following problem in O(n^(1+ε)) time: given a triangulated terrain in 3-d of size n, identify all faces that are partially visible from a fixed viewpoint.
32

Lempp, Frieder. "A software implementation and case study application of Lempp’s propositional model of conflict resolution." International Journal of Conflict Management 28, no. 5 (October 9, 2017): 563–91. http://dx.doi.org/10.1108/ijcma-08-2016-0073.

Abstract:
Purpose The starting point of this paper is the propositional model of conflict resolution which was presented and critically discussed in Lempp (2016). Based on this model, a software implementation, called ProCON, is introduced and applied to three scenarios. The purpose of the paper is to demonstrate how ProCON can be used by negotiators and to evaluate ProCON’s practical usefulness as an automated negotiation support system. Design/methodology/approach The propositional model is implemented as a computer program. The implementation consists of an input module to enter data about a negotiation situation, an output module to generate outputs (e.g. a list of all incompatible goal pairs or a graph displaying the compatibility relations between goals) and a queries module to run queries on particular aspects of a negotiation situation. Findings The author demonstrates how ProCON can be used to capture a simple two-party, non-iterative prisoner’s dilemma, applies ProCON to a contract negotiation between a supplier and a purchaser of goods, and uses it to model the negotiations between the Iranian and six Western governments over Iran’s nuclear enrichment and stockpiling capacities. Research limitations/implications A limitation of the current version of ProCON arises from the fact that the computational complexity of the underlying algorithm is EXPTIME (i.e. the computing time required to process information in ProCON grows exponentially with respect to the number of issues fed into the program). This means that computing time can be quite long for even relatively small negotiation scenarios. Practical implications The three case studies demonstrate how ProCON can provide support for negotiators in a wide range of multi-party, multi-issue negotiations. In particular, ProCON can be used to visualise the compatibility relations between parties’ goals, generate possible outcomes and solutions and evaluate solutions regarding the extent to which they satisfy the parties’ goals. Originality/value In contrast to standard game-theoretic models of negotiation, ProCON does not require users to provide data about their preferences across their goals. Consequently, it can operate in situations where no information about the parties’ goal preferences is available. Compared to game-theoretical models, ProCON represents a more general approach of looking at possible outcomes in the context of negotiations.
33

Hong, Julian C., Jonathan Foote, Gloria Broadwater, Julie A. Sosa, Stephanie Gaillard, Laura J. Havrilesky, and Junzo P. Chino. "Data-Derived Treatment Duration Goal for Cervical Cancer: Should 8 Weeks Remain the Target in the Era of Concurrent Chemoradiation?" JCO Clinical Cancer Informatics, no. 1 (November 2017): 1–15. http://dx.doi.org/10.1200/cci.16.00072.

Abstract:
Purpose Prior studies have demonstrated the importance of treatment duration (TD) in radiation therapy (RT) for cervical cancer, with an 8-week goal based primarily on RT alone. This study uses a contemporary cohort to estimate the time point by which completion of chemoradiation therapy is most critical. Patients and Methods The National Cancer Database was queried for women with nonmetastatic cervical cancer diagnosed from 2004 to 2012 who underwent chemotherapy, external beam RT, and brachytherapy. Data-derived TD cut points for overall survival (OS) were computed by using recursive partitioning analysis with bootstrapped aggregation (bagging) and 10-fold cross-validation. Models were independently trained with 70% of the population and validated on 30% of the population by log-rank test with and without propensity matching. Multivariable Cox proportional hazards regression was performed for the entire cohort. Results In all, 7,355 women were identified with a median TD of 57 days. Bagged recursive partitioning analysis converged to a mean cut point of 66.6 days (median, 64.5 days; interquartile range, 63.5 to 68.5 days). Cross-validation yielded a cut point of 63.3 days. Both cut points differentiated OS in validation. Younger age, recent diagnosis, geographic region, nongovernment insurance, shorter distance to treatment facility, metropolitan location, lower comorbidity, squamous cell carcinoma, lower stage, negative lymph nodes, and shorter TD were independently associated with longer OS. With adjustment, TD within the mean cut point (64.9 days; hazard ratio, 0.79; 95% CI, 0.73 to 0.87) and 56 days (hazard ratio, 0.87; 95% CI, 0.80 to 0.95) were associated with longer OS. Exploratory stratification suggested increasing OS detriment beyond 64 days. Conclusion Shorter chemoradiation TD in cervical cancer is associated with longer survival, and TD should be minimized as much as possible. The data-derived cut point was distributed around 64 days, with a continuous relationship between shorter TD and longer OS.
34

Xiang, Xuehua. "Constructing the ‘tellables’." Chinese Language and Discourse 3, no. 2 (December 14, 2012): 247–72. http://dx.doi.org/10.1075/cld.3.2.05xia.

Abstract:
Based on twelve celebrity interviews in Mandarin Chinese and American English, broadcast in a range of talk-radio/television programs in the U.S. and China, the current study is a comparative analysis of interviewers’ questioning practices and the cultural underpinnings of those questions. The analysis focuses on the interviewers’ question-word interrogatives in the discourse context of multiple Turn Construction Units (multi-TCUs). The study demonstrates similar interviewing strategies between two datasets including couching queries in partial knowledge of the guest’s “celebrity-induced experiences,” and using the presupposition function of question-word interrogatives to “control” responses. Significant differences exist: The English interviews primarily reference the guest’s behaviors/activities as context for query, and frame the interviewee’s first-person accounts as particularizations of commonly shared ‘tellables.’ The Chinese interviews tend to use external reference-points, particularly the behavior and sentiments of others, thus constructing a comparative/contrastive angle from which the guest relays first-person accounts.
35

Meijers, M., and P. van Oosterom. "CLUSTERING AND INDEXING HISTORIC VESSEL MOVEMENT DATA WITH SPACE FILLING CURVES." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-4 (September 19, 2018): 417–24. http://dx.doi.org/10.5194/isprs-archives-xlii-4-417-2018.

Abstract:
This paper reports on the result of an on-going study using Space Filling Curves (SFCs) for indexing and clustering vessel movement message data (obtained via the Automated Identification System, AIS) inside a geographical Database Management System (Geo-DBMS). With AIS, vessels transmit their positions in intervals ranging from 2 seconds to 3 minutes. Every 6 minutes voyage related information is broadcast.
Relevant AIS messages contain a position, timestamp and vessel identifier. This information can be stored in a DBMS as separate columns with different types (as 2D point plus time plus identifier), or in an integrated column (as a higher dimensional 4D point which is encoded as the position on a space filling curve, which we will call the SFC-key). Subsequently, indexing based on this SFC-key column can replace separate indexes (where this one integrated index will need less storage space than separate indexes). Moreover, this integrated index allows a good clustering (physical ordering of the table). Also, in an approach with separate indexes for location, time and object identifier, the query optimizer inside a DBMS has to estimate which index is most selective for a given query. It is not possible to use two indexes at the same time, e.g. in the case of a space-time query. An approach with one multi-dimensional integrated index does not have this problem. It results in faster query responses when specifying multiple selection criteria, i.e. both search geometry and time interval.
We explain the steps needed to make this SFC approach available fully inside a DBMS (to avoid expensive data transfer to external programs during use). The SFC approach makes it possible to better cluster the (spatio-temporal) data compared to an approach with separate indexes. Moreover, we show experiments (with 723,853,597 AIS position report messages spanning 3 months, Sep–Dec 2016, using data for Europe, both on-sea and inland water ways) to compare an approach based on one multi-dimensional integrated index (using an SFC) with a non-integrated approach. We analyze loading time (including SFC encoding) and storage requirements, together with the speed of execution of queries and granularity of answers.
The conclusion is that query execution time for space-time queries where both dimensions are selective is better with the integrated SFC approach than with the non-integrated approach (typically by a factor of 2–6). Also, the SFC approach saves considerably on storage space (less space needed for indexes). Lastly, we propose some future improvements to get better query performance using the SFC approach (e.g. IOT, range-glueing and nD-histogram).
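
The integrated-key idea can be demonstrated with the simplest SFC, the Morton (Z-order) curve: quantize every dimension and interleave the bits, so that a single B-tree index over the key clusters records that are close in space and time at once. The resolutions below are invented for illustration; the paper's production curve and schema are not reproduced here:

    def morton3(x, y, t, bits=16):
        """Interleave the low `bits` bits of three non-negative ints into one key."""
        key = 0
        for i in range(bits):
            key |= ((x >> i) & 1) << (3 * i)
            key |= ((y >> i) & 1) << (3 * i + 1)
            key |= ((t >> i) & 1) << (3 * i + 2)
        return key

    def quantize(value, lo, hi, bits=16):
        """Map a float in [lo, hi] onto an integer grid with 2^bits cells."""
        return round((value - lo) / (hi - lo) * ((1 << bits) - 1))

    # longitude, latitude and second-of-day folded into one indexable SFC-key
    key = morton3(quantize(4.48, -180, 180), quantize(51.9, -90, 90),
                  quantize(3600, 0, 86400))
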
36

Sieg, CH, and HA Wright. "The Role of Prescribed Burning in Regenerating Quercus Macrocarpa and Associated Woody Plants in Stringer Woodlands in the Black Hills, South Dakota." International Journal of Wildland Fire 6, no. 1 (1996): 21. http://dx.doi.org/10.1071/wf9960021.

Abstract:
Throughout the range of Quercus macrocarpa, fire historically played an important role in maintaining Quercus stands. However, little is known about the role of fire in maintaining stringer Quercus stands on the western edge of its distribution. This research suggests that prescribed burning could be used to rejuvenate woody plants in Quercus woodlands. Relative to unburned areas, there were more (p < 0.1) Quercus, Fraxinus pennsylvanica and Acer negundo sprouts following spring burning. However, Quercus seedling density did not increase (p = 0.22) relative to unburned sites, and changes in the density of woody understory species in response to burning were erratic. Dormant season burning has some appeal from a fire control point of view and because carbohydrate reserves in woody plants are high during this time. However, if the objective is to regenerate woody plants and/or mimic historical fires, prescriptions should be set to achieve high intensities.
37

Belussi, Alberto, Sara Migliorini, and Ahmed Eldawy. "Skewness-Based Partitioning in SpatialHadoop." ISPRS International Journal of Geo-Information 9, no. 4 (March 27, 2020): 201. http://dx.doi.org/10.3390/ijgi9040201.

Abstract:
In recent years, several extensions of the Hadoop system have been proposed for dealing with spatial data. SpatialHadoop belongs to this group of projects and includes some MapReduce implementations of spatial operators, like range queries and spatial join. The MapReduce paradigm is based on the fundamental principle that a task can be parallelized by partitioning data into chunks and performing the same operation on them (map phase), eventually combining the partial results at the end (reduce phase). Thus, the applied partitioning technique can tremendously affect the performance of a parallel execution, since it is the key point for obtaining balanced map tasks and exploiting the parallelism as much as possible. When uniformly distributed datasets are considered, this goal can be easily obtained by using a regular grid covering the whole reference space for partitioning the geometries of the input dataset; conversely, with skewed datasets, this might not be the right choice and other techniques have to be applied. For instance, SpatialHadoop can produce a global index also by means of a Quadtree-based grid or an R-tree-based grid, which in turn are more expensive index structures to build. This paper proposes a technique based on both a box counting function and a heuristic, rooted in theoretical properties and experimental observations, for detecting the degree of skewness of an input spatial dataset and then deciding which partitioning technique to apply in order to improve as much as possible the performance of subsequent operations. Experiments on both synthetic and real datasets are presented to confirm the effectiveness of the proposed approach.
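
The box-counting ingredient is compact to state: overlay a grid of cell size r, count points per cell, and compute the second-order box count BC(r) = sum_i n_i^2; for a fixed number of points, larger values indicate more skew. The sketch below shows a single scale only, whereas the paper's heuristic fits the exponent of BC(r) across scales:

    import math
    from collections import Counter

    def box_count_2(points, r):
        """Second-order box count BC(r) = sum over grid cells of n_i^2."""
        counts = Counter((math.floor(x / r), math.floor(y / r)) for x, y in points)
        return sum(c * c for c in counts.values())

    uniform = [(i % 10 + 0.5, i // 10 + 0.5) for i in range(100)]        # spread out
    clustered = [(0.1 * (i % 10), 0.1 * (i // 10)) for i in range(100)]  # one blob
    print(box_count_2(uniform, r=2.0), box_count_2(clustered, r=2.0))    # 400 10000
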
APA, Harvard, Vancouver, ISO, and other styles
38

Lynen, Simon, Bernhard Zeisl, Dror Aiger, Michael Bosse, Joel Hesch, Marc Pollefeys, Roland Siegwart, and Torsten Sattler. "Large-scale, real-time visual–inertial localization revisited." International Journal of Robotics Research 39, no. 9 (July 7, 2020): 1061–84. http://dx.doi.org/10.1177/0278364920931151.

Full text
Abstract:
The overarching goals in image-based localization are scale, robustness, and speed. In recent years, approaches based on local features and sparse 3D point-cloud models have both dominated the benchmarks and seen successful real-world deployment. They enable applications ranging from robot navigation, autonomous driving, virtual and augmented reality to device geo-localization. Recently, end-to-end learned localization approaches have been proposed which show promising results on small-scale datasets. However, the positioning accuracy, scalability, latency, and compute and storage requirements of these approaches remain open challenges. We aim to deploy localization at a global scale where one thus relies on methods using local features and sparse 3D models. Our approach spans from offline model building to real-time client-side pose fusion. The system compresses the appearance and geometry of the scene for efficient model storage and lookup leading to scalability beyond what has been demonstrated previously. It allows for low-latency localization queries and efficient fusion to be run in real-time on mobile platforms by combining server-side localization with real-time visual–inertial-based camera pose tracking. In order to further improve efficiency, we leverage a combination of priors, nearest-neighbor search, geometric match culling, and a cascaded pose candidate refinement step. This combination outperforms previous approaches when working with large-scale models and allows deployment at unprecedented scale. We demonstrate the effectiveness of our approach on a proof-of-concept system localizing 2.5 million images against models from four cities in different regions of the world achieving query latencies in the 200 ms range.
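One ingredient named in the abstract above is geometric match culling during nearest-neighbor search. The sketch below shows a standard ratio-test culling step (Lowe's ratio test) as a plausible stand-in; the paper's actual culling criteria, thresholds, and data structures are not specified in the abstract, and this brute-force version is for illustration only.

```python
# Sketch: culling ambiguous feature matches with a ratio test, one common
# step in the kind of matching cascade described above. Assumption: the
# 0.8 threshold and brute-force search are illustrative, not the system's.
import numpy as np

def ratio_test_matches(query_desc, model_desc, ratio=0.8):
    """For each query descriptor, accept its nearest model descriptor only
    if it is clearly closer than the second nearest (needs >= 2 model rows)."""
    matches = []
    for i, q in enumerate(query_desc):
        d = np.linalg.norm(model_desc - q, axis=1)
        j1, j2 = np.argsort(d)[:2]
        if d[j1] < ratio * d[j2]:
            matches.append((i, j1))
    return matches

q = np.random.rand(5, 8)   # 5 query descriptors, 8-D
m = np.random.rand(50, 8)  # 50 model descriptors
print(ratio_test_matches(q, m))
```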
APA, Harvard, Vancouver, ISO, and other styles
39

Singh, Dharmendra, Sarnam Singh, VR Lekshmi, Sukumar Dutta, Dr Nazma, Md Shahjahan Ali, and Ashfaque Ahmed. "Species type and forest health assessment via hyperspectral remote sensing in the part of Himalayan range, India." Dhaka University Journal of Biological Sciences 23, no. 2 (August 20, 2014): 135–46. http://dx.doi.org/10.3329/dujbs.v23i2.20093.

Full text
Abstract:
Ground spectra were collected with an analytical spectral device for dominant species in the Rajpur and Mussoorie hills of the Lesser Himalayas of India for mapping dominant plant species. Hyperspectral remote sensing data (EO-1) were used, with the Spectral Angle Mapper, for classification of temperate species such as Quercus leucotrichophora, Cedrus deodara and Thuja orientalis, subtropical species like Pinus roxburghii, and tropical species such as Shorea robusta, Lantana camara, etc. A total of 14 species were mapped, along with 4 other land use/cover classes (agriculture, settlement, main river course, barren land). Map accuracy was 77%, assessed on the basis of 66 ground-truth ground control points. The exotic species Lantana camara was mapped in the area, found to be distributed from the tropical to the lower temperate regions, and was shown from the Hyperion data to affect the health of neighbouring species. The largest impact was observed on Shorea robusta, whose health distribution map showed 48% healthy and 52% less healthy area.
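The Spectral Angle Mapper used in the study above assigns each pixel to the reference spectrum with the smallest angle between the two spectra. The sketch below shows this rule; the band count and reference spectra values are invented placeholders, not data from the study.

```python
# Sketch of the Spectral Angle Mapper (SAM) classification rule.
# Assumption: 4 bands and the reference spectra values are made up.
import numpy as np

def spectral_angle(pixel: np.ndarray, reference: np.ndarray) -> float:
    """Angle (radians) between a pixel spectrum and a reference spectrum;
    smaller angles mean more similar spectra, independent of illumination."""
    cos = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def classify(pixel, references):
    """Assign the pixel to the reference spectrum with the smallest angle."""
    return min(references, key=lambda name: spectral_angle(pixel, references[name]))

refs = {"Quercus leucotrichophora": np.array([0.12, 0.30, 0.45, 0.40]),
        "Pinus roxburghii": np.array([0.10, 0.22, 0.50, 0.47])}
print(classify(np.array([0.11, 0.28, 0.46, 0.41]), refs))
```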
APA, Harvard, Vancouver, ISO, and other styles
40

Waggoner, Benjamin M., and Mary F. Poteet. "Unusual oak leaf galls from the middle Miocene of northwestern Nevada." Journal of Paleontology 70, no. 6 (November 1996): 1080–84. http://dx.doi.org/10.1017/s0022336000038762.

Full text
Abstract:
Distinctive galls have been found on a fossil oak leaf from the Miocene Gillam Springs Flora of Washoe County, Nevada. The described galls are located on the leaf surface of Quercus hannibali Dorf, an analogue of the modern species Q. chrysolepis Liebmann. Similar galls are found on extant Quercus, but the fossils seem distinctive enough to warrant description as Antronoides schorni new genus and species. The occurrence of Antronoides schorni coincides with a rapid episode of change from a mesic to a more xeric habitat, with a concomitant shift from an oak-dominated to a conifer-dominated paleoflora. Recent work suggests that speciation and radiation of galling insects is highest in xeric environments, possibly due to decreases in rates of parasitism and disease. This pattern has been documented for modern galling insects and fits the qualitative fossil evidence we present. These galls also support the hypothesis that cynipids in the Antron group originated in Nevada or eastern California and migrated from their point of origin to their current range in the Sierra Nevada and Coast Ranges.
APA, Harvard, Vancouver, ISO, and other styles
41

Platonov, V., Aleksandr Hadarcev, G. Suhih, V. Frankevich, M. Volochaeva, and V. Dunaev. "CHEMICAL COMPOSITION OF ORGANIC SUBSTANCE OF OAK BAR ORDINARY (EMERGENCY) (QUERCUS ROBUR L, BEECH FAMILY - FAGACCAC) (MESSAGE III - CHLOROFORM EXTRACT)." Clinical Medicine and Pharmacology 6, no. 1 (June 18, 2020): 53–56. http://dx.doi.org/10.12737/2409-3750-2020-6-1-53-56.

Full text
Abstract:
This report presents the results of chromatography–mass spectrometry of a chloroform extract obtained after preliminary sequential exhaustive extraction of the bark of the pedunculate oak (Quercus robur) with n-hexane and toluene. The purpose of the study is to significantly expand the range of compounds known to make up the organic matter of pedunculate oak bark. Materials and research methods: the dried raw material is ground to a powder in a porcelain laboratory mill and then subjected to exhaustive sequential extraction with n-hexane and toluene at their boiling points. The solid residue is then dried to constant weight in a vacuum oven, after which extraction is carried out with chloroform at its boiling point. Extraction with chloroform was considered complete when the refractive index returned to its initial value, after which the chloroform was distilled off in a vacuum rotary evaporator to give a dark brown oily residue. We conclude that the pharmacological effect of the chloroform extract of pedunculate oak bark will be determined by the presence of significant amounts of sterols and of hydrocarbons, dominated by arenes, alkenes and cycloalkanes, as well as esters formed mainly by a dicarboxylic acid (oxalic acid).
APA, Harvard, Vancouver, ISO, and other styles
42

Coretti, Silvia, Filippo Rumi, and Americo Cicchetti. "The Social Cost of Major Depression. A Systematic Review." Review of European Studies 11, no. 1 (January 25, 2019): 73. http://dx.doi.org/10.5539/res.v11n1p73.

Full text
Abstract:
Major depression (MD) is a major cause of disability and a significant public health problem due to strong physical and mental impairment, possible complications for patients (including suicide), and serious social and working problems for the patient and his/her family. We provide an overview of the social cost of major depression worldwide. We conducted a systematic literature review. Two search engines were queried. Screening of records and summary of evidence were performed blindly by two researchers. The review was conducted in accordance with the standards of the PRISMA guidelines. Twenty studies met the inclusion criteria. Despite the heterogeneity in terms of population, setting and estimation techniques, the studies showed that the largest share of the burden of disease is represented by indirect costs. Among direct healthcare costs, inpatient care represents the most significant item, followed by outpatient care. The average total direct cost of depression ranges between €508 and €24 069, depending on the jurisdiction where the analysis was run and the range of cost items included. Indirect costs range between €1963 and €27 364. Evidence on the cost of MD in some countries is currently lacking. A deeper understanding of the drivers of the economic burden of disease is a crucial starting point for studies concerned with the cost-effectiveness of new treatment strategies.
APA, Harvard, Vancouver, ISO, and other styles
43

Srikanth, S. R., and M. K. Madialagan. "Concise Range Queries." International Journal on Information Sciences and Computing 7, no. 1 (2013): 35–46. http://dx.doi.org/10.18000/ijisac.50132.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Grygoruk, Dorota. "Root vitality of Fagus sylvatica L., Quercus petraea Liebl. and Acer pseudoplatanus L. in mature mixed forest stand." Folia Forestalia Polonica 58, no. 2 (June 1, 2016): 55–61. http://dx.doi.org/10.1515/ffp-2016-0006.

Full text
Abstract:
The main task of the present study was to investigate the root vitality of common beech Fagus sylvatica L., sessile oak Quercus petraea Liebl. and sycamore maple Acer pseudoplatanus L. under optimal growth conditions in south-western Poland. The study was carried out in a 130-year-old mixed stand located within the natural range of the studied tree species. The density of roots (g/100 cm³ of soil) and the biomass of fine roots (g/m²) in the topsoil layers (0–5 cm, 5–15 cm) were determined in tree biogroups of the same species. The mean total root density ranged from 0.248 to 0.417 g/100 cm³ in the 0–5 cm soil layer and decreased in the deeper soil layer (5–15 cm). No statistically significant differences in total root density were found between the tree biogroups in the topsoil layers. Diversity of fine root biomass was comparable across the tree biogroups (H′ = 1.5), but common beech showed more intensive growth of fine roots in the 0–15 cm topsoil compared to sessile oak and sycamore maple. The results of the study point to the stability of the multi-species structure of the mixed stand studied and, consequently, to the ability of beech, sessile oak and sycamore maple to coexist in mixed stands within the area of the natural range of these species.
APA, Harvard, Vancouver, ISO, and other styles
45

Falsberg, Elizabeth. "Geoffrey Hughes, A history of English words. Oxford (UK) & Malden (MA): Blackwell, 2000. Pp. v, 430. Pb $29.95." Language in Society 30, no. 2 (April 2001): 316–19. http://dx.doi.org/10.1017/s004740450136205x.

Full text
Abstract:
In this ambitious book, Hughes extends and broadens the projects of his two earlier books, Words in time: A social history of the English vocabulary (1988), and Swearing: A social history of foul language, oaths and profanity in English (1991). The contents range from summaries of events in the history of English to specialized lexical studies; the work's real strength, as with Hughes's two earlier books, lies in its investigations of the English vocabulary. While moving in a generally diachronic fashion from the origins of English to present-day Englishes, Hughes devotes much of the book to snapshots of the English lexis at various points and in various registers, queries as to how these configurations came about, how they affect other areas of the language, and why they matter to speakers. The motivation for the book, and its great accomplishment, is to show that the English lexis is a rich historical repository. Chronological discussions can be found elsewhere, but the treatments of moments in the English lexis provided by Hughes form a special and engaging contribution. As the book jacket states, Hughes interrogates the vocabulary “as an indicator of social change and as a symbol reflecting different social dynamics between speech communities and models of dominance, cohabitation, colonialism, and globalization.”
APA, Harvard, Vancouver, ISO, and other styles
46

Wolfe, Yanika, YouYou Duanmu, Viveta Lobo, Michael Kohn, and Kenton Anderson. "Utilization of Point-of-care Echocardiography in Cardiac Arrest: A Cross-sectional Pilot Study." Western Journal of Emergency Medicine 22, no. 4 (July 20, 2021): 803–9. http://dx.doi.org/10.5811/westjem.2021.4.50205.

Full text
Abstract:
Introduction: Point-of-care (POC) echocardiography (echo) is a useful adjunct in the management of cardiac arrest. However, the practice pattern of POC echo utilization during management of cardiac arrest cases among emergency physicians (EP) is unclear. In this pilot study we aimed to characterize the utilization of POC echo and the potential barriers to its use in the management of cardiac arrest among EPs. Methods: This was a cross-sectional survey of attending EPs who completed an electronic questionnaire composed of demographic variables (age, gender, year of residency graduation, practice setting, and ultrasound training) and POC echo utilization questions. The first question queried participants regarding frequency of POC echo use during the management of cardiac arrest. Branching logic then presented participants with a series of subsequent questions regarding utilization and barriers to use based on their responses. Results: A total of 155 EPs participated in the survey, with a median age of 39 years (interquartile range 31-67). Regarding POC echo utilization, participants responded that they always (66%), sometimes (30%), or never (4.5%) use POC echo during cardiac arrest cases. Among participants who never use POC echo, 86% reported a lack of training, competency, or credentialing as a barrier to use. Among participants who either never or sometimes use POC echo, the leading barrier to use (58%) reported was a need for improved competency. Utilization was not different among participants of different age groups (P = 0.229) or different residency graduation dates (P = 0.229). POC echo utilization was higher among participants who received ultrasound training during residency (P = 0.006) or had completed ultrasound fellowship training (P < 0.001) but did not differ by gender (P = 0.232) or practice setting (P = 0.231). Conclusion: Only a small minority of EPs never use point-of-care echocardiography during the management of cardiac arrest. Lack of training, competency, or credentialing is reported as the leading barrier to use among those who do not use POC echo during cardiac arrest cases. Participants who do not always use ultrasound are less likely to have received ultrasound training during residency.
APA, Harvard, Vancouver, ISO, and other styles
47

Redmann, Andrew J., Sonia N. Yuen, Douglas VonAllmen, Adam Rothstein, Alice Tang, Joseph Breen, and Ryan Collar. "Does Surgical Volume and Complexity Affect Cost and Mortality in Otolaryngology–Head and Neck Surgery?" Otolaryngology–Head and Neck Surgery 161, no. 4 (July 16, 2019): 629–34. http://dx.doi.org/10.1177/0194599819861524.

Full text
Abstract:
Objectives (1) To evaluate whether admission volume and case complexity are associated with mortality rates and (2) evaluate whether admission volume and case complexity are associated with cost per admission. Study Design Retrospective case series. Setting Tertiary academic hospital. Subjects and Methods The Vizient database was queried for inpatient admissions between July 2015 and March 2017 to an otolaryngology–head and neck surgery service. Data collected included admission volume, length of stay, intensive care unit (ICU) status, complication rates, case mix index (CMI), and cost data. Regression analysis was performed to evaluate the relationship between cost, CMI, admission volume, and mortality rate. Results In total, 338 hospitals provided data for analysis. Mean hospital admission volume was 182 (range, 1-1284), and mean CMI was 1.69 (range, 0.66-6.0). A 1-point increase in hospital average CMI was associated with a 40% increase in odds for high mortality. Admission volume was associated with lower mortality, with 1% lower odds for each additional case. A 1-point increase in CMI produces a $4624 higher total cost per case (95% confidence interval, $4550-$4700), and for each additional case, total cost per case increased by $6. Conclusion For otolaryngology inpatient services at US academic medical centers, increasing admission volume is associated with decreased mortality rates, even after controlling for CMI and complication rates. Increasing CMI levels have an anticipated correlation with higher total costs per case, but admission volume is unexpectedly associated with a significant increase in average cost per case.
APA, Harvard, Vancouver, ISO, and other styles
48

Kuzsella, László, and Imre Szabó. "The Effect of the Compression on the Mechanical Properties of Wood Material." Materials Science Forum 537-538 (February 2007): 41–46. http://dx.doi.org/10.4028/www.scientific.net/msf.537-538.41.

Full text
Abstract:
Wood is one of the most favoured structural materials; it appears in all fields of everyday life. It is difficult to name an application where wood is not used, owing to its low price, its availability and, quite simply, its beauty. Alongside the wide range of traditional processing technologies, a new process has appeared that changes the properties of the material and opens many new applications for this traditional material: the compression of structural wood. This publication deals with the effect of compression on the mechanical properties of two hardwoods (beech: Fagus sylvatica; oak: Quercus), studied by means of the three-point bending test and the Charpy impact test.
APA, Harvard, Vancouver, ISO, and other styles
49

Marton, Christine F. "Salton and Buckley’s Landmark Research in Experimental Text Information Retrieval." Evidence Based Library and Information Practice 6, no. 4 (December 15, 2011): 169. http://dx.doi.org/10.18438/b87032.

Full text
Abstract:
Objectives – To compare the performance of the vector space model and the probabilistic weighting model of relevance feedback for the overall purpose of determining the most useful relevance feedback procedures. The amount of improvement that can be obtained from searching several test document collections with only one feedback iteration of each relevance feedback model was measured. Design – The experimental design consisted of 72 different tests: 2 different relevance feedback methods, each with 6 permutations, on 6 test document collections of various sizes. A residual collection method was utilized to ascertain the “true advantage provided by the relevance feedback process.” (Salton & Buckley, 1990, p. 293) Setting – Department of Computer Science at Cornell University. Subjects – Six test document collections. Methods – Relevance feedback is an effective technique for query modification that provides significant improvement in search performance. Relevance feedback entails both “term reweighting,” the modification of term weights based on term use in retrieved relevant and non-relevant documents, and “query expansion,” which is the addition of new terms from relevant documents retrieved (Harman, 1992). Salton and Buckley (1990) evaluated two established relevance feedback models based on the vector space model (a spatial model) and the probabilistic model, respectively. Harman (1992) describes the two key differences between these competing models of relevance feedback. [The vector space model merges] document vectors and original query vectors. This automatically reweights query terms by adding the weights from the actual occurrence of those query terms in the relevant documents, and subtracting the weights of those terms occurring in the non-relevant documents. Queries are automatically expanded by adding all the terms not in the original query that are in the relevant documents and non-relevant documents. They are expanded using both positive and negative weights based on whether the terms are coming from relevant or non-relevant documents. Yet, no new terms are actually added with negative weights; the contribution of non-relevant document terms is to modify the weighting of new terms coming from relevant documents. . . . The probabilistic model . . . is based on the distribution of query terms in relevant and non-relevant documents. This is expressed as a term weight, with the rank of each retrieved document then being the sum of the term weights for terms contained in the document that match query terms. (pp. 1-2) Second, while the vector space model “has an inherent relationship between term reweighting and query expansion” (p. 2), the probabilistic model does not. Thus, query expansion is optional, but given its usefulness, various schemes have been proposed for expanding queries using terms from retrieved relevant documents. In the Salton and Buckley study, 3 versions of each of the two relevance feedback methods were utilized, with two different levels of query expansion, and run on 6 different test collections. More specifically, they queried test collections that ranged in size from small to large, and that represented different domains of knowledge, including medicine and engineering, with 72 experimental runs in total. Salton and Buckley examined 3 variants of the vector space model, the second and third of which were based on the first. The first model was the classic Rocchio algorithm (1971), which uses reduced document weights to modify the queries.
The second model was the “Ide regular” algorithm, which reweights both relevant and non-relevant query terms (Ide, 1971). The third model was the “Ide dec-hi” algorithm, which reweights all identified relevant items but only one retrieved non-relevant item, the one retrieved first in the initial set of search results (Ide & Salton, 1971). As well, 3 variants of the probabilistic model developed by S.E. Robertson (Robertson, 1986; Robertson & Sparck Jones, 1976; Robertson, van Rijsbergen, & Porter, 1981; Yu, Buckley, Lam, & Salton, 1983) were examined: the conventional probabilistic approach with a 0.5 adjustment factor, the adjusted probabilistic derivation with a different adjustment factor, and finally an adjusted derivation with enhanced query term weights. The 6 vector space model and probabilistic model relevance feedback techniques are described in Table 3 (p. 293). The performance of the first-iteration feedback searches was compared solely with the results of the initial searches performed with the original query statements. The first 15 documents retrieved from the initial searches were judged for relevance by the researchers, and the terms contained in these relevant and non-relevant retrieved items were used to construct the feedback queries. The authors utilized the residual collection system, which entails the removal of all items previously seen by the searcher (whether relevant or not) and the evaluation of both the initial and any subsequent queries on the reduced collection only. Both multi-valued (partial) and binary weights (1 = relevant, 0 = non-relevant) were used on the document terms (Table 6, p. 296). Also, two types of query expansion were applied: expansion by the most common terms and expansion by all terms (Table 4, p. 294). While using no query expansion and relying solely on reweighting relevant and non-relevant query terms is possible, this option was not examined. Three measures were calculated to assess relative relevance feedback performance: the rank order (recall-precision value), search precision (with respect to the average precision at 3 particular recall points of 0.75, 0.50, and 0.25), and the percentage improvement in the 3-point precision between the feedback and original searches. Main Results – The best results are produced by the same relevance feedback models for all test collections examined, and conversely, the poorest results are produced by the same relevance feedback models (Tables 4, 5, and 6, pp. 294-296). In other words, all 3 relevance feedback algorithms based on the vector space retrieval model outperformed the 3 relevance feedback algorithms based on the probabilistic retrieval model, with the best relevance feedback results obtained for the “Ide dec-hi” model. This finding suggests that improvements in relevance from term reweighting are attributable primarily to reweighting relevant terms. However, the probabilistic method with adjusted derivation, specifically considering the extra weight assignments for query terms, was almost as effective as the vector space model relevance feedback algorithms. Paired comparisons between full query expansion (all terms from the initial search are utilized in the feedback query) and partial query expansion by the most common terms from the relevant items demonstrate that full expansion is better; however, the difference between expansion methods is small.
Conclusions – Relevance feedback methods that reformulate the initial query by reweighting existing query terms and adding new terms (query expansion) can greatly improve the relevance of search results after only one feedback iteration. The amount of improvement achieved was highly variable across the 6 test collections, from 50% to 150% in the 3-point precision. Other variables thought to influence relevance feedback performance were initial query length, characteristics of the collection, including the specificity of the terms in the collection, the size of the collection (number of documents), and average term frequency in documents. The authors recommend that the relevance feedback process be incorporated into operational text retrieval systems.
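The Rocchio-style update evaluated in the study above has a compact form. The sketch below is a generic modern formulation with illustrative alpha/beta/gamma weights, not the exact parameter settings or term-weighting details used in Salton and Buckley's experiments; the final clipping step mirrors the observation that no terms are added with negative weights.

```python
# Sketch of a Rocchio-style relevance feedback update.
# Assumption: alpha/beta/gamma values are common illustrative defaults,
# not the settings used in Salton and Buckley's 72 experimental runs.
import numpy as np

def rocchio(query: np.ndarray, relevant: list, nonrelevant: list,
            alpha=1.0, beta=0.75, gamma=0.15) -> np.ndarray:
    """Reweight the original query vector toward the centroid of relevant
    documents and away from the centroid of non-relevant documents."""
    q = alpha * query
    if relevant:
        q += beta * np.mean(relevant, axis=0)
    if nonrelevant:
        q -= gamma * np.mean(nonrelevant, axis=0)
    # Clip negatives: terms are never added to the query with negative weight.
    return np.maximum(q, 0.0)

q0 = np.array([1.0, 0.0, 0.5])
rel = [np.array([0.9, 0.4, 0.1])]
non = [np.array([0.0, 0.8, 0.0])]
print(rocchio(q0, rel, non))
```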
APA, Harvard, Vancouver, ISO, and other styles
50

Gawrychowski, Paweł, Seungbum Jo, Shay Mozes, and Oren Weimann. "Compressed range minimum queries." Theoretical Computer Science 812 (April 2020): 39–48. http://dx.doi.org/10.1016/j.tcs.2019.07.002.

Full text
APA, Harvard, Vancouver, ISO, and other styles