Journal articles on the topic 'Approximate database'

Consult the top 50 journal articles for your research on the topic 'Approximate database.'

1

Saharia, Aditya N., and Terence M. Barron. "Approximate dependencies in database systems." Decision Support Systems 13, no. 3-4 (March 1995): 335–47. http://dx.doi.org/10.1016/0167-9236(93)e0049-j.

2

Kläbe, Steffen, Kai-Uwe Sattler, and Stephan Baumann. "PatchIndex: exploiting approximate constraints in distributed databases." Distributed and Parallel Databases 39, no. 3 (March 6, 2021): 833–53. http://dx.doi.org/10.1007/s10619-021-07326-1.

Abstract:
Cloud data warehouse systems lower the barrier to access data analytics. These applications often lack a database administrator and integrate data from various sources, potentially leading to data not satisfying strict constraints. Automatic schema optimization in self-managing databases is difficult in these environments without prior data cleaning steps. In this paper, we focus on constraint discovery as a subtask of schema optimization. Perfect constraints might not exist in these unclean datasets due to a small set of values violating the constraints. Therefore, we introduce the concept of a generic PatchIndex structure, which handles exceptions to given constraints and enables database systems to define these approximate constraints. We apply the concept to the environment of distributed databases, providing parallel index creation approaches and optimization techniques for parallel queries using PatchIndexes. Furthermore, we describe heuristics for automatic discovery of PatchIndex candidate columns and prove the performance benefit of using PatchIndexes in our evaluation.
3

TamilSelvi, M., and R. Renuga. "Approximate String Search in Large Spatial Database." Procedia Computer Science 47 (2015): 92–100. http://dx.doi.org/10.1016/j.procs.2015.03.187.

4

Intan, Rolly, and Masao Mukaidono. "Approximate Data Querying in Fuzzy Relational Database." Journal of Advanced Computational Intelligence and Intelligent Informatics 6, no. 1 (February 20, 2002): 33–40. http://dx.doi.org/10.20965/jaciii.2002.p0033.

Abstract:
Fuzzy relational databases were proposed for dealing with imprecise data or fuzzy information in a relational database. In order to represent the similarity between two imprecise data values more realistically, the fuzzy similarity relation is weakened to a weak fuzzy similarity relation, of which the fuzzy conditional probability relation (FCPR, for short) is regarded as a concrete example. In this paper, approximate data querying induced by the FCPR in the presence of the fuzzy relational database is discussed. The application of approximate data querying to provide a fuzzy query relation is presented in two frameworks, namely dependent inputs and independent inputs. Finally, in relation to the join operator, an approximate join of two or more fuzzy query relations is given for the purpose of extending the query system.
5

Mazlack, Lawrence J. "Approximate reasoning applied to unsupervised database mining." International Journal of Intelligent Systems 12, no. 5 (May 1997): 391–414. http://dx.doi.org/10.1002/(sici)1098-111x(199705)12:5<391::aid-int3>3.0.co;2-i.

6

Valiullin, Timur, Zhexue Huang, Chenghao Wei, Jianfei Yin, Dingming Wu, and Luliia Egorova. "A new approximate method for mining frequent itemsets from big data." Computer Science and Information Systems, no. 00 (2020): 15. http://dx.doi.org/10.2298/csis200124015v.

Abstract:
Mining frequent itemsets in transaction databases is an important task in many applications. It becomes more challenging when dealing with a large transaction database because traditional algorithms are not scalable due to memory limits. In this paper, we propose a new approach for approximate mining of frequent itemsets in a big transaction database. Our approach is suitable for mining big transaction databases since it produces approximate frequent itemsets from a subset of the entire database and can be implemented in a distributed environment. Our algorithm efficiently produces highly accurate results; however, it misses some true frequent itemsets. To address this problem and reduce the number of false-negative frequent itemsets, we introduce an additional parameter to the algorithm to discover most of the frequent itemsets contained in the entire data set. In this article, we show an empirical evaluation of the results of the proposed approach.
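As a rough illustration of the sampling idea described in this abstract, the following sketch counts itemset support on a random subset of the transactions and lowers the support threshold by a slack factor to reduce false negatives. It is not the authors' algorithm; the function, its parameters, and the restriction to itemsets of size 1 and 2 are illustrative assumptions.

```python
import random
from collections import Counter
from itertools import combinations


def approx_frequent_itemsets(transactions, min_support,
                             sample_frac=0.5, slack=0.9, seed=0):
    """Estimate frequent itemsets from a random sample of transactions.

    Hypothetical sketch: `slack` < 1 lowers the support threshold on the
    sample, mirroring the extra parameter the abstract describes for
    reducing false negatives. Only itemsets of size 1 and 2 are counted,
    purely to keep the example short.
    """
    rng = random.Random(seed)
    n = max(1, int(len(transactions) * sample_frac))
    sample = rng.sample(transactions, n)

    counts = Counter()
    for t in sample:
        items = sorted(set(t))
        for size in (1, 2):
            for itemset in combinations(items, size):
                counts[itemset] += 1

    cutoff = min_support * slack * n  # lowered threshold on the sample
    return {itemset for itemset, c in counts.items() if c >= cutoff}
```

With `sample_frac=1.0` and `slack=1.0` the sketch degenerates to exact support counting, which is convenient for sanity checks.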
7

Huh, Soon-Young, and Jung-Whan Lee. "Providing Approximate Answers Using a Knowledge Abstraction Database." Journal of Database Management 12, no. 2 (April 2001): 14–24. http://dx.doi.org/10.4018/jdm.2001040102.

8

Breitinger, Frank, Harald Baier, and Douglas White. "On the database lookup problem of approximate matching." Digital Investigation 11 (May 2014): S1–S9. http://dx.doi.org/10.1016/j.diin.2014.03.001.

9

Kum, Hye-Chung, and Joong-Hyuk Chang. "Mining Approximate Sequential Patterns in a Large Sequence Database." KIPS Transactions:PartD 13D, no. 2 (April 1, 2006): 199–206. http://dx.doi.org/10.3745/kipstd.2006.13d.2.199.

10

Fisher, Danyel, Steven M. Drucker, and A. Christian König. "Exploratory Visualization Involving Incremental, Approximate Database Queries and Uncertainty." IEEE Computer Graphics and Applications 32, no. 4 (July 2012): 55–62. http://dx.doi.org/10.1109/mcg.2012.48.

11

Higgins, Desmond G., and Peter Stoehr. "EMBLSCAN: fast approximate DNA database searches on compact disc." Bioinformatics 8, no. 2 (1992): 137–39. http://dx.doi.org/10.1093/bioinformatics/8.2.137.

12

SELIGMAN, LEONARD J., and LARRY KERSCHBERG. "AN ACTIVE DATABASE APPROACH TO CONSISTENCY MANAGEMENT IN DATA- AND KNOWLEDGE-BASED SYSTEMS." International Journal of Cooperative Information Systems 02, no. 02 (June 1993): 187–200. http://dx.doi.org/10.1142/s0218215793000095.

Abstract:
Many AI and other applications populate their knowledge-bases with information retrieved from large, shared databases. This paper describes a new approach to maintaining consistency between objects in dynamic, shared databases and copies of those objects which are cached in an application knowledge-base. The approach relies on an intelligent interface to active databases that we call a Mediator for Approximate Consistency (MAC). The MAC has several unique features: (1) it permits applications to specify their consistency requirements declaratively, using a simple extension of a frame-based representation language, (2) it automatically generates the interfaces and database objects necessary to enforce those consistency requirements, shielding the knowledge-base developer from the implementation details of consistency maintenance, and (3) it provides an explicit representation of consistency constraints in the database, which allows them to be queried and reasoned about. The paper describes the knowledge-base/database consistency problem and previous approaches to dealing with it. It then describes our architecture for maintaining approximate knowledge-base/database consistency, including techniques for specifying, representing, and enforcing consistency constraints.
13

Yu, Xiaomei, Hong Wang, and Xiangwei Zheng. "Mining top-k approximate closed patterns in an imprecise database." International Journal of Grid and Utility Computing 9, no. 2 (2018): 97. http://dx.doi.org/10.1504/ijguc.2018.091696.

14

Zheng, Xiangwei, Xiaomei Yu, and Hong Wang. "Mining top-k approximate closed patterns in an imprecise database." International Journal of Grid and Utility Computing 9, no. 2 (2018): 97. http://dx.doi.org/10.1504/ijguc.2018.10012791.

15

Wang, Shyue-Liang, Tzung-Pei Hong, and Wen-Yang Lin. "Answering Null Queries by Analogical Reasoning on Similarity-based Fuzzy Relational Databases." Journal of Advanced Computational Intelligence and Intelligent Informatics 5, no. 3 (May 20, 2001): 163–71. http://dx.doi.org/10.20965/jaciii.2001.p0163.

Abstract:
We present here a method of using analogical reasoning to infer approximate answers for null queries on similarity-based fuzzy relational databases. Null queries are queries that elicit a null answer from a database. Analogical reasoning assumes that if two situations are known to be similar in some respects, it is likely that they will be similar in others. Application of analogical reasoning to infer approximate answers for null queries using fuzzy functional dependency and fuzzy equality relation on possibility-based fuzzy relational database has been studied. However, the problem of inferring approximate answers has not been fully explored on the similarity-based fuzzy relational data model. In this work, we introduce the concept of approximate dependency and define a similarity measure on the similarity-based fuzzy model, as extensions to the fuzzy functional dependency and fuzzy equality relation respectively. Under the framework of reasoning by analogy, our method provides a flexible query answering mechanism for null queries on the similarity-based fuzzy relational data model.
16

Nikolov, Aleksandar, Kunal Talwar, and Li Zhang. "The Geometry of Differential Privacy: The Small Database and Approximate Cases." SIAM Journal on Computing 45, no. 2 (January 2016): 575–616. http://dx.doi.org/10.1137/130938943.

17

Lopes, Stéphane, Jean-Marc Petit, and Lotfi Lakhal. "Functional and approximate dependency mining: database and FCA points of view." Journal of Experimental & Theoretical Artificial Intelligence 14, no. 2-3 (April 2002): 93–114. http://dx.doi.org/10.1080/09528130210164143.

18

NARAYANAN, MUTHUKUMAR, SANJAY KUMAR MADRIA, and DAN ST CLAIR. "APPROXIMATE QUERY PROCESSING USING MULTILAYERED DATA MODEL TO HANDLE ENVIRONMENTAL CONSTRAINTS, PRIVACY AND AVOIDING INFERENCES." International Journal of Cooperative Information Systems 16, no. 02 (June 2007): 177–228. http://dx.doi.org/10.1142/s0218843007001627.

Abstract:
In this paper, we describe a query approximation system which uses the Multi-Layered Database (MLDB), a collection of summarized relational data generated using domain-based concept hierarchies. The system generates approximate answers to queries to handle environmental constraints and access control levels, thus preserving the privacy and security of data. Using a concept hierarchy (CH), we generalize attributes to transform base relations into different layers of summarized relations corresponding to access control levels. The summary databases thus formed are compressions of the tuples in the main database using the CH constructed from the domain set. The query is rewritten by traversing the MLDB layers according to the user's access control level. We present summarization methods, query rewriting algorithms, implementation, and experimental results of the system. In addition, we analyze some of the known inferences in Multi Level Secure (MLS) databases and then proceed to explore their effects on an approximate query processor that uses the MLDB model. The common relationships among inferential queries are found by analyzing them, and are used in possible solutions to detect and prevent inference problems. These patches are added to the query processor in MLDB to form a system that provides approximate results while preserving privacy and at the same time blocking possible inferences. We have observed that these extra patches introduce only very small overheads in MLDB generation and query processing.
19

Liu, Julie Yu-Chih. "Lossless Join Decomposition for Extended Possibility-Based Fuzzy Relational Databases." Journal of Applied Mathematics 2014 (2014): 1–9. http://dx.doi.org/10.1155/2014/842680.

Abstract:
Functional dependency is the basis of database normalization. Various types of fuzzy functional dependencies have been proposed for fuzzy relational databases and applied to the process of database normalization. However, the problem of achieving lossless join decomposition occurs when employing fuzzy functional dependencies for database normalization in extended possibility-based fuzzy data models. To resolve the problem, this study defines a fuzzy functional dependency based on a notion of approximate equality for extended possibility-based fuzzy relational databases. Examples show that the notion is more applicable than other similarity concepts to research related to the extended possibility-based data model. We provide a decomposition method using the proposed fuzzy functional dependency for database normalization and prove the lossless join property of the decomposition method.
20

Wang, Jianfeng, Meixia Miao, Yaqian Gao, and Xiaofeng Chen. "Enabling efficient approximate nearest neighbor search for outsourced database in cloud computing." Soft Computing 20, no. 11 (July 2, 2015): 4487–95. http://dx.doi.org/10.1007/s00500-015-1758-6.

21

Di Tria, Francesco, Ezio Lefons, and Filippo Tangorra. "Benchmark for Approximate Query Answering Systems." Journal of Database Management 26, no. 1 (January 2015): 1–29. http://dx.doi.org/10.4018/jdm.2015010101.

Abstract:
The standard benchmark for Decision Support Systems is TPC-H, which is composed of a database, a workload, and a set of metrics for performance evaluation. However, TPC-H does not include a methodology for benchmarking Approximate Query Answering Systems, that is, the software tools used to obtain fast answers to analytical queries in the decision-making process. In this paper, the authors present a methodology to evaluate and compare Approximate Query Answering Systems. To this aim, a methodology that extends the standard TPC-H and a set of new metrics that take into account the specific features of these systems are proposed. Experimental results show the application of these metrics to two systems based on data analytic approximation by orthonormal series.
22

Inoue, Tomohiro, Aneesh Krishna, and Raj P. Gopalan. "Approximate Query Processing on High Dimensionality Database Tables Using Multidimensional Cluster Sampling View." Journal of Software 11, no. 1 (2016): 80–93. http://dx.doi.org/10.17706/jsw.11.1.80-93.

23

Gatterbauer, Wolfgang, and Dan Suciu. "Dissociation and propagation for approximate lifted inference with standard relational database management systems." VLDB Journal 26, no. 1 (July 16, 2016): 5–30. http://dx.doi.org/10.1007/s00778-016-0434-5.

24

Caprara, A., M. Fischetti, and D. Maio. "Exact and approximate algorithms for the index selection problem in physical database design." IEEE Transactions on Knowledge and Data Engineering 7, no. 6 (1995): 955–67. http://dx.doi.org/10.1109/69.476501.

25

Ghinita, Gabriel, Panos Kalnis, Murat Kantarcioglu, and Elisa Bertino. "Approximate and exact hybrid algorithms for private nearest-neighbor queries with database protection." GeoInformatica 15, no. 4 (December 15, 2010): 699–726. http://dx.doi.org/10.1007/s10707-010-0121-4.

26

Yarygina, Anna, and Boris Novikov. "Optimizing resource allocation for approximate real-time query processing." Computer Science and Information Systems 11, no. 1 (2014): 69–88. http://dx.doi.org/10.2298/csis120825063y.

Abstract:
Query optimization techniques have proved essential for high performance in database management systems. In the context of new querying paradigms, such as similarity-based search, exact query evaluation is neither computationally feasible nor meaningful, and approximate query evaluation is the only reasonable option. In this paper, the problem of resource allocation for approximate evaluation of complex queries is considered. An approximate algorithm for near-optimal resource allocation is presented, providing the best feasible quality of output subject to a limited total query cost. Experimental results show that the approximate resource allocation algorithm is accurate and efficient.
27

Ismaeel, Salam, Ayman Al-Khazraji, and Karama Al-delimi. "Fuzzy Information Modeling in a Database System." IAES International Journal of Artificial Intelligence (IJ-AI) 6, no. 1 (March 1, 2017): 1. http://dx.doi.org/10.11591/ijai.v6.i1.pp1-7.

Abstract:
Fuzzy logic (FL) provides a remarkably simple way to draw definite conclusions from vague, ambiguous, or imprecise information. In a sense, fuzzy logic resembles human decision making with its ability to work from approximate data and find precise solutions. In this paper, a fuzzy information modeling system was developed and then used in a database containing fuzzy and real data to create a new information assistant capable of supporting decisions about this data. The proposed system is implemented on a special database used to evaluate workers or users in formal organizations.
28

MARKOWITZ, HARRY. "MEAN-VARIANCE APPROXIMATIONS TO THE GEOMETRIC MEAN." Annals of Financial Economics 07, no. 01 (April 2012): 1250001. http://dx.doi.org/10.1142/s2010495212500017.

Abstract:
This paper uses two databases to test the ability of six functions of arithmetic mean and variance to approximate geometric mean return or, equivalently, Bernoulli's expected log utility. The two databases are: (1) a database of returns on frequently used asset classes, and (2) that of real returns on the equity markets of sixteen countries, 1900–2000. Three of the functions of arithmetic mean and variance do quite well, even for return series with large losses. The other three do less well.
29

Rösch, Philipp, and Wolfgang Lehner. "Optimizing Sample Design for Approximate Query Processing." International Journal of Knowledge-Based Organizations 3, no. 4 (October 2013): 1–21. http://dx.doi.org/10.4018/ijkbo.2013100101.

Abstract:
The rapid increase of data volumes makes sampling a crucial component of modern data management systems. Although there is a large body of work on database sampling, the problem of automatically determining the optimal sample for a given query has remained (almost) unaddressed. To tackle this problem, the authors propose a sample advisor based on a novel cost model. Primarily designed for advising samples for a few queries specified by an expert, the sample advisor is additionally extended in two ways. The first extension enhances applicability by utilizing recorded workload information and taking memory bounds into account. The second extension increases effectiveness by merging samples in case of overlapping pieces of sample advice. For both extensions, the authors present exact and heuristic solutions. In their evaluation, the authors analyze the properties of the cost model and demonstrate the effectiveness and efficiency of the heuristic solutions with a variety of experiments.
30

Liu, Zheng, James Borneman, and Tao Jiang. "A software system for gene sequence database construction based on fast approximate string matching." International Journal of Bioinformatics Research and Applications 1, no. 3 (2005): 273. http://dx.doi.org/10.1504/ijbra.2005.007906.

31

Liu, Zheng, James Borneman, and Tao Jiang. "A software system for gene sequence database construction based on fast approximate string matching." International Journal of Bioinformatics Research and Applications 1, no. 3 (2006): 273. http://dx.doi.org/10.1504/ijbra.2006.007906.

32

Belayadi, Djahida, Khaled-Walid Hidouci, and Ladjel Bellatreche. "OLAPS: Online load-balancing in range-partitioned main memory database with approximate partition statistics." Computer Science and Information Systems 15, no. 2 (2018): 393–419. http://dx.doi.org/10.2298/csis170320007b.

Abstract:
Modern database systems can achieve high-throughput main-memory query execution by being aware of the dynamics of highly parallel hardware. In such systems, data is partitioned into smaller pieces to achieve better parallelism. Unfortunately, data skew is one of the main problems faced during parallel processing in a parallel main-memory database. In some data-intensive applications, parallel range queries over a dynamically range-partitioned system are important. Continuous insertions/deletions can lead to a very high degree of data skew and consequently poor performance of parallel range queries. In this paper, we propose an approach for maintaining balanced loads over a set of nodes, as in a system of communicating vessels, by migrating tuples between neighboring nodes. These frequent (or even continuous) data transfers inevitably involve dynamic changes in the partition statistics. To avoid the performance degradation typically associated with this dynamism, we provide a solution based on an approximate Partition Statistics Table. The basic idea behind this table is that both clients and nodes may have imperfect knowledge about the effective load distribution. They can nevertheless locate any data with almost the same efficiency as with exact partition statistics. Furthermore, maintaining load distribution statistics does not require exchanging additional messages, in contrast to efficient solutions from the state of the art (which require at least O(log n) messages). We show through intensive experiments that our proposal supports efficient range queries, while simultaneously guaranteeing storage balance even in the presence of numerous concurrent insertions/deletions generating a heavily skewed data distribution.
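The communicating-vessels balancing described in this abstract can be caricatured with a tiny numeric sketch: each node repeatedly averages its load with its right-hand neighbor, so load differences diffuse away. This is only a toy model under simplifying assumptions (scalar loads, synchronous sweeps, no real tuple migration or partition statistics), not the OLAPS protocol itself.

```python
def diffusion_balance(loads, rounds=200):
    """Toy neighbor-to-neighbor balancing sketch (not the paper's algorithm).

    On every sweep, each adjacent pair splits its combined load evenly,
    like communicating vessels: half the difference flows toward the
    lighter neighbor. The total load is preserved by construction.
    """
    loads = [float(x) for x in loads]
    for _ in range(rounds):
        for i in range(len(loads) - 1):
            move = (loads[i] - loads[i + 1]) / 2.0  # half the difference
            loads[i] -= move
            loads[i + 1] += move
    return loads
```

The spread between the heaviest and lightest node shrinks geometrically with the number of sweeps, while the sum of loads stays constant.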
33

Ulusoy, Özgür. "An approximate analysis of a real-time database concurrency control protocol via Markov modeling." ACM SIGMETRICS Performance Evaluation Review 20, no. 3 (March 1993): 36–48. http://dx.doi.org/10.1145/155768.155773.

34

Cole, Jason C., Jing Wen Yao, Gregory P. Shields, W. D. S. Motherwell, Frank H. Allen, and Judith A. K. Howard. "Automatic detection of molecular symmetry in the Cambridge Structural Database." Acta Crystallographica Section B Structural Science 57, no. 1 (February 1, 2001): 88–94. http://dx.doi.org/10.1107/s010876810001380x.

Abstract:
A method for the detection of approximate molecular symmetry in crystal structures has been developed. The point-group symmetry is assigned to each molecule and the relevant symmetry elements can be visualized, superimposed on the molecule. The method has been validated against reference structures with exact symmetry subjected to small random perturbation.
35

Ogunbo, Jide Nosakare, Jie Zhang, and Xiong Zhang. "Transient electromagnetic search engine for real-time imaging." GEOPHYSICS 82, no. 5 (September 1, 2017): E277–E285. http://dx.doi.org/10.1190/geo2016-0636.1.

Abstract:
To image the resistivity distribution of the subsurface, transient electromagnetic (TEM) surveying has been established as an effective geophysical method. Conventionally, an inversion method is applied to resolve the model parameters from the available measurements. However, significant time and effort are involved in preparing and executing an inversion, which prohibits its use as a real-time decision-making tool to optimize surveying in the field. We have developed a search engine method to find approximate 1D resistivity model solutions for circular central-loop configuration TEM data in real time. The search engine method is a concept used for query searches from large databases on the Internet. By extension, approximate solutions to any input TEM data can be found rapidly by searching a preestablished database. This database includes a large number of forward simulation results that represent the possible model solutions. The database size is optimized by the survey depth of investigation and the sensitivity analysis of the model layers. The fast search speed is achieved by using the multiple randomized [Formula: see text]-dimensional tree method. In addition to its high speed in finding solutions, the search engine method provides a solution space that quantifies the resolutions and uncertainties of the results. We apply the search engine method to find 1D model solutions at different data points and then interpolate them to a pseudo-2D resistivity model. We tested the method with synthetic and real data.
36

Wang, Rui, Ping Gu, and Jian Min Zeng. "A Vague Words Retrieval Method in a Relational Database." Applied Mechanics and Materials 268-270 (December 2012): 1692–96. http://dx.doi.org/10.4028/www.scientific.net/amm.268-270.1692.

Abstract:
In this paper, we propose a vague-words retrieval method over text fields of relational databases. The method is expected to produce a good retrieval result from text fields of relational databases even when a set of incorrect keywords is submitted. The solution is to create a "hot words library" and then match the input keywords against the words of this library, based on a modified dynamic programming algorithm for k-difference approximate string matching. Experiments show that this solution has good query performance.
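The k-difference matching this abstract builds on is usually implemented with the classic Sellers dynamic program, in which row 0 is all zeros so a match may begin at any text position. A minimal sketch follows (a textbook version, not the paper's modified algorithm; the function name and interface are ours):

```python
def k_difference_search(pattern, text, k):
    """Report end positions in `text` where `pattern` matches with <= k edits.

    Classic k-difference dynamic programming (Sellers): the first row is
    all zeros, so a match may start anywhere in the text; pattern prefixes
    pay a deletion cost in column 0.
    """
    m, n = len(pattern), len(text)
    prev = [0] * (n + 1)               # row 0: empty pattern matches everywhere
    for i in range(1, m + 1):
        curr = [i] + [0] * n           # deleting i pattern chars costs i
        for j in range(1, n + 1):
            cost = 0 if pattern[i - 1] == text[j - 1] else 1
            curr[j] = min(prev[j - 1] + cost,  # match / substitute
                          prev[j] + 1,         # delete from pattern
                          curr[j - 1] + 1)     # insert into pattern
        prev = curr
    return [j for j in range(n + 1) if prev[j] <= k]
```

For example, `k_difference_search("abc", "xxabcxx", 1)` reports the end positions 4, 5, and 6, i.e. every place a match with at most one edit ends.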
37

Dunin-Kęplicz, Barbara, Anh Nguyen, and Andrzej Szałas. "A layered rule-based architecture for approximate knowledge fusion." Computer Science and Information Systems 7, no. 3 (2010): 617–42. http://dx.doi.org/10.2298/csis100209015d.

Abstract:
In this paper we present a framework for fusing approximate knowledge obtained from various distributed, heterogeneous knowledge sources. This issue is substantial in modeling multi-agent systems, where a group of loosely coupled heterogeneous agents cooperate in achieving a common goal. In paper [5] we focused on defining a general mechanism for knowledge fusion. Next, techniques ensuring tractability of fusing knowledge expressed as a Horn subset of propositional dynamic logic were developed in [13,16]. Propositional logics may seem too weak to be useful in real-world applications. On the other hand, propositional languages may be viewed as sublanguages of first-order logics, which serve as a natural tool to define concepts in the spirit of description logics [2]. These notions may be further used to define various ontologies, such as those applicable in the Semantic Web. Taking this step, we propose a framework in which our Horn subset of dynamic logic is combined with deductive database technology. This synthesis is formally implemented in the framework of the HSPDL architecture. The resulting knowledge fusion rules are naturally applicable to real-world data.
38

Lee, Dong-Gun, and Hee-Sung Cha. "Development of BIM Standard Database System for an Approximate Estimate of Old-Aged Apartment Remodeling Project." Korean Journal of Construction Engineering and Management 11, no. 5 (September 30, 2010): 53–64. http://dx.doi.org/10.6106/kjcem.2010.11.5.53.

39

Liu, Bing, Dan Han, and Shuang Zhang. "Approximate Chinese String Matching Techniques Based on Pinyin Input Method." Applied Mechanics and Materials 513-517 (February 2014): 1017–20. http://dx.doi.org/10.4028/www.scientific.net/amm.513-517.1017.

Abstract:
String matching is one of the most typical problems in computer science. Previous studies mainly focused on exact string matching. However, with the rapid development of computers and the Internet, as well as continuously arising new issues, researching and designing efficient approximate string matching algorithms has gained important theoretical value and practical meaning. Approximate string matching, also called string matching that allows errors, aims to find a pattern string in a text or database while allowing up to k differences between the pattern and its occurrences in the text. Although a number of algorithms have been proposed for approximate string matching, few studies focus on large alphabets; most work addresses small or medium alphabet sizes. For large alphabets, especially Chinese characters and Asian phonetic scripts, few efficient algorithms exist. For these reasons, this paper focuses on approximate Chinese string matching based on the pinyin input method.
40

Formica, Anna, Mauro Mazzei, Elaheh Pourabbas, and Maurizio Rafanelli. "Approximate answering of queries involving polyline–polyline topological relationships." Information Visualization 17, no. 2 (March 28, 2017): 128–45. http://dx.doi.org/10.1177/1473871617698516.

Abstract:
In geographic information systems, pictorial query languages are visual languages that make it easier for the user to express queries by free-hand drawing. In this perspective, this article proposes an approach to provide approximate answers to pictorial queries that do not match the content of the database, that is, whose results are null. It addresses polyline–polyline topological relationships and is based on an algorithm, called the Approximate Answer Computation algorithm, which exploits the notions of the Operator Conceptual Neighborhood graph and the 16-intersection matrix. The operator conceptual neighborhood graph represents the conceptual topological neighborhood between Symbolic Graphical Objects and is used for relaxing query constraints. The nodes of the operator conceptual neighborhood graph are labeled with geo-operators whose semantics have been formalized. The 16-intersection matrix provides enriched query details with respect to the well-known Dimensionally Extended 9-Intersection Model proposed in the literature. A set of minimal 16-intersection matrices associated with each node of the operator conceptual neighborhood graph, under the external space connectivity condition, is defined, and a proof of its minimality is provided. The main idea behind each introduced notion is illustrated using a running example throughout the article.
APA, Harvard, Vancouver, ISO, and other styles
41

Balla, Dániel, Tibor József Novák, and Marianna Zichar. "Approximation of the WRB reference group with the reapplication of archive soil databases." Acta Universitatis Sapientiae, Agriculture and Environment 8, no. 1 (December 1, 2016): 27–38. http://dx.doi.org/10.1515/ausae-2016-0003.

Full text
Abstract:
Abstract In our study, we tested existing and freely accessible soil databases covering a small geographical region, surveyed and classified according to the Hungarian classification, in order to approximate the WRB reference soil groups (RSG). We tested the results and applicability of the approximation with three different methods on 12 soil profiles: first, RSGs were assigned to Hungarian soil taxa based on the results of previous correlation studies; second, a freely accessible online database of ISRIC was applied; and third, an automated reclassification developed and programmed by us was used, which takes the original soil data as input.
APA, Harvard, Vancouver, ISO, and other styles
42

Thatcher, R. W., D. North, and C. Biver. "Evaluation and Validity of a LORETA Normative EEG Database." Clinical EEG and Neuroscience 36, no. 2 (April 2005): 116–22. http://dx.doi.org/10.1177/155005940503600211.

Full text
Abstract:
To evaluate the reliability and validity of a Z-score normative EEG database for Low Resolution Electromagnetic Tomography (LORETA), EEG digital samples (2-second intervals sampled at 128 Hz, 1 to 2 minutes eyes closed) were acquired from 106 normal subjects, and the cross-spectrum was computed and multiplied by the Key Institute's LORETA 2,394 gray matter pixel T Matrix. After a log10 transform or a Box-Cox transform, the mean and standard deviation of the *.lor files were computed for each of the 2,394 gray matter pixels, from 1 to 30 Hz, for each of the subjects. Tests of Gaussianity were computed in order to best approximate a normal distribution for each frequency and gray matter pixel. The relative sensitivity of a Z-score database was computed by measuring the approximation to a Gaussian distribution. The validity of the LORETA normative database was evaluated by the degree to which confirmed brain pathologies were localized using the LORETA normative database. Log10 and Box-Cox transforms approximated a Gaussian distribution with 95.64% to 99.75% accuracy. The percentage of normative Z-score values at 2 standard deviations ranged from 1.21% to 3.54%, and the percentage of Z-scores at 3 standard deviations ranged from 0% to 0.83%. Left temporal lobe epilepsy, a right sensorimotor hematoma and a right hemisphere stroke exhibited maximum Z-score deviations in the same locations as the pathologies. We conclude: (1) an adequate approximation to a Gaussian distribution can be achieved using LORETA with a log10 or Box-Cox transform and parametric statistics; (2) a Z-score normative database is valid with adequate sensitivity when using LORETA; and (3) the Z-score LORETA normative database also consistently localized known pathologies to the expected Brodmann areas, as a hypothesis test based on the surface EEG before computing LORETA.
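The core normative-database mechanics described here, log-transform the per-pixel power values, estimate a normative mean and standard deviation, then Z-score a new measurement against them, can be sketched in a few lines. This is a simplified illustration of the general technique, not the authors' pipeline; function names are ours.

```python
import math

def build_norms(subject_values):
    """subject_values: one power value per normative subject for a
    single pixel/frequency bin. Returns (mean, sd) of the
    log10-transformed values, the normative statistics."""
    logs = [math.log10(v) for v in subject_values]
    n = len(logs)
    mean = sum(logs) / n
    var = sum((x - mean) ** 2 for x in logs) / (n - 1)  # sample variance
    return mean, math.sqrt(var)

def z_score(value, mean, sd):
    """Z-score of a new measurement against the normative mean/SD,
    after the same log10 transform used to build the norms."""
    return (math.log10(value) - mean) / sd
```

Under a good Gaussian approximation, roughly 2.3% of normal values should exceed 2 standard deviations in each tail, which is the sensitivity check the abstract reports.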
APA, Harvard, Vancouver, ISO, and other styles
43

Sousa-Silva, Clara, Janusz J. Petkowski, and Sara Seager. "Molecular simulations for the spectroscopic detection of atmospheric gases." Physical Chemistry Chemical Physics 21, no. 35 (2019): 18970–87. http://dx.doi.org/10.1039/c8cp07057a.

Full text
Abstract:
The remote identification of molecules in an atmosphere requires data for each gas that makes contributions to its spectra. We present a database of approximate spectra for thousands of volatiles, simulated using organic and quantum chemistry.
APA, Harvard, Vancouver, ISO, and other styles
44

PARK, JE-HO, VINAY KANITKAR, R. N. UMA, and ALEX DELIS. "CLUSTERING OF CLIENT-SITES IN THREE-TIER DATABASE ARCHITECTURES." International Journal of Cooperative Information Systems 12, no. 01 (March 2003): 91–134. http://dx.doi.org/10.1142/s021884300300067x.

Full text
Abstract:
Conventional two-tier databases have shown performance limitations in the presence of many concurrent clients. We propose logical grouping of clients (clustering) as a means to improve the performance of collaborative networked databases. In this paper, we discuss a three-tier client-server database architecture (3t-CSD) featuring this partitioning. The proposed clustering is based on the similarity of clients' access patterns. Each cluster is supervised by a designated manager that coordinates data sharing among its members. A set of clients is optimally partitioned if the sites in each individual cluster share the maximum possible common data-access probability. We first show that the optimal client clustering problem is NP-complete and then develop two approximate solutions based on abstraction and filtering of statistics on client access patterns. Our main goal is to compare the performance of the conventional and three-tier client-server database architectures with respect to transaction turnaround times and object response times. After developing system prototypes that implement both the two-tier and 3t-CSD architectures, we show experimentally that as long as good client clustering is possible, the 3t-CSD architecture yields sizable gains over its conventional counterpart. We also compare and evaluate the effectiveness of the two proposed techniques used to create client clusters. Finally, we examine the role of several preprocessing schemes used to reduce the volume of the input data supplied to the clustering techniques.
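Since optimal clustering is NP-complete, approximate grouping by access-pattern similarity is the practical route. The sketch below is a generic greedy illustration of that idea, clustering clients whose accessed-object sets are similar under the Jaccard measure; it is not one of the paper's two techniques, and the threshold and names are our assumptions.

```python
def jaccard(a, b):
    """Similarity of two clients' accessed-object sets."""
    return len(a & b) / len(a | b) if a | b else 1.0

def cluster_clients(access_sets, threshold=0.5):
    """Greedy sketch: put each client into the first existing cluster
    whose representative access set is similar enough, else open a new
    cluster. Returns a list of lists of client ids."""
    clusters = []  # list of (representative_set, [client_ids])
    for cid, objs in access_sets.items():
        for rep, members in clusters:
            if jaccard(rep, objs) >= threshold:
                members.append(cid)
                rep |= objs  # fold the client's accesses into the representative
                break
        else:
            clusters.append((set(objs), [cid]))
    return [members for _, members in clusters]
```

Clients inside one cluster then share a manager, so hot objects circulate within the cluster instead of round-tripping to the server.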
APA, Harvard, Vancouver, ISO, and other styles
45

Wakahara, Yasushi. "An approximate method for minimizing the communications cost of the replication routing schedule for a distributed database." Electronics and Communications in Japan (Part I: Communications) 83, no. 1 (January 2000): 73–86. http://dx.doi.org/10.1002/(sici)1520-6424(200001)83:1<73::aid-ecja8>3.0.co;2-u.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Szelka, Janusz, and Zbigniew Wrona. "The use of fuzzy databases and knowledge bases for aiding engineering projects." Budownictwo i Architektura 12, no. 1 (March 11, 2013): 069–76. http://dx.doi.org/10.35784/bud-arch.2175.

Full text
Abstract:
The IT tools widely used for aiding information and decision-making tasks in engineering activities include classic database systems and, for problems with poorly recognised structure, systems with knowledge bases. The specific nature of these categories of systems, however, allows neither the representation of the approximate or imprecise character of available data and knowledge nor the processing of fuzzy data. Since there have so far been no solutions related to the use of fuzzy databases or fuzzy knowledge bases in engineering projects, it seems necessary to attempt an assessment of the possible employment of these technologies to aid analytical and decision-making processes.
APA, Harvard, Vancouver, ISO, and other styles
47

TAGHVA, KAZEM, JULIE BORSACK, BRYAN BULLARD, and ALLEN CONDIT. "POST-EDITING THROUGH APPROXIMATION AND GLOBAL CORRECTION." International Journal of Pattern Recognition and Artificial Intelligence 09, no. 06 (December 1995): 911–23. http://dx.doi.org/10.1142/s0218001495000377.

Full text
Abstract:
This paper describes a new automatic spelling correction program to deal with OCR-generated errors. The method used here is based on three principles: 1. approximate string matching between the misspellings and the terms occurring in the database, as opposed to the entire dictionary; 2. local information obtained from the individual documents; 3. the use of a confusion matrix, which contains information inherently specific to the nature of errors caused by the particular OCR device. This system is then utilized to process approximately 10,000 pages of OCR-generated documents. Among the misspellings discovered by this algorithm, about 87% were corrected.
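The third principle, weighting corrections by OCR confusion likelihood, can be folded directly into an edit-distance computation: substitutions that the OCR device commonly makes (such as reading "t" as "1") get a cost below 1. A minimal sketch of such a weighted distance follows; the cost values and example confusions are illustrative assumptions, not the paper's actual matrix.

```python
def weighted_edit_distance(a, b, confusion=None):
    """Edit distance in which a substitution known to be a common OCR
    confusion is cheaper than an arbitrary one. `confusion` maps
    (seen_char, intended_char) to a substitution cost in (0, 1]."""
    confusion = confusion or {}
    m, n = len(a), len(b)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = float(i)
    for j in range(n + 1):
        d[0][j] = float(j)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                sub = 0.0
            else:
                sub = confusion.get((a[i - 1], b[j - 1]), 1.0)
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution
    return d[m][n]
```

Ranking database terms by this weighted distance makes "ca1" correct to "cat" ahead of equally distant but implausible alternatives.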
APA, Harvard, Vancouver, ISO, and other styles
48

Brock, Carolyn Pratt. "Pseudosymmetric layers in high-Z′ and P1 structures of organic molecules." CrystEngComm 22, no. 43 (2020): 7371–79. http://dx.doi.org/10.1039/d0ce00302f.

Full text
Abstract:
Layers having obvious approximate symmetry higher than that of the overall 3-D crystal are present in 20–25% of the Z′ > 4 and P1 organic structures archived in the Cambridge Structural Database. In some structures different types of layers alternate.
APA, Harvard, Vancouver, ISO, and other styles
49

XIANG, JIAN, and ZHIJUN ZHENG. "DOUBLE-REFERENCE INDEX FOR MOTION RETRIEVAL BY ISOMAP DIMENSIONALITY REDUCTION." International Journal of Pattern Recognition and Artificial Intelligence 24, no. 04 (June 2010): 601–18. http://dx.doi.org/10.1142/s0218001410008044.

Full text
Abstract:
Along with the development of the motion capture (mocap) technique, large-scale 3D motion databases have become increasingly available. In this paper, a novel approach is presented for motion retrieval based on double-reference index (DRI). Due to the high dimensionality of motion's features, Isomap nonlinear dimension reduction is used. In addition, an algorithmic framework is employed to approximate the optimal mapping function by a Radial Basis Function (RBF) in handling new data. Subsequently, a DRI is built based on selecting a small set of representative motion clips in the database. Thus, the candidate set is obtained by discarding the most unrelated motion clips to significantly reduce the number of costly similarity measures. Finally, experimental results show that these approaches are effective for motion data retrieval in large-scale databases.
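The candidate-filtering role of a representative-based index, discarding clearly unrelated clips before any costly similarity measure, can be illustrated with a triangle-inequality prune against precomputed distances to the representatives. This is a generic sketch of that pruning idea under a metric-distance assumption, not the paper's double-reference index; all names are ours.

```python
def build_index(clips, reps, dist):
    """Precompute each clip's distance to every representative."""
    return {cid: [dist(v, r) for r in reps] for cid, v in clips.items()}

def candidates(query, reps, index, dist, radius):
    """Keep only clips whose triangle-inequality lower bound
    max_r |d(q, r) - d(clip, r)| could still fall within `radius`;
    the costly d(q, clip) is then computed only for these."""
    qd = [dist(query, r) for r in reps]
    out = []
    for cid, cd in index.items():
        lower = max(abs(q - c) for q, c in zip(qd, cd))
        if lower <= radius:
            out.append(cid)
    return out
```

Only the surviving candidate set is then compared against the query in full, which is where the reduction in costly similarity measures comes from.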
APA, Harvard, Vancouver, ISO, and other styles
50

Huang, Liang, He Zhang, Dezhong Deng, Kai Zhao, Kaibo Liu, David A. Hendrix, and David H. Mathews. "LinearFold: linear-time approximate RNA folding by 5'-to-3' dynamic programming and beam search." Bioinformatics 35, no. 14 (July 2019): i295—i304. http://dx.doi.org/10.1093/bioinformatics/btz375.

Full text
Abstract:
Abstract Motivation Predicting the secondary structure of a ribonucleic acid (RNA) sequence is useful in many applications. Existing algorithms (based on dynamic programming) suffer from a major limitation: their runtimes scale cubically with the RNA length, and this slowness limits their use in genome-wide applications. Results We present a novel alternative O(n³)-time dynamic programming algorithm for RNA folding that is amenable to heuristics that make it run in O(n) time and O(n) space, while producing a high-quality approximation to the optimal solution. Inspired by incremental parsing for context-free grammars in computational linguistics, our alternative dynamic programming algorithm scans the sequence in a left-to-right (5′-to-3′) direction rather than in a bottom-up fashion, which allows us to employ the effective beam pruning heuristic. Our work, though inexact, is the first RNA folding algorithm to achieve linear runtime (and linear space) without imposing constraints on the output structure. Surprisingly, our approximate search results in even higher overall accuracy on a diverse database of sequences with known structures. More interestingly, it leads to significantly more accurate predictions on the longest sequence families in that database (16S and 23S ribosomal RNAs), as well as improved accuracies for long-range base pairs (500+ nucleotides apart), both of which are well known to be challenging for the current models. Availability and implementation Our source code is available at https://github.com/LinearFold/LinearFold, and our webserver is at http://linearfold.org (sequence limit: 100,000 nt). Supplementary information Supplementary data are available at Bioinformatics online.
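The beam-pruning heuristic central to this abstract, scan left to right and keep only the b best-scoring states per step, is generic enough to sketch independently of RNA folding. The following is a toy illustration of that pattern, not LinearFold's actual state space or scoring model; `expand` and its states are placeholders.

```python
import heapq

def beam_search(seq, expand, beam_size):
    """Generic left-to-right beam search: scan `seq` one symbol at a
    time, expand every surviving state, and keep only the `beam_size`
    best-scoring states per step (the pruning that trades exactness for
    linear runtime). `expand(state, score, sym)` yields
    (new_state, new_score) pairs."""
    beam = {None: 0.0}  # start state with score 0
    for sym in seq:
        nxt = {}
        for state, score in beam.items():
            for ns, nsc in expand(state, score, sym):
                if ns not in nxt or nsc > nxt[ns]:
                    nxt[ns] = nsc  # keep the best score per state
        beam = dict(heapq.nlargest(beam_size, nxt.items(),
                                   key=lambda kv: kv[1]))
    return beam
```

With a constant beam width b, each of the n scan steps does O(b) work (times the per-state expansion cost), which is how the overall search stays linear in the sequence length.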
APA, Harvard, Vancouver, ISO, and other styles