To see the other types of publications on this topic, follow the link: Database relation schema.

Journal articles on the topic 'Database relation schema'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'Database relation schema.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Wastl, Ralf. "Linear Derivations for Keys of a Database Relation Schema." JUCS - Journal of Universal Computer Science 4, no. (12) (1998): 883–97. https://doi.org/10.3217/jucs-004-12-0883.

Full text
Abstract:
In [Wastl 1998] we introduced the Hilbert-style inference system K for deriving all keys of a database relation schema. In this paper we investigate formal K-derivations more closely using the concept of tableaux. The analysis gives insight into the process of deriving keys of a relation schema, and the tableau concept also yields a proof procedure for computing all keys of a relation schema. In practice, the methods developed here are useful for computing keys and for deciding whether an attribute is a key attribute or a non-key attribute. This decision problem becomes relevant when checking whether a relation schema is in third normal form, or when applying the well-known 3NF-decomposition algorithm (a.k.a. 3NF-synthesis algorithm).
APA, Harvard, Vancouver, ISO, and other styles
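
For readers who want to make the key-derivation problem concrete, here is a minimal Python sketch of the standard closure-based way to enumerate all minimal keys of a relation schema from its functional dependencies. It is illustrative only: the schema and FDs are invented, and this is the textbook approach, not Wastl's Hilbert-style system K or its tableau procedure.

```python
from itertools import combinations

def closure(attrs, fds):
    """Closure of an attribute set under a list of FDs (lhs_set, rhs_set)."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

def minimal_keys(schema, fds):
    """Enumerate all minimal keys by testing candidate sets in order of size."""
    keys = []
    for size in range(1, len(schema) + 1):
        for cand in map(set, combinations(sorted(schema), size)):
            if any(k <= cand for k in keys):
                continue  # a proper superset of a key is never minimal
            if closure(cand, fds) == schema:
                keys.append(cand)
    return keys

# Hypothetical schema R(A, B, C, D) with FDs A -> B and BC -> D
schema = {"A", "B", "C", "D"}
fds = [({"A"}, {"B"}), ({"B", "C"}, {"D"})]
print(minimal_keys(schema, fds))  # [{'A', 'C'}] -- the sole minimal key
```
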
2

Wastl, Ralf. "On the Number of Keys of a Relational Database Schema." JUCS - Journal of Universal Computer Science 4, no. (5) (1998): 547–59. https://doi.org/10.3217/jucs-004-05-0547.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Imam, Abdullahi Abubakar, Shuib Basri, Rohiza Ahmad, et al. "DSP: Schema Design for Non-Relational Applications." Symmetry 12, no. 11 (2020): 1799. http://dx.doi.org/10.3390/sym12111799.

Full text
Abstract:
The way a database schema is designed has a high impact on its performance in relational databases, which are symmetric in nature. While the problem of schema optimization is even more significant for NoSQL (“Not only SQL”) databases, existing modeling tools for relational databases are inadequate for this asymmetric setting. As a result, NoSQL modelers rely on rules of thumb to model schemas that require a high level of competence. Several studies have been conducted to address this problem; however, they are either proprietary, symmetrical, relationally dependent or post-design assessment tools. In this study, a Dynamic Schema Proposition (DSP) model for NoSQL databases is proposed to handle the asymmetric nature of today’s data. This model aims to facilitate database design and improve its performance in relation to data availability. To achieve this, data modeling styles were aggregated and classified. Existing cardinality notations were empirically extended using synthetically generated queries. A binary integer formulation was used to guide the mapping of asymmetric entities from the application’s conceptual data model to a database schema. An experiment was conducted to evaluate the impact of the DSP model on NoSQL schema production and its performance. A profound improvement was observed in read/write query performance and schema production complexities. In this regard, DSP has significant potential to produce schemas that are capable of handling big data efficiently.
APA, Harvard, Vancouver, ISO, and other styles
4

Yannakakis, Mihalis. "Technical Perspective." ACM SIGMOD Record 51, no. 1 (2022): 77. http://dx.doi.org/10.1145/3542700.3542718.

Full text
Abstract:
The paper Structure and Complexity of Bag Consistency by Albert Atserias and Phokion Kolaitis [1] studies fundamental structural and algorithmic questions on the global consistency of databases in the context of bag semantics. A collection D of relations is called globally consistent if there is a (so-called "universal") relation over all the attributes that appear in all the relations of D whose projections yield the relations in D. The basic algorithmic problem for consistency is: given a database D, determine whether D is globally consistent. An obvious necessary condition for global consistency is local (or pairwise) consistency: every pair of relations in D must be consistent. This condition is not sufficient however: It is possible that every pair is consistent, but there is no single global relation over all the attributes whose projections yield the relations in D. A natural structural question is: Which database schemas have the property that every locally consistent database over the schema is also globally consistent?
APA, Harvard, Vancouver, ISO, and other styles
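
The local-versus-global question this perspective describes is easy to demonstrate in a few lines of Python. The sketch below uses the classic textbook counterexample (not taken from the paper itself) under set semantics: three pairwise consistent relations that admit no universal relation.

```python
from itertools import product

def project(rel, attrs, onto):
    """Project a relation (a set of tuples over `attrs`) onto `onto`."""
    idx = [attrs.index(a) for a in onto]
    return {tuple(t[i] for i in idx) for t in rel}

def pairwise_consistent(r, ra, s, sa):
    """Two relations are consistent iff they agree on their shared attributes."""
    shared = [a for a in ra if a in sa]
    return project(r, ra, shared) == project(s, sa, shared)

R = {(0, 0), (1, 1)}   # over (A, B): A = B
S = {(0, 0), (1, 1)}   # over (B, C): B = C
T = {(0, 1), (1, 0)}   # over (A, C): A != C

print(pairwise_consistent(R, ("A", "B"), S, ("B", "C")))  # True
print(pairwise_consistent(S, ("B", "C"), T, ("A", "C")))  # True
print(pairwise_consistent(R, ("A", "B"), T, ("A", "C")))  # True

# Brute-force search for a universal relation W over (A, B, C) whose
# projections are exactly R, S and T: no such W exists.
tuples3 = list(product((0, 1), repeat=3))
found = any(
    project(W, ("A", "B", "C"), ("A", "B")) == R
    and project(W, ("A", "B", "C"), ("B", "C")) == S
    and project(W, ("A", "B", "C"), ("A", "C")) == T
    for bits in product((0, 1), repeat=len(tuples3))
    for W in [{t for t, b in zip(tuples3, bits) if b}]
)
print(found)  # False: locally consistent, yet not globally consistent
```
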
5

LEE, CHIANG, and MING-CHUAN WU. "A HYPERRELATIONAL APPROACH TO INTEGRATION AND MANIPULATION OF DATA IN MULTIDATABASE SYSTEMS." International Journal of Cooperative Information Systems 5, no. 4 (1996): 395–429. http://dx.doi.org/10.1142/s0218843096000154.

Full text
Abstract:
The issue of interoperability among multiple autonomous databases has attracted a lot of attention from researchers in recent years. Past research on this issue can be roughly divided into two main categories: the tightly-integrated approach, which integrates databases by building an integrated schema, and the loosely-integrated approach, which achieves interoperability by using a multidatabase language. Most past efforts focused on the issues in the first approach. The problem with the first approach, however, is that it lacks a convenient representation of the integrated schema at the system level and a sound mathematical basis for data manipulation in a multidatabase system. In this paper, we propose to use hyperrelations as a powerful and succinct model for the global-level representation of heterogeneous database schemas. A hyperrelation has the structure of a relation, but its contents are the schemas of the semantically equivalent local relations in the databases. With this representation, the metadata of the global database, the local databases and the data of these databases are all representable using the structure of a relation. The impact of such a representation is that all the elegant features of relational systems can be easily extended to multidatabase systems. A hyperrelational algebra is designed accordingly. This algebra operates at the multidatabase system (MDBS) level so that query transformation and optimization are supported on a sound mathematical basis. The major contributions of this paper include: (1) Local relations of various schemas (even though they retain information of the same semantics) can be systematically mapped to hyperrelations. As the structure of a hyperrelation is similar to that of a relation, data manipulation and management tasks (such as design of the global query language and the view mechanism) are greatly facilitated. (2) The hyperrelational algebra provides a sound basis for query transformation and optimization in an MDBS.
APA, Harvard, Vancouver, ISO, and other styles
6

ZHANG, YANCHUN, MARIA E. ORLOWSKA, and ROBERT COLOMB. "AN EFFICIENT TEST FOR THE VALIDITY OF UNBIASED HYBRID KNOWLEDGE FRAGMENTATION IN DISTRIBUTED DATABASES." International Journal of Software Engineering and Knowledge Engineering 2, no. 4 (1992): 589–609. http://dx.doi.org/10.1142/s0218194092000270.

Full text
Abstract:
Knowledge bases contain specific and general knowledge. In relational database systems, specific knowledge is often represented as a set of relations. The conventional methodologies for centralized database design can be applied to develop a normalized, redundancy-free global schema. Distributed database design involves redundancy removal as well as the distribution design which allows replicated data segments. Thus, distribution design can be viewed as a process on a normalized global schema which produces a collection of fragments of relations from a global database. Clearly, not every fragment of data can be permitted as a relation. In this paper, we clarify and formally discuss three kinds of fragmentations of relational databases, and characterize their features as valid designs, and we introduce a hybrid knowledge fragmentation as the general case. For completeness of presentation, we first show an algorithm for the validity test of vertical fragmentations of normalized relations, and then extend it to the more general case of unbiased fragmentations.
APA, Harvard, Vancouver, ISO, and other styles
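
As a toy companion to this entry: for the simplest case, a binary vertical fragmentation, the best-known validity test is the classic lossless-join criterion (the shared attributes must functionally determine one of the fragments). The Python sketch below illustrates only this baseline with an invented schema, not the paper's more general unbiased/hybrid test.

```python
def closure(attrs, fds):
    """Closure of an attribute set under functional dependencies."""
    result = set(attrs)
    while True:
        extra = {a for lhs, rhs in fds if lhs <= result for a in rhs}
        if extra <= result:
            return result
        result |= extra

def lossless_binary_split(r1, r2, fds):
    """A vertical split of r1 | r2 is lossless iff the shared attributes
    functionally determine all of r1 or all of r2."""
    c = closure(r1 & r2, fds)
    return r1 <= c or r2 <= c

# Hypothetical global schema R(Emp, Dept, Mgr) with Emp -> Dept, Dept -> Mgr
fds = [({"Emp"}, {"Dept"}), ({"Dept"}, {"Mgr"})]
print(lossless_binary_split({"Emp", "Dept"}, {"Dept", "Mgr"}, fds))  # True
print(lossless_binary_split({"Emp", "Mgr"}, {"Dept", "Mgr"}, fds))   # False
```
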
7

Huang, Shu Qin. "Scheme of Mapping from XML Document to Relational Database." Applied Mechanics and Materials 241-244 (December 2012): 2732–36. http://dx.doi.org/10.4028/www.scientific.net/amm.241-244.2732.

Full text
Abstract:
A new algorithm and a mapping scheme between XML documents and relational databases are presented. The scheme extracts name, type, value and other information from an XML document and then maps them to a relational database. Data from the relational database can be restored to an XML document. This method avoids information loss in the data transfer process and preserves the structural relations between elements. Moreover, the mapping scheme does not depend on an XML DTD or Schema.
APA, Harvard, Vancouver, ISO, and other styles
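
The general idea of such schema-independent mappings is easy to illustrate. The following Python sketch (table layout invented, not the paper's exact scheme) stores each element's name, value and position in the tree as rows, so the document can be rebuilt without a DTD or XML Schema.

```python
import sqlite3
import xml.etree.ElementTree as ET
from itertools import count

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE node (
    id INTEGER PRIMARY KEY, parent INTEGER, name TEXT,
    value TEXT, position INTEGER)""")
_ids = count(1)

def store(elem, parent=None, pos=0):
    """Walk the XML tree, saving each element's name, value and position."""
    nid = next(_ids)
    conn.execute("INSERT INTO node VALUES (?, ?, ?, ?, ?)",
                 (nid, parent, elem.tag, (elem.text or "").strip(), pos))
    for i, child in enumerate(elem):
        store(child, nid, i)
    return nid

def rebuild(nid):
    """Restore an element (and its children, in document order) from rows."""
    name, value = conn.execute(
        "SELECT name, value FROM node WHERE id = ?", (nid,)).fetchone()
    elem = ET.Element(name)
    elem.text = value or None
    for (cid,) in conn.execute(
            "SELECT id FROM node WHERE parent = ? ORDER BY position", (nid,)):
        elem.append(rebuild(cid))
    return elem

root = ET.fromstring("<order><item>book</item><qty>2</qty></order>")
print(ET.tostring(rebuild(store(root))).decode())
# <order><item>book</item><qty>2</qty></order>
```
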
8

Köhler, Henning. "Global Database Design based on Storage Space and Update Time Minimization." JUCS - Journal of Universal Computer Science 15, no. 1 (2009): 195–240. https://doi.org/10.3217/jucs-015-01-0195.

Full text
Abstract:
A common approach in designing relational databases is to start with a universal relation schema, which is then decomposed into multiple subschemas. A good choice of subschemas can be determined using integrity constraints defined on the schema, such as functional, multivalued or join dependencies. In this paper we propose and analyze a new normal form based on the idea of minimizing overall storage space and update costs, and as a consequence redundancy as well. This is in contrast to existing normal forms such as BCNF, 4NF or KCNF, which only characterize the absence of redundancy (and thus space and update time minimality) for a single schema. We show that our new normal form naturally extends existing normal forms to multiple schemas, and provide an algorithm for computing decompositions.
APA, Harvard, Vancouver, ISO, and other styles
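
For orientation, the classic starting point that this paper generalizes is dependency-preserving synthesis from a universal relation schema: group the functional dependencies and make each group a subschema, adding a key schema if none contains one. Below is a minimal Python sketch of that baseline (it assumes the FDs are already a minimal cover, and it is not Köhler's algorithm or normal form).

```python
def closure(attrs, fds):
    """Closure of an attribute set under functional dependencies."""
    result = set(attrs)
    while True:
        extra = {a for lhs, rhs in fds if lhs <= result for a in rhs}
        if extra <= result:
            return result
        result |= extra

def synthesize(schema, fds):
    """3NF-style synthesis: one subschema per left-hand side, plus a
    subschema containing a key if no subschema already holds one.
    Assumes `fds` is already a minimal cover."""
    groups = {}
    for lhs, rhs in fds:
        groups.setdefault(frozenset(lhs), set()).update(rhs)
    subschemas = [set(lhs) | rhs for lhs, rhs in groups.items()]
    if not any(closure(s, fds) == schema for s in subschemas):
        key = set(schema)
        for a in sorted(schema):  # greedily shrink to a minimal key
            if closure(key - {a}, fds) == schema:
                key -= {a}
        subschemas.append(key)
    return subschemas

schema = {"A", "B", "C", "D"}
fds = [({"A"}, {"B"}), ({"B"}, {"C"})]
print(synthesize(schema, fds))  # [{'A', 'B'}, {'B', 'C'}, {'A', 'D'}]
```
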
9

Zhang, Hai Fei. "A On-Demand Mapping Method on Query Request in Relational Database Semantic Query." Applied Mechanics and Materials 411-414 (September 2013): 341–44. http://dx.doi.org/10.4028/www.scientific.net/amm.411-414.341.

Full text
Abstract:
Relational database semantic query, i.e., RDF-based access to relational databases, is an important issue in Semantic Web research. Realizing such queries requires building a mapping between the relational database schema and an ontology. However, there is a natural heterogeneity between the two. This heterogeneity can be eliminated by transforming the relational database schema into a similar ontology and then building a mapping between the conversion ontology and the input ontology. This paper realizes an on-demand mapping method triggered when users request a query, avoiding the construction of all concept and attribute mappings between the conversion ontology and the input ontology and thereby improving mapping efficiency.
APA, Harvard, Vancouver, ISO, and other styles
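
A tiny illustration of the kind of schema-to-ontology correspondence involved: in the common "direct mapping" style, a table becomes a class, a column a property, and each row an individual. The Python sketch below (names invented, not the paper's on-demand method) turns one row into RDF-style triples.

```python
def row_to_triples(table, pk, row):
    """Direct-mapping style: one subject per row, one triple per column."""
    subject = f"ex:{table}/{row[pk]}"
    triples = [(subject, "rdf:type", f"ex:{table}")]
    for column, value in row.items():
        if column != pk:
            triples.append((subject, f"ex:{table}#{column}", value))
    return triples

for t in row_to_triples("Employee", "id", {"id": 7, "name": "Alice", "dept": "Sales"}):
    print(t)
# ('ex:Employee/7', 'rdf:type', 'ex:Employee')
# ('ex:Employee/7', 'ex:Employee#name', 'Alice')
# ('ex:Employee/7', 'ex:Employee#dept', 'Sales')
```
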
10

Gerlach, Luisa, Tobias Köppl, Stefanie Scherzinger, Nicole Schweikardt, and René Zander. "A Quantum-Leap into Schema Matching: Beyond 1-to-1 Matchings." Proceedings of the ACM on Management of Data 3, no. 2 (2025): 1–24. https://doi.org/10.1145/3725226.

Full text
Abstract:
Schema matching refers to the task of identifying corresponding attributes of different database relation schemas to enable the efficient integration of the associated datasets. We model the task of finding suitable 1:N/N:1 global matchings in relational schemas as an optimization problem. We show that this optimization problem is NP-hard. We then translate the optimization problem into the problem of minimizing a particular rational-valued function on binary variables. The latter enables us to utilize modern quantum algorithms for solving the global matching problem, a crucial stage in schema matching. We also report on preliminary experimental results that serve as a proof-of-concept for our approach.
APA, Harvard, Vancouver, ISO, and other styles
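
To see what "translating matching into binary optimization" can look like, here is a toy Python version: binary variables select attribute correspondences, a quadratic penalty enforces the matching constraints, and a brute-force search minimizes the objective that a quantum solver would otherwise handle. The similarity scores and penalty form are invented, not the paper's formulation.

```python
from itertools import product

# Made-up similarities between attributes of schema S = (s0, s1)
# and schema T = (t0, t1, t2).
sim = {(0, 0): 0.9, (0, 1): 0.1, (0, 2): 0.2,
       (1, 0): 0.2, (1, 1): 0.8, (1, 2): 0.7}
PENALTY = 2.0  # weight of "each T-attribute matches at most one S-attribute"

def cost(x):
    """Quadratic objective over binary match variables x[(i, j)]."""
    c = -sum(sim[ij] * x[ij] for ij in sim)       # reward selected matches
    for j in range(3):                            # 1:N lets an S-attribute take
        picks = sum(x[(i, j)] for i in range(2))  # several T-attributes, but each
        c += PENALTY * max(0, picks - 1) ** 2     # T-attribute only one S-side
    return c

best = min((dict(zip(sim, bits)) for bits in product((0, 1), repeat=len(sim))),
           key=cost)
print({ij for ij, v in best.items() if v})  # {(0, 0), (1, 1), (1, 2)}
```
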
11

Ferrarotti, Flavio, Alejandra Paoletti, and José María Turull Torres. "Redundant Relations in Relational Databases: A Model Theoretic Perspective." JUCS - Journal of Universal Computer Science 16, no. 20 (2010): 2934–55. https://doi.org/10.3217/jucs-016-20-2934.

Full text
Abstract:
We initiate in this work the study of a sort of redundancy problem revealed by what we call redundant relations. Roughly, we define a redundant relation in a database instance (dbi) as a k-ary relation R such that there is a first-order query which, evaluated in the reduced dbi (i.e., the dbi without the redundant relation R), gives us R. So, given that first-order types are isomorphism types on finite structures, we can eliminate that relation R as long as the equivalence classes of the relation of equality of the first-order types for all k-tuples in the dbi are not altered. It turns out that in a fixed dbi, the problem of deciding whether a given relation in the dbi is redundant is decidable, though intractable; the same holds for the problem of deciding whether there is any relation symbol in the schema which is a redundant relation in the given dbi. We then study redundant relations with a restricted notion of equivalence so that the problem becomes tractable.
APA, Harvard, Vancouver, ISO, and other styles
12

Paul, Norbert, and Patrick E. Bradley. "Integrating Space, Time, Version, and Scale using Alexandrov Topologies." International Journal of 3-D Information Modeling 4, no. 4 (2015): 64–85. http://dx.doi.org/10.4018/ij3dim.2015100104.

Full text
Abstract:
This article introduces a novel approach to higher dimensional spatial database design. Instead of extending the canonical Solid–Face–Edge–Vertex schema of topological data, these classes are replaced altogether by a common type SpatialEntity, and the individual “bounded-by” relations between two consecutive classes are replaced by one separate binary relation BoundedBy on SpatialEntity which defines a so-called Alexandrov topology on SpatialEntity and thus exposes mathematical principles of spatial data design. This has important consequences: First, a mathematical definition of topological “dimension” for spatial data can be given. Second, every topology for data of arbitrary dimension has such a simple representation. Also, version histories have a canonical Alexandrov topology, and generalizations can be consistently modeled by continuous foreign keys between LoDs. The result is a relational database schema for spatial data of dimension 6 and more which seamlessly integrates 4D space-time, levels of details and version history. Topological constructions enable queries across these different aspects.
APA, Harvard, Vancouver, ISO, and other styles
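
The single BoundedBy relation the article describes is straightforward to prototype. In the sketch below (data invented), the topological closure of an entity under the Alexandrov topology induced by BoundedBy is just the entity plus its transitive boundary, computed by graph traversal.

```python
from collections import defaultdict

# BoundedBy(entity, bounding_entity): a solid is bounded by faces, a face
# by edges, an edge by vertices -- all held in ONE relation.
bounded_by = [("solid1", "faceA"), ("solid1", "faceB"),
              ("faceA", "edge1"), ("faceB", "edge1"),
              ("edge1", "v1"), ("edge1", "v2")]

succ = defaultdict(set)
for entity, bound in bounded_by:
    succ[entity].add(bound)

def topo_closure(entity):
    """Closure of an entity in the Alexandrov topology induced by BoundedBy:
    the entity together with its transitive boundary."""
    seen, stack = {entity}, [entity]
    while stack:
        for nxt in succ[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

print(sorted(topo_closure("faceA")))   # ['edge1', 'faceA', 'v1', 'v2']
print(sorted(topo_closure("solid1")))  # the solid plus its whole boundary
```
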
13

Falchi, Ugo. "IT tools for the management of multi - representation geographical information." International Journal of Engineering & Technology 7, no. 1 (2018): 65. http://dx.doi.org/10.14419/ijet.v7i1.8810.

Full text
Abstract:
The goal of this research was the creation of software tools for managing instances of a multi-representation geodatabase, able to define multiple representations and topological constraints in relation to objects modeled and structured according to the classification of the Italian national technical specifications of the Ministerial Decree of November 10, 2011. After the development of a conceptual schema, encoded in its corresponding logical model, various software artifacts were designed and developed from scratch to perform the upload, management and display of data: a Schema Designer, which allows users to define the logical model and implement the physical model of an Oracle instance; a Loader, which allows users to populate the database; a GUI, which is a graphical interface to the Schema Designer and Loader tools; and a DB Navigator, which is the web interface to the multi-representation database.
APA, Harvard, Vancouver, ISO, and other styles
14

Nguyen, Hoa. "A probabilistic relational database model and algebra." Journal of Computer Science and Cybernetics 31, no. 4 (2015): 305. http://dx.doi.org/10.15625/1813-9663/31/4/5742.

Full text
Abstract:
This paper introduces a probabilistic relational database model, called PRDB, for representing and querying uncertain information about objects in practice. To develop the PRDB model, we first represent a relational attribute value as a pair of probabilistic distributions on a set, modeling the possibility that the attribute takes one of the values of the set with a probability belonging to the interval inferred from the pair of distributions. Next, on the basis of this representation of attribute values, we formally define the notions of schema, relation, probabilistic functional dependency and probabilistic relational algebraic operations for PRDB. In addition, a set of properties of the probabilistic relational algebraic operations in PRDB is formulated and proven.
APA, Harvard, Vancouver, ISO, and other styles
15

Jose, Benymol, Rajesh N., and Lumy Joseph. "Enhanced query performance for stored streaming data through structured streaming within spark SQL." Indonesian Journal of Electrical Engineering and Computer Science 35, no. 3 (2024): 1744–50. http://dx.doi.org/10.11591/ijeecs.v35.i3.pp1744-1750.

Full text
Abstract:
Traditional database systems such as relational databases store data that is structured with a predefined schema, but big data comes in different formats and is collected from diverse sources. Distributed databases such as NoSQL ("Not only SQL") repositories are often used for big data analytics, but businesses require continual updating because of streaming data that arrives in real time from stock trading, the online activities of website visitors, and mobile applications. A business cannot wait for a report to show up before assessing and analysing the current situation and moving forward with the next decision. Apache Spark's structured streaming offers capabilities for handling streaming data in a batch processing mode with faster responses compared to MongoDB, a document-based NoSQL database. This study runs similar queries to evaluate Spark SQL and NoSQL database performance, focusing on the advantages of Spark SQL over NoSQL databases in streaming data exploration. The queries are executed on streaming data stored in batch mode.
APA, Harvard, Vancouver, ISO, and other styles
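
For context, here is a minimal PySpark sketch of the mechanism the study relies on: registering a streaming DataFrame as a view and querying it with ordinary SQL. It uses the built-in rate source in place of a real feed and is not the paper's benchmark setup.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("stream-sql-sketch").getOrCreate()

# The built-in "rate" source emits (timestamp, value) rows continuously,
# standing in here for a real feed such as Kafka.
stream = (spark.readStream.format("rate")
          .option("rowsPerSecond", 10)
          .load())

# A streaming DataFrame can be registered and queried with plain SQL.
stream.createOrReplaceTempView("events")
counts = spark.sql("""
    SELECT window(timestamp, '10 seconds') AS win, COUNT(*) AS n
    FROM events
    GROUP BY window(timestamp, '10 seconds')
""")

query = (counts.writeStream
         .outputMode("complete")   # aggregations re-emit the full result
         .format("console")
         .start())
query.awaitTermination(30)  # let the demo run for ~30 seconds
query.stop()
spark.stop()
```
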
16

Demba, Moussa. "KeyFinder: An Efficient Minimal Keys Finding Algorithm For Relational Databases." Inteligencia Artificial 24, no. 68 (2021): 37–52. http://dx.doi.org/10.4114/intartif.vol24iss68pp37-52.

Full text
Abstract:
In relational databases, it is essential to know all minimal keys since the concept of database normalization is based on keys and functional dependencies of a relation schema. Existing algorithms for determining keys or computing the closure of arbitrary sets of attributes are generally time-consuming. In this paper we present an efficient algorithm, called KeyFinder, for solving the key-finding problem. We also propose a more direct method for computing the closure of a set of attributes. KeyFinder is based on a powerful proof procedure for finding keys called tableaux. Experimental results show that KeyFinder outperforms its predecessors in terms of search space and execution time.
APA, Harvard, Vancouver, ISO, and other styles
17

Yunda Adisa and Muhammad Irwan Padli Nasution. "Konsep Dan Peran Sistem Manajemen Basis Data Relasional Pada Sistem Informasi Manajemen." Masip: Jurnal Manajemen Administrasi Bisnis dan Publik Terapan 1, no. 3 (2023): 76–83. https://doi.org/10.59061/masip.v1i3.314.

Full text
Abstract:
The rise of digital systems has made information increasingly important in the current era of globalization. Technological facilities and infrastructure strongly support the delivery of information properly and correctly. This is exemplified by the use of computers in people's lives to facilitate work and everyday tasks. The role of a computer-based management information system is therefore much needed in order to provide a competitive advantage. With an information system in place, an agency, institution or organization adds value by obtaining, managing, distributing and utilizing information competitively and efficiently, with the aim of improving organizational performance and making decisions to achieve the organization's goals. If the information system is accurate and relevant, it can be used to support company decision making. However, a bad database system will produce problems, so the system must be continuously reviewed and modified to maintain its quality. Relational schemas and databases are therefore needed to store information properly and to make decisions easily.
APA, Harvard, Vancouver, ISO, and other styles
18

Tan, Jun, and Hai Ming Zhao. "Construction of Data Warehouse Platform in Continual Quality Improvement." Applied Mechanics and Materials 519-520 (February 2014): 13–16. http://dx.doi.org/10.4028/www.scientific.net/amm.519-520.13.

Full text
Abstract:
Aiming at improving product quality continually, we propose an association rules mining system (ARMS) based on the idea of the PDCA cycle. A data warehouse is very useful for integrating heterogeneous databases. Therefore, this paper designs a data warehouse platform as the process data exchange module in ARMS. The role of the data warehouse platform module is to integrate XML with enterprise processes to realize process data exchange among departments. In the design of the data warehouse, this paper chooses a three-tier data warehouse structure and a snowflake schema to represent the complex relations between process data.
APA, Harvard, Vancouver, ISO, and other styles
19

Demetrovics, János, Vu Duc Thi, Nguyen Long Giang, and Tran Huy Duong. "On the Time Complexity of the Problem Related to Reducts of Consistent Decision Tables." Serdica Journal of Computing 9, no. 2 (2016): 167–76. http://dx.doi.org/10.55630/sjc.2015.9.167-176.

Full text
Abstract:
In recent years, computational issues concerning reducts of decision tables in the rough set approach have attracted the attention of many researchers. In this paper, we present the time complexity of an algorithm computing reducts of decision tables by a relational database approach. Let DS = (U, C ∪ {d}) be a consistent decision table; we say that A ⊆ C is a relative reduct of DS if A contains a reduct of DS. Let s = <C ∪ {d}, F> be a relation schema on the attribute set C ∪ {d}; we say that A ⊆ C is a relative minimal set of the attribute d if A contains a minimal set of d. Let Qd be the family of all relative reducts of DS, and Pd be the family of all relative minimal sets of the attribute d on s. We prove that the problem of deciding whether Qd ⊆ Pd is co-NP-complete, whereas the problem of deciding whether Pd ⊆ Qd is in P.
APA, Harvard, Vancouver, ISO, and other styles
20

Mattiello, Elisa, and Wolfgang U. Dressler. "The Morphosemantic Transparency/Opacity of Novel English Analogical Compounds and Compound Families." Studia Anglica Posnaniensia 53, no. 1 (2018): 67–114. http://dx.doi.org/10.2478/stap-2018-0004.

Full text
Abstract:
This study deals with novel English analogical compounds, i.e. compounds obtained via either a unique model (e.g. beefcake after cheesecake) or a schema model: e.g., green-collar based on white-collar, blue-collar, pink-collar, and other X-collar compounds. The study aims, first, to inspect whether novel analogical compounds maintain the same degree of morphosemantic transparency/opacity as their models, and, second, to find out the role played by the compound constituents in the constitution of compound families, such as X-collar and others. To these aims, the study proposes a scale of morphosemantic transparency/opacity for the analysis of compound constituents. In particular, the compound constituents in our database (115 examples) are analysed in connection with: 1) their degree of transparency (vs. opacity, including metaphorical/metonymic meaning), linked to their semantic contribution in the construction of the whole compound's meaning, and 2) their part-of-speech. Against the common assumption that productive word-formation rules mostly create morphosemantically transparent new words, or that rule productivity is closely connected with transparency, the study of our database demonstrates that novel analogical compounds tend to maintain the same transparency/opacity degree as their models. It also shows that, in nuclear families and subfamilies of compounds, the part-of-speech of the constituents, their degree of transparency/opacity, and their semantic relation are reproduced in all members of the analogical set.
APA, Harvard, Vancouver, ISO, and other styles
21

Yesin, V. I., and V. V. Vilihura. "Method for developing databases being easily adaptable to changes in the subject domain." Radiotekhnika, no. 195 (December 28, 2018): 184–92. http://dx.doi.org/10.30837/rt.2018.4.195.18.

Full text
Abstract:
A method for developing relational databases with a schema invariant to subject domains is proposed. Unlike the traditional technology for designing relational databases, this method allows: creating databases for various simulated subject domains that meet the requirements of consumers of the information product in the process of reengineering, with lower time and financial costs; and adapting relational databases built on a schema with a universal basis of relations to dynamic changes in subject domains, without changing the database schema, thanks to the predetermined structure of the basic relations.
APA, Harvard, Vancouver, ISO, and other styles
22

Aina Nuwaid, An-Nisa, Encep Supriatna, and Elsa Fauziah. "Rancang Bangun Sistem Informasi Pengarsipan di PLN Rayon Rancaekek." Jurnal Dimamu 1, no. 2 (2022): 149–57. http://dx.doi.org/10.32627/dimamu.v1i2.473.

Full text
Abstract:
In today's era, information systems are an important factor in a company, including in the field of archiving. Archiving at the State Electricity Company (SEC) Rayon Rancaekek still uses a semi-computerized method, which raises several problems: difficulty in finding data; transaction files not yet integrated with reports, so reports cannot be generated automatically; a slow data calculation process, because it is still done manually; the risk of data being entered twice or deleted; and lost or damaged documents. The purpose of this final project is to simplify data retrieval, integrate transaction files with reports so that reports can be presented after the transaction process, streamline the calculation of archived data, avoid duplicate input and deleted data, and anticipate lost or damaged documents. The research and system development method used is the System Development Life Cycle (SDLC) with the Waterfall model. The system is built using Microsoft Visual FoxPro 9.0 and a FoxPro database. The system analysis and design tools used in this final project are flow maps, Data Flow Diagrams (DFD), Entity Relationship Diagrams (ERD), the database relation schema, and Structure Charts. With the system design made, it can be concluded that searching archive data becomes easier, all files are integrated so that reports can be presented at any time, the data calculation process can be carried out more effectively, duplicate input and the risk of data loss can be anticipated, and the loss of or damage to documents can be anticipated.
APA, Harvard, Vancouver, ISO, and other styles
23

Yesin, V. I. "Database schema invariant to subject domains and its distinctive features." Radiotekhnika, no. 193 (May 15, 2018): 133–42. http://dx.doi.org/10.30837/rt.2018.2.193.13.

Full text
Abstract:
The desire to avoid unnecessary costs inherent in the traditional methodology of creating relational databases, not only at the conceptual and logical design stages but also at the physical design stage, has motivated the task of developing a database schema that is invariant to subject domains. The main distinguishing elements of this scheme are: the composition and structure of the basic relations; the implementation of integrity constraints; and the means of ensuring database security.
APA, Harvard, Vancouver, ISO, and other styles
24

Atserias, Albert, and Phokion G. Kolaitis. "Structure and Complexity of Bag Consistency." ACM SIGMOD Record 51, no. 1 (2022): 78–85. http://dx.doi.org/10.1145/3542700.3542719.

Full text
Abstract:
Since the early days of relational databases, it was realized that acyclic hypergraphs give rise to database schemas with desirable structural and algorithmic properties. In a by-now classical paper, Beeri, Fagin, Maier, and Yannakakis established several different equivalent characterizations of acyclicity; in particular, they showed that the sets of attributes of a schema form an acyclic hypergraph if and only if the local-to-global consistency property for relations over that schema holds, which means that every collection of pairwise consistent relations over the schema is globally consistent. Even though real-life databases consist of bags (multisets), there has not been a study of the interplay between local consistency and global consistency for bags. We embark on such a study here and we first show that the sets of attributes of a schema form an acyclic hypergraph if and only if the local-to-global consistency property for bags over that schema holds. After this, we explore algorithmic aspects of global consistency for bags by analyzing the computational complexity of the global consistency problem for bags: given a collection of bags, are these bags globally consistent? We show that this problem is in NP, even when the schema is part of the input. We then establish the following dichotomy theorem for fixed schemas: if the schema is acyclic, then the global consistency problem for bags is solvable in polynomial time, while if the schema is cyclic, then the global consistency problem for bags is NP-complete. The latter result contrasts sharply with the state of affairs for relations, where, for each fixed schema, the global consistency problem for relations is solvable in polynomial time.
APA, Harvard, Vancouver, ISO, and other styles
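
Since the dichotomy turns on schema acyclicity, a compact way to make that condition concrete is the classical GYO ear-removal test, sketched below in Python with invented schemas (the triangle schema is the canonical cyclic case).

```python
def is_acyclic(schema):
    """GYO reduction: repeatedly remove 'ears' (hyperedges whose attributes
    shared with the rest all lie inside one other hyperedge). The schema is
    acyclic iff at most one hyperedge survives."""
    edges = [set(e) for e in schema]
    changed = True
    while changed and len(edges) > 1:
        changed = False
        for i, e in enumerate(edges):
            rest = edges[:i] + edges[i + 1:]
            shared = {a for a in e if any(a in f for f in rest)}
            if any(shared <= f for f in rest):  # e is an ear: remove it
                edges.pop(i)
                changed = True
                break
    return len(edges) <= 1

print(is_acyclic([{"A", "B"}, {"B", "C"}, {"B", "D"}]))  # True  (star on B)
print(is_acyclic([{"A", "B"}, {"B", "C"}, {"A", "C"}]))  # False (triangle)
```
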
25

Baldin, Alexander V., and Dmitriy V. Eliseev. "Processing Archive Information in Digital University." ITM Web of Conferences 35 (2020): 02003. http://dx.doi.org/10.1051/itmconf/20203502003.

Full text
Abstract:
The article discusses methods for processing and storing archived data used in a digital university. Disadvantages of these methods are identified. As a result, a fundamentally new method is proposed for processing and storing archived information in a database with a constantly changing schema. This method uses mivar technologies. A multidimensional space structure has been developed to store the archived data; this multidimensional space describes the temporal relational model. For data processing, a scheme is proposed for selecting a subspace and converting it into relations. A method of transforming relational databases into multidimensional mivar space is proposed for efficient execution of operations on temporal data with a changing structure. The transition to a multidimensional space allows the process of changing temporal data and their structure to be described in a unified way. As a result, the time required to adapt the database schema and the redundancy of information storage are reduced. The results of this work are used in the human resource management database of BMSTU.
APA, Harvard, Vancouver, ISO, and other styles
26

LARSEN, KIM S. "SORT ORDER PROBLEMS IN RELATIONAL DATABASES." International Journal of Foundations of Computer Science 9, no. 4 (1998): 399–429. http://dx.doi.org/10.1142/s0129054198000313.

Full text
Abstract:
A relation of degree k can be sorted lexicographically in k! different ways, i.e., according to any one of the possible permutations of the schema of the relation. Such permutations are referred to as sort orders. When evaluating unary and binary relational algebra operators using sort-merge algorithms, sort orders fulfilling the constraints enforced by the operators are chosen for the operand relations. The relations are then sorted according to their assigned sort orders, and the result is obtained by merging. Should the operands already be sorted according to one of the permissible sort orders, then only a merging is required. The sort order of the result will depend on the sort orders of the operands. When evaluating whole relational algebra expressions, the result of one operation will be used as an operand to the next. It is desirable to choose sort orders in such a way that the result of one operation will automatically fulfill the requirements of the next. In general, one would like to find a minimal number of operators in the expression for which this cannot be obtained, bearing in mind the overall goal of minimizing the total work. We show that this problem is NP-hard, and that the corresponding decision problem is NP-complete. However, most simplifications of the original problem give rise to efficient algorithms. In fact, most frequently occurring queries can be analyzed in linear time in the size of the query. This is due to the fact that only a very limited number of subsets of all permutations of schemas can be encountered in the algorithms, which means that compact representations for these subsets can be found.
APA, Harvard, Vancouver, ISO, and other styles
27

Oleynik, Andrey G. "Storage organization for "open" sets of entity attributes in relational databases." Transaction Kola Science Centre 12, no. 5-2021 (2021): 128–39. http://dx.doi.org/10.37614/2307-5252.2021.5.12.011.

Full text
Abstract:
Relations are implemented in practice by database management systems in the form of two-dimensional tables. Because of this, certain difficulties arise in developing relational database schemas that must represent objects with an alterable (open) set of attributes. The article proposes a solution to this problem by including special relations in the schema: properties-directory relations. A properties directory allows the set of object attributes to be extended without changing the structure of the database. Examples of the practical use of a properties directory in developing the database schemas of two information systems are presented.
APA, Harvard, Vancouver, ISO, and other styles
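
The properties-directory idea is essentially the entity-attribute-value pattern: attributes live as rows in a directory relation, so a new attribute never requires ALTER TABLE. A minimal sqlite3 sketch (table names invented, not the article's schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE object (id INTEGER PRIMARY KEY, kind TEXT);
    -- the properties directory: attributes are rows, not columns
    CREATE TABLE property (id INTEGER PRIMARY KEY, name TEXT, unit TEXT);
    CREATE TABLE attr_value (object INTEGER REFERENCES object,
                             property INTEGER REFERENCES property,
                             val TEXT);
""")
conn.execute("INSERT INTO object VALUES (1, 'sensor')")
conn.execute("INSERT INTO property VALUES (1, 'depth', 'm')")
conn.execute("INSERT INTO attr_value VALUES (1, 1, '42.5')")

# A brand-new attribute later on needs no ALTER TABLE at all:
conn.execute("INSERT INTO property VALUES (2, 'salinity', 'PSU')")
conn.execute("INSERT INTO attr_value VALUES (1, 2, '34.1')")

for row in conn.execute("""
        SELECT p.name, v.val, p.unit
        FROM attr_value v JOIN property p ON p.id = v.property
        WHERE v.object = 1"""):
    print(row)  # ('depth', '42.5', 'm') then ('salinity', '34.1', 'PSU')
```
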
28

Yesin, Vitalii, Mikolaj Karpinski, Maryna Yesina, Vladyslav Vilihura, and Kornel Warwas. "Ensuring Data Integrity in Databases with the Universal Basis of Relations." Applied Sciences 11, no. 18 (2021): 8781. http://dx.doi.org/10.3390/app11188781.

Full text
Abstract:
The objective of the paper was to reveal the main techniques and means of ensuring the integrity of data and persistent stored database modules implemented in accordance with the recommendations of the Clark–Wilson model as a methodological basis for building a system that ensures integrity. The considered database was built according to the schema with the universal basis of relations. The mechanisms developed in the process of researching the problem of ensuring the integrity of the data and programs of such a database were based on the provisions of the relational database theory, the Row Level Security technology, the potential of the modern blockchain model, and the capabilities of the database management system on the platform of which databases with the universal basis of relations are implemented. The implementation of the proposed techniques and means, controlling the integrity of the database of stored elements, prevents their unauthorized modification by authorized subjects and hinders the introduction of changes by unauthorized subjects. As a result, the stored data and programs remain correct, unaltered, undistorted, and preserved. This means that databases built based on a schema with the universal basis of relations and supported by such mechanisms are protected in terms of integrity.
APA, Harvard, Vancouver, ISO, and other styles
29

Khan, Aihab, and Syed Afaq Husain. "A Fragile Zero Watermarking Scheme to Detect and Characterize Malicious Modifications in Database Relations." Scientific World Journal 2013 (2013): 1–16. http://dx.doi.org/10.1155/2013/796726.

Full text
Abstract:
We put forward a fragile zero watermarking scheme to detect and characterize malicious modifications made to a database relation. Most of the existing watermarking schemes for relational databases introduce intentional errors or permanent distortions as marks into the database's original content. These distortions inevitably degrade data quality and data usability, as the integrity of the relational database is violated. Moreover, existing fragile schemes can detect malicious data modifications but do not characterize the tampering attack, that is, the nature of the tampering. The proposed fragile scheme is based on a zero watermarking approach to detect malicious modifications made to a database relation. In zero watermarking, the watermark is generated (constructed) from the contents of the original data rather than by introducing permanent distortions as marks into the data. As a result, the proposed scheme is distortion-free; thus, it also resolves the inherent conflict between security and imperceptibility. The proposed scheme also characterizes malicious data modifications to quantify the nature of tampering attacks. Experimental results show that even minor malicious modifications made to a database relation can be detected and characterized successfully.
APA, Harvard, Vancouver, ISO, and other styles
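
Generic zero watermarking is easy to sketch: register per-group digests computed from the relation's own content, then recompute them to detect which group was tampered with. The grouping and hashing choices below are invented for illustration and are not the authors' exact construction.

```python
import hashlib

def zero_watermark(rows, key, groups=4):
    """Distortion-free watermark: one keyed digest per group of tuples,
    derived from the data itself (nothing is embedded in the relation)."""
    digests = [hashlib.sha256(f"{key}:{g}".encode()) for g in range(groups)]
    for pk, fields in rows:
        digests[pk % groups].update(repr((pk, fields)).encode())
    return [d.hexdigest() for d in digests]

def verify(rows, key, registered):
    """Recompute per-group digests; mismatching groups localize tampering."""
    current = zero_watermark(rows, key, len(registered))
    return [g for g, (a, b) in enumerate(zip(registered, current)) if a != b]

rows = [(1, ("alice", 500)), (2, ("bob", 700)),
        (3, ("carol", 300)), (4, ("dan", 900))]
wm = zero_watermark(rows, key="secret")
rows[2] = (3, ("carol", 999))      # malicious modification of tuple 3
print(verify(rows, "secret", wm))  # [3] -> the group holding tuple 3
```
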
30

Singh, Priyank Kumar, Sami Ur Rehman, Darshan J., Shobha G., and Deepamala N. "Automated dynamic schema generation using knowledge graph." IAES International Journal of Artificial Intelligence (IJ-AI) 11, no. 4 (2022): 1261–69. https://doi.org/10.11591/ijai.v11.i4.pp1261-1269.

Full text
Abstract:
On the internet, the number of database developers is increasing along with the availability of huge amounts of data to be stored and queried. Establishing relations between various schemas and helping developers by filtering, prioritizing, and suggesting relevant schemas is a requirement. Recommendation systems play an important role in searching through a large volume of dynamically generated schemas to provide database developers with personalized schemas and services. Although many methods are already available to solve such problems using machine learning, they require more time and data to learn. These problems can be solved using knowledge graphs (KG). This paper investigates building knowledge graphs to recommend schemas.
APA, Harvard, Vancouver, ISO, and other styles
31

Thi, Vũ Đức. "Some computational problems related to normal forms." Journal of Computer Science and Cybernetics 13, no. 1 (2016): 53–65. http://dx.doi.org/10.15625/1813-9663/13/1/7983.

Full text
Abstract:
In relational database theory the most desirable normal form is the Boyce-Codd normal form (BCNF). This paper investigates some computational problems concerning BCNF relation schemes and BCNF relations. We give an effective algorithm finding a BCNF relation r such that r represents a given BCNF relation scheme s (i.e., Kr = Ks, where Kr and Ks are the sets of all minimal keys of r and s). This paper also gives an effective algorithm which, from a given BCNF relation, finds a BCNF relation scheme such that Kr = Ks. Based on these algorithms we prove that the time complexity of the problem of finding a BCNF relation r representing a given BCNF relation scheme s is exponential in the size of s and, conversely, that the complexity of finding a BCNF relation scheme s from a given BCNF relation r such that r represents s is also exponential in the number of attributes. We give a new characterization of the relations and the relation schemes that are uniquely determined by their minimal keys. It is known that these relations and relation schemes are in the BCNF class. From this characterization we give a polynomial-time algorithm deciding whether an arbitrary relation is uniquely determined by its set of all minimal keys. In the rest of this paper some new bounds on the size of minimal Armstrong relations for BCNF relation schemes are given. We show that, given a Sperner system K and a BCNF relation scheme s whose set of minimal keys is K, the number of antikeys (maximal nonkeys) of K is polynomial in the number of attributes iff so is the size of a minimal Armstrong relation of s.
APA, Harvard, Vancouver, ISO, and other styles
32

Adji, Teguh Bharata, Dwi Retno Puspita Sari, and Noor Akhmad Setiawan. "Relational into Non-Relational Database Migration with Multiple-Nested Schema Methods on Academic Data." IJITEE (International Journal of Information Technology and Electrical Engineering) 3, no. 1 (2019): 16. http://dx.doi.org/10.22146/ijitee.46503.

Full text
Abstract:
The rapid development of internet technology has increased the need for data storage and processing technology. One application is managing academic data records at educational institutions. Along with the massive growth of information, a decline in traditional database performance is inevitable. Hence, many companies choose to migrate to NoSQL, a technology that is able to overcome the traditional database's shortcomings. However, existing SQL-to-NoSQL migration tools have not been able to represent SQL data relations in NoSQL without limiting query performance. In this paper, a relational database transformation system transforming MySQL into the non-relational database MongoDB was developed, using the Multiple Nested Schema method for academic databases. The development began with a transformation scheme design. The transformation scheme was then implemented in the migration process using PDI/Kettle. Testing was carried out on three aspects: query response time, data integrity, and storage requirements. The test results showed that the developed system successfully represented the relationships of SQL data in NoSQL, provided complex query performance 13.32 times faster on the migrated database, basic query performance involving SQL transaction tables 28.6 times faster on the migration result, and basic query performance not involving SQL transaction tables 3.91 times faster on the migration source. This confirms the premise of the Multiple Nested Schema method, which aims to overcome the poor performance of queries involving many JOIN operations. In addition, the system was proven to maintain data integrity in all tested queries. The storage test results indicated that the migrated database transformed using the Multiple Nested Schema method required 10.53 times more storage than the source database, due to the large amount of data redundancy resulting from the transformation process. However, at present, storage performance is not a top priority in data processing technology, so large storage requirements are an accepted consequence of obtaining efficient query performance, which is still considered the first priority.
APA, Harvard, Vancouver, ISO, and other styles
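
The heart of such migrations is the embedding step: child rows are folded into their parent documents so that a former JOIN becomes a single document read, at the cost of redundancy. A minimal Python sketch with invented academic data (plain dicts standing in for MongoDB documents; not the full Multiple Nested Schema method):

```python
students = [{"id": 1, "name": "Ani"}, {"id": 2, "name": "Budi"}]
grades = [{"student_id": 1, "course": "DB", "mark": "A"},
          {"student_id": 1, "course": "OS", "mark": "B"},
          {"student_id": 2, "course": "DB", "mark": "A"}]

def embed(parents, children, fk, as_field):
    """Denormalize a 1:N relationship: nest each child row inside its
    parent document, turning a former SQL JOIN into one document read."""
    by_parent = {}
    for child in children:
        row = {k: v for k, v in child.items() if k != fk}
        by_parent.setdefault(child[fk], []).append(row)
    return [dict(p, **{as_field: by_parent.get(p["id"], [])}) for p in parents]

docs = embed(students, grades, fk="student_id", as_field="grades")
print(docs[0])
# {'id': 1, 'name': 'Ani', 'grades': [{'course': 'DB', 'mark': 'A'},
#                                     {'course': 'OS', 'mark': 'B'}]}
```
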
33

Luu, Thi Bich Huong. "WATERMARKING SCHEME BASED ON MOST SIGNIFICANT BIT FOR PUBLIC COPYRIGHT PROTECTION FOR RELATIONAL DATABASES." Vinh University Journal of Science 53, no. 3A (2024): 73–79. http://dx.doi.org/10.56824/vujs.2024a032a.

Full text
Abstract:
The paper presents a watermarking scheme based on the most significant bit for public copyright protection of relational databases. The watermark scheme can openly prove data copyright as often as desired. The scheme is stable against common attacks such as adding, editing, and deleting data values randomly or selectively. Keywords: public copyright protection; watermark; relational database.
APA, Harvard, Vancouver, ISO, and other styles
34

Qian, Hai Zhong, and Su Bin Shen. "Bridging the Mapping between Ontology and Relation Schemas Based on ORMapping Technology." Applied Mechanics and Materials 58-60 (June 2011): 1523–28. http://dx.doi.org/10.4028/www.scientific.net/amm.58-60.1523.

Full text
Abstract:
Ontology plays a key role in areas such as knowledge engineering, artificial intelligence, information retrieval, the Semantic Web and web services. It is important to recover the domain-specific knowledge held in relational databases into semantic form, especially in the field of ontology learning. Previous work has shown that ontologies can be learned from relational databases; however, the presented approaches still have some limitations. In this paper, we present an ontology learning method based on Object Relation Mapping (ORM) that shows how database sources can be mapped to an ontology and how the details of objects, such as class hierarchies, relationships and properties, can be generated.
APA, Harvard, Vancouver, ISO, and other styles
35

Mahmood, Alza A. "Automated Algorithm for Data Migration from Relational to NoSQL Databases." Al-Nahrain Journal for Engineering Sciences 21, no. 1 (2018): 60. http://dx.doi.org/10.29194/njes21010060.

Full text
Abstract:
One of the barriers that the developer community faces when turning to the new, highly distributable, schema-agnostic class of non-relational databases called NoSQL is how to migrate their legacy relational databases (already filled with a large amount of data) into this new class of database management systems. This paper presents a new approach for converting an already-populated relational database of any database management system to any type of NoSQL database in the most optimized data structure form, without the need to specify the schema of tables and the relations between them. In addition, a simplified software prototype based on this algorithm is built to show the output and test the validity of the algorithm.
APA, Harvard, Vancouver, ISO, and other styles
36

Michalewicz, Zbigniew, and Alvin Yeo. "A Good Normal Form for Relational Databases." Fundamenta Informaticae 12, no. 2 (1989): 129–38. http://dx.doi.org/10.3233/fi-1989-12202.

Full text
Abstract:
In the conceptual design of relational databases one of the main goals is to create a conceptual scheme which minimizes redundancies and eliminates deletion and addition anomalies, i.e., to create relation schemes in some good normal form. The study of relational databases has produced a host of normal forms: 2NF, 3NF, BCNF, Elementary-Key Normal Form, 4NF, Weak 4NF, PJ/NF, DK/NF, LTKNF, (3,3)NF, etc. There are two features which characterize these normal forms. First, they consider each relation separately. We believe that a normal form (which reflects the goodness of the conceptual design) should be related to the whole conceptual scheme. Second, the usefulness of all normal forms in relational database design has been based on the assumption that the data definition language (DDL) of a database management system (DBMS) is able to enforce key dependencies. However, different DDLs have different capabilities for defining constraints. In this paper we will discuss the design of conceptual relational schemes in general. We will also define a good normal form (GNF) which requires a minimally rich DDL; this normal form is based only on a primitive concept of constraints. We will not, however, discuss the normalization process itself – how one might, if possible, convert a relation scheme that is not in some normal form into a collection of relation schemes each of which is in that normal form.
APA, Harvard, Vancouver, ISO, and other styles
37

Suri, Pushpa, and Divyesh Sharma. "SCHEMA BASED STORAGE OF XML DOCUMENTS IN RELATIONAL DATABASES." International Journal on Web Service Computing (IJWSC) 4, no. 2 (2013): 23–28. https://doi.org/10.5281/zenodo.3592307.

Full text
Abstract:
XML (Extensible Markup Language) is emerging as a tool for representing and exchanging data over the internet. To store and query XML data, we can use two approaches: native XML databases or XML-enabled databases. In this paper we deal with XML-enabled databases, using relational databases to store XML documents. We focus on the mapping of XML DTDs into relations. Mapping involves three steps: 1) simplify complex DTDs; 2) build a DTD graph from the simplified DTDs; 3) generate the relational schema. We present an inlining algorithm for generating relational schemas from available DTDs. This algorithm also handles recursion in an XML document.
APA, Harvard, Vancouver, ISO, and other styles
38

Atserias, Albert, and Phokion G. Kolaitis. "Consistency of Relations over Monoids." Proceedings of the ACM on Management of Data 2, no. 2 (2024): 1–15. http://dx.doi.org/10.1145/3651608.

Full text
Abstract:
The interplay between local consistency and global consistency has been the object of study in several different areas, including probability theory, relational databases, and quantum information. For relational databases, Beeri, Fagin, Maier, and Yannakakis showed that a database schema is acyclic if and only if it has the local-to-global consistency property for relations, which means that every collection of pairwise consistent relations over the schema is globally consistent. More recently, the same result has been shown under bag semantics. In this paper, we carry out a systematic study of local vs. global consistency for relations over positive commutative monoids, which is a common generalization of ordinary relations and bags. Let K be an arbitrary positive commutative monoid. We begin by showing that acyclicity of the schema is a necessary condition for the local-to-global consistency property for K-relations to hold. Unlike the case of ordinary relations and bags, however, we show that acyclicity is not always sufficient. After this, we characterize the positive commutative monoids for which acyclicity is both necessary and sufficient for the local-to-global consistency property to hold; this characterization involves a combinatorial property of monoids, which we call the transportation property. We then identify several different classes of monoids that possess the transportation property. As our final contribution, we introduce a modified notion of local consistency of K-relations, which we call pairwise consistency up to the free cover. We prove that, for all positive commutative monoids K, even those without the transportation property, acyclicity is both necessary and sufficient for every family of K-relations that is pairwise consistent up to the free cover to be globally consistent.
APA, Harvard, Vancouver, ISO, and other styles
39

França Costa, Wilian, Raquel Sousa, Tereza Giannini, Bruno Albertini, and Antonio Saraiva. "New Requirements of Biodiversity Research for Metadata on Models and Sensors on the Internet of Things and Big Data Era." Biodiversity Information Science and Standards 2 (May 17, 2018): e25653. http://dx.doi.org/10.3897/biss.2.25653.

Full text
Abstract:
Important initiatives, such as the Convention on Biological Diversity's (CBD) Aichi targets, the United Nations' 2030 Agenda for Sustainable Development (and its Sustainable Development Goals) highlight the urgent need to stop the continuous and increasing loss of biodiversity. That requires an increase in the knowledge that will allow for sustainable use of natural resources. To accomplish that, detailed studies are needed to evaluate multiple species and regions. These studies demand great effort from professionals, searching for species and/or observing their behavior. In this case, the use of new monitoring devices could be beneficial in data collection and identification, optimizing the specialist effort to detect and observe species in-situ. With the advance of technology platforms for developing connected devices and sensors, associated with the evolution of the Internet of Things (IoT) concepts, and the advances of unmanned aerial vehicles (UAVs) and Wireless sensor networks (WSN), new scenarios in biodiversity studies are possible. The technology available now could allow studies applying relatively cheaper sensors with long-range (approx. 15 km), low power, low bit rate communication and up to 10-year battery life, using a Low Power Wide Area Network (LPWAN) and with capacity to run bio-acoustic or image processing detection. Platforms like Raspberry Pi or any other with signal processing capabilities can be applied (Hodgkinson and Young 2016). Sensor technologies protocols applied in IoT networks are usually simple and flexible. Common semantics and metadata definitions are necessary to extract information and representations to construct complex networks. Some of these metadata definitions can be adopted from the current Darwin Core schema. However, Darwin Core evolved based on enterprise technologies (i.e. XML) and relational database definitions, that usually need machines with significant bandwidth to transmit data. Today the technology scenario is taking another route, going from centralized to distributed architectures, occasionally applying non-relational and distributed databases, ready to deal with synchronization and eventual consistency problems. These distributed databases are usually employed to construct complex networks, where relation restrictions are not mandatory or, sometimes, even desired (Baggio et al. 2016). With these new techniques becoming a reality in biodiversity conservation studies, new metadata definitions are necessary. Those new metadata need to standardize and create a shared vocabulary that includes requirements for devices information exchange, data analytics, and model generation. Also, these new definitions could aggregate the Essential Biodiversity Variables (EBVs) concepts, that aim to identify the minimum of variables that can be used to inform scientists, managers and decision makers (Haase et al. 2018). For this reason, we propose the insertion of EBV definitions in the construction of sensor integration metadata and models characterization inside the Darwin Core metadata definitions (Fig. 1).
APA, Harvard, Vancouver, ISO, and other styles
42

Fong, Joseph, Herbert Shiu, and Jenny Wong. "Methodology for Data Conversion from XML Documents to Relations Using Extensible Stylesheet Language Transformation." International Journal of Software Engineering and Knowledge Engineering 19, no. 2 (2009): 249–81. http://dx.doi.org/10.1142/s0218194009004131.

Full text
Abstract:
Extensible Markup Language (XML) has been used for data transport and data transformation, while the business sector continues to store critical business data in relational databases. Extracting relational data and formatting it into XML documents, and then converting XML documents back to relational structures, has become a major daily activity, so an efficient methodology for handling the conversion between XML documents and relational data is important. This paper addresses data conversion from XML documents into relational databases and proposes a prototype and algorithms for the conversion process. The pre-processing step is schema translation using an XML schema definition. The proposed approach is based on the needs of an Order Information System and suggests a methodology that gains the benefits provided by both XML technology and relational database management systems. The methodology is a stepwise procedure using the XML schema definition and Extensible Stylesheet Language Transformations (XSLT) to ensure that data constraints are not sacrificed during conversion. The data conversion is implemented by decomposing the XML document's hierarchical tree model into normalized relations interrelated through artificial primary keys and foreign keys, with the transformation itself performed by XSLT. The paper also demonstrates the entire conversion process through a detailed case study.
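A minimal Python sketch of the decomposition step may help: the paper drives this with XSLT, but plain Python stands in here, and the element and column names are illustrative rather than the paper's Order Information System schema. A hierarchical document is shredded into two normalized relations linked by artificial primary and foreign keys.

    import xml.etree.ElementTree as ET

    doc = """
    <orders>
      <order number="A-100" customer="ACME">
        <line product="widget" qty="3"/>
        <line product="gadget" qty="1"/>
      </order>
    </orders>
    """

    orders, order_lines = [], []
    for oid, order in enumerate(ET.fromstring(doc), start=1):
        # Parent relation: one row per <order>, keyed by an artificial id.
        orders.append((oid, order.get("number"), order.get("customer")))
        for lid, line in enumerate(order, start=1):
            # Child relation: artificial key plus a foreign key to the parent.
            order_lines.append((oid, lid, line.get("product"), int(line.get("qty"))))

    print(orders)       # [(1, 'A-100', 'ACME')]
    print(order_lines)  # [(1, 1, 'widget', 3), (1, 2, 'gadget', 1)]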
APA, Harvard, Vancouver, ISO, and other styles
43

Tseng, Frank S. C., Jeng-Jye Chiang, and Wei-Pang Yang. "Integration of relations with conflicting schema structures in heterogeneous database systems." Data & Knowledge Engineering 27, no. 2 (1998): 231–48. http://dx.doi.org/10.1016/s0169-023x(98)00005-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Pokorný, Jaroslav. "Integration of Relational and Graph Databases Functionally." Foundations of Computing and Decision Sciences 44, no. 4 (2019): 427–41. http://dx.doi.org/10.2478/fcds-2019-0021.

Full text
Abstract:
In today's multi-model database world there is an effort to integrate databases expressed in different data models. The aim of the article is to show possibilities for integrating relational and graph databases with the help of a functional data model and its formal language, a typed lambda calculus. We suppose the existence of a data schema for both the relational and the graph database. In this approach, relations are considered as characteristic functions and property graphs as sets of single-valued and multivalued functions. It is then possible to express a query over such an integrated heterogeneous database with a single expression in a version of the typed lambda calculus. A more user-friendly version of such a language could serve as a powerful query tool in practice. We also discuss queries sent to the integrated system and translated into queries in SQL and Cypher, the graph query language for Neo4j.
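A drastically simplified Python rendering of that functional view, under the assumption that set membership stands in for the characteristic function and plain dictionaries for the graph's property functions (all data and names are illustrative; the paper itself works in a typed lambda calculus):

    # Relational side: Emp(name, dept) viewed as its characteristic function.
    emp_rows = {("ann", "sales"), ("bob", "hr")}
    Emp = lambda name, dept: (name, dept) in emp_rows   # echoes the lambda-calculus view

    # Graph side: a single-valued property function and a multivalued edge
    # function, both modeled as dictionaries from node ids.
    node_name = {1: "ann", 2: "bob", 3: "eve"}
    knows     = {1: {2, 3}, 2: set(), 3: {1}}

    # One query spanning both databases: names of everyone known by a node
    # whose name belongs to a 'sales' employee.
    result = {node_name[m]
              for n, friends in knows.items()
              if Emp(node_name[n], "sales")
              for m in friends}
    print(result)   # {'bob', 'eve'}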
APA, Harvard, Vancouver, ISO, and other styles
45

Duc Thy, Vu, and Nguyen Hoang Son. "On the Dense Families in the Relational Datamodel." ASEAN Journal on Science and Technology for Development 22, no. 3 (2017): 241. http://dx.doi.org/10.29037/ajstd.162.

Full text
Abstract:
In this paper, dense families of relation schemes are introduced, and we characterize minimal keys of relation schemes in terms of dense families. Note that dense families of database relations were introduced by Jarvinen [6]. We prove that the set of all minimal keys of a relation scheme s = (U, F) is the transversal hypergraph of the hypergraph D − {∅}, where D is any s-dense family. We give a necessary and sufficient condition for an arbitrary family to be an s-dense family, and we present some dense families of relation schemes. Furthermore, we study antikeys by means of dense families and present connections between antikeys and dense families of relation schemes. Finally, we study the time complexity of the problem of finding antikeys.
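For readers who want the objects in hand, here is a brute-force Python sketch that enumerates the minimal keys and the antikeys of a small scheme s = (U, F). It uses the standard attribute-closure test rather than the paper's dense-family construction, and the scheme itself is illustrative.

    from itertools import combinations

    U = frozenset("ABCD")
    F = [({"A"}, {"B"}), ({"B"}, {"C"}), ({"C", "D"}, {"A"})]   # FDs X -> Y

    def closure(X):
        # Attribute closure of X under F: apply the FDs until a fixpoint.
        X = set(X)
        changed = True
        while changed:
            changed = False
            for lhs, rhs in F:
                if lhs <= X and not rhs <= X:
                    X |= rhs
                    changed = True
        return X

    # Enumerate candidate keys smallest-first, keeping only the minimal ones.
    keys = []
    for k in range(1, len(U) + 1):
        for attrs in combinations(sorted(U), k):
            X = set(attrs)
            if closure(X) == set(U) and not any(key <= X for key in keys):
                keys.append(X)
    print(keys)       # [{'A','D'}, {'B','D'}, {'C','D'}]

    # Antikeys: the maximal attribute sets that are not keys.
    non_keys = [set(a) for k in range(len(U)) for a in combinations(sorted(U), k)
                if closure(set(a)) != set(U)]
    antikeys = [X for X in non_keys if not any(X < Y for Y in non_keys)]
    print(antikeys)   # [{'A','B','C'}, {'D'}]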
APA, Harvard, Vancouver, ISO, and other styles
46

Baranchikov, A. I., and N. Z. Nguyen. "Relation Database Scheme Comparison Algorithm Based on Subject Area Semantics Analysis." Vestnik of Ryazan State Radio Engineering University 67 (2019): 45–49. http://dx.doi.org/10.21667/1995-4565-2019-67-1-45-49.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Sousa, Pedro, Lurdes Pedro-de-Jesus, Gonçalo Pereira, and Fernando Brito e Abreu. "Clustering relations into abstract ER schemas for database reverse engineering." Science of Computer Programming 45, no. 2-3 (2002): 137–53. http://dx.doi.org/10.1016/s0167-6423(02)00057-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Carver, Andy, and Terry Halpin. "Atomicity and Semantic Normalization." International Journal of Information System Modeling and Design 1, no. 2 (2010): 23–39. http://dx.doi.org/10.4018/jismd.2010040102.

Full text
Abstract:
This paper contrasts two different approaches to designing relational databases that are free of redundancy. The Object-Role Modeling (ORM) approach captures semantics in terms of atomic (elementary or existential) fact types, before grouping the fact types into relation schemes. Normalization by decomposition instead focuses on "non-loss decomposition" to various, and progressively more refined, "normal forms". Traditionally, non-loss decomposition of a relation requires decomposition into smaller relations that, upon natural join, yield the exact original population. Non-loss decomposition of a table scheme (or relation variable) requires that the decomposition of all possible populations of the relation scheme is reversible in this way. This paper shows that the dependency requirement for "all possible populations" is too restrictive for definitions of multi-valued and join dependencies over relation schemes. By exploiting ORM modeling heuristics, the authors offer new definitions of these data dependencies and non-loss decomposition, to enable these concepts to be addressed at a truly semantic level.
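The reversibility test the abstract describes is easy to make concrete. This hedged Python sketch projects an illustrative relation two ways and checks whether the natural join of the projections returns exactly the original population; here it does not, so this particular decomposition of this population is lossy.

    from itertools import product

    attrs = ("emp", "skill", "lang")
    rows = {("ann", "sql", "en"), ("ann", "xslt", "fr")}

    def proj(rel, keep):
        # Project the relation onto the attributes in `keep`.
        idx = [attrs.index(a) for a in keep]
        return {tuple(r[i] for i in idx) for r in rel}

    def njoin(r1, a1, r2, a2):
        # Natural join: match tuples on the shared attributes.
        shared = [a for a in a1 if a in a2]
        out_attrs = list(a1) + [a for a in a2 if a not in a1]
        out = set()
        for t1, t2 in product(r1, r2):
            if all(t1[a1.index(a)] == t2[a2.index(a)] for a in shared):
                merged = dict(zip(a1, t1))
                merged.update(zip(a2, t2))
                out.add(tuple(merged[a] for a in out_attrs))
        return out

    left, right = ("emp", "skill"), ("emp", "lang")
    rejoined = njoin(proj(rows, left), left, proj(rows, right), right)
    print(rejoined == rows)   # False: two spurious tuples appear on rejoining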
APA, Harvard, Vancouver, ISO, and other styles
49

Yesin, Vitalii, Mikolaj Karpinski, Maryna Yesina, Vladyslav Vilihura, and Stanislaw A. Rajba. "Technique for Evaluating the Security of Relational Databases Based on the Enhanced Clements–Hoffman Model." Applied Sciences 11, no. 23 (2021): 11175. http://dx.doi.org/10.3390/app112311175.

Full text
Abstract:
Obtaining convincing evidence of the security of databases, a basic corporate resource, is extremely important; however, to verify conclusions about the degree of security, that security must be measured. To address this challenge, the authors of the paper enhanced the Clements–Hoffman model, determined an integral security metric and, on this basis, developed a technique for evaluating the security of relational databases. The essence of the enhancement of the Clements–Hoffman model is to expand it by including a set of object vulnerabilities, with vulnerability considered as a separate, objectively existing category. This makes it possible to evaluate more adequately both the likelihood of an unwanted incident and the security of the database as a whole. The technique proposed by the authors for evaluating the main components of the security barriers, and the database security as a whole, is based on the theory of fuzzy sets and risk. As the integral metric of database security, the reciprocal of the total residual risk is used, with its constituent components presented in the form of certain linguistic variables. In accordance with the developed technique, the authors present the results of a quantitative evaluation of the effectiveness of the protection of databases built on the basis of a schema with a universal basis of relations and designed in accordance with the traditional technology of relational databases.
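A hedged numeric sketch of the integral metric described in the abstract, security as the reciprocal of total residual risk, follows in Python. The crisp values assigned to the linguistic terms and the per-barrier risk formula are assumptions made purely for illustration; the paper derives its components with fuzzy-set machinery.

    # Naive defuzzification of the linguistic terms (assumed values).
    TERM = {"low": 0.2, "medium": 0.5, "high": 0.8}

    # One entry per (threat, object) barrier: likelihood, impact, and the
    # strength of the protective barrier (all illustrative).
    barriers = [
        {"likelihood": "medium", "impact": "high",   "strength": "high"},
        {"likelihood": "low",    "impact": "medium", "strength": "medium"},
        {"likelihood": "high",   "impact": "high",   "strength": "high"},
    ]

    # Assumed per-barrier formula: residual risk shrinks as the barrier strengthens.
    residual = sum(
        TERM[b["likelihood"]] * TERM[b["impact"]] * (1 - TERM[b["strength"]])
        for b in barriers
    )
    security = 1 / residual   # integral security metric (reciprocal of residual risk)
    print(round(residual, 3), round(security, 2))   # 0.258 3.88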
APA, Harvard, Vancouver, ISO, and other styles