Academic literature on the topic 'Database relation schema'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Database relation schema.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Database relation schema"

1

Wastl, Ralf. "Linear Derivations for Keys of a Database Relation Schema." JUCS - Journal of Universal Computer Science 4, no. (12) (1998): 883–97. https://doi.org/10.3217/jucs-004-12-0883.

Full text
Abstract:
In [Wastl 1998] we introduced the Hilbert-style inference system K for deriving all keys of a database relation schema. In this paper we investigate formal K-derivations more closely using the concept of tableaux. The analysis gives insight into the process of deriving keys of a relation schema. The concept of tableaux also yields a proof procedure for computing all keys of a relation schema. In practice, the methods developed here will be useful for computing keys or for deciding whether an attribute is a key attribute or a non-key attribute. This decision problem becomes relevant when checking whether a relation schema is in third normal form, or when applying the well-known 3NF-decomposition algorithm (a.k.a. 3NF-synthesis algorithm).
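To make the key-derivation setting concrete, here is a minimal Python sketch that computes candidate keys from a set of functional dependencies by brute-force attribute closure. It illustrates the underlying notion only, not Wastl's inference system K or its tableau proof procedure; the schema R(A, B, C, D) and its FDs are invented for the example.

```python
# A minimal, illustrative sketch: computing all candidate keys of a relation
# schema from its functional dependencies via attribute-closure checks.
from itertools import combinations

def closure(attrs, fds):
    """Return the closure of a set of attributes under the given FDs."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

def candidate_keys(schema, fds):
    """Enumerate minimal attribute sets whose closure covers the whole schema."""
    keys = []
    for size in range(1, len(schema) + 1):
        for combo in combinations(sorted(schema), size):
            cand = set(combo)
            if any(k <= cand for k in keys):
                continue  # a proper subset is already a key, so cand is not minimal
            if closure(cand, fds) == schema:
                keys.append(cand)
    return keys

# Invented example schema R(A, B, C, D) with FDs A -> B and BC -> D.
R = {"A", "B", "C", "D"}
FDS = [({"A"}, {"B"}), ({"B", "C"}, {"D"})]
print([sorted(k) for k in candidate_keys(R, FDS)])  # [['A', 'C']]
```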
2

Wastl, Ralf. "On the Number of Keys of a Relational Database Schema." JUCS - Journal of Universal Computer Science 4, no. (5) (1998): 547–59. https://doi.org/10.3217/jucs-004-05-0547.

Full text
3

Imam, Abdullahi Abubakar, Shuib Basri, Rohiza Ahmad, et al. "DSP: Schema Design for Non-Relational Applications." Symmetry 12, no. 11 (2020): 1799. http://dx.doi.org/10.3390/sym12111799.

Full text
Abstract:
The way a database schema is designed has a high impact on its performance in relational databases, which are symmetric in nature. While the problem of schema optimization is even more significant for NoSQL (“Not only SQL”) databases, existing modeling tools for relational databases are inadequate for this asymmetric setting. As a result, NoSQL modelers rely on rules of thumb to model schemas that require a high level of competence. Several studies have been conducted to address this problem; however, they are either proprietary, symmetrical, relationally dependent or post-design assessment tools. In this study, a Dynamic Schema Proposition (DSP) model for NoSQL databases is proposed to handle the asymmetric nature of today’s data. This model aims to facilitate database design and improve its performance in relation to data availability. To achieve this, data modeling styles were aggregated and classified. Existing cardinality notations were empirically extended using synthetically generated queries. A binary integer formulation was used to guide the mapping of asymmetric entities from the application’s conceptual data model to a database schema. An experiment was conducted to evaluate the impact of the DSP model on NoSQL schema production and its performance. A profound improvement was observed in read/write query performance and schema production complexities. In this regard, DSP has significant potential to produce schemas that are capable of handling big data efficiently.
4

Yannakakis, Mihalis. "Technical Perspective." ACM SIGMOD Record 51, no. 1 (2022): 77. http://dx.doi.org/10.1145/3542700.3542718.

Full text
Abstract:
The paper Structure and Complexity of Bag Consistency by Albert Atserias and Phokion Kolaitis [1] studies fundamental structural and algorithmic questions about the global consistency of databases in the context of bag semantics. A collection D of relations is called globally consistent if there is a (so-called "universal") relation, over all the attributes that appear in the relations of D, whose projections yield the relations in D. The basic algorithmic problem for consistency is: given a database D, determine whether D is globally consistent. An obvious necessary condition for global consistency is local (or pairwise) consistency: every pair of relations in D must be consistent. This condition is not sufficient, however: it is possible that every pair is consistent, yet there is no single global relation over all the attributes whose projections yield the relations in D. A natural structural question is: which database schemas have the property that every locally consistent database over the schema is also globally consistent?
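The local-versus-global distinction can be seen in a three-relation toy example. The sketch below uses ordinary set semantics (the paper's focus is bag semantics) and invented relation names, and checks global consistency by comparing each relation with the projection of the natural join.

```python
# Three pairwise-consistent relations that are not globally consistent
# (set semantics, small invented example).
from itertools import product

def natural_join(rel1, rel2):
    """Join two relations given as (attribute list, list of row dicts)."""
    attrs1, rows1 = rel1
    attrs2, rows2 = rel2
    common = [a for a in attrs1 if a in attrs2]
    attrs = attrs1 + [a for a in attrs2 if a not in attrs1]
    rows = [{**r1, **r2} for r1, r2 in product(rows1, rows2)
            if all(r1[a] == r2[a] for a in common)]
    return attrs, rows

def project(rel, attrs):
    _, rows = rel
    return {tuple(r[a] for a in attrs) for r in rows}

# R1(A,B) forces A = B, R2(B,C) forces B = C, R3(A,C) forces A != C.
R1 = (["A", "B"], [{"A": 0, "B": 0}, {"A": 1, "B": 1}])
R2 = (["B", "C"], [{"B": 0, "C": 0}, {"B": 1, "C": 1}])
R3 = (["A", "C"], [{"A": 0, "C": 1}, {"A": 1, "C": 0}])

joined = natural_join(natural_join(R1, R2), R3)
print(project(joined, ["A", "B"]))  # set() -- empty, so R1 is not recovered:
# the database is pairwise consistent but has no universal relation.
```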
5

Lee, Chiang, and Ming-Chuan Wu. "A Hyperrelational Approach to Integration and Manipulation of Data in Multidatabase Systems." International Journal of Cooperative Information Systems 5, no. 4 (1996): 395–429. http://dx.doi.org/10.1142/s0218843096000154.

Full text
Abstract:
The issue of interoperability among multiple autonomous databases has attracted much attention from researchers in recent years. Past research on this issue can be roughly divided into two main categories: the tightly-integrated approach, which integrates databases by building an integrated schema, and the loosely-integrated approach, which achieves interoperability by using a multidatabase language. Most of the past efforts focused on the issues in the first approach. The problem with the first approach, however, is that it lacks a convenient representation of the integrated schema at the system level and a sound mathematical basis for data manipulation in a multidatabase system. In this paper, we propose to use hyperrelations as a powerful and succinct model for the global-level representation of heterogeneous database schemas. A hyperrelation has the structure of a relation, but its contents are the schemas of the semantically equivalent local relations in the databases. With this representation, the metadata of the global database, the local databases, and the data of these databases are all representable using the structure of a relation. The impact of such a representation is that all the elegant features of relational systems can be easily extended to multidatabase systems. A hyperrelational algebra is designed accordingly. This algebra operates at the multidatabase system (MDBS) level, so that query transformation and optimization are supported on a sound mathematical basis. The major contributions of this paper are: (1) Local relations of various schemas (even though they retain information of the same semantics) can be systematically mapped to hyperrelations. As the structure of a hyperrelation is similar to that of a relation, data manipulation and management tasks (such as the design of the global query language and the view mechanism) are greatly facilitated. (2) The hyperrelational algebra provides a sound basis for query transformation and optimization in an MDBS.
6

Zhang, Yanchun, Maria E. Orlowska, and Robert Colomb. "An Efficient Test for the Validity of Unbiased Hybrid Knowledge Fragmentation in Distributed Databases." International Journal of Software Engineering and Knowledge Engineering 2, no. 4 (1992): 589–609. http://dx.doi.org/10.1142/s0218194092000270.

Full text
Abstract:
Knowledge bases contain specific and general knowledge. In relational database systems, specific knowledge is often represented as a set of relations. The conventional methodologies for centralized database design can be applied to develop a normalized, redundancy-free global schema. Distributed database design involves redundancy removal as well as the distribution design which allows replicated data segments. Thus, distribution design can be viewed as a process on a normalized global schema which produces a collection of fragments of relations from a global database. Clearly, not every fragment of data can be permitted as a relation. In this paper, we clarify and formally discuss three kinds of fragmentations of relational databases, and characterize their features as valid designs, and we introduce a hybrid knowledge fragmentation as the general case. For completeness of presentation, we first show an algorithm for the validity test of vertical fragmentations of normalized relations, and then extend it to the more general case of unbiased fragmentations.
7

Huang, Shu Qin. "Scheme of Mapping from XML Document to Relational Database." Applied Mechanics and Materials 241-244 (December 2012): 2732–36. http://dx.doi.org/10.4028/www.scientific.net/amm.241-244.2732.

Full text
Abstract:
A new algorithm and mapping scheme between XML documents and relational databases are presented. The scheme extracts the name, type, value, and other information from an XML document and then maps them to a relational database. Data from the relational database can be restored to an XML document. This method avoids information loss in the data transfer process and preserves the structural relations between elements. The mapping scheme does not depend on an XML DTD or Schema.
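As a rough illustration of the general idea of shredding an XML document into relational rows (not the specific scheme proposed in the paper), the following Python sketch records each element's name, value and parent link in a single invented table:

```python
# Hedged sketch: a generic element table (id, parent_id, name, value) so the
# tree structure can later be rebuilt from the relational data.
import sqlite3
import xml.etree.ElementTree as ET
from itertools import count

doc = "<order id='7'><item>book</item><item>pen</item></order>"

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE element (
    id INTEGER PRIMARY KEY, parent_id INTEGER, name TEXT, value TEXT)""")
ids = count(1)

def shred(elem, parent_id=None):
    """Store one element row (name, text value, parent link), then recurse."""
    my_id = next(ids)
    conn.execute("INSERT INTO element VALUES (?, ?, ?, ?)",
                 (my_id, parent_id, elem.tag, (elem.text or "").strip()))
    for child in elem:
        shred(child, my_id)

shred(ET.fromstring(doc))
print(conn.execute("SELECT id, parent_id, name, value FROM element").fetchall())
# [(1, None, 'order', ''), (2, 1, 'item', 'book'), (3, 1, 'item', 'pen')]
```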
8

Köhler, Henning. "Global Database Design based on Storage Space and Update Time Minimization." JUCS - Journal of Universal Computer Science 15, no. 1 (2009): 195–240. https://doi.org/10.3217/jucs-015-01-0195.

Full text
Abstract:
A common approach in designing relational databases is to start with a universal relation schema, which is then decomposed into multiple subschemas. A good choice of subschemas can be determined using integrity constraints defined on the schema, such as functional, multivalued or join dependencies. In this paper we propose and analyze a new normal form based on the idea of minimizing overall storage space and update costs, and as a consequence redundancy as well. This is in contrast to existing normal forms such as BCNF, 4NF or KCNF, which only characterize the absence of redundancy (and thus space and update time minimality) for a single schema. We show that our new normal form naturally extends existing normal forms to multiple schemas, and we provide an algorithm for computing decompositions.
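For readers new to dependency-guided decomposition, the sketch below shows the classical lossless-join test for splitting a universal relation schema in two. This is standard textbook machinery, not Köhler's storage- and update-cost criterion, and the schema and FD are invented.

```python
# Textbook lossless-join test for a binary split of a universal schema.
def closure(attrs, fds):
    """Attribute closure under a set of FDs, each given as (lhs set, rhs set)."""
    result = set(attrs)
    while True:
        extra = {a for lhs, rhs in fds if lhs <= result for a in rhs}
        if extra <= result:
            return result
        result |= extra

def lossless_binary_split(r1, r2, fds):
    """A split of R into r1, r2 is lossless iff the shared attributes
    functionally determine one of the two sides."""
    shared = r1 & r2
    cl = closure(shared, fds)
    return r1 <= cl or r2 <= cl

# Invented universal schema R(Emp, Dept, Mgr) with FD Dept -> Mgr.
FDS = [({"Dept"}, {"Mgr"})]
print(lossless_binary_split({"Emp", "Dept"}, {"Dept", "Mgr"}, FDS))  # True
print(lossless_binary_split({"Emp", "Mgr"}, {"Dept", "Mgr"}, FDS))   # False
```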
9

Zhang, Hai Fei. "A On-Demand Mapping Method on Query Request in Relational Database Semantic Query." Applied Mechanics and Materials 411-414 (September 2013): 341–44. http://dx.doi.org/10.4028/www.scientific.net/amm.411-414.341.

Full text
Abstract:
Relational database semantic query, that is, RDF access to a relational database, is an important issue in Semantic Web research. Realizing such queries requires building a mapping between the relational database schema and an ontology. However, there is a natural heterogeneity between the two. This heterogeneity can be eliminated by transforming the relational database schema into a similar ontology and then building a mapping between the converted ontology and the input ontology. This paper realizes an on-demand mapping method that is applied when users issue a query; it avoids building mappings for all concepts and attributes between the converted ontology and the input ontology, and thereby improves mapping efficiency.
10

Gerlach, Luisa, Tobias Köppl, Stefanie Scherzinger, Nicole Schweikardt, and René Zander. "A Quantum-Leap into Schema Matching: Beyond 1-to-1 Matchings." Proceedings of the ACM on Management of Data 3, no. 2 (2025): 1–24. https://doi.org/10.1145/3725226.

Full text
Abstract:
Schema matching refers to the task of identifying corresponding attributes of different database relation schemas to enable the efficient integration of the associated datasets. We model the task of finding suitable 1:N/N:1 global matchings in relational schemas as an optimization problem. We show that this optimization problem is NP-hard. We then translate the optimization problem into the problem of minimizing a particular rational-valued function on binary variables. The latter enables us to utilize modern quantum algorithms for solving the global matching problem, a crucial stage in schema matching. We also report on preliminary experimental results that serve as a proof-of-concept for our approach.
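A toy way to see "matching as binary optimisation" is to brute-force a 0/1 variable per attribute pair under a simple constraint. The classical sketch below only illustrates the flavour of such formulations and is not the paper's objective function or its quantum solution method; all attribute names and similarity scores are invented.

```python
# Brute-force search over binary matching variables (toy example).
from itertools import product

src = ["cust_name", "cust_addr"]
tgt = ["name", "street", "city"]
sim = {  # made-up pairwise similarity scores
    ("cust_name", "name"): 0.9, ("cust_name", "street"): 0.1,
    ("cust_name", "city"): 0.1, ("cust_addr", "name"): 0.1,
    ("cust_addr", "street"): 0.8, ("cust_addr", "city"): 0.7,
}

best_score, best_match = float("-inf"), None
# One binary variable per (source, target) pair; each target attribute joins at
# most one source attribute, which yields 1:N matchings from source to target.
for bits in product([0, 1], repeat=len(src) * len(tgt)):
    x = {pair: b for pair, b in zip(product(src, tgt), bits)}
    if any(sum(x[(s, t)] for s in src) > 1 for t in tgt):
        continue  # constraint violated
    score = sum(sim[pair] * x[pair] for pair in x)
    if score > best_score:
        best_score, best_match = score, [pair for pair in x if x[pair]]

print(best_match)  # [('cust_name', 'name'), ('cust_addr', 'street'), ('cust_addr', 'city')]
```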

Dissertations / Theses on the topic "Database relation schema"

1

Wheeler, Jared Thomas. "Extracting a Relational Database Schema from a Document Database." UNF Digital Commons, 2017. http://digitalcommons.unf.edu/etd/730.

Full text
Abstract:
As NoSQL databases become increasingly used, more methodologies emerge for migrating from relational databases to NoSQL databases. Meanwhile, there is a lack of methodologies that assist in migration in the opposite direction, from NoSQL to relational. As software is iterated upon, use cases may change. A system which was originally developed with a NoSQL database may accrue needs that require Atomicity, Consistency, Isolation, and Durability (ACID) features that NoSQL systems lack, such as consistency across nodes or consistency across re-used domain objects. Shifting requirements could result in the system being changed to utilize a relational database. While there are some tools available to transfer data between an existing document database and an existing relational database, there has been no work on automatically generating the relational database based upon the data already in the NoSQL system. Not taking the existing data into account can lead to inconsistencies during data migration. This thesis describes a methodology to automatically generate a relational database schema from the implicit schema of a document database. This thesis also includes details of how the methodology is implemented, and what could be enhanced in future works.
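A much-simplified illustration of inferring a relational schema from the implicit schema of documents (not the thesis's actual methodology) is to union the fields and value types observed across a few JSON documents and emit a CREATE TABLE statement; the field names below are invented.

```python
# Crude schema inference from a handful of JSON documents.
import json

docs = [
    json.dumps({"_id": 1, "name": "Ada", "email": "ada@example.com"}),
    json.dumps({"_id": 2, "name": "Bob", "age": 41}),
]

TYPE_MAP = {int: "INTEGER", float: "REAL", str: "TEXT", bool: "BOOLEAN"}
columns = {}  # field name -> inferred SQL type (first type observed wins)
for raw in docs:
    for field, value in json.loads(raw).items():
        columns.setdefault(field, TYPE_MAP.get(type(value), "TEXT"))

ddl = "CREATE TABLE person (\n  " + ",\n  ".join(
    f"{name} {sqltype}" for name, sqltype in columns.items()) + "\n);"
print(ddl)
# CREATE TABLE person (
#   _id INTEGER,
#   name TEXT,
#   email TEXT,
#   age INTEGER
# );
```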
2

Al Arrayedh, Osama M. "Evolution of synthesised relational database schemas." Thesis, University of Nottingham, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.321460.

Full text
3

Wright, Christopher. "Mutation analysis of relational database schemas." Thesis, University of Sheffield, 2015. http://etheses.whiterose.ac.uk/12059/.

Full text
Abstract:
The schema is the key artefact used to describe the structure of a relational database, specifying how data will be stored and the integrity constraints used to ensure it is valid. It is therefore surprising that to date little work has addressed the problem of schema testing, which aims to identify mistakes in the schema early in software development. Failure to do so may lead to critical faults, which may cause data loss or degradation of data quality, remaining undetected until later when they will prove much more costly to fix. This thesis explores how mutation analysis – a technique commonly used in software testing to evaluate test suite quality – can be applied to evaluate data generated to exercise the integrity constraints of a relational database schema. By injecting faults into the constraints, modelling both faults of omission and commission, this enables the fault-finding capability of test suites generated by different techniques to be compared. This is essential to empirically evaluate further schema testing research, providing a means of assessing the effectiveness of proposed techniques. To mutate the integrity constraints of a schema, a collection of novel mutation operators is proposed and their implementation described. These allow an empirical evaluation of an existing data generation approach, demonstrating the effectiveness of the mutation analysis technique and identifying a configuration that killed 94% of mutants on average. Cost-effective algorithms for automatically removing equivalent mutants and other ineffective mutants are then proposed and evaluated, revealing a third of mutation scores to be mutation adequate and reducing time taken by an average of 7%. Finally, the execution cost problem is confronted, with a range of optimisation strategies being applied that consistently improve efficiency, reducing the time taken by several hours in the best case and as high as 99% on average for one DBMS.
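To give a feel for what a schema mutation operator does (this is a simplified stand-in, not one of the thesis's operators or its tool), the sketch below generates one mutant per NOT NULL constraint in an invented DDL fragment, modelling a fault of omission:

```python
# One toy mutation operator: drop a NOT NULL constraint from a schema.
import re

schema = """CREATE TABLE account (
  id INTEGER PRIMARY KEY,
  email TEXT NOT NULL UNIQUE,
  balance INTEGER NOT NULL
);"""

def drop_not_null_mutants(ddl):
    """Yield one mutant per NOT NULL constraint, each with that constraint removed."""
    for match in re.finditer(r" NOT NULL", ddl):
        yield ddl[:match.start()] + ddl[match.end():]

mutants = list(drop_not_null_mutants(schema))
print(len(mutants))  # 2 -- one mutant per NOT NULL constraint
print(mutants[0])    # first mutant: the email column no longer requires a value
```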
4

Kutan, Kent. "Transformation of relational schema into static object schema." Master's thesis, Virginia Polytechnic Institute and State University, 1996. http://scholar.lib.vt.edu/theses/available/etd-02022010-020303/.

Full text
5

Boháč, Martin. "Perzistence XML v relační databázi [XML Persistence in a Relational Database]." Master's thesis, Brno University of Technology, Faculty of Information Technology, 2010. http://www.nusl.cz/ntk/nusl-237200.

Full text
Abstract:
The aim of this thesis is to create a client for the xDB database that supports visualization and management of XML documents and schemas. The first part introduces XML, XML schema languages (DTD, XML Schema, RelaxNG, etc.) and related technologies. The thesis then deals with the problem of XML persistence and focuses on the mapping techniques necessary for efficient storage in a relational database. The main part is devoted to the design and implementation of the client application XML Admin, which is programmed in Java. The application uses the XML:DB interface to communicate with the xDB database. It supports storing XML documents in a collection and the XPath language for querying them. The final section is devoted to application performance testing and comparison with the existing native database eXist.
6

Lee, Anna. "Transformation of set schema into relational structures." Thesis, University of British Columbia, 1987. http://hdl.handle.net/2429/26431.

Full text
Abstract:
This thesis describes a new approach to relational database design using the SET conceptual model. The SET conceptual model is used for information modelling. The database schema generated from the information modelling is called the SET schema. The SET schema consists of the declarations of all the sets of the database schema. A domain graph can be constructed based on the information declared in the SET schema. A domain graph is a directed graph with nodes labelled with declared sets and arcs labelled with degree information. Each arc in the domain graph points to a node S from a node labelled with an immediate domain predecessor of S. The new method of table design for the relational database involves partitioning the domain graph into mutually exclusive <1,1>-connected components based on the degree information. These components (subgraphs) are then transformed into tree structures. These trees are extended to include the domain predecessors of their nodes to make them predecessor total. The projections of these extended trees onto the value sets labelling their leaf nodes form a set of relations which can be represented by tables. This table design method is described and presented in this thesis, along with a program that automates the method. Given a schema of the SET model, together with some degree information about defined sets that a user must calculate based on the intention of the defined sets, the program produces a relational database schema that will record data for the SET schema correctly and completely.
7

Tanaka, Mitsuru. "Classifier System Learning of Good Database Schema." ScholarWorks@UNO, 2008. http://scholarworks.uno.edu/td/859.

Full text
Abstract:
This thesis presents an implementation of a learning classifier system which learns good database schemas. The system is implemented in Java using the NetBeans development environment, which provides good control over the GUI components. The system contains four components: a user interface, a rule and message system, an apportionment-of-credit system, and genetic algorithms. The input to the system is a set of simple database schemas, and the objective of the classifier system is to keep the good database schemas, which are represented by classifiers. The learning classifier system is given some basic knowledge about database concepts and rules. The results showed that the system could reduce the number of bad schemas and keep the good ones.
8

Shamsedin, Tekieh Razieh Sadat. "An XML-based framework for electronic business document integration with relational databases." Thesis, Information Systems, Technology & Management, Australian School of Business, University of New South Wales, 2009. http://handle.unsw.edu.au/1959.4/43695.

Full text
Abstract:
Small and medium enterprises (SMEs) are becoming increasingly engaged in B2B interactions. The ubiquity of the Internet and the quasi-reliance on electronic document exchanges with larger trading partners have fostered this move. The main technical challenge that this brings to SMEs is that of business document integration: they need to exchange business documents in heterogeneous document formats and also integrate these documents with internal information systems. Often they cannot afford expensive, customized and proprietary solutions for document exchange and storage. Rather, they need cost-effective approaches designed on open standards and backed by easy-to-use information systems. In this dissertation, we investigate the problem of business document integration for SMEs following a design science methodology. We propose a framework and conceptual architecture for a business document integration system (BDIS). By studying existing business document formats, we recommend using the GS1 XML standard format as the intermediate format for business documents in BDIS. The GS1 standards are widely used in supply chains and logistics globally. We present an architecture for BDIS consisting of two layers: one for the design of an internal information system based on relational databases, capable of storing XML business documents, and the other enabling the exchange of heterogeneous business documents at runtime. For the design layer, we leverage existing XML schema conversion approaches, and extend them, to propose a customized and novel approach for converting GS1 XML document schemas into relational schemas. For the runtime layer, we propose wrappers as architectural components for the conversion of various electronic document formats into the GS1 XML format. We demonstrate our approach through a case study involving a GS1 XML business document. We have implemented a prototype BDIS. We have evaluated and compared it with existing research and commercial tools for XML to relational schema conversion. The results show that it generates operational and simpler relational schemas for GS1 XML documents. In conclusion, the proposed framework enables SMEs to engage effectively in electronic business.
9

Macák, Martin. "Editor relačních tabulek [Relational Table Editor]." Master's thesis, Brno University of Technology, Faculty of Information Technology, 2008. http://www.nusl.cz/ntk/nusl-412810.

Full text
Abstract:
This thesis deals with aspects of using a common, universal relational table editor as a simple information system that is fully independent of the underlying database system and partially independent of the underlying database schema. One part of the thesis deals with the potential of using such a universal information system to create a framework that allows fast and easy development of small and medium information systems. The practical part of the thesis is an application which implements the basics of a simple relational table editor, is fully independent of the underlying database provider and schema, and serves as a demonstration table editor.
10

Janse van Rensburg, J., and H. Vermaak. "The design of a JADE compliant manufacturing ontology and accompanying relational database schema." Interim: Interdisciplinary Journal 10, no. 1. Bloemfontein: Central University of Technology, Free State, 2011. http://hdl.handle.net/11462/333.

Full text
Abstract:
To enable meaningful and consistent communication between different software systems in a particular domain (such as manufacturing, law or medicine), a standardised vocabulary and communication language is required by all the systems involved. Concepts in the domain about which the systems want to communicate are formalized in an ontology by establishing the meaning of concepts and creating relationships between them. The inputs to this process are found by analysing the physical domain and its processes. The resulting ontology structure is a computer-usable representation of the physical domain about which the systems want to communicate. To enable the long-term persistence of the actual data contained in these concepts and the enforcement of various business rules, a sufficiently powerful database system is required. This paper presents the design of a manufacturing ontology and its accompanying relational database schema that will be used in a manufacturing test domain.

Books on the topic "Database relation schema"

1

Halpin, T. A. Conceptual schema and relational database design. 2nd ed. WytLytPub, 1999.

Find full text
2

Halpin, T. A., and G. M. Nijssen, eds. Conceptual schema & relational database design. 2nd ed. Prentice Hall, 1995.

Find full text
3

Clifford, Wayne. Relational to object orientated schema transformations for databases. The author, 1998.

Find full text
4

Laver, Kent. Semantic and syntactic properties of universal relation scheme data bases. University of Toronto, Computer Systems Research Institute, 1985.

Find full text
5

Halpin, T. A., ed. Conceptual schema and relational database design: A fact oriented approach. Prentice Hall, 1989.

Find full text
6

Turichshev, Alexandr. A Web-accessible relational database for intact rock properties and an XML data format for intact rock properties with schema. National Library of Canada, 2002.

Find full text
7

Halpin, T. A., and G. M. Nijssen. Conceptual Schema and Relational Database Design. Pearson Education, Limited, 1989.

Find full text

Book chapters on the topic "Database relation schema"

1

Jones, Andrew R. "Relational Database Schema." In Encyclopedia of Systems Biology. Springer New York, 2013. http://dx.doi.org/10.1007/978-1-4419-9863-7_1428.

Full text
2

Jensen, Ole G., and Michael H. Böhlen. "Evolving Relations." In Database Schema Evolution and Meta-Modeling. Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-48196-6_7.

Full text
3

Yannakoudakis, E. J., and C. P. Cheng. "Schema Definition in SQL." In Standard Relational and Network Database Languages. Springer London, 1988. http://dx.doi.org/10.1007/978-1-4471-3287-5_3.

Full text
4

Yannakoudakis, E. J., and C. P. Cheng. "Schema Definition in NDL." In Standard Relational and Network Database Languages. Springer London, 1988. http://dx.doi.org/10.1007/978-1-4471-3287-5_7.

Full text
5

Türker, Can. "Schema Evolution in SQL-99 and Commercial (Object-)Relational DBMS." In Database Schema Evolution and Meta-Modeling. Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-48196-6_1.

Full text
6

Yazici, Ali, and Ziya Karakaya. "Normalizing Relational Database Schemas Using Mathematica." In Computational Science – ICCS 2006. Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11758525_51.

Full text
7

Lewis, P. J. "Interpretative Data Models and Relational Database Schema." In Critical Issues in Systems Theory and Practice. Springer US, 1995. http://dx.doi.org/10.1007/978-1-4757-9883-8_47.

Full text
8

Torres, Manuel, José Samos, and Eladio Garví. "Closing Schemas in Object-Relational Databases." In Objects and Databases. Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-16092-9_13.

Full text
9

Biskup, Joachim. "Achievements of relational database schema design theory revisited." In Semantics in Databases. Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/bfb0035004.

Full text
10

Liu, Chengfei, Jixue Liu, and Minyi Guo. "On Transformation to Redundancy Free XML Schema from Relational Database Schema." In Web Technologies and Applications. Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/3-540-36901-5_4.

Full text

Conference papers on the topic "Database relation schema"

1

Han, Jiang, Xiaowei Peng, Hequn Xian, and Dalin Yang. "A Distortion Free Watermark Scheme for Relational Databases." In 2024 33rd International Conference on Computer Communications and Networks (ICCCN). IEEE, 2024. http://dx.doi.org/10.1109/icccn61486.2024.10637516.

Full text
2

"Database Schema Elicitation to Modernize Relational Databases." In 14th International Conference on Enterprise Information Systems. SciTePress - Science and and Technology Publications, 2012. http://dx.doi.org/10.5220/0003980801260132.

Full text
3

Ma, Z. M. "Handling Imprecise and Uncertain Engineering Information in IDEF1X and Relational Data Models." In ASME 2004 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2004. http://dx.doi.org/10.1115/detc2004-57739.

Full text
Abstract:
Database modeling of engineering information is crucial for constructing manufacturing systems because current manufacturing industries are typically information-based enterprises and information systems have become their nervous center. Engineering information can be modeled at two levels: conceptual data model and logical database model. Generally a conceptual data model is designed and then the designed conceptual data model will be transformed into the chosen logical database schema. Imprecise and uncertain information, however, is generally involved in many engineering activities and imprecise and uncertain engineering information are represented by fuzzy sets. Nowadays relational databases are still the most useful database product and IDEF1X is most useful for logical database design of relational databases in engineering. So in this paper, we focus on fuzzy data modeling in IDEF1X and relational databases. The formal approaches to mapping fuzzy IDEF1X models to fuzzy relational database schemes are hereby developed.
4

Allen, Marshall D., Raymundo Arróyave, and Richard Malak. "A Graph Database Schema for Metal Additive Informatics." In ASME 2024 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2024. http://dx.doi.org/10.1115/detc2024-141981.

Full text
Abstract:
Metal additive manufacturing (AM) enables the creation of metal structures with complex materials and geometry for advanced performance capabilities. Metal AM continues to grow in popularity for use cases from alloy design to advanced manufacturing. In this work, we identify the requirements for a novel metal AM informatics schema based on the benefits and shortcomings of existing methods. After investigating different types of databases as the basis of our schema, including relational database management systems (RDBMS) and graph databases (GDBs), we identify the labeled property graph (LPG) configuration of GDBs as a solution due to its capabilities for dimensional scalability, data reusability & adaptability, topological representation, graph traversal queries, and property/label metadata storage. Then, we propose an LPG database schema that addresses the identified requirements for metal AM informatics for materials design. This approach closes the loop from materials development to manufacturing & characterization, enabling accelerated high-fidelity alloy design for metal AM applications. We discuss our implementation of the LPG database schema in Python and demonstrate the schema's capabilities for visualization and exploration of the alloy design space. We then apply our LPG database schema by designing a compositionally graded alloy (CGA) part with real materials. Lastly, we discuss the possibilities for future work to expand upon the proposed schema.
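For readers unfamiliar with the labeled property graph (LPG) model the abstract builds on, here is a minimal generic sketch in plain Python; the node labels, properties and traversal are invented examples and not the paper's actual schema.

```python
# A generic labeled property graph: nodes and edges carry labels plus key-value
# properties, and queries are traversals over the edges.
from dataclasses import dataclass, field

@dataclass
class Node:
    labels: set
    properties: dict = field(default_factory=dict)

@dataclass
class Edge:
    source: int
    target: int
    label: str
    properties: dict = field(default_factory=dict)

nodes = {
    1: Node({"Alloy"}, {"name": "Ti-6Al-4V"}),
    2: Node({"Build"}, {"machine": "LPBF-01", "layer_um": 30}),
    3: Node({"Specimen"}, {"hardness_hv": 349}),
}
edges = [
    Edge(1, 2, "USED_IN", {"powder_lot": "L-17"}),
    Edge(2, 3, "PRODUCED"),
]

# Toy traversal: which specimens came from builds that used alloy node 1?
builds = {e.target for e in edges if e.source == 1 and e.label == "USED_IN"}
specimens = [nodes[e.target] for e in edges
             if e.source in builds and e.label == "PRODUCED"]
print([s.properties for s in specimens])  # [{'hardness_hv': 349}]
```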
5

Ma, Z. M. "Modeling Imprecise and Uncertain Engineering Information in EXPRESS-G and Relational Data Models." In ASME 2005 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2005. http://dx.doi.org/10.1115/detc2005-85663.

Full text
Abstract:
Computer-based information systems have become the nerve center of current manufacturing systems. Engineering information modeling in databases is thus essential. However, information imprecision and uncertainty extensively arise in engineering design and manufacturing. So contemporary engineering applications have put a requirement on imprecise and uncertain information modeling. Viewed from database systems, engineering information modeling can be identified at two levels: conceptual data modeling and logical database modeling and correspondingly we have conceptual data models and logical database models, respectively. In this paper, we first investigate information imprecision and uncertainty in engineering applications. Then EXPRESS-G, which is a graphical modeling tool of EXPRESS for conceptual data modeling of engineering information, and nested relational databases are extended based on possibility distribution theory, respectively, in order to model imprecise and uncertain engineering information. The formal methods to mapping fuzzy EXPRESS-G schema to fuzzy relational schema are developed.
6

Nascimento, Eduardo Roger S., and Marco Antonio Casanova. "Querying Databases with Natural Language: The use of Large Language Models for Text-to-SQL tasks." In Anais Estendidos do Simpósio Brasileiro de Banco de Dados. Sociedade Brasileira de Computação - SBC, 2024. http://dx.doi.org/10.5753/sbbd_estendido.2024.240552.

Full text
Abstract:
The Text-to-SQL task involves generating SQL queries based on a given relational database and a Natural Language (NL) question. Although Large Language Models (LLMs) show good performance on well-known benchmarks, they are evaluated on databases with simpler schemas. This dissertation first evaluates their effectiveness on a complex and openly available database (Mondial) using GPT-3.5 and GPT-4. The results indicate that LLM-based models perform poorly and struggle with schema linking and joins. To improve accuracy, this work proposes the use of LLM-friendly views and data descriptions. A second experiment on a real-world database confirms that this approach enhances the accuracy of the Text-to-SQL task.
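One way to picture the "LLM-friendly views" idea is a view that renames cryptic columns and pre-resolves a join, so that generated SQL needs little schema linking. The tables, view and data below are invented examples, not the Mondial schema or the dissertation's artefacts.

```python
# A flat, self-describing view over two tersely named tables.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ctry (cid INTEGER PRIMARY KEY, nm TEXT, pop INTEGER);
CREATE TABLE cty  (tid INTEGER PRIMARY KEY, nm TEXT, cid INTEGER REFERENCES ctry(cid));

-- A view with descriptive column names and the join already resolved.
CREATE VIEW city_with_country AS
SELECT cty.nm   AS city_name,
       ctry.nm  AS country_name,
       ctry.pop AS country_population
FROM cty JOIN ctry ON cty.cid = ctry.cid;

INSERT INTO ctry VALUES (1, 'Brazil', 203000000);
INSERT INTO cty  VALUES (10, 'Recife', 1);
""")

# The SQL a model would now be asked to produce for "Which cities are in Brazil?"
print(conn.execute(
    "SELECT city_name FROM city_with_country WHERE country_name = 'Brazil'"
).fetchall())  # [('Recife',)]
```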
7

Karasneh, Yaser, Hamidah Ibrahim, Mohamed Othman, and Razali Yaakob. "Integrating schemas of heterogeneous relational databases through schema matching." In the 11th International Conference. ACM Press, 2009. http://dx.doi.org/10.1145/1806338.1806380.

Full text
8

Xirouchakis, Paul C. "Constrained Structural Design Databases." In ASME 1995 15th International Computers in Engineering Conference and the ASME 1995 9th Annual Engineering Database Symposium collocated with the ASME 1995 Design Engineering Technical Conferences. American Society of Mechanical Engineers, 1995. http://dx.doi.org/10.1115/edm1995-0852.

Full text
Abstract:
The technology of constrained databases (CDBs) is employed in the design of structural design databases. To illustrate the application of CDBs in structural design, the following example is considered in some detail: the design of a database for combined stiffener-plate sectional properties. A major drawback of relational database technology (RDBs) for engineering applications is that constraints cannot be processed within the data model. Another major drawback of the relational data model for engineering applications is that it can only incorporate attributes with a discrete value set, i.e., it cannot effectively represent variables with a continuous range. To remedy this situation, we propose the use of CDBs, which are extended RDBs (so that they retain the advantages of RDBs) that can process arithmetic constraints in both relations and queries. CDBs can represent infinite point sets through the respective constraints on variables with a continuous range. We provide the database schema specification and specifications for a number of typical queries for the example problem of a combined stiffener-plate sectional properties database. We also specify constraints in the data models that model structural stability and manufacturing constraints.
9

Juric, Damir, and Zoran Skocir. "Building OWL ontologies by analyzing relational database schema concepts and WordNet semantic relations." In 2007 9th International Conference on Telecommunications. IEEE, 2007. http://dx.doi.org/10.1109/contel.2007.381877.

Full text
10

Orehova, Ekaterina, Sergey Govyazin, and Iurii Stroganov. "Learning Database Queries with Prolog." In eLSE 2019. Carol I National Defence University Publishing House, 2019. http://dx.doi.org/10.12753/2066-026x-19-107.

Full text
Abstract:
A database is a collection of knowledge. Knowledge can be presented as a semantic network, and the entity-relationship model is one representation of a semantic network. When using the entity-relationship model, it is possible to distinguish entities and the relations between these entities. The entity-relationship model can then be converted into a database schema. The user interacts with the database by writing requests and receiving answers containing the requested information. There are several ways to write queries to databases, with different convenience of creation and speed of execution. The article reviews three different approaches to writing queries: SQL queries, Prolog queries and Object-Relational Mapping (ORM) queries. Each of the approaches has its own advantages and disadvantages. You need to know the basics of relational algebra to write queries in SQL, while ORM libraries and Prolog do not require any additional knowledge. Writing queries in Prolog is similar to writing text in natural language, which makes these queries understandable for people who have never worked with databases. A comparison was made of how easily each approach could be explained to listeners who are studying databases. The listeners who participated in the study were divided into groups according to their specialties. The following groups took part in the study: first-year students of an economic and managerial specialty, engineering students, and students of software engineering. The purpose of this comparison was to determine the method of composing database queries which is most suitable for teaching students of various specialties.
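To illustrate the comparison the abstract describes, the sketch below runs the same question as an SQL query and shows the roughly equivalent Prolog goal as a comment, alongside a plain in-memory analogue of the kind of expression an ORM would build; the student data is invented.

```python
# The same question ("which students study engineering?") in three guises.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE student (name TEXT, specialty TEXT);
INSERT INTO student VALUES ('Ivanov', 'engineering'), ('Petrov', 'economics');
""")

# 1. SQL query
rows = conn.execute(
    "SELECT name FROM student WHERE specialty = 'engineering'").fetchall()
print(rows)  # [('Ivanov',)]

# 2. Prolog reading of the same question (shown as a comment only):
#    ?- student(Name, engineering).

# 3. A plain in-memory analogue, similar in shape to an ORM filter expression.
students = [("Ivanov", "engineering"), ("Petrov", "economics")]
print([name for name, spec in students if spec == "engineering"])  # ['Ivanov']
```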

Reports on the topic "Database relation schema"

1

Lutz, Carsten. Reasoning about Entity Relationship Diagrams with Complex Attribute Dependencies. Aachen University of Technology, 2002. http://dx.doi.org/10.25368/2022.119.

Full text
Abstract:
Entity Relationship (ER) diagrams are among the most popular formalisms for the support of database design [7, 12, 17, 6]. Their classical use in the (usually computer-aided) database design process can roughly be described as follows: after evaluating the requirements of the application, the database designer constructs an ER schema, which represents the conceptual model of the new database. CASE tools can be used to automatically transform the ER schema into a relational database schema, which is then manually fine-tuned. During the last years, the initially rather simple ER formalism has been extended by various means of expressivity to account for new, more complex application areas such as schema integration for data warehouses [12, 3, 13]. Designing a conceptual model with such enriched ER diagrams is a nontrivial task: there exist complex interactions between the various means of expressivity, which quite often result in unnoticed inconsistencies in the ER schemas and in implicit ramifications of the modeling that have not been intended by the designer. To address this problem, Description Logics (DLs) have been proposed and successfully used as a tool for reasoning about ER diagrams and thereby detecting the aforementioned anomalies [5, 6, 8].
2

Satogata, T. A Generic Device Description Scheme Using a Relational Database. Office of Scientific and Technical Information (OSTI), 1994. http://dx.doi.org/10.2172/1119429.

Full text
3

Hidders, Jan, Bei Li, and George Fletcher, eds. Property Graph Schema contributions to WG3. Linked Data Benchmark Council, 2020. http://dx.doi.org/10.54285/ldbc.ofjf3566.

Full text
Abstract:
Five papers and three accompanying presentations were contributed to the ISO/IEC JTC 1/SC 32 WG3 (Database Languages) standards committee relating to property graph schema, including meta-properties and keys.
4

Borgida, Alex, and Ralf Küsters. What's not in a name? Initial Explorations of a Structural Approach to Integrating Large Concept Knowledge-Bases. Aachen University of Technology, 1999. http://dx.doi.org/10.25368/2022.101.

Full text
Abstract:
From the introduction: Given two ontologies/terminologies, i.e., collections of terms and their 'meanings' as used in some universe of discourse (UofD), our general task is to integrate them into a single ontology, which captures the meanings of the original terms and their inter-relationships. This problem is motivated by several application scenarios: • First, such ontologies have been and are being developed independently by multiple groups for knowledge-based and other applications. Among others, medicine is an area in which such ontologies already abound [RZStGC, CCHJ94, SCC97]. • Second, a traditional step in database design has been so-called 'view integration': taking the descriptions of the database needs of different parts of an organization (called 'external views'), and coming up with a unified central schema (called the 'logical schema') for the database [BLN86]. Although the database views might be expressed in some low-level formalism, such as the relational data model, one can express the semantics (meta-data) in a more expressive notation, which can be thought of as an ontology. Then the integration of the ontologies can guide the integration of the views. • Finally, databases and semistructured data on the internet provide many examples where there are multiple, existing heterogeneous information sources, for which uniform access is desired. To achieve this goal, it is necessary to relate the contents of the various information sources. The approach of choice has been the development of a single, integrated ontology, starting from separate ontologies capturing the semantics of the heterogeneous sources [Kas97, CDGL+98]. Of course, we could just take the union of the two ontologies and return the result as the integration. However, except for the case when the ontologies have absolutely nothing to do with each other, this seems inappropriate. Therefore part of our task will be to explore what it means to 'integrate' two ontologies. To help with this, we will in fact assume here that the ontologies describe exactly the same aspects of the universe of discourse (UofD), leaving for a separate paper the issue of dealing with partially overlapping ontologies.
5

Brandenberg, Scott, Jonathan Stewart, Kenneth Hudson, Dong Youp Kwak, Paolo Zimmaro, and Quin Parker. Ground Failure of Hydraulic Fills in Chiba, Japan and Data Archival in Community Database. Pacific Earthquake Engineering Research Center, University of California, Berkeley, CA, 2024. http://dx.doi.org/10.55461/amnh7013.

Full text
Abstract:
This report describes analysis of ground failure and lack thereof observed in the Mihama Ward portion of Chiba, Japan following the 2011 M9.0 Tohoku Earthquake. In conjunction with this work, we have also significantly expanded the laboratory component of the Next Generation Liquefaction (NGL) relational database. The district referred to as Mihama Ward is on ground composed of hydraulic fill sluiced in by pipes, thereby resulting in a gradient of soil coarseness, with coarser soils deposited near the pipes and fine-grained soils carried further away. Observations from local researchers at Chiba University following the 2011 Tohoku Earthquake indicate that ground failure was observed closer to the locations where the pipes deposited the soil, and not further away. This ground failure consisted of extensive sand boiling and ground cracking, which led to building settlement and pipe breaks. Our hypothesis at the outset of the project was that liquefaction susceptibility might explain the pattern of ground failure. Specifically, soils deposited near the pipes are susceptible due to their coarser texture, while soils further from the pipes may be non-susceptible due to the presence of clay minerals and higher plasticity. Were this hypothesis borne out by evidence, soil in the transition zone would have provided important insights about liquefaction susceptibility. Based on testing of soils in our laboratory, we find this hypothesis to be only partially correct. We have confirmed that there are regions with high clay contents and no ground failure and other regions with predominantly granular soils and extensive surface manifestation of liquefaction. Where the hypothesis breaks down is in the transition zone, where we found that the fine-grained soils are non-plastic, and therefore they are susceptible to liquefaction. Our interpretation is that these silt materials likely liquefied during the earthquake, but did not manifest liquefaction. Two factors may have contributed to this lack of manifestation: (1) level ground conditions and lack of large driving static shear stresses (structures in the region are light residential construction) and (2) the silt is less likely to erode to the surface and form silt boils than the sandier soils that produced surface manifestations. This case history points to the importance of separating triggering (defined as the development of significant excess pore pressure and loss of strength) from manifestation (defined as observations of ground failure, including cracking, sand boils, and lateral spreading). The Mihama Ward case history involved laboratory tests performed by Tokyo Soil Research Co. Ltd. and the UCLA geotechnical laboratory. Given the importance of this data to the understanding of this case history, we recognized a need to incorporate laboratory tests in the NGL database alongside field tests and liquefaction observations. We therefore developed an organizational structure for laboratory tests, including direct simple shear, triaxial compression, and consolidation, and implemented the schema in the NGL database. We then uploaded data from tests performed by Tokyo Soil and UCLA. Furthermore, numerous other researchers have also uploaded laboratory test data for other sites. This report describes the organizational structure of the laboratory component of the database, and a tool for interacting with laboratory data.
6

O'Donoghue, Cathal, Herwig Immervoll, Zeynep Gizem Can, Jules Linden, and Denisa Sologon. The distributional impact of carbon pricing and energy related taxation in Ireland. ESRI, 2024. http://dx.doi.org/10.26504/bp202503.

Full text
Abstract:
In this paper we evaluate the distributional impact of carbon pricing in Ireland via a number of different measures (excise duties, carbon taxes and the EU Emissions Trading Scheme), utilising information contained in the OECD Effective Carbon Rate (ECR) database together with the PRICES model. Essential household energy consumption constitutes a significant portion of spending, particularly for lower-income households, indicating regressive expenditure patterns across income brackets. The immediate impact of carbon pricing on household budgets varies based on their reliance on various fuels for heating and transportation (direct impact), as well as the emissions associated with other goods and services (indirect impact). Carbon footprints vary widely among households, with higher-income ones generally emitting less than lower-income ones as a percentage of their income. Although carbon footprints primarily dictate the burdens of carbon pricing, other factors such as the uneven application of carbon pricing policies and disparities in emissions between industries and fuel types also influence the equation. Despite the necessity for substantial carbon price hikes to meet climate targets, the effects on household budgets during the 2012-2021 period were relatively modest. Carbon pricing reforms typically exhibited regressive trends, disproportionately affecting lower-income households relative to their earnings. We also modelled a number of different reforms utilising the additional carbon revenues. The net impact in terms of winners and losers depended very significantly upon both the nature of the expenditure and the share of revenue used.