Academic literature on the topic 'Data-model integration'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Data-model integration.'

Next to every source in the list of references is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of an academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Data-model integration"

1

Nassiri, Hassana. "Data Model Integration." International Journal of New Computer Architectures and their Applications 7, no. 2 (2017): 45–49. http://dx.doi.org/10.17781/p002327.

2

Curcin, V., A. Barton, M. M. McGilchrist, et al. "Clinical Data Integration Model." Methods of Information in Medicine 54, no. 01 (2015): 16–23. http://dx.doi.org/10.3414/me13-02-0024.

Abstract:
Introduction: This article is part of the Focus Theme of Methods of Information in Medicine on “Managing Interoperability and Complexity in Health Systems”. Background: Primary care data is the single richest source of routine health care data. However, its use, both in research and clinical work, often requires data from multiple clinical sites, clinical trials databases and registries. Data integration and interoperability are therefore of utmost importance. Objectives: TRANSFoRm’s general approach relies on a unified interoperability framework, described in a previous paper. We developed a core ontology for an interoperability framework based on data mediation. This article presents how such an ontology, the Clinical Data Integration Model (CDIM), can be designed to support, in conjunction with appropriate terminologies, biomedical data federation within TRANSFoRm, an EU FP7 project that aims to develop the digital infrastructure for a learning healthcare system in European Primary Care. Methods: TRANSFoRm utilizes a unified structural/terminological interoperability framework, based on the local-as-view mediation paradigm. Such an approach mandates that the global information model describe the domain of interest independently of the data sources to be explored. Following a requirements analysis process, no ontology focusing on primary care research was identified, and we therefore designed a realist ontology based on Basic Formal Ontology to support our framework, in collaboration with the various terminologies used in primary care. Results: The resulting ontology has 549 classes and 82 object properties and is used to support data integration for TRANSFoRm’s use cases. Concepts identified by researchers were successfully expressed in queries using CDIM and pertinent terminologies. As an example, we illustrate how, in TRANSFoRm, the Query Formulation Workbench can capture eligibility criteria in a computable representation based on CDIM. Conclusion: A unified mediation approach to semantic interoperability provides a flexible and extensible framework for all types of interaction between health record systems and research systems. CDIM, as the core ontology of such an approach, enables simplicity and consistency of design across the heterogeneous software landscape and can support the specific needs of EHR-driven phenotyping research using primary care data.
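The local-as-view mediation the methods section relies on can be pictured with a small sketch: each source is declared as a view over the global model, and queries are posed in global-model terms only. The Python fragment below is a deliberately simplified illustration; all names (GLOBAL_MODEL, SOURCES, sources_for) are hypothetical and unrelated to CDIM or TRANSFoRm code.

```python
# Illustrative local-as-view (LAV) sketch: sources are declared as views over
# a global model; a mediator selects the sources able to contribute to a
# query expressed purely in global-model terms. All names are hypothetical.
GLOBAL_MODEL = {"Patient", "Diagnosis", "Prescription"}

SOURCES = {                       # each source = a view over the global model
    "ehr_site_a": {"Patient", "Diagnosis"},
    "trials_db": {"Patient", "Prescription"},
}

def sources_for(query_concepts: set) -> list:
    """Return the sources whose declared view overlaps the queried concepts."""
    assert query_concepts <= GLOBAL_MODEL, "queries use global terms only"
    return [name for name, view in SOURCES.items() if view & query_concepts]

print(sources_for({"Diagnosis"}))  # ['ehr_site_a']
```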
3

Lu, James J. "A Data Model for Data Integration." Electronic Notes in Theoretical Computer Science 150, no. 2 (2006): 3–19. http://dx.doi.org/10.1016/j.entcs.2005.11.031.

4

Bakshi, Waseem Jeelani, Rana Hashmy, Majid Zaman, and Muheet Ahmed Butt. "Logical Data Integration Model for the Integration of Data Repositories." International Journal of Database Theory and Application 11, no. 1 (2018): 21–28. http://dx.doi.org/10.14257/ijdta.2018.11.1.03.

5

Shellito, Cindy, Jean-Francois Lamarque, and J. Kiehl. "Paleocene-Eocene Data Model Integration." Eos, Transactions American Geophysical Union 88, no. 35 (2007): 344. http://dx.doi.org/10.1029/2007eo350007.

6

Vetova, Stella. "Big Data Integration and Processing Model." WSEAS TRANSACTIONS ON COMPUTERS 20 (May 12, 2021): 82–87. http://dx.doi.org/10.37394/23205.2021.20.10.

Abstract:
The presented paper deals with the integration and sorting of Covid-19 data. The data file contains fifteen data fields, and for the design of the integration and sorting model each of them is configured by data type, format and field length. Talend Open Studio is used to design the data integration and sorting model. The model performs four main tasks: data integration, data sorting, result display, and output in .xls file format. For the sorting process, two rules are assigned in accordance with medical and biomedical requirements: the report date is sorted in descending order and the Country Name field in alphabetical order.
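As a rough illustration of the two sorting rules just described, here is a minimal pandas sketch; the column names, and the use of pandas instead of Talend Open Studio, are assumptions made for illustration only.

```python
# Minimal sketch of the two sorting rules: report date descending,
# country name ascending. Column names are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "ReportDate": ["2021-03-01", "2021-03-02", "2021-03-02"],
    "CountryName": ["Brazil", "Chile", "Austria"],
})
df["ReportDate"] = pd.to_datetime(df["ReportDate"])

sorted_df = df.sort_values(["ReportDate", "CountryName"],
                           ascending=[False, True])
print(sorted_df)
# The paper's model finally writes the result to an .xls file; with pandas
# the equivalent step would be sorted_df.to_excel("covid_sorted.xlsx").
```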
7

Codesso, Mauricio Mello, Paulo Caetano da Silva, Miklos A. Vasarhelyi, and Rogério João Lunkes. "Continuous audit model: data integration framework." Revista Contemporânea de Contabilidade 15, no. 34 (2018): 144–57. http://dx.doi.org/10.5007/2175-8069.2018v15n34p144.

Abstract:
The convergence of business areas through new technologies, a real-time economy, and transactions spanning many countries and continents makes different legal assurances necessary. These assurances can be obtained through Continuous Auditing (CA). However, to be able to perform the analysis, auditors need to access and extract the data. Previous researchers emphasize only the benefits of applying CA methods, but do not explain how to retrieve and organize the data. We therefore propose the development of a framework to integrate different continuous auditing systems. The article aims to contribute to the literature by deepening the ways of accessing, structuring and collecting the critical data needed for CA, drawing on the Audit Data Standard and the eXtensible Business Reporting Language (XBRL), and by creating a basis for future research on integrating the extraction, analysis and exception-detection algorithms used in CA.
8

Classen, Aimée T., and J. Adam Langley. "Data-model integration is not magic." New Phytologist 166, no. 2 (2005): 367–70. http://dx.doi.org/10.1111/j.1469-8137.2005.01414.x.

9

Jang, Miso, and Dong-Chul Park. "Application of Classifier Integration Model with Confusion Table to Audio Data Classification." International Journal of Machine Learning and Computing 9, no. 3 (2019): 368–73. http://dx.doi.org/10.18178/ijmlc.2019.9.3.812.

10

Ruíz-Ceniceros, Juan Antonio, José Alfonso Aguilar-Calderón, Carolina Tripp-Barba, and Aníbal Zaldívar-Colado. "Dynamic Canonical Data Model: An Architecture Proposal for the External and Data Loose Coupling for the Integration of Software Units." Applied Sciences 13, no. 19 (2023): 11040. http://dx.doi.org/10.3390/app131911040.

Abstract:
Integrating third-party and legacy systems has become a critical necessity for companies, driven by the need to exchange information with various entities such as banks, suppliers, customers, and partners. Ensuring data integrity, keeping integrations up-to-date, reducing transaction risks, and preventing data loss are all vital aspects of this complex task. Achieving success in this endeavor, which involves both technological and business challenges, necessitates the implementation of a well-suited architecture. This article introduces an architecture known as the Dynamic Canonical Data Model through Agnostic Messages. The proposal addresses the integration of loosely coupled software units, mainly when dealing with internal and external data integration. To illustrate the architecture’s components, a case study from the Mexican Logistics Company Paquetexpress is presented. This organization manages integrations across several platforms, including SalesForce and Oracle ERP, with clients like Amazon, Mercado Libre, Grainger, and Afull. Each of these incurs costs ranging from USD 30,000 to USD 36,000, with consultants from firms such as Quanam, K&F, TSOL, and TekSi playing a crucial role in their execution. This consumes much time, making maintenance costs considerably high when clients request data transmission or type changes, particularly when utilizing tools like Oracle Integration Cloud (OIC) or Oracle Service Bus (OSB). The article provides insights into the architecture’s design and implementation in a real-world scenario within the delivery company. The proposed architecture significantly reduces integration and maintenance times and costs while maximizing scalability and encouraging the reuse of components. The source code for this implementation has been registered in the National Registry of Copyrights in Mexico.
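The "agnostic message" at the core of the proposed architecture can be pictured as a neutral envelope that every integrated system produces and consumes, so each new partner needs only one adapter to and from the canonical form. The dataclass below is a hypothetical sketch of such an envelope, not the paper's actual schema.

```python
# Hedged sketch of a canonical "agnostic message" envelope: systems exchange
# this neutral shape instead of each other's native formats. Field names are
# illustrative assumptions.
from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass
class AgnosticMessage:
    source_system: str                 # e.g. "SalesForce" or "Oracle ERP"
    entity: str                        # business object, e.g. "shipment"
    payload: Dict[str, Any]            # source-shaped data carried opaquely
    metadata: Dict[str, str] = field(default_factory=dict)

def to_canonical(msg: AgnosticMessage) -> Dict[str, Any]:
    """One adapter per system lifts its payload into the shared canonical form."""
    # A real adapter would dispatch on msg.source_system; this stub just wraps.
    return {"entity": msg.entity, "data": msg.payload, "meta": msg.metadata}

msg = AgnosticMessage("SalesForce", "shipment", {"trk": "PX-001"})
print(to_canonical(msg))
```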

Dissertations / Theses on the topic "Data-model integration"

1

Routly, Wayne A. "SIDVI: a model for secure distributed data integration." Thesis, Port Elizabeth Technikon, 2004. http://hdl.handle.net/10948/261.

Abstract:
The new millennium has brought about an increase in the use of business intelligence and knowledge management systems. The very foundations of these systems are the multitude of source databases that store the data. The ability to derive information from these databases is brought about by means of data integration. With the current emphasis on security in all walks of information and communication technology, renewed interest must be placed in the systems that provide us with information: data integration systems. This dissertation investigates security issues at specific stages in the data integration cycle, with special reference to problems when performing data integration in a peer-to-peer environment, as in distributed data integration. In the database environment we are concerned with the database itself and the media used to connect to and from the database. In distributed data integration, the concept of the database is redefined into the source database, from which we extract data, and the storage database, in which the integrated data is stored. This postulates three distinct areas in which to apply security: the data source, the network medium and the data store. All of these areas encompass data integration and must be considered holistically when implementing security. Data integration is never only one server or one database; it is various geographically dispersed components working together towards a common goal. It is important, then, that we consider all aspects involved when attempting to provide security for data integration. This dissertation focuses on security threats in these areas and investigates a model to ensure the integrity and security of data during the entire integration process. For security in a data integration environment to be effective, it should be present at all stages and provide end-to-end protection.
2

Wilmer, Greg. "OPM model-based integration of multiple data repositories." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/100389.

Abstract:
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, Engineering Systems Division, System Design and Management Program, 2015. Cataloged from the PDF version of the thesis; includes bibliographical references (page 90). Data integration is at the heart of a significant portion of current information system implementations. As companies continue to move towards a diverse, growing set of Commercial Off the Shelf (COTS) applications to fulfill their information technology needs, the need to integrate data between them continues to increase. In addition, these diverse application portfolios are becoming more geographically dispersed as more software is provided under the Software as a Service (SaaS) model and companies continue to move their internal data centers to cloud-based computing. As the growth of data integration activities continues, several prominent data integration patterns have emerged, and commercial software packages have been created covering each of the following patterns: 1. bulk and/or batch data extraction and delivery (ETL, ELT, etc.); 2. messaging / message-oriented data movement; 3. granular, low-latency data capture and propagation (data synchronization). As the data integration landscape within and between organizations becomes larger and more complex, opportunities exist to streamline aspects of the data integration process not covered by current toolsets, including: 1. extensibility by third parties (many COTS integration toolsets today are difficult if not impossible to extend); 2. capabilities to handle different types of structured data, from relational to hierarchical to graph models; 3. enhanced modeling capabilities through data visualization and modeling techniques and tools; 4. capabilities for automated unit testing of integrations; 5. a unified toolset that covers all three patterns, allowing an enterprise to implement the pattern that best suits the business need of the specific scenario; 6. a Web-based toolset that allows configuration, management and deployment via Web technologies, making application deployment and integration geographically independent. While discussing these challenges with a large Fortune 500 client, they expressed the need for an enhanced data integration toolset that would allow them to accomplish such tasks. Given this request, the Object Process Methodology (OPM) and the Opcat toolset were used to begin the design of a data integration toolset that could fulfill these needs. As part of this design process, lessons learned covering both the use of OPM in software design projects and enhancement requests for the Opcat toolset were documented.
3

Bergami, Giacomo. "A new Nested Graph Model for Data Integration." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amsdottorato.unibo.it/8348/1/bergami_giacomo_tesi.pdf.

Abstract:
Although graph data has gained increasing interest in several fields, no data model suitable for both querying and integrating differently structured graph and (semi)structured data has yet been conceived. Possible reasons are the lack of operators allowing combinations of (multiple) graphs in current graph query languages (graph joins), and graph data structures allowing neither data integration nor nested multidimensional representations (graph nesting). In order to make such data integration possible, this thesis proposes a novel model (General Semistructured data Model) allowing the representation of both graphs and arbitrarily nested contents (e.g., one node can be contained by more than one parent node), thus allowing the definition of a nested graph model, where both vertices and edges may include (overlapping) graphs. We provide two graph join algorithms (Graph Conjunctive Equijoin Algorithm and Graph Conjunctive Less-equal Algorithm) and one graph nesting algorithm (Two HOp Separated Patterns). Their evaluation on top of our secondary-memory representation showed the inefficiency of existing query languages’ query plans on top of their respective data models (relational, graph and document-oriented). In all three algorithms, the enhancement was made possible by using an adjacency-list graph representation, thus reducing the cost of joining the vertices with their respective outgoing (or ingoing) edges, and by associating hash values with both vertices and edges. As a secondary outcome of this thesis, a general data integration scenario is provided where both graph data and other semistructured and structured data can be represented and integrated into the General Semistructured data Model. A new query language (General Semistructured Query Language) demonstrates the feasibility of this approach over the former data model, also allowing both graph joins and graph nestings to be expressed. This language is also capable of representing both traversal and data manipulation operators.
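The two performance levers the abstract credits, an adjacency-list representation and hash values on vertices and edges, can be illustrated with a generic hash-partitioned vertex equijoin. This sketch is not the thesis's Graph Conjunctive Equijoin Algorithm; it only shows the underlying technique, and all names are invented.

```python
# Generic sketch of a hash-assisted graph equijoin: vertices of graph A are
# bucketed by a hash of the join attribute, so each vertex of graph B is
# compared only against its bucket; every joined pair keeps both adjacency
# lists so traversal needs no extra lookups afterwards.
from collections import defaultdict

def graph_equijoin(vertices_a, vertices_b, key):
    """vertices_*: dict vertex_id -> {"attrs": {...}, "out": [edges...]}."""
    buckets = defaultdict(list)
    for vid, v in vertices_a.items():
        buckets[hash(v["attrs"][key])].append((vid, v))
    for wid, w in vertices_b.items():
        for vid, v in buckets.get(hash(w["attrs"][key]), []):
            if v["attrs"][key] == w["attrs"][key]:    # guard hash collisions
                yield vid, wid, v["out"] + w["out"]

a = {1: {"attrs": {"name": "x"}, "out": [(1, 2)]}}
b = {9: {"attrs": {"name": "x"}, "out": [(9, 7)]}}
print(list(graph_equijoin(a, b, "name")))  # [(1, 9, [(1, 2), (9, 7)])]
```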
4

Nagyová, Barbora. "Data integration in large enterprises." Master's thesis, Vysoká škola ekonomická v Praze, 2015. http://www.nusl.cz/ntk/nusl-203918.

Abstract:
Data Integration is currently an important and complex topic for many companies, because a good, working Data Integration solution can bring multiple advantages over competitors. Data Integration is usually executed in the form of a project, which can easily fail. In order to decrease the risks and negative impact of a failed Data Integration project, there needs to be good project management, Data Integration knowledge and the right technology in place. This thesis provides a framework for setting up a good Data Integration solution. The framework is developed based on current theory, currently available Data Integration tools, and opinions provided by experts who have worked in the field for at least seven years and have proven their skills in a successful Data Integration project. This thesis does not guarantee the development of the right Data Integration solution, but it does provide guidance on how to deal with a Data Integration project in a large enterprise. The thesis is structured into seven chapters. The first chapter gives an overview of the thesis: its scope, goals, assumptions and expected value. The second chapter describes Data Management and basic Data Integration theory in order to distinguish the two topics and explain the relationship between them. The third chapter focuses purely on Data Integration theory that should be known by everyone who participates in a Data Integration project. The fourth chapter analyses features of the Data Integration solutions currently available on the market and provides an overview of the most common and necessary functionalities. Chapter five covers the practical part of the thesis, where the Data Integration framework is designed based on findings from previous chapters and interviews with experts in the field. Chapter six then applies the framework to a real, working (anonymized) Data Integration solution, highlights the gaps between the framework and the solution, and provides guidance on how to deal with them. Chapter seven provides a summary, personal opinion and outlook.
5

Zhu, Cheng-Feng. "CAD/CSPP/CAM integration using feature-based component data model." Thesis, University of Nottingham, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.362993.

6

Nock, Alison Heidi. "The integration of coastal flooding into an ArcFLOOD data model." Thesis, University of Plymouth, 2014. http://hdl.handle.net/10026.1/3199.

Abstract:
With the impact of global climate change, the speedy, intelligent and accessible dissemination of coastal flood predictions from a number of modelling tools, at a range of temporal and spatial scales, becomes increasingly important for policy decision makers. This thesis provides a novel approach to integrating coastal flood data into an ArcFLOOD data model to improve the analysis, assessment and mitigation of potential flood risk in coastal zones. This novel methodology has improved the accessibility, dissemination and visualisation of coastal flood risk. The results were condensed into spatial information flows, data model schematic diagrams and XML schema for end-user extension, customisation and spatial analysis. More importantly, software developers can now use these applications to develop rich internet applications with little knowledge of numerical flood modelling systems. Specifically, this work has developed a coastal flooding geodatabase based upon the amalgamation, reconditioning and analysis of numerical flood modelling. In this research, a distinct lack of Geographic Information Systems (GIS) data modelling for coastal flooding prediction was identified in the literature. A schema was developed to provide the linkage between numerical flood modelling, flood risk assessment and information technology (IT) by extending the ESRI ArcGIS Marine Data Model (MDM) to include coastal flooding. The results of a linked hybrid hydrodynamic-morphological numerical flood model were used to define the time-series representation of a coastal flood in the schema. The results generated from GIS spatial analyses have improved the interpretation of numerical flood modelling output by effectively mapping the flood risk in the study site, with an improved definition according to the time-series duration of a flood. The improved results include flood water depth at a point and flood water increase, which equates to the difference in significant wave height for each time step of coastal flooding. The flood risk mapping provided has indicated the potential risk to infrastructure and property and depicted the failure of flood defence structures. In the wider context, the results have been provided to allow knowledge transfer to a range of coastal flooding end-users.
7

Carvalhais, Nuno Miguel Matias. "Iberian peninsula ecosystem carbon fluxes: a model-data integration study." Doctoral thesis, Faculdade de Ciências e Tecnologia, 2010. http://hdl.handle.net/10362/5384.

Abstract:
Dissertation presented to obtain the degree of Doctor in Environmental Engineering at the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia. Terrestrial ecosystems play a key role within the context of the global carbon cycle. Characterizing and understanding ecosystem-level responses and feedbacks to climate drivers is essential for diagnostic purposes as well as climate modelling projections. Consequently, numerous modelling and data-driven approaches have emerged, aiming at the appraisal of biosphere-atmosphere carbon fluxes. The combination of biogeochemical models with observations of ecosystem carbon fluxes in a model-data integration framework enables the recognition of potential limitations of modelling approaches. In this regard, the steady-state assumption represents a general approach in the initialization routines of biogeochemical models that entails limitations in the ability to simulate net ecosystem fluxes and in model development exercises. The present research addresses the generalized assumption of initial steady-state conditions in ecosystem carbon pools for modelling carbon fluxes of terrestrial ecosystems, from local to regional scales. At the local scale, this study aims to evaluate the implications of equilibrium assumptions on modelling performance and on optimized parameters and uncertainty estimates based on a model-data integration approach. These results further aim to support the estimates of regional net ecosystem fluxes, following a bottom-up approach, by focusing on parameters governing net primary production (NPP) and heterotrophic respiration (RH) processes, which determine the simulation of the net ecosystem production fluxes in the CASA model. An underlying goal of the current research is addressed by focusing on Mediterranean ecosystem types, or ecosystems potentially present in Iberia, and evaluating the general ability of terrestrial biogeochemical models to estimate net ecosystem fluxes for the Iberian Peninsula region. At regional scales, and given the limited information available, the main objective is to minimize the implications of the initial conditions in the evaluation of the temporal dynamics of net ecosystem fluxes. Inverse model parameter optimizations at site level are constrained by eddy-covariance measurements of net ecosystem fluxes and driven by local observations of meteorological variables and vegetation biophysical variables from remote sensing products. Optimizations under steady-state conditions show significantly poorer model performance and higher parameter uncertainties when compared to optimizations under relaxed initial conditions. In addition, assuming initial steady-state conditions tends to bias parameter retrievals, reducing NPP sensitivity to water availability and RH responses to temperature, in order to prescribe sink conditions. But nonequilibrium conditions can be experienced in soil and/or vegetation carbon pools under alternative underlying dynamics, which are solely discernible through the integration of additional information sources, circumventing equifinality issues. Funding: Portuguese Foundation for Science and Technology (FCT) and the European Union under Operational Program "Science and Innovation" (POCI 2010), PhD grant ref. SFRH/BD/6517/2001, co-sponsored by the European Social Fund; further support for the final months of the PhD was provided by a Max Planck Society research fellowship.
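The steady-state initialization questioned here has a compact textbook form for a generic first-order carbon pool (a standard simplification, not necessarily the exact CASA formulation): spin-up sets the initial pool to the equilibrium implied by mean inputs, which forces the simulated net flux to start near zero regardless of site history.

```latex
% Generic first-order pool with input I and turnover rate k (illustrative):
\frac{dC}{dt} = I - kC
\qquad\Longrightarrow\qquad
C^{*} = \frac{I}{k}
% Initializing C(0) = C^{*} makes dC/dt = 0 at t = 0, so the net ecosystem
% flux starts at equilibrium; this is the assumption the thesis relaxes.
```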
8

Bella, Sanjuán Antonio. "Model Integration in Data Mining: From Local to Global Decisions." Doctoral thesis, Universitat Politècnica de València, 2012. http://hdl.handle.net/10251/16964.

Abstract:
Machine learning is a research area that provides algorithms and techniques capable of learning automatically from past experience. These techniques are essential in the area of knowledge discovery from databases (KDD), whose central phase is typically known as data mining. The KDD process can be seen as the learning of a model from past data (model generation) and the application of this model to new data (model deployment). The model deployment phase is very important, because users and, especially, organizations make decisions depending on the results of the models. In general, each model is learned independently, trying to obtain the best (local) result. However, when several models are used together, some of them may depend on one another (for example, the outputs of one model may be the inputs of another) and constraints appear. In this scenario, the best local decision for each problem treated individually may not yield the best global result, or the result obtained may not be valid if it does not satisfy the problem constraints. The area of customer relationship management (CRM) has given rise to real problems where data mining and (global) optimization must be used together. For example, product prescription problems try to distinguish or rank the products to be offered to each customer (or, symmetrically, to choose the customers to whom a product should be offered). These areas (KDD, CRM) lack tools for a more complete view of the problems and a better integration of the models according to their interdependencies and the global and local constraints. The classical application of data mining to product prescription problems has usually… Bella Sanjuán, A. (2012). Model Integration in Data Mining: From Local to Global Decisions [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/16964
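The local-versus-global point can be made concrete with a toy example: two models each pick their locally best option, but a shared (global) constraint makes that combination infeasible, while a joint choice satisfies it. The following Python sketch is purely illustrative; all names and numbers are invented.

```python
# Toy illustration of local vs. global decisions under a shared constraint:
# each model's best (local) option violates a global budget when combined;
# a joint (global) choice respects it.
from itertools import product

options = {"model_a": {"offer_x": 9, "offer_y": 6},   # local scores
           "model_b": {"offer_x": 8, "offer_y": 7}}
cost = {"offer_x": 5, "offer_y": 3}
budget = 8                                            # global constraint

local = {m: max(o, key=o.get) for m, o in options.items()}
print(local, sum(cost[c] for c in local.values()) <= budget)  # ... False

feasible = [p for p in product(*[o.keys() for o in options.values()])
            if sum(cost[c] for c in p) <= budget]
best = max(feasible,
           key=lambda p: sum(o[c] for o, c in zip(options.values(), p)))
print(best)  # ('offer_x', 'offer_y'): globally best feasible combination
```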
9

Kovács, Zsolt. "The integration of product data with workflow management systems through a common data model." Thesis, University of Bristol, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.312062.

10

Mireku, Kwakye Michael. "A Practical Approach to Merging Multidimensional Data Models." Thesis, Université d'Ottawa / University of Ottawa, 2011. http://hdl.handle.net/10393/20457.

Abstract:
Schema merging is the process of incorporating data models into an integrated, consistent schema from which query solutions satisfying all incorporated models can be derived. The efficiency of such a process relies on the effective semantic representation of the chosen data models, as well as the mapping relationships between the elements of the source data models. Consider a scenario where, as a result of company mergers or acquisitions, a number of related, but possibly disparate, data marts need to be integrated into a global data warehouse. The ability to retrieve data across these disparate, but related, data marts poses an important challenge. Intuitively, forming an all-inclusive data warehouse includes the tedious tasks of identifying related fact and dimension table attributes, as well as the design of a schema merge algorithm for the integration. Additionally, the evaluation of the combined set of correct answers to queries, likely to be independently posed to such data marts, becomes difficult to achieve. Model management refers to a high-level, abstract programming language designed to efficiently manipulate schemas and mappings. In particular, model management operations such as match, compose mappings, apply functions and merge offer a way to handle the above-mentioned data integration problem within the domain of data warehousing. In this research, we introduce a methodology for the integration of star-schema source data marts into a single consolidated data warehouse based on model management. In our methodology, we develop three main streamlined steps to facilitate the generation of a global data warehouse: we adopt techniques for deriving attribute correspondences and for schema mapping discovery, and we formulate and design a merge algorithm based on multidimensional star schemas, which is the core contribution of this research. Our approach focuses on delivering a polynomial-time solution needed for the expected volume of data and its associated large-scale query processing. The experimental evaluation shows that an integrated schema, alongside instance data, can be derived based on the type of mappings adopted in the mapping discovery step. The adoption of Global-And-Local-As-View (GLAV) mapping models delivered a maximally-contained or exact representation of all fact and dimensional instance data tuples needed in query processing on the integrated data warehouse. Additionally, different forms of conflicts, such as semantic conflicts for related or unrelated dimension entities and descriptive conflicts for differing attribute data types, were encountered and resolved in the developed solution. Finally, this research has highlighted some critical and inherent issues regarding functional dependencies in mapping models, integrity constraints at the source data marts, and multi-valued dimension attributes. These issues were encountered during the integration of the source data marts, as has been the case when evaluating queries processed on the merged data warehouse against those on the independent data marts.
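As a rough illustration of the first of the three streamlined steps, deriving attribute correspondences, here is a naive name-based matcher for two star-schema data marts; real matchers also use data types, instance values and synonyms, and every name below is a hypothetical illustration rather than the thesis's actual method.

```python
# Naive name-based attribute matcher for two star-schema data marts:
# attributes whose normalized names coincide become correspondence candidates.
def attribute_correspondences(schema_a, schema_b):
    """schema_*: dict table name -> list of attribute names."""
    norm = lambda s: s.lower().replace("_", "")
    return [
        (f"{ta}.{a}", f"{tb}.{b}")
        for ta, attrs_a in schema_a.items()
        for tb, attrs_b in schema_b.items()
        for a in attrs_a
        for b in attrs_b
        if norm(a) == norm(b)
    ]

mart_a = {"FactSales": ["store_id", "date_key", "revenue"]}
mart_b = {"FACT_SALES": ["StoreID", "DateKey", "Revenue"]}
print(attribute_correspondences(mart_a, mart_b))
# [('FactSales.store_id', 'FACT_SALES.StoreID'), ...]
```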

Books on the topic "Data-model integration"

1

Kutsche, Ralf-Detlef, and Nikola Milanovic, eds. Model-Based Software and Data Integration. Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-78999-4.

2

Schrefl, Michael, ed. Metaclasses and their application: Data model tailoring and database integration. Springer, 1995.

3

MBSDI 2008 (2008 Berlin, Germany). Model-based software and data integration: First international workshop, MBSDI 2008, Berlin, Germany, April 1-3, 2008 : proceedings. Springer, 2008.

4

Backwell, John, ed. IT across the National Curriculum: A model for management and integration. Framework, 1990.

5

Burch, Thomas K. Model-Based Demography: Essays on Integrating Data, Technique and Theory. Springer Nature, 2018.

6

Fenves, Steven J., and National Institute of Standards and Technology (U.S.), eds. Master product model for the support of tighter integration of spatial and functional design. U.S. Dept. of Commerce, Technology Administration, National Institute of Standards and Technology, 2003.

7

Petrotechnical Open Software Corporation. POSC Epicentre Data Model (Software Integration Platform Specification). Prentice Hall, 1994.

8

Supcik, Jacques. Odéon: An object-oriented data model and its integration in the Oberon system. 1999.

9

FOKUS, Fraunhofer, Ralf-Detlev Kutsch, Tom Ritter, Michael Wagner, and Christian Hein. Symposium on Model Driven Engineering: Software and Data Integration, Process Based Approaches and Tools: Fourth Workshop on Model-Driven Tool and Process Integration, Birmingham, UK, June 7, 2011; Proceedings of the Third Workshop on Model-Based Software and Data Integration, Birmingham, UK, June 7, 2011. Fraunhofer IRB Verlag, 2012.

10

Milanovic, Nikola, and Ralf-Detlef Kutsche. Model-Based Software and Data Integration: First International Workshop, MBSDI 2008, Berlin, Germany, April 1-3, 2008, Proceedings. Springer London, Limited, 2008.


Book chapters on the topic "Data-model integration"

1

Lu, Qiuchen, Xiang Xie, Ajith Kumar Parlikad, Jennifer Schooling, and Michael Pitt. "Data–model integration layer." In Digital Twins in the Built Environment. ICE Publishing, 2022. http://dx.doi.org/10.1680/dtbe.65802.161.

2

Janga, Prudhvi, and Karen C. Davis. "Schema Extraction and Integration of Heterogeneous XML Document Collections." In Model and Data Engineering. Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-41366-7_15.

3

Liu, Hai, Yunzhen Liu, Qunhui Wu, and Shilong Ma. "A Heterogeneous Data Integration Model." In Geo-Informatics in Resource Management and Sustainable Ecosystem. Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-45025-9_31.

4

Eddy, James A., and Nathan D. Price. "Biological Data Integration and Model Building." In Encyclopedia of Complexity and Systems Science. Springer New York, 2013. http://dx.doi.org/10.1007/978-3-642-27737-5_34-3.

5

Eddy, James A., and Nathan D. Price. "Biological Data Integration and Model Building." In Encyclopedia of Complexity and Systems Science. Springer New York, 2009. http://dx.doi.org/10.1007/978-0-387-30440-3_34.

6

Chromiak, Michał, and Krzysztof Stencel. "A Data Model for Heterogeneous Data Integration Architecture." In Communications in Computer and Information Science. Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-06932-6_53.

7

Mena, Manel, Javier Criado, Luis Iribarne, and Antonio Corral. "Digital Dices: Towards the Integration of Cyber-Physical Systems Merging the Web of Things and Microservices." In Model and Data Engineering. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-32065-2_14.

8

Pokorný, Jaroslav. "Data Integration in a Multi-model Environment." In Information Integration and Web Intelligence. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-48316-5_14.

9

Badiru, Adedeji B. "Application of DEJI Systems Model to Data Integration." In Data Analytics. CRC Press, 2020. http://dx.doi.org/10.1201/9781003083146-8.

10

Xiao, Guorong. "Study of Data Integration Model on Securities." In Lecture Notes in Electrical Engineering. Springer Netherlands, 2012. http://dx.doi.org/10.1007/978-94-007-2169-2_23.


Conference papers on the topic "Data-model integration"

1

Hasibuan, Nelly Astuti, Poltak Sihombing, Erna Budhiarti Nababan, and Amalia. "High-Dimensional Data Integration Model to Improve the Plants Quality." In 2024 IEEE International Conference on Control & Automation, Electronics, Robotics, Internet of Things, and Artificial Intelligence (CERIA). IEEE, 2024. https://doi.org/10.1109/ceria64726.2024.10914956.

2

Nakatani, Tomohiro, Naoyuki Kamo, Marc Delcroix, and Shoko Araki. "Multi-Stream Diffusion Model for Probabilistic Integration of Model-Based and Data-Driven Speech Enhancement." In 2024 18th International Workshop on Acoustic Signal Enhancement (IWAENC). IEEE, 2024. http://dx.doi.org/10.1109/iwaenc61483.2024.10694664.

3

Barkallah, Bassem, and Samir Moalla. "Metadata driven integration model for large scale data integration." In 2009 IEEE/ACS International Conference on Computer Systems and Applications. IEEE, 2009. http://dx.doi.org/10.1109/aiccsa.2009.5069296.

4

Luder, Arndt, Konstantin Kirchheim, Johanna Pauly, Stefan Biffl, Felix Rinker, and Laura Waltersdorfer. "Supporting the Data Model Integrator in an Engineering Network by Automating Data Integration." In 2019 IEEE 17th International Conference on Industrial Informatics (INDIN). IEEE, 2019. http://dx.doi.org/10.1109/indin41052.2019.8972174.

5

Zhang, Zhe, Min Song, Daxin Liu, Zhengxian Wei, Hongbin Wang, and Jun Ni. "Data-Structure-Model for Data Integration in Distributed Systems." In 2008 International Multi-symposiums on Computer and Computational Sciences (IMSCCS). IEEE, 2008. http://dx.doi.org/10.1109/imsccs.2008.45.

6

Chen, Ning. "Integration: Bridge Gap between Data Model and Process Model." In 2008 International Conference on Convergence and Hybrid Information Technology. IEEE, 2008. http://dx.doi.org/10.1109/ichit.2008.227.

7

Meng, Jian, and Jinlong Chen. "A Mashup Model for Distributed Data Integration." In 2009 International Conference on Management of e-Commerce and e-Government. IEEE, 2009. http://dx.doi.org/10.1109/icmecg.2009.46.

8

Heldt, T., and G. C. Verghese. "Model-based data integration in clinical environments." In 2010 32nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC 2010). IEEE, 2010. http://dx.doi.org/10.1109/iembs.2010.5626101.

9

Wan, Lin, and Rongrong Ren. "A VGI data integration framework based on linked data model." In International Conference on Intelligent Earth Observing and Applications, edited by Guoqing Zhou and Chuanli Kang. SPIE, 2015. http://dx.doi.org/10.1117/12.2211068.

10

Klímek, Jakub, and Martin Nečaský. "Integration and evolution of XML data via common data model." In the 1st International Workshop. ACM Press, 2010. http://dx.doi.org/10.1145/1754239.1754283.


Reports on the topic "Data-model integration"

1

Swinhoe, Martyn Thomas. Model development and data uncertainty integration. Office of Scientific and Technical Information (OSTI), 2015. http://dx.doi.org/10.2172/1227409.

2

Swinhoe, Martyn Thomas. Model development and data uncertainty integration. Office of Scientific and Technical Information (OSTI), 2015. http://dx.doi.org/10.2172/1227933.

3

Polo, J., S. Wilbert, J. A. Ruiz-Arias, et al. Integration of ground measurements with model-derived data. IEA Solar Heating and Cooling Programme, 2015. http://dx.doi.org/10.18777/ieashc-task46-2015-0003.

4

Anderson, Alexander, Eric Stephan, and Thomas McDermott. Enabling Data Exchange and Data Integration with the Common Information Model. Office of Scientific and Technical Information (OSTI), 2022. http://dx.doi.org/10.2172/1922947.

5

Lieberman, Joshua, ed. Model for Underground Data Definition and Integration (MUDDI) Engineering Report. Open Geospatial Consortium, Inc., 2019. http://dx.doi.org/10.62973/17-090r1.

6

Al Rashdan, Ahmad, Jeren Browning, and Christopher Ritter. Data Integration Aggregated Model and Ontology for Nuclear Deployment (DIAMOND): Preliminary Model and Ontology. Office of Scientific and Technical Information (OSTI), 2019. http://dx.doi.org/10.2172/2439922.

7

Danner, William F., David T. Sanford, and Yuhwei Yang. STEP (Standard for the Exchange of Product Model Data) resource integration:. National Institute of Standards and Technology, 1991. http://dx.doi.org/10.6028/nist.ir.4528.

8

Lieberman, Joshua, ed. MUDDI v1.1 (Model for Underground Data Definition and Integration) Engineering Report. Open Geospatial Consortium, Inc., 2021. http://dx.doi.org/10.62973/19-081.

9

Barr, S., D. O. ReVelle, C. Y. J. Kao, and E. K. Bigg. Data/model integration for vertical mixing in the stable Arctic boundary layer. Office of Scientific and Technical Information (OSTI), 1998. http://dx.doi.org/10.2172/334240.

10

Simakhodskiy, Igor. Mapping Integration Definition for Function Modeling (IDEF0) model into CASE Data Interchange Format (CDIF) transfer file. National Institute of Standards and Technology, 1995. http://dx.doi.org/10.6028/nist.ir.5719.
