Academic literature on the topic 'Data Quality Model'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Data Quality Model.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Data Quality Model"

1

Hejazi, Aylin, Neda Abdolvand, and Saeedeh Rajaee Harandi. "Assessing the Importance of Data Factors of Data Quality Model in the Business Intelligence Area." International Journal of Trade, Economics and Finance 8, no. 2 (April 2017): 102–8. http://dx.doi.org/10.18178/ijtef.2017.8.2.547.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ryu, Kyung Seok, Joo Seok Park, and Jae Hong Park. "A Data Quality Management Maturity Model." ETRI Journal 28, no. 2 (April 10, 2006): 191–204. http://dx.doi.org/10.4218/etrij.06.0105.0026.

3

Mounyol, Roger. "A Data Model for Quality Control." IFAC Proceedings Volumes 25, no. 8 (June 1992): 181–87. http://dx.doi.org/10.1016/s1474-6670(17)54062-8.

4

Karplus, P. A., and K. Diederichs. "Linking Crystallographic Model and Data Quality." Science 336, no. 6084 (May 24, 2012): 1030–33. http://dx.doi.org/10.1126/science.1218231.

5

Duan, Luling, and Ninggui Duan. "Calibration model for air quality data." IOP Conference Series: Earth and Environmental Science 450 (March 24, 2020): 012005. http://dx.doi.org/10.1088/1755-1315/450/1/012005.

6

Aljumaili, Mustafa, Ramin Karim, and Phillip Tretten. "Metadata-based data quality assessment." VINE Journal of Information and Knowledge Management Systems 46, no. 2 (May 9, 2016): 232–50. http://dx.doi.org/10.1108/vjikms-11-2015-0059.

Abstract:
Purpose: The purpose of this paper is to develop a data quality (DQ) assessment model based on content analysis and metadata analysis. Design/methodology/approach: A literature review of DQ assessment models was conducted, and a study of DQ key performance indicators (KPIs) was carried out. Finally, the proposed model was developed and applied in a case study. Findings: The results of this study show that the metadata hold important information about DQ in a database and can be used to assess DQ to provide decision support for decision makers. Originality/value: There are many DQ assessment models in the literature; however, metadata are not considered in them. The model developed in this study is based on metadata, in addition to content analysis, to produce a quantitative DQ assessment.
7

Merino, Jorge, Ismael Caballero, Bibiano Rivas, Manuel Serrano, and Mario Piattini. "A Data Quality in Use model for Big Data." Future Generation Computer Systems 63 (October 2016): 123–30. http://dx.doi.org/10.1016/j.future.2015.11.024.

8

Radulovic, Filip, Nandana Mihindukulasooriya, Raúl García-Castro, and Asunción Gómez-Pérez. "A comprehensive quality model for Linked Data." Semantic Web 9, no. 1 (November 30, 2017): 3–24. http://dx.doi.org/10.3233/sw-170267.

9

Artamonov, Igor, Antonina Deniskina, Vladimir Filatov, and Olga Vasilyeva. "Quality management assurance using data integrity model." MATEC Web of Conferences 265 (2019): 07031. http://dx.doi.org/10.1051/matecconf/201926507031.

Abstract:
The high complexity of modern quality management systems (QMS) leads to the need for new tools aimed at supporting them and implementing continuous improvement. The key process for managing the quality level is quality assurance. The paper investigates process information integrity and consistency as properties of the quality assurance process. A basic approach for organizing data integrity metrics is described, and an approach for QMS requirements assurance based on the SysML modelling language is proposed.
10

Tang, Xiaoqing, and Hu Yun. "Data model for quality in product lifecycle." Computers in Industry 59, no. 2-3 (March 2008): 167–79. http://dx.doi.org/10.1016/j.compind.2007.06.011.


Dissertations / Theses on the topic "Data Quality Model"

1

Rudraraju, Nitesh Varma, and Varun Boyanapally. "Data Quality Model for Machine Learning." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-18498.

Abstract:
Context: Machine learning is a part of artificial intelligence, and the area is now growing continuously. Most internet-related services, such as social media, email spam filtering, e-commerce sites, and search engines, now use machine learning. The quality of machine learning output relies on the input data, so the input data is crucial, and good-quality input data can give a better outcome from the machine learning system. To achieve quality data, a data scientist can use a data quality model on the data for machine learning. A data quality model can help data scientists monitor and control the input data of machine learning, but no considerable amount of research has been done on data quality attributes and data quality models for machine learning. Objectives: The primary objectives of this paper are to find and understand the state of the art and state of practice on data quality attributes for machine learning, and to develop a data quality model for machine learning in collaboration with data scientists. Methods: This paper consists of two studies: 1) a literature review across different databases to identify literature on data quality attributes and data quality models for machine learning; 2) an in-depth interview study, conducted to better understand and verify the data quality attributes identified in the literature review, carried out in collaboration with data scientists from multiple locations. In total, 15 interviews were performed, and based on the results we proposed a data quality model reflecting these interviewees' perspectives. Results: We identified 16 data quality attributes as important, based on the perspective of the experienced data scientists interviewed in this study. With these selected data quality attributes, we proposed a data quality model with which the quality of data for machine learning can be monitored and improved by data scientists, and the effects of these data quality attributes on machine learning are also stated. Conclusion: This study signifies the importance of data quality, for which we proposed a data quality model for machine learning based on the industrial experience of data scientists. It benefits all machine learning practitioners and data scientists who intend to identify quality data for machine learning. To prove that the data quality attributes in the data quality model are important, a further experiment could be conducted, as proposed in future work.
2

He, Ying. "Spatial data quality management." Thesis, Surveying & Spatial Information Systems, Faculty of Engineering, University of New South Wales, 2008. http://handle.unsw.edu.au/1959.4/43323.

Abstract:
The applications of geographic information systems (GIS) in various areas have highlighted the importance of data quality. Data quality research has been given a priority by GIS academics for three decades. However, the outcomes of data quality research have not been sufficiently translated into practical applications. Users still need a GIS capable of storing, managing and manipulating data quality information. To fill this gap, this research aims to investigate how we can develop a tool that effectively and efficiently manages data quality information to aid data users to better understand and assess the quality of their GIS outputs. Specifically, this thesis aims: 1. To develop a framework for establishing a systematic linkage between data quality indicators and appropriate uncertainty models; 2. To propose an object-oriented data quality model for organising and documenting data quality information; 3. To create data quality schemas for defining and storing the contents of metadata databases; 4. To develop a new conceptual model of data quality management; 5. To develop and implement a prototype system for enhancing the capability of data quality management in commercial GIS. Based on reviews of error and uncertainty modelling in the literature, a conceptual framework has been developed to establish the systematic linkage between data quality elements and appropriate error and uncertainty models. To overcome the limitations identified in the review and satisfy a series of requirements for representing data quality, a new object-oriented data quality model has been proposed. It enables data quality information to be documented and stored in a multi-level structure and to be integrally linked with spatial data to allow access, processing and graphic visualisation. The conceptual model for data quality management is proposed where a data quality storage model, uncertainty models and visualisation methods are three basic components. 
This model establishes the processes involved in managing data quality, emphasising the integration of uncertainty modelling and visualisation techniques. The above studies lay the theoretical foundations for the development of a prototype system with the ability to manage data quality. An object-oriented approach, database technology and programming technology have been integrated to design and implement the prototype system within the ESRI ArcGIS software. The object-oriented approach allows the prototype to be developed in a more flexible and easily maintained manner. The prototype allows users to browse and access data quality information at different levels. Moreover, a set of error and uncertainty models is embedded within the system. With the prototype, data quality elements can be extracted from the database and automatically linked with the appropriate error and uncertainty models, as well as with their implications in the form of simple maps. This function results in a set of different uncertainty models being proposed for users to choose from when assessing how the uncertainty inherent in the data can affect their specific application. It will significantly increase users' confidence in using data for a particular situation. To demonstrate the enhanced capability of the prototype, the system has been tested against real data. The implementation has shown that the prototype can efficiently assist data users, especially non-expert users, to better understand data quality and utilise it in a more practical way. The methodologies and approaches for managing quality information presented in this thesis should serve as an impetus for supporting further research.
3

Tsai, Eva Y. (Eva Yi-hua). "Inter-database data quality management : a relational-model based approach." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/40202.

4

Mallur, Vikram. "A Model for Managing Data Integrity." Thesis, Université d'Ottawa / University of Ottawa, 2011. http://hdl.handle.net/10393/20233.

Abstract:
Consistent, accurate and timely data are essential to the functioning of a modern organization. Managing the integrity of an organization’s data assets in a systematic manner is a challenging task in the face of continuous update, transformation and processing to support business operations. Classic approaches to constraint-based integrity focus on logical consistency within a database and reject any transaction that violates consistency, but leave unresolved how to fix or manage violations. More ad hoc approaches focus on the accuracy of the data and attempt to clean data assets after the fact, using queries to flag records with potential violations and using manual efforts to repair. Neither approach satisfactorily addresses the problem from an organizational point of view. In this thesis, we provide a conceptual model of constraint-based integrity management (CBIM) that flexibly combines both approaches in a systematic manner to provide improved integrity management. We perform a gap analysis that examines the criteria that are desirable for efficient management of data integrity. Our approach involves creating a Data Integrity Zone and an On Deck Zone in the database for separating the clean data from data that violates integrity constraints. We provide tool support for specifying constraints in a tabular form and generating triggers that flag violations of dependencies. We validate this by performing case studies on two systems used to manage healthcare data: PAL-IS and iMED-Learn. Our case studies show that using views to implement the zones does not cause any significant increase in the running time of a process.
5

Akintoye, Samson Busuyi. "Quality of service in cloud computing: Data model; resource allocation; and data availability and security." University of the Western Cape, 2019. http://hdl.handle.net/11394/7066.

Abstract:
Philosophiae Doctor - PhD
Recently, massive migration of enterprise applications to the cloud has been recorded in the Information Technology (IT) world. The number of cloud providers offering their services and the number of cloud customers interested in using such services are rapidly increasing. However, one of the challenges of cloud computing is Quality-of-Service management, which denotes the level of performance, reliability, and availability offered by cloud service providers. Quality-of-Service is fundamental to cloud service providers, who must find the right tradeoff between Quality-of-Service levels and operational cost. In order to find the optimal tradeoff, cloud service providers need to comply with service level agreement contracts, which define an agreement between cloud service providers and cloud customers. Service level agreements are expressed in terms of quality of service (QoS) parameters such as availability, scalability, performance, and service cost. On the other hand, if the cloud service provider violates the service level agreement contract, the cloud customer can file for damages and claim penalties that can result in revenue losses, and possibly detriment to the provider's reputation. Thus, the goal of any cloud service provider is to meet the service level agreements while reducing the total cost of offering its services.
6

Malazizi, Ladan. "Development of Artificial Intelligence-based In-Silico Toxicity Models. Data Quality Analysis and Model Performance Enhancement through Data Generation." Thesis, University of Bradford, 2008. http://hdl.handle.net/10454/4262.

Abstract:
Toxic compounds, such as pesticides, are routinely tested against a range of aquatic, avian and mammalian species as part of the registration process. The need for reducing dependence on animal testing has led to an increasing interest in alternative methods such as in silico modelling. QSAR (Quantitative Structure-Activity Relationship)-based models are already in use for predicting physicochemical properties, environmental fate, eco-toxicological effects, and specific biological endpoints for a wide range of chemicals. Data play an important role in modelling QSARs and also in result analysis for toxicity testing processes. This research addresses a number of issues in predictive toxicology. One issue is the problem of data quality. Although a large amount of toxicity data is available from online sources, this data may contain unreliable samples and may be of low quality. Its presentation also might not be consistent across different sources, which makes the access, interpretation and comparison of the information difficult. To address this issue we started with detailed investigation and experimental work on the DEMETRA data. The DEMETRA datasets were produced by the EC-funded project DEMETRA. Based on the investigation, experiments and the results obtained, the author identified a number of data quality criteria in order to provide a solution for data evaluation in the toxicology domain. An algorithm has also been proposed to assess data quality before modelling. Another issue considered in the thesis was missing values in datasets in the toxicology domain. The Least Squares Method for a paired dataset and Serial Correlation for a single-version dataset provided solutions to the problem in two different situations. A procedural algorithm using these two methods has been proposed in order to overcome the problem of missing values. Another issue addressed in this thesis was the modelling of multi-class datasets in which a severe imbalance in the class sample distribution exists. Imbalanced data affect the performance of classifiers during the classification process. We have shown that as long as we understand how class members are constructed in dimensional space in each cluster, we can reform the distribution and provide more domain knowledge for the classifier.
7

Gol, Murat. "A New Field-Data Based EAF Model Applied to Power Quality Studies." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12611083/index.pdf.

Abstract:
Electric Arc Furnace (EAF) modeling has been a common research area to date. This thesis work proposes a new field-data based EAF-specific model. The data used in developing the proposed model is obtained from field measurements of instantaneous voltages and currents of the EAF plants. This model presents the dynamic behavior of the EAF system including all its parts: the EAF transformer, the secondary circuit, the moving electrodes, and the arc itself. It consists of a cascade-connected variable resistance and inductance combination to represent the variation in time of the fundamental frequency, and a current source in parallel with it to inject the harmonic and interharmonic content of the EAF current. The proposed model is capable of representing both AC and DC EAFs, whose controllers' set points are the impedance values seen from the low voltage (LV) side of the EAF transformer. The validity of the proposed model has been verified by comparing EMTDC/PSCAD simulations of the model with the field measurements. The results obtained have shown quite satisfactory correlation between the behavior of the proposed model and the actual EAF operation. To show the advantages of the model while developing FACTS solutions for power quality (PQ) problem mitigation of a given busbar supplying single- or multi-EAF installations, various applications are presented.
8

Castillo, Luis Felipe, Carlos Raymundo, and Francisco Dominguez Mateos. "Information architecture model for data governance initiatives in Peruvian universities." Association for Computing Machinery, Inc, 2017. http://hdl.handle.net/10757/656361.

Abstract:
The full text of this work is not available in the UPC Academic Repository due to restrictions imposed by the publisher.
This research revealed the need to design an information architecture model for data governance in order to reduce the gap between information technology and information management. The model is designed to balance the need to invest in technology with the ability to manage the information originated by the use of those technologies, and to measure with greater precision the generation of IT value through the use of quality information and user satisfaction. To test the model, a case study in the higher education sector in Peru is used to demonstrate successful data governance projects with this model.
9

Borglund, Erik. "A predictive model for attaining quality in recordkeeping." Licentiate thesis, Mid Sweden University, Department of Information Technology and Media, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-42.

Abstract:

Records are a subset of information and recordkeeping requirements demand that a record is managed with maintained authenticity and reliability, i.e. with high quality. Records are evidence of transactions and are used and managed in daily work processes. Records may be preserved for anything from milliseconds to eternity. With computer based information systems the electronic record was born: a record that is born digital. With electronic records problems regarding maintenance of authenticity and reliability have been identified. Electronic records are no longer physical entities as traditional records were. An electronic record is a logical entity that can be spread over different locations in a computer based information system. In this research the aim is to improve the possibility of reaching high quality in recordkeeping systems, i.e. to maintain reliability and authenticity of electronic records, which is necessary if electronic records are to be usable as evidence of transactions. Based on case studies and literature studies, a recordkeeping quality model is presented: a predictive model for attaining quality in recordkeeping. The recordkeeping quality model consists of four major concepts which are interrelated with each other: Electronic records, Records use, Electronic record quality, and Multidimensional perspective. The model is proposed for use when designing and developing computer based information systems which are required to be recordkeeping, systems which manage electronic records. In this research two results beside the recordkeeping quality model are emphasized. The first is that quality in recordkeeping must be seen in a multidimensional perspective, and the second is that recordkeeping systems are information systems with a partially unknown purpose.

10

Rogers, David R. "A model based approach for determining data quality metrics in combustion pressure measurement. A study into a quantitative based improvement in data quality." Thesis, University of Bradford, 2014. http://hdl.handle.net/10454/14100.

Abstract:
This thesis details a process for the development of reliable metrics that can be used to assess the quality of combustion pressure measurement data - important data used in the development of internal combustion engines. The approach employed in this study was a model-based technique, in conjunction with a simulation environment, producing data-based models from a number of strategically defined measurement points. The simulation environment was used to generate error data sets, from which models of calculated result responses were built. This data was then analysed to determine the results with the best response to error stimulation. The methodology developed allows a rapid prototyping phase in which newly developed result calculations may be simulated, tested and evaluated quickly and efficiently. Adopting these newly developed processes and procedures allowed an effective evaluation of several groups of result classifications with respect to the major sources of error encountered in typical combustion measurement procedures. In summary, the output of this work was that certain result groups could be shown to have an unreliable response to error simulation and could therefore be discounted quickly. These results were clearly identifiable from the data; hence, for the given errors, alternative methods to identify the error sources are proposed within this thesis. Other results, however, had a predictable response to certain error stimuli, so it was feasible to state the possibility of using these results in data quality assessment, or at least to establish the boundaries surrounding their application for this usage. Interactions in responses were also clearly visible using the model-based sensitivity analysis proposed.
The output of this work provides a solid foundation of information from which further work and investigation would be feasible, in order to achieve the ultimate goal of a full set of metrics by which combustion data quality could be accurately and objectively assessed.

Books on the topic "Data Quality Model"

1

Brown, Linfield C. Computer program documentation for the enhanced stream water quality model QUAL2E. Athens, Ga: Environmental Research Laboratory, Office of Research and Development, U.S. Environmental Protection Agency, 1985.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Schere, Kenneth L. EPA Regional Oxidant Model (ROM2.0): Evaluation on 1980 NEROS data bases. Research Triangle Park, NC: U.S. Environmental Protection Agency, Atmospheric Research and Exposure Assessment Laboratory, 1989.

3

Model-based failure-modes-and-effects analysis and its application to aircraft subsystems. Heidelberg: AKA. Verlag, 2009.

4

Ahmad, Anees. Development of software to model AXAF-I image quality: Final report. [Washington, DC: National Aeronautics and Space Administration, 1996.

5

Wells, Scott A. Modeling the Tualatin River system including Scogging Creek and Hagg Lake: Model description, geometry, and forcing data. [Corvallis, Or.]: Oregon Water Resources Research Institute, Oregon State University, 1992.

6

Truppi, Lawrence E. EPA complex terrain model development: Description of a computer data base from the Full Scale Plume Study, Tracy Power Plant, Nevada. Research Triangle Park, NC: U.S. Environmental Protection Agency, Atmospheric Sciences Research Laboratory, 1987.

7

Truppi, Lawrence E. EPA complex terrain model development: Description of a computer data base from Small Hill Impaction Study No. 2, Hogback Ridge, New Mexico. Research Triangle Park, NC: U.S. Environmental Protection Agency, Atmospheric Sciences Research Laboratory, 1986.

8

Ahmad, Anees. Development of software to model AXAF-I image quality: Final report, contract no. NAS8-38609. [Washington, DC: National Aeronautics and Space Administration, 1996.

9

Allison, Jerry D. MINTEQA2/PRODEFA2, a geochemical assessment model for environmental systems: Version 3.0 user's manual. Athens, Ga: Environmental Research Laboratory, Office of Research and Development, U.S. Environmental Protection Agency, 1991.


Book chapters on the topic "Data Quality Model"

1

Taleb, Ikbal, Mohamed Adel Serhani, and Rachida Dssouli. "Big Data Quality: A Data Quality Profiling Model." In Services – SERVICES 2019, 61–77. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-23381-5_5.

2

Särndal, Carl-Erik, Bengt Swensson, and Jan Wretman. "Quality Declarations for Survey Data." In Model Assisted Survey Sampling, 637–48. New York, NY: Springer New York, 1992. http://dx.doi.org/10.1007/978-1-4612-4378-6_17.

3

Diederichs, Kay. "Crystallographic Data and Model Quality." In Methods in Molecular Biology, 147–73. New York, NY: Springer New York, 2016. http://dx.doi.org/10.1007/978-1-4939-2763-0_10.

4

Hjalmarsson, Håkan, Lennart Ljung, and Bo Wahlberg. "Assessing Model Quality from Data." In Modeling, Estimation and Control of Systems with Uncertainty, 167–87. Boston, MA: Birkhäuser Boston, 1991. http://dx.doi.org/10.1007/978-1-4612-0443-5_11.

5

Felix, Jean-Paul, Nathalie Languillat, and Amélie Mourens. "Model Feeding and Data Quality." In EAA Series, 205–22. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-29776-7_10.

6

Zaidi, Houda, Yann Pollet, Faouzi Boufarès, and Naoufel Kraiem. "Semantic of Data Dependencies to Improve the Data Quality." In Model and Data Engineering, 53–61. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-23781-7_5.

7

Baškarada, Saša. "Analysis of Data." In Information Quality Management Capability Maturity Model, 139–221. Wiesbaden: Vieweg+Teubner, 2009. http://dx.doi.org/10.1007/978-3-8348-9634-6_4.

8

Ellouze, Nebrasse, Elisabeth Métais, and Nadira Lammari. "Proposed Approach for Evaluating the Quality of Topic Maps." In Model and Data Engineering, 42–49. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-24443-8_7.

9

Criado, Javier, Silverio Martínez-Fernández, David Ameller, Luis Iribarne, and Nicolás Padilla. "Exploring Quality-Aware Architectural Transformations at Run-Time: The ENIA Case." In Model and Data Engineering, 288–302. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-45547-1_23.

10

Mezzanzanica, Mario, Roberto Boselli, Mirko Cesarini, and Fabio Mercorio. "Data Quality through Model Checking Techniques." In Advances in Intelligent Data Analysis X, 270–81. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-24800-9_26.


Conference papers on the topic "Data Quality Model"

1

Kesper, Arno, Viola Wenz, and Gabriele Taentzer. "Detecting quality problems in research data." In MODELS '20: ACM/IEEE 23rd International Conference on Model Driven Engineering Languages and Systems. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3365438.3410987.

2

Mehmood, Kashif, Samira Si-Said Cherfi, and Isabelle Comyn-Wattiau. "Data quality through model quality." In Proceeding of the first international workshop. New York, New York, USA: ACM Press, 2009. http://dx.doi.org/10.1145/1651415.1651421.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

He, Tianxing, Shengcheng Yu, Ziyuan Wang, Jieqiong Li, and Zhenyu Chen. "From Data Quality to Model Quality." In Internetware '19: The 11th Asia-Pacific Symposium on Internetware. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3361242.3361260.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Mimouni, Loubna, Ahmed Zellou, and Ali Idri. "MDQM: Mediation Data Quality Model Aligned Data Quality Model for Mediation Systems." In 10th International Conference on Knowledge Engineering and Ontology Development. SCITEPRESS - Science and Technology Publications, 2018. http://dx.doi.org/10.5220/0006958403190326.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Rajbahadur, Gopi Krishnan, Gustavo Ansaldi Oliva, Ahmed E. Hassan, and Juergen Dingel. "Pitfalls Analyzer: Quality Control for Model-Driven Data Science Pipelines." In 2019 ACM/IEEE 22nd International Conference on Model Driven Engineering Languages and Systems (MODELS). IEEE, 2019. http://dx.doi.org/10.1109/models.2019.00-19.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Sidi, Fatimah, Abdullah Ramli, Marzanah A. Jabar, Lilly Suriani Affendey, Aida Mustapha, and Hamidah Ibrahim. "Data quality comparative model for data warehouse." In 2012 International Conference on Information Retrieval & Knowledge Management (CAMP). IEEE, 2012. http://dx.doi.org/10.1109/infrkm.2012.6204987.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Suarez-Otero, Pablo, Michael J. Mior, Maria Jose Suarez-Cabal, and Javier Tuya. "Maintaining NoSQL Database Quality During Conceptual Model Evolution." In 2020 IEEE International Conference on Big Data (Big Data). IEEE, 2020. http://dx.doi.org/10.1109/bigdata50022.2020.9378228.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Cernach, Tania Maria Antunes, Carlos Hideo Arima, Edit Grassiani, and Renata Maria Nogueira de Oliveira. "A Data Quality model in a Data Warehouse." In 13th CONTECSI International Conference on Information Systems and Technology Management. TECSI, 2016. http://dx.doi.org/10.5748/9788599693124-13contecsi/ps-3937.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Taleb, Ikbal, Mohamed Adel Serhani, and Rachida Dssouli. "Big Data Quality Assessment Model for Unstructured Data." In 2018 International Conference on Innovations in Information Technology (IIT). IEEE, 2018. http://dx.doi.org/10.1109/innovations.2018.8605945.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Suhardi, I. Gusti Ngurah Agung Rama Gunawan, and Ardani Yustriana Dewi. "Total Information Quality Management-Capability Maturity Model (TIQM-CMM): An information quality management maturity model." In 2014 International Conference on Data and Software Engineering (ICODSE). IEEE, 2014. http://dx.doi.org/10.1109/icodse.2014.7062675.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Data Quality Model"

1

Boardman, Richard D., Tyler L. Westover, and Garold L. Gresham. Feedstock Quality Factor Calibration and Data Model Development. Office of Scientific and Technical Information (OSTI), May 2010. http://dx.doi.org/10.2172/1042373.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Zhang, Yongping, Wen Cheng, and Xudong Jia. Enhancement of Multimodal Traffic Safety in High-Quality Transit Areas. Mineta Transportation Institute, February 2021. http://dx.doi.org/10.31979/mti.2021.1920.

Full text
Abstract:
Numerous extant studies are dedicated to enhancing the safety of active transportation modes, but very few studies are devoted to safety analysis surrounding transit stations, which serve as an important modal interface for pedestrians and bicyclists. This study bridges the gap by developing joint models based on the multivariate conditionally autoregressive (MCAR) priors with a distance-oriented neighboring weight matrix. For this purpose, transit-station-centered data in Los Angeles County were used for model development. Feature selection relying on both random forest and correlation analyses was employed, which leads to different covariate inputs to each of the two joint models, resulting in increased model flexibility. Utilizing an Integrated Nested Laplace Approximation (INLA) algorithm and various evaluation criteria, the results demonstrate that models with a correlation effect between pedestrians and bicyclists perform much better than the models without such an effect. The joint models also aid in identifying significant covariates contributing to the safety of each of the two active transportation modes. The research results can furnish transportation professionals with additional insights to create safer access to transit and thus promote active transportation.
APA, Harvard, Vancouver, ISO, and other styles
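The correlation-based covariate screening mentioned in the abstract above can be illustrated with a minimal, stdlib-only sketch. This is a generic example, not the authors' code: the variable names and threshold are hypothetical, and the data are made up for illustration.

```python
# Hedged sketch of correlation-based feature screening: keep only candidate
# covariates whose Pearson correlation with the outcome exceeds a threshold.
# All names and data below are hypothetical illustrations.
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy outcome (e.g., crash counts near stations) and candidate covariates.
crashes = [3, 1, 4, 1, 5, 9, 2, 6]
candidates = {
    "transit_ridership": [30, 12, 41, 9, 52, 88, 20, 61],  # strongly related
    "station_age": [5, 2, 7, 3, 6, 4, 8, 1],               # essentially noise
}

# Retain covariates with |r| above an illustrative cutoff of 0.5.
selected = [k for k, v in candidates.items() if abs(pearson(crashes, v)) > 0.5]
```

In the study itself this screening is combined with random-forest importance ranking, so each joint model can receive a different covariate set.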
3

Hickman, P. J., E. N. Suleimani, and D. J. Nicolsky. Digital elevation model of Sitka Harbor and the city of Sitka, Alaska: Procedures, data sources, and quality assessment. Alaska Division of Geological & Geophysical Surveys, July 2012. http://dx.doi.org/10.14509/23964.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Zhao, Yaoyao Fiona, Thomas Kramer, William Rippey, Robert Stone, Matt Hoffman, and Scott Hoffman. Design and Usage Guide for Version 0.92 of the Quality Information Framework Data Model and XML (Extensible Markup Language) Schemas. Gaithersburg, MD: National Institute of Standards and Technology, November 2012. http://dx.doi.org/10.6028/nist.tn.1777.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Chapman, Ray, Phu Luong, Sung-Chan Kim, and Earl Hayter. Development of three-dimensional wetting and drying algorithm for the Geophysical Scale Transport Multi-Block Hydrodynamic Sediment and Water Quality Transport Modeling System (GSMB). Engineer Research and Development Center (U.S.), July 2021. http://dx.doi.org/10.21079/11681/41085.

Full text
Abstract:
The Environmental Laboratory (EL) and the Coastal and Hydraulics Laboratory (CHL) have jointly completed a number of large-scale hydrodynamic, sediment and water quality transport studies. EL and CHL have successfully executed these studies utilizing the Geophysical Scale Transport Modeling System (GSMB). The model framework of GSMB is composed of multiple process models as shown in Figure 1. Figure 1 shows that the United States Army Corps of Engineers (USACE) accepted wave, hydrodynamic, sediment and water quality transport models are directly and indirectly linked within the GSMB framework. The components of GSMB are the two-dimensional (2D) deep-water wave action model (WAM) (Komen et al. 1994, Jensen et al. 2012), data from meteorological model (MET) (e.g., Saha et al. 2010 - http://journals.ametsoc.org/doi/pdf/10.1175/2010BAMS3001.1), shallow water wave models (STWAVE) (Smith et al. 1999), Coastal Modeling System wave (CMS-WAVE) (Lin et al. 2008), the large-scale, unstructured two-dimensional Advanced Circulation (2D ADCIRC) hydrodynamic model (http://www.adcirc.org), and the regional scale models, Curvilinear Hydrodynamics in three dimensions-Multi-Block (CH3D-MB) (Luong and Chapman 2009), which is the multi-block (MB) version of Curvilinear Hydrodynamics in three-dimensions-Waterways Experiments Station (CH3D-WES) (Chapman et al. 1996, Chapman et al. 2009), MB CH3D-SEDZLJ sediment transport model (Hayter et al. 2012), and CE-QUAL Management - ICM water quality model (Bunch et al. 2003, Cerco and Cole 1994). Task 1 of the DOER project, “Modeling Transport in Wetting/Drying and Vegetated Regions,” is to implement and test three-dimensional (3D) wetting and drying (W/D) within GSMB. This technical note describes the methods and results of Task 1. The original W/D routines were restricted to a single vertical layer or depth-averaged simulations. 
In order to retain the required 3D or multi-layer capability of MB-CH3D, a multi-block version with variable block layers was developed (Chapman and Luong 2009). This approach requires a combination of grid decomposition, MB, and Message Passing Interface (MPI) communication (Snir et al. 1998). The MB single layer W/D has demonstrated itself as an effective tool in hyper-tide environments, such as Cook Inlet, Alaska (Hayter et al. 2012). The code modifications, implementation, and testing of a fully 3D W/D are described in the following sections of this technical note.
APA, Harvard, Vancouver, ISO, and other styles
6

Kirby, Stephen F. An Examination of the Quality of Television Infrared Observation Satellite (TIROS) Operational Vertical Sounder (TOVS) Extracted Wind and Temperature Data over Oklahoma Using the Computer-Assisted Artillery Meteorology BattleScale Forecast Model (CAAM BFM). Fort Belvoir, VA: Defense Technical Information Center, August 2000. http://dx.doi.org/10.21236/ada381091.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Macpherson, A. E., D. J. Nicolsky, and E. N. Suleimani. Digital elevation models of Skagway and Haines, Alaska: Procedures, data sources, and quality assessment. Alaska Division of Geological & Geophysical Surveys, December 2014. http://dx.doi.org/10.14509/29143.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Gorbatov, A., K. Czarnota, B. Hejrani, M. Haynes, R. Hassan, A. Medlin, J. Zhao, et al. AusArray: quality passive seismic data to underpin updatable national velocity models of the lithosphere. Geoscience Australia, 2020. http://dx.doi.org/10.11636/135284.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Hart, Carl R., D. Keith Wilson, Chris L. Pettit, and Edward T. Nykaza. Machine-Learning of Long-Range Sound Propagation Through Simulated Atmospheric Turbulence. U.S. Army Engineer Research and Development Center, July 2021. http://dx.doi.org/10.21079/11681/41182.

Full text
Abstract:
Conventional numerical methods can capture the inherent variability of long-range outdoor sound propagation. However, computational memory and time requirements are high. In contrast, machine-learning models provide very fast predictions. This comes by learning from experimental observations or surrogate data. Yet, it is unknown what type of surrogate data is most suitable for machine-learning. This study used a Crank-Nicholson parabolic equation (CNPE) for generating the surrogate data. The CNPE input data were sampled by the Latin hypercube technique. Two separate datasets comprised 5000 samples of model input. The first dataset consisted of transmission loss (TL) fields for single realizations of turbulence. The second dataset consisted of average TL fields for 64 realizations of turbulence. Three machine-learning algorithms were applied to each dataset, namely, ensemble decision trees, neural networks, and cluster-weighted models. Observational data come from a long-range (out to 8 km) sound propagation experiment. In comparison to the experimental observations, regression predictions have 5–7 dB in median absolute error. Surrogate data quality depends on an accurate characterization of refractive and scattering conditions. Predictions obtained through a single realization of turbulence agree better with the experimental observations.
APA, Harvard, Vancouver, ISO, and other styles
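The Latin hypercube technique used above to sample the CNPE inputs can be sketched in a few lines of stdlib Python. This is a generic illustration of the sampling scheme, not the authors' implementation; the function name and sample sizes are hypothetical.

```python
# Hedged sketch of Latin hypercube sampling: each axis of the unit cube is
# split into n equal strata, one point is drawn per stratum, and the strata
# are shuffled independently per axis so dimensions pair up at random.
import random

def latin_hypercube(n_samples, n_dims, rng=None):
    """Return n_samples points in [0, 1)^n_dims, one per stratum per axis."""
    rng = rng or random.Random()
    cols = []
    for _ in range(n_dims):
        # One uniform draw inside each of the n_samples equal strata ...
        strata = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(strata)  # ... then shuffle so axes are paired randomly
        cols.append(strata)
    return [tuple(col[i] for col in cols) for i in range(n_samples)]

# Illustrative draw: 8 points in 3 dimensions with a fixed seed.
samples = latin_hypercube(8, 3, random.Random(0))
```

Scaling each unit-interval coordinate to a physical parameter range then yields model inputs such as the 5000-sample CNPE datasets described above.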
10

Oberle, William F., and Lawrence Davis. Toward High Resolution, Ladar-Quality 3-D World Models Using Ladar-Stereo Data Integration and Fusion. Fort Belvoir, VA: Defense Technical Information Center, February 2005. http://dx.doi.org/10.21236/ada430020.

Full text
APA, Harvard, Vancouver, ISO, and other styles