Academic literature on the topic 'Data frameworks'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Data frameworks.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Data frameworks"

1

Arjun, Mantri. "Enhancing Data Quality in Data Engineering using Data Testing Framework: Types and Tradeoffs." European Journal of Advances in Engineering and Technology 7, no. 10 (2020): 95–100. https://doi.org/10.5281/zenodo.13354036.

Full text
Abstract:
Ensuring high data quality is critical in the era of big data, where reliable data is essential for accurate decision-making and business intelligence. This paper reviews various data testing frameworks designed to enhance data quality, including data validation, data cleansing, data profiling, data lineage, and automated testing frameworks. Each type of framework offers unique functionalities and presents distinct tradeoffs, such as customization versus complexity and real-time versus batch processing. By understanding these frameworks and their tradeoffs, data engineers can make informed decisions to implement the most suitable methods for their specific needs, ultimately ensuring robust data quality management.
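For orientation, the sketch below is a minimal, hypothetical Python example of the kind of rule-based validation check such data testing frameworks automate; it is not taken from the paper, and the field names are invented.

```python
# Minimal illustrative sketch of rule-based data validation (not from the paper).
# Each rule returns a list of error messages for the rows that violate it.

records = [
    {"id": 1, "email": "a@example.com", "age": 34},
    {"id": 2, "email": "not-an-email", "age": -5},   # fails both rules
]

def check_email(row):
    return [] if "@" in str(row.get("email", "")) else [f"row {row['id']}: invalid email"]

def check_age(row):
    return [] if 0 <= row.get("age", -1) <= 120 else [f"row {row['id']}: age out of range"]

rules = [check_email, check_age]

errors = [msg for row in records for rule in rules for msg in rule(row)]
print("\n".join(errors) if errors else "all checks passed")
```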
APA, Harvard, Vancouver, ISO, and other styles
2

Karan, Patel, Sakaria Yash, and Bhadane Chetashri. "Real Time Data Processing Frameworks." International Journal of Data Mining & Knowledge Management Process (IJDKP) 5, no. 5 (2019): 49–63. https://doi.org/10.5281/zenodo.3406010.

Full text
Abstract:
On a business level, everyone wants to get hold of the business value and other organizational advantages that big data has to offer. Analytics has arisen as the primitive path to business value from big data. Hadoop is not just a storage platform for big data; it’s also a computational and processing platform for business analytics. Hadoop is, however, unsuccessful in fulfilling business requirements when it comes to live data streaming. The initial architecture of Apache Hadoop did not solve the problem of live stream data mining. In summary, the traditional approach of big data being co-relational to Hadoop is false; focus needs to be given on business value as well. Data Warehousing, Hadoop and stream processing complement each other very well. In this paper, we have tried reviewing a few frameworks and products which use real time data streaming by providing modifications to Hadoop.
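To illustrate the batch-versus-stream distinction the abstract draws, here is a small, self-contained Python sketch (hypothetical, not from the paper) that processes a simulated event source in micro-batches, the strategy many streaming layers built around Hadoop adopt.

```python
# Illustrative micro-batch processing over a simulated event stream (not from the paper).
import itertools
import random

def event_stream():
    """Simulate an unbounded stream of (user, bytes) events."""
    while True:
        yield (random.choice(["alice", "bob", "carol"]), random.randint(1, 500))

def micro_batches(stream, batch_size=1000):
    """Group the unbounded stream into fixed-size micro-batches."""
    while True:
        yield list(itertools.islice(stream, batch_size))

stream = event_stream()
for i, batch in enumerate(micro_batches(stream)):
    totals = {}
    for user, nbytes in batch:
        totals[user] = totals.get(user, 0) + nbytes
    print(f"batch {i}: {totals}")
    if i == 2:  # stop the demo after three micro-batches
        break
```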
APA, Harvard, Vancouver, ISO, and other styles
3

Häußler, Helena. "Data Ethics Frameworks." Information - Wissenschaft & Praxis 72, no. 5-6 (2021): 291–98. http://dx.doi.org/10.1515/iwp-2021-2178.

Full text
Abstract:
Summary: Many organizations have recently published ethical guidelines to emphasize their stance against discrimination by algorithms. Four of these frameworks are examined using critical discourse analysis. The aim is to identify the values and value conflicts conveyed in them. The results indicate that established values from computer and information ethics are taken up and that existing power structures between actors are reinforced.
APA, Harvard, Vancouver, ISO, and other styles
4

Reddy Koilakonda, Raghunath. "Implementing Data Governance Frameworks for Enhanced Decision Making." International Journal of Science and Research (IJSR) 13, no. 6 (2024): 1239–43. http://dx.doi.org/10.21275/sr24618105346.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Joseph, Aaron Tsapa. "Leading the Data-Driven Enterprise: Integrating Robust Data Governance and Quality Frameworks for Sustainable Success." Journal of Scientific and Engineering Research 8, no. 6 (2021): 135–41. https://doi.org/10.5281/zenodo.11220731.

Full text
Abstract:
This paper outlines strategies for developing data governance and quality control systems that sustain data-driven business operations. It focuses on how to handle data growth and data ownership, and it emphasizes the need for disciplined data management and assurance of data quality. The aim is to reveal common pitfalls and to show how data can best be used. Data governance and quality frameworks need to be enacted, because they are the basis of data reliability, acceptance, and utility in decision-making informed by deep analytics and innovation. The paper compiles various approaches for developing and implementing an integrated framework at the organizational level, and the framework is illustrated with successful examples of its use across industries, making it possible to apply it in practice. The emerging data-driven business environment calls for operational data quality and governance frameworks to be adopted as a starting point for organizational success.
APA, Harvard, Vancouver, ISO, and other styles
6

Tongeren, Jan W. van. "Designing analytical data frameworks." Review of Income and Wealth 50, no. 2 (2004): 279–97. http://dx.doi.org/10.1111/j.0034-6586.2004.00126.x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Abhijit, Joshi. "Scalable Data Integration Frameworks: Enhancing Data Cohesion in Complex Systems." Journal of Scientific and Engineering Research 9, no. 10 (2022): 83–94. https://doi.org/10.5281/zenodo.12772820.

Full text
Abstract:
Data integration in large-scale environments is crucial for organizations to leverage diverse data sources for advanced analytics and decision-making. This paper delves into the latest frameworks and methodologies designed to enhance data cohesion in complex systems. We explore the challenges associated with integrating heterogeneous data sources and present scalable solutions to achieve seamless data integration. The study highlights advanced techniques and tools, including ETL processes, data lakes, and modern data integration platforms. Through detailed methodologies, pseudocode, and illustrative graphs, we provide a comprehensive guide for data engineers to implement robust data integration frameworks.
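As a concrete (and deliberately tiny) illustration of the extract-transform-load pattern the abstract mentions, the following Python sketch merges two heterogeneous sources into one common schema; the field names are hypothetical and the code is not from the paper.

```python
# Minimal extract-transform-load sketch over two heterogeneous sources (illustrative only).
import csv, json, io

csv_source = io.StringIO("customer_id,full_name\n1,Ada Lovelace\n2,Alan Turing\n")
json_source = '[{"id": 3, "name": "Grace Hopper"}]'

def extract():
    yield from csv.DictReader(csv_source)          # CSV rows as dicts
    yield from json.loads(json_source)             # JSON records as dicts

def transform(record):
    """Map either source layout onto a common schema."""
    return {
        "id": int(record.get("customer_id") or record.get("id")),
        "name": record.get("full_name") or record.get("name"),
    }

warehouse = [transform(r) for r in extract()]      # "load" into an in-memory target
print(warehouse)
```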
APA, Harvard, Vancouver, ISO, and other styles
8

Abhijit, Joshi. "Scalable Data Integration Frameworks: Enhancing Data Cohesion in Complex Systems." Journal of Scientific and Engineering Research 9, no. 10 (2022): 83–94. https://doi.org/10.5281/zenodo.13337884.

Full text
Abstract:
Data integration in large-scale environments is crucial for organizations to leverage diverse data sources for advanced analytics and decision-making. This paper delves into the latest frameworks and methodologies designed to enhance data cohesion in complex systems. We explore the challenges associated with integrating heterogeneous data sources and present scalable solutions to achieve seamless data integration. The study highlights advanced techniques and tools, including ETL processes, data lakes, and modern data integration platforms. Through detailed methodologies, pseudocode, and illustrative graphs, we provide a comprehensive guide for data engineers to implement robust data integration frameworks.
APA, Harvard, Vancouver, ISO, and other styles
9

Miller, Russell, Sai Hin Matthew Chan, Harvey Whelan, and João Gregório. "A Comparison of Data Quality Frameworks: A Review." Big Data and Cognitive Computing 9, no. 4 (2025): 93. https://doi.org/10.3390/bdcc9040093.

Full text
Abstract:
This study reviews various data quality frameworks that have some form of regulatory backing. The aim is to identify how these frameworks define, measure, and apply data quality dimensions. This review identified generalisable frameworks, such as TDQM, ISO 8000, and ISO 25012, and specialised frameworks, such as IMF’s DQAF, BCBS 239, WHO’s DQA, and ALCOA+. A standardised data quality model was employed to map the dimensions of the data from each framework to a common vocabulary. This mapping enabled a gap analysis that highlights the presence or absence of specific data quality dimensions across the examined frameworks. The analysis revealed that core data quality dimensions such as “accuracy”, “completeness”, “consistency”, and “timeliness” are equally and well represented across all frameworks. In contrast, dimensions such as “semantics” and “quantity” were found to be overlooked by most frameworks, despite their growing impact for data practitioners as tools such as knowledge graphs become more common. Frameworks tailored to specific domains were also found to include fewer overall data quality dimensions but contained dimensions that were absent from more general frameworks, highlighting the need for a standardised approach that incorporates both established and emerging data quality dimensions. This work condenses information on commonly used and regulation-backed data quality frameworks, allowing practitioners to develop tools and applications to apply these frameworks that are compliant with standards and regulations. The bibliometric analysis from this review emphasises the importance of adopting a comprehensive quality framework to enhance governance, ensure regulatory compliance, and improve decision-making processes in data-rich environments.
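The mapping-and-gap-analysis step the review describes can be pictured with a toy Python sketch such as the one below; the dimension lists are invented for illustration and do not reproduce the authors' data.

```python
# Toy gap analysis: which data quality dimensions does each framework cover? (illustrative data)
frameworks = {
    "ISO 25012": {"accuracy", "completeness", "consistency", "timeliness"},
    "ALCOA+":    {"accuracy", "completeness", "consistency"},
    "DQAF":      {"accuracy", "timeliness", "quantity"},
}

all_dims = set().union(*frameworks.values())
for dim in sorted(all_dims):
    covered = [name for name, dims in frameworks.items() if dim in dims]
    missing = [name for name in frameworks if name not in covered]
    print(f"{dim:12s} covered by {covered}; missing from {missing}")
```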
APA, Harvard, Vancouver, ISO, and other styles
10

Sivananda, Reddy Julakanti, Satya Kiranmayee Sattiraju Naga, and Julakanti Rajeswari. "Data Protection through Governance Frameworks." Journal of Computational Analysis and Applications 31, no. 1 (2023): 158–62. https://doi.org/10.5281/zenodo.14715381.

Full text
Abstract:
In today's increasingly digital world, data has become one of the most valuable assets for organizations. With the rise in cyberattacks, data breaches, and the stringent regulatory environment, it is imperative to adopt robust data protection strategies. One such approach is the use of governance frameworks, which provide structured guidelines, policies, and processes to ensure data protection, compliance, and ethical usage. This paper explores the role of data governance frameworks in protecting sensitive information and maintaining organizational data security. It delves into the principles, strategies, and best practices that constitute an effective governance framework, including risk management, access controls, data quality assurance, and compliance with regulations like GDPR, HIPAA, and CCPA. By analyzing case studies from various sectors, the paper highlights the practical challenges, limitations, and advantages of implementing data governance frameworks. Additionally, the paper examines how data governance frameworks contribute to transparency, accountability, and operational efficiency, while also identifying emerging trends and technologies that enhance data protection. Ultimately, the paper aims to provide a comprehensive understanding of how governance frameworks can be leveraged to safeguard organizational data and ensure its responsible use.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Data frameworks"

1

Nyström, Simon, and Joakim Lönnegren. "Processing data sources with big data frameworks." Thesis, KTH, Data- och elektroteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-188204.

Full text
Abstract:
Big data is a concept that is expanding rapidly. As more and more data is generated and garnered, there is an increasing need for efficient solutions that can be utilized to process all this data in attempts to gain value from it. The purpose of this thesis is to find an efficient way to quickly process a large number of relatively small files. More specifically, the purpose is to test two frameworks that can be used for processing big data. The frameworks that are tested against each other are Apache NiFi and Apache Storm. A method is devised in order to, firstly, construct a data flow and, secondly, construct a method for testing the performance and scalability of the frameworks running this data flow. The results reveal that Apache Storm is faster than Apache NiFi at the sort of task that was tested. As the number of nodes included in the tests went up, the performance did not always do the same. This indicates that adding more nodes to a big data processing pipeline does not always result in a better-performing setup and that, sometimes, other measures must be taken to heighten the performance.
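In the spirit of the thesis's performance tests (but not its actual setup), a simplified Python timing harness for the many-small-files workload might look like this:

```python
# Simplified throughput measurement over many small files (illustrative, not the thesis setup).
import pathlib
import tempfile
import time

def process(path: pathlib.Path) -> int:
    """Stand-in for the per-file work a NiFi or Storm topology would do."""
    return len(path.read_bytes())

with tempfile.TemporaryDirectory() as tmp:
    root = pathlib.Path(tmp)
    for i in range(5000):                          # many relatively small files
        (root / f"file_{i}.txt").write_text("payload " * 10)

    start = time.perf_counter()
    total = sum(process(p) for p in root.glob("*.txt"))
    elapsed = time.perf_counter() - start
    print(f"processed {total} bytes in {elapsed:.2f}s "
          f"({5000 / elapsed:.0f} files/s)")
```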
APA, Harvard, Vancouver, ISO, and other styles
2

Venumuddala, Ramu Reddy. "Distributed Frameworks Towards Building an Open Data Architecture." Thesis, University of North Texas, 2015. https://digital.library.unt.edu/ark:/67531/metadc801911/.

Full text
Abstract:
Data is everywhere. Current technological advancements in digital and social media, and the ease with which different application services can interact with a variety of systems, are generating tremendous volumes of data. Because of such varied services, data formats are no longer restricted to structured types such as text; services also generate unstructured content such as social media data, videos, and images. The generated data is of no use unless it is stored and analyzed to derive some value. Traditional database systems come with limitations on data format schemas, access rates, and storage sizes. Hadoop is an Apache open-source distributed framework that supports storing huge datasets of differently formatted data reliably on its file system, the Hadoop Distributed File System (HDFS), and processing the data stored on HDFS using the MapReduce programming model. This thesis is about building a data architecture using Hadoop and its related open-source distributed frameworks to support a data flow pipeline on low-cost commodity hardware. The data flow components are sourcing data, storage management on HDFS, and a data access layer. The study also discusses a use case that utilizes the architecture components. Sqoop, a framework to ingest structured data from a database onto Hadoop, and Flume are used to ingest semi-structured streaming Twitter JSON data onto HDFS for analysis. The data sourced using Sqoop and Flume have been analyzed using Hive for SQL-like analytics and, at a higher level of the data access layer, Hadoop has been compared with an in-memory computing system, Spark. Significant differences in query execution performance have been analyzed when working with the Hadoop and Spark frameworks. This integration helps ingest huge volumes of streaming JSON data of great variety to derive better value-based analytics using Hive and Spark.
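For orientation only, here is a minimal PySpark sketch of the kind of SQL-style query over ingested JSON that such a pipeline supports; it assumes a local Spark installation, and the HDFS path and column names are hypothetical rather than taken from the thesis.

```python
# Minimal PySpark sketch of SQL-style analytics over ingested JSON (path/columns hypothetical).
import time
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("tweet-analytics-demo").getOrCreate()

tweets = spark.read.json("hdfs:///data/twitter/*.json")   # hypothetical HDFS path
tweets.createOrReplaceTempView("tweets")

start = time.perf_counter()
top_langs = spark.sql("""
    SELECT lang, COUNT(*) AS n
    FROM tweets
    GROUP BY lang
    ORDER BY n DESC
    LIMIT 10
""").collect()
print(top_langs, f"query took {time.perf_counter() - start:.1f}s")

spark.stop()
```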
APA, Harvard, Vancouver, ISO, and other styles
3

Nambiar, Arun N. "Data Exchange in Multi-Disciplinary Design Optimization frameworks." Ohio University / OhioLINK, 2004. http://www.ohiolink.edu/etd/view.cgi?ohiou1088189791.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Randhawa, Tarlochan Singh. "Incorporating Data Governance Frameworks in the Financial Industry." ScholarWorks, 2019. https://scholarworks.waldenu.edu/dissertations/6478.

Full text
Abstract:
Data governance frameworks are critical to reducing operational costs and risks in the financial industry. Corporate data managers face challenges when implementing data governance frameworks. The purpose of this multiple case study was to explore the strategies that successful corporate data managers in some banks in the United States used to implement data governance frameworks to reduce operational costs and risks. The participants were 7 corporate data managers from 3 banks in North Carolina and New York. Servant leadership theory provided the conceptual framework for the study. Methodological triangulation involved assessment of nonconfidential bank documentation on the data governance framework, Basel Committee on Banking Supervision's standard 239 compliance documents, and semistructured interview transcripts. Data were analyzed using Yin's 5-step thematic data analysis technique. Five major themes emerged: leadership role in data governance frameworks to reduce risk and cost, data governance strategies and procedures, accuracy and security of data, establishment of a data office, and leadership commitment at the organizational level. The results of the study may lead to positive social change by supporting approaches to help banks maintain reliable and accurate data as well as reduce data breaches and misuse of consumer data. The availability of accurate data may enable corporate bank managers to make informed lending decisions to benefit consumers.
APA, Harvard, Vancouver, ISO, and other styles
5

Edling, Erik, and Emil Östergren. "An analysis of microservice frameworks." Thesis, Linköpings universitet, Interaktiva och kognitiva system, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-138034.

Full text
Abstract:
Microservice architecture has entered the industry to solve some of the problems with the monolithic architecture. However, this architecture comes with its own set of problems. In order to solve the microservice architecture problems while also providing additional functionalities, microservice frameworks have been developed. In this thesis, microservice frameworks were compared and thereafter two were chosen to implement a small part of a large monolithic system as microservices. This was done in order to see how well they could implement the different functionalities that the frameworks provided in relation to the benefits and the cross-cutting concerns of the microservice architecture which are concerns that is applicable to the entire system. The results showed that the frameworks embraced the benefits of the microservice architecture in the aspects of maintainability and scalability. However, in the terms of being able to change frameworks in the pursuit of newer technologies there were problems. Some functionalities such as service discovery requires all of the new services created to use the same mechanism in order to create a unified system. There were also problems caused by the load balancing mechanism provided by the frameworks used in this thesis. The load balancing mechanism made the system unable to send large data files which was crucial for the system that was to be implemented as a microservice system.
APA, Harvard, Vancouver, ISO, and other styles
6

Bao, Shunxing. "Algorithmic Enhancements to Data Colocation Grid Frameworks for Big Data Medical Image Processing." Thesis, Vanderbilt University, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=13877282.

Full text
Abstract:
Large-scale medical imaging studies to date have predominantly leveraged in-house, laboratory-based or traditional grid computing resources for their computing needs, where the applications often use hierarchical data structures (e.g., Network File System file stores) or databases (e.g., COINS, XNAT) for storage and retrieval. The resulting performance for laboratory-based approaches reveals that performance is impeded by standard network switches, since typical processing can saturate network bandwidth during transfer from storage to processing nodes for even moderate-sized studies. On the other hand, the grid may be costly to use due to the dedicated resources used to execute the tasks and lack of elasticity. With increasing availability of cloud-based big data frameworks, such as Apache Hadoop, cloud-based services for executing medical imaging studies have shown promise. Despite this promise, our studies have revealed that existing big data frameworks illustrate different performance limitations for medical imaging applications, which calls for new algorithms that optimize their performance and suitability for medical imaging. For instance, Apache HBase's data distribution strategy of region split and merge is detrimental to the hierarchical organization of imaging data (e.g., project, subject, session, scan, slice). Big data medical image processing applications involving multi-stage analysis often exhibit significant variability in processing times, ranging from a few seconds to several days. Due to the sequential nature of executing the analysis stages by traditional software technologies and platforms, any errors in the pipeline are only detected at the later stages, despite the sources of errors predominantly being the highly compute-intensive first stage. This wastes precious computing resources and incurs prohibitively higher costs for re-executing the application. To address these challenges, this research proposes a framework - Hadoop & HBase for Medical Image Processing (HadoopBase-MIP) - which develops a range of performance optimization algorithms and employs a number of system behavior models for data storage, data access and data processing. We also introduce how to build prototypes to help with empirical verification of system behaviors. Furthermore, we introduce a discovery, made during the development of HadoopBase-MIP, of a new type of contrast for deep brain structure enhancement in medical imaging. Finally, we show how to move the Hadoop-based framework design forward into a commercialized big data / high-performance computing cluster with a cheap, scalable and geographically distributed file system.
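One way to picture the hierarchical-ordering concern is a composite row key that keeps related slices adjacent in a lexicographically sorted store; the Python sketch below is illustrative only and is not the dissertation's actual key scheme.

```python
# Illustrative row-key design that keeps hierarchically related image slices adjacent
# when stored in a lexicographically sorted key-value store such as HBase (not the
# dissertation's actual scheme).
def row_key(project: str, subject: int, session: int, scan: int, slice_no: int) -> bytes:
    # Zero-padding keeps the lexicographic order of keys equal to the numeric order.
    return f"{project}|{subject:06d}|{session:03d}|{scan:03d}|{slice_no:04d}".encode()

keys = [row_key("brainstudy", 12, 1, 2, s) for s in range(3)]
assert keys == sorted(keys)          # slices of one scan stay contiguous
print(keys)
```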
APA, Harvard, Vancouver, ISO, and other styles
7

Price, Simon. "Higher-order frameworks for profiling and matching heterogeneous data." Thesis, University of Bristol, 2014. http://hdl.handle.net/1983/841f5b9b-c15f-4bb0-ba0d-e9edc752be55.

Full text
Abstract:
This Thesis brings together complementary research from higher-order computational logic and workflow systems to investigate software and theoretical frameworks for profiling and matching heterogeneous data. One motivating use case is submission sifting, which matches submitted conference or journal papers to potential peer reviewers based on the similarity between the paper's abstract and the reviewer's publications as found in online bibliographic databases. Inspired by e-Science workflows, we introduce the SubSift submission sifting framework for developing web-based research intelligence applications that profile and match heterogeneous textual content from web pages and documents. Abstracting SubSift we define a formal higher-order dataflow framework that ranges over a class of higher-order relations that are sufficiently expressive to represent a wide variety data types and structures. This dataflow model is shown to be embarrassingly parallel. Our serial proof of concept implementation, JSONMatch, is used to demonstrate that the combination of this model and higher-order representation provides a flexible approach to analysing heterogeneous data. Finally we propose a theoretical framework for querying structured data, elevating Codd's relational algebra to a higher-order algebra defined on the basic terms of a higher-order logic. An extension incorporates approximate joins on structured data and is demonstrated to be feasible and have promise for future work.
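As a toy illustration of the submission-sifting use case (not the SubSift or JSONMatch implementation), a few lines of Python with scikit-learn can rank reviewers by the cosine similarity between a paper abstract and each reviewer's publications:

```python
# Toy submission sifting: rank reviewers by textual similarity to a paper abstract
# (illustrative only; SubSift/JSONMatch use a higher-order relational representation).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

paper = "Higher-order dataflow frameworks for profiling and matching heterogeneous data."
reviewers = {
    "reviewer_a": "workflow systems and e-science dataflow pipelines",
    "reviewer_b": "deep learning for image classification",
}

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform([paper] + list(reviewers.values()))
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()

for (name, _), score in sorted(zip(reviewers.items(), scores), key=lambda x: -x[1]):
    print(f"{name}: {score:.2f}")
```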
APA, Harvard, Vancouver, ISO, and other styles
8

Kokkonda, Vijay, and Krishna Sandeep Taduri. "Managing Data Location in Cloud Computing (Evaluating Data localization frameworks in Amazon Web Services)." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2685.

Full text
Abstract:
Context: Cloud Computing is an emerging technology where present IT is trending towards it. Many enterprises and users are still uncomfortable to adopt Cloud Computing as it uncovers many potential and critical threats which remind the most concerned security issue to consider. As data is migrated between different data centers dispersed globally, security for data is a very important issue. In Cloud environment Cloud user should be aware of the physical location of the data to ensure whether their data reside within certain jurisdiction as there are different data privacy laws. Evaluating different data localization frameworks in Amazon Web Services by deploying web application in Amazon availability zones (US, Europe and Asia) is the main context of this study. Objectives: In this study we investigate which strategic data localization frameworks have been proposed, can be used to identify data location of web application resource deployed in Cloud and validate those considered three frameworks by conducting experiment in a controlled environment. Methods: Literature Review is performed by using search string in data bases like Compendex, IEEE, Inspec, ACM digital Library, Science Direct and Springer Link to identify the data location frameworks. Later these data location frameworks are evaluated by conducting a controlled Experiment. Experiment is performed by following the guidelines proposed by Wohlin, C [66]. Results: Three data localization frameworks out of ten, obtained from literature study are considered for the evaluation. The evaluation of these frameworks in Amazon Web Services resulted that replication of three data localization studies is possible, predicting the location of data in US, Europe and Asia close enough accurate and the factors considered from the frameworks are valid. Conclusions: We conclude that from the identified ten frameworks, three data location frameworks are considered for evaluation in which one framework allows verifying the location of data by trusting the information provided by cloud provider and second framework is to verify the location of cloud resources without need of trusting cloud provider, finally the third framework is to identify the replicated files in cloud however this framework also does not need trusting the Cloud provider. These frameworks address the data location problem but in a different way. Now the identified three frameworks are validated by performing a controlled experiment. The activities performed from the three frameworks in the experiment setup allow identifying the data location of web application deployed in US, Europe and Asia. The evaluation of these frameworks in Amazon Cloud environment allowed proposing a checklist that should be considered to manage the web application deployed in cloud regarding data location. This checklist is proposed based on the activities performed in the experiment. Moreover, authors conclude that there is a need for further validation, whether the proposed checklist is applicable for real Cloud user who deploys and manages Cloud resources.
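A minimal sketch of the "trust the provider" style of location check described above uses boto3 to ask AWS which region an S3 bucket lives in; the bucket name is hypothetical and configured AWS credentials are assumed.

```python
# Minimal sketch of the "trust the provider" style of location check: ask AWS which
# region a bucket lives in (bucket name hypothetical; requires configured credentials).
import boto3

s3 = boto3.client("s3")
resp = s3.get_bucket_location(Bucket="my-example-bucket")
# AWS reports None for the us-east-1 legacy default.
print(resp.get("LocationConstraint") or "us-east-1")
```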
APA, Harvard, Vancouver, ISO, and other styles
9

Baalbaki, Hussein. "Designing Big Data Frameworks for Quality-of-Data Controlling in Large-Scale Knowledge Graphs." Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS697.

Full text
Abstract:
Knowledge Graphs (KGs) are the most used representation of structured information about a particular domain, consisting of billions of facts in the form of entities (nodes) and relations (edges) between them. Additionally, the semantic type information of the entities is also contained in the KGs. The number of KGs has steadily increased over the past 20 years in a variety of fields, including government, academic research, the biomedical fields, etc. Applications based on machine learning that use KGs include entity linking, question-answering systems, recommender systems, etc. Open KGs are typically produced heuristically, automatically from a variety of sources, including text, photos, and other resources, or are hand-curated. However, these KGs are often incomplete, i.e., there are missing links between the entities and missing links between the entities and their corresponding entity types. In this thesis, we address one of the most challenging issues facing Knowledge Graph Completion (KGC), namely link prediction: general link prediction in KGs, including head and tail prediction and triple classification. In recent years, KG embedding (KGE) models have been trained to represent the entities and relations in the KG in a low-dimensional vector space preserving the graph structure. In most published works, such as the translational models, neural network models, and others, the triple information is used to generate the latent representation of the entities and relations. In this dissertation, several methods are proposed for KGC, and their effectiveness is shown empirically. Firstly, a novel KG embedding model, TransModE, is proposed for link prediction. TransModE projects the contextual information of the entities to modular space, while considering the relation as a transition vector that guides the head to the tail entity. Secondly, we worked on building a simple, low-complexity KGE model while preserving its efficiency. KEMA is a novel KGE model among the lowest KGE models in terms of complexity, yet it obtains promising results. Finally, KEMA++ is proposed as an upgrade of KEMA to predict the missing triples in KGs using a product arithmetic operation in modular space. The extensive experiments and ablation studies show the efficiency of the proposed models, which compete with the current state-of-the-art models and set new baselines for KGC. The proposed models establish a new way of solving the KGC problem other than translational, neural network, or tensor factorization based approaches. The promising results and observations open up interesting scopes for future research involving exploiting the proposed models in domain-specific KGs such as scholarly data, biomedical data, etc. Furthermore, the link prediction model can be exploited as a base model for the entity alignment task, as it considers the neighborhood information of the entities.
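To give a flavour of translational KG embedding scoring, the numpy sketch below computes a TransE-style score; it is shown for orientation only, since the thesis's TransModE, KEMA, and KEMA++ models operate in modular space rather than with this plain translation score.

```python
# TransE-style scoring sketch for link prediction: score(h, r, t) = -||h + r - t||.
# Shown for orientation only; the thesis's TransModE/KEMA models operate in modular space.
import numpy as np

rng = np.random.default_rng(0)
dim = 8
entities = {name: rng.normal(size=dim) for name in ["paris", "france", "berlin", "germany"]}
relations = {"capital_of": rng.normal(size=dim)}

def score(head, rel, tail):
    return -np.linalg.norm(entities[head] + relations[rel] - entities[tail])

# Rank candidate tails for the query (paris, capital_of, ?)
candidates = ["france", "germany"]
ranked = sorted(candidates, key=lambda t: score("paris", "capital_of", t), reverse=True)
print(ranked)   # with trained embeddings, "france" would rank first
```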
APA, Harvard, Vancouver, ISO, and other styles
10

Chen, Alexander Y. "Tools and frameworks for data abstraction in a performance context." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/112835.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (pages 109-111).
As data science is impacting more and more fields and proving to be effective in a wide variety of applications, the importance of easy-to-understand, high-performance data science tools is growing. Tools tend to exhibit one of two general forms: composable or template-based. We have researched and developed examples of each of these forms. The first project is an implementation of the D4M schema in the Julia language. This implementation has been tested to be faster than the optimized versions in both Matlab and Octave. With this combination of technology, we hope to provide an effective means to represent data and compute on them. This implementation enables an interface with the common DataFrame representation used in data science. We also implemented a D4M.jl interface with an emerging database technology, TileDB. The second project is the MEDIC framework, aiming to map the process of taking a common machine learning engine into a streaming context. We implemented two versions of our solution to the Twitter Trend Prediction problem: one in Julia and one in Spark. We have verified our solution is valid by comparing the Julia version with a previous result that is in Mr. Stanislav Nikolov's master thesis, named Trend or No Trend. We have also verified our solution with the Spark Streaming engine.
by Alexander Y Chen. M. Eng.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Data frameworks"

1

Leonard, Andy, and Kent Bradshaw. SQL Server Data Automation Through Frameworks. Apress, 2020. http://dx.doi.org/10.1007/978-1-4842-6213-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Oscar, Nigro Hector, Cisaro Sandra Gonzalez, and Xodo Daniel, eds. Data mining with ontologies: Implementations, findings and frameworks. Information Science Reference, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Balfour, James A. D. Computer analysis of structural frameworks. 2nd ed. Blackwell Scientific Publications, 1992.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Sakr, Nourhan. Data-Driven Combinatorial Optimization and Efficient Machine Learning Frameworks. [publisher not identified], 2019.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

August-Wilhelm, Scheer, and Scheer August-Wilhelm, eds. ARIS--business process frameworks. 2nd ed. Springer, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

A, O'Brien Frances, and Dyson Robert G, eds. Supporting strategy: Frameworks, methods and models. John Wiley & Sons, 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Abawajy, Jemal H. Internet and distributed computing advancements: Theoretical frameworks and practical applications. Information Science Reference, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

International IFIP WG 10.5 Working Conference on Electronic Design Automation Frameworks (4th 1994 Gramado, Rio Grande do Sul, Brazil). Electronic design automation frameworks: Proceedings of the Fourth International IFIP WG 10.5 Working Conference on Electronic Design Automation Frameworks. Chapman & Hall, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

B, Cerrito Patricia, ed. Cases on health outcomes and clinical data mining: Studies and frameworks. Medical Information Science Reference, 2010.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Susan, Rodrigues, ed. Using analytical frameworks for classroom research: Collecting data and analysing narrative. Routledge, 2010.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Data frameworks"

1

Wolf, Pieter. "Data Modeling." In CAD Frameworks. Springer US, 1994. http://dx.doi.org/10.1007/978-1-4615-2768-8_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Barnes, Timothy J., David Harrison, A. Richard Newton, and Rick L. Spickelmier. "Data Representation." In Electronic CAD Frameworks. Springer US, 1992. http://dx.doi.org/10.1007/978-1-4615-3558-4_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Barnes, Timothy J., David Harrison, A. Richard Newton, and Rick L. Spickelmier. "Data Management." In Electronic CAD Frameworks. Springer US, 1992. http://dx.doi.org/10.1007/978-1-4615-3558-4_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Christen, Peter, Thilina Ranbaduge, and Rainer Schnell. "Regulatory Frameworks." In Linking Sensitive Data. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-59706-1_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Quasim, Mohammad Tabrez, Mohammad Ayoub Khan, Fahad Algarni, Abdullah Alharthy, and Goram Mufareh M. Alshmrani. "Blockchain Frameworks." In Studies in Big Data. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-38677-1_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Garn, Wolfgang. "Data Analytics Frameworks." In Data Analytics for Business. Routledge, 2024. http://dx.doi.org/10.4324/9781003336099-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Kakarla, Ramcharan, Sundar Krishnan, Balaji Dhamodharan, and Venkata Gunnu. "Modeling Frameworks." In Applied Data Science Using PySpark. Apress, 2024. https://doi.org/10.1007/979-8-8688-0820-3_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

DeBellis, Michael, Livia Pinera, and Christopher Connor. "Interoperability Frameworks." In Data Science with Semantic Technologies. CRC Press, 2023. http://dx.doi.org/10.1201/9781003310785-13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Frampton, Michael. "Frameworks." In Complete Guide to Open Source Big Data Stack. Apress, 2018. http://dx.doi.org/10.1007/978-1-4842-2149-5_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Didenko, Anton, Natalia Jevglevskaja, and Ross P. Buckley. "Customer perspective: the quest for customer trust." In Customer Data Sharing Frameworks. Routledge, 2024. http://dx.doi.org/10.4324/9781003414216-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Data frameworks"

1

Gayar, Ismail El, Hesham Hassan, and Khaled T. Wassif. "Data Quality Frameworks: A Systematic Review." In 2024 5th International Conference on Artificial Intelligence and Data Sciences (AiDAS). IEEE, 2024. http://dx.doi.org/10.1109/aidas63860.2024.10730114.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ramdass, Karthikeyan, Arun Mulka, Nitin Agarwal, Sugeetha Avvaru, Gopal Kumar Gupta, and Sansar Singh Chauhan. "Scalable Performance Testing for Distributed Big Data Frameworks." In 2025 First International Conference on Advances in Computer Science, Electrical, Electronics, and Communication Technologies (CE2CT). IEEE, 2025. https://doi.org/10.1109/ce2ct64011.2025.10939442.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

N S, Vasanth, Selva Jothi M, and Pavithra S. "Secure File Sharing using Blockchain-Based Frameworks." In 2025 International Conference on Data Science, Agents & Artificial Intelligence (ICDSAAI). IEEE, 2025. https://doi.org/10.1109/icdsaai65575.2025.11011863.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Mishra, Vaishali, Ujjwal Karn, Vasanth Rajendran, Monojit Banerjee, and Harshal Darade. "Ethical Frameworks for Artificial Intelligence: A Comparative Study." In 2025 International Conference on Artificial Intelligence and Data Engineering (AIDE). IEEE, 2025. https://doi.org/10.1109/aide64228.2025.10986906.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Molnar, Sam, J. D. Laurence-Chasen, Yuhan Duan, Julie Bessac, and Kristi Potter. "Uncertainty Visualization Challenges in Decision Systems with Ensemble Data & Surrogate Models." In 2024 IEEE Workshop on Uncertainty Visualization: Applications, Techniques, Software, and Decision Frameworks. IEEE, 2024. http://dx.doi.org/10.1109/uncertaintyvisualization63963.2024.00006.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Stokes, Chase, Chelsea Sanker, Bridget Cogley, and Vidya Setlur. "Voicing Uncertainty: How Speech, Text, and Visualizations Influence Decisions with Data Uncertainty." In 2024 IEEE Workshop on Uncertainty Visualization: Applications, Techniques, Software, and Decision Frameworks. IEEE, 2024. http://dx.doi.org/10.1109/uncertaintyvisualization63963.2024.00007.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Twaty, Muaz, Amine Ghrab, and Sabri Skhiri. "GraphOpt: a Framework for Automatic Parameters Tuning of Graph Processing Frameworks." In 2019 IEEE International Conference on Big Data (Big Data). IEEE, 2019. http://dx.doi.org/10.1109/bigdata47090.2019.9006320.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Chandarana, Parth, and M. Vijayalakshmi. "Big Data analytics frameworks." In 2014 International Conference on Circuits, Systems, Communication and Information Technology Applications (CSCITA). IEEE, 2014. http://dx.doi.org/10.1109/cscita.2014.6839299.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Cao, Paul Y., Gang Li, Guoxing Chen, and Biao Chen. "Mobile Data Collection Frameworks." In MobiHoc'15: The Sixteenth ACM International Symposium on Mobile Ad Hoc Networking and Computing. ACM, 2015. http://dx.doi.org/10.1145/2757384.2757396.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Londoño Peláez, Jorge Mario, María Alejandra Echavarria Arcila, Leonardo Betancur Agudelo, Diana Patricia Giraldo Ramirez, and Laura Orozco Salazar. "Shared-Data Governance Frameworks." In 15th International Conference on Society and Information Technologies. International Institute of Informatics and Cybernetics, 2024. http://dx.doi.org/10.54808/icsit2024.01.65.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Data frameworks"

1

Pulivarti, Ronald. Genomic Data Cybersecurity and Privacy Frameworks Community Profile. National Institute of Standards and Technology, 2024. https://doi.org/10.6028/nist.ir.8467.2pd.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Conway, David. Data Governance: Frameworks and Approaches in the Current Marketplace. Iowa State University, 2021. http://dx.doi.org/10.31274/cc-20240624-478.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Iyer, Maithili, Hannah Stratton, Sangeeta Mathew, Satish Kumar, Paul Mathew, and Mohini Singh. Review of Building Data Frameworks across Countries: Lessons for India. Office of Scientific and Technical Information (OSTI), 2017. http://dx.doi.org/10.2172/1373279.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Bandlow, Alisa, Katherine A. Jones, and Vanessa N. Vargas. SNL Lesson Learned and Guidance for Data Repositories and Analytic Frameworks. Office of Scientific and Technical Information (OSTI), 2018. http://dx.doi.org/10.2172/1463073.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Gorzig, Marina. Disparities, Discrimination, and Data (Free Seminar). Instats Inc., 2025. https://doi.org/10.61700/mcl6o8g3dmujp1972.

Full text
Abstract:
This workshop provides an essential overview of methodologies for analyzing disparities and discrimination in social science research, focusing on the practical use of Stata for data analysis. Led by Dr Marina Gorzig, the workshop introduces participants to statistical frameworks, data preparation, regression analysis, and ethical considerations, equipping them with the skills to critically assess and contribute to this field.
APA, Harvard, Vancouver, ISO, and other styles
6

Kalkar, Uma, and Natalia González Alarcón. Facilitating Data Flows through Data Collaboratives: A Practical Guide to Designing Valuable, Accessible, and Responsible Data Collaboratives. Inter-American Development Bank, 2023. http://dx.doi.org/10.18235/0005185.

Full text
Abstract:
Data is an indispensable asset in today's society, but its production and sharing are subject to well-known market failures. Among these: neither economic nor academic markets efficiently reward costly data collection and quality assurance efforts; data providers cannot easily supervise the appropriate use of their data; and, correspondingly, users have weak incentives to pay for, acknowledge, and protect data that they receive from providers. Data collaboratives are a potential non-market solution to this problem, bringing together data providers and users to address these market failures. The governance frameworks for these collaboratives are varied and complex and their details are not widely known. This guide proposes a methodology and a set of common elements that facilitate experimentation and creation of collaborative environments. It offers guidance to governments on implementing effective data collaboratives as a means to promote data flows in Latin America and the Caribbean, harnessing their potential to design more effective services and improve public policies.
APA, Harvard, Vancouver, ISO, and other styles
7

Paul, Prince M., Madhan Jeyaraman, Arulkumar Nallakumarasamy, Naveen Jeyaraman, and Manish Khanna. Medicolegal implications and regulatory frameworks of regenerative orthopaedics - A systematic review. INPLASY - International Platform of Registered Systematic Review and Meta-analysis Protocols, 2023. http://dx.doi.org/10.37766/inplasy2023.4.0022.

Full text
Abstract:
Review question / Objective: Analyzing the literature and data on medicolegal implications, issues, and challenges of regenerative orthopaedics globally as well as in India. P – Published literature on medicolegal implications and regulatory frameworks on regenerative orthopaedics. I – Regenerative therapies in orthopaedics. C – No comparator group. O – Medical legal implication of regenerative orthopaedics. Condition being studied: Medicolegal implications and regulatory frameworks in regenerative orthopaedics.
APA, Harvard, Vancouver, ISO, and other styles
8

Valencia, Oscar, and André Martínez. Enhancing Fiscal Resilience: Medium-Term Frameworks for Managing Emerging Risks. Inter-American Development Bank, 2025. https://doi.org/10.18235/0013503.

Full text
Abstract:
Climate change poses significant well-known risks to fiscal sustainability in Latin America and the Caribbean, such as annual losses of up to 1.7 percent of GDP due to extreme weather events, the devaluation of carbon-intensive assets caused by the transition to a low-carbon economy, and the loss of fiscal revenues linked to fossil fuels. This study describes how governments can integrate green medium-term fiscal frameworks (MTFFs) into their fiscal planning to mitigate these risks. Green MTFFs combine effective carbon pricing, sustainable reforms, and strategic investments in green infrastructure. They are not only a technical tool but an imperative approach to avoid fiscal decisions that undermine growth and stability in the region. They also expose certain shortcomings, such as lack of reliable data, institutional fragmentation, and fiscal space constraints.
APA, Harvard, Vancouver, ISO, and other styles
9

Pasupuleti, Murali Krishna. Securing AI-driven Infrastructure: Advanced Cybersecurity Frameworks for Cloud and Edge Computing Environments. National Education Services, 2025. https://doi.org/10.62311/nesx/rrv225.

Full text
Abstract:
The rapid adoption of artificial intelligence (AI) in cloud and edge computing environments has transformed industries by enabling large-scale automation, real-time analytics, and intelligent decision-making. However, the increasing reliance on AI-powered infrastructures introduces significant cybersecurity challenges, including adversarial attacks, data privacy risks, and vulnerabilities in AI model supply chains. This research explores advanced cybersecurity frameworks tailored to protect AI-driven cloud and edge computing environments. It investigates AI-specific security threats, such as adversarial machine learning, model poisoning, and API exploitation, while analyzing AI-powered cybersecurity techniques for threat detection, anomaly prediction, and zero-trust security. The study also examines the role of cryptographic solutions, including homomorphic encryption, federated learning security, and post-quantum cryptography, in safeguarding AI models and data integrity. By integrating AI with cutting-edge cybersecurity strategies, this research aims to enhance resilience, compliance, and trust in AI-driven infrastructures. Future advancements in AI security, blockchain-based authentication, and quantum-enhanced cryptographic solutions will be critical in securing next-generation AI applications in cloud and edge environments. Keywords: AI security, adversarial machine learning, cloud computing security, edge computing security, zero-trust AI, homomorphic encryption, federated learning security, post-quantum cryptography, blockchain for AI security, AI-driven threat detection, model poisoning attacks, anomaly prediction, cyber resilience, decentralized AI security, secure multi-party computation (SMPC).
APA, Harvard, Vancouver, ISO, and other styles
10

Zyphur, Michael. Instrumental Variable Analysis Using Mplus. Instats Inc., 2022. http://dx.doi.org/10.61700/vr95hvyzh3cux469.

Full text
Abstract:
This seminar introduces estimating causal effects with observational data using instrumental variables in Mplus, within both a path-analytic and an SEM framework. Basic introductions are first provided for path analysis and CFA, before describing instrumental variable methods in both observed- and latent-variable frameworks. Traditional bootstrapping and Bayesian methods are explored for estimation and hypothesis testing. An official Instats certificate of completion is provided at the conclusion of the seminar. For European PhD students, the seminar offers 2 ECTS-equivalent points.
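Although the seminar works in Mplus, the underlying two-stage least squares idea can be sketched in a few lines of Python with simulated data (illustrative only, not seminar material):

```python
# Generic two-stage least squares (2SLS) sketch with simulated data (numpy, not Mplus).
import numpy as np

rng = np.random.default_rng(1)
n = 5000
z = rng.normal(size=n)                    # instrument
u = rng.normal(size=n)                    # unobserved confounder
x = 0.8 * z + u + rng.normal(size=n)      # endogenous regressor
y = 2.0 * x + u + rng.normal(size=n)      # true causal effect of x on y is 2.0

# Stage 1: regress x on z; Stage 2: regress y on the fitted values of x.
X1 = np.column_stack([np.ones(n), z])
x_hat = X1 @ np.linalg.lstsq(X1, x, rcond=None)[0]
X2 = np.column_stack([np.ones(n), x_hat])
beta = np.linalg.lstsq(X2, y, rcond=None)[0]
print(f"2SLS estimate of the causal effect: {beta[1]:.2f}")   # close to 2.0
```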
APA, Harvard, Vancouver, ISO, and other styles