
Dissertations / Theses on the topic 'Data Governance'


Consult the top 50 dissertations / theses for your research on the topic 'Data Governance.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Blahová, Leontýna. "Big Data Governance." Master's thesis, Vysoká škola ekonomická v Praze, 2016. http://www.nusl.cz/ntk/nusl-203994.

Full text
Abstract:
This master's thesis is about Big Data Governance and the software used for this purpose. Because Big Data is both a huge opportunity and a risk, I wanted to map products that can easily be used for Data Quality and Big Data Governance in one platform. This thesis does not stay at the level of theoretical knowledge; it also evaluates five key products (from my point of view). I defined requirements for each domain and then set up the weights and points. The main objective is to evaluate the software capabilities and compare them.
APA, Harvard, Vancouver, ISO, and other styles
2

Slouková, Anna. "Postup zavádění Data Governance." Master's thesis, Vysoká škola ekonomická v Praze, 2008. http://www.nusl.cz/ntk/nusl-10498.

Full text
Abstract:
This thesis addresses the Data Governance issue and the way of implementing such a program. It is logically divided into two parts -- theoretical and practical. The theoretical part, represented by the first chapter, summarises current findings about the Data Governance program: it explains what is hidden behind the term Data Governance and the causes for the emergence of Data Governance initiatives, and it itemizes the particular parts of which the program is composed and the basic, mostly software, tools that are necessary for a successful program run. The practical part consists of the second and third chapters. The second chapter contains an enumeration of the various types of outputs that arise either during the implementation of the program or during its run. It categorizes and deals in detail with processes and activities, the organizational structure of the program, documents, used metrics and KPIs, and IS/IT tools. The third chapter describes in detail the process of implementing the program in an enterprise. It is divided into four consecutive phases -- assessment of the current state, design, implementation and run of the program. Every chapter introduces inputs, outputs, a detailed decomposition into particular activities with references to the document templates used during these activities, risks and resources. Two attachments of this thesis provide two helpful documents -- a general document template and a template for a role description -- that support better implementation of the Data Governance program.
APA, Harvard, Vancouver, ISO, and other styles
3

Reken, Jaroslav. "Role v Data Governance." Master's thesis, Vysoká škola ekonomická v Praze, 2008. http://www.nusl.cz/ntk/nusl-19114.

Full text
Abstract:
This work covers the area of Data Governance (DG) with the main focus on roles in DG. The first part captures the DG field from a basic perspective. This chapter introduces the main principles of DG and serves as a guideline for better understanding of the second chapter. The second chapter contains different approaches to DG and to roles in DG. The approaches come from world leaders in the field of DG such as IBM, Teradata, KIK Consulting and The Data Governance Institute. The summary of the second chapter compares these different approaches to organization structure and roles in DG. The third and final chapter contains my own approach to organization structures and roles in DG. It presents a wide variety of roles divided by different factors. This gives a very good and unique perspective on roles and may also be helpful as a guideline for the roles needed in the implementation process of a DG program.
APA, Harvard, Vancouver, ISO, and other styles
4

Zosinčuk, Dominik. "Zavádění projektu data governance." Master's thesis, Vysoká škola ekonomická v Praze, 2013. http://www.nusl.cz/ntk/nusl-197446.

Full text
Abstract:
The topic of this thesis is Data Governance implementation in large companies. These companies struggle to govern and manage data in order to get useful insights for decision making. Data Governance is a new approach to managing companies which helps to solve data management pain points and helps organizations work with data effectively and without problems. Data Governance helps to transform data into an asset. This thesis is divided into a theoretical and a practical part. The theoretical part discusses the reasons for the emergence of Data Governance, analyses the approaches to Data Governance of the world's leading methodologies, and covers the possible focus of Data Governance projects as well as their benefits. An important element of the theoretical part is the definition of the Data Governance components. The implementation of Data Governance is discussed in the practical part. The goal of the practical part is to describe the required artifacts which should exist during the implementation. The described artifacts draw on best practice from the existing literature. These deliverables will help to better structure, govern and successfully implement Data Governance, and delivering them brings value to the company. Each project deliverable includes a definition of its importance for the project team and the company. The most important benefit of the practical part is the aspiration to eliminate pain points during the Data Governance implementation, such as an appropriate project team, cooperation definition, buy-in and deliverables.
APA, Harvard, Vancouver, ISO, and other styles
5

Ullrichová, Jana. "Koncept zavedení Data Governance." Master's thesis, Vysoká škola ekonomická v Praze, 2016. http://www.nusl.cz/ntk/nusl-203901.

Full text
Abstract:
This master's thesis discusses a concept for implementing data governance. The theoretical part of the thesis deals with data governance: it explains why data are important for a company and describes the definitions of data governance, its history, its components, its principles and processes, and how it fits into a company. The theoretical part is complemented with examples of data governance failures and banking specifics. The main goal of this thesis is to create a concept for implementing data governance and to apply it in a real company, which is what the practical part consists of.
APA, Harvard, Vancouver, ISO, and other styles
6

DeStefano, R. J. "Improving Enterprise Data Governance Through Ontology and Linked Data." Thesis, Pace University, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10097925.

Full text
Abstract:

In the past decade, the role of data has grown exponentially, from being the output of a process to becoming a true corporate asset. As the business landscape becomes increasingly complex and the pace of change ever faster, companies need a clear awareness of their data assets, their movement, and how they relate to the organization in order to make informed decisions, reduce cost, and identify opportunity. The increased complexity of corporate technology has also created a high level of risk, as data moving across a multitude of systems carries a higher likelihood of impacting dependent processes and systems should something go wrong or be changed. The result of this increased difficulty in managing corporate data assets is poor enterprise data quality, the impacts of which range into billions of dollars of waste and lost opportunity for businesses.

Tools and processes exist to help companies manage this phenomenon; however, data projects are often subject to a high degree of scrutiny as senior leadership struggles to identify return on investment. While there are many tools and methods to increase a company's ability to govern data, this research stands by the fact that you cannot govern what you do not know. This lack of awareness of the corporate data landscape impacts the ability to govern data, which in turn impacts overall data quality within organizations.

This research seeks to propose a means for companies to better model the landscape of their data, processes, and organizational attributes through the use of linked data, via the Resource Description Framework (RDF) and ontology. The outcome of adopting such techniques is an increased level of data awareness within the organization, resulting in an improved ability to govern corporate data assets. It does this primarily by addressing corporate leadership's low tolerance for taking on large-scale data-centric projects. The nature of linked data, with its incremental and decentralized approach to storing information, combined with a rich ecosystem of open-source or low-cost tools, reduces the financial barriers to entry for these initiatives. Additionally, linked data's distributed nature and flexible structure help foster maximum participation throughout the enterprise to assist in capturing information regarding data assets. This increased participation aids in increasing the quality of the information captured by empowering more of the individuals who handle the data to contribute.

Ontology, in conjunction with linked data, provides an incredibly powerful means to model the complex relationships between an organization, its people, processes, and technology assets. When combined with the graph-based nature of RDF, the model lends itself to presenting concepts such as data lineage, allowing an organization to see the true reach of its data. This research further proposes an ontology that is based on data governance standards, visualization examples and queries against data to simulate common data governance situations, as well as guidelines to assist in its implementation in an enterprise setting.

The result of adopting such techniques will allow for an enterprise to accurately reflect the data assets, stewardship information and integration points that are so necessary to institute effective data governance.
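As a rough illustration of the kind of linked-data modelling this abstract describes, here is a minimal sketch assuming Python with the rdflib library; the ex: namespace, the dataset names and properties such as derivedFrom and stewardedBy are invented for illustration and are not taken from the thesis.

```python
# Minimal sketch (not from the thesis): expressing dataset stewardship and
# lineage as RDF triples with rdflib, then querying downstream datasets.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/governance#")  # illustrative namespace

g = Graph()
g.bind("ex", EX)

# Two data assets and one stewardship assignment
g.add((EX.CustomerMaster, RDF.type, EX.Dataset))
g.add((EX.CustomerMaster, RDFS.label, Literal("Customer master data")))
g.add((EX.MonthlyChurnReport, RDF.type, EX.Dataset))
g.add((EX.CustomerMaster, EX.stewardedBy, EX.JaneDoe))

# Lineage: the report is derived from the master dataset
g.add((EX.MonthlyChurnReport, EX.derivedFrom, EX.CustomerMaster))

# Find everything downstream of the master dataset via a property path
downstream = g.query(
    "SELECT ?d WHERE { ?d ex:derivedFrom+ ex:CustomerMaster }",
    initNs={"ex": EX},
)
for row in downstream:
    print(row.d)

print(g.serialize(format="turtle"))
```

Because each triple is self-contained, such a graph can be grown incrementally by different teams, which fits the low-barrier, decentralized adoption argument made in the abstract.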

APA, Harvard, Vancouver, ISO, and other styles
7

Barker, James M. "Data governance| The missing approach to improving data quality." Thesis, University of Phoenix, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10248424.

Full text
Abstract:

In an environment where individuals use applications to drive activities, from what book to purchase and what film to view to what temperature to heat a home, data is the critical element. To make things work, data must be correct, complete, and accurate. Many firms view data governance as a panacea for the ills of systems and organizational challenges, while other firms struggle to generate value from these programs. This paper documents a study that was executed to understand what is being done by firms in the data governance space and why. The conceptual framework established from the literature on the subject was a set of six areas that should be addressed by a data governance program: data governance councils; data quality; master data management; data security; policies and procedures; and data architecture. There is a wide range of experiences and ways to address data quality, and the focus needs to be on execution. This explanatory case study examined the experiences of 100 professionals at 41 firms to understand what is being done and why professionals are undertaking such an endeavor. The outcome is that firms need to address data quality, data security, and operational standards in a manner that is organized around business value, including strong business leader sponsorship and a documented, dynamic business case. The outcome of this study provides a foundation for data governance program success and a guide to getting started.

APA, Harvard, Vancouver, ISO, and other styles
8

Furlan, Patrícia Kuzmenko. "Fatores determinantes para a adoção das governanças de dados e de informação no ambiente big data." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/3/3136/tde-24092018-081250/.

Full text
Abstract:
In the big data environment, organizations are concerned with extracting value from data and information in order to acquire competitive advantage. However, organizational efforts are required to organize data assets, determine responsibilities with regard to those assets, ensure data quality, and address other aspects. Such activities are covered by data or information governance models. This research investigated how organizations can adopt data or information governance in the big data environment. Multi-sectoral case studies were conducted to identify the determinant factors for adopting data or information governance in the big data environment. The research protocol encompassed elements and contents of data or information governance models and those related to big data value extraction. It was noted that organizational approaches regarding data or information governance are poorly consolidated, but are well known to organizations. In addition, data or information governance models are adopted by organizations with different levels of analytical capabilities. Those models include the definition of strategic objectives, and domains such as data or information quality management, data management (especially metadata), transformation of the organizational culture in relation to data and information, and collaboration and communication among stakeholders. Eight determinant factors were identified for the adoption of data or information governance in the big data environment, covering structural, relational and operational practices of the governance model: 1 - Large, global and diffuse organizations with decentralized business structures and a complex portfolio of products or services; 2 - Appoint a C-level executive, define managers in the structure, and determine data owners and data stewards; 3 - Establish a data committee or other means to bring together the top leaders of the organization; 4 - Engagement of the IT department in data management activities, enabling and executing operational activities on data and information across databases and information systems; 5 - Actively engage in the cultural transformation of the organization into a data-driven one; 6 - Promote communication and internal collaboration; develop communication on the effectiveness of policies and the need for stakeholder adequacy; 7 - Define, manage and control metadata; 8 - Define standards, requirements and control over data quality. This research provides a relevant theoretical consolidation for the field of data or information governance, contemplating a vast list of research variables from the competitive intelligence, IT governance, and data and information governance literatures. It was also possible to expand the data or information governance model through the addition of domains such as collaboration, communication, and cultural transformation. The research also proposes an expansion of the general conceptualization of the terms data governance and information governance.
APA, Harvard, Vancouver, ISO, and other styles
9

Kmoch, Václav. "Data Governance - koncept projektu zavedení procesu." Master's thesis, Vysoká škola ekonomická v Praze, 2010. http://www.nusl.cz/ntk/nusl-73648.

Full text
Abstract:
Companies today face an underlying issue concerning how to manage the growing volume of corporate data needed for decision-making processes and how to control the credibility and relevance of the information and knowledge derived from it. Further questions deal with the problem of responsibility and data security, which represents a potential risk of information outflow. Data Governance concepts provide a comprehensive answer to these questions. However, deciding to implement a Data Governance program usually triggers many other problems, such as setting up environments, determining the project scope, allocating the capacity of data experts and finding one's way among the non-uniform Data Governance concepts offered by various IT vendors. The aim of this thesis is to draw up a unified and universal implementation process that helps with setting up DG projects and gives a clear conception of how to run these projects step by step. The first and second parts of the thesis describe the principles, components and tools of Data Governance as well as methods of measuring data quality levels. The third part offers a concrete approach for successful implementation of the Data Governance conception in a corporate data environment.
APA, Harvard, Vancouver, ISO, and other styles
10

Alfaro, Carranza Rosa Ángela, and Mendoza Libusi Deyanira Ampuero. "Modelo de madurez de Data Governance." Bachelor's thesis, Universidad Peruana de Ciencias Aplicadas (UPC), 2015. http://hdl.handle.net/10757/347094.

Full text
Abstract:
Data Governance is an evolving concept that encompasses the people who hold major responsibilities within organizations and the processes they use to manage information. This project proposes the creation of a data governance maturity model based on the IBM Data Governance Maturity Model. The objective of this model is to help organizations understand their level of maturity in relation to the management of their data and to identify their weaknesses, in order to subsequently take corrective action before opting for the implementation of a Data Governance program.
APA, Harvard, Vancouver, ISO, and other styles
11

Cave, Ashley. "Exploring Strategies for Implementing Data Governance Practices." ScholarWorks, 2017. https://scholarworks.waldenu.edu/dissertations/4206.

Full text
Abstract:
Data governance reaches across the field of information technology and is increasingly important for big data efforts, regulatory compliance, and ensuring data integrity. The purpose of this qualitative case study was to explore strategies for implementing data governance practices. This study was guided by institutional theory as the conceptual framework. The study's population consisted of informatics specialists from a small hospital, which is also a research institution, in the Washington, DC, metropolitan area. This study's data collection included semi-structured, in-depth individual interviews (n = 10), focus groups (n = 3), and the analysis of organizational documents (n = 19). By using methodological triangulation and by member checking with interviewees and focus group members, efforts were made to increase the validity of this study's findings. Through thematic analysis, 5 major themes emerged from the study: structured oversight with committees and boards, effective and strategic communications, compliance with regulations, obtaining stakeholder buy-in, and benchmarking and standardization. The results of this study may help informatics specialists better strategize future implementations of data governance and information management practices. By implementing effective data governance practices, organizations will be able to successfully manage and govern their data. These findings may contribute to social change by ensuring better protection of protected health information and personally identifiable information.
APA, Harvard, Vancouver, ISO, and other styles
12

Landelius, Cecilia. "Data governance in big data : How to improve data quality in a decentralized organization." Thesis, KTH, Industriell ekonomi och organisation (Inst.), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-301258.

Full text
Abstract:
The use of the internet has increased the amount of data available and gathered. Companies are investing in big data analytics to gain insights from this data. However, the value of the analysis, and of the decisions made based on it, depends on the quality of the underlying data. For this reason, data quality has become a prevalent issue for organizations. Additionally, failures in data quality management are often due to organizational aspects. Given the growing popularity of decentralized organizational structures, there is a need to understand how a decentralized organization can improve data quality. This thesis conducts a qualitative single case study of an organization in the logistics industry that is currently shifting towards becoming data driven and struggling to maintain data quality. The purpose of the thesis is to answer the questions: • RQ1: What is data quality in the context of logistics data? • RQ2: What are the obstacles to improving data quality in a decentralized organization? • RQ3: How can these obstacles be overcome? Several data quality dimensions were identified and categorized as critical issues, issues and non-issues. From the gathered data, the dimensions completeness, accuracy and consistency were found to be critical data quality issues. The three most prevalent obstacles to improving data quality were data ownership, data standardization and understanding the importance of data quality. To overcome these obstacles, the most important measures are creating data ownership structures, implementing data quality practices and changing the mindset of the employees to a data-driven mindset. The generalizability of a single case study is low; however, there are insights and trends which can be derived from the results of this thesis and used for further studies and by companies undergoing similar transformations.
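To make the quality dimensions mentioned here concrete, a minimal sketch follows, assuming pandas and an invented shipments table; the thresholds and rules are illustrative assumptions, not findings from the thesis.

```python
# Minimal sketch (not from the thesis): scoring three data quality
# dimensions on an invented logistics dataset with pandas.
import pandas as pd

shipments = pd.DataFrame({
    "shipment_id": ["S1", "S2", "S3", "S4"],
    "weight_kg":   [12.0, None, 7.5, -3.0],       # a missing and a negative value
    "origin":      ["SE", "SE", "sweden", "DK"],  # inconsistent country codes
})

# Completeness: share of non-missing values per column
completeness = shipments.notna().mean()

# Accuracy (simplified): share of weights inside a plausible range
accuracy = shipments["weight_kg"].between(0, 30_000).mean()

# Consistency: share of origin codes following the two-letter uppercase format
consistency = shipments["origin"].str.fullmatch(r"[A-Z]{2}").mean()

print(completeness, accuracy, consistency, sep="\n")
```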
APA, Harvard, Vancouver, ISO, and other styles
13

Carvalho, Mónica Isabel Machado. "Data Governance : estudo e aplicação na EDP distribuição." Master's thesis, FEUC, 2012. http://hdl.handle.net/10316/21346.

Full text
Abstract:
The following report was prepared as part of a curricular internship concluding the Master's in Management at the Faculty of Economics of the University of Coimbra. The internship took place at EDP Distribuição, in the Organization and Development Directorate, between 27 February and 27 July of the current year. Within the area of information systems, the topic of Data Governance was addressed. The work is divided into four parts. First, the company is presented. The second part provides a theoretical framework, with a reference to the origin of data and databases, followed by a brief explanation of why the practice of Data Governance is needed in organizations today (after Steve Sarsfield) and, finally, an introduction to the concept and to the international practices that prevail in this area. The third part presents the activities carried out during the internship, in which these practices were applied to the information on public lighting at EDP Distribuição, together with some suggestions to the company for future analyses. The last part presents the conclusions to be drawn from the work as a whole.
Internship report for the Master's in Management, presented to the Faculty of Economics of the University of Coimbra, under the supervision of Luís Alçada and Isabel Xisto.
APA, Harvard, Vancouver, ISO, and other styles
14

Randhawa, Tarlochan Singh. "Incorporating Data Governance Frameworks in the Financial Industry." ScholarWorks, 2019. https://scholarworks.waldenu.edu/dissertations/6478.

Full text
Abstract:
Data governance frameworks are critical to reducing operational costs and risks in the financial industry. Corporate data managers face challenges when implementing data governance frameworks. The purpose of this multiple case study was to explore the strategies that successful corporate data managers in some banks in the United States used to implement data governance frameworks to reduce operational costs and risks. The participants were 7 corporate data managers from 3 banks in North Carolina and New York. Servant leadership theory provided the conceptual framework for the study. Methodological triangulation involved assessment of nonconfidential bank documentation on the data governance framework, Basel Committee on Banking Supervision's standard 239 compliance documents, and semistructured interview transcripts. Data were analyzed using Yin's 5-step thematic data analysis technique. Five major themes emerged: leadership role in data governance frameworks to reduce risk and cost, data governance strategies and procedures, accuracy and security of data, establishment of a data office, and leadership commitment at the organizational level. The results of the study may lead to positive social change by supporting approaches to help banks maintain reliable and accurate data as well as reduce data breaches and misuse of consumer data. The availability of accurate data may enable corporate bank managers to make informed lending decisions to benefit consumers.
APA, Harvard, Vancouver, ISO, and other styles
15

Schumacher, Jörg [Verfasser]. "Prozess- und Data Governance im industriellen Anlagenmanagement / Jörg Schumacher." München : Verlag Dr. Hut, 2012. http://d-nb.info/102107313X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Stephens, Joshua J. "Data Governance Importance and Effectiveness| Health System Employee Perception." Thesis, Central Michigan University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10751061.

Full text
Abstract:

The focus of this study was to understand how health system employees define Data Governance (DG), how they perceive its importance and effectiveness for their role, and how it may impact the strategic outcomes of the organization. A better understanding of employee perceptions will help identify areas for education, process improvement and opportunities for more structured data governance within the healthcare industry. Additionally, understanding how employees associate each of these domains with strategic outcomes will help inform decision-makers on how best to align the Data Governance strategy with that of the organization.

This research is intended to expand the data governance community's knowledge about how health system employee demographics influence their perceptions of Data Governance. Very little academic research has been done to date, which is unfortunate given the value of employee engagement to an organization's culture, juxtaposed with the intent of Data Governance to change that culture into one that fully realizes the value of its data and treats it as a corporate asset. This lack of understanding leads to two distinct problems: executive resistance toward starting a Data Governance Program, due to the lack of association between organizational strategic outcomes and Data Governance, and employee, or cultural, resistance to the change Data Governance brings to employee roles and processes.

The dataset for this research was provided by a large Midwestern health system's Enterprise Data Governance Program and was collected internally through an electronic survey. A mixed methods approach was taken. The first analysis examined how employees varied in their understanding of the definition of data governance as represented by the Data Management Association's DAMA Wheel. The last three research questions focused on determining which factors influence a health system employee's perception of the importance, effectiveness, and impact Data Governance has on their role and on the organization.

Perceptions of the definition of Data Governance varied slightly by Gender, Management Role, IT Role, and Role Tenure, and the thematic analysis identified a lack of understanding of Data Governance among health system employees. Perceptions of Data Governance importance and effectiveness varied by participants' gender and by organizational role in analytics, IT, and management. In general, employees perceive a data governance deficit in their role, based on their perceptions of its importance and effectiveness. Lastly, employee perceptions of the impact of Data Governance on strategic outcomes varied among participants by gender for Cost of Care and by Analytics Role for Quality of Analytics. For both Quality of Care and Patient Experience, perceptions did not vary.

Perceptions related to the impact of Data Governance on strategic outcomes found that Data Quality Management was most impactful to all four strategic outcomes included in the study: quality of care, cost of care, patient experience, and quality of analytics. Leveraging the results of this study to tailor communication, education and training, and roles and responsibilities required for a successful implementation of Data Governance in healthcare should be considered by DG practitioners and executive leadership implementing or evaluating a DG Program within a healthcare organization. Additionally, understanding employee perceptions of Data Governance and their impact to strategic outcomes will provide meaningful insight to executive leadership who have difficulty connecting the cost of Data Governance to the value realization, which is moving the organization closer to achieving the Triple Aim by benefiting from their data.

APA, Harvard, Vancouver, ISO, and other styles
17

Rivera, Stephanie, Nataly Loarte, Carlos Raymundo, and Francisco Dominguez. "Data governance maturity model for micro financial organizations in Peru." SciTePress, 2017. http://hdl.handle.net/10757/656360.

Full text
Abstract:
Micro finance organizations play an important role since they facilitate the integration of all social classes into sustained economic growth. Against this background, exponential growth of the data resulting from the transactions and operations these companies carry out on a daily basis is inevitable. Appropriate management of this data is therefore necessary; otherwise, the lack of valuable, quality information for decision-making and process improvement will result in a competitive disadvantage. Data Governance provides a different approach to data management, seen from the perspective of business assets. In this regard, the organization needs the ability to assess the extent to which that management is correct or is generating the expected results. This paper proposes a data governance maturity model for micro finance organizations, which frames a series of formal requirements and criteria providing an objective diagnosis. The model was applied using information from a Peruvian micro finance organization, and four of the seven domains listed in the model were evaluated. Finally, after validation, the proposed model was shown to serve as a means of identifying the gap between data management and the objectives set.
APA, Harvard, Vancouver, ISO, and other styles
18

Assis, Celia Barbosa. "Governança da informação: viabilizadores e inibidores para adoção organizacional." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/3/3136/tde-27042018-102121/.

Full text
Abstract:
Information Governance (IG) is a new approach to the governance of organizational information assets, resulting from challenges such as exponential data growth, new and more complex business rules, and a more regulated and litigious competitive context. The main objective of the research was to investigate how organizational, relational and Information Technology (IT) factors can act as enablers, inhibitors or components of IG adoption in companies. Supported by a literature review, exploratory-descriptive qualitative research was conducted, based on 21 case studies of Brazilian companies selected for their high intensity of information usage in processes, products and services. The theoretical contributions of the research are two proposed models: a Matrix for Institutional Governance Comparison, used to differentiate Corporate, Information, IT and Data governance; and an Enablers and Inhibitors Matrix, with factors derived from theory and the case studies. Practical contributions include views that differ from theory, especially related to the interviewees' professional area, industry and company size, and factors not predicted by theory, such as lack of alignment between IT and business areas, institutional culture, technology advancements and change management. The comparison between theory and practice suggests greater polarization in factors such as the data accumulation mentality, organizational practices and policies, communication between areas, and user education. The research concludes that IG is a business-specific discipline, fundamental for sensemaking in company studies and for supporting coherent and effective organizational projects and processes.
APA, Harvard, Vancouver, ISO, and other styles
19

Barata, André Montoia. "Governança de dados em organizações brasileiras: uma avaliação comparativa entre os benefícios previstos na literatura e os obtidos pelas organizações." Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/100/100131/tde-28072015-215618/.

Full text
Abstract:
IT Governance (ITG) has a key role in achieving IT alignment with the business, empowering IT processes in line with business goals. Aligning IT with the business is crucial, but it is also necessary to ensure the alignment of ITG with Data Governance (DG). DG is responsible for the control and management of the organization's data, enabling the transformation of data into information for strategic decision making. Having DG aligned with ITG provides better performance for organizations that need the right information at the right time for decision making. Frameworks of good management practices support this alignment and enable organizations to implement this governance. This study aimed to identify the DG processes and frameworks implemented in Brazilian organizations and to compare the benefits achieved in the implementation with those proposed in the literature. The exploratory and qualitative study enabled case studies in three large Brazilian organizations that have implemented or are in the process of implementing DG. The case studies were performed from two different views: the consultancy that implemented DG and the organization that hired the consultancy. Data collection was conducted through interviews, and content analysis techniques were applied to the data collected. As a result, it was found that for the organizations studied the level of DG implementation was medium, while the degree of benefits obtained was high. This is due to the lack of DG in the organizations studied and the great improvement and benefits identified by the interviewees, even with only a partial DG implementation.
APA, Harvard, Vancouver, ISO, and other styles
20

Viljoen, Melanie. "A framework towards effective control in information security governance." Thesis, Nelson Mandela Metropolitan University, 2009. http://hdl.handle.net/10948/887.

Full text
Abstract:
The importance of information in business today has made the need to properly secure this asset evident. Information security has become a responsibility for all managers of an organization. To better support more efficient management of information security, timely information security management information should be made available to all managers. Smaller organizations face special challenges with regard to information security management and reporting due to limited resources (Ross, 2008). This dissertation discusses a Framework for Information Security Management Information (FISMI) that aims to improve the visibility and contribute to better management of information security throughout an organization by enabling the provision of summarized, comprehensive information security management information to all managers in an affordable manner.
APA, Harvard, Vancouver, ISO, and other styles
21

Castillo, Luis Felipe, Carlos Raymundo, and Francisco Dominguez Mateos. "Information architecture model for data governance initiatives in peruvian universities." Association for Computing Machinery, Inc, 2017. http://hdl.handle.net/10757/656361.

Full text
Abstract:
The full text of this work is not available in the UPC Academic Repository due to restrictions imposed by the publisher where it has been published.
This research revealed the need to design an information architecture model for Data Governance in order to reduce the gap between Information Technology and Information Management. The model is designed to strike a balance between the need to invest in technology and the ability to manage the information that originates from the use of those technologies, as well as to measure with greater precision the generation of IT value through the use of quality information and user satisfaction. In order to test our model, we take a case study from the higher education sector in Peru to demonstrate successful data governance projects with this model.
APA, Harvard, Vancouver, ISO, and other styles
22

Millar, Gary, Engineering & Information Technology, Australian Defence Force Academy, UNSW. "The viable governance model (VGM) : a theoretical model of IT governance within a corporate setting." Awarded by: University of New South Wales - Australian Defence Force Academy, Engineering & Information Technology, 2009. http://handle.unsw.edu.au/1959.4/44262.

Full text
Abstract:
Empirical studies into IT governance have considerably advanced our understanding of the mechanisms and practices used by contemporary organisations to govern their current and future use of IT. However, despite the progress made in identifying the various elements employed by contemporary IT governance arrangements, there has been relatively little research into the formulation of a holistic model of IT governance that integrates the growing collection of parts into a coherent whole. To further advance the concept of IT governance, the Viable Governance Model (VGM) is proposed. The VGM is a theoretical model of IT governance within a corporate setting that is based on the laws and principles of cybernetics as embodied in Stafford Beer's Viable System Model (VSM). Cybernetics, the science of control and communication in biological and artificial systems, establishes a firm theoretical foundation upon which to design a system that directs and controls the IT function in a complex enterprise. The VGM is developed using an approach based on design science. Given the theoretical nature of the artefact that is being designed, the development and evaluation activities are primarily conceptual in nature. That is, the development activity involves the design of a theoretical model of IT governance using theoretical concepts and constructs drawn from several reference disciplines including cybernetics, organisation theory, and complexity theory. The conceptual evaluation of the VGM indicates that the model is sufficiently robust to incorporate many of the empirical findings arising from academic and professional research. The resultant model establishes a "blueprint", or set of design principles, that can be used by IS practitioners to design and implement a system of IT governance that is appropriate to their organisational contingencies. Novel aspects of this research include: the integration of corporate and IT governance; the reinterpretation of the role of the enterprise architecture (EA) within a complex enterprise; the exposition of the relationship between the corporate and divisional IT groups; and the resolution of the centralisation versus decentralisation dilemma that confront designers of IT governance arrangements.
APA, Harvard, Vancouver, ISO, and other styles
23

Mitchell, Elliot A. "Political competition and electoral competitiveness in Sub-Saharan Africa : a conceptual critique with data." Master's thesis, University of Cape Town, 2009. http://hdl.handle.net/11427/4438.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Pejčoch, David. "Komplexní řízení kvality dat a informací." Doctoral thesis, Vysoká škola ekonomická v Praze, 2010. http://www.nusl.cz/ntk/nusl-199303.

Full text
Abstract:
This work deals with the issue of Data and Information Quality. It critically assesses the current state of knowledge of the various methods used for Data Quality Assessment and Data (Information) Quality improvement, and it proposes new principles where this critical assessment revealed gaps. The main idea of this work is the concept of Data and Information Quality Management across the entire universe of data. This universe represents all the data sources which the respective subject comes into contact with and which are used under its existing or planned processes. For all these data sources, this approach considers setting a consistent set of rules, policies and principles with respect to the current and potential benefits of these resources, while also taking into account the potential risks of their use. An imaginary red thread running through the text is the importance of additional knowledge within the process of Data (Information) Quality Management. The introduction of a knowledge base oriented towards supporting Data (Information) Quality Management (QKB) is therefore one of the fundamental principles of the author's proposed set of best practices.
APA, Harvard, Vancouver, ISO, and other styles
25

Khalid, Shehla. "Towards Data Governance for International Dementia Care Mapping (DCM). A Study Proposing DCM Data Management through a Data Warehousing Approach." Thesis, University of Bradford, 2010. http://hdl.handle.net/10454/5226.

Full text
Abstract:
Information Technology (IT) plays a vital role in improving health care systems by enhancing quality, efficiency, safety, security and collaboration and by informing decision making. Dementia, a decline in mental ability which affects memory, concentration and perception, is a key issue in health and social care, given the current context of an aging population. The quality of dementia care is noted as an international area of concern. Dementia Care Mapping (DCM) is a systematic observational framework for assessing and improving dementia care quality. DCM has been used as both a research and practice development tool internationally. However, despite the success of DCM and the annual generation of a huge amount of data on dementia care quality, it lacks a governance framework based on modern IT solutions for data management. Such a framework would provide the organisations using DCM with a systematic way of storing, retrieving and comparing data over time, to monitor progress or trends in care quality. Data Governance (DG) refers to the policies and accountabilities applied to data management in an organisation. The data management procedure covers the availability, usability, quality, integrity, and security of the organisation's data according to its users and requirements. This novel multidisciplinary study proposes a comprehensive solution for governing DCM data by introducing a data management framework based on a data warehousing approach. Original contributions have been made through the design and development of a data management framework, describing the DCM international database design and the DCM data warehouse architecture. These data repositories will provide the acquisition and storage solutions for DCM data. The designed DCM data warehouse allows various analytical applications to be applied for multidimensional analysis. Different queries are applied to demonstrate the DCM data warehouse functionality. A case study is also presented to explain the clustering technique applied to the DCM data. The performance of the DCM data governance framework is demonstrated in this case study through the data clustering results. Results are encouraging and open up discussion for further analysis.
APA, Harvard, Vancouver, ISO, and other styles
26

Alserafi, Ayman. "Dataset proximity mining for supporting schema matching and data lake governance." Doctoral thesis, Universitat Politècnica de Catalunya, 2021. http://hdl.handle.net/10803/671540.

Full text
Abstract:
With the huge growth in the amount of data generated by information systems, it is common practice today to store datasets in their raw formats (i.e., without any data preprocessing or transformations) in large-scale data repositories called Data Lakes (DLs). Such repositories store datasets from heterogeneous subject-areas (covering many business topics) and with many different schemata. Therefore, it is a challenge for data scientists using the DL for data analysis to find relevant datasets for their analysis tasks without any support or data governance. The goal is to be able to extract metadata and information about datasets stored in the DL to support the data scientist in finding relevant sources. This shapes the main goal of this thesis, where we explore different techniques of data profiling, holistic schema matching and analysis recommendation to support the data scientist. We propose a novel framework based on supervised machine learning to automatically extract metadata describing datasets, including computation of their similarities and data overlaps using holistic schema matching techniques. We use the extracted relationships between datasets in automatically categorizing them to support the data scientist in finding relevant datasets with intersection between their data. This is done via a novel metadata-driven technique called proximity mining which consumes the extracted metadata via automated data mining algorithms in order to detect related datasets and to propose relevant categories for them. We focus on flat (tabular) datasets organised as rows of data instances and columns of attributes describing the instances. Our proposed framework uses the following four main techniques: (1) Instance-based schema matching for detecting relevant data items between heterogeneous datasets, (2) Dataset level metadata extraction and proximity mining for detecting related datasets, (3) Attribute level metadata extraction and proximity mining for detecting related datasets, and finally, (4) Automatic dataset categorization via supervised k-Nearest-Neighbour (kNN) techniques. We implement our proposed algorithms via a prototype that shows the feasibility of this framework. We apply the prototype in an experiment on a real-world DL scenario to prove the feasibility, effectiveness and efficiency of our approach, whereby we were able to achieve high recall rates and efficiency gains while improving the computational space and time consumption by two orders of magnitude via our proposed early-pruning and pre-filtering techniques in comparison to classical instance-based schema matching techniques. This proves the effectiveness of our proposed automatic methods in the early-pruning and pre-filtering tasks for holistic schema matching and the automatic dataset categorisation, while also demonstrating improvements over human-based data analysis for the same tasks.
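As a rough, hedged illustration of the final categorisation step described above, the sketch below assumes scikit-learn; the metadata features (row count, column count, numeric ratio, overlap score) and the category labels are invented for illustration and are not the thesis implementation.

```python
# Minimal sketch (not the thesis code): categorising data lake datasets by
# k-Nearest-Neighbours over dataset-level metadata profiles.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

# Metadata profiles of datasets whose subject-area category is already known
X_known = np.array([
    # rows, columns, numeric_ratio, overlap_score
    [1_000_000, 12, 0.8, 0.60],   # "sales"
    [  900_000, 10, 0.7, 0.55],   # "sales"
    [    5_000, 40, 0.2, 0.10],   # "hr"
    [    4_500, 35, 0.3, 0.12],   # "hr"
])
y_known = ["sales", "sales", "hr", "hr"]

scaler = StandardScaler().fit(X_known)
knn = KNeighborsClassifier(n_neighbors=3).fit(scaler.transform(X_known), y_known)

# Profile of a newly ingested, uncategorised dataset
X_new = np.array([[750_000, 11, 0.75, 0.50]])
print(knn.predict(scaler.transform(X_new)))   # -> ['sales']
```

In the thesis, such features would come from the extracted dataset-level and attribute-level metadata and the computed proximities, rather than from hand-written numbers as here.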
APA, Harvard, Vancouver, ISO, and other styles
27

Ofe, Hosea, and Carl Tinnsten. "Open Data : Attracting third party innovations." Thesis, Umeå universitet, Institutionen för informatik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-88061.

Full text
Abstract:
With the adoption of European Commission directives in 2003 related to open data, member states of the EU were encouraged to provide citizens access to previously inaccessible public sector data. This published public data could be used, reused and distributed free of charge. Following these directives, many municipalities within Sweden and Europe in general created open data portals for publishing public sector data. With such data published, expectations of third party innovations were highly envisaged. This thesis adopts a qualitative research approach to investigate the challenges and proposed solutions of using open data for third party innovation. The thesis identifies various aspects of governance, architecture and business model that public organizations should take into consideration in order to attract third party innovations on open data. Specifically, the results of this thesis suggest that in order for open data to act as a platform for innovation, there is a need for integration of open data policies. This involves developing common standards relating to governance, data format, and architecture. Harmonizing these standards across municipalities within Sweden and Europe would provide the much-needed user base which is necessary to enhance the two-sided nature of innovations on open data platforms.
APA, Harvard, Vancouver, ISO, and other styles
28

Rainie, Stephanie Carroll, Jennifer Lee Schultz, Eileen Briggs, Patricia Riggs, and Nancy Lynn Palmanteer-Holder. "Data as a Strategic Resource: Self-determination, Governance, and the Data Challenge for Indigenous Nations in the United States." UNIV WESTERN ONTARIO, 2017. http://hdl.handle.net/10150/624737.

Full text
Abstract:
Data about Indigenous populations in the United States are inconsistent and irrelevant. Federal and state governments and researchers direct most collection, analysis, and use of data about U.S. Indigenous populations. Indigenous Peoples' justified mistrust further complicates the collection and use of these data. Nonetheless, tribal leaders and communities depend on these data to inform decision making. Reliance on data that do not reflect tribal needs, priorities, and self-conceptions threatens tribal self-determination. Tribal data sovereignty through governance of data on Indigenous populations is long overdue. This article provides two case studies of the Ysleta del Sur Pueblo and Cheyenne River Sioux Tribe and their demographic and socioeconomic data initiatives to create locally and culturally relevant data for decision making.
APA, Harvard, Vancouver, ISO, and other styles
29

Al-Najjar, Basil. "Modelling capital structure, dividend policy, and corporate governance : evidence from Jordanian data." Thesis, University of the West of England, Bristol, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.445110.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Ender, Linda. "Data Governance in Digital Platforms : A case analysis in the building sector." Thesis, Umeå universitet, Institutionen för informatik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-185598.

Full text
Abstract:
Data are often the foundation of digital innovation and are seen as a highly valuable asset for any organization. Many companies aim to put data at the core of their business, but struggle with regulating data in complex environments. Data governance thus becomes an integral part of data-driven business. However, only a minority of companies fully engage in data governance, and research also lacks knowledge about data governance in complex environments such as digital platforms. Therefore, this thesis examines the role of data governance in digital platforms by researching the conceptual characteristics of platform data governance. The iterative taxonomy development process by Nickerson et al. (2013) has been used to classify the characteristics of platform data governance. The results are derived from existing literature and motivated by new insights from expert interviews as well as a case analysis of a real-life platform. The final taxonomy shows that the conceptual characteristics of platform data governance are based on the dimensions purpose, platform data, responsibilities, decision domains and compliance. The findings address challenges of data governance in inter-organizational settings and help practitioners to define their own data governance. Additionally, the thesis highlights the potential for future research.
APA, Harvard, Vancouver, ISO, and other styles
31

Salman, Kanbar Ahmad. "Migrating and governing data in the jungle : A study of migrations and data governance in Seco Tools AB." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-253329.

Full text
Abstract:
How do you find relevant data in the jungle of a large multinational enterprise? The purpose of this thesis is to investigate how to migrate data and how to set strategies and methods for organizational governance that enable employees and users to find the relevant data they need to perform their daily work at Seco Tools. A qualitative research approach has been used to conduct this study. Primary data was obtained through semi-structured interviews with Seco Tools and its parent company Sandvik. Secondary data was obtained through other sources such as scientific articles, books and the internet. The conclusion of this study is that, in order to make data relevant, valuable and easy to find within Microsoft SharePoint, Seco Tools needs to implement migration and information management strategies, policies and methods such as metadata, taxonomies and collaboration strategies for its employees.
APA, Harvard, Vancouver, ISO, and other styles
32

Ampuero, Mendoza Libusi, and Carranza Rosa Alfaro. "Modelo de madurez Tecno-organizacional para la puesta en marcha exitosa de iniciativas de Data Governance." International Institute of Informatics and Systemics, IIIS, 2017. http://hdl.handle.net/10757/622492.

Full text
Abstract:
Septima Conferencia Iberoamericana de Complejidad, Informatica y Cibernetica, CICIC 2017 - 7th Ibero-American Conference on Complexity, Informatics and Cybernetics, CICIC 2017; Orlando; United States; 21 March 2017 through 24 March 2017; Code 131437
Data management has undergone several changes over the last few years, leaving behind the days when it was necessary to convince people of the value of data in their organizations. Over the years, the volume and cost of data management have been increasing at a high rate. Today, organizations need strategic management that allows them to transform data collected from various sources into clear and accurate information, so that it is available whenever they need it. The motivation of the present study is to generate a model for measuring the level of organizational maturity that helps ensure the success of a Data Governance initiative, and in this way ensure that all the information of the organization meets the demands of the business. For this reason, an organizational maturity model for the success of Data Governance initiatives is proposed, based on 11 categories and drawing on the analysis of the frameworks most widely adopted by industry (Kalido, Dataflux, etc.), in order to establish the level of maturity and the steps to be taken at each of these levels.
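To make the idea of a category-based maturity diagnosis concrete, here is a minimal, hypothetical sketch in Python; the category names, weights and 1-5 scale are assumptions for illustration and do not reproduce the eleven categories of the proposed model.

# Hedged sketch of a weighted maturity score over assessment categories.
# Categories, weights and the 1-5 scale are illustrative assumptions only.
def maturity_level(scores, weights):
    """Weighted average of per-category scores (each assumed to be on a 1-5 scale)."""
    total_weight = sum(weights.values())
    return sum(scores[c] * weights[c] for c in scores) / total_weight

scores = {"data quality": 2.0, "stewardship": 3.0, "policies": 1.5, "architecture": 2.5}
weights = {"data quality": 0.3, "stewardship": 0.2, "policies": 0.3, "architecture": 0.2}
print(f"overall maturity: {maturity_level(scores, weights):.2f}")  # 2.15 on this toy input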
Peer reviewed
APA, Harvard, Vancouver, ISO, and other styles
33

Vásquez, Daniel, Romina Kukurelo, Carlos Raymundo, Francisco Dominguez, and Javier Moguerza. "Master data management maturity model for the successful of mdm initiatives in the microfinance sector in Peru." Association for Computing Machinery, 2018. http://hdl.handle.net/10757/624683.

Full text
Abstract:
The full text of this work is not available in the UPC Academic Repository due to restrictions from the publisher where it was published.
The microfinance sector has a strategic role since it facilitates the integration and development of all social classes towards sustained economic growth. At the same time, the exponential growth of data resulting from the transactions and operations carried out with these companies on a daily basis has become imminent. Appropriate management of this data is therefore necessary because, otherwise, it will result in a competitive disadvantage due to the lack of valuable, quality information for decision-making and process improvement. Master Data Management (MDM) offers a new approach to data management, reducing the gap between the business perspective and the technology perspective. In this regard, it is important that the organization has the ability to implement a data management model for Master Data Management. This paper proposes a master data management maturity model for the microfinance sector, which frames a series of formal requirements and criteria providing an objective diagnosis, with the aim of improving processes until entities reach the desired maturity levels. The model was implemented based on information from Peruvian microfinance organizations. Finally, after validation of the proposed model, it was shown to serve as a means of identifying the maturity level and of supporting the success of Master Data Management initiatives.
Peer reviewed
APA, Harvard, Vancouver, ISO, and other styles
34

Vacek, Martin. "Řízení kvality klientských dat." Master's thesis, Vysoká škola ekonomická v Praze, 2011. http://www.nusl.cz/ntk/nusl-85260.

Full text
Abstract:
A series of competitive battles is emerging today as companies recover from the last economic crisis. These battles are for customers. Take the financial market, for example -- it is quite saturated. Most people have held some financial product since their date of birth. Each of us has insurance, and most of us have at least a standard bank account. For us to be allowed to use these products, insurance companies, banks and similar firms must hold the necessary information about us. As time passes we change the settings of these products, we change the products themselves, buy new ones, adjust our portfolios, move to the competition; even the employees and financial advisors who take care of us change over time. All of the above means new data (or at least a change). Every such action leaves a digital footprint in the information systems of financial services providers, who then try to process these data and use them to raise profit through various methods. From an individual company's point of view, a customer (a person who has held at least one product historically) is unfortunately often recorded multiple times as a result of these changes, so that one person appears as several. There are many reasons for this, and they are well known in practice (many of them are named in the theoretical part). One of the main reasons is the fact that data quality was not a priority in the past. However, this is no longer the case, and one of the success factors in exploiting a client-base portfolio is the quality of the information that companies track. Several methodologies for data quality governance are being created and defined nowadays, although there is still a lack of knowledge about their implementation (and not just in the local Czech market). Such experience is highly prized, yet most internal IT departments face a lack of knowledge and capacity. This is where a great opportunity emerges for companies that accumulate know-how from projects that are not frequent within individual firms. One such company is KPMG Czech Republic, LLC., thanks to which this work was created. So, what is the purpose and field of knowledge covered in the following pages? The purpose is to describe one such project concerning the analysis and implementation of selected data quality tools and methodologies in a real company. The main output is a supporting framework as well as a tool that will help managers reduce administration and difficulties when managing data quality projects.
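As a hedged aside, the duplicate-client problem described in the abstract can be illustrated with a tiny Python sketch; the field names, sample records and similarity threshold are invented for illustration and are not the project's actual matching rules.

# Hedged sketch of simple duplicate-client detection: flag record pairs with the
# same birth date and highly similar names. Fields and threshold are assumptions.
from difflib import SequenceMatcher

def similar(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

clients = [
    {"id": 1, "name": "Jan Novak", "birth_date": "1980-05-01"},
    {"id": 2, "name": "Jan Novák", "birth_date": "1980-05-01"},
    {"id": 3, "name": "Petr Svoboda", "birth_date": "1975-11-23"},
]

for i, a in enumerate(clients):
    for b in clients[i + 1:]:
        if a["birth_date"] == b["birth_date"] and similar(a["name"], b["name"]) > 0.85:
            print(f"possible duplicate: records {a['id']} and {b['id']}")

Production data quality tools use far richer matching logic, but the sketch shows the kind of rule such a framework automates.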
APA, Harvard, Vancouver, ISO, and other styles
35

Romero, Alvaro, Antony Gonzales, and Carlos Raymundo. "Data governance reference model under the lean methodology for the implementation of successful initiatives in the Peruvian microfinance sector." Association for Computing Machinery, 2019. http://hdl.handle.net/10757/656344.

Full text
Abstract:
The full text of this work is not available in the UPC Academic Repository due to restrictions from the publisher where it was published.
Microfinance enables the integration of all sectors into the country's economic growth. Without formal governance, data duplication, invalid data and the inability to obtain reliable data for decision-making arise. For this reason, Data Governance is the key to enabling an autonomous, productive and reliable working environment for the use of data. Although Data Governance models already exist, in most cases they do not meet the requirements of the sector, which has its own characteristics, such as exponential growth in data volume, data criticality, and the regulatory frameworks to which it is exposed. The purpose of this research is to design a reference model for microfinance organizations, supported by an evaluation tool that provides a diagnosis, with the objective of implementing and improving organizational processes regarding Data Governance. The model was applied using information from Peru's microfinance organizations, yielding a diagnosed score of 1.72, which is encouraging for the organization since it shows that it has defined all of its plans concerning Data Governance. Finally, after validation, it was concluded that the model serves as a means of identifying the current status of these organizations and of ensuring the success of Data Governance initiatives.
APA, Harvard, Vancouver, ISO, and other styles
36

Korada, Nishat. "Evaluation of the Self-Governance Developer Framework from Software Developers' Perspective." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-230212.

Full text
Abstract:
Every software developer uses some software process to build computer software. The process may be haphazard or ad hoc; may change on a daily basis; may or may not be efficient, effective, or even successful; but a "process" does exist. Although many software development process models exist nowadays, there is a real dearth of models which focus on the job of the individual software developer and his or her individual effort. The Self-Governance Developer Framework, developed by Prof. Mira Kajko-Mattsson and Gudrun Jeppesen, is a new software development model aimed at the software developer in particular and at how he or she can manage the individual processes at hand. It aims to be flexible yet specific in identifying all types of tasks, work activities and roles which can come up in a developer's work and how to tackle them. We do not yet know anything about the usefulness of the proposed Self-Governance Developer Framework. In this thesis, research was conducted on how industry-based software engineers evaluate the newly proposed SGD Framework and how useful it is to them. The thesis researched how useful the proposed SGD Framework is in real-world software development. The thesis also researched how much time was being spent on individual tasks in a software development project at my own company, Swiesh QLurn IT Solutions; the SGD Framework was the tool used to gather these data. The purpose was to find out what tasks and activities can come up during software development and to improve one's own way of working in order to become a more efficient software developer. The goal was to provide a basis for further research into relevant models such as software process models, estimation models and software process improvement models. With this basis it should be possible to build or improve these models in order to better manage a developer's activities and resources in a software project. They can also provide a boost for inventing developer-centric models just like the Self-Governance Developer Framework. The research methods applied were of both a qualitative and a quantitative nature. The research was of an inductive type, where raw data were first gathered and general theories derived from the data. Along with the survey conducted through a qualitative questionnaire, quantitative data about time allocation in a software project were gathered via action research in a software project at a company. Data were also collected via a literature review of past process models. A survey questionnaire was distributed worldwide via a blog and among software professionals with a minimum of three years' experience. The results from this survey questionnaire show how working professionals evaluate the proposed SGD Framework and how well it covers software process activities. Their feedback was important in suggesting deficiencies in the SGD Framework and in helping to improve it for an industry-wide target audience. The thesis results in data which could prove valuable to individual developers in enhancing their software-writing skills and becoming more efficient and independent. Other improvements are suggested in the final analysis and discussion chapter. However, further research is needed to get more accurate results from a bigger group of developers. These studies should then include complete software projects to gather metrics and data.
APA, Harvard, Vancouver, ISO, and other styles
37

Catarino, Rodrigo Manuel Gonçalves Pereira. "Concepção de um repositório de master Data de entidades numa seguradora." Master's thesis, Instituto Superior de Economia e Gestão, 2011. http://hdl.handle.net/10400.5/10201.

Full text
Abstract:
Master's degree in Information Systems Management
In financial companies such as insurance companies and banks, the main asset is data. Consistent, secure, accurate, accessible and timely data lead to knowledge and enable the business to make the right decisions. Any organization has data about its clients, suppliers, locations, accounts, etc., also known as Master Data, which are critical to its business processes. When an insurance company has its data spread across multiple systems, with multiple different, low-quality copies of the same business object, the organization will not be able to compete effectively in such a mature and regulated market. This work intends to detail the basic concepts of the processes, tools, methods and technologies involved in the acquisition, correction, integration, maintenance, control and sharing of Master Data (Master Data Management). It is also an objective of this work to detail the need for Master Data Management in the insurance industry and its practical implementation in a particular company through Action Research.
APA, Harvard, Vancouver, ISO, and other styles
38

Schumacher, Jörg Christian [Verfasser], and W. [Akademischer Betreuer] Stucky. "Prozess- und Data-Governance im industriellen Anlagenmanagement / Jörg Christian Schumacher. Betreuer: W. Stucky." Karlsruhe : KIT-Bibliothek, 2011. http://d-nb.info/1018232540/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Sapienza, Salvatore <1993&gt. "Ethical Perspectives on Big Data in Agri-food: Ownership and Governance for Safety." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amsdottorato.unibo.it/9590/1/Ethical%20Perspectives%20on%20Big%20Data%20in%20Agri-food__Ownership%20and%20Governance%20for%20Safety%20-%20Sapienza%20S.pdf.

Full text
Abstract:
Big data are reshaping the way we interact with technology, thus fostering new applications to increase the safety assessment of foods. An extraordinary amount of information is analysed using machine learning approaches aimed at detecting the existence or predicting the likelihood of future risks. Food business operators have to share the results of these analyses when applying to place regulated products on the market, whereas agri-food safety agencies (including the European Food Safety Authority) are exploring new avenues to increase the accuracy of their evaluations by processing Big data. Such an informational endowment brings with it opportunities and risks correlated to the extraction of meaningful inferences from data. However, conflicting interests and tensions among the involved entities - the industry, food safety agencies, and consumers - hinder the finding of shared methods to steer the processing of Big data in a sound, transparent and trustworthy way. A recent reform in the EU sectoral legislation, the lack of trust and the presence of a considerable number of stakeholders highlight the need for ethical contributions aimed at steering the development and deployment of Big data applications. Moreover, Artificial Intelligence guidelines and charters published by European Union institutions and Member States have to be discussed in light of applied contexts, including the one at stake. This thesis aims to contribute to these goals by discussing what principles should be put forward when processing Big data in the context of agri-food safety-risk assessment. The research focuses on two intertwined topics - data ownership and data governance - by evaluating how the regulatory framework addresses the challenges raised by Big data analysis in these domains. The outcome of the project is a tentative Roadmap aimed at identifying the principles to be observed when processing Big data in this domain and their possible implementations.
APA, Harvard, Vancouver, ISO, and other styles
40

Coertze, Jacques Jacobus. "A framework for information security governance in SMMEs." Thesis, Nelson Mandela Metropolitan University, 2012. http://hdl.handle.net/10948/d1014083.

Full text
Abstract:
It has been found that many small, medium and micro-sized enterprises (SMMEs) do not comply with sound information security governance principles, specifically the principles involved in drafting information security policies and monitoring compliance, mainly as a result of restricted resources and expertise. Research suggests that this problem occurs worldwide and that the impact it has on SMMEs is great. The problem is further compounded by the fact that, in our modern-day information technology environment, many larger organisations are providing SMMEs with access to their networks. This results not only in SMMEs being exposed to security risks, but the larger organisations as well. In previous research an information security management framework and toolbox was developed to assist SMMEs in drafting information security policies. Although this research was of some help to SMMEs, further research has shown that an even greater problem exists with the governance of information security as a result of the advancements that have been identified in information security literature. The aim of this dissertation is therefore to establish an information security governance framework that requires minimal effort and little expertise to alleviate governance problems. It is believed that such a framework would be useful for SMMEs and would result in the improved implementation of information security governance.
APA, Harvard, Vancouver, ISO, and other styles
41

Gamero, Alex, Jose Garcia, and Carlos Raymundo. "Reference Model with a Lean Approach of Master Data Management in the Peruvian Microfinance Sector." Institute of Electrical and Electronics Engineers Inc, 2019. http://hdl.handle.net/10757/656347.

Full text
Abstract:
The full text of this work is not available in the UPC Academic Repository due to restrictions from the publisher where it was published.
Microfinance has undergone great growth in recent years, bringing with it a significant increase in data from transactions and daily operations, manual cleaning processes, complexity in IT projects and, in comparison with traditional banking, fewer resources. For this reason, the model must give master data maintenance processes that reduce manual cleaning activities and contribute to implementing technology projects in an agile manner. The research also seeks to combine Master Data Management (MDM), a basic pillar for the analysis of information, with the lean approach already used in industry to reduce operational cost, together with an evaluation measure applied prior to this process that captures the state of the organization's capabilities. As a result, the organization can be evaluated beforehand and can quickly identify which points should be improved to achieve the implementation of MDM initiatives. The research also concludes that the Peruvian microfinance sector is prepared for the implementation of master data management, with a 'proactive' maturity level of 3.46 points.
APA, Harvard, Vancouver, ISO, and other styles
42

Principini, Gianluca. "Data Mesh: decentralizzare l'ownership dei dati mantenendo una governance centralizzata attraverso l'adozione di standard di processo e di interoperabilità." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021.

Find full text
Abstract:
Over the last two decades, advances in cloud technologies have allowed enterprises to adopt new implementation paradigms for data platforms. However, these paradigms are characterised by centralisation and monolithic design, tight coupling between pipeline stages, and data ownership centralised in teams of highly specialised data engineers who are far removed from the domain. As data sources and consumers grow, these characteristics expose a bottleneck that risks jeopardising the success of projects that often involve large investments. Similar problems have been addressed in software engineering through the adoption of Domain Driven Design and the shift from monolithic architectures to service-oriented architectures and microservice-based systems, which are well suited to cloud environments. This thesis, carried out in the corporate context of Agile Lab, shows how the same improvements can be applied to the design of data platforms by adopting the Data Mesh paradigm, in which each domain exposes analytical data through Data Products. To demonstrate how the friction in provisioning a Data Product's infrastructure can be reduced by adopting process and interoperability standards that guide the interaction between the different components within the platform, the thesis presents the design and implementation of an Infrastructure as Code mechanism for the Data Product's observability resources.
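As a tool-agnostic, hypothetical illustration (not the implementation built in the thesis), declaring a data product's observability resources as code can be sketched in Python; the resource types, names and fields below are invented for the example.

# Hedged sketch: declarative descriptors for a data product's observability
# resources, which a provisioning pipeline could read and apply. All resource
# types and fields are illustrative assumptions.
import json

def observability_resources(data_product):
    return [
        {"type": "metrics_dashboard", "name": f"{data_product}-quality"},
        {"type": "freshness_alert", "name": f"{data_product}-staleness", "threshold_hours": 24},
        {"type": "schema_check", "name": f"{data_product}-contract"},
    ]

print(json.dumps(observability_resources("customer-orders"), indent=2))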
APA, Harvard, Vancouver, ISO, and other styles
43

Ndamase, Zimasa. "The impact of data governance on corporate performance : the case of a petroleum company." Master's thesis, University of Cape Town, 2014. http://hdl.handle.net/11427/13323.

Full text
Abstract:
Includes bibliographical references.
While it is acknowledged that data is a valuable corporate asset, many companies fail to exploit it in order to better their performance. Organizations today need to be proactive in their operations and have to make informed business decisions in less time than ever before. This puts pressure on organisations to better govern the use of data. The literature shows that a holistic conceptualization of the factors affecting data governance is missing, and there is limited research on the effects of data governance on firm performance. This study therefore seeks to fill this gap by investigating the factors that affect data governance in organization X, which operates in the petroleum industry, and by determining the extent to which the quality of data governance influences its corporate performance. A conceptual model derived from the literature review was used to guide this study. Data was collected via an intranet web-based survey from 50 employees in organisation X whose job descriptions are aligned with data management. Quantitative methods were then used to analyse the data. Results of the regression analysis confirmed four out of six research propositions. Compliance with data policies and regulations, data stewardship and ownership were not found to be significant predictors of data governance; however, data modeling, data integration and data quality are necessary in order to achieve improved data governance. The present study also confirms that poor data governance has a negative impact on corporate performance, suggesting that organisation X needs to enhance the quality of its data governance in order to realise the full business value of its data and improve business performance.
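As a hedged illustration of the kind of regression analysis described above, the following Python sketch fits an ordinary least squares model on synthetic data; the predictor names and generated values are placeholders, not the study's survey data.

# Hedged sketch of a multiple regression of a governance outcome on three
# predictors (e.g. data modelling, integration and quality scores). Synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 50
X = rng.normal(size=(n, 3))
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.4 * X[:, 2] + rng.normal(scale=0.5, size=n)

model = sm.OLS(y, sm.add_constant(X)).fit()
print(model.params)    # estimated effect of each predictor
print(model.pvalues)   # which predictors appear statistically significant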
APA, Harvard, Vancouver, ISO, and other styles
44

Siachiwena, Hangala. "Governance and socioeconomic development in Zambia : an analysis of survey data and development indicators." Master's thesis, University of Cape Town, 2014. http://hdl.handle.net/11427/13006.

Full text
Abstract:
Includes bibliographical references.
This study set out to establish statistical relationships between matters relating to governance and changes in Zambia’s socioeconomic development. With the aid of survey data compiled by the World Bank’s Worldwide Governance Indicators, and perceptions of governance amongst Zambian citizens obtained from Round 5 of the Afrobarometer survey, this study used quantitative research methods to investigate the performance of indicators of governance in Zambia between 1996 and 2012 and the perceptions that Zambians had toward matters relating to governance. The indicators and perceptions of governance were based on measures of Control of Corruption, Government Effectiveness, Rule of Law and Voice and Accountability. The study further addressed the changes in Zambia’s socioeconomic development by investigating trends in Zambia’s Human Development Index between 1996 and 2012. The study also established the extent of lived poverty in Zambia by addressing how Zambians rated their living conditions based on how much access they had to essential commodities such as food, cooking fuel, water and cash income.
APA, Harvard, Vancouver, ISO, and other styles
45

Ravinder, Singh. "Legalization of Privacy and Personal Data Governance: Feasibility Assessment for a New Global Framework Development." Thesis, Université d'Ottawa / University of Ottawa, 2016. http://hdl.handle.net/10393/35333.

Full text
Abstract:
The International Conference of Data Protection and Privacy Commissioners has been actively engaged in the development of a new, legally binding international framework for privacy and data protection. Given the existence of three international privacy and data protection regimes (i.e. the OECD Privacy Guidelines, the EU data protection framework and the APEC Privacy Framework) and the availability of other bilateral venues to resolve transnational data flows issues (e.g. the EU-US Safe Harbor agreement, the Umbrella Agreement and the latest, the Privacy Shield arrangement), the thesis asks whether the development of such a new regime is feasible. The main finding of the thesis is that in an era of a globalized society driven by the internet and information-communications technology, where all three of the leading international privacy and data protection regimes are consistently updating and modifying their respective frameworks, and where there is persistent divergence between the European Union and the United States approaches towards transborder data flow, the emergence of a new, legally binding international framework is unlikely, at least under the prevailing circumstances. Therefore, the thesis calls for a shift towards an institutionalized arrangement that is founded on existing international co-operation and convergence and that further expands ongoing inter-regime collaboration. The approach recommended in the thesis is an effective alternative to the development of a new, legally binding international framework, and even offers strong prospects for the evolution of a legalized arrangement for international privacy and personal data governance in due course.
APA, Harvard, Vancouver, ISO, and other styles
46

Imaginário, João Tiago Inverno. "The impact of governance in government debt." Master's thesis, Instituto Superior de Economia e Gestão, 2018. http://hdl.handle.net/10400.5/16581.

Full text
Abstract:
Master's degree in Monetary and Financial Economics
This dissertation examines the relationship between the Worldwide Governance Indicators and government debt in 164 countries for the period between 2002 and 2015. For this purpose, fixed effects (FE) and generalized method of moments (GMM) models are estimated. The results suggest that governance quality is negatively and statistically significantly related to government debt. For low-income countries, evidence was found that a better governance environment is associated with lower public debt levels.
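As a hedged sketch of the fixed-effects specification mentioned in the abstract, country dummies can absorb country-level effects in a panel regression; the variable names and the randomly generated panel below are placeholders, not the dissertation's data.

# Hedged sketch of a fixed-effects panel regression: government debt on
# governance quality with country dummies. Data are synthetic placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
rows = [{"country": f"c{i}", "year": year,
         "governance": rng.normal(), "debt": rng.normal()}
        for i in range(20) for year in range(2002, 2016)]
panel = pd.DataFrame(rows)

fe = smf.ols("debt ~ governance + C(country)", data=panel).fit()
print(fe.params["governance"])   # estimated within-country effect of governance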
APA, Harvard, Vancouver, ISO, and other styles
47

Leite, Christopher C. "Evolutions in Transnational Authority: Practices of Risk and Data in European Disaster and Security Governance." Thesis, Université d'Ottawa / University of Ottawa, 2016. http://hdl.handle.net/10393/35121.

Full text
Abstract:
The scholarly field of International Relations (IR) has been slow to appreciate the evolutions in forms of governance authority currently seen in the European political system. Michael Barnett has insisted that ‘IR scholars also have had to confront the possibility that territoriality, authority, and the state might be bundled in different ways in present-day Europe’ (2001, 52). This thesis outlines how modern governing authority is generated and maintained in a Europe that is strongly impacted by the many institutions, departments, and agencies of the European Union (EU). Using the specific cases of the EU’s disaster response organisation, the DG for Civil Protection and Humanitarian Aid (ECHO), and the hub for EU internal security policy management, the DG for Home and Migration Affairs (HOME), this thesis understands the different policy areas under the jurisdiction of EU policymakers and bureaucrats as semi-autonomous fields of practice – fields that are largely confined to the bureaucratic, diplomatic, corporate, NGO, contracted, and IO actors that exist in Brussels, decidedly removed from in-field or operational personnel. Transnational governance authority in Europe, at least in these two fields, is generated and maintained by actors recognised as highly expert in producing and using data to monitor for the risks of future disasters, and by entrenching that ability in central functional roles in their respective fields. Both ECHO and HOME actors came to be recognised as central authorities in their fields thanks to their ability to prepare for unknown future natural and manmade disasters by creating, collecting and managing data on them and then using these data to articulate possible future scenarios as risks. They use the resources at their disposal to generate and manage data about disaster and security monitoring and coordination, drawing on these resources to impress upon the other actors in their fields that cooperating with ECHO and HOME is the best way to minimise the risks posed by future disasters. In doing so, both sets of actors established the parameters by which other actors understood their own best practices: through the use of data to monitor for future scenarios and to establish criteria upon which to justify policy decisions. The specific way in which ECHO and HOME actors were able to position themselves as primary or central figures, namely by using centralised data management, demonstrates the role that risk practices play in generating and maintaining authority in complex institutional governance situations as currently seen in Europe.
APA, Harvard, Vancouver, ISO, and other styles
48

Castillo, Luis Felipe, Carlos Raymundo, and Francisco Dominguez Mateos. "Information architecture model for the successful data governance initiative in the peruvian higher education sector." Institute of Electrical and Electronics Engineers Inc, 2017. http://hdl.handle.net/10757/656364.

Full text
Abstract:
The full text of this work is not available in the UPC Academic Repository due to restrictions from the publisher where it was published.
The research revealed the need to design an information architecture model for Data Governance initiatives that can serve as a link between current IT/IS management trends: information technology (IT) management and information management. A model is needed that strikes a balance between the need to invest in technology and the ability to manage the information that originates from the use of those technologies, and that measures more precisely the IT value generated through the use of quality information and user satisfaction, using the technologies that make it possible for information to reach users so that it can be used in their daily work.
APA, Harvard, Vancouver, ISO, and other styles
49

Dal, Maso Alvise <1993&gt. "The evolution of Data Governance: a tool for an improved and enhanced decision-making process." Master's Degree Thesis, Università Ca' Foscari Venezia, 2019. http://hdl.handle.net/10579/15950.

Full text
Abstract:
This thesis analyses data management in the corporate context. The work is organised in three chapters. The first addresses data management in broad, general terms, first providing a definition of the discipline and then analysing its evolution over time, the different uses of data and its applications across sectors. The second part of the work covers the design of a data governance model, that is, how companies can create a data programme or strategy for their business. It specifies the components required for implementation, how the process can be developed and analysed, and finally how it is monitored. The last part of the second chapter also sets out the main advantages, as well as the peculiarities and the main risks. The third and final part of the thesis explains the fundamental role of data management at the corporate level and how it supports the entire decision-making process, becoming in effect an asset for the company.
APA, Harvard, Vancouver, ISO, and other styles
50

Primerano, Ilaria. "A symbolic data analysis approach to explore the relation between governance and performance in the Italian industrial districs." Doctoral thesis, Universita degli studi di Salerno, 2016. http://hdl.handle.net/10556/2179.

Full text
Abstract:
2013 - 2014
Nowadays, complex phenomena need to be analyzed through appropriate statistical methods that allow the knowledge hidden behind the classical data structure to be considered... [edited by author]
XIII n.s.
APA, Harvard, Vancouver, ISO, and other styles