To see the other types of publications on this topic, follow the link: Data management system.

Dissertations / Theses on the topic 'Data management system'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'Data management system.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Anumalla, Kalyani. "DATA PREPROCESSING MANAGEMENT SYSTEM." University of Akron / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=akron1196650015.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Wang, Yanchao. "Protein Structure Data Management System." Digital Archive @ GSU, 2007. http://digitalarchive.gsu.edu/cs_diss/20.

Full text
Abstract:
With advances in laboratory instruments and experimental techniques, protein data is growing at an explosive rate. How to efficiently store, retrieve, and modify protein data is therefore becoming a challenging issue that most biological scientists have to face and solve. Traditional data models such as relational databases lack support for complex data types, which is a significant limitation for protein data applications. Many scientists have consequently switched to object-oriented databases, since the object-oriented nature of life science data matches the architecture of object-oriented databases well, but several problems must still be solved before OODB methodologies can be applied to manage protein data. One major problem is that general-purpose OODBs have no built-in data types or domain-specific functional operations for biological research. In this dissertation, we present an application system with built-in data types and built-in biological domain-specific functional operations that extends the object-oriented database (OODB) system by adding domain-specific layers (Protein-QL, the Protein Algebra Architecture, and Protein-OODB) above the OODB to manage protein structure data. The system is composed of three parts: 1) a client API that provides easy usage for different users; 2) middleware, comprising Protein-QL, the Protein Algebra Architecture, and Protein-OODB, which implements a protein domain-specific query language and optimizes complex queries, and which encapsulates implementation details so that users can easily understand and master Protein-QL; 3) data storage for the protein data. The system targets the protein domain, but it can easily be extended to other biological domains to build a bio-OODBMS. In this system, protein, primary, secondary, and tertiary structures are defined as internal data types to simplify Protein-QL queries, so that domain scientists can easily master the query language and formulate data requests, and EyeDB is used as the underlying OODB that communicates with Protein-OODB. In addition, protein data is usually stored in the PDB format, which is old, ambiguous, and inadequate; PDB data curation is therefore discussed in detail in the dissertation.
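To make the idea of domain-specific built-in types concrete, here is a minimal, hypothetical Python sketch of first-class protein structure types over a simple object store. All class and method names are illustrative assumptions; the actual Protein-QL syntax and EyeDB bindings are not shown in the abstract.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical built-in domain types, loosely mirroring the Protein-QL idea of
# making primary/secondary structure first-class in the data model.
@dataclass
class PrimaryStructure:
    sequence: str  # amino-acid sequence, one-letter codes

@dataclass
class SecondaryStructure:
    elements: List[str] = field(default_factory=list)  # e.g. ["helix", "sheet"]

@dataclass
class Protein:
    pdb_id: str
    primary: PrimaryStructure
    secondary: SecondaryStructure

class ProteinStore:
    """Toy in-memory stand-in for the Protein-OODB layer."""
    def __init__(self):
        self._proteins = {}

    def insert(self, p: Protein):
        self._proteins[p.pdb_id] = p

    def find_by_motif(self, motif: str):
        # A domain-specific operation: filter proteins whose primary
        # structure contains a given sequence motif.
        return [p for p in self._proteins.values()
                if motif in p.primary.sequence]

store = ProteinStore()
store.insert(Protein("1ABC", PrimaryStructure("MKTAYIAKQR"), SecondaryStructure(["helix"])))
print([p.pdb_id for p in store.find_by_motif("AYI")])  # ['1ABC']
```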
APA, Harvard, Vancouver, ISO, and other styles
3

Okkonen, O. (Olli). "RESTful clinical data management system." Master's thesis, University of Oulu, 2015. http://urn.fi/URN:NBN:fi:oulu-201505291735.

Full text
Abstract:
In the era of digitalization, clinical trials have often been left behind in the adoption of the automation and cost-efficiency offered by computerized systems. Poor implementations, a lack of technical experience, and the inertia caused by overlapping old and new procedures have failed to prove the business value of data management systems. This has led to settling for inadequate data management tools, leaving many studies struggling with traditional, paper-heavy approaches that complicate data management and drastically slow preparations for the final analysis. This Master's thesis presents Genesis, a web-based clinical data management system developed for the LIRA study, which will take place in Finland and Sweden. Genesis has been developed to address the aforementioned obstacles by adopting information technology solutions in an agile manner, with security concerns integrated from the start. Furthermore, Genesis has been designed to offer long-term value through reusability: effortless portability to upcoming studies and interconnectability with web-enabled legacy systems and handheld devices via a uniform interface. In addition to presenting the design, implementation, and evaluation of Genesis, its future prospects are discussed, noting the preliminary interest in utilizing Genesis in additional studies, including the world's largest type 1 diabetes study.
APA, Harvard, Vancouver, ISO, and other styles
4

Tong, J. "A graphical process data management system." Thesis, Swansea University, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.639249.

Full text
Abstract:
This thesis investigates aspects of data management and program integration in process design. It proposes a new method and model for data storage, manipulation, and representation, on which a new process design environment may be based. Several aspects of process design database systems are examined, including data models, data handling, graphical user interfaces, and application integration. Process design is an increasingly complex activity. Its complexity is compounded by a great amount of data, the variety of applications involved, and the data transfer between applications. There is clear evidence that better data management is needed to support the design activity. The main focus of this research is the use of ODBC technology in a graphical process data management system (GPDMS). Since ODBC is employed, GPDMS uses Access as the underlying DBMS. The GPDMS database consists of Access tables, which are familiar to most software engineers. This approach benefits those who develop applications that share data with GPDMS. The research also studies the strengths and weaknesses of existing data models for engineering usage. This leads to the Process Object Model (POM), a user-level graphical data model for handling complex process objects. A new application integration method is proposed that uses macros to build links and exchange data between the GPDMS database and the integrated applications.
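As a rough illustration of the ODBC-based data sharing this abstract describes, the sketch below opens an Access database over ODBC and reads shared design data the way an integrated application might. The driver string, file path, table, and column names are all assumptions for illustration; the real GPDMS schema is not given.

```python
import pyodbc  # ODBC bridge; requires the Microsoft Access ODBC driver

# Connection string is an assumption for illustration; the actual GPDMS
# database file and table names are not stated in the abstract.
conn_str = (
    r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};"
    r"DBQ=C:\gpdms\process_design.accdb;"
)
with pyodbc.connect(conn_str) as conn:
    cur = conn.cursor()
    # Read shared process-design data exactly as another integrated
    # application would, via plain SQL over ODBC.
    cur.execute("SELECT unit_id, duty_kw FROM heat_exchangers WHERE duty_kw > ?", 100.0)
    for unit_id, duty in cur.fetchall():
        print(unit_id, duty)
```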
APA, Harvard, Vancouver, ISO, and other styles
5

Huml, Kathy Pederson. "Intelligent Data Object Management System (IDOMS)." Thesis, Kansas State University, 1986. http://hdl.handle.net/2097/9918.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Tatarinov, Igor. "Semantic data sharing with a peer data management system /." Thesis, Connect to this title online; UW restricted, 2004. http://hdl.handle.net/1773/6942.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Ofori-Duodu, Michael Samuel. "Exploring Data Security Management Strategies for Preventing Data Breaches." ScholarWorks, 2019. https://scholarworks.waldenu.edu/dissertations/7947.

Full text
Abstract:
Insider threat continues to pose a risk to organizations, and in some cases, to the country at large. Data breach events continue to show that the insider threat risk has not subsided. This qualitative case study sought to explore the data security management strategies used by database and system administrators to prevent data breaches by malicious insiders. The study population consisted of database administrators and system administrators from a government contracting agency in the northeastern region of the United States. General systems theory, developed by von Bertalanffy, was used as the conceptual framework for the research study. The data collection process involved interviews with database and system administrators (n = 8), a review of organizational documents and processes (n = 6), and direct observation of a training meeting (n = 3). By using methodological triangulation and member checking with the interviews and direct observation, efforts were taken to enhance the validity of the findings of this study. Through thematic analysis, four major themes emerged from the study: enforcement of organizational security policy through training, use of multifaceted identity and access management techniques, use of security frameworks, and use of strong technical control operations mechanisms. The findings of this study may benefit database and system administrators by enhancing their data security management strategies to prevent data breaches by malicious insiders. Enhanced data security management strategies may contribute to social change by protecting organizational and customer data from malicious insiders, whose actions could otherwise lead to espionage, identity theft, trade secret exposure, and cyber extortion.
APA, Harvard, Vancouver, ISO, and other styles
8

Lam, Lawrence G. "Digital Health-Data platforms : biometric data aggregation and their potential impact to centralize Digital Health-Data." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/106235.

Full text
Abstract:
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, School of Engineering, System Design and Management Program, Engineering and Management Program, 2015. Cataloged from PDF version of thesis. Includes bibliographical references (page 81). Digital Health-Data is being collected at unprecedented rates today as biometric micro-sensors continue to diffuse into our lives in the form of smart devices, wearables, and even clothing. From this data, we hope to learn more about preventative health so that we can spend less money on the doctor. To help users aggregate this perpetually growing biometric "big" data, Apple HealthKit, Google Fit, and Samsung SAMI were each created with the hope of becoming the dominant design platform for Digital Health-Data. The research for this paper consists of citations from the technology strategy literature and relevant journalism articles covering recent and past developments in the wearables market and the digitization movement of electronic health records (EHR) and protected health information (PHI), along with their rules and regulations. These citations contribute to my hypothesis, and the analysis attempts to support my recommendations for Apple, Google, and Samsung. The closing chapters discuss network effects and the costs associated with multi-homing user data across multiple platforms, and end with my conclusion based on my hypothesis. by Lawrence G. Lam. S.M. in Engineering and Management
APA, Harvard, Vancouver, ISO, and other styles
9

Quintero, Michael C. "Constructing a Clinical Research Data Management System." Thesis, University of South Florida, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10640886.

Full text
Abstract:
Clinical study data is usually collected without knowing in advance what kind of data will be collected. In addition, the set of all possible data points that can apply to a patient in any given clinical study is almost always a superset of the data points actually recorded for a given patient. As a result, clinical data resembles sparse data with an evolving data schema. To help researchers at the Moffitt Cancer Center better manage clinical data, a tool called GURU was developed that uses the Entity-Attribute-Value model to handle sparse data and to allow users to manage a database entity's attributes without any changes to the database table definition. The Entity-Attribute-Value model's read performance gets faster as the data gets sparser, but it was observed to perform many times worse than a wide table when the attribute count is not sufficiently large. Ultimately, the design trades read performance for flexibility in the data schema.
APA, Harvard, Vancouver, ISO, and other styles
10

Quintero, Michael C. "Constructing a Clinical Research Data Management System." Scholar Commons, 2017. http://scholarcommons.usf.edu/etd/7081.

Full text
Abstract:
Clinical study data is usually collected without knowing in advance what kind of data will be collected. In addition, the set of all possible data points that can apply to a patient in any given clinical study is almost always a superset of the data points actually recorded for a given patient. As a result, clinical data resembles sparse data with an evolving data schema. To help researchers at the Moffitt Cancer Center better manage clinical data, a tool called GURU was developed that uses the Entity-Attribute-Value model to handle sparse data and to allow users to manage a database entity's attributes without any changes to the database table definition. The Entity-Attribute-Value model's read performance gets faster as the data gets sparser, but it was observed to perform many times worse than a wide table when the attribute count is not sufficiently large. Ultimately, the design trades read performance for flexibility in the data schema.
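Since the two entries above both center on the Entity-Attribute-Value (EAV) layout, a small runnable sketch may help. It shows the row-per-attribute table and the self-join needed to reassemble a patient record, which is where the read-performance cost comes from. Table and attribute names are invented for illustration; this is not GURU's actual schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Entity-Attribute-Value layout: one row per recorded data point, so sparse,
# evolving clinical attributes need no ALTER TABLE when new fields appear.
cur.execute("""
    CREATE TABLE patient_eav (
        patient_id INTEGER,
        attribute  TEXT,
        value      TEXT
    )
""")
cur.executemany(
    "INSERT INTO patient_eav VALUES (?, ?, ?)",
    [(1, "tumor_stage", "II"), (1, "smoker", "no"), (2, "tumor_stage", "III")],
)

# Reassembling one entity costs a self-join (or pivot) per attribute --
# the read-performance price the abstract mentions.
cur.execute("""
    SELECT a.patient_id, a.value AS stage, b.value AS smoker
    FROM patient_eav a
    LEFT JOIN patient_eav b
      ON b.patient_id = a.patient_id AND b.attribute = 'smoker'
    WHERE a.attribute = 'tumor_stage'
""")
print(cur.fetchall())  # [(1, 'II', 'no'), (2, 'III', None)]
```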
APA, Harvard, Vancouver, ISO, and other styles
11

Ma, Xuesong 1975. "Data mining using relational database management system." Thesis, McGill University, 2005. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=98757.

Full text
Abstract:
With the wide availability of huge amounts of data and the imminent demand to transform raw data into useful information and knowledge, data mining has become an important research field in both the database and machine learning areas. Data mining is defined as the process of solving problems by analyzing data already present in a database and discovering knowledge in that data. Database systems provide efficient data storage, fast access structures, and a wide variety of indexing methods to speed up data retrieval. Machine learning provides the theoretical underpinnings for most of the popular data mining algorithms. Weka-DB combines properties of these two areas to improve the scalability of Weka, an open-source machine learning software package. Weka implements most of its machine learning algorithms using main-memory-based data structures, so it cannot handle large datasets that do not fit into main memory. Weka-DB stores data in and accesses data from DB2, so it achieves better scalability than Weka. However, Weka-DB is much slower than Weka, because secondary storage access is more expensive than main memory access. In this thesis we extend Weka-DB with a buffer management component to improve its performance. Furthermore, we increase the scalability of Weka-DB even further by moving additional data structures into the database, accessed through a buffer. Finally, we explore another method to improve the speed of the algorithms, which takes advantage of the data access properties of machine learning algorithms.
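The buffer management component described here lends itself to a compact sketch: a least-recently-used (LRU) buffer that serves rows from memory and falls back to the database on a miss. This is a generic illustration of the idea, not Weka-DB's actual implementation.

```python
from collections import OrderedDict

class InstanceBuffer:
    """Minimal LRU buffer in the spirit of Weka-DB's buffer manager:
    keep recently used rows in memory, fall back to the database on a miss."""

    def __init__(self, fetch_from_db, capacity=1000):
        self._fetch = fetch_from_db      # callable: row_id -> row
        self._capacity = capacity
        self._cache = OrderedDict()      # row_id -> row, in LRU order

    def get(self, row_id):
        if row_id in self._cache:
            self._cache.move_to_end(row_id)   # mark as most recently used
            return self._cache[row_id]
        row = self._fetch(row_id)             # expensive secondary-storage access
        self._cache[row_id] = row
        if len(self._cache) > self._capacity:
            self._cache.popitem(last=False)   # evict the least recently used row
        return row

# Usage with a stand-in for a DB2 lookup:
buf = InstanceBuffer(lambda rid: ("row", rid), capacity=2)
buf.get(1); buf.get(2); buf.get(1); buf.get(3)  # evicts row 2, keeps 1 and 3
```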
APA, Harvard, Vancouver, ISO, and other styles
12

Yim, Wai-Kei 1978. "Data modeling in a pavement management system." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/8619.

Full text
Abstract:
Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering, 2001. Includes bibliographical references (p. 80). Data modeling is one of the critical steps in the software development process. The use of data has become a profitable business today, and the data itself is a valuable asset of the company. The first part of this thesis introduces the process of developing a data model. The second part is a case study on the development of the data model of a pavement management system, an actual project implemented for the Public Works Department of Arlington, MA. by Wai-Kei Yim. M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
13

Schanzenberger, Anja. "System design for periodic data production management." Thesis, Middlesex University, 2006. http://eprints.mdx.ac.uk/10697/.

Full text
Abstract:
This research project introduces a new type of information system, the periodic data production management system, and proposes several innovative system design concepts for this application area. Periodic data production systems are common in the information industry. They process large quantities of data in order to produce statistical reports at predefined intervals. The workflow of such a system is typically distributed world-wide and consists of several semi-computerized production steps which transform data packages. For example, market research companies apply these systems in order to sell marketing information over specified timelines. A lack of concepts for IT-aided management in this area has been identified. This thesis clearly defines the complex requirements of periodic data production management systems. It is shown that these systems can be defined as IT support for planning, monitoring, and controlling periodic data production processes. Their significant advantage is that the information industry is enabled to increase production performance and to ease (and speed up) the identification of production progress, as well as of the achievable optimisation potential, in order to pursue rationalisation goals. In addition, this thesis provides solutions for the generic problem of how to introduce such a management system on top of an unchangeable periodic data production system. Two promising system designs for periodic data production management are derived, analysed, and compared in order to gain knowledge about appropriate concepts for this application area. Production planning systems are the metaphor model used for the so-called closely coupled approach; the metaphor model for the loosely coupled approach is project management. The latter approach is prototyped as an application in the market research industry and used as a case study. The evaluation results are real-world experiences which demonstrate the extraordinary efficiency of systems based on the loosely coupled approach. Of special note is a scenario-based evaluation that accurately demonstrates the many improvements achievable with this approach. The main results are that production planning and process quality can be vitally improved. Finally, among other propositions, it is suggested that future work concentrate on the development of product lines for periodic data production management systems in order to increase their reuse.
APA, Harvard, Vancouver, ISO, and other styles
14

Räcke, Harald. "Data management and routing in general networks." [S.l. : s.n.], 2003. http://deposit.ddb.de/cgi-bin/dokserv?idn=971568987.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Pullokkaran, Laijo John. "Analysis of data virtualization & enterprise data standardization in business intelligence." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/90703.

Full text
Abstract:
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, Engineering Systems Division, System Design and Management Program, 2013. Cataloged from PDF version of thesis. Includes bibliographical references (page 59). Business Intelligence is an essential tool used by enterprises for strategic, tactical, and operational decision making. Business Intelligence most often needs to correlate data from disparate data sources to derive insights. Unifying data from disparate data sources and providing a unified view of the data is generally known as data integration. Traditionally, enterprises employed ETL and data warehouses for data integration. In the last few years, however, a technology known as "data virtualization" has found some acceptance as an alternative data integration solution. Data virtualization is a federated database, termed a composite database by McLeod and Heimbigner in 1985. Until a few years ago, data virtualization was not considered an alternative to ETL but was rather thought of as a technology for niche integration challenges. In this paper we hypothesize that for many BI applications data virtualization is a more cost-effective data integration strategy. We analyze the system architectures of data warehouse and data virtualization solutions. We further employ a system dynamics model to compare key metrics, such as time to market and cost, of data warehouse and data virtualization solutions. We also look at the impact of enterprise data standardization on data integration. by Laijo John Pullokkaran. S.M. in Engineering and Management
APA, Harvard, Vancouver, ISO, and other styles
16

Kairouz, Joseph. "Patient data management system medical knowledge-base evaluation." Thesis, McGill University, 1996. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=24060.

Full text
Abstract:
The purpose of this thesis is to evaluate the medical data management expert system at the Pediatric Intensive Care Unit of the Montreal Children's Hospital. The objective of this study is to provide a systematic method to evaluate and progressively improve the knowledge embedded in the medical expert system. Following a literature survey on evaluation techniques and the architecture of existing expert systems, an overview of the Patient Data Management System hardware and software components is presented. The design of the Expert Monitoring System is elaborated. Following its installation in the Intensive Care Unit, the performance of the Expert Monitoring System was evaluated, operating on real vital-sign data, and corrections were formulated. A progressive evaluation technique, a new methodology for evaluating an expert system knowledge base, is proposed for subsequent corrections and evaluations of the Expert Monitoring System.
APA, Harvard, Vancouver, ISO, and other styles
17

Waltmire, Michelle Klaassen. "Design of IDOMS : Intelligent Data Object Management System." Thesis, Kansas State University, 1986. http://hdl.handle.net/2097/9982.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Tolat, Viral V. "An Object-Oriented System for Telemetry Data Management." International Foundation for Telemetering, 1992. http://hdl.handle.net/10150/611919.

Full text
Abstract:
International Telemetering Conference Proceedings / October 26-29, 1992 / Town and Country Hotel and Convention Center, San Diego, California. In this paper we describe an object-oriented software system for realtime telemetry data management and display. The system has also been designed to be used as the primary means of data management during post-mission activities. The software system consists of three parts: the data interface library, the data format specification, and the display applications. The data interface library contains a set of object definitions and procedures that provide uniform access to heterogeneous data streams. The data format specification is used by the data interface library to extract data from the raw data stream. The display applications use the data interface library to access the data and present it to the user. Currently, the interface between the data format specification and the data interface library is implemented procedurally and is modeled after a device driver: each format is assigned a unique id and is then accessed via that id. A data stream may be accessed by any number of different format specifications. A future implementation will separate the data format specification into a separate process with a message- or RPC-based interface, so that data may be kept on remote systems and accessed transparently; this model will also support operation in distributed heterogeneous computing environments. The system handles multiple simultaneous data streams, and applications can access data from different streams relatively transparently. This is possible since the data variables (objects) to be displayed are specified by a syntax that names both the data stream and the format to use. In addition, the concept of a primary stream is introduced to allow the user to scroll through one data stream and have the other streams follow; synchronization between streams is based on time information in the data streams. Several applications have been written, including various stripchart displays, a tabular display, and some other custom displays. A data analysis application similar to the UNIX program "awk" is currently under development. It will provide the user with the ability to extract data, e.g., for report generation, display, or further analysis, in an object-oriented manner.
APA, Harvard, Vancouver, ISO, and other styles
19

Drira, Wassim. "Secure collection and data management system for WSNs." Phd thesis, Institut National des Télécommunications, 2012. http://tel.archives-ouvertes.fr/tel-00814664.

Full text
Abstract:
Nowadays, each user or organization is connected to a large number of sensor nodes which generate a substantial amount of data, making their management a non-trivial issue. In addition, these data can be confidential. For these reasons, developing a secure system for managing the data from heterogeneous sensor nodes is a real need. In the first part, we developed a composite-based middleware for wireless sensor networks that communicates with the physical sensors to store, process, index, and analyze sensor data and to generate alerts on it. Each composite is connected to a physical node or used to aggregate data from different composites, and each physical node communicating with the middleware is set up as a composite. The middleware was used in the context of the European project Mobesens to manage data from a sensor network for monitoring water quality. In the second part of the thesis, we propose a new hybrid authentication and key establishment scheme between sensor nodes (SN), gateways (MN), and the middleware (SS). It is based on two protocols. The first protocol provides mutual authentication between the SS and the MN, supplies an asymmetric key pair for the MN, and establishes a pairwise key between them. The second protocol authenticates the SN and establishes a group key and pairwise keys between the SN and the other two parties. The middleware is generalized in the third part in order to provide a private space in which multiple organizations or users can manage their sensor data using cloud computing. Finally, we extend the composite with gadgets to share sensor data securely, in order to provide a secure social sensor network.
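As a hedged illustration of the symmetric leg of such a scheme, the sketch below derives a pairwise key from a pre-shared master secret and exchanged nonces using HMAC-SHA256. The message flow, identifiers, and derivation inputs are assumptions for illustration only; they are not the protocol specified in the thesis.

```python
import hmac, hashlib, os

# Illustrative symmetric leg of a hybrid scheme: after mutual authentication,
# a gateway (MN) and a sensor node (SN) derive a pairwise key from a
# pre-shared master secret and fresh nonces. This is a generic HMAC-based
# derivation, not the thesis's actual protocol messages.
def derive_pairwise_key(master_secret: bytes, sn_id: bytes, mn_id: bytes,
                        nonce_sn: bytes, nonce_mn: bytes) -> bytes:
    material = b"pairwise" + sn_id + mn_id + nonce_sn + nonce_mn
    return hmac.new(master_secret, material, hashlib.sha256).digest()

master = os.urandom(32)
n_sn, n_mn = os.urandom(16), os.urandom(16)
k_sn_mn = derive_pairwise_key(master, b"SN-07", b"MN-01", n_sn, n_mn)
# Both sides compute the same key from the exchanged nonces:
assert k_sn_mn == derive_pairwise_key(master, b"SN-07", b"MN-01", n_sn, n_mn)
```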
APA, Harvard, Vancouver, ISO, and other styles
20

Drira, Wassim. "Secure collection and data management system for WSNs." Electronic Thesis or Diss., Evry, Institut national des télécommunications, 2012. http://www.theses.fr/2012TELE0051.

Full text
Abstract:
The growth of wireless sensor networks means that each user or organization is already connected to a large number of sensor nodes. These nodes generate a substantial amount of data, making their management a non-trivial issue; in addition, these data can contain private information. This thesis addresses these problems. First, we designed a composite-based middleware that communicates with the physical sensors to collect, store, translate, index, and analyze sensor data and to generate alerts on it. Each physical node communicates with a middleware composite via a RESTful interface. This middleware was tested and used in the context of the European project Mobesens to manage data from a sensor network for monitoring water quality. Second, we designed a hybrid protocol for authentication and for the establishment of pairwise and group keys. Given the performance differences between the sensor nodes, the gateway, and the middleware, we used identity-based cryptography for authentication between the gateway and the storage server, and symmetric cryptography between the sensors and the other two parties. The middleware was then generalized in the third part of the thesis so that each organization or individual can have a private space in which to manage their sensor data using cloud computing. Finally, we developed a secure social portal for sharing sensor network data.
APA, Harvard, Vancouver, ISO, and other styles
21

Matus, Castillejos Abel, and n/a. "Management of Time Series Data." University of Canberra. Information Sciences & Engineering, 2006. http://erl.canberra.edu.au./public/adt-AUC20070111.095300.

Full text
Abstract:
Every day large volumes of data are collected in the form of time series. Time series are collections of events or observations, predominantly numeric in nature, sequentially recorded on a regular or irregular time basis. Time series are becoming increasingly important in nearly every organisation and industry, including banking, finance, telecommunication, and transportation. Banking institutions, for instance, rely on the analysis of time series for forecasting economic indices, elaborating financial market models, and registering international trade operations. More and more time series are being used in this type of investigation and are becoming a valuable resource in today's organisations. This thesis investigates and proposes solutions to some current and important issues in time series data management (TSDM), using Design Science Research Methodology. The thesis presents new models for mapping time series data to relational databases which optimise the use of disk space, can handle different time granularities and status attributes, and facilitate time series data manipulation in a commercial Relational Database Management System (RDBMS). These new models provide a good solution for current time series database applications with an RDBMS and are tested with a case study and prototype using financial time series information. Also included is a temporal data model for illustrating the lifetime behaviour of time series data, based on a new set of time dimensions (confidentiality, definitiveness, validity, and maturity times) specially targeted at managing time series data; these dimensions are introduced to correctly represent the different statuses of time series data along a timeline. The proposed temporal data model gives a clear and accurate picture of the time series data lifecycle. Formal definitions of these time series dimensions are also presented. In addition, a time series grouping mechanism in an extensible commercial relational database system is defined, illustrated, and justified. The extension consists of a new data type and its corresponding rich set of routines that support modelling and operating on time series information at a higher level of abstraction. It extends the capability of the database server to organise and manipulate time series in groups. Thus, this thesis presents a new data type, referred to as GroupTimeSeries, and its corresponding architecture and support functions and operations. Implementation options for the GroupTimeSeries data type in relational-based technologies are also presented. Finally, a framework for TSDM with enough expressiveness for the main requirements of time series applications and the management of their data is defined. The framework aims at providing initial domain know-how and requirements for time series data management, avoiding the impracticability of designing a TSDM system on paper from scratch. Many aspects of time series applications, including the way time series data are organised at the conceptual level, are addressed. The central abstractions for the proposed domain-specific framework are the notions of business sections, groups of time series, and the time series itself. The framework integrates comprehensive specifications regarding structural and functional aspects of time series data management. A formal framework specification using conceptual graphs is also explored.
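The GroupTimeSeries data type described above can be pictured with a short sketch: a named collection of time series that exposes group-level operations. Class and method names here are illustrative assumptions rather than the thesis's actual routine set.

```python
from bisect import insort

class TimeSeries:
    def __init__(self, name):
        self.name = name
        self.points = []                  # kept sorted as (timestamp, value)

    def add(self, ts, value):
        insort(self.points, (ts, value))

class GroupTimeSeries:
    """Sketch of the GroupTimeSeries idea: operate on a named collection
    of series through one abstraction (method names are illustrative)."""

    def __init__(self, name):
        self.name = name
        self.members = {}                 # series name -> TimeSeries

    def add_series(self, series):
        self.members[series.name] = series

    def slice(self, t_from, t_to):
        # Apply one temporal operation uniformly across the whole group.
        return {n: [(t, v) for t, v in s.points if t_from <= t <= t_to]
                for n, s in self.members.items()}

g = GroupTimeSeries("fx_rates")
eur = TimeSeries("EUR"); eur.add(1, 1.10); eur.add(2, 1.12)
g.add_series(eur)
print(g.slice(1, 1))  # {'EUR': [(1, 1.1)]}
```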
APA, Harvard, Vancouver, ISO, and other styles
22

Pitts, David Vernon. "A storage management system for a reliable distributed operating system." Diss., Georgia Institute of Technology, 1986. http://hdl.handle.net/1853/16895.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Kamuhanda, Dany. "Visualising M-learning system usage data." Thesis, Nelson Mandela Metropolitan University, 2015. http://hdl.handle.net/10948/11015.

Full text
Abstract:
Data storage is an important practice for organisations that want to track their progress. The evolution of data storage technologies from manual methods of storing data on paper or in spreadsheets, to the automated methods of using computers to automatically log data into databases or text files has brought an amount of data that is beyond the level of human interpretation and comprehension. One way of addressing this issue of interpreting large amounts of data is data visualisation, which aims to convert abstract data into images that are easy to interpret. However, people often have difficulty in selecting an appropriate visualisation tool and visualisation techniques that can effectively visualise their data. This research proposes the processes that can be followed to effectively visualise data. Data logged from a mobile learning system is visualised as a proof of concept to show how the proposed processes can be followed during data visualisation. These processes are summarised into a model that consists of three main components: the data, the visualisation techniques and the visualisation tool. There are two main contributions in this research: the model to visualise mobile learning usage data and the visualisation of the usage data logged from a mobile learning system. The mobile learning system usage data was visualised to demonstrate how students used the mobile learning system. Visualisation of the usage data helped to convert the data into images (charts and graphs) that were easy to interpret. The evaluation results indicated that the proposed process and resulting visualisation techniques and tool assisted users in effectively and efficiently interpreting large volumes of mobile learning system usage data.
APA, Harvard, Vancouver, ISO, and other styles
24

Zhang, Chengyang. "Toward a Data-Type-Based Real Time Geospatial Data Stream Management System." Thesis, University of North Texas, 2011. https://digital.library.unt.edu/ark:/67531/metadc68070/.

Full text
Abstract:
The advent of sensory and communication technologies enables the generation and consumption of large volumes of streaming data. Many of these data streams are geo-referenced. Existing spatio-temporal databases and data stream management systems are not capable of handling real-time queries on spatial extents. In this thesis, we investigated several fundamental research issues toward building a data-type-based real-time geospatial data stream management system. The thesis makes contributions in the following areas: geo-stream data models, aggregation, window-based nearest neighbor operators, and query optimization strategies. The proposed geo-stream data model is based on second-order logic and multi-typed algebra. Both abstract and discrete data models are proposed and exemplified. I further propose two useful geo-stream operators, namely Region By and WNN, which abstract common aggregation and nearest neighbor queries as generalized data model constructs. Finally, I propose three query optimization algorithms based on spatial, temporal, and spatio-temporal constraints of geo-streams. I show the effectiveness of the data model through many query examples. The effectiveness and the efficiency of the algorithms are validated through extensive experiments on both synthetic and real data sets. This work established the fundamental building blocks of a full-fledged geo-stream database management system and has potential impact on many applications, such as hazardous weather alerting and monitoring, traffic analysis, and environmental modeling.
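A window-based nearest neighbor (WNN) operator of the kind proposed here can be sketched as a sliding buffer over the stream plus a distance scan. This is a generic illustration of the operator's semantics under an assumed Euclidean distance, not the thesis's algebra or optimization strategies.

```python
from collections import deque
import math

class WNN:
    """Window-based nearest neighbor over a geo-stream: keep only the last
    `window` readings and answer NN queries against that window."""

    def __init__(self, window=100):
        self.buffer = deque(maxlen=window)   # (x, y, payload); oldest dropped

    def push(self, x, y, payload):
        self.buffer.append((x, y, payload))

    def nearest(self, qx, qy):
        return min(self.buffer,
                   key=lambda p: math.hypot(p[0] - qx, p[1] - qy),
                   default=None)

stream = WNN(window=3)
for i, (x, y) in enumerate([(0, 0), (5, 5), (1, 1), (9, 9)]):
    stream.push(x, y, f"reading-{i}")
print(stream.nearest(0, 0))  # (1, 1, 'reading-2'); (0, 0) already slid out
```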
APA, Harvard, Vancouver, ISO, and other styles
25

Wang, Anjia. "Apparel Companies' Management System (APLAN)." CSUSB ScholarWorks, 2004. https://scholarworks.lib.csusb.edu/etd-project/2524.

Full text
Abstract:
APLAN is a computer software system developed to aid an apparel company's management. APLAN is designed to improve the efficiency of production management by combining the company's main production activities in one system. The project uses MySQL as the database system, JSP (Java Server Pages) as the interface between MySQL and the web browser, and JDBC (Java Database Connectivity) as the database access scheme.
APA, Harvard, Vancouver, ISO, and other styles
26

Folmer, Brennan Thomas. "Metadata storage for file management systems data storage and representation techniques for a file management system /." [Gainesville, Fla.] : University of Florida, 2002. http://purl.fcla.edu/fcla/etd/UFE1001141.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Wang, Junxian. "Online hotel booking system." CSUSB ScholarWorks, 2006. https://scholarworks.lib.csusb.edu/etd-project/3083.

Full text
Abstract:
The Online Hotel Booking System was developed to allow customers to use a web browser to book a hotel, change their booking details, cancel a booking, change their personal profile, view their booking history, and view hotel information through a GUI (graphical user interface). The system is implemented in PHP (Hypertext Preprocessor) and HTML (Hypertext Markup Language).
APA, Harvard, Vancouver, ISO, and other styles
28

Nair, Hema. "Evaluation of an Experimental Data Management System for Program Data at the College Level." DigitalCommons@Robert W. Woodruff Library, Atlanta University Center, 2016. http://digitalcommons.auctr.edu/cauetds/45.

Full text
Abstract:
An experimental data management system was designed, developed, and implemented in this dissertation. The system satisfies the requirements specifications of the Department of Curriculum and Instruction in the School of Education. The university in this study (its name is omitted for anonymity purposes) has installed several learning management and assessment systems, such as Banner®, Canvas®, TracDat®, and Taskstream®. Individually, these systems do not perform the data analysis and data management necessary to generate appropriate reports. The system developed in this study can generate more metrics and quantitative measures for reporting purposes within a shorter time. These metrics provide credible evidence for accreditation. Leadership is concerned with improving the effectiveness, efficiency, accountability, and performance of educational programs. The continuity, sustainability, and financial support of programs depend on demonstrating evidence that they are effective and efficient, that they meet their objectives, and that they contribute to the mission and the vision of the educational institution. Leadership has to employ all means at its disposal in order to collect such evidence. The data management system provides comprehensive data analysis that can be utilized as evidence by the leadership to accomplish its goals. The pilot system developed in this research is web-based and platform independent. It leverages the power of Java® at the front end and combines it with the reliability and stability of Oracle® as the back-end database. It has been tested on-site by some members of the departmental faculty and one administrator from the Dean's Office in the School of Education. This research is a mixed-methods study with quasi-experimental treatment. It is a single-case experimental study with no control group, and the sample chosen is a convenience sample. The results of this study indicate that the system is highly usable for assessment work. The data analysis results generated by the system are also actionable: they assist by identifying gaps in student performance and in curriculum and instruction practices. In the future, the system developed in this dissertation could be extended to other departments in the School of Education. Some implications are provided in the concluding chapter of this dissertation.
APA, Harvard, Vancouver, ISO, and other styles
29

洪宜偉 and Edward Hung. "Data cube system design: an optimization problem." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B31222730.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Hung, Edward. "Data cube system design : an optimization problem /." Hong Kong : University of Hong Kong, 2000. http://sunzi.lib.hku.hk/hkuto/record.jsp?B21852340.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Vykunta, Venkateswara Rao. "Class management in a distributed actor system." Master's thesis, This resource online, 1994. http://scholar.lib.vt.edu/theses/available/etd-02022010-020159/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Zhang, Tong. "Improving the performance of a traffic data management system." Ohio : Ohio University, 1999. http://www.ohiolink.edu/etd/view.cgi?ohiou1175198741.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Lisanskiy, Ilya 1976. "A data model for the Haystack document management system." Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/80103.

Full text
Abstract:
Thesis (S.B. and M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1999. Includes bibliographical references (p. 97-98). by Ilya Lisanskiy. S.B. and M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
34

Whitelaw, Virigina A. "Telemetry Handling on the Space Station Data Management System." International Foundation for Telemetering, 1987. http://hdl.handle.net/10150/615260.

Full text
Abstract:
International Telemetering Conference Proceedings / October 26-29, 1987 / Town and Country Hotel, San Diego, California. Traditional space telemetry has generally been handled as an asynchronous data stream fed into a time-division-multiplexed channel on a point-to-point radio frequency (RF) link between space and ground. The data handling concepts emerging for the Space Station challenge each of these precepts. According to current concepts, telemetry data on the Space Station will be packetized and transported asynchronously through onboard networks. The space-to-ground link will not be time division multiplexed, but rather will have flexibly managed virtual channels, and finally, the routing of telemetry data must potentially traverse multiple ground distribution networks. Appropriately, the communication standards for handling telemetry are changing to support the highly networked Space Station environment. While a companion paper (I. W. Marker, "Telemetry Formats for the Space Station RF Links") examines the emerging telemetry concepts and formats for the RF link, this paper focuses on the impact of telemetry handling on the design of the onboard networks that are part of the Data Management System (DMS). The DMS will provide the connectivity between most telemetry sources and the onboard node for transmission to the ground. By far the bulk of the data transported by the DMS will be telemetry; however, not all telemetry will place the same demands on the communication system, and the DMS must also satisfy a rich array of services in support of distributed Space Station operations. These services include file transfer, database access, application messaging, and several others. The DMS communications architecture, which will follow the International Standards Organization (ISO) Reference Model, must support both the high throughput needed for telemetry transport and the rich services needed for distributed computer systems. This paper discusses an architectural approach to satisfying this dual set of requirements and several of the functionality-versus-performance trade-offs that must be made in developing an optimized mechanism for handling telemetry data in the DMS.
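For readers unfamiliar with packetized telemetry, the sketch below parses the 6-byte primary header of a CCSDS-style space packet, the packet family that Space Station era telemetry concepts converged on. The sample packet bytes are fabricated; the field positions follow the standard CCSDS primary header layout.

```python
import struct

# Packetized telemetry in the CCSDS style: a 6-byte primary header,
# then the application data field.
def parse_space_packet(packet: bytes):
    word1, word2, length = struct.unpack(">HHH", packet[:6])
    return {
        "version":   (word1 >> 13) & 0x7,
        "apid":      word1 & 0x07FF,       # identifies the telemetry source
        "seq_count": word2 & 0x3FFF,
        "data_len":  length + 1,           # CCSDS stores (length - 1)
        "data":      packet[6:6 + length + 1],
    }

# Fabricated sample packet: header plus four data bytes.
pkt = struct.pack(">HHH", 0x0802, 0xC001, 3) + b"\x01\x02\x03\x04"
print(parse_space_packet(pkt))
```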
APA, Harvard, Vancouver, ISO, and other styles
35

Rahko, J. (Joona). "Web client for a RESTful clinical data management system." Master's thesis, University of Oulu, 2015. http://urn.fi/URN:NBN:fi:oulu-201505291720.

Full text
Abstract:
With the emergence of computers, the fashion in which clinical trials are conducted has been revolutionized. Traditionally, most clinical trials have been run on paper-based systems, which is inefficient in the light of today's technology. The computerization of clinical data management has improved clinical trial processes in many ways, such as by reducing the cost of collecting, exchanging, and verifying information. Moreover, readily available data has also greatly improved subject safety, as physicians become aware of any adverse events sooner. This thesis depicts the requirements elicitation, design, implementation, and evaluation of a web client for Genesis, a web-based clinical data management system developed primarily for the LIRA study. In the LIRA study, the software will be used at various study sites in Finland as well as in Sweden. The usability testing presented in this thesis indicated that the engineered application was user-friendly and that its development should be continued. In addition to serving the LIRA study, a secondary goal of the developed software is to be easily portable to other studies. Initial plans have already been made to deploy Genesis in two other large-scale studies, one of which is the largest type 1 diabetes study in the world. In addition to presenting the developed software, both the current state and the history of clinical data management are discussed. After illustrating the software development process, the results and future prospects of Genesis are considered.
APA, Harvard, Vancouver, ISO, and other styles
36

Pires, Carlos Eduardo Santos. "Ontology-based clustering in a Peer Data Management System." Universidade Federal de Pernambuco, 2009. https://repositorio.ufpe.br/handle/123456789/1354.

Full text
Abstract:
Faculdade de Amparo à Ciência e Tecnologia do Estado de Pernambuco. Peer Data Management Systems (PDMS) are advanced P2P applications that allow users to transparently query several distributed, heterogeneous, and autonomous data sources. Each peer represents a data source and exports its data schema, either in full or in part. This schema, called the exported schema, represents the data to be shared with other peers in the system and is commonly described by an ontology. The two most studied aspects of data management in a PDMS concern schema mappings and query processing. Both can be improved if the peers are efficiently arranged in the overlay network according to a semantics-based approach. In this context, the notion of a semantic community of peers is very important, since it brings logically closer those peers with common interests in a specific topic. However, due to the dynamic behavior of peers, creating and maintaining semantic communities is a challenging aspect at the current stage of PDMS development. The main goal of this thesis is to propose a semantics-based process for incrementally clustering the semantically similar peers that make up communities in a PDMS. In this process, peers are clustered according to their exported schemas (ontologies), and ontology management processes (for example, matching and summarization) are used to assist in connecting peers. A PDMS architecture is proposed to facilitate the semantic organization of peers in the overlay network. To obtain the semantic similarity between two peer ontologies, we propose a global similarity measure produced as the output of an ontology matching process. To optimize the matching between ontologies, an automatic process for ontology summarization is also proposed. A simulator was developed according to the PDMS architecture, and the proposed ontology management processes were implemented and included in it. Experiments with each process in the PDMS context, as well as the results obtained from them, are presented.
APA, Harvard, Vancouver, ISO, and other styles
37

Chung, Kristie (Kristie J. ). "Applying systems thinking to healthcare data cybersecurity." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/105307.

Full text
Abstract:
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, Engineering Systems Division, 2015. Cataloged from PDF version of thesis. Includes bibliographical references (pages 85-90). Since the HITECH Act of 2009, adoption of Electronic Health Record (EHR) systems in US healthcare organizations has increased significantly. Along with the rapid increase in the usage of EHRs, cybercrimes are on the rise as well. Two recent cybercrime cases from early 2015, the Anthem and Premera breaches, are examples of the alarming increase of cybercrime in this domain. Although modern Information Technology (IT) systems have evolved to become very complex and dynamic, cybersecurity strategies have remained static. Cyber attackers are now adopting more adaptive, sophisticated tactics, yet the cybersecurity counter-tactics have proven to be inadequate and ineffective. The objective of this thesis is to analyze the recent Anthem security breach to assess the vulnerabilities of Anthem's data systems, using current cybersecurity frameworks and guidelines and the Systems-Theoretic Accident Model and Process (STAMP) method. The STAMP analysis revealed that Anthem's cybersecurity strategy needs to be reassessed and redesigned from a systems perspective using a holistic approach. Unless our society and government understand cybersecurity from a sociotechnical perspective, we will never be equipped to protect valuable information and will always lose this battle. by Kristie Chung. S.M. in Engineering and Management
APA, Harvard, Vancouver, ISO, and other styles
38

Mullin, Jim. "Prototype system for document management." Thesis, Kansas State University, 1985. http://hdl.handle.net/2097/9868.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Jäkel, Tobias. "Role-based Data Management." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-224416.

Full text
Abstract:
Database systems form an integral component of today's software systems, and as such they are the central point for storing and sharing a software system's data while ensuring global data consistency at the same time. Introducing the primitives of roles and their accompanying metatype distinction in modeling and programming languages results in a novel paradigm for designing, extending, and programming modern software systems. In detail, roles as a modeling concept enable a separation of concerns within an entity. Along with its rigid core, an entity may acquire various roles in different contexts during its lifetime and thus adapts its behavior and structure dynamically at runtime. Unfortunately, database systems, as an important component and global consistency provider of such systems, have not kept pace with this trend. The absence of a metatype distinction, in terms of an entity's separation of concerns, in the database system results in various problems for the software system in general, for the application developers, and finally for the database system itself. In the case of relational database systems, these problems are concentrated under the term role-relational impedance mismatch. In particular, the whole software system is designed using different semantics on its various layers. For role-based software systems in combination with relational database systems, this gap in semantics between applications and the database system increases dramatically. Consequently, the database system can directly represent neither the richer semantics of roles nor the accompanying consistency constraints. These constraints have to be ensured by the applications, and the database system loses its single-point-of-truth characteristic in the software system. As the applications are in charge of guaranteeing global consistency, their development requires more effort in data management. Moreover, the software system's data management is distributed over several layers, which results in an unstructured software system architecture. To overcome the role-relational impedance mismatch and bring the database system back to its rightful position as the single point of truth in a software system, this thesis introduces the novel, tripartite RSQL approach. It combines a novel database model that represents the metatype distinction as a first-class citizen in a database system, a query language adapted to that database model, and finally a proper result representation. Precisely, RSQL's logical database model introduces Dynamic Data Types to directly represent the separation of concerns within an entity type on the schema level. On the instance level, the database model defines the notion of a Dynamic Tuple, which combines an entity with the notion of roles and thus allows for dynamic structure adaptations at runtime without changing an entity's overall type. These definitions build the main data structures on which the database system operates. Moreover, formal operators connecting the query language statements with the database model's data structures complete the database model. The query language, as the external database system interface, features an individual data definition, data manipulation, and data query language. Their statements directly represent the metatype distinction to address Dynamic Data Types and Dynamic Tuples, respectively. As a consequence of the novel data structures, the query processing of Dynamic Tuples is completely redesigned.
As the last piece of a complete database integration of the role notion and its accompanying metatype distinction, we specify the RSQL Result Net as the result representation. It provides a novel result structure and functionalities to navigate through query results. Finally, we evaluate all three RSQL components against a relational database system. This assessment clearly demonstrates the benefits of fully integrating the roles concept into the database.
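For illustration, the Dynamic Tuple idea from this abstract — a rigid entity core that acquires and drops roles at runtime without changing its overall type — can be sketched outside the database. This is a minimal Python sketch; the class and attribute names are invented for illustration and are not part of RSQL:

```python
class Role:
    """A role an entity can play in some context."""
    def __init__(self, **attrs):
        self.__dict__.update(attrs)

class DynamicTuple:
    """An entity with a rigid core plus roles acquired at runtime."""
    def __init__(self, **core):
        self.core = core    # rigid, type-defining attributes
        self.roles = {}     # role name -> Role instance

    def acquire(self, name, role):
        # Structural adaptation at runtime: the overall type is unchanged.
        self.roles[name] = role

    def drop(self, name):
        self.roles.pop(name, None)

# A person acquires the Employee role in a work context, then drops it.
p = DynamicTuple(name="Alice")
p.acquire("Employee", Role(salary=50_000, dept="R&D"))
p.drop("Employee")
```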
APA, Harvard, Vancouver, ISO, and other styles
40

Giammarco, Kristin. "Data centric integration and analysis of information technology architectures." Thesis, Monterey, California. Naval Postgraduate School, 2007. http://hdl.handle.net/10945/2989.

Full text
Abstract:
The premise of this thesis is that integrated architectures have increased usefulness to the users of the systems they describe when they can be interactively and dynamically updated and used in conjunction with systems engineering analyses to enable systems optimization. In order to explore this premise, three research topics are presented. The first topic discusses needs and uses for integrated architectures indicated throughout Department of Defense (DoD) policies, directives, instructions, and guides. The second topic presents a systems engineering analysis process and discusses the relevancy of integrated architectures to these analyses. Building on the previous two topics, the third discusses federation, governance, and net-centric concepts that can be used to significantly improve DoD Enterprise Architecture development, integration, and analysis, with specific recommendations for the Army Architecture Integration Process. A key recommendation is the implementation of a collaborative environment for net-centric architecture integration and analysis, to provide a rich and agile data foundation for systems engineering and System of Systems engineering analyses, which are required to optimize the DoD Enterprise Architecture as a whole. Other conclusions, recommendations, and areas for future work are also presented. US Army (USA) author.
APA, Harvard, Vancouver, ISO, and other styles
41

Polany, Rany. "Multidisciplinary system design optimization of fiber-optic networks within data centers." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/107503.

Full text
Abstract:
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, School of Engineering, System Design and Management Program, 2016. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (pages 136-142). The growth of the Internet and the vast amount of cloud-based data have created a need to develop data centers that can respond to market dynamics. The role of the data center designer, who is responsible for scoping, building, and managing the infrastructure design, is becoming increasingly complex. This work presents a new analytical systems approach to modeling fiber-optic network design within data centers. Multidisciplinary system design optimization (MSDO) is utilized to integrate seven disciplines into a unified software framework for modeling 10G, 40G, and 100G multi-mode fiber-optic networks: 1) market and industry analysis, 2) fiber-optic technology, 3) data center infrastructure, 4) systems analysis, 5) multi-objective optimization using genetic algorithms, 6) parallel computing, and 7) simulation research using MATLAB and OptiSystem. The framework is applied to four theoretical data center case studies to simultaneously evaluate the Pareto-optimal trade-offs of (a) minimizing life-cycle costs, (b) maximizing user capacity, and (c) maximizing optical transmission quality (Q-factor). The results demonstrate that data center life-cycle costs are most sensitive to power costs, that 10G OM4 multi-mode optical fiber is Pareto optimal for long reach and low user capacity needs, and that 100G OM4 multi-mode optical fiber is Pareto optimal for short reach and high user capacity needs. by Rany Polany. S.M. in Engineering and Management
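The Pareto trade-off evaluation at the heart of this kind of multi-objective study can be sketched compactly. Below is a minimal Python illustration of non-dominated filtering over candidate designs; the objective values are invented placeholders, not results from the thesis:

```python
# Each candidate design: (life_cycle_cost, user_capacity, q_factor).
# Cost is minimized; capacity and Q-factor are maximized.
designs = {
    "10G-OM4":  (1.0, 100, 8.2),
    "40G-OM4":  (1.6, 400, 7.1),
    "100G-OM4": (2.9, 1000, 6.5),
}

def dominates(a, b):
    """True if design a is at least as good as b in every objective
    and strictly better in at least one."""
    at_least_as_good = a[0] <= b[0] and a[1] >= b[1] and a[2] >= b[2]
    strictly_better = a[0] < b[0] or a[1] > b[1] or a[2] > b[2]
    return at_least_as_good and strictly_better

# The Pareto front is the set of designs no other design dominates.
pareto = [n for n, v in designs.items()
          if not any(dominates(w, v) for m, w in designs.items() if m != n)]
print(pareto)
```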
APA, Harvard, Vancouver, ISO, and other styles
42

Wilmer, Greg. "OPM model-based integration of multiple data repositories." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/100389.

Full text
Abstract:
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, Engineering Systems Division, System Design and Management Program, 2015. Cataloged from PDF version of thesis. Includes bibliographical references (page 90). Data integration is at the heart of a significant portion of current information system implementations. As companies continue to move towards a diverse, growing set of Commercial Off the Shelf (COTS) applications to fulfill their information technology needs, the need to integrate data between them continues to increase. In addition, these diverse application portfolios are becoming more geographically dispersed as more software is provided under the Software as a Service (SaaS) model and companies continue the pattern of moving their internal data centers to cloud-based computing. As data integration activities have grown, several prominent data integration patterns have emerged, and commercial software packages have been created that cover each of the patterns below: 1. bulk and/or batch data extraction and delivery (ETL, ELT, etc.); 2. messaging / message-oriented data movement; 3. granular, low-latency data capture and propagation (data synchronization). As the data integration landscape within an organization, and between organizations, becomes larger and more complex, opportunities exist to streamline aspects of the data integration process not covered by current toolsets, including: 1. extensibility by third parties (many COTS integration toolsets today are difficult if not impossible for third parties to extend); 2. capabilities to handle different types of structured data, from relational to hierarchical to graph models; 3. enhanced modeling capabilities through the use of data visualization and modeling techniques and tools; 4. capabilities for automated unit testing of integrations; 5. a unified toolset that covers all three patterns, allowing an enterprise to implement the pattern that best suits its business needs for each specific scenario; 6. a Web-based toolset that allows configuration, management, and deployment via Web-based technologies, making application deployment and integration geographically indifferent. While discussing these challenges with a large Fortune 500 client, they expressed the need for an enhanced data integration toolset that would allow them to accomplish such tasks. Given this request, the Object Process Methodology (OPM) and the Opcat toolset were used to begin the design of a data integration toolset that could fulfill these needs. As part of this design process, lessons learned covering both the use of OPM in software design projects and enhancement requests for the Opcat toolset were documented. by Greg Wilmer. S.M. in Engineering and Management
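Of the three integration patterns listed in this abstract, the bulk/batch pattern is the easiest to sketch. Below is a minimal, generic Python ETL skeleton; the table names and transformation are invented for illustration (the thesis designs a toolset with OPM rather than implementing one):

```python
import sqlite3

def extract(conn):
    """Pull source rows in bulk (the 'E' step)."""
    return conn.execute("SELECT id, name, amount FROM source_orders").fetchall()

def transform(rows):
    """Normalize records on the way through (the 'T' step)."""
    return [(i, name.strip().upper(), round(amount, 2)) for i, name, amount in rows]

def load(conn, rows):
    """Write into the target schema (the 'L' step)."""
    conn.executemany("INSERT INTO target_orders VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE source_orders (id INTEGER, name TEXT, amount REAL)")
conn.execute("CREATE TABLE target_orders (id INTEGER, name TEXT, amount REAL)")
conn.execute("INSERT INTO source_orders VALUES (1, ' acme ', 10.005)")
load(conn, transform(extract(conn)))
```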
APA, Harvard, Vancouver, ISO, and other styles
43

Dunnaville, Ted, and Mark Lindsey. "Excel Application Leverages XML to Configure Both Airborne Data Acquisition System and Ground Based Data Processing System." International Foundation for Telemetering, 2009. http://hdl.handle.net/10150/606007.

Full text
Abstract:
ITC/USA 2009 Conference Proceedings / The Forty-Fifth Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2009 / Riviera Hotel & Convention Center, Las Vegas, Nevada. Flight test instrumentation/data processing environments consist of three components: an Airborne Data Acquisition System, a Telemetry Control Room, and a Post Test Data Processing System. While these three components require the same setup information, they are most often configured separately, using a different tool for each system. Vendor-supplied tools generally do not interact well with hardware other than their own. This results in multiple entries of the same configuration information. Multiple entries of data for large, complex systems are susceptible to data entry errors as well as version synchronization issues. This paper describes the successful implementation of a single Microsoft Excel based tool being used to program the instrumentation data acquisition hardware, the real-time telemetry system, and the post test data processing system on an active test program. This tool leverages the XML interfaces provided by vendors of telemetry equipment.
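The single-source-of-truth idea in this paper — one spreadsheet exported as XML that drives all three systems — can be sketched generically. Below is a minimal Python example of consuming one shared XML measurement definition; the element and attribute names are invented assumptions, not the vendors' actual XML schemas:

```python
import xml.etree.ElementTree as ET

# One shared definition, exported once from the Excel tool.
CONFIG = """
<instrumentation>
  <measurement name="engine_rpm" units="rpm" rate="100" word="12"/>
  <measurement name="fuel_flow"  units="pph" rate="50"  word="13"/>
</instrumentation>
"""

root = ET.fromstring(CONFIG)
for m in root.iter("measurement"):
    # Each downstream system (airborne DAU, telemetry room, post-test
    # processor) would consume the same record instead of re-keying it.
    print(m.get("name"), m.get("units"), int(m.get("rate")), int(m.get("word")))
```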
APA, Harvard, Vancouver, ISO, and other styles
44

Yildirim, Cem S. M. Massachusetts Institute of Technology. "Data-Centric Business Transformation." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/107344.

Full text
Abstract:
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, School of Engineering, Institute for Data, Systems, and Society, System Design and Management Program, 2014. Cataloged from PDF version of thesis. Includes bibliographical references (pages 44-45). Today's digital business environment imposes a great transformation challenge on enterprises: to effectively use vast amounts of data in order to gain critical business insights and stay competitive. In their aim to take advantage of data, many large organizations are launching data management programs. In these attempts, organizations recognize that taking full advantage of data requires enterprise-wide changes in organizational aspects, business processes, and technology. The lack of recognition of this enterprise-wide scope haunts most data management programs; research shows that most of these programs fail and are abandoned after long efforts and investments. This study aims to highlight critical reasons why these programs fail and to present a different approach to addressing the fundamental problems associated with the majority of these failures. Succeeding in these data efforts is important because data-driven businesses are gaining a significant competitive edge. Data Centric Business Transformation Strategy (DCBT) is a holistic approach for the enterprise to transform into a data-driven and agile entity; it is also a way to achieve better alignment in the enterprise. DCBT aims to achieve two goals in transforming the organization: becoming a smarter organization by instilling a culture of continuous learning and improvement in all aspects of the business, and achieving agility in enterprise-wide organizational learning and technology. To achieve these two goals, understanding the current state of the organization in the three fundamental DCBT areas of organizational learning capacity, business processes, and technology is essential to incrementally and continuously improve each one in concert. Required improvements should be introduced to smaller parts of the organization first, delivering the value of data early. A strategically chosen pipeline of projects would allow the organization to ramp up into one that continuously learns and changes. In the age of the digital economy, agile organizations that learn more quickly from large amounts of data have the competitive edge. This study will also look into how a data management program relates to DCBT and can be used in concert with it to enable DCBT. by Cem Yildirim. S.M. in Engineering and Management
APA, Harvard, Vancouver, ISO, and other styles
45

Abdul-Huda, Bilal Anas Hamed. "A system for managing distributed multi-media data." Thesis, University of Ulster, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.328195.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Oelofse, Andries Johannes. "Development of a MIAME-compliant microarray data management system for functional genomics data integration." Pretoria : [s.n.], 2006. http://upetd.up.ac.za/thesis/available/etd-08222007-135249.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Mirza, Ahmed Kamal. "Managing high data availability in dynamic distributed derived data management system (D4M) under Churn." Thesis, KTH, Kommunikationssystem, CoS, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-95220.

Full text
Abstract:
The popularity of decentralized systems is increasing day by day. These decentralized systems are preferable to centralized systems for many reasons; in particular, they are more reliable and more resource-efficient. Decentralized systems are especially effective for information management when data is distributed across multiple peers and maintained in a synchronized manner. This data synchronization is the main requirement for information management systems deployed in a decentralized environment, especially when the data is needed for monitoring purposes or when dependent data artifacts rely upon it. In order to ensure consistent and cohesive synchronization of dependent/derived data in a decentralized environment, a dependency management system is needed. In a dependency management system, where one chunk of data relies on another piece of data, the resulting derived data artifacts can use a decentralized systems approach but must consider several critical issues, such as how the system behaves if any peer goes down, how the dependent data can be recalculated, and how the data that was stored on a failed peer can be recovered. In the case of churn (resulting from failing peers), how does the system adapt the transmission of data artifacts with respect to their access patterns, and how does it provide consistency management? The major focus of this thesis was to address these churn behavior issues and to suggest and evaluate potential solutions while ensuring a load-balanced network, within the scope of a dependency information management system running in a decentralized network. Additionally, in peer-to-peer (P2P) algorithms it is a very common assumption that all peers in the network have similar resources and capacities, which is not true of real-world networks. Peers' characteristics can be quite different in actual P2P systems: peers may differ in available bandwidth, CPU load, available storage space, stability, etc. As a consequence, low-capacity peers are forced to handle the same computational load that high-capacity peers handle, resulting in poor overall system performance. To handle this situation, the concept of utility-based replication is introduced in this thesis to avoid the assumption of peer equality, enabling efficient operation even in heterogeneous environments where peers have different configurations. In addition, the proposed protocol ensures a load-balanced network while meeting the requirement for high data availability, thus keeping the distributed dependent data consistent and cohesive across the network. Furthermore, an integrated dependency management framework, D4M, was implemented and evaluated in the PeerfactSim.KOM P2P simulator. To benchmark the implementation of the proposed protocol, performance and fairness tests were conducted. A conclusion is that the proposed solution adds little overhead to managing data availability in a distributed data management system, despite operating in a heterogeneous P2P environment. Additionally, the results show that various P2P clusters can be introduced in the network based on peers' capabilities.
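The utility-based replication idea — weighting replica placement by each peer's actual capacity instead of assuming peer equality — can be sketched as follows. This is a minimal Python illustration under invented utility weights, not the protocol specified in the thesis:

```python
def utility(peer):
    """Score a peer by its capacity; the weights are invented
    placeholders, not the thesis's actual utility function."""
    return (0.5 * peer["bandwidth"] +
            0.3 * peer["storage"] +
            0.2 * peer["uptime"])

def place_replicas(peers, k):
    """Place k replicas on the k highest-utility peers, so low-capacity
    peers are not forced to carry the same load as high-capacity ones."""
    return sorted(peers, key=utility, reverse=True)[:k]

peers = [
    {"id": "p1", "bandwidth": 10, "storage": 50, "uptime": 0.99},
    {"id": "p2", "bandwidth": 2,  "storage": 5,  "uptime": 0.60},
    {"id": "p3", "bandwidth": 8,  "storage": 80, "uptime": 0.95},
]
print([p["id"] for p in place_replicas(peers, 2)])  # -> ['p3', 'p1']
```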
APA, Harvard, Vancouver, ISO, and other styles
48

Jan, Jonathan. "Collecting Data for Building Automation Analytics : A case study for collecting operational data with minimal human intervention." Thesis, KTH, Radio Systems Laboratory (RS Lab), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-233319.

Full text
Abstract:
Approximately 40% of the total energy consumption within the EU is due to buildings, and similar numbers can be found in the US. If the principal inefficiencies in buildings were easily identifiable, a facility manager could focus their resources on making the buildings more efficient, which would lead both to cost savings for the facility owners and to a smaller ecological footprint. In building automation systems today, data is already being collected every second, but due to the lack of standardization for describing this data, having access to data is not the same as being able to make use of it. The existing heterogeneity makes it very costly to gather data from multiple buildings, and thus difficult to understand the big picture. Facility managers cannot fix what they cannot see; it is therefore important to facilitate the visualization of the data collected from all of the different building automation systems. This potentially offers great benefits with regard to both sustainability and economy. In this thesis, the author's goal is to propose a sustainable, cost- and time-effective data integration strategy for real estate owners who wish to gain greater insight into their buildings' efficiency. The study begins with a literature review of previous and ongoing attempts to solve this problem. Some initiatives for the standardization of semantic models were found, and two of these models, Brick and Haystack, were chosen. One building automation system (BAS) was then tested in a pilot case study to assess the appropriateness of the solution. The key results of this thesis project show that data from building automation systems can be integrated into an analysis platform, and an extract, transform, and load (ETL) process for this is presented. How time-efficiently data can be tagged and transformed into a common format depends heavily on the current control system's data storage format and on whether information about its structure is adequate. It is also noted that there is no guarantee that facility managers have access to the control system's database or to information about how it is structured; in such cases other techniques can be used, such as BACnet/IP or Open Platform Communications (OPC) Unified Architecture.
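The tagging step of such an ETL process can be sketched briefly. Below is a minimal Python example that attaches Project Haystack-style marker tags to raw BAS points before loading them into an analytics store; the point names and tag-inference rules are invented for illustration:

```python
# Raw points as exported from a building automation system.
raw_points = [
    {"name": "AHU1_SupplyTemp", "value": 18.5},
    {"name": "AHU1_FanCmd",     "value": 1},
]

# Hypothetical mapping from naming conventions to Haystack-style tags.
TAG_RULES = {
    "SupplyTemp": {"point", "sensor", "temp", "discharge", "air"},
    "FanCmd":     {"point", "cmd", "fan"},
}

def tag(point):
    """Transform step: attach marker tags inferred from the point name."""
    for suffix, tags in TAG_RULES.items():
        if point["name"].endswith(suffix):
            return {**point, "tags": sorted(tags)}
    return {**point, "tags": ["point"]}   # fallback: untyped point

for p in map(tag, raw_points):
    print(p)   # the load step would write these into the analysis platform
```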
APA, Harvard, Vancouver, ISO, and other styles
49

Persson, Mathias. "Simultaneous Data Management in Sensor-Based Systems using Disaggregation and Processing." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-188856.

Full text
Abstract:
To enable high-performance data management in sensor-based systems, the components of the system architecture have to be tailored to the situation at hand. Each component has to handle a massive amount of data independently while at the same time cooperating with the other components of the system. To facilitate rapid data processing between components, a model detailing the flow of information and specifying internal component structures will assist in producing faster and more reliable system designs. This thesis presents a model for a scalable, safe, reliable, and high-performing system for managing sensor-based data. Based on the model, a prototype is developed that can handle a large number of messages from various distributed sensors. The different components of the prototype are evaluated, and their advantages and disadvantages are presented. The results validate the prototype's architecture and the initial requirements for how it should operate to achieve high performance. By combining components with individual advantages, a system can be designed that allows a high volume of simultaneous data to be disaggregated into its respective categories, processed to make the information usable, and stored in a database for easy access by interested parties.
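The disaggregate-process-store flow described above can be sketched as a small pipeline. Below is a minimal Python illustration in which incoming messages are routed by sensor category, processed, and appended to a store; the message fields and handlers are invented for illustration:

```python
from collections import defaultdict

store = defaultdict(list)   # stands in for the database component

def process_temperature(msg):
    return {"celsius": (msg["raw"] - 32) * 5 / 9}     # example conversion

def process_humidity(msg):
    return {"percent": min(max(msg["raw"], 0), 100)}  # clamp to valid range

HANDLERS = {"temperature": process_temperature, "humidity": process_humidity}

def ingest(msg):
    """Disaggregate by category, process, then store."""
    handler = HANDLERS.get(msg["type"])
    if handler is None:
        return                      # unknown category: drop or dead-letter
    store[msg["type"]].append(handler(msg))

for m in [{"type": "temperature", "raw": 98.6},
          {"type": "humidity", "raw": 104}]:
    ingest(m)
print(dict(store))
```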
APA, Harvard, Vancouver, ISO, and other styles
50

Bhagattjee, Benoy. "Emergence and taxonomy of big data as a service." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/90709.

Full text
Abstract:
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, Engineering Systems Division, System Design and Management Program, 2014. Cataloged from PDF version of thesis. Includes bibliographical references (pages 82-83). The amount of data that we produce and consume is growing exponentially in the modern world. The increasing use of social media and new innovations such as smartphones generate large amounts of data that can yield invaluable information if properly managed. These large datasets, popularly known as Big Data, are difficult to manage using traditional computing technologies. New technologies are emerging in the market to address the problem of managing and analyzing Big Data to produce invaluable insights from it. Organizations are finding it difficult to implement these Big Data technologies effectively due to problems such as a lack of available expertise. Some of the latest innovations in the industry relate to cloud computing and Big Data, and there is significant interest in academia and industry in combining the two to create new technologies that can solve the Big Data problem. Big Data based on cloud computing is an emerging area in computer science, and many vendors are offering their ideas on this topic. The combination of Big Data technologies and cloud computing platforms has led to the emergence of a new category of technology called Big Data as a Service, or BDaaS. This thesis aims to define the BDaaS service stack and to evaluate a few technologies in the cloud computing ecosystem against it. The BDaaS service stack provides an effective way to classify Big Data technologies, enabling technology users to evaluate and choose the technology that meets their requirements, and technology vendors to communicate their product offerings more clearly to consumers. by Benoy Bhagattjee. S.M. in Engineering and Management
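The classification role of a service stack can be illustrated with a small lookup structure. Below is a toy Python sketch; the layer names and product assignments are invented assumptions loosely modeled on common cloud stacks, not the BDaaS layers actually defined in the thesis:

```python
# Hypothetical BDaaS-style layering; names are illustrative only.
BDAAS_STACK = {
    "infrastructure": {"raw compute", "object storage"},   # IaaS-like layer
    "data platform":  {"managed Hadoop", "managed NoSQL"}, # PaaS-like layer
    "analytics":      {"hosted BI", "ML as a service"},    # SaaS-like layer
}

def classify(offering):
    """Return the stack layer an offering belongs to, if any."""
    for layer, offerings in BDAAS_STACK.items():
        if offering in offerings:
            return layer
    return "unclassified"

print(classify("managed Hadoop"))   # -> 'data platform'
print(classify("hosted BI"))        # -> 'analytics'
```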
APA, Harvard, Vancouver, ISO, and other styles
