Dissertations / Theses on the topic 'Databases and Grids'

Consult the top 43 dissertations / theses for your research on the topic 'Databases and Grids.'


1

Venugopal, Srikumar. "Scheduling distributed data-intensive applications on global grids." Connect to thesis, 2006. http://eprints.unimelb.edu.au/archive/0002929.

2

Sonmez, Sunercan Hatice Kevser. "Data Integration Over Horizontally Partitioned Databases In Service-oriented Data Grids." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612414/index.pdf.

Abstract:
Information integration over distributed and heterogeneous resources is challenging in many respects: coping with various kinds of heterogeneity (data model, platform, access interfaces); coping with various forms of data distribution and maintenance policies; scalability; performance; security and trust; reliability and resilience; legal issues; and more. Each of these dimensions deserves a separate thread of research. The challenge most relevant to the work presented in this thesis is coping with various forms of data distribution and maintenance policies. This thesis aims to provide a service-oriented data integration solution over data Grids for cases where distributed data sources are partitioned with overlapping sections of various proportions. This is an interesting variation which combines both replicated and partitioned data within the same data management framework; the data management infrastructure therefore has to deal with specific challenges regarding the identification, access and aggregation of partitioned data with varying proportions of overlapping sections. To provide a solution we have extended OGSA-DAI DQP, a well-known service-oriented data access and integration middleware with distributed query processing facilities, by incorporating a UnionPartitions operator into its algebra in order to cope with various unusual forms of horizontally partitioned databases. As a result, our solution extends OGSA-DAI DQP in two ways: (1) a new operator type is added to the algebra to perform a specialized union of partitions with different characteristics; (2) the OGSA-DAI DQP Federation Description is extended with additional metadata to facilitate the successful execution of the newly introduced operator.
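The specialized-union idea described above can be illustrated with a small sketch: a duplicate-eliminating union over horizontal partitions whose row sets overlap. This is an illustration only; the function and column names are invented, not taken from OGSA-DAI DQP.

```python
# Hypothetical sketch of a UnionPartitions-style operator: merging
# horizontally partitioned tables whose row sets may overlap.

def union_partitions(partitions, key):
    """Union rows from overlapping horizontal partitions,
    keeping one copy of each row identified by its key column."""
    seen = set()
    result = []
    for partition in partitions:        # stream each partition in turn
        for row in partition:
            k = row[key]
            if k not in seen:           # skip rows already emitted
                seen.add(k)
                result.append(row)
    return result

# Two partitions of a table with an overlapping section (id 2).
p1 = [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]
p2 = [{"id": 2, "name": "b"}, {"id": 3, "name": "c"}]
rows = union_partitions([p1, p2], key="id")
```

A real operator would also need the partition metadata the thesis adds to the Federation Description, so the planner knows which partitions may overlap.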
3

Fomkin, Ruslan. "Optimization and Execution of Complex Scientific Queries." Doctoral thesis, Uppsala : Acta Universitatis Upsaliensis, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-9514.

4

Ahmed, Ejaz. "A grid enabled staging DBMS method for data Mapping, Matching & Loading." Thesis, University of Bedfordshire, 2011. http://hdl.handle.net/10547/204951.

Abstract:
This thesis is concerned with the need to deal with data anomalies, inconsistencies and redundancies within the context of data integration in grids. A data Mapping, Matching and Loading (MML) process based on the Grid Staging Catalogue Service (MML-GSCATS) method is identified. In particular, the MML-GSCATS method consists of two mathematical algorithms for the MML processes. It defines an intermediate staging facility in order to process, upload and integrate data from various small- to large-sized data repositories. With this in mind, it expands the integration notion of a database management system (DBMS) to include the MML-GSCATS method in traditional distributed and grid environments. The data mapping employed takes the form of value correspondences between source and target databases, whilst data matching consolidates distinct catalogue schemas of federated databases to access information seamlessly. To address anomalies and inconsistencies in the grid, the MML processes are applied to a healthcare case study with developed scenarios. These scenarios were used to test the MML-GSCATS method with the help of a software prototyping toolkit. Testing set benchmarks for performance, reliability and error detection (anomalies and redundancies). Cross-scenario data sets were formulated, and the results of the scenarios were compared against these benchmarks, allowing the MML-GSCATS methodology to be compared with traditional and current grid methods. Results from the testing and experiments demonstrate that MML-GSCATS is a valid method for identifying data anomalies, inconsistencies and redundancies produced during loading, and that it performs better than traditional methods.
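The data-mapping step described above, value correspondences between source and target databases, can be sketched minimally as follows. The column names and the `map_row` helper are hypothetical, not taken from the MML-GSCATS prototype.

```python
# Minimal sketch of data mapping as value correspondences:
# each target column is mapped to the source column it comes from.

CORRESPONDENCES = {            # target column -> source column
    "patient_id": "pid",
    "surname": "last_name",
}

def map_row(source_row):
    """Rewrite a source row into the target schema, keeping only
    the columns covered by a correspondence."""
    return {tgt: source_row[src] for tgt, src in CORRESPONDENCES.items()}

# A source row from one federated database; 'ward' has no
# correspondence and is dropped during mapping.
row = map_row({"pid": 7, "last_name": "Smith", "ward": "A"})
```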
5

Xiang, Helen X. "A grid-based distributed database solution for large astronomy datasets." Thesis, University of Portsmouth, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.494003.

Abstract:
The volume of digital astronomical data is set to expand dramatically over the next ten years, as new satellites, telescopes and instruments come online. For example, both the VISTA [156] and DES [29] programmes will yield databases 20-30 terabytes in size in the coming decade. Storing and accessing such large datasets will be challenging, especially as scientific analysis will require coordinated use of several of these separate multi-terabyte databases, since they will contain complementary data, typically from observations made in different regions of the spectrum. We are exploring the use of emerging distributed database technologies for the management and manipulation of very large astronomical datasets.
6

Taratoris, Evangelos. "A single-pass grid-based algorithm for clustering big data on spatial databases." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/113168.

Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017
The problem of clustering multi-dimensional data has been well researched in the scientific community. It is a problem with wide scope and applications. With the rapid growth of very large databases, traditional clustering algorithms become inefficient due to insufficient memory capacity. Grid-based algorithms try to solve this problem by dividing the space into cells and then performing clustering on the cells. However, these algorithms also become inefficient when even the grid becomes too large to fit in memory. This thesis presents a new algorithm, SingleClus, that performs clustering on a 2-dimensional dataset in a single pass over the dataset. Moreover, it optimizes the amount of disk I/O operations while making modest use of main memory, and is therefore theoretically optimal in terms of performance. It modifies and improves on the Hoshen-Kopelman clustering algorithm while addressing that algorithm's fundamental challenges in a Big Data setting.
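The Hoshen-Kopelman idea that SingleClus builds on can be sketched as a single raster-order pass over occupied grid cells, merging the labels of neighbouring cells with union-find. This is a generic illustration of the classic algorithm, not the SingleClus algorithm itself.

```python
# Single-pass grid clustering in the Hoshen-Kopelman style:
# scan occupied cells once in raster order, union-merging each cell
# with its already-visited neighbours (above and to the left).

def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path compression
        x = parent[x]
    return x

def cluster_cells(occupied):
    """occupied: set of (row, col) grid cells; returns cell -> cluster root."""
    parent = {}
    for cell in sorted(occupied):       # one raster-order pass
        r, c = cell
        parent[cell] = cell
        for nb in ((r - 1, c), (r, c - 1)):   # neighbours already scanned
            if nb in occupied:
                ra, rb = find(parent, cell), find(parent, nb)
                if ra != rb:
                    parent[ra] = rb     # merge the two clusters
    return {cell: find(parent, cell) for cell in occupied}

cells = {(0, 0), (0, 1), (1, 1), (3, 3)}
labels = cluster_cells(cells)           # two clusters: one 3-cell, one singleton
```

The Big Data difficulty the thesis targets arises when even `parent` no longer fits in memory, which is what forces careful disk I/O management.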
7

Hu, Zhengyu, and D. Phillip Guertin. "The Effect of GIS Database Grid Size on Hydrologic Simulation Results." Arizona-Nevada Academy of Science, 1991. http://hdl.handle.net/10150/296461.

Abstract:
From the Proceedings of the 1991 Meetings of the Arizona Section - American Water Resources Association and the Hydrology Section - Arizona-Nevada Academy of Science - April 20, 1991, Northern Arizona University, Flagstaff, Arizona
The use of geographic information systems (GIS) for assessing the hydrologic effects of management is increasing. In the near future most of our spatial or "mapped" information will come from GIS. The direct linkage of hydrologic simulation models to GIS should make the assessment process more efficient and powerful, allowing managers to quickly evaluate different landscape designs. This study investigates the effect that the resolution of GIS databases has on hydrologic simulation results for an urban watershed. The hydrologic model used in the study was the Soil Conservation Service Curve Number model, which computes the volume of runoff from rainfall events. A GIS database was created for High School Wash, an urban watershed in Tucson, Arizona. Fifteen rainfall-runoff events were used to test the simulation results. Five different grid sizes, ranging from 25x25 feet to 300x300 feet, were evaluated. The results indicate that the higher the resolution, the better the simulation results: the average ratio of simulated to observed runoff volumes ranged from 0.98 for the 25x25 feet case to 0.43 for the 300x300 feet case.
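The Curve Number calculation behind the model named above is standard and can be written down directly for a single event; the curve number and rainfall values below are illustrative (English units, with the usual initial abstraction ratio of 0.2).

```python
# SCS Curve Number runoff for one rainfall event (English units, inches):
#   S  = 1000/CN - 10        potential maximum retention
#   Ia = 0.2 * S             initial abstraction
#   Q  = (P - Ia)^2 / (P - Ia + S)   when P > Ia, else Q = 0

def scs_runoff(p_inches, cn):
    """Runoff depth Q (inches) from rainfall P and curve number CN."""
    s = 1000.0 / cn - 10.0
    ia = 0.2 * s
    if p_inches <= ia:
        return 0.0              # all rainfall retained, no runoff
    return (p_inches - ia) ** 2 / (p_inches - ia + s)

q = scs_runoff(2.0, 85)         # 2 in of rain on a CN-85 watershed, ~0.8 in runoff
```

In the study, the grid resolution affects the curve numbers assigned to cells, which is how resolution propagates into the simulated runoff volumes.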
8

Rokhsari, Mirfakhradin Derakhshan. "A development of the grid file for the storage of binary relations." Thesis, Birkbeck (University of London), 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.388717.

9

Xu, Kai. "Database support for multi-resolution terrain models." St. Lucia, Qld, 2004. http://www.library.uq.edu.au/pdfserve.php?image=thesisabs/absthe17869.pdf.

10

Paventhan, Arumugam. "Grid approaches to data-driven scientific and engineering workflows." Thesis, University of Southampton, 2007. https://eprints.soton.ac.uk/49926/.

Abstract:
Enabling the full life cycle of scientific and engineering workflows requires robust middleware and services that support near-real-time data movement, high-performance processing and effective data management. In this context, we consider two related technology areas: Grid computing, which is fast emerging as an accepted way forward for large-scale, distributed and multi-institutional resource sharing, and database systems, whose capabilities are undergoing continuous change, providing new possibilities for scientific data management on the Grid. In this thesis, we look into the challenging requirements of integrating data-driven scientific and engineering experiment workflows onto the Grid. We consider wind tunnels, which house multiple experiments with differing characteristics, as an application exemplar. This thesis contributes two approaches while attempting to tackle the following questions: How can domain-specific workflow activity development be supported while hiding the underlying complexity? Can new experiments be added to the system easily? How can the overall turnaround time be reduced by end-to-end experimental workflow support? In the first approach, we show how experiment-specific workflows can help accelerate application development using Grid services. This has been realized with the development of MyCoG, the first Commodity Grid toolkit for .NET supporting multi-language programmability. In the second, we present an alternative approach based on federated database services to realize an end-to-end experimental workflow. We show, with the help of a real-world example, how database services can be building blocks for scientific and engineering workflows.
11

Tan, Koon Leai Larry. "An integrated methodology for creating composed Web/grid services." Thesis, University of Stirling, 2009. http://hdl.handle.net/1893/2515.

Abstract:
This thesis presents an approach to design, specify, validate, verify, implement, and evaluate composed web/grid services. Web and grid services can be composed to create new services with complex behaviours. The BPEL (Business Process Execution Language) standard was created to enable the orchestration of web services, but there have also been investigations into its use for grid services. BPEL specifies the implementation of service composition but has no formal semantics; implementations are in practice checked by testing. Formal methods are used in general to define an abstract model of system behaviour that allows simulation and reasoning about properties; this approach can detect and reduce potentially costly errors at design time. CRESS (Communication Representation Employing Systematic Specification) is a domain-independent, graphical, abstract notation and integrated toolset for developing composite web services. The original version of CRESS had automated support for formal specification in LOTOS (Language Of Temporal Ordering Specification), formal validation with MUSTARD (Multiple-Use Scenario Testing and Refusal Description), and implementation in BPEL4WS, an early version of the BPEL standard. This thesis work has extended CRESS and its integrated tools to design, specify, validate, verify, implement, and evaluate composed web/grid services. The work has extended the CRESS notation to support a wider range of service compositions, and has applied it to grid services as a new domain. The thesis presents two new tools, CLOVE (CRESS Language-Oriented Verification Environment) and MINT (MUSTARD Interpreter), to respectively support formal verification and implementation testing. New work has also extended CRESS to automate the implementation of composed services using the more recent BPEL standard, WS-BPEL 2.0.
12

Lourenso, Reinaldo. "Segmentação de objetos complexos em um sistema de banco de dados objeto relacional baseado em GRIDS." Universidade de São Paulo, 2005. http://www.teses.usp.br/teses/disponiveis/3/3142/tde-16032006-121211/.

Abstract:
This thesis proposes, develops and implements an infrastructure for managing a Grid-based database. Complex objects such as audio, video and software are always stored in database systems in their entirety: regardless of its size, a document is not fragmented by the database management system (DBMS) when stored. Data modelling methodologies likewise do not specify the fragmentation or segmentation of a complex document on storage, since they only contemplate fragmentation of the storage structures (relations or classes), not of the objects stored in them. When we evaluate the performance of systems that store complex objects, we find that the size of the stored objects considerably influences performance. Since multimedia objects, software packages and the like require large amounts of disk space, traditional replication or copy-distribution methods become very costly and at times inefficient. With the infrastructure developed in this work it was possible to segment and distribute complex attributes of table rows stored in Grid-based databases. Our solution improved the performance of a system that needed to store documents above a given size limit. The possible use of LDPC codes in this infrastructure was also tested successfully; however, we did not observe gains that would justify their use in applications similar to ours.
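The segmentation idea can be illustrated by splitting a large object into fixed-size segments placed round-robin across grid nodes and reassembled on read. All names here are invented for illustration, not taken from the thesis prototype.

```python
# Sketch of segmenting a complex object across grid nodes:
# split into fixed-size chunks, scatter round-robin, rejoin on read.

def segment(blob, size):
    """Split a byte string into fixed-size chunks (last may be shorter)."""
    return [blob[i:i + size] for i in range(0, len(blob), size)]

def place(chunks, nodes):
    """Assign chunk index -> node, round-robin across grid nodes."""
    return {i: nodes[i % len(nodes)] for i in range(len(chunks))}

def reassemble(chunks):
    """Concatenate chunks back into the original object."""
    return b"".join(chunks)

blob = b"x" * 10_000                    # a "complex object" above the size limit
chunks = segment(blob, 4096)            # 3 segments: 4096 + 4096 + 1808 bytes
layout = place(chunks, ["node1", "node2"])
```

A real system would add per-segment metadata (object id, sequence number, checksum) so segments can be located and validated independently.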
13

Nguyen, Thi Thanh Quynh. "A new approach for distributed programming in smart grids." Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAT079.

Abstract:
The main challenge of smart grid control and management is the amount of data to be processed. Traditional centralized techniques, even if they offer ease of management through their global view of the grid, do not in practice support the continuous growth of data volumes (limited bandwidth, bottlenecks, amount of computation, etc.). The transition to decentralized (distributed) control and management, where the system is made up of a multitude of cooperating computing units, offers very good prospects (robustness, computation close to the producers and consumers of data, exploitation of all available resources), but remains challenging to implement. Programming distributed algorithms requires taking into account the data exchanges and the synchronization of the participating units, and this complexity increases with the number of units. In this thesis, we propose an innovative, high-level programming approach that masks these difficulties. First, we propose to abstract all the smart grid's computing units (smart meters, sensors, data concentrators, etc.) as a distributed database. Each computing unit hosts a local database, and only the data needed to continue a computation are exchanged with other units, which reduces the use of the available bandwidth. The use of a declarative data-manipulation language simplifies the programming of control and management applications. We also propose SmartLog, a rule-based language (based on Datalog and its derivatives) dedicated to these applications; it facilitates distributed programming of smart grid applications by reacting immediately to any change in the data. Even with a language such as SmartLog, it remains necessary to handle data exchange and participant synchronization. We therefore propose an approach that simplifies distributed programming. This approach, named CPDE for Centralized Programming and Distributed Execution, consists of two steps: (i) programming the centralized application in SmartLog, as this is easier, and (ii) translating the centralized program into a distributed program based on the actual location of the data. To do this, we propose a semi-automatic SmartLog rule distribution algorithm. To demonstrate the interest of CPDE, we conducted a comprehensive experiment using applications and algorithms actually used in smart grids, such as secondary control in isolated micro-grids and fair voltage regulation. The experiment was carried out on a real-time electrical network simulation platform, with an OPAL-RT simulation machine and a network of Raspberry Pis representing the computing units (their performance is quite comparable to the real equipment). This experiment validated the behaviour and performance of the distributed programs designed with CPDE against their centralized SmartLog versions and their reference versions implemented in Java. The impact of different parameters, such as the number of computing units and different data distribution alternatives, is studied as well.
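The reactive, Datalog-style rule evaluation attributed to SmartLog can be illustrated with a toy rule that is re-evaluated whenever the base facts change. The relations (`measure`, `overload`) and the threshold are invented for the example; this is not SmartLog syntax.

```python
# Toy Datalog-style derivation: overload(U) :- measure(U, V), V > limit.
# The derived relation is recomputed whenever a new fact arrives,
# mimicking a rule language that reacts immediately to data changes.

def evaluate(measures, limit):
    """Derive the set of overloaded units from measure facts."""
    return {unit for unit, v in measures if v > limit}

facts = {("meter1", 3.2), ("meter2", 7.9)}
overloaded = evaluate(facts, limit=5.0)     # meter2 exceeds the limit
facts = facts | {("meter3", 9.1)}           # a new measurement arrives...
overloaded = evaluate(facts, limit=5.0)     # ...and the rule fires again
```

In the distributed setting the thesis describes, the rule distribution algorithm would decide on which unit each such rule runs, based on where the `measure` facts actually live.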
14

Navarro, Martín Joan. "From cluster databases to cloud storage: Providing transactional support on the cloud." Doctoral thesis, Universitat Ramon Llull, 2015. http://hdl.handle.net/10803/285655.

Abstract:
Durant les últimes tres dècades, les limitacions tecnològiques (com per exemple la capacitat dels dispositius d'emmagatzematge o l'ample de banda de les xarxes de comunicació) i les creixents demandes dels usuaris (estructures d'informació, volums de dades) han conduït l'evolució de les bases de dades distribuïdes. Des dels primers repositoris de dades per arxius plans que es van desenvolupar en la dècada dels vuitanta, s'han produït importants avenços en els algoritmes de control de concurrència, protocols de replicació i en la gestió de transaccions. No obstant això, els reptes moderns d'emmagatzematge de dades que plantegen el Big Data i el cloud computing—orientats a millorar la limitacions pel que fa a escalabilitat i elasticitat de les bases de dades estàtiques—estan empenyent als professionals a relaxar algunes propietats importants dels sistemes transaccionals clàssics, cosa que exclou a diverses aplicacions les quals no poden encaixar en aquesta estratègia degut a la seva alta dependència transaccional. El propòsit d'aquesta tesi és abordar dos reptes importants encara latents en el camp de les bases de dades distribuïdes: (1) les limitacions pel que fa a escalabilitat dels sistemes transaccionals i (2) el suport transaccional en repositoris d'emmagatzematge en el núvol. Analitzar les tècniques tradicionals de control de concurrència i de replicació, utilitzades per les bases de dades clàssiques per suportar transaccions, és fonamental per identificar les raons que fan que aquests sistemes degradin el seu rendiment quan el nombre de nodes i / o quantitat de dades creix. A més, aquest anàlisi està orientat a justificar el disseny dels repositoris en el núvol que deliberadament han deixat de banda el suport transaccional. 
Efectivament, apropar el paradigma de l'emmagatzematge en el núvol a les aplicacions que tenen una forta dependència en les transaccions és fonamental per a la seva adaptació als requeriments actuals pel que fa a volums de dades i models de negoci. Aquesta tesi comença amb la proposta d'un simulador de protocols per a bases de dades distribuïdes estàtiques, el qual serveix com a base per a la revisió i comparativa de rendiment dels protocols de control de concurrència i les tècniques de replicació existents. Pel que fa a la escalabilitat de les bases de dades i les transaccions, s'estudien els efectes que té executar diferents perfils de transacció sota diferents condicions. Aquesta anàlisi contínua amb una revisió dels repositoris d'emmagatzematge de dades en el núvol existents—que prometen encaixar en entorns dinàmics que requereixen alta escalabilitat i disponibilitat—, el qual permet avaluar els paràmetres i característiques que aquests sistemes han sacrificat per tal de complir les necessitats actuals pel que fa a emmagatzematge de dades a gran escala. Per explorar les possibilitats que ofereix el paradigma del cloud computing en un escenari real, es presenta el desenvolupament d'una arquitectura d'emmagatzematge de dades inspirada en el cloud computing la qual s’utilitza per emmagatzemar la informació generada en les Smart Grids. Concretament, es combinen les tècniques de replicació en bases de dades transaccionals i la propagació epidèmica amb els principis de disseny usats per construir els repositoris de dades en el núvol. Les lliçons recollides en l'estudi dels protocols de replicació i control de concurrència en el simulador de base de dades, juntament amb les experiències derivades del desenvolupament del repositori de dades per a les Smart Grids, desemboquen en el que hem batejat com Epidemia: una infraestructura d'emmagatzematge per Big Data concebuda per proporcionar suport transaccional en el núvol. 
A més d'heretar els beneficis dels repositoris en el núvol en quant a escalabilitat, Epidemia inclou una capa de gestió de transaccions que reenvia les transaccions dels clients a un conjunt jeràrquic de particions de dades, cosa que permet al sistema oferir diferents nivells de consistència i adaptar elàsticament la seva configuració a noves demandes de càrrega de treball. Finalment, els resultats experimentals posen de manifest la viabilitat de la nostra contribució i encoratgen als professionals a continuar treballant en aquesta àrea.
Over the past three decades, technology constraints (e.g., the capacity of storage devices, communication network bandwidth) and an ever-increasing set of user demands (e.g., information structures, data volumes) have driven the evolution of distributed databases. Since the flat-file data repositories developed in the early eighties, there have been important advances in concurrency control algorithms, replication protocols, and transaction management. However, modern data storage concerns posed by Big Data and cloud computing—aimed at overcoming the scalability and elasticity limitations of classic databases—are pushing practitioners to relax some important properties of transactions, which excludes several applications that cannot fit this strategy due to their intrinsic transactional nature. The purpose of this thesis is to address two important challenges still latent in distributed databases: (1) the scalability limitations of transactional databases and (2) providing transactional support on cloud-based storage repositories. Analyzing the traditional concurrency control and replication techniques used by classic databases to support transactions is critical to identify the reasons that make these systems degrade their throughput when the number of nodes and/or the amount of data grows. This analysis also serves to justify the design rationale behind cloud repositories, in which transactions have generally been neglected. Furthermore, enabling applications that are strongly dependent on transactions to take advantage of the cloud storage paradigm is crucial for their adaptation to current data demands and business models. This dissertation starts by proposing a custom protocol simulator for static distributed databases, which serves as a basis for revising and comparing the performance of existing concurrency control protocols and replication techniques. 
As this thesis is especially concerned with transactions, the effects of different transaction profiles under different conditions on database scalability are studied. This analysis is followed by a review of existing cloud storage repositories—which claim to be highly dynamic, scalable, and available—leading to an evaluation of the parameters and features that these systems have sacrificed in order to meet current large-scale data storage demands. To further explore the possibilities of the cloud computing paradigm in a real-world scenario, a cloud-inspired approach to storing data from Smart Grids is presented. More specifically, the proposed architecture combines classic database replication techniques and epidemic update propagation with the design principles of cloud-based storage. The key insights collected when prototyping the replication and concurrency control protocols in the database simulator, together with the experience of building a large-scale storage repository for Smart Grids, are wrapped up into what we have coined Epidemia: a storage infrastructure conceived to provide transactional support on the cloud. In addition to inheriting the benefits of highly scalable cloud repositories, Epidemia includes a transaction management layer that forwards client transactions to a hierarchical set of data partitions, which allows the system to offer different consistency levels and elastically adapt its configuration to incoming workloads. Finally, experimental results highlight the feasibility of our contribution and encourage practitioners to pursue further research in this area.
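The epidemic update propagation that Epidemia combines with cloud design principles can be illustrated with a minimal push-gossip sketch. The node names, fan-out, and payload here are hypothetical illustrations, not taken from the thesis:

```python
import random

def gossip(nodes, origin, update, fanout=2, seed=0, max_rounds=100):
    """Epidemic push: every node that already holds the update forwards it
    to `fanout` randomly chosen peers each round, until all nodes have it."""
    rng = random.Random(seed)
    state = {n: None for n in nodes}
    state[origin] = update
    rounds = 0
    while any(v is None for v in state.values()) and rounds < max_rounds:
        infected = [n for n, v in state.items() if v is not None]
        for n in infected:
            for peer in rng.sample(nodes, fanout):
                state[peer] = update
        rounds += 1
    return state, rounds

replicas, rounds = gossip([f"node{i}" for i in range(8)], "node0",
                          {"meter": "m-17", "reading": 42})
```

With a deterministic seed the spread is reproducible; in a real system each round would be a network exchange and the payload a replicated write.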
APA, Harvard, Vancouver, ISO, and other styles
15

Neumann, Detlef, Gunter Teichmann, Frank Wehner, and Martin Engelien. "VU-Grid – Integrationsplattform für Virtuelle Unternehmen." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-155485.

Full text
Abstract:
The project "Collaboration Grid for Virtual Enterprises" (VU-Grid) is a collaborative research project involving the Faculty of Computer Science of Technische Universität Dresden and the medium-sized IT service provider SALT Solutions GmbH. The project is funded by the Sächsische Aufbaubank. The goal of the research project is the prototypical development of an integration platform (collaboration grid) to support the changing, cross-company business processes in the environment of an IT service provider, using SALT Solutions GmbH as an example. The theoretical basis of the implementation is the concept of the Virtual Information System, developed as part of D. Neumann's doctoral research.
APA, Harvard, Vancouver, ISO, and other styles
16

Neumann, Detlef, Gunter Teichmann, Frank Wehner, and Martin Engelien. "VU-Grid – Integrationsplattform für Virtuelle Unternehmen." Technische Universität Dresden, 2005. https://tud.qucosa.de/id/qucosa%3A28381.

Full text
Abstract:
The project "Collaboration Grid for Virtual Enterprises" (VU-Grid) is a collaborative research project involving the Faculty of Computer Science of Technische Universität Dresden and the medium-sized IT service provider SALT Solutions GmbH. The project is funded by the Sächsische Aufbaubank. The goal of the research project is the prototypical development of an integration platform (collaboration grid) to support the changing, cross-company business processes in the environment of an IT service provider, using SALT Solutions GmbH as an example. The theoretical basis of the implementation is the concept of the Virtual Information System, developed as part of D. Neumann's doctoral research.
APA, Harvard, Vancouver, ISO, and other styles
17

Čupačenko, Aleksandr. "Tinklinių duomenų bazių sandara tinklo paslaugų suradimui." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2004. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2004~D_20040924_105246-65674.

Full text
Abstract:
Grids are collaborative distributed Internet systems characterized by large scale, heterogeneity, lack of central control, multiple autonomous administrative domains, unreliable components and frequent dynamic change. In such systems, it is desirable to maintain and query dynamic and timely information about active participants such as services, resources and user communities. The web services vision promises that programs become more flexible, adaptive and powerful by querying Internet databases (registries) at runtime in order to discover information and network-attached building blocks, enabling the assembly of distributed higher-level components. In support of this vision, we introduce the Web Service Discovery Architecture (WSDA) and the hyper registry, a centralized database node for the discovery of dynamic distributed content. It supports XQueries over a tuple set drawn from a dynamic XML data model.
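A toy version of such a registry of dynamic content, tuples that expire unless re-published, might look like the sketch below. The class name, TTL handling, and predicate-based query are illustrative assumptions; the actual hyper registry evaluates XQueries over XML:

```python
import time

class HyperRegistry:
    """Toy registry for dynamic service descriptions: publishers re-publish
    tuples periodically; stale tuples expire after their time-to-live."""
    def __init__(self):
        self._tuples = {}   # key -> (content, expiry time)

    def publish(self, key, content, ttl=30.0, now=None):
        now = time.time() if now is None else now
        self._tuples[key] = (content, now + ttl)

    def query(self, predicate, now=None):
        """Return the content of all live tuples matching the predicate."""
        now = time.time() if now is None else now
        return [c for c, exp in self._tuples.values() if exp > now and predicate(c)]

reg = HyperRegistry()
reg.publish("svc1", {"type": "storage", "host": "a.example.org"}, ttl=30, now=0.0)
reg.publish("svc2", {"type": "compute", "host": "b.example.org"}, ttl=5, now=0.0)
storage = reg.query(lambda c: c["type"] == "storage", now=10.0)  # svc2 has expired
```

Passing `now` explicitly keeps the example deterministic; a real registry would use wall-clock time.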
APA, Harvard, Vancouver, ISO, and other styles
18

Bílek, Ondřej. "Geografické informační systémy." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2013. http://www.nusl.cz/ntk/nusl-220325.

Full text
Abstract:
This diploma thesis focuses on the problems of geographical information systems: an analysis of commercial as well as freeware software tools for GIS, of vector and raster representations of geographical data, and of basic methods for data representation in GIS. Another important part of this thesis is the design of a web application that can be used as a tool for visualizing business data from a commercial company in comparison with data published by the Czech Statistical Office. The application is built on the free Google Maps API V3 platform.
APA, Harvard, Vancouver, ISO, and other styles
19

Tang, Jia. "An agent-based peer-to-peer grid computing architecture." Access electronically, 2005. http://www.library.uow.edu.au/adt-NWU/public/adt-NWU20060508.151716/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Pehal, Petr. "Systém řízení báze dat v operační paměti." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2013. http://www.nusl.cz/ntk/nusl-236348.

Full text
Abstract:
The focus of this thesis is a proprietary database interface for managing tables in memory. It begins with a short introduction to databases, then presents the concept of in-memory database systems and discusses the main advantages and disadvantages of this approach. The theoretical introduction ends with a brief overview of existing systems. After that, basic information about the energy management system RIS is presented together with the system's in-memory database interface. The work then turns to the specification and design of the required modifications and extensions of the interface, followed by implementation details and test results. In conclusion, the results are summarized and future development is discussed.
APA, Harvard, Vancouver, ISO, and other styles
21

Miles, David B. L. "A User-Centric Tabular Multi-Column Sorting Interface For Intact Transposition Of Columnar Data." Diss., CLICK HERE for online access, 2006. http://contentdm.lib.byu.edu/ETD/image/etd1160.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Head, Michael Reuben. "Analysis and optimization for processing grid-scale XML datasets." Diss., Online access via UMI:, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
23

Kakugawa, Fernando Ryoji. "Integração de bancos de dados heterogêneos utilizando grades computacionais." Universidade de São Paulo, 2010. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-07012011-145400/.

Full text
Abstract:
Databases are usually designed to support a specific application domain, which makes data access and data sharing hard and arduous when database integration is required. Research projects have therefore been developed to integrate heterogeneous database systems, with approaches ranging from domain-specific application tools to solutions as extreme as completely redefining and redesigning all the databases involved. Given these open questions with no definitive answers, this work presents concepts, strategies and an implementation for heterogeneous database integration. The DIGE tool was developed to provide access to heterogeneous, geographically distributed databases using Grid computing: each database keeps its data stored locally, while the user application sees the data as if it were stored locally. Programmers can thus manipulate data in conventional SQL with no concern about database location or schema, and system administrators can add or remove databases from the system without requiring changes to the final user application.
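The location-transparent SQL access that DIGE provides can be sketched by unioning one query over several local stand-in databases. In-memory SQLite instances play the role of the institutions' databases here; the real system federates remote heterogeneous databases over a Grid:

```python
import sqlite3

def make_site(rows):
    """Stand-in for one institution's database (an in-memory SQLite)."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE records (id INTEGER, payload TEXT)")
    db.executemany("INSERT INTO records VALUES (?, ?)", rows)
    return db

def federated_query(sites, sql):
    """Run the same SQL on every site and union the results, so callers
    see one logical table regardless of where each row actually lives."""
    out = []
    for db in sites:
        out.extend(db.execute(sql).fetchall())
    return out

sites = [make_site([(1, "ana"), (2, "bob")]), make_site([(3, "eva")])]
rows = federated_query(sites, "SELECT id, payload FROM records")
```

Adding or removing a site only changes the `sites` list, which mirrors the administrator-side flexibility the abstract describes.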
APA, Harvard, Vancouver, ISO, and other styles
24

Duque, Hector Brunie Lionel Magnin Isabelle. "Conception et mise en oeuvre d'un environnement logiciel de manipulation et d'accès à des données réparties." Villeurbanne : Doc'INSA, 2006. http://docinsa.insa-lyon.fr/these/pont.php?id=duque.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Ghemtio, Wafo Léo Aymar. "Simulation numérique et approche orientée connaissance pour la découverte de nouvelles molécules thérapeutiques." Thesis, Nancy 1, 2010. http://www.theses.fr/2010NAN10103/document.

Full text
Abstract:
Therapeutic innovation has traditionally benefited from the combination of experimental screening and molecular modelling. In practice, however, the latter is often limited by the shortage of structural and biological information. Today, the situation has completely changed with the high-throughput sequencing of the human genome and the advances in determining the three-dimensional structures of proteins. This gives access to an enormous amount of data which can be used to search for new treatments for a large number of diseases. In this respect, computational approaches to high-throughput virtual screening (HTVS) offer an alternative or a complement to experimental methods, saving time in the discovery of new treatments. However, most of these approaches suffer from the same limitations. One is the cost and computing time required to estimate the binding of every molecule in a large data bank to a target, which can be considerable in a high-throughput context; the accuracy of the results obtained is another evident challenge in the domain. The need to manage a large amount of heterogeneous data is also particularly crucial. To overcome the current limitations of HTVS and to optimize the first stages of the drug discovery process, I set up an innovative methodology with two advantages. Firstly, it manages a large mass of heterogeneous data and extracts knowledge from it. Secondly, it distributes the necessary calculations on a grid computing platform containing several thousand processors. The whole methodology is integrated into a multiple-step virtual screening funnel. The purpose is to take the available knowledge about the problem into account, in the form of constraints, in order to optimize the accuracy of the results and the costs in terms of time and money at the various stages of high-throughput virtual screening.
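The multiple-step screening funnel can be sketched as a pipeline of increasingly expensive filters, where cheap ones run first so the costly stages see fewer candidates. The molecule descriptors and thresholds below are invented for illustration:

```python
def screening_funnel(molecules, stages):
    """Multi-step virtual screening: each stage is a (score_fn, threshold)
    pair; only molecules scoring at or above the threshold survive."""
    survivors = list(molecules)
    for score, threshold in stages:
        survivors = [m for m in survivors if score(m) >= threshold]
    return survivors

# Hypothetical toy descriptors: (name, molecular weight, docking score)
library = [("m1", 320, 0.9), ("m2", 610, 0.95), ("m3", 450, 0.4)]
hits = screening_funnel(
    library,
    [
        (lambda m: 1.0 if m[1] < 500 else 0.0, 0.5),  # cheap property filter
        (lambda m: m[2], 0.8),                        # "expensive" docking stage
    ],
)
```

In a grid deployment each stage would be fanned out over many processors; the funnel structure itself is unchanged.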
APA, Harvard, Vancouver, ISO, and other styles
26

Kulhavý, Lukáš. "Praktické uplatnění technologií data mining ve zdravotních pojišťovnách." Master's thesis, Vysoká škola ekonomická v Praze, 2010. http://www.nusl.cz/ntk/nusl-77726.

Full text
Abstract:
This thesis focuses on data mining technology and its possible practical use in health insurance companies. It defines the term data mining and its relation to knowledge discovery in databases, explaining the process, among other things, through the methodologies that describe its individual phases (CRISP-DM, SEMMA). It also surveys possible practical applications and the technologies and products available on the market, both free and commercial. An introduction to the main data mining methods and specific algorithms (decision trees, association rules, neural networks and other methods) serves as the theoretical basis on which practical applications over real data from real health insurance companies are built: finding the causes of increased remittances and predicting customer churn. I solved these applications in the freely available systems Weka and LISP-Miner. The objective is to demonstrate data mining capabilities over this type of data, and the capabilities of the Weka and LISP-Miner systems in solving tasks according to the CRISP-DM methodology. The last part of the thesis is devoted to cloud and grid computing in conjunction with data mining, offering an insight into the possibilities of these technologies and their benefits for data mining: cloud computing is presented on the Amazon EC2 system, while grid computing can be used through the Weka Experimenter interface.
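Of the algorithms listed, association rules are the easiest to illustrate: a rule A → B is kept when its support and confidence exceed chosen thresholds. A minimal computation over toy transactions (the item codes are hypothetical, not insurance data):

```python
def support(itemset, transactions):
    """Fraction of transactions containing every item of the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent, transactions):
    """Of the transactions containing the antecedent, the fraction that
    also contain the consequent."""
    return support(antecedent | consequent, transactions) / support(antecedent, transactions)

txs = [{"A", "B"}, {"A", "B", "C"}, {"A", "C"}, {"B", "C"}]
s = support({"A", "B"}, txs)       # both A and B appear in 2 of 4 transactions
c = confidence({"A"}, {"B"}, txs)  # B appears in 2 of the 3 transactions with A
```

Mining systems like LISP-Miner search the space of such rules systematically; this sketch only shows the two measures being evaluated.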
APA, Harvard, Vancouver, ISO, and other styles
27

De, Vlieger Paul. "Création d'un environnement de gestion de base de données "en grille" : application à l'échange de données médicales." Phd thesis, Université d'Auvergne - Clermont-Ferrand I, 2011. http://tel.archives-ouvertes.fr/tel-00719688.

Full text
Abstract:
Transporting medical data, all the more so when it is nominative, involves numerous constraints, whether technical, legal or relational. New technologies, particularly those arising from grid computing, offer a new approach to information sharing. Indeed, the development of grid middleware, notably that of the European EGEE project, has opened new perspectives for distributed data access. The main constraints of a medical data sharing system, beyond security requirements, come from the way information is collected and accessed. Collecting, moving, concentrating and managing data usually follows the traditional client-server model and runs into numerous problems of ownership, control, updating, availability and system sizing. The methodology proposed in this thesis takes a different philosophy to information access. Using the full access control and security layer of computing grids, coupled with robust user authentication methods, it offers decentralized access to medical data. The main advantage is that data providers keep control over their information and are freed from managing the medical data, since the system is able to fetch the data directly from its source. This approach is not completely transparent, however, and all the mechanisms for patient identification and data linkage had to be completely rethought and rewritten to be compatible with a distributed database management system. 
The RSCA project (Réseau Sentinelle Cancer Auvergne - www.e-sentinelle.org) is the application framework of this work. Its objective is to pool the Auvergne data sources on organized breast and colon cancer screening. The aims are multiple: to allow, while respecting the laws in force, the exchange of cancer data between medical actors and, in a second phase, to offer support for statistical and epidemiological analysis.
APA, Harvard, Vancouver, ISO, and other styles
28

Saša, Dević. "Приступи развоју базe података Општег информационог модела за електроенергетске мреже." Phd thesis, Univerzitet u Novom Sadu, Fakultet tehničkih nauka u Novom Sadu, 2019. https://www.cris.uns.ac.rs/record.jsf?recordId=108886&source=NDLTD&language=en.

Full text
Abstract:
The Common Information Model (CIM) is used for describing power grid networks and for data exchange among transmission system operators (TSO). As the model became widely used, there arose a need to store such models. In this thesis we present a methodological approach to the development of a database that supports relatively easy storing and managing of CIM instances, which describe the current, active state of the system. Tracking changes and restoring CIM instances to their previous states are also supported. We expect that such a methodological approach will ease the adoption of the CIM model in various domain-specific software solutions.
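The active-state-plus-history idea can be sketched with a small versioned store. The class and the CIM-style `mrid` identifiers are illustrative assumptions; the thesis develops an actual database schema, not this in-memory structure:

```python
class VersionedStore:
    """Keeps the active state of model instances plus their full history,
    so any instance can be restored to an earlier state."""
    def __init__(self):
        self.active = {}    # mrid -> current attribute dict
        self.history = {}   # mrid -> list of superseded attribute dicts

    def upsert(self, mrid, attrs):
        """Replace an instance's active state, archiving the old state."""
        if mrid in self.active:
            self.history.setdefault(mrid, []).append(self.active[mrid])
        self.active[mrid] = dict(attrs)

    def restore(self, mrid, version):
        """Roll an instance back to history index `version` (the rollback
        itself is recorded, so nothing is ever lost)."""
        self.upsert(mrid, self.history[mrid][version])

store = VersionedStore()
store.upsert("line-1", {"ratedCurrent": 400})
store.upsert("line-1", {"ratedCurrent": 630})
store.restore("line-1", 0)   # back to the original rating
```

Because `restore` goes through `upsert`, the rejected state stays in the history rather than being discarded.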
APA, Harvard, Vancouver, ISO, and other styles
29

Ozel, Melda. "Behavior of concrete beams reinforced with 3-D fiber reinforced plastic grids." 2002. http://www.library.wisc.edu/databases/connect/dissertations.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Wang, Peng-Hsiang, and 王鵬翔. "Distributed Geo-Databases Integration using Data Grid." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/38637483423411558663.

Full text
Abstract:
Master's
Chung Yuan Christian University
Institute of Information Management
95
Remote sensing is a well-developed technology. In 3D applications especially, satellite remote sensing imagery and aerial photos are often used to build 3D models. For real-time 3D flight simulation, besides an interactive virtual geographic environment, service providers have to update the imagery and DTM frequently to generate a realistic 3D scene of a specific location. However, relevant materials such as remote sensing data and aerial photos are distributed among government departments, research agencies and private enterprises, and are kept in different databases. Obtaining a remote sensing image is very expensive, so we need to integrate the remote sensing resources of the different databases to save the cost of collecting new images and to reuse archived ones. The problem is that people lack a simple and convenient query interface for finding all the resources in the different databases needed to build a 3D scene. The purpose of this research is to use Grid techniques to integrate distributed geo-spatial databases and to provide a query interface, so users can search across geo-spatial databases and retrieve the remote sensing data they need for 3D flight simulation through GridFTP. The different imagery databases and remote sensing satellites are regarded as different ground observers, and Data Grid techniques integrate all of them. Through a Grid portal, people can use their own computers to access remote sensing imagery from databases at different organizations, and so monitor their area of interest in near real time. In the demonstration, to solve the problem of querying across different databases in software, we build a "Distributed Geo-Databases Integration using Data Grid" portal on the GT4 (Globus Toolkit 4.03) framework, using Grid operations and related technologies to solve coordination and cooperation problems. We show how to create a flight path between two points and use it to query the imagery and metadata databases. 
Since the databases are integrated, optical remote sensing images and DEM are delivered to users through GridFTP, and users can display them in 3D with commercial software. With the integrated databases we provide, users can get all the resources they need to build a realistic 3D scene under the same scenario.
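Querying imagery along a flight path reduces to finding which grid tiles the path crosses and then looking each tile up in the metadata databases. A minimal sketch, where tile size and the sampling density along the path are assumptions:

```python
def tiles_along_path(start, end, tile_size=1.0, steps=100):
    """Return the set of grid tiles a straight flight path crosses;
    each tile id could then key an imagery-metadata lookup."""
    (x0, y0), (x1, y1) = start, end
    tiles = set()
    for i in range(steps + 1):
        t = i / steps
        x, y = x0 + t * (x1 - x0), y0 + t * (y1 - y0)
        tiles.add((int(x // tile_size), int(y // tile_size)))
    return tiles

# Tiles needed for a hypothetical straight path between two waypoints
needed = tiles_along_path((0.5, 0.5), (2.5, 0.5))
```

The resulting tile ids would be sent as one query per database, with GridFTP then fetching the matching images.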
APA, Harvard, Vancouver, ISO, and other styles
31

Donaldson, William Walden. "Grid-graph partitioning." 2000. http://www.library.wisc.edu/databases/connect/dissertations.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Liao, Rong-Jhang, and 廖容章. "A signature-based Grid Index Design for Main-Memory Databases." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/98932298330756654626.

Full text
Abstract:
Master's
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
97
A large-scale RFID application often needs a highly efficient database system for data processing. This research is motivated by the strong demand for an efficient index structure for the main-memory database systems of RFID applications. In this paper, a signature-based grid index structure is proposed for efficient data queries and storage, together with an efficient methodology for locating duplicates and executing batch deletions and range queries based on application domain know-how. The design is implemented in the open-source main-memory database system H2 and evaluated with realistic workloads of RFID applications.
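A sketch of the idea behind a signature-based grid index: each cell carries a Bloom-style bit signature of its keys, so most lookups are rejected without scanning the cell. The hashing, cell layout and integer keys below are assumptions for illustration, not the thesis design:

```python
class SignatureGridIndex:
    """Grid index whose cells keep a bit signature of the keys stored in
    them; a lookup probes the signature first and only scans the cell's
    item list when the signature says the key might be present."""
    def __init__(self, cell_size=10, sig_bits=64):
        self.cell_size, self.sig_bits = cell_size, sig_bits
        self.cells = {}   # cell id -> (signature, list of (key, value))

    def _cell(self, key):
        return key // self.cell_size

    def _bit(self, key):
        return 1 << (hash(key) % self.sig_bits)

    def insert(self, key, value):
        sig, items = self.cells.get(self._cell(key), (0, []))
        self.cells[self._cell(key)] = (sig | self._bit(key), items + [(key, value)])

    def lookup(self, key):
        sig, items = self.cells.get(self._cell(key), (0, []))
        if not sig & self._bit(key):      # signature rules the key out cheaply
            return []
        return [v for k, v in items if k == key]

idx = SignatureGridIndex()
idx.insert(42, "tag-A")
idx.insert(42, "tag-B")     # a duplicate key, e.g. a re-read RFID tag
hits = idx.lookup(42)
```

Returning every value under a key makes duplicate detection a by-product of lookup, which is in the spirit of the duplicate-locating methodology the abstract mentions.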
APA, Harvard, Vancouver, ISO, and other styles
33

Tu, Manghui. "A data management framework for secure and dependable data grid /." 2006. http://proquest.umi.com/pqdweb?did=1225157601&sid=1&Fmt=2&clientId=10361&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Kuo, Hsun-Hung, and 郭訓宏. "Towards a High-Performance Grid Computing Infrastructure —A Distributed Databases Approach." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/55069768643873995817.

Full text
Abstract:
Master's
National Chiao Tung University
Department of Computer and Information Science
92
Grid Computing promises to provide uniform, inexpensive access to computing power through the aggregation and utilization of a potentially unlimited number of storage and computing devices. For Grid infrastructure developers, this goal amounts to creating effective mechanisms that can allocate and coordinate distributed, heterogeneous resources in a robust and secure manner. For Grid application developers, on the other hand, the main challenge is to make the best use of the facilities provided by the infrastructure. Typically, a developer needs to divide a problem into smaller pieces and plan for appropriate data manipulation and transfer among them. Such divide-and-conquer effort is essential when the required memory space is beyond the capabilities of individual machines, but complicated when the infrastructure provides only low-level facilities. This thesis describes database-specific techniques that can relieve developers of complicated memory management. Simply speaking, we use individual relational databases as computational nodes, for their storage and computation capabilities, and connect them together into a distributed computing platform. In addition, we define a generic schema capable of storing complex data structures, and mechanisms that allow flexible translation between the schema and other computation-friendly tabular structures. We argue that together these constructs form an attractive platform that can greatly simplify Grid application development, and thus contribute to the general Grid Computing community in a useful way.
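The idea of using a relational database as a computational node, with a generic schema holding a complex structure and the computation expressed as plain SQL, can be sketched like this. The `cells` table is an invented example of such a generic schema, not the one defined in the thesis:

```python
import sqlite3

db = sqlite3.connect(":memory:")
# Generic schema: any tree-shaped structure as (node id, parent id, key, value)
db.execute("CREATE TABLE cells (id INTEGER, parent INTEGER, key TEXT, value REAL)")
db.executemany(
    "INSERT INTO cells VALUES (?, ?, ?, ?)",
    [(1, None, "matrix", None),
     (2, 1, "row0", None), (3, 2, "c0", 1.0), (4, 2, "c1", 2.0),
     (5, 1, "row1", None), (6, 5, "c0", 3.0), (7, 5, "c1", 4.0)],
)
# Let the database itself do the computation: per-row sums via plain SQL
rows = db.execute(
    "SELECT p.key, SUM(c.value) FROM cells p JOIN cells c ON c.parent = p.id "
    "WHERE p.key LIKE 'row%' GROUP BY p.key ORDER BY p.key"
).fetchall()
```

Here a nested matrix lives in one generic table, and the node's "computation" is a query; distributing such nodes is then a matter of distributing tables and queries.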
APA, Harvard, Vancouver, ISO, and other styles
35

Chiu, Chien-Sheng, and 邱建昇. "GOD-CS: A New Grid-Oriented Dissection Clustering Scheme for Large Databases." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/33026668207918421374.

Full text
Abstract:
Master's
National Pingtung University of Science and Technology
Department of Management Information Systems
98
Many clustering schemes have been proposed in previous investigations to resolve the issues of high execution cost and low accuracy on clusters of arbitrary shape. Two conventional approaches that each solve one of these problems are K-means and DBSCAN; however, DBSCAN is inefficient, while K-means has poor accuracy. ANGEL and G-TREACLE have been proposed to improve on current clustering schemes, but they require complicated procedures and numerous thresholds. This thesis presents a new clustering technique, called GOD-CS, which employs grid-based clustering, neighborhood 8-square searching and a tolerance rate to reduce these problems. Simulation results indicate that GOD-CS clusters large databases very quickly, with almost identical or even better clustering quality on the original patterns than several existing well-known approaches, using a simple procedure. Thus, GOD-CS performs well and is simple to implement.
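A rough sketch of the grid-based clustering with neighborhood 8-square searching that GOD-CS builds on. The cell size, density threshold, and merging details here are assumptions for illustration, not the actual algorithm:

```python
from collections import defaultdict, deque

def grid_cluster(points, cell=1.0, min_pts=2):
    """Bin points into grid cells, keep cells with at least `min_pts`
    points, then merge dense cells that touch among their 8 neighbours
    into clusters (flood fill); sparse cells are treated as noise."""
    cells = defaultdict(list)
    for x, y in points:
        cells[(int(x // cell), int(y // cell))].append((x, y))
    dense = {c for c, pts in cells.items() if len(pts) >= min_pts}
    clusters, seen = [], set()
    for c in dense:
        if c in seen:
            continue
        group, todo = [], deque([c])
        seen.add(c)
        while todo:
            cx, cy = todo.popleft()
            group.extend(cells[(cx, cy)])
            for dx in (-1, 0, 1):          # the 8-square neighbourhood
                for dy in (-1, 0, 1):
                    n = (cx + dx, cy + dy)
                    if n in dense and n not in seen:
                        seen.add(n)
                        todo.append(n)
        clusters.append(group)
    return clusters

pts = [(0.1, 0.1), (0.2, 0.3), (1.1, 0.2), (1.3, 0.4), (5.0, 5.0)]
found = grid_cluster(pts)
```

Because only cell occupancy is examined, the cost grows with the number of occupied cells rather than with pairwise point comparisons, which is what makes grid-based schemes fast on large databases.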
APA, Harvard, Vancouver, ISO, and other styles
36

Ramos, Luís Albino Nogueira. "Performance analysis of a database caching system in a grid environment." Dissertação, 2007. http://hdl.handle.net/10216/12860.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

LI, YI, and 李奕. "Design and Implementation Power Quality Database Platform of Smart Grid Environment." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/30427791789464631574.

Full text
Abstract:
Master's
National Chung Cheng University
Institute of Electrical Engineering
104
In recent years, power quality monitoring equipment and analysis tools from different manufacturers have lacked the interoperability that the smart grid requires. To share information and application messages between different power quality meters, this thesis develops a PQ data exchange platform based on IEEE 1159.3, the Power Quality Data Interchange Format (PQDIF). It provides unified management and analysis, multi-level sharing of large data sets, and cross-platform web presentation. Visual Studio, which supports multi-language development (e.g., VB.NET and C#), is used to implement the platform's operational processes, with the web applications developed in ASP.NET. The database is SQL Server 2008, Microsoft's server management system, which offers analysis, reporting, notification and other functions. The study focuses mainly on data storage, management, transaction handling and access control. The developed system achieves cross-platform data access, display and analysis: complete electric power information is assembled, power information is retrieved from the servers, and the results are presented on the web platform. The platform thus supports not only data management and analysis but also data presentation across platforms and over the network.
APA, Harvard, Vancouver, ISO, and other styles
38

Ramos, Luís Albino Nogueira. "Performance analysis of a database caching system in a grid environment." Master's thesis, 2007. http://hdl.handle.net/10216/12860.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Dominguez, Ontiveros Elvis Efren. "Non-Intrusive Experiemental Investigation of Multi-Scale Flow Behavior in Rod Bundle with Spacer-Grids." Thesis, 2010. http://hdl.handle.net/1969.1/ETD-TAMU-2010-05-7919.

Full text
Abstract:
Experiments investigating complex flows in rod bundles with spacer grids that have mixing devices (such as flow mixing vanes) have mostly been performed using single-point measurements. Although these measurements allow local comparisons of experimental and numerical data, they provide little insight, because the discrepancies can be due to the integrated effects of many complex flow phenomena, such as wake-wake, wake-vane, and vane-boundary layer interactions, occurring simultaneously. To validate simulation results, detailed comparison with experimental data must be made. This work describes an experimental database obtained using Time-Resolved Particle Image Velocimetry (TR-PIV) measurements within a 5 x 5 rod bundle with spacer grids. Measurements were performed using two different grid designs: one typical of Boiling Water Reactors (BWR) with swirl-type mixing vanes, the other typical of Pressurized Water Reactors (PWR) with split-type mixing vanes. High-quality data were obtained in the vicinity of the grid using a multi-scale approach. A unique characteristic of the set-up is the Matched Index of Refraction (MIR) technique, which allows non-intrusive dynamic measurements of high temporal and spatial resolution to investigate the flow evolution below and immediately above the spacer. The experimental data presented include descriptions of the various cases tested, such as test-rig dimensions, measurement zones, test equipment and boundary conditions, in order to provide appropriate data for comparison with Computational Fluid Dynamics (CFD) simulations. Turbulence parameters of the obtained data are analyzed to gain insight into the physical phenomena.
The shape of the velocity profile at various distances from the spacer shows important modifications past the grid, delineating the significant effects of the grid spacer. The influence of the vane wakes on the global velocity was quantified up to a distance of 4 hydraulic diameters from the edge of the grid. Spatial and temporal correlations in the two measured dimensions were computed to quantify the time and length scales present in the flow near the grids and their influence on the flow modification induced by the vanes. Vortex cores were detected using vorticity, swirl strength and Galilean decomposition; the resulting cores were then tracked in time to observe the evolution of the structures under the influence of the vanes for each grid. Vortex stretching was quantified to gain insight into the energy dissipation process normally associated with the phenomenon. This work presents data for a single-phase flow situation and an analysis of these data for understanding the complex flow structure. The data provide, for the first time, detailed time-resolved full-field velocity measurements that can be used to validate CFD codes.
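One of the vortex-detection quantities the abstract names, vorticity, is obtained from a planar PIV velocity field as the out-of-plane component ω_z = ∂v/∂x − ∂u/∂y. The sketch below computes it with central differences on a synthetic solid-body rotation (u = −ωy, v = ωx, whose exact vorticity is 2ω); the grid and field are illustrative, not the thesis data.

```python
import numpy as np

# Synthetic solid-body rotation on a PIV-like grid (not the thesis data):
# u = -omega*y, v = omega*x, with out-of-plane vorticity exactly 2*omega.
omega = 1.5
x = np.linspace(-1.0, 1.0, 41)
y = np.linspace(-1.0, 1.0, 41)
X, Y = np.meshgrid(x, y)            # rows index y, columns index x
u = -omega * Y
v = omega * X

# omega_z = dv/dx - du/dy, evaluated with finite differences.
dv_dx = np.gradient(v, x, axis=1)
du_dy = np.gradient(u, y, axis=0)
omega_z = dv_dx - du_dy
print(omega_z.mean())               # ≈ 2*omega = 3.0
```

On real TR-PIV data the same field would then be thresholded or combined with swirl strength, since raw vorticity also picks up shear layers that are not vortex cores.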
APA, Harvard, Vancouver, ISO, and other styles
40

Chen, Chun-Hui, and 陳君輝. "Direct Grid-Connected Excited Synchronous Wind Power Generators with Real-time Remote Control and Database Analysis System." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/14147579808557899197.

Full text
Abstract:
Master's thesis
National Sun Yat-sen University
Department of Electrical Engineering
104
This thesis develops a system for real-time remote monitoring and database analysis of excited synchronous wind power generators. The system uses Microsoft Visual C#, SQL and a TI microcontroller as the development platform. The communication interface between the system and the microcontroller is RS485 in asynchronous serial mode, and a communications protocol between the two is designed. The system program is organized by function, which facilitates reading and extension. The system has two parts: real-time remote monitoring, and database querying and analysis. The real-time remote monitoring part has four pages of tables for observation and one block for operation. The four pages are status confirmation, parameter setting, numeric monitoring and graphical monitoring. Status confirmation checks whether the generators are ready; parameter setting sets the parameters of the control strategies in the generators; numeric monitoring and graphical monitoring show information on the grid network, excited generator, servo motor and battery, the former numerically and the latter graphically. The operational block remotely controls the functions of the generators. In the database analysis part, data stored by the real-time remote monitoring part can be retrieved from the database to analyze and diagnose the performance of the generators. Experiments verify the functionality of the system.
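The abstract mentions a custom packet protocol over the RS485 serial link but does not specify it. The sketch below shows one conventional frame layout (start byte, command, length, payload, 8-bit checksum) purely as an illustration; the delimiter, command codes and field widths are hypothetical, not the thesis's protocol.

```python
import struct

START = 0xAA  # hypothetical frame delimiter

def checksum(data: bytes) -> int:
    """8-bit additive checksum over the frame body."""
    return sum(data) & 0xFF

def encode_frame(cmd: int, payload: bytes) -> bytes:
    """Frame = start byte, command byte, payload length, payload, checksum."""
    body = struct.pack("BBB", START, cmd, len(payload)) + payload
    return body + bytes([checksum(body)])

def decode_frame(frame: bytes):
    """Validate delimiter and checksum, then return (command, payload)."""
    start, cmd, length = struct.unpack_from("BBB", frame)
    payload = frame[3:3 + length]
    if start != START or frame[3 + length] != checksum(frame[:3 + length]):
        raise ValueError("corrupt frame")
    return cmd, payload

# Example: a reading such as rotor speed as a little-endian float payload
# (command code 0x10 is made up for the example).
frame = encode_frame(0x10, struct.pack("<f", 1500.0))
cmd, payload = decode_frame(frame)
print(cmd, struct.unpack("<f", payload)[0])   # 16 1500.0
```

A checksum of this kind is what lets the PC side discard frames corrupted on the half-duplex RS485 bus before converting them into the numeric and graphical displays the abstract describes.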
APA, Harvard, Vancouver, ISO, and other styles
41

Miao, Zhuo. "Grid-aware evaluation of regular path queries on large Spatial networks." Thesis, 2007. http://hdl.handle.net/1828/192.

Full text
Abstract:
Regular path queries (RPQs), expressed as regular expressions over the alphabet of database edge-labels, are commonly used for guided navigation of graph databases. RPQs are the basic building block of almost all query languages for graph databases, providing the user with a simple way to express recursion. While convenient to use, RPQs are notorious for their high computational demand. Except for a few theoretical works, there has been little work evaluating RPQs on databases of great practical interest, such as large spatial networks. In this thesis, we present a grid-aware, fault-tolerant distributed algorithm for answering RPQs on spatial networks. We engineer each part of the algorithm to account for the assumed computational-grid setting. We experimentally evaluate our algorithm, and show that for typical user queries, our algorithm satisfies the desiderata for distributed computing in general, and computational grids in particular.
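The standard single-machine way to answer an RPQ, which distributed algorithms like this thesis's then partition, is a search over the product of the edge-labelled graph and an automaton for the regular expression. The sketch below shows that baseline; the toy network, labels and DFA are illustrative, not from the thesis.

```python
from collections import deque

def eval_rpq(graph, dfa, start_node, dfa_start, dfa_accept):
    """Evaluate a regular path query by BFS over the product of an
    edge-labelled graph and a DFA for the regular expression: explore
    (node, state) pairs reachable from (start_node, dfa_start); answers
    are the nodes reached while the DFA is in an accepting state."""
    seen = {(start_node, dfa_start)}
    queue = deque(seen)
    answers = set()
    while queue:
        node, state = queue.popleft()
        if state in dfa_accept:
            answers.add(node)
        for label, nxt in graph.get(node, []):
            nstate = dfa.get((state, label))
            if nstate is not None and (nxt, nstate) not in seen:
                seen.add((nxt, nstate))
                queue.append((nxt, nstate))
    return answers

# Query "road* ferry" over a toy spatial network:
graph = {"a": [("road", "b")],
         "b": [("road", "c"), ("ferry", "d")],
         "c": [("ferry", "e")]}
dfa = {(0, "road"): 0, (0, "ferry"): 1}   # state 1 is accepting
print(sorted(eval_rpq(graph, dfa, "a", 0, {1})))   # ['d', 'e']
```

The product construction is also why RPQs are expensive: the state space is |V| × |Q| pairs, which on a large spatial network motivates distributing the BFS frontier across grid nodes.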
APA, Harvard, Vancouver, ISO, and other styles
42

Hsieh, Meng-Han, and 謝孟翰. "Supervisory Control and Database Analysis System For Direct Grid-Connected Excited Synchronous Wind Power Generators With Servo Motor Control." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/2z758j.

Full text
Abstract:
Master's thesis
National Sun Yat-sen University
Department of Electrical Engineering
106
This thesis uses the Microsoft Visual Studio and Android Studio development platforms, with the Visual C# and Java programming languages, to establish a human-machine interface system that includes real-time monitoring, remote monitoring and database analysis, applied to a direct grid-connected excited synchronous wind power system. The real-time supervisory control system uses asynchronous serial communication over RS485 as the connection between the system and the microcontroller. A packet format is established, and the packet data are converted into graphic and numerical information displayed in the human-machine interface. All acquired data are also transmitted to the remote monitoring system and to Microsoft SQL Server. The remote supervisory system consists of two parts: an app for Android mobile devices and a Windows Forms application for the PC. Both act as clients communicating with the TCP/IP server of the real-time monitoring system; the server polls commands from each client and broadcasts data to all clients. The database analysis system uses SQL commands from Visual C# to query the data stored in Microsoft SQL Server during experiments and displays them in the system's pages, so that the data and status of the wind power system during operation can be observed and the effect of its parameters evaluated.
APA, Harvard, Vancouver, ISO, and other styles
43

Chang, You-Chuan, and 張祐銓. "Supervisory Control and Database Analysis System applied to Mobile Devices for Direct Grid-Connected Excited Synchronous Wind Power Generators with Servo Motor Control." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/b29dfx.

Full text
Abstract:
Master's thesis
National Sun Yat-sen University
Department of Electrical Engineering
107
This thesis uses the Microsoft Visual Studio and Android Studio development platforms, with the Visual C# and Java programming languages, to establish a human-machine interface system that includes real-time monitoring, remote monitoring and database analysis, applied to a direct grid-connected excited synchronous wind power system. The real-time supervisory control system uses asynchronous serial communication over RS485 as the connection between the system and the microcontroller. A packet format is established, and the packet data are converted into graphic and numerical information displayed in the human-machine interface. All acquired data are also transmitted to the remote monitoring system and to Microsoft SQL Server. The remote supervisory system consists of two parts: an app for Android mobile devices and a Windows Forms application for the PC. Both act as clients communicating with the TCP/IP server of the real-time monitoring system; the server polls commands from each client and broadcasts data to all clients. The database analysis system likewise consists of an Android app and a Windows Forms application; both use SQL commands from Java and Visual C# to query the data stored in Microsoft SQL Server during experiments and display them in the system's pages, so that the data and status of the wind power system during operation can be observed and the effect of its parameters evaluated. This thesis mainly designs a data analysis app for Android mobile devices; by combining numerical information with graphical waveform displays, users can query and analyze the historical data of the wind power system from a mobile device anywhere.
APA, Harvard, Vancouver, ISO, and other styles
