Dissertations / Theses on the topic 'Databases and Grids'
Consult the top 43 dissertations / theses for your research on the topic 'Databases and Grids.'
Venugopal, Srikumar. "Scheduling distributed data-intensive applications on global grids /." Connect to thesis, 2006. http://eprints.unimelb.edu.au/archive/0002929.
Sonmez, Sunercan Hatice Kevser. "Data Integration Over Horizontally Partitioned Databases In Service-oriented Data Grids." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612414/index.pdf.
Challenges in this area include coping with various forms of data distribution and maintenance policies, scalability, performance, security and trust, reliability and resilience, legal issues, etc. Each of these dimensions deserves a separate thread of research effort. The challenge most relevant to the work presented in this thesis is coping with various forms of data distribution and maintenance policies. This thesis aims to provide a service-oriented data integration solution over data Grids for cases where distributed data sources are partitioned with overlapping sections of various proportions. This is an interesting variation which combines both replicated and partitioned data within the same data management framework. Thus, the data management infrastructure has to deal with specific challenges regarding the identification, access and aggregation of partitioned data with varying proportions of overlapping sections. To provide a solution we have extended OGSA-DAI DQP, a well-known service-oriented data access and integration middleware with distributed query processing facilities, by incorporating a UnionPartitions operator into its algebra in order to cope with various unusual forms of horizontally partitioned databases. As a result, our solution extends OGSA-DAI DQP at two points: (1) a new operator type is added to the algebra to perform a specialized union of partitions with different characteristics; and (2) the OGSA-DAI DQP Federation Description is extended to include additional metadata to facilitate the successful execution of the newly introduced operator.
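The UnionPartitions operator itself is defined inside the OGSA-DAI DQP algebra and is not reproduced in the abstract. As a loose illustration of the underlying idea, merging horizontal partitions whose row sets partially overlap while keeping a single copy of each row, here is a minimal Python sketch; the row layout and the "id" key column are invented for the example.

```python
# Illustrative sketch only: a duplicate-eliminating union over horizontally
# partitioned relations with overlapping rows, keyed on a primary key.
# The partitions and the "id" key column are hypothetical examples; the real
# UnionPartitions operator lives inside the OGSA-DAI DQP algebra.

def union_partitions(partitions, key="id"):
    """Merge horizontal partitions, keeping one copy of each overlapping row."""
    seen = set()
    merged = []
    for partition in partitions:          # each partition is a list of row dicts
        for row in partition:
            if row[key] not in seen:      # overlapping rows appear in several partitions
                seen.add(row[key])
                merged.append(row)
    return merged

# Example: two partitions that overlap on id=2.
p1 = [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]
p2 = [{"id": 2, "name": "b"}, {"id": 3, "name": "c"}]
print(union_partitions([p1, p2]))   # rows 1, 2, 3 with the duplicate removed
```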
Fomkin, Ruslan. "Optimization and Execution of Complex Scientific Queries." Doctoral thesis, Uppsala : Acta Universitatis Upsaliensis, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-9514.
Ahmed, Ejaz. "A grid enabled staging DBMS method for data Mapping, Matching & Loading." Thesis, University of Bedfordshire, 2011. http://hdl.handle.net/10547/204951.
Xiang, Helen X. "A grid-based distributed database solution for large astronomy datasets." Thesis, University of Portsmouth, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.494003.
Taratoris, Evangelos. "A single-pass grid-based algorithm for clustering big data on spatial databases." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/113168.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 79-80).
The problem of clustering multi-dimensional data has been well researched in the scientific community. It is a problem with wide scope and many applications. With the rapid growth of very large databases, traditional clustering algorithms become inefficient due to insufficient memory capacity. Grid-based algorithms try to solve this problem by dividing the space into cells and then performing clustering on the cells. However, these algorithms in turn become inefficient when even the grid is too large to fit in memory. This thesis presents a new algorithm, SingleClus, that performs clustering of a 2-dimensional dataset in a single pass over the data. Moreover, it minimizes the amount of disk I/O while making modest use of main memory, so it is theoretically optimal in terms of performance. It modifies and improves the Hoshen-Kopelman clustering algorithm while dealing with that algorithm's fundamental challenges when operating in a Big Data setting.
by Evangelos Taratoris. M.Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science.
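SingleClus itself is specified in the thesis; the following sketch only illustrates the general Hoshen-Kopelman-style idea the abstract above refers to, hashing points into grid cells and merging neighbouring occupied cells with union-find. The cell size and 4-neighbour connectivity are assumptions for the example, not the thesis's actual design.

```python
# Minimal sketch of grid-based clustering in the Hoshen-Kopelman style:
# points are hashed into grid cells, and occupied cells that touch are merged
# with union-find. Cell size and 4-neighbour connectivity are assumptions.

def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path halving
        x = parent[x]
    return x

def union(parent, a, b):
    ra, rb = find(parent, a), find(parent, b)
    if ra != rb:
        parent[rb] = ra

def grid_cluster(points, cell=1.0):
    cells = {}                                      # (ix, iy) -> list of points
    for x, y in points:
        cells.setdefault((int(x // cell), int(y // cell)), []).append((x, y))
    parent = {c: c for c in cells}
    for (ix, iy) in cells:                          # merge occupied neighbouring cells
        for nb in ((ix - 1, iy), (ix, iy - 1)):
            if nb in cells:
                union(parent, (ix, iy), nb)
    clusters = {}
    for c, pts in cells.items():
        clusters.setdefault(find(parent, c), []).extend(pts)
    return list(clusters.values())

print(grid_cluster([(0.1, 0.2), (0.9, 0.8), (5.0, 5.0)]))  # two clusters
```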
Hu, Zhengyu, and D. Phillip Guertin. "The Effect of GIS Database Grid Size on Hydrologic Simulation Results." Arizona-Nevada Academy of Science, 1991. http://hdl.handle.net/10150/296461.
The use of geographic information systems (GIS) for assessing the hydrologic effects of management is increasing. In the near future most of our spatial or "mapped" information will come from GIS. The direct linkage of hydrologic simulation models to GIS should make the assessment process more efficient and powerful, allowing managers to quickly evaluate different landscape designs. This study investigates the effect that the resolution of GIS databases has on hydrologic simulation results from an urban watershed. The hydrologic model used in the study was the Soil Conservation Service Curve Number Model, which computes the volume of runoff from rainfall events. A GIS database was created for High School Wash, an urban watershed in Tucson, Arizona. Fifteen rainfall-runoff events were used to test the simulation results. Five different grid sizes, ranging from 25x25 square feet to 300x300 square feet, were evaluated. The results indicate that the higher the resolution, the better the simulation results. The average ratio of simulated to observed runoff volumes ranged from 0.98 for the 25x25 square feet case to 0.43 for the 300x300 square feet case.
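The Soil Conservation Service Curve Number model mentioned above has a standard textbook form, which may help readers interpret the reported runoff ratios; the sketch below states it in English units. The curve number and rainfall values are arbitrary examples, not figures from the study.

```python
# Standard SCS Curve Number runoff equation (English units, inches):
#   S  = 1000/CN - 10                    (potential maximum retention)
#   Ia = 0.2 * S                         (initial abstraction, usual assumption)
#   Q  = (P - Ia)^2 / (P - Ia + S)       for P > Ia, else Q = 0
# The CN and rainfall values below are arbitrary illustrations.

def scs_runoff(p_inches, curve_number):
    s = 1000.0 / curve_number - 10.0
    ia = 0.2 * s
    if p_inches <= ia:
        return 0.0
    return (p_inches - ia) ** 2 / (p_inches - ia + s)

print(round(scs_runoff(2.0, 85), 2))  # runoff depth in inches for a 2-inch storm
```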
Rokhsari, Mirfakhradin Derakhshan. "A development of the grid file for the storage of binary relations." Thesis, Birkbeck (University of London), 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.388717.
Xu, Kai. "Database support for multi-resolution terrain models /." St. Lucia, Qld, 2004. http://www.library.uq.edu.au/pdfserve.php?image=thesisabs/absthe17869.pdf.
Paventhan, Arumugam. "Grid approaches to data-driven scientific and engineering workflows." Thesis, University of Southampton, 2007. https://eprints.soton.ac.uk/49926/.
Tan, Koon Leai Larry. "An integrated methodology for creating composed Web/grid services." Thesis, University of Stirling, 2009. http://hdl.handle.net/1893/2515.
Lourenso, Reinaldo. "Segmentação de objetos complexos em um sistema de banco de dados objeto relacional baseado em GRIDS." Universidade de São Paulo, 2005. http://www.teses.usp.br/teses/disponiveis/3/3142/tde-16032006-121211/.
This thesis presents a proposal for an infrastructure that allows the distribution of data in a Database Grid. The storage of complex objects, such as audio, video and software, in databases is always done in an integral way. This means that the object, regardless of its size, is not fragmented by the Database Management System (DBMS). Methodologies used for data modeling also do not allow fragmentation or segmentation of complex objects, because only the fragmentation of storage structures such as tables or classes is taken into account, not the embedded objects. When we evaluate the performance of systems that store complex objects, we can verify that the size of the stored objects has a considerable impact. Since multimedia objects or software distribution packages require significant disk space, traditional methods for replication or distribution of copies become very costly and often inefficient. With the infrastructure developed in this work it was possible to segment and distribute complex attributes of table rows across Database Grids. In this way, our solution improves the performance of systems that need to store documents whose size exceeds a specified boundary value. The possible use of LDPC codes in this infrastructure was also tested successfully; however, they did not show gains that justified their use in our applications.
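The thesis defines its own segmentation scheme for complex attributes; purely as a generic illustration of the idea, slicing a large stored object into fixed-size segments that can be spread over several grid nodes and reassembled on read, here is a small sketch. The segment size and round-robin placement are assumptions.

```python
# Generic sketch of segmenting a complex (large) object for distributed storage:
# the blob is cut into fixed-size segments and assigned to nodes round-robin.
# Segment size and placement policy are illustrative assumptions only.

def segment_blob(blob: bytes, segment_size: int, nodes: list):
    placements = []
    for i in range(0, len(blob), segment_size):
        segment = blob[i:i + segment_size]
        node = nodes[(i // segment_size) % len(nodes)]   # round-robin placement
        placements.append((node, i // segment_size, segment))
    return placements

def reassemble(placements):
    ordered = sorted(placements, key=lambda t: t[1])     # order by segment index
    return b"".join(seg for _, _, seg in ordered)

data = b"x" * 1000
parts = segment_blob(data, 256, ["node-a", "node-b", "node-c"])
assert reassemble(parts) == data
print([(node, idx, len(seg)) for node, idx, seg in parts])
```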
Nguyen, Thi Thanh Quynh. "A new approach for distributed programming in smart grids." Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAT079.
The main challenge of smart grid control and management is the amount of data to be processed. Traditional, centralized techniques, even though they offer the ease of management that comes with a global view of the grid, do not in practice support the continuous growth of data volumes (limited bandwidth, bottlenecks, amount of computation, etc.). The transition to decentralized (distributed) control and management, where the system is made up of a multitude of co-operating computing units, offers very good prospects (robustness, computation close to the producers and consumers of data, exploitation of all available resources), but remains challenging to implement. In fact, programming distributed algorithms requires taking into account the data exchanges and the synchronization of the participating units, and this complexity increases with the number of units. In this thesis, we propose an innovative, high-level programming approach that masks these difficulties. First, we propose to abstract all smart grid computing units (smart meters, sensors, data concentrators, etc.) as a distributed database. Each computing unit hosts a local database, and only the data needed to continue the calculation are exchanged with other units, which decreases the use of the available bandwidth. The use of a declarative data-handling language simplifies the programming of control and management applications. We also propose SmartLog, a rule-based language (based on the Datalog language and its derivatives) dedicated to these applications. It facilitates distributed programming of smart grid applications by immediately responding to any change in the data. Even with a language such as SmartLog, it is still necessary to take into account the data exchange and the synchronization of the participants. This is why we then propose an approach that simplifies distributed programming. This approach, named CPDE for Centralized Programming and Distributed Execution, consists of two steps: (i) programming the centralized application in SmartLog, as this is easier, and (ii) translating the centralized program into a distributed program based on the actual location of the data. To do this, we propose a semi-automatic SmartLog rule distribution algorithm. In order to demonstrate the interest of CPDE, we conducted a comprehensive experiment using applications and algorithms actually used in smart grids, such as secondary control in isolated micro-grids and fair voltage regulation. The experiment was carried out on a real-time electrical network simulation platform, with an OPAL-RT simulation machine and a Raspberry Pi network representing the computing units (whose performance is quite comparable to the real equipment). This experiment allowed us to validate the behaviour and performance of the distributed programs designed with CPDE, and to compare them with their centralized versions in SmartLog and their reference versions implemented in Java. The impact of different parameters, such as the number of computing units or different data distribution alternatives, is studied as well.
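SmartLog's actual syntax, semantics and distribution machinery are defined in the thesis and are not shown in the abstract. The toy sketch below only conveys the flavour of a Datalog-style rule that is re-evaluated whenever its input facts change, using invented relations (meter readings feeding a per-feeder average).

```python
# Toy illustration of rule-driven, change-triggered evaluation in the spirit of
# a Datalog-like language: whenever a reading fact changes, the derived
# per-feeder average is recomputed. Relations and rule are invented examples;
# SmartLog's actual syntax and distribution machinery are defined in the thesis.

from collections import defaultdict

readings = {}          # (feeder, meter) -> value, the base facts
feeder_avg = {}        # feeder -> derived average

def rule_feeder_avg():
    """Derived relation: feeder_avg(F, avg(V)) :- reading(F, M, V)."""
    sums, counts = defaultdict(float), defaultdict(int)
    for (feeder, _meter), value in readings.items():
        sums[feeder] += value
        counts[feeder] += 1
    feeder_avg.clear()
    feeder_avg.update({f: sums[f] / counts[f] for f in sums})

def insert_reading(feeder, meter, value):
    readings[(feeder, meter)] = value
    rule_feeder_avg()                      # re-evaluate dependent rules on change

insert_reading("F1", "m1", 230.0)
insert_reading("F1", "m2", 232.0)
print(feeder_avg)                          # {'F1': 231.0}
```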
Navarro, Martín Joan. "From cluster databases to cloud storage: Providing transactional support on the cloud." Doctoral thesis, Universitat Ramon Llull, 2015. http://hdl.handle.net/10803/285655.
Full textDurante las últimas tres décadas, las limitaciones tecnológicas (por ejemplo la capacidad de los dispositivos de almacenamiento o el ancho de banda de las redes de comunicación) y las crecientes demandas de los usuarios (estructuras de información, volúmenes de datos) han conducido la evolución de las bases de datos distribuidas. Desde los primeros repositorios de datos para archivos planos que se desarrollaron en la década de los ochenta, se han producido importantes avances en los algoritmos de control de concurrencia, protocolos de replicación y en la gestión de transacciones. Sin embargo, los retos modernos de almacenamiento de datos que plantean el Big Data y el cloud computing—orientados a mejorar la limitaciones en cuanto a escalabilidad y elasticidad de las bases de datos estáticas—están empujando a los profesionales a relajar algunas propiedades importantes de los sistemas transaccionales clásicos, lo que excluye a varias aplicaciones las cuales no pueden encajar en esta estrategia debido a su alta dependencia transaccional. El propósito de esta tesis es abordar dos retos importantes todavía latentes en el campo de las bases de datos distribuidas: (1) las limitaciones en cuanto a escalabilidad de los sistemas transaccionales y (2) el soporte transaccional en repositorios de almacenamiento en la nube. Analizar las técnicas tradicionales de control de concurrencia y de replicación, utilizadas por las bases de datos clásicas para soportar transacciones, es fundamental para identificar las razones que hacen que estos sistemas degraden su rendimiento cuando el número de nodos y/o cantidad de datos crece. Además, este análisis está orientado a justificar el diseño de los repositorios en la nube que deliberadamente han dejado de lado el soporte transaccional. Efectivamente, acercar el paradigma del almacenamiento en la nube a las aplicaciones que tienen una fuerte dependencia en las transacciones es crucial para su adaptación a los requerimientos actuales en cuanto a volúmenes de datos y modelos de negocio. Esta tesis empieza con la propuesta de un simulador de protocolos para bases de datos distribuidas estáticas, el cual sirve como base para la revisión y comparativa de rendimiento de los protocolos de control de concurrencia y las técnicas de replicación existentes. En cuanto a la escalabilidad de las bases de datos y las transacciones, se estudian los efectos que tiene ejecutar distintos perfiles de transacción bajo diferentes condiciones. Este análisis continua con una revisión de los repositorios de almacenamiento en la nube existentes—que prometen encajar en entornos dinámicos que requieren alta escalabilidad y disponibilidad—, el cual permite evaluar los parámetros y características que estos sistemas han sacrificado con el fin de cumplir las necesidades actuales en cuanto a almacenamiento de datos a gran escala. Para explorar las posibilidades que ofrece el paradigma del cloud computing en un escenario real, se presenta el desarrollo de una arquitectura de almacenamiento de datos inspirada en el cloud computing para almacenar la información generada en las Smart Grids. Concretamente, se combinan las técnicas de replicación en bases de datos transaccionales y la propagación epidémica con los principios de diseño usados para construir los repositorios de datos en la nube. 
Las lecciones recogidas en el estudio de los protocolos de replicación y control de concurrencia en el simulador de base de datos, junto con las experiencias derivadas del desarrollo del repositorio de datos para las Smart Grids, desembocan en lo que hemos acuñado como Epidemia: una infraestructura de almacenamiento para Big Data concebida para proporcionar soporte transaccional en la nube. Además de heredar los beneficios de los repositorios en la nube altamente en cuanto a escalabilidad, Epidemia incluye una capa de gestión de transacciones que reenvía las transacciones de los clientes a un conjunto jerárquico de particiones de datos, lo que permite al sistema ofrecer distintos niveles de consistencia y adaptar elásticamente su configuración a nuevas demandas cargas de trabajo. Por último, los resultados experimentales ponen de manifiesto la viabilidad de nuestra contribución y alientan a los profesionales a continuar trabajando en esta área.
Over the past three decades, technology constraints (e.g., capacity of storage devices, communication networks bandwidth) and an ever-increasing set of user demands (e.g., information structures, data volumes) have driven the evolution of distributed databases. Since flat-file data repositories developed in the early eighties, there have been important advances in concurrency control algorithms, replication protocols, and transactions management. However, modern concerns in data storage posed by Big Data and cloud computing—related to overcome the scalability and elasticity limitations of classic databases—are pushing practitioners to relax some important properties featured by transactions, which excludes several applications that are unable to fit in this strategy due to their intrinsic transactional nature. The purpose of this thesis is to address two important challenges still latent in distributed databases: (1) the scalability limitations of transactional databases and (2) providing transactional support on cloud-based storage repositories. Analyzing the traditional concurrency control and replication techniques, used by classic databases to support transactions, is critical to identify the reasons that make these systems degrade their throughput when the number of nodes and/or amount of data rockets. Besides, this analysis is devoted to justify the design rationale behind cloud repositories in which transactions have been generally neglected. Furthermore, enabling applications which are strongly dependent on transactions to take advantage of the cloud storage paradigm is crucial for their adaptation to current data demands and business models. This dissertation starts by proposing a custom protocol simulator for static distributed databases, which serves as a basis for revising and comparing the performance of existing concurrency control protocols and replication techniques. As this thesis is especially concerned with transactions, the effects on the database scalability of different transaction profiles under different conditions are studied. This analysis is followed by a review of existing cloud storage repositories—that claim to be highly dynamic, scalable, and available—, which leads to an evaluation of the parameters and features that these systems have sacrificed in order to meet current large-scale data storage demands. To further explore the possibilities of the cloud computing paradigm in a real-world scenario, a cloud-inspired approach to store data from Smart Grids is presented. More specifically, the proposed architecture combines classic database replication techniques and epidemic updates propagation with the design principles of cloud-based storage. The key insights collected when prototyping the replication and concurrency control protocols at the database simulator, together with the experiences derived from building a large-scale storage repository for Smart Grids, are wrapped up into what we have coined as Epidemia: a storage infrastructure conceived to provide transactional support on the cloud. In addition to inheriting the benefits of highly-scalable cloud repositories, Epidemia includes a transaction management layer that forwards client transactions to a hierarchical set of data partitions, which allows the system to offer different consistency levels and elastically adapt its configuration to incoming workloads. Finally, experimental results highlight the feasibility of our contribution and encourage practitioners to further research in this area.
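Epidemia's transaction management layer is described only at a high level above; the fragment below merely illustrates the general pattern of a front end that routes a keyed client transaction to a data partition and tags it with a requested consistency level. The partition list, hashing scheme and consistency names are invented, not Epidemia's design.

```python
# Illustrative-only sketch of a transaction router: a client transaction is
# hashed by key to one of a set of partitions and carries a requested
# consistency level. Partitioning, levels and names are invented examples.

import hashlib

PARTITIONS = ["p0", "p1", "p2", "p3"]          # hypothetical leaf partitions

def route(txn_key: str, consistency: str = "eventual"):
    digest = hashlib.sha1(txn_key.encode()).hexdigest()
    partition = PARTITIONS[int(digest, 16) % len(PARTITIONS)]
    return {"key": txn_key, "partition": partition, "consistency": consistency}

print(route("customer:42", consistency="strong"))
print(route("customer:43"))                    # defaults to eventual consistency
```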
Neumann, Detlef, Gunter Teichmann, Frank Wehner, and Martin Engelien. "VU-Grid – Integrationsplattform für Virtuelle Unternehmen." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-155485.
Neumann, Detlef, Gunter Teichmann, Frank Wehner, and Martin Engelien. "VU-Grid – Integrationsplattform für Virtuelle Unternehmen." Technische Universität Dresden, 2005. https://tud.qucosa.de/id/qucosa%3A28381.
Čupačenko, Aleksandr. "Tinklinių duomenų bazių sandara tinklo paslaugų suradimui." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2004. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2004~D_20040924_105246-65674.
Bílek, Ondřej. "Geografické informační systémy." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2013. http://www.nusl.cz/ntk/nusl-220325.
Tang, Jia. "An agent-based peer-to-peer grid computing architecture." Access electronically, 2005. http://www.library.uow.edu.au/adt-NWU/public/adt-NWU20060508.151716/index.html.
Pehal, Petr. "Systém řízení báze dat v operační paměti." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2013. http://www.nusl.cz/ntk/nusl-236348.
Miles, David B. L. "A User-Centric Tabular Multi-Column Sorting Interface For Intact Transposition Of Columnar Data." Diss., 2006. http://contentdm.lib.byu.edu/ETD/image/etd1160.pdf.
Head, Michael Reuben. "Analysis and optimization for processing grid-scale XML datasets." Diss., Online access via UMI, 2009.
Kakugawa, Fernando Ryoji. "Integração de bancos de dados heterogêneos utilizando grades computacionais." Universidade de São Paulo, 2010. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-07012011-145400/.
Databases are usually designed to support a specific application domain, which makes data access and data sharing a hard and arduous task when database integration is required. Research projects have therefore been developed to integrate several heterogeneous database systems, ranging from domain-specific application tools to more extreme solutions such as a complete database redefinition and redesign. Considering these open questions, which have no definite answers, this work presents concepts, strategies and an implementation for heterogeneous database integration. In this implementation, the DIGE tool was developed to provide access to heterogeneous and geographically distributed databases using Grid computing: each database keeps its data locally, while to the user application the data appear to be stored locally. In this way, programmers can manipulate data using conventional SQL with no concern about database location or schema. System administrators may also add or remove databases from the whole system without the need to change the end-user application.
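DIGE's real architecture is documented in the thesis; the sketch below only illustrates the kind of location transparency the abstract describes, where the application issues ordinary SQL against a facade that decides which site actually executes it. The table-to-site catalogue and the in-memory SQLite stand-ins are assumptions used for the example.

```python
# Sketch of location-transparent SQL access: the application writes ordinary
# SQL, and a thin facade picks the site that owns the referenced table.
# The table->site catalogue and the in-memory SQLite stand-ins are
# illustrative assumptions, not DIGE's real components.

import sqlite3

class GridFacade:
    def __init__(self, catalogue):
        self.catalogue = catalogue                 # table name -> connection

    def execute(self, sql, params=()):
        table = sql.split()[sql.lower().split().index("from") + 1]
        conn = self.catalogue[table]               # route to the owning site
        return conn.execute(sql, params).fetchall()

site_a, site_b = sqlite3.connect(":memory:"), sqlite3.connect(":memory:")
site_a.execute("CREATE TABLE patients (id INTEGER, name TEXT)")
site_a.execute("INSERT INTO patients VALUES (1, 'Ana')")
site_b.execute("CREATE TABLE exams (id INTEGER, result TEXT)")
site_b.execute("INSERT INTO exams VALUES (1, 'ok')")

facade = GridFacade({"patients": site_a, "exams": site_b})
print(facade.execute("SELECT name FROM patients WHERE id = ?", (1,)))
print(facade.execute("SELECT result FROM exams WHERE id = ?", (1,)))
```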
Duque, Hector Brunie Lionel Magnin Isabelle. "Conception et mise en oeuvre d'un environnement logiciel de manipulation et d'accès à des données réparties." Villeurbanne : Doc'INSA, 2006. http://docinsa.insa-lyon.fr/these/pont.php?id=duque.
Full textGhemtio, Wafo Léo Aymar. "Simulation numérique et approche orientée connaissance pour la découverte de nouvelles molécules thérapeutiques." Thesis, Nancy 1, 2010. http://www.theses.fr/2010NAN10103/document.
Full textTherapeutic innovation has traditionally benefited from the combination of experimental screening and molecular modelling. In practice, however, the latter is often limited by the shortage of structural and biological information. Today, the situation has completely changed with the high-throughput sequencing of the human genome, and the advances realized in the three-dimensional determination of the structures of proteins. This gives access to an enormous amount of data which can be used to search for new treatments for a large number of diseases. In this respect, computational approaches have been used for high-throughput virtual screening (HTVS) and offer an alternative or a complement to the experimental methods, which allow more time for the discovery of new treatments.However, most of these approaches suffer the same limitations. One of these is the cost and the computing time required for estimating the binding of all the molecules from a large data bank to a target, which can be considerable in the context of the high-throughput. Also, the accuracy of the results obtained is another very evident challenge in the domain. The need to manage a large amount of heterogeneous data is also particularly crucial.To try to surmount the current limitations of HTVS and to optimize the first stages of the drug discovery process, I set up an innovative methodology presenting two advantages. Firstly, it allows to manage an important mass of heterogeneous data and to extract knowledge from it. Secondly, it allows distributing the necessary calculations on a grid computing platform that contains several thousand of processors. The whole methodology is integrated into a multiple-step virtual screening funnel. The purpose is the consideration, in the form of constraints, of the knowledge available about the problem posed in order to optimize the accuracy of the results and the costs in terms of time and money at various stages of high-throughput virtual screening
Kulhavý, Lukáš. "Praktické uplatnění technologií data mining ve zdravotních pojišťovnách." Master's thesis, Vysoká škola ekonomická v Praze, 2010. http://www.nusl.cz/ntk/nusl-77726.
Full textDe, Vlieger Paul. "Création d'un environnement de gestion de base de données "en grille" : application à l'échange de données médicales." Phd thesis, Université d'Auvergne - Clermont-Ferrand I, 2011. http://tel.archives-ouvertes.fr/tel-00719688.
Full textSaša, Dević. "Приступи развоју базe података Општег информационог модела за електроенергетске мреже." Phd thesis, Univerzitet u Novom Sadu, Fakultet tehničkih nauka u Novom Sadu, 2019. https://www.cris.uns.ac.rs/record.jsf?recordId=108886&source=NDLTD&language=en.
Full textOpšti informacioni model (CIM) koristi se za opis elektroenergetske mreže i za razmenu podataka između operatera prenosnih elektroenergetskih sistema. Kako je model postajao sve zastupljeniji, pojavila se potreba za njegovim skladištenjem. U radu je razvijen metodološki pristup za razvoj baze podataka koja bi podržala relativno jednostavno skladištenje i rad sa instancama CIM modela, koje opisuju trenutno, aktivno stanje u sistemu. Takođe, omogućeno je i praćenje prethodnih, istorijskih stanja CIM instanci, kao i njihova restauracija u željeno stanje. Očekuje se da predloženi pristup olakša uvođenje CIM modela u različita, namenska programska rešenja.
Common Information Model (CIM) is used for describing power grid networks and data exchange among transmission system operators (TSO). As the model became widely used, there was a need to store such models. In this thesis we present a methodological approach to the development of a database that supports relatively easy storing and managing of CIM instances, which describe the current, active state of the system. Also, tracking changes and restoring CIM instances to their previous states are supported. We expect that such a methodological approach will ease the implementation of the CIM model in various domain-specific software solutions.
Ozel, Melda. "Behavior of concrete beams reinforced with 3-D fiber reinforced plastic grids." 2002. http://www.library.wisc.edu/databases/connect/dissertations.html.
Full textWang, Peng-Hsiang, and 王鵬翔. "Distributed Geo-Databases Integration using Data Grid." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/38637483423411558663.
Full text中原大學
資訊管理研究所
95
ABSTRACT: Remote sensing is a well-developed technology. In 3D applications especially, people often use satellite remote sensing imagery and aerial photos to build the 3D model. For real-time 3D flight simulation, besides an interactive virtual geographic environment, service providers have to update the imagery and DTM frequently to generate a realistic 3D scene of a specific location. However, related materials like remote sensing data and aerial photos are distributed among government departments, research agencies and private enterprises and kept in different databases. Obtaining a remote sensing image is very expensive, so we need to integrate the remote sensing resources in different databases to save the cost of collecting new images and to reuse archived images. The problem is that people do not have a simple and convenient query interface to find all the resources in the different databases needed to build a 3D scene. The purpose of this research is to use Grid techniques to integrate distributed geo-spatial databases and to provide a query interface so that users can search across geo-spatial databases and retrieve, through GridFTP, the remote sensing data they need for 3D flight simulation. These different imagery databases and remote sensing satellites are regarded as different ground observers, and Data Grid techniques are used to integrate all of them. Through a Grid Portal, people can use their computers to access remote sensing imagery from databases held by different organizations, so they can monitor their area of interest in almost real time. In the demonstration, built on the GT4 (Globus Toolkit 4.03) framework, we use Grid operations and related technologies to build a "Distributed Geo-Databases Integration using Data Grid" portal to solve coordination and cooperation problems. We present how to create a flight path between two points and use it to query imagery and metadata databases. Since the different databases are already integrated, optical remote sensing images and DEM are delivered to users through GridFTP, and they can use commercial software for 3D display. With the integrated database we provide, users can get all the resources they need to build a realistic 3D scene for the same scenario.
Donaldson, William Walden. "Grid-graph paritioning." 2000. http://www.library.wisc.edu/databases/connect/dissertations.html.
Full textLiao, Rong-Jhang, and 廖容章. "A signature-based Grid Index Design for Main-Memory Databases." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/98932298330756654626.
Full text國立臺灣大學
資訊工程學研究所
97
A large-scale RFID application often needs a highly efficient database system for data processing. This research is motivated by the strong demand for an efficient index structure design for main-memory database systems in RFID applications. In this paper, a signature-based grid index structure is proposed for efficient data queries and storage. An efficient methodology is proposed to locate duplicates and to execute batch deletions and range queries based on application domain know-how. The capability of the design is implemented in the open-source main-memory database system H2 and evaluated with realistic workloads from RFID applications.
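The signature-based grid index is only summarized in the abstract; the following sketch shows one generic way a per-cell bit signature can let a lookup skip non-matching cells. The signature width, hashing and RFID-style keys are illustrative assumptions, not the thesis's actual structure.

```python
# Generic sketch of a signature-filtered grid index: each grid cell keeps a
# small bit signature of the keys it holds, so a point lookup can skip cells
# whose signature cannot contain the key. Signature width, hashing and the
# RFID-style keys are illustrative assumptions, not the thesis's design.

SIG_BITS = 64

def sig_bit(key: str) -> int:
    return 1 << (hash(key) % SIG_BITS)

class GridIndex:
    def __init__(self):
        self.cells = {}        # cell id -> {"sig": int, "rows": list}

    def insert(self, cell, key, row):
        c = self.cells.setdefault(cell, {"sig": 0, "rows": []})
        c["sig"] |= sig_bit(key)               # remember the key in the signature
        c["rows"].append((key, row))

    def lookup(self, key):
        bit = sig_bit(key)
        hits = []
        for c in self.cells.values():
            if c["sig"] & bit:                 # skip cells that cannot match
                hits.extend(r for k, r in c["rows"] if k == key)
        return hits

idx = GridIndex()
idx.insert(cell=(0, 0), key="tag:0001", row={"loc": "dock"})
idx.insert(cell=(4, 7), key="tag:0002", row={"loc": "shelf"})
print(idx.lookup("tag:0001"))                  # [{'loc': 'dock'}]
```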
Tu, Manghui. "A data management framework for secure and dependable data grid /." 2006. http://proquest.umi.com/pqdweb?did=1225157601&sid=1&Fmt=2&clientId=10361&RQT=309&VName=PQD.
Full textKuo, Hsun-Hung, and 郭訓宏. "Towards a High-Performance Grid Computing Infrastructure —A Distributed Databases Approach." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/55069768643873995817.
Full text國立交通大學
資訊科學系所
92
Grid Computing promises to provide uniform, non-expensive access to computing power through aggregation and utilization of potentially unlimited number of storage and computing devices. For Grid infrastructure developers, this goal amounts to creating effective mechanisms that can allocate and coordinate distributed, heterogeneous resources in a robust and secure manner. For Grid application developers, on the other hand, the main challenge is to make best use of the facilities provided by the infrastructure. Typically, a developer needs to divide a problem into smaller pieces, and plan for appropriate data manipulation and transfer among them. Such divide-and-conquer effort is essential when required memory space is beyond the capabilities of individual machines, but complicated when the infrastructure provides only low-level facilities. This thesis describes database-specific techniques that can relieve developers from complicated memory management. Simply speaking, we use individual relational databases as computational nodes for their storage and computation capabilities, and connect them together into a distributed computing platform. In addition, we define a generic schema capable of storing complex data structures, and mechanisms that allow flexible translation between the schema and other computation-friendly tabular structures. We argue that together these constructs form an attractive platform that can greatly simplify Grid application development, thus contribute to the general Grid Computing community in a useful way.
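The generic schema mentioned above is not spelled out in the abstract; as an illustration of the idea of letting an ordinary relational database hold and reconstruct a complex structure, the sketch below flattens a small tree into a node/edge layout and queries it back. The two-table layout is an assumed example, not the thesis's schema.

```python
# Illustration of storing a complex (tree-like) structure in a generic
# relational schema so an ordinary database can hold and process it.
# The two-table node/edge layout is an assumed example, not the thesis's schema.

import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE node (id INTEGER PRIMARY KEY, label TEXT);
    CREATE TABLE edge (parent INTEGER, child INTEGER);
""")
db.executemany("INSERT INTO node VALUES (?, ?)",
               [(1, "root"), (2, "left"), (3, "right")])
db.executemany("INSERT INTO edge VALUES (?, ?)", [(1, 2), (1, 3)])

# Recover the children of the root with a plain join, i.e. the structure is
# fully reconstructable from the tabular form.
rows = db.execute("""
    SELECT c.label FROM edge e
    JOIN node c ON c.id = e.child
    WHERE e.parent = 1
""").fetchall()
print(rows)   # [('left',), ('right',)]
```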
Chiu, Chien-Sheng, and 邱建昇. "GOD-CS: A New Grid-Oriented Dissection Clustering Scheme for Large Databases." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/33026668207918421374.
Full text國立屏東科技大學
資訊管理系所
98
Many clustering schemes have been proposed in previous investigations to resolve the issues of high execution cost and low accuracy on clusters of arbitrary shape. Two conventional approaches that each solve one of these problems are K-means and DBSCAN; however, DBSCAN is inefficient while K-means has poor accuracy. ANGEL and G-TREACLE have been proposed to improve on current clustering approaches, but they require complicated procedures and numerous thresholds. This thesis presents a new clustering technique, called "GOD-CS", which employs grid-based clustering, neighborhood 8-square searching and a tolerance rate to reduce these problems. Simulation results indicate that GOD-CS clusters large databases very quickly, while achieving almost identical or even better clustering quality on the original patterns in comparison to several existing well-known approaches, using a simpler procedure. Thus, GOD-CS performs well and is simple to implement.
Ramos, Luís Albino Nogueira. "Performance analysis of a database caching system in a grid environment." Dissertação, 2007. http://hdl.handle.net/10216/12860.
Full textLI, YI, and 李奕. "Design and Implementation Power Quality Database Platform of Smart Grid Environment." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/30427791789464631574.
Full text國立中正大學
電機工程研究所
104
In recent years, power quality monitoring equipment and analysis tools from different manufacturers have lacked the interoperability that the smart grid needs. To share information and application messages between different power quality meters, this thesis develops a PQ data exchange platform based on the IEEE 1159.3 PQDIF (Power Quality Data Interchange Format). It provides unified management and analysis, multi-level sharing of large data sets, and cross-platform network presentation. Visual Studio is used to implement the platform's operational processes, and ASP.NET is used to develop the web applications; Visual Studio also supports multi-language development, such as VB.NET and C#. The database is SQL Server 2008, Microsoft's database management system, which provides analysis, reporting, notification and other functions. The study mainly focuses on data storage, management, transaction handling and access control. The developed system achieves cross-platform data access, display and analysis: complete electric power information is built up, power information is retrieved from the servers, and the information is presented on the web platform. The developed platform thus achieves not only data management and analysis, but also data presentation and analysis across platforms and the network.
Ramos, Luís Albino Nogueira. "Performance analysis of a database caching system in a grid environment." Master's thesis, 2007. http://hdl.handle.net/10216/12860.
Full textDominguez, Ontiveros Elvis Efren. "Non-Intrusive Experiemental Investigation of Multi-Scale Flow Behavior in Rod Bundle with Spacer-Grids." Thesis, 2010. http://hdl.handle.net/1969.1/ETD-TAMU-2010-05-7919.
Full textChen, Chun-Hui, and 陳君輝. "Direct Grid-Connected Excited Synchronous Wind Power Generators with Real-time Remote Control and Database Analysis System." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/14147579808557899197.
Full text國立中山大學
電機工程學系研究所
104
This thesis develops a system for real-time remote monitoring and database analysis for excited synchronous wind power generators. The system uses MS Visual C#, SQL and a TI microcontroller as its development platform. The communication interface between the system and the microcontroller is RS485 in asynchronous serial mode, and we design the communication protocol between the system and the microcontroller. The system program is organized by function, which makes it easy to read and extend. The system has two parts: real-time remote monitoring, and database query and analysis. The real-time remote monitoring part has four observation pages and one operation block. The four pages are status confirmation, parameter setting, numeric monitoring and graphical monitoring. Status confirmation checks whether the generator is ready; parameter setting sets the parameters of the control strategies in the generator; numeric monitoring and graphical monitoring display the information of the grid network, excited generator, servo motor and battery, the former numerically and the latter graphically. The operation block can remotely control the functions of the generators. In the database analysis part, data stored by the real-time remote monitoring part can be retrieved from the database to analyze and diagnose the performance of the generators. The experiments verify the functionality of the system.
Miao, Zhuo. "Grid-aware evaluation of regular path queries on large Spatial networks." Thesis, 2007. http://hdl.handle.net/1828/192.
Full textHsieh, Meng-Han, and 謝孟翰. "Supervisory Control and Database Analysis System For Direct Grid-Connected Excited Synchronous Wind Power Generators With Servo Motor Control." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/2z758j.
Full text國立中山大學
電機工程學系研究所
106
This thesis uses the Microsoft Visual Studio and Android Studio development platforms, with the Visual C# and Java programming languages, to establish a Human-Machine Interface system that includes real-time monitoring, remote monitoring and database analysis, applied to a direct grid-connected excited synchronous wind power system. The real-time supervisory control system uses asynchronous serial communication over RS485 as the connection between the system and the micro-controller. A packet format is established, and packet data are converted into graphical and numerical information displayed in the human-machine interface. All acquired data are also transmitted to the remote monitoring system and to Microsoft SQL Server. The remote supervisory system consists of two parts: an app on Android mobile devices and a Windows Forms application on a computer. Both act as Clients that communicate with the TCP/IP Server of the real-time monitoring system; the Server polls the commands of each Client and broadcasts data to all Clients. The database analysis system uses SQL commands in Visual C# to query the various data stored in Microsoft SQL Server during the experiments. The data are displayed in pages of the system so that the data and status of the wind power system during operation can be observed and the effect of the system parameters evaluated.
Chang, You-Chuan, and 張祐銓. "Supervisory Control and Database Analysis System applied to Mobile Devices for Direct Grid-Connected Excited Synchronous Wind Power Generators with Servo Motor Control." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/b29dfx.
Full text國立中山大學
電機工程學系研究所
107
This thesis uses the Microsoft Visual Studio and Android Studio development platforms, with the Visual C# and Java programming languages, to establish a Human-Machine Interface system that includes real-time monitoring, remote monitoring and database analysis, applied to a direct grid-connected excited synchronous wind power system. The real-time supervisory control system uses asynchronous serial communication over RS485 as the connection between the system and the micro-controller. A packet format is established, and packet data are converted into graphical and numerical information displayed in the human-machine interface. All acquired data are also transmitted to the remote monitoring system and to Microsoft SQL Server. The remote supervisory system consists of two parts: an app on Android mobile devices and a Windows Forms application on a computer. Both act as Clients that communicate with the TCP/IP Server of the real-time monitoring system; the Server polls the commands of each Client and broadcasts data to all Clients. The database analysis system also consists of two parts, an app on Android mobile devices and a Windows Forms application on a computer; both use SQL commands in Java and Visual C# to query the various data stored in Microsoft SQL Server during the experiments. The data are displayed in pages of the system so that the data and status of the wind power system during operation can be observed and the effect of the system parameters evaluated. This thesis mainly designs a data analysis app for Android mobile devices: by combining numerical information with graphical waveform displays, users can query and analyze the historical data of the wind power system from mobile devices anywhere.