Dissertations / Theses on the topic 'Transaction databases'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Transaction databases.'
Next to every source in the list of references is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Aleksic, Mario. "Incremental computation methods in valid and transaction time databases." Thesis, Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/8126.
Tuck, Terry W. "Temporally Correct Algorithms for Transaction Concurrency Control in Distributed Databases." Thesis, University of North Texas, 2001. https://digital.library.unt.edu/ark:/67531/metadc2743/.
Sinha, Aman. "Memory management and transaction scheduling for large-scale databases /." Digital version accessible at:, 1999. http://wwwlib.umi.com/cr/utexas/main.
Aleksic, Mario. "Incremental computation methods in valid & transaction time databases." [S.l.] : Universität Stuttgart, Fakultät Informatik, 1996. http://www.bsz-bw.de/cgi-bin/xvms.cgi?SWB6783621.
Konana, Prabhudev Chennabasappa. "A transaction model for active and real-time databases." Diss., The University of Arizona, 1995. http://hdl.handle.net/10150/187289.
Zhang, Connie. "Static Conflict Analysis of Transaction Programs." Thesis, University of Waterloo, 2000. http://hdl.handle.net/10012/1052.
Mena, Eduardo, and Arantza Illarramendi. "Ontology-based query processing for global information systems /." Boston [u.a.] : Kluwer Acad. Publ, 2001. http://www.loc.gov/catdir/enhancements/fy0813/2001029621-d.html.
Brodsky, Lloyd. "A knowledge-based preprocessor for approximate joins in improperly designed transaction databases." Thesis, Massachusetts Institute of Technology, 1991. http://hdl.handle.net/1721.1/13744.
Dixon, Eric Richard. "Developing distributed applications with distributed heterogenous databases." Thesis, Virginia Tech, 1993. http://hdl.handle.net/10919/42748.
Xie, Wanxia. "Supporting Distributed Transaction Processing Over Mobile and Heterogeneous Platforms." Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/14073.
Cassol, Tiago Sperb. "Um estudo sobre alternativas de representação de dados temporais em bancos de dados relacionais." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2012. http://hdl.handle.net/10183/67849.
Full textTemporal information is present on a wide range of applications. Almost every application has at least one field that contains temporal data like dates or timestamps. However, traditional databases don’t have a comprehensive support to storage and query this kind of data efficiently, and DBMS with native support for temporal data are rarely available to system developers. Most of the time, regular databases are used to store application data and when temporal data is needed, it is handled using the poor support offered by standard relational DBMS. That said, the database designer must rely on good schema design so that the natural difficulty faced when dealing with temporal data on standard relational DBMS can be minimized. While some design choices may seem obvious, others are difficult to evaluate just by looking at them, therefore needing experimentation prior to being applied or not. For example, in several cases it might be difficult to measure how much will a specific design choice affect the disk space consumption, and how much will this same design choice affect overall performance. This kind of information is needed so that the database designer will be able to determine if, for example, the increased disk space consumption generated by a given choice is acceptable because of the performance enhancement it gives. The problem is that there is no study that analyses the design choices available, analyzing them through concrete data. Even when it is easy to see which of two design choices perform better in a given criterion, it is hard to see how better the better choice does, and if any other side-effect it has is acceptable. Having concrete data to support this kind of decision allows the database designer to make the choices that suits his application’s context best. 
The objective of this work is to analyze several common design choices for representing and handling different kinds of temporal data in standard SQL DBMSs, providing guidance on which alternative best suits each situation where temporal data is required. Concrete data about each of the studied alternatives are generated and analyzed, and conclusions are drawn from them.
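One of the simplest design choices in that space, representing valid time as a pair of ordinary columns and issuing time-slice queries in plain SQL, can be sketched as follows. This is a minimal illustration in Python/sqlite3; the table, column names, and sentinel date are assumptions for the example, not taken from the thesis:

```python
import sqlite3

# Valid time stored as an ordinary [valid_from, valid_to) column pair
# on a plain relational table, queried with standard SQL.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE salary (
        emp_id     INTEGER,
        amount     INTEGER,
        valid_from TEXT,   -- inclusive lower bound of the valid-time period
        valid_to   TEXT    -- exclusive upper bound; '9999-12-31' marks the current row
    )
""")
conn.executemany(
    "INSERT INTO salary VALUES (?, ?, ?, ?)",
    [(1, 50000, "2010-01-01", "2012-07-01"),
     (1, 60000, "2012-07-01", "9999-12-31")],
)

# Time-slice ("as of") query: which salary was valid on a given date?
as_of = "2011-03-15"
row = conn.execute(
    "SELECT amount FROM salary"
    " WHERE emp_id = ? AND valid_from <= ? AND ? < valid_to",
    (1, as_of, as_of),
).fetchone()
print(row[0])  # -> 50000
```

The trade-off the thesis measures is exactly the one visible here: this layout needs no DBMS extensions, but every temporal predicate must be spelled out by hand and enforced by the application.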
Burger, Albert G. "Branching transactions : a transaction model for parallel database systems." Thesis, University of Edinburgh, 1996. http://hdl.handle.net/1842/15591.
Hedman, Surlien Peter. "Economic advantages of Blockchain technology VS Relational database : An study focusing on economic advantages with Blockchain technology and relational databases." Thesis, Blekinge Tekniska Högskola, Institutionen för industriell ekonomi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-17366.
Koroncziová, Dominika. "Doplnění a optimalizace temporálního rozšíření pro PostgreSQL." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2016. http://www.nusl.cz/ntk/nusl-255417.
Wu, Jiang. "CHECKPOINTING AND RECOVERY IN DISTRIBUTED AND DATABASE SYSTEMS." UKnowledge, 2011. http://uknowledge.uky.edu/cs_etds/2.
Dias, Ricardo Jorge Freire. "Cooperative memory and database transactions." Master's thesis, Faculdade de Ciências e Tecnologia, 2008. http://hdl.handle.net/10362/4192.
Full textSince the introduction of Software Transactional Memory (STM), this topic has received a strong interest by the scientific community, as it has the potential of greatly facilitating concurrent programming by hiding many of the concurrency issues under the transactional layer, being in this way a potential alternative to the lock based constructs, such as mutexes and semaphores. The current practice of STM is based on keeping track of changes made to the memory and, if needed, restoring previous states in case of transaction rollbacks. The operations in a program that can be reversible,by restoring the memory state, are called transactional operations. The way that this reversibility necessary to transactional operations is achieved is implementation dependent on the STM libraries being used. Operations that cannot be reversed,such as I/O to external data repositories (e.g., disks) or to the console, are called nontransactional operations. Non-transactional operations are usually disallowed inside a memory transaction, because if the transaction aborts their effects cannot be undone. In transactional databases, operations like inserting, removing or transforming data in the database can be undone if executed in the context of a transaction. Since database I/O operations can be reversed, it should be possible to execute those operations in the context of a memory transaction. To achieve such purpose, a new transactional model unifying memory and database transactions into a single one was defined, implemented, and evaluated. This new transactional model satisfies the properties from both the memory and database transactional models. Programmers can now execute memory and database operations in the same transaction and in case of a transaction rollback, the transaction effects in both the memory and the database are reverted.
Zawis, John A., and David K. Hsiao. "Accessing hierarchical databases via SQL transactions in a multi-model database system." Thesis, Monterey, California. Naval Postgraduate School, 1987. http://hdl.handle.net/10945/22186.
Walpole, Dennis A., and Alphonso L. Woods. "Accessing network databases via SQL transactions in a multi-model database system." Thesis, Monterey, California. Naval Postgraduate School, 1989. http://hdl.handle.net/10945/25647.
Kunovský, Tomáš. "Temporální XML databáze." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2016. http://www.nusl.cz/ntk/nusl-255389.
Duarte, Gustavo Luiz. "Metadados para reconciliação de transações em bancos de dados autônomos." Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-27082012-153008/.
Full textThe use of data replication techniques on mobile devices allows a mobile application to share data with a server and to work on such data while disconnected. While this feature is crucial in some application domains, the reconciliation of transactions applied to the mobile replica of data proves to be challenging. The use of locking is not feasible in some application domains. However, allowing write operations to be applied on several replicas without \\emph{a priori} synchronization makes the system susceptible to update conflicts, requiring a conflict resolution mechanism. Conflict resolution is a complex and error prone task, specially when human intervention is involved. Given this scenario, we developed a transactions control model for autonomous databases that uses metadata and database versioning to provide auditing and rectification of conflict resolutions. This turns the conflict resolution into a nondestructive operation, thus reducing the impact of an incorrect conflict resolution. This work presents also a framework for transaction reconciliation that implements the proposed model. As a case study, the developed framework was used to integrate two real systems that needed data replication and disconnected updates.
Prabhu, Nitin Kumar Vijay. "Transaction processing in Mobile Database System." Diss., UMK access, 2006.
"A dissertation in computer science and informatics and telecommunications and computer networking." Advisor: Vijay Kumar. Typescript. Vita. Title from "catalog record" of the print edition. Description based on contents viewed Nov. 9, 2007. Includes bibliographical references (leaves 152-157). Online version of the print edition.
Nekroševičius, Marijonas. "Informacijos valdymo metodų analizė ir sprendimas informacijos paieškai naudojant ontologijas." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2009. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2009~D_20090304_100547-67948.
Full textThe main problem in heterogeneous database integration is data incompatibility in different databases. XML is perfect solution in data exchange between different databases as it is independent from OS, applications or hardware. To implement XML in data exchange XML must be created corresponding to the databases. This work propose use of ontologies for information retrieving from heterogenous data bases. Such method let optimize user query to avoid wasted information.
Tu, Stephen Lyle. "Fast transactions for multicore in-memory databases." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/82375.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 55-57).
Though modern multicore machines have sufficient RAM and processors to manage very large in-memory databases, it is not clear what the best strategy for dividing work among cores is. Should each core handle a data partition, avoiding the overhead of concurrency control for most transactions (at the cost of increasing it for cross-partition transactions)? Or should cores access a shared data structure instead? We investigate this question in the context of a fast in-memory database. We describe a new transactionally consistent database storage engine called MAFLINGO. Its cache-centered data structure design provides excellent base key-value store performance, to which we add a new, cache-friendly serializable protocol and support for running large, read-only transactions on a recent snapshot. On a key-value workload, the resulting system introduces negligible performance overhead as compared to a version of our system with transactional support stripped out, while achieving linear scalability versus the number of cores. It also exhibits linear scalability on TPC-C, a popular transactional benchmark. In addition, we show that a partitioning-based approach ceases to be beneficial if the database cannot be partitioned such that only a small fraction of transactions access multiple partitions, making our shared-everything approach more relevant. Finally, based on a survey of results from the literature, we argue that our implementation substantially outperforms previous main-memory databases on TPC-C benchmarks.
by Stephen Lyle Tu.
S.M.
Lawley, Michael John. "Program Transformation for Proving Database Transaction Safety." Griffith University. School of Computing and Information Technology, 2000. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20070228.150125.
Shang, Pengju. "Research in high performance and low power computer systems for data-intensive environment." Doctoral diss., University of Central Florida, 2011. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5033.
Full textID: 030423445; System requirements: World Wide Web browser and PDF reader.; Mode of access: World Wide Web.; Thesis (Ph.D.)--University of Central Florida, 2011.; Includes bibliographical references (p. 119-128).
Ph.D.
Doctorate
Electrical Engineering and Computer Science
Engineering and Computer Science
Computer Science
Hamilton, Howard Gregory. "An Examination of Service Level Agreement Attributes that Influence Cloud Computing Adoption." NSUWorks, 2015. http://nsuworks.nova.edu/gscis_etd/53.
Ogunyadeka, Adewole C. "Transactions and data management in NoSQL cloud databases." Thesis, Oxford Brookes University, 2016. https://radar.brookes.ac.uk/radar/items/c87fa049-f8c7-4b9e-a27c-3c106fcda018/1/.
Jones, Evan P. C. (Evan Philip Charles) 1981. "Fault-tolerant distributed transactions for partitioned OLTP databases." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/71477.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 103-112).
This thesis presents Dtxn, a fault-tolerant distributed transaction system designed specifically for building online transaction processing (OLTP) databases. Databases have traditionally been designed as general purpose data processing tools. By being designed only for OLTP workloads, Dtxn can be more efficient. It is designed to support very large databases by partitioning data across a cluster of commodity servers in a data center. Combining multiple servers together allows systems built with Dtxn to be cost effective, highly available, scalable, and fault-tolerant. Dtxn provides three novel features. First, it provides reusable infrastructure for building a distributed OLTP database out of single machine databases. This allows developers to take a specialized backend storage engine and use it across multiple machines, without needing to re-implement the distributed transaction infrastructure. We used Dtxn to build four different applications: a simple key/value store, a specialized TPC-C implementation, a main-memory OLTP database, and a traditional disk-based OLTP database. Second, Dtxn provides a novel concurrency control mechanism called speculative concurrency control, designed for main memory OLTP workloads that are primarily composed of transactions with a single round of communication between the application and database. Speculative concurrency control executes one transaction at a time, with no concurrency control overhead. In cases where there may be stalls due to network communication, it speculates future transactions. Our results show that this provides significantly better throughput than traditional two-phase locking, outperforming it by a factor of two on the TPC-C benchmark. Finally, Dtxn supports live migration, allowing part of the data on one server to be moved to another server while processing transactions. 
Our experiments show that our approach has nearly no visible impact on throughput or latency when moving data under moderate to high loads. It has significantly less impact than the best commercially available systems when the database is overloaded. The period of time where the throughput is reduced is less than half as long as failing over to another replica or using virtual machine migration.
by Evan Philip Charles Jones.
Ph.D.
Pang, Gene. "Scalable Transactions for Scalable Distributed Database Systems." Thesis, University of California, Berkeley, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3733329.
With the advent of the Internet and Internet-connected devices, modern applications can experience very rapid growth of users from all parts of the world. A growing user base leads to greater usage and large data sizes, so scalable database systems capable of handling the great demands are critical for applications. With the emergence of cloud computing, a major movement in the industry, modern applications depend on distributed data stores for their scalable data management solutions. Many large-scale applications utilize NoSQL systems, such as distributed key-value stores, for their scalability and availability properties over traditional relational database systems. By simplifying the design and interface, NoSQL systems can provide high scalability and performance for large data sets and high volume workloads. However, to provide such benefits, NoSQL systems sacrifice traditional consistency models and support for transactions typically available in database systems. Without transaction semantics, it is harder for developers to reason about the correctness of the interactions with the data. Therefore, it is important to support transactions for distributed database systems without sacrificing scalability.
In this thesis, I present new techniques for scalable transactions for scalable database systems. Distributed data stores need scalable transactions to take advantage of cloud computing, and to meet the demands of modern applications. Traditional techniques for transactions may not be appropriate in a large, distributed environment, so in this thesis, I describe new techniques for distributed transactions, without having to sacrifice traditional semantics or scalability.
I discuss three facets to improving transaction scalability and support in distributed database systems. First, I describe a new transaction commit protocol that reduces the response times for distributed transactions. Second, I propose a new transaction programming model that allows developers to better deal with the unexpected behavior of distributed transactions. Lastly, I present a new scalable view maintenance algorithm for convergent join views. Together, the new techniques in this thesis contribute to providing scalable transactions for modern, distributed database systems.
Cahill, Michael James. "Serializable Isolation for Snapshot Databases." University of Sydney, 2009. http://hdl.handle.net/2123/5353.
Many popular database management systems implement a multiversion concurrency control algorithm called snapshot isolation rather than providing full serializability based on locking. There are well-known anomalies permitted by snapshot isolation that can lead to violations of data consistency by interleaving transactions that would maintain consistency if run serially. Until now, the only way to prevent these anomalies was to modify the applications by introducing explicit locking or artificial update conflicts, following careful analysis of conflicts between all pairs of transactions. This thesis describes a modification to the concurrency control algorithm of a database management system that automatically detects and prevents snapshot isolation anomalies at runtime for arbitrary applications, thus providing serializable isolation. The new algorithm preserves the properties that make snapshot isolation attractive, including that readers do not block writers and vice versa. An implementation of the algorithm in a relational database management system is described, along with a benchmark and performance study, showing that the throughput approaches that of snapshot isolation in most cases.
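The kind of anomaly this thesis targets is easy to reproduce in miniature. The classic example is write skew: two transactions each read a consistent snapshot, write disjoint rows, and both commit, because snapshot isolation only detects write-write conflicts. A toy simulation in Python (no real DBMS involved; the on-call scenario and names are illustrative):

```python
# Two transactions under snapshot isolation: each reads from its own
# snapshot, and SI only detects write-write conflicts. The on-call rota
# below is supposed to always keep at least one doctor on call.
db = {"alice_on_call": 1, "bob_on_call": 1}

snap_t1 = dict(db)   # T1's snapshot, taken before either transaction commits
snap_t2 = dict(db)   # T2's snapshot, taken at the same time

# T1: Alice goes off call because, in her snapshot, Bob is still on call.
if snap_t1["alice_on_call"] + snap_t1["bob_on_call"] >= 2:
    db["alice_on_call"] = 0   # commits: T1 wrote only Alice's row

# T2: Bob reasons identically against his own (now stale) snapshot.
if snap_t2["alice_on_call"] + snap_t2["bob_on_call"] >= 2:
    db["bob_on_call"] = 0     # also commits: no write-write conflict with T1

# Result: nobody is on call, although each transaction alone preserved
# the invariant. A serializable execution would have prevented one of them.
```

Each transaction is individually correct, yet the interleaving breaks the invariant; this is precisely the class of runtime anomaly the thesis's modified concurrency control algorithm detects and prevents.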
Oza, Smita. "Implementing real-time transactions using distributed main memory databases." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape16/PQDD_0031/MQ27056.pdf.
Oza, Smita Carleton University Dissertation Computer Science. "Implementing real-time transactions using distributed main memory databases." Ottawa, 1997.
Niles, Duane Francis Jr. "Improving Performance of Highly-Programmable Concurrent Applications by Leveraging Parallel Nesting and Weaker Isolation Levels." Thesis, Virginia Tech, 2015. http://hdl.handle.net/10919/54557.
Master of Science
Ongkasuwan, Patarawan. "Transaction synchronization and privacy aspect in blockchain decentralized applications." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-272134.
Ideas and techniques for cryptography and decentralized storage have seen enormous growth in many industries, as they have been adopted to improve operations within organizations. The technology known as Blockchain provides an efficient transparency solution. Blockchain has generally been used for digital currency, or cryptocurrency, since its inception. One of the best-known Blockchain protocols is Ethereum, which introduced the smart contract to enable Blockchain to execute conditions rather than merely act as storage. Applications that use this technology are called 'Dapps', or 'decentralized applications'. There are, however, ongoing arguments about synchronization in such systems. System synchronization is currently extremely important for applications, because the time a user must wait for a transaction to be verified can hurt the user experience. Several studies have shown that privacy leakage occurs, even though Blockchain provides some security, as a consequence of the traditional transaction model, which requires approval through an intermediary institution. For example, a bank must process transactions through many constituent parties before receiving the final confirmation, which forces the user to wait a considerable time. This thesis describes the challenge of transaction synchronization between the user and the smart contract, as well as the question of a privacy strategy for the system and compliance. To approach these two challenges, the first task separates different events and evaluates the results against an alternative solution. This is done by testing the smart contract to find the best gas-price outcome, which varies over time. In the Ethereum protocol, the gas price is one of the best levers for reducing transaction time to meet user expectations. The gas price is affected by the code structure and by the network.
In the smart contract, tests are run on two cases, addressing platform problems such as front-running and user experience while reducing costs. It is also shown that collecting the fee before participation in an auction can prevent the front-running problem. The second case aims to show that freezing the bid amount is the best way to improve the user's experience of an online auction. The second challenge focuses mainly on the privacy strategy and risk management of the platform, which involves identifying possible solutions for all risk situations, as well as detecting, anticipating, and preventing them. Strategies such as securing the smart-contract structure, strengthening the encryption method in the database, and designing a term sheet, agreement, and consent process help prevent the system's vulnerabilities. This research therefore aims to improve and investigate an online auction platform that uses a smart contract on Blockchain to provide an engaging user experience.
Sauer, Caetano [Verfasser]. "Modern techniques for transaction-oriented database recovery / Caetano Sauer." München : Verlag Dr. Hut, 2017. http://d-nb.info/1140977644/34.
Youssef, Mohamed Wagdy Abdel Fattah. "Transaction behaviour in large database environments : a methodological approach." Thesis, City University London, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.358945.
Aldarmi, Saud Ahmed. "Scheduling soft-deadline real-time transactions." Thesis, University of York, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.310917.
Takkar, Sonia. "Scheduling real-time transactions in parallel database systems." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape16/PQDD_0025/MQ26975.pdf.
Takkar, Sonia Carleton University Dissertation Computer Science. "Scheduling real-time transactions in parallel database systems." Ottawa, 1997.
On, Sai Tung. "Efficient transaction recovery on flash disks." HKBU Institutional Repository, 2010. http://repository.hkbu.edu.hk/etd_ra/1170.
Barga, Roger S. "A reflective framework for implementing extended transactions /." Full text open access at:, 1999. http://content.ohsu.edu/u?/etd,205.
Wang, Xiangyang. "The development of a knowledge-based database transaction design assistant." Thesis, Cardiff University, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.359755.
Savasere, Ashok. "Efficient algorithms for mining association rules in large databases of customer transactions." Diss., Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/8260.
Yan, Cong S. M. Massachusetts Institute of Technology. "Exploiting fine-grain parallelism in transactional database systems." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/101592.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 47-50).
Current database engines designed for conventional multicore systems exploit a fraction of the parallelism available in transactional workloads. Specifically, database engines only exploit inter-transaction parallelism: they use speculation to concurrently execute multiple, potentially-conflicting database transactions while maintaining atomicity and isolation. However, they do not exploit intra-transaction parallelism: each transaction is executed sequentially on a single thread. While fine-grain intra-transaction parallelism is often abundant, it is too costly to exploit in conventional multicores. Software would need to implement fine-grain speculative execution and scheduling, introducing prohibitive overheads that would negate the benefits of additional intra-transaction parallelism. In this thesis, we leverage novel hardware support to design and implement a database engine that effectively exploits both inter- and intra-transaction parallelism. Specifically, we use Swarm, a new parallel architecture that exploits fine-grained and ordered parallelism. Swarm executes tasks speculatively and out of order, but commits them in order. Integrated hardware task queueing and speculation mechanisms allow Swarm to speculate thousands of tasks ahead of the earliest active task and reduce task management overheads. We modify Silo, a state-of-the-art in-memory database engine, to leverage Swarm's features. The resulting database engine, which we call SwarmDB, has several key benefits over Silo: it eliminates software concurrency control, reducing overheads; it efficiently executes tasks within a database transaction in parallel; it reduces conflicts; and it reduces the amount of work that needs to be discarded and re-executed on each conflict. We evaluate SwarmDB on simulated Swarm systems of up to 64 cores. At 64 cores, SwarmDB outperforms Silo by 6.7x on TPC-C and 6.9x on TPC-E, and achieves near-linear scalability.
by Cong Yan.
S.M.
Smékal, Luděk. "Získávání znalostí z textových dat." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2007. http://www.nusl.cz/ntk/nusl-412756.
Gong, Daoya. "Transaction process modeling and implementation for 3-tiered Web-based database systems." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0023/MQ62128.pdf.
Full textNavarro, Martín Joan. "From cluster databases to cloud storage: Providing transactional support on the cloud." Doctoral thesis, Universitat Ramon Llull, 2015. http://hdl.handle.net/10803/285655.
Over the past three decades, technological constraints (for example, the capacity of storage devices or the bandwidth of communication networks) and growing user demands (information structures, data volumes) have driven the evolution of distributed databases. Since the first flat-file data repositories developed in the eighties, there have been important advances in concurrency control algorithms, replication protocols, and transaction management. However, the modern data storage challenges posed by Big Data and cloud computing, aimed at overcoming the scalability and elasticity limitations of static databases, are pushing practitioners to relax some important properties of classic transactional systems, which excludes several applications that cannot fit this strategy because of their strong transactional dependence. The purpose of this thesis is to address two important challenges still latent in the field of distributed databases: (1) the scalability limitations of transactional systems and (2) transactional support in cloud storage repositories. Analyzing the traditional concurrency control and replication techniques used by classic databases to support transactions is fundamental to identifying the reasons these systems degrade their performance as the number of nodes and/or the amount of data grows. This analysis is also aimed at justifying the design of cloud repositories that have deliberately set transactional support aside.
Indeed, bringing the cloud storage paradigm closer to applications that depend heavily on transactions is crucial for their adaptation to current requirements in terms of data volumes and business models. This thesis begins by proposing a protocol simulator for static distributed databases, which serves as a basis for reviewing and comparing the performance of existing concurrency control protocols and replication techniques. Regarding the scalability of databases and transactions, the effects of executing different transaction profiles under different conditions are studied. This analysis continues with a review of existing cloud storage repositories, which promise to fit dynamic environments requiring high scalability and availability, and which makes it possible to evaluate the parameters and features these systems have sacrificed in order to meet current large-scale data storage needs. To explore the possibilities offered by the cloud computing paradigm in a real scenario, the thesis presents the development of a data storage architecture inspired by cloud computing to store the information generated in Smart Grids. Specifically, replication techniques from transactional databases and epidemic propagation are combined with the design principles used to build cloud data repositories. The lessons gathered from studying the replication and concurrency control protocols in the database simulator, together with the experience gained from developing the data repository for Smart Grids, lead to what we have coined Epidemia: a Big Data storage infrastructure conceived to provide transactional support in the cloud.
In addition to inheriting the scalability benefits of highly scalable cloud repositories, Epidemia includes a transaction management layer that forwards client transactions to a hierarchical set of data partitions, which allows the system to offer different consistency levels and to elastically adapt its configuration to new workload demands. Finally, the experimental results demonstrate the feasibility of our contribution and encourage practitioners to continue working in this area.
Over the past three decades, technology constraints (e.g., the capacity of storage devices, the bandwidth of communication networks) and an ever-increasing set of user demands (e.g., information structures, data volumes) have driven the evolution of distributed databases. Since the flat-file data repositories developed in the early eighties, there have been important advances in concurrency control algorithms, replication protocols, and transaction management. However, modern data storage concerns posed by Big Data and cloud computing—aimed at overcoming the scalability and elasticity limitations of classic databases—are pushing practitioners to relax some important properties of classic transactional systems, which excludes several applications that cannot fit this strategy because of their intrinsic transactional nature. The purpose of this thesis is to address two important challenges still latent in distributed databases: (1) the scalability limitations of transactional databases and (2) providing transactional support on cloud-based storage repositories. Analyzing the traditional concurrency control and replication techniques used by classic databases to support transactions is critical to identifying why these systems degrade in throughput as the number of nodes and/or the amount of data grows. Moreover, this analysis justifies the design rationale behind cloud repositories, in which transactional support has generally been neglected. Furthermore, enabling applications that depend strongly on transactions to take advantage of the cloud storage paradigm is crucial for their adaptation to current data demands and business models. This dissertation starts by proposing a custom protocol simulator for static distributed databases, which serves as a basis for revising and comparing the performance of existing concurrency control protocols and replication techniques. 
As this thesis is especially concerned with transactions, the effects of different transaction profiles on database scalability are studied under different conditions. This analysis is followed by a review of existing cloud storage repositories—which claim to fit highly dynamic environments requiring scalability and availability—leading to an evaluation of the parameters and features these systems have sacrificed in order to meet current large-scale data storage demands. To further explore the possibilities of the cloud computing paradigm in a real-world scenario, a cloud-inspired architecture to store the data generated by Smart Grids is presented. More specifically, the proposed architecture combines classic database replication techniques and epidemic update propagation with the design principles of cloud-based storage. The key insights collected while prototyping the replication and concurrency control protocols in the database simulator, together with the experience gained from building a large-scale storage repository for Smart Grids, are wrapped up into what we have coined Epidemia: a Big Data storage infrastructure conceived to provide transactional support in the cloud. In addition to inheriting the benefits of highly scalable cloud repositories, Epidemia includes a transaction management layer that forwards client transactions to a hierarchical set of data partitions, which allows the system to offer different consistency levels and to elastically adapt its configuration to incoming workloads. Finally, experimental results highlight the feasibility of our contribution and encourage practitioners to pursue further research in this area.
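The abstract above describes a transaction layer that forwards client transactions to a hierarchical set of data partitions, with a consistency level chosen per transaction. A minimal sketch of that routing idea follows; every name, class, and rule here is an illustrative assumption, not taken from the Epidemia design itself.

```python
from dataclasses import dataclass, field

@dataclass
class Partition:
    """One node in a hierarchy of data partitions (hypothetical layout)."""
    name: str
    keys: set
    children: list = field(default_factory=list)

    def route(self, key):
        # Descend to the most specific partition responsible for the key;
        # keys not claimed by any child stay with the current partition.
        for child in self.children:
            if key in child.keys:
                return child.route(key)
        return self

def forward(root, txn_keys, consistency="strong"):
    """Map each key a transaction touches to its owning partition.
    The consistency argument is a placeholder: 'strong' might route all
    writes through one coordinator partition, 'eventual' might let each
    partition apply its updates independently."""
    return {key: root.route(key).name for key in txn_keys}
```

For example, with a root partition owning keys {a, b, c, d} and two children owning {a, b} and {c}, a transaction touching a, c, and d would be split across all three partitions.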
Quantock, David E. "The real-time roll-back and recovery of transactions in database systems." Thesis, Monterey, California. Naval Postgraduate School, 1989. http://hdl.handle.net/10945/27234.
Full text
A modern database transaction may involve a long series of updates, deletions, and insertions of data, and a complex mix of these primary database operations. Due to its length and complexity, the transaction requires back-up and recovery procedures. The back-up procedure allows the user to either commit or abort a lengthy and complex transaction without compromising the integrity of the data. The recovery procedure allows the system to maintain data integrity during the execution of a transaction, should the transaction be interrupted by the system. With both the back-up and recovery procedures, the modern database system is able to provide consistent data throughout the life-span of a database without ever corrupting either its data values or its data types. However, implementing back-up and recovery procedures in a database system is a difficult and involved effort, since it affects the base data as well as the metadata of the database. Further, it affects the state of the database system. This thesis focuses mainly on the design trade-offs and issues of implementing an effective and efficient mechanism for back-up and recovery in the multimodel, multilingual, and multi-backend database system. Keywords: Database management systems.
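The commit/abort behavior described in this abstract is commonly realized with an undo log: the prior value of every item a transaction touches is backed up, so the transaction can commit (discard the log) or abort (replay it in reverse). The sketch below illustrates that general technique only; the class and method names are assumptions, not the thesis's actual mechanism.

```python
class Transaction:
    """Minimal undo-log transaction over an in-memory key-value store."""

    def __init__(self, store):
        self.store = store      # live database state (a plain dict here)
        self.undo_log = []      # (key, old_value) pairs, oldest first
        self.active = True

    def write(self, key, value):
        assert self.active, "transaction already finished"
        # Back up the prior value (None marks a fresh insert) before updating.
        self.undo_log.append((key, self.store.get(key)))
        self.store[key] = value

    def commit(self):
        self.undo_log.clear()   # changes become permanent
        self.active = False

    def abort(self):
        # Recovery: replay the log newest-first, restoring or deleting keys,
        # so the store is returned to its pre-transaction state.
        for key, old in reversed(self.undo_log):
            if old is None:
                del self.store[key]
            else:
                self.store[key] = old
        self.undo_log.clear()
        self.active = False
```

An aborted transaction leaves the store exactly as it found it, which is the integrity guarantee the abstract attributes to the back-up procedure.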
Yu, Heng. "On Decoupling Concurrency Control from Recovery in Database Repositories." Thesis, University of Waterloo, 2005. http://hdl.handle.net/10012/1084.
Full text
Because it is the possibility of transaction aborts for deadlock resolution that makes the recovery subsystem necessary, we choose the deadlock-free tree locking (TL) scheme for our purpose. With knowledge of the transaction workload, efficacious lock trees for runtime control can be determined at compile time. We have designed compile-time algorithms to generate the lock tree and other relevant data structures, and runtime locking/unlocking algorithms based on these structures. We have further explored how to insert the lock steps into the transaction types at compile time.
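The tree-locking discipline mentioned above can be summarized as: a transaction may take its first lock anywhere in the tree, every subsequent lock must be on a child of a node it currently holds, and a released node may never be re-locked. This ordering rules out deadlock. A minimal runtime-enforcement sketch follows; the class names and tree layout are illustrative assumptions, not the compile-time structures of the thesis.

```python
import threading

class TLNode:
    """A lockable node in the lock tree (illustrative)."""
    def __init__(self, name, parent=None):
        self.name, self.parent = name, parent
        self.lock = threading.Lock()

class TLTransaction:
    """Enforces the tree-locking (TL) protocol rules at runtime."""

    def __init__(self):
        self.held = set()       # nodes currently locked
        self.released = set()   # nodes this transaction may never re-lock
        self.first = True       # the first lock may be taken anywhere

    def acquire(self, node):
        if node in self.released:
            raise RuntimeError("TL violation: re-locking a released node")
        if not self.first and node.parent not in self.held:
            raise RuntimeError("TL violation: parent lock not held")
        node.lock.acquire()
        self.held.add(node)
        self.first = False

    def release(self, node):
        node.lock.release()
        self.held.discard(node)
        self.released.add(node)
```

Because every conflict between two transactions is resolved in the order they locked their common highest node, no cycle of waits can form, which is why TL needs no abort-based deadlock resolution.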
To conduct our simulation experiments evaluating the performance of TL, we designed two workloads. The first is drawn from the OLTP benchmark TPC-C; the second from the open-source operating system MINIX. Our experimental results show that TL produces better throughput than traditional two-phase locking (2PL) when transactions are write-only, and that for main-memory data TL performs comparably to 2PL even in workloads with many reads.
Ahmed, Shamim. "Transaction and version management in object-oriented database management systems for collaborative engineering applications." Thesis, Massachusetts Institute of Technology, 1991. http://hdl.handle.net/1721.1/13854.
Full text