Dissertations / Theses on the topic 'Transactional databases'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Transactional databases.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Navarro, Martín Joan. "From cluster databases to cloud storage: Providing transactional support on the cloud." Doctoral thesis, Universitat Ramon Llull, 2015. http://hdl.handle.net/10803/285655.
Over the past three decades, technology constraints (e.g., the capacity of storage devices, communication network bandwidth) and an ever-increasing set of user demands (e.g., information structures, data volumes) have driven the evolution of distributed databases. Since the flat-file data repositories developed in the early eighties, there have been important advances in concurrency control algorithms, replication protocols, and transaction management. However, modern concerns in data storage posed by Big Data and cloud computing, aimed at overcoming the scalability and elasticity limitations of classic databases, are pushing practitioners to relax some important properties featured by transactions, which excludes several applications that cannot fit this strategy due to their intrinsically transactional nature. The purpose of this thesis is to address two important challenges still latent in distributed databases: (1) the scalability limitations of transactional databases and (2) providing transactional support on cloud-based storage repositories. Analyzing the traditional concurrency control and replication techniques used by classic databases to support transactions is critical to identify the reasons that make these systems degrade their throughput when the number of nodes and/or the amount of data grows. This analysis also serves to justify the design rationale behind cloud repositories, in which transactions have generally been neglected. Furthermore, enabling applications that are strongly dependent on transactions to take advantage of the cloud storage paradigm is crucial for their adaptation to current data demands and business models. This dissertation starts by proposing a custom protocol simulator for static distributed databases, which serves as a basis for revising and comparing the performance of existing concurrency control protocols and replication techniques. As this thesis is especially concerned with transactions, the effects of different transaction profiles, run under different conditions, on database scalability are studied. This analysis is followed by a review of existing cloud storage repositories (which claim to be highly dynamic, scalable, and available), leading to an evaluation of the parameters and features that these systems have sacrificed in order to meet current large-scale data storage demands. To further explore the possibilities of the cloud computing paradigm in a real-world scenario, a cloud-inspired approach to storing data from Smart Grids is presented. More specifically, the proposed architecture combines classic database replication techniques and epidemic update propagation with the design principles of cloud-based storage. The key insights collected when prototyping the replication and concurrency control protocols in the database simulator, together with the experiences derived from building a large-scale storage repository for Smart Grids, are wrapped up into what we have coined Epidemia: a storage infrastructure conceived to provide transactional support on the cloud. In addition to inheriting the benefits of highly scalable cloud repositories, Epidemia includes a transaction management layer that forwards client transactions to a hierarchical set of data partitions, which allows the system to offer different consistency levels and elastically adapt its configuration to incoming workloads. Finally, experimental results highlight the feasibility of our contribution and encourage practitioners to pursue further research in this area.
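The transaction-routing idea in the abstract above can be pictured with a small sketch. Everything below is invented for illustration (the class names and the hash-based routing are assumptions, not code from the thesis): a client-facing layer forwards writes down a partition hierarchy and exposes a per-transaction consistency level, with epidemic propagation left as a stub.

    # Hypothetical sketch, not the thesis implementation.
    class Partition:
        def __init__(self, children=()):
            self.children = list(children)  # empty for leaf partitions
            self.store = {}                 # leaf-level key/value data

    class Router:
        def __init__(self, root, replicas=()):
            self.root = root
            self.replicas = list(replicas)  # peers reached by epidemic propagation

        def leaf_for(self, key):
            node = self.root
            while node.children:            # descend the partition hierarchy
                node = node.children[hash(key) % len(node.children)]
            return node

        def execute(self, writes, consistency="eventual"):
            for key, value in writes.items():
                self.leaf_for(key).store[key] = value  # apply at the owning leaf
            if consistency == "strong":
                for peer in self.replicas:             # push synchronously
                    peer.execute(writes)
            # under "eventual", a background gossip task would ship the writes later

    root = Partition(children=[Partition(), Partition()])
    Router(root).execute({"meter:42": 17.3})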
Araujo Neto, Afonso Comba de. "Security Benchmarking of Transactional Systems." Biblioteca Digital de Teses e Dissertações da UFRGS, 2012. http://hdl.handle.net/10183/143292.
Most organizations nowadays depend on some kind of computer infrastructure to manage business-critical activities. This dependence grows as computer systems become more reliable and useful, but so do the complexity and size of systems. Transactional systems, which are database-centered applications used by most organizations to support daily tasks, are no exception. A typical solution to cope with systems complexity is to delegate the software development task and to use existing solutions independently developed and maintained (either proprietary or open source). The multiplicity of software and component alternatives available has boosted the interest in suitable benchmarks, able to assist in the selection of the best candidate solutions with respect to several attributes. However, the huge success of performance and dependability benchmarking markedly contrasts with the small advances in security benchmarking, which has only sparsely been studied in the past. This thesis discusses the security benchmarking problem and its main characteristics, particularly comparing these with other successful benchmarking initiatives, like performance and dependability benchmarking. Based on this analysis, a general framework for security benchmarking is proposed. This framework, suitable for most types of software systems and application domains, includes two main phases: security qualification and trustworthiness benchmarking. Security qualification is a process designed to evaluate the most obvious and identifiable security aspects of the system, dividing the evaluated targets into acceptable or unacceptable, given the specific security requirements of the application domain. Trustworthiness benchmarking, on the other hand, consists of an evaluation process that is applied over the qualified targets to estimate the probability of the existence of hidden or hard-to-detect security issues in a system (the main goal is to cope with the uncertainties related to security aspects). The framework is thoroughly demonstrated and evaluated in the context of transactional systems, which can be divided into two parts: the infrastructure and the business applications. As these parts have significantly different security goals, the framework is used to develop methodologies and approaches that fit their specific characteristics. First, the thesis proposes a security benchmark for transactional systems infrastructures and describes, discusses, and justifies all the steps of the process. The benchmark is applied to four distinct real infrastructures, and the results of the assessment are thoroughly analyzed. Still in the context of transactional systems infrastructures, the thesis also addresses the problem of selecting software components. This is complex, as the security of an infrastructure cannot be evaluated before deployment. The proposed tool, aimed at helping in the selection of basic software packages to support the infrastructure, is used to evaluate seven different software packages, representative alternatives for the deployment of real infrastructures. Finally, the thesis discusses the problem of designing trustworthiness benchmarks for business applications, focusing specifically on the case of web applications. First, a benchmarking approach based on static code analysis tools is proposed.
Several experiments are presented to evaluate the effectiveness of the proposed metrics, including a representative experiment where the challenge was to select the most secure application among a set of seven web forums. Based on the analysis of the limitations of that approach, a generic approach for the definition of trustworthiness benchmarks for web applications is then defined.
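The two-phase structure of the framework described above can be summarized in a few lines. The checks and indicators below are placeholders (the thesis defines the actual criteria); only the qualify-then-rank shape is taken from the abstract.

    # Illustrative shape only; the concrete requirements and metrics are the thesis's contribution.
    def qualify(target, requirements):
        # Phase 1: reject targets failing any explicit security requirement.
        return all(check(target) for check in requirements)

    def trustworthiness(target, indicators):
        # Phase 2: estimate the likelihood of hidden issues from weighted
        # indicators (e.g., static-analysis alert density); higher is better.
        raw = sum(weight * measure(target) for weight, measure in indicators)
        return 1.0 / (1.0 + raw)

    def benchmark(targets, requirements, indicators):
        qualified = [t for t in targets if qualify(t, requirements)]
        return sorted(qualified, key=lambda t: trustworthiness(t, indicators),
                      reverse=True)   # most trustworthy candidates first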
Li, Yanrong. "Techniques for improving clustering and association rules mining from very large transactional databases." Thesis, Curtin University, 2009. http://hdl.handle.net/20.500.11937/907.
Liu, Yufan. "A Survey Of Persistent Graph Databases." Kent State University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=kent1395166105.
Niles, Duane Francis Jr. "Improving Performance of Highly-Programmable Concurrent Applications by Leveraging Parallel Nesting and Weaker Isolation Levels." Thesis, Virginia Tech, 2015. http://hdl.handle.net/10919/54557.
Master of Science
Bejaoui, Lotfi. "Qualitative topological relationships for objects with possibly vague shapes: implications on the specification of topological integrity constraints in transactional spatial databases and in spatial data warehouses." PhD thesis, Université Blaise Pascal - Clermont-Ferrand II, 2009. http://tel.archives-ouvertes.fr/tel-00725614.
Bejaoui, Lotfi. "Qualitative Topological Relationships for Objects with Possibly Vague Shapes: Implications on the Specification of Topological Integrity Constraints in Transactional Spatial Databases and in Spatial Data Warehouses." Thesis, Université Laval, 2009. http://www.theses.ulaval.ca/2009/26348/26348.pdf.
Burger, Albert G. "Branching transactions : a transaction model for parallel database systems." Thesis, University of Edinburgh, 1996. http://hdl.handle.net/1842/15591.
Dias, Ricardo Jorge Freire. "Cooperative memory and database transactions." Master's thesis, Faculdade de Ciências e Tecnologia, 2008. http://hdl.handle.net/10362/4192.
Since the introduction of Software Transactional Memory (STM), this topic has received strong interest from the scientific community, as it has the potential to greatly facilitate concurrent programming by hiding many concurrency issues under the transactional layer, making it a potential alternative to lock-based constructs such as mutexes and semaphores. The current practice of STM is based on keeping track of changes made to the memory and, if needed, restoring previous states in case of transaction rollbacks. The operations in a program that can be reversed by restoring the memory state are called transactional operations. How the reversibility required by transactional operations is achieved depends on the STM library being used. Operations that cannot be reversed, such as I/O to external data repositories (e.g., disks) or to the console, are called non-transactional operations. Non-transactional operations are usually disallowed inside a memory transaction, because if the transaction aborts their effects cannot be undone. In transactional databases, operations like inserting, removing, or transforming data in the database can be undone if executed in the context of a transaction. Since database I/O operations can be reversed, it should be possible to execute those operations in the context of a memory transaction. To achieve this, a new transactional model unifying memory and database transactions into a single one was defined, implemented, and evaluated. This new transactional model satisfies the properties of both the memory and database transactional models. Programmers can now execute memory and database operations in the same transaction, and in case of a transaction rollback, the transaction's effects on both the memory and the database are reverted.
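The unified model described in this abstract can be miniaturized as follows. This is a toy (the thesis integrates a real STM with a real DBMS; the names and structure here are invented): one transaction object records undo information for both memory writes and database operations, so a rollback reverts both together.

    # Toy illustration of a unified memory+database transaction; not the thesis code.
    class UnifiedTxn:
        def __init__(self, memory, db):
            self.memory, self.db = memory, db
            self.mem_undo = {}   # previous values of written memory cells
            self.db_undo = []    # compensating database operations

        def write(self, cell, value):
            # note: a cell that legitimately stored None is conflated with "absent"
            self.mem_undo.setdefault(cell, self.memory.get(cell))
            self.memory[cell] = value

        def db_insert(self, table, row):
            self.db[table].append(row)
            self.db_undo.append(lambda: self.db[table].remove(row))

        def rollback(self):
            for cell, old in self.mem_undo.items():
                if old is None:
                    self.memory.pop(cell, None)
                else:
                    self.memory[cell] = old
            for undo in reversed(self.db_undo):  # undo DB ops in reverse order
                undo()

    mem, db = {}, {"orders": []}
    txn = UnifiedTxn(mem, db)
    txn.write("counter", 1)
    txn.db_insert("orders", {"id": 7})
    txn.rollback()
    assert mem == {} and db == {"orders": []}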
Aleksic, Mario. "Incremental computation methods in valid and transaction time databases." Thesis, Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/8126.
Silva, Pedro Paulo de Souza Bento da. "Uma abordagem transacional para o tratamento de exceções em processos de negócio." Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-24022014-094221/.
With the aim of becoming more efficient, many organizations -- companies, governmental entities, research centers, etc. -- choose to use software tools to support the accomplishment of their processes. An increasingly popular option is the use of Business Process Management (BPM) Systems, which are generic tools, that is, not specific to any organization and highly configurable to the domain needs of the organization at hand. One of the main responsibilities of BPM Systems is to provide exception handling mechanisms for the execution of business process instances. Exceptions, if ignored or incorrectly handled, may cause instance executions to abort and, depending on the gravity of the situation, induce failures in BPM Systems or even in subjacent systems (the operating system, database management systems, etc.). Thus, exception handling mechanisms aim to resolve the exceptional situation or to stop its collateral effects by ensuring, at least, a graceful degradation of the system. In this work, we study some of the main deficiencies of present exception handling models -- in the context of BPM Systems -- and present solutions based on Advanced Transaction Models to bypass them. We do this through the improvement of the exception handling mechanisms of WED-flow, an approach for business process modeling and instance execution management. Lastly, we extend the WED-tool, an implementation of the WED-flow approach, through the development of its failure recovery manager.
Zawis, John A., and David K. Hsiao. "Accessing hierarchical databases via SQL transactions in a multi-model database system." Thesis, Monterey, California. Naval Postgraduate School, 1987. http://hdl.handle.net/10945/22186.
Walpole, Dennis A., and Alphonso L. Woods. "Accessing network databases via SQL transactions in a multi-model database system." Thesis, Monterey, California. Naval Postgraduate School, 1989. http://hdl.handle.net/10945/25647.
Tuck, Terry W. "Temporally Correct Algorithms for Transaction Concurrency Control in Distributed Databases." Thesis, University of North Texas, 2001. https://digital.library.unt.edu/ark:/67531/metadc2743/.
Mena, Eduardo, and Arantza Illarramendi. "Ontology-based query processing for global information systems." Boston [u.a.]: Kluwer Acad. Publ, 2001. http://www.loc.gov/catdir/enhancements/fy0813/2001029621-d.html.
Tu, Stephen Lyle. "Fast transactions for multicore in-memory databases." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/82375.
Full textCataloged from PDF version of thesis.
Includes bibliographical references (p. 55-57).
Though modern multicore machines have sufficient RAM and processors to manage very large in-memory databases, it is not clear what the best strategy for dividing work among cores is. Should each core handle a data partition, avoiding the overhead of concurrency control for most transactions (at the cost of increasing it for cross-partition transactions)? Or should cores access a shared data structure instead? We investigate this question in the context of a fast in-memory database. We describe a new transactionally consistent database storage engine called MAFLINGO. Its cache-centered data structure design provides excellent base key-value store performance, to which we add a new, cache-friendly serializable protocol and support for running large, read-only transactions on a recent snapshot. On a key-value workload, the resulting system introduces negligible performance overhead as compared to a version of our system with transactional support stripped out, while achieving linear scalability versus the number of cores. It also exhibits linear scalability on TPC-C, a popular transactional benchmark. In addition, we show that a partitioning-based approach ceases to be beneficial if the database cannot be partitioned such that only a small fraction of transactions access multiple partitions, making our shared-everything approach more relevant. Finally, based on a survey of results from the literature, we argue that our implementation substantially outperforms previous main-memory databases on TPC-C benchmarks.
by Stephen Lyle Tu.
S.M.
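A reader who wants the flavor of the serializable protocol mentioned in the Tu abstract above can look at the sketch below. It shows plain optimistic read-set validation, a standard building block of such engines; the actual MAFLINGO protocol is epoch- and lock-based at commit, so treat this as a simplification, not the thesis design.

    # Simplified optimistic concurrency control: validate read versions at commit.
    # Single-threaded sketch; a real engine makes validate+install atomic.
    class Record:
        def __init__(self, value):
            self.value, self.version = value, 0

    def run_txn(store, reads, writes):
        read_versions = {k: store[k].version for k in reads}  # versions observed
        # ... transaction logic would compute `writes` from the values read ...
        if any(store[k].version != v for k, v in read_versions.items()):
            return False                    # conflict detected: abort, caller retries
        for k, value in writes.items():     # install writes, bump versions
            rec = store[k]
            rec.value, rec.version = value, rec.version + 1
        return True

    store = {"x": Record(0), "y": Record(0)}
    assert run_txn(store, reads=["x"], writes={"y": 1})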
Yan, Cong. "Exploiting fine-grain parallelism in transactional database systems." S.M. thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/101592.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 47-50).
Current database engines designed for conventional multicore systems exploit a fraction of the parallelism available in transactional workloads. Specifically, database engines only exploit inter-transaction parallelism: they use speculation to concurrently execute multiple, potentially-conflicting database transactions while maintaining atomicity and isolation. However, they do not exploit intra-transaction parallelism: each transaction is executed sequentially on a single thread. While fine-grain intra-transaction parallelism is often abundant, it is too costly to exploit in conventional multicores. Software would need to implement fine-grain speculative execution and scheduling, introducing prohibitive overheads that would negate the benefits of additional intra-transaction parallelism. In this thesis, we leverage novel hardware support to design and implement a database engine that effectively exploits both inter- and intra-transaction parallelism. Specifically, we use Swarm, a new parallel architecture that exploits fine-grained and ordered parallelism. Swarm executes tasks speculatively and out of order, but commits them in order. Integrated hardware task queueing and speculation mechanisms allow Swarm to speculate thousands of tasks ahead of the earliest active task and reduce task management overheads. We modify Silo, a state-of-the-art in-memory database engine, to leverage Swarm's features. The resulting database engine, which we call SwarmDB, has several key benefits over Silo: it eliminates software concurrency control, reducing overheads; it efficiently executes tasks within a database transaction in parallel; it reduces conflicts; and it reduces the amount of work that needs to be discarded and re-executed on each conflict. We evaluate SwarmDB on simulated Swarm systems of up to 64 cores. At 64 cores, SwarmDB outperforms Silo by 6.7x on TPC-C and 6.9x on TPC-E, and achieves near-linear scalability.
by Cong Yan.
S.M.
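Swarm is a hardware architecture, so nothing in a few lines of code reproduces it; the sketch below only illustrates the ordering contract the Yan abstract relies on: tasks may execute speculatively in any order, but their effects are committed in timestamp order.

    # Illustration of "execute out of order, commit in order"; not SwarmDB code.
    import concurrent.futures

    def run_ordered(tasks):
        # tasks: (timestamp, fn) pairs; fn returns an effect applied at commit.
        with concurrent.futures.ThreadPoolExecutor() as pool:
            futures = [(ts, pool.submit(fn)) for ts, fn in tasks]  # speculative run
        for ts, fut in sorted(futures, key=lambda p: p[0]):        # in-order commit
            fut.result()()

    log = []
    run_ordered([(2, lambda: lambda: log.append("b")),
                 (1, lambda: lambda: log.append("a"))])
    assert log == ["a", "b"]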
Prabhu, Nitin Kumar Vijay. "Transaction processing in Mobile Database System." Diss., University of Missouri-Kansas City, 2006.
"A dissertation in computer science and informatics and telecommunications and computer networking." Advisor: Vijay Kumar. Typescript. Vita. Title from "catalog record" of the print edition. Description based on contents viewed Nov. 9, 2007. Includes bibliographical references (leaves 152-157). Online version of the print edition.
Ogunyadeka, Adewole C. "Transactions and data management in NoSQL cloud databases." Thesis, Oxford Brookes University, 2016. https://radar.brookes.ac.uk/radar/items/c87fa049-f8c7-4b9e-a27c-3c106fcda018/1/.
Jones, Evan P. C. (Evan Philip Charles) 1981. "Fault-tolerant distributed transactions for partitioned OLTP databases." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/71477.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 103-112).
This thesis presents Dtxn, a fault-tolerant distributed transaction system designed specifically for building online transaction processing (OLTP) databases. Databases have traditionally been designed as general purpose data processing tools. By being designed only for OLTP workloads, Dtxn can be more efficient. It is designed to support very large databases by partitioning data across a cluster of commodity servers in a data center. Combining multiple servers together allows systems built with Dtxn to be cost effective, highly available, scalable, and fault-tolerant. Dtxn provides three novel features. First, it provides reusable infrastructure for building a distributed OLTP database out of single machine databases. This allows developers to take a specialized backend storage engine and use it across multiple machines, without needing to re-implement the distributed transaction infrastructure. We used Dtxn to build four different applications: a simple key/value store, a specialized TPC-C implementation, a main-memory OLTP database, and a traditional disk-based OLTP database. Second, Dtxn provides a novel concurrency control mechanism called speculative concurrency control, designed for main memory OLTP workloads that are primarily composed of transactions with a single round of communication between the application and database. Speculative concurrency control executes one transaction at a time, with no concurrency control overhead. In cases where there may be stalls due to network communication, it speculates future transactions. Our results show that this provides significantly better throughput than traditional two-phase locking, outperforming it by a factor of two on the TPC-C benchmark. Finally, Dtxn supports live migration, allowing part of the data on one server to be moved to another server while processing transactions. Our experiments show that our approach has nearly no visible impact on throughput or latency when moving data under moderate to high loads. It has significantly less impact than the best commercially available systems when the database is overloaded. The period of time where the throughput is reduced is less than half as long as failing over to another replica or using virtual machine migration.
by Evan Philip Charles Jones.
Ph.D.
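The speculative concurrency control idea in the Jones abstract above fits in a dozen lines. The sketch below is illustrative, not Dtxn's mechanism (which involves real undo logging and replica coordination): one transaction runs at a time with no locks, and while it waits on the network the next transaction executes against the speculative state, surfacing its result only if the first commits.

    # Miniature speculative concurrency control; invented code, not Dtxn's.
    def run_speculative(state, pending, next_txn, commit_ok=True):
        snapshot = dict(state)                # cheap copy stands in for undo logging
        pending(state)                        # stalled txn's effects, not yet durable
        speculative_result = next_txn(state)  # speculate on top of them
        if not commit_ok:                     # in reality: the awaited round trip fails
            state.clear(); state.update(snapshot)  # squash both transactions
            return None
        return speculative_result

    state = {"balance": 100}
    result = run_speculative(state,
                             pending=lambda s: s.update(balance=s["balance"] - 30),
                             next_txn=lambda s: s["balance"])
    assert result == 70 and state["balance"] == 70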
Cahill, Michael James. "Serializable Isolation for Snapshot Databases." University of Sydney, 2009. http://hdl.handle.net/2123/5353.
Many popular database management systems implement a multiversion concurrency control algorithm called snapshot isolation rather than providing full serializability based on locking. There are well-known anomalies permitted by snapshot isolation that can lead to violations of data consistency by interleaving transactions that would maintain consistency if run serially. Until now, the only way to prevent these anomalies was to modify the applications by introducing explicit locking or artificial update conflicts, following careful analysis of conflicts between all pairs of transactions. This thesis describes a modification to the concurrency control algorithm of a database management system that automatically detects and prevents snapshot isolation anomalies at runtime for arbitrary applications, thus providing serializable isolation. The new algorithm preserves the properties that make snapshot isolation attractive, including that readers do not block writers and vice versa. An implementation of the algorithm in a relational database management system is described, along with a benchmark and performance study, showing that the throughput approaches that of snapshot isolation in most cases.
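The runtime test at the heart of this thesis (published as Serializable Snapshot Isolation) reduces to tracking read-write antidependencies. The sketch below strips away all bookkeeping and false-positive handling and keeps only the pivot rule: a transaction that acquires both an incoming and an outgoing rw-edge must abort, because every snapshot isolation anomaly contains such a pivot.

    # Bare-bones pivot detection in the spirit of SSI; heavily simplified.
    class Txn:
        def __init__(self):
            self.in_rw = False    # someone overwrote what this txn read
            self.out_rw = False   # this txn overwrote what someone else read

    def record_rw_edge(reader, writer):
        # reader read a version that the concurrent writer replaced.
        reader.out_rw = True
        writer.in_rw = True
        for t in (reader, writer):
            if t.in_rw and t.out_rw:
                return t          # pivot: this transaction is aborted
        return None

    t1, t2, t3 = Txn(), Txn(), Txn()
    assert record_rw_edge(t1, t2) is None
    assert record_rw_edge(t2, t3) is t2   # t2 acquired both edge directions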
Pang, Gene. "Scalable Transactions for Scalable Distributed Database Systems." Thesis, University of California, Berkeley, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3733329.
With the advent of the Internet and Internet-connected devices, modern applications can experience very rapid growth of users from all parts of the world. A growing user base leads to greater usage and large data sizes, so scalable database systems capable of handling the great demands are critical for applications. With the emergence of cloud computing, a major movement in the industry, modern applications depend on distributed data stores for their scalable data management solutions. Many large-scale applications utilize NoSQL systems, such as distributed key-value stores, for their scalability and availability properties over traditional relational database systems. By simplifying the design and interface, NoSQL systems can provide high scalability and performance for large data sets and high volume workloads. However, to provide such benefits, NoSQL systems sacrifice traditional consistency models and support for transactions typically available in database systems. Without transaction semantics, it is harder for developers to reason about the correctness of the interactions with the data. Therefore, it is important to support transactions for distributed database systems without sacrificing scalability.
In this thesis, I present new techniques for scalable transactions for scalable database systems. Distributed data stores need scalable transactions to take advantage of cloud computing, and to meet the demands of modern applications. Traditional techniques for transactions may not be appropriate in a large, distributed environment, so in this thesis, I describe new techniques for distributed transactions, without having to sacrifice traditional semantics or scalability.
I discuss three facets to improving transaction scalability and support in distributed database systems. First, I describe a new transaction commit protocol that reduces the response times for distributed transactions. Second, I propose a new transaction programming model that allows developers to better deal with the unexpected behavior of distributed transactions. Lastly, I present a new scalable view maintenance algorithm for convergent join views. Together, the new techniques in this thesis contribute to providing scalable transactions for modern, distributed database systems.
Oza, Smita. "Implementing real-time transactions using distributed main memory databases." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape16/PQDD_0031/MQ27056.pdf.
Oza, Smita. "Implementing real-time transactions using distributed main memory databases." Dissertation, Carleton University, Ottawa, 1997.
Lawley, Michael John. "Program Transformation for Proving Database Transaction Safety." Griffith University. School of Computing and Information Technology, 2000. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20070228.150125.
Konana, Prabhudev Chennabasappa. "A transaction model for active and real-time databases." Diss., The University of Arizona, 1995. http://hdl.handle.net/10150/187289.
Aleksic, Mario. "Incremental computation methods in valid & transaction time databases." [S.l.]: Universität Stuttgart, Fakultät Informatik, 1996. http://www.bsz-bw.de/cgi-bin/xvms.cgi?SWB6783621.
Sinha, Aman. "Memory management and transaction scheduling for large-scale databases /." Digital version accessible at:, 1999. http://wwwlib.umi.com/cr/utexas/main.
Couto, Emanuel Amaral. "Speculative execution by using software transactional memory." Master's thesis, FCT - UNL, 2009. http://hdl.handle.net/10362/2659.
Many programs sequentially execute operations that take a long time to complete. Some of these operations may return a highly predictable result. If this is the case, speculative execution can improve the overall performance of the program. Speculative execution is the execution of code whose result may not be needed. Generally it is used as a performance optimization. Instead of waiting for the result of a costly operation, speculative execution can be used to predict the operation's most probable result and continue executing based on this speculation. If the speculation is later confirmed to be correct, time has been gained. Otherwise, if the speculation is incorrect, the execution based on the speculation must abort and re-execute with the correct result. In this dissertation we propose the design of an abstract process to add speculative execution to a program by doing source-to-source transformation. This abstract process is used in the definition of a mechanism and methodology that enable the programmer to add speculative execution to the source code of programs. The abstract process is also used in the design of an automatic source-to-source transformation process that adds speculative execution to existing programs without user intervention. Finally, we also evaluate the performance impact of introducing speculative execution in database clients. Existing proposals for the design of mechanisms to add speculative execution sacrificed portability in favor of performance. Some were designed to be implemented at the kernel or hardware level. The process and mechanisms we propose in this dissertation can add speculative execution to the source of a program, independently of the kernel or hardware that is used. From our experiments we have concluded that database clients can improve their performance by using speculative execution. There is nothing in the system we propose that limits it to the scope of database clients. Although this was the scope of the case study, we strongly believe that other programs can benefit from the proposed process and mechanisms for the introduction of speculative execution.
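The transformation this dissertation automates can be shown in miniature. The helper below is hypothetical (the actual work integrates with an STM and rewrites source code): run the slow operation in the background, continue from a predicted result, and redo the continuation only on mis-speculation. The continuation must have no irreversible side effects while speculative.

    # Hypothetical helper illustrating speculative execution; not the thesis API.
    from concurrent.futures import ThreadPoolExecutor

    def speculate(slow_op, predict, continuation):
        with ThreadPoolExecutor(max_workers=1) as pool:
            pending = pool.submit(slow_op)     # real operation, in the background
            guess = predict()
            speculative = continuation(guess)  # work ahead based on the guess
            actual = pending.result()
        if actual == guess:
            return speculative                 # prediction confirmed: time gained
        return continuation(actual)            # mis-speculation: redo with real value

    # e.g., a database client assuming an UPDATE affects exactly one row:
    result = speculate(slow_op=lambda: 1, predict=lambda: 1,
                       continuation=lambda rows: "updated %d row(s)" % rows)
    assert result == "updated 1 row(s)"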
Takkar, Sonia. "Scheduling real-time transactions in parallel database systems." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape16/PQDD_0025/MQ26975.pdf.
Takkar, Sonia. "Scheduling real-time transactions in parallel database systems." Dissertation, Carleton University, Ottawa, 1997.
Aldarmi, Saud Ahmed. "Scheduling soft-deadline real-time transactions." Thesis, University of York, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.310917.
Full textSuriyakarn, Sorawit. "A framework for synthesizing transactional database implementations in a proof assistant." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/113101.
Full textThis electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 67-68).
We propose CoqSQL, a framework for optimizing relational queries and automatically synthesizing relational database implementations in the Coq proof assistant, based on Anders Kaseorg's and Mohsen Lesani's Transactions framework. The synthesized code supports concurrent transaction execution on multiple processors and is accompanied by proofs certifying its correctness. The contributions include: (1) a complete specification of a subset of SQL queries and database relations, including support for indexes; and (2) an extensible, automated, and complete synthesis process from standard SQL-like specifications to executable concurrent programs.
by Sorawit Suriyakarn.
M. Eng.
Barga, Roger S. "A reflective framework for implementing extended transactions /." Full text open access at:, 1999. http://content.ohsu.edu/u?/etd,205.
Full textSavasere, Ashok. "Efficient algorithms for mining association rules in large databases of cutomer transactions." Diss., Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/8260.
Full textZhang, Connie. "Static Conflict Analysis of Transaction Programs." Thesis, University of Waterloo, 2000. http://hdl.handle.net/10012/1052.
Full textOngkasuwan, Patarawan. "Transaction synchronization and privacy aspect in blockchain decentralized applications." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-272134.
Ideas and techniques from cryptography and decentralized storage have seen enormous growth in many industries, as they have been adopted to improve operations within organizations. What is known as Blockchain technology provides an efficient transparency solution. Blockchain has generally been used for digital currency, or cryptocurrency, since its inception. One of the best-known Blockchain protocols is Ethereum, which invented the smart contract to enable a Blockchain to execute conditions rather than merely act as storage. Applications that use this technology are called 'Dapps', or 'decentralized applications'. There are, however, ongoing arguments about the synchronization associated with such systems. System synchronization is currently extremely important for applications, since the time spent waiting for a transaction to be verified can cause dissatisfaction with the user experience. Several studies have shown that privacy leakage occurs, even though Blockchain provides some security, as a consequence of the traditional transaction model, which requires approval through an intermediary institution. For example, a bank must process transactions through many institutional parties before receiving final confirmation, which forces the user to wait a considerable time. This thesis describes the challenge of transaction synchronization between the user and the smart contract, as well as the question of a privacy strategy and compliance for the system. To approach these two challenges, the first task separates different events and evaluates the results against an alternative solution. This is done by testing the smart contract to find the best gas price outcome, which varies over time. In the Ethereum protocol, the gas price is one of the best levers for reducing transaction time to meet user expectations. The gas price is affected by the code structure and by the network. Tests are run on the smart contract for two cases, addressing platform problems such as front-running (the 'runner' problem) and user experience, and reducing costs. It has also been shown that collecting the fee before participation in an auction can prevent the front-running problem. The second case aims to show that freezing the bid amount is the best way to improve the user experience of an online auction. The second challenge focuses mainly on the privacy strategy and risk management of the platform, which involves identifying possible solutions for all risk situations, as well as detecting, anticipating, and preventing them. Providing strategies, such as securing the smart contract structure, strengthening the encryption method used in the database, and designing a term sheet, agreements, and consent, helps guard against the system's vulnerabilities. This research therefore aims to investigate and improve an online auction platform that uses a smart contract on a Blockchain in order to deliver a better user experience.
Youssef, Mohamed Wagdy Abdel Fattah. "Transaction behaviour in large database environments : a methodological approach." Thesis, City University London, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.358945.
Sauer, Caetano. "Modern techniques for transaction-oriented database recovery." München: Verlag Dr. Hut, 2017. http://d-nb.info/1140977644/34.
Full textHedman, Surlien Peter. "Economic advantages of Blockchain technology VS Relational database : An study focusing on economic advantages with Blockchain technology and relational databases." Thesis, Blekinge Tekniska Högskola, Institutionen för industriell ekonomi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-17366.
Full textBrodsky, Lloyd. "A knowledge-based preprocessor for approximate joins in improperly designed transaction databases." Thesis, Massachusetts Institute of Technology, 1991. http://hdl.handle.net/1721.1/13744.
Full textWang, Xiangyang. "The development of a knowledge-based database transaction design assistant." Thesis, Cardiff University, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.359755.
Full textDixon, Eric Richard. "Developing distributed applications with distributed heterogenous databases." Thesis, Virginia Tech, 1993. http://hdl.handle.net/10919/42748.
Full textOn, Sai Tung. "Efficient transaction recovery on flash disks." HKBU Institutional Repository, 2010. http://repository.hkbu.edu.hk/etd_ra/1170.
Full textXie, Wanxia. "Supporting Distributed Transaction Processing Over Mobile and Heterogeneous Platforms." Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/14073.
Full textQuantock, David E. "The real-time roll-back and recovery of transactions in database systems." Thesis, Monterey, California. Naval Postgraduate School, 1989. http://hdl.handle.net/10945/27234.
Full textA modern database transaction may involve a long series of updates, deletions, and insertions of data and a complex mix of these primary database operations. Due to its length and complexity, the transaction requires back-up and recovery procedures. The back-up procedure allows the user to either commit or abort a lengthy and complex transaction without comprising the integrity of the data. The recovery procedure allows the system to maintain the data integrity during the execution of a transaction, should the transaction be interrupted by the system. With both the back-up and recovery procedures, the modern database system will be able to provide consistent data throughout the life-span of a database without ever corrupting either its data values or its data types. However, the implementation of back-up and recovery procedures in a database system is a difficult and involved effort since it effects the base as well as meta data of the database. Further, it effects the state of the database system. This thesis is mainly focused on the design trade-offs and issues of implementing an effective and efficient mechanism for back-up and recovery in the multimodel, multilingual, and multi backend database system. Keywords: Data base management systems. (KR)
Gong, Daoya. "Transaction process modeling and implementation for 3-tiered Web-based database systems." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0023/MQ62128.pdf.
Full textKoroncziová, Dominika. "Doplnění a optimalizace temporálního rozšíření pro PostgreSQL." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2016. http://www.nusl.cz/ntk/nusl-255417.
Full textOlsson, Markus. "Design and Implementation of Transactions in a Column-Oriented In-Memory Database System." Thesis, Umeå University, Department of Computing Science, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-32705.
Full textColdbase is a column-oriented in-memory database implemented in Java that is used with a specific workload in mind. Coldbase is optimized to receive large streams of timestamped trading data arriving at a fast pace while allowing simple but frequent queries that analyse the data concurrently. By limiting the functionality, Coldbase is able to reach a high performance while the memory consumption is low. This thesis presents ColdbaseTX which is an extension to Coldbase that adds support for transactions. It uses an optimistic approach by storing all writes of a transaction locally and applying them when the transaction commits. Readers are separated from writers by using two versions of the data which makes it possible to guarantee that readers are never blocked.Benchmarks compare Coldbase to ColdbaseTX regarding both performance andmemory efficiency. The results show that ColdbaseTX introduces a small overhead in both memory and performance which however is deemed acceptable since the gain is support for transactions.
Huang, Kuo-Yu (黃國瑜). "Mining Inter-Transaction Association Rules in Transactional Databases." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/skp85p.
Full text國立中央大學
資訊工程研究所
94
In this dissertation, we focus on how to devise efficient and effective algorithms for discovering inter-transaction associations such as periodic patterns, frequent continuities, frequent episodes, and sequential patterns. First, we propose a 3-phase FITS model for inter-transaction association mining. We adopt both horizontal and vertical formats to increase the mining efficiency. Furthermore, we focus on the application of FITS to closed pattern mining to reduce the number of patterns to be enumerated. The insight is: "If an intra-transaction pattern is not a closed pattern, it will not be a closed frequent inter-transaction pattern." Bi-format and bi-phase reduction are applied to overcome the problem of duplicate item extensions, especially for closed pattern mining. We have applied the FITS model to all inter-transaction mining tasks with little modification. Although the FITS model can be used for periodic pattern mining, it is not efficient enough, since the constraints on periodicity are not fully utilized. Therefore, we propose a more general model, SMCA, to mine asynchronous periodic patterns from a complex sequence, and correct some problems of previous work. A 4-phase algorithm, comprising SPMiner, MPMiner, CPMiner, and APMiner, is devised to discover periodic patterns from a transactional database presented in vertical format. The essential idea of SPMiner is to trace the possible segments for period p using a hash table. Besides, to avoid additional scans over the transactional database, we propose a segment-based combination to reduce redundant generation and testing. The experiments demonstrate good performance of the proposed model on several inter-transaction patterns. Although the efficiency improvement requires an additional memory cost, this cost can be further reduced by disk-based or partition-based approaches, which in turn also prove to be better than state-of-the-art algorithms. In summary, the proposed model can be orders of magnitude faster than previous work at a modest memory cost.
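To make the SPMiner idea above concrete, here is a toy rendering (invented code, far simpler than the dissertation's algorithm): for a fixed period p, a hash table remembers each event's last position and current segment start, extending the segment whenever the event recurs exactly p steps later.

    # Toy periodic-segment tracer; illustrative, not the SMCA implementation.
    def periodic_segments(sequence, p, min_rep=2):
        last, start, found = {}, {}, []
        for pos, event in enumerate(sequence):
            if event in last and pos - last[event] == p:
                pass                          # segment continues
            else:
                if event in last:             # close the previous candidate segment
                    reps = (last[event] - start[event]) // p + 1
                    if reps >= min_rep:
                        found.append((event, start[event], last[event], reps))
                start[event] = pos            # open a new candidate segment
            last[event] = pos
        for event in start:                   # flush segments still open at the end
            reps = (last[event] - start[event]) // p + 1
            if reps >= min_rep:
                found.append((event, start[event], last[event], reps))
        return found

    # 'a' recurs with period 2, four times, starting at position 0:
    assert ("a", 0, 6, 4) in periodic_segments(list("ababacab"), p=2)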