
Dissertations / Theses on the topic 'Transactional databases'



Consult the top 50 dissertations / theses for your research on the topic 'Transactional databases.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Navarro Martín, Joan. "From cluster databases to cloud storage: Providing transactional support on the cloud." Doctoral thesis, Universitat Ramon Llull, 2015. http://hdl.handle.net/10803/285655.

Full text
Abstract:
Over the past three decades, technology constraints (e.g., the capacity of storage devices, communication network bandwidth) and an ever-increasing set of user demands (e.g., information structures, data volumes) have driven the evolution of distributed databases. Since the flat-file data repositories developed in the early eighties, there have been important advances in concurrency control algorithms, replication protocols, and transaction management. However, modern concerns in data storage posed by Big Data and cloud computing—aimed at overcoming the scalability and elasticity limitations of classic databases—are pushing practitioners to relax some important properties featured by transactions, which excludes several applications that are unable to fit in this strategy due to their intrinsic transactional nature. The purpose of this thesis is to address two important challenges still latent in distributed databases: (1) the scalability limitations of transactional databases and (2) providing transactional support on cloud-based storage repositories. Analyzing the traditional concurrency control and replication techniques, used by classic databases to support transactions, is critical to identify the reasons that make these systems degrade their throughput when the number of nodes and/or the amount of data rockets. Besides, this analysis is devoted to justifying the design rationale behind cloud repositories, in which transactions have generally been neglected. Furthermore, enabling applications that are strongly dependent on transactions to take advantage of the cloud storage paradigm is crucial for their adaptation to current data demands and business models. This dissertation starts by proposing a custom protocol simulator for static distributed databases, which serves as a basis for revising and comparing the performance of existing concurrency control protocols and replication techniques. As this thesis is especially concerned with transactions, the effects of different transaction profiles on database scalability are studied under different conditions. This analysis is followed by a review of existing cloud storage repositories—which claim to be highly dynamic, scalable, and available—leading to an evaluation of the parameters and features that these systems have sacrificed in order to meet current large-scale data storage demands. To further explore the possibilities of the cloud computing paradigm in a real-world scenario, a cloud-inspired approach to storing data from Smart Grids is presented. More specifically, the proposed architecture combines classic database replication techniques and epidemic update propagation with the design principles of cloud-based storage. The key insights collected when prototyping the replication and concurrency control protocols in the database simulator, together with the experiences derived from building a large-scale storage repository for Smart Grids, are wrapped up into what we have coined Epidemia: a storage infrastructure conceived to provide transactional support on the cloud. In addition to inheriting the benefits of highly-scalable cloud repositories, Epidemia includes a transaction management layer that forwards client transactions to a hierarchical set of data partitions, which allows the system to offer different consistency levels and elastically adapt its configuration to incoming workloads. Finally, experimental results highlight the feasibility of our contribution and encourage practitioners to pursue further research in this area.
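To make the routing idea concrete, here is a minimal sketch of a transaction layer that forwards each client transaction down a two-level hierarchy of partitions, with a per-request consistency level. All names, the hash-based routing, and the two-level shape are our own illustrative assumptions; the abstract does not describe Epidemia's internals at this level of detail.

```python
import hashlib

class Partition:
    """Leaf of the hierarchy: owns part of the data and applies writes."""
    def __init__(self, name):
        self.name = name
        self.store = {}

    def execute(self, ops, consistency):
        # A real system would replicate before (strong) or after
        # (eventual) acknowledging, depending on the requested level.
        for key, value in ops:
            self.store[key] = value
        return f"{self.name}: {len(ops)} op(s) committed ({consistency})"

class Region:
    """Inner node grouping partitions (e.g., one per data center)."""
    def __init__(self, partitions):
        self.partitions = partitions

    def route(self, key):
        digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
        return self.partitions[digest % len(self.partitions)]

class TransactionManager:
    """Forwards each client transaction down the partition hierarchy."""
    def __init__(self, regions):
        self.regions = regions

    def submit(self, ops, consistency="strong"):
        # Simplification: all keys of one transaction are assumed to map
        # to a single partition of the first region.
        partition = self.regions[0].route(ops[0][0])
        return partition.execute(ops, consistency)

tm = TransactionManager([Region([Partition("p0"), Partition("p1")])])
print(tm.submit([("meter-42", "17.3 kWh")], consistency="eventual"))
```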
2

Araujo Neto, Afonso Comba de. "Security Benchmarking of Transactional Systems." Biblioteca Digital de Teses e Dissertações da UFRGS, 2012. http://hdl.handle.net/10183/143292.

Full text
Abstract:
Most organizations nowadays depend on some kind of computer infrastructure to manage business-critical activities. This dependence grows as computer systems become more reliable and useful, but so do the complexity and size of systems. Transactional systems, which are database-centered applications used by most organizations to support daily tasks, are no exception. A typical solution to cope with systems complexity is to delegate the software development task, and to use existing solutions independently developed and maintained (either proprietary or open source). The multiplicity of software and component alternatives available has boosted the interest in suitable benchmarks, able to assist in the selection of the best candidate solutions with respect to several attributes. However, the huge success of performance and dependability benchmarking markedly contrasts with the small advances on security benchmarking, which has only sparsely been studied in the past. This thesis discusses the security benchmarking problem and its main characteristics, particularly comparing these with other successful benchmarking initiatives, like performance and dependability benchmarking. Based on this analysis, a general framework for security benchmarking is proposed. This framework, suitable for most types of software systems and application domains, includes two main phases: security qualification and trustworthiness benchmarking. Security qualification is a process designed to evaluate the most obvious and identifiable security aspects of the system, dividing the evaluated targets into acceptable and unacceptable, given the specific security requirements of the application domain. Trustworthiness benchmarking, on the other hand, consists of an evaluation process that is applied over the qualified targets to estimate the probability of the existence of hidden or hard-to-detect security issues in a system (the main goal is to cope with the uncertainties related to security aspects). The framework is thoroughly demonstrated and evaluated in the context of transactional systems, which can be divided into two parts: the infrastructure and the business applications. As these parts have significantly different security goals, the framework is used to develop methodologies and approaches that fit their specific characteristics. First, the thesis proposes a security benchmark for transactional systems infrastructures and describes, discusses and justifies all the steps of the process. The benchmark is applied to four distinct real infrastructures, and the results of the assessment are thoroughly analyzed. Still in the context of transactional systems infrastructures, the thesis also addresses the problem of selecting software components. This is complex, as evaluating the security of an infrastructure cannot be done before deployment. The proposed tool, aimed at helping in the selection of basic software packages to support the infrastructure, is used to evaluate seven different software packages, representative alternatives for the deployment of real infrastructures. Finally, the thesis discusses the problem of designing trustworthiness benchmarks for business applications, focusing specifically on the case of web applications. First, a benchmarking approach based on static code analysis tools is proposed. Several experiments are presented to evaluate the effectiveness of the proposed metrics, including a representative experiment where the challenge was the selection of the most secure application among a set of seven web forums. Based on the analysis of the limitations of such an approach, a generic approach for the definition of trustworthiness benchmarks for web applications is then defined.
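As a concrete reading of the two phases, here is a minimal sketch: qualification acts as a pass/fail filter on explicit requirements, and trustworthiness benchmarking ranks the qualified targets by an uncertainty-oriented score. The checks, the score, and the toy data are invented placeholders, not the metrics defined in the thesis.

```python
def qualify(target, requirements):
    """Phase 1: reject targets failing any explicit security requirement."""
    return all(check(target) for check in requirements)

def trustworthiness_score(target, indicators):
    """Phase 2: estimate the likelihood of hidden problems; fewer raw
    indicator hits (e.g., static-analysis alerts) mean higher trust."""
    return 1.0 / (1.0 + sum(ind(target) for ind in indicators))

def benchmark(targets, requirements, indicators):
    qualified = [t for t in targets if qualify(t, requirements)]
    return sorted(qualified,
                  key=lambda t: trustworthiness_score(t, indicators),
                  reverse=True)

# Example with toy targets represented as dicts.
targets = [{"name": "pkgA", "auth": True, "alerts": 12},
           {"name": "pkgB", "auth": True, "alerts": 3},
           {"name": "pkgC", "auth": False, "alerts": 0}]
requirements = [lambda t: t["auth"]]   # e.g., "enforces authentication"
indicators = [lambda t: t["alerts"]]   # e.g., code-analysis alert count
print([t["name"] for t in benchmark(targets, requirements, indicators)])
# -> ['pkgB', 'pkgA']  (pkgC fails qualification)
```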
3

Li, Yanrong. "Techniques for improving clustering and association rules mining from very large transactional databases." Thesis, Curtin University, 2009. http://hdl.handle.net/20.500.11937/907.

Full text
Abstract:
Clustering and association rules mining are two core data mining tasks that have been actively studied by the data mining community for nearly two decades. Though many clustering and association rules mining algorithms have been developed, no algorithm is better than the others on all aspects, such as accuracy, efficiency, scalability, adaptability and memory usage. While more efficient and effective algorithms need to be developed for handling large-scale and complex stored datasets, emerging applications where data takes the form of streams pose new challenges for the data mining community. The existing techniques and algorithms for static stored databases cannot be applied to data streams directly; they need to be extended or modified, or new methods need to be developed to process the data streams. In this thesis, algorithms have been developed for improving the efficiency and accuracy of clustering and association rules mining on very large, high dimensional, high cardinality, sparse transactional databases and data streams. A new similarity measure suitable for clustering transactional data is defined, and an incremental clustering algorithm, INCLUS, is proposed using this similarity measure. The algorithm only scans the database once and produces clusters based on the user's expectations of similarities between transactions in a cluster, which are controlled by two user input parameters: a similarity threshold and a support threshold. Intensive testing has been performed to evaluate the effectiveness, efficiency, scalability and order insensitivity of the algorithm. To extend INCLUS to transactional data streams, an equal-width time window model and an elastic time window model are proposed that allow mining of clustering changes in evolving data streams. The minimal width of the window is determined by the minimum clustering granularity for a particular application. Two algorithms, CluStream_EQ and CluStream_EL, based on the equal-width window model and the elastic window model respectively, are developed by incorporating these models into INCLUS. Each algorithm consists of an online micro-clustering component and an offline macro-clustering component. The online component writes summary statistics of a data stream to disk, and the offline component uses those summaries and other user input to discover changes in the data stream. The effectiveness and scalability of the algorithms are evaluated by experiments. This thesis also looks into sampling techniques that can improve the efficiency of mining association rules in a very large transactional database. The sample size is derived based on the binomial distribution and the central limit theorem. The sample size used is smaller than that based on Chernoff bounds, but still provides the same approximation guarantees. The accuracy of the proposed sampling approach is theoretically analyzed and its effectiveness is experimentally evaluated on both dense and sparse datasets. Applications of stratified sampling for association rules mining are also explored in this thesis. The database is first partitioned into strata based on the length of transactions, and simple random sampling is then performed on each stratum. The total sample size is determined by a formula derived in this thesis, and the sample size for each stratum is proportionate to the size of the stratum. The accuracy of transaction-size-based stratified sampling is experimentally compared with that of random sampling. The thesis concludes with a summary of significant contributions and some pointers for further work.
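For reference, the comparison the abstract draws can be made concrete with the standard bounds. To estimate an itemset's support within error ε with confidence 1 − δ, a Chernoff/Hoeffding argument and a normal-approximation (CLT) argument give the following sample sizes; this is a textbook derivation under the usual independence assumptions, and the thesis's exact formula may differ.

```latex
% Chernoff/Hoeffding: P(|\hat{p}-p| \ge \varepsilon) \le 2e^{-2n\varepsilon^2} \le \delta
n_{\text{Chernoff}} \;\ge\; \frac{1}{2\varepsilon^{2}} \ln\frac{2}{\delta}
% Binomial/CLT bound (worst case p = 1/2):
n_{\text{CLT}} \;\ge\; \frac{z_{1-\delta/2}^{2}}{4\varepsilon^{2}}
% Example: \varepsilon = 0.01, \delta = 0.05 gives
% n_{\text{Chernoff}} \approx 18445 \quad\text{versus}\quad n_{\text{CLT}} \approx 9604.
```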
4

Liu, Yufan. "A Survey Of Persistent Graph Databases." Kent State University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=kent1395166105.

Full text
5

Niles, Duane Francis Jr. "Improving Performance of Highly-Programmable Concurrent Applications by Leveraging Parallel Nesting and Weaker Isolation Levels." Thesis, Virginia Tech, 2015. http://hdl.handle.net/10919/54557.

Full text
Abstract:
The recent development of multi-core computer architectures has largely affected the creation of everyday applications, requiring the adoption of concurrent programming to significantly utilize the divided processing power of computers. Applications must be split into sections able to execute in parallel, without any of these sections conflicting with one another, thereby necessitating some form of synchronization. The most commonly used methodology is lock-based synchronization, although, to maximize performance, developers must typically craft complex, low-level implementations for large applications, which can easily introduce errors or hindrances. An abstraction from database systems, known as transactions, is a rising concurrency control design aimed at circumventing the challenges with programmability, composability, and scalability in lock-based synchronization. Transactions execute their operations speculatively and are capable of being restarted (or rolled back) when there exist conflicts between concurrent actions. As such issues can occur late in the lifespan of a transaction, entire rollbacks are not very effective for performance. One particular method, known as nesting, was created to counter that drawback. Nesting is the act of enclosing transactions within other transactions, essentially dividing the work into pieces called sub-transactions. These sub-transactions can roll back without affecting the entire main transaction, although general nesting models only allow one sub-transaction to perform work at a time. The first main contribution in this thesis is SPCN, an algorithm that parallelizes nested transactions while automatically processing any potential conflicts that may arise, eliminating the burden of additional processing from the application developers. Two versions of SPCN exist: Strict, which enforces the sub-transactions' work to be made visible in a serialized order; and Relaxed, which allows sub-transactions to distribute their information immediately as they finish (so invalidation may occur after the fact and must be handled). Despite the additional logic required by SPCN, it outperforms traditional closed nesting by 1.78x at the lowest and 3.78x at the highest in the experiments run. Another method to alter transactional execution and boost performance is to relax the rules of visibility for parallel operations (known as their isolation). Depending on the application, correctness is not broken even if some transactions see external work that may later be undone due to a rollback, or if an object is written while another transaction is using an older instance of its data. With lock-based synchronization, developers would have to explicitly design their application with varying numbers of locks, and different lock organizations or hierarchies, to change the strictness of the execution. With transactional systems, the processing performed by the system itself can be set to use different rules, which can change the performance of an application without requiring it to be largely redesigned. This notion leads to the second contribution in this thesis: AsR, or As-Serializable transactions. Serializability is the general form of isolation or strictness for transactions in many applications. In terms of execution, its definition is equivalent to only one transaction running at a time in a given system. Many transactional systems use their own internal form of locking to create Serializable executions, but it is typically too strict for many applications. AsR transactions allow the internal processing to be relaxed while additional meta-data is used external to the system, without requiring any interaction from the developer or any changes to the given application. AsR transactions offer multiple orders of magnitude higher throughput in highly contentious scenarios, due to their capability to outlast traditional levels of isolation.
Master of Science
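A minimal sketch of the parallel-nesting control flow described above: sub-transactions of one parent run concurrently, and a conflicted sub-transaction retries alone instead of rolling back the whole parent. Everything here (the Conflict signal, the retry loop, the in-order result collection standing in for Strict visibility) is our own simplification, not SPCN's actual algorithm.

```python
from concurrent.futures import ThreadPoolExecutor
import random

class Conflict(Exception):
    pass

def run_subtransaction(work, retries=10):
    for _ in range(retries):
        try:
            return work()          # speculative execution of one piece
        except Conflict:
            continue               # partial rollback: retry this piece only
    raise RuntimeError("sub-transaction gave up")

def parent_transaction(pieces):
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(run_subtransaction, w) for w in pieces]
        # Strict-style: results become visible in submission order.
        return [f.result() for f in futures]

def make_piece(i):
    def work():
        if random.random() < 0.3:  # simulated conflict with a sibling
            raise Conflict()
        return f"piece {i} done"
    return work

print(parent_transaction([make_piece(i) for i in range(4)]))
```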
6

Bejaoui, Lofti. "Qualitative topological relationships for objects with possibly vague shapes: implications on the specification of topological integrity constraints in transactional spatial databases and in spatial data warehouses." Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2009. http://tel.archives-ouvertes.fr/tel-00725614.

Full text
Abstract:
In currently deployed spatial databases, natural phenomena are generally represented by geometries with well-defined boundaries. Such a description of reality ignores the vagueness that characterizes the shape of certain spatial objects (flood zones, lakes, forest stands, etc.). The quality of the stored data is therefore degraded by this gap between reality and its description. This thesis addresses the problem by proposing a new approach for representing spatial objects with vague shapes and characterizing their topological relationships. The proposed model, called the QMM model (for Qualitative Min-Max model), uses the notions of minimal and maximal extents to represent the uncertain part of an object. A set of adverbs makes it possible to express the vague shape of an object (e.g., a region with a partially broad boundary), as well as the uncertainty of the topological relationships between two objects (e.g., weakly Contains, fairly Contains, etc.). This approach is less fine-grained than competing approaches (fuzzy-set or probabilistic modelling), but it avoids a complex data acquisition process and is relatively simple to implement on top of existing database management systems. The approach is then used to control data quality in transactional spatial databases and spatial data warehouses by specifying integrity constraints based on the concepts of the QMM model. An extension of the OCL constraint language (Object Constraint Language) is studied to specify topological constraints involving objects with vague shapes. An existing tool (the OCLtoSQL tool developed at the University of Dresden) has been extended to allow automatic generation of the SQL code of a constraint when the database is managed by a relational system. The tool was tested with a database used for the management of agricultural manure spreading, an application for which the approach and the tool proved very effective. The thesis also includes a study of the integration of heterogeneous spatial databases when objects are represented with the QMM model; new results are produced and application examples are described.
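An illustrative adaptation of the QMM idea (ours, using axis-aligned boxes rather than the general regions of the model): an object with a vague shape is a pair of minimal and maximal extents, and containment between two such objects is qualified with an adverb depending on which extents contain which. The thresholds and adverb names follow the abstract's examples loosely.

```python
class Box:
    def __init__(self, x1, y1, x2, y2):
        self.x1, self.y1, self.x2, self.y2 = x1, y1, x2, y2

    def contains(self, other):
        return (self.x1 <= other.x1 and self.y1 <= other.y1 and
                self.x2 >= other.x2 and self.y2 >= other.y2)

class VagueRegion:
    """Minimal extent = certainly inside; maximal extent = possibly inside."""
    def __init__(self, min_extent, max_extent):
        self.min = min_extent
        self.max = max_extent

def qualify_contains(a, b):
    if a.min.contains(b.max):
        return "strongly Contains"   # holds even in the worst case
    if a.min.contains(b.min) and a.max.contains(b.max):
        return "fairly Contains"
    if a.max.contains(b.max):
        return "weakly Contains"     # holds only in the best case
    return "does not Contain"

lake = VagueRegion(Box(0, 0, 10, 10), Box(-2, -2, 12, 12))
flood = VagueRegion(Box(2, 2, 6, 6), Box(1, 1, 11, 11))
print(qualify_contains(lake, flood))   # -> "fairly Contains"
```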
7

Bejaoui, Lotfi. "Qualitative Topological Relationships for Objects with Possibly Vague Shapes: Implications on the Specification of Topological Integrity Constraints in Transactional Spatial Databases and in Spatial Data Warehouses." Thesis, Université Laval, 2009. http://www.theses.ulaval.ca/2009/26348/26348.pdf.

Full text
8

Burger, Albert G. "Branching transactions : a transaction model for parallel database systems." Thesis, University of Edinburgh, 1996. http://hdl.handle.net/1842/15591.

Full text
Abstract:
In order to exploit parallel computers, database management systems must achieve a high level of concurrency when executing transactions. In a high-contention environment, however, concurrency is severely limited due to transaction blocking, and the utilisation of parallel hardware resources, e.g. multiple CPUs, can be low. In this dissertation, a new transaction model, Branching Transactions, is proposed. Under branching transactions, more than one possible path of execution of a transaction is followed in parallel, which allows us to avoid unnecessary transaction blocking and restarts. This approach uses additional hardware resources, mainly CPU - which would otherwise sit idle due to data contention - to improve transaction response time and throughput. A new transaction model has implications for many transaction processing algorithms, in particular concurrency control. A family of locking algorithms, based on multi-version two-phase locking, has been developed for branching transactions, including an algorithm which can dynamically switch between branching and non-branching modes. The issues of deadlock handling and recovery are also considered. The correctness of all new concurrency control algorithms is proved by extending traditional serializability theory so that it can cope with the notion of a branching transaction. Architectural descriptions of branching transaction systems for shared-memory parallel databases and hybrid shared-disk/shared-memory systems are discussed. In particular, the problem of cache coherence is addressed. The performance of branching transactions in a shared-memory parallel database system has been investigated using discrete-event simulation. One field which may potentially benefit greatly from branching transactions is that of so-called "real-time" database systems, in which transactions have execution deadlines. A new real-time concurrency control algorithm based on branching transactions is introduced.
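A minimal control-flow sketch of the branching idea (our own simplification; the dissertation integrates this with multi-version two-phase locking): instead of blocking on an uncertain read, the transaction evaluates both candidate paths and keeps the one matching the conflicting writer's eventual outcome.

```python
def branch_on_uncertain_read(committed_value, pending_value, computation):
    """Follow both possible values of a contended item instead of blocking:
    one branch assumes the concurrent writer aborts, the other that it
    commits."""
    return {
        "writer-aborts": computation(committed_value),
        "writer-commits": computation(pending_value),
    }

def resolve(branches, writer_committed):
    """Once the writer resolves, keep the branch whose assumption held."""
    return branches["writer-commits" if writer_committed else "writer-aborts"]

# Example: a balance is 100, and a concurrent writer may change it to 40.
branches = branch_on_uncertain_read(100, 40, lambda bal: bal - 30)
print(resolve(branches, writer_committed=True))    # -> 10
print(resolve(branches, writer_committed=False))   # -> 70
```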
9

Dias, Ricardo Jorge Freire. "Cooperative memory and database transactions." Master's thesis, Faculdade de Ciências e Tecnologia, 2008. http://hdl.handle.net/10362/4192.

Full text
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa in fulfilment of the requirements for the degree of Master in Informatics Engineering (Engenharia Informática)
Since the introduction of Software Transactional Memory (STM), this topic has received strong interest from the scientific community, as it has the potential to greatly facilitate concurrent programming by hiding many of the concurrency issues under the transactional layer, making it a potential alternative to lock-based constructs such as mutexes and semaphores. The current practice of STM is based on keeping track of changes made to the memory and, if needed, restoring previous states in case of transaction rollbacks. The operations in a program that can be reversed, by restoring the memory state, are called transactional operations. How this reversibility is achieved depends on the implementation of the STM library being used. Operations that cannot be reversed, such as I/O to external data repositories (e.g., disks) or to the console, are called non-transactional operations. Non-transactional operations are usually disallowed inside a memory transaction because, if the transaction aborts, their effects cannot be undone. In transactional databases, operations like inserting, removing or transforming data in the database can be undone if executed in the context of a transaction. Since database I/O operations can be reversed, it should be possible to execute those operations in the context of a memory transaction. To achieve this purpose, a new transactional model unifying memory and database transactions into a single one was defined, implemented, and evaluated. This new transactional model satisfies the properties of both the memory and database transactional models. Programmers can now execute memory and database operations in the same transaction, and in case of a transaction rollback, the transaction's effects on both the memory and the database are reverted.
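A minimal sketch of the unified model's observable behaviour, with a dict standing in for transactional memory and SQLite standing in for the database (the thesis builds on a real STM library; this API is our own illustration): a single rollback reverts both the memory writes and the database writes.

```python
import sqlite3
from contextlib import contextmanager

@contextmanager
def unified_transaction(memory, db):
    undo_log = []                                  # (key, previous value)
    MISSING = object()

    def tx_write(key, value):                      # transactional memory write
        undo_log.append((key, memory.get(key, MISSING)))
        memory[key] = value

    try:
        yield tx_write, db.cursor()                # db ops join the sqlite tx
        db.commit()                                # commit both worlds
    except Exception:
        db.rollback()                              # revert database effects
        for key, old in reversed(undo_log):        # revert memory effects
            if old is MISSING:
                memory.pop(key, None)
            else:
                memory[key] = old
        raise

memory = {"balance": 100}
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE log (entry TEXT)")
try:
    with unified_transaction(memory, db) as (tx_write, cur):
        tx_write("balance", 70)
        cur.execute("INSERT INTO log VALUES ('withdraw 30')")
        raise RuntimeError("simulated abort")
except RuntimeError:
    pass
print(memory["balance"])                                  # -> 100 (reverted)
print(db.execute("SELECT count(*) FROM log").fetchone())  # -> (0,)
```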
10

Aleksic, Mario. "Incremental computation methods in valid and transaction time databases." Thesis, Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/8126.

Full text
11

Silva, Pedro Paulo de Souza Bento da. "Uma abordagem transacional para o tratamento de exceções em processos de negócio." Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-24022014-094221/.

Full text
Abstract:
With the aim of becoming more efficient, many organizations -- companies, governmental entities, research centers, etc. -- choose to use software tools to support the accomplishment of their processes. An option that is becoming more popular is the use of Business Process Management (BPM) Systems, which are generic tools, that is, not specific to any organization and highly configurable to the domain needs of any organization. One of the main responsibilities of BPM Systems is to provide exception handling mechanisms for the execution of business process instances. Exceptions, if ignored or incorrectly handled, may cause the abortion of instance executions and, depending on the gravity of the situation, cause failures in BPM Systems or even in subjacent systems (the operating system, database management systems, etc.). Thus, exception handling mechanisms aim to resolve the exceptional situation or contain its collateral effects, ensuring at least a graceful degradation of the system. In this work, we study some of the main deficiencies of present exception handling models -- in the context of BPM Systems -- and present solutions based on Advanced Transaction Models to bypass them. We do this through the improvement of the exception handling mechanisms of WED-flow, a business process modelling and instance execution management approach. Lastly, we extend the WED-tool, an implementation of the WED-flow approach, through the development of its failure recovery manager.
12

Zawis, John A., and David K. Hsiao. "Accessing hierarchical databases via SQL transactions in a multi-model database system." Thesis, Monterey, California. Naval Postgraduate School, 1987. http://hdl.handle.net/10945/22186.

Full text
13

Walpole, Dennis A., and Alphonso L. Woods. "Accessing network databases via SQL transactions in a multi-model database system." Thesis, Monterey, California. Naval Postgraduate School, 1989. http://hdl.handle.net/10945/25647.

Full text
14

Tuck, Terry W. "Temporally Correct Algorithms for Transaction Concurrency Control in Distributed Databases." Thesis, University of North Texas, 2001. https://digital.library.unt.edu/ark:/67531/metadc2743/.

Full text
Abstract:
Many activities are comprised of temporally dependent events that must be executed in a specific chronological order. Supportive software applications must preserve these temporal dependencies. Whenever the processing of this type of application includes transactions submitted to a database that is shared with other such applications, the transaction concurrency control mechanisms within the database must also preserve the temporal dependencies. A basis for preserving temporal dependencies is established by using (within the applications and databases) real-time timestamps to identify and order events and transactions. The use of optimistic approaches to transaction concurrency control can be undesirable in such situations, as they allow incorrect results for database read operations. Although the incorrectness is detected prior to transaction committal and the corresponding transaction(s) restarted, the impact on the application or entity that submitted the transaction can be too costly. Three transaction concurrency control algorithms are proposed in this dissertation. These algorithms are based on timestamp ordering, and are designed to preserve temporal dependencies existing among data-dependent transactions. The algorithms produce execution schedules that are equivalent to temporally ordered serial schedules, where the temporal order is established by the transactions' start times. The algorithms provide this equivalence while supporting concurrency in the form of out-of-order commits and reads. With respect to the stated concern with optimistic approaches, two of the proposed algorithms are risk-free, returning only committed data-item values to read operations. Risk with the third algorithm is greatly reduced by its conservative bias. All three algorithms avoid deadlock while providing risk-free or reduced-risk operation. The performance of the algorithms is determined analytically and through experimentation. Experiments are performed using functional database management system models that implement the proposed algorithms and the well-known Conservative Multiversion Timestamp Ordering algorithm.
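For readers unfamiliar with the base technique, here is a minimal sketch of basic timestamp ordering, which the proposed algorithms extend: each transaction is stamped at start, and an operation arriving too late relative to an item's read/write stamps forces a restart. The dissertation's temporal-dependency preservation and risk-free reads are not shown.

```python
import itertools

_clock = itertools.count(1)

class TooLate(Exception):
    """Signal that the transaction must restart with a fresh timestamp."""

class DataItem:
    def __init__(self, value):
        self.value = value
        self.read_ts = 0      # largest timestamp that has read this item
        self.write_ts = 0     # timestamp of the writer of this value

class Transaction:
    def __init__(self):
        self.ts = next(_clock)   # start time defines the serial order

    def read(self, item):
        if self.ts < item.write_ts:      # a younger tx already overwrote it
            raise TooLate()
        item.read_ts = max(item.read_ts, self.ts)
        return item.value

    def write(self, item, value):
        if self.ts < item.read_ts or self.ts < item.write_ts:
            raise TooLate()              # would invalidate a younger reader
        item.value, item.write_ts = value, self.ts

x = DataItem(0)
t1, t2 = Transaction(), Transaction()
print(t2.read(x))        # ok: t2 reads x, x.read_ts becomes t2.ts
try:
    t1.write(x, 99)      # too late: older t1 writes after younger t2 read
except TooLate:
    print("t1 must restart")
```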
15

Mena, Eduardo, and Arantza Illarramendi. "Ontology-based query processing for global information systems." Boston [u.a.]: Kluwer Acad. Publ, 2001. http://www.loc.gov/catdir/enhancements/fy0813/2001029621-d.html.

Full text
16

Tu, Stephen Lyle. "Fast transactions for multicore in-memory databases." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/82375.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2013.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 55-57).
Though modern multicore machines have sufficient RAM and processors to manage very large in-memory databases, it is not clear what the best strategy for dividing work among cores is. Should each core handle a data partition, avoiding the overhead of concurrency control for most transactions (at the cost of increasing it for cross-partition transactions)? Or should cores access a shared data structure instead? We investigate this question in the context of a fast in-memory database. We describe a new transactionally consistent database storage engine called MAFLINGO. Its cache-centered data structure design provides excellent base key-value store performance, to which we add a new, cache-friendly serializable protocol and support for running large, read-only transactions on a recent snapshot. On a key-value workload, the resulting system introduces negligible performance overhead as compared to a version of our system with transactional support stripped out, while achieving linear scalability versus the number of cores. It also exhibits linear scalability on TPC-C, a popular transactional benchmark. In addition, we show that a partitioning-based approach ceases to be beneficial if the database cannot be partitioned such that only a small fraction of transactions access multiple partitions, making our shared-everything approach more relevant. Finally, based on a survey of results from the literature, we argue that our implementation substantially outperforms previous main-memory databases on TPC-C benchmarks.
by Stephen Lyle Tu.
S.M.
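The "cache-friendly serializable protocol" is, in Silo-style engines, optimistic: transactions record version numbers as they read, buffer their writes, and validate the read set at commit. A much-simplified, single-threaded reconstruction of that commit path follows (ours, not MAFLINGO's code; real engines also lock the write set and use epochs for read-only snapshots).

```python
class Record:
    def __init__(self, value):
        self.value, self.version = value, 0

class OCCTransaction:
    """Optimistic transaction: versioned reads, buffered writes,
    commit-time validation. Locking is omitted for brevity, so this
    sketch is only correct for one committing thread at a time."""
    def __init__(self):
        self.read_set = {}      # record -> version observed
        self.write_set = {}     # record -> new value

    def read(self, rec):
        if rec in self.write_set:
            return self.write_set[rec]    # read-your-own-writes
        self.read_set[rec] = rec.version
        return rec.value

    def write(self, rec, value):
        self.write_set[rec] = value

    def commit(self):
        # Validate: every record read must be unchanged since we read it.
        for rec, seen in self.read_set.items():
            if rec.version != seen:
                return False              # conflict: abort and retry
        for rec, value in self.write_set.items():
            rec.value, rec.version = value, rec.version + 1
        return True

a, b = Record(10), Record(20)
t = OCCTransaction()
t.write(b, t.read(a) + 1)     # b := a + 1
a.version += 1                # a concurrent writer bumps a's version
print(t.commit())             # -> False: read-set validation fails
```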
17

Yan, Cong. "Exploiting fine-grain parallelism in transactional database systems." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/101592.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 47-50).
Current database engines designed for conventional multicore systems exploit a fraction of the parallelism available in transactional workloads. Specifically, database engines only exploit inter-transaction parallelism: they use speculation to concurrently execute multiple, potentially-conflicting database transactions while maintaining atomicity and isolation. However, they do not exploit intra-transaction parallelism: each transaction is executed sequentially on a single thread. While fine-grain intra-transaction parallelism is often abundant, it is too costly to exploit in conventional multicores. Software would need to implement fine-grain speculative execution and scheduling, introducing prohibitive overheads that would negate the benefits of additional intra-transaction parallelism. In this thesis, we leverage novel hardware support to design and implement a database engine that effectively exploits both inter- and intra-transaction parallelism. Specifically, we use Swarm, a new parallel architecture that exploits fine-grained and ordered parallelism. Swarm executes tasks speculatively and out of order, but commits them in order. Integrated hardware task queueing and speculation mechanisms allow Swarm to speculate thousands of tasks ahead of the earliest active task and reduce task management overheads. We modify Silo, a state-of-the-art in-memory database engine, to leverage Swarm's features. The resulting database engine, which we call SwarmDB, has several key benefits over Silo: it eliminates software concurrency control, reducing overheads; it efficiently executes tasks within a database transaction in parallel; it reduces conflicts; and it reduces the amount of work that needs to be discarded and re-executed on each conflict. We evaluate SwarmDB on simulated Swarm systems of up to 64 cores. At 64 cores, SwarmDB outperforms Silo by 6.7x on TPC-C and 6.9x on TPC-E, and achieves near-linear scalability.
by Cong Yan.
S.M.
18

Prabhu, Nitin Kumar Vijay. "Transaction processing in Mobile Database System." Diss., UMK access, 2006.

Find full text
Abstract:
Thesis (Ph. D.)--School of Computing and Engineering. University of Missouri--Kansas City, 2006.
"A dissertation in computer science and informatics and telecommunications and computer networking." Advisor: Vijay Kumar. Typescript. Vita. Title from "catalog record" of the print edition Description based on contents viewed Nov. 9, 2007. Includes bibliographical references (leaves 152-157). Online version of the print edition.
19

Ogunyadeka, Adewole C. "Transactions and data management in NoSQL cloud databases." Thesis, Oxford Brookes University, 2016. https://radar.brookes.ac.uk/radar/items/c87fa049-f8c7-4b9e-a27c-3c106fcda018/1/.

Full text
Abstract:
NoSQL databases have become the preferred option for storing and processing data in cloud computing, as they are capable of providing high data availability, scalability and efficiency. But in order to achieve these attributes, NoSQL databases make certain trade-offs. First, NoSQL databases cannot guarantee strong consistency of data; they only guarantee a weaker consistency, based on the eventual consistency model. Second, NoSQL databases adopt a simple data model which makes it easy for data to be scaled across multiple nodes. Third, NoSQL databases do not support table joins and referential integrity, which by implication means they cannot implement complex queries. The combination of these factors implies that NoSQL databases cannot support transactions. Motivated by these crucial issues, this thesis investigates transactions and data management in NoSQL databases. It presents a novel approach that implements transactional support for NoSQL databases in order to ensure stronger data consistency and provide an appropriate level of performance. The novelty lies in the design of a Multi-Key transaction model that guarantees the standard properties of transactions in order to ensure stronger consistency and integrity of data. The model is implemented in a novel loosely-coupled architecture that separates the implementation of transactional logic from the underlying data, thus ensuring transparency and abstraction in cloud and NoSQL databases. The proposed approach is validated through the development of a prototype system using a real MongoDB system. An extended version of the standard Yahoo! Cloud Serving Benchmark (YCSB) has been used to test and evaluate the proposed approach. Various experiments have been conducted and sets of results have been generated. The results show that the proposed approach meets the research objectives: it maintains stronger consistency of cloud data as well as an appropriate level of reliability and performance.
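A minimal sketch of what a loosely-coupled Multi-Key layer over a versioned key-value store can look like (our own reconstruction from the abstract, with a dict standing in for MongoDB): reads remember versions, writes are buffered, and commit installs all writes only if no read key changed in the meantime. The all-or-nothing installation is assumed atomic here; enforcing that atomicity is exactly what the real transactional layer must add.

```python
class VersionedStore:
    """A dict standing in for a NoSQL store that versions each key."""
    def __init__(self):
        self.data = {}                        # key -> (value, version)

    def get(self, key):
        return self.data.get(key, (None, 0))

    def put(self, key, value, version):
        self.data[key] = (value, version)

class MultiKeyTransaction:
    def __init__(self, store):
        self.store, self.reads, self.writes = store, {}, {}

    def read(self, key):
        value, version = self.store.get(key)
        self.reads[key] = version             # remember what we saw
        return value

    def write(self, key, value):
        self.writes[key] = value              # buffer until commit

    def commit(self):
        for key, seen in self.reads.items():
            if self.store.get(key)[1] != seen:
                return False                  # stale read: abort
        for key, value in self.writes.items():
            _, version = self.store.get(key)
            self.store.put(key, value, version + 1)
        return True

store = VersionedStore()
tx = MultiKeyTransaction(store)
tx.write("user:1:balance", 50)
tx.write("user:2:balance", 150)
print(tx.commit())                            # -> True: both keys or neither
```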
20

Jones, Evan P. C. (Evan Philip Charles). "Fault-tolerant distributed transactions for partitioned OLTP databases." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/71477.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 103-112).
This thesis presents Dtxn, a fault-tolerant distributed transaction system designed specifically for building online transaction processing (OLTP) databases. Databases have traditionally been designed as general purpose data processing tools. By being designed only for OLTP workloads, Dtxn can be more efficient. It is designed to support very large databases by partitioning data across a cluster of commodity servers in a data center. Combining multiple servers together allows systems built with Dtxn to be cost effective, highly available, scalable, and fault-tolerant. Dtxn provides three novel features. First, it provides reusable infrastructure for building a distributed OLTP database out of single machine databases. This allows developers to take a specialized backend storage engine and use it across multiple machines, without needing to re-implement the distributed transaction infrastructure. We used Dtxn to build four different applications: a simple key/value store, a specialized TPC-C implementation, a main-memory OLTP database, and a traditional disk-based OLTP database. Second, Dtxn provides a novel concurrency control mechanism called speculative concurrency control, designed for main memory OLTP workloads that are primarily composed of transactions with a single round of communication between the application and database. Speculative concurrency control executes one transaction at a time, with no concurrency control overhead. In cases where there may be stalls due to network communication, it speculates future transactions. Our results show that this provides significantly better throughput than traditional two-phase locking, outperforming it by a factor of two on the TPC-C benchmark. Finally, Dtxn supports live migration, allowing part of the data on one server to be moved to another server while processing transactions. Our experiments show that our approach has nearly no visible impact on throughput or latency when moving data under moderate to high loads. It has significantly less impact than the best commercially available systems when the database is overloaded. The period of time where the throughput is reduced is less than half as long as failing over to another replica or using virtual machine migration.
by Evan Philip Charles Jones.
Ph.D.
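A much-simplified sketch of the speculation idea described above: transactions run one at a time with no concurrency control, and while the head transaction stalls on communication, queued transactions execute against speculative state that is promoted if the head commits and discarded (with re-execution) if it aborts. Dtxn's real protocol tracks dependencies and cascading aborts precisely; this shows only the shape of the idea.

```python
import copy

def run_with_speculation(state, head, queued, head_commits):
    """Run the stalled 'head' and speculate the queued transactions."""
    speculative = copy.deepcopy(state)        # sandbox for speculation
    head(speculative)                         # head's effects, not yet durable
    results = [tx(speculative) for tx in queued]
    if head_commits:
        state.clear()
        state.update(speculative)             # promote speculative state
        return results
    # Mis-speculation: discard the sandbox and re-execute the queue.
    return [tx(state) for tx in queued]

state = {"x": 1}
head = lambda s: s.update(x=s["x"] + 10)      # stalled multi-round head tx
queued = [lambda s: s["x"] * 2]               # single-shot tx behind it
print(run_with_speculation(state, head, queued, head_commits=True))  # [22]
print(state["x"])                                                    # 11
```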
21

Cahill, Michael James. "Serializable Isolation for Snapshot Databases." University of Sydney, 2009. http://hdl.handle.net/2123/5353.

Full text
Abstract:
PhD
Many popular database management systems implement a multiversion concurrency control algorithm called snapshot isolation rather than providing full serializability based on locking. There are well-known anomalies permitted by snapshot isolation that can lead to violations of data consistency by interleaving transactions that would maintain consistency if run serially. Until now, the only way to prevent these anomalies was to modify the applications by introducing explicit locking or artificial update conflicts, following careful analysis of conflicts between all pairs of transactions. This thesis describes a modification to the concurrency control algorithm of a database management system that automatically detects and prevents snapshot isolation anomalies at runtime for arbitrary applications, thus providing serializable isolation. The new algorithm preserves the properties that make snapshot isolation attractive, including that readers do not block writers and vice versa. An implementation of the algorithm in a relational database management system is described, along with a benchmark and performance study, showing that the throughput approaches that of snapshot isolation in most cases.
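The runtime detection in the published version of this algorithm hinges on rw-antidependencies: a transaction with both an incoming and an outgoing rw-conflict (a "pivot") is aborted, which breaks every anomaly-producing pattern. A minimal sketch of that bookkeeping (the full algorithm also handles locking, commit ordering, and false-positive reduction):

```python
class SITransaction:
    def __init__(self, name):
        self.name = name
        self.in_conflict = False    # some transaction rw-depends on me
        self.out_conflict = False   # I rw-depend on some transaction
        self.aborted = False

def record_rw_antidependency(reader, writer):
    """reader read a version that writer is overwriting (rw-conflict)."""
    reader.out_conflict = True
    writer.in_conflict = True
    for tx in (reader, writer):
        if tx.in_conflict and tx.out_conflict:   # pivot detected
            tx.aborted = True

t1, t2, t3 = SITransaction("T1"), SITransaction("T2"), SITransaction("T3")
record_rw_antidependency(t1, t2)   # T1 -rw-> T2
record_rw_antidependency(t2, t3)   # T2 -rw-> T3: T2 becomes a pivot
print([t.name for t in (t1, t2, t3) if t.aborted])   # -> ['T2']
```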
22

Pang, Gene. "Scalable Transactions for Scalable Distributed Database Systems." Thesis, University of California, Berkeley, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3733329.

Full text
Abstract:

With the advent of the Internet and Internet-connected devices, modern applications can experience very rapid growth of users from all parts of the world. A growing user base leads to greater usage and large data sizes, so scalable database systems capable of handling the great demands are critical for applications. With the emergence of cloud computing, a major movement in the industry, modern applications depend on distributed data stores for their scalable data management solutions. Many large-scale applications utilize NoSQL systems, such as distributed key-value stores, for their scalability and availability properties over traditional relational database systems. By simplifying the design and interface, NoSQL systems can provide high scalability and performance for large data sets and high volume workloads. However, to provide such benefits, NoSQL systems sacrifice traditional consistency models and support for transactions typically available in database systems. Without transaction semantics, it is harder for developers to reason about the correctness of the interactions with the data. Therefore, it is important to support transactions for distributed database systems without sacrificing scalability.

In this thesis, I present new techniques for scalable transactions for scalable database systems. Distributed data stores need scalable transactions to take advantage of cloud computing, and to meet the demands of modern applications. Traditional techniques for transactions may not be appropriate in a large, distributed environment, so in this thesis, I describe new techniques for distributed transactions, without having to sacrifice traditional semantics or scalability.

I discuss three facets to improving transaction scalability and support in distributed database systems. First, I describe a new transaction commit protocol that reduces the response times for distributed transactions. Second, I propose a new transaction programming model that allows developers to better deal with the unexpected behavior of distributed transactions. Lastly, I present a new scalable view maintenance algorithm for convergent join views. Together, the new techniques in this thesis contribute to providing scalable transactions for modern, distributed database systems.

APA, Harvard, Vancouver, ISO, and other styles
23

Oza, Smita. "Implementing real-time transactions using distributed main memory databases." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape16/PQDD_0031/MQ27056.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Oza, Smita Carleton University Dissertation Computer Science. "Implementing real-time transactions using distributed main memory databases." Ottawa, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
25

Lawley, Michael John. "Program Transformation for Proving Database Transaction Safety." Griffith University. School of Computing and Information Technology, 2000. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20070228.150125.

Full text
Abstract:
In this thesis we propose the use of Dijkstra's concept of a predicate transformer [Dij75] for the determination of database transaction safety [SS89] and the generation of simple conditions to check that a transaction will not violate the integrity constraints in the case that it is not safe. The generation of this simple condition can be done statically, thus providing a mechanism for generating safe transactions. Our approach treats a database as state, a database transaction as a program, and the database's integrity constraints as a postcondition, in order to use a predicate transformer [Dij75] to generate a weakest precondition. We begin by introducing a set-oriented update language for relational databases for which a predicate transformer is then defined. Subsequently, we introduce a more powerful update language for deductive databases and define a new predicate transformer to deal with this language and the more powerful integrity constraints that can be expressed using recursive rules. Next we introduce a data model with object-oriented features including methods, inheritance and dynamic overriding, and extend the predicate transformer to handle these new features. For each of the predicate transformers, we prove that they do indeed generate a weakest precondition for a transaction and the database integrity constraints. However, the weakest precondition generated by a predicate transformer still involves much redundant checking. For several general classes of integrity constraint, including referential integrity and functional dependencies, we prove that the weakest precondition can be substantially simplified to avoid checking things we already know to be true, under the assumption that the database currently satisfies its integrity constraints. In addition, we propose the use of the predicate transformer in combination with meta-rules that capture the exact incremental change to the database of a particular transaction. This provides a more general approach to generating simple checks for enforcing transaction safety. We show that this approach is superior to existing approaches to the problem of efficient integrity constraint checking and transaction safety for relational, deductive, and deductive object-oriented databases. Finally we demonstrate several further applications of the predicate transformer to the problems of schema constraints, dynamic integrity constraints, and determining the correctness of methods for view updates. We also show how to support transactions embedded in procedural languages such as C.
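To make the approach concrete, here is a small worked instance of the kind of simplification described (my own illustrative example, not taken from the thesis): for an insertion and a universally quantified domain constraint, the weakest precondition collapses to a check on the new tuple alone once current constraint satisfaction is assumed.

```latex
% Illustrative example (not the thesis's notation).
% Transaction T: insert tuple t into relation R.
% Integrity constraint IC: every tuple of R satisfies p.
\[
  wp(T,\ IC) \;=\; \forall x \in R \cup \{t\}.\ p(x)
  \;=\; \underbrace{\big(\forall x \in R.\ p(x)\big)}_{\text{assumed: } IC \text{ holds now}} \;\wedge\; p(t)
\]
% Under the assumption that the database currently satisfies IC,
% the statically generated residual check is simply p(t).
```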
APA, Harvard, Vancouver, ISO, and other styles
26

Konana, Prabhudev Chennabasappa. "A transaction model for active and real-time databases." Diss., The University of Arizona, 1995. http://hdl.handle.net/10150/187289.

Full text
Abstract:
Many emerging database applications, such as automated financial trading, network management, and manufacturing process control, involve accessing and manipulating large amounts of data under time constraints. This has led to the emergence of active and real-time databases as a research area, wherein transactions trigger other transactions and have time deadlines. In this dissertation, three important issues are investigated: the correctness criteria for various classes of transactions; real-time transaction scheduling algorithms for overload situations; and a concurrency control policy that is sensitive to time deadlines and transaction triggering. The first part of the dissertation deals with the issue of consistency of sensor-reported data. We formally define sensor data consistency and a new notion of visibility called quasi immediate visibility (QIV) for concurrent execution of write-only and read-only transactions. We propose a protocol for maintaining sensor data consistency that has lower response time and higher throughput; the protocol is validated through simulation. Real-time schedulers must perform well under both underloaded and overloaded situations. In this dissertation, we propose a variation of the weighted priority scheduling algorithm called Deadline Access Parameter Ratio (DAPR), which actively considers the I/O requirements and the amount of unprocessed work under the "canned" transaction assumption. We show through simulation that DAPR performs significantly better than existing scheduling algorithms under overloaded situations. The limitation of the proposed algorithm is that it is not suitable for underloaded situations. The last part of this dissertation proposes a concurrency control (CC) policy, called OCCWB, which is an extension of conventional optimistic CC. OCCWB takes advantage of the "canned" transaction assumption and includes a pre-analysis stage, wherein transactions are selectively blocked from executing if there is a high probability of restarting. The algorithm defines favorable serialization orders considering transaction semantics and tries to achieve such orders through appropriate priority adjustment. OCCWB is shown to perform significantly better than other CC policies under reasonable pre-analysis overhead in underloaded situations, and consistently better in overloaded situations, even with high pre-analysis overhead, for a wide range of workload and resource parameters.
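A hedged sketch of the pre-analysis idea: because transactions are "canned", their read/write templates are known in advance, so a transaction likely to restart can be blocked up front instead of failing validation later. All names and the threshold below are illustrative assumptions, not the dissertation's algorithm.

```python
# Sketch of optimistic concurrency control with a pre-analysis stage,
# in the spirit of OCCWB. Names and thresholds are hypothetical.

RESTART_THRESHOLD = 0.5

def estimate_restart_prob(template, active_templates):
    """Canned transactions expose known read/write sets, so overlap
    with currently active transactions can be estimated cheaply."""
    conflicts = sum(
        1 for a in active_templates
        if template.read_set & a.write_set
        or template.write_set & (a.read_set | a.write_set)
    )
    return conflicts / max(len(active_templates), 1)

def admit(template, active_templates, wait_queue):
    # Pre-analysis: selectively block transactions likely to restart,
    # instead of letting them run and fail optimistic validation.
    if estimate_restart_prob(template, active_templates) > RESTART_THRESHOLD:
        wait_queue.append(template)
    else:
        active_templates.append(template)
```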
APA, Harvard, Vancouver, ISO, and other styles
27

Aleksic, Mario. "Incremental computation methods in valid & transaction time databases." [S.l.]: Universität Stuttgart, Fakultät Informatik, 1996. http://www.bsz-bw.de/cgi-bin/xvms.cgi?SWB6783621.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Sinha, Aman. "Memory management and transaction scheduling for large-scale databases." Digital version accessible at http://wwwlib.umi.com/cr/utexas/main, 1999.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Couto, Emanuel Amaral. "Speculative execution by using software transactional memory." Master's thesis, FCT - UNL, 2009. http://hdl.handle.net/10362/2659.

Full text
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa to obtain the degree of Master in Computer Science (Engenharia Informática).
Many programs sequentially execute operations that take a long time to complete. Some of these operations may return a highly predictable result, in which case speculative execution can improve the overall performance of the program. Speculative execution is the execution of code whose result may not be needed; generally it is used as a performance optimization. Instead of waiting for the result of a costly operation, speculative execution can predict the operation's most probable result and continue executing based on this speculation. If the speculation is later confirmed to be correct, time has been gained. Otherwise, if the speculation is incorrect, the execution based on the speculation must abort and re-execute with the correct result. In this dissertation we propose the design of an abstract process for adding speculative execution to a program by source-to-source transformation. This abstract process is used in the definition of a mechanism and methodology that enable programmers to add speculative execution to the source code of programs. The abstract process is also used in the design of an automatic source-to-source transformation process that adds speculative execution to existing programs without user intervention. Finally, we evaluate the performance impact of introducing speculative execution in database clients. Existing proposals for mechanisms to add speculative execution sacrificed portability in favor of performance; some were designed to be implemented at the kernel or hardware level. The process and mechanisms we propose in this dissertation can add speculative execution to the source of a program independently of the kernel or hardware that is used. From our experiments we have concluded that database clients can improve their performance by using speculative execution. Nothing in the system we propose limits it to the scope of database clients; although this was the scope of the case study, we strongly believe that other programs can benefit from the proposed process and mechanisms for introducing speculative execution.
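A generic control-flow sketch of the transformation applied to database clients: predict the probable result of a costly operation, run ahead on the prediction, and re-execute if it turns out wrong. For brevity the sketch runs sequentially; in a real mechanism the costly operation and the speculative continuation would execute concurrently. `predict` and the retry policy are assumptions, not the dissertation's mechanism.

```python
# Generic sketch of speculating a costly operation's result and
# re-executing on misprediction. Illustrative only.

def speculate(costly_op, predict, continuation):
    """Run `continuation` on a predicted result; redo it if the
    prediction turns out to be wrong once `costly_op` completes."""
    guess = predict()
    speculative_result = continuation(guess)   # run ahead on the guess
    actual = costly_op()                       # e.g. a database query
    if actual == guess:
        return speculative_result              # speculation paid off
    return continuation(actual)                # abort and re-execute

# Hypothetical usage: a database client that usually sees no conflicts.
# result = speculate(lambda: run_query("SELECT ..."),
#                    predict=lambda: [],
#                    continuation=process_rows)
```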
APA, Harvard, Vancouver, ISO, and other styles
30

Takkar, Sonia. "Scheduling real-time transactions in parallel database systems." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape16/PQDD_0025/MQ26975.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Takkar, Sonia Carleton University Dissertation Computer Science. "Scheduling real-time transactions in parallel database systems." Ottawa, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
32

Aldarmi, Saud Ahmed. "Scheduling soft-deadline real-time transactions." Thesis, University of York, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.310917.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Suriyakarn, Sorawit. "A framework for synthesizing transactional database implementations in a proof assistant." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/113101.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 67-68).
We propose CoqSQL, a framework for optimizing relational queries and automatically synthesizing relational database implementations in the Coq proof assistant, based on Anders Kaseorg's and Mohsen Lesani's Transactions framework. The synthesized code supports concurrent transaction execution on multiple processors and is accompanied by proofs certifying its correctness. The contributions include: (1) a complete specification of a subset of SQL queries and database relations, including support for indexes; and (2) an extensible, automated, and complete synthesis process from standard SQL-like specifications to executable concurrent programs.
by Sorawit Suriyakarn.
M. Eng.
APA, Harvard, Vancouver, ISO, and other styles
34

Barga, Roger S. "A reflective framework for implementing extended transactions." Full text open access at http://content.ohsu.edu/u?/etd,205, 1999.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Savasere, Ashok. "Efficient algorithms for mining association rules in large databases of customer transactions." Diss., Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/8260.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Zhang, Connie. "Static Conflict Analysis of Transaction Programs." Thesis, University of Waterloo, 2000. http://hdl.handle.net/10012/1052.

Full text
Abstract:
Transaction programs consist of read and write operations issued against the database. In a shared database system, one transaction program conflicts with another if it reads or writes data that the other transaction program has written. This thesis presents a semi-automatic technique for pairwise static conflict analysis of embedded transaction programs. The analysis predicts whether a given pair of programs will conflict when executed against the database. There are several potential applications of this technique, the most obvious being transaction concurrency control in systems where it is not necessary to support arbitrary, dynamic queries and updates. By analyzing transactions in such systems before the transactions are run, it is possible to reduce or eliminate the need for locking or other dynamic concurrency control schemes.
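Reduced to its skeleton, the pairwise analysis asks whether the statically derived read and write sets of two programs can overlap. The thesis's technique is finer grained (it reasons about the predicates involved), but the shape of the check is as below.

```python
# Simplified static conflict test between two transaction programs,
# reduced to read/write set overlap. Illustrative skeleton only.

from dataclasses import dataclass, field

@dataclass
class TxnProgram:
    name: str
    reads: set = field(default_factory=set)    # e.g. {"acct.bal"}
    writes: set = field(default_factory=set)

def conflicts(p: TxnProgram, q: TxnProgram) -> bool:
    """p conflicts with q if one reads or writes what the other writes."""
    return bool(p.reads & q.writes or p.writes & q.reads
                or p.writes & q.writes)

deposit = TxnProgram("deposit", reads={"acct.bal"}, writes={"acct.bal"})
audit = TxnProgram("audit", reads={"acct.bal"})
assert conflicts(deposit, audit)       # reader vs. writer
assert not conflicts(audit, audit)     # two readers never conflict
```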
APA, Harvard, Vancouver, ISO, and other styles
37

Ongkasuwan, Patarawan. "Transaction synchronization and privacy aspect in blockchain decentralized applications." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-272134.

Full text
Abstract:
The ideas and techniques of cryptography and decentralized storage have seen tremendous growth in many industries, as they have been adopted to improve activities in organizations. This technology, called Blockchain, provides an effective transparency solution. Blockchain has generally been used for digital currency, or cryptocurrency, since its inception. One of the best-known Blockchain protocols is Ethereum, which introduced the smart contract to enable Blockchain to execute conditions rather than simply act as storage. Applications that adopt this technology are called "Dapps", or "decentralized applications". However, there are ongoing concerns about synchronization in such systems. System synchronization is currently extremely important for applications, because the waiting time for a transaction to be verified can cause dissatisfaction in the user experience. Several studies have revealed that privacy leakage occurs even though Blockchain provides a degree of security. A traditional transaction requires approval through intermediate institutions: for instance, a bank needs to process transactions via many parties before receiving the final confirmation, which requires the user to wait a considerable amount of time. This thesis describes the challenge of transaction synchronization between the user and the smart contract, as well as the matter of a privacy strategy for the system and its compliance. To approach these two challenges, the first task separates different events and evaluates the results compared to an alternative solution. This is done by testing the smart contract to find the best gas price result, which varies over time. In the Ethereum protocol, the gas price is one of the best levers for decreasing transaction time to meet user expectations; it is affected by the code structure and the network. The smart contract is tested on two cases, solving platform issues such as runners and user experience while reducing costs. It was found that collecting the fee before participation in an auction can prevent the problem of runners. The second case aims to show that freezing the amount of a bid is the best way to improve the user's experience of an online auction. The second challenge focuses mainly on the privacy strategy and risk management for the platform, which involves identifying possible solutions for all risk situations, as well as detecting, forecasting, and preventing them. Strategies such as securing the smart contract structure, strengthening the encryption used in the database, designing a term sheet and agreement, and authorization help to prevent system vulnerabilities. This research therefore aims to improve and investigate an online auction platform by using a Blockchain smart contract to provide evocative user experiences.
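A hedged sketch of the two auction mechanisms discussed above: an entry fee collected up front (against "runners") and freezing the bid amount until the auction resolves. This is illustrative Python, not the thesis's Ethereum smart contract; all names are assumptions.

```python
# Toy auction with an up-front entry fee and frozen (escrowed) bids.

class Auction:
    def __init__(self, entry_fee):
        self.entry_fee = entry_fee
        self.frozen = {}       # bidder -> escrowed funds
        self.best = (None, 0)  # (bidder, highest bid)

    def join(self, bidder, payment):
        # Collecting the fee before participation deters runners.
        if payment < self.entry_fee:
            raise ValueError("entry fee not covered")
        self.frozen[bidder] = payment - self.entry_fee

    def bid(self, bidder, amount):
        # A bid must be covered by the bidder's frozen funds.
        if self.frozen.get(bidder, 0) < amount:
            raise ValueError("bid exceeds frozen funds")
        if amount > self.best[1]:
            self.best = (bidder, amount)

    def settle(self):
        winner, price = self.best
        # Everyone is refunded; the winner pays the winning price.
        refunds = {b: f - (price if b == winner else 0)
                   for b, f in self.frozen.items()}
        return winner, price, refunds
```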
APA, Harvard, Vancouver, ISO, and other styles
38

Youssef, Mohamed Wagdy Abdel Fattah. "Transaction behaviour in large database environments : a methodological approach." Thesis, City University London, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.358945.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Sauer, Caetano [author]. "Modern techniques for transaction-oriented database recovery." München: Verlag Dr. Hut, 2017. http://d-nb.info/1140977644/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Hedman, Surlien Peter. "Economic advantages of Blockchain technology VS Relational database: A study focusing on economic advantages with Blockchain technology and relational databases." Thesis, Blekinge Tekniska Högskola, Institutionen för industriell ekonomi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-17366.

Full text
Abstract:
Many IT systems are not designed, when created, to be flexible and dynamic, resulting in old and complex systems that are hard to maintain. Systems usually build their functionality and capability on the data contained in their databases. The database underlies such systems, and when data does not correspond between different, synchronized systems, debugging becomes troublesome, because systems are complex and the software architecture is not always easy to understand. Due to the increasing complexity of systems over time, which makes them harder to debug and understand, there is a need for a system that decreases debugging costs and, in turn, yields better transaction costs. This study proposes a system based on blockchain technology to accomplish this. An ERP system based on blockchain with encrypted transactions was constructed to determine whether the proposed system can contribute to better transaction costs. A case study at multiple IT companies and a comparison to an existing ERP system module validated the system. A successful simulation showed that multiple parties could read and append data to an immutable storage system providing one truth of data. By all counts, and with proven results, the constructed blockchain solution based on encrypted transactions for an ERP system can reduce debugging costs. It is also shown that a centralized database structure where external and internal systems can get one truth of data decreases transaction costs. However, decision makers in companies need to be convinced before the constructed system can be implemented. A remaining problem is that when the object type is modified, historical transactions cannot be changed in an immutable storage solution. Blockchain is still a new technology, and knowledge of the technology and the evolution of the system will determine whether the proposed software architecture results in better transaction costs.
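The "immutable storage system for one truth of data" that the simulation relies on is typically realized as a hash-chained append-only log, sketched below in generic form. This illustrates the underlying blockchain property, not the thesis's ERP prototype.

```python
# Generic hash-chained append-only log: any modification of a past
# entry breaks every later hash, making the history tamper-evident.

import hashlib
import json

class AppendOnlyLog:
    def __init__(self):
        self.entries = []

    def append(self, record: dict) -> str:
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = json.dumps(record, sort_keys=True)
        h = hashlib.sha256((prev + body).encode()).hexdigest()
        self.entries.append({"prev": prev, "body": body, "hash": h})
        return h

    def verify(self) -> bool:
        """Recompute the chain; False if any entry was tampered with."""
        prev = "0" * 64
        for e in self.entries:
            h = hashlib.sha256((prev + e["body"]).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != h:
                return False
            prev = e["hash"]
        return True

log = AppendOnlyLog()
log.append({"txn": 1, "item": "widget", "qty": 5})
assert log.verify()
```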
APA, Harvard, Vancouver, ISO, and other styles
41

Brodsky, Lloyd. "A knowledge-based preprocessor for approximate joins in improperly designed transaction databases." Thesis, Massachusetts Institute of Technology, 1991. http://hdl.handle.net/1721.1/13744.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Wang, Xiangyang. "The development of a knowledge-based database transaction design assistant." Thesis, Cardiff University, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.359755.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Dixon, Eric Richard. "Developing distributed applications with distributed heterogeneous databases." Thesis, Virginia Tech, 1993. http://hdl.handle.net/10919/42748.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

On, Sai Tung. "Efficient transaction recovery on flash disks." HKBU Institutional Repository, 2010. http://repository.hkbu.edu.hk/etd_ra/1170.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Xie, Wanxia. "Supporting Distributed Transaction Processing Over Mobile and Heterogeneous Platforms." Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/14073.

Full text
Abstract:
Recent advances in pervasive computing and peer-to-peer computing have opened up vast opportunities for developing collaborative applications. To benefit from these emerging technologies, there is a need to investigate techniques and tools that will allow development and deployment of these applications on mobile and heterogeneous platforms. To meet these challenges, we need to address the typical characteristics of mobile peer-to-peer systems such as frequent disconnections, frequent network partitions, and peer heterogeneity. This research focuses on developing the necessary models, techniques and algorithms that will enable us to build and deploy collaborative applications in Internet-enabled, mobile peer-to-peer environments. This dissertation proposes a multi-state transaction model and develops a quality-aware transaction processing framework to incorporate quality of service with transaction processing. It proposes adaptive ACID properties and develops a quality specification language to associate a quality level with transactions. In addition, this research develops a probabilistic concurrency control mechanism and a group-based transaction commit protocol for mobile peer-to-peer systems that greatly reduces blocking in transactions and improves the transaction commit ratio. To the best of our knowledge, this is the first attempt to systematically support disconnection-tolerant and partition-tolerant transaction processing. This dissertation also develops a scalable directory service called PeerDS to support the above framework. It addresses the scalability and dynamism of the directory service from two aspects: peer-to-peer and push-pull hybrid interfaces. It also addresses peer heterogeneity and develops a new technique for load balancing in the peer-to-peer system. This technique comprises an improved routing algorithm for virtualized P2P overlay networks and a generalized Top-K server selection algorithm for load balancing, which could be optimized based on multiple factors such as proximity and cost. The proposed push-pull hybrid interfaces greatly reduce the overhead of directory servers caused by frequent queries from directory clients. In order to further improve the scalability of the push interface, this dissertation also studies and evaluates different filter indexing schemes through which the interests of each update can be calculated very efficiently. This dissertation was developed in conjunction with the middleware called System on Mobile Devices (SyD).
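As one concrete illustration of the load-balancing facet, a generalized Top-K server selection can score candidate servers on several weighted factors such as load, proximity, and cost. The weights and record fields below are assumptions for the sketch, not PeerDS's actual interface.

```python
# Sketch of generalized Top-K server selection over multiple factors.
# Weights and fields are illustrative assumptions.

import heapq

def top_k_servers(servers, k, weights=None):
    """Return the k best servers; lower load/proximity/cost is better,
    hence the negative weights."""
    weights = weights or {"load": -1.0, "proximity": -0.5, "cost": -0.2}
    def score(s):
        return sum(w * s[f] for f, w in weights.items())
    return heapq.nlargest(k, servers, key=score)

servers = [
    {"name": "a", "load": 0.9, "proximity": 10, "cost": 1.0},
    {"name": "b", "load": 0.2, "proximity": 40, "cost": 1.0},
    {"name": "c", "load": 0.3, "proximity": 5,  "cost": 2.0},
]
print([s["name"] for s in top_k_servers(servers, k=2)])  # ['c', 'a']
```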
APA, Harvard, Vancouver, ISO, and other styles
46

Quantock, David E. "The real-time roll-back and recovery of transactions in database systems." Thesis, Monterey, California. Naval Postgraduate School, 1989. http://hdl.handle.net/10945/27234.

Full text
Abstract:
Approved for public release; distribution is unlimited.
A modern database transaction may involve a long series of updates, deletions, and insertions of data, and a complex mix of these primary database operations. Due to its length and complexity, the transaction requires back-up and recovery procedures. The back-up procedure allows the user to either commit or abort a lengthy and complex transaction without compromising the integrity of the data. The recovery procedure allows the system to maintain data integrity during the execution of a transaction, should the transaction be interrupted by the system. With both the back-up and recovery procedures, a modern database system is able to provide consistent data throughout the life-span of a database without ever corrupting either its data values or its data types. However, the implementation of back-up and recovery procedures in a database system is a difficult and involved effort, since it affects the base data as well as the meta data of the database. Further, it affects the state of the database system. This thesis is mainly focused on the design trade-offs and issues of implementing an effective and efficient mechanism for back-up and recovery in the multimodel, multilingual, and multi-backend database system. Keywords: Database management systems. (KR)
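A minimal sketch of the classic undo-log mechanism behind such back-up and recovery procedures: record a before-image for every primary operation, discard the log on commit, and replay it backwards on abort. This is a textbook illustration, not the multi-backend system's actual implementation.

```python
# Minimal undo-log sketch for transaction back-up and recovery.

class Transaction:
    def __init__(self, db: dict):
        self.db = db
        self.undo = []   # (key, before_image) pairs, newest last

    def write(self, key, value):
        # Back up the old value; None marks "key was absent".
        self.undo.append((key, self.db.get(key)))
        self.db[key] = value

    def delete(self, key):
        self.undo.append((key, self.db.get(key)))
        self.db.pop(key, None)

    def commit(self):
        self.undo.clear()          # changes become permanent

    def abort(self):
        # Recovery: apply before-images in reverse order.
        for key, before in reversed(self.undo):
            if before is None:
                self.db.pop(key, None)
            else:
                self.db[key] = before
        self.undo.clear()

db = {"x": 1}
t = Transaction(db)
t.write("x", 2); t.write("y", 3)
t.abort()
assert db == {"x": 1}   # integrity preserved despite the abort
```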
APA, Harvard, Vancouver, ISO, and other styles
47

Gong, Daoya. "Transaction process modeling and implementation for 3-tiered Web-based database systems." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0023/MQ62128.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Koroncziová, Dominika. "Doplnění a optimalizace temporálního rozšíření pro PostgreSQL." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2016. http://www.nusl.cz/ntk/nusl-255417.

Full text
Abstract:
This thesis focuses on the implementation of temporal data support within the traditional relational environment of the PostgreSQL system. I pick up on Radek Jelínek's thesis and the extension developed by him. I have analyzed the extension from functional, practical and performance perspectives. Based on my results, I have designed and implemented changes to the original extension. The work also contains implementation details as well as performance comparison results between the new and the original extensions.
APA, Harvard, Vancouver, ISO, and other styles
49

Olsson, Markus. "Design and Implementation of Transactions in a Column-Oriented In-Memory Database System." Thesis, Umeå University, Department of Computing Science, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-32705.

Full text
Abstract:

Coldbase is a column-oriented in-memory database implemented in Java that is used with a specific workload in mind. Coldbase is optimized to receive large streams of timestamped trading data arriving at a fast pace, while allowing simple but frequent queries that analyse the data concurrently. By limiting the functionality, Coldbase is able to reach high performance while keeping memory consumption low. This thesis presents ColdbaseTX, an extension to Coldbase that adds support for transactions. It uses an optimistic approach: it stores all writes of a transaction locally and applies them when the transaction commits. Readers are separated from writers by using two versions of the data, which makes it possible to guarantee that readers are never blocked. Benchmarks compare Coldbase to ColdbaseTX regarding both performance and memory efficiency. The results show that ColdbaseTX introduces a small overhead in both memory and performance, which is deemed acceptable since the gain is support for transactions.
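A hedged sketch of the design point described: writes buffered locally and applied at commit, with readers served from a stable published version so they never block. Names are hypothetical, and the sketch is in Python although the actual system is written in Java.

```python
# Sketch of ColdbaseTX-style optimism: writes stay local until commit,
# and readers see a stable published version, so they never block.

import threading

class TwoVersionStore:
    def __init__(self):
        self.published = {}            # stable snapshot, read lock-free
        self._lock = threading.Lock()  # serializes committing writers

    def read(self, key):
        return self.published.get(key)     # readers never block

    def commit(self, local_writes: dict, read_versions: dict) -> bool:
        with self._lock:
            # Validate: everything the transaction read must be unchanged.
            for key, seen in read_versions.items():
                if self.published.get(key) != seen:
                    return False           # conflict -> caller retries
            # Install the buffered write set as the new published version.
            new_version = dict(self.published)
            new_version.update(local_writes)
            self.published = new_version   # single reference swap
            return True
```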

APA, Harvard, Vancouver, ISO, and other styles
50

Huang, Kuo-Yu, and 黃國瑜. "Mining Inter-Transaction Association Rules in Transactional Databases." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/skp85p.

Full text
Abstract:
Ph.D.
National Central University
Institute of Computer Science and Information Engineering
94 (academic year in the ROC calendar, i.e., 2005)
In this dissertation, we focus on how to devise efficient and effective algorithms for discovering inter-transaction associations such as periodic patterns, frequent continuities, frequent episodes and sequential patterns. First, we propose a 3-phase FITS model for inter-transaction association mining. We adopt both horizontal and vertical formats to increase the mining efficiency. Furthermore, we focus on the application of FITS to closed pattern mining to reduce the number of patterns to be enumerated. The insight is: "If an intra-transaction pattern is not a closed pattern, it will not be a closed frequent inter-transaction pattern." Bi-format and bi-phase reductions are applied to overcome the problem of duplicate item extensions, especially for closed pattern mining. We have applied the FITS model to all inter-transaction mining tasks with little modification. Although the FITS model can be used for periodic pattern mining, it is not efficient enough, since the constraints on periodicity are not fully utilized. Therefore, we propose a more general model, SMCA, to mine asynchronous periodic patterns from a complex sequence and correct some problems in previous works. A 4-phase algorithm, comprising SPMiner, MPMiner, CPMiner and APMiner, is devised to discover periodic patterns from a transactional database presented in vertical format. The essential idea of SPMiner is to trace the possible segments for period p using a hash table. Moreover, to avoid additional scans over the transactional database, we propose a segment-based combination to reduce redundant generation and testing. The experiments demonstrate good performance of the proposed model on several inter-transaction pattern tasks. Although the efficiency improvement comes at the price of additional memory, the memory cost can be further reduced by disk-based or partition-based approaches, which in turn also prove to be better than state-of-the-art algorithms. In summary, the proposed model can be orders of magnitude faster than previous works with a modest memory cost.
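The segment-tracing idea attributed to SPMiner — use a hash table to extend, for a candidate period p, runs in which an event keeps recurring — can be illustrated on a toy sequence. This is a simplified illustration; the dissertation's algorithm handles multiple events, phases, and disturbances.

```python
# Simplified hash-table segment tracing for one period p: find maximal
# runs where an event recurs every p positions in the sequence.

def periodic_segments(sequence, event, p, min_rep=3):
    """Return (start, repetitions) for runs of `event` with period p."""
    runs = {}         # position of a run's last occurrence -> repetitions
    for i, e in enumerate(sequence):
        if e != event:
            continue
        if runs.get(i - p):
            runs[i] = runs.pop(i - p) + 1   # extend the run ending at i-p
        else:
            runs[i] = 1                     # start a new run at i
    return [(end - (reps - 1) * p, reps)
            for end, reps in runs.items() if reps >= min_rep]

seq = list("abcabcabcxxabc")
print(periodic_segments(seq, "a", p=3))   # [(0, 3)]: 'a' at 0, 3, 6
```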
APA, Harvard, Vancouver, ISO, and other styles
