
Dissertations / Theses on the topic 'Transaction databases'

Consult the top 50 dissertations / theses for your research on the topic 'Transaction databases.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Aleksic, Mario. "Incremental computation methods in valid and transaction time databases." Thesis, Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/8126.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Tuck, Terry W. "Temporally Correct Algorithms for Transaction Concurrency Control in Distributed Databases." Thesis, University of North Texas, 2001. https://digital.library.unt.edu/ark:/67531/metadc2743/.

Full text
Abstract:
Many activities are composed of temporally dependent events that must be executed in a specific chronological order. Supportive software applications must preserve these temporal dependencies. Whenever the processing of this type of application includes transactions submitted to a database that is shared with other such applications, the transaction concurrency control mechanisms within the database must also preserve the temporal dependencies. A basis for preserving temporal dependencies is established by using (within the applications and databases) real-time timestamps to identify and order events and transactions. The use of optimistic approaches to transaction concurrency control can be undesirable in such situations, as they allow incorrect results for database read operations. Although the incorrectness is detected prior to transaction committal and the corresponding transaction(s) restarted, the impact on the application or entity that submitted the transaction can be too costly. Three transaction concurrency control algorithms are proposed in this dissertation. These algorithms are based on timestamp ordering, and are designed to preserve temporal dependencies existing among data-dependent transactions. The algorithms produce execution schedules that are equivalent to temporally ordered serial schedules, where the temporal order is established by the transactions' start times. The algorithms provide this equivalence while supporting concurrency to the extent of out-of-order commits and reads. With respect to the stated concern with optimistic approaches, two of the proposed algorithms are risk-free and return only committed data-item values to read operations. Risk with the third algorithm is greatly reduced by its conservative bias. All three algorithms avoid deadlock while providing risk-free or reduced-risk operation. The performance of the algorithms is determined analytically and with experimentation. Experiments are performed using functional database management system models that implement the proposed algorithms and the well-known Conservative Multiversion Timestamp Ordering algorithm.
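The dissertation's algorithms build on timestamp ordering. As background only, the following minimal Python sketch illustrates the classical timestamp-ordering read/write rule (per-item read and write timestamps, abort on late access); it is an illustration of the textbook rule, not of the temporally correct algorithms proposed in the thesis, and all names in it are hypothetical.

```python
class TOItem:
    """A data item with the bookkeeping used by basic timestamp ordering."""
    def __init__(self, value):
        self.value = value
        self.read_ts = 0    # largest timestamp that has read this item
        self.write_ts = 0   # largest timestamp that has written this item

class AbortTransaction(Exception):
    pass

def to_read(item, ts):
    # A read is rejected if a younger transaction has already written the item.
    if ts < item.write_ts:
        raise AbortTransaction(f"read by ts={ts} too late (write_ts={item.write_ts})")
    item.read_ts = max(item.read_ts, ts)
    return item.value

def to_write(item, ts, value):
    # A write is rejected if a younger transaction has already read or written the item.
    if ts < item.read_ts or ts < item.write_ts:
        raise AbortTransaction(f"write by ts={ts} too late")
    item.value = value
    item.write_ts = ts

# Example: transaction 2 writes x, then the older transaction 1 tries to read it.
x = TOItem(0)
to_write(x, ts=2, value=42)
try:
    to_read(x, ts=1)
except AbortTransaction as e:
    print("transaction 1 must restart:", e)
```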
APA, Harvard, Vancouver, ISO, and other styles
3

Sinha, Aman. "Memory management and transaction scheduling for large-scale databases /." Digital version accessible at:, 1999. http://wwwlib.umi.com/cr/utexas/main.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Aleksic, Mario. "Incremental computation methods in valid & transaction time databases." [S.l.] : Universität Stuttgart , Fakultät Informatik, 1996. http://www.bsz-bw.de/cgi-bin/xvms.cgi?SWB6783621.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Konana, Prabhudev Chennabasappa. "A transaction model for active and real-time databases." Diss., The University of Arizona, 1995. http://hdl.handle.net/10150/187289.

Full text
Abstract:
Many emerging database applications, such as automated financial trading, network management and manufacturing process control, involve accessing and manipulating large amounts of data under time constraints. This has led to the emergence of active and real-time databases as a research area, wherein transactions trigger other transactions and have time deadlines. In this dissertation, three important issues are investigated: the correctness criteria for various classes of transactions; real-time transaction scheduling algorithms for overload situations; and a concurrency control policy that is sensitive to time deadlines and transaction triggering. The first part of the dissertation deals with the issue of consistency of sensor-reported data. We formally define sensor data consistency and a new notion of visibility called quasi immediate visibility (QIV) for concurrent execution of write-only and read-only transactions. We propose a protocol for maintaining sensor data consistency that has lower response time and higher throughput. The protocol is validated through simulation. Real-time schedulers must perform well under both underloaded and overloaded situations. In this dissertation, we propose a variation of the weighted priority scheduling algorithm called Deadline Access Parameter Ratio (DAPR), which actively considers the I/O requirements and the amount of unprocessed work under the "canned" transaction assumption. We show through simulation that DAPR performs significantly better than existing scheduling algorithms under overloaded situations. A limitation of the proposed algorithm is that DAPR is not an option in underloaded situations. The last part of this dissertation proposes a concurrency control (CC) policy, called OCCWB, which is an extension of conventional optimistic CC. OCCWB takes advantage of the "canned" transaction assumption and includes a pre-analysis stage, wherein transactions are selectively blocked from executing if there is a high probability of restarting. The algorithm defines favorable serialization orders considering transaction semantics and tries to achieve such orders through appropriate priority adjustment. OCCWB is shown to perform significantly better than other CC policies under reasonable pre-analysis overhead in underloaded situations, and consistently better in overloaded situations, even with high pre-analysis overhead, for a wide range of workload and resource parameters.
APA, Harvard, Vancouver, ISO, and other styles
6

Zhang, Connie. "Static Conflict Analysis of Transaction Programs." Thesis, University of Waterloo, 2000. http://hdl.handle.net/10012/1052.

Full text
Abstract:
Transaction programs consist of read and write operations issued against the database. In a shared database system, one transaction program conflicts with another if it reads or writes data that another transaction program has written. This thesis presents a semi-automatic technique for pairwise static conflict analysis of embedded transaction programs. The analysis predicts whether a given pair of programs will conflict when executed against the database. There are several potential applications of this technique, the most obvious being transaction concurrency control in systems where it is not necessary to support arbitrary, dynamic queries and updates. By analyzing transactions in such systems before the transactions are run, it is possible to reduce or eliminate the need for locking or other dynamic concurrency control schemes.
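As a rough illustration of the kind of pairwise check such an analysis produces, the sketch below declares read/write sets for two hypothetical transaction programs and reports whether they can conflict. The thesis derives these sets statically from embedded programs; here they are simply written by hand.

```python
def conflicts(rw1, rw2):
    """Two programs conflict if one writes data the other reads or writes."""
    r1, w1 = rw1
    r2, w2 = rw2
    return bool(w1 & (r2 | w2)) or bool(w2 & (r1 | w1))

# Hypothetical read/write sets for two transaction programs.
transfer = ({"account.balance"}, {"account.balance"})        # reads and writes balances
report   = ({"account.balance", "account.owner"}, set())     # read-only

print(conflicts(transfer, report))    # True: transfer writes what report reads
print(conflicts(report, report))      # False: two read-only programs never conflict
```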
APA, Harvard, Vancouver, ISO, and other styles
7

Mena, Eduardo, and Arantza Illarramendi. "Ontology-based query processing for global information systems." Boston [u.a.] : Kluwer Acad. Publ, 2001. http://www.loc.gov/catdir/enhancements/fy0813/2001029621-d.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Brodsky, Lloyd. "A knowledge-based preprocessor for approximate joins in improperly designed transaction databases." Thesis, Massachusetts Institute of Technology, 1991. http://hdl.handle.net/1721.1/13744.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Dixon, Eric Richard. "Developing distributed applications with distributed heterogenous databases." Thesis, Virginia Tech, 1993. http://hdl.handle.net/10919/42748.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Xie, Wanxia. "Supporting Distributed Transaction Processing Over Mobile and Heterogeneous Platforms." Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/14073.

Full text
Abstract:
Recent advances in pervasive computing and peer-to-peer computing have opened up vast opportunities for developing collaborative applications. To benefit from these emerging technologies, there is a need for investigating techniques and tools that will allow development and deployment of these applications on mobile and heterogeneous platforms. To meet these challenging tasks, we need to address the typical characteristics of mobile peer-to-peer systems such as frequent disconnections, frequent network partitions, and peer heterogeneity. This research focuses on developing the necessary models, techniques and algorithms that will enable us to build and deploy collaborative applications in the Internet enabled, mobile peer-to-peer environments. This dissertation proposes a multi-state transaction model and develops a quality aware transaction processing framework to incorporate quality of service with transaction processing. It proposes adaptive ACID properties and develops a quality specification language to associate a quality level with transactions. In addition, this research develops a probabilistic concurrency control mechanism and a group based transaction commit protocol for mobile peer-to-peer systems that greatly reduces blockings in transactions and improves the transaction commit ratio. To the best of our knowledge, this is the first attempt to systematically support disconnection-tolerant and partition-tolerant transaction processing. This dissertation also develops a scalable directory service called PeerDS to support the above framework. It addresses the scalability and dynamism of the directory service from two aspects: peer-to-peer and push-pull hybrid interfaces. It also addresses peer heterogeneity and develops a new technique for load balancing in the peer-to-peer system. This technique comprises an improved routing algorithm for virtualized P2P overlay networks and a generalized Top-K server selection algorithm for load balancing, which could be optimized based on multiple factors such as proximity and cost. The proposed push-pull hybrid interfaces greatly reduce the overhead of directory servers caused by frequent queries from directory clients. In order to further improve the scalability of the push interface, this dissertation also studies and evaluates different filter indexing schemes through which the interests of each update could be calculated very efficiently. This dissertation was developed in conjunction with the middleware called System on Mobile Devices (SyD).
APA, Harvard, Vancouver, ISO, and other styles
11

Cassol, Tiago Sperb. "Um estudo sobre alternativas de representação de dados temporais em bancos de dados relacionais." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2012. http://hdl.handle.net/10183/67849.

Full text
Abstract:
Temporal information is present in a wide range of applications. Almost every application has at least one field that contains temporal data such as dates or timestamps. However, traditional databases do not have comprehensive support for storing and querying this kind of data efficiently, and DBMSs with native support for temporal data are rarely available to system developers. Most of the time, regular databases are used to store application data, and when temporal data is needed it is handled using the poor support offered by standard relational DBMSs. That said, the database designer must rely on good schema design so that the natural difficulty faced when dealing with temporal data on standard relational DBMSs can be minimized. While some design choices may seem obvious, others are difficult to evaluate just by looking at them, and therefore need experimentation before being applied or not. For example, in several cases it might be difficult to measure how much a specific design choice will affect disk space consumption, and how much that same choice will affect overall performance. This kind of information is needed so that the database designer can determine whether, for example, the increased disk space consumption generated by a given choice is acceptable because of the performance enhancement it brings. The problem is that there is no study that analyses the available design choices through concrete data. Even when it is easy to see which of two design choices performs better on a given criterion, it is hard to see how much better the better choice does, and whether any side effect it has is acceptable. Having concrete data to support this kind of decision allows the database designer to make the choices that best suit the application's context. The objective of this work is to analyze several common design choices for representing and handling different kinds of temporal data on standard SQL DBMSs, providing guidance on which alternative best suits each situation where temporal data is required. Concrete data about each of the studied alternatives are generated and analyzed, and conclusions are drawn from them.
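To make the kind of design choice discussed in the abstract concrete, the sketch below contrasts two common ways of keeping valid-time history in a plain relational schema: a validity interval on every row versus a current table plus a separate history table. The schemas and the query are illustrative assumptions, not necessarily the alternatives studied in the thesis.

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Alternative A: one table, every row carries its validity interval.
con.execute("""CREATE TABLE salary_a (
    emp_id     INTEGER,
    amount     NUMERIC,
    valid_from TEXT,           -- ISO dates; '9999-12-31' marks "still valid"
    valid_to   TEXT)""")

# Alternative B: a "current" table plus a separate history table.
con.execute("CREATE TABLE salary_b_current (emp_id INTEGER PRIMARY KEY, amount NUMERIC)")
con.execute("""CREATE TABLE salary_b_history (
    emp_id INTEGER, amount NUMERIC, valid_from TEXT, valid_to TEXT)""")

con.execute("INSERT INTO salary_a VALUES (1, 1000, '2010-01-01', '2011-12-31')")
con.execute("INSERT INTO salary_a VALUES (1, 1200, '2012-01-01', '9999-12-31')")

# A typical valid-time query against alternative A: what was the salary on a given day?
row = con.execute("""SELECT amount FROM salary_a
                     WHERE emp_id = 1
                       AND '2012-06-15' BETWEEN valid_from AND valid_to""").fetchone()
print(row)   # (1200,)
```

Alternative B keeps current-row queries trivial at the cost of moving rows into the history table on every update; measuring that kind of trade-off with concrete data is exactly what the thesis sets out to do.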
APA, Harvard, Vancouver, ISO, and other styles
12

Burger, Albert G. "Branching transactions : a transaction model for parallel database systems." Thesis, University of Edinburgh, 1996. http://hdl.handle.net/1842/15591.

Full text
Abstract:
In order to exploit parallel computers, database management systems must achieve a high level of concurrency when executing transactions. In a high contention environment, however, concurrency is severely limited due to transaction blocking, and the utilisation of parallel hardware resources, e.g. multiple CPUs, can be low. In this dissertation, a new transaction model, Branching Transactions, is proposed. Under branching transactions, more than one possible path of execution of a transaction is followed in parallel, which allows us to avoid unnecessary transaction blockings and restarts. This approach uses additional hardware resources, mainly CPU - which would otherwise sit idle due to data contention - to improve transaction response time and throughput. A new transaction model has implications for many transaction processing algorithms, in particular concurrency control. A family of locking algorithms, based on multi-version two-phase locking, has been developed for branching transactions, including an algorithm which can dynamically switch between branching and non-branching modes. The issues of deadlock handling and recovery are also considered. The correctness of all new concurrency control algorithms is proved by extending traditional serializability theory so that it is able to cope with the notion of a branching transaction. Architectural descriptions of branching transaction systems for shared-memory parallel databases and hybrid shared-disk/shared-memory systems are discussed. In particular, the problem of cache coherence is addressed. The performance of branching transactions in a shared-memory parallel database system has been investigated, using discrete-event simulation. One field which may potentially benefit greatly from branching transactions is that of so-called "real-time" database systems, in which transactions have execution deadlines. A new real-time concurrency control algorithm based on branching transactions is introduced.
APA, Harvard, Vancouver, ISO, and other styles
13

Hedman, Surlien Peter. "Economic advantages of Blockchain technology VS Relational database : An study focusing on economic advantages with Blockchain technology and relational databases." Thesis, Blekinge Tekniska Högskola, Institutionen för industriell ekonomi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-17366.

Full text
Abstract:
Many IT systems are not designed to be flexible and dynamic when they are created, resulting in old, complex systems that are hard to maintain. Systems usually build their functionality and capability on the data contained in their databases. The database underlies such a system, and when data does not correspond between different, synchronizing systems, debugging becomes a troublesome process. This is because systems are complex and the software architecture is not always easy to understand. Because systems grow more complex over time and become harder to debug and understand, there is a need for a system that decreases debugging costs and, in turn, results in better transaction costs. This study proposes a system based on blockchain technology to accomplish this. An ERP system based on blockchain with encrypted transactions was constructed to determine if the proposed system can contribute to better transaction costs. A case study at multiple IT companies and a comparison to an existing ERP system module validated the system. A successful simulation showed that multiple parties could read and append data to an immutable storage system holding one truth of data. By all counts, and with proven results, the constructed blockchain solution based on encrypted transactions for an ERP system can reduce debugging costs. It is also shown that a centralized database structure where external and internal systems can get one truth of data decreases transaction costs. However, it is the decision makers in companies that need to be convinced for the constructed system to be implemented. A further problem is that when the object type is modified, historical transactions cannot be changed in an immutable storage solution. Blockchain is still a new technology, and the knowledge of the technology and the evolution of the system will determine whether the proposed software architecture results in better transaction costs.
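As a minimal illustration of the immutability property the proposed system relies on (not the constructed ERP system itself), the following Python sketch appends opaque, already-encrypted records to a hash-chained log; altering any stored record breaks verification of every later link.

```python
import hashlib, json

def _digest(record: dict, prev_hash: str) -> str:
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

class HashChainedLog:
    """Append-only log: each entry commits to the previous entry's hash."""
    def __init__(self):
        self.entries = []                     # list of (record, prev_hash, hash)

    def append(self, record: dict):
        prev = self.entries[-1][2] if self.entries else "genesis"
        self.entries.append((record, prev, _digest(record, prev)))

    def verify(self) -> bool:
        prev = "genesis"
        for record, stored_prev, stored_hash in self.entries:
            if stored_prev != prev or _digest(record, prev) != stored_hash:
                return False
            prev = stored_hash
        return True

log = HashChainedLog()
log.append({"txn": 1, "payload": "ciphertext-1"})   # payloads would be encrypted
log.append({"txn": 2, "payload": "ciphertext-2"})
print(log.verify())                                  # True
log.entries[0][0]["payload"] = "tampered"            # retroactive modification
print(log.verify())                                  # False: the chain no longer checks out
```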
APA, Harvard, Vancouver, ISO, and other styles
14

Koroncziová, Dominika. "Doplnění a optimalizace temporálního rozšíření pro PostgreSQL." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2016. http://www.nusl.cz/ntk/nusl-255417.

Full text
Abstract:
This thesis focuses on the implementation of temporal data support within the traditional relational environment of the PostgreSQL system. I pick up on Radek Jelínek's thesis and the extension he developed. I analyzed the extension from functional, practical and performance perspectives and, based on my results, designed and implemented changes to the original extension. The work also contains implementation details as well as performance comparison results between the new and the original extensions.
APA, Harvard, Vancouver, ISO, and other styles
15

Wu, Jiang. "CHECKPOINTING AND RECOVERY IN DISTRIBUTED AND DATABASE SYSTEMS." UKnowledge, 2011. http://uknowledge.uky.edu/cs_etds/2.

Full text
Abstract:
A transaction-consistent global checkpoint of a database records a state of the database which reflects the effect of only completed transactions and not the results of any partially executed transactions. This thesis establishes the necessary and sufficient conditions for a checkpoint of a data item (or the checkpoints of a set of data items) to be part of a transaction-consistent global checkpoint of the database. This result would be useful for constructing transaction-consistent global checkpoints incrementally from the checkpoints of each individual data item of a database. By applying this condition, we can start from any useful checkpoint of any data item and then incrementally add checkpoints of other data items until we get a transaction-consistent global checkpoint of the database. This result can also help in designing non-intrusive checkpointing protocols for database systems. Based on the intuition gained from the development of the necessary and sufficient conditions, we also developed a non-intrusive low-overhead checkpointing protocol for distributed database systems. Checkpointing and rollback recovery are also established techniques for achieving fault-tolerance in distributed systems. Communication-induced checkpointing algorithms allow processes involved in a distributed computation to take checkpoints independently while at the same time forcing processes to take additional checkpoints so that each checkpoint is part of a consistent global checkpoint. This thesis develops a low-overhead communication-induced checkpointing protocol and presents a performance evaluation of the protocol.
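The following simplified Python sketch, with hypothetical data, illustrates what transaction-consistency of a global checkpoint means: a cut of per-item checkpoints is consistent only if every transaction's writes are reflected either at all of the items it wrote or at none of them. It is a didactic check, not the necessary and sufficient condition derived in the thesis.

```python
def transaction_consistent(checkpoint, writes):
    """checkpoint: data item -> set of transactions whose effects that item's
    checkpoint reflects.  writes: transaction -> set of data items it wrote.
    The cut is transaction-consistent if no transaction is reflected at some
    of the items it wrote but missing at others."""
    for txn, items in writes.items():
        reflected = {item for item in items if txn in checkpoint.get(item, set())}
        if reflected and reflected != items:
            return False            # partially reflected transaction
    return True

writes = {"T1": {"x", "y"}, "T2": {"y"}}
good = {"x": {"T1"}, "y": {"T1", "T2"}}          # T1 fully reflected
bad  = {"x": {"T1"}, "y": {"T2"}}                # T1 reflected at x but not at y
print(transaction_consistent(good, writes))      # True
print(transaction_consistent(bad, writes))       # False
```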
APA, Harvard, Vancouver, ISO, and other styles
16

Dias, Ricardo Jorge Freire. "Cooperative memory and database transactions." Master's thesis, Faculdade de Ciências e Tecnologia, 2008. http://hdl.handle.net/10362/4192.

Full text
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa for the degree of Master in Computer Engineering (Engenharia Informática).
Since the introduction of Software Transactional Memory (STM), this topic has received strong interest from the scientific community, as it has the potential of greatly facilitating concurrent programming by hiding many of the concurrency issues under the transactional layer, making it a potential alternative to lock-based constructs such as mutexes and semaphores. The current practice of STM is based on keeping track of changes made to the memory and, if needed, restoring previous states in case of transaction rollbacks. The operations in a program that can be reversed, by restoring the memory state, are called transactional operations. The way this reversibility, necessary for transactional operations, is achieved depends on the STM libraries being used. Operations that cannot be reversed, such as I/O to external data repositories (e.g., disks) or to the console, are called non-transactional operations. Non-transactional operations are usually disallowed inside a memory transaction, because if the transaction aborts their effects cannot be undone. In transactional databases, operations like inserting, removing or transforming data in the database can be undone if executed in the context of a transaction. Since database I/O operations can be reversed, it should be possible to execute those operations in the context of a memory transaction. To achieve this purpose, a new transactional model unifying memory and database transactions into a single one was defined, implemented, and evaluated. This new transactional model satisfies the properties of both the memory and database transactional models. Programmers can now execute memory and database operations in the same transaction, and in case of a transaction rollback, the transaction's effects on both the memory and the database are reverted.
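A toy Python sketch of the unifying idea, under the assumption of a single in-memory dictionary and an SQLite connection (the thesis integrates a real STM with database transactions, which this does not attempt): memory writes are undone from an undo log and the database statements are rolled back together if the combined transaction fails.

```python
import sqlite3
from contextlib import contextmanager

@contextmanager
def unified_transaction(memory: dict, con: sqlite3.Connection):
    """Run in-memory updates and database statements as one atomic unit:
    on failure the DB transaction is rolled back and the memory undo log is replayed."""
    undo = []                                   # (key, old_value, key_existed)
    def mem_write(key, value):
        undo.append((key, memory.get(key), key in memory))
        memory[key] = value
    con.execute("BEGIN")
    try:
        yield mem_write
        con.execute("COMMIT")
    except Exception:
        con.execute("ROLLBACK")
        for key, old, existed in reversed(undo):
            if existed:
                memory[key] = old
            else:
                memory.pop(key, None)
        raise

con = sqlite3.connect(":memory:", isolation_level=None)   # manage transactions manually
con.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
con.execute("INSERT INTO accounts VALUES (1, 100)")
cache = {"balance:1": 100}

try:
    with unified_transaction(cache, con) as mem_write:
        con.execute("UPDATE accounts SET balance = balance - 30 WHERE id = 1")
        mem_write("balance:1", 70)
        raise RuntimeError("simulated failure after both updates")
except RuntimeError:
    pass

print(cache["balance:1"])                                        # 100: memory restored
print(con.execute("SELECT balance FROM accounts").fetchone())    # (100,): DB rolled back
```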
APA, Harvard, Vancouver, ISO, and other styles
17

Zawis, John A., and David K. Hsiao. "Accessing hierarchical databases via SQL transactions in a multi-model database system." Thesis, Monterey, California. Naval Postgraduate School, 1987. http://hdl.handle.net/10945/22186.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Walpole, Dennis A., and Alphonso L. Woods. "Accessing network databases via SQL transactions in a multi-model database system." Thesis, Monterey, California. Naval Postgraduate School, 1989. http://hdl.handle.net/10945/25647.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Kunovský, Tomáš. "Temporální XML databáze." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2016. http://www.nusl.cz/ntk/nusl-255389.

Full text
Abstract:
The primary goal of this work is the implementation of a temporal XML database in Java. Databases for XML documents and temporal databases are described, with emphasis on their query languages, and the problem of data storage in temporal databases is also analyzed. The source code of the resulting application is publicly available as open source.
APA, Harvard, Vancouver, ISO, and other styles
20

Duarte, Gustavo Luiz. "Metadados para reconciliação de transações em bancos de dados autônomos." Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-27082012-153008/.

Full text
Abstract:
The use of data replication techniques on mobile devices allows a mobile application to share data with a server and to work on such data while disconnected. While this feature is crucial in some application domains, the reconciliation of transactions applied to the mobile replica of the data proves to be challenging. The use of locking is not feasible in some application domains. However, allowing write operations to be applied on several replicas without a priori synchronization makes the system susceptible to update conflicts, requiring a conflict resolution mechanism. Conflict resolution is a complex and error-prone task, especially when human intervention is involved. Given this scenario, we developed a transaction control model for autonomous databases that uses metadata and database versioning to provide auditing and rectification of conflict resolutions. This turns conflict resolution into a non-destructive operation, thus reducing the impact of an incorrect conflict resolution. This work also presents a framework for transaction reconciliation that implements the proposed model. As a case study, the developed framework was used to integrate two real systems that needed data replication and disconnected updates.
APA, Harvard, Vancouver, ISO, and other styles
21

Prabhu, Nitin Kumar Vijay. "Transaction processing in Mobile Database System." Diss., UMK access, 2006.

Find full text
Abstract:
Thesis (Ph. D.)--School of Computing and Engineering. University of Missouri--Kansas City, 2006.
"A dissertation in computer science and informatics and telecommunications and computer networking." Advisor: Vijay Kumar. Typescript. Vita. Title from "catalog record" of the print edition Description based on contents viewed Nov. 9, 2007. Includes bibliographical references (leaves 152-157). Online version of the print edition.
APA, Harvard, Vancouver, ISO, and other styles
22

Nekroševičius, Marijonas. "Informacijos valdymo metodų analizė ir sprendimas informacijos paieškai naudojant ontologijas." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2009. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2009~D_20090304_100547-67948.

Full text
Abstract:
The main problem in heterogeneous database integration is data incompatibility between different databases. XML is a perfect solution for data exchange between different databases, as it is independent of the operating system, applications and hardware. To use XML for data exchange, XML documents must be created that correspond to the databases. This work proposes the use of ontologies, as a definition of the subject domain, for information retrieval from heterogeneous databases. Such a method makes it possible to optimize the search for the required information and to avoid redundant query results.
APA, Harvard, Vancouver, ISO, and other styles
23

Tu, Stephen Lyle. "Fast transactions for multicore in-memory databases." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/82375.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2013.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 55-57).
Though modern multicore machines have sufficient RAM and processors to manage very large in-memory databases, it is not clear what the best strategy for dividing work among cores is. Should each core handle a data partition, avoiding the overhead of concurrency control for most transactions (at the cost of increasing it for cross-partition transactions)? Or should cores access a shared data structure instead? We investigate this question in the context of a fast in-memory database. We describe a new transactionally consistent database storage engine called MAFLINGO. Its cache-centered data structure design provides excellent base key-value store performance, to which we add a new, cache-friendly serializable protocol and support for running large, read-only transactions on a recent snapshot. On a key-value workload, the resulting system introduces negligible performance overhead as compared to a version of our system with transactional support stripped out, while achieving linear scalability versus the number of cores. It also exhibits linear scalability on TPC-C, a popular transactional benchmark. In addition, we show that a partitioning-based approach ceases to be beneficial if the database cannot be partitioned such that only a small fraction of transactions access multiple partitions, making our shared-everything approach more relevant. Finally, based on a survey of results from the literature, we argue that our implementation substantially outperforms previous main-memory databases on TPC-C benchmarks.
by Stephen Lyle Tu.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
24

Lawley, Michael John. "Program Transformation for Proving Database Transaction Safety." Griffith University. School of Computing and Information Technology, 2000. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20070228.150125.

Full text
Abstract:
In this thesis we propose the use of Dijkstra's concept of a predicate transformer [Dij75] for the determination of database transaction safety [SS89] and the generation of simple conditions to check that a transaction will not violate the integrity constraints in the case that it is not safe. The generation of this simple condition is something that can be done statically, thus providing a mechanism for generating safe transactions. Our approach treats a database as state, a database transaction as a program, and the database's integrity constraints as a postcondition in order to use a predicate transformer [Dij75] to generate a weakest precondition. We begin by introducing a set-oriented update language for relational databases for which a predicate transformer is then defined. Subsequently, we introduce a more powerful update language for deductive databases and define a new predicate transformer to deal with this language and the more powerful integrity constraints that can be expressed using recursive rules. Next we introduce a data model with object-oriented features including methods, inheritance and dynamic overriding. We then extend the predicate transformer to handle these new features. For each of the predicate transformers, we prove that they do indeed generate a weakest precondition for a transaction and the database integrity constraints. However, the weakest precondition generated by a predicate transformer still involves much redundant checking. For several general classes of integrity constraint, including referential integrity and functional dependencies, we prove that the weakest precondition can be substantially further simplified to avoid checking things we already know to be true under the assumption that the database currently satisfies its integrity constraints. In addition, we propose the use of the predicate transformer in combination with meta-rules that capture the exact incremental change to the database of a particular transaction. This provides a more general approach to generating simple checks for enforcing transaction safety. We show that this approach is superior to known existing previous approaches to the problem of efficient integrity constraint checking and transaction safety for relational, deductive, and deductive object-oriented databases. Finally we demonstrate several further applications of the predicate transformer to the problems of schema constraints, dynamic integrity constraints, and determining the correctness of methods for view updates. We also show how to support transactions embedded in procedural languages such as C.
APA, Harvard, Vancouver, ISO, and other styles
25

Shang, Pengju. "Research in high performance and low power computer systems for data-intensive environment." Doctoral diss., University of Central Florida, 2011. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5033.

Full text
Abstract:
According to the data affinity, DAFA re-organizes data to maximize the parallelism of the affinitive data, and also subjective to the overall load balance. This enables DAFA to realize the maximum number of map tasks with data-locality. Besides the system performance, power consumption is another important concern of current computer systems. In the U.S. alone, the energy used by servers which could be saved comes to 3.17 million tons of carbon dioxide, or 580,678 cars {Kar09}. However, the goals of high performance and low energy consumption are at odds with each other. An ideal power management strategy should be able to dynamically respond to the change (either linear or nonlinear, or non-model) of workloads and system configuration without violating the performance requirement. We propose a novel power management scheme called MAR (modeless, adaptive, rule-based) in multiprocessor systems to minimize the CPU power consumption under performance constraints. By using richer feedback factors, e.g. the I/O wait, MAR is able to accurately describe the relationships among core frequencies, performance and power consumption. We adopt a modeless control model to reduce the complexity of system modeling. MAR is designed for CMP (Chip Multi Processor) systems by employing multi-input/multi-output (MIMO) theory and per-core level DVFS (Dynamic Voltage and Frequency Scaling).; TRAID deduplicates this overlap by only logging one compact version (XOR results) of recovery references for the updating data. It minimizes the amount of log content as well as the log flushing overhead, thereby boosts the overall transaction processing performance. At the same time, TRAID guarantees comparable RAID reliability, the same recovery correctness and ACID semantics of traditional transactional processing systems. On the other hand, the emerging myriad data intensive applications place a demand for high-performance computing resources with massive storage. Academia and industry pioneers have been developing big data parallel computing frameworks and large-scale distributed file systems (DFS) widely used to facilitate the high-performance runs of data-intensive applications, such as bio-informatics {Sch09}, astronomy {RSG10}, and high-energy physics {LGC06}. Our recent work {SMW10} reported that data distribution in DFS can significantly affect the efficiency of data processing and hence the overall application performance. This is especially true for those with sophisticated access patterns. For example, Yahoo's Hadoop {refg} clusters employs a random data placement strategy for load balance and simplicity {reff}. This allows the MapReduce {DG08} programs to access all the data (without or not distinguishing interest locality) at full parallelism. Our work focuses on Hadoop systems. We observed that the data distribution is one of the most important factors that affect the parallel programming performance. However, the default Hadoop adopts random data distribution strategy, which does not consider the data semantics, specifically, data affinity. We propose a Data-Affinity-Aware (DAFA) data placement scheme to address the above problem. DAFA builds a history data access graph to exploit the data affinity.; The evolution of computer science and engineering is always motivated by the requirements for better performance, power efficiency, security, user interface (UI), etc {CM02}. 
The first two factors are potential tradeoffs: better performance usually requires better hardware, e.g., CPUs with a larger number of transistors or disks with higher rotation speed; however, the increasing number of transistors on a single die or chip reveals super-linear growth in CPU power consumption {FAA08a}, and the change in disk rotation speed has a quadratic effect on disk power consumption {GSK03}. We propose three new systematic approaches (shown in Figure 1.1), Transactional RAID, data-affinity-aware data placement (DAFA) and modeless power management, to tackle the performance problem in database systems and large-scale clusters or cloud platforms, and the power management problem in Chip Multi Processors, respectively. The first design, Transactional RAID (TRAID), is motivated by the fact that in recent years, more storage system applications have employed transaction processing techniques to ensure data integrity and consistency. In transaction processing systems (TPS), the log is a kind of redundancy to ensure transaction ACID (atomicity, consistency, isolation, durability) properties and data recoverability. Furthermore, highly reliable storage systems, such as redundant arrays of inexpensive disks (RAID), are widely used as the underlying storage system for databases to guarantee system reliability and availability with high I/O performance. However, databases and storage systems tend to implement their independent fault-tolerant mechanisms {GR93, Tho05} from their own perspectives, thereby leading to potentially high overhead. We observe the overlapped redundancies between the TPS and RAID systems, and propose a novel reliable storage architecture called Transactional RAID (TRAID).
ID: 030423445; System requirements: World Wide Web browser and PDF reader; Mode of access: World Wide Web; Thesis (Ph.D.)--University of Central Florida, 2011; Includes bibliographical references (p. 119-128).
Ph.D.
Doctorate
Electrical Engineering and Computer Science
Engineering and Computer Science
Computer Science
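As a toy illustration of the compact XOR-style recovery reference that the TRAID description above alludes to (a simplification, not the actual TRAID log format): storing old XOR new lets either version of a block be reconstructed from the other plus the single log record.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

old_block = b"balance=100     "
new_block = b"balance=070     "

log_record = xor_bytes(old_block, new_block)   # one compact record instead of both versions
# Undo (roll back) or redo (recover) by XORing the log record with whichever copy survives.
print(xor_bytes(new_block, log_record) == old_block)   # True: undo
print(xor_bytes(old_block, log_record) == new_block)   # True: redo
```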
APA, Harvard, Vancouver, ISO, and other styles
26

Hamilton, Howard Gregory. "An Examination of Service Level Agreement Attributes that Influence Cloud Computing Adoption." NSUWorks, 2015. http://nsuworks.nova.edu/gscis_etd/53.

Full text
Abstract:
Cloud computing is perceived as the technological innovation that will transform future investments in information technology. As cloud services become more ubiquitous, public and private enterprises still grapple with concerns about cloud computing. One such concern is about service level agreements (SLAs) and their appropriateness. While the benefits of using cloud services are well defined, the debate about the challenges that may inhibit the seamless adoption of these services still continues. SLAs are seen as an instrument to help foster adoption. However, cloud computing SLAs are alleged to be ineffective, meaningless, and costly to administer. This could impact widespread acceptance of cloud computing. This research was based on the transaction cost economics theory with focus on uncertainty, asset specificity and transaction cost. SLA uncertainty and SLA asset specificity were introduced by this research and used to determine the technical and non-technical attributes for cloud computing SLAs. A conceptual model, built on the concept of transaction cost economics, was used to highlight the theoretical framework for this research. This study applied a mixed methods sequential exploratory research design to determine SLA attributes that influence the adoption of cloud computing. The research was conducted using two phases. First, interviews with 10 cloud computing experts were done to identify and confirm key SLA attributes. These attributes were then used as the main thematic areas for this study. In the second phase, the output from phase one was used as the input to the development of an instrument which was administered to 97 businesses to determine their perspectives on the cloud computing SLA attributes identified in the first phase. Partial least squares structural equation modelling was used to test for statistical significance of the hypotheses and to validate the theoretical basis of this study. Qualitative and quantitative analyses were done on the data to establish a set of attributes considered SLA imperatives for cloud computing adoption.
APA, Harvard, Vancouver, ISO, and other styles
27

Ogunyadeka, Adewole C. "Transactions and data management in NoSQL cloud databases." Thesis, Oxford Brookes University, 2016. https://radar.brookes.ac.uk/radar/items/c87fa049-f8c7-4b9e-a27c-3c106fcda018/1/.

Full text
Abstract:
NoSQL databases have become the preferred option for storing and processing data in cloud computing as they are capable of providing high data availability, scalability and efficiency. But in order to achieve these attributes, NoSQL databases make certain trade-offs. First, NoSQL databases cannot guarantee strong consistency of data. They only guarantee a weaker consistency which is based on the eventual consistency model. Second, NoSQL databases adopt a simple data model which makes it easy for data to be scaled across multiple nodes. Third, NoSQL databases do not support table joins and referential integrity, which by implication means they cannot implement complex queries. The combination of these factors implies that NoSQL databases cannot support transactions. Motivated by these crucial issues, this thesis investigates transaction and data management in NoSQL databases. It presents a novel approach that implements transactional support for NoSQL databases in order to ensure stronger data consistency and provide an appropriate level of performance. The novelty lies in the design of a Multi-Key transaction model that guarantees the standard properties of transactions in order to ensure stronger consistency and integrity of data. The model is implemented in a novel loosely-coupled architecture that separates the implementation of transactional logic from the underlying data, thus ensuring transparency and abstraction in cloud and NoSQL databases. The proposed approach is validated through the development of a prototype system using a real MongoDB system. An extended version of the standard Yahoo! Cloud Services Benchmark (YCSB) has been used in order to test and evaluate the proposed approach. Various experiments have been conducted and sets of results have been generated. The results show that the proposed approach meets the research objectives. It maintains stronger consistency of cloud data as well as an appropriate level of reliability and performance.
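The sketch below is a deliberately simplified, hypothetical illustration of a multi-key transaction layered on top of a version-stamped key-value store: reads record the versions they saw, writes are buffered, and commit succeeds only if no read key has changed. The thesis's Multi-Key model and its MongoDB-based prototype are considerably more elaborate.

```python
class VersionedStore:
    """Key-value store where every key carries a monotonically increasing version."""
    def __init__(self):
        self.data = {}                     # key -> (version, value)
    def read(self, key):
        return self.data.get(key, (0, None))
    def apply(self, key, value):
        version, _ = self.data.get(key, (0, None))
        self.data[key] = (version + 1, value)

class MultiKeyTxn:
    def __init__(self, store):
        self.store, self.read_versions, self.writes = store, {}, {}
    def read(self, key):
        version, value = self.store.read(key)
        self.read_versions[key] = version
        return value
    def write(self, key, value):
        self.writes[key] = value           # buffered until commit
    def commit(self):
        # Validate: every key read must still be at the version we observed.
        for key, version in self.read_versions.items():
            if self.store.read(key)[0] != version:
                return False               # conflict: caller should retry
        for key, value in self.writes.items():
            self.store.apply(key, value)
        return True

store = VersionedStore()
store.apply("a", 10); store.apply("b", 20)

t = MultiKeyTxn(store)
total = (t.read("a") or 0) + (t.read("b") or 0)
t.write("sum", total)
store.apply("b", 99)                       # a concurrent writer invalidates the read set
print(t.commit())                          # False: the transaction must be retried
```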
APA, Harvard, Vancouver, ISO, and other styles
28

Jones, Evan P. C. (Evan Philip Charles) 1981. "Fault-tolerant distributed transactions for partitioned OLTP databases." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/71477.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 103-112).
This thesis presents Dtxn, a fault-tolerant distributed transaction system designed specifically for building online transaction processing (OLTP) databases. Databases have traditionally been designed as general purpose data processing tools. By being designed only for OLTP workloads, Dtxn can be more efficient. It is designed to support very large databases by partitioning data across a cluster of commodity servers in a data center. Combining multiple servers together allows systems built with Dtxn to be cost effective, highly available, scalable, and fault-tolerant. Dtxn provides three novel features. First, it provides reusable infrastructure for building a distributed OLTP database out of single machine databases. This allows developers to take a specialized backend storage engine and use it across multiple machines, without needing to re-implement the distributed transaction infrastructure. We used Dtxn to build four different applications: a simple key/value store, a specialized TPC-C implementation, a main-memory OLTP database, and a traditional disk-based OLTP database. Second, Dtxn provides a novel concurrency control mechanism called speculative concurrency control, designed for main memory OLTP workloads that are primarily composed of transactions with a single round of communication between the application and database. Speculative concurrency control executes one transaction at a time, with no concurrency control overhead. In cases where there may be stalls due to network communication, it speculates future transactions. Our results show that this provides significantly better throughput than traditional two-phase locking, outperforming it by a factor of two on the TPC-C benchmark. Finally, Dtxn supports live migration, allowing part of the data on one server to be moved to another server while processing transactions. Our experiments show that our approach has nearly no visible impact on throughput or latency when moving data under moderate to high loads. It has significantly less impact than the best commercially available systems when the database is overloaded. The period of time where the throughput is reduced is less than half as long as failing over to another replica or using virtual machine migration.
by Evan Philip Charles Jones.
Ph.D.
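As a rough, hypothetical illustration of the speculation idea described in the abstract above (run one transaction at a time, use the stall of a multi-partition transaction to execute queued transactions on uncommitted state, and re-execute them if the stalled transaction aborts), here is a toy Python sketch; Dtxn's actual protocol handles much more.

```python
from collections import deque

class SpeculativePartition:
    """Toy single-threaded partition engine: transactions run one at a time;
    while a multi-partition transaction awaits remote votes, queued local
    transactions are executed speculatively on the uncommitted state."""
    def __init__(self, state):
        self.state = state                 # committed key-value state (dict)
        self.queue = deque()               # transactions waiting to run (callables)
        self.speculated = []               # transactions whose results depend on the pending one

    def run(self, txn):
        txn(self.state)                    # single-partition case: run and commit immediately

    def begin_pending(self, txn):
        self.snapshot = dict(self.state)   # saved so a remote abort can undo everything
        txn(self.state)                    # apply the multi-partition transaction's local writes
        while self.queue:                  # speculate instead of idling during the stall
            t = self.queue.popleft()
            t(self.state)
            self.speculated.append(t)

    def finish_pending(self, committed):
        if not committed:                  # cascade abort: roll back, then re-execute speculated work
            self.state.clear()
            self.state.update(self.snapshot)
            for t in self.speculated:
                t(self.state)
        self.speculated = []

p = SpeculativePartition({"x": 1})
p.queue.append(lambda s: s.update(x=s["x"] + 10))      # a local transaction queued behind the stall
p.begin_pending(lambda s: s.update(x=100))             # multi-partition txn stalls after its local work
p.finish_pending(committed=False)                      # remote partition voted to abort
print(p.state)                                         # {'x': 11}: only the speculated txn's effect remains
```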
APA, Harvard, Vancouver, ISO, and other styles
29

Pang, Gene. "Scalable Transactions for Scalable Distributed Database Systems." Thesis, University of California, Berkeley, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3733329.

Full text
Abstract:

With the advent of the Internet and Internet-connected devices, modern applications can experience very rapid growth of users from all parts of the world. A growing user base leads to greater usage and large data sizes, so scalable database systems capable of handling the great demands are critical for applications. With the emergence of cloud computing, a major movement in the industry, modern applications depend on distributed data stores for their scalable data management solutions. Many large-scale applications utilize NoSQL systems, such as distributed key-value stores, for their scalability and availability properties over traditional relational database systems. By simplifying the design and interface, NoSQL systems can provide high scalability and performance for large data sets and high volume workloads. However, to provide such benefits, NoSQL systems sacrifice traditional consistency models and support for transactions typically available in database systems. Without transaction semantics, it is harder for developers to reason about the correctness of the interactions with the data. Therefore, it is important to support transactions for distributed database systems without sacrificing scalability.

In this thesis, I present new techniques for scalable transactions for scalable database systems. Distributed data stores need scalable transactions to take advantage of cloud computing, and to meet the demands of modern applications. Traditional techniques for transactions may not be appropriate in a large, distributed environment, so in this thesis, I describe new techniques for distributed transactions, without having to sacrifice traditional semantics or scalability.

I discuss three facets to improving transaction scalability and support in distributed database systems. First, I describe a new transaction commit protocol that reduces the response times for distributed transactions. Second, I propose a new transaction programming model that allows developers to better deal with the unexpected behavior of distributed transactions. Lastly, I present a new scalable view maintenance algorithm for convergent join views. Together, the new techniques in this thesis contribute to providing scalable transactions for modern, distributed database systems.

APA, Harvard, Vancouver, ISO, and other styles
30

Cahill, Michael James. "Serializable Isolation for Snapshot Databases." University of Sydney, 2009. http://hdl.handle.net/2123/5353.

Full text
Abstract:
PhD
Many popular database management systems implement a multiversion concurrency control algorithm called snapshot isolation rather than providing full serializability based on locking. There are well-known anomalies permitted by snapshot isolation that can lead to violations of data consistency by interleaving transactions that would maintain consistency if run serially. Until now, the only way to prevent these anomalies was to modify the applications by introducing explicit locking or artificial update conflicts, following careful analysis of conflicts between all pairs of transactions. This thesis describes a modification to the concurrency control algorithm of a database management system that automatically detects and prevents snapshot isolation anomalies at runtime for arbitrary applications, thus providing serializable isolation. The new algorithm preserves the properties that make snapshot isolation attractive, including that readers do not block writers and vice versa. An implementation of the algorithm in a relational database management system is described, along with a benchmark and performance study, showing that the throughput approaches that of snapshot isolation in most cases.
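A highly simplified Python sketch of the core rule the thesis implements: track incoming and outgoing read-write antidependencies per transaction and abort one of the transactions as soon as some transaction acquires both, since that is the structure through which snapshot isolation anomalies can arise. The real algorithm also deals with commit ordering, locking and many other details omitted here.

```python
class SSIError(Exception):
    pass

class Txn:
    def __init__(self, name):
        self.name = name
        self.in_conflict = False    # a concurrent txn read what this txn later wrote
        self.out_conflict = False   # this txn read something a concurrent txn later wrote

def mark_rw_antidependency(reader: Txn, writer: Txn):
    """Called when `reader` read a version that `writer` has overwritten
    (reader --rw--> writer).  Abort a transaction that has both an incoming
    and an outgoing rw-antidependency edge."""
    reader.out_conflict = True
    writer.in_conflict = True
    for t in (reader, writer):
        if t.in_conflict and t.out_conflict:
            raise SSIError(f"abort {t.name}: pivot in a dangerous structure")

# Classic write-skew pattern: each transaction reads what the other writes.
t1, t2 = Txn("T1"), Txn("T2")
mark_rw_antidependency(reader=t1, writer=t2)   # ok: each txn has only one edge so far
try:
    mark_rw_antidependency(reader=t2, writer=t1)
except SSIError as e:
    print(e)                                   # one of the transactions is aborted
```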
APA, Harvard, Vancouver, ISO, and other styles
31

Oza, Smita. "Implementing real-time transactions using distributed main memory databases." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape16/PQDD_0031/MQ27056.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Oza, Smita Carleton University Dissertation Computer Science. "Implementing real- time transactions using distributed main memory databases." Ottawa, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
33

Niles, Duane Francis Jr. "Improving Performance of Highly-Programmable Concurrent Applications by Leveraging Parallel Nesting and Weaker Isolation Levels." Thesis, Virginia Tech, 2015. http://hdl.handle.net/10919/54557.

Full text
Abstract:
The recent shift to multi-core computer architectures has strongly affected how everyday applications are built, requiring concurrent programming to make meaningful use of the divided processing power of modern machines. Applications must be split into sections that can execute in parallel without conflicting with one another, which necessitates some form of synchronization. The most commonly used approach is lock-based synchronization; however, to obtain the best performance, developers typically have to build complex, low-level locking schemes for large applications, which easily introduces errors and bottlenecks. Transactions, an abstraction borrowed from database systems, are an emerging concurrency control design aimed at circumventing the programmability, composability, and scalability problems of lock-based synchronization. Transactions execute their operations speculatively and can be restarted (rolled back) when concurrent actions conflict. Because such conflicts can occur late in a transaction's lifetime, rolling back the entire transaction is costly. Nesting was created to counter that drawback: transactions are enclosed within other transactions, dividing the work into pieces called sub-transactions. A sub-transaction can roll back without affecting the enclosing transaction, although general nesting models allow only one sub-transaction to perform work at a time. The first main contribution of this thesis is SPCN, an algorithm that parallelizes nested transactions while automatically handling any conflicts that arise, removing that burden from application developers. SPCN comes in two versions: Strict, which makes the sub-transactions' work visible in a serialized order, and Relaxed, which lets sub-transactions publish their results immediately as they finish (so invalidation may occur after the fact and must be handled). Despite the additional logic SPCN requires, it outperforms traditional closed nesting by between 1.78x and 3.78x in the experiments run. Another way to alter transactional execution and boost performance is to relax the visibility rules for parallel operations (their isolation level). Depending on the application, correctness is not violated even if some transactions see work that may later be undone by a rollback, or if an object is written while another transaction is still using an older version of its data. With lock-based synchronization, developers would have to redesign their application with different numbers of locks and different lock organizations or hierarchies to change the strictness of execution. With transactional systems, the processing performed by the system itself can be configured to use different rules, changing the performance of an application without a large redesign. This observation leads to the second contribution of this thesis: AsR, or As-Serializable, transactions. Serializability is the usual notion of isolation, or strictness, for transactions in many applications; an execution is serializable if its outcome is equivalent to one in which transactions run one at a time.
Many transactional systems use their own internal form of locking to produce serializable executions, but this is typically stricter than many applications require. AsR transactions allow the internal processing to be relaxed while additional metadata is maintained outside the system, without requiring any interaction from the developer or any changes to the application. In highly contentious scenarios, AsR transactions deliver orders of magnitude higher throughput, because they can keep running under relaxed isolation where traditionally isolated transactions would repeatedly abort.
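The closed-nesting idea above lends itself to a compact illustration. The following Python sketch is an illustration only, not the SPCN algorithm: several sub-transactions of one parent run on separate threads, each buffers its reads and writes, and a conflicting sub-transaction is retried on its own instead of rolling back the whole parent. Because each sub-transaction publishes its writes as soon as it validates, the sketch is closer in spirit to the Relaxed variant; all class and function names are invented for the example.

    import threading

    class Store:
        def __init__(self):
            self.data = {}                     # key -> (value, version)
            self.lock = threading.Lock()

        def read(self, key):
            with self.lock:
                return self.data.get(key, (0, 0))

        def try_commit(self, reads, writes):
            # Validate the versions that were read, then install writes atomically.
            with self.lock:
                for key, version in reads.items():
                    if self.data.get(key, (0, 0))[1] != version:
                        return False
                for key, value in writes.items():
                    old_version = self.data.get(key, (0, 0))[1]
                    self.data[key] = (value, old_version + 1)
                return True

    def run_subtransaction(store, work):
        # On conflict, only this sub-transaction is retried (partial rollback);
        # the other sub-transactions and the parent keep their progress.
        while True:
            reads, writes = {}, {}
            work(store, reads, writes)
            if store.try_commit(reads, writes):
                return

    def parent_transaction(store, sub_works):
        threads = [threading.Thread(target=run_subtransaction, args=(store, w))
                   for w in sub_works]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

    def add_to_counter(name, delta):
        def work(store, reads, writes):
            value, version = store.read(name)
            reads[name] = version
            writes[name] = value + delta
        return work

    if __name__ == "__main__":
        store = Store()
        parent_transaction(store, [add_to_counter("a", 1),
                                   add_to_counter("a", 2),
                                   add_to_counter("b", 5)])
        print(store.data)   # both increments applied, e.g. {'a': (3, 2), 'b': (5, 1)}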
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
34

Ongkasuwan, Patarawan. "Transaction synchronization and privacy aspect in blockchain decentralized applications." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-272134.

Full text
Abstract:
The ideas and techniques of cryptography and decentralized storage have seen tremendous growth in many industries, as they have been adopted to improve activities within organizations. This technology, called Blockchain, provides an effective transparency solution. Blockchain has generally been used for digital currency, or cryptocurrency, since its inception. One of the best-known Blockchain protocols is Ethereum, which introduced the smart contract to let a Blockchain execute conditional logic rather than simply acting as storage. Applications that adopt this technology are called 'Dapps', or 'decentralized applications'. However, there are ongoing debates about the synchronization of such systems. System synchronization is currently extremely important for applications, because the time a user waits for a transaction to be verified can cause dissatisfaction with the user experience. Several studies have also revealed that privacy leakage occurs, even though the Blockchain provides a degree of security, as a consequence of the traditional transaction model, which requires approval through intermediary institutions. For instance, a bank needs to process transactions via many institutional parties before receiving the final confirmation, which forces the user to wait a considerable amount of time. This thesis addresses the challenge of transaction synchronization between the user and the smart contract, as well as a privacy strategy for the system and its compliance. To approach the first challenge, different events are separated and their results are evaluated against an alternative solution. This is done by testing the smart contract to find the best gas price, which varies over time. In the Ethereum protocol, tuning the gas price is one of the best ways to decrease transaction time and meet user expectations; the gas price is affected by the code structure and by the network. The smart contract is tested on two cases, which address platform issues such as runners, improve the user experience, and reduce costs. It was found that collecting a fee before a participant joins an auction can prevent the problem of runners. The second case aims to show that freezing the bid amount is the best way to improve the user's experience of an online auction. The second challenge focuses mainly on the privacy strategy and risk management for the platform, which involves identifying possible solutions for all risk situations, as well as detecting, forecasting, and preventing them. Strategies such as securing the smart contract structure, strengthening the encryption used in the database, designing a term sheet and agreement, and adding authorization help to prevent system vulnerabilities. This research therefore aims to investigate and improve an online auction platform that uses Blockchain smart contracts, in order to provide a satisfying user experience.
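As an illustration of the bid-freezing idea, the following Python sketch models the escrow behaviour off-chain; it is not the thesis' smart-contract code, and the entry fee, account names, and amounts are invented. Funds backing the highest bid stay frozen until the auction closes, so a winning bidder cannot "run" without paying, while an outbid participant's funds are released immediately.

    class EscrowAuction:
        def __init__(self, entry_fee):
            self.entry_fee = entry_fee       # collected up front to deter runners
            self.frozen = {}                 # bidder -> frozen bid amount
            self.highest_bidder = None
            self.highest_bid = 0
            self.closed = False

        def bid(self, bidder, amount, balance):
            # The full bid plus the entry fee must be available before the bid
            # is accepted; the bid amount is then frozen until the auction closes.
            if self.closed or amount <= self.highest_bid:
                return False
            if balance < amount + self.entry_fee:
                return False
            if self.highest_bidder is not None:
                # Release the previous highest bid; it can no longer win.
                del self.frozen[self.highest_bidder]
            self.frozen[bidder] = amount
            self.highest_bidder, self.highest_bid = bidder, amount
            return True

        def close(self):
            # The winner's frozen amount goes to the seller; nothing else
            # remains frozen, so losing bidders need no manual refund here.
            self.closed = True
            return self.highest_bidder, self.frozen.pop(self.highest_bidder, 0)

    if __name__ == "__main__":
        auction = EscrowAuction(entry_fee=1)
        auction.bid("alice", 10, balance=20)
        auction.bid("bob", 15, balance=30)
        print(auction.close())    # ('bob', 15)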
APA, Harvard, Vancouver, ISO, and other styles
35

Sauer, Caetano [Verfasser]. "Modern techniques for transaction-oriented database recovery / Caetano Sauer." München : Verlag Dr. Hut, 2017. http://d-nb.info/1140977644/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Youssef, Mohamed Wagdy Abdel Fattah. "Transaction behaviour in large database environments : a methodological approach." Thesis, City University London, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.358945.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Aldarmi, Saud Ahmed. "Scheduling soft-deadline real-time transactions." Thesis, University of York, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.310917.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Takkar, Sonia. "Scheduling real-time transactions in parallel database systems." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape16/PQDD_0025/MQ26975.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Takkar, Sonia Carleton University Dissertation Computer Science. "Scheduling real-time transactions in parallel database systems." Ottawa, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
40

On, Sai Tung. "Efficient transaction recovery on flash disks." HKBU Institutional Repository, 2010. http://repository.hkbu.edu.hk/etd_ra/1170.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Barga, Roger S. "A reflective framework for implementing extended transactions /." Full text open access at:, 1999. http://content.ohsu.edu/u?/etd,205.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Wang, Xiangyang. "The development of a knowledge-based database transaction design assistant." Thesis, Cardiff University, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.359755.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Savasere, Ashok. "Efficient algorithms for mining association rules in large databases of customer transactions." Diss., Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/8260.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Yan, Cong S. M. Massachusetts Institute of Technology. "Exploiting fine-grain parallelism in transactional database systems." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/101592.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 47-50).
Current database engines designed for conventional multicore systems exploit a fraction of the parallelism available in transactional workloads. Specifically, database engines only exploit inter-transaction parallelism: they use speculation to concurrently execute multiple, potentially-conflicting database transactions while maintaining atomicity and isolation. However, they do not exploit intra-transaction parallelism: each transaction is executed sequentially on a single thread. While fine-grain intra-transaction parallelism is often abundant, it is too costly to exploit in conventional multicores. Software would need to implement fine-grain speculative execution and scheduling, introducing prohibitive overheads that would negate the benefits of additional intra-transaction parallelism. In this thesis, we leverage novel hardware support to design and implement a database engine that effectively exploits both inter- and intra-transaction parallelism. Specifically, we use Swarm, a new parallel architecture that exploits fine-grained and ordered parallelism. Swarm executes tasks speculatively and out of order, but commits them in order. Integrated hardware task queueing and speculation mechanisms allow Swarm to speculate thousands of tasks ahead of the earliest active task and reduce task management overheads. We modify Silo, a state-of-the-art in-memory database engine, to leverage Swarm's features. The resulting database engine, which we call SwarmDB, has several key benefits over Silo: it eliminates software concurrency control, reducing overheads; it efficiently executes tasks within a database transaction in parallel; it reduces conflicts; and it reduces the amount of work that needs to be discarded and re-executed on each conflict. We evaluate SwarmDB on simulated Swarm systems of up to 64 cores. At 64 cores, SwarmDB outperforms Silo by 6.7x on TPC-C and 6.9x on TPC-E, and achieves near-linear scalability.
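The division of one transaction into ordered fine-grained tasks can be illustrated in plain software, even though SwarmDB relies on hardware support to make it cheap. The Python sketch below is only an analogy with invented table names: the tasks of a single transaction execute in parallel and their write-sets are then applied in task order; the speculation and conflict detection that Swarm provides in hardware are omitted.

    from concurrent.futures import ThreadPoolExecutor

    def new_order_tasks(order):
        # Each task reads shared state and returns the write-set it would apply.
        return [
            lambda db: {("stock", item): db[("stock", item)] - qty
                        for item, qty in order.items()},
            lambda db: {("order_count", "total"): db[("order_count", "total")] + 1},
        ]

    def run_transaction(db, tasks):
        # Parallel execution of the intra-transaction tasks ...
        with ThreadPoolExecutor() as pool:
            write_sets = list(pool.map(lambda task: task(db), tasks))
        # ... followed by an in-order commit of each task's writes.
        for write_set in write_sets:
            db.update(write_set)

    if __name__ == "__main__":
        db = {("stock", "widget"): 100, ("order_count", "total"): 0}
        run_transaction(db, new_order_tasks({"widget": 3}))
        print(db)   # {('stock', 'widget'): 97, ('order_count', 'total'): 1}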
by Cong Yan.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
45

Smékal, Luděk. "Získávání znalostí z textových dat." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2007. http://www.nusl.cz/ntk/nusl-412756.

Full text
Abstract:
This MSc thesis deals with data mining: obtaining data or information from databases that is not directly visible but can be extracted using special algorithms. The thesis focuses mainly on classifying documents within a digital library using a selected method based on sets of items (the "itemset method"). This method extends the application field of the Apriori algorithm, which was originally designed for processing transaction databases and generating sets of frequent items.
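For reference, the itemset method named above builds on the classic Apriori level-wise search. The following Python sketch shows that level-wise generation of frequent itemsets on a toy "documents as transactions of terms" example; the data and support threshold are invented, and the code is not taken from the thesis.

    def apriori(transactions, min_support):
        transactions = [set(t) for t in transactions]
        # Level 1: frequent single items.
        items = {item for t in transactions for item in t}
        frequent = [{frozenset([i]) for i in items
                     if sum(i in t for t in transactions) >= min_support}]
        k = 2
        while frequent[-1]:
            # Candidate generation: join frequent (k-1)-itemsets into k-itemsets.
            prev = frequent[-1]
            candidates = {a | b for a in prev for b in prev if len(a | b) == k}
            # Support counting and pruning.
            level = {c for c in candidates
                     if sum(c <= t for t in transactions) >= min_support}
            frequent.append(level)
            k += 1
        return [itemset for level in frequent for itemset in level]

    if __name__ == "__main__":
        docs = [{"database", "transaction", "lock"},
                {"database", "transaction", "index"},
                {"database", "lock"},
                {"transaction", "lock"}]
        print(apriori(docs, min_support=2))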
APA, Harvard, Vancouver, ISO, and other styles
46

Gong, Daoya. "Transaction process modeling and implementation for 3-tiered Web-based database systems." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0023/MQ62128.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Navarro, Martín Joan. "From cluster databases to cloud storage: Providing transactional support on the cloud." Doctoral thesis, Universitat Ramon Llull, 2015. http://hdl.handle.net/10803/285655.

Full text
Abstract:
Over the past three decades, technology constraints (e.g., capacity of storage devices, communication networks bandwidth) and an ever-increasing set of user demands (e.g., information structures, data volumes) have driven the evolution of distributed databases. Since flat-file data repositories developed in the early eighties, there have been important advances in concurrency control algorithms, replication protocols, and transactions management. However, modern concerns in data storage posed by Big Data and cloud computing—related to overcome the scalability and elasticity limitations of classic databases—are pushing practitioners to relax some important properties featured by transactions, which excludes several applications that are unable to fit in this strategy due to their intrinsic transactional nature. The purpose of this thesis is to address two important challenges still latent in distributed databases: (1) the scalability limitations of transactional databases and (2) providing transactional support on cloud-based storage repositories. Analyzing the traditional concurrency control and replication techniques, used by classic databases to support transactions, is critical to identify the reasons that make these systems degrade their throughput when the number of nodes and/or amount of data rockets. Besides, this analysis is devoted to justify the design rationale behind cloud repositories in which transactions have been generally neglected. Furthermore, enabling applications which are strongly dependent on transactions to take advantage of the cloud storage paradigm is crucial for their adaptation to current data demands and business models. This dissertation starts by proposing a custom protocol simulator for static distributed databases, which serves as a basis for revising and comparing the performance of existing concurrency control protocols and replication techniques. As this thesis is especially concerned with transactions, the effects on the database scalability of different transaction profiles under different conditions are studied. This analysis is followed by a review of existing cloud storage repositories—that claim to be highly dynamic, scalable, and available—, which leads to an evaluation of the parameters and features that these systems have sacrificed in order to meet current large-scale data storage demands. To further explore the possibilities of the cloud computing paradigm in a real-world scenario, a cloud-inspired approach to store data from Smart Grids is presented. More specifically, the proposed architecture combines classic database replication techniques and epidemic updates propagation with the design principles of cloud-based storage. The key insights collected when prototyping the replication and concurrency control protocols at the database simulator, together with the experiences derived from building a large-scale storage repository for Smart Grids, are wrapped up into what we have coined as Epidemia: a storage infrastructure conceived to provide transactional support on the cloud. In addition to inheriting the benefits of highly-scalable cloud repositories, Epidemia includes a transaction management layer that forwards client transactions to a hierarchical set of data partitions, which allows the system to offer different consistency levels and elastically adapt its configuration to incoming workloads. Finally, experimental results highlight the feasibility of our contribution and encourage practitioners to further research in this area.
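The transaction-management layer described in the abstract can be pictured with a small sketch. The Python below is a hypothetical illustration, not the Epidemia code base: a manager routes each write to the partition owning its key and applies it either with synchronous replication ("strong") or with lazy, epidemic-style propagation ("eventual"); the Smart Grid key names are made up.

    class Partition:
        def __init__(self, name):
            self.name = name
            self.primary = {}
            self.replica = {}

        def apply(self, key, value, consistency):
            self.primary[key] = value
            if consistency == "strong":
                self.replica[key] = value      # synchronous replication
            # under "eventual", the write is propagated later (epidemically)

        def propagate(self):
            self.replica.update(self.primary)  # lazy, epidemic-style catch-up

    class TransactionManager:
        def __init__(self, partitions):
            self.partitions = partitions

        def route(self, key):
            return self.partitions[hash(key) % len(self.partitions)]

        def execute(self, writes, consistency="eventual"):
            for key, value in writes.items():
                self.route(key).apply(key, value, consistency)

    if __name__ == "__main__":
        tm = TransactionManager([Partition("p0"), Partition("p1")])
        tm.execute({"meter:42": 17.3}, consistency="strong")
        tm.execute({"meter:43": 12.1})         # eventual: replicas lag behind
        for p in tm.partitions:
            p.propagate()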
APA, Harvard, Vancouver, ISO, and other styles
48

Quantock, David E. "The real-time roll-back and recovery of transactions in database systems." Thesis, Monterey, California. Naval Postgraduate School, 1989. http://hdl.handle.net/10945/27234.

Full text
Abstract:
Approved for public release; distribution is unlimited.
A modern database transaction may involve a long series of updates, deletions, and insertions of data, and a complex mix of these primary database operations. Due to its length and complexity, such a transaction requires back-up and recovery procedures. The back-up procedure allows the user to either commit or abort a lengthy and complex transaction without compromising the integrity of the data. The recovery procedure allows the system to maintain data integrity during the execution of a transaction, should the transaction be interrupted by the system. With both back-up and recovery procedures, a modern database system can provide consistent data throughout the life span of a database without ever corrupting either its data values or its data types. However, implementing back-up and recovery procedures in a database system is a difficult and involved effort, since it affects the base data as well as the metadata of the database, and it also affects the state of the database system. This thesis focuses mainly on the design trade-offs and issues of implementing an effective and efficient mechanism for back-up and recovery in the multimodel, multilingual, and multi-backend database system. Keywords: Database management systems. (KR)
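The commit/abort behaviour described in the abstract can be sketched with a simple undo log. The Python below is only an illustration of the general technique, not the mechanism of the multi-backend system studied in the thesis: each update records the old value first, so aborting replays the log in reverse and leaves the data exactly as it was.

    class Transaction:
        def __init__(self, db):
            self.db = db
            self.undo_log = []            # list of (key, old_value, existed)

        def write(self, key, value):
            self.undo_log.append((key, self.db.get(key), key in self.db))
            self.db[key] = value

        def delete(self, key):
            if key in self.db:
                self.undo_log.append((key, self.db[key], True))
                del self.db[key]

        def commit(self):
            self.undo_log.clear()         # changes become permanent

        def abort(self):
            # Undo in reverse order to restore the pre-transaction state.
            for key, old_value, existed in reversed(self.undo_log):
                if existed:
                    self.db[key] = old_value
                else:
                    self.db.pop(key, None)
            self.undo_log.clear()

    if __name__ == "__main__":
        db = {"balance": 100}
        t = Transaction(db)
        t.write("balance", 50)
        t.delete("balance")
        t.abort()
        print(db)                          # {'balance': 100}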
APA, Harvard, Vancouver, ISO, and other styles
49

Yu, Heng. "On Decoupling Concurrency Control from Recovery in Database Repositories." Thesis, University of Waterloo, 2005. http://hdl.handle.net/10012/1084.

Full text
Abstract:
We report on initial research on the concurrency control issue of compiled database applications. Such applications have a repository style of architecture in which a collection of software modules operates on a common database in terms of a set of predefined transaction types, an architectural view that is useful for deploying database technology in embedded control programs. We focus on decoupling concurrency control from any functionality relating to recovery; such decoupling facilitates compile-time query optimization.

Because it is the possibility of transaction aborts for deadlock resolution that makes the recovery subsystem necessary, we choose the deadlock-free tree locking (TL) scheme for our purpose. With the knowledge of transaction workload, efficacious lock trees for runtime control can be determined at compile-time. We have designed compile-time algorithms to generate the lock tree and other relevant data structures, and runtime locking/unlocking algorithms based on such structures. We have further explored how to insert the lock steps into the transaction types at compile time.

To conduct our simulation experiments evaluating the performance of TL, we designed two workloads: the first is drawn from the OLTP benchmark TPC-C, and the second from the open-source operating system MINIX. Our experimental results show that TL produces better throughput than traditional two-phase locking (2PL) when transactions are write-only, and that for main-memory data TL performs comparably to 2PL even in workloads with many reads.
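A small sketch makes the tree-locking rule concrete. The Python below is an invented single-process illustration, not the compile-time lock trees or runtime algorithms of the thesis: after its first lock, a transaction may lock a node only while holding its parent's lock, and it may never re-acquire a lock it has released, which is what makes deadlock impossible.

    class TreeLocking:
        def __init__(self, parent_of):
            self.parent_of = parent_of     # node -> parent (None for the root)
            self.held = set()
            self.released = set()
            self.first_taken = False

        def lock(self, node):
            if node in self.released:
                raise RuntimeError("TL violation: node was already released")
            if self.first_taken and self.parent_of[node] not in self.held:
                raise RuntimeError("TL violation: parent lock not held")
            self.held.add(node)
            self.first_taken = True

        def unlock(self, node):
            self.held.discard(node)
            self.released.add(node)

    if __name__ == "__main__":
        tree = {"root": None, "a": "root", "b": "root", "a1": "a"}
        txn = TreeLocking(tree)
        txn.lock("root"); txn.lock("a"); txn.unlock("root")
        txn.lock("a1")                     # ok: parent "a" is still held
        try:
            txn.lock("b")                  # parent "root" was already released
        except RuntimeError as err:
            print("rejected:", err)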
APA, Harvard, Vancouver, ISO, and other styles
50

Ahmed, Shamim. "Transaction and version management in object-oriented database management systems for collaborative engineering applications." Thesis, Massachusetts Institute of Technology, 1991. http://hdl.handle.net/1721.1/13854.

Full text
APA, Harvard, Vancouver, ISO, and other styles