
Dissertations / Theses on the topic 'Transaction processing systems'

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 43 dissertations / theses for your research on the topic 'Transaction processing systems.'

Next to every source in the list of references there is an 'Add to bibliography' button. Press it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Prabhu, Nitin Kumar Vijay. "Transaction processing in Mobile Database System." Diss., UMK access, 2006.

Abstract:
Thesis (Ph. D.)--School of Computing and Engineering. University of Missouri--Kansas City, 2006.
"A dissertation in computer science and informatics and telecommunications and computer networking." Advisor: Vijay Kumar. Typescript. Vita. Title from "catalog record" of the print edition Description based on contents viewed Nov. 9, 2007. Includes bibliographical references (leaves 152-157). Online version of the print edition.
2

Xia, Yu S. M. Massachusetts Institute of Technology. "Logical timestamps in distributed transaction processing systems." Thesis, Massachusetts Institute of Technology, 2018. https://hdl.handle.net/1721.1/122877.

Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 73-79).
Distributed transactions are transactions that access remote data. They usually suffer from high network latency (compared to the internal overhead) during data operations on remote data servers, which lengthens the overall transaction execution time. This increases the probability of conflicting with other transactions, causing high abort rates and, in turn, poor performance. In this work, we constructed Sundial, a distributed concurrency control algorithm that applies logical timestamps seamlessly with a cache protocol and works in a hybrid fashion where an optimistic approach is combined with lock-based schemes. Sundial tackles the inefficiency problem in two ways. First, Sundial decides the order of transactions on the fly. Transactions get their commit timestamps according to their data access traces. Each data item in the database has logical leases maintained by the system. A lease corresponds to a version of the item. At any logical time point, only a single transaction holds the lease for any particular data item. Therefore, lease holders do not have to worry about someone else writing to the item, because in the logical timeline the data writer needs to acquire a new lease that is disjoint from the holder's. This lease information is used to calculate the logical commit time for transactions. Second, Sundial has a novel caching scheme that works together with logical leases. The scheme allows the local data server to automatically cache data from the remote server while preserving data coherence. We benchmarked Sundial along with state-of-the-art distributed transactional concurrency control protocols. On YCSB, Sundial outperforms the second-best protocol by 57% under high data access contention. On TPC-C, Sundial has a 34% improvement over the state-of-the-art candidate. Our caching scheme has performance gains comparable with hand-optimized data replication. With high access skew, it speeds up the workload by up to 4.6x.
"This work was supported (in part) by the U.S. National Science Foundation (CCF-1438955)"
3

Dwyer, Barry. "Automatic design of batch processing systems." Title page, abstract, table of contents and introduction only, 1999. http://web4.library.adelaide.edu.au/theses/09PH/09phd993.pdf.

4

Hui, Chui Ying. "Broadcast algorithms and caching strategies for mobile transaction processing." HKBU Institutional Repository, 2007. http://repository.hkbu.edu.hk/etd_ra/781.

5

Xie, Wanxia. "Supporting Distributed Transaction Processing Over Mobile and Heterogeneous Platforms." Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/14073.

Abstract:
Recent advances in pervasive computing and peer-to-peer computing have opened up vast opportunities for developing collaborative applications. To benefit from these emerging technologies, there is a need for investigating techniques and tools that will allow development and deployment of these applications on mobile and heterogeneous platforms. To meet these challenging tasks, we need to address the typical characteristics of mobile peer-to-peer systems such as frequent disconnections, frequent network partitions, and peer heterogeneity. This research focuses on developing the necessary models, techniques and algorithms that will enable us to build and deploy collaborative applications in the Internet enabled, mobile peer-to-peer environments. This dissertation proposes a multi-state transaction model and develops a quality aware transaction processing framework to incorporate quality of service with transaction processing. It proposes adaptive ACID properties and develops a quality specification language to associate a quality level with transactions. In addition, this research develops a probabilistic concurrency control mechanism and a group based transaction commit protocol for mobile peer-to-peer systems that greatly reduces blockings in transactions and improves the transaction commit ratio. To the best of our knowledge, this is the first attempt to systematically support disconnection-tolerant and partition-tolerant transaction processing. This dissertation also develops a scalable directory service called PeerDS to support the above framework. It addresses the scalability and dynamism of the directory service from two aspects: peer-to-peer and push-pull hybrid interfaces. It also addresses peer heterogeneity and develops a new technique for load balancing in the peer-to-peer system. This technique comprises an improved routing algorithm for virtualized P2P overlay networks and a generalized Top-K server selection algorithm for load balancing, which could be optimized based on multiple factors such as proximity and cost. The proposed push-pull hybrid interfaces greatly reduce the overhead of directory servers caused by frequent queries from directory clients. In order to further improve the scalability of the push interface, this dissertation also studies and evaluates different filter indexing schemes through which the interests of each update could be calculated very efficiently. This dissertation was developed in conjunction with the middleware called System on Mobile Devices (SyD).
6

Chan, Yew Meng. "Processing mobile read-only transactions in broadcast environments with group consistency /." access full-text access abstract and table of contents, 2005. http://libweb.cityu.edu.hk/cgi-bin/ezdb/thesis.pl?mphil-cs-b19887504a.pdf.

Abstract:
Thesis (M.Phil.)--City University of Hong Kong, 2005.
"Submitted to Department of Computer Science in partial fulfillment of the requirements for the degree of Master of Philosophy" Includes bibliographical references (leaves 98-102)
7

Mena, Eduardo, and Arantza Illarramendi. "Ontology-based query processing for global information systems." Boston [u.a.]: Kluwer Acad. Publ, 2001. http://www.loc.gov/catdir/enhancements/fy0813/2001029621-d.html.

8

Reid, Elizabeth G. "Design and evaluation of a benchmark for main memory transaction processing systems." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/53162.

Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009.
Includes bibliographical references (p. 63).
We designed a diverse collection of benchmarks for Main Memory Database Systems (MMDBs) to validate and compare entries in a programming contest. Each entrant to the contest programmed an indexing system optimized for multicore multithread execution. The contest framework provided an API for the contestants, and benchmarked their submissions. This thesis describes the test goals, the API, and the test environment. It documents the website used by the contestants, describes the general nature of the tests run on each submission, and summarizes the results for each submission that was able to complete the tests.
9

Hirve, Sachin. "On the Fault-tolerance and High Performance of Replicated Transactional Systems." Diss., Virginia Tech, 2015. http://hdl.handle.net/10919/56668.

Abstract:
With the technological developments of the last few decades, there has been a notable shift in the way business and consumer transactions are conducted. These transactions are usually triggered over the internet, and transactional systems working in the background ensure that they are processed. The majority of these transactions nowadays fall into the Online Transaction Processing (OLTP) category, where low latency is a preferred characteristic. In addition to low latency, OLTP transaction systems also require high service continuity and dependability. Replication is a common technique that makes the services dependable and therefore helps in providing reliability, availability and fault-tolerance. Deferred Update Replication (DUR) and Deferred Execution Replication (DER) represent the two well-known transaction execution models for replicated transactional systems. Under DUR, a transaction is executed locally at one node before a global certification is invoked to resolve conflicts against other transactions running on remote nodes. On the other hand, DER postpones the transaction execution until the agreement on a common order of transaction requests is reached. Both DUR and DER require a distributed ordering layer, which ensures a total order of transactions even in case of faults. In today's distributed transactional systems, performance is of paramount importance. Any loss in performance, e.g., increased latency due to slow processing of client requests, may entail loss of revenue for businesses. On one hand, the DUR model is a good candidate for transaction processing in such systems when conflicts among transactions are rare, while it can be detrimental for high-conflict workload profiles. On the other hand, the DER model is an attractive choice because of its ability to behave independently of the characteristics of the workload, but trivial realizations of the model ultimately do not offer a good margin of performance improvement. Indeed, transactions are executed sequentially, and the total order layer can be a serious bottleneck for latency and scalability. This dissertation proposes novel solutions and system optimizations to enhance the overall performance of replicated transactional systems. The first presented result is HiperTM, a DER-based transaction replication solution that is able to alleviate the costs of the total order layer via speculative execution techniques. HiperTM exploits the time between the broadcast of a client request and the finalization of the order for that request to speculatively execute the request, so as to achieve an overlap between replica coordination and transaction execution. HiperTM comprises two main components: OS-Paxos, a novel total order layer that is able to early-deliver requests optimistically according to a tentative order, which is then either confirmed or rejected by a final total order; and SCC, a lightweight speculative concurrency control protocol that is able to exploit the optimistic delivery of OS-Paxos and execute transactions in a speculative fashion. SCC still processes write transactions serially in order to minimize the code instrumentation overheads, but it is able to parallelize the execution of read-only transactions thanks to its built-in object multiversion scheme.
The second contribution in this dissertation is X-DUR, a novel transaction replication system that addresses the high cost of local and remote aborts that DUR-based approaches incur under high contention on shared objects, which adversely affects performance. Exploiting knowledge of clients' transaction locality, X-DUR incorporates the benefits of the state machine approach to scale up the distributed performance of DUR systems. As the third contribution, this dissertation proposes Archie, a DER-based replicated transactional system that improves on HiperTM in two aspects. First, Archie includes a highly optimized total order layer that combines optimistic delivery and batching, thus allowing a large amount of work to be anticipated before the total order is finalized. Second, its concurrency control is able to process transactions speculatively and with a higher degree of parallelism, although the order of the speculative commits still follows the order defined by the optimistic delivery. Both HiperTM and Archie perform well up to a certain number of nodes in the system, beyond which their performance is limited by the single-leader-based total-order layer. This motivates the design of Caesar, the fourth contribution of this dissertation, which is a transactional system based on a novel multi-leader partial order protocol. Caesar enforces a partial order on the execution of transactions according to their conflicts, letting non-conflicting transactions proceed in parallel without enforcing any synchronization during the execution (e.g., no locks). As the last contribution, this dissertation presents Dexter, a replication framework that exploits the commonly observed phenomenon that not all read-only workloads require up-to-date data. It harnesses the application-specific freshness and content-based constraints of read-only transactions to achieve high scalability. Dexter services read-only requests according to the freshness guarantees specified by the application and routes the read-only workload accordingly in the system to achieve high performance and low latency. As a result, the Dexter framework also alleviates the interference between read-only requests and read-write requests, thereby helping to improve the performance of read-write request execution as well.
10

Flodin, Anton. "Leerec : A scalable product recommendation engine suitable for transaction data." Thesis, Mittuniversitetet, Avdelningen för informationssystem och -teknologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-33941.

Abstract:
We are currently living in the Internet of Things (IoT) era, which involves devices that are connected to the Internet and communicate with each other. Each year the number of devices increases rapidly, resulting in rapid growth of the data that is generated. This large amount of data is sometimes referred to as Big Data, and it is generated from different sources, such as logs of user behavior. These log files can be collected and analyzed in different ways, for example to create product recommendations. Product recommendations have been around since the late 90s, when the amount of data collected was not at the level it is today. The aim of this thesis has been to investigate methods to process data and create product recommendations, and to see how well they are adapted for Big Data. This has been accomplished through three theory studies: how to process user events, how to make the product recommendation algorithm called collaborative filtering scalable, and how to convert implicit feedback to explicit feedback (ratings). This resulted in a recommendation engine consisting of Apache Spark as the data processing system, which had three functions: reading multiple log files and concatenating them for each month, parsing the log files of user events to create explicit ratings from the transactions, and creating four types of recommendations. The NoSQL database MongoDB was chosen to store the different types of product recommendations that were created. To be able to get the recommendations from the recommendation engine and the database, a REST API was implemented which can be used by any third party. What can be concluded from the results of this thesis work is that the system that was implemented is partially scalable: Apache Spark was scalable for concatenating files, for parsing and creating ratings, and for creating the recommendations using the ALS method, whereas MongoDB was shown not to be scalable when managing more than 100 concurrent requests. Future work involves making the recommendation engine distributed in a multi-node cluster to utilize the parallelization of Apache Spark, and considering other NoSQL databases that might be more scalable than MongoDB.
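As an illustration of the pipeline this abstract describes (ratings derived from transaction logs, collaborative filtering via ALS on Apache Spark), a hedged sketch follows; the file layout, column names, and parameters are assumptions, not the thesis implementation.

```python
# Illustrative sketch: explicit ratings derived from transaction logs
# are fed to Spark's ALS; the per-user recommendations are then ready
# to be stored (e.g., in MongoDB) and served over a REST API.
from pyspark.sql import SparkSession
from pyspark.ml.recommendation import ALS

spark = SparkSession.builder.appName("recommendation-sketch").getOrCreate()

# Ratings previously parsed from the user-event log files.
ratings = spark.read.parquet("ratings.parquet")  # userId, productId, rating

als = ALS(rank=10, maxIter=10, regParam=0.1,
          userCol="userId", itemCol="productId", ratingCol="rating",
          coldStartStrategy="drop")
model = als.fit(ratings)

top10 = model.recommendForAllUsers(10)  # top-10 products per user
top10.show(5, truncate=False)
```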
11

Cooney, Vance. "Determining user interface effects of superficial presentation of dialog and visual representation of system objects in user directed transaction processing systems." Diss., The University of Arizona, 2001. http://hdl.handle.net/10150/290179.

Abstract:
At the point of sale in retail businesses, employees are a problem, one comprising high turnover, unmet consumer expectations and lost sales, among other things. One of the traditional strategies used by human resource departments to cope with employee behavior, or "misbehavior," has been to strictly script employee/customer interactions. Another, more recent, approach has been the development of systems to replace the human worker; in other words, to effect transactions directly between customers and an information system. In these systems one determinant of public acceptance may be the system's affect, whether that affect is "human-like" or takes some other form. Human-like affect can be portrayed by the use of multimedia presentation and interaction techniques to depict "employees" in familiar settings, as well as by incorporating elements of human exchange (i.e., having the system use the customer's name in dialogs). The field of Human-Computer Interaction, which informs design decisions for such multimedia systems, is still evolving, and research on the application of multimedia to user interfaces for automated transaction processing of this type is just beginning. This dissertation investigates two dimensions of user interface design that bear on the issues of emulating "natural human" transactions, using a laboratory experiment employing a 2 x 2 factorial design. The first dimension investigated is personalization. Personalization is a theoretical construct derived from social role theory and applied in marketing. It is, briefly, the inclusion of scripted dialog crafted to make the customer feel a transaction is personalized. In addition to using the customer's name, scripts might call for ending a transaction with the ubiquitous "Have a nice day!" The second dimension investigated is the "richness" of representation of the UI. Richness is here defined as the degree of realism of visual presentation in the interface and bears on the concept of direct manipulation. An object's richness could vary from a text-based description of the object to a full-motion movie depicting the object. The design implications of the presence or absence of personalization at varying levels of richness in a prototype UI simulating a fast food ordering system are investigated, and the results are presented.
12

Schricker, Marc. "Extract of reasons which could determine the decision to change from an EDI to a XML transaction processing system." Thesis, University of Skövde, School of Humanities and Informatics, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-932.

Abstract:

EDI and XML are the basic communication standards used in B2B e-commerce, and are of special interest for transaction processing systems. From the specific features that EDI and XML provide, some common and some clearly different characteristics can be derived; in addition, considerations from the business organisation domain introduce further issues that determine the use of EDI or XML.

In this study the particular interest is in finding a set of reasons that goes beyond a narrow view of the performance of the two techniques. The study surveys the impact of business processes, the business environment settings encountered, and the choice of standards, with special reference to the scrutinised area.

13

Dixon, Eric Richard. "Developing distributed applications with distributed heterogenous databases." Thesis, Virginia Tech, 1993. http://hdl.handle.net/10919/42748.

14

Sharma, Ankur [Verfasser], and Jens [Akademischer Betreuer] Dittrich. "Snapshot : friend or foe of data management - on optimizing transaction processing in database and blockchain systems / Ankur Sharma ; Betreuer: Jens Dittrich." Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2020. http://d-nb.info/1212853245/34.

15

Zhao, Haiquan. "Measurement and resource allocation problems in data streaming systems." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/34785.

Abstract:
In a data streaming system, each component consumes one or several streams of data on the fly and produces one or several streams of data for other components. The entire Internet can be viewed as a giant data streaming system. Other examples include real-time exploratory data mining and high performance transaction processing. In this thesis we study several measurement and resource allocation optimization problems of data streaming systems. Measuring quantities associated with one or several data streams is often challenging because the sheer volume of data makes it impractical to store the streams in memory or ship them across the network. A data streaming algorithm processes a long stream of data in one pass using a small working memory (called a sketch). Estimation queries can then be answered from one or more such sketches. An important task is to analyze the performance guarantees of such algorithms. In this thesis we describe a tail bound problem that often occurs and present a technique for solving it using majorization and convex ordering theories. We present two algorithms that utilize our technique. The first is to store a large array of counters in DRAM while achieving the update speed of SRAM. The second is to detect global icebergs across distributed data streams. Resource allocation decisions are important for the performance of a data streaming system. The processing graph of a data streaming system forms a fork and join network. The underlying data processing tasks consist of a rich set of semantics that include synchronous and asynchronous data fork and data join. The different types of semantics and processing requirements introduce complex interdependence between various data streams within the network. We study the distributed resource allocation problem in such systems with the goal of achieving the maximum total utility of output streams. For networks with only synchronous fork and join semantics, we present several decentralized iterative algorithms using primal and dual based optimization techniques. For general networks with both synchronous and asynchronous fork and join semantics, we present a novel modeling framework to formulate the resource allocation problem, and present a shadow-queue based decentralized iterative algorithm to solve it. We show that all the algorithms guarantee optimality and demonstrate through simulation that they can adapt quickly to dynamically changing environments.
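For readers unfamiliar with the 'sketch' notion used above, the following is a textbook count-min sketch, a standard one-pass, small-memory streaming summary; it illustrates the general idea only and is not one of the thesis's algorithms.

```python
# A count-min sketch answers frequency queries over a long stream using
# small fixed memory; estimates only ever overcount, never undercount.
import random

class CountMinSketch:
    def __init__(self, width=2048, depth=4, seed=1):
        rng = random.Random(seed)
        self.width, self.depth = width, depth
        self.salts = [rng.getrandbits(64) for _ in range(depth)]
        self.table = [[0] * width for _ in range(depth)]

    def _index(self, row, item):
        # One independent hash function per row, via a per-row salt.
        return hash((self.salts[row], item)) % self.width

    def update(self, item, count=1):
        for row in range(self.depth):
            self.table[row][self._index(row, item)] += count

    def estimate(self, item):
        # Error shrinks with width; confidence grows with depth.
        return min(self.table[row][self._index(row, item)]
                   for row in range(self.depth))

cms = CountMinSketch()
for flow in ["a", "b", "a", "c", "a"]:
    cms.update(flow)
print(cms.estimate("a"))  # >= 3, typically exactly 3
```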
16

Mühlbauer, Tobias [Verfasser], Alfons [Akademischer Betreuer] [Gutachter] Kemper, Thomas [Gutachter] Neumann, and Martin [Gutachter] Kersten. "On Scalable and Flexible Transaction and Query Processing in Main-Memory Database Systems / Tobias Mühlbauer ; Gutachter: Thomas Neumann, Alfons Kemper, Martin Kersten ; Betreuer: Alfons Kemper." München : Universitätsbibliothek der TU München, 2016. http://d-nb.info/1126644145/34.

17

Deb, Abhijit Kumar. "System Design for DSP Applications with the MASIC Methodology." Doctoral thesis, KTH, Microelectronics and Information Technology, IMIT, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3820.

Abstract:

The difficulties of system design are persistently increasing due to the integration of more functionality on a system, time-to-market pressure, the productivity gap, and performance requirements. To address the system design problems, design methodologies build system models at higher abstraction levels. However, the design task of mapping an abstract functional model onto a system architecture is nontrivial, because the architecture contains a wide variety of system components and interconnection topologies, and a given functionality can be realized in various ways depending on cost-performance tradeoffs. Therefore, a system design methodology must provide adequate design steps to map the abstract functionality onto a detailed architecture.

MASIC—Maths to ASIC—is a system design methodology targeting DSP applications. In MASIC, we begin with a functional model of the system. Next, the architectural decisions are captured to map the functionality onto the system architecture. We present a systematic approach to classify the architectural decisions in two categories: system level decisions (SLDs) and implementation level decisions (ILDs). As a result of this categorization, we only need to consider a subset of the decisions at once. To capture these decisions in an abstract way, we present three transaction level models (TLMs) in the context of DSP systems. These TLMs capture the design decisions using abstract transactions, where timing is modeled only to describe the major synchronization events. As a result, the functionality can be mapped to the system architecture without meticulous details. Also, the artifacts of the design decisions in terms of delay can be simulated quickly. Thus the MASIC approach saves both modeling and simulation time. It also facilitates the reuse of predesigned hardware and software components.

To capture and inject the architectural decisions efficiently, we present the grammar based language of MASIC. This language effectively helps us to implement the steps pertaining to the methodology. A Petri net based simulation technique is developed, which avoids the need to compile the MASIC description to VHDL for the sake of simulation. We also present a divide and conquer based approach to verify the MASIC model of a system.

Keywords: System design methodology, Signal processing systems, Design decision, Communication, Computation, Model development, Transaction level model, System design language, Grammar, MASIC.

18

Oukid, Ismail. "Architectural Principles for Database Systems on Storage-Class Memory." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2018. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-232482.

Abstract:
Database systems have long been optimized to hide the higher latency of storage media, yielding complex persistence mechanisms. With the advent of large DRAM capacities, it became possible to keep a full copy of the data in DRAM. Systems that leverage this possibility, such as main-memory databases, keep two copies of the data in two different formats: one in main memory and the other one in storage. The two copies are kept synchronized using snapshotting and logging. This main-memory-centric architecture yields nearly two orders of magnitude faster analytical processing than traditional, disk-centric ones. The rise of Big Data emphasized the importance of such systems with an ever-increasing need for more main memory. However, DRAM is hitting its scalability limits: It is intrinsically hard to further increase its density. Storage-Class Memory (SCM) is a group of novel memory technologies that promise to alleviate DRAM’s scalability limits. They combine the non-volatility, density, and economic characteristics of storage media with the byte-addressability and a latency close to that of DRAM. Therefore, SCM can serve as persistent main memory, thereby bridging the gap between main memory and storage. In this dissertation, we explore the impact of SCM as persistent main memory on database systems. Assuming a hybrid SCM-DRAM hardware architecture, we propose a novel software architecture for database systems that places primary data in SCM and directly operates on it, eliminating the need for explicit IO. This architecture yields many benefits: First, it obviates the need to reload data from storage to main memory during recovery, as data is discovered and accessed directly in SCM. Second, it allows replacing the traditional logging infrastructure by fine-grained, cheap micro-logging at data-structure level. Third, secondary data can be stored in DRAM and reconstructed during recovery. Fourth, system runtime information can be stored in SCM to improve recovery time. Finally, the system may retain and continue in-flight transactions in case of system failures. However, SCM is no panacea as it raises unprecedented programming challenges. Given its byte-addressability and low latency, processors can access, read, modify, and persist data in SCM using load/store instructions at a CPU cache line granularity. The path from CPU registers to SCM is long and mostly volatile, including store buffers and CPU caches, leaving the programmer with little control over when data is persisted. Therefore, there is a need to enforce the order and durability of SCM writes using persistence primitives, such as cache line flushing instructions. This in turn creates new failure scenarios, such as missing or misplaced persistence primitives. We devise several building blocks to overcome these challenges. First, we identify the programming challenges of SCM and present a sound programming model that solves them. Then, we tackle memory management, as the first required building block to build a database system, by designing a highly scalable SCM allocator, named PAllocator, that fulfills the versatile needs of database systems. Thereafter, we propose the FPTree, a highly scalable hybrid SCM-DRAM persistent B+-Tree that bridges the gap between the performance of transient and persistent B+-Trees. Using these building blocks, we realize our envisioned database architecture in SOFORT, a hybrid SCM-DRAM columnar transactional engine. 
We propose an SCM-optimized MVCC scheme that eliminates write-ahead logging from the critical path of transactions. Since SCM-resident data is near-instantly available upon recovery, the new recovery bottleneck is rebuilding DRAM-based data. To alleviate this bottleneck, we propose a novel recovery technique that achieves nearly instant responsiveness of the database by accepting queries right after recovering SCM-based data, while rebuilding DRAM-based data in the background. Additionally, SCM brings new failure scenarios that existing testing tools cannot detect. Hence, we propose an online testing framework that is able to automatically simulate power failures and detect missing or misplaced persistence primitives. Finally, our proposed building blocks can serve to build more complex systems, paving the way for future database systems on SCM.
19

Stejskal, Jan. "Nerelační databáze a jejich využití v prostředí finančních institucí." Master's thesis, Vysoká škola ekonomická v Praze, 2012. http://www.nusl.cz/ntk/nusl-196982.

Abstract:
This work deals with the use of NoSQL database systems in the environment of financial institutions. The work has several objectives: to characterize the types of NoSQL database systems; to analyze the properties of selected systems and their potential use in financial institutions; to develop proposals for case studies of their use; and to select and implement one of them as a demonstration of the possibilities of using this type of database system in the specific environment of financial institutions. These objectives are achieved through description and analysis in the theoretical part, and in the practical part through the design, selection, implementation, verification and acceptance of one case study, based on acceptance criteria. The thesis first explains the basic concepts of database systems. The concept of NoSQL and related terms are then explained in more detail, including their causes and genesis, and a classification of NoSQL systems into categories. The next part contains a comparison of the characteristics of relational database systems and NoSQL database systems. The following chapter deals with the needs of financial institutions in the context of the use of database systems, and the properties of several selected NoSQL database systems are analyzed. Building on these analytical findings, the next chapter is devoted to finding potential uses of NoSQL database systems in the environment of financial institutions, which is the basic theme of the thesis. The penultimate chapter contains suggestions for case studies, one of which is selected; its implementation and results are described in the last chapter. The main contribution of this work is a contribution to the theory of NoSQL systems and the possibility of their use by financial institutions; taking this into account when choosing a database system, or a combination of database systems, can in practical terms lead not only to more efficient use but also to optimized acquisition and operating costs of such systems.
20

Hood, Kendric A. "Improving Cryptocurrency Blockchain Security and Availability Adaptive Security and Partitioning." Kent State University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=kent1595038779436782.

21

Le, Nien Nam. "A Transaction Processing System for Supporting Mobile Collaborative Works." Doctoral thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2006. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-1737.

Abstract:

The theme of this research is mobile transaction processing systems, focusing on versatile data sharing mechanisms in volatile mobile environments.

The rapid growth of wireless network technologies and portable computing devices has promoted a new mobile working environment. A mobile environment is different from the traditional distributed environment due to its unique characteristics: the mobility of users or computers, the frequent and unpredictable disconnections of wireless networks, and the resource constraints of mobile computing devices.

On the one hand, the mobile environment promotes a new working model, i.e., people can carry out their work while being on the move. The environment for accessing and processing information is changing rapidly from stationary and location-dependent to mobile and location-independent. On the other hand, these unique characteristics of the mobile environment pose many challenges to mobile transaction processing systems, especially in terms of long delays, data unavailability and data inconsistency.

Many research proposals that focus on supporting transaction processing in mobile environments have been developed. However, there are still major issues that have not been completely solved. One of the problems is to support the sharing of data among transactions in volatile mobile environments. Our solution is to provide the mobile transaction processing system with flexible and adaptable data sharing mechanisms that can cope with the dynamic changes of the surrounding environmental conditions while ensuring data consistency of the database systems.

The results of our research consist of three important contributions:

The first contribution is a versatile mobile data sharing mechanism. This is achieved by the concepts of the mobile affiliation workgroup model that focuses on supporting mobile collaborative work in the horizontal dimension. The mobile affiliation workgroup model allows mobile hosts to form temporary and dynamic mobile workgroups by taking advantage of wireless communication technologies, i.e., the ability of direct communication among nearby mobile hosts. The data sharing processes among transactions at different mobile hosts are carried out by shared transactions, called export and import transactions. These shared transactions interact through a mobile sharing workspace, called an export-import repository. Data consistency of the database systems is assured by either serialization of transactions or applying user-defined policies. Our mobile data sharing mechanism provides an adaptable way for increasing data availability, while taking into account all the important characteristics of mobile environments, which are: the mobility of computing hosts, the frequent and unpredictable disconnections of wireless networks, and the resource constraints of mobile computing devices. Therefore, it has the ability to increase the throughput of mobile transaction processing systems.

The second contribution is a data conflict awareness mechanism that enables mobile transactions to be aware of conflicts among database operations in mobile environments. The data conflict awareness mechanism is developed based on the concepts of the anchor transaction, which plays the role of a proxy transaction for local transactions at a disconnected mobile host. With the support of the data conflict awareness mechanism, the mobile transaction processing system has the capacity to minimize the delay of transaction processes and to enforce consistency of the database systems.

The third contribution is a mobility control mechanism that supports the mobile transaction processing system in efficiently handling the movement of transactions in mobile environments. We distinguish two types of transaction mobility in accordance with: (1) the movement of mobile hosts through mobile cells, and (2) the movement of mobile hosts across mobile affiliation workgroups. The mobility of transactions through mobile cells is handled by movement of the anchor transaction, while the mobility of transactions across mobile affiliation workgroups is controlled by the dynamic structure of export and import transactions.

We have developed a mobile transaction processing system for MOWAHS. In particular, we have successfully designed, implemented, and tested several important system components, such as the mobile locking system and the mobile data sharing system.

22

Pu, Calton. "Replication and nested transactions in the Eden Distributed System /." Thesis, Connect to this title online; UW restricted, 1986. http://hdl.handle.net/1773/6881.

23

Mangena, Sikhulumani Bayeza. "A compositional specification and verification of a concurrent engineering transaction processing system - CETRAPS." Thesis, University of Warwick, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.400105.

24

Poti, Allison Tamara S. "Building a multi-tier enterprise system utilizing visual Basic, MTS, ASP, and MS SQL." Virtual Press, 2001. http://liblink.bsu.edu/uhtbin/catkey/1221293.

Abstract:
Multi-tier enterprise systems consist of more than two distributed tiers. The design of multi-tier systems is considerably more involved than that of two-tier systems. Not all systems should be designed as multi-tier, but if the decision to build a multi-tier system is made, there are benefits to this type of system design. CSCources is a system that tracks computer science course information. The requirements of this system indicate that it should be a multi-tier system. This system has three tiers: client, business and data. Microsoft tools are used, such as Visual Basic (VB), which was used to build the client tier that physically resides on the client machine. VB is also used to create the business tier. This tier consists of the business layer and the data layer. The business layer contains most of the business logic for the system. The data layer communicates with the data tier. Microsoft SQL Server (MS SQL) is used for the data store. The database contains several tables and stored procedures. The stored procedures are used to add, edit, update and delete records in the database. Microsoft Transaction Server (MTS) is used to control modifications to the database. The transaction and security features available in the MTS environment are used. The business tier and data tier may or may not reside on the same physical computer or server. Active Server Pages (ASP) were built that access the business tier to retrieve the needed information for display on a web page. The cost of designing a distributed system, building a distributed system, upgrades to the system and error handling are examined.
Ball State University, Muncie, IN 47306
Department of Computer Science
25

Wolf, Florian [Verfasser], Kai-Uwe [Akademischer Betreuer] Sattler, Wolfgang [Gutachter] Lehner, and Thomas [Gutachter] Neumann. "Robust and adaptive query processing in hybrid transactional/analytical database systems / Florian Wolf ; Gutachter: Wolfgang Lehner, Thomas Neumann ; Betreuer: Kai-Uwe Sattler." Ilmenau : TU Ilmenau, 2019. http://d-nb.info/1182432166/34.

26

Madron, Lukáš. "Datové sklady a OLAP v prostředí MS SQL Serveru." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2008. http://www.nusl.cz/ntk/nusl-235916.

Abstract:
This paper deals with data warehouses and OLAP. These technologies are defined and described here, followed by an introduction to the architecture of MS SQL Server and its tools for working with data warehouses and OLAP. The knowledge gained is then used to create a sample application.
27

Saladino, Renato Sebastiao. "Contribuição para estudo do uso de sistemas de informações gerenciais nos laboratórios de análises clínicas de pequeno, médio e grande porte e porte extra na Grande São Paulo, em 2005." Universidade de São Paulo, 2005. http://www.teses.usp.br/teses/disponiveis/9/9136/tde-19012018-155420/.

Abstract:
Clinical analysis laboratories have followed the evolution of medical science and diagnostic technology, seeking to meet the needs of patients and physicians. The evolution of computers favors the use of information systems in laboratories. Computerized systems assist the laboratory, improving services and reducing errors. When a laboratory computerizes its issuing of reports, it is making use of a transaction processing system. The laboratory can also use stored data and add external data, making use of a management information system. This study verifies whether clinical analysis laboratories of the Grande São Paulo region that issue their reports by computer also make use of a management information system. Thirty-two laboratories were interviewed: 9 small, 14 medium, 7 large and 2 extra-large. It was concluded that none of the laboratories possesses a full management information system, and that the size of the laboratory does not influence the characteristics of the systems used.
28

Tröger, Ralph. "Supply Chain Event Management – Bedarf, Systemarchitektur und Nutzen aus Perspektive fokaler Unternehmen der Modeindustrie." Doctoral thesis, Universitätsbibliothek Leipzig, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-155014.

Abstract:
Supply Chain Event Management (SCEM) denotes a subdiscipline of supply chain management and offers companies an approach to optimizing logistics performance and costs by reacting early to critical exceptional events in the value chain. Owing to conditions such as global logistics structures, a high variety of articles, and volatile business relationships, the fashion industry is among the sectors particularly vulnerable to critical disruptive events. In this light, after covering the essential foundations, this dissertation first examines to what extent there actually is a need for SCEM systems in the fashion industry. Building on this, and after presenting existing SCEM architecture concepts, it identifies design options for a system architecture based on the design principles of service orientation. In this context, SCEM-relevant business services are also identified. The advantages of a service-oriented design are illustrated in detail using the EPCIS (EPC Information Services) specification. The work is rounded off by a consideration of the benefit potential of SCEM systems. After presenting approaches suitable for determining benefits, the benefit is demonstrated using a practical example and, together with the results of a literature review, feeds into a consolidation of SCEM benefits. It is also examined which additional advantages a service-oriented architecture design offers companies. The conclusion summarizes the key findings of the work and provides an outlook on the relevance of the results for mastering future challenges, as well as on starting points for subsequent research.
29

Gupta, Ramesh Kumar. "Commit Processing In Distributed On-Line And Real-Time Transaction Processing Systems." Thesis, 1997. http://etd.iisc.ernet.in/handle/2005/1856.

30

Dwyer, Barry 1938. "Automatic design of batch processing systems." 1999. http://thesis.library.adelaide.edu.au/adt-SUA/public/adt-SUA20010222.004513.

31

Rahm, Erhard. "A Framework for workload allocation in distributed transaction processing systems." 1992. https://ul.qucosa.de/id/qucosa%3A31970.

Abstract:
Ever-increasing demands for high transaction rates, limitations of high-end processors, high availability, and modular growth considerations are all driving forces toward distributed architectures for transaction processing. However, a prerequisite to taking advantage of the capacity of a distributed transaction processing system is an effective strategy for workload allocation. The distribution of the workload should not only achieve load balancing, but also support an efficient transaction processing with a minimum of intersystem communication. To this end, adaptive schemes for transaction routing have to be employed that are highly responsive to workload fluctuations and configuration changes. Adaptive allocation schemes are also important for simplifying system administration, which is a major problem in distributed transaction processing systems. In this article we develop a taxonomic framework for workload allocation, in particular, transaction routing, in distributed transaction processing systems. This framework considers the influence of the underlying system architecture (e.g., shared nothing, shared disk) and transaction execution model as well as the major dependencies between workload, program, and data allocation. The main part of the framework covers structural (or architectural) and implementational alternatives for transaction routing to help identify key factors and basic tradeoffs in the design of appropriate allocation schemes. Finally, we show how existing schemes fit our taxonomy. The framework substantially facilitates a comparison of the different schemes and can guide the development of new, more effective protocols.
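A toy sketch of the basic tension the article analyzes, affinity-oriented routing versus load balancing, might look like the following; the data structures and threshold are illustrative assumptions, not a scheme from the article.

```python
# Affinity-based transaction routing: prefer the node owning most of the
# data a transaction touches (minimizing intersystem communication),
# falling back to the least-loaded node when that node is overloaded.
from collections import Counter

def route(txn_partitions, partition_owner, node_load, overload=100):
    # Count how many referenced partitions each node owns locally.
    affinity = Counter(partition_owner[p] for p in txn_partitions)
    preferred, _ = affinity.most_common(1)[0]
    if node_load[preferred] < overload:
        node = preferred                          # minimizes remote accesses
    else:
        node = min(node_load, key=node_load.get)  # balances load instead
    node_load[node] += 1
    return node

partition_owner = {"acct": 0, "branch": 0, "teller": 1}
node_load = {0: 0, 1: 0}
print(route({"acct", "branch"}, partition_owner, node_load))  # -> 0
```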
32

Dwyer, Barry. "The automatic design of batch processing systems." Thesis, 1999. http://hdl.handle.net/2440/37942.

Abstract:
Batch processing is a means of improving the efficiency of transaction processing systems. Despite the maturity of this field, there is no rigorous theory that can assist in the design of batch systems. This thesis proposes such a theory, and shows that it is practical to use it to automate system design. This has important consequences; the main impediment to the wider use of batch systems is the high cost of their development and maintenance. The theory is developed twice: informally, in a way that can be used by a systems analyst, and formally, as a result of which a computer program has been developed to prove the feasibility of automated design. Two important concepts are identified, which can aid in the decomposition of any system: 'separability', and 'independence'. Separability is the property that allows processes to be joined together by pipelines or similar topologies. Independence is the property that allows elements of a large set to be accessed and updated independently of one another. Traditional batch processing technology exploits independence when it uses sequential access in preference to random access. It is shown how the same property allows parallel access, resulting in speed gains limited only by the number of processors. This is a useful development that should assist in the design of very high throughput transaction processing systems. Systems are specified procedurally by describing an ideal system, which generates output and updates its internal state immediately following each input event. The derived systems have the same external behaviour as the ideal system, except that their outputs and internal states lag those of the ideal system arbitrarily. Indeed, their state variables may have different delays, and the systems as a whole may never be in a consistent state. A 'state dependency graph' is derived from a static analysis of a specification. The reduced graph of its strongly-connected components defines a canonical process network from which all possible implementations of the system can be derived by composition. From these it is possible to choose the one that minimises any imposed cost function. Although, in general, choosing the optimum design proves to be an NP-complete problem, it is shown that heuristics can find it quickly in practical cases.
Thesis (Ph.D.)--Mathematical and Computer Sciences (Department of Computer Science), 1999.
33

Qadah, Thamir. "High-performant, Replicated, Queue-oriented Transaction Processing Systems on Modern Computing Infrastructures." Thesis, 2021.

Abstract:
With the shifting landscape of computing hardware architectures and the emergence of new computing environments (e.g., large main-memory systems, hundreds of CPUs, distributed and virtualized cloud-based resources), state-of-the-art designs of transaction processing systems that rely on conventional wisdom suffer from lost performance optimization opportunities. This dissertation challenges conventional wisdom to rethink the design and implementation of transaction processing systems for modern computing environments.

We start by tackling the vertical hardware scaling challenge, and propose a deterministic approach to transaction processing on emerging multi-socket, many-core, shared-memory architectures to harness their unprecedented available parallelism. Our proposed priority-based queue-oriented transaction processing architecture eliminates the transaction contention footprint and uses speculative execution to improve the throughput of centralized deterministic transaction processing systems. We build QueCC and demonstrate up to two orders of magnitude better performance over the state-of-the-art.

We further tackle the horizontal scaling challenge and propose a distributed queue-oriented transaction processing engine that relies on queue-oriented communication to eliminate the traditional overhead of commitment protocols for multi-partition transactions. We build Q-Store, and demonstrate up to 22x improvement in system throughput over the state-of-the-art deterministic transaction processing systems.

Finally, we propose a generalized framework for designing distributed and replicated deterministic transaction processing systems. We introduce the concept of speculative replication to hide the latency overhead of replication. We prototype the speculative replication protocol in QR-Store and perform an extensive experimental evaluation using standard benchmarks. We show that QR-Store can achieve a throughput of 1.9 million replicated transactions per second in under 200 milliseconds and a replication overhead of 8%-25% compared to non-replicated configurations.
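A minimal sketch of the queue-oriented idea underlying this line of work follows: a planner deterministically assigns priorities and splits transactions into per-partition operation queues that executors drain independently. All names are illustrative assumptions; the real systems add batching, speculation, and commitment machinery.

```python
# Queue-oriented, deterministic execution: batch order defines priority,
# operations are routed to per-partition queues, and each executor
# drains its own queue in the agreed order, so no locks are needed.
from collections import defaultdict, deque

def plan(batch, partition_of):
    """batch: list of (txn_id, ops); ops: list of (key, op) where op
    maps the old value to the new one. Since the batch is traversed in
    priority order, each queue is built already sorted."""
    queues = defaultdict(deque)
    for txn_id, ops in batch:
        for key, op in ops:
            queues[partition_of(key)].append((txn_id, key, op))
    return queues

def execute(queue, store):
    # Contention-free: the executor owns this partition's queue and
    # applies operations in the globally agreed priority order.
    while queue:
        txn_id, key, op = queue.popleft()
        store[key] = op(store.get(key))

store = {"x": 0}
queues = plan([(1, [("x", lambda v: v + 1)]),
               (2, [("x", lambda v: v * 2)])],
              partition_of=lambda key: 0)
execute(queues[0], store)
print(store)  # {'x': 2}: +1 then *2, in deterministic priority order
```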
34

Haghjoo, Mostafa S. "Transactional actors in cooperative information systems." Phd thesis, 1995. http://hdl.handle.net/1885/137457.

Abstract:
Transaction management in advanced distributed information systems is a very important issue under research scrutiny, with many technical and open problems. Most of the research and development activities use conventional database technology to address this important issue. The transaction model presented in this thesis combines attractive properties of the actor model of computation with advanced database transaction concepts in an object-oriented environment to address the transactional necessities of cooperative information systems. The novel notion of a transaction tree in our model includes subtransactions as well as a rich collection of decision making, chronological ordering, and communication and synchronization constructs for them. Advanced concepts such as blocking/non-blocking synchronization, vital and non-vital subtransactions, contingency transactions, temporal and value dependencies, and delegation are supported. Compensatable subtransactions are distinguished and early commit is accomplished in order to release resources and facilitate cooperative as well as long-duration transactions. Automatic cancel procedures are provided to logically undo the effects of such commits if the global transaction fails. The complexity and semantics-orientation of advanced database applications is our main motivation to design and implement a high-level scripting language for the proposed transaction model. Database programming can gain in performance and problem-orientation if the semantic dependencies between transactions can be expressed directly. Simple and flexible mechanisms are provided for advanced users to query the databases, program their transactions accordingly, and accept weak forms of semantic coherence that allow for more concurrency. The transaction model is grafted onto the concurrent object-oriented programming language Sather, developed at UC Berkeley, which has a nice high-level syntax, supports advanced object-oriented concepts, and aims toward performance and reusability. We have augmented the language with distributed programming facilities and various types of message passing routines, as well as advanced transaction management constructs. The thesis is organized in three parts. The first part introduces the problem, reviews the state of the art, and presents the transaction model. The second part describes the scripting language and discusses implementation details. The third part presents the formal semantics of the transaction model using mathematical notations and concludes the thesis.
APA, Harvard, Vancouver, ISO, and other styles
35

Featherman, Mauricio S. "Evaluative criteria and user acceptance of internet-based financial transaction processing systems." 2002. http://wwwlib.umi.com/dissertations/fullcit/3045420.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Kaspi, Samuel. "Transaction Models and Algorithms for Improved Transaction Throughput." Thesis, 2002. https://vuir.vu.edu.au/221/.

Full text
Abstract:
Currently, e-commerce is in its infancy; however, its expansion is expected to be exponential, and as it grows, so too will the demand for very fast, real-time online transaction processing systems. One avenue for meeting the demand for increased transaction processing speed is conversion from disk-based to in-memory databases. However, while in-memory systems are very promising, many organizations have data too large to fit in such systems or are unwilling to undertake the investment that implementing an in-memory system requires. For these organizations, an improvement in the performance of disk-based systems is required. Accordingly, in this thesis we introduce two mechanisms that substantially improve the performance of disk-based systems. The first mechanism, which we call a contention-based scheduler, is attached to a standard 2PL system. This scheduler determines each transaction's probability of conflict before it begins executing. Using this knowledge, the contention-based scheduler admits transactions into the system in both optimal numbers and an optimal mix. We present tests showing that the contention-based scheduler substantially outperforms standard 2PL concurrency control in a wide variety of disk-based hardware configurations. The improvement, though most pronounced in the throughput of low-contention transactions, extends to all transaction types over an extended processing period. The second mechanism we develop to improve the performance of disk-based database systems is enhanced memory access (EMA). The purpose of EMA is to allow very high levels of concurrency in the pre-fetching of data, thus bringing the performance of disk-based systems close to that achieved by in-memory systems. The basis of our proposal for EMA is to ensure that, even when the conditions satisfying a transaction's predicate change between pre-fetch time and execution time, the data required to satisfy the predicate is still found in memory. We present tests showing that EMA allows the performance of disk-based systems to approach that of in-memory systems. Further, the tests show that the performance of EMA is very robust to the additional costs associated with its implementation.
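A minimal hedged sketch of the contention-based admission idea: estimate a transaction's conflict probability from the overlap of its declared access set with transactions already admitted, and defer admission when the estimate exceeds a threshold. The estimator and threshold below are illustrative assumptions, not the thesis's exact model.

```python
# Illustrative sketch: admit a transaction into a 2PL system only when
# its estimated probability of conflict with running transactions is low.

class ContentionScheduler:
    def __init__(self, max_conflict=0.2):
        self.running = []              # access sets of admitted transactions
        self.max_conflict = max_conflict

    def conflict_estimate(self, access_set):
        # Crude illustrative estimator: fraction of this transaction's
        # items that overlap any currently admitted transaction.
        if not access_set:
            return 0.0
        busy = set().union(*self.running) if self.running else set()
        return len(access_set & busy) / len(access_set)

    def try_admit(self, access_set):
        if self.conflict_estimate(access_set) <= self.max_conflict:
            self.running.append(access_set)
            return True
        return False                   # defer; retry when others finish

    def finish(self, access_set):
        self.running.remove(access_set)

sched = ContentionScheduler()
print(sched.try_admit({"a", "b"}))       # True: nothing running yet
print(sched.try_admit({"a", "c", "d"}))  # False: 1/3 overlap exceeds 0.2
```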
APA, Harvard, Vancouver, ISO, and other styles
37

"Performance study of protocols in replicated database." Chinese University of Hong Kong, 1996. http://library.cuhk.edu.hk/record=b5888817.

Full text
Abstract:
by Ching-Ting, Ng.
Thesis (M.Phil.)--Chinese University of Hong Kong, 1996.
Includes bibliographical references (leaves 79-82).
Abstract --- p.i
Acknowledgement --- p.iii
Chapter 1 --- Introduction --- p.1
Chapter 2 --- Background --- p.5
Chapter 2.1 --- Protocols tackling site failure --- p.5
Chapter 2.2 --- Protocols tackling Partition Failure --- p.6
Chapter 2.2.1 --- Primary site --- p.6
Chapter 2.2.2 --- Quorum Consensus Protocol --- p.7
Chapter 2.2.3 --- Missing Writes --- p.10
Chapter 2.2.4 --- Virtual Partition Protocol --- p.11
Chapter 2.3 --- Protocols to enhance the Performance of Updating --- p.11
Chapter 2.3.1 --- Independent Updates and Incremental Agreement in Replicated Databases --- p.12
Chapter 2.3.2 --- A Transaction Replication Scheme for a Replicated Database with Node Autonomy --- p.13
Chapter 3 --- Transaction Replication Scheme --- p.17
Chapter 3.1 --- A TRS for a Replicated Database with Node Autonomy --- p.17
Chapter 3.1.1 --- Example --- p.17
Chapter 3.1.2 --- Problem --- p.18
Chapter 3.1.3 --- Network Model --- p.18
Chapter 3.1.4 --- Transaction and Data Model --- p.19
Chapter 3.1.5 --- Histories and One-Copy Serializability --- p.20
Chapter 3.1.6 --- Transaction Broadcasting Scheme --- p.21
Chapter 3.1.7 --- Local Transactions --- p.22
Chapter 3.1.8 --- Public Transactions --- p.23
Chapter 3.1.9 --- A Conservative Timestamping Algorithm --- p.24
Chapter 3.1.10 --- Decentralized Two-Phase Commit --- p.25
Chapter 3.1.11 --- Partition Failures --- p.27
Chapter 4 --- Simulation Model --- p.29
Chapter 4.1 --- Simulation Model --- p.29
Chapter 4.1.1 --- Model Design --- p.29
Chapter 4.2 --- Implementation --- p.37
Chapter 4.2.1 --- Simulation --- p.37
Chapter 4.2.2 --- Simulation Language --- p.37
Chapter 5 --- Performance Results and Analysis --- p.39
Chapter 5.1 --- Simulation Results and Data Analysis --- p.39
Chapter 5.1.1 --- Experiment 1 : Variation of TRS Period --- p.44
Chapter 5.1.2 --- Experiment 2 : Variation of Clock Synchronization --- p.47
Chapter 5.1.3 --- Experiment 3 : Variation of Ratio of Local to Public Transaction --- p.49
Chapter 5.1.4 --- Experiment 4 : Variation of Number of Operations --- p.51
Chapter 5.1.5 --- Experiment 5 : Variation of Message Transmit Delay --- p.55
Chapter 5.1.6 --- Experiment 6 : Variation of the Interarrival Time of Transactions --- p.58
Chapter 5.1.7 --- Experiment 7 : Variation of Operation CPU cost --- p.61
Chapter 5.1.8 --- Experiment 8 : Variation of Disk I/O time --- p.64
Chapter 5.1.9 --- Experiment 9 : Variation of Cache Hit Ratio --- p.66
Chapter 5.1.10 --- Experiment 10 : Variation of Number of Data Access --- p.68
Chapter 5.1.11 --- Experiment 11 : Variation of Read Operation Ratio --- p.70
Chapter 5.1.12 --- Experiment 12 : Variation of One Site Failed --- p.72
Chapter 5.1.13 --- Experiment 13 : Variation of Sites Available --- p.74
Chapter 6 --- Conclusion --- p.77
Bibliography --- p.79
Chapter A --- Implementation --- p.83
Chapter A.1 --- Assumptions of System Model --- p.83
Chapter A.1.1 --- Program Description --- p.83
Chapter A.1.2 --- TRS System --- p.85
Chapter A.1.3 --- Common Functional Modules for Majority Quorum and Tree Quorum Protocol --- p.88
Chapter A.1.4 --- Majority Quorum Consensus Protocol --- p.90
Chapter A.1.5 --- Tree Quorum Protocol --- p.91
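Since the outline above centres on quorum-based replica control, a brief hedged sketch of majority quorum consensus may help orient the reader: reads and writes each gather votes from a quorum, and correctness follows from any read quorum intersecting any write quorum. The code below is an illustrative simplification of Gifford-style quorums, not the thesis's simulation code.

```python
# Illustrative majority-quorum sketch: with N replicas, choose read and
# write quorum sizes r and w such that r + w > N (read/write intersection)
# and 2w > N (write/write intersection), e.g. r = w = N // 2 + 1.

class Replica:
    def __init__(self):
        self.version = 0
        self.value = None

def read(replicas, r):
    votes = replicas[:r]                       # any r reachable replicas
    newest = max(votes, key=lambda rep: rep.version)
    return newest.value, newest.version

def write(replicas, w, value):
    # First learn the highest version from a quorum that must intersect
    # every possible write quorum, then install the new version.
    _, version = read(replicas, len(replicas) - w + 1)
    for rep in replicas[:w]:                   # any w reachable replicas
        rep.version = version + 1
        rep.value = value

N = 5
reps = [Replica() for _ in range(N)]
r = w = N // 2 + 1                             # 3-of-5 majority quorums
write(reps, w, "x=1")
print(read(reps, r))                           # ('x=1', 1)
```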
APA, Harvard, Vancouver, ISO, and other styles
38

Blackburn, Stephen. "Persistent store interface : a foundation for scalable persistent system design." Phd thesis, 1998. http://hdl.handle.net/1885/145415.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Alagarsamy, K. "Some Theoretical Contributions To The Mutual Exclusion Problem." Thesis, 1997. http://etd.iisc.ernet.in/handle/2005/1833.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Oukid, Ismail. "Architectural Principles for Database Systems on Storage-Class Memory." Doctoral thesis, 2017. https://tud.qucosa.de/id/qucosa%3A30750.

Full text
Abstract:
Database systems have long been optimized to hide the higher latency of storage media, yielding complex persistence mechanisms. With the advent of large DRAM capacities, it became possible to keep a full copy of the data in DRAM. Systems that leverage this possibility, such as main-memory databases, keep two copies of the data in two different formats: one in main memory and the other one in storage. The two copies are kept synchronized using snapshotting and logging. This main-memory-centric architecture yields nearly two orders of magnitude faster analytical processing than traditional, disk-centric ones. The rise of Big Data emphasized the importance of such systems with an ever-increasing need for more main memory. However, DRAM is hitting its scalability limits: It is intrinsically hard to further increase its density. Storage-Class Memory (SCM) is a group of novel memory technologies that promise to alleviate DRAM’s scalability limits. They combine the non-volatility, density, and economic characteristics of storage media with the byte-addressability and a latency close to that of DRAM. Therefore, SCM can serve as persistent main memory, thereby bridging the gap between main memory and storage. In this dissertation, we explore the impact of SCM as persistent main memory on database systems. Assuming a hybrid SCM-DRAM hardware architecture, we propose a novel software architecture for database systems that places primary data in SCM and directly operates on it, eliminating the need for explicit IO. This architecture yields many benefits: First, it obviates the need to reload data from storage to main memory during recovery, as data is discovered and accessed directly in SCM. Second, it allows replacing the traditional logging infrastructure by fine-grained, cheap micro-logging at data-structure level. Third, secondary data can be stored in DRAM and reconstructed during recovery. Fourth, system runtime information can be stored in SCM to improve recovery time. Finally, the system may retain and continue in-flight transactions in case of system failures. However, SCM is no panacea as it raises unprecedented programming challenges. Given its byte-addressability and low latency, processors can access, read, modify, and persist data in SCM using load/store instructions at a CPU cache line granularity. The path from CPU registers to SCM is long and mostly volatile, including store buffers and CPU caches, leaving the programmer with little control over when data is persisted. Therefore, there is a need to enforce the order and durability of SCM writes using persistence primitives, such as cache line flushing instructions. This in turn creates new failure scenarios, such as missing or misplaced persistence primitives. We devise several building blocks to overcome these challenges. First, we identify the programming challenges of SCM and present a sound programming model that solves them. Then, we tackle memory management, as the first required building block to build a database system, by designing a highly scalable SCM allocator, named PAllocator, that fulfills the versatile needs of database systems. Thereafter, we propose the FPTree, a highly scalable hybrid SCM-DRAM persistent B+-Tree that bridges the gap between the performance of transient and persistent B+-Trees. Using these building blocks, we realize our envisioned database architecture in SOFORT, a hybrid SCM-DRAM columnar transactional engine. 
We propose an SCM-optimized MVCC scheme that eliminates write-ahead logging from the critical path of transactions. Since SCM-resident data is near-instantly available upon recovery, the new recovery bottleneck is rebuilding DRAM-based data. To alleviate this bottleneck, we propose a novel recovery technique that achieves nearly instant responsiveness of the database by accepting queries right after recovering SCM-based data, while rebuilding DRAM-based data in the background. Additionally, SCM brings new failure scenarios that existing testing tools cannot detect. Hence, we propose an online testing framework that is able to automatically simulate power failures and detect missing or misplaced persistence primitives. Finally, our proposed building blocks can serve to build more complex systems, paving the way for future database systems on SCM.
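The ordering problem described above — stores become durable only after an explicit flush and fence — can be illustrated with a toy simulation in the spirit of the dissertation's power-failure testing framework. The model below is a deliberate simplification of ours, not the dissertation's actual tooling.

```python
# Toy model of SCM durability: a store lands in a volatile buffer and
# becomes durable only after flush_and_fence(). A simulated power failure
# drops whatever was not yet flushed, exposing missing-persistence bugs.

class SimulatedSCM:
    def __init__(self):
        self.durable = {}        # survives power failure
        self.buffered = {}       # volatile store buffer / CPU caches

    def store(self, addr, value):
        self.buffered[addr] = value

    def flush_and_fence(self, addr):
        if addr in self.buffered:
            self.durable[addr] = self.buffered.pop(addr)

    def power_failure(self):
        self.buffered.clear()    # unflushed writes are lost

scm = SimulatedSCM()
scm.store("record", "payload")
scm.flush_and_fence("record")    # persist the data before the validity flag
scm.store("valid", True)
scm.power_failure()              # crash before the flag was flushed
# After recovery: the record is durable but the flag is absent, so
# recovery code must treat the record as not yet committed.
print(scm.durable)               # {'record': 'payload'}
```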
APA, Harvard, Vancouver, ISO, and other styles
41

HUANG, ZHEN-PENG, and 黃振鵬. "Design of a transaction-oriented object processing system." Thesis, 1986. http://ndltd.ncl.edu.tw/handle/21824804805627974573.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Yen-LiangSu and 蘇晏良. "TSorter: A Conflict-Aware Transaction Processing System for Clouds." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/28167400686535618963.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Institute of Computer and Communication Engineering
Academic year: 98 (ROC calendar)
In recent years, cloud computing has become more and more popular, and companies and organizations benefit from its high scalability. However, most available cloud storage systems lack full transaction processing support, while many everyday applications, such as ticket booking systems and e-businesses, require it. Although some cloud-based transaction processing systems have been proposed, they suffer low throughput when a conflict-intensive workload is performed. This thesis therefore presents a new cloud-based transaction processing system, designated TSorter, which addresses both conflict-free and conflict-intensive workloads. TSorter introduces conflict-aware scheduling to achieve high throughput under conflict-intensive workloads. It also introduces data caching and affinity-based scheduling to improve per-node performance. The experimental results show that TSorter achieves high throughput irrespective of workload type (conflict-intensive or conflict-free).
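As a rough hedged illustration of conflict-aware scheduling, transactions touching overlapping keys can be routed to the same executor, so their conflicts are resolved locally by ordering rather than by distributed aborts; routing by key also yields the affinity and caching benefit mentioned above. Routing by the transaction's first key is our simplification, not TSorter's published algorithm.

```python
from collections import defaultdict

# Illustrative sketch: route conflicting transactions to the same worker
# queue, so conflicts become local ordering decisions instead of aborts.
# Affinity (same keys -> same worker) also keeps that worker's cache warm.

def route_transactions(txns, num_workers):
    queues = defaultdict(list)
    for txn_id, keys in txns:
        # Simplification: partition by the transaction's first key.
        worker = hash(min(keys)) % num_workers
        queues[worker].append((txn_id, keys))
    return queues

txns = [
    (1, {"seat:12A"}),            # conflicting ticket bookings...
    (2, {"seat:12A"}),            # ...land on the same worker, in order
    (3, {"seat:30F"}),
]
for worker, q in route_transactions(txns, num_workers=2).items():
    print(worker, [t for t, _ in q])
```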
APA, Harvard, Vancouver, ISO, and other styles
43

Subtil, Eduardo Bezerra. "Lazy State Determination for SQL databases." Master's thesis, 2021. http://hdl.handle.net/10362/133358.

Full text
Abstract:
Transactional systems have seen various efforts to increase their throughput, mainly through parallelism and efficient concurrency control techniques. Most approaches optimize system behaviour under high contention. In this work, we strive to reduce the system's overall contention through Lazy State Determination (LSD). LSD is a new transactional API that leverages futures to delay accesses to the database as much as possible, reducing the time that transactions must operate under isolation and thus shrinking the contention window. LSD has been shown to be a promising solution for key-value stores. Our focus now turns to relational database management systems, as we attempt to implement and evaluate LSD in this new setting. The implementation was done through a custom JDBC driver to minimize the modifications required to any external platform. Results show that reducing the contention window effectively improves the success rate of transactional applications. However, our current implementation exhibits some performance issues that must be further investigated and addressed.
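To make the futures idea concrete, here is a small hedged sketch of an LSD-style API in Python (the actual work targets a custom JDBC driver; the names below are hypothetical): reads return unevaluated futures, and the real database accesses are deferred to commit time, shrinking the window in which the transaction holds isolation resources.

```python
import threading

# Illustrative LSD-style sketch: reads return futures; actual database
# access is deferred until commit, shrinking the contention window.

class Future:
    def __init__(self, thunk):
        self.thunk = thunk
    def get(self, db):
        return self.thunk(db)

class LazyTransaction:
    def __init__(self):
        self.ops = []                  # deferred writes

    def read(self, key):
        return Future(lambda db: db.get(key))   # no DB access yet

    def write(self, key, value):
        self.ops.append((key, value))           # value may be a Future

    def commit(self, db, lock):
        with lock:                     # isolation held only for this window
            for key, v in self.ops:
                db[key] = v.get(db) if isinstance(v, Future) else v

db, lock = {"balance": 100}, threading.Lock()
txn = LazyTransaction()
bal = txn.read("balance")                                  # still deferred
txn.write("balance", Future(lambda d: bal.get(d) - 10))    # still deferred
txn.commit(db, lock)                                       # all access here
print(db["balance"])                                       # 90
```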
APA, Harvard, Vancouver, ISO, and other styles
