Journal articles on the topic 'Transactional databases'

Consult the top 50 journal articles for your research on the topic 'Transactional databases.'

1

Fouad, Mohammed M., Mostafa G. M. Mostafa, Abdulfattah S. Mashat, and Tarek F. Gharib. "IMIDB: An Algorithm for Indexed Mining of Incremental Databases." Journal of Intelligent Systems 26, no. 1 (January 1, 2017): 69–85. http://dx.doi.org/10.1515/jisys-2015-0107.

Abstract:
Association rules provide important knowledge that can be extracted from transactional databases. Owing to the massive exchange of information nowadays, databases have become dynamic, changing rapidly and periodically: new transactions are added to the database and/or old transactions are updated or removed. Incremental mining was introduced to overcome the problem of maintaining previously generated association rules in dynamic databases. In this paper, we propose an efficient algorithm (IMIDB) for incremental itemset mining in large databases. The algorithm utilizes the trie data structure for indexing dynamic database transactions. Performance comparison of the proposed algorithm to recently cited algorithms shows that a significant improvement of about two orders of magnitude is achieved by our algorithm. Also, the proposed algorithm exhibits linear scalability with respect to database size.
2

Szafrański, Bolesław, and Rafał Bałazy. "Data protection in transactional and statistical applications of databases." Computer Science and Mathematical Modelling, no. 10/2019 (September 30, 2020): 31–39. http://dx.doi.org/10.5604/01.3001.0014.4439.

Abstract:
The article describes a discussion on the issue of data protection in databases. The discussion attempts to answer the question of whether a transactional database system can be used as a system capable of protecting data in a statistical database. The discussion is preceded by a reminder of the basic issues related to data protection in databases, including flow-control models, access-control models and inference. The key element of the article is an analysis, based on the example of the Oracle database management system, of whether data protection mechanisms in transactional databases can be effective in the case of data protection in statistical databases.
3

AL-Khafaji, Hussein, and Noora Al-Saidi. "DWORM: A Novel Algorithm To Maintain Large Itemsets in Deleted Items and/or Transactions Databases Without Re-Mining." Journal of Al-Rafidain University College For Sciences (Print ISSN: 1681-6870, Online ISSN: 2790-2293), no. 1 (October 23, 2021): 5–24. http://dx.doi.org/10.55562/jrucs.v26i1.418.

Abstract:
Transactional databases can be updated in three ways: the addition of new transactions, the deletion of a set of transactions, and/or the increase or decrease of the support of the itemsets. The update process affects the previously mined itemsets: some of the large itemsets become small and vice versa. Therefore, the updated database should be re-mined to discover the changes in the hidden itemsets. There are many algorithms that avoid the re-mining process in the case of updating a database by addition, and there is one algorithm for the case of changing the value of the support, but there is no algorithm that avoids re-mining in the case of deletion. This research presents a novel algorithm to handle this case. The proposed algorithm handles the three possibilities of deletion: deletion of one or more items from a transaction or set of transactions, deletion of a set of transactions, and deletion of items from transactions together with deletion of a set of transactions at the same time. The experimental results show that the updating algorithm outperforms the re-mining process by a considerable amount of execution time.
4

Chen, Hongzhi, Changji Li, Chenguang Zheng, Chenghuan Huang, Juncheng Fang, James Cheng, and Jian Zhang. "G-tran." Proceedings of the VLDB Endowment 15, no. 11 (July 2022): 2545–58. http://dx.doi.org/10.14778/3551793.3551813.

Abstract:
Graph transaction processing poses unique challenges such as random data access due to the irregularity of graph structures, low throughput and high abort rate due to the relatively large read/write sets in graph transactions. To address these challenges, we present G-Tran, a remote direct memory access (RDMA)-enabled distributed in-memory graph database with serializable and snapshot isolation support. First, we propose a graph-native data store to achieve good data locality and fast data access for transactional updates and queries. Second, G-Tran adopts a fully decentralized architecture that leverages RDMA to process distributed transactions with the massively parallel processing (MPP) model, which can achieve high performance by utilizing all computing resources. In addition, we propose a new multi-version optimistic concurrency control (MV-OCC) protocol with two optimizations to address the issue of large read/write sets in graph transactions. Extensive experiments show that G-Tran achieves competitive performance compared with other popular graph databases on benchmark workloads.
5

Mazurova, Oksana, Artem Naboka, and Mariya Shirokopetleva. "RESEARCH OF ACID TRANSACTION IMPLEMENTATION METHODS FOR DISTRIBUTED DATABASES USING REPLICATION TECHNOLOGY." Innovative Technologies and Scientific Solutions for Industries, no. 2 (16) (July 6, 2021): 19–31. http://dx.doi.org/10.30837/itssi.2021.16.019.

Abstract:
Today, databases are an integral part of most modern applications, designed to store large amounts of data and to serve requests from many users. To solve business problems under such conditions, databases are scaled, often horizontally across several physical servers using replication technology. At the same time, many business operations require transactions that comply with the ACID properties. For relational databases, which traditionally support ACID transactions, horizontal scaling is not always effective because of the limitations of the relational model itself. There is therefore an applied problem of efficiently implementing ACID transactions for horizontally distributed databases. The subject matter of the study is the methods of implementing ACID transactions in distributed databases created by replication technology. The goal of the work is to increase the efficiency of ACID transaction implementation for horizontally distributed databases. The work is devoted to solving the following tasks: analysis and selection of the most relevant methods of implementing distributed ACID transactions; planning and experimental study of methods for implementing ACID transactions, using the NoSQL DBMS MongoDB and the NewSQL DBMS VoltDB as examples; measurement of performance metrics for these methods; and formation of recommendations concerning their effective use. The following methods are used: system analysis; relational database design; and methods for evaluating database performance. The following results were obtained: experimental measurements of the execution time of typical distributed transactions for the subject area of e-commerce, as well as measurements of the resources required for their execution; trends in the performance of such transactions were revealed, and recommendations for the methods studied were formed. The obtained results made it possible to derive functions describing the dependence of the considered metrics on load parameters.
Conclusions: the strengths and weaknesses of the implementation of distributed ACID transactions using MongoDB and VoltDB were identified, and practical recommendations were given for the effective use of these systems for different types of applications, taking into account the resources consumed and the types of requests.
6

Vijay Kumar, G., M. Sreedevi, K. Bhargav, and P. Mohan Krishna. "Incremental Mining of Popular Patterns from Transactional Databases." International Journal of Engineering & Technology 7, no. 2.7 (March 18, 2018): 636. http://dx.doi.org/10.14419/ijet.v7i2.7.10913.

Abstract:
Since the frequent-pattern mining problem was introduced, researchers have extended frequent patterns to various useful pattern types, such as cyclic, periodic and regular patterns, in evolving databases. In this paper, we consider popular patterns, which capture the popularity of each item across incremental databases. The method used to mine popular patterns is the Incrpop-growth algorithm, which applies the Incrpop-tree structure. In incremental databases, the occurrence frequency and occurrence behaviour of a pattern change whenever a small set of new transactions is added to the database. The paper therefore proposes a new algorithm, Incrpop-tree, to mine popular patterns in incremental transactional databases using the Incrpop-tree structure. Finally, experiments have been carried out, and the results give information about the compactness, time efficiency and space efficiency of the approach.
7

Avni, Hillel, and Trevor Brown. "Persistent hybrid transactional memory for databases." Proceedings of the VLDB Endowment 10, no. 4 (November 2016): 409–20. http://dx.doi.org/10.14778/3025111.3025122.

8

Xiang, Yang, Ruoming Jin, David Fuhry, and Feodor F. Dragan. "Summarizing transactional databases with overlapped hyperrectangles." Data Mining and Knowledge Discovery 23, no. 2 (October 24, 2010): 215–51. http://dx.doi.org/10.1007/s10618-010-0203-9.

9

Gowtham Srinivas, P., P. Krishna Reddy, A. V. Trinath, S. Bhargav, and R. Uday Kiran. "Mining coverage patterns from transactional databases." Journal of Intelligent Information Systems 45, no. 3 (May 30, 2014): 423–39. http://dx.doi.org/10.1007/s10844-014-0318-3.

10

GKOULALAS-DIVANIS, ARIS, and VASSILIOS S. VERYKIOS. "EXACT KNOWLEDGE HIDING IN TRANSACTIONAL DATABASES." International Journal on Artificial Intelligence Tools 18, no. 01 (February 2009): 17–37. http://dx.doi.org/10.1142/s0218213009000020.

Abstract:
The hiding of sensitive knowledge in the form of frequent itemsets has gained increasing attention over the past years. This paper highlights the process of border revision, which is essential for the identification of hiding solutions bearing no side-effects, and provides efficient algorithms for the computation of the revised positive and the revised negative borders. By utilizing border revision, we unify the theory behind two exact hiding algorithms that guarantee optimal solutions both in terms of database distortion and side-effects introduced by the hiding process. Following that, we propose a novel extension to one of the hiding algorithms that allows it to identify exact hiding solutions to a much wider range of problems than its original counterpart. Through experimentation, we compare the exact hiding schemes against two state-of-the-art heuristic algorithms and demonstrate their ability to consistently provide solutions of higher quality to a wide variety of hiding problems.
11

TANBEER, S. K., C. F. AHMED, B. S. JEONG, and Y. K. LEE. "Mining Regular Patterns in Transactional Databases." IEICE Transactions on Information and Systems E91-D, no. 11 (November 1, 2008): 2568–77. http://dx.doi.org/10.1093/ietisy/e91-d.11.2568.

12

Rao, M. Venkata Krishna, Ch Suresh, K. Kamakshaiah, and M. Ravikanth. "Prototype Analysis for Business Intelligence Utilization in Data Mining Analysis." International Journal of Advanced Research in Computer Science and Software Engineering 7, no. 7 (July 29, 2017): 30. http://dx.doi.org/10.23956/ijarcsse.v7i7.93.

Abstract:
The tremendous increase in the availability of disparate data sources has made it difficult to produce consistent utility reports across multiple transaction systems and to integrate large volumes of historical data. This is a central concern in data exploration for high-volume transactional systems with real-time data processing. The problem mainly arises in data warehouses and other data storage processes in Business Intelligence (BI) for knowledge management and business resource planning. In this setting, BI involves the construction of data-warehouse query processing for report generation and high-utility data mining in transactional data systems. The growth of voluminous data in the real world is posing challenges to the research and business community for effective data analysis and prediction. In this paper, we analyze different data mining techniques and methods for Business Intelligence in the analysis of transactional databases. To that end, we discuss the key issues by performing an in-depth analysis of business data, including database applications in transaction-data source-system analysis. We also discuss different integrated techniques of data analysis in business operational processes for feasible Business Intelligence solutions.
13

Shokrgozar, Neda, and Farzad Movahedi Sobhani. "Customer Segmentation of Bank Based on Discovering of Their Transactional Relation by Using Data Mining Algorithms." Modern Applied Science 10, no. 10 (September 26, 2016): 283. http://dx.doi.org/10.5539/mas.v10n10p283.

Abstract:
In this research, based on financial transactions between bank customers extracted from the bank's databases, we develop a relational transaction graph and create the customers' transactional communication network. Furthermore, using data mining algorithms and evaluation measures from social network analysis, we segment the bank's customers. The main goal of this research is bank customer segmentation by discovering the transactional relationships between customers, in order to deliver specific solutions supporting policies on customer equality in the banking system; in other words, improving customer relationship management to determine strategies and manage business risk. By evaluating the customer segments, the banking system can consider more relevant factors in the decision process, estimate the creditworthiness of each group of customers more accurately, and grant more appropriate types and amounts of loan services to them; these solutions are therefore expected to reduce the risk of loan services in banks.
14

Diop, Lamine, Cheikh Talibouya Diop, Arnaud Giacometti, and Arnaud Soulet. "Pattern on demand in transactional distributed databases." Information Systems 104 (February 2022): 101908. http://dx.doi.org/10.1016/j.is.2021.101908.

15

Tian, Boyu, Jiamin Huang, Barzan Mozafari, and Grant Schoenebeck. "Contention-aware lock scheduling for transactional databases." Proceedings of the VLDB Endowment 11, no. 5 (January 1, 2018): 648–62. http://dx.doi.org/10.1145/3187009.3177740.

16

Menon, Syam, Abhijeet Ghoshal, and Sumit Sarkar. "Modifying Transactional Databases to Hide Sensitive Association Rules." Information Systems Research 33, no. 1 (March 2022): 152–78. http://dx.doi.org/10.1287/isre.2021.1033.

Abstract:
Although firms recognize the value in sharing data with supply chain partners, many remain reluctant to share for fear of sensitive information potentially making its way to competitors. Approaches that can help hide sensitive information could alleviate such concerns and increase the number of firms that are willing to share. Sensitive information in transactional databases often manifests itself in the form of association rules. The sensitive association rules can be concealed by altering transactions so that they remain hidden when the data are mined by the partner. The problem of hiding these rules in the data is computationally difficult (NP-hard), and extant approaches are all heuristic in nature. To our knowledge, this is the first paper that introduces the problem as a nonlinear integer formulation to hide the sensitive association rules while minimizing the alterations needed in the data set. We apply transformations that linearize the constraints and derive various results that help reduce the size of the problem to be solved. Our results show that although the nonlinear integer formulations are not practical, the linearizations and problem-reduction steps make a significant impact on solvability and solution time. This approach mitigates potential risks associated with sharing and should increase data sharing among supply chain partners.
17

Kingsbury, Kyle, and Peter Alvaro. "Elle." Proceedings of the VLDB Endowment 14, no. 3 (November 2020): 268–80. http://dx.doi.org/10.14778/3430915.3430918.

Abstract:
Users who care about their data store it in databases, which (at least in principle) guarantee some form of transactional isolation. However, experience shows that many databases do not provide the isolation guarantees they claim. With the recent proliferation of new distributed databases, demand has grown for checkers that can, by generating client workloads and injecting faults, produce anomalies that witness a violation of a stated guarantee. An ideal checker would be sound (no false positives), efficient (polynomial in history length and concurrency), effective (finding violations in real databases), general (analyzing many patterns of transactions), and informative (justifying the presence of an anomaly with understandable counterexamples). Sadly, we are aware of no checkers that satisfy these goals. We present Elle: a novel checker which infers an Adya-style dependency graph between client-observed transactions. It does so by carefully selecting database objects and operations when generating histories, so as to ensure that the results of database reads reveal information about their version history. Elle can detect every anomaly in Adya et al.'s formalism (except for predicates), discriminate between them, and provide concise explanations of each. This paper makes the following contributions: we present Elle, demonstrate its soundness over specific datatypes, measure its efficiency against the current state of the art, and give evidence of its effectiveness via a case study of four real databases.
18

Dias, Ricardo, João Lourenço, and Gonçalo Cunha. "Developing libraries using software transactional memory." Computer Science and Information Systems 5, no. 2 (2008): 103–17. http://dx.doi.org/10.2298/csis0802103d.

Abstract:
Software transactional memory is a promising programming model that adapts many concepts borrowed from the database world to control concurrent accesses to main memory (RAM). This paper discusses how to support revertible operations, such as memory allocation and release, within software libraries that will be used in software transactional memory contexts. The proposal is based on extending the transaction life-cycle state diagram with new states associated with the execution of user-defined handlers. The proposed approach is evaluated in terms of functionality and performance by way of a use-case study and performance tests. Results demonstrate that the proposal and its current implementation are flexible, generic and efficient.
19

Do Van, Thanh, and Phuong Truong Duc. "FUZZY COMMON SEQUENTIAL RULES MINING IN QUANTITATIVE SEQUENCE DATABASES." Journal of Computer Science and Cybernetics 35, no. 3 (August 15, 2019): 217–32. http://dx.doi.org/10.15625/1813-9663/35/3/13277.

Abstract:
Common sequential rules present a relationship between unordered itemsets in which the items in the antecedent have to appear before those in the consequent. The algorithms proposed so far to find such rules apply only to transactional sequence databases, not to quantitative sequence databases. The goal of this paper is to propose a new algorithm for finding fuzzy common sequential (FCS) rules in quantitative sequence databases. The proposed algorithm improves on the ERMiner algorithm, which is considered the most effective algorithm to date for finding common sequential rules in transactional sequence databases. FCS rules are more general than classical fuzzy sequential rules and are useful in marketing, market analysis, and medical diagnosis and treatment.
21

Ruppert, Evelyn, and Mike Savage. "Transactional Politics." Sociological Review 59, no. 2_suppl (December 2011): 73–92. http://dx.doi.org/10.1111/j.1467-954x.2012.02057.x.

Abstract:
In spring 2009, revelations over the expense claims of British MPs led to one of the most damaging scandals affecting the legitimacy of parliamentary democracy in recent history. This article explores how this incident reveals the capacity of Web 2.0 devices and transactional data to transform politics. It reflects, graphically, the political power of identifying and knowing people on the basis of their transactions, on what they do rather than what they say. It also shows in practice how Web 2.0 devices such as Crowdsourcing, Google Docs, mash-ups and visualization software can be used to mobilize data for collective and popular projects. Basic analytic tools freely available on the Web enable people to access, digitize and analyse data and do their own analyses and representations of phenomena. We examine media and popular mobilizations of transactional data using the specific example of the MPs' expenses scandal and relate this to larger currents in online government data and devices for public scrutiny which give rise to a new politics of measurement. We argue that this politics of measurement involves the introduction of new visual devices based on the manipulation of huge databases into simplified visual arrays; the reorientation of accounts of the social from elicited attitudes and views to transactions and practices; and, the inspection of individuals arrayed in relation to other individuals within whole (sub) populations. It is also a politics that mobilizes new informational gatekeepers and organizers in the making and analysis of transactional data and challenges dominant or expert forms of analysis and representation.
22

BERBERIDIS, CHRISTOS, and IOANNIS VLAHAVAS. "DETECTION AND PREDICTION OF RARE EVENTS IN TRANSACTION DATABASES." International Journal on Artificial Intelligence Tools 16, no. 05 (October 2007): 829–48. http://dx.doi.org/10.1142/s0218213007003564.

Abstract:
Rare events analysis is an area that includes methods for the detection and prediction of events, e.g. a network intrusion or an engine failure, that occur infrequently and have some impact on the system. There are various methods from the areas of statistics and data mining for this purpose. In this article we propose PREVENT, an algorithm which uses inter-transactional patterns for the prediction of rare events in transaction databases. PREVENT is a general-purpose inter-transaction association rules mining algorithm that optimally fits the demands of rare event prediction. It requires only one scan over the original database and two over the transformed database, which is considerably smaller, and it is complete, as it does not miss any patterns. We provide the mathematical formulation of the problem and experimental results that show PREVENT's efficiency in terms of run time and effectiveness in terms of sensitivity and specificity.
23

Nofong, Vincent Mwintieru. "Discovering Productive Periodic Frequent Patterns in Transactional Databases." Annals of Data Science 3, no. 3 (April 23, 2016): 235–49. http://dx.doi.org/10.1007/s40745-016-0078-8.

24

Li, Bo, Zheng Pei, Chao Zhang, and Fei Hao. "Efficient Associate Rules Mining Based on Topology for Items of Transactional Data." Mathematics 11, no. 2 (January 12, 2023): 401. http://dx.doi.org/10.3390/math11020401.

Abstract:
A challenge in association rule mining is effectively reducing the time and space complexity of mining association rules with predefined minimum support and confidence thresholds from huge transaction databases. In this paper, we propose an efficient method based on the topology space of the itemset for mining association rules from transaction databases. To do so, we deduce a binary relation on itemsets, and construct a topology space of itemsets based on the binary relation, together with the quotient lattice of the topology according to the transactions of the itemsets. Furthermore, we prove that all closed itemsets are included in the quotient lattice of the topology, and that the generators or minimal generators of every closed itemset can easily be obtained from an element of the quotient lattice. Formally, the topology on the itemset represents a more general associative relationship among items of transaction databases, and the quotient lattice of the topology displays the hierarchical structure of all itemsets and provides a method to approximate any template of the itemset. Accordingly, we provide efficient algorithms to generate Min-Max association rules or to reduce generalized association rules based on the lower approximation and the upper approximation of a template, respectively. The experimental results demonstrate that the proposed method is an alternative and efficient method to generate or reduce association rules from transaction databases.
25

Hu, Tianxun, Tianzheng Wang, and Qingqing Zhou. "Online schema evolution is (almost) free for snapshot databases." Proceedings of the VLDB Endowment 16, no. 2 (October 2022): 140–53. http://dx.doi.org/10.14778/3565816.3565818.

Abstract:
Modern database applications often change their schemas to keep up with changing requirements. However, support for online and transactional schema evolution remains challenging in existing database systems. Specifically, prior work often takes ad hoc approaches to schema evolution, with "patches" applied to existing systems, leading to many corner cases and often incomplete functionality. Applications therefore often have to carefully schedule downtime for schema changes, sacrificing availability. This paper presents Tesseract, a new approach to online and transactional schema evolution without the aforementioned drawbacks. We design Tesseract based on a key observation: in widely used multi-versioned database systems, schema evolution can be modeled as data modification operations that change the entire table, i.e., data-definition-as-modification (DDaM). This allows us to support schema evolution almost "for free" by leveraging the concurrency control protocol. With simple tweaks to existing snapshot isolation protocols, we show on a 40-core server that, under a variety of workloads, Tesseract is able to provide online, transactional schema evolution without service downtime and retain high application performance while schema evolution is in progress.
26

Yamada, Hiroyuki, and Jun Nemoto. "Scalar DL." Proceedings of the VLDB Endowment 15, no. 7 (March 2022): 1324–36. http://dx.doi.org/10.14778/3523210.3523212.

Abstract:
This paper presents Scalar DL, a Byzantine fault detection (BFD) middleware for transactional database systems. Scalar DL manages two separately administered database replicas in a database system and can detect Byzantine faults in the database system as long as either replica is honest (not faulty). Unlike previous BFD works, Scalar DL executes non-conflicting transactions in parallel while preserving a correctness guarantee. Moreover, Scalar DL is database-agnostic middleware so that it achieves the detection capability in a database system without either modifying the databases or using database-specific mechanisms. Experimental results with YCSB and TPC-C show that Scalar DL outperforms a state-of-the-art BFD system by 3.5 to 10.6 times in throughput and works effectively on multiple database implementations. We also show that Scalar DL achieves near-linear (91%) scalability when the number of nodes composing each replica increases.
27

Bhunje, Anagha, and Swati Ahirrao. "Workload Aware Incremental Repartitioning of NoSQL for Online Transactional Processing Applications." International Journal of Advances in Applied Sciences 7, no. 1 (March 1, 2018): 54. http://dx.doi.org/10.11591/ijaas.v7.i1.pp54-65.

Abstract:
Numerous applications are deployed on the web with the increasing popularity of the internet, including (1) banking applications, (2) gaming applications and (3) e-commerce web applications. These applications rely on OLTP (Online Transaction Processing) systems, which need to be scalable and require fast responses. Modern web applications generate huge amounts of data that a single machine and relational databases cannot handle, and e-commerce applications face the challenge of improving system scalability. The data partitioning technique is used to improve the scalability of the system: the data is distributed among different machines, which results in an increasing number of distributed transactions. The workload-aware incremental repartitioning approach is used to balance the load among the partitions and to reduce the number of transactions that are distributed in nature. A hypergraph representation technique is used to represent the entire transactional workload in graph form. In this technique, frequently used items are collected and grouped using the fuzzy c-means clustering algorithm, and a tuple classification and migration algorithm is used to map clusters to partitions and then migrate tuples efficiently.
28

SrinivasaRao, Divvela, and V. Sucharita. "Analysis of Different Utility Mining Methodologies in Transactional Databases." International Journal of Big Data Security Intelligence 3, no. 1 (June 30, 2016): 1–10. http://dx.doi.org/10.21742/ijbdsi.2016.3.1.01.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Rana, S. "Analysis of Regular-Frequent Patterns in Large Transactional Databases." International Journal of Computer Sciences and Engineering 6, no. 7 (July 31, 2018): 1–5. http://dx.doi.org/10.26438/ijcse/v6i7.15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Qu, Luyi, Qingshuai Wang, Ting Chen, Keqiang Li, Rong Zhang, Xuan Zhou, Quanqing Xu, et al. "Are current benchmarks adequate to evaluate distributed transactional databases?" BenchCouncil Transactions on Benchmarks, Standards and Evaluations 2, no. 1 (March 2022): 100031. http://dx.doi.org/10.1016/j.tbench.2022.100031.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Funke, Florian, Alfons Kemper, and Thomas Neumann. "Compacting transactional data in hybrid OLTP&OLAP databases." Proceedings of the VLDB Endowment 5, no. 11 (July 2012): 1424–35. http://dx.doi.org/10.14778/2350229.2350258.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Rao, Divvela Srinivasa, and V. Sucharita. "Maximum Utility Item Sets for Transactional Databases Using GUIDE." Procedia Computer Science 92 (2016): 244–52. http://dx.doi.org/10.1016/j.procs.2016.07.352.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

González-Aparicio, María Teresa, Muhammad Younas, Javier Tuya, and Rubén Casado. "Testing of transactional services in NoSQL key-value databases." Future Generation Computer Systems 80 (March 2018): 384–99. http://dx.doi.org/10.1016/j.future.2017.07.004.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Pratt, Shannon P. "How to Use Transactional Databases for M&A." Journal of Corporate Accounting & Finance 13, no. 3 (March 2002): 71–80. http://dx.doi.org/10.1002/jcaf.10056.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Srinivas, UMohan, Ch Anuradha, and Dr P. Sri Rama Chandra Murty. "Hash based Approach for Mining Frequent Item Sets from Transactional Databases." International Journal of Engineering & Technology 7, no. 3.34 (September 1, 2018): 309. http://dx.doi.org/10.14419/ijet.v7i3.34.19214.

Full text
Abstract:
Frequent itemset mining has become popular for extracting hidden patterns from transactional databases. Among the several approaches, the Apriori algorithm is a basic candidate-generate-and-test strategy. Although it is an efficient level-wise approach, it has two limitations: (i) several database passes are required to check the support of candidate itemsets, and (ii) it is sensitive to large numbers of candidate itemsets and to variations in the minimum support threshold. A novel approach, one-pass Hash-based Frequent Itemset Mining (HFIM), is proposed to tackle these limitations. HFIM maintains candidate itemsets dynamically, independent of the minimum threshold, which limits the number of scans over the database to one. In this paper, HFIM is compared with Apriori on standard datasets; the results show that HFIM outperforms Apriori over large databases.
APA, Harvard, Vancouver, ISO, and other styles
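The one-pass, hash-based idea the abstract above describes can be illustrated with a toy sketch. This is my own simplification (restricted to item pairs, with invented names), not the paper's HFIM implementation: candidates are counted in a hash table during a single scan, and the support threshold is applied only afterwards.

```python
from collections import Counter
from itertools import combinations

def frequent_pairs_one_pass(transactions, min_support):
    """Count candidate pairs in one scan via a hash table, filter afterwards."""
    counts = Counter()  # hash table: candidate pair -> support count
    for tx in transactions:
        for pair in combinations(sorted(set(tx)), 2):
            counts[pair] += 1  # candidates kept regardless of threshold
    # The threshold is applied only after the single scan, so changing
    # min_support does not require rescanning the database.
    return {pair: c for pair, c in counts.items() if c >= min_support}

txs = [["a", "b", "c"], ["a", "b"], ["a", "c"], ["b", "c"], ["a", "b", "c"]]
print(frequent_pairs_one_pass(txs, 3))
# {('a', 'b'): 3, ('a', 'c'): 3, ('b', 'c'): 3}
```

Because the hash table retains all candidate counts, re-running with a different threshold is a dictionary filter rather than a new database pass.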
36

Liu, Si. "All in One: Design, Verification, and Implementation of SNOW-optimal Read Atomic Transactions." ACM Transactions on Software Engineering and Methodology 31, no. 3 (July 31, 2022): 1–44. http://dx.doi.org/10.1145/3494517.

Full text
Abstract:
Distributed read atomic transactions are important building blocks of modern cloud databases that bridge the gap between data availability and strong data consistency. The performance of their transactional reads is particularly critical to overall system performance, as many real-world database workloads are dominated by reads. Following the SNOW design principle for optimal reads, we develop LORA, a novel SNOW-optimal algorithm for distributed read atomic transactions. LORA completes its reads in exactly one round trip, even in the presence of conflicting writes, without imposing additional communication overhead, and it outperforms state-of-the-art read atomic algorithms. To guide LORA's development, we present a rewriting-logic-based framework and toolkit for the design, verification, implementation, and evaluation of distributed databases. Within the framework, we formalize LORA and mathematically prove its data consistency guarantees. We also apply automatic model checking and statistical verification to validate our proofs and to estimate LORA's performance. We additionally generate from the formal model a correct-by-construction distributed implementation for testing and performance evaluation under realistic deployments. Our design-level and implementation-based experimental results are consistent, and together they demonstrate LORA's promising data consistency and performance.
APA, Harvard, Vancouver, ISO, and other styles
37

Pajić Simović, Ana, Slađan Babarogić, Ognjen Pantelić, and Stefan Krstović. "Towards a Domain-Specific Modeling Language for Extracting Event Logs from ERP Systems." Applied Sciences 11, no. 12 (June 12, 2021): 5476. http://dx.doi.org/10.3390/app11125476.

Full text
Abstract:
Enterprise resource planning (ERP) systems are often seen as viable sources of data for process mining analysis. To perform most of the existing process mining techniques, it is necessary to obtain a valid event log that is fully compliant with the eXtensible Event Stream (XES) standard. In ERP systems, such event logs are not available as the concept of business activity is missing. Extracting event data from an ERP database is not a trivial task and requires in-depth knowledge of the business processes and underlying data structure. Therefore, domain experts require proper techniques and tools for extracting event data from ERP databases. In this paper, we present the full specification of a domain-specific modeling language for facilitating the extraction of appropriate event data from transactional databases by domain experts. The modeling language has been developed to support complex ambiguous cases when using ERP systems. We demonstrate its applicability using a case study with real data and show that the language includes constructs that enable a domain expert to easily model data of interest in the log extraction step. The language provides sufficient information to extract and transform data from transactional ERP databases to the XES format.
APA, Harvard, Vancouver, ISO, and other styles
38

G. Vijay Kumar, Dr, S. Vishnu Sravya, and G. Satish. "Mining High Utility Regular Patterns in Transactional Database." International Journal of Engineering & Technology 7, no. 2.7 (March 18, 2018): 900. http://dx.doi.org/10.14419/ijet.v7i2.7.11091.

Full text
Abstract:
In transactional databases, discovering interesting patterns is a primary challenge in data mining and knowledge discovery research. The temporal regularity of a pattern is now considered a crucial criterion in various online and real-time applications, such as market basket analysis, network monitoring, gene data analysis, web page sequences, and the stock market. Although some efforts have been made to find regular patterns in transactional databases, no existing method uses a vertical data format. Hence, we evaluate the time and memory efficiency of finding regular patterns with a vertical-format scan.
APA, Harvard, Vancouver, ISO, and other styles
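The vertical-format idea from the abstract above can be sketched as follows. This uses the common definition of regularity from the regular-pattern-mining literature (the maximum gap between consecutive occurrences of a pattern); the paper's exact data structures are not given in the abstract, so the names here are illustrative.

```python
def regularity(tid_list, n_transactions):
    """Maximum gap between consecutive occurrences of a pattern,
    over transaction ids 1..n_transactions (smaller = more regular)."""
    gaps, prev = [], 0
    for tid in tid_list:
        gaps.append(tid - prev)
        prev = tid
    gaps.append(n_transactions - prev)  # tail gap after the last occurrence
    return max(gaps)

# In a vertical layout each item carries its tid-list; intersecting
# tid-lists yields the vertical representation of an itemset without
# rescanning the database.
tids_a = [1, 3, 5, 7, 9]
tids_b = [1, 2, 3, 5, 7, 8, 9]
tids_ab = sorted(set(tids_a) & set(tids_b))  # [1, 3, 5, 7, 9]
print(regularity(tids_ab, 10))  # 2
```

An itemset is then reported as regular when this value stays within a user-given regularity threshold.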
39

Irshad, Lubna, Li Yan, and Zongmin Ma. "Schema-Based JSON Data Stores in Relational Databases." Journal of Database Management 30, no. 3 (July 2019): 38–70. http://dx.doi.org/10.4018/jdm.2019070103.

Full text
Abstract:
JSON is a simple, compact, and lightweight data exchange format for communication between web services and client applications. NoSQL document stores evolved with the popularity of JSON: they support schema-less JSON storage, reduce cost, and facilitate quick development. However, NoSQL still lacks a standard query language and supports the eventually consistent BASE transaction model rather than the ACID transaction model, which is challenging and a burden on the developer. Relational database management systems (RDBMS) support JSON in binary format with SQL functions (also known as SQL/JSON). However, these functions are not yet standardized and vary across vendors, with different limitations and complexities. More importantly, complex searches, partial updates, composite queries, and analyses are cumbersome and time-consuming in SQL/JSON compared to standard SQL operations. It is essential to integrate JSON into databases that use standard SQL features, support the ACID transaction model, and can manage and organize data efficiently. In this article, the authors enable JSON to use relational databases for analysis and complex queries. They show that the descriptive nature of the JSON schema can be utilized to create a relational schema for storing JSON documents; the powerful features of SQL can then be used to query JSON instances from the relational schema with consistency and ACID compatibility. This approach opens a gateway to combining the best features of both worlds: the fast development of JSON, the consistency of the relational model, and the efficiency of SQL.
APA, Harvard, Vancouver, ISO, and other styles
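The schema-driven mapping described in the abstract above can be sketched in a few lines. This is a deliberately flat toy (the schema, field names, and type map are my own illustrations, not the paper's algorithm); nested objects would map to child tables with foreign keys in a fuller treatment.

```python
import sqlite3

# Hypothetical flat JSON Schema; "orders" and its fields are illustrative.
schema = {
    "title": "orders",
    "properties": {
        "id": {"type": "integer"},
        "customer": {"type": "string"},
        "total": {"type": "number"},
    },
}

# Map JSON Schema scalar types onto SQL column types.
TYPE_MAP = {"integer": "INTEGER", "number": "REAL", "string": "TEXT"}

cols = ", ".join(f"{name} {TYPE_MAP[spec['type']]}"
                 for name, spec in schema["properties"].items())
ddl = f"CREATE TABLE {schema['title']} ({cols})"

conn = sqlite3.connect(":memory:")
conn.execute(ddl)
# A JSON instance now lands in an ordinary row, queryable with standard SQL.
doc = {"id": 1, "customer": "alice", "total": 9.5}
conn.execute("INSERT INTO orders VALUES (:id, :customer, :total)", doc)
row = conn.execute("SELECT customer FROM orders WHERE id = 1").fetchone()
print(row)  # ('alice',)
```

Once the documents live in relational tables, joins, constraints, and ACID transactions come for free from the RDBMS.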
40

Razu Ahmed, Md, Mst Arifa Khatun, Md Asraf Ali, and Kenneth Sundaraj. "A literature review on NoSQL database for big data processing." International Journal of Engineering & Technology 7, no. 2 (June 5, 2018): 902. http://dx.doi.org/10.14419/ijet.v7i2.12113.

Full text
Abstract:
Objective: The aim of the present study was to review the literature on NoSQL databases for Big Data processing, including structural issues and the real-time data mining techniques used to extract valuable information. Methods: We searched the Springer Link and IEEE Xplore online databases for articles published in English during the last seven years (between January 2011 and December 2017), using the two keywords "NoSQL" and "Big Data". The inclusion criteria were articles on performance comparisons of valuable information processing in the field of Big Data through NoSQL databases. Results: Of the 18 selected articles, this review identified 8 that provided recommendations on NoSQL databases for specific areas of the Big Data value chain, 5 that compared the performance of different NoSQL databases, 2 that presented the basic characteristics of NoSQL data models, 1 that addressed storage with respect to cloud computing, and 2 that focused on NoSQL transactions. Conclusion: This review presents NoSQL databases for Big Data processing, including transactional and structural issues, and highlights research directions and challenges in Big Data processing. We believe the information contained in this review will support and guide progress in Big Data processing.
APA, Harvard, Vancouver, ISO, and other styles
41

Jukic, Nenad, and Boris Jukic. "Bridging the Knowledge Gap between Transactional Databases and Data Warehouses." Journal of Computing and Information Technology 18, no. 2 (2010): 175. http://dx.doi.org/10.2498/cit.1001805.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Kumari, P. Lalitha, S. G. Sanjeevi, and T. V. Madhusudhana Rao. "Mining Top-k Regular High-Utility Itemsets in Transactional Databases." International Journal of Data Warehousing and Mining 15, no. 1 (January 2019): 58–79. http://dx.doi.org/10.4018/ijdwm.2019010104.

Full text
Abstract:
Mining high-utility itemsets is an important task in data mining. It involves an exponential search space and can return a very large number of high-utility itemsets. In real-time scenarios, it is often sufficient to mine a small number of high-utility itemsets based on user-specified interestingness. Recently, the temporal regularity of an itemset has been considered an important interestingness criterion for many applications. Methods for finding regular high-utility itemsets suffer from the difficulty of setting the threshold value. To address this problem, a novel algorithm called TKRHU (Top-k Regular High-Utility Itemset) Miner is proposed to mine the top-k high-utility itemsets that appear regularly, where k is the desired number of regular high-utility itemsets. A novel list structure, RUL, and efficient pruning techniques are developed to discover the top-k regular itemsets with high profit while reducing the search space. Experimental results show that the proposed algorithm, using the novel list structure, achieves high efficiency in terms of runtime and space.
APA, Harvard, Vancouver, ISO, and other styles
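The utility measure and the top-k selection described in the abstract above can be illustrated with a brute-force toy. This enumerates candidates directly rather than using the paper's RUL list structure and pruning techniques, and the data is invented:

```python
import heapq
from itertools import combinations

# Toy utility database: each transaction maps item -> utility
# (e.g. purchased quantity times unit profit).
db = [
    {"a": 5, "b": 2, "c": 1},
    {"a": 3, "c": 6},
    {"b": 4, "c": 2},
    {"a": 2, "b": 1, "c": 3},
]

def utility(itemset):
    """Total utility of an itemset over all transactions containing it."""
    return sum(sum(tx[i] for i in itemset)
               for tx in db if all(i in tx for i in itemset))

items = sorted({i for tx in db for i in tx})
candidates = [frozenset(c) for r in (1, 2) for c in combinations(items, r)]
# Top-k selection needs no utility threshold, only k.
top2 = heapq.nlargest(2, candidates, key=utility)
print([sorted(s) for s in top2])  # [['a', 'c'], ['b', 'c']]
```

The appeal of the top-k formulation is visible even here: the user asks for k itemsets instead of guessing a utility threshold in advance; a real miner adds the regularity constraint and pruning on top.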
43

Mary, Ms S. Elizabeth Amalorpava, and Dr R. A. Roseline. "Survey on Extracting High Utility Item Sets from Transactional Databases." International Journal of Computer Trends and Technology 25, no. 3 (July 25, 2015): 134–37. http://dx.doi.org/10.14445/22312803/ijctt-v25p126.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Tseng, Vincent S., Bai-En Shie, Cheng-Wei Wu, and Philip S. Yu. "Efficient Algorithms for Mining High Utility Itemsets from Transactional Databases." IEEE Transactions on Knowledge and Data Engineering 25, no. 8 (August 2013): 1772–86. http://dx.doi.org/10.1109/tkde.2012.59.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Li, Tianyu, Matthew Butrovich, Amadou Ngom, Wan Shen Lim, Wes McKinney, and Andrew Pavlo. "Mainlining databases." Proceedings of the VLDB Endowment 14, no. 4 (December 2020): 534–46. http://dx.doi.org/10.14778/3436905.3436913.

Full text
Abstract:
The proliferation of modern data processing tools has given rise to open-source columnar data formats. These formats help organizations avoid repeated conversion of data to a new format for each application. However, these formats are read-only, and organizations must use a heavy-weight transformation process to load data from on-line transactional processing (OLTP) systems. As a result, DBMSs often fail to take advantage of full network bandwidth when transferring data. We aim to reduce or even eliminate this overhead by developing a storage architecture for in-memory database management systems (DBMSs) that is aware of the eventual usage of its data and emits columnar storage blocks in a universal open-source format. We introduce relaxations to common analytical data formats to efficiently update records and rely on a lightweight transformation process to convert blocks to a read-optimized layout when they are cold. We also describe how to access data from third-party analytical tools with minimal serialization overhead. We implemented our storage engine based on the Apache Arrow format and integrated it into the NoisePage DBMS to evaluate our work. Our experiments show that our approach achieves comparable performance with dedicated OLTP DBMSs while enabling orders-of-magnitude faster data exports to external data science and machine learning tools than existing methods.
APA, Harvard, Vancouver, ISO, and other styles
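The hot-to-cold layout transformation described in the abstract above can be reduced to a toy illustration. This is not the NoisePage engine or the Arrow format itself, just the row-to-column pivot at the heart of the idea:

```python
# Hot rows stay row-oriented for cheap in-place updates; a cold block is
# transformed once into column arrays, the read-optimized layout that
# open columnar formats such as Arrow expose to analytical tools.
rows = [(1, "a", 10), (2, "b", 20), (3, "c", 30)]   # row-oriented (OLTP-friendly)
names = ("id", "key", "val")
columns = {n: list(vals) for n, vals in zip(names, zip(*rows))}
print(columns["val"])  # [10, 20, 30]
```

Once data sits in contiguous per-column arrays, an external analytical tool can scan a single column without touching the others, which is what makes the near-zero-copy export cheap.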
46

Adhikari, Animesh. "A Framework for Synthesizing Arbitrary Boolean Queries Induced by Frequent Itemsets." International Journal of Knowledge-Based Organizations 3, no. 2 (April 2013): 56–75. http://dx.doi.org/10.4018/ijkbo.2013040104.

Full text
Abstract:
Frequent itemsets determine the major characteristics of a transactional database. It is important to mine arbitrary Boolean queries induced by frequent itemsets. In this paper, the author proposes a simple and elegant framework for synthesizing arbitrary Boolean queries using conditional patterns in a database. Both real and synthetic databases were used to evaluate the experimental results. The author presents an algorithm for mining a set of specific itemsets in a database and a model of synthesizing a query in a database. Finally, the author discusses an application of the proposed framework for reducing query processing time.
APA, Harvard, Vancouver, ISO, and other styles
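The core arithmetic behind synthesizing Boolean queries from frequent itemsets, as in the abstract above, is inclusion-exclusion over itemset supports. This sketch is a simplified illustration (the paper's framework works with conditional patterns, which are not detailed in the abstract):

```python
# Transactions as item sets; support counts transactions containing the itemset.
db = [{"a", "b"}, {"a"}, {"b", "c"}, {"a", "b", "c"}]

def support(itemset, db):
    return sum(1 for tx in db if itemset <= tx)

A, B = frozenset({"a"}), frozenset({"b"})

# Inclusion-exclusion synthesizes the disjunctive query from mined supports:
# supp(A or B) = supp(A) + supp(B) - supp(A and B)
supp_or = support(A, db) + support(B, db) - support(A | B, db)
print(supp_or)  # 4
```

The point of such a framework is that once supports of the conjunctions are mined, disjunctions and negations can be synthesized arithmetically instead of by rescanning the database.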
47

Smith, Brian L., David C. Lewis, and Ryan Hammond. "Design of Archival Traffic Databases: Quantitative Investigation into Application of Advanced Data Modeling Concepts." Transportation Research Record: Journal of the Transportation Research Board 1836, no. 1 (January 2003): 126–31. http://dx.doi.org/10.3141/1836-16.

Full text
Abstract:
Given the enormous quantities of data collected by intelligent transportation systems (ITS), transportation professionals recently have focused on developing archived data user services (ADUS) to facilitate efficient use of these data in myriad transportation analyses. Most ITS systems were designed by using a transactional database design, which is not well suited to support the ad hoc queries required by ADUS. Research investigated the application of data-warehousing concepts to better support the requirements of ADUS. A case study is presented in which an ADUS for the Hampton Roads Smart Traffic Center, the regional freeway management system, was redesigned from a transactional approach to one based on data warehousing. Test queries run by using both approaches demonstrated that dramatic increases in efficiency are achievable through the use of data-warehousing concepts in ADUS.
APA, Harvard, Vancouver, ISO, and other styles
48

Zawar, Mrs Madhuri, and Ashwini Barakare. "A Better Approach for Mining High Utility Itemset from Transactional Databases." IJARCCE 5, no. 12 (December 30, 2016): 392–97. http://dx.doi.org/10.17148/ijarcce.2016.51290.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Borkar, Arati W. "Utility Mining Algorithm for High Utility Item sets from Transactional Databases." IOSR Journal of Computer Engineering 16, no. 2 (2014): 34–40. http://dx.doi.org/10.9790/0661-16253440.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Zamani Boroujeni, Farsad, and Doryaneh Hossein Afshari. "An Efficient Rule-Hiding Method for Privacy Preserving in Transactional Databases." Journal of Computing and Information Technology 25, no. 4 (January 5, 2018): 279–90. http://dx.doi.org/10.20532/cit.2017.1003680.

Full text
APA, Harvard, Vancouver, ISO, and other styles