Journal articles on the topic 'Transaction databases'

Consult the top 50 journal articles for your research on the topic 'Transaction databases.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Mohamed, Asghaiyer. "Network load traffic on MySQL atomic transaction database." Bulletin of Social Informatics Theory and Application 4, no. 1 (April 23, 2020): 35–39. http://dx.doi.org/10.31763/businta.v4i1.188.

Abstract:
Internet technology is developing very rapidly, especially in database systems. Today's databases have grown to include data that can no longer be processed in the traditional way, which we call big data. Data stored on a server must remain valid and intact; for this purpose, RDBMSs provide a transaction mechanism that ensures stored data remains a consistent whole, as with customer account data, cash withdrawals at ATMs, e-commerce transactions, and so on. Naturally, using transactions in a database, compared with not using atomic transactions, makes a difference in network traffic. This research analyzes the network traffic, or load, of a database that uses transactions and one that does not, with respect to the users who access them. The method was quantitative, distributing questionnaires to 300 respondents. From these roughly 300 respondents, the researchers found that after both databases were accessed by 300 people, the busiest network belonged to the system that uses the transaction feature, with a slight increase of about 13% in traffic compared to the network without transactions. This shows that the two-way communication of a transactional database provides feedback to the user, so that the data is reliable, serving as an indicator that the data has been stored safely. Further research could examine big data under the atomic transaction model.
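
For readers unfamiliar with the feature being measured, the sketch below shows the all-or-nothing behaviour of an atomic database transaction in Python, with the standard-library sqlite3 module standing in for MySQL; the table, accounts, and amounts are hypothetical.

```python
import sqlite3

# Minimal sketch of atomic (all-or-nothing) transaction semantics.
# sqlite3 is a stand-in for MySQL here; the schema is hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
conn.execute("INSERT INTO accounts VALUES (1, 100.0), (2, 50.0)")
conn.commit()

def transfer(conn, src, dst, amount):
    """Move `amount` between accounts: either both updates apply or neither."""
    try:
        with conn:  # opens a transaction; commits on success, rolls back on error
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                         (amount, src))
            balance = conn.execute("SELECT balance FROM accounts WHERE id = ?",
                                   (src,)).fetchone()[0]
            if balance < 0:
                raise ValueError("insufficient funds")
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                         (amount, dst))
    except ValueError:
        pass  # the rollback has already restored both rows

transfer(conn, 1, 2, 30.0)   # commits
transfer(conn, 1, 2, 999.0)  # rolls back, balances unchanged
print(conn.execute("SELECT id, balance FROM accounts ORDER BY id").fetchall())
# [(1, 70.0), (2, 80.0)]
```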
2

Mazurova, Oksana, Artem Naboka, and Mariya Shirokopetleva. "RESEARCH OF ACID TRANSACTION IMPLEMENTATION METHODS FOR DISTRIBUTED DATABASES USING REPLICATION TECHNOLOGY." Innovative Technologies and Scientific Solutions for Industries, no. 2 (16) (July 6, 2021): 19–31. http://dx.doi.org/10.30837/itssi.2021.16.019.

Abstract:
Today, databases are an integral part of most modern applications, designed to store large amounts of data and to serve requests from many users. To solve business problems under such conditions, databases are scaled, often horizontally across several physical servers using replication technology. At the same time, many business operations require transactional compliance with the ACID properties. For relational databases, which traditionally support ACID transactions, horizontal scaling is not always effective due to the limitations of the relational model itself. There is therefore an applied problem of efficiently implementing ACID transactions for horizontally distributed databases. The subject matter of the study is methods of implementing ACID transactions in distributed databases created with replication technology. The goal of the work is to increase the efficiency of ACID transaction implementation for horizontally distributed databases. The work is devoted to solving the following tasks: analysis and selection of the most relevant methods of implementing distributed ACID transactions; planning and experimental study of methods for implementing ACID transactions, using the NoSQL DBMS MongoDB and the NewSQL DBMS VoltDB as examples; measurement of performance metrics for these methods and formation of recommendations for their effective use. The following methods are used: system analysis; relational database design; methods for evaluating database performance. The following results were obtained: experimental measurements of the execution time of typical distributed transactions for the e-commerce domain, as well as measurements of the resources required for their execution; trends in the performance of such transactions were revealed, and recommendations for the studied methods were formed. The obtained results made it possible to express the considered metrics as functions of the load parameters. Conclusions: the strengths and weaknesses of implementing distributed ACID transactions with MongoDB and VoltDB were identified, and practical recommendations were formed for the effective use of these systems in different types of applications, taking into account the resources consumed and the types of requests.
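
For context on the MongoDB side of the comparison, here is a minimal sketch of a multi-document ACID transaction in PyMongo. Transactions require a replica set, and the connection URI, database, and collection names are assumptions for illustration, not the paper's benchmark code.

```python
from pymongo import MongoClient

# Hypothetical deployment: a local replica set named rs0 with a "shop" database.
client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")
orders, stock = client.shop.orders, client.shop.stock

with client.start_session() as session:
    with session.start_transaction():  # both writes commit together or not at all
        orders.insert_one({"sku": "widget", "qty": 2}, session=session)
        stock.update_one({"sku": "widget"}, {"$inc": {"on_hand": -2}},
                         session=session)
```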
3

RUSINKIEWICZ, MAREK, PIOTR KRYCHNIAK, and ANDRZEJ CICHOCKI. "TOWARDS A MODEL FOR MULTIDATABASE TRANSACTIONS." International Journal of Cooperative Information Systems 01, no. 03n04 (December 1992): 579–617. http://dx.doi.org/10.1142/s0218215792000155.

Abstract:
In many application areas the information that may be of interest to a user is stored under the control of multiple, autonomous database systems. To support global transactions in a multidatabase environment, we must coordinate the activities of multiple Database Management Systems that were designed for independent, stand-alone operation. The autonomy and heterogeneity of these systems present a major impediment to the direct adaptation of transaction management mechanisms developed for distributed databases. In this paper we introduce a transaction model designed for a multidatabase environment. A multidatabase transaction is defined by providing a set of (local) sub-transactions, together with their precedence and dataflow requirements. Additionally, the transaction designer may specify failure atomicity and execution atomicity requirements of the multidatabase transaction. These high-level specifications are then used by the scheduler of a multidatabase transaction to assure that its execution satisfies the constraints imposed by the semantics of the application. Uncontrolled interleaving of multidatabase transactions may lead to the violation of interdatabase integrity constraints. We discuss the issues involved in a concurrent execution of multidatabase transactions and propose a new concurrency control correctness criterion that is less restrictive than global serializability. We also show how the multidatabase SQL can be extended to allow the user to specify multidatabase transactions in a nonprocedural way.
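
One way to picture the ingredients of such a model (local subtransactions plus precedence, dataflow, and atomicity requirements) is as a small data structure. This sketch is purely illustrative; the field names are ours, not the authors' notation.

```python
from dataclasses import dataclass, field

@dataclass
class SubTransaction:
    name: str        # e.g. "debit_at_bank_A"
    database: str    # the autonomous DBMS that executes it
    statements: list # local operations, e.g. SQL strings

@dataclass
class MultidatabaseTransaction:
    subtransactions: dict                             # name -> SubTransaction
    precedence: set = field(default_factory=set)      # (before, after) pairs
    dataflow: set = field(default_factory=set)        # (producer, consumer) pairs
    failure_atomic: set = field(default_factory=set)  # groups that must fail together
```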
4

Jing, Changhong, Wenjie Liu, Jintao Gao, and Ouya Pei. "Research and implementation of HTAP for distributed database." Xibei Gongye Daxue Xuebao/Journal of Northwestern Polytechnical University 39, no. 2 (April 2021): 430–38. http://dx.doi.org/10.1051/jnwpu/20213920430.

Abstract:
Data processing can be roughly divided into two categories: online transaction processing (OLTP) and online analytical processing (OLAP). OLTP is the main application of traditional relational databases and covers basic day-to-day transaction processing, such as bank transactions. OLAP is the main application of data warehouse systems; it supports more complex data analysis operations, focuses on decision support, and provides intuitive analysis results. As the amount of data processed by enterprises continues to increase, distributed databases have gradually replaced stand-alone databases and become the mainstream in applications. However, the business currently supported by distributed databases is mainly based on OLTP applications and lacks OLAP implementations. This paper proposes an HTAP implementation method for the distributed database CBase, which provides OLAP analysis for CBase and can easily handle the analysis of large amounts of data.
5

Khanuja, Harmeet Kaur, and Dattatraya Adane. "Monitor and Detect Suspicious Transactions With Database Forensic Analysis." Journal of Database Management 29, no. 4 (October 2018): 28–50. http://dx.doi.org/10.4018/jdm.2018100102.

Abstract:
The extensive usage of the web has given rise to financially motivated, illegal, covert online transactions, so digital investigators have turned to databases to investigate undetected illegal transactions. The authors here have designed and developed a methodology to find illegal financial transactions through database logs. The objective is to monitor database transactions in order to detect and report the risk level of suspicious transactions. Initially, the process extracts SQL transactions from the logs of different database systems, then transforms and loads them separately into a uniform XML format, which yields the transaction records and their metadata. The transaction records are processed with well-defined rules to flag outliers as suspicious transactions. This gives the initial belief that a transaction is suspicious. The belief value of transactions is further rationalised using Dempster-Shafer theory, which verifies the uncertainty and risk level of the suspected transactions to confirm occurrences of fraudulent transactions.
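
The rationalisation step mentioned above rests on Dempster's rule of combination. The sketch below shows the rule itself applied to two invented mass functions over {suspicious, legitimate}; the numbers are made up and are not the authors' evidence sources.

```python
def combine(m1, m2):
    """Dempster's rule: combine two mass functions keyed by frozenset focal elements."""
    combined, conflict = {}, 0.0
    for b, mb in m1.items():
        for c, mc in m2.items():
            inter = b & c
            if inter:
                combined[inter] = combined.get(inter, 0.0) + mb * mc
            else:
                conflict += mb * mc  # contradictory evidence
    if conflict >= 1.0:
        raise ValueError("total conflict: evidence cannot be combined")
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

S, L = frozenset({"suspicious"}), frozenset({"legitimate"})
theta = S | L  # frame of discernment: "don't know"
rule_based = {S: 0.6, theta: 0.4}          # belief from outlier rules
history    = {S: 0.5, L: 0.2, theta: 0.3}  # belief from past behaviour
print(combine(rule_based, history))  # mass concentrates on "suspicious"
```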
6

YI, JUNKAI, GANG LU, and KEVIN LÜ. "MONITORING CUMULATED ANOMALY IN DATABASES." International Journal of Software Engineering and Knowledge Engineering 19, no. 03 (May 2009): 421–52. http://dx.doi.org/10.1142/s0218194009004210.

Abstract:
This paper deals with a new type of database anomaly called Cumulated Anomaly (CA), which occurs when the submission times of authorized transactions, or the data they change, accumulate beyond certain thresholds. A database-level detection method for Cumulated Anomaly is proposed, based on statistics and fuzzy set theory. By measuring each database transaction with a real number between zero and one, this method quantitatively monitors how dangerous a transaction is. The real number is termed the dubiety degree; the method is therefore named the Dubiety-Determining Method (DDM). After formally presenting the concepts of Cumulated Anomaly and DDM, the DDM algorithm is given in detail. A software system architecture to support DDM was designed and implemented, and three experiments were performed on it to test DDM. The first experiment showed the general results of DDM on a set of randomly generated audit records, while the second simulated a practical case. DDM monitored the dubiety degree of each database transaction and detected the expected Cumulated Anomaly in both experiments. The effect of DDM on database performance was tested in the last experiment. Experimental results show that the DDM method is feasible and effective.
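
To make the dubiety degree concrete, here is one illustrative way to map a cumulated quantity onto [0, 1] around a threshold. The logistic shape and its parameters are our assumptions, not the paper's exact statistical model.

```python
import math

def dubiety(cumulated, threshold, steepness=0.01):
    """0 = clearly normal, 1 = clearly anomalous, 0.5 exactly at the threshold."""
    return 1.0 / (1.0 + math.exp(-steepness * (cumulated - threshold)))

# Hypothetical: rows changed by one user within a monitoring window.
for changed_rows in (200, 900, 1000, 1500):
    print(changed_rows, round(dubiety(changed_rows, threshold=1000), 3))
```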
7

Lewis, Philip M., Arthur Bernstein, and Michael Kifer. "Databases and transaction processing." ACM SIGMOD Record 31, no. 1 (March 2002): 74–75. http://dx.doi.org/10.1145/507338.507354.

8

Bozkaya, T. "Indexing transaction time databases." Information Sciences 112, no. 1-4 (December 1998): 85–123. http://dx.doi.org/10.1016/s0020-0255(98)10024-5.

9

Haraty, Ramzi Ahmed, Sanaa Kaddoura, and Ahmed Zekri. "Transaction Dependency Based Approach for Database Damage Assessment Using a Matrix." International Journal on Semantic Web and Information Systems 13, no. 2 (April 2017): 74–86. http://dx.doi.org/10.4018/ijswis.2017040105.

Abstract:
One of the critical concerns in the current era is information security. Companies share vast amounts of critical data online, which exposes their databases to malicious attacks. When protection techniques fail to prevent an attack, recovery is needed. Database recovery is not a straightforward procedure, since transactions are highly interconnected. Traditional recovery techniques do not consider the interconnections between transactions, because this information is not saved anywhere in the log file. Thus, they roll back all the transactions from the detected malicious transaction to the end of the log file. Hence, both affected and benign transactions are rolled back, which is a waste of time. This paper presents an algorithm that efficiently assesses the damage caused in the database by a malicious transaction and recovers from it. The proposed algorithm keeps track of the transactions that read from one another and stores this information in a single matrix. The experimental results show that the algorithm is faster than other existing algorithms in this domain.
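
A minimal sketch of the matrix idea: record which transactions read from which, then mark as damaged only the transactions reachable from the malicious one, instead of rolling back the whole log suffix. The transaction IDs and dependencies are invented.

```python
def damaged_set(n, reads_from, malicious):
    """reads_from: (reader, writer) pairs; returns the transaction ids to roll back."""
    # n x n matrix: M[w][r] = 1 if transaction r read a value written by w.
    M = [[0] * n for _ in range(n)]
    for reader, writer in reads_from:
        M[writer][reader] = 1
    damaged, stack = {malicious}, [malicious]
    while stack:  # transitive closure of read-from, starting at the malicious txn
        w = stack.pop()
        for r in range(n):
            if M[w][r] and r not in damaged:
                damaged.add(r)
                stack.append(r)
    return damaged

# T1 is malicious; T3 read from T1 and T4 read from T3; T2 only read from T0.
print(sorted(damaged_set(5, {(3, 1), (4, 3), (2, 0)}, malicious=1)))  # [1, 3, 4]
```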
10

Setiadi, Teguh. "APPLICATION OF INFORMATION SYSTEMS FOR TRANSACTION REPORTS BADAN KESWADAYAAN MASYARAKAT SEJAHTERA CASE STUDY SUMBEREJO KENDAL VILLAGE." SAINTEKBU 11, no. 2 (September 3, 2019): 28. http://dx.doi.org/10.32764/saintekbu.v11i2.358.

Abstract:
BKM Sejahtera is a collective leadership institution formed by a community association in the village of Sumberejo, whose role is to mobilise the potential and resources of the community to address various development issues in the village area. BKM Sejahtera still faces many obstacles in its operations: reports and transactions are handled conventionally, with reports prepared in Microsoft Excel as a record of transactions. This is ineffective because producing financial reports, especially for savings and loans, takes a relatively long time, and as a storage method it is inefficient because no database is used, so it requires a lot of space and makes data hard to find. The risk of errors in processing transactions and producing savings and loan reports is also relatively high, because the data written in the transaction book is sometimes not the same as the data entered in Microsoft Excel. To overcome these problems, an information system application is built for the Badan Keswadayaan Masyarakat Sejahtera using the accrual basis method. The application produces financial reports per period, loan transaction reports, installment transaction reports, savings and loan transaction reports, and withdrawal transaction reports. It is implemented in PHP for the application interface and MySQL for database processing. With this application, transactions become easier, financial reports can be obtained quickly, and the database serves as a safe storage medium.
11

Pei, Ouya, Zhanhuai Li, Hongtao Du, Wenjie Liu, and Jintao Gao. "Dependence-Cognizant Locking Improvement for the Main Memory Database Systems." Mathematical Problems in Engineering 2021 (February 20, 2021): 1–12. http://dx.doi.org/10.1155/2021/6654461.

Abstract:
The traditional lock manager (LM) seriously limits the transaction throughput of the main memory database systems (MMDB). In this paper, we introduce dependence-cognizant locking (DCLP), an efficient improvement to the traditional LM, which dramatically reduces the locking space while offering efficiency. With DCLP, one transaction and its direct successors are collocated in its context. Whenever a transaction is committed, it wakes up its direct successors immediately avoiding the expensive operations, such as lock detection and latch contention. We also propose virtual transaction which has better time and space complexity by compressing continuous read-only transactions/operations. We implement DCLP in Calvin and carry out experiments in both multicore and shared-nothing distributed databases. Experiments demonstrate that, in contrast with existing algorithms, DCLP can achieve better performance in many workloads, especially high-contention workloads.
12

Subramanyam, R. B. V., and A. Goswami. "Mining Frequent Fuzzy Grids in Dynamic Databases with Weighted Transactions and Weighted Items." Journal of Information & Knowledge Management 05, no. 03 (September 2006): 243–57. http://dx.doi.org/10.1142/s0219649206001487.

Abstract:
Incremental mining algorithms that derive the latest mining output by making use of previous mining results are attractive to business organisations. In this paper, a fuzzy data mining algorithm for incremental mining of frequent fuzzy grids from quantitative dynamic databases is proposed. It extends the traditional association rule problem by allowing a weight to be associated with each item in a transaction and with each transaction in a database, to reflect the interest or intensity of items and transactions. It uses information about fuzzy grids already mined from the original database and avoids a start-from-scratch process. In addition, we deal with "weights-of-significance", which are automatically adjusted as the incremental databases evolve and are merged into the original database. We maintain "hopeful fuzzy grids" and "frequent fuzzy grids", and our algorithm changes the status of grids discovered earlier so that they reflect pattern drift in the updated quantitative databases. Our heuristic approach avoids maintaining many "hopeful fuzzy grids" at the initial level. The algorithm is illustrated with a numerical example, and experimental results are also presented.
13

Kumar, Muruganandan, and Johnny Wong. "Transaction management in design databases." Journal of Systems and Software 22, no. 1 (July 1993): 3–15. http://dx.doi.org/10.1016/0164-1212(93)90118-h.

14

BERBERIDIS, CHRISTOS, and IOANNIS VLAHAVAS. "DETECTION AND PREDICTION OF RARE EVENTS IN TRANSACTION DATABASES." International Journal on Artificial Intelligence Tools 16, no. 05 (October 2007): 829–48. http://dx.doi.org/10.1142/s0218213007003564.

Abstract:
Rare event analysis is an area that includes methods for the detection and prediction of events, e.g. a network intrusion or an engine failure, that occur infrequently and have some impact on the system. Various methods from the areas of statistics and data mining exist for this purpose. In this article we propose PREVENT, an algorithm that uses inter-transactional patterns for the prediction of rare events in transaction databases. PREVENT is a general-purpose inter-transaction association rule mining algorithm that optimally fits the demands of rare event prediction. It requires only one scan over the original database and two over the transformed one, which is considerably smaller, and it is complete, as it does not miss any patterns. We provide the mathematical formulation of the problem and experimental results that show PREVENT's efficiency in terms of run time and its effectiveness in terms of sensitivity and specificity.
15

Valiullin, Timur, Zhexue Huang, Chenghao Wei, Jianfei Yin, Dingming Wu, and Luliia Egorova. "A new approximate method for mining frequent itemsets from big data." Computer Science and Information Systems, no. 00 (2020): 15. http://dx.doi.org/10.2298/csis200124015v.

Abstract:
Mining frequent itemsets in transaction databases is an important task in many applications. It becomes more challenging when dealing with a large transaction database, because traditional algorithms are not scalable due to memory limits. In this paper, we propose a new approach for approximate mining of frequent itemsets in a big transaction database. Our approach is suitable for mining big transaction databases, since it produces approximate frequent itemsets from a subset of the entire database and can be implemented in a distributed environment. Our algorithm is able to efficiently produce highly accurate results; however, it can miss some true frequent itemsets. To address this problem and reduce the number of false-negative frequent itemsets, we introduce an additional parameter to the algorithm so that it discovers most of the frequent itemsets contained in the entire data set. In this article, we show an empirical evaluation of the results of the proposed approach.
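
The idea can be sketched as sampling plus a relaxed threshold: estimate supports on a random subset of transactions, then lower the cutoff by a slack parameter to reduce false negatives, in the spirit of the additional parameter described above. Everything below (parameters, data, the enumeration itself) is illustrative, not the authors' algorithm.

```python
import random
from collections import Counter
from itertools import combinations

def approx_frequent(db, minsup, sample_frac=0.3, slack=0.05, max_len=2, seed=7):
    random.seed(seed)
    sample = [t for t in db if random.random() < sample_frac]
    counts = Counter()
    for t in sample:
        for k in range(1, max_len + 1):
            for itemset in combinations(sorted(t), k):
                counts[itemset] += 1
    cutoff = (minsup - slack) * len(sample)  # relaxed threshold on the sample
    return {i for i, c in counts.items() if c >= cutoff}

db = [{"a", "b"}, {"a", "b", "c"}, {"a", "c"}, {"b", "c"}, {"a", "b"}] * 200
print(sorted(approx_frequent(db, minsup=0.5)))
```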
16

Lu, Yi, Xiangyao Yu, Lei Cao, and Samuel Madden. "Epoch-based commit and replication in distributed OLTP databases." Proceedings of the VLDB Endowment 14, no. 5 (January 2021): 743–56. http://dx.doi.org/10.14778/3446095.3446098.

Abstract:
Many modern data-oriented applications are built on top of distributed OLTP databases for both scalability and high availability. Such distributed databases enforce atomicity, durability, and consistency through two-phase commit (2PC) and synchronous replication at the granularity of every single transaction. In this paper, we present COCO, a new distributed OLTP database that supports epoch-based commit and replication. The key idea behind COCO is that it separates transactions into epochs and treats a whole epoch of transactions as the commit unit. In this way, the overhead of 2PC and synchronous replication is significantly reduced. We support two variants of optimistic concurrency control (OCC), using physical time and logical time, with various optimizations enabled by the epoch-based execution. Our evaluation on two popular benchmarks (YCSB and TPC-C) shows that COCO outperforms systems with fine-grained 2PC and synchronous replication by up to a factor of four.
17

Dekeyser, Stijn, Jan Hidders, and Jan Paredaens. "A Transaction Model for XML Databases." World Wide Web 7, no. 1 (March 2004): 29–57. http://dx.doi.org/10.1023/b:wwwj.0000015864.75561.98.

18

Hwang, San-Yih, Jaideep Srivastava, and Jianzhong Li. "Transaction recovery in federated autonomous databases." Distributed and Parallel Databases 2, no. 2 (April 1994): 151–82. http://dx.doi.org/10.1007/bf01267325.

19

Yao, Hong, and Howard J. Hamilton. "Mining itemset utilities from transaction databases." Data & Knowledge Engineering 59, no. 3 (December 2006): 603–26. http://dx.doi.org/10.1016/j.datak.2005.10.004.

20

Zamanian, Erfan, Julian Shun, Carsten Binnig, and Tim Kraska. "Chiller." ACM SIGMOD Record 50, no. 1 (June 15, 2021): 15–22. http://dx.doi.org/10.1145/3471485.3471490.

Abstract:
Distributed transactions on high-overhead TCP/IP-based networks were conventionally considered to be prohibitively expensive. In fact, the primary goal of existing partitioning schemes is to minimize the number of cross-partition transactions. However, with the new generation of fast RDMA-enabled networks, this assumption is no longer valid. In this paper, we first make the case that the new bottleneck which hinders truly scalable transaction processing in modern RDMA-enabled databases is data contention, and that optimizing for data contention leads to different partitioning layouts than optimizing for the number of distributed transactions. We then present Chiller, a new approach to data partitioning and transaction execution, which aims to minimize data contention for both local and distributed transactions.
21

Tsai, Pauray S. M., and Chien-Ming Chen. "Mining interesting association rules from customer databases and transaction databases." Information Systems 29, no. 8 (December 2004): 685–96. http://dx.doi.org/10.1016/s0306-4379(03)00061-9.

22

C, Nithin, and A. V. Krishna Mohan. "Privacy-Preserving in FiDoop, Mining of Frequent Itemsets from Outsourced Transaction Databases." International Journal of Innovative Research in Computer Science & Technology 5, no. 3 (May 31, 2017): 267–73. http://dx.doi.org/10.21276/ijircst.2017.5.3.2.

23

Aida Jusoh, Julaily, Mustafa Man, and Wan Aezwani Wan Abu Bakar. "Performance of IF-Postdiffset and R-Eclat Variants in Large Dataset." International Journal of Engineering & Technology 7, no. 4.1 (September 12, 2018): 134. http://dx.doi.org/10.14419/ijet.v7i4.1.28241.

Abstract:
Pattern mining refers to a subfield of data mining that uncovers interesting, unexpected, and useful patterns from transaction databases. Such patterns reflect frequent and infrequent itemsets. An abundant literature has been dedicated to frequent pattern mining, and there are many efficient algorithms for frequent itemset mining in transaction databases. Nonetheless, infrequent pattern mining has emerged as an interesting problem: discovering patterns that rarely occur in the transaction database. Many researchers reckon that rare pattern occurrences may offer valuable information in the knowledge discovery process. R-Eclat is a novel algorithm that determines infrequent patterns in the transaction database. The multiple variants of the R-Eclat algorithm yield varied performance in mining infrequent patterns. This paper proposes IF-Postdiffset as a new variant of the R-Eclat algorithm and highlights the performance, in terms of execution time, of infrequent pattern mining from the transaction database across the different variants of R-Eclat.
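
For orientation, the Eclat family that R-Eclat builds on represents each item by its tidset (the set of transaction ids containing it) and mines depth-first by intersecting tidsets. The compact sketch below shows that core step on a toy database; it is not the IF-Postdiffset variant itself.

```python
def eclat(prefix, items, minsup, out):
    """items: list of (item, tidset); collects (itemset, support) into out."""
    while items:
        item, tids = items.pop()
        if len(tids) >= minsup:
            out.append((prefix + [item], len(tids)))
            # Extend the prefix: intersect this tidset with each remaining item's.
            suffix = [(other, tids & otids) for other, otids in items
                      if len(tids & otids) >= minsup]
            eclat(prefix + [item], suffix, minsup, out)
    return out

db = [{"a", "b", "d"}, {"b", "c"}, {"a", "b", "c"}, {"a", "d"}]
vertical = {i: {tid for tid, t in enumerate(db) if i in t} for i in set().union(*db)}
print(eclat([], sorted(vertical.items()), minsup=2, out=[]))
```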
24

Aburuotu, E. C., and P. O. Asagba. "Transaction processing monitors on real-time databases." ACADEMICIA: An International Multidisciplinary Research Journal 10, no. 10 (2020): 13. http://dx.doi.org/10.5958/2249-7137.2020.01076.9.

25

Kang, I. E., and T. F. Keefe. "Transaction Management for Multilevel Secure Replicated Databases." Journal of Computer Security 3, no. 2-3 (April 1, 1995): 115–45. http://dx.doi.org/10.3233/jcs-1994/1995-32-303.

26

Djenouri, Youcef, Jerry Chun-Wei Lin, Kjetil Nørvåg, Heri Ramampiaro, and Philip S. Yu. "Exploring Decomposition for Solving Pattern Mining Problems." ACM Transactions on Management Information Systems 12, no. 2 (June 2021): 1–36. http://dx.doi.org/10.1145/3439771.

Abstract:
This article introduces a highly efficient pattern mining technique called Clustering-based Pattern Mining (CBPM). This technique discovers relevant patterns by studying the correlation between transactions in the transaction database based on clustering techniques. The set of transactions is first clustered, such that highly correlated transactions are grouped together. Next, we derive the relevant patterns by applying a pattern mining algorithm to each cluster. We present two different pattern mining algorithms, one applying an approximation-based strategy and another based on an exact strategy. The approximation-based strategy takes into account only the clusters, whereas the exact strategy takes into account both clusters and shared items between clusters. To boost the performance of the CBPM, a GPU-based implementation is investigated. To evaluate the CBPM framework, we perform extensive experiments on several pattern mining problems. The results from the experimental evaluation show that the CBPM provides a reduction in both the runtime and memory usage. Also, CBPM based on the approximate strategy provides good accuracy, demonstrating its effectiveness and feasibility. Our GPU implementation achieves significant speedup of up to 552× on a single GPU using big transaction databases.
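
The cluster-then-mine pipeline can be sketched in a few lines: partition the transactions, then mine each cluster independently. The trivial grouping rule and the single-item mining step below are stand-ins for the paper's clustering and pattern mining algorithms.

```python
from collections import Counter

def cluster_then_mine(db, minsup):
    clusters = {}
    for t in db:
        clusters.setdefault(min(t), []).append(t)  # toy clustering criterion
    patterns = {}
    for key, txns in clusters.items():  # mine each cluster on its own
        counts = Counter(item for t in txns for item in t)
        patterns[key] = {i for i, c in counts.items() if c >= minsup * len(txns)}
    return patterns

db = [{"a", "b"}, {"a", "c"}, {"b", "d"}, {"b", "c"}, {"a", "b", "c"}]
print(cluster_then_mine(db, minsup=0.5))
```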
27

Lee, Gangin, Unil Yun, and Keun Ho Ryu. "Mining Frequent Weighted Itemsets without Storing Transaction IDs and Generating Candidates." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 25, no. 01 (February 2017): 111–44. http://dx.doi.org/10.1142/s0218488517500052.

Abstract:
Weighted itemset mining, one of the important areas in frequent itemset mining, is an approach for mining meaningful itemsets that considers a different importance, or weight, for each item in a database. Because of the merits of weighted itemset mining, various related works have been actively studied. As one method of weighted itemset mining, FWI (Frequent Weighted Itemset) mining calculates the weights of transactions from the weights of items and then finds FWIs based on the transaction weights. However, previous FWI mining methods still have limitations in terms of runtime and memory usage. For this reason, in this paper, we propose two algorithms for mining FWIs more efficiently from databases with item weights. In contrast to previous approaches, which store transaction IDs for mining FWIs, the proposed methods employ new types of prefix tree structures and mine these patterns more efficiently without storing any transaction IDs. Through extensive experimental results, we show that the proposed algorithms outperform state-of-the-art FWI mining algorithms in terms of runtime, memory usage, and scalability.
28

Ezeife, Christie I., Vignesh Aravindan, and Ritu Chaturvedi. "Mining Integrated Sequential Patterns From Multiple Databases." International Journal of Data Warehousing and Mining 16, no. 1 (January 2020): 1–21. http://dx.doi.org/10.4018/ijdwm.2020010101.

Abstract:
Existing work on sequential pattern mining over multiple databases (MDBs) cannot mine frequent sequences to answer exact and historical queries from MDBs having different table structures. This article proposes the transaction id frequent sequence pattern (TidFSeq) algorithm to handle the difficult problem of mining frequent sequences from diverse MDBs. The TidFSeq algorithm transforms candidate 1-sequences into (1-sequence, subsequence id list) or (1-sequence, position id list) tuples that record the transaction subsequences in which each candidate 1-sequence occurs. Subsequent frequent i-sequences are computed using the counts of sequence ids in each candidate i-sequence's position id list tuple. An extended version of a generalized sequential pattern (GSP)-like candidate generation and frequency counting approach is used to compute supports of itemset (I-step) and separate (S-step) sequences without repeated database scans, using transaction ids instead. The generated patterns answer complex queries from MDBs. The TidFSeq algorithm has a faster processing time than existing algorithms.
29

Han, Kyong Rok, and Jae Yearn Kim. "FCILINK: Mining Frequent Closed Itemsets Based on a Link Structure between Transactions." Journal of Information & Knowledge Management 04, no. 04 (December 2005): 257–67. http://dx.doi.org/10.1142/s0219649205001213.

Abstract:
The problem of discovering association rules between items in a database is an emerging area of research. Its goal is to extract significant patterns or interesting rules from large databases. Recent studies of mining association rules have proposed a closure mechanism. It is no longer necessary to mine the set of all of the frequent itemsets and their association rules. Rather, it is sufficient to mine the frequent closed itemsets and their corresponding rules. In the past, a number of algorithms for mining frequent closed itemsets have been based on items. In this paper, we use the transaction itself for mining frequent closed itemsets. An efficient algorithm called FCILINK is proposed that is based on a link structure between transactions. A given database is scanned once and then a much smaller sub-database is scanned twice. Our experimental results show that our algorithm is faster than previously proposed methods. Furthermore, our approach is significantly more efficient for dense databases.
30

Pandey, Anjana, and K. R. Pardasani. "PPCI Algorithm for Mining Temporal Association Rules in Large Databases." Journal of Information & Knowledge Management 08, no. 04 (December 2009): 345–52. http://dx.doi.org/10.1142/s0219649209002440.

Abstract:
In this paper an attempt has been made to develop a progressive partitioning and counting inference approach for mining association rules in temporal databases. A temporal database, such as a sales database, is a set of transactions where each transaction T is a set of items, and each item has an individual exhibition period. Existing models of association rule mining have problems in handling such transactions, due to a lack of consideration of the exhibition period of each individual item and the lack of an equitable support counting basis for each item. As a remedy, we propose an innovative algorithm, PPCI, that combines a progressive partition approach with a counting inference method to discover association rules in a temporal database. The basic idea of PPCI is to first segment the database into sub-databases in such a way that items in each sub-database have either a common starting time or a common ending time. Then, for each sub-database, PPCI progressively filters 1-itemsets with a cumulative filtering threshold based on vital partitioning characteristics. PPCI is also designed to employ a filtering threshold in each partition to prune out cumulatively infrequent 1-itemsets early, and it uses the counting inference approach to minimise as much as possible the number of pattern support counts performed when extracting frequent patterns. Explicitly, the execution time of PPCI is an order of magnitude smaller than that required by schemes directly extended from existing methods.
31

Che Fauzi, Ainul Azila, A. Noraziah, Wan Maseri Binti Wan Mohd, A. Amer, and Tutut Herawan. "Managing Fragmented Database Replication for Mygrants Using Binary Vote Assignment on Cloud Quorum." Applied Mechanics and Materials 490-491 (January 2014): 1342–46. http://dx.doi.org/10.4028/www.scientific.net/amm.490-491.1342.

Abstract:
Replication in distributed databases is the process of copying and maintaining database objects in the multiple databases that make up a distributed database system. In this paper, we manage fragmented database replication and transaction management for the Malaysian Greater Research Network (MyGRANTS) using a newly proposed algorithm called Binary Vote Assignment on Cloud Quorum (BVACQ). This technique combines replication and fragmentation. Fragmentation in distributed databases is very useful in terms of usage, efficiency, parallelism, and security. The strategy partitions the database into disjoint fragments. The results show that managing replication and transactions through the proposed BVACQ preserves data consistency. It also increases the degree of parallelism, because with fragmentation, replication and transaction processing can be divided into several subqueries that operate on the fragments.
32

Lin, Jerry Chun-Wei, Matin Pirouz, Youcef Djenouri, Chien-Fu Cheng, and Usman Ahmed. "Incrementally updating the high average-utility patterns with pre-large concept." Applied Intelligence 50, no. 11 (June 30, 2020): 3788–807. http://dx.doi.org/10.1007/s10489-020-01743-y.

Abstract:
High-utility itemset mining (HUIM) is considered an emerging approach to detecting high-utility patterns in databases. Most existing HUIM algorithms consider only the itemset utility, regardless of length; as a result, utility grows with itemset size. High average-utility itemset mining (HAUIM) takes the size of the itemset into account, thus providing a more balanced scale for measuring average utility in decision-making. Several algorithms have been presented to efficiently mine the set of high average-utility itemsets (HAUIs), but most of them focus on handling static databases. In the past, a fast-updated (FUP)-based algorithm was developed to efficiently handle the incremental problem, but it still has to re-scan the database when an itemset that is small in the original database has a high average-utility upper-bound itemset (HAUUBI) in the newly inserted transactions. In this paper, an efficient framework called PRE-HAUIMI is developed for transaction insertion in dynamic databases, relying on average-utility-list (AUL) structures. Moreover, we apply the pre-large concept to HAUIM. The pre-large concept is used to speed up mining performance: it ensures that if the total utility in the newly inserted transactions is within a safety bound, itemsets that are small in the original database cannot become large after the database is updated. This, in turn, avoids recurring database scans while obtaining the correct HAUIs. Experiments demonstrate that PRE-HAUIMI outperforms the state-of-the-art batch-mode HAUI-Miner, as well as the state-of-the-art incremental IHAUPM and FUP-based algorithms, in terms of runtime, memory, number of assessed patterns, and scalability.
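
For readers new to HAUIM, the average-utility measure can be computed directly: an itemset's utility in each supporting transaction, summed over those transactions and divided by the itemset's length. The quantities and unit profits below are made up.

```python
def average_utility(itemset, db, profit):
    """db: transactions mapping item -> quantity; profit: item -> unit profit."""
    total = 0
    for txn in db:
        if all(i in txn for i in itemset):
            total += sum(txn[i] * profit[i] for i in itemset)
    return total / len(itemset)

db = [{"a": 2, "b": 1}, {"a": 1, "b": 3, "c": 2}, {"b": 2}]
profit = {"a": 5, "b": 1, "c": 3}
print(average_utility({"a", "b"}, db, profit))  # (11 + 8) / 2 = 9.5
```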
33

Buehrer, Daniel J., and Chun-Yao Wang. "Deco: A Decentralized, Cooperative Atomic Commit Protocol." Journal of Computer Networks and Communications 2012 (2012): 1–14. http://dx.doi.org/10.1155/2012/782517.

Abstract:
An atomic commit protocol can cause long-term locking of databases if the coordinator crashes or becomes disconnected from the network. In this paper we describe how to eliminate the coordinator. This decentralized, cooperative atomic commit protocol piggybacks transaction statuses of all transaction participants onto tokens which are passed among the participants. Each participant uses the information in the tokens to make a decision of when to go to the next state of a three-phase commit protocol. Transactions can progress to ensure a uniform agreement on success or failure, even if the network is partitioned or nodes temporarily crash.
34

Irshad, Lubna, Li Yan, and Zongmin Ma. "Schema-Based JSON Data Stores in Relational Databases." Journal of Database Management 30, no. 3 (July 2019): 38–70. http://dx.doi.org/10.4018/jdm.2019070103.

Abstract:
JSON is a simple, compact, and lightweight data exchange format used to communicate between web services and client applications. NoSQL document stores evolved with the popularity of JSON; they support schema-less JSON storage, reduce cost, and facilitate quick development. However, NoSQL still lacks a standard query language and supports the eventually consistent BASE transaction model rather than the ACID transaction model. This is very challenging and a burden on the developer. Relational database management systems (RDBMS) support JSON in binary format with SQL functions (also known as SQL/JSON). However, these functions are not yet standardized and vary across vendors, with different limitations and complexities. More importantly, complex searches, partial updates, composite queries, and analyses are cumbersome and time-consuming in SQL/JSON compared to standard SQL operations. It is essential to integrate JSON into databases that use standard SQL features, support the ACID transaction model, and are capable of managing and organizing data efficiently. In this article, we enable JSON to use relational databases for analysis and complex queries. We show that the descriptive nature of the JSON schema can be utilized to create a relational schema for the storage of JSON documents. The powerful features of SQL can then be used to gain consistency and ACID compatibility when querying JSON instances from the relational schema. This approach opens a gateway to combining the best features of both worlds: the fast development of JSON, the consistency of the relational model, and the efficiency of SQL.
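
A minimal sketch of the core idea: derive a relational table from the properties of a JSON Schema. The type mapping and the flat (non-nested) example are our assumptions, not the authors' full mapping rules.

```python
import json

# Assumed JSON-Schema-type -> SQL-type mapping; nested objects and arrays
# would map to child tables in a fuller treatment.
TYPE_MAP = {"string": "VARCHAR(255)", "integer": "INT",
            "number": "DOUBLE", "boolean": "BOOLEAN"}

def schema_to_ddl(table, schema):
    cols = [f"{name} {TYPE_MAP[spec['type']]}"
            for name, spec in schema.get("properties", {}).items()
            if spec.get("type") in TYPE_MAP]
    return f"CREATE TABLE {table} ({', '.join(cols)});"

order_schema = json.loads("""
{"type": "object",
 "properties": {"id": {"type": "integer"},
                "customer": {"type": "string"},
                "total": {"type": "number"}}}
""")
print(schema_to_ddl("orders", order_schema))
# CREATE TABLE orders (id INT, customer VARCHAR(255), total DOUBLE);
```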
35

Rao, M. Venkata Krishna, Ch Suresh, K. Kamakshaiah, and M. Ravikanth. "Prototype Analysis for Business Intelligence Utilization in Data Mining Analysis." International Journal of Advanced Research in Computer Science and Software Engineering 7, no. 7 (July 29, 2017): 30. http://dx.doi.org/10.23956/ijarcsse.v7i7.93.

Abstract:
The tremendous increase in the availability of more disparate data sources than ever before has made it difficult to produce routine utility reports across multiple transaction systems, in addition to integrating large volumes of historical data. This is a central concern in data exploration for high-volume transactional systems with real-time data processing. The problem arises mainly in data warehouses and other data storage processes in Business Intelligence (BI) for knowledge management and business resource planning. In this setting, BI involves the software construction of data warehouse query processing for report generation and high-utility data mining in transactional data systems. The growth of huge volumes of data in the real world poses challenges to the research and business community for effective data analysis and prediction. In this paper, we analyze different data mining techniques and methods for Business Intelligence in the analysis of transactional databases. To that end, we discuss the key issues by performing an in-depth analysis of business data, including database applications in transaction data source system analysis. We also discuss different integrated techniques for data analysis in business operational processes that enable feasible Business Intelligence solutions.
36

Kuzmenko, O., T. Dotsenko, and V. Koibichuk. "DEVELOPMENT OF DATABASES STRUCTURE OF INTERNAL ECONOMIC AGENTS FINANCIAL MONITORING." Financial and credit activity: problems of theory and practice 3, no. 38 (June 30, 2021): 204–13. http://dx.doi.org/10.18371/fcaptp.v3i38.237448.

Abstract:
The article presents the results of developing the structure of databases for internal financial monitoring of economic agents, in the form of a data scheme taking into account the entities, their attributes, key fields, and relationships, as well as the structure of units of regulatory information required for basic monitoring procedures based on internal and external sources. The block diagram of the financial monitoring databases, formed in the modern BPMN 2.0 notation using the Bizagi Studio software product on the basis of internal normative and reference documents, consists of tables containing information on: the client's financial monitoring questionnaire; the list of risky clients according to the economic agent's system; the list of clients for which there are court rulings and financial transactions which may contain signs of risk; the list of PEP clients of the economic agent; the list of clients with a share of state ownership (PSP); the list of prohibited industries; reference books (type of financial transaction; features of financial transactions subject to mandatory financial monitoring; features of financial transactions subject to internal financial monitoring; identity document; type of subject of primary financial monitoring; type of notification; legal status of transaction participant; type of person related to the financial transaction; the presence of permission to provide information; signs of financial transaction; regions of Ukraine); the directory of risk criteria; and clients with FATCA status. The scheme of the structure of databases for internal financial monitoring of economic agents using normative and reference information from external sources is presented by tables containing information on: legal entities, natural persons-entrepreneurs, public formations, public associations, notaries, and lawyers of Ukraine; the list of persons related to terrorism and international sanctions, formed by the State Financial Monitoring Service of Ukraine; the list of public figures and members of their families; sanctions lists (National Security and Defense Council of Ukraine; Ministry of Economic Development and Trade of Ukraine; the OFAC SDN List, the US sanctions list; worldwide sanctions lists; EU sanctions lists); lists of high-risk countries (aggressor state, countries with strategic shortcomings, countries with hostilities, the European Commission's list of countries with a weak APC/FT regime, countries with high levels of corruption, self-proclaimed countries, countries with a high risk of FT, offshore countries); the First All-Ukrainian Bureau of Credit Histories, which describes the credit history and credit risks of individuals and legal entities in Ukraine (PVBKI); the International Bureau of Credit Histories, which describes the credit history of individuals and legal entities of clients of Ukrainian economic agents (MBKI); the list of dual-use goods; the list of persons with OSH; AntiFraud HUB, with information about fraudsters; the register of bankruptcies; the register of debtors; the register of court decisions; the database of invalid documents; the list of persons hiding from the authorities; the register of EP payers; registers of encumbrances on movable and immovable property; data on securities; the lustration register; the register of arbitration trustees; the corruption register; databases of Ukrainian organizations; and information on foreign companies.
Integrated use of the developed databases based on the proposed schemes will improve financial monitoring procedures by economic agents and solve several current problems. Keywords: economic agents, financial monitoring, structural scheme of the database, normative and reference information of internal securement, normative and reference information of external securement. JEL Classification: E44, D53, G21, G28, G32.
37

Tzanis, George, and Christos Berberidis. "Mining for Mutually Exclusive Items in Transaction Databases." International Journal of Data Warehousing and Mining 3, no. 3 (July 2007): 45–59. http://dx.doi.org/10.4018/jdwm.2007070104.

38

Abdul-Mehdi, Ziyad, Ali Bin Mamat, Hamidah Ibrahim, and Mustafa Deris. "A model for transaction management in mobile databases." IEEE Potentials 29, no. 3 (May 2010): 32–39. http://dx.doi.org/10.1109/mpot.2010.936929.

39

Phatak, Shirish Hemant, and Badri Nath. "Transaction-Centric Reconciliation in Disconnected Client–Server Databases." Mobile Networks and Applications 9, no. 5 (October 2004): 459–71. http://dx.doi.org/10.1023/b:mone.0000034700.03069.48.

40

Wieczerzycki, Waldemar. "Transaction Management In Databases Supporting Web-Based Negotiations." INFOR: Information Systems and Operational Research 38, no. 3 (August 2000): 245–71. http://dx.doi.org/10.1080/03155986.2000.11732411.

41

Zhang, Jingyu, Siqi Zhong, Jin Wang, Xiaofeng Yu, and Osama Alfarraj. "A Storage Optimization Scheme for Blockchain Transaction Databases." Computer Systems Science and Engineering 36, no. 3 (2021): 521–35. http://dx.doi.org/10.32604/csse.2021.014530.

42

Wang, Hui. "MaxMining: A Novel Algorithm for Mining Maximal Frequent Itemset." Applied Mechanics and Materials 713-715 (January 2015): 1765–68. http://dx.doi.org/10.4028/www.scientific.net/amm.713-715.1765.

Abstract:
We present MaxMining, a new algorithm for mining maximal frequent itemsets from big transaction databases. MaxMining employs depth-first traversal and an iterative method. It re-represents the transaction database in vertical tidset format and traverses the search space with effective pruning strategies that reduce it dramatically. MaxMining removes all non-maximal frequent itemsets to obtain the exact set of maximal frequent itemsets directly, with no need to enumerate all frequent itemsets from smaller ones step by step. It backtracks directly to the proper ancestor, rather than level by level, ignoring redundant frequent itemsets. We found that MaxMining can find all the maximal frequent itemsets in big databases more effectively than many proposed algorithms with ordinary pruning strategies.
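
To clarify what MaxMining outputs, the brute-force sketch below defines maximality directly: a frequent itemset is maximal if no frequent proper superset exists. MaxMining itself avoids this exhaustive enumeration through its pruning strategies; the toy database is invented.

```python
from itertools import combinations

def frequent_itemsets(db, minsup):
    items = sorted(set().union(*db))
    freq = []
    for k in range(1, len(items) + 1):
        for cand in combinations(items, k):
            s = set(cand)
            if sum(1 for t in db if s <= t) >= minsup:
                freq.append(s)
    return freq

def maximal(freq):
    return [s for s in freq if not any(s < other for other in freq)]

db = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"a", "b", "c"}]
for m in maximal(frequent_itemsets(db, minsup=2)):
    print(sorted(m))  # ['a', 'b', 'c'] is the only maximal frequent itemset here
```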
43

Rafique, Azra, Kanwal Ameen, and Alia Arshad. "Use patterns of e-journals among the science community: a transaction log analysis." Electronic Library 37, no. 4 (August 5, 2019): 740–59. http://dx.doi.org/10.1108/el-03-2019-0073.

Abstract:
Purpose: This study aims to explore the evidence-based patterns of e-journal usage, such as the most used and least used databases, at a public-sector university in Pakistan, by analysing scientists' usage of databases over time. Design/methodology/approach: Through transaction log analysis, the frequencies of page views, sessions, session duration and size of the used data were calculated with SAWMILL software and entered into MS Excel. Findings: The results revealed that the broad databases of science and engineering were used more than the narrower e-journal databases. Furthermore, users mostly accessed the e-journal databases from the university's central library and its various academic departments. Early morning hours, working days and the start of the academic year were found to be the most active periods of e-journal database utilisation. Practical implications: The results of the study will help the Higher Education Commission (HEC) of Pakistan and information professionals in better access management of databases. Originality/value: This study was conducted to check the feasibility of a PhD project's first phase and presents the frequencies of HEC e-journal databases' usage using the transaction log analysis method. The results will be used in preparing an interview guide and selecting a sample for interviews. Other Central Asian studies used COUNTER reports provided by publishers for log analysis instead of raw log data.
44

Shokrgozar, Neda, and Farzad Movahedi Sobhani. "Customer Segmentation of Bank Based on Discovering of Their Transactional Relation by Using Data Mining Algorithms." Modern Applied Science 10, no. 10 (September 26, 2016): 283. http://dx.doi.org/10.5539/mas.v10n10p283.

Abstract:
In this research, based on financial transactions between bank customers extracted from the bank's databases, we develop a relational transaction graph and create the customers' transactional communication network. Furthermore, using data mining algorithms and evaluation measures from social network analysis, we segment the bank's customers. The main goal of this research is bank customer segmentation by discovering the transactional relationships between customers, in order to deliver specific solutions in support of policies on customer equality in the banking system; in other words, improving customer relationship management for strategy determination and business risk management is the main concern of this research. By evaluating customer segments, the banking system can consider more efficient and crucial factors in the decision process, estimate the creditworthiness of each group of customers more accurately, and grant more appropriate types and amounts of loan services to them; these solutions are therefore expected to reduce the risk of loan services in banks.
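
Under the stated assumption that customers are nodes and transfers are weighted edges, the segmentation step might look like the sketch below, with NetworkX's greedy modularity communities standing in for whichever algorithm the authors used. The transfer data is invented.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.Graph()
transfers = [("ann", "bob", 3), ("bob", "carl", 5), ("ann", "carl", 2),
             ("dave", "erin", 4), ("erin", "fred", 6)]
for src, dst, amount in transfers:
    G.add_edge(src, dst, weight=amount)  # edge weight = transfer volume

# Each detected community is a candidate customer segment.
for i, segment in enumerate(greedy_modularity_communities(G, weight="weight")):
    print(f"segment {i}: {sorted(segment)}")
```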
45

Ezéchiel, Katembo Kituta, Shri Kant Ojha, and Ruchi Agarwal. "A New Eager Replication Approach Using a Non-Blocking Protocol Over a Decentralized P2P Architecture." International Journal of Distributed Systems and Technologies 11, no. 2 (April 2020): 69–100. http://dx.doi.org/10.4018/ijdst.2020040106.

Abstract:
Eager replication of distributed databases over a decentralized peer-to-peer (P2P) network is often likely to be unreliable, because participants may or may not be available. Moreover, conflicts between transactions initiated by different peers to modify the same data are probable. These problems are responsible for perpetual transaction abortion. Thus, a new Four-Phase-Commit (4PC) protocol, which allows transactions to commit with the available peers and recovers unavailable peers when they become available again, has been designed using nested transactions and the distributed voting technique. After implementing the new algorithm in C#, experiments made it possible to analyse its performance, which revealed that the new algorithm is efficient: in one second it can replicate a considerable number of records, and a large volume of data can be queued for subsequent recovery of the affected slave peers when they become available again.
46

Byun, Si-Woo. "Column-aware Transaction Management Scheme for Column-Oriented Databases." Journal of Internet Computing and Services 15, no. 4 (August 30, 2014): 125–33. http://dx.doi.org/10.7472/jksii.2014.15.4.125.

47

Chen, Ning, An Chen, Longxiang Zhou, and Liu Lu. "A graph-based clustering algorithm in large transaction databases." Intelligent Data Analysis 5, no. 4 (November 8, 2001): 327–38. http://dx.doi.org/10.3233/ida-2001-5404.

48

Choe, Hui-Yeong, and Bu-Hyeon Hwang. "Transaction Management Using Update Protocol in Fully Replicated Databases." KIPS Transactions:PartD 9D, no. 1 (February 1, 2002): 11–20. http://dx.doi.org/10.3745/kipstd.2002.9d.1.011.

49

Moon, Hyun J., Carlo A. Curino, Alin Deutsch, Chien-Yi Hou, and Carlo Zaniolo. "Managing and querying transaction-time databases under schema evolution." Proceedings of the VLDB Endowment 1, no. 1 (August 2008): 882–95. http://dx.doi.org/10.14778/1453856.1453952.

50

Islam, Md Ashfakul, and Susan V. Vrbsky. "Transaction management with tree-based consistency in cloud databases." International Journal of Cloud Computing 6, no. 1 (2017): 58. http://dx.doi.org/10.1504/ijcc.2017.083906.
